Welcome Ekebygruppen to TIQQE

We are proud to welcome Ekebygruppen as a new customer to TIQQE. Ekebygruppen has decided to move business critical applications to AWS.

Ekebygruppen is a group with several subsidiaries active in healthcare and social care. They provide high-quality primary care as well as housing for young people and young adults.

Ekebygruppen has decided to move their business critical infrastructure to AWS, including all domains for the group. When looking for a supplier, Ekebygruppen wanted a partner with extensive experience in cloud security, as their data contains sensitive healthcare information. They were also looking for a partner, friend and buddy to trust for their future cloud journey. Due to the sensitivity of the data, TIQQE will provision the services from the AWS Stockholm region.

Welcome to TIQQE, we’re looking forward to a long-term partnership.


Fanny Uhr just joined TIQQE

We are happy to welcome Fanny Uhr to our growing family as a developer. We asked Fanny a couple of questions about her first impressions of TIQQE.

What did you know about TIQQE before you started?

Since my brother Johannes has been working at TIQQE since the beginning, I feel like I know this company pretty well. I knew that TIQQE is a growing serverless company with great AWS knowledge. I’ve always thought that TIQQE seemed to have great values and I feel like the way everyone takes care of each other is very rare to see.

Why did you want to join TIQQE?

My brother and TIQQE inspired me to study web development. When my first internship period started this spring, I got the chance to join the TIQQE family for 7 weeks. Little did I know that I would get the chance to stay longer! The opportunity to learn new things and be a part of this team is better than I could’ve dreamed of when I started my education almost a year ago.

What was your first impression of TIQQE?

My first impression of TIQQE was how friendly everyone was and it felt like I was cheered on by everyone from day one. All my expectations of the company were confirmed to be true.

What is your role at TIQQE?

I will be working as a SysOps Tech for Mimiro.

How has your first time been at TIQQE?

My internship at TIQQE was exciting, instructive, fun and a bit different due to COVID-19. I’ve been working from a safe distance at home, with daily contact with smart and helpful people who have done a brilliant job answering my questions and supporting me through the project.

What are you looking forward to in the near future?

I want to keep learning and continue to grow both as a developer and a person. I feel excited and thankful to be a part of such a smart and passionate team.

What do you know about TIQQE now?

I know that TIQQE is a company that wants their employees and customers to succeed. Everyone is professional and they stay true to what they believe in.

Welcome Fanny and thanks for sharing!

Cloud security

The Swedish Corona App, nothing for American clouds, or..?

Some time ago a colleague told me that the reporting around the Swedish Corona app questioned Amazon Web Services (AWS) as the host. Not good for an AWS Partner. Based on what I read, there were indeed some high-pitched screams in that direction. But what I found was at least one crucial misconception about storage, some general discomfort about the cloud, and of course references to eSam.

My unscientific summary of what I read is that it is about costs, hasty decisions and a sense of urgency, a possible disregard of the Swedish Public Procurement Act, privacy concerns due to the storage of health data and the eSam recommendations, the suitability of American cloud operators, and some implicit misconceptions and general discomfort about utilizing the cloud.

My intention is not to review the reporting in this blog post, although I will touch on some aspects related to the suitability of American cloud providers below. Let me start with the storage confusion.

Cloud service does not equal cloud storage

First, I want to address an implicit assumption that many outside our industry make: that you are always forced to store your data in the cloud when you use a cloud service provider such as AWS, Microsoft Azure or Google. This is not true. Data can be stored in the cloud or somewhere else. It all depends on the service you use or provide.

Reading the reporting, I can see this misconception shining through. It is an implicit assumption we often meet in our customer dialogues as well. My guess is that it stems from the frequent use of cloud-based services in our daily lives and the public discussion about privacy.

Cloud storage is optional for SaaS providers. Why not for customers?

When developing a SaaS (Software-as-a-Service) offering in a cloud such as AWS, Microsoft Azure or Google Cloud, you as a developer can choose where the data shall be stored. In short, it is a design decision. This opens the door for a foresighted SaaS developer to give the customer a choice as well.

It provides an opportunity to differentiate the offering and present different data storage solutions as options to the customers. In some industries, where the data and its storage location are crucial, this is a do-or-die requirement; lacking this agility can be a business blocker.

In AWS there are several different services and solutions that can be used to provide this flexibility for both the SaaS provider and the customer.
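As a minimal sketch of what such a design decision could look like in practice, the snippet below lets each customer pick an approved storage region. The customer IDs, region list and the `storage_config` helper are illustrative assumptions, not part of any specific TIQQE or AWS offering.

```python
# Hypothetical example: a SaaS provider lets each customer choose where
# their data is stored, limited to regions the provider has approved.
ALLOWED_REGIONS = {"eu-north-1", "eu-west-1"}  # e.g. Stockholm and Ireland

def storage_config(customer_id: str, preferred_region: str) -> dict:
    """Return a per-customer storage configuration, or fail fast if the
    requested region is not one the SaaS provider offers."""
    if preferred_region not in ALLOWED_REGIONS:
        raise ValueError(f"Region {preferred_region} is not offered")
    return {
        "bucket": f"saas-data-{customer_id}-{preferred_region}",
        "region": preferred_region,
    }

# A customer with sensitive Swedish health data picks the Stockholm region:
config = storage_config("ekeby", "eu-north-1")
print(config["region"])  # eu-north-1
```

The point is that the storage location becomes a parameter of the design rather than a fixed property of the cloud.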

The use of American cloud providers or not?

The other thing I want to comment on is the underlying concern about using AWS as a platform when they developed the Swedish Corona app (RIP?). When reading the reporting it seems like there are two concerns in relation to this.

  1. The fear that data will be stored on US servers.
  2. The fact that AWS is an American company and therefore subject to American law.

Point 1: Mitigated by automatically enforcing Region Blocking to Sweden

It is possible for a SaaS provider on AWS to explicitly limit both storage and processing to specific regions, using region blocking rules that are applied automatically. In AWS it is possible to limit access to, for example, the Stockholm region, which ensures that no data is stored or processed outside Sweden.
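One common way to enforce this is a Service Control Policy (SCP) in AWS Organizations that denies requests outside a chosen region. The sketch below shows the general shape of such a policy; in a real setup, global services such as IAM or CloudFront usually need an exemption list, so treat this as an illustration rather than a drop-in policy.

```python
# A minimal sketch of an SCP that denies any request outside the
# Stockholm region (eu-north-1), using the aws:RequestedRegion condition key.
import json

REGION_LOCK_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideStockholm",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": "eu-north-1"}
            },
        }
    ],
}

# In practice this JSON would be attached to an organizational unit
# via AWS Organizations; here we just render it.
print(json.dumps(REGION_LOCK_POLICY, indent=2))
```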

Combining this with the storage differentiation discussed above makes a strong argument for the possibility to use an American cloud provider for sensitive data processing.

Point 2: Mitigated with strong arguments before selecting cloud provider

I have always been a strong advocate for using cloud services and I love the flexibility and freedom given by AWS. That said (again!), when reading the reporting and the concern about using AWS, it is clear that the eSam recommendation to public authorities, about the risk of using cloud providers that are subject to foreign laws, comes into play. The eSam recommendation is about law interpretation and, as a non-lawyer, I will not step into that area. But one thing is clear, at least to me.

Not everyone agrees with eSam and their recommendations. Both SKR and respected IT lawyers disagree with eSam about the strong guarantees needed for a Swedish authority to use non-Swedish cloud service providers. This disagreement will most likely end up in court at some point.

What to do?

It is hard to give general advice due to the legal implications. But I think it is a good idea to start an investigation into the suitability of using the large cloud providers for a selective set of data, and to carefully document every step in the process up to the decision of which one to use. It is better to ask yourself whether the cloud is suitable for you than to claim that it is not, based on fear.

What shall I think when discussing the suitability of cloud usage?

One way is to start with my blog post where I argue why the question “Is The Cloud suitable for me?” is better than “Is The Cloud Secure?”. It is a discussion of cloud security from a business benefit perspective.

Then it might be of interest to evaluate whether a Cloud First Strategy (CFS) could be something for you. What I mean by a Cloud First Strategy is explained in another of my blog posts, where I argue that it is all about creating a cloud-positive mindset.


Watch our BizTalk Replace webinar

In March, we launched a series of ideas for how companies suffering from the COVID-19 pandemic can quickly reduce costs and increase liquidity. If you missed the webinar about our second idea, reducing cost by replacing your BizTalk platform, you can watch it today.

Many companies are under tremendous financial pressure due to COVID-19. In early March, we sat down to figure out what we could do to help, and came up with 4 ways to reduce cost and increase liquidity for a company in the short term.

You can read a summary of the cost saving series here. The summary includes links to all 4 ideas to give you a deeper insight into each one. Every idea also includes a financial business case, which has two purposes:

  • Translate technology into tangible financials to motivate your CFO to support the idea.
  • Provide a business case template to reflect your specific prerequisites.

BizTalk Replace

Every organization needs to connect data between applications and databases to support their business processes. There are many ways to solve the integration need, but many companies have bought an integration platform from one or more of the major product vendors in the market, such as Microsoft BizTalk, Tibco, MuleSoft, IBM WebSphere etc. If you’re one of them, we have good news for you and your CFO.

According to Radar Group, who surveyed 200 Swedish companies a few years back, integration is a hidden cost bomb. On average, companies spend 140 000 SEK in maintenance cost per integration per year. According to the survey, a company with 300 employees has on average 50 integrations in the retail and distribution sectors, and 70 integrations in the manufacturing sector. The cost of integration is substantial.
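A quick back-of-the-envelope calculation shows just how substantial, using the survey figures quoted above:

```python
# Annual integration maintenance cost per the Radar Group figures:
# 140 000 SEK per integration per year, 50-70 integrations per company.
COST_PER_INTEGRATION_SEK = 140_000

for sector, integrations in {"retail/distribution": 50, "manufacturing": 70}.items():
    annual_cost = integrations * COST_PER_INTEGRATION_SEK
    print(f"{sector}: {annual_cost:,} SEK per year")
# retail/distribution: 7,000,000 SEK per year
# manufacturing: 9,800,000 SEK per year
```

In other words, a mid-sized company may be spending 7-10 million SEK per year just on maintaining integrations.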

You can read the full blog post here

You can watch the webinar here


Cloud optimization webinar

Join our cloud optimization webinar on the 25th of August, 08:30 to 09:15. Learn how to lower your monthly AWS bill by 40-50% by optimizing your AWS accounts.

Managing cloud infrastructure is different from managing infrastructure on-prem. It’s easy to provision new resources, but it’s equally easy to forget to decommission resources when they’re no longer needed. Furthermore, performance tuning is often not part of daily routines and is only performed when there are performance problems. Optimization is not something to perform occasionally, but rather on a regular basis, to ensure a cost effective use of cloud computing.

Join this webinar to find out how you can work with continuous optimization to lower your monthly AWS bill. You can also read this blog post, which includes a financial comparison between optimized and non-optimized AWS infrastructure.

Join our webinar on the 25th of August, 08:30 to 09:15.

Please enroll here


We’re hiring in Gothenburg!

At TIQQE, we’re proud of growing full-stack developers who have a passion for the serverless tech stack and architecture. That doesn’t mean that you have to know everything now and be a full-blown tech lead; it means that you have the possibility to become one if you join us.

We’re taking our presence in Gothenburg to the next level and we want to grow our office with nice and passionate techies. So if you have some experience of AWS, love to write code and want to join a company where:

  • humans come first
  • you will work in teams
  • you will have a tech-mentor & a designated buddy

If you’re interested in joining our TIQQE-family, please get in touch

Sofia Sundqvist

Chief Operating Officer

Alicia Hed

Recruitment Assistant


My experience with AWS certifications

Preparing for an AWS certificate can feel scary and overwhelming for many of us. How do I prepare? How difficult is it? Is this certification relevant for me? Since the beginning of this year, I’ve passed both AWS Professional exams and I would like to share my thoughts and answer some of the questions I had before I got started.

For an activity that involves sitting absolutely still, few things get the heart rate going like those three seconds between pressing the “Submit” button and finding out if you passed the AWS exam or not. You’ve probably studied for a few weeks and taken time out of your busy day to travel to a certification center. You sat through a draining barrage of questions and you are about to find out if you did enough. I hope that this article will improve your chances of seeing that message that says “Grade: Pass”.

Why should I study for an AWS certificate?

AWS releases new features and services every other week. It can be difficult to keep up and even harder to know how to best combine these services to solve your business problems.

While hands-on experience is king, knowing the strengths and capabilities of a wider range of services can give you an edge and help you gain the courage to try things outside your comfort zone. 

Additionally, organizations can register their employees’ certifications in the APN Partner Central, and having a certain number of certifications in a company is a requirement for reaching the different partner tiers with AWS.

Which certification fits me?

There are different certifications that you choose from depending on your role and experience within AWS. If you have little or no experience with AWS, you are recommended to start with Cloud Practitioner and then choose an Associate certification based on your role. If you pass the Associate exam, you can move to a Professional certification. With each step, the level of complexity ramps up and you are expected to have a deeper understanding of each service to pass the exam.

The image shows the available certifications, excluding the Specialty level.

You should choose the path based on which role you want to excel at, but know that the exams have many similarities and you will often get the same type of question regardless of whether you are doing a developer or solutions architect exam.

There is also a certification type called Specialty. These go in-depth on specific topics such as Advanced Networking, Security, Big Data and more. I have not tried these myself.

How I studied

Regardless of exam difficulty, I would say that everything below still applies. The difference is that the professional exams require more time spent studying and will be very difficult if you have no prior AWS experience.

For me, answering scenario-based questions and being told in bold red text why I was wrong tends to stick more than reading and listening to course material. With that said, I like to start with one course to set a foundation. The material in my usual choice of course is relatively short compared to some of the other training courses I’ve seen out there, and I like the mix of theory, hands-on labs and topic-specific quizzes.

Then I seek out as many practice exams as I can get my hands on, and this is really where I feel I get the most knowledge out of each hour spent. You want to find the practice exams that explain why each correct answer was correct and why the incorrect answers didn’t make sense. It’s very important that you go back and read the explanations for each question, even the ones you got right. I do not use these exams to try to figure out where I am in terms of passing grade.

I recommend a provider that offers one exam for each certificate, but I find that the best practice exams can be bought at a marketplace with tons of community-made exams; it’s wise to pick the ones with many good reviews. The questions will closely resemble the more difficult questions you will see in the real exam. For the DevOps Professional exam, I actually failed 4 out of 5 practice exams in the week before the real exam, while still passing the real exam with an 85% score. So don’t be completely discouraged if you are hovering just below the minimum passing score on the practice exams!

AWS provides sample questions for each certificate and also a practice exam that you sign up for just like a real exam. You won’t be given a detailed explanation for each question here, so you will have to do the research yourself, which is also good practice.

One could argue that by focusing on practice exams, you shift the focus from building AWS skills to simply learning how to pass exams. But I disagree. The practice exams and real exams are made up of scenario-based questions that look exactly like real-world problems. You need to figure out which combination of methods and AWS services best solves each scenario, just like you would in your job as a DevOps technician or solutions architect.

The exam

To pass an AWS certification exam you need a score of 70-75%, depending on the exam. Each question can either have one correct answer or require a combination of up to 3 answers.

Be careful with time management during the real exam, especially for the professional exams. The questions and answers can be pretty long and take a lot of time to read through. A professional exam consists of 75 questions in 180 minutes, and looking at the clock after each question will just disrupt your focus and stress you out. Try to think of it as 25 questions per hour and look at the clock every 30 minutes to see if you are on time. The same principle applies to the associate and practitioner exams, even though the total number of questions and minutes differ.

Pay attention to certain keywords in the exam questions. The questions can sometimes feel like essays and you think that many of the answers could possibly be correct. But then the question ends with something like “how can you achieve this in the MOST cost effective way”. Just paying extra attention to the words “most cost effective” can often remove half of the answers. An example could be that two answers involve sending logs to an Elasticsearch cluster while the other two answers use CloudWatch Logs instead. Both approaches might solve the overall problem, but you know that CloudWatch is going to be cheaper than Elasticsearch. With each exam I’ve become better at identifying these key phrases in the question that instantly remove half the answers. Not only does it help me in finding the correct answer, but it saves me tons of time if I only have to thoroughly read through half the answers.

Note: AWS’ own practice exams and real exams can be quite costly. If you’ve completed an exam in the past, or know someone who has, you can use vouchers available in the AWS certification account that significantly bring down the prices (50% off real exams and 100% off practice exams). Use them! And don’t forget to press the “Apply” button after you’ve entered them during checkout, which I forgot to do once.

Exam from home

These days it is possible to take your exam from home or in your office. You will be recorded through your webcam, and you will do a system checkup prior to the exam. I tried this for my last certificate. A few things to be extra careful with if you attempt this:

  • Clear your room of any extra monitors, notebooks, post-it notes, etc.
  • Make sure you won’t be disturbed or have any surrounding noise which could be mistaken for someone communicating with you. Your exam will be invalidated if they suspect any third party communication.
  • Make sure the name in your certification account exactly matches the name in your passport.

Last words

It’s difficult to go into an exam feeling 100% prepared. That’s why I think it’s very important to help build a company where we applaud anyone who tries, regardless of the result. Where attempting and failing is considered a learning experience instead of a failure. I bet that environment will result in more passed exams and coworkers that are a lot less stressed. 

Feel free to reach out if you have any comments on this post or AWS certifications in general!


Incident automation webinar

Join our incident automation webinar on the 9th of June, 08:30 to 09:15. Learn how to automate 70% of your incidents with AWS Step Functions.

Incident handling is a highly manual process in most companies. It requires 1st, 2nd and 3rd line resources in a service desk to manage error handling for the applications, databases and infrastructure. Furthermore, some expert forum, or Change Advisory Board, is usually in place to work on improvements that reduce tickets and incidents. A lot of people are required just to keep the lights on.

What if you could set up monitoring alerts that automatically trigger processes which resolve incidents before the users even notice them and file a ticket with your service desk? Sounds like science fiction? Check out this webinar, where Max Koldenius reveals how to set up incident automation using AWS Step Functions.
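To give a flavour of the idea, the sketch below shows what an incident-automation state machine could look like in Amazon States Language (ASL): an alert triggers a diagnosis step, a choice state checks whether a known runbook applies, and the machine either remediates automatically or opens a ticket. The state names and Lambda ARNs are made-up placeholders; the webinar’s actual design may differ.

```python
# A hedged sketch of a Step Functions state machine for incident automation,
# expressed as a Python dict in Amazon States Language (ASL).
import json

state_machine = {
    "Comment": "Automated incident handling triggered by a monitoring alert",
    "StartAt": "Diagnose",
    "States": {
        "Diagnose": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-north-1:123456789012:function:diagnose",
            "Next": "KnownIssue?",
        },
        "KnownIssue?": {
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.runbook", "IsPresent": True, "Next": "RunRunbook"}
            ],
            "Default": "OpenTicket",
        },
        "RunRunbook": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-north-1:123456789012:function:remediate",
            "End": True,
        },
        "OpenTicket": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-north-1:123456789012:function:open-ticket",
            "End": True,
        },
    },
}

print(json.dumps(state_machine, indent=2))
```

Only incidents without a matching runbook ever reach a human, which is how a large share of tickets can be automated away.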

Join our webinar on the 9th of June, 08:30 to 09:15.

Please enroll here


Continuous delivery in AWS – tools overview

Continuous Delivery (CD) is a term used for a collection of practices that strive to enable an organisation to provide both speed and quality in its software delivery process. It is a complex topic, and in this article we will focus on one aspect: selecting tools for CD pipelines when deploying software in AWS.

Before we dive into various tooling options for continuous delivery, though, let us define some scope and terminology, and also talk a bit about why we would bother with this in the first place.


Our scope for this overview is delivering solutions that run in AWS. Source code lives in a version control system, and we assume that it is a hosted solution (not on-premise) and that it is Git, currently the most common version control system in use. Some services mentioned may also work with other version control systems, such as Subversion.

Continuous delivery tools can either be a software-as-a-service (SaaS) solution, or we can manage it ourselves on servers – in the cloud or on-premise. Our scope here is only for the SaaS solutions.

If you have a version control system that is hosted on-premise or on your own servers in some cloud, it typically works with continuous delivery software that you can host yourself, either on-premise or in the cloud. We will not cover these options here.


First of all, there are a few terms and concepts to look at:

  • Pipeline – this generally refers to the process that starts with a code change and ends with the release and deployment of the updated software. In many cases this is a mostly or even completely automated process.
  • Continuous integration (CI) – the first part of the pipeline, in which developers can perform code updates in a consistent and safe way with fast feedback loops. The idea is to do this often and to keep it quick, so any errors in the changes can be caught and corrected quickly. Doing it often means that there are only small changes each time, which makes it easier to pinpoint and correct any errors. For CI to work well, it needs a version control system and a good suite of automated tests that can be executed when someone commits updates to the version control system.
  • Continuous Delivery (CD) – this refers to the whole process from code changes and CI to the point where a software release is ready for deployment. It includes everything in the continuous integration part plus any other steps needed to make the software ready for release. Ideally this is also fully automated, although it may include manual steps. Again, the goal is that this process is quick, consistent and safe, so that a deployment can be made “at the click of a button” or with some similarly simple procedure. But the deployment itself is not part of continuous delivery.
  • Continuous Deployment (CD) – unfortunately the abbreviation is the same as for continuous delivery, but it is not the same thing. This is continuous delivery plus automated deployment. In practice, this is applicable for some solutions but not all. With serverless solutions it is generally easier to do technically, but in many cases it is not a technology decision but a business decision.

Why continuous delivery?

Speed and safety for the business/organisation – that is essentially what it boils down to. To be able to adapt and change based on market and changing business requirements and to do this in a way that minimises disruption of the business.

Depending on which stakeholders you look at, there are typically different aspects of this process that are of interest:

  • Business people’s interests are in speed and predictability when delivering on changed business requirements, and in services continuing to work satisfactorily for customers.
  • Operations people’s interests are in safety, simplicity and predictability of updates and that disruptions can be avoided.
  • Developers’ interest is in fast feedback on the work they do and in being able to make changes without fear of messing things up for themselves and their colleagues, plus being able to focus on solving problems and building useful or cool solutions.

It is a long process to reach continuous delivery Nirvana, and the world of IT is messy to various degrees – we are never done. A sane choice of tooling for continuous delivery can at least get us part of the way.

Continuous delivery tools

If we want a continuous delivery tool which targets AWS, uses git and runs as a SaaS solution, we have a few categories:

  • Services provided by AWS
  • Services provided by the managed version control system solution
  • Third party continuous delivery SaaS tools

Services provided by AWS

AWS has a number of services related to continuous delivery, all of which have names that start with “Code”. This includes:

  • AWS CodeCommit
  • AWS CodePipeline
  • AWS CodeBuild
  • AWS CodeDeploy
  • AWS CodeGuru
  • AWS CodeStar

A key advantage of using the AWS services is that credentials and access are handled by the regular identity and access management (IAM) in AWS, and encryption by the Key Management Service (KMS). No AWS secrets have to be stored outside of AWS, since it all lives in AWS – assuming your CI/CD workflow goes all-in on AWS, or to a large extent at least.

A downside of these AWS services is that they are not the most user-friendly, and there are a number of them. They can be used together to set up elaborate CI/CD workflows, but it requires a fair amount of effort to do so. CodeStar was an attempt to set up an end-to-end development workflow with CI/CD. I like the idea behind CodeStar, and for some use cases it may be just fine, but it has not received much love from AWS since it was launched.

You do not necessarily need all of these services to set up a CI/CD workflow – in its simplest form you just need a supported source code repository (CodeCommit/Github/Bitbucket) and CodeBuild. But things can quickly get more complicated, in particular once the number of repositories, developers and/or AWS accounts involved starts to grow. One project that tries to alleviate that pain is the AWS Deployment Framework.
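For the simplest form mentioned above, the main thing you write is a buildspec, which CodeBuild reads from the connected repository on every push. In a real setup this lives as buildspec.yml in the repository root; below it is sketched as a Python dict, and the build commands (a Node.js/Serverless Framework project) are illustrative assumptions rather than a prescribed workflow.

```python
# A minimal sketch of what a CodeBuild buildspec could contain
# for a small serverless project (hypothetical commands).
buildspec = {
    "version": 0.2,
    "phases": {
        "install": {"commands": ["npm ci"]},
        "build": {"commands": ["npm test", "npx serverless deploy --stage dev"]},
    },
}

# CodeBuild runs the phases in order on each push from the connected
# repository (CodeCommit, Github or Bitbucket).
print(list(buildspec["phases"]))  # ['install', 'build']
```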

Services provided by the managed version control system solution

Three of the more prominent version control system hosting services are Github, Gitlab and Bitbucket. They all have CI/CD services bundled with their hosted service offering. Both Bitbucket and Gitlab also provide on-premise/self-hosted versions of their source code repository software as well as continuous delivery tooling and other tools for the software lifecycle. The on-premise continuous delivery tooling for Bitbucket is Bamboo, while the hosted (cloud) version is Bitbucket Pipelines. For Gitlab the software is the same for both hosted and on-premise. We only cover the cloud options here.

On the surface the continuous delivery tooling is similar for all three: a file in each repository describes the CI/CD workflow(s) for that particular repository. They are all based on running Docker containers to execute the steps in the workflow, and can handle multiple branches and pipelines. They all have some kind of organisational and team handling capabilities.

Beyond the continuous delivery basics they start to deviate a bit in their capabilities and priorities. Bitbucket, being an Atlassian product/service, focuses on good integration with Jira in particular, but also with some 3rd party solutions. Gitlab prides itself on providing a one-stop solution for the whole software lifecycle – which features are enabled depends on which edition of the software is used. Github, perhaps the most well-known source code repository provider, has a well-established ecosystem for integrating various tools into the toolchain, provided by 3rd parties and the community – more so than the other providers.

Github and Gitlab have the concept of runners that allow you to set up your own machines to run tasks in the pipelines.

So if you are already using other Atlassian products, Bitbucket and Bitbucket Pipelines might be a good fit. If you want an all-in-one solution, then Gitlab can suit you well. For a best-of-breed approach where you pick different components, Github is likely a good fit.

Third party continuous delivery SaaS tools

There are many vendors that provide hosted continuous delivery tooling. Some of them have been in this space for a reasonably long time, since before the managed version control system providers added their own continuous delivery tooling.

In this segment there may be providers that support specific use cases better, or that make it easy to set up faster and/or parallel pipelines. They also tend to support multiple managed version control system solutions and multiple cloud provider targets. Some of them also provide self-hosted/on-premise versions of their software. This category of providers may therefore be interesting for integrating with a diverse portfolio of existing solutions.

Some of the more popular SaaS providers in this space include CircleCI, among others.

Pricing models

Regardless of category, pretty much all the different providers mentioned here provide some kind of free tier and then one or more on-demand paid tiers.

For example: Github Actions, Bitbucket Pipelines, Gitlab CI/CD and AWS CodeBuild provide a number of free build minutes per month. This is however limited to certain machine sizes used in executing the tasks in the pipelines.

A simple pricing model that just counts build minutes is easy to grasp, but it does not allow flexibility in machine sizes, since a larger machine requires more capacity from the provider. In AWS’ case with CodeBuild, you can select from a number of different machine sizes, but you pay for anything larger than the smallest machines from the first minute.

The third party continuous delivery providers have slightly different free tier models, I believe partly in order to distinguish themselves from the offerings of the managed version control system providers. For example, CircleCI provides a number of free “credits” per week; depending on machine capacity and features, pipeline execution costs different amounts of credits.
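The two free-tier styles above can be compared with a toy calculation. The rates and free allowances below are entirely hypothetical; check each provider’s current price list for real numbers.

```python
# Toy comparison of the two pricing styles: flat per-minute with a free
# allowance vs. credit-based, where machine size changes credits per minute.
def monthly_cost_minutes(build_minutes, price_per_minute, free_minutes):
    """Flat per-minute pricing with a monthly free tier."""
    return max(0, build_minutes - free_minutes) * price_per_minute

def monthly_cost_credits(build_minutes, credits_per_minute, price_per_credit, free_credits):
    """Credit-based pricing: minutes are converted to credits first."""
    credits = build_minutes * credits_per_minute
    return max(0, credits - free_credits) * price_per_credit

# 3000 build minutes per month on made-up rates:
print(monthly_cost_minutes(3000, 0.5, 2000))           # 500.0
print(monthly_cost_credits(3000, 10, 0.25, 25000))     # 1250.0
```

The point is not the numbers themselves but that machine size, free allowances and parallelism all shift where the break-even lies between providers.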

The number of parallel pipeline executions is typically also a factor for all the different providers – free tiers tend to have 1 pipeline that can execute at any time, while more parallel execution will cost more.

Many pricing models also include a restriction on the number of users, and there may be a price tag attached to each active user as well. All in all, you pay for compute capacity, for saving time on pipeline execution and for having more people utilize the continuous delivery pipelines.

With AWS, where a number of services fulfil different parts of the continuous delivery solution, it may initially be harder to grasp what things will actually cost. Machine sizes may also differ between services, so a build minute at one provider is not necessarily equivalent to a build minute at another.

Trying to calculate the exact cost of the continuous delivery solution may be counterproductive at an early stage, though. Look at the features you need and their importance first, then consider pricing.

End notes

Selecting continuous delivery tooling can be a complex topic. The bottom line is that it is intended to deliver software faster, more securely and more consistently, with fewer problems, and with good insight into the workflow for different stakeholders. Do not lose sight of that goal and of your requirements beyond the simple cases; most alternatives will be fine for the very simple ones. Do not be afraid to try out some of them, but time-box the effort.

If you wish to discuss anything of the above, please feel free to contact me at

Cloud economics

License to kill

Using commercial software and paying for expensive licenses is old school and no longer necessary. The cloud provides you with flexibility: you only pay for what you use, with no up-front investments.

In May, I’m sure many of you, myself included, were looking forward to the release of the new James Bond film, which brought to mind the famous title License to Kill.

Unfortunately, due to Covid-19, the film premiere has been postponed, but the reality of a “license to kill” within IT licenses and infrastructure has never been more relevant than now.

We are in contact with roughly 150 companies across Sweden every month, mainly to understand where the market stands right now and how we need to align to meet the challenges companies are facing.

In the past few months the market has really changed: most companies are “pulling the handbrake”, cutting down their variable costs, freezing new initiatives and so on. What comes as a surprise is the number of licenses many companies hold, everything from Office 365 to various on-premise and cloud platforms based on traditional, core-based license models that are very expensive.

With a traditional license model, you buy capacity up-front that you plan to use over a longer term, usually between one and five years. Of course, during this period you are able to “scale up” and purchase more cores, but overall you will always be paying for more than you need at the time of purchase.
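The difference between licensing for peak capacity up-front and paying per use can be illustrated with some simple arithmetic. All figures below are hypothetical and only serve to show the shape of the comparison.

```python
# Illustrative comparison: up-front, core-based licensing versus
# pay-per-use. All figures are hypothetical.

UPFRONT_COST_PER_CORE = 1200   # paid once for the whole term
TERM_MONTHS = 36
ON_DEMAND_PER_CORE_MONTH = 40  # pay monthly, only for cores actually used


def upfront_total(peak_cores: int) -> int:
    # With a traditional license you must buy for peak capacity
    # for the entire term, even if the peak is rare.
    return peak_cores * UPFRONT_COST_PER_CORE


def on_demand_total(cores_used_per_month: list) -> int:
    # With pay-per-use you pay each month only for what you actually used.
    return sum(cores * ON_DEMAND_PER_CORE_MONTH
               for cores in cores_used_per_month)


# Peak demand is 16 cores, but most months only need 4.
usage = [4] * 30 + [16] * 6  # 36 months, of which 6 are peak months

print(upfront_total(16))       # 19200
print(on_demand_total(usage))  # 4*40*30 + 16*40*6 = 8640
```

The more "spiky" the usage, the bigger the gap: the up-front model charges for the peak all the time, while the on-demand model only charges for the peak when it happens.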

Scaling a traditional on-premise platform has the following effects:

  • Additional cores
  • Additional servers
  • Capacity that is not fully utilized
  • Additional costs

This costs companies across the globe huge amounts of money that could be spent on better things or, in these uncertain times, simply saved.

Here are a few examples:

  • On-Prem infrastructure
  • Integration platforms, Enterprise Service Bus
  • API-Platforms
  • Identity & Access Management platforms
  • Service & Assessment Platforms

The list is long and most likely you are running one or several at your workplace today.

So what’s the solution?

From both a license and an infrastructure perspective, the cloud is the obvious choice, as it enables you to scale both up and down. At TIQQE we focus purely on AWS, and the ability to scale, avoid up-front license costs and pay only for what you need right now are the key arguments for moving to the cloud.

Ask yourself if you need to renew your licenses anytime soon. Do you want to buy more licenses or do you want a second opinion?

We have all the tools in place to quickly identify your costs today and what the costs would be if you would instead operate in the cloud.

This blog post is mainly focused on the cost-saving perspective, but there are many more examples of what the cloud can provide.

I really recommend checking the following blogs out:

4 ways of reducing cost and increase liquidity