COVID-19

Post corona thoughts

The corona pandemic has shown that being able to adjust cost to market demand is a core capability for any company. Serverless computing is the solution to that problem.

It’s of course too early to claim that we are through the corona pandemic and that things will go back to the way they were before. We really doubt they ever will. In just the past few months we have seen a huge change in how we work: running online meetings, a sharp increase in the number of digital events, new ways of collaborating and so on.

What we saw in the market at the start of Covid-19 was, of course, cost cutting, spending freezes and postponed initiatives and projects. Unfortunately, Covid-19 may be with us for a while, so is that a long-term solution? Looking back just 6-7 months, the market was entirely different from what it is right now. We have also seen companies that have been booming during this period.

At TIQQE we reach out to 150 companies each month to get an understanding of where the market is and where it’s heading.

If we were to highlight two interesting areas, they would be the following:

High demand, lack of capacity causing downtime and lost business

When we reach out to companies that are booming at the moment, their struggle is with the amount of load they need to handle. IT has trouble handling the load, which in turn causes downtime and, of course, lost business. Is the answer to scale up the infrastructure during this period?

Low demand, over capacity and cutting back costs

When speaking with customers who in one way or another have entered a downturn, their challenge is instead unused capacity. When looking at cost cuts it makes sense to scale back, but at the same time, how do they scale up again once demand picks up?

The benefit of serverless

One of the main benefits of serverless is exactly that: a scalable, flexible IT platform that adapts over time, whether the market is in a recession or booming.

In uncertain times it’s important to take control of what you can, define what you think the future might look like and make sure not to make decisions that are a short-term win but a long-term loss.

This is exactly what we help our customers with at TIQQE: we help you find the right solution, one that is scalable, flexible and adaptable to change no matter what the market situation looks like for you.

Please feel free to reach out to us if you have questions or need to scale your business to address higher or lower demand.

Event

Kodayoga is the new black

In March a young woman, Yasnia, reached out to us. She told us about an initiative that she, together with Caroline and Frida, wanted to build an event around, with the mission to inspire and attract more women to the tech industry through kodayoga: the unbeatable combination of writing code and practicing yoga.

Of course we wanted to be a part of this. We can proudly announce that TIQQE is not only Yasnia’s new workplace as of the 1st of September; we are also one of the sponsors of the upcoming event, taking place on the 3rd of October at Creative House in Örebro.

After the event we invite the participants to the TIQQE office to mingle, meet like-minded people and hopefully pick up some inspiration. We will provide light snacks and drinks, but due to Corona there is a limited number of seats.

Registration opens on the 14th of September, so stay tuned on their website or follow #kodayoga on LinkedIn for updates.

People

Are you our next Cloud Architect?

We’re looking for a Cloud Architect for our Gothenburg office. If you know the AWS tech stack and want to work in an inspiring company with great potential, this is for you.

We are growing our business on Sweden’s west coast, and even though we often work distributed, we also see the importance of being present in person to interact with our growing number of customers. We are therefore looking for a Serverless Cloud Architect who can handle interaction with developers and DevOps teams and who has deep knowledge of AWS infrastructure services. You will work together with passionate people and both develop and test infrastructure.

We believe that you already:

  • Have your home in Gothenburg or its surroundings
  • Have experience of cloud solution architecture in general
  • Have specific knowledge in AWS Serverless Architecture and development
  • Have one or more AWS certifications
  • Are analytical and solution-oriented
  • Have a genuine interest in customers’ businesses and challenges

We wish that you:

  • Have the urge to learn more
  • Put teams over individuals
  • Are professionally driven by serverless technology and become your best version of yourself when surrounded by other ”techies”
  • Want to develop your soft skills as well as your technical skills

What to expect from us:

  • A burning love for all things serverless
  • A place where people matter
  • Courage to say “we were wrong”
  • Technical excellence
  • A startup company with great visions
  • Distributed teams
  • A place where we don’t always do what the customer tells us, but always do what is best for the customer

Please get in contact with us to learn more

Sofia Sundqvist

Chief Operating Officer

sofia.sundqvist@tiqqe.com

Alicia Hed

Recruitment Assistant

alicia.hed@tiqqe.com

Jobs

We’re hiring in Gothenburg!

At TIQQE, we’re proud of growing full-stack developers who have a passion for the serverless tech stack and architecture. That doesn’t mean you have to know everything now and be a full-blown tech lead; it means you have the possibility to become one if you join us.

We’re taking our presence in Gothenburg to the next level and we want to grow our office with nice, passionate techies. So if you have some experience of AWS, love to write code and want to join a company where:

  • humans come first
  • you will work in teams
  • you will have a tech-mentor & a designated buddy

then we would love to hear from you. If you’re interested in joining the TIQQE family, please get in touch.

Sofia Sundqvist

Chief Operating Officer

sofia.sundqvist@tiqqe.com

Alicia Hed

Recruitment Assistant

alicia.hed@tiqqe.com

AWS

AWS Summit Online 2020

June 17, 2020 – the best day of the year for cloud is on its way! Join AWS Summit Online, a free virtual event for technologists at any level, and deepen your knowledge. There is something for everybody.

Hear about the latest trends, customers and partners in EMEA, followed by the opening keynote with Werner Vogels, CTO, Amazon.com. All developers at TIQQE always attend Werner’s keynotes.

After the keynote, dive deep into 55 breakout sessions across 11 tracks, including getting started, building advanced architectures, app development, DevOps and more. Tune in live to network with fellow technologists, have your questions answered in real time by AWS experts and claim your certificate of attendance.

So, whether you are just getting started on the cloud or are an advanced user, come and learn something new at the AWS Summit Online.

Want to get started with AWS? At TIQQE, we have loads of experience and are an Advanced Partner to AWS. Contact us, we’re here to help.

Webinar

Incident automation webinar

Join our incident automation webinar on the 9th of June, 08:30 to 09:15. Learn how to automate 70% of your incidents with AWS Step Functions.

Incident handling is a highly manual process in most companies. It requires 1st, 2nd and 3rd line resources in a service desk to manage error handling of applications, databases and infrastructure. Furthermore, some expert forum, or Change Advisory Board, is usually in place to work on improvements that reduce tickets and incidents. A lot of people are required just to keep the lights on.

What if you could set up monitoring alerts that automatically trigger remediation processes and resolve incidents before the users even notice them and file a ticket with your service desk? Sounds like science fiction? Check out this webinar, where Max Koldenius will reveal how to set up incident automation using AWS Step Functions.
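To give a feel for the pattern ahead of the webinar (this is a simplified sketch, not the exact setup Max will show), the glue between an alarm and a remediation workflow can be as small as a Lambda function that starts a Step Functions execution. The environment variable and field names below are assumptions for illustration only; the actual remediation logic lives in the state machine.

    # A minimal sketch (hypothetical names): a Lambda function subscribed to an
    # SNS topic that receives CloudWatch alarm notifications and starts a Step
    # Functions state machine that performs the actual remediation.
    import json
    import os

    import boto3

    sfn = boto3.client("stepfunctions")

    # ARN of the remediation state machine, assumed to be set as an environment
    # variable on the Lambda function.
    STATE_MACHINE_ARN = os.environ["REMEDIATION_STATE_MACHINE_ARN"]


    def handler(event, context):
        """Triggered by SNS when a CloudWatch alarm changes state."""
        for record in event["Records"]:
            alarm = json.loads(record["Sns"]["Message"])
            if alarm.get("NewStateValue") != "ALARM":
                continue  # only act when the alarm actually fires
            # Start one remediation workflow per alarm; the state machine decides
            # what to do (restart a service, replay a message, open a ticket, ...).
            sfn.start_execution(
                stateMachineArn=STATE_MACHINE_ARN,
                input=json.dumps({
                    "alarmName": alarm.get("AlarmName"),
                    "reason": alarm.get("NewStateReason"),
                }),
            )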

Join our webinar on the 9th of June, 08:30 to 09:15.

Please enroll here

AWS

Continuous delivery in AWS – tools overview

Continuous Delivery (CD) is a term used for a collection of practices that enable an organisation to deliver software with both speed and quality. It is a complex topic, and in this article we will focus on one aspect: selecting tools for CD pipelines when deploying software to AWS.

Before we dive into the various tooling options for continuous delivery, let us define some scope and terminology, and also talk a bit about why we would bother with this in the first place.

Scope

Our scope for this overview is delivering solutions that run in AWS. Source code lives in a version control system, and we assume that it is a hosted solution (not on-premise) and that it is Git, currently the most common version control system in use. Some services mentioned may also work with other version control systems, such as Subversion.

Continuous delivery tools can either be software-as-a-service (SaaS) solutions, or software we manage ourselves on servers – in the cloud or on-premise. Our scope here is only the SaaS solutions.

A version control system that is hosted on-premise or on your own servers in some cloud typically works with continuous delivery software that you can host yourself – either on-premise or in the cloud. We will not cover those options here.

Terminology

First of all, there are a few terms and concepts to look at:

  • Pipeline – this generally refers to the process that starts with a code change and ends with the release and deployment of the updated software. In some cases it is a mostly, or even completely, automated process.
  • Continuous integration (CI) – the first part of the pipeline, in which developers can perform code updates in a consistent and safe way with fast feedback loops. The idea is to do this often and to keep it quick, so any errors can be caught and corrected quickly. Doing it often means that each change is small, which makes it easier to pinpoint and correct any errors. For CI to work well, it needs a version control system and a good suite of automated tests that can be executed when someone commits updates to the version control system.
  • Continuous Delivery (CD) – this refers to the whole process from code changes and CI to the point where a software release is ready for deployment. It includes everything in the continuous integration part plus any other steps needed to make the software ready for release. Ideally this is also fully automated, although it may include manual steps. Again, the goal is that the process is quick, consistent and safe, so that a deployment can be made “at the click of a button” or through a similarly simple procedure. The deployment itself, however, is not part of continuous delivery.
  • Continuous Deployment (CD) – unfortunately the abbreviation is the same as for continuous delivery, but it is not the same thing. This is continuous delivery plus automated deployment. In practice it is applicable for some solutions but not all. With serverless solutions it is generally easier to do technically, but in many cases it is not a technology decision, it is a business decision.

Why continuous delivery?

Speed and safety for the business or organisation – that is essentially what it boils down to: being able to adapt and change based on the market and changing business requirements, and doing so in a way that minimises disruption to the business.

Depending on which stakeholders you look at, there are typically different aspects of this process that are of interest:

  • Business people’s interest is in speed and predictability when delivering on changed business requirements, and in services continuing to work satisfactorily for customers.
  • Operations people’s interest is in safety, simplicity and predictability of updates, and in avoiding disruptions.
  • Developers’ interest is in fast feedback on the work they do and in being able to make changes without fear of messing things up for themselves and their colleagues, plus being able to focus on solving problems and building useful or cool solutions.

It is a long process to reach continuous delivery Nirvana, and the world of IT is a mess to various degrees – we are never done. A sane choice of tooling for continuous delivery can at least get us part of the way.

Continuous delivery tools

If we want a continuous delivery tool that targets AWS, uses Git and runs as a SaaS solution, we have a few categories:

  • Services provided by AWS
  • Services provided by the managed version control system solution
  • Third party continuous delivery SaaS tools

Services provided by AWS

AWS has a number of services related to continuous delivery, all of which have “Code” in their names. They include:

  • AWS CodeCommit
  • AWS CodePipeline
  • AWS CodeBuild
  • AWS CodeDeploy
  • AWS CodeGuru
  • AWS CodeStar

A key advantage of using the AWS services is that credentials and access are handled by the regular identity and access management (IAM) in AWS, and encryption by the key management service (KMS). No AWS secrets have to be stored outside of AWS, since everything lives in AWS – assuming your CI/CD workflow goes all-in on AWS, or at least to a large extent.

A downside of these AWS services is that they are not the most user-friendly, and there are quite a few of them. They can be combined to set up elaborate CI/CD workflows, but doing so requires a fair amount of effort. CodeStar was an attempt to provide an end-to-end development workflow with CI/CD. I like the idea behind CodeStar, and for some use cases it may be just fine, but it has not received much love from AWS since it was launched.

You do not necessarily need all of these services to set up a CI/CD workflow – in its simplest form you just need a supported source code repository (CodeCommit/Github/Bitbucket) and CodeBuild. But things can quickly get more complicated, in particular once the number of repositories, developers and/or AWS accounts involved starts to grow. One project that tries to alleviate that pain is the AWS Deployment Framework.
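As an illustration of how small that simplest form can be, here is a hedged Python (boto3) sketch that creates a CodeBuild project for a GitHub repository and lets it build on every push. All names, the image version and the role ARN are placeholders; in practice you would typically define this in CloudFormation, CDK or Terraform rather than through ad hoc API calls.

    # A minimal sketch, assuming an existing GitHub repository containing a
    # buildspec.yml and an existing IAM service role for CodeBuild. All names
    # and ARNs are placeholders.
    import boto3

    codebuild = boto3.client("codebuild")

    codebuild.create_project(
        name="my-service-ci",
        source={
            "type": "GITHUB",
            "location": "https://github.com/example-org/my-service.git",
            # the buildspec.yml in the repository defines the actual build steps
        },
        artifacts={"type": "NO_ARTIFACTS"},
        environment={
            "type": "LINUX_CONTAINER",
            "image": "aws/codebuild/standard:4.0",
            "computeType": "BUILD_GENERAL1_SMALL",
        },
        serviceRole="arn:aws:iam::123456789012:role/my-codebuild-service-role",
    )

    # Let CodeBuild start a build on every push by creating a webhook for the
    # GitHub repository (requires that CodeBuild has access to the repository).
    codebuild.create_webhook(projectName="my-service-ci")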

Services provided by the managed version control system solution

Three of the more prominent version control system hosting services are Github, Gitlab and Bitbucket. They all have CI/CD services bundled with their hosted service offering. Both Bitbucket and Gitlab also provide on-premise/self-hosted versions of their source code repository software as well as continuous delivery tooling and other tools for the software lifecycle. The on-premise continuous delivery tooling for Bitbucket is Bamboo, while the hosted (cloud) version is Bitbucket Pipelines. For Gitlab the software is the same for both hosted and on-premise. We only cover the cloud options here.

On the surface the continuous delivery tooling is similar for all these three – a file in each repository which describes the CI/CD workflow(s) for that particular repository. They are all based on running Docker containers to execute steps in the workflow and can handle multiple branches and pipelines. They all have some kind of organisational and team handling capabilities.

Beyond the continuous delivery basics they start to deviate a bit in their capabilities and priorities. Bitbucket, being an Atlassian product/service, focuses on good integration with Jira in particular, but also with some 3rd party solutions. Gitlab prides itself on providing a one-stop solution for the whole software lifecycle – which features are enabled depends on which edition of the software is used. Github, being perhaps the most well-known source code repository provider, has a well-established ecosystem for integrating various tools into its toolchain, provided by 3rd parties and the community – more so than the other providers.

Github and Gitlab have the concept of runners that allow you to set up your own machines to run tasks in the pipelines.

So if you are already using other Atlassian products, Bitbucket and Bitbucket Pipelines might be a good fit. If you want an all-in-one solution, Gitlab can suit you well. If you prefer a best-of-breed approach, picking different components yourself, then Github is likely a good fit.

Third party continuous delivery SaaS tools

There are many providers of hosted continuous delivery tooling. Some of them have been in this space for quite a long time, since before the managed version control system providers added their own continuous delivery tooling.

In this segment there may be providers that support specific use cases better, or are able to set up faster and/or parallel pipelines easily. They also tend to support multiple managed version control system solutions and multiple cloud provider targets. Some of them also provide self-hosted/on-premise versions of their software solutions. Thus this category of providers may be interesting for integrating with a diverse portfolio of existing solutions.

Some of the more popular SaaS providers in this space include CircleCI, which we will return to in the pricing section below.

Pricing models

Regardless of category, pretty much all the different providers mentioned here provide some kind of free tier and then one or more on-demand paid tiers.

For example: Github Actions, Bitbucket Pipelines, Gitlab CI/CD and AWS CodeBuild provide a number of free build minutes per month. This is however limited to certain machine sizes used in executing the tasks in the pipelines.

A simple price model of just counting build minutes is easy to grasp, but it does not allow much flexibility in machine sizes, since a larger machine requires more capacity from the provider. In the case of AWS CodeBuild, you can select from a number of different machine sizes – but you pay for anything larger than the smallest machines from the first minute.
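As a rough, back-of-the-envelope illustration of how a build-minute model plays out (the numbers below are made-up placeholders, not actual prices from any provider – always check the current pricing pages):

    # Back-of-the-envelope comparison of monthly cost under a simple
    # build-minute model. All numbers are illustrative placeholders.
    FREE_MINUTES_PER_MONTH = 2000       # hypothetical free tier
    PRICE_PER_MINUTE_SMALL = 0.005      # USD, hypothetical small machine
    PRICE_PER_MINUTE_LARGE = 0.020      # USD, hypothetical large machine


    def monthly_cost(builds_per_day, minutes_per_build, price_per_minute,
                     free_minutes=FREE_MINUTES_PER_MONTH):
        total_minutes = builds_per_day * minutes_per_build * 30
        billable_minutes = max(0, total_minutes - free_minutes)
        return billable_minutes * price_per_minute


    # 40 builds per day, 8 minutes each: 9600 build minutes per month
    print(monthly_cost(40, 8, PRICE_PER_MINUTE_SMALL))  # 38.0 (USD)
    print(monthly_cost(40, 8, PRICE_PER_MINUTE_LARGE))  # 152.0 (USD)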

The third party continuous delivery providers have slightly different free tier models, partly, I believe, in order to distinguish themselves from the offerings of the managed version control system providers. For example, CircleCI provides a number of free “credits” per week. Depending on machine capacity and features, pipeline executions cost different amounts of credits.

The number of parallel pipeline executions is typically also a factor for all the different providers – free tiers tend to allow one pipeline execution at a time, while more parallelism costs more.

Many pricing models also have a restriction on the number of users, and there may be a price tag attached to each active user. All in all, you pay for compute capacity, for saving time on pipeline execution and for having more people use the continuous delivery pipelines.

With AWS, where a number of services fulfil various parts of the continuous delivery solution, it may initially be a bit harder to grasp what things will actually cost. Machine sizes may also differ between services and providers, so a build minute with one service is not necessarily equivalent to a build minute with another provider.

Trying to calculate the exact amount the continuous delivery solution will cost may be counterproductive at an early stage though. Look at features needed first and their importance, then consider pricing after that.

End notes

Selecting continuous delivery tooling can be a complex topic. The bottom line is that it is meant to help you deliver software faster, more securely and more consistently, with fewer problems – and with good insight into the workflow for different stakeholders. Do not lose sight of that goal and of your actual requirements beyond the simple cases; most alternatives will be fine for the very simple cases. Do not be afraid to try out some of them, but time-box the effort.

If you wish to discuss anything of the above, please feel free to contact me at erik.lundevall@tiqqe.com

Cloud economics

License to kill

Using commercial software and paying for expensive licenses is old school and no longer necessary. The cloud provides flexibility, and you only pay for what you use. No up-front investments necessary.

In May, I’m sure many of you, myself included, were looking forward to the release of the new James Bond film, with the famous slogan – License To Kill.

Unfortunately, due to Covid-19, the film premiere has been postponed, but the reality of License to Kill within IT licenses and infrastructure has never been more relevant than now.

We are in contact with roughly 150 companies across Sweden every month, mainly to understand where the market is right now and how we need to align to meet the market and its challenges.

In the past few months the market has really changed: most companies are “pulling the handbrake”, cutting down their variable costs, freezing new initiatives and so on. What comes as a surprise is the number of licenses many companies have – everything from Office 365 to various on-premise and cloud platforms based on traditional, core-based license models that are very expensive.

With a traditional license model you buy capacity up front that you plan to use over a longer term, usually between 1 and 5 years. During this period you are of course able to “scale up” and purchase more cores, but overall you will always be paying for more than you need at the time of purchase.

Scaling traditional on-premise platforms has the following effects:

  • Additional cores
  • Additional servers
  • Not fully utilized 
  • Generate additional costs

This is costing companies across the globe huge amounts of money that could be spent on better things or, in these uncertain times, simply saved.

Here are a few examples:

  • On-Prem infrastructure
  • Integration platforms, Enterprise Service Bus
  • API-Platforms
  • Identity & Access Management platforms
  • Service & Assessment Platforms

The list is long and most likely you are running one or several at your workplace today.

So what’s the solution?

From both a license and an infrastructure perspective, the cloud is the obvious choice, since it enables you to scale both up and down. At TIQQE we focus purely on AWS, and the ability to scale, the absence of up-front license costs and paying only for what you need right now are all key reasons to move to the cloud.

Ask yourself if you need to renew your licenses anytime soon. Do you want to buy more licenses or do you want a second opinion?

We have all the tools in place to quickly identify your costs today and what the costs would be if you would instead operate in the cloud.

This blog post is mainly focused on the cost saving perspective, but there are many more examples of what the cloud can provide.

I really recommend checking out the following post:

4 ways to reduce cost and increase liquidity

Monitoring

Monitoring stuff in AWS

My name is Max Koldenius and I’m responsible for Operations at TIQQE. This blog post will cover some personal thoughts on monitoring and also some specific examples from AWS Quicksight.

Monitoring

Monitoring. Logs. If everyone is doing their job properly, monitoring should be a very boring activity. Nothing exciting should happen, that’s kind of the idea. And, as with all boring things, that usually leads to people doing more fun things instead, and I can’t blame them. But it could be very dangerous if your business is dependent on some component which is very boring to monitor and your DevOps team is developing some new fun stuff instead of checking on it.

Of course, setting up alarms and alerts is your first line of actions to avoid these problems, but there will be stuff that is hard to catch with alarms, thresholds and triggers. For example, if your “incoming-order-integration” looks normal on Black Friday, that is not normal but will be very hard to detect with an alarm or even all the fancy Machine Learning tools available.

So, from my experience, here are some important things to remember:

  • Push monitoring to people rather than having people (hopefully) pull information from logs. A nice TV monitor in a strategic place in the office works fine.
  • Visualize things! Computers are good with numbers, people are good at detecting patterns. So let’s leave the numbers to the computers and visualize for the humans.
  • Give it a good thought when choosing what to monitor. Don’t start with what the tool is capable of or what data you have access to, but ask yourself what you really need to know to avoid any problems.
  • Always improve your monitoring. Ideally you should ask yourself for every new incident if it could have been avoided by alarms or monitoring.

AWS Quicksight

Over the years I’ve been using a lot of different tools for monitoring, and every application usually has its own report section with nice reports that you can create – sometimes easily, sometimes not…

The problem with this, related to my list above, is that we usually depend on data from many different sources to create meaningful content for our TV screen. When using several different tools you quickly run into problems: authorization issues, different design templates, keeping up to date with each tool, increasing costs and so on.

From my experience it’s highly recommended to gather data in one tool and use that for monitoring. For us, that tool is AWS Quicksight.

Max Koldenius with AWS Quicksight monitoring

There is plenty of documentation about Quicksight; start here if you want to know more: https://aws.amazon.com/quicksight/. I will just add some personal reflections on using Quicksight in our daily operations:

  • We almost always use plain files as input data for our dashboards. Just dump files in an S3 bucket and you can extract all kinds of interesting data from them. Nice!
  • It’s super easy to quickly create basic graphics – perfect for monitoring incoming files, checking trends, identifying strange patterns etc. It’s a bit limited if you have specific design needs.
  • It’s worth putting some extra effort into the basic data, preferably at an early stage. If the basics are there, the rest is very simple.
  • Since all our workloads are in AWS, there is no reason for us to use anything else.

Example of my favourite Quicksight visualization

One of my favourite KPIs is this one, where we can see the most frequent incoming alerts that are not handled by our automatic incident handler. For more on this topic, take a look at my AutoOps talk.

Most frequent incoming alerts not automatically handled

So, these are the activities we have completed to create this visualization:

  1. Create an outgoing API call from our issue handling system that triggers on resolved issues. The API call sends a request to a Lambda function that simply saves the JSON for the issue to an S3 bucket. With that, the most important piece is in place!
  2. Configure an Athena database to enable SQL querying on the JSON data.
  3. Create a data set in Quicksight pointing at the Athena database.
  4. Create a new visualization in Quicksight, drag and drop the desired data into the visualization, in this case Topic and Owner.
  5. Done. An all serverless monitoring solution is set up!

Steps 1 and 2 are where you need to put some effort, but it’s surprisingly easy – a minimal sketch of the Lambda part of step 1 is shown below. And once the data is in S3, you can easily create new visualizations based on it.
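To make step 1 concrete, here is a minimal Python sketch of such a Lambda function, assuming it sits behind API Gateway. The bucket name, key layout and payload fields are illustrative assumptions, not the exact implementation we use.

    # Minimal sketch of step 1: a Lambda function behind API Gateway that
    # receives the resolved-issue payload and stores it as JSON in S3.
    # Bucket name, key layout and payload fields are illustrative assumptions.
    import json
    import os
    from datetime import datetime, timezone

    import boto3

    s3 = boto3.client("s3")
    BUCKET = os.environ["ISSUE_BUCKET"]  # e.g. "my-resolved-issues"


    def handler(event, context):
        issue = json.loads(event["body"])  # payload from the issue handling system

        # A date-based prefix keeps the data easy to query (and partition) in Athena.
        now = datetime.now(timezone.utc)
        key = f"resolved/{now:%Y/%m/%d}/{issue.get('id', int(now.timestamp()))}.json"

        s3.put_object(
            Bucket=BUCKET,
            Key=key,
            Body=json.dumps(issue),
            ContentType="application/json",
        )
        return {"statusCode": 200, "body": json.dumps({"stored": key})}

Step 2 is then mainly a matter of pointing an Athena table at the bucket (for example with a CREATE EXTERNAL TABLE statement using a JSON SerDe), and steps 3–5 are done in the Quicksight console.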

Quicksight is perfect for follow-up and analysis of data over time. It is not suitable for live monitoring; other tools work better for that.

Please drop me an email if you have questions or comments on this blog post, I’d love to hear your feedback!

Best regards

Max Koldenius, TIQQE

max.koldenius@tiqqe.com

COVID-19

4 ways to reduce cost and increase liquidity.

Many companies are under tremendous financial pressure due to COVID-19. We sat down to figure out what we could do to help and came up with 4 ways to reduce cost and increase liquidity for a company in the short term. We are posting these 4 ideas as a blog series.

4 ways to reduce cost and increase liquidity

We provide 4 hands-on ideas for how you can reduce cost and increase liquidity in the short term. All ideas include financial examples to give a clear view of the potential of each idea in your context. We have created 4 business case templates to help you customize and translate each idea into tangible value for your organization; just give us a call and we will help you. Bring some good news to your CFO in these challenging times with some hands-on, concrete and proactive ideas for reducing IT costs.

Idea #1 – Hardware refresh

With a depreciation cycle of 36 months, you’re looking at a 33% replacement of servers and storage in your datacenter this year. Now is a good time to challenge the default decision to replace those servers with new ones and consider cloud instead.

Read hardware refresh post

Idea #2 – Integration platform replace

Every organization needs to connect data between applications and databases to support their business processes. There are a lot of ways of solving the integration need but many companies have bought an integration platform from one or more of the major product vendors in the market such as Microsoft Biztalk, Tibco, Mulesoft, IBM Websphere etc. If you’re one of them, we have good news for you and your CFO.

Read integration platform replace post

Idea #3 – incident automation

Incident handling is a highly manual process in most companies. It requires 1st, 2nd and 3rd line resources in a service desk to manage error handling of applications, databases and infrastructure. Furthermore, some expert forum, or Change Advisory Board, is usually in place to work on improvements that reduce tickets and incidents. A lot of people are required just to keep the lights on. Imagine if you could automate most of your incidents.

Read incident automation post

Idea #4 – infrastructure optimization

Managing cloud infrastructure is different from managing infrastructure on-prem. It’s easy to provision new resources, but it’s equally easy to forget to decommission resources when they’re no longer needed. Furthermore, performance tuning is often not part of daily routines and is only performed when there are performance problems. Optimization should not be something you do occasionally, but something you do regularly to ensure cost-effective use of cloud computing. If you need quick ways to reduce your costs, optimization is one tool for bringing good news to your CFO.

Read Infrastructure optimization post