Webinar

Incident automation webinar

Join our incident automation webinar on the 9th of June, 08:30 to 09:15. Learn how to automate 70% of your incidents with AWS Step Functions.

Incident handling is a highly manual process in most companies. It requires 1st, 2nd and 3rd line resources in a service desk to manage error handling of applications, databases and infrastructure. Furthermore, some expert forum, or Change Advisory Board, is usually in place to work with improvements to reduce tickets and incidents. A lot of people are required just to keep the lights on.

What if you could set up monitoring alerts that automatically trigger automated processes and resolve incidents before users even notice them and place a ticket with your service desk? Sounds like science fiction? Check out this webinar where Max Koldenius will reveal how to set up incident automation using AWS Step Functions.

Join our webinar on the 9th of June, 08:30 to 09:15.

Please enroll here

AWS

Continuous delivery in AWS – tools overview

Continuous Delivery (CD) is a term used for a collection of practices that aim to enable an organisation to provide both speed and quality in its software delivery process. It is a complex topic, and in this article we will focus on one aspect: selecting tools for CD pipelines when deploying software in AWS.

Before we dive into various tooling options for continuous delivery, let us define some scope and terminology, and also talk a bit about why we would bother with this in the first place.

Scope

Our scope for this overview is delivering solutions that run in AWS. Source code lives in a version control system, and we assume that it is a hosted solution (not on-premise) and that it is Git, currently the most common version control system in use. Some services mentioned may also work with other version control systems, such as Subversion.

Continuous delivery tools can either be a software-as-a-service (SaaS) solution, or we can manage it ourselves on servers – in the cloud or on-premise. Our scope here is only for the SaaS solutions.

If you have a version control system that is hosted on-premise or on your own servers in some cloud, that typically works with continuous delivery software that you host yourself – either on-premise or in the cloud. We will not cover those options here.

Terminology

First of all, there are a few terms and concepts to look at:

  • Pipeline – this generally refers to the process that starts with a code change and ends with the release and deployment of the updated software. In some cases this is a mostly or even completely automated process.
  • Continuous integration (CI) – the first part of the pipeline, in which developers can perform code updates in a consistent and safe way with fast feedback loops. The idea is to do this often and to keep it quick, so any errors in changes can be caught and corrected quickly. Doing it often means that each change is small, which makes it easier to pinpoint and correct any errors. For CI to work well, it needs a version control system and a good suite of automated tests that can be executed whenever someone commits updates to the version control system.
  • Continuous Delivery (CD) – this refers to the whole process from code changes and CI to the point where a software release is ready for deployment. It includes everything in the continuous integration part plus any other steps needed to make the software ready for release. Ideally this is also fully automated, although it may include manual steps. Again, the goal is that the process is quick, consistent and safe, so that a deployment can be made “at the click of a button” or through a similarly simple procedure. The deployment itself, however, is not part of continuous delivery.
  • Continuous Deployment (CD) – Unfortunately the abbreviation is the same as for continuous delivery, but it is not the same. This is continuous delivery plus automated deployment. In practice, this is applicable for some solutions but not all. With serverless solutions it is generally easier to do this technically, but in many cases it is not a technology decision, but a business decision.

Why continuous delivery?

Speed and safety for the business/organisation – that is essentially what it boils down to: being able to adapt and change based on market and business requirements, and to do so in a way that minimises disruption to the business.

Depending on which stakeholders you look at, there are typically different aspects of this process that are of interest:

  • Business people’s interests are in speed and predictability when delivering on changed business requirements, and in services continuing to work satisfactorily for customers.
  • Operations people’s interests are in safety, simplicity and predictability of updates and that disruptions can be avoided.
  • Developers’ interest is in fast feedback on the work they do, and in being able to make changes without fear of messing things up for themselves and their colleagues – plus being able to focus on solving problems and building useful or cool solutions.

It is a long process to reach continuous delivery Nirvana, and the world of IT is a mess to varying degrees – we are never done. A sane choice of tooling for continuous delivery can at least get us part of the way.

Continuous delivery tools

If we want a continuous delivery tool which targets AWS, uses git and runs as a SaaS solution, we have a few categories:

  • Services provided by AWS
  • Services provided by the managed version control system solution
  • Third party continuous delivery SaaS tools

Services provided by AWS

AWS has a number of services related to continuous delivery, all of which have names that start with “Code”. This includes:

  • AWS CodeCommit
  • AWS CodePipeline
  • AWS CodeBuild
  • AWS CodeDeploy
  • AWS CodeGuru
  • AWS CodeStar

A key advantage of using AWS services is that credentials and access are handled by the regular identity and access management (IAM) in AWS, and encryption by the Key Management Service (KMS). There is no AWS secrets information that has to be stored outside of AWS, since it all lives in AWS – assuming your CI/CD workflow goes all-in on AWS, or at least to a large extent.

A downside with these AWS services is that they are not the most user-friendly, plus there are quite a few of them. They can be used together to set up elaborate CI/CD workflows, but it requires a fair amount of effort to do so. CodeStar was an attempt to provide an end-to-end development workflow with CI/CD in a single service.
I like the idea behind CodeStar, and for some use cases it may be just fine, but it has not received much love from AWS since it was launched.

You do not necessarily need all of these services to set up a CI/CD workflow – in its simplest form you just need a supported source code repository (CodeCommit/Github/Bitbucket) and CodeBuild. But things can quickly get more complicated, in particular once the number of repositories, developers and/or AWS accounts involved starts to grow. One project that tries to alleviate that pain is the AWS Deployment Framework.
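To make the “simplest form” concrete, here is a hedged sketch of a single CodeBuild project wired to a GitHub repository, defined with the AWS CDK in TypeScript. The repository details and build commands are assumptions, and GitHub credentials must already be registered with CodeBuild:

```typescript
import * as cdk from 'aws-cdk-lib';
import * as codebuild from 'aws-cdk-lib/aws-codebuild';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'SimpleCiStack');

// One CodeBuild project that rebuilds on every push to the repository.
new codebuild.Project(stack, 'CiProject', {
  source: codebuild.Source.gitHub({
    owner: 'my-org',    // assumption: your GitHub organisation
    repo: 'my-service', // assumption: your repository
    webhook: true,      // trigger a build on each push
  }),
  buildSpec: codebuild.BuildSpec.fromObject({
    version: '0.2',
    phases: {
      install: { commands: ['npm ci'] },
      build: { commands: ['npm test', 'npm run deploy'] }, // assumed scripts
    },
  }),
});

app.synth();
```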

Services provided by the managed version control system solution

Three of the more prominent version control system hosting services are Github, Gitlab and Bitbucket. They all have CI/CD services bundled with their hosted service offering. Both Bitbucket and Gitlab also provide on-premise/self-hosted versions of their source code repository software as well as continuous delivery tooling and other tools for the software lifecycle. The on-premise continuous delivery tooling for Bitbucket is Bamboo, while the hosted (cloud) version is Bitbucket Pipelines. For Gitlab the software is the same for both hosted and on-premise. We only cover the cloud options here.

On the surface the continuous delivery tooling is similar for all these three – a file in each repository which describes the CI/CD workflow(s) for that particular repository. They are all based on running Docker containers to execute steps in the workflow and can handle multiple branches and pipelines. They all have some kind of organisational and team handling capabilities.

Beyond the continuous delivery basics they start to deviate a bit in their capabilities and priorities. Bitbucket, being an Atlassian product/service, focuses on good integration with Jira in particular, but also with some 3rd party solutions. Gitlab prides itself on providing a one-stop solution/application for the whole software lifecycle – which features are enabled depends on which edition of the software is used. Github, being perhaps the most well-known source code repository provider, has a well-established ecosystem for integrating various tools into the toolchain, provided by 3rd parties and the community – more so than the other providers.

Github and Gitlab have the concept of runners that allow you to set up your own machines to run tasks in the pipelines.

So if you are already using other Atlassian products, Bitbucket and Bitbucket Pipelines might be a good fit. If you want an all-in-one solution, then Gitlab can suit you well. For a best-of-breed approach where you pick different components, Github is likely a good fit.

Third party continuous delivery SaaS tools

There are many providers of hosted continuous delivery tooling. Some of them have been in this space for a reasonably long time – since before the managed version control system providers added their own continuous delivery tooling.

In this segment there may be providers that support specific use cases better, or are able to set up faster and/or parallel pipelines easily. They also tend to support multiple managed version control system solutions and multiple cloud provider targets. Some of them also provide self-hosted/on-premise versions of their software solutions. Thus this category of providers may be interesting for integrating with a diverse portfolio of existing solutions.

Some of the more popular SaaS providers in this space include CircleCI, which is used as an example in the pricing discussion below.

Pricing models

Regardless of category, pretty much all the providers mentioned here offer some kind of free tier and then one or more on-demand paid tiers.

For example: Github Actions, Bitbucket Pipelines, Gitlab CI/CD and AWS CodeBuild provide a number of free build minutes per month. This is however limited to certain machine sizes used in executing the tasks in the pipelines.

A simple price model of just counting build minutes is easy to grasp, but it does not allow much flexibility in machine sizes, since a larger machine requires more capacity from the provider. In the case of AWS CodeBuild, you can select from a number of different machine sizes – but you need to pay for anything larger than the smallest machines from the first minute.

The third party continuous delivery providers have slightly different free tier models, I believe partially in order to distinguish them from the offerings of the managed version control system providers. For example, CircleCI provides a number of free “credits” per week. Depending on machine capacity and features, pipeline executions will cost different amounts of credits.

The number of parallel pipeline executions is typically also a factor for all the different providers – free tiers tend to allow one pipeline execution at a time, while more parallel executions cost more.

Many pricing models also include a restriction on the number of users, and there may be a price tag attached to each active user as well. All in all, you pay for compute capacity, for saving time on pipeline execution and for having more people utilize the continuous delivery pipelines.

With AWS, where a number of services fulfil various parts of the continuous delivery solution, it may be a bit harder to grasp initially what things will actually cost. The machine sizes may not be identical across the different services either, so a build minute for one service is not necessarily equal to a build minute at another provider.

Trying to calculate the exact amount the continuous delivery solution will cost may be counterproductive at an early stage though. Look at features needed first and their importance, then consider pricing after that.

End notes

Selecting continuous delivery tooling can be a complex topic. The bottom line is that it is intended to help you deliver software faster, more securely and more consistently, with fewer problems – and with good insight into the workflow for different stakeholders. Do not lose sight of that goal and of what your requirements are beyond the simple cases; most alternatives will be ok for the very simple cases. Do not be afraid to try out some of them, but time box the effort.

If you wish to discuss anything of the above, please feel free to contact me at erik.lundevall@tiqqe.com

Cloud economics

License to kill

Using commercial software and paying for expensive licenses is old school and no longer necessary. The cloud provides you with flexibility and you only pay for what you use. No investments necessary.

In May, I’m sure many of you, including myself, were looking forward to the release of the new James Bond film, with the famous slogan – License To Kill.

Unfortunately, due to Covid-19, the film premiere has been postponed, but the reality of License to Kill within IT licenses and infrastructure has never been more important than now.

We are in contact with roughly 150 companies across Sweden every month, mainly to understand where the market is at this point of time and how we need to align to be able to meet the market with their challenges.

In the past few months the market has really changed: most companies are “pulling the handbrake”, cutting down their variable costs, freezing new initiatives and so on. What comes as a surprise is the number of licenses many companies have – everything from Office365 to various on-premise and cloud platforms based on traditional license models that are core-based and very expensive.

When buying licenses with a traditional license model, you buy capacity up-front which you plan to use over a longer term, usually between 1 and 5 years. Of course, during this period you are able to “scale up” and purchase more cores. But overall you will always be paying for more than you need at the time of purchase.

Scaling a traditional on-premise platform typically has the following effects:

  • Additional cores
  • Additional servers
  • Not fully utilized 
  • Generate additional costs

This is costing companies across the globe huge amounts of money that could be spent on better things – or, in these uncertain times, simply saved.

Here are a few examples:

  • On-Prem infrastructure
  • Integration platforms, Enterprise Service Bus
  • API-Platforms
  • Identity & Access Management platforms
  • Service & Assessment Platforms

The list is long and most likely you are running one or several at your workplace today.

So what’s the solution?

From both a license and an infrastructure perspective, the cloud is the obvious choice, as it enables you to scale both up and down. At TIQQE we focus purely on AWS, and the ability to scale, avoid up-front license costs and pay only for what you need right now are the key reasons for moving to the cloud.

Ask yourself if you need to renew your licenses anytime soon. Do you want to buy more licenses or do you want a second opinion?

We have all the tools in place to quickly identify your costs today and what the costs would be if you would instead operate in the cloud.

This blog post is mainly focused on the cost saving perspective, but there are many more examples of what the cloud can provide you with.

I really recommend checking the following blogs out:

4 ways of reducing cost and increase liquidity

Monitoring

Monitoring stuff in AWS

My name is Max Koldenius and I’m responsible for Operations at TIQQE. This blog post will cover some personal thoughts on monitoring and also some specific examples from AWS Quicksight.

Monitoring

Monitoring. Logs. If everyone is doing their job properly, monitoring should be a very boring activity. Nothing exciting should happen, that’s kind of the idea. And, as with all boring things, that usually leads to people doing more fun things instead, and I can’t blame them. But it could be very dangerous if your business is dependent on some component which is very boring to monitor and your DevOps team is developing some new fun stuff instead of checking on it.

Of course, setting up alarms and alerts is your first line of actions to avoid these problems, but there will be stuff that is hard to catch with alarms, thresholds and triggers. For example, if your “incoming-order-integration” looks normal on Black Friday, that is not normal but will be very hard to detect with an alarm or even all the fancy Machine Learning tools available.

So, from my experience, here’s some important things to remember:

  • Push monitoring to people rather than having people (hopefully) pull stuff from logs etc. A nice tv-monitor in a strategic place in the office works fine.
  • Visualize things! Computers are good with numbers, people are good at detecting patterns. So let’s leave the numbers for the computer and let’s visualize for the humans.
  • Give it a good thought when choosing what to monitor. Don’t start with what the tool is capable of or what data you have access to, but ask yourself what you really need to know to avoid any problems.
  • Always improve your monitoring. Ideally you should ask yourself for every new incident if it could have been avoided by alarms or monitoring.

AWS Quicksight

Over the years I’ve used a lot of different tools for monitoring, and every application usually has its own report section with nice reports that you can create – sometimes easy, sometimes hard…

The problem with this, related to my list above, is that we usually depend on data from many different sources to create meaningful content for our tv-screen. You quickly run into problems when using different tools: authorization issues, different design templates, keeping up to date with multiple tools, increasing costs and so on.

From my experience it’s highly recommended to gather data in one tool and use that for monitoring. For us, that tool is AWS Quicksight.

Max Koldenius with AWS Quicksight monitoring

There are tons of documentation about Quicksight, start here if you want to know more: https://aws.amazon.com/quicksight/. I will just add some personal reflections on using Quicksight in our daily operation:

  • We almost always use plain files as input data for our dashboards. Just dump files in an S3 bucket and you can extract all kinds of interesting data from them. Nice!
  • It’s super easy to quickly create basic graphics, perfect for monitoring incoming files, check trends, identify strange patterns etc. It’s a bit limited if you have specific design needs.
  •  It’s worth putting some extra effort into the basic data, preferably at an early stage. If the basics are there, the rest is very simple.
  • Since all our workloads are in AWS, there is no reason for us to use anything else.

Example of my favourite Quicksight visualization

One of my favourite KPIs is this one, where we can see the most frequent incoming alerts that are not handled by our automatic incident handler. For more on this topic, take a look at my AutoOps talk.

Most frequent incoming alerts not automatically handled

So, these are the activities we have completed to create this visualization:

  1. Create an outgoing API call from our issue handling system that triggers on resolved issues. The API call sends a request to a Lambda function that simply saves the JSON for the issue to an S3 bucket. The most important thing is in place!
  2. Configure an Athena database to enable SQL querying on the JSON data.
  3. Create a data set in Quicksight pointing at the Athena database.
  4. Create a new visualization in Quicksight and drag and drop the desired data into the visualization, in this case Topic and Owner.
  5. Done. An all-serverless monitoring solution is set up!

Steps 1 and 2 are where you need to put some effort, but it’s surprisingly easy. And once the data is in S3, you can easily create new visualizations based on it.
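As an illustration of step 1, a minimal sketch of such a Lambda function might look like the following (written in TypeScript with the AWS SDK v3; the bucket name, key layout and shape of the incoming issue are assumptions, not our actual setup):

```typescript
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

const s3 = new S3Client({});
const BUCKET = process.env.ISSUE_BUCKET ?? 'my-issue-archive'; // assumed bucket name

export const handler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
  // The issue handling system sends the resolved issue as a JSON body.
  const issue = JSON.parse(event.body ?? '{}');

  // Partition the objects by date so Athena can query them efficiently later.
  const date = new Date().toISOString().slice(0, 10);
  const key = `issues/${date}/${issue.id ?? Date.now()}.json`;

  await s3.send(new PutObjectCommand({
    Bucket: BUCKET,
    Key: key,
    Body: JSON.stringify(issue),
    ContentType: 'application/json',
  }));

  return { statusCode: 200, body: JSON.stringify({ stored: key }) };
};
```

With the objects in S3, step 2 is just a matter of defining an Athena table over that prefix, and Quicksight can then use the table as its data source.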

Quicksight is perfect for follow-up and analysis of data over time. It is not suitable for live monitoring; other tools work better for that.

Please drop me a mail if you have questions or comments on the content of this blog post, I’d love to hear your feedback!

Best regards

Max Koldenius, TIQQE

max.koldenius@tiqqe.com

Serverless

Choosing the right tool for your serverless journey.

What tools are available if I want to start building my own serverless applications? We will go over some of the most popular frameworks and tools available for the developer who wants to get started with AWS Lambda.

There are a lot of tools out there to use when building software that is powered by AWS Lambda. These tools aim to ease the process of coding, configuring and deploying the Lambdas themselves, but also the surrounding AWS infrastructure. We will discuss the following alternatives:

  • AWS Cloud development kit (CDK)
  • Serverless Framework
  • AWS Serverless Application Model (SAM)
  • Terraform

About AWS CloudFormation

AWS CloudFormation is a service within AWS that lets you group resources (usually AWS resources) into stacks that you can deploy and continuously update. These CloudFormation templates are written in JSON or YAML, and manually typing them can be very tedious, especially when your stack grows to a lot of resources that reference each other in multiple ways. What many of these frameworks and tools do is provide an abstraction layer on top of CloudFormation, so that the developer can move faster and focus on the business value of the service they are building.

AWS Cloud development kit (CDK)

The AWS CDK went into general availability in the summer of 2019 and has been getting a lot of traction lately. It is an open source framework that lets you define your infrastructure in code instead of writing CloudFormation directly. You then generate CloudFormation templates from your code by running the command cdk synthesize.

You can choose from Python, TypeScript, JavaScript, .NET and Java to describe your resources instead of having to do it in pure CloudFormation. This gives you benefits such as code completion and being able to assign resources to variables, which helps when one resource needs to reference another. Another great benefit is that it has helper functions in place for common developer use cases, for example setting up a Node.js Lambda function or constructing ARNs.

The CDK code in the screenshot is synthesized to CloudFormation at build time. The example creates a Lambda function, a DynamoDB table and an API Gateway. It also shows referencing between the resources, for example giving the Lambda function rights to read from the table.
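The screenshot itself is not reproduced here, but a minimal TypeScript sketch of a comparable stack might look like this (construct names, runtime version and table schema are assumptions, not the original code):

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import * as apigateway from 'aws-cdk-lib/aws-apigateway';

export class HelloStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // DynamoDB table with a single partition key.
    const table = new dynamodb.Table(this, 'ItemsTable', {
      partitionKey: { name: 'id', type: dynamodb.AttributeType.STRING },
    });

    // Lambda function; the code is loaded from a local "lambda" directory.
    const fn = new lambda.Function(this, 'HelloFunction', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda'),
      environment: { TABLE_NAME: table.tableName },
    });

    // Referencing between resources: grant the Lambda read access to the table.
    table.grantReadData(fn);

    // API Gateway REST API that proxies requests to the Lambda.
    new apigateway.LambdaRestApi(this, 'HelloApi', { handler: fn });
  }
}
```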

Serverless Framework

This one has been around since 2015 and was called JAWS before quickly changing to its current name. As the very descriptive name says, it’s a framework for building serverless applications! The framework is easy to get started with, and setting up an API with a few Lambdas requires very little configuration from the developer, as the framework takes care of the underlying CloudFormation template.

Because of its specific focus on serverless applications, the framework is not as broad as the CDK, and that comes with pros and cons. You will get a lot of help if you are setting up Lambdas or events that trigger those Lambdas, but setting up the surrounding infrastructure such as queues, tables, Kinesis streams and Cognito user pools will often require you to write pure CloudFormation. At TIQQE, some of us like to create and deploy this surrounding infrastructure with the CDK, while developing the Lambdas and the API Gateway in Serverless Framework.

Serverless Framework is open source and multi-cloud. It’s also extendable with a wide range of plugins created by the community.

The screenshot shows the central configuration file in a Serverless Framework service: serverless.yml. When deployed, a Lambda function “myFirstLambda” is created and an API Gateway is set up with a method to invoke the Lambda at the path /hello.
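The screenshot is not included here, but a minimal serverless.yml along those lines could look like this (the service name, handler path and runtime are assumptions):

```yaml
service: my-first-service   # assumed service name

provider:
  name: aws
  runtime: nodejs12.x       # assumed runtime

functions:
  myFirstLambda:
    handler: handler.hello  # assumed handler file and export
    events:
      - http:
          path: hello
          method: get
```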

AWS Serverless Application Model (SAM)

AWS SAM is another framework, similar to Serverless Framework in that it lets the developer write less code when building serverless applications. Unlike Serverless Framework, SAM is specific to AWS, and its main configuration file, template.yml, is written as CloudFormation. So if you have previous experience with AWS and CloudFormation, you will likely find it easy to get started with SAM. A neat feature in SAM is that it supports deploying APIs using Swagger out of the box.

template.yml in an AWS SAM project. Deploying this template will produce the same result as the Serverless Framework example above.
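Again, the screenshot is not reproduced here, but a hedged sketch of a template.yml that mirrors the Serverless Framework example could look like this (logical names, handler and runtime are assumptions):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  MyFirstLambda:
    Type: AWS::Serverless::Function
    Properties:
      Handler: handler.hello   # assumed handler file and export
      Runtime: nodejs12.x      # assumed runtime
      CodeUri: .
      Events:
        HelloApi:
          Type: Api            # creates an API Gateway method at /hello
          Properties:
            Path: /hello
            Method: get
```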

Terraform

This multi-cloud tool for infrastructure as code is worth a mention! It has been around since 2014 and is written in Go. For AWS it uses the aws-sdk to manage resources instead of CloudFormation, which gives the benefit of not having the 200-resource limit that AWS imposes on CloudFormation templates.

How do I choose which one to pick?

It comes down to some characteristics of your application, and a fair bit of personal preference! 

  • Are you building a service with an API endpoint, and have little or no previous experience in AWS or serverless architecture? We recommend checking out Serverless Framework.
  • Are you not a fan of writing CloudFormation and your architecture needs a lot of infrastructure? Check out AWS CDK.
  • Are you familiar with CloudFormation and want to get started with serverless applications? AWS SAM could be the perfect match! 

There are countless forum posts and articles debating whether to go with AWS SAM or Serverless Framework. The frameworks are very similar, and many times it comes down to personal taste. At TIQQE we have lots of experience working with Serverless Framework, and some of us would argue that you get the job done in fewer lines of code with Serverless Framework. With that said, SAM does not have to worry about being generic for multiple clouds, and that can be an edge if you are working only with AWS. SAM also defaults to giving Lambda functions least-privilege access rights (an AWS best practice), while Serverless Framework shares a role between all Lambdas.

Terraform can be a good match if you are creating infrastructure as code across multiple clouds. While Terraform is capable of doing many things, it is not specialised in serverless technologies and you will have to write a lot of code to achieve the same results as the other frameworks described in this post. Not having a 200 resource limit is nice but should not be a problem that often if you are designing your systems in terms of microservices.

Do you have any comments or questions about this article? Please reach out!

johannes.uhr@tiqqe.com

COVID-19

4 ways to reduce cost and increase liquidity.

Many companies are under tremendous financial pressure due to the COVID-19 virus. We sat down to figure out what we can do to help and came up with 4 ways to reduce cost and increase liquidity in the short term for a company. We are posting these 4 ideas in a blog series.

4 ways to reduce cost and increase liquidity

We provide 4 hands-on ideas of how you can reduce cost and increase liquidity in the short term. All ideas include financial examples to provide a clear view of the potential of each idea in your context. We have created 4 business case templates to help you customize and translate each idea into tangible value for your organization, just give us a call and we will help you. Bring some good news to your CFO in these challenging times with some hands-on, concrete and proactive ideas of how to reduce IT costs.

Idea #1 – Hardware refresh

With a depreciation cycle of 36 months, you’re looking at a 33% replacement of servers and storage in your datacenter this year. Now is a good time to challenge the default decision to replace those servers with new ones and consider cloud instead.

Read blog post

Idea #2 – [ insert integration product here ] replace

Every organization needs to connect data between applications and databases to support their business processes. There are a lot of ways of solving the integration need but many companies have bought an integration platform from one or more of the major product vendors in the market such as Microsoft Biztalk, Tibco, Mulesoft, IBM Websphere etc. If you’re one of them, we have good news for you and your CFO.

Read blog post

Idea #3 – incident automation

Incident handling is a highly manual process in most companies. It requires 1st, 2nd and 3rd line resources in a service desk to manage error handling of applications, databases and infrastructure. Furthermore, some expert forum, or Change Advisory Board, is usually in place to work with improvements to reduce tickets and incidents. A lot of people are required just to keep the lights on. Imagine if you could automate most of your incidents.

Read blog post

Idea #4 – infrastructure optimization

Managing cloud infrastructure is different from managing infrastructure on-prem. It’s easy to provision new resources, but it’s equally easy to forget to decommission resources when they’re no longer needed. Furthermore, performance tuning is often not part of daily routines and is only performed when there are performance problems. Optimization is not supposed to be performed occasionally but rather on a regular basis, to ensure cost effective use of cloud computing. If you need to find quick ways of reducing your costs, optimizing will be one tool to use to bring good news to your CFO.

Read blog post

Webinar

Join our Biztalk replace webinar

Your integration platform is a cost bomb according to Radar Group, so if your company is looking for quick savings due to the COVID-19 crisis, we can help you save 50-60% and provide an ROI in less than a year.

Many companies are under financial pressure during the ongoing pandemic and are looking for ways to reduce cost in the short term.

Your integration platform is a cost bomb according to a study by Radar Group. This webinar will present how you can replace your existing integration platform with a modern cloud solution from TIQQE and cheer up your CFO with substantial savings.

Join our webinar the 5th of May, 08:30-09:15, to learn more.

Please enroll here

You can also read our blog post including a financial example of a company with 100 integrations.

  • 2MSEK in savings the first year
  • 8MSEK in savings the following years
  • 34MSEK in accumulated savings in 5 years (49%)
  • Return on investment, less than a year

If you’re planning an upgrade from Biztalk 2016 to 2020, you will have an even greater business case.

Welcome!

People

Richard Vergis just joined TIQQE!

We are very proud to announce that Richard has joined the TIQQE family and will be engaged in important development tasks at PostNord.

Richard is a backend-developer who likes to build and improve software that solves real-world business problems, aiming for positive business impact. He is inspired by a high productivity culture and innovation, always walking the line between perfection and a getting-it-done mentality.

Richard uses the best practices he has learned to help his clients achieve their goals. He also loves learning about anything, especially if it makes him better at his craft. We look forward to working with you and to being inspired by you and your great experience in AWS and Node.js!

#hackthecrisis

#hackthecrisis

Most of us are affected by the COVID-19 virus, or Corona, either in our private lives or in our business lives. Hack The Crisis is an initiative from the Swedish Government, and TIQQE contributed with a mobile digital queue solution for the retail industry.

Hack the Crisis is an online hackathon organized by DIGG, Hack for Sweden, Openhack and the Swedish Government. The mission is to design, test and execute ideas for the future of Sweden and the world. The idea is to gather creative ideas and develop concepts in an attempt to create solutions that help make further progress in the ongoing fight against the virus.

David Borgenvik and Johannes Uhr at TIQQE developed a mobile digital queue solution for the retail industry in just 48 hours. Consumers can use their mobile phone to get a ticket in a queue instead of pressing a button on the ticketing machine in the store, which could be contaminated and spread the COVID-19 virus to others.

Mobile Digital Queue solution by TIQQE

Serverless

Simply: AWS Lambda

Why should I use AWS Lambda and how does it work? In this blog post I provide a practical, hands-on guide to creating your first AWS Lambda function and explain why you should use it to create awesome customer value.

What is AWS Lambda?

With AWS Lambda we can write code and execute it without having to care about configuring servers.

Why should I use it?

It enables you to quickly develop business relevant code and deliver value for your customers and stakeholders.

How do I start?

First you’re gonna need an AWS account – follow this guide.

Creating our first Lambda

From the AWS console, head to Services, search for Lambda and select the first option.

Click Create Function

Enter a name for your Lambda and select a runtime (I’m going with Node.js). Leave everything else at its default.

Writing code

When your Lambda is created you’ll be taken to that Lambda’s page, where you can see and set up lots of information and options for your Lambda. Let’s not worry too much about that right now and just scroll down to “Function Code”.

Using the inline editor (you can of course write code in any IDE you want and deploy it to AWS, but I’ll cover that in another post), let’s enter some code. This is what I used:
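The original screenshot is not reproduced here, so the exact code is an assumption – a minimal Node.js handler in the same spirit could be:

```javascript
// A tiny handler: read a value from the incoming event and return a greeting.
exports.handler = async (event) => {
  const name = event.name || 'world';
  console.log(`Received event for ${name}`); // shows up in CloudWatch Logs
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};
```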

Testing our code

At the top of the screen click configure test event and create an event to execute the function with.

The event in JSON format
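The screenshot is missing here; a test event matching the sketch above could be as simple as:

```json
{
  "name": "TIQQE"
}
```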

Hit Create and finally click the “Test” button.

After its execution you’ll see the result and the output by clicking Details in the green result box. You can also click (logs) to enter CloudWatch Logs and get a better look into all executions of your Lambda.

Good job!

You’ve just created a Lambda, and the possibilities with it are endless. In future posts I’ll discuss how we can connect an API to our Lambda via API Gateway and how we can store our data in the NoSQL database DynamoDB.

Discussion: what about the price?

With Lambda, the first million requests each month are always free. After that you pay $0.20 per 1M requests and $0.0000166667 for every GB-second – read more here. Lambda is usually used together with other AWS services that might also incur cost, such as CloudWatch Logs, which we touched upon in this post. CloudWatch Logs also offers a free tier of 5GB of log data ingestion and 5GB of log data archive, which means nothing we did in this post will result in any cost, even if you do no cleanup.
Read more about the economics of cloud here “Cloud is expensive”
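As a back-of-the-envelope illustration of the prices quoted above (the volumes are made up, and the separate free tier for compute GB-seconds is ignored for simplicity):

```typescript
// Hypothetical workload: 3 million invocations/month, 128 MB memory, 200 ms per run.
const requestsPerMonth = 3_000_000;
const billedRequests = requestsPerMonth - 1_000_000;              // first 1M requests are free
const requestCost = (billedRequests / 1_000_000) * 0.20;          // $0.40
const gbSeconds = requestsPerMonth * (128 / 1024) * 0.2;          // 75,000 GB-seconds
const computeCost = gbSeconds * 0.0000166667;                     // ≈ $1.25
console.log(`≈ $${(requestCost + computeCost).toFixed(2)} per month`); // ≈ $1.65
```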

I don’t want to use the inline code editor!

Great, me neither. As a first step I suggest either looking into exporting your code to a zip file and uploading it to the Lambda, or exploring the Serverless Framework, a tool that makes it easy to deploy serverless applications such as Lambdas!

You’re welcome to contact me if you have any questions.

Mail: filip.pettersson@tiqqe.com
LinkedIn: Filip Pettersson