AWS

AWS Summit Online 2020

June 17, 2020 – You are well on your way to the best day of the year for cloud! Join AWS Summit Online and deepen your knowledge at this free, virtual event for technologists at every level. There is something for everybody.

Hear about the latest trends, customers and partners in EMEA, followed by the opening keynote with Werner Vogels, CTO of Amazon.com. The developers at TIQQE never miss Werner's keynotes.

After the keynote, dive deep into 55 breakout sessions across 11 tracks, including getting started, building advanced architectures, app development, DevOps and more. Tune in live to network with fellow technologists, have your questions answered in real time by AWS experts and claim your certificate of attendance.

So, whether you are just getting started on the cloud or are an advanced user, come and learn something new at the AWS Summit Online.

Want to get started with AWS? At TIQQE, we have loads of experience and are an AWS Advanced Partner. Contact us, we're here to help.


AWS

Continuous delivery in AWS – tools overview

Continuous Delivery (CD) is an umbrella term for a collection of practices that enable an organisation to deliver software with both speed and quality. It is a complex topic, and in this article we will focus on one aspect: selecting tools for CD pipelines when deploying software to AWS.

Before we dive into the various tooling options for continuous delivery, though, let us define some scope and terminology, and also talk a bit about why we would bother with this in the first place.

Scope

Our scope for this overview is delivering solutions that run in AWS. Source code lives in a version control system, and we assume that it is a hosted solution (not on-premise) and that it is Git, currently the most common version control system in use. Some services mentioned may also work with other version control systems, such as Subversion.

A continuous delivery tool can either be a software-as-a-service (SaaS) solution, or something we manage ourselves on servers – in the cloud or on-premise. Our scope here is only the SaaS solutions.

If you have a version control system hosted on-premise or on your own servers in some cloud, that typically works with continuous delivery software you can host yourself – either on-premise or in the cloud. We will not cover those options here.

Terminology

First of all, there are a few terms and concepts to look at:

  • Pipeline – the process that starts with a code change and ends with the release and deployment of the updated software. In some cases this is a mostly, or even completely, automated process (a minimal sketch follows this list).
  • Continuous integration (CI) – the first part of the pipeline, in which developers can make code updates in a consistent and safe way with fast feedback loops. The idea is to integrate often and quickly, so any errors in changes can be caught and corrected early. Integrating often means each change is small, which makes it easier to pinpoint and correct any errors. For CI to work well, it needs a version control system and a good suite of automated tests that can be executed whenever someone commits updates to the version control system.
  • Continuous Delivery (CD) – the whole process from code changes and CI to the point where a software release is ready for deployment. This includes everything in the continuous integration part plus any other steps needed to make the software ready for release. Ideally this is also fully automated, although it may include manual steps. Again, the goal is that this process is quick, consistent and safe, so that a deployment is possible "at the click of a button" or through a similarly simple procedure. The deployment itself, however, is not part of continuous delivery.
  • Continuous Deployment (CD) – unfortunately the abbreviation is the same as for continuous delivery, but it is not the same thing. This is continuous delivery plus automated deployment. In practice this is applicable for some solutions but not all. With serverless solutions it is generally easier to do technically, but in many cases it is not a technology decision but a business decision.
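
To make the terminology concrete, here is a minimal, conceptual sketch of the stages above as a plain script. It is illustrative only – the make targets are placeholders, and any real pipeline tool (AWS CodePipeline, GitHub Actions, GitLab CI/CD, Bitbucket Pipelines) expresses the same idea in its own configuration format.

    import subprocess
    import sys

    # Each stage maps to a concept from the list above; the commands are
    # hypothetical make targets, not a prescribed convention.
    STAGES = [
        ("continuous integration", ["make", "test"]),    # fast feedback on every commit
        ("continuous delivery", ["make", "package"]),    # produce a release-ready artifact
        ("continuous deployment", ["make", "deploy"]),   # optional: automated deployment
    ]

    for name, command in STAGES:
        print(f"Running stage: {name}")
        if subprocess.run(command).returncode != 0:
            # Fail fast - a quick, clear feedback loop is the whole point.
            sys.exit(f"Stage '{name}' failed")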

Why continuous delivery?

Speed and safety for the business/organisation – that is essentially what it boils down to: being able to adapt and change based on the market and shifting business requirements, and doing so in a way that minimises disruption to the business.

Depending on which stakeholders you look at, there are typically different aspects of this process that are of interest:

  • Business people's interests are in the speed and predictability of delivering changed business requirements, and in services continuing to work well for customers.
  • Operations people's interests are in the safety, simplicity and predictability of updates, and in avoiding disruptions.
  • Developers' interests are in fast feedback on the work they do and in being able to make changes without fear of messing things up for themselves and their colleagues – plus being free to focus on solving problems and building useful or cool solutions.

Reaching continuous delivery Nirvana is a long process, and the world of IT is messy to various degrees – we are never done. A sane choice of tooling for continuous delivery can at least get us part of the way.

Continuous delivery tools

If we want a continuous delivery tool that targets AWS, uses Git and runs as a SaaS solution, we have a few categories:

  • Services provided by AWS
  • Services provided by the managed version control system solution
  • Third party continuous delivery SaaS tools

Services provided by AWS

AWS has a number of services related to continuous delivery, all with "Code" in their names. These include:

  • AWS CodeCommit
  • AWS CodePipeline
  • AWS CodeBuild
  • AWS CodeDeploy
  • AWS CodeGuru
  • AWS CodeStar

A key advantage of using AWS services is that credentials and access are handled by the regular identity and access management (IAM) in AWS, and encryption by the key management service (KMS). No AWS secrets have to be stored outside of AWS, since it all lives in AWS – assuming your CI/CD workflow goes all-in on AWS, or at least to a large extent.
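
As a small illustration of this point, code running inside an AWS CI/CD service can use the temporary credentials of its service role, with no stored secrets. The boto3 calls below are standard, but the pipeline name is a hypothetical placeholder.

    import boto3

    # Inside CodeBuild (or any AWS compute with a role attached), boto3 picks
    # up temporary credentials from IAM automatically - nothing to store.
    codepipeline = boto3.client("codepipeline")

    # Trigger a run of a pipeline; the pipeline name is a placeholder.
    response = codepipeline.start_pipeline_execution(name="my-delivery-pipeline")
    print("Started execution:", response["pipelineExecutionId"])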

A downside of these AWS services is that they are not the most user-friendly, and there are quite a number of them. They can be used together to set up elaborate CI/CD workflows, but it requires a fair amount of effort to do so. CodeStar was an attempt to provide an end-to-end development workflow with CI/CD. I like the idea behind CodeStar, and for some use cases it may be just fine, but it has not received much love from AWS since it was launched.

You do not necessarily need all of these services to set up a CI/CD workflow – in its simplest form you just need a supported source code repository (CodeCommit/GitHub/Bitbucket) and CodeBuild, as in the sketch below. Things can quickly get more complicated, though, in particular once the number of repositories, developers and/or AWS accounts involved starts to grow. One project that tries to alleviate that pain is the AWS Deployment Framework.
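
As a sketch of that simplest form, the following boto3 snippet wires a GitHub repository to a CodeBuild project. The project name, repository URL and role ARN are placeholders, and the build instructions themselves are assumed to live in a buildspec.yml in the repository.

    import boto3

    codebuild = boto3.client("codebuild")

    # All names, URLs and ARNs below are placeholders.
    codebuild.create_project(
        name="simple-ci",
        source={
            "type": "GITHUB",
            "location": "https://github.com/example/simple-ci.git",
            # Build steps are read from buildspec.yml in the repository.
        },
        artifacts={"type": "NO_ARTIFACTS"},
        environment={
            "type": "LINUX_CONTAINER",
            "image": "aws/codebuild/standard:4.0",
            "computeType": "BUILD_GENERAL1_SMALL",
        },
        serviceRole="arn:aws:iam::111111111111:role/simple-ci-codebuild-role",
    )

    # Kick off a build of the default branch.
    codebuild.start_build(projectName="simple-ci")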

Services provided by the managed version control system solution

Three of the more prominent version control system hosting services are GitHub, GitLab and Bitbucket. They all have CI/CD services bundled with their hosted offerings. Both Bitbucket and GitLab also provide on-premise/self-hosted versions of their source code repository software, as well as continuous delivery tooling and other tools for the software lifecycle. The on-premise continuous delivery tooling for Bitbucket is Bamboo, while the hosted (cloud) version is Bitbucket Pipelines; for GitLab the software is the same for both hosted and on-premise. We only cover the cloud options here.

On the surface, the continuous delivery tooling is similar for all three: a file in each repository describes the CI/CD workflow(s) for that particular repository. They are all based on running Docker containers to execute steps in the workflow and can handle multiple branches and pipelines. They all have some kind of organisational and team handling capabilities.

Beyond the continuous delivery basics, they start to deviate in their capabilities and priorities. Bitbucket, being an Atlassian product/service, focuses on good integration with Jira in particular, but also with some third-party solutions. GitLab prides itself on providing a one-stop solution for the whole software lifecycle – which features are enabled depends on which edition of the software is used. GitHub, perhaps the most well-known source code repository provider, has a well-established ecosystem for integrating various third-party and community tools into its toolchain – more so than the other providers.

GitHub and GitLab have the concept of runners, which let you set up your own machines to run tasks in the pipelines.

So if you are already using other Atlassian products, Bitbucket and Bitbucket Pipelines might be a good fit. If you want an all-in-one solution, GitLab can suit you well. For a best-of-breed approach where you pick different components, GitHub is likely a good fit.

Third party continuous delivery SaaS tools

There are many vendors that provide hosted continuous delivery tooling. Some of them have been in this space for a reasonably long time, since before the managed version control system providers added their own continuous delivery tooling.

In this segment there may be providers that support specific use cases better, or that make it easy to set up faster and/or parallel pipelines. They also tend to support multiple managed version control system solutions and multiple cloud provider targets, and some of them provide self-hosted/on-premise versions of their software as well. This category of providers may therefore be interesting for integrating with a diverse portfolio of existing solutions.

Some of the more popular SaaS providers in this space include CircleCI, which we will return to in the pricing discussion below.

Pricing models

Regardless of category, pretty much all the providers mentioned here offer some kind of free tier and then one or more on-demand paid tiers.

For example, GitHub Actions, Bitbucket Pipelines, GitLab CI/CD and AWS CodeBuild all provide a number of free build minutes per month. This is, however, limited to certain machine sizes for executing the tasks in the pipelines.

A simple pricing model of just counting build minutes is easy to grasp, but it does not allow flexibility in machine sizes, since a larger machine requires more capacity from the provider. In AWS's case with CodeBuild, you can select from a number of different machine sizes – but anything larger than the smaller machines is paid from the first minute. A worked example of this kind of calculation follows below.
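
To see the shape of such a calculation, here is a small worked example. The prices and the free tier are made-up placeholders, not any provider's actual price list – the point is that free minutes apply only to the smallest machine, while larger machines are billed from the first minute.

    # Illustrative numbers only - check the provider's current price list.
    FREE_MINUTES_PER_MONTH = 100      # free tier, smallest machine size only
    PRICE_PER_MINUTE_SMALL = 0.005    # USD per build minute, smallest machine
    PRICE_PER_MINUTE_LARGE = 0.020    # USD per build minute, larger machine

    def monthly_cost(minutes_small: int, minutes_large: int) -> float:
        billable_small = max(0, minutes_small - FREE_MINUTES_PER_MONTH)
        # Larger machines have no free tier: billed from the first minute.
        return (billable_small * PRICE_PER_MINUTE_SMALL
                + minutes_large * PRICE_PER_MINUTE_LARGE)

    # 1500 small-machine minutes and 300 large-machine minutes in a month:
    print(monthly_cost(1500, 300))  # (1500 - 100) * 0.005 + 300 * 0.02 = 13.0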

The third-party continuous delivery providers have slightly different free tier models, I believe partly to distinguish themselves from the offerings of the managed version control system providers. For example, CircleCI provides a number of free "credits" per week; depending on machine capacity and features, pipeline executions cost different amounts of credits.

The number of parallel pipeline executions is typically also a factor for all providers – free tiers tend to allow only one pipeline execution at a time, while more parallelism costs more.

Many pricing models also place a restriction on the number of users, and there may be a price tag attached to each active user as well. All in all, you pay for compute capacity, for saving time on pipeline execution and for having more people use the continuous delivery pipelines.

With AWS, where a number of services fulfil various parts of the continuous delivery solution, it may initially be harder to grasp what things will actually cost. The machine sizes may not be identical across the different services either, so a build minute with one service is not necessarily the same as a build minute with another provider.

Trying to calculate exactly what the continuous delivery solution will cost may be counterproductive at an early stage, though. Look at the features you need and their importance first; consider pricing after that.

End notes

Selecting continuous delivery tooling can be a complex topic. The bottom line is that it is intended to deliver software faster, more securely and more consistently, with fewer problems – and with good insight into the workflow for different stakeholders. Do not lose sight of that goal and of your actual requirements beyond the simple cases; most alternatives will be fine for the very simple cases. Do not be afraid to try some of them out, but time-box the effort.

If you wish to discuss anything of the above, please feel free to contact me at erik.lundevall@tiqqe.com

AWS

Where do I start with AWS?

So our organization would like to start using, or migrate to, the AWS cloud – where do we start? Creating a safe and effective foundation, whether for a migration or as a starting point for using the AWS cloud, requires substantial cloud expertise and can be a complex process.

We need to design a scalable infrastructure and configure a base environment in which multiple accounts are created for accessing different resources. Due to the complexity of migrating a large-scale organization, this can lead to several issues, such as diverging design architectures, data security concerns, lack of automation and so on.

Organizations attempt to follow a jungle of defined "best practices" before they can spin up resources safely. Do we have a consensus on what the "best practices" really are? Are they up to date, given that updates are released continuously? These are thoughts that might pop into your head, and they should be taken seriously.

There are, however, several tools and strategies available to help you with these concerns and challenges. In this blog post I will introduce you to some of them and share my thoughts on them.

AWS Landing Zone

Landing Zone is the result of a lot of time and effort spent defining recommended best practices for a multi-account organization, and codifying that architecture into a solution that is deployable within AWS. It succeeds in creating this baseline infrastructure, but is still fairly complex.

It requires quite a lot of effort to make some of the modular components useful, which can be a downside from an administrative standpoint. Organizations that are used to the AWS cloud and already manage everything with IaC and CloudFormation will spend less time on the complexity and management of the solution, but it will most likely be troublesome for new AWS customers.

Many organizations have a separation of duties between admin and developer teams. Some developers may be quite familiar with AWS, but a lot of the components usually fall under the domain of the admin team, which requires them to have this knowledge as well.

Account creation is not as smooth as it could be, due to accounts being created through Service Catalog. It can also be troublesome to investigate issues related to the landing zone, which, as previously mentioned, demands a lot of expertise in the area.

AWS Control Tower

AWS Control Tower has a lot in common with the AWS Landing Zone solution. It could be described as the "managed AWS Landing Zone": a service provided by AWS that is equivalent to AWS Landing Zone, but managed and offered as a service.

So does this eliminate the time-consuming troubleshooting and complexity of AWS Landing Zone, since it is provided as a service? Well, yes, in some ways. It is much easier to manage and provides you with a lot of great guardrails and best practices, all deployable in an effortless way. There are also integrations with other AWS services, which are neat.

So is Control Tower "the shit"? Unfortunately, there is a but…

While Control Tower offers great features it does lack flexibility and the ability to customize in the way many organizations need.

One of the great features of Control Tower is the guardrails you are given – but you cannot create your own SCPs and have Control Tower govern them. Sure, you can create your own SCPs directly in AWS Organizations (as in the sketch below), but Control Tower will not manage them.
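
For reference, creating and attaching your own SCP through AWS Organizations might look like the boto3 sketch below. This is a hedged illustration: the policy content, policy name and OU id are hypothetical placeholders, and, as noted above, an SCP created this way is not governed by Control Tower.

    import json
    import boto3

    organizations = boto3.client("organizations")

    # Hypothetical guardrail: deny member accounts leaving the organization.
    scp_document = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "organizations:LeaveOrganization",
            "Resource": "*",
        }],
    }

    policy = organizations.create_policy(
        Name="deny-leave-organization",  # placeholder name
        Description="Prevent member accounts from leaving the organization",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp_document),
    )

    # Attach the SCP to an OU; the OU id below is a placeholder.
    organizations.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"],
        TargetId="ou-xxxx-xxxxxxxx",
    )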

Creating new OUs nested under the root OU (i.e. building a tree-like hierarchy) is not currently possible either, and things can get complex if you cause the Control Tower configuration to "drift".

AWS Deployment Framework (ADF)

This is a framework built by AWS together with their enterprise customers. It is gaining a lot of traction and is popular within AWS ProServe.

ADF certainly provides a lot of great features, and it offers things that the other solutions don't. It is as if you had been walking around thinking pizza didn't need cheese – ADF would be that cheese, making your pizza so much tastier! In other words, ADF might be the piece of the puzzle your organization is lacking.

So what is ADF?

ADF is an open-source, flexible framework that helps you manage and deploy resources across multiple accounts and regions. It is also extensible, and allows for staged, parallel, multi-account, cross-region deployments of applications or resources via the structure defined in AWS Organizations.

ADF lets you make account creation, guardrails and whatever other foundational resources you find necessary deployable using a CI/CD approach. It takes advantage of the AWS CI/CD tools to alleviate the heavy lifting and management compared to a traditional CI/CD setup.

Sounds like a lot of work? Well, yes – it requires quite a bit of IaC and can be troublesome for users not accustomed to it. It does not give you any guardrails out of the box; it is pretty much a platform that you can customize in any way you would like. ADF takes some knowledge to master, but it has a lot of benefits and is worth pursuing in my opinion. You do not lock yourself into this solution either, as it is mostly CloudFormation doing the work.

As previously mentioned, this is an open-source framework developed by AWS and its enterprise customers, but in my opinion it should be a service provided by AWS – and maybe some day it will be.

Conclusion

So what is the verdict? Which option is the best one?

I would say that no single option can fulfil all your requirements. Enterprises usually want to customize their infrastructure foundation to fit their needs, internal processes and so on. If you are fine with a rigid solution that provides out-of-the-box guardrails and nothing more, I would choose Control Tower.

However, if you would like the good stuff that Control Tower offers – the guardrails and the fact that it is a service provided by AWS – plus the ability to customize your foundation, I would choose a combination of Control Tower and ADF.

This leaves you with the best-practice guardrails provided by AWS through Control Tower, and the ability to customize the foundation to fit your needs using ADF. ADF is also great at managing pipelines at scale and provides transparency throughout the organization. Pipelines are defined in a file called the deployment map, which can be modified to fit your needs. AWS Landing Zone is a customizable version of Control Tower, but it lacks a lot of the features that ADF provides.

ADF also ties the AWS CI/CD tools together in a great way, filling in functionality they lack on their own. Developers in most organizations have no problem creating new repositories and pushing code to them, but setting up continuous delivery for that code is usually not something they want to handle themselves. ADF makes this process much easier: pipelines are defined in the deployment map (sketched below), and developers can then focus on developing code and keeping it up to date in the repositories they define.
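
To give a feel for the deployment map, here is a sketch that loads a map loosely modelled on the examples in the ADF documentation. The pipeline name, account id and target paths are placeholders, and the exact schema depends on the ADF version you use – treat this as an illustration of the idea, not a definitive format.

    import yaml  # pip install pyyaml

    # A hypothetical deployment map: one pipeline, deploying to two OUs.
    deployment_map = yaml.safe_load("""
    pipelines:
      - name: sample-vpc                       # placeholder pipeline name
        default_providers:
          source:
            provider: codecommit
            properties:
              account_id: "111111111111"       # placeholder account id
        targets:
          - /business-unit/testing
          - /business-unit/production
    """)

    # Developers mostly touch this file; ADF turns each entry into a pipeline.
    for pipeline in deployment_map["pipelines"]:
        print(pipeline["name"], "->", ", ".join(pipeline["targets"]))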

If you have any questions or just want to get in contact with me or any of my colleagues, I’m reachable on any of the following channels.

Mail: christoffer.pozeus@tiqqe.com
LinkedIn: Christoffer Pozeus


AWS

Something worth bragging about!

Last December, TIQQE was awarded AWS Advanced Partner status for the second time. Second time? Nothing new there – so what is there to brag about? We brag because holding the AWS Advanced Partner status is of strategic importance for us, to be able to support all of you in the best way possible. And, of course, just to be able to brag about the achievement. Why, you may ask yourself? Allow me to explain.

We are a partner not only to AWS but also to our customers, and we want to be relevant as an AWS expert partner, not just a partner providing resources. A strategic and important stepping stone on this journey is the Advanced Partner status, which opens up several competence tracks inside AWS for a partner company like TIQQE as we seek to deepen our AWS knowledge for the benefit of our customers. Without the Advanced Partner status, these tracks remain closed to an AWS partner as well as to their customers.

What does the Advanced Partner label say about TIQQE? Advanced Partner status is not something you get without a track record: it shows that a partner company has a proven record of providing business value on the AWS platform for its customers.

To be awarded AWS Advanced Partner status, a partner company needs to prove to AWS that they have:

  • Documented, public testimonies from customers about the kind of business value the partner has contributed.
  • Several named individuals who have reached a certain level of technical and business certifications on AWS.
  • Good, documented relationships with AWS customers.
  • A drive to continuously improve their knowledge of the AWS platform.
  • The capability to develop the business value of their customers' AWS investments.

And we need to do this over and over again, and we cannot do it without asking our customers to contribute. Therefore we continuously develop our partnerships with our customers, to motivate them to help us keep the Advanced Partner status with AWS. We think this is a win-win-win situation.

I dare to claim that if you are truly looking for an AWS partner, you should not accept anything less than one that holds AWS Advanced Partner status. So why not select one that brags about it? You are welcome to contact us!

AWS

AWS Advanced Partner

We’re pleased to announce that TIQQE has been approved as AWS Advanced Partner status for 2020 in AWS annual partner review.

We continue to see strong growth and demand for AWS experts in the market. As specialization is a key objective for TIQQE, certifications and moving up the AWS qualification ladder are important evidence of our commitment and our skills for customers looking for deep technical expertise in their digital journey on AWS.

TIQQE has invested significantly in building a strong AWS practice and is committed to building a leading cloud practice. We have extensive experience in deploying customer solutions on AWS, with a strong bench of trained and certified technical experts.

Jacob Welsh, CEO

The partner status at AWS indicates our ability to help customers of all types and sizes design, architect, build, migrate and manage their workloads and applications on AWS, accelerating their journey to the cloud. AWS Advanced Partner status requires a high certification level, a proven ability to identify AWS opportunities, a large number of approved customer satisfaction responses and official customer references.

If you’re looking for the best AWS experts in the market, you can safely turn to TIQQE for advice and good ideas. Our trademark is execution, we get stuff done according to good practices and our proven track record.