AWS

Simply: AWS DynamoDB

My “Simply AWS” series is aimed at absolute beginners who want to quickly get started with the wonderful world of AWS. This one is about DynamoDB.

What is DynamoDB?

DynamoDB is a NoSQL, fully managed, key-value database within AWS.

Why should I use it?

When it comes to data storage, selecting which technology to use is always a big decision. DynamoDB, like any other technology, is not a silver bullet, but it does offer a lot of positives if you need document-based key-value storage.

Benefits of DynamoDB

How do I start?

First, you’re gonna need an AWS account; follow this guide.

Setting up our first DynamoDB database

If you feel like it, you can set your region in the top right corner of the AWS console. It should default to us-east-1, but you can select something closer to you; read more about regions here.

From the AWS console, head to Services and search for DynamoDB, select the first option.

The first time you open up DynamoDB you should see a blue button with the text Create Table; click it.

Now you’re presented with some options for creating your table. Enter myFirstTable (this can be anything) as the Table name.

Primary key

A key in a database is used to identify items in the table, and as such it must always be unique for every item. In DynamoDB the key is built up of a Partition key and an optional Sort key:

  • Partition key: As the tooltip in the AWS console describes, the Partition key is used to partition data across hosts. Because of that, best practice is to use an attribute with a wide range of values. For now we don’t need to worry much about this; the main thing to take away is that if the Partition key is used alone, it must be unique.
  • Sort key: If the optional Sort key is included, the Partition key does not have to be unique (but the combination of Partition key and Sort key does). It allows us to sort within a partition.

Let’s continue. For this example I’m gonna say I’m creating something like a library system, so I’ll put Author as the Partition key and BookTitle as the Sort key.
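
To make the composite key concrete, here is a hypothetical illustration (these are not items you need to enter anywhere yet). Both of the following can exist in the same table because, even though they share a Partition key, their Sort keys differ:

{ "Author": "Douglas Adams", "BookTitle": "The Hitchhiker's Guide to the Galaxy" }
{ "Author": "Douglas Adams", "BookTitle": "The Restaurant at the End of the Universe" }

Putting a third item with exactly the same Author and BookTitle would overwrite the first one rather than create a duplicate.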

Note that this is just one of many ways you could set up this type of table, and choosing a good primary key is arguably one of the most important decisions when creating a DynamoDB table. What’s good about AWS is that we can create a table, try it out, change our minds and just create a new one with ease.

Next up are table settings; these are things like secondary indexes, provisioned capacity, autoscaling, encryption and such. It’s a good idea to eventually get a bit comfortable with these options, and I highly recommend looking into the on-demand read/write capacity mode, but as we just want to get going now, the default settings are fine and will not cost you anything for what we are doing today.
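
As a side note, everything we are clicking through in the console can also be done from code. Purely as a sketch, this is roughly what creating the same table with the on-demand capacity mode looks like with the Node.js aws-sdk; the region is an assumption, so adjust it to whatever you picked above.

const AWS = require('aws-sdk')
// Low-level DynamoDB client for table administration
// (the region is an assumption, use the one you selected in the console)
const dynamodb = new AWS.DynamoDB({ region: 'us-east-1' })

const createTable = async () => {
    const params = {
        TableName: 'myFirstTable',
        AttributeDefinitions: [
            { AttributeName: 'Author', AttributeType: 'S' },
            { AttributeName: 'BookTitle', AttributeType: 'S' }
        ],
        KeySchema: [
            { AttributeName: 'Author', KeyType: 'HASH' },    // Partition key
            { AttributeName: 'BookTitle', KeyType: 'RANGE' } // Sort key
        ],
        BillingMode: 'PAY_PER_REQUEST' // on-demand read/write capacity
    };
    return dynamodb.createTable(params).promise();
}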

Hit create and wait for your table to be created!

Now you should be taken to the tables view of DynamoDB, and your newly created table should be selected. This can be a bit daunting as there are a lot of options and information, but let’s head over to the Items tab.

From here we could create an item directly from the console (feel free to try it out if you want), but I think we can do one better and set up a lambda for interacting with the table.

Creating our first item

If you’ve never created an AWS lambda before, I have written a similar guide to this one on that topic; you can find it here.

Create a lambda called DynamoDBInteracter

Make sure to select the option to create a new role from a template, and search for the template Simple microservice permissions (this will allow us to perform actions against DynamoDB).
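
If you are curious what that template actually grants, it boils down to an IAM policy roughly along these lines. Treat this as an illustration rather than the exact policy; the real template’s list of actions and its resource scoping may differ.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:GetItem",
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
                "dynamodb:DeleteItem",
                "dynamodb:Scan",
                "dynamodb:Query"
            ],
            "Resource": "arn:aws:dynamodb:*:*:table/myFirstTable"
        }
    ]
}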

After creating the lambda we can edit it directly in the AWS console. Copy and paste this code:

const AWS = require('aws-sdk')
// The DocumentClient lets us work with plain JavaScript objects
// instead of the low-level DynamoDB attribute format
const client = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
    try {
        console.log(event.action)
        console.log(event.options)

        let response;
        switch (event.action) {
            case 'put':
                response = await putItem(event.options);
                break;
            case 'get':
                response = await getItem(event.options);
                break;
            default:
                throw new Error(`Unknown action: ${event.action}`);
        }
        return response;
    } catch (e) {
        console.error(e)
        return e.message || e
    }
};

// Writes (or overwrites) a single item in the table
let putItem = async (options) => {
    const params = {
        TableName: 'myFirstTable',
        Item: {
            Author: options.author,
            BookTitle: options.bookTitle,
            genre: options.genre
        }
    };

    return client.put(params).promise();
}

// Reads a single item by its full primary key (Partition key + Sort key)
let getItem = async (options) => {
    const params = {
        TableName: 'myFirstTable',
        Key: {
            Author: options.author,
            BookTitle: options.bookTitle
        }
    };

    return client.get(params).promise();
}

Hit Save, then create a new test event like this:

{
    "action": "put",
    "options": {
        "author": "Douglas Adams",
        "bookTitle": "The Hitchhiker's Guide to the Galaxy",
        "genre": "Sci-fi"
    }
}

and run that test event.
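
If you prefer the command line over the console’s test button, the same invocation can be done with the AWS CLI. Here the test event is assumed to be saved in a local file called event.json, response.json is just an example output file name, and the --cli-binary-format flag is only needed on AWS CLI v2.

aws lambda invoke \
    --function-name DynamoDBInteracter \
    --cli-binary-format raw-in-base64-out \
    --payload file://event.json \
    response.json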

Go back to DynamoDB and the Items tab and you should see your newly created Item!

Notice that we never had to define the genre attribute anywhere beforehand. That is because DynamoDB is NoSQL: it follows no schema, and any field and value can be added to any item regardless of the other items’ composition, as long as the primary key is valid.
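
As a hypothetical illustration of what that means, these two items could sit side by side in the same table even though they share no attributes beyond the primary key:

{ "Author": "Douglas Adams", "BookTitle": "Mostly Harmless", "genre": "Sci-fi" }
{ "Author": "Frank Herbert", "BookTitle": "Dune", "firstPublished": 1965, "inStock": true }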

Retrieving our item

Now let’s try to get that item. Create another test event like this:

{
    "action": "get",
    "options": {
         "author": "Douglas Adams",
        "bookTitle": "The Hitchhiker's Guide to the Galaxy"
    }
}

and run it.

You can expand the execution results and you should see your response with the full data of the item.
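
The get call from our lambda returns the stored attributes wrapped in an Item property, so the execution results should contain something along these lines:

{
    "Item": {
        "Author": "Douglas Adams",
        "BookTitle": "The Hitchhiker's Guide to the Galaxy",
        "genre": "Sci-fi"
    }
}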

Congratulations!

You’ve just configured your first DynamoDB table and performed calls against it with a lambda, but don’t stop here; the possibilities with just these two AWS services are endless. My next guide in this series will cover API Gateway and how we can connect an API to our lambda that then communicates with our database table, so stay tuned!

What’s next?

As I’m sure you understand, we’ve just begun to scratch the surface of what DynamoDB has to offer. My goal with this series of guides is to get your foot in the door with some AWS services and show that, although powerful and vast, they are still easy to get started with. To see more of the calls that can be made with the DynamoDB API (such as more complicated queries, updates and scans as well as batch writes and reads), check this out, and feel free to edit the code and play around.
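
As a small taste of those other calls, here is a sketch of a query that would list every book stored for a given author. It reuses the same DocumentClient as before; note that the lambda role needs the dynamodb:Query permission for this, which may or may not be included in the template role we picked earlier.

// List all items that share the same Partition key (Author)
let queryByAuthor = async (author) => {
    const params = {
        TableName: 'myFirstTable',
        KeyConditionExpression: 'Author = :author',
        ExpressionAttributeValues: {
            ':author': author
        }
    };

    return client.query(params).promise();
}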

I would also like to recommend this guide if you want an even more in-depth look at DynamoDB; it covers most of what I have here but in more detail, and it also goes over some of the API actions mentioned above.

Contact me

Questions? Thoughts? Feedback?
Twitter: @tqfipe
Linkedin: Filip Pettersson

Webinar

Hardware refresh webinar

Join our hardware refresh webinar on the 29th of September between 08:30 and 09:15. Learn how to reduce your infrastructure cost by 30-40% by moving to the cloud.

With a depreciation cycle of 36 months, you’re looking at a 33% replacement of servers and storage in your datacenter this year. Now is a good time to challenge the default decision to replace those servers with new ones and consider cloud instead. Here are a few reasons why:

  • You don’t have to make the capital expenditure upfront, which will have a positive impact on your cash flow and your balance sheet.
  • You will lower your cost by an average of 30-40%.
  • You don’t have to buy capacity to last for 36 months with low utilization the first couple of years.
  • You pay for what you use, and you can scale capacity up or down by pressing a button.
  • You are making the inevitable move to cloud sooner rather than later.

Join our webinar on the 29th of September from 08:30 to 09:15. We will provide you with the tools to assess cloud vs. on-prem workloads from a financial, security and technical perspective. The webinar will be hosted in Swedish with English slides.

You can also read our blog post about a business case example of a company with 300 servers and 50TB of storage.

Please enroll here.

COVID-19

Post corona thoughts

The corona pandemic has shown that being able to adjust cost according to market demand is a core capability for a company. Serverless computing is the solution to the problem.

It’s of course too early to claim that we are through the corona pandemic and that things will go back to how they were before. We really doubt that it ever will go back to the way it was. In the past few months alone we have seen a huge change in how we work: running online meetings, a huge increase in the number of digital events, how we collaborate, and so on.

What we saw in the market at the start of Covid-19 was of course cost cutting, freezing costs and postponing different initiatives and projects. Unfortunately, Covid-19 might be with us for a while, so is this a long-term solution? Just looking back 6-7 months, the market was entirely different than it is right now. We have also seen companies who have been booming during this period.

At TIQQE we reach out to 150 companies each month to get an understanding of where the market is and where it’s heading.

If we were to highlight two interesting areas, they would be the following:

High demand, lack of capacity causing downtime and lost business

Among the companies who are booming at the moment, those with high demand struggle with the amount of load they need to handle. IT has challenges handling the load, which in turn causes downtime and of course lost business. Is the answer then to scale up the infrastructure during this period?

Low demand, over capacity and cutting back costs

When speaking with customers who in one way or another have entered a recession, their challenge is unused capacity. When looking at cost cuts it makes sense to cut it back, but at the same time, how do they scale up once demand picks up again?

The benefit with serverless

One of the main benefits of serverless is exactly that: you have scalable, flexible IT which is adaptable over time, whether the market is in recession or booming.

In uncertain times it’s important to take control of what you can, define your prediction of what the future might look like, and make sure not to make decisions which could be a win in the short term but a loss in the long term.

At TIQQE this is exactly what we help our customers with: we help you find the right solution, one which is scalable, flexible and adaptable to change no matter what the market situation is for you.

Please feel free to reach out to us if you have questions or need to scale your business to address the higher or lower demand.

Event

Kodayoga is the new black

In March a young lady, Yasnia, reached out to us. She told us about an initiative that she, together with Caroline and Frida, wanted to create an event around, with the mission to inspire and attract more women to our tech industry through kodayoga: the unbeatable combination of developing code and practicing yoga.

Of course we wanted to be a part of this. So we can proudly present that TIQQE is not only Yasnia’s new workplace from the 1st of September, we are also one of the sponsors of this upcoming event, taking place on the 3rd of October at Creative House in Örebro.

After the event we invite women to our office at TIQQE to mingle, meet like-minded people and hopefully get some inspiration. We will provide lighter snacks and drinks, but due to Corona we only have a limited number of seats.

Registration will open on the 14th of September, so stay tuned on their website or subscribe to #kodayoga on LinkedIn for updates.

People

Are you our next Cloud Architect?

We’re looking for a Cloud Architect to our Gothenburg office. If you know the AWS tech stack and want to work in an inspiring company with great potential, this is for you.

We are growing our business on Sweden’s west coast, and even though we in many cases work distributed, we also see the importance of being present in person to interact with our growing number of customers. Therefore we are looking for a Serverless Cloud Architect who can handle interaction with developers and DevOps teams, and who has deep knowledge of AWS infrastructure services. You will be working together with passionate people, and both develop and test infrastructure.

We believe that you already:

  • Have your home in Gothenburg or its surroundings
  • Have experience from Cloud solution architecture in general
  • Have specific knowledge in AWS Serverless Architecture and development
  • Have one or more AWS certifications
  • Are analytical and solution-oriented
  • Have a genuine interest in customers’ businesses and challenges

We wish that you:

  • Have the urge to learn more
  • Put teams over individuals
  • Are professionally driven by serverless technology and become the best version of yourself when surrounded by other “techies”
  • Want to develop your soft skills as well as your technical skills

What to expect from us:

  • A burning love for all things serverless
  • A place where people matter
  • Courage to say “we were wrong”
  • Technical excellence
  • A startup company with great visions
  • Distributed teams
  • A place where we don’t always do what the customer tells us, but instead always do what is best for the customer

Please get in contact with us to learn more

Sofia Sundqvist

Chief Operating Officer

sofia.sundqvist@tiqqe.com

Alicia Hed

Recruitment Assistant

alicia.hed@tiqqe.com

Jobs

We’re hiring in Gothenburg!

At TIQQE, we’re proud of growing full-stack developers who have a passion for the serverless tech stack and architecture. That doesn’t mean that you have to know everything now and be a full-blown tech lead; it means that you have the possibility to become one if you join us.


We’re taking our presence in Gothenburg to the next level and we want to grow our office with nice and passionate techies. So if you have some experience of AWS, love to write code and want to join a company where:

  • humans come first
  • you will work in teams
  • you will have a tech-mentor & a designated buddy

If you’re interested in joining our TIQQE family, please get in touch.

Sofia Sundqvist

Chief Operating Officer

sofia.sundqvist@tiqqe.com

Alicia Hed

Recruitment Assistant

alicia.hed@tiqqe.com

AWS

AWS Summit Online 2020

June 17, 2020 – You are well on your way to the best day of the year for cloud! Join the AWS Summit Online and deepen your knowledge with this free, virtual event if you are a technologist at any level. There is something for everybody.

Hear about the latest trends, customers and partners in EMEA, followed by the opening keynote with Werner Vogels, CTO, Amazon.com. All developers at TIQQE always attend Werner’s keynotes.

After the keynote, dive deep in 55 breakout sessions across 11 tracks, including getting started, building advanced architectures, app development, DevOps and more. Tune in live to network with fellow technologists, have your questions answered in real-time by AWS Experts and claim your certificate of attendance.

So, whether you are just getting started on the cloud or are an advanced user, come and learn something new at the AWS Summit Online.

Want to get started with AWS? At TIQQE, we have loads of experience and are an Advanced Partner to AWS. Contact us, we’re here to help.


Webinar

Incident automation webinar

Join our incident automation webinar on the 9th of June between 08:30 and 09:15. Learn how to automate 70% of your incidents with AWS Step Functions.

Incident handling is often a highly manual process in most companies. It requires 1st, 2nd and 3rd line resources in a service desk to manage error handling of the applications, databases and infrastructure. Furthermore, some expert forum, or Change Advisory Board, is usually in place to work on improvements to reduce tickets and incidents. A lot of people are required just to keep the lights on.

What if you could set up monitoring alerts that automatically trigger automated processes and resolve incidents before the users even notice them and place a ticket with your service desk? Sounds like science fiction? Check out this webinar where Max Koldenius will reveal how to set up incident automation using AWS Step Functions.

Join our webinar on the 9th of June from 08:30 to 09:15.

Please enroll here.

AWS

Continuous delivery in AWS – tools overview

Continuous Delivery (CD) is a term that is used for a collection of practices that strive for enabling an organisation to provide both speed and quality in their software delivery process. It is a complex topic and in this article we will focus on one aspect, which is selecting tools for CD pipelines when deploying software in AWS.

Before we dive into various tooling options for continuous delivery though, let us define some scope, terminology and also talk a bit why we would bother with this in the first place.

Scope

Our scope for this overview is delivering solutions that run in AWS. Source code lives in a version control system, and we assume that it is a hosted solution (not on-premise) and that it is Git. Git is currently the most common version control system in use. Some services mentioned may also work with other version control systems, such as Subversion.

Continuous delivery tools can either be a software-as-a-service (SaaS) solution, or we can manage it ourselves on servers – in the cloud or on-premise. Our scope here is only for the SaaS solutions.

If you have a version control system that is hosted on-premise or on your own servers in some cloud, that typically works with continuous delivery software that you can host yourself – either on-premise or in the cloud. These options we will not cover here.

Terminology

First of all, there are a few terms and concepts to look at:

  • Pipeline – this generally refers to the process that starts with changing the code to the release and deployment of the updated software. This would be a mostly or even completely automated process in some cases.
  • Continuous integration (CI) – the first part of the pipeline, in which developers can perform code updates in a consistent and safe way with fast feedback loops. The idea is to do this often and that it should be quick, so any errors in changes can be caught and corrected quickly. Doing it often means that there are only small changes each time, which makes it easier to pinpoint and correct any errors. For CI to work well, it needs a version control system and a good suite of automated tests that can be executed when someone commits updates to the version control system.
  • Continuous Delivery (CD) – This refers to the whole process from code changes and CI to the point where a software release is ready for deployment. This includes everything in the continuous integration part plus other steps that may be needed to make the software ready for release. Ideally this is also fully automated, although it may include manual steps. Again, the goal is that this process is quick, consistent and safe, so that it is possible to make a deployment “at the click of a button”, or through a similarly simple procedure. But the deployment itself is not part of continuous delivery.
  • Continuous Deployment (CD) – Unfortunately the abbreviation is the same as for continuous delivery, but it is not the same. This is continuous delivery plus automated deployment. In practice, this is applicable for some solutions but not all. With serverless solutions it is generally easier to do this technically, but in many cases it is not a technology decision, but a business decision.

Why continuous delivery?

Speed and safety for the business/organisation – that is essentially what it boils down to. To be able to adapt and change based on market and changing business requirements and to do this in a way that minimises disruption of the business.

Depending on which stakeholders you look at, there are typically different aspects of this process that are of interest:

  • Business people’s interests are in speed and predictability of delivery of changed business requirements, and that services continue to work satisfactorily for customers.
  • Operations people’s interests are in safety, simplicity and predictability of updates and that disruptions can be avoided.
  • Developers’ interest is in fast feedback on the work they do and that they can do changes without fear of messing things up for themselves and their colleagues. Plus that they can focus on solving problems and building useful or cool solutions.

It is a long process to reach continuous delivery Nirvana, and the world of IT is a mess to varying degrees; we are never done. A sane choice of tooling for continuous delivery can at least get us part of the way.

Continuous delivery tools

If we want a continuous delivery tool which targets AWS, uses git and runs as a SaaS solution, we have a few categories:

  • Services provided by AWS
  • Services provided by the managed version control system solution
  • Third party continuous delivery SaaS tools

Services provided by AWS

AWS has a number of services that are related to continuous delivery, all of which have names that start with “Code”. This includes:

  • AWS CodeCommit
  • AWS CodePipeline
  • AWS CodeBuild
  • AWS CodeDeploy
  • AWS CodeGuru
  • AWS CodeStar

A key advantage of using AWS services is that credentials and access are handled by the regular identity and access management (IAM) in AWS, and encryption by the key management service (KMS). There is no AWS secrets information that has to be stored elsewhere outside of AWS, since it all lives in AWS (assuming your CI/CD workflow goes all-in on AWS, or at least to a large extent).

A downside with these AWS services is that they are not the most user-friendly, plus there are a number of them. They can be used together to set up elaborate CI/CD workflows, but it requires a fair amount of effort to do so. CodeStar is the service that was an attempt to set up an end-to-end development workflow with CI/CD. I like the idea behind CodeStar, and for some use cases it may be just fine, but it has not received much love from AWS since it was launched.

You do not necessarily need all of these services to set up a CI/CD workflow; in its simplest form you just need a supported source code repository (CodeCommit/Github/Bitbucket) and CodeBuild. But things can quickly get more complicated, in particular once the number of repositories, developers and/or AWS accounts involved starts to grow. One project that tries to alleviate that pain is the AWS Deployment Framework.
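
To give a feeling for what that simplest form looks like, here is a minimal sketch of a CodeBuild buildspec.yml for a Node.js project. The runtime version and the deploy command are assumptions and depend entirely on your project and deployment tooling.

# buildspec.yml - minimal sketch, adjust to your project
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 12              # assumption: use the runtime your project needs
  pre_build:
    commands:
      - npm ci
  build:
    commands:
      - npm test
      - npx serverless deploy # assumption: replace with your deploy step (SAM, CDK, ...)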

Services provided by the managed version control system solution

Three of the more prominent version control system hosting services are Github, Gitlab and Bitbucket. They all have CI/CD services bundled with their hosted service offering. Both Bitbucket and Gitlab also provide on-premise/self-hosted versions of their source code repository software as well as continuous delivery tooling and other tools for the software lifecycle. The on-premise continuous delivery tooling for Bitbucket is Bamboo, while the hosted (cloud) version is Bitbucket Pipelines. For Gitlab the software is the same for both hosted and on-premise. We only cover the cloud options here.

On the surface the continuous delivery tooling is similar for all these three – a file in each repository which describes the CI/CD workflow(s) for that particular repository. They are all based on running Docker containers to execute steps in the workflow and can handle multiple branches and pipelines. They all have some kind of organisational and team handling capabilities.
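
As an illustration of what such a per-repository file can look like, here is a sketch of a Github Actions workflow that deploys to AWS on every push to the main branch. The file path, secret names, region and deploy command are all assumptions; the corresponding files for Gitlab CI/CD and Bitbucket Pipelines follow the same basic idea with a different syntax.

# .github/workflows/deploy.yml - hypothetical example
name: deploy

on:
  push:
    branches: [ main ]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}         # assumption: stored as repository secrets
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-west-1                                       # assumption: pick your region
      - run: npm ci
      - run: npm test
      - run: npx serverless deploy   # assumption: your deploy command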

Beyond the continuous delivery basics they start to deviate a bit in their capabilities and priorities. Bitbucket, being an Atlassian product/service, focuses on good integration with Jira in particular, but also with some 3rd party solutions. Gitlab prides itself on providing a one-stop solution/application for the whole software lifecycle – what features are enabled depends on which edition of the software is used. Github, being perhaps the most well-known source code repository provider, has a well-established ecosystem for integrating various tools into its toolchain, provided by 3rd parties and the community – more so than the other providers.

Github and Gitlab have the concept of runners that allow you to set up your own machines to run tasks in the pipelines.

So if you are already using other Atlassian products, Bitbucket and Bitbucket Pipelines might be a good fit. If you want an all-in-one solution then Gitlab can suit you well. For a best-of-breed approach where you pick different components, Github is likely a good fit.

Third party continuous delivery SaaS tools

There are many providers that offer hosted continuous delivery tooling. Some of these providers have been in this space for a reasonably long time, since before the managed version control system providers added their own continuous delivery tooling.

In this segment there may be providers that support specific use cases better, or are able to set up faster and/or parallel pipelines easily. They also tend to support multiple managed version control system solutions and multiple cloud provider targets. Some of them also provide self-hosted/on-premise versions of their software solutions. Thus this category of providers may be interesting for integrating with a diverse portfolio of existing solutions.

Some of the more popular SaaS providers in this space include:

Pricing models

Regardless of category, pretty much all the different providers mentioned here provide some kind of free tier and then one or more on-demand paid tiers.

For example: Github Actions, Bitbucket Pipelines, Gitlab CI/CD and AWS CodeBuild provide a number of free build minutes per month. This is however limited to certain machine sizes used in executing the tasks in the pipelines.

A simple price model of just counting build minutes is easy to grasp, but it also does not allow flexibility in machine sizes, since a larger machine will require more capacity from the provider. In AWS’s case with CodeBuild, you can select a number of different machine sizes, but you need to pay for anything larger than the smaller machines from the first minute.

The third party continuous delivery providers have slightly different free tier models, I believe partially in order to distinguish them from the offerings of the managed version control system providers. For example, CircleCI provides a number of free “credits” per week. Depending on machine capacity and feature, pipeline execution will cost different amounts of credits.

The number of parallel pipeline executions is typically also a factor for all the different providers – free tiers tend to have 1 pipeline that can execute at any time, while more parallel execution will cost more.

Many pricing models also include a restriction on the number of users, and there may be a price tag attached to each active user as well. All in all, you pay for compute capacity, to save time on pipeline execution, and to have more people utilize the continuous delivery pipelines.

With AWS, where a number of services fulfil various parts of the continuous delivery solution, it may initially be a bit more complex to grasp what things will actually cost. Also, the machine sizes may not be identical across the different services, so a build minute for one service may not necessarily be one build minute at another provider.

Trying to calculate the exact amount the continuous delivery solution will cost may be counterproductive at an early stage though. Look at features needed first and their importance, then consider pricing after that.

End notes

Selecting continuous delivery tooling can be a complex topic. The bottom line is that it is intended to deliver software faster, more securely and more consistently, with fewer problems, and with good insight into the workflow for different stakeholders. Do not lose sight of that goal and of what your requirements are beyond the simple cases; most alternatives will be fine for the very simple cases. Do not be afraid to try out some of them, but time box the effort.

If you wish to discuss anything of the above, please feel free to contact me at erik.lundevall@tiqqe.com

Cloud economics

License to kill

Using commercial software and paying expensive licenses is old school and no longer necessary. The cloud provides you with flexibility, and you only pay for what you use. No investments necessary.

In May I’m sure many of you, including myself, were looking forward to the release of the new James Bond film, with the famous slogan – License To Kill.

Unfortunately, due to Covid-19, the film premiere has been postponed, but the reality of License to Kill within IT licenses and infrastructure has never been more relevant than now.

We are in contact with roughly 150 companies across Sweden every month, mainly to understand where the market is at this point in time and how we need to align to be able to meet the market and its challenges.

In the past few months the market has really changed; most companies are “pulling the handbrake”, cutting down their variable costs, freezing new initiatives and so on. What comes as a surprise is the number of licenses many companies have, everything from Office365 to various on-premise and cloud platforms based on traditional license models which are core-based and very expensive.

When buying licenses with a traditional license model, you buy capacity up-front which you plan to use over a longer term, usually between 1 and 5 years. Of course, during this period you are able to “scale up” and purchase more cores. But overall you will always be paying for more than you need at the point in time of the purchase.

Scaling traditional on-premise platforms has the following effects:

  • Additional cores
  • Additional servers
  • Not fully utilized 
  • Generate additional costs

This is costing companies across the globe huge amounts of money which could be spent on better things or, in these uncertain times, simply saved.

Here are a few examples:

  • On-Prem infrastructure
  • Integration platforms, Enterprise Service Bus
  • API-Platforms
  • Identity & Access Management platforms
  • Service & Assessment Platforms

The list is long and most likely you are running one or several at your workplace today.

So what’s the solution?

Both from a license and an infrastructure perspective, the cloud is the obvious choice, as it enables you to scale both up and down. At TIQQE we focus purely on AWS, and the ability to scale, avoid up-front license costs and pay only for what you need right now are all key reasons to move to the cloud.

Ask yourself if you need to renew your licenses anytime soon. Do you want to buy more licenses or do you want a second opinion?

We have all the tools in place to quickly identify your costs today and what the costs would be if you would instead operate in the cloud.

This blog post is mainly focused on the cost saving perspective, but there are many more examples of what the cloud provides you with.

I really recommend checking the following blogs out:

4 ways of reducing cost and increase liquidity