Blog

#theTIQQEcode

Leadership program at TIQQE

Why would a company in its right mind want to invest in a leadership program for all employees?

I am a people person. Throughout my career I have worked in the technology industry, and don’t get me wrong, I love technology and everything that IT enables, but even more I love the people who come with the territory, and everyone’s little peculiarities. Because we all have them: our customers, employees, partners, friends and families. The fact is that the more we know and understand each other, the more the little quirks we perceive as annoyances transform into likability. To get there, we need common ground for how we interact and communicate with each other.
We have a number of core values at TIQQE and we were born with a strong culture that we want to make sure that we maintain over time.

  • Awesomeness – Always make our customers heroes, our teammates awesome
  • Autonomy – Let the teams be self-driven, with the knowledge that they’ll always do what is right for our customers
  • Ideas over status – Anyone can have the best idea. Encourage bravery in all to present smart thoughts. And be prepared to change your mind, without seeing it as a defeat.
  • Courage – We want to challenge current “truths” to achieve new things.

This isn’t just important to us. It’s our firm belief that this is what will make us succeed in our joint mission to “create a company where we all want to work”, a company we never want to leave, and within that small quote lies a big responsibility for everyone: to take charge of our own personal #theTIQQEcode. We’re doing this work simply because it is our responsibility as a company to ensure that everyone has the opportunity to affect what we are and who we are. And we want to do it Together.

What if we all were leaders, and how many leaders can a company have? Here at TIQQE we believe that we can have as many leaders as we have employees. Let’s start with the definition of a leader, which of course can be very individual. Forbes published an article on the most essential qualities that define great leadership a couple of years ago. Here are some of the qualities from that article:

  • Sincere enthusiasm – True enthusiasm for a business, its products or mission cannot be faked.
  • Integrity – Whether it’s giving proper credit for accomplishments, acknowledging mistakes or putting safety and quality first, great leaders always do what’s right, even if it isn’t the best thing for the current project or even the bottom line.
  • Communication skills – You can’t motivate and inspire if you’re not a good communicator. Nor can you underestimate the importance of listening as an integral part of communication.
  • Loyalty – True loyalty is ensuring that all team members have the training and resources to do their jobs. It’s standing up for team members in crisis and conflict.
  • Decisiveness – You are willing to take on the risk of decision making, with the knowledge that if things don’t work out, you’ll be held accountable. 
  • Empowerment – A good leader has faith in their ability to train and develop the employees. Because of this, they have the willingness to empower those they lead to act autonomously. When employees are empowered, they are more likely to make decisions that are in the best interest of the company and the customer as well.

All of the above is not something I consider my exclusive right in my everyday work. I am convinced that all of us employed at TIQQE possess this within, and we all matter and have a part in it. We are all leaders of ourselves, and we just have to practice it to maintain it and get better at it.
Imagine the difference between one or two people defining the truth for a whole company, rather than the whole company defining together what we want the company to be. What would the difference be in commitment to the company, the mission and the purpose of our business? We believe that the difference is humongous. That’s why we invest in a leadership program for all our employees. We’re building trust, practicing how we communicate and how we can coach each other, and discussing and trying out different tools, experiences, life and work. So if you wonder what’s happening in our office one Thursday a month: we’re preparing everyone to be tomorrow’s leaders, today.

Digitalization

Digitalization is dead, long live digitalization

Half of the companies on the Fortune 500 list have disappeared in the last 20 years. That means 250 multibillion-dollar companies, 12 companies a year, one per month for the past 20 years, have been replaced. Why? Emerging technologies are the main reason, and they are both an opportunity and a threat: an opportunity if you explore them and a threat if you neglect them.

Half of the companies on the Fortune 500 list have disappeared in the last 20 years. Why?

The simple answer to that question is that they haven’t been able to adapt to changing demands and changing markets. For the past 40 years, technology has been the main driver of change, and 25 years ago the Internet was introduced to universities. Since then, the Internet has enabled completely new business models and digitalized complete supply chains, from manufacturers to end customers. It has disrupted almost every industry and changed the way we communicate and how we consume and ship goods and services on a global level. There is substantial evidence of the impact of the Internet, with retail, logistics and entertainment being a few examples.

The problem with emerging technologies is that the effects are slow enough to stay under the radar of boards and management teams for a long time, and once changes are visible it’s often too late to respond. Changing large organizations is a time-consuming process, especially if it includes changing the very way they do business. Another problem is that new technology starts in the tech community and is hard for non-technical people to decode and translate into business impact. The decision power in an organization sits with the board and management teams, who often do not have deep technical insights.

Furthermore, emerging technologies often get a conceptual name which then becomes a buzzword, such as “ebusiness” in the late 90’s and “on-demand” at the beginning of the century. The buzzword today is “digitalization”. You get 15.5 million hits on Google if you search for digitalization. There are thousands of interpretations of what it is and what you should do, making it really hard to translate into your own context. These concepts usually get worn out before the new technology has even made an impact, further complicating the process of spotting and translating new technology for a board or a management team.

I read an interesting article by William Bergh, who served as CEO for one month at Adecco. The article is a reflection on his time as CEO during the Covid-19 outbreak and on how to manage a company in the midst of a crisis. He argues that the characteristics of a crisis are the element of surprise, the lack of information, the need for speed and the opportunity to change. The element of surprise is central because it arises no matter the speed of the cause. Some might argue that surprises are sudden by nature, but that is not the case according to William Bergh. He mentions Covid-19 as a good example: many were aware of Covid-19 for months before it hit, but its impact surprised all of us. The same goes for megatrends. We know that automation, AI and machine learning will change our world profoundly, but we will nevertheless be surprised by how they impact us.

Trust me, digitalization, or whatever we call the new emerging technologies, is just the beginning. It will continue to disrupt industries and companies, making even more remarkable changes to the Fortune 500 list in the future. This is both an opportunity and a threat for every company across the globe: an opportunity if you start exploring how new technology can enhance and develop your business, but a threat if you neglect it and just hope for the best. One thing is sure, there will be changes in every industry, and if you don’t reinvent yourself, someone else will do it for you.

So, what is our advice at TIQQE and how can we help?

We offer you 3 pieces of advice.

Start with the customers

You need to start with the customer interface and work backwards from there. How can you improve the customer experience with new technology? How can you leverage your data to build new digital products or solutions for the benefit of your customers? Don’t wait until your customers are asking for something you can’t provide, because then it will be too late. Align with someone who understands both sides of the fence: customer innovation and modern technology.

Enable your digital capabilities

Your digital capabilities will not be enabled in your legacy infrastructure. You need to establish an agile and scalable business infrastructure on top of your legacy infrastructure to enable digital products and services.

Start small

Our final piece of advice is to start small and prove the value. Identify one thing that would provide your customer with some form of new value. Build that service all the way from the customer to the backend systems and establish your new business infrastructure as you go. Do not fall into the trap of big architectural plans and designs, as they will never be completed.

We have extensive experience of designing and building digital products and services for both small and large customers. Please feel free to contact any of us.

If you wish to read William Bergh’s interesting article about managing a company in crisis, you can find it here. He is 25 years old, by the way, and the analysis is razor sharp.

People

Torbjörn Stavenek just joined TIQQE

We’re proud to welcome Torbjörn Stavenek to our team at TIQQE. Torbjörn is an AI and Machine Learning expert and will lead our investment in the AI and Machine Learning domain due to an increasing demand from our customers.

Who is Torbjörn?

My name is Torbjörn and I have been working with IT for more than 20 years. I am originally from the south of Sweden, studied in Linköping and worked for many years in Stockholm. However, 5 years ago me and my family bought a farm in Vintrosa just outside Örebro and moved there. 
I have a wife and 3 kids, a dog, a cat, some hens, and various other small animals.

I am in general an optimistic and curious person. My interests include cooking, wine, creating and listening to music, carpentry, forestry, investing, exploring business ideas, gaming, etc.

What did you know about TIQQE before you started?

Since I had been working with many of the people at Tiqqe before, I had a pretty good idea about the company. I knew about the AWS and serverless focus, and I also knew about the employee and customer focus.

Why did you want to join TIQQE?

As I mentioned, I know many of the people at Tiqqe already and I want to work with nice people – so that was easy! I have been working with cloud and serverless for some years now, which is also a great fit with Tiqqe.
But what really sold me was their new AI-initiative which I will be an integral part of.

What was your first impression of TIQQE the first week?

Much as expected: nice people, drive, engagement, and lots of discussions about customers and projects.

What is your role at TIQQE?

I will lead the investment into the AI and Machine Learning domain together with Andreas Vallberg and Anna Klang. Several of our existing customers have come quite far in their digitalization journey and are looking to explore the opportunities within their data, which we will be able to help them with. We’re already involved in several AI and Machine Learning projects, some with really groundbreaking capabilities within prediction.

How has your first time been at TIQQE?

A mix of focused work from home, socializing and working at the office, and joining the first lesson in an internal leadership programme.
So as they say in the movie industry: “In medias res” – dropped into the center of things.

What are you looking forward to in the nearest future?

I am really excited about growing the AI-initiative with more customers and projects and finding interesting problems to solve in this space.

What do you know about TIQQE now?

I think basically what I thought I knew has been confirmed, so I am happy about that!

Welcome Torbjörn!

People

Yasnia Deras Cruz just joined TIQQE

We’re proud to welcome Yasnia Deras Cruz to our growing family at TIQQE. Yasnia is the founder of #kodayoga and will strengthen our fullstack delivery team. As usual, we ask a couple of questions to get to know our new stars a little bit better.

Who is Yasnia?

My name is Yasnia Deras Cruz and I’m 26 years old. I’ve been studying for a bachelor’s in systems science here in Örebro. As a person I love to be creative and find new challenges. I also love to practice yoga, dance and go to the gym. In my spare time I like to explore new places, get to know new people or come up with new ideas. Besides being active, I really enjoy watching anime or reading a good book.

What did you know about TIQQE before you started?

I came into contact with TIQQE in 2018 at one of their re:Invent events and learned that they focus on serverless and AWS. I also got the impression that they care about their customers and employees and that they are a company that wants to make the workplace modern and fun.

Why did you want to join TIQQE?

I wanted to learn and develop more as a programmer and as a person, and TIQQE seemed to be the place to be! For me it is important to be at a company that cares and wants its employees to grow, and that’s why I wanted to join TIQQE.

What was your first impression of TIQQE the first week?

My first impression is that TIQQE is very welcoming and for sure a place to grow!

What is your role at TIQQE?

I will be working as a Fullstack Developer.

How has your first time been at TIQQE?

I was very nervous the first day, but I can say that the nervousness went away after a few hours! TIQQE made sure that I got an IT mentor and a buddy who can answer all the questions I have, which makes it easy to get into the company culture. Everybody I have met has been very welcoming and nice, which makes it much easier to get to know everyone! I’m looking forward to the coming weeks and to learning from everyone at TIQQE.

What are you looking forward to in the nearest future?

I’m really looking forward to learning and getting comfortable with AWS and most of the programming languages TIQQE works with. I’m also looking forward to meeting and getting to know more of TIQQE’s employees, not only in Sweden but in the Philippines and Italy!

What do you know about TIQQE now?

That they really care for their employees and customers, and that #theTIQQEcode is not just words. TIQQE is a company that truly stands for its values and a place where everyone grows in their different ways.

Welcome Yasnia!

COVID-19

Watch our Optimization webinar

In March, we launched a series of ideas on how companies suffering from the Covid-19 pandemic can quickly reduce cost and increase liquidity. If you missed the webinar around our fourth idea, reducing cost by optimizing your existing AWS workloads, you can watch it today.

Many companies are under tremendous financial pressure due to the COVID-19 virus. In early March, we sat down to figure out what we can do to help and came up with 4 ways of how we can reduce cost and increase liquidity in the short term for a company.

You can read a summary of the cost saving series here. The summary includes links to all 4 ideas to give you a deeper insight into each idea. Every idea also includes a financial business case which has two purposes:

  • Translate technology into tangible financials to motivate your CFO to support the idea.
  • Provide a business case template to reflect your specific prerequisites.

AWS workload optimization

Managing cloud infrastructure is different from managing infrastructure on-prem. It’s easy to provision new resources, but it’s equally easy to forget to decommission resources when they’re no longer needed. Furthermore, performance tuning is often not part of daily routines and is only performed when there are performance problems. Optimization is not supposed to be performed occasionally, but rather on a regular basis, to ensure cost effective use of cloud computing.
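
As a small illustration of what regular optimization can look like in practice (my own sketch, not part of the original webinar), the snippet below uses the AWS SDK for JavaScript to list EBS volumes that are provisioned but not attached to any instance, a common source of forgotten cost. The region is a placeholder.

// Minimal sketch: list unattached (billable but unused) EBS volumes.
// Assumes the AWS SDK v2 for JavaScript and credentials allowed to call ec2:DescribeVolumes.
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'eu-west-1' }); // pick your region

const listUnattachedVolumes = async () => {
  const result = await ec2.describeVolumes({
    Filters: [{ Name: 'status', Values: ['available'] }] // 'available' means not attached
  }).promise();

  result.Volumes.forEach(v =>
    console.log(`${v.VolumeId}\t${v.Size} GiB\tcreated ${v.CreateTime}`)
  );
};

listUnattachedVolumes().catch(console.error);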

You can read the full blog post here.

You can watch the Swedish webinar here.

You can watch the English webinar here.

AWS

Decide security posture for your Landing Zone.

This is the second post in the series “Where do I start with AWS?”. In this blog post, we will turn our focus on securing our data.

So, we have previously discussed best practices in regards to setting up and governing a new, secure multi-account AWS environment, and a framework that is used to deliver our Infrastructure-as-Code (IaC) as well as application code.

If you haven’t read this blogpost, you can find it here: Where do I start with AWS?

It’s now time to take care of securing our state-of-the-art infrastructure and to look at how we can start automating the handling of security incidents that might occur.

Blog post agenda:

  1. Fundamentals of security in your AWS environment
  2. Where to start with security practice?
  3. Introduction to AWS Security Hub
  4. How to start with AWS Security Hub

Fundamentals of security in your AWS environment

Amazon Web Services has a concept they call the Shared Responsibility Model. In this model the responsibility is, as the name implies, shared between you, the customer and consumer of the services, and AWS, the provider of the services.

The picture below describes the Shared Responsibility Model.

Source: aws.amazon.com

As the picture implies, you can say that “You are responsible for what is running ON the cloud. AWS is responsible for running the cloud.”

Let’s take an example related to configuration management stated on the compliance page related to the Shared Responsibility Model at AWS.

“Configuration Management: AWS maintains the configuration of its infrastructure devices, but businesses are responsible for configuring their own guest operating systems, databases, and applications.”

Source: aws.amazon.com

The example above is mostly related to AWS IaaS services. AWS does, however, have a lot of other services, services which you don’t have to manage beyond providing your code and some minor configuration related to it.

A service like that is AWS Lambda. The AWS Shared Responsibility Model for Lambda sees another layer peeled away. Instead of having to manage, maintain and run an EC2 instance to run their code, or having to track software dependencies in a user-managed container, Lambda allows organizations to upload their code and let AWS figure out how to run it at scale.

Below is a picture of the AWS Shared Responsibility Model for Lambda.

Source: aws.amazon.com

To learn more about the Shared Responsibility Model, please visit the official compliance page at AWS: Shared Responsibility Model.

Where to start with security practice?

Now that we have learned about what we have to take into consideration and what we are responsible for when running workloads in AWS, it is time to actually start with our security practice.

Setting up a baseline

When setting up a baseline for your security practice, it is important to first identify which data is important to your business. Classifying data not only enables you to understand, categorize, label and protect data today, but also in the future, when new data structures, regulations and compliance frameworks come your way. Without proper classification, there is no proper protection.

Get to know your organization’s “sacred data”, i.e. the crown jewel data that is of greatest value to your business and would cause the most damage if compromised. Every organization has different needs, and will therefore also have different sacred data. This data will need the most restrictive controls applied to it and should be protected at all costs.

Organizations should create classification categories that make sense for their needs, basically classifying data according to what it is worth.

A common method being used is a green, yellow and red color model. As you might think, the color-coded scale depends on the data value and the importance of it.

An example of this could be:

Green data: Likely data that is publicly available or routine company records; not something that would impact a stock price, for example, but a leak would be a minor hit to your reputation.

Yellow data: Something that would be very concerning for your business, for example leakage of sensitive customer data.

Red data: A big news event, extreme fines and loss of customer trust; something that might take your business to bankruptcy.

The next step is to secure your data according to its classification. When you are aware of the importance levels of your data, you can work backwards and employ security controls that are aligned with its criticality. In this way you can minimize the probability of breaches happening and ask appropriate questions about your data (see the small sketch after the list below).

Examples of those questions could be:

  • What systems are processing red data?
  • Does this data need encryption in-transit and at-rest?
  • Who has access to encryption keys?
  • Are there systems that inappropriately move red data over to systems with fewer security controls, such as systems built for green or yellow data?
  • Are you working with least privilege access to your data?
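
To make the encryption questions concrete, here is a minimal sketch (my own illustration, not from the original post) that uses the AWS SDK for JavaScript to flag S3 buckets tagged with a hypothetical classification tag of red that do not have default encryption enabled. The tag key and value are assumptions for this example.

// Minimal sketch: flag S3 buckets tagged classification=red that lack default encryption.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const auditRedBuckets = async () => {
  const { Buckets } = await s3.listBuckets().promise();
  for (const bucket of Buckets) {
    let tags = [];
    try {
      tags = (await s3.getBucketTagging({ Bucket: bucket.Name }).promise()).TagSet;
    } catch (e) { /* bucket has no tags at all */ }

    const isRed = tags.some(t => t.Key === 'classification' && t.Value === 'red');
    if (!isRed) continue;

    try {
      await s3.getBucketEncryption({ Bucket: bucket.Name }).promise();
    } catch (e) {
      // No default encryption configuration found on a red-classified bucket
      console.log(`WARNING: ${bucket.Name} is classified red but has no default encryption`);
    }
  }
};

auditRedBuckets().catch(console.error);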

Data classification is a start, with the goal of reaching compliance, but there are other things to take into consideration as well. Security is also a people process and needs ongoing, collaborative dialogue in your organization. You need to grow security awareness within your operations and cloud development teams, as a solid understanding of the implications of running software in the cloud is crucial.

If you have a base set of guardrails, it becomes important to train your developers to take responsibility themselves. This can be considered a “trust but verify” approach, where you have baseline guardrails in place but also review the teams to ensure they are compliant with the expectations. This can be a tough thing, and it’s important to work together with the teams and be there to support them in succeeding, rather than to be a control mechanism that prevents progress (this is what you are, but it’s all in the attitude towards your coworkers).

Before continuing, I would like to say that there are tons of security solutions out there that can accomplish the same tasks, and you might already have one in place that covers some of your needs. In this blog post I will go through an AWS-based alternative, which is a cost efficient option compared to other, more license-heavy solutions out there.

Introduction to AWS Security Hub

At the General Availability announcement of AWS Security Hub, Dan Plastina, Vice President for External Security Services at AWS stated:

“AWS Security Hub is the glue that connects what AWS and our security partners do to help customers manage and reduce risk,” said Dan Plastina, Vice President for External Security Services at AWS. “By combining automated compliance checks, the aggregation of findings from more than 30 different AWS and partner sources, and partner-enabled response and remediation workflows, AWS Security Hub gives customers a simple way to unify management of their security and compliance.”

What is Dan Plastina really talking about here?

AWS Security Hub gives you a broad view of your security alerts and security posture across your AWS accounts. Powerful security tools, from firewalls and endpoint protection to vulnerability and compliance scanners, are all available through this single service, which makes it quite a powerful one.

Security Hub is this neat single place that aggregates, organizes and prioritizes your security alerts and findings from AWS services such as Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS Identity and Access Management (IAM) Access Analyzer, AWS Firewall Manager and other AWS Partner solutions.

AWS Security Hub continuously monitors your environment using automated security checks based on the AWS best practices and industry standards that your organization follows.

You can also take action by using other AWS services such as Amazon Detective, or by sending findings to your ticketing system or chat of choice using CloudWatch Event rules. If you are using your own incident management tools, Security Information and Event Management (SIEM) or Security Orchestration Automation and Response (SOAR), it is possible to act on Security Hub findings in these systems as well.
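
As a rough illustration of that pattern (a sketch of my own, not from the post), the rule below forwards imported Security Hub findings to an SNS topic that could feed a ticketing system or chat integration. The topic ARN is a placeholder.

// Minimal sketch: route Security Hub findings to an SNS topic via a CloudWatch Events rule.
const AWS = require('aws-sdk');
const events = new AWS.CloudWatchEvents();

const setupFindingsRule = async () => {
  await events.putRule({
    Name: 'securityhub-findings-to-sns',
    EventPattern: JSON.stringify({
      source: ['aws.securityhub'],
      'detail-type': ['Security Hub Findings - Imported']
    })
  }).promise();

  await events.putTargets({
    Rule: 'securityhub-findings-to-sns',
    Targets: [{ Id: 'sns-target', Arn: 'arn:aws:sns:eu-west-1:123456789012:security-findings' }]
  }).promise();
};

setupFindingsRule().catch(console.error);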

You also have the ability to develop your own custom remediation actions, called playbooks, that act on events you define. AWS has released a solution for developing playbooks that remediate security events related to the security standards defined as part of the CIS AWS Foundations Benchmark.

The solution is also all serverless, no servers to manage, which means more time for fika or other useful things 😉

You can find this solution at the link below:

AWS Security Hub Automated Response and Remediation

I want AWS Security Hub! How and where do I deploy it?

Alright, easy now... We first need an AWS account suitable for this kind of service; it is, after all, a critical component and something you only want the security team, or other people in similar roles, to see.

The account that acts as the Security Hub master should be an account that is responsible for security tools. The same account should also be the aggregator account for AWS Config.

It is also important to note that after you have enabled Security Hub in the account that is acting as the Security Hub master, you also need to enable Security Hub in the other member accounts and then, from the account acting as the master, invite the member accounts. You will then be able to see all security findings related to your accounts in one place, i.e. the Security Hub master account (see the sketch below).
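
A minimal sketch of what that could look like with the AWS SDK for JavaScript, run from the master account (the member account ID and email are placeholders, and each member still has to accept the invitation from its own account):

// Minimal sketch: enable Security Hub in the master account and invite one member account.
const AWS = require('aws-sdk');
const securityhub = new AWS.SecurityHub();

const setupMaster = async () => {
  await securityhub.enableSecurityHub({}).promise();

  await securityhub.createMembers({
    AccountDetails: [{ AccountId: '111122223333', Email: 'security@example.com' }]
  }).promise();

  await securityhub.inviteMembers({
    AccountIds: ['111122223333']
  }).promise();
};

setupMaster().catch(console.error);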

There is also a script available for deploying Security Hub in a multi-account environment at the link below:

AWS Security Hub Multiaccount Scripts

For those who are using Control Tower, the Security Hub master account would be the shared account named Audit.

There is also a version of the above multi-account script that is modified to work with Control Tower, which you can find at the link below:

AWS Control Tower Security Hub Enabler

Conclusion

In essence, Security Hub is a SIEM aggregator, with remediation tips thrown in too. You can make use of a lot of mature AWS services such as CloudWatch and Lambda, which makes it very flexible. It can help you understand activity happening in your AWS environment and take appropriate action on it, as well as understand and monitor critical components of your environment.

When integrated with other services such as Amazon GuardDuty, Amazon Macie and Amazon Detective, you will have a great toolset that puts you at a great advantage in terms of your security posture.

Security Hub has a very competitive pricing model and is beneficial for companies looking to get further insight into their AWS workloads.

Security Hub also has integrations with a lot of third-party providers and is, like many other AWS services, developed at an impressive pace, with new features added regularly.

Below is a monthly pricing example for an organization that uses 2 regions and 20 accounts, a quite large organization in other words.

  • 500 security checks per account/region/month
  • 10,000 finding ingestion events per account/region/month

Monthly charges = 500 * $0.0010 * 2 * 20 (first 100,000 checks/account/region/month)
+ 10,000 * $0 * 2 * 20 (first 10,000 events/account/region/month are free)
= $20 per month

Besides pricing, Security Hub is simple to use and provides several frameworks ready for use out of the box. For these reasons, Security Hub is gaining traction among larger, respected players such as Splunk, Rackspace and GoDaddy, and it is without a doubt a great service.

Security should be one of the top priorities for organizations, but that is not always the case. When investing in security solutions, an organization should first estimate how much a security breach would cost them and what implications it might have, and then use this information to set aside a budget dedicated to this field. Classify your data and think about how this data is processed and used, in transit and at rest; this can lead to great insights and should not be underestimated.

You will probably have a hard time finding a solution that provides more bang for the buck than Security Hub in regards to securing your AWS resources.

If you have any questions or just want to get in contact with me or any of my colleagues, I’m reachable on any of the following channels.

Mail: christoffer.pozeus@tiqqe.com

LinkedIn: Christoffer Pozeus

Webinar

Watch our Incident Automation webinar

In March, we launched a series of ideas on how companies suffering from the Covid-19 pandemic can quickly reduce cost and increase liquidity. If you missed the webinar around our third idea, reducing cost by automating your incident handling, you can watch it today.

Many companies are under tremendous financial pressure due to the COVID-19 virus. In early March, we sat down to figure out what we can do to help and came up with 4 ways of how we can reduce cost and increase liquidity in the short term for a company.

You can read a summary of the cost saving series here. The summary includes links to all 4 ideas to give you a deeper insight into each idea. Every idea also includes a financial business case which has two purposes:

  • Translate technology into tangible financials to motivate your CFO to support the idea.
  • Provide a business case template to reflect your specific prerequisites.

Incident Automation

Incident handling is often a highly manual process in most companies. It requires 1st, 2nd and 3rd line resources in a service desk to manage error handling of the applications, databases and infrastructure. Furthermore, some expert forum, or Change Advisory Board, is usually in place to work on improvements that reduce tickets and incidents. A lot of people are required just to keep the lights on.

What if you could set up monitoring alerts that automatically trigger automated processes and resolve incidents before the users even notice them and place a ticket with your service desk? Sounds like science fiction? Check out this webinar where Max Koldenius reveals how to set up incident automation using AWS Step Functions.
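
To give a feel for the pattern (a minimal sketch of my own, not the exact setup shown in the webinar), a Lambda function subscribed to a CloudWatch alarm notification could start a remediation workflow in AWS Step Functions. The state machine ARN is a placeholder for your own remediation workflow.

// Minimal sketch: start a Step Functions remediation workflow when an alarm notification arrives via SNS.
const AWS = require('aws-sdk');
const stepfunctions = new AWS.StepFunctions();

exports.handler = async (event) => {
  // SNS delivers the CloudWatch alarm as a JSON string in the message body
  const alarm = JSON.parse(event.Records[0].Sns.Message);

  return stepfunctions.startExecution({
    stateMachineArn: 'arn:aws:states:eu-west-1:123456789012:stateMachine:incident-remediation',
    input: JSON.stringify({
      alarmName: alarm.AlarmName,
      reason: alarm.NewStateReason
    })
  }).promise();
};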

You can read the full blog post about incident automation here

You can watch the webinar here

AWS

Simply: AWS DynamoDB

My “Simply AWS” series is aimed at absolute beginners who want to quickly get started with the wonderful world of AWS. This one is about DynamoDB.

What is DynamoDB?

DynamoDB is a NoSQL, fully managed, key-value database within AWS.

Why should I use it?

When it comes to data storage, selecting which technology to use is always a big decision. DynamoDB, like any other technology, is not a silver bullet, but it does offer a lot of positives if you need a document-based key-value store.

Benefits of DynamoDB

How do I start?

First you’re gonna need an AWS account, follow this guide.

Setting up our first DynamoDB database

If you feel like it, you can set your region in the top right corner of the AWS console. It should default to us-east-1, but you can select something closer to you; read more about regions here.

From the AWS console, head to Services and search for DynamoDB, select the first option.

The first time you open up DynamoDB you should see a blue button with the text Create Table, click it.

Now you’re presented with some options for creating your table. Enter myFirstTable (this can be anything) as the Table name.

Primary key

A key in a database is something used to identify items in the table, and as such it must always be unique for every item. In DynamoDB the key is built up of a Partition key and an optional Sort key.

  • Partition key: As the tooltip in the AWS console describes, the Partition key is used to partition data across hosts. Because of that, best practice is to use an attribute that has a wide range of values. For now we don’t need to worry much about this; the main thing to take away is that if the Partition key is used alone, it must be unique.
  • Sort key: If the optional Sort key is included, the Partition key does not have to be unique (but the combination of Partition key and Sort key does), and it allows us to sort within a partition.

Let’s continue. For this example I’m going to say I’m creating something like a library system, so I’ll put Author as the Partition key and BookTitle as the Sort key.

Note that this is just one of many ways you could set up this type of table, and choosing a good primary key is arguably one of the most important decisions when creating a DynamoDB table. What’s good about AWS is that we can create a table, try it out, change our minds and just create a new one with ease (see the small example below).
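
As a quick illustration of the key model (example data of my own), with Author as the Partition key and BookTitle as the Sort key, two items can share the same Author as long as their BookTitle differs:

// Both items live in the same partition (Author) but have different sort keys (BookTitle),
// so each item's full primary key is still unique.
const items = [
  { Author: 'Douglas Adams', BookTitle: "The Hitchhiker's Guide to the Galaxy", genre: 'Sci-fi' },
  { Author: 'Douglas Adams', BookTitle: 'The Restaurant at the End of the Universe' }
];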

Next up are table settings. These are things like secondary indexes, provisioned capacity, autoscaling, encryption and such. It’s a good idea to eventually get comfortable with these options, and I highly recommend looking into on-demand read/write capacity mode, but as we just want to get going now, the default settings are fine and will not cost you anything for what we are doing today.

Hit create and wait for your table to be created!

Now you should be taken to the tables view of DynamoDB and your newly created table should be selected. This can be a bit daunting as there are a lot of options and information, but let’s head over to the Items tab.

From here we could create an item directly from the console (feel free to try it out if you want), but I think we can do one better and set up a Lambda for interacting with the table.

Creating our first item

If you’ve never created an AWS Lambda before, I have written a guide similar to this one on that topic; you can find it here.

Create a Lambda called DynamoDBInteracter.

Make sure to select to create a new role from a template and search for the template role Simple microservice permissions (this will allow us to perform actions against DynamoDB).
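
For reference, a role created from that template ends up with an IAM policy roughly along these lines (my approximation of the idea, not the exact template; the real policy is scoped to your table and region):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem",
        "dynamodb:Scan"
      ],
      "Resource": "arn:aws:dynamodb:*:*:table/myFirstTable"
    }
  ]
}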

After creating the Lambda we can edit it directly in the AWS console. Copy and paste this code:

// The DocumentClient lets us work with plain JavaScript objects instead of DynamoDB attribute types
const AWS = require('aws-sdk')
const client = new AWS.DynamoDB.DocumentClient();

// Entry point: routes the incoming event to a put or a get against the table
exports.handler = async (event) => {
    try {
        console.log(event.action)
        console.log(event.options)

        let response;
        switch (event.action) {
            case 'put':
                response = await putItem(event.options);
                break;
            case 'get':
                response = await getItem(event.options);
                break;
        }
        return response;
    } catch (e) {
        console.error(e)
        return e.message || e
    }
};


// Writes one book item; Author (partition key) + BookTitle (sort key) form the primary key
let putItem = async (options) => {
    var params = {
      TableName : 'myFirstTable',
      Item: {
         Author: options.author,
         BookTitle: options.bookTitle,
         genre: options.genre
      }
    };

    return await client.put(params).promise();
}

// Reads one item back by its full primary key (Author + BookTitle)
let getItem = async (options) => {
    var params = {
      TableName : 'myFirstTable',
      Key: {
        Author: options.author,
        BookTitle: options.bookTitle
      }
    };


    return await client.get(params).promise();
}

Hit Save, then create a new test event like this:

{
    "action": "put",
    "options": {
        "author": "Douglas Adams",
        "bookTitle": "The Hitchhiker's Guide to the Galaxy",
        "genre": "Sci-fi"
    }
}

and run that test event.

Go back to DynamoDB and the Items tab and you should see your newly created Item!

Notice that we did not have to specify the genre attribute. That is because DynamoDB is NoSQL: it follows no schema, and any field + value can be added to any item regardless of the other items’ composition, as long as the primary key is valid.

Retrieving our item

Now let’s try to get that item, create another test event like this.

{
    "action": "get",
    "options": {
         "author": "Douglas Adams",
        "bookTitle": "The Hitchhiker's Guide to the Galaxy"
    }
}

and run it.

You can expand the execution results and you should see your response with the full data of the item.

Congratulations!

You’ve just configured your first DynamoDB table and performed calls against it with a Lambda, but don’t stop here: the possibilities with just these two AWS services are endless. My next guide in this series will cover API Gateway and how we can connect an API to our Lambda that then communicates with our database table, stay tuned!

What’s next?

As I’m sure you understand, we’ve just begun to scratch the surface of what DynamoDB has to offer. My goal with this series of guides is to get your foot in the door with some AWS services and show that, although powerful and vast, they are still easy to get started with. To check out more of what calls can be made with the DynamoDB API (such as more complicated queries, updates and scans, as well as batch writes and reads), check this out, and feel free to edit the code and play around (a small query example follows below).
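
For instance, a query is how you would fetch every book by a given author without knowing each title up front. Here is a minimal sketch of what that could look like (my own addition, using the same table and DocumentClient pattern as the Lambda above):

// Minimal sketch: query all items that share the partition key Author
const AWS = require('aws-sdk')
const client = new AWS.DynamoDB.DocumentClient();

let queryByAuthor = async (author) => {
    var params = {
      TableName : 'myFirstTable',
      KeyConditionExpression: 'Author = :author',
      ExpressionAttributeValues: { ':author': author }
    };
    return await client.query(params).promise();
}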

I would also like to recommend this guide if you want even more in-depth look into DynamoDB, it covers most of what I have here but more detailed and also goes over some of the API actions mentioned above.

Contact me

Questions? Thoughts? Feedback?
Twitter: @tqfipe
Linkedin: Filip Pettersson

Development

How to make it work

I joined a new team at the beginning of this year at one of our customers. This project is more complex than my previous work since it involves more team members and larger environments. I have learnt a lot from this project and would like to share some experiences that helped me grow.

1. We work closer to the customer.

My previous projects involved very little communication with customers; the customer delivery team did most of the work. In contrast, the current project requires the customer and the development team to work closely together. We meet at least once per week, either face to face or online (due to the current COVID-19 situation), to gather user feedback from different locations on the following aspects:

  • User experience on current version of the product
  • Bugs and critical incidents reported by service desk
  • New feature requirements

Meanwhile, the development team shows the progress to our customers. In my opinion, it is an efficient way to understand each other in both directions, and this helps us meet the customer requirements on time.

2. We work closer to the different teams.

My previous tasks required me to work individually, while this project involves several teams with various assignments and co-workers from different time zones. That doesn’t cause us any trouble, since we have found ways to work smoothly between the teams and individuals:

  • We start our day with a daily scrum, which kicks off my day by helping me make a proper plan for myself and keep track of others’ progress.
  • People in our team are nice and cooperative, and we are willing to overcome obstacles together and share ideas with each other. When I was new in the team, I did pair programming from time to time with senior members. The pair programming helped me quickly understand the existing system and pick up new knowledge.

3. We have stricter control of the code quality.

Code quality is important, as it impacts how secure, maintainable and reliable your codebase is, particularly in this complex project where multiple teams can share the same repository. In order to improve code quality, we do code reviews and code refactoring. I have also gotten used to the “test-driven development” style, and I realize the benefits of covering code with unit tests (a small sketch follows after this list):

  • Easy to troubleshoot if the codebase is broken by a new submission
  • Easy for others to understand the code being tested
  • Reduces bugs and design flaws at an early stage
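
As a tiny, generic sketch of that habit (my own illustration, not code from the customer project), a unit test written with a framework like Jest pins down the expected behaviour before or alongside the implementation:

// sum.js – the unit under test
const sum = (a, b) => a + b;
module.exports = sum;

// sum.test.js – a Jest test that documents and protects the expected behaviour
const sumFn = require('./sum');

test('adds two numbers', () => {
  expect(sumFn(1, 2)).toBe(3);
});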

4. We have more comprehensive access to framework features and technologies.

It is always fun to learn new stuff. And I really appreciate that I have such an opportunity to learn AWS technologies by participating in this project together with awesome teammates.

Webinar

Hardware refresh webinar

Join our hardware refresh webinar on the 29th of September, 08:30 to 09:15. Learn how to reduce your infrastructure cost by 30-40% by moving to the cloud.

With a depreciation cycle of 36 months, you’re looking at a 33% replacement of servers and storage in your datacenter this year. Now is a good time to challenge the default decision to replace those servers with new ones and consider cloud instead. Here are a few reasons why:

  • You don’t have to make the capital expenditure upfront which will have a positive impact on your cashflow and your balance sheet.
  • You will lower your cost by an average of 30-40%
  • You don’t have to buy capacity to last for 36 months with low utilization the first couple of years.
  • You pay for what you use and you can scale up or down in capacity by pressing a button.
  • You are making the inevitable move to cloud sooner rather than later

Join our webinar on the 29th of September, 08:30 to 09:15. We will provide you with the tools to assess cloud vs. on-prem workloads from a financial, security and technical perspective. The webinar will be hosted in Swedish with English slides.

You can also read our blog post with a business case example of a company with 300 servers and 50TB of storage.

Please enroll here.