Webinar

Watch our Incident Automation webinar

In March, we launched a series of ideas for how companies suffering from the Covid-19 pandemic can quickly reduce cost and increase liquidity. If you missed the webinar about our third idea, reducing cost by automating your incident handling, you can watch it today.

Many companies are under tremendous financial pressure due to the COVID-19 pandemic. In early March, we sat down to figure out what we could do to help and came up with 4 ways to reduce cost and increase liquidity for a company in the short term.

You can read a summary of the cost saving series here. The summary includes links to all 4 ideas to give you a deeper insight into each one. Every idea also includes a financial business case which has two purposes:

  • Translate technology into tangible financials to motivate your CFO to support the idea.
  • Provide a business case template to reflect your specific prerequisites.

Incident Automation

Incident handling is a highly manual process in most companies. It requires 1st, 2nd and 3rd line resources in a service desk to manage error handling for applications, databases and infrastructure. Furthermore, some expert forum, or Change Advisory Board, is usually in place to work on improvements to reduce tickets and incidents. A lot of people are required just to keep the lights on.

What if you could set up monitoring alerts that automatically trigger automated processes and resolve incidents before the users even notice them and place a ticket with your service desk? Sounds like science fiction? Check out this webinar where Max Koldenius reveals how to set up incident automation using AWS Step Functions.
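To give a flavour of what this can look like in practice, here is a minimal sketch of a Step Functions state machine for incident remediation. It is not the setup from the webinar, just an illustration with made-up function names: the state machine (triggered, for example, by a monitoring alarm) calls a hypothetical remediation lambda, verifies the result, and only opens a service desk ticket if automation fails.

{
  "Comment": "Sketch: auto-remediate an incident, escalate only on failure",
  "StartAt": "Remediate",
  "States": {
    "Remediate": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:eu-north-1:123456789012:function:RestartFailedService",
      "Next": "CheckResult",
      "Catch": [ { "ErrorEquals": ["States.ALL"], "Next": "OpenTicket" } ]
    },
    "CheckResult": {
      "Type": "Choice",
      "Choices": [ { "Variable": "$.healthy", "BooleanEquals": true, "Next": "Done" } ],
      "Default": "OpenTicket"
    },
    "OpenTicket": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:eu-north-1:123456789012:function:CreateServiceDeskTicket",
      "End": true
    },
    "Done": { "Type": "Succeed" }
  }
}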

You can read the full blog post about incident automation here

You can watch the webinar here

AWS

Simply: AWS DynamoDB

My “Simply AWS” series is aimed at helping absolute beginners quickly get started with the wonderful world of AWS. This one is about DynamoDB.

What is DynamoDB?

DynamoDB is a NoSQL, fully managed, key-value database within AWS.

Why should I use it?

When it comes to data storage, selecting which technology to use is always a big decision. DynamoDB, like any other technology, is not a silver bullet, but it does offer a lot of positives if you need a document-based key-value store.

Benefits of DynamoDB

  • Fully managed: no servers to provision, patch or operate.
  • Scales automatically with your workload.
  • You pay for what you use, and there is a free tier.
  • Fast, consistent performance at virtually any scale.

How do I start?

First you’re gonna need an AWS account; follow this guide.

Setting up our first DynamoDB database

If you feel like it, you can set your region in the top right corner of the AWS console. It should default to us-east-1, but you can select something closer to you; read more about regions here.

From the AWS console, head to Services, search for DynamoDB and select the first option.

The first time you open up DynamoDB you should see a blue button with the text Create Table. Click it.

Now you’re presented with some options for creating your table. Enter myFirstTable (this can be anything) as the Table name.

Primary key

A key in a database is something used to identify items in a table, and as such it must always be unique for every item. In DynamoDB the key is built up of a Partition key and an optional Sort key.

  • Partition key: As the tooltip in the AWS console describes, the Partition key is used to partition data across hosts. Because of that, best practice is to use an attribute with a wide range of values. For now we don’t need to worry much about this; the main thing to take away is that if the Partition key is used alone, it must be unique.
  • Sort key: If the optional Sort key is included, the Partition key does not have to be unique (but the combination of Partition key and Sort key does). It allows us to sort within a partition.

Let’s continue. For this example I’m gonna say I’m creating something like a library system, so I’ll put Author as the Partition key and BookTitle as the Sort key.

Note that this is just one of many ways you could set up this type of table, and choosing a good primary key is arguably one of the most important decisions when creating a DynamoDB table. What’s good about AWS is that we can create a table, try it out, change our minds and just create a new one with ease.
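To make the key concepts concrete, here’s a sketch of two items in our hypothetical library table. Both share the Partition key Author, so they live in the same partition, while each is uniquely identified and sorted by its BookTitle:

[
  { "Author": "Douglas Adams", "BookTitle": "The Hitchhiker's Guide to the Galaxy" },
  { "Author": "Douglas Adams", "BookTitle": "The Restaurant at the End of the Universe" }
]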

Next up are the table settings: things like secondary indexes, provisioned capacity, autoscaling, encryption and such. It’s a good idea to eventually get a bit comfortable with these options, and I highly recommend looking into the on-demand read/write capacity mode, but as we just want to get going now, the default settings are fine and will not cost you anything for what we are doing today.
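If you’re curious what that on-demand mode looks like from code, here is a minimal sketch using the Node.js AWS SDK that creates a table like ours with BillingMode set to PAY_PER_REQUEST, so you pay per request instead of provisioning capacity. It’s just an illustration; in this guide we create the table in the console instead.

const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB({ region: 'us-east-1' });

const params = {
    TableName: 'myFirstTable',
    // Author is the partition (HASH) key, BookTitle the sort (RANGE) key
    KeySchema: [
        { AttributeName: 'Author', KeyType: 'HASH' },
        { AttributeName: 'BookTitle', KeyType: 'RANGE' }
    ],
    // Only key attributes are declared up front; everything else is schemaless
    AttributeDefinitions: [
        { AttributeName: 'Author', AttributeType: 'S' },
        { AttributeName: 'BookTitle', AttributeType: 'S' }
    ],
    BillingMode: 'PAY_PER_REQUEST'
};

dynamodb.createTable(params).promise()
    .then(data => console.log('Table status:', data.TableDescription.TableStatus))
    .catch(err => console.error(err));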

Hit create and wait for your table to be created!

Now you should be taken to the tables view of DynamoDB, and your newly created table should be selected. This can be a bit daunting as there are a lot of options and information, but let’s head over to the Items tab.

From here we could create an item directly from the console (feel free to try it out if you want), but I think we can do one better and set up a lambda for interacting with the table.

Creating our first item

If you’ve never created an AWS lambda before, I have written a similar guide to this one on the topic; you can find it here.

Create a lambda called DynamoDBInteracter.

Make sure to select to create a new role from a template and search for the template role Simple microservice permissions (this will allow us to perform any actions against DynamoDB).
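For reference, the permissions that template grants look roughly like the policy below. This is a sketch of the idea, not the exact template contents, so treat the action list and resource scope as assumptions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:GetItem",
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
                "dynamodb:DeleteItem",
                "dynamodb:Query",
                "dynamodb:Scan"
            ],
            "Resource": "*"
        }
    ]
}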

After creating the lambda we can edit it directly in the AWS console. Copy and paste this code:

const AWS = require('aws-sdk')
// DocumentClient lets us work with plain JS objects instead of DynamoDB's typed attribute format
const client = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
    try {
        console.log(event.action)
        console.log(event.options)

        let response;
        // Route the incoming event to the right DynamoDB call
        switch (event.action) {
            case 'put':
                response = await putItem(event.options);
                break;
            case 'get':
                response = await getItem(event.options);
                break;
            default:
                response = `Unknown action: ${event.action}`;
        }
        return response;
    } catch (e) {
        console.error(e)
        return e.message || e
    }
};


// Writes (or overwrites) a single item in the table
let putItem = async (options) => {
    const params = {
      TableName : 'myFirstTable',
      Item: {
         Author: options.author,       // partition key
         BookTitle: options.bookTitle, // sort key
         genre: options.genre          // any extra attribute is allowed, no schema required
      }
    };

    return await client.put(params).promise();
}

// Reads a single item; the full primary key (partition + sort) must be provided
let getItem = async (options) => {
    const params = {
      TableName : 'myFirstTable',
      Key: {
        Author: options.author,
        BookTitle: options.bookTitle
      }
    };

    return await client.get(params).promise();
}

Hit Save, then create a new test event like this:

{
    "action": "put",
    "options": {
        "author": "Douglas Adams",
        "bookTitle": "The Hitchhiker's Guide to the Galaxy",
        "genre": "Sci-fi"
    }
}

and run that test event.

Go back to DynamoDB and the Items tab and you should see your newly created Item!

Notice that we did not have to specify the genre attribute. That’s because DynamoDB is NoSQL: it follows no schema, and any field + value can be added to any item regardless of the other items’ composition, as long as the primary key is valid.
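To see this in action you could, for example, tweak putItem to pass through an extra bag of attributes from the caller. This is a sketch of a variation, not part of the guide’s code, and the extraAttributes field is made up for the example:

let putItem = async (options) => {
    const params = {
      TableName : 'myFirstTable',
      Item: {
         Author: options.author,
         BookTitle: options.bookTitle,
         // Spread any caller-supplied attributes into the item; only the
         // primary key (Author + BookTitle) is actually required
         ...options.extraAttributes
      }
    };

    return await client.put(params).promise();
}

With this variation, a put event whose options include "extraAttributes": { "pages": 224, "language": "en" } creates an item with attributes our first item never had.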

Retrieving our item

Now let’s try to get that item, create another test event like this.

{
    "action": "get",
    "options": {
        "author": "Douglas Adams",
        "bookTitle": "The Hitchhiker's Guide to the Galaxy"
    }
}

and run it.

You can expand the execution results and you should see your response with the full data of the item.

Congratulations!

You’ve just configured your first DynamoDB table and performed calls against it with a lambda. But don’t stop here; the possibilities with just these two AWS services are endless. My next guide in this series will cover API-Gateway and how we can connect an API to our lambda that then communicates with our database table. Stay tuned!

What’s next?

As I’m sure you understand, we’ve just begun to scratch the surface of what DynamoDB has to offer. My goal with this series of guides is to get your foot in the door with some AWS services and show that, although powerful and vast, they are still easy to get started with. To see more of what calls can be made with the DynamoDB API (such as more complicated queries, updates and scans, as well as batch writes and reads), check this out, and feel free to edit the code and play around.
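As a taste of those more advanced calls, here’s a sketch of a Query against our library table using the same DocumentClient. It fetches every book by one author, something get can’t do since get always needs the full primary key:

const AWS = require('aws-sdk');
const client = new AWS.DynamoDB.DocumentClient();

// Query returns all items that share a partition key, sorted by the sort key
let getBooksByAuthor = async (author) => {
    const params = {
        TableName: 'myFirstTable',
        KeyConditionExpression: 'Author = :author',
        ExpressionAttributeValues: { ':author': author }
    };

    const result = await client.query(params).promise();
    return result.Items;
};

// Example usage: getBooksByAuthor('Douglas Adams').then(console.log);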

I would also like to recommend this guide if you want an even more in-depth look at DynamoDB. It covers most of what I have here but in more detail, and also goes over some of the API actions mentioned above.

Contact me

Questions? Thoughts? Feedback?
Twitter: @tqfipe
Linkedin: Filip Pettersson

Webinar

Hardware refresh webinar

Join our hardware refresh webinar on the 29th of September, 08:30 to 09:15. Learn how to reduce your infrastructure cost by 30-40% by moving to the cloud.

With a depreciation cycle of 36 months, you’re looking at a 33% replacement of servers and storage in your datacenter this year. Now is a good time to challenge the default decision to replace those servers with new ones and consider cloud instead. Here are a few reasons why:

  • You don’t have to make the capital expenditure upfront, which will have a positive impact on your cashflow and your balance sheet.
  • You will lower your cost by an average of 30-40%.
  • You don’t have to buy capacity to last for 36 months, with low utilization the first couple of years.
  • You pay for what you use, and you can scale capacity up or down by pressing a button.
  • You are making the inevitable move to the cloud sooner rather than later.

Join our webinar on the 29th of September, 08:30 to 09:15. We will provide you with the tools to assess cloud vs. on-prem workloads from a financial, security and technical perspective. The webinar will be hosted in Swedish with English slides.

You can also read our blog post with a business case example for a company with 300 servers and 50TB of storage.

Please enroll here.

COVID-19

Post corona thoughts

The corona pandemic has shown that being able to adjust cost according to market demand is a core capability for a company. Serverless computing is the solution to that problem.

It’s of course too early to claim that we are through the corona pandemic and that things will go back to how they were before. We really doubt that they ever will. In just the past few months we have seen a huge change in how we work: everything from running online meetings, to a huge increase in the number of digital events, to how we collaborate.

What we saw in the market at the start of Covid-19 was of course cost cutting, freezing spend and postponing various initiatives and projects. Unfortunately, Covid-19 might be with us for a while, so is this a long-term solution? Just 6-7 months back, the market was entirely different than it is right now. We have also seen companies that have been booming during this period.

At TIQQE we reach out to 150 companies each month to get an understanding of where the market is and where it’s heading.

If we were to highlight two interesting areas, it would be the following:

High demand, lack of capacity causing downtime and lost business

When we reach out to companies that are booming at the moment, those with high demand struggle with the amount of load they need to handle. IT has challenges handling the load, which in turn causes downtime and, of course, lost business. Is the answer then to scale up the infrastructure just for this period?

Low demand, over capacity and cutting back costs

When speaking with customers who in one way or another have entered a recession, their challenge is unused capacity. When looking at cost cuts it makes sense to cut capacity back, but at the same time, how do they scale up once demand picks up again?

The benefit of serverless

One of the main benefits of serverless is exactly that: you get scalable, flexible IT that adapts over time, whether you’re in a recession or booming.

In uncertain times it’s important to take control of what you can, define what you predict the future might look like, and make sure not to make decisions that are a win in the short term but a loss in the long term.

At TIQQE this is exactly what we help our customers with: finding the right solution, one that is scalable, flexible and adaptable to change no matter what the market situation is for you.

Please feel free to reach out to us if you have questions or need to scale your business to address the higher or lower demand.

People

Are you our next Cloud Architect?

We’re looking for a Cloud Architect for our Gothenburg office. If you know the AWS tech stack and want to work at an inspiring company with great potential, this is for you.

We are growing our business on Sweden’s west coast, and even though we often work distributed, we also see the importance of being present in person to interact with our growing number of customers. Therefore we are looking for a Serverless Cloud Architect who can handle interaction with developers and DevOps teams, and who has deep knowledge of AWS infrastructure services. You will be working together with passionate people, and both develop and test infrastructure.

We believe that you already:

  • Have your home in Gothenburg or its surroundings
  • Have experience in Cloud solution architecture in general
  • Have specific knowledge in AWS Serverless Architecture and development
  • Have one or more AWS certifications
  • Are analytical and solution-oriented
  • Have a genuine interest in customers’ businesses and challenges

We wish that you:

  • Have the urge to learn more
  • Put teams over individuals
  • Are professionally driven by serverless technology and become your best version of yourself when surrounded by other ”techies”
  • Want to develop your soft skills as well as your technical skills

What to expect from us:

  • A burning love for all things serverless
  • A place where people matter
  • Courage to say “we were wrong”
  • Technical excellence
  • A startup company with great visions
  • Distributed teams
  • A place where we don’t always do what the customer tells us, but instead always do what is best for the customer

Please get in contact with us to learn more.

Sofia Sundqvist

Chief Operating Officer

sofia.sundqvist@tiqqe.com

Alicia Hed

Recruitment Assistant

alicia.hed@tiqqe.com

Customers

Welcome Ekebygruppen to TIQQE

We are proud to welcome Ekebygruppen as a new customer to TIQQE. Ekebygruppen has decided to move business critical applications to AWS.

Ekebygruppen is a group with several subsidiaries active in healthcare and social care. They provide high quality primary care and housing for young people and young adults.

Ekebygruppen has decided to move their business critical infrastructure to AWS, including all domains for the group. When looking for a supplier, Ekebygruppen wanted a partner with extensive experience in cloud security, as their data contains critical healthcare information. They also looked for a partner/friend/buddy to trust for their future cloud journey. Due to the sensitivity of the data, TIQQE will provision the services from the AWS Stockholm region.

Welcome to TIQQE, we’re looking forward to a long-term partnership.

People

Fanny Uhr just joined TIQQE

We are happy to welcome Fanny Uhr to our growing family as a developer. We asked Fanny a couple of questions about her first impressions of TIQQE.

What did you know about TIQQE before you started?

Since my brother Johannes has been working at TIQQE since the beginning, I feel like I know this company pretty well. I knew that TIQQE is a growing serverless company with great AWS knowledge. I’ve always thought that TIQQE seemed to have great values and I feel like the way everyone takes care of each other is very rare to see.

Why did you want to join TIQQE?

My brother and TIQQE inspired me to study web development. When my first internship period started this spring, I got the chance to join the TIQQE family for 7 weeks. Little did I know, I would get the chance to stay longer! The opportunity to learn new things and be a part of this team is better than I could’ve dreamed of when I started my education almost a year ago.

What was your first impression of TIQQE?

My first impression of TIQQE was how friendly everyone was, and it felt like I was cheered on by everyone from day one. All my expectations of the company were confirmed to be true.

What is your role at TIQQE?

I will be working as a SysOps Tech for Mimiro.

How has your first time been at TIQQE?

My internship at TIQQE was exciting, instructive, fun and a bit different due to COVID-19. I’ve been working from a safe distance at home, with daily contact with smart and helpful people who have done a brilliant job of answering my questions and supporting me through the project.

What are you looking forward to in the nearest future?

I want to keep learning and continue to grow both as a developer and a person. I feel excited and thankful to be a part of such a smart and passionate team.

What do you know about TIQQE now?

I know that TIQQE is a company that wants their employees and customers to succeed. Everyone is professional and they stay true to what they believe in.

Welcome Fanny and thanks for sharing!

Cloud security

The Swedish Corona App, nothing for American clouds, or…?

Some time ago a colleague told me that the reporting around the Swedish Corona App questioned Amazon Web Services (AWS) as host. Not good for an AWS Partner. Based on what I read, some high-pitched screams in that direction did exist. But what I found was at least one crucial misconception about storage, some general discomfort about the cloud and, of course, references to eSam.

My unscientific summary of what I read is that it is about costs, hasty decisions and a sense of urgency, possible disregard of the Swedish Public Procurement Act, privacy concerns due to the storage of health data and the eSam recommendations, the suitability of American cloud operators, and some implicit misconceptions and general discomfort about utilizing the cloud.

My intention is not to review the reporting in this blog post, even though I will touch on some aspects related to the suitability of using American cloud providers below as well. But I will start with the storage confusion.

Cloud service does not equal cloud storage

Primarily, I want to address an implicit assumption that many outside our industry make: that you are always forced to store your data in the cloud when you use a cloud service provider such as AWS, Microsoft Azure or Google. This is not true. Data can be stored in the cloud or somewhere else. It all depends on the service you use or provide.

When reading the reporting, I can see this misconception shining through. It is an implicit assumption we often meet in our customer dialogues as well. My guess is that it comes from the frequent use of cloud based services in our daily lives and the discussion about privacy.

Cloud storage is optional for SaaS providers. Why not for customers?

When developing a SaaS (Software-as-a-Service) offering in a cloud such as AWS, Microsoft Azure or Google Cloud, you as a developer can choose where data shall be stored. In short, it is a design decision. This opens up the possibility for a foresighted SaaS developer to give the customer a choice as well.

It provides an opportunity to differentiate the offering and give customers different data storage options. This is a do-or-die requirement in some industries where data and storage location are crucial; lacking this agility can be a business blocker.

In AWS there are several different services and solutions that can be used to provide this flexibility for both the SaaS provider and the customer.

The use of American cloud providers or not?

The other thing I want to comment on is the underlying concern about using AWS as the platform for developing the Swedish Corona app (RIP?). When reading the reporting, it seems like there are two concerns in relation to this.

  1. The fear that data will be stored on US servers.
  2. The fact that AWS is an American company and therefore subject to American laws.

Point 1: Mitigated by automatically enforcing Region Blocking to Sweden

It is possible for a SaaS provider utilizing AWS to explicitly limit both storage and processing to specific regions by using region blocking rules that are applied automatically. In AWS it is possible to limit access to, for example, the Stockholm region. Then it is guaranteed that no storage or processing of data is performed outside Sweden.
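As an illustration of the idea, here is a sketch of a Service Control Policy that denies all requests outside the Stockholm region (eu-north-1). It is a simplified example, not a production-ready policy; in practice you would typically exempt the handful of global AWS services that only operate out of specific regions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllOutsideStockholm",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": { "aws:RequestedRegion": "eu-north-1" }
            }
        }
    ]
}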

Combining this with the storage differentiation discussed above makes a strong argument for the possibility to use an American cloud provider for sensitive data processing.

Point 2: Mitigated with strong arguments before selecting cloud provider

I have always been a strong advocate for using cloud services, and I love the flexibility and freedom given by AWS. Now that is said. Again! When reading the reporting and the concern about using AWS, it is clear that the eSam recommendation to public authorities, about the risk of using cloud providers that are subject to foreign laws, comes into play. The eSam recommendation is about law interpretation, and as a non-lawyer I will not step into that area. But one thing is clear. At least for me.

Not everyone agrees with eSam and their recommendations. Both SKR and respected IT lawyers disagree with eSam about the strong guarantees needed for a Swedish authority to use non-Swedish cloud service providers. This disagreement will most likely end up in court at some point.

What to do?

It is hard to give general advice due to the legal implications. But I think a good idea is to consider starting an investigation into the suitability of using large cloud providers for a selective set of data, and to carefully document every step in the process up to a decision on which one to use. It is better to ask yourself whether the cloud is suitable for you than to claim that it is not, based on fear.

What shall I think when discussing the suitability of cloud usage?

One way is to start by reading my blog post where I argue why the question “Is The Cloud suitable for me?” is better than “Is The Cloud Secure?”. It is a discussion of cloud security from a business benefit perspective – https://tiqqe.com/is-the-cloud-secure/.

Then it might be of interest to evaluate whether a Cloud First Strategy could be something for you. What I mean by a Cloud First Strategy (CFS) is explained in my blog post – https://tiqqe.com/we-all-need-a-cfs-you-too. In the post, I argue that it is all about creating a cloud positive mindset.

COVID-19

Watch our Biztalk Replace webinar

In March, we launched a series of ideas for how companies suffering from the Covid-19 pandemic can quickly reduce cost and increase liquidity. If you missed the webinar about our second idea, reducing cost by replacing your Biztalk platform, you can watch it today.

Many companies are under tremendous financial pressure due to the COVID-19 pandemic. In early March, we sat down to figure out what we could do to help and came up with 4 ways to reduce cost and increase liquidity for a company in the short term.

You can read a summary of the cost saving series here. The summary includes links to all 4 ideas to give you a deeper insight into each one. Every idea also includes a financial business case which has two purposes:

  • Translate technology into tangible financials to motivate your CFO to support the idea.
  • Provide a business case template to reflect your specific prerequisites.

Biztalk Replace

Every organization needs to connect data between applications and databases to support their business processes. There are a lot of ways to solve the integration need, but many companies have bought an integration platform from one or more of the major product vendors in the market, such as Microsoft Biztalk, Tibco, Mulesoft or IBM Websphere. If you’re one of them, we have good news for you and your CFO.

According to Radar Group, who surveyed 200 Swedish companies a few years back, integration is a hidden cost bomb. On average, companies spend 140 000 SEK in maintenance cost per integration per year. According to the survey, a company with 300 employees has on average 50 integrations in the retail and distribution sectors, and 70 integrations in the manufacturing sector. At 50 integrations, that works out to roughly 7 MSEK per year in maintenance alone, so the cost of integration is substantial.
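Part of the idea behind replacing a heavyweight integration platform is that many integrations boil down to receive, transform, forward. As a rough illustration of how that can map onto serverless, here is a sketch of a Lambda integration that picks up an order event, reshapes it, and pushes it on to a queue for the receiving system. The event shape, field names and queue are made up for the example:

const AWS = require('aws-sdk');
const sqs = new AWS.SQS();

exports.handler = async (event) => {
    // Receive: an order event from the source system (shape assumed for the example)
    const order = JSON.parse(event.body);

    // Transform: reshape the payload to what the receiving system expects
    const message = {
        orderId: order.id,
        customer: order.customerNumber,
        lines: order.rows.map(r => ({ sku: r.articleId, qty: r.quantity }))
    };

    // Forward: push the transformed message onto the target system's queue
    await sqs.sendMessage({
        QueueUrl: process.env.TARGET_QUEUE_URL,
        MessageBody: JSON.stringify(message)
    }).promise();

    return { statusCode: 200 };
};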

You should reconsider your next Biztalk upgrade project

You can read the full blog post here

You can watch the webinar here

Webinar

Cloud optimization webinar

Join our cloud optimization webinar on the 25th of August, 08:30 to 09:15. Learn how to lower your monthly AWS bill by 40-50% by optimizing your AWS accounts.

Managing cloud infrastructure is different from managing infrastructure on-prem. It’s easy to provision new resources, but it’s equally easy to forget to decommission resources when they’re no longer needed. Furthermore, performance tuning is often not part of daily routines and is only performed when there are performance problems. Optimization is not something to perform occasionally but rather on a regular basis, to ensure a cost effective use of cloud computing.
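A classic example of a forgotten resource is the EBS volume that outlives its instance. As a small taste of what continuous optimization can look like, here is a sketch in Node.js that lists unattached volumes in a region; what to do with them is of course a judgment call, not something a script should decide:

const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'eu-north-1' });

// Volumes with status 'available' are not attached to any instance,
// but still accrue storage cost every month
ec2.describeVolumes({
    Filters: [{ Name: 'status', Values: ['available'] }]
}).promise()
    .then(data => {
        data.Volumes.forEach(v =>
            console.log(`${v.VolumeId}: ${v.Size} GiB, created ${v.CreateTime}`));
    })
    .catch(err => console.error(err));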

Join this webinar to find out how you can work with continuous optimization to lower your monthly AWS bill. You can also read this blog post, which includes a financial comparison between optimized and non-optimized AWS infrastructure.

Join our webinar on the 25th of August, 08:30 to 09:15.

Please enroll here.