Customers

We are proud to welcome Zaplox to TIQQE!


Zaplox is a market innovator of the contact-free mobile guest journey and mobile key services for the global hotel market with a total market potential of approximately 20 million hotel rooms. Their mobile key solution has already been commercially deployed for 11 years and used during more than 3.5 million guest nights.

When looking for a supplier, Zaplox wanted to find a partner with extensive experience, specialized in AWS Serverless, who would be able to support them within Infrastructure, Operations and Architecture.

They also looked for a partner they could trust on their future cloud journey: one that would be adaptable, open to sharing ideas, successes and failures, and willing to learn together.

Welcome to TIQQE, we’re looking forward to a long-term partnership.

#theTIQQEcode

Welcome Kevin, our new Business lead!

Who is Kevin?

My name is Kevin and I currently live in Stockholm. I have three citizenships (Canadian, British and Swedish) but consider myself a Canadian at heart. I grew up in a little town called Alexandria (google it😊) but have lived in many places around the world, like New York, Venezuela, Korea, Glasgow, Vancouver, Halifax and Toronto. I enjoy being in new cultures as much as I can. I have a sambo (live-in partner), a dog named Seven, and am expecting my first baby in early March. I enjoy reminding my Swedish colleagues of which country is ‘actually’ the greatest hockey hotbed on the planet (yes, it’s Canada).

What did you know about TIQQE before you started?

I knew that they were a young, fast-growing company focused on AWS. I have also worked with TIQQE people in the past who were all high performers.

Why did you want to join TIQQE?

TIQQE has a wide variety of skills, experience and cultures. Whether I am working on an IoT initiative with a Senior Architect or mentoring a ‘newbie’ in how to run an agile project, I enjoy the different types of challenges that inevitably come my way with such a dynamic group.

Getting more experience in AWS and the cloud was a big factor, and I also like the simplicity of TIQQE’s offering. Everyone knows what TIQQE does, what they are good at, and what they don’t do. This transparency with clients and aligned focus internally was compelling.

What is your role at TIQQE?

Towards customers, my skills are generally used earlier in the project lifecycle than those of most of my TIQQE teammates. Whether it’s process, policy, people or systems analysis, I analyze the as-is and identify a to-be state that addresses our clients’ needs. You could think of me as a Business Process Analyst, a Data Scientist, and/or someone wearing the traditional Project Management hat during implementation. When it’s time to develop the solution (the hard part), I hand that over to the TIQQE wizards and help ensure that the projects run smoothly.

Internally, I will be a part of the strategy team and am entrusted to mentor and coach in analysis and project management best practices.

What are you looking forward to in the nearest future?

Number 1 is for Corona to be over. From a work perspective, just to meet my team, get started on projects and learn as much as I can in AWS and from my teammates and customers.

Thank you Kevin and welcome to TIQQE!

AWS

Connect your Azure AD to AWS Single Sign-On (SSO)

In this blog post, I will provide a step-by-step guide for how to integrate your Azure AD with AWS Single Sign On (SSO). The integration will enable you to manage access to AWS accounts and applications centrally for single sign-on, and make use of automatic provisioning to reduce complexity when managing and using identities.

Organizations usually like to maintain a single identity across their range of applications and AWS cloud environments. Azure Active Directory (AD) is a common identity provider since Office 365 is widely used among companies, and it is often the hub of authentication as it is integrated with many other applications as well.

If you are using AWS Single Sign-On (SSO) you can leverage your existing identity infrastructure while enabling your users to access their AWS accounts and resources in one place. By connecting Azure AD with AWS SSO you can sign in to multiple AWS accounts and applications using your Azure AD identity, with the possibility to enable automatic synchronization of Azure AD Users/Groups into AWS SSO.

This makes perfect sense and often improves your ability to further automate user lifecycle management and access to your AWS accounts, since you might already have an identity manager connected to your HR system, such as Microsoft Identity Manager or Omada. You can also leverage your existing process for applying for access to different systems; ServiceNow or a similar solution might already be connected to Azure AD in one way or another, which could then be used for applying for access to different AWS accounts.
There are also other benefits, such as leveraging your existing MFA solution if your organization has one in place.

On to the good stuff! In this blog post I will demonstrate how you can connect your Azure AD to AWS SSO and take advantage of its capabilities.

Creating a new Azure Enterprise Application

Log in to your Azure portal and open Azure Active Directory. Under the Manage section, click Enterprise applications.

Click New application and select a Non-gallery application, give your new application an appropriate name and click Add.

Once the Azure gods have created our new application, head into the Overview page and select Set up single sign-on and choose the SAML option.

Under the SAML Signing Certificate section, click Download next to Federation Metadata XML.

Please keep this page open as we later need to upload the metadata XML file from AWS SSO.

Setup AWS SSO

Log in to the AWS Management Console and open AWS Single Sign-On; please ensure that you are in your preferred region. If you haven’t enabled AWS Single Sign-On already, you can enable it by clicking Enable AWS SSO as shown below.

Click Choose your identity source. You can also configure your custom User portal URL if you’d like but it is not required.

Select External identity provider. In the IdP SAML metadata section, upload the Azure AD Federation Metadata XML file downloaded earlier, and download the AWS SSO SAML metadata file.

In the Azure SAML page, click Upload metadata file and upload the AWS SSO SAML metadata file.

If you have configured a User portal URL earlier, you need to edit the Basic SAML Configuration section and match the Sign-on URL.

Setting up automatic provisioning

The connection between Azure AD and AWS SSO is now established, so we can proceed to enable automatic provisioning to synchronize users/groups from Azure AD to AWS SSO.

Note that you can use Azure AD groups, but not nested groups, i.e. groups inside other groups.

Head over to the Provisioning page and change the mode to Automatic. Please keep this page open as we will copy values from AWS SSO.

In the AWS SSO Settings page, click Enable automatic provisioning.

Take note of both values given in the popup.

In the Provisioning page in the Azure portal, expand the Admin Credentials section and insert the values from above. It is also recommended to add an email address for notification of failures.

SCIM endpoint > Tenant URL
Access token > Secret Token

Note that these tokens expire after 1 year and should be renewed for continuous connectivity.

Click Test Connection and it should result in a success message.

Expand the Mappings section and click Synchronize Azure Active Directory Users to customappsso.

Which attributes you want to sync over depends on your setup, but for a default setup you can remove all attributes except:
userName
active
displayName
emails
name.givenName
name.familyName

Then create a new attribute mapping from objectId to externalId.

It is important to note that you can modify the emails attribute to use userPrincipalName instead of mail, as not all users have Office 365 licenses, which leaves the mail attribute null.

In the Provisioning page, you can now set the Status to On. It is recommended to leave Scope set to Sync only assigned users and groups.
Click Save; it should take about 5 minutes for synchronization to start.

Our AWS SSO and Azure AD connection is now fully set up. When you assign Azure AD Users/Groups to the enterprise application, they will appear in AWS SSO Users/Groups within around 40 minutes.

Creation and assignments of AWS SSO Permission Sets

Using Permission Sets, we can assign permissions to synchronized Groups and Users. These permission sets will later create IAM roles in the accounts to which they are assigned.
You can create new Permission Sets based on AWS Identity and Access Management (IAM) managed policies or create your own custom policies.

To create a new Permission Set in the AWS Management console you can follow the below steps:

  1. Go to the AWS SSO management console. In the navigation pane, choose AWS accounts and then the AWS organization tab.
  2. In AWS account, choose the account that you want to create a permission set for, and then choose Assign users.
  3. In Display name, choose the user name that you want to create the permission set for, and then choose Next: Permission sets.
  4. In Select permission sets, choose Create new permission set.
  5. In Create new permission set, choose Use an existing job function policy or Create a custom permission set depending on your needs, choose Next: Details, and then select a job function or create a custom or managed policy.
  6. You can then complete the guide and click Create.

You should then see the message “We have successfully configured your AWS account. Your users can access this AWS account with the permissions you assigned”.

If you are more comfortable with the command line, you can use the create-permission-set and create-account-assignment AWS CLI commands to achieve the same thing.
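If you prefer to script it, here is a minimal sketch using the AWS SDK for JavaScript (the SDK equivalents of those CLI commands). Treat all ARNs, IDs and names below as placeholder assumptions to be replaced with values from your own AWS SSO instance:

const AWS = require('aws-sdk');
const ssoAdmin = new AWS.SSOAdmin({ region: 'eu-west-1' });

// Placeholder value - replace with your own SSO instance ARN
const instanceArn = 'arn:aws:sso:::instance/ssoins-EXAMPLE';

const createAndAssign = async () => {
    // Create a permission set and attach an AWS managed policy to it
    const { PermissionSet } = await ssoAdmin.createPermissionSet({
        InstanceArn: instanceArn,
        Name: 'ReadOnlyAccess',
        Description: 'Read-only access for synchronized Azure AD groups',
        SessionDuration: 'PT8H'
    }).promise();

    await ssoAdmin.attachManagedPolicyToPermissionSet({
        InstanceArn: instanceArn,
        PermissionSetArn: PermissionSet.PermissionSetArn,
        ManagedPolicyArn: 'arn:aws:iam::aws:policy/ReadOnlyAccess'
    }).promise();

    // Assign the permission set to a synchronized Azure AD group in a target account
    await ssoAdmin.createAccountAssignment({
        InstanceArn: instanceArn,
        PermissionSetArn: PermissionSet.PermissionSetArn,
        PrincipalType: 'GROUP',
        PrincipalId: 'azure-ad-group-guid-from-aws-sso',
        TargetType: 'AWS_ACCOUNT',
        TargetId: '111122223333'
    }).promise();
};

createAndAssign().catch(console.error);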

The preferred way, however, is to use Infrastructure as Code and keep the definitions in version control to manage and deploy them.
If you want to use CloudFormation you can use the below template as a base to get started.
https://github.com/pozeus/aws-sso-management/blob/main/template.yml

Be careful about how you deploy these AWS SSO Permission Sets and assignments, since the deployment needs to be executed in the management (master) account. You should always follow the least-privilege principle and therefore carefully plan which approach you use to deploy these Permission Sets and assignments.
If you want to automate the creation and assignment of Permission Sets further, I suggest an event-based approach where you assign Permission Sets using Lambdas, as sketched below.
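As a rough sketch of what such an event-based approach could look like (the event structure, ARNs and IDs below are assumptions you would adapt to your own setup), a Lambda triggered by an EventBridge rule for the Organizations CreateAccountResult event could assign a baseline Permission Set to every new account:

const AWS = require('aws-sdk');
const ssoAdmin = new AWS.SSOAdmin();

// Placeholder values - replace with your own SSO instance, permission set and group
const INSTANCE_ARN = 'arn:aws:sso:::instance/ssoins-EXAMPLE';
const PERMISSION_SET_ARN = 'arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE';
const GROUP_ID = 'azure-ad-group-guid-from-aws-sso';

exports.handler = async (event) => {
    // Account id of the newly created account, taken from the CreateAccountResult event
    const accountId = event.detail.serviceEventDetails.createAccountStatus.accountId;

    await ssoAdmin.createAccountAssignment({
        InstanceArn: INSTANCE_ARN,
        PermissionSetArn: PERMISSION_SET_ARN,
        PrincipalType: 'GROUP',
        PrincipalId: GROUP_ID,
        TargetType: 'AWS_ACCOUNT',
        TargetId: accountId
    }).promise();

    return { assignedAccount: accountId };
};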

Summary

In this blog post I showed how you can connect Azure AD to AWS Single Sign-On (SSO). You can now manage access to AWS accounts and applications centrally for single sign-on, and make use of automatic provisioning to reduce complexity when managing and using identities.
Azure AD can now act as a single source of truth for managing users, and users no longer need to manage additional identities and passwords to access their AWS accounts and applications.
Sign-in is done through the familiar Azure AD experience, and users can choose which accounts and roles to assume in the AWS SSO portal.

You can now also use your existing automation processes for applying for, granting and revoking access to systems.

If you have any questions or just want to get in contact with me or any of my colleagues, I’m reachable on any of the following channels.

Email: christoffer.pozeus@tiqqe.com

LinkedIn: Christoffer Pozeus


Serverless Integration

The integration landscape is changing and you are paying too much!

Serverless integration is our offering where we replace your traditional on-prem enterprise integration software with auto-scalable, fully managed, pay-for-what-you-use connectivity between your software applications on-premise and in the clouds.

Why serverless integration?

Enterprises have struggled with integration for a long time: projects set up integration dependencies as part of the project, and when the project closed down after delivery, those dependencies were left in limbo with nobody to manage them.

Enter the era of integration software, where we established integration competence centers and purchased specialized software that was trying to make the integrations easier, deliveries faster and integrations manageable.

With 15 years of experience using enterprise integration software, we can see with a bit of hindsight that the promise of integration software has failed to deliver:

  • Visibility of the cost now makes integration look like a problem in itself, instead of the cost being spread out among the projects
  • Feature-based selling of integration platforms often leaves customers with far more features than they will ever use
  • Centralization leads to more structure, yes, but the structure comes at the cost of red tape and longer lead times for implementing solutions

So the solution to this was self-service APIs, already touted by Jeff Bezos back in 2002 in his now-famous mandate which sternly forced everyone into an API-first approach. Suddenly teams can consume other teams’ data and build integrations without talking to an intermediary.

Even though that was almost 20 years ago, we still see corporations trying to adopt this way of thinking, while also trying to save the Enterprise Integration Center.

A battle of many fronts

We see the Integration Competence Center concept being attacked on many fronts:

  • The software application owners and teams are building their own dependencies directly using API driven approach
  • Infrastructure is moving to the cloud, leaving no Servers to manage, cluster and consider
  • The different building blocks (i.e. features) of the old integration platform are becoming increasingly available from the existing cloud vendors, rendering your integration software platform obsolete
  • Infrastructure is becoming code, Security Operations is becoming code.
    Why should integrations reside in proprietary formats deep inside custom software that only a few selected people have access to and know how to manage?

The way out

This is the challenge we at TIQQE have seen, and that is why we are providing integration-as-a-service in our own unique way. Knowing that a big part of the integration work lies in the details of the specifications, and that most integrations within an organization are very similar, we take a different approach.

We provide fully managed integrations and we do this using software implemented in standard languages, on a well-known cloud platform using serverless patterns.

This means the integrations are built securely, with auto-scaling from the start. It means we use standard development tools and standard programming languages that millions of developers already know.

Governance is still key!

Our value-add is not mainly in the implementation of the integrations, but rather in the management of the integrations and the standardization of how they are monitored and handled.

You get the freedom of building high-order, value-adding systems as integrations, and the standardization comes as support in terms of operational excellence, security, reliability, performance efficiency and cost optimization (yes, those are the five pillars of the AWS Well-Architected Framework).

Many of our customers have experienced their integrations as a black box, lacking an understanding of what they have and how it works. We handle this by providing our Harbor solution, where you as a customer get full transparency into the documentation, the integrations and their health.

Business Impact

  • You will save money
  • No license costs
  • No hardware costs
  • No patching costs
  • No lock-in
  • Pay for what you use
  • Adapt to change

Please feel free to reach out to Jacob Welsh and let us speak about how we can help lower your costs, increase your business agility and provide insights into your integration landscape. We will set you free from all major integration platforms such as Microsoft BizTalk, Teis, WebMethod and many others.

AWS

Continuous improvement – server access example

When you work with any of the big public cloud providers, one thing is for sure – there will be many changes and improvements to the services that they provide to you. Some changes may perhaps change the way you would architect and design a solution entirely, while others may bring smaller, but still useful improvements.

This post is a story of one of these smaller improvements. It is pretty technical, but the gist of it is that with a mindset of continuous improvement we can find nuggets that make life easier and better; it does not have to be completely new architectures and solutions.

A cloud journey

In 2012, before TIQQE existed, and when some of us at TIQQE started our journey in the public cloud, we created virtual machines in AWS to run the different solutions we built. It was a similar set-up to what we had used in on-premises data centres, and we used the EC2 service in AWS.

Using VPC (Virtual Private Cloud) we could set up different solutions isolated from each other. A regularly used pattern back then was a single account per customer solution, with separate VPCs for test and production environments. These included both private and public (DMZ) subnets.

To log in to a server (required in most cases; there was not so much immutable infrastructure back then) you needed credentials for the VPN server solution, with appropriate permissions set up. To log in to an actual server, you also needed a private SSH key, such as the one you select or create for the ec2-user when you create the virtual machine.

While this worked, it did provide some challenges in terms of security and management: which persons or roles should be able to connect to the VPNs, and which VPCs should they be able to access? Of those users and roles, who should be able to SSH into a server, and into which servers?

There was a centrally managed secrets store solution for the SSH keys for the ec2-user user and different keys for different environments and purposes, but this was a challenge to maintain.

Serverless and Systems Manager

The serverless trend, which kind of started with AWS Lambda, removed some of these headaches since there was no server access or logins to consider – at least not where solution components can be AWS Lambda implementations. That was great – and still is!
Going serverless can provide other challenges, though, and it is not the answer to all problems either. There is a lot to say about the benefits of serverless solutions. However, this story focuses on when you still need to use servers.

AWS has another service, called Systems Manager, which is essentially a collection of tools and services to manage a fleet of servers. That service has steadily improved over the years, and a few years back it introduced a new feature called Session Manager. This feature allows a user to log in to a server via the AWS Console or via the AWS CLI – no SSH keys to maintain and no ports to configure for SSH access. It also removes the need for a VPN solution for people who need direct access to a server for some reason.
Access control uses AWS Identity and Access Management (IAM) – no separate credential solution.

Some other major cloud providers already had similar features, so in this regard, AWS was doing some catch-up. It is good that they did!

A new solution to an old problem

For a solution that requires servers, there is a new access pattern to use. No VPN, no bastion hosts. Those who should have direct access to a server can now log in directly via the AWS Console in a browser tab. No VPN connections, no SSH keys to manage – simply connect and log in to the server via the browser. That is, assuming you have the IAM permissions to do so!


For those cases where the browser solution is not good enough, it is still possible to perform an SSH login from a local machine. In this case, the AWS CLI can be used to make a connection to a server through Systems Manager Session Manager. The user can have their own SSH key, which can be authorized temporarily for accessing a specific server.

Since it is possible to use regular SSH software locally, it is also possible to do port forwarding, for example, so that the user can access some other resource (e.g. a database) that is only accessible via that server. AWS Systems Manager also allows for an audit trail of the access activities.

Overall, I believe this approach is useful and helpful for situations where we need direct server access. The key here, though, is that with a mindset of continuous improvement we can pick up ways to do better, both big and small.

AWS

AWS re:Invent Online 2020

Usually this time of the year, we at TIQQE are getting prepared for re:Invent: traveling to Las Vegas and having our yearly “re:Invent comes to you” live streamed from our office in Örebro together with our friends, customers and employees.

This year will of course be a little different, but there is still the possibility to take part online!

You are well on your way to the best few weeks of the year for cloud. Make sure to join AWS re:Invent and learn about the latest trends from AWS, its customers and partners, followed by many excellent keynotes, breakout sessions and tracks, and not to forget all the possibilities to deepen your knowledge with training and certifications.

So, whether you are just getting started on the cloud or are an advanced user, come and learn something new at the AWS re:Invent Online 2020.

Make sure to register via the link below and secure your place at re:Invent 2020!

https://reinvent.awsevents.com/

Want to get started with AWS? At TIQQE, we have loads of experience and are an Advanced Partner to AWS. Contact us – we’re here to help.

Machine Learning

TIQQE enters the world of AI

In the past years we have seen a huge rise of AI/ML companies across the market. Artificial intelligence and machine learning are a part of our everyday lives and will be for the foreseeable future.

This is an area which TIQQE has decided to invest heavily in, both to meet the needs of our customers and also to extend our offering to the market.

On the first of September, Torbjörn Stavenek joined our team at TIQQE. Torbjörn is an AI expert and will lead our investment in the AI domain.

AI has already started to grow within TIQQE and we have several customers in different market segments.

One of our strategic partners is Neurolearn. Neurolearn is a company based at the AI Innovation Hub close to Örebro Universitet, and they are at the forefront of AI research. Together we combine our strengths within AI and cloud services. One example is our joint collaboration in supplying an AI solution to the start-up Beescanning, which has won several awards thanks to its computer-vision-based AI solution for fighting the Varroa mite. In the coming weeks, we will share different customer cases where we have helped companies across the Nordics with AI solutions.

Since AI is applicable in so many different areas, our go-to-market approach is simple: we will never try to sell you an AI solution for a problem you were not aware you had. Instead, we will ask you which challenges you are facing and what goals you have, and if there is a match, we will find a way forward together on how to solve it.

If you are interested in learning more about our AI investment then please don’t hesitate to reach out.

Webinar

Watch our Incident Automation webinar

In March, we launched a series of ideas for how companies suffering from the Covid-19 pandemic can quickly reduce cost and increase liquidity. If you missed the webinar around our third idea, reducing cost by automating your incident handling, you can watch it today.

Many companies are under tremendous financial pressure due to the COVID-19 virus. In early March, we sat down to figure out what we could do to help and came up with 4 ways to reduce cost and increase liquidity for a company in the short term.

You can read a summary of the cost saving series here. The summary includes links to all 4 ideas to give you a deeper insight into each of them. Every idea also includes a financial business case which has two purposes:

  • Translate technology into tangible financials to motivate your CFO to support the idea.
  • Provide a business case template to reflect your specific prerequisites.

Incident Automation

Incident handling is a highly manual process in most companies. It requires 1st, 2nd and 3rd line resources in a service desk to manage error handling of applications, databases and infrastructure. Furthermore, some expert forum, or Change Advisory Board, is usually in place to work on improvements to reduce tickets and incidents. A lot of people are required just to keep the lights on.

What if you could set up monitoring alerts that automatically trigger automated processes and resolve incidents before the users even notice them and file a ticket with your service desk? Sounds like science fiction? Check out this webinar where Max Koldenius reveals how to set up incident automation using AWS Step Functions.

You can read the full blog post about incident automation here

You can watch the webinar here

AWS

Simply: AWS DynamoDB

My “Simply AWS” series is aimed at absolute beginners who want to quickly get started with the wonderful world of AWS. This one is about DynamoDB.

What is DynamoDB?

DynamoDB is a NoSQL, fully managed, key-value database within AWS.

Why should I use it?

When it comes to data storage, selecting which technology to use is always a big decision. DynamoDB is, like any other technology, not a silver bullet, but it does offer a lot of positives if you need a document-based key-value store.

Benefits of DynamoDB

How do I start?

First you’re gonna need an AWS account; follow this guide.

Setting up our first DynamoDB database

If you feel like it you can set your region in the top right corner of the AWS console. It should default to us-east-1, but you can select something closer to you; read more about regions here.

From the AWS console, head to Services and search for DynamoDB, select the first option.

The first time you open up DynamoDB you should see a blue button with the text Create Table, click it.

Now you’re presented with some options for creating your table, enter myFirstTable (this can be anything) in the Table name.

Primary key

A key in a database is used to identify items in the table, and as such it must always be unique for every item. In DynamoDB the key is built up of a Partition key and an optional Sort key.

  • Partition key: As the tooltip in the AWS console describes, the Partition key is used to partition data across hosts. Because of that, best practice is to use an attribute that has a wide range of values. For now we don’t need to worry much about this; the main thing to take away is that if the Partition key is used alone, it must be unique.
  • Sort key: If the optional Sort key is included, the Partition key does not have to be unique (but the combination of Partition key and Sort key does). It allows us to sort within a partition.

Let’s continue. For this example I’m creating something like a library system, so I’ll put Author as the Partition key and BookTitle as the Sort key.

Note that this is just one of many ways you could set up this type of table, and choosing a good primary key is arguably one of the most important decisions when creating a DynamoDB table. What’s nice about AWS is that we can create a table, try it out, change our minds and just create a new one with ease.
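To make the Partition key/Sort key idea concrete, here are two hypothetical items for that library table. They share the same Partition key (Author) but have different Sort keys (BookTitle), so each combination is still unique:

{ "Author": "Douglas Adams", "BookTitle": "The Hitchhiker's Guide to the Galaxy" }
{ "Author": "Douglas Adams", "BookTitle": "The Restaurant at the End of the Universe" }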

Next up are table settings; these are things like secondary indexes, provisioned capacity, autoscaling, encryption and such. It’s a good idea to eventually get a bit comfortable with these options, and I highly recommend looking into on-demand read/write capacity mode, but as we just want to get going now, the default settings are fine and will not cost you anything for what we are doing today.

Hit create and wait for your table to be created!

Now you should be taken to the tables view of DynamoDB, and your newly created table should be selected. This can be a bit daunting as there are a lot of options and information, but let’s head over to the Items tab.

From here we could create an Item directly from the console (feel free to try it out if you want), but I think we can do one better and set up a lambda for interacting with the table.

Creating our first item

If you’ve never created an AWS lambda before, I have written a similar guide on that topic; you can find it here.

Create a lambda called DynamoDBInteracter

Make sure to select to create a new role from a template and search for the template role Simple microservice permissions (this will allow us to perform any actions against DynamoDB).

After creating the lambda we can edit it directly in the AWS console; copy and paste this code.

const AWS = require('aws-sdk')
// DocumentClient lets us work with plain JavaScript objects instead of DynamoDB's attribute types
const client = new AWS.DynamoDB.DocumentClient();
// Entry point: routes the incoming test event to a put or a get against our table
exports.handler = async (event) => {
    try {
        console.log(event.action)
        console.log(event.options)

        let response;
        switch (event.action) {
            case 'put':
                response = await putItem(event.options);
                break;
            case 'get':
                response = await getItem(event.options);
                break;
        }
        return response;
    } catch (e) {
        console.error(e)
        return e.message || e
    }
};


// Writes one item; Author (partition key) + BookTitle (sort key) form the primary key
let putItem = async (options) => {
    var params = {
      TableName : 'myFirstTable',
      Item: {
         Author: options.author,
         BookTitle: options.bookTitle,
         genre: options.genre
      }
    };

    return await client.put(params).promise();
}

// Reads a single item by specifying its full primary key
let getItem = async (options) => {
    var params = {
      TableName : 'myFirstTable',
      Key: {
        Author: options.author,
        BookTitle: options.bookTitle
      }
    };


    return await client.get(params).promise();
}

Hit Save, then create a new test event like this:

{
    "action": "put",
    "options": {
        "author": "Douglas Adams",
        "bookTitle": "The Hitchhiker's Guide to the Galaxy",
        "genre": "Sci-fi"
    }
}

and run that test event.

Go back to DynamoDB and the Items tab and you should see your newly created Item!

Notice that we did not have to specify the genre attribute. That is because DynamoDB is NoSQL: it follows no schema, and any field and value can be added to any item regardless of the other items’ composition, as long as the primary key is valid.
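As a small illustration (a hypothetical item, using the same DocumentClient and table as above), a put like the one below would succeed even though it introduces attributes that our first item does not have:

let putItemWithExtras = async () => {
    const params = {
        TableName: 'myFirstTable',
        Item: {
            Author: 'Douglas Adams',                                // partition key - required
            BookTitle: 'Dirk Gently\'s Holistic Detective Agency',  // sort key - required
            firstPublished: 1987,                                   // new attribute, no schema change needed
            inStock: true                                           // another attribute only this item has
        }
    };
    return await client.put(params).promise();
}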

Retrieving our item

Now let’s try to get that item. Create another test event like this:

{
    "action": "get",
    "options": {
         "author": "Douglas Adams",
        "bookTitle": "The Hitchhiker's Guide to the Galaxy"
    }
}

and run it.

You can expand the execution results and you should see your response with the full data of the item.

Congratulations!

You’ve just configured your first DynamoDB table and performed calls against it with a lambda. But don’t stop here – the possibilities with just these two AWS services are endless. My next guide in this series will cover API Gateway and how we can connect an API to our lambda that then communicates with our database table. Stay tuned!

What’s next?

As I’m sure you understand, we’ve just begun to scratch the surface of what DynamoDB has to offer. My goal with this series of guides is to get your foot in the door with some AWS services and show that, although powerful and vast, they are still easy to get started with. To see more of what calls can be made with the DynamoDB API (such as more complicated queries, updates and scans, as well as batch writes and reads), check this out, and feel free to edit the code and play around.
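As a taste of what those calls look like, here is a minimal, hypothetical query using the same DocumentClient as before. It fetches every item in myFirstTable that shares a given partition key, i.e. every book we stored for one author:

let queryByAuthor = async (author) => {
    const params = {
        TableName: 'myFirstTable',
        // Query matches all items with the given partition key (Author)
        KeyConditionExpression: 'Author = :author',
        ExpressionAttributeValues: { ':author': author }
    };
    return await client.query(params).promise();
}

// Example usage: queryByAuthor('Douglas Adams') returns every book we stored for that author.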

I would also like to recommend this guide if you want an even more in-depth look at DynamoDB. It covers most of what I have here, but in more detail, and it also goes over some of the API actions mentioned above.

Contact me

Questions? Thoughts? Feedback?
Twitter: @tqfipe
Linkedin: Filip Pettersson