First, there is a natural asymmetry between provisioning and decommissioning resources. We always know when we need something, but we don’t always remember when we no longer need it. Because of this, unused resources tend to accumulate. This principle applies at the application portfolio level, in the form of an inflated application count, as well as at the infrastructure level, in the form of unused servers, storage, and other components. We’re not used to actively decommissioning resources because in the fixed-cost model the payoff isn’t as significant, but in the consumption-based model actively decommissioning resources is vital.
Second, performance tuning is typically only performed in response to performance problems. When too few resources are allocated, we feel it in the form of poor application performance, so we add resources, such as upsizing EC2 instances and adding Provisioned IOPS to EBS storage volumes. But when too many resources are allocated, it is essentially invisible: perfectly allocated resources and over-allocated resources feel identical in the context of application performance. Performance and cost are directly correlated in a consumption-based model, so we should be looking for opportunities to trim excess performance headroom in order to reduce cost. In the fixed-cost model investments in performance are a sunk cost, but in the consumption-based model the costs are recoverable.
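As an illustration, here is a minimal sketch that makes over-allocation visible: it pulls two weeks of average CPU utilization for a single EC2 instance from CloudWatch. It assumes boto3 with credentials and region configured in the environment; the instance ID and the 10% threshold are hypothetical placeholders.

```python
# Sketch: average CPU utilization for one EC2 instance over the last 14 days.
# A consistently low average suggests the instance is over-allocated.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical instance ID

now = datetime.now(timezone.utc)
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    StartTime=now - timedelta(days=14),
    EndTime=now,
    Period=3600,            # one datapoint per hour
    Statistics=["Average"],
)

datapoints = response["Datapoints"]
if datapoints:
    avg_cpu = sum(p["Average"] for p in datapoints) / len(datapoints)
    print(f"{INSTANCE_ID}: average CPU over 14 days = {avg_cpu:.1f}%")
    if avg_cpu < 10:  # arbitrary example threshold
        print("Consistently low utilization: candidate for downsizing")
else:
    print("No datapoints returned; check the instance ID and region")
```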
Continuous optimization is an iterative process where we implement a set of simple, high-impact cost reduction methods across all applications, and then measure and report the cost savings results. The process is then repeated on a regular cadence. Below are two essential tenets of continuous optimization.
Cost optimization is not a project, it’s a way of life.
We are never finished with continuous optimization. It is integrated into our existing operating procedures and we work to improve the process every cycle. The process is designed to be low-cost and low-overhead, and within those constraints it is designed to find out exactly what level of optimization is possible. “How inexpensively can we run each application?” is the question to be answered.
Focus on big impact/low effort.
Each optimization idea should be ranked by its impact/effort ratio, and ideas should be implemented starting from the top of the list, progressing downward until the effort exceeds the impact. Different organizations will draw this line in different places, and it can shift over time to suit business priorities. Below are some examples of ideas I’ve implemented, as a starting point.
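To make the ranking concrete, here is a minimal sketch of the prioritization in Python. The ideas and their impact/effort scores are made-up placeholders; any consistent relative scale a team agrees on works.

```python
# Sketch: rank optimization ideas by impact/effort and stop at the cutoff.
ideas = [
    {"name": "Delete unattached EBS volumes", "impact": 8, "effort": 1},
    {"name": "Downsize over-provisioned instances", "impact": 6, "effort": 3},
    {"name": "Re-architect batch jobs as serverless", "impact": 7, "effort": 9},
]

# Rank by impact/effort ratio, highest first.
ideas.sort(key=lambda idea: idea["impact"] / idea["effort"], reverse=True)

# Work down the list and stop once effort exceeds impact.
for idea in ideas:
    if idea["effort"] > idea["impact"]:
        break
    ratio = idea["impact"] / idea["effort"]
    print(f"Do: {idea['name']} (impact/effort = {ratio:.1f})")
```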
Here are three categories of optimization.
Category 1: Remove
These are the easiest ideas, and they typically produce the most cost savings: resources that are no longer needed are simply deleted.
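For example, unattached EBS volumes are a classic case of quiet accumulation. Here is a minimal sketch of a “Remove” sweep, assuming boto3 with credentials and region configured in the environment; it only lists candidates, and the list should be reviewed before anything is deleted.

```python
# Sketch: list EBS volumes in the "available" state, i.e. not attached to
# any instance. These are typical decommissioning candidates.
import boto3

ec2 = boto3.client("ec2")

paginator = ec2.get_paginator("describe_volumes")
pages = paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}])

for page in pages:
    for volume in page["Volumes"]:
        print(
            f"Unattached volume {volume['VolumeId']}: "
            f"{volume['Size']} GiB, created {volume['CreateTime']:%Y-%m-%d}"
        )
```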
Category 2: Resize
Everything that can’t be deleted should be evaluated to ensure it isn’t over-provisioned.
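One way to do this at scale is AWS Compute Optimizer. The minimal sketch below assumes boto3, configured credentials and region, and that Compute Optimizer has been enabled for the account (it is opt-in); it prints the service’s finding per EC2 instance, and anything reported as over-provisioned is a resize candidate. Treat it as a sketch rather than production code.

```python
# Sketch: print Compute Optimizer findings for EC2 instances.
import boto3

optimizer = boto3.client("compute-optimizer")

response = optimizer.get_ec2_instance_recommendations()
for rec in response.get("instanceRecommendations", []):
    options = rec.get("recommendationOptions", [])
    suggestion = options[0].get("instanceType", "n/a") if options else "n/a"
    print(
        f"{rec.get('instanceArn')}: "
        f"current={rec.get('currentInstanceType')}, "
        f"finding={rec.get('finding')}, "
        f"suggested={suggestion}"
    )
```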
Category 3: Refactor
This category should be revisited less frequently, as it takes more effort and is less likely to produce results after the first pass. However, the first pass will likely produce significant results, so it should be done at least once; after that, a quarterly or annual review is a reasonable cadence. Look at each application and ensure that the architecture is as efficient as possible. We recommend performing an AWS Well-Architected review.