If you have been following the hot topics in the IT world, you will have noticed a change. While previous years were all about Cloud, Blockchain and Quantum, a new topic is grabbing the headlines and captivating IT decision makers – making their IT landscape more sustainable. For many, the biggest focus is on reducing the carbon footprint of their IT operations or even becoming carbon neutral. In IT, this footprint is largely determined by the electricity used to operate applications, whether in the Cloud or in on-premises datacenters.
In some regions this is a ridiculously easy task: switch the energy supply of your datacenters to green electricity, such as solar, wind or hydro power, and you're done. For many regions, however, and particularly for heavily industrialized ones, this is still a pipe dream. Why? Because their energy mix remains heavily dependent on coal, gas, or oil.
While not the only factor, the energy consumption of business applications over their lifecycle is a significant driver of overall datacenter energy consumption. That's because load both determines immediate energy consumption and drives the addition of resources, such as CPUs, storage, or network components, when demand increases.
Zombies and inefficient implementation
When analyzing applications running in a datacenter or in the Cloud, two sets of applications are of key interest: applications that hog resources while not being used in any business process at all (i.e. zombies), and applications that fulfill a purpose but have been implemented so inefficiently that they waste a substantial amount of energy and thus cause unnecessary carbon dioxide emissions.
In this post, I will focus on the latter group, because they are the more challenging to identify and analyze. The sources of inefficiency are manifold, ranging from sub-par coding to flaws built into the underlying design or even architecture of the application, so there is no single pattern to look out for. Additionally, real-world applications typically contain a mind-boggling amount of inefficient code, and for the vast majority of it, those inefficiencies do no harm. We are interested in the few high-profile inefficiencies that heavily impact an application's overall carbon footprint.
3 steps to identifying carbon-hungry applications
In order to find these, we follow a three-step approach when working with our clients to achieve key ‘green quality’ objectives:
Step 1: We create a baseline of typical application operation to understand which parts of the application are used at what times and how often. Pre-existing monitoring data can be leveraged here, as can assets such as existing automated regression test suites.
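To make step 1 concrete, here is a minimal sketch of deriving a usage baseline from existing monitoring data. The log format, endpoint names and class name are assumptions for illustration, not part of any specific tool:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical sketch: derive a usage baseline from pre-existing monitoring
// data, here modeled as simplified access-log lines ("<timestamp> <endpoint>").
public class UsageBaseline {

    // Count how often each application endpoint is invoked.
    static Map<String, Long> countCalls(List<String> logLines) {
        return logLines.stream()
                .map(line -> line.split(" ")[1]) // assumed format: timestamp, then endpoint
                .collect(Collectors.groupingBy(endpoint -> endpoint, Collectors.counting()));
    }

    public static void main(String[] args) {
        Map<String, Long> baseline = countCalls(List.of(
                "2024-05-01T10:00 /orders",
                "2024-05-01T10:01 /orders",
                "2024-05-01T10:02 /reports"
        ));
        System.out.println(baseline); // e.g. {/orders=2, /reports=1}
    }
}
```

In practice this aggregation would come from an APM or monitoring platform rather than raw logs, but the output is the same: a frequency profile of which parts of the application do the work.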
Step 2: The application is analyzed using specialized tools to gain transparency into which parts contain specific kinds of inefficient implementation. This step depends heavily on the technologies used and will differ widely between, for example, .NET, Java and SAP. A very basic example of such an inefficiency is a string of characters being constructed wastefully, for instance by copying some area of memory into an ever-larger buffer until the string is complete.
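The string-building inefficiency described above can be sketched in Java; the class and method names are illustrative only:

```java
// Hypothetical illustration of the string-building inefficiency described above.
public class StringBuildingDemo {

    // Inefficient: each '+=' copies the entire existing string into a new,
    // larger buffer, giving roughly O(n^2) work and memory churn overall.
    static String joinNaive(String[] parts) {
        String result = "";
        for (String part : parts) {
            result += part;
        }
        return result;
    }

    // Efficient: StringBuilder grows its internal buffer with amortized
    // reallocation, so the total work stays close to O(n).
    static String joinBuffered(String[] parts) {
        StringBuilder sb = new StringBuilder();
        for (String part : parts) {
            sb.append(part);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String[] parts = {"a", "b", "c"};
        // Both produce "abc"; only the amount of copying differs.
        System.out.println(joinNaive(parts).equals(joinBuffered(parts)));
    }
}
```

For three short strings the difference is negligible, which is exactly the point made below: the cost only matters once the code runs at scale.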
Again, such an inefficiency may be irrelevant if the code runs only once a day. If it runs thousands of times a second, however, it becomes a prime candidate for optimization. At this point we combine the available data about application usage with the identified inefficiencies, with the result being a prioritized list of potential optimizations.
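The prioritization step can be sketched as follows. The findings, call counts and per-call energy figures are invented for illustration; real numbers would come from the profiling and analysis tools:

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch: rank inefficiency findings by estimated energy impact,
// combining usage frequency (step 1) with per-call cost (step 2).
public class HotspotRanking {

    record Finding(String location, long callsPerDay, double millijoulesPerCall) {
        double dailyImpact() { return callsPerDay * millijoulesPerCall; }
    }

    // Highest estimated impact first: these are the prime optimization candidates.
    static List<Finding> prioritize(List<Finding> findings) {
        return findings.stream()
                .sorted(Comparator.comparingDouble(Finding::dailyImpact).reversed())
                .toList();
    }

    public static void main(String[] args) {
        List<Finding> ranked = prioritize(List.of(
                new Finding("ReportExporter.buildCsv", 10, 500.0),      // costly but rare
                new Finding("OrderService.formatId", 5_000_000, 0.02)   // cheap but hot
        ));
        // The hot-path finding ranks first despite its tiny per-call cost.
        ranked.forEach(f -> System.out.printf("%s -> %.0f mJ/day%n", f.location(), f.dailyImpact()));
    }
}
```

Note how the frequently executed method dominates the ranking even though each individual call is cheap, which mirrors the reasoning above.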
Step 3: In the third and final step, the prioritized items need to be refactored or optimized, tested, and deployed into production to make a significant change to the carbon footprint of the application.
Take these three steps for your top 10 high-workload applications and you can achieve a measurable improvement in the overall carbon footprint of your IT operations.
Find out more
Visit our webpage to find out more about Sogeti’s Quality Engineering for Sustainable IT
About Sven Euteneuer
Sven Euteneuer is the Portfolio Director of Sogeti Germany. After attaining a degree in Computer Science at the University of Bonn and spending 12 years in a variety of roles in software development, Sven specialized in Quality Engineering and managed the DACH Quality Engineering Unit of a leading provider of quality and testing services. At Sogeti Germany he is now responsible for transforming and innovating the service and solution portfolio. His main interests lie in Cybersecurity, quality in the IoT and OT spaces, and the impact of AI and machine learning on quality assurance. He is author or co-author of several publications in the quality assurance, quality engineering and testing space.
More on Sven Euteneuer.