Green computing sounds like just another buzz phrase cooked up to grab headlines. But it’s not. It’s the future of the electronics industry. It has to be, if we want to meet global emissions targets and stop global warming in its tracks.
It may be less obvious than the number of flights you take or whether your car is electric, but every digital action you take has a carbon cost. Every time you stream a movie, every song you ask your digital assistant to play, your carbon footprint increases. Every device you own has a carbon footprint that starts with the greenhouse gases (GHGs) created during manufacture and increases every time the chips inside fire up to execute a command.
In fact, every aspect of modern digital life—from chips to devices to data centers—carries a carbon cost. And if we want to avoid catastrophic climate change, we need to do everything we can to reduce that.
As demand for cloud services and digital technologies grows, reducing their environmental impact is critical. How? By shifting to a green computing mindset to reduce energy and drive down carbon emissions.
How do we get to green computing?
If we start to consider where and how we can make changes to reduce carbon impact across the technology stack, the possibilities are almost endless. For Arm, however, hardware design has a key role to play.
For hardware designers, energy efficiency has long been an imperative. Delivering more and more compute performance, while simultaneously improving energy efficiency, is what our partners ask of us every day. But now we need to take that further. To have a substantial long-term impact, technology roadmaps can no longer focus on performance alone. Decarbonization must become a priority.
So how do we achieve that in the context of hardware design? And what are the challenges – both real and perceived – to decarbonization and the journey to green computing?
The power and energy paradox
The first – and most important – step is measuring energy and power accurately and consistently. (While these terms are sometimes used interchangeably, energy and power are not the same thing. Energy is the capacity to do work; power is the rate at which energy is transferred or consumed.)
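To make the distinction concrete, here is a minimal sketch with invented numbers. It shows why a lower-power design is not automatically a lower-energy one: energy is power integrated over time, so runtime matters just as much as wattage.

```python
# Hedged sketch with hypothetical figures: power is the rate of
# energy use (watts = joules per second); energy is power
# accumulated over time (joules).

def energy_joules(power_watts: float, seconds: float) -> float:
    """Energy consumed at a constant power over a time interval."""
    return power_watts * seconds

# A "fast but hungry" design: 2 W for 3 s to finish a task.
fast = energy_joules(2.0, 3.0)    # 6 J
# A "slow but frugal" design: 1 W for 5 s for the same task.
frugal = energy_joules(1.0, 5.0)  # 5 J

# Here the lower-power design also uses less energy, but that is
# not guaranteed: halving power while tripling runtime would not.
print(fast, frugal)  # 6.0 5.0
```

This is why optimizing for peak power alone can mislead: a design that sips power but takes far longer to finish a task may consume more energy overall.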
Measurement is challenging, however, because energy and power data are only truly accurate towards the end of the design process. Yet, paradoxically, we need this information at the start of the process to inform the design and ensure that it's energy-efficient. It's a chicken-and-egg situation.
One approach is to use 'good enough' data as an early indicator. While these data may lack the accuracy of figures available later in the development cycle, they allow us to identify trends and to see whether our design tweaks are improving energy efficiency or not. This is far preferable to pure guesswork, but it does, of course, have its limitations.
Essentially, in the context of hardware design, measuring power and energy is always a trade-off between accuracy and time to result. Getting true value from early measurements means accepting the limitations of the data and understanding what’s NOT being measured as well as what is.
At this early stage, for example, optimization is necessarily limited to the code describing the hardware. Physical design aspects, such as transistor arrangement, can't be taken into account until further down the line. And comparative – rather than absolute – data may be the best way to measure progress: while an ultra-precise measurement may prove elusive, measuring energy and power relative to a previous iteration is achievable.
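The comparative approach can be sketched in a few lines. The figures below are invented; the point is that an early-stage power model's systematic error largely cancels when the same model is used for both iterations, so the relative change is trustworthy even when the absolute numbers are not.

```python
# Illustrative only: compare energy estimates between design
# iterations rather than trusting their absolute values.

def relative_change(previous: float, current: float) -> float:
    """Fractional change versus the previous iteration."""
    return (current - previous) / previous

baseline_estimate = 120.0  # arbitrary units from an early power model
tweaked_estimate = 111.0   # same model, after a design tweak

delta = relative_change(baseline_estimate, tweaked_estimate)
print(f"{delta:+.1%}")  # -7.5%
```

A negative delta tells the designer the tweak moved in the right direction, which is often all that's needed to steer early iterations.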
The importance of workloads
Hardware is typically optimized for very specific workloads. Measuring the energy consumed while running those workloads gives an indication of what can be expected from the final product. Being task-specific allows for a higher level of fine-tuning, which helps achieve optimal energy efficiency – the bedrock of green computing.
Workloads may be industry benchmarks (such as a frame of a specific video game) or real-world use cases (like video encoding or compression of a file). They may also be another carefully defined task chosen to reflect the stresses the hardware will be under when in use, or to test what happens when it’s pushed beyond its typical limits.
Selecting the right workloads to optimize is important – but it’s not always easy. Both industry benchmarks and real-world use cases are becoming increasingly data-heavy. And, put simply, complex workloads require more compute to be simulated on a prototype design. It can be done, but to save runtime and compute resources, it’s prudent to optimize the optimization process itself. By using ‘synthetic’ or ‘micro’ benchmarks – that is, using a subset of the workload data to replicate the behavior of the whole – we can achieve representative results efficiently.
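The micro-benchmark idea can be illustrated with a toy sketch. Everything here is hypothetical: the 'activity trace' is invented, and real flows use far richer similarity metrics than a simple mean. The principle, though, is the same: pick a short slice of the workload whose behavior tracks the whole, so the design can be simulated in a fraction of the time.

```python
# Hedged toy example of selecting a 'micro' benchmark: choose the
# short slice of a workload trace whose average activity is closest
# to the full trace's average. Numbers are invented for illustration.

full_trace = [3, 7, 2, 9, 4, 6, 5, 8, 1, 5]  # per-interval activity
window = 3                                    # slice length to simulate

def best_slice(trace, width):
    """Return the contiguous slice whose mean best matches the whole."""
    target = sum(trace) / len(trace)
    slices = [trace[i:i + width] for i in range(len(trace) - width + 1)]
    return min(slices, key=lambda s: abs(sum(s) / width - target))

print(best_slice(full_trace, window))  # [2, 9, 4]
```

Simulating just that slice costs a fraction of the compute of the full trace while still exercising representative behavior, which is exactly the runtime saving the text describes.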
Spending time to carefully define workloads can reap benefits later. A representative set of test workloads will ensure that the final design meets expectations. In this case, that means maximizing performance per watt, improving battery life and, thus, the user experience.
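Performance per watt, the metric mentioned above, is simply useful work delivered divided by power consumed. A minimal sketch, with made-up configuration names and figures, shows why the fastest option is not always the most efficient one:

```python
# Illustrative perf-per-watt comparison across two hypothetical
# design configurations. All figures are invented.

configs = {
    "config_a": {"ops_per_sec": 4.0e9, "watts": 2.5},
    "config_b": {"ops_per_sec": 3.2e9, "watts": 1.6},
}

def perf_per_watt(c: dict) -> float:
    """Operations delivered per joule (ops/s divided by watts)."""
    return c["ops_per_sec"] / c["watts"]

best = max(configs, key=lambda name: perf_per_watt(configs[name]))
print(best)  # config_b
```

Here config_a is faster in absolute terms, but config_b does more work per watt – the quantity that ultimately determines battery life and carbon cost.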
Another area that benefits from careful attention is the toolchain, and the methodology that supports it. Reliable, user-friendly tools and methodologies can help engineers quickly measure, analyze, and optimize designs for energy efficiency. Sounds great, doesn’t it?
Currently, though, notions of green computing tend to be an add-on, or a ‘nice to have’, rather than a priority. If we’re to change this, the holistic adoption of an energy-first approach across the design lifecycle is imperative. Coordinating the multiple teams that ‘own’ various aspects of the toolchain can be challenging, however. And interoperability between tools is not necessarily a given.
What’s required is a paradigm shift: a new mindset in which energy consumption is the key metric. This cannot be a single, standalone gesture. It must be a genuine shift in engineering goals, across all project areas and all disciplines.
Green computing: a goal within reach
Through genuine cross-team collaboration, we can create an easily replicable, systematic approach to analyzing and reducing power and energy for each iteration of hardware description and physical implementation.
This shift will require investment in both time and effort, to adopt new tools and workflows, and an understanding of the bigger picture. Technology can be part of the climate solution – or part of the problem. Only by uniting our efforts behind a common goal can we achieve both the marginal gains and technology leaps that will reduce the power envelope and pave the way for green computing.
Our Sustainability Vision
Connectivity cannot come at the expense of our planet. To minimize the environmental impact of our technology, we aspire to leverage our expertise in low-power compute to do more work per watt, providing a unique opportunity to drive up connectivity while driving down carbon emissions.