Improving IT infrastructure efficiency and server power management is one of the most important levers for data center sustainability and energy performance. Limiting power usage is essential to reducing the environmental impact of data centers and meeting future sustainability regulations and standards. As demand for renewable energy (and its certification) outgrows supply, both are becoming increasingly expensive, forcing companies to look for ways to reduce the amount of energy their servers waste.
Server power management helps reduce the overall energy consumption of IT systems, which not only improves sustainability but can result in significant cost savings as well. To optimize effectively and efficiently, teams must thoroughly analyze all the tasks a system executes and carefully configure every component so that its performance settings match the specific task.
Many factors can influence server power efficiency, that is, how much work servers carry out for the energy they consume. Still, the most important factor of them all is also the most straightforward one: how they are utilized.
This article looks at what server power management is and how paying close attention to server power usage can improve energy efficiency in the data center.
Why is Server Power Management Important?
Data center efficiency has become increasingly important in recent years. The rise of AI, machine learning, and applications requiring real-time processing has significantly increased power demand. Data centers are struggling to keep up with these requirements while also meeting sustainability compliance requirements. Because sourcing renewable energy has its limitations, organizations are increasingly looking into other ways of reducing power consumption, from data center consolidation to better server power management.
Despite the need to lower consumption, server power management features are still widely underused. The difference may not seem big at first glance, but those extra watts add up and contribute to a larger-than-necessary data center footprint and, consequently, higher power consumption.
The Benefits of Server Power Management
Using power management appropriately has a number of benefits, starting with reduced costs. The primary saving stems from the fact that operating at lower performance levels, when possible, reduces heat production. Less heat automatically lowers cooling costs, and proper power management can further reduce the costs of space, generators, cables, and UPSs. Additional benefits include extended battery life for devices, a reduced carbon footprint, and, last but not least, sustainability compliance, which can be another non-negligible outcome of adequate server power management.
The Fundamental Principles of Server Power Management
Before getting to the details, let’s define a few crucial aspects of server power management.
Before making any changes to configurations, and above all to mission-critical system configurations, it's crucial to note that limiting the power consumption of a server or system will lower its performance. It's therefore vital to understand and test how the decrease in performance affects the specific systems and to adjust to their performance needs. Allowing components to consume only as much power as they need yields savings in power, battery life, and cooling. For use cases that require the lowest possible latency, maximum throughput, and high CPU utilization, however, limiting server power consumption is probably not a good idea. So, before optimizing, always review each use case individually.
Sleep Cycles and Power States
Many server components have power management features that allow them to slow down or turn off completely. These settings can be configured statically in firmware at system startup and dynamically by the OS or hypervisor through the Advanced Configuration and Power Interface (ACPI). Operating systems also have software mechanisms of their own for suspending operation.
However, servers that have to be always on rely on special power management modes to govern consumption. Today's server processors all have mechanisms that allow idling when the processor is not running code that requires performance. These are described as levels of C-states (C for CPU), where C0 marks the fully active state. As new power-saving features have appeared over time, several C-states now help processors reduce energy usage in times of low demand. The idea is that if C0 is the fully awake state of the processor, the higher the C-state number, the more circuitry the CPU can send into a sleep state. The most commonly available states on processors are the following:
- C1/C1E. The processor stops executing but is ready to resume operation with a very small effect on performance.
- C3. The processor clock distribution is turned off, and the core caches are flushed.
- C4. A deeper version of C3.
- C6. The core state is saved so execution can resume later, and the cores are powered down completely.
- C7 and above. Shared resources between cores, or the whole processor package, can be shut down.
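On Linux, the C-states a processor actually exposes can be inspected through the kernel's cpuidle sysfs interface. The sketch below lists each idle state's name, wake-up latency, and accumulated residency for CPU 0; the fallback sample values used on non-Linux systems are purely hypothetical.

```python
# Sketch: listing the C-states the Linux kernel exposes for CPU 0 via the
# cpuidle sysfs interface. On systems without that interface, a hypothetical
# sample resembling a typical server CPU is used instead.
from pathlib import Path

CPUIDLE = Path("/sys/devices/system/cpu/cpu0/cpuidle")

def read_cstates(base=CPUIDLE):
    """Return (name, exit_latency_us, time_spent_us) for each idle state."""
    states = []
    for state_dir in sorted(base.glob("state*")):
        name = (state_dir / "name").read_text().strip()
        latency = int((state_dir / "latency").read_text())  # wake-up latency, us
        time_us = int((state_dir / "time").read_text())     # total residency, us
        states.append((name, latency, time_us))
    return states

def deepest_state(states):
    """The state with the highest exit latency is the deepest sleep state."""
    return max(states, key=lambda s: s[1])[0] if states else None

if CPUIDLE.exists():
    states = read_cstates()
else:
    # Hypothetical sample values, for illustration only
    states = [("POLL", 0, 1_200), ("C1", 2, 80_000),
              ("C1E", 10, 150_000), ("C6", 133, 9_000_000)]

for name, latency, time_us in states:
    print(f"{name:>6}: wake-up latency {latency} us, {time_us} us spent asleep")
print("deepest available state:", deepest_state(states))
```

A state's `latency` value is exactly the tradeoff discussed below: the deeper the state, the longer the core takes to wake back up.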
Going to Sleep in Time
Understanding C-states in server processors is crucial to power management because they balance performance costs against power-saving benefits. Enabling deep states such as C6 allows servers to reduce energy usage by 10-20%. These sleep states conserve power even when servers appear active from a human perspective: processors operate on nanosecond timescales, while software requests take milliseconds. That gap means processors often wait idle for what is, at their scale, an enormous amount of time, allowing for power savings. However, deeper sleep states come with a performance tradeoff. Entering and exiting them takes thousands of cycles, which introduces latency. In specific scenarios this translates to a 5-6% performance loss, which is negligible for most applications. Yet for performance-critical tasks like high-frequency trading or low-latency storage operations, the performance loss, however small, is unacceptable.
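The timescale gap described above can be made concrete with a back-of-envelope calculation. All figures below are illustrative assumptions, not measurements of any particular CPU: a 3 GHz core, a 1 ms software wait, and ten thousand cycles to enter and exit a deep sleep state.

```python
# Back-of-envelope sketch of the nanoseconds-vs-milliseconds gap: how much
# of a millisecond-scale software wait a core can actually spend asleep in
# a deep C-state. All numbers are illustrative assumptions.

CLOCK_HZ = 3_000_000_000      # assumed 3 GHz core clock
WAIT_S = 1e-3                 # a 1 ms wait on a software request
TRANSITION_CYCLES = 10_000    # assumed cycles to enter + exit a deep state

wait_cycles = WAIT_S * CLOCK_HZ
sleep_fraction = (wait_cycles - TRANSITION_CYCLES) / wait_cycles

print(f"wait = {wait_cycles:,.0f} cycles")
print(f"usable sleep fraction = {sleep_fraction:.1%}")
```

Even with the transition overhead, well over 99% of the wait is available for sleeping, which is why deep states pay off on mildly loaded servers, and why the same overhead becomes unacceptable when the workload consists of back-to-back microsecond-scale requests.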
Server power management allows IT managers to customize the depth of sleep states, balancing power savings against performance needs. For servers that are frequently idle, energy savings can reach 20-40%, yielding tens of watts saved on every mildly loaded server.
Beyond C-States
Optimizing server energy performance goes beyond C-states. P-states, which govern performance levels during active processing, provide further ways to balance power and performance. While C-states minimize waste when the processor is idle, P-states control the power used for active tasks. By leveraging both C-states and P-states, organizations can achieve a more nuanced and efficient approach to energy savings that maintains performance without compromising sustainability goals.
Conclusion
Server power management, though often overlooked, plays an important role in the energy efficiency of data centers. Despite some current tendencies to disable these features, growing demands for cost savings and sustainability will push organizations to adopt them. Enterprises and IT service providers are encouraged to proactively explore and implement server power management strategies to enhance energy performance and sustainability metrics. By doing so, they can stay ahead of industry changes and regulatory pressures, ensuring that their infrastructure remains both cost-effective and sustainable.
If you’d like to learn more about server power management and how it can impact your organization’s bottom line, call Volico Data Centers to talk to our professionals: (305) 735-8098, or leave us a message in chat.