Data centers are wasting energy by running processors at full speed

A simple server setting could slash data center power consumption

Why it matters: Data centers are power hogs, and operators have long been searching for ways to reduce their energy and resource consumption. Some ingenious remedies have emerged, such as using non-potable seawater to cool equipment, but one obvious solution appears to have been overlooked: enabling processors' built-in power-saving capabilities.

Data center power consumption has become a major concern as demand grows and utilities struggle to keep up. Operators are looking for ways to reduce energy use and costs, with many developing novel cooling methods and optimizing data center design.

A new post by the Uptime Institute suggests that enabling built-in power management features on servers could significantly reduce energy consumption. It says OS-level governors and power profiles could cut energy use by 25 to 50 percent, while enabling processor C-states could trim idle power by nearly 20 percent.
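On Linux, the OS-level controls the post refers to are exposed through the cpufreq sysfs interface. As a rough sketch of what flipping this setting involves (assuming a standard Linux sysfs layout; governor names vary by driver, and writing requires root):

```python
#!/usr/bin/env python3
"""Inspect and switch the CPU frequency governor via Linux sysfs.

A sketch, not a definitive tool: assumes a Linux host with the
standard cpufreq sysfs layout. Reading works unprivileged; writing
requires root. Available governor names depend on the active driver.
"""
from pathlib import Path

CPU_ROOT = Path("/sys/devices/system/cpu")

def current_governor() -> str:
    # Governor in effect on cpu0 (normally identical across cores).
    return (CPU_ROOT / "cpu0/cpufreq/scaling_governor").read_text().strip()

def available_governors() -> list[str]:
    return (CPU_ROOT / "cpu0/cpufreq/scaling_available_governors").read_text().split()

def set_governor(name: str) -> None:
    # Apply to every online CPU, not just cpu0; requires root.
    for gov in CPU_ROOT.glob("cpu[0-9]*/cpufreq/scaling_governor"):
        gov.write_text(name)

if __name__ == "__main__":
    print("current:", current_governor())
    print("available:", available_governors())
    # set_governor("powersave")  # uncomment to switch (root only)
```

Which names appear depends on the platform: intel_pstate systems typically offer "performance" and "powersave", while the generic acpi-cpufreq driver provides load-following governors such as "ondemand" and "schedutil".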

These power-saving features are often disabled by default due to concerns about performance instability and latency. However, Uptime argues the performance impact is negligible for most workloads, except very latency-sensitive ones like high-frequency trading.
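The latency cost that worries operators is not opaque: each C-state advertises a worst-case exit latency that the kernel exposes, so the trade-off can be inspected before anything is enabled. A minimal sketch, assuming the standard Linux cpuidle sysfs layout:

```python
#!/usr/bin/env python3
"""List each C-state's advertised exit latency via Linux cpuidle sysfs.

A sketch assuming the standard Linux cpuidle layout; the "disable"
flag shows whether the kernel is currently allowed to enter a state.
"""
from pathlib import Path

for state in sorted(Path("/sys/devices/system/cpu/cpu0/cpuidle").glob("state*")):
    name = (state / "name").read_text().strip()
    latency_us = int((state / "latency").read_text())  # worst-case wake-up cost
    disabled = (state / "disable").read_text().strip() == "1"
    print(f"{state.name}: {name:<10} exit latency {latency_us:>6} us"
          + (" (disabled)" if disabled else ""))
```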

Indeed, modern processors often deliver more performance than is needed for acceptable service quality, so running them at full speed can simply waste energy. There's a point of diminishing returns where using more power yields minimal performance gains.
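A back-of-the-envelope model illustrates why. Dynamic CPU power scales roughly with frequency times voltage squared, and voltage must rise with frequency, so power grows close to cubically while throughput grows only linearly. A simplified illustration (the cubic exponent is an idealized assumption, not a measured figure):

```python
# Toy model: throughput scales ~linearly with clock, dynamic power
# roughly cubically (P ~ f * V^2, with V assumed to track f).
# Illustrative assumption, not measured silicon behavior.
for boost in (1.0, 1.1, 1.2, 1.3):
    perf = boost            # ~linear in frequency
    power = boost ** 3      # ~cubic in frequency
    print(f"+{(boost - 1) * 100:3.0f}% clock -> "
          f"+{(perf - 1) * 100:4.0f}% perf, "
          f"+{(power - 1) * 100:4.0f}% power, "
          f"perf/watt x{perf / power:.2f}")
```

Under this model, the last 30 percent of clock speed more than doubles power draw, which is exactly the regime Uptime suggests avoiding.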

To address this issue, CPU vendors have developed a range of power and performance management techniques. Software-based controls can cut power use by 25 to 50 percent but carry a larger latency penalty. Hardware-only implementations affect latency less but save only about 10 percent or less. A combined software/hardware approach offers a middle ground, with savings of 15 to 20 percent.
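On Intel servers running the intel_pstate driver in active mode, the combined approach corresponds roughly to hardware P-state selection steered by software hints via the energy-performance preference (EPP) knob. A hedged sketch, assuming that driver is present (preference names vary by platform, and writing requires root):

```python
#!/usr/bin/env python3
"""Steer hardware P-state selection with a software hint (EPP).

A sketch assuming Linux with the intel_pstate driver in active mode,
which exposes the energy_performance_preference file; writing
requires root. Typical preferences include "performance",
"balance_performance", "balance_power", and "power".
"""
from pathlib import Path

CPU_ROOT = Path("/sys/devices/system/cpu")

def set_epp(preference: str) -> None:
    # Hint the hardware toward power savings or performance on all CPUs.
    for epp in CPU_ROOT.glob("cpu[0-9]*/cpufreq/energy_performance_preference"):
        epp.write_text(preference)

if __name__ == "__main__":
    prefs = CPU_ROOT / "cpu0/cpufreq/energy_performance_available_preferences"
    print("available:", prefs.read_text().split())
    # set_epp("balance_power")  # uncomment to apply (root only)
```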

Despite the performance tradeoffs, Uptime argues that power consumption, rather than maximum performance, should be the main concern for most use cases, and that enabling these features across a data center could add up to substantial energy and cost savings.

This approach makes sense, as over-performance is rarely tracked, while many tools exist to maintain minimum service levels. Additionally, the energy consumption curve for processors gets steeper as they approach peak performance, so the last increment of speed costs disproportionately more energy.

It's worth noting that these power management techniques originated in mobile computing, where energy efficiency is critical. This background suggests that for most workloads, the latency impact of power management may be less than feared.

Given these factors, data centers may be wasting energy by running processors at full speed when it's not necessary for the workload. Supporting this idea, Uptime cites benchmark data showing servers are often most energy-efficient when limited to lower performance states.