Core Hours and Carbon Credits: Incentivizing Sustainability in HPC
Researchers demonstrate that linking resource costs to energy consumption can reduce HPC energy use by 40%
- Jan. 26, 2026 by Alok Kamatar
High-performance computing enables groundbreaking scientific discoveries, but its environmental impact continues to grow. While HPC facilities invest heavily in renewable energy and efficient cooling, user choices—such as which machine to use or when to run jobs—can be equally important for sustainability. But do users actually consider energy efficiency in their decisions?
To find out, researchers collaborated to conduct the first large-scale survey of HPC users focused on energy awareness, receiving 316 responses from researchers across North America, Europe, and beyond. The results revealed a troubling disconnect: while 73% of users knew how many node-hours their jobs consumed, only 27% were aware of energy usage. Even more striking, when ranking factors that influence machine selection, energy efficiency came in last—well behind hardware availability, queue times, and performance.
This gap between provider sustainability efforts and user awareness suggested a fundamental problem with current accounting methods: charging users based only on time (node-hours or core-hours) gives them no incentive to prioritize energy efficiency.
A New Accounting Model
To address this, researchers developed two impact-based accounting models:
- Energy-Based Accounting (EBA): Charges users for the energy their jobs consume rather than time spent computing
- Carbon-Based Accounting (CBA): Incorporates both operational carbon from electricity generation and a portion of the machine's embodied carbon
Under these models, users might receive an allocation of 10 kg CO2e instead of 100 node-hours, enabling them to compare the carbon footprint of running a computation on different machines and choose accordingly.
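The two models above can be sketched in a few lines. This is an illustrative reading of the definitions in the text, not the paper's implementation; all numeric values (grid intensity, embodied carbon, machine lifetime) are assumptions chosen for the example.

```python
def eba_cost(energy_kwh: float) -> float:
    """Energy-Based Accounting: the charge is simply the energy consumed (kWh)."""
    return energy_kwh

def cba_cost(energy_kwh: float,
             grid_intensity_gco2_per_kwh: float,
             node_hours: float,
             embodied_kgco2e: float,
             lifetime_hours: float) -> float:
    """Carbon-Based Accounting: operational carbon from electricity plus an
    amortized share of the machine's embodied carbon, in kg CO2e."""
    operational = energy_kwh * grid_intensity_gco2_per_kwh / 1000.0
    embodied_share = embodied_kgco2e * (node_hours / lifetime_hours)
    return operational + embodied_share

# Illustrative example: a 10 node-hour job drawing 50 kWh on a
# 400 gCO2/kWh grid, run on a node with 2,000 kg CO2e embodied carbon
# amortized over a 5-year (~43,800 hour) service life.
charge = cba_cost(50.0, 400.0, 10.0, 2000.0, 43_800.0)  # ~20.5 kg CO2e
```

Under this scheme, a user's 10 kg CO2e allocation is debited by `cba_cost` for each job, so the same allocation buys more science on cleaner grids and more efficient machines.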
Testing on Chameleon
To evaluate these accounting models, researchers built green-ACCESS, a Function-as-a-Service platform on top of HPC infrastructure using Globus Compute. Chameleon Cloud's bare-metal access and heterogeneous hardware provided the ideal testbed for measuring real energy consumption across different systems.
Running a Cholesky decomposition benchmark on four different CPU systems revealed unexpected trade-offs. The Cascade Lake machine delivered the fastest runtime but consumed more than twice the energy of an AMD Zen3 system. Under traditional time-based accounting, Cascade Lake appeared cheapest because it was fastest. But under EBA, Zen3 had lower cost despite slightly longer runtime.
Similar patterns emerged with GPUs. Across three generations of NVIDIA GPUs (P100, V100, A100), newer hardware provided better performance but consumed significantly more energy—the A100 ran 6% faster than the V100 but used 60% more energy. EBA and CBA naturally balanced these trade-offs, steering users toward older, more efficient hardware when performance gains didn't justify the energy cost.
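The GPU trade-off above can be made concrete with a toy comparison. The relative figures (A100 roughly 6% faster than V100 but using about 60% more energy) come from the text; the absolute baseline values are normalized assumptions, not measurements.

```python
# Normalized to the V100: runtime_h and energy_kwh are relative units.
v100 = {"runtime_h": 1.00, "energy_kwh": 1.00}
a100 = {"runtime_h": 1.00 / 1.06, "energy_kwh": 1.60}  # ~6% faster, ~60% more energy

def time_cost(machine):
    """Traditional node-hour accounting: cost tracks runtime only."""
    return machine["runtime_h"]

def energy_cost(machine):
    """Energy-Based Accounting (EBA): cost tracks energy consumed."""
    return machine["energy_kwh"]

# Time-based accounting makes the newer A100 look cheaper...
assert time_cost(a100) < time_cost(v100)
# ...but EBA shows the V100 finishes the same job at far lower energy cost.
assert energy_cost(v100) < energy_cost(a100)
```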
Large-Scale Simulations
Using a published dataset of 71,190 HPC jobs with measured energy consumption, researchers simulated how different accounting methods would affect real workloads. Users optimizing for energy efficiency under EBA completed 28% more work than performance-focused users with the same allocation, while consuming 40% less energy overall.
With CBA, researchers incorporated real carbon intensity data from electricity grids. As grid carbon intensity varied throughout the day with renewable energy availability, the cheapest machine for running a job shifted accordingly. This naturally incentivized users to align their computations with periods of renewable energy generation—a form of carbon-aware scheduling that emerged organically from the accounting model.
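One way to see how carbon-aware scheduling falls out of CBA is a toy sketch in which the cheapest machine for a job flips as grid intensity changes. The machine names echo the systems mentioned earlier, but every energy and intensity number here is an illustrative assumption.

```python
def operational_carbon(energy_kwh: float, intensity_g_per_kwh: float) -> float:
    """Operational carbon of a job in kg CO2e."""
    return energy_kwh * intensity_g_per_kwh / 1000.0

# Hypothetical energy for the same job on two machines (illustrative values):
# a fast, power-hungry system and a slower, more efficient one.
machines = {
    "cascade_lake": {"energy_kwh": 50.0},
    "zen3":         {"energy_kwh": 22.0},
}

def cheapest(intensity_by_machine: dict) -> str:
    """Pick the machine with the lowest operational carbon under CBA."""
    return min(machines, key=lambda m: operational_carbon(
        machines[m]["energy_kwh"], intensity_by_machine[m]))

# Assumed grid intensities (gCO2/kWh): midday solar makes the Cascade Lake
# site's grid much cleaner; at night both grids are carbon-heavy.
midday = {"cascade_lake": 120.0, "zen3": 400.0}
night  = {"cascade_lake": 450.0, "zen3": 400.0}

cheapest(midday)  # the fast machine wins when its grid is clean
cheapest(night)   # the efficient machine wins when grids are dirty
```

Because the accounting charge itself shifts with the grid, users are nudged toward low-carbon time windows without any explicit scheduling policy.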
Does It Change Real Behavior?
To test whether impact-based accounting actually influences human behavior, researchers created a web-based game simulating HPC resource selection decisions. Ninety participants played under three conditions: traditional time-based costs, time-based costs with energy information displayed, and EBA.
Simply showing energy information had no effect—participants consumed the same amount of energy as the control group. However, when cost was tied to energy consumption through EBA, participants used 40% less energy on average. They didn't avoid energy-intensive jobs, but consistently selected more efficient machines to run them.
This validated the core hypothesis: information alone doesn't change behavior, but linking environmental impact to cost creates powerful incentives for sustainability.
Implications
This work demonstrates that HPC sustainability requires proper incentive structures. Current accounting methods inadvertently encourage inefficient behavior by making faster machines appear cheaper, even when they consume significantly more energy.
Impact-based accounting offers a path forward by:
- Incentivizing selection of more efficient resources
- Rewarding code optimization for energy efficiency
- Extending the useful life of existing machines
- Aligning computing demand with renewable energy availability
All code, data, and experimental artifacts are openly available on GitHub, including the green-ACCESS platform, simulation tools, and anonymized survey responses.
Looking Forward
Technical solutions like efficient hardware and renewable energy are essential for sustainable HPC, but they're not sufficient on their own. By aligning user incentives with sustainability goals through impact-based accounting, we can harness the entire community's creativity to drive meaningful reductions in energy consumption and carbon emissions.
The path to sustainable HPC requires providers and users to work together. Impact-based accounting provides a mechanism to make that collaboration not just possible, but natural.
Publication: This work was published at SC '25: Core Hours and Carbon Credits: Incentivizing Sustainability in HPC. DOI: 10.1145/3712285.3759858
Code & Data: https://github.com/AK2000/core-hours-artifact
Fair-CO2: Fair Attribution for Cloud Carbon Emissions
Understanding and accurately distributing responsibility for carbon emissions in cloud computing
- April 29, 2025 by Leo Han
Leo Han, a second-year Ph.D. student at Cornell Tech, conducted pioneering research on the fair attribution of cloud carbon emissions, resulting in the development of Fair-CO2. Enabled by the unique bare-metal capabilities and flexible environment of Chameleon Cloud, this work tackles the critical issue of accurately distributing responsibility for carbon emissions in cloud computing. This research underscores the potential of adaptable testbeds like Chameleon in advancing sustainability in technology.
Power Patterns: Understanding the Energy Dynamics of I/O for Parallel Storage Configurations
Powering Through Data: Energy Insights for Parallel Storage Systems
- Sept. 30, 2024 by Maya Purohit
Learn how cutting-edge research is shedding light on the energy dynamics of I/O operations in HPC environments, potentially reshaping future storage designs.
Zeus: GPU Energy as a First-Class Resource in DNN Training
- Nov. 22, 2022 by Jae-Won Chung
In this month's user experiment blog we get a fascinating insight into how much power training deep neural networks (DNNs) consumes – and how to make it less. The authors discuss research presented in their NSDI '23 paper, describe how they structured their experiments on Chameleon, and explain why bare-metal resources are essential for power management research.