The biggest challenge in getting to the next level of supercomputer performance – Exascale – is the massive amount of electricity these systems will consume, and big energy usage means huge costs. The industry is well aware of this, of course, and is intent on designing processors, I/O, storage and other components that deliver more FLOPS per watt. But verifying and quantifying those gains accurately is a problem at both the data centre and the individual system level.

We know how to measure energy consumption; it’s not rocket science, even when measuring the consumption of systems that actually do rocket science. The problem is twofold. First, not enough organizations are measuring their real-world energy consumption. Second, there are multiple ways to measure juice use – methods that vary in both scope of measurement and accuracy.
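To see why scope matters, here is a minimal, hypothetical sketch of the arithmetic: the same benchmark run yields very different FLOPS-per-watt figures depending on where the measurement boundary is drawn. All of the numbers and the PUE value below are made-up assumptions for illustration, not measurements from any real system.

```python
# Illustrative sketch (not from the webcast): how measurement scope changes
# a "FLOPS per watt" figure. All numbers below are made-up placeholders.

def gflops_per_watt(gflops: float, power_watts: float) -> float:
    """Energy efficiency as billions of floating-point ops per second per watt."""
    return gflops / power_watts

# Hypothetical benchmark result for one system:
sustained_gflops = 1_000_000.0      # 1 PFLOPS sustained on a benchmark run

# Scope 1: power drawn by the compute nodes alone
node_power = 600_000.0              # watts
# Scope 2: the whole machine, including interconnect and storage
system_power = 750_000.0            # watts
# Scope 3: the whole data centre, applying an assumed
# Power Usage Effectiveness (PUE) of 1.4 for cooling and distribution
facility_power = system_power * 1.4

for label, watts in [("nodes only", node_power),
                     ("full system", system_power),
                     ("facility (PUE 1.4)", facility_power)]:
    print(f"{label:20s} {gflops_per_watt(sustained_gflops, watts):6.2f} GFLOPS/W")
```

The point is not the specific figures but the spread: measuring the nodes alone flatters the result, while counting the whole facility can knock 40 per cent or more off it – which is why an agreed measurement blueprint matters.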

In this Register webcast we talk with Natalie Bates, chairperson of the Energy Efficient High Performance Computing Working Group, and Erich Strohmaier, a co-author of the Top500 list and head of Future Technologies at Lawrence Berkeley National Lab, about the progress their group has made toward giving the industry an energy measurement blueprint. It’s a thoughtful, interesting conversation and a good preview of what’s coming down the road.
