HP: Apotheker Out? Whitman In?

If today’s Wall Street Journal story is correct, and the ouster of Hewlett-Packard CEO Leo Apotheker has been discussed in HP’s boardrooms, then he is finished. You can’t head an organization that large if employees believe there’s any sort of…

Read More

Watching Hurricanes: Data Center Design Tidbits

NOAA put the cart before the horse to some degree when purchasing a new supercomputer to track hurricanes.

Because their existing data centers were packed to the rafters, NOAA built a new center to house the new 383 TF box. However, since the contract for the system hadn’t been awarded yet, they didn’t know exactly what to design into the new building. This article outlines a few of the choices they made and provides some interesting details. (See below…)

Read More

Titan Unveiled: 10-20 Petaflops in 2012?

The “Fastest Supercomputer” title may move 6,940 miles (11,167 km) eastward in 2012, from Kobe, Japan to a small Tennessee town. That’s if the folks at Oak Ridge National Laboratory (ORNL), along with Cray and AMD, can pull off a massive upgrade of the existing Jaguar system. They’ll be replacing the existing nodes with new Cray XK6 nodes running the new 16-core AMD Interlagos chips.

In the second half of 2012, these will be augmented by dual NVIDIA Kepler GPUs. The Interlagos CPUs should be around 3x faster than the current hex-core processors, and the addition of Kepler GPUs (and lots of ‘em) will really crank up the performance potential. Overall, they’re expecting a 9x increase in speed, which will put the system in the 10-20 PF range when it’s completed near the end of 2012. (See below…)
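A quick back-of-the-envelope check, in Python for the spreadsheet-averse. The Jaguar baseline figures below are my own approximations of its current Top500 numbers, not anything ORNL has published for this upgrade:

```python
# Rough sanity check of the 10-20 PF projection. The Jaguar baseline figures
# are my approximations of its current Top500 numbers, not ORNL's own.
jaguar_linpack_pf = 1.76    # ~1.76 PF sustained (Linpack) today
jaguar_peak_pf = 2.33       # ~2.33 PF theoretical peak today
projected_speedup = 9       # ORNL's expected overall gain from Interlagos + Kepler

print(f"Projected Linpack: ~{jaguar_linpack_pf * projected_speedup:.0f} PF")  # ~16 PF
print(f"Projected peak:    ~{jaguar_peak_pf * projected_speedup:.0f} PF")     # ~21 PF
```

The sustained-performance projection lands squarely in the quoted range; the peak projection runs just a hair above it.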

Read More

Cornell Cranks Cancer Research; Bird identification sees 12x speed-up as well

Cornell University’s Center for Advanced Computing (CAC) announced a new initiative to test GPU-optimized MATLAB on various research projects. They’ve partnered with Dell, NVIDIA, and MathWorks to see what GPUs bring to the table for the university’s research.

Cornell cites a couple of interesting data points in their press release. The first is a 15x speed-up (well, 14.7x to be tediously accurate) in processing the images used to diagnose cancer cells. Pre-GPU, it took 86.9 seconds to process a single image. Post-GPU, that time plummets to 5.9 seconds. The benefit is obvious – this increases the theoretical maximum number of images they can process from 994 per day to 14,644.
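If you want to check the math yourself, it falls out of nothing more than the number of seconds in a day; here’s a minimal sketch, assuming one image is processed at a time, around the clock:

```python
# Quick check of Cornell's throughput numbers: a 24-hour day has 86,400
# seconds, and these figures assume one image processed at a time.
seconds_per_day = 24 * 60 * 60

cpu_seconds_per_image = 86.9   # pre-GPU, from the press release
gpu_seconds_per_image = 5.9    # post-GPU, from the press release

print(f"Pre-GPU:  {seconds_per_day / cpu_seconds_per_image:.0f} images/day")  # ~994
print(f"Post-GPU: {seconds_per_day / gpu_seconds_per_image:.0f} images/day")  # ~14,644
print(f"Speed-up: {cpu_seconds_per_image / gpu_seconds_per_image:.1f}x")      # ~14.7x
```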

But that’s not all… (see below…)

Read More

Baseball: Still Boring; But making money via baseball analytics is cool

Baseball is perhaps the most boring thing in the world to watch. The leisurely rate of play, the lack of constant action, and the pauses players take for impromptu meetings, spitting, and crotch-grabbing are torture for my ADD-riddled brain.

Reading about baseball is every bit as bad, and reading about baseball-stats geeks who painstakingly ‘score’ every move on the field makes me want to beat myself with a bat.

On the other hand, I’m a big fan of money, and innovative ways to make more of it. I found a very interesting article in my ever-growing pile of Businessweek magazines about how automation and deep analytics are playing an increasingly large role in the game. The “Baseball: Running the New Numbers” story outlines, in highly readable form, how Major League Baseball, individual teams, and savvy techies are building out systems that will log pretty much everything that happens on a baseball field.

Read More

HPC Storage Purchasing – Exposed!

In a recent HPCwire report, Nicole Hemsoth discloses the back story behind a major storage purchase by the University of Utah’s Center for High Performance Computing (CHPC). This story isn’t noteworthy because it’s a particularly large deal or because of the use case. It’s interesting because of its insider perspective on the process, taking the reader from the problem CHPC was trying to solve to the eventual solution.

It also shows how these deals aren’t simply a matter of vendors throwing the cheapest, fastest gear at the problem and the customer picking the lowest bidder. Situations that look routine from the outside are often quite complicated under the covers, with considerations other than cost per unit of raw performance becoming the deciding factors.

Read More

Square Kilo Scope Pushes Limits

Ever hear of the Square Kilometer Array? It’s a plan to build the largest radio telescope in the world: 3,000 15-meter dishes adding up to roughly a full square kilometer of collecting area.

Right now, they’re putting the finishing touches on the plan and figuring out where to build it; South Africa and Western Australia are on the short list. They expect to begin preconstruction (ordering parts and stuff) in 2012, with actual construction starting in 2016 and full operation in 2024.

When complete, it’s going to be 10,000 times more sensitive than the best radio telescope today, so it’s expected to generate some profound discoveries. The Big Questions that the SKA will help answer include the origins of the universe; the nature of Dark Matter and Dark Energy (which kind of creeps me out); and whether Einstein was right about general relativity – we’ll know if space is truly bendy or not.

Read More

HP Gets Analytical; Opening another front against Oracle?

With all of the attention focused on the war raging between Oracle and Hewlett-Packard on the server front, a significant HP announcement in late June seemed to slip under the collective radar of the industry press.

On June 20, the company announced general availability of Vertica 5.0, the newest version of the Vertica Analytics Platform, along with some integrated appliance-like bundles combining Vertica with HP hardware. HP purchased the company earlier this year (Register story here), and it looks like Vertica is going to be HP’s key play in the burgeoning ‘big data’ market.

The foundation of the Vertica platform is a columnar database, which, as the name implies, stores data by column rather than by row. This column-centric design can yield huge advantages over traditional row-oriented databases in certain situations – primarily read-centric data warehouses.
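For anyone who hasn’t run into column stores before, here’s a toy sketch of the idea in Python. To be clear, this illustrates the general technique only; it bears no resemblance to Vertica’s actual internals, and the table and field names are made up for the example.

```python
# Toy illustration of why column stores win on read-heavy analytics:
# an aggregate over one field touches only that column's data instead
# of dragging every full row through memory.

# Row-oriented layout: each record is stored together.
rows = [
    {"order_id": 1, "customer": "acme", "region": "east", "amount": 120.0},
    {"order_id": 2, "customer": "zeta", "region": "west", "amount": 75.5},
    {"order_id": 3, "customer": "acme", "region": "east", "amount": 310.0},
]
total_row_store = sum(r["amount"] for r in rows)   # scans every field of every row

# Column-oriented layout: each field is stored contiguously.
columns = {
    "order_id": [1, 2, 3],
    "customer": ["acme", "zeta", "acme"],
    "region":   ["east", "west", "east"],
    "amount":   [120.0, 75.5, 310.0],
}
total_col_store = sum(columns["amount"])           # scans only the one column it needs

assert total_row_store == total_col_store          # same answer, far less data touched
```

Scale that "far less data touched" up to a multi-terabyte warehouse where queries typically hit a handful of columns out of hundreds, and the appeal for read-centric workloads is clear.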

Read More

King K Super; Are GPUs Still GPU-riffic?

It’s been an eventful ISC. The Japanese sprang their K Computer on an unsuspecting HPC world, throwing down 8.162 Pflops on the table and raising the high-water performance mark by a factor of three. Just as surprising was the fact that they did it the old-fashioned way – with semi-proprietary processors, a custom interconnect, and no fancy accelerators.

Was it only six months ago that the Chinese, with their 2.56 Pflop Tianhe-1A system, appeared to have locked down the top spot for a year or more through heavy use of GPU accelerators? That showing led many pundits (myself included) to say that the age of hybrid HPC was upon us, and that we probably wouldn’t see another non-hybrid system topping the chart anytime soon.
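For what it’s worth, the ‘factor of three’ claim checks out against the Linpack numbers quoted above (using my rounding of the published figures):

```python
# Quick ratio check using the published Top500 Linpack results
# (my rounding of the figures quoted in this post).
k_linpack_pf = 8.162          # K Computer, June 2011 list
tianhe_1a_linpack_pf = 2.566  # Tianhe-1A, November 2010 list

print(f"K / Tianhe-1A = {k_linpack_pf / tianhe_1a_linpack_pf:.1f}x")  # ~3.2x
```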

So is the K Computer a signpost pointing to a resurgence of traditional CPU-plus-custom-interconnect HPC? Or is it an aberration on the road to our hybrid future?

Read More