Is HP’s Gen8 Gr8 for HPC? Or at least g00d?

HP trotted out its newest line of x86-based ProLiant systems this week. These new boxes, fueled by Intel Sandy Bridge processors, will sport speedy PCIe 3.0 slots and custom HP disk controllers (for tri-mirroring and error correction), plus a wide range of features aimed at improving system flexibility and manageability. Our pal Timothy Prickett Morgan outlines the systems here.

As Tim points out in his article, much of the innovation in the new Gen8 line is in the monitoring and manageability realm. This is a pretty good move by HP, since much of the pain (and cost) in enterprise computing arises from trying to tame growing numbers of systems handling increasingly complex business functions with the same, or fewer, heads. (Read more below…)


Webcast: Benchmarks Are $%#&@!!

At SC11 I ran into Henry Newman, CEO of HPC consulting firm Instrumental Inc. After exchanging the usual pleasantries and deeply offensive personal insults, we got to talking about some of the recently released benchmark results – and how irrelevant…


Dell’s New HPC Cookbook

As I trudged toward a swanky hotel for a meeting with Dell, the Seattle sky was spitting cold rain like an old man realizing the soup in his mouth is way too hot. (Adding more drama to these intros. Nice, right?)

I expected two things that day: “It’s Seattle in November; it’s going to rain,” and “It’s Dell at Supercomputing; they’re going to talk about hardware.” Only one of those assumptions was correct.

Instead, the meeting was all about Dell’s HPC strategy and how they’re going to engage the market. It wasn’t a typical Dell meeting, where they’d reel off server names and configs and I’d nod appreciatively, “Hmm… so you’re going to put newer/faster processors in that one? Way to go, nice job…” (Read more below…)


HP’s HPC Aims Higher

A quick meeting with HP at SC11 confirmed that the company is feeling good about their HPC achievements and prospects for the future. HP is the second-biggest HPC vendor on the most recent Top500 list, with 141 systems (28%). However, they’re still behind market leader IBM, which has a 44% share with 223 total systems.

The picture looks worse for HP when you compare system size and performance: IBM has 26 of the top 100 systems, while HP has only seven. Of course, one of those seven is the 1.19 petaflop NEC/HP TSUBAME 2.0 system at #5 on the list, which isn’t too shabby.

So why is HP smiling about their HPC chances? First, according to the company, they don’t measure their HPC success by appearances on the Top500 list. They (correctly) assert that there’s plenty of profitable HPC business to be had at smaller customers with smaller-than-Top500 systems. (Read more below…)


Letting GPUs Run Free: A Big Step Forward

One of the most interesting things I saw at SC11 was a joint Mellanox and University of Valencia demonstration of rCUDA over InfiniBand. With rCUDA, applications can access a GPU (or multiple GPUs) on any other node in the cluster. It makes GPUs a sharable resource and is a big step toward making them as virtualizable (I don’t think that’s a word, but I’m going with it anyway) as any other compute resource.

There aren’t a lot of details out there yet beyond this press release from Mellanox and Valencia and this explanation of the rCUDA project.
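To make this concrete, below is a minimal sketch of the kind of garden-variety CUDA program rCUDA is designed to serve. It’s my own toy example, not code from the Mellanox/Valencia demo. The key point is that nothing in the source has to change: as I understand the project, the rCUDA client library stands in for the local CUDA runtime, intercepts API calls like cudaMalloc and cudaMemcpy, and ships them across the interconnect to a server process on the node that actually owns the GPU. Which GPU the program ends up using becomes a deployment decision rather than something baked into the code.

// A minimal, ordinary CUDA program (a toy example of mine, not code from
// the demo). Under rCUDA, as I understand it, a binary like this can run
// unmodified while its CUDA calls are forwarded to a GPU on another node.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *x, float a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        x[i] *= a;  // trivial element-wise work
}

int main()
{
    const int n = 1 << 20;
    float *host = new float[n];
    for (int i = 0; i < n; ++i)
        host[i] = 1.0f;

    // Nothing here names a machine or a device location; under rCUDA the
    // allocation and copies below may be serviced by a remote GPU.
    float *dev = nullptr;
    cudaMalloc((void **)&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    printf("host[0] = %f (expect 2.0)\n", host[0]);
    delete[] host;
    return 0;
}

The catch, of course, is that every host-to-device copy now crosses the network, which is exactly why pairing rCUDA with a low-latency interconnect like InfiniBand is the interesting part of the demo.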

This is a big deal. To me, the future of computing will be much more heterogeneous and hybrid than homogeneous and, well, some other word that means ‘common’ and begins with an ‘h’. We’re adopting the mindset of designing systems to handle particular workloads, rather than modifying workloads to run sorta well on whatever systems are cheapest per pound or flop. (Read more below…)


GPUs Then, Now, Later; All Problems ARE Polygons

One of the presentations I caught at SC11 was by GPU computing pioneer Ian Buck. (Which is a good name for a pioneer, I think, although just ‘Buck’ might be better. I’ll go with that for the rest of this article.)

Buck’s Stanford Ph.D. thesis, “Stream Computing on Graphics Hardware,” capped his research into using GPUs as computing resources and his work to develop ‘Brook,’ one of the earliest programming languages aimed at GPUs.

This work, of course, caught NVIDIA’s attention. They brought Buck aboard six years ago; he founded NVIDIA’s CUDA team, and the rest is sort of history. History that he laid out in his talk at SC11 (video here). He takes us from the earliest days (2002-03) of using GPUs as accelerators to where the technology is today. And, by the numbers, GPUs have come a long way. (Read more below…)


Free to a Good Home: Sponsor to Donate SC11 System

Silicon Mechanics, a mid-sized manufacturer of rackmount servers, clusters, and storage arrays, is celebrating their 10th birthday by giving away a lot of stuff. First, they did a great job sponsoring Boston University in the recently concluded SC11 Student Cluster Competition. Fueled by Silicon Mechanics gear, the team was able to snare 4th place overall – quite an achievement given the caliber and experience of the competitors they went up against.

Team Boston, aided by Art Mann of Silicon Mechanics, had one of the largest and most powerful configurations in the contest. It was a hybrid system with 11 dual-socket nodes containing 336 AMD Interlagos cores, 352 GB of main memory, and four of the latest NVIDIA Tesla M2090 GPU accelerators.

While the team appreciated the gear, they also really appreciated the personal support given them by the entire Silicon Mechanics organization. The students told me that the company was constantly checking with them before and during the competition, asking whether they needed anything and offering to help out. That’s quite a bit of attention for an effort that isn’t going to garner headlines or result in a big sale. (Read more below…)
