HPC in the cloud? Not so much…

Here’s a great blog post (“Elite HPC and the Cloud Culture Clash”) discussing how much – well, actually, how little – the current hype behind cloud computing is swaying folks at large supercomputing sites.

In the article, Nicole Hemsoth correctly identifies cloud computing as more of a business model than radical new technology. She also hits on the fact that the major concern of HPC is the “P”, meaning performance – something that the cloud model doesn’t offer at this point… not when compared with owning your own gear.

Read More

A Supercomputer to Warm Your Heart and Bollocks

Aquasar, a new water-cooled IBM supercomputer, was just fired up at the Swiss Federal Institute of Technology in Zurich. It’s a 6-Tflop system that uses 33 two-way blades with Cell processors and an additional nine blades with dual Nehalem processors, all contained in three BladeCenter H chassis. Two of the three chassis are water-cooled, covering 22 of the Cell blades and six of the Nehalem-based blades. What’s kind of cool (so to speak) is that they are using the waste heat to feed an estimated 9 kW of thermal power into the building’s heating system. This is pretty innovative stuff.

Read More

Analytics Made Easy

I’m increasingly convinced that we’re entering into a new phase of computing: one where competitive advantage is going to be gained or lost based on the quality of your data and your ability to analyze it. (Well, maybe not you…)

Read More

Big Decisions Take Big Data – Not Gut Instinct

So what’s the best way to make important decisions? Hunches? Intuition? Magic 8 Ball? Or relying on data analysis to reveal the best choice? OK, it’s kind of a stupid question; most anyone answering it seriously will say, “Yeah, of course you want to use data to figure out what to do, you dumbass.” (Not that everyone would call me a dumbass in their response – just the people who know me personally.)

While everyone says that using data to make decisions is the only way to manage, I’d submit that very few organizations actually do it. I mostly see firms relying on hunches and intuition, and then using data (if forced to; they often don’t) to justify, adjust, and calibrate the course of action that they’ve already decided to pursue.

Read More

HP Heads for HPC Top10?

We checked in with Hewlett-Packard’s HPC folks recently and had an interesting conversation about their strategy and new products, and got a glimpse of their futures. One of the most noteworthy tidbits is the deal that they are working on with NEC for the Tsubame system, which will find a home at the Tokyo Institute of Technology.

It’s a pretty brawny box, with 1,400 compute nodes sporting a mix of 6-core and 8-core Intel Xeon processors along with 4,200 NVIDIA Fermi GPUs. Details on the deal and the gear can be found here and here.

Read More

Grids Redux?

Good article from our pal Michael Feldman on Digipede, a ‘pure-play’ grid software company that focuses exclusively on Microsoft and their Windows/.NET products. It prompted some thoughts on my part about grid computing, and how it might play an increasingly large role in the future.

For the uninitiated, grid computing allows a single software job to be parceled up and sent out to a bunch of different nodes for completion. The master node in the grid divvies up the work, checks on progress, re-allocates jobs if necessary, and assembles the final results. In a lot of ways, it’s like a really smart scheduler.
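The divvy-up-and-reassemble pattern described above can be sketched in a few lines of Python. Here the “master node” is the main process and the “grid nodes” are worker processes on one machine – a toy stand-in for real grid middleware, not Digipede’s actual API (names like `square_chunk` and `run_on_grid` are purely illustrative):

```python
# Toy master/worker sketch of grid-style job distribution:
# the master splits a job into chunks, farms them out to workers,
# and reassembles the results -- the same pattern a grid scheduler
# applies across physical nodes.
from concurrent.futures import ProcessPoolExecutor

def square_chunk(chunk):
    """The 'job' each node runs on its own slice of the data."""
    return [x * x for x in chunk]

def run_on_grid(data, n_nodes=4):
    # Master: divvy up the work, one interleaved chunk per node.
    chunks = [data[i::n_nodes] for i in range(n_nodes)]
    with ProcessPoolExecutor(max_workers=n_nodes) as pool:
        # Dispatch the chunks; map() collects results in order and
        # reruns a chunk if a worker dies mid-flight.
        results = pool.map(square_chunk, chunks)
    # Master: assemble the final result back into original order.
    out = [None] * len(data)
    for i, res in enumerate(results):
        out[i::n_nodes] = res
    return out

if __name__ == "__main__":
    print(run_on_grid(list(range(10))))
```

A real grid adds the hard parts this sketch glosses over – heartbeating the nodes, re-allocating chunks from stragglers, moving data to where the compute is – which is why “really smart scheduler” is the right mental model.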

Read More

NVIDIA Blog Bitchslaps Intel

An email from my friendly NVIDIA rep called my attention to this recent blog post from Andy Keane, head GPU honcho at NVIDIA. In the post, Andy pounds Intel soundly for presenting a paper titled “Debunking the 100x GPU vs. CPU Myth” which, in its abstract, asserts that an older NVIDIA GPU (the GTX 280) is only 2.5x faster than Intel’s latest quad-core Core i7-960.

Intel does a very scholarly job in the paper of laying out their benchmarks, methodology, and results. But it makes one wonder if they – perhaps – could have, well… cherry-picked the benchmarks in order to put the best face on it? I’m sure it’s hard for any of us to imagine this being the case, but the question needs to at least be asked. Right?

Read More

AMD Heats Up GPU Wars With FireStream

AMD trotted out their latest entry in the GPU wars yesterday: the FireStream 9350 and 9370 accelerator boards. The flagship 9370 is a dual-slot PCIe card that beats NVIDIA’s Fermi handily on single-precision FP (2.64 TFlops vs. 1.03 TFlops for Fermi), but bests Fermi by a much narrower margin on double-precision with a score of 528 to 515 gigaflops. For exhaustive details and discussion, take a look at TPM’s article here or the story from HPCwire here.

Read More

Can the Cloud Hold Off Data Deluge?

An article in CIO Magazine takes a look at a 2007 report from IDC (sponsored by EMC) estimating that the size of all digital data will grow to something like 1.2 million petabytes from 2009 to 2010, and will grow an astounding 44x by 2020; the number of individual files will increase by 67x. Even though the report is sponsored by a major storage vendor, the methodology looks pretty solid, and it’s hard to argue against their results. And 44x growth is a lot of growth – even though a significant percentage of that will be multiple copies of my Outlook.pst file scattered across various systems.

Read More

Can Larrabee Lazarus Stunt NVIDIA’s Tesla?

Our pal TPM writes here about how Intel is re-targeting their Larrabee (or a Larrabee-like) processor from being a graphics card replacement to providing HPC processing power in a tidy package. It would act as a co-processor à la NVIDIA’s Tesla or AMD’s GPUs or even FPGAs.

The key difference between Larrabee and these other solutions is that Larrabee uses the ubiquitous x86 architecture and instruction set, and the others don’t. This has been the biggest hurdle to GPU adoption, in fact, because developers and users need to do some custom coding in order to get their apps to take advantage of the much speedier number-crunching provided by GPU accelerators.

Read More