Big Decisions Take Big Data – Not Gut Instinct

So what’s the best way to make important decisions? Hunches? Intuition? A Magic 8 Ball? Or relying on data analysis to reveal the best choice? OK, it’s kind of a stupid question; almost anyone answering it seriously will say, “Yeah, of course you want to use data to figure out what to do, you dumbass.” (Not that everyone would call me a dumbass in their response – just the people who know me personally.)

While everyone says that using data to make decisions is the only way to manage, I’d submit that very few organizations actually do it. I mostly see firms relying on hunches and intuition, and then using data (if forced to; they often don’t) to justify, adjust, and calibrate the course of action that they’ve already decided to pursue.


HP Heads for HPC Top10?

We checked in with Hewlett-Packard’s HPC folks recently and had an interesting conversation about their strategy and new products, and got a glimpse of their futures. One of the most noteworthy tidbits is the deal that they are working on with NEC for the Tsubame system, which will find a home at the Tokyo Institute of Technology.

It’s a pretty brawny box, with 1,400 compute nodes sporting a mix of 6-core and 8-core Intel Xeon processors along with 4,200 NVIDIA Fermi GPUs. Details on the deal and the gear can be found here and here.


Grids Redux?

Good article from our pal Michael Feldman on Digipede, a ‘pure-play’ grid software company that focuses exclusively on Microsoft’s Windows/.NET platform. It prompted some thoughts on my part about grid computing and how it might play an increasingly large role in the future.

For the uninitiated, grid computing allows a single software job to be parceled up and sent out to a bunch of different nodes for completion. The master node in the grid divvies up the work, checks on progress, re-allocates jobs if necessary, and assembles the final results. In a lot of ways, it’s like a really smart scheduler.
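If you want to see that pattern in miniature, here’s a toy master/worker sketch using nothing but Python’s standard library. (To be clear, this is a generic illustration, not Digipede’s API – their product farms work out across Windows machines via .NET.)

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

def work_unit(chunk):
    # Stand-in for the real work one node would do: sum the squares of a chunk.
    return sum(n * n for n in chunk)

def run_grid_job(data, chunk_size=1000, workers=4):
    # The "master" divvies up the job, farms the pieces out to workers,
    # tracks their completion, and assembles the final result.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    total = 0
    with ProcessPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(work_unit, c) for c in chunks]
        for done in as_completed(futures):   # a real grid would re-queue any
            total += done.result()           # work unit whose node failed
    return total

if __name__ == "__main__":
    print(run_grid_job(list(range(100_000))))
```

A real grid adds the parts this sketch waves away – node discovery, data movement, and re-dispatching work when a machine drops out – but the master/worker shape is the same.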


NVIDIA Blog Bitchslaps Intel

An email from my friendly NVIDIA rep called my attention to this recent blog post from Andy Keane, head GPU honcho at NVIDIA. In the post, Andy pounds Intel soundly for presenting a paper titled “Debunking the 100x GPU vs. CPU Myth,” which, in its abstract, asserts that an older NVIDIA GPU (the GTX 280) is only 2.5x faster than Intel’s most current quad-core Core i7-960.

Intel does a very scholarly job in the paper of laying out their benchmarks, methodology, and results. But it makes one wonder: could they – perhaps – have, well… cherry-picked the benchmarks in order to put the best face on it? I’m sure it’s hard for any of us to imagine this being the case, but the question at least needs to be asked. Right?


AMD Heats Up GPU Wars With FireStream

AMD trotted out their latest entry in the GPU wars yesterday: the FireStream 9350 and 9370 accelerator boards. The flagship 9370 is a dual-slot PCIe card that beats NVIDIA’s Fermi handily on single-precision FP (2.64 teraflops vs. 1.03 teraflops for Fermi), but bests Fermi by a much narrower margin on double-precision, 528 gigaflops to 515. For exhaustive details and discussion, take a look at TPM’s article here or the story from HPCwire here.
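To put those peak numbers side by side, here’s a quick back-of-the-envelope ratio check in Python, using only the figures quoted above (a toy sketch – consult the vendors’ spec sheets for the real story):

```python
# Peak throughput figures quoted above, in gigaflops.
cards = {
    "FireStream 9370": {"sp": 2640.0, "dp": 528.0},
    "Fermi":           {"sp": 1030.0, "dp": 515.0},
}

for name, peaks in cards.items():
    ratio = peaks["dp"] / peaks["sp"]
    print(f"{name}: double precision runs at {ratio:.0%} of single-precision peak")

# FireStream 9370: double precision runs at 20% of single-precision peak
# Fermi: double precision runs at 50% of single-precision peak
```

Which is why AMD’s big single-precision lead mostly evaporates once you move to double precision.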


Can the Cloud Hold Off Data Deluge?

An article in CIO Magazine takes a look at a 2007 report from IDC (sponsored by EMC) estimating that the size of all digital data will grow by something like 1.2 million petabytes from 2009 to 2010, and will grow an astounding 44x by 2020; the number of individual files will increase by 67x. Even though the report is sponsored by a major storage vendor, the methodology looks pretty solid, and it’s hard to argue against their results. And 44x growth is a lot of growth – even though a significant percentage of that will be multiple copies of my Outlook.pst file scattered across various systems.
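For a sense of the scale involved, here’s a quick bit of unit math on the figures quoted above (the 2009 baseline in the second half is my own rough assumption, purely for illustration):

```python
PB_PER_ZB = 1_000_000                 # 1 zettabyte = 1,000,000 petabytes

one_year_growth_pb = 1_200_000        # ~1.2 million PB added from 2009 to 2010
print(f"One year of growth: ~{one_year_growth_pb / PB_PER_ZB:.1f} ZB")    # ~1.2 ZB

# IDC's 44x projection, applied to an assumed ~0.8 ZB digital universe in 2009
# (the 0.8 ZB baseline is my guess, not a number from the article):
baseline_2009_zb = 0.8
print(f"Rough 2020 estimate: ~{baseline_2009_zb * 44:.0f} ZB")             # ~35 ZB
```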


Can Larrabee Lazarus Stunt NVIDIA’s Tesla?

Our pal TPM writes here about how Intel is re-targeting their Larrabee (or a Larrabee-like) processor from being a graphics card replacement to providing HPC processing power in a tidy package. It would act as a co-processor à la NVIDIA’s Tesla, AMD’s GPUs, or even FPGAs.

The key difference between Larrabee and these other solutions is that Larrabee uses the ubiquitous x86 architecture and instruction set, and the others don’t. This has been the biggest hurdle to GPU adoption, in fact, because developers and users need to do some custom coding in order to get their apps to take advantage of the much speedier number-crunching provided by GPU accelerators.
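To make the “custom coding” point concrete, here’s a toy SAXPY example in Python: the CPU version is a one-liner that runs unchanged on any x86 box, while the GPU version (written here against Numba’s CUDA support purely as an illustration – the same gap exists with CUDA C or OpenCL) needs an explicit kernel, a thread-index calculation, and a launch configuration:

```python
import numpy as np
from numba import cuda

def saxpy_cpu(a, x, y):
    # Plain NumPy: runs as-is on any x86 processor.
    return a * x + y

@cuda.jit
def saxpy_gpu(a, x, y, out):
    # GPU kernel: each thread computes one element.
    i = cuda.grid(1)
    if i < out.size:
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
saxpy_gpu[blocks, threads_per_block](2.0, x, y, out)   # requires a CUDA-capable GPU
```

None of this is rocket science, but multiply that kernel-and-launch boilerplate across a big legacy code base and you can see why an x86-everywhere co-processor is an attractive pitch.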


HPC Might Rises in the Far East

As spring turns into summer, we get – like clockwork – a new Top500 list. While there’s plenty of analysis yet to be done, what’s getting lots of press (NYT story here, in-depth HPCwire story here) is how the Chinese National Supercomputing Center has captured the number two slot on the list with their 1.27 petaflop (sustained), 4,640 GPU monster box. This system is noteworthy not only from a performance standpoint, but also because it relies so heavily on NVIDIA GPUs, further confirming a trend toward hybrid CPU/GPU computing.

Much of the attention will be focused on the big move that China, Inc. has made on the list. There are two China-based systems in the top ten, and they own fully 24 systems in the Top500. In total performance, this puts China behind only the U.S. as a supercomputing powerhouse. There is speculation that by this time next year, China is going to be rolling out a new, all-China-designed supercomputer that might be the fastest in the world.


Google: Future Hip Chipster?

An interesting story a few days ago from our pals Cade and TPM at The Register put forward some theories about how Google’s activities and acquisitions of companies and talent might add up to the search giant building its own server chip. (Their story is here.)

Plausible? Yeah, I think it might be. We’re not talking about a chip designed to compete with the highly sophisticated Xeon or Power processors. But doing their own customized ARM implementation? It could make a lot of sense, given Google’s scale and internal needs.


Crunching the Numbers – Big Numbers

IBM pushed out some more of their “Workload Optimized” offerings last week with the introduction of analytic packages based on their mainframe and x86 systems. These bundles join the previously announced Power system bundle that they rolled out late last spring. What they’re doing here is combining IBM hardware with a full slate of Cognos and InfoSphere Warehouse software into a pre-integrated bundle that will, presumably, allow customers to cut deployment time and get cracking on some serious data crunching. Our pal Timothy Prickett Morgan wrote it all up here.

Looking at the bigger picture, I think a lot of action is going to be centered on the enterprise analytics space in the coming years. It’s not technology pushing it; it’s macroeconomics. It’s just getting harder and harder for businesses to profitably compete. Globalization and the instant communications afforded by the web have made the world a much smaller place, although I’d still hate to have to paint it.
