Our pal TPM writes here about how Intel is re-targeting its Larrabee (or a Larrabee-like) processor from being a graphics card replacement to providing HPC processing power in a tidy package. It would act as a co-processor à la NVIDIA’s Tesla, AMD’s GPUs, or even FPGAs.

The key difference between Larrabee and these other solutions is that Larrabee uses the ubiquitous x86 architecture and instruction set, and the others don’t. In fact, this has been the biggest hurdle to GPU adoption: developers and users need to do custom coding to get their apps to take advantage of the much speedier number-crunching that GPU accelerators provide.

So while NVIDIA may have thought it had smooth sailing when Intel dropped Larrabee as a video device, and AMD may have figured it had more runway to get its products off the ground, both will now have to battle the chip behemoth in the HPC space. The latest TOP500 list has been released, and we already know from the press releases that the largest systems are using thousands of GPUs to pump up performance. The TOP500 maintainers don’t break out hybrid systems or provide GPU counts for the boxes on their list, but we know that the systems at the very top of the chart are using at least 4,000 GPUs to hit their numbers.

As we move forward, I think the burgeoning trend toward greater use of analytics will spur growth in GPUs and other accelerators. These products are essentially repackaged high-end consumer video cards, and when they’re sold as HPC accelerators, they carry much higher margins and bring happy smiles to the faces of product managers.

So how well is Intel going to do in this market? It isn’t a slam dunk for them. We don’t know how their final product is going to perform yet, and bang-for-the-buck performance is, of course, very important in an accelerator. However, the development environment and ecosystem are even more important. If Intel had made this move two or three years ago, it could conceivably have killed off NVIDIA’s CUDA development environment. But it didn’t, and there is now critical mass behind CUDA and NVIDIA, with AMD and the OpenCL development environment gaining some followers as well.

Dominating this market won’t be a cakewalk for Intel, but this announcement will certainly garner attention and get some customers lined up to kick the tires when the silicon is released. One thing is certain, though: the game has just become quite a bit more interesting for NVIDIA and AMD… and I mean ‘interesting’ in the old ‘Chinese curse’ sense of the word.
