NVIDIA’s recent Analyst Day at their HQ in Santa Clara gave me new insight into the company and how they see the market. I’ll go more into specifics in future posts, but first some general impressions. The conference was a single-day affair kicked off by Jen-Hsun Huang, NVIDIA’s co-founder, President and CEO. I’ve seen Jen-Hsun in action several times now, just as I’ve seen top executives from other high tech companies speak to large and small crowds.
The contrast between Jen-Hsun and other high tech chieftains is something that always strikes me as interesting. CEOs from other large, established tech companies seem to be business people first and technologists second. Even if their formal education and experience are highly technical, they seem to change when they reach lofty heights in their organizations. They’re enthusiastic about their companies, their products, and what they do for their customers. But they don’t seem to give off that “This is soooo cool!” vibe.
This doesn’t make them bad leaders or managers, and I’m not sure that it even means all that much in the long run. However, it is disconcerting when you see a chief executive give a presentation and you think, “He/she could be leading a company in a completely different industry and still be using the same words. You’d just swap out the pictures and charts.”
Watching Huang, I never get that impression. He’s a technologist first and foremost. His love of the technology and what it can do comes through loud and clear. He can speak authoritatively and accurately on technical topics ranging from the chip level all the way up to the current state of the art in seismic processing or molecular biology. He’s engaged and interested in NVIDIA’s R&D and their customers’ needs and desires.
Huang’s off-the-cuff presentation at the analyst event didn’t require much in the way of slides; just a few here and there as a graphical backdrop or to drive home a point. He set the table by baldly stating, “Creativity matters. Productivity matters,” which shouldn’t get much argument these days. He continued by outlining NVIDIA’s three businesses (personal/mobile computing, design/visualization, and cloud/HPC), which are addressed by four product lines: GeForce and Tegra cover the PC/mobile space, Quadro covers the design segment, and Tesla handles HPC.
The rest of Huang’s talk described three seismic shifts in the industry that have shaped NVIDIA’s strategy – and continue to do so. The first is mobile computing and its demand for much more efficient devices that can still deliver a great user experience. Huang latched onto some guy in the crowd who had a full-size laptop and used his system as an example of power consumption: his MacBook consumed 20 watts at full bore, 10 watts in normal use, and probably 2-3 watts at idle.
Huang thinks that we’re going to see fully functional systems with milliwatt idle draws and single-digit maximum power consumption rates. (He didn’t see me in the back sporting a fully configured Lenovo W510 mobile workstation with a 135-watt power brick – that would have given him much more to talk about.) His point about mobile computing is well-taken, but perhaps a bit optimistic. Of course, he didn’t specify a time scale, and no one asked…
The key to the mobile market is, in his mind, parallel processing – which plays to NVIDIA’s strengths, of course. Only by going parallel can energy utilization and performance goals both be satisfied and improved over time. NVIDIA’s CPU strategy in mobile is, not surprisingly, based on the ARM processor and not the traditional Intel/AMD x86 standard. It’s an easy decision to make; there are billions of devices running ARM processors now and billions more on the way. In mobile computing, ARM is the center of gravity for developers.
Seismic shift two is that Microsoft has bought into ARM. According to Huang, Microsoft doesn’t just want to be on ARM – they HAVE to be on ARM. He’s right. A basic mistake in business is not understanding what business you’re really in. Microsoft is in the business of providing software solutions to consumers and businesses – they’re not in the business of building software that runs on x86 desktops or laptops. They have to be on ARM because it’s the fastest growing computing platform in the world.
This brought up a question from our buddy Nathan Brookwood, full-time chip guru and part-time curmudgeon, who asked about the 64-bit elephant in the room. ARM is 32 bits and can address only a few GB of RAM. The server and PC world has moved on to 64 bits and the ability to address orders of magnitude more memory.
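To put rough numbers on that (my own back-of-the-envelope math, not anything presented at the event): a 32-bit address space tops out at 4 GB, and while a full 64-bit space is about four billion times larger, typical 64-bit chips implement something like 48 address bits, which is still a jump of tens of thousands of times.

```python
# Back-of-the-envelope addressable-memory math (illustrative only; the 48-bit
# figure is a common implementation choice, not an NVIDIA or ARM announcement).
GiB = 2 ** 30
TiB = 2 ** 40

print(2 ** 32 / GiB)        # 4.0 -> a 32-bit ARM core can address 4 GB
print(2 ** 48 / TiB)        # 256.0 -> ~48 implemented address bits: 256 TB
print(2 ** 64 // 2 ** 32)   # 4294967296 -> the full 64-bit space is ~4 billion times larger
```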
Huang addressed the question head-on and slipped it at the same time. He acknowledged that ARM will have to go to 64 bits in order to make it in the server, PC, and even mobile worlds long-term. He continued by saying that NVIDIA isn’t announcing or saying anything about this move… but NVIDIA and Microsoft both betting on ARM should give others the motivation and courage to push ARM to 64 bits.
He’s correct on this count, of course. I’m not privy to any of these details, don’t have any inside info on this topic, and will deny knowing anything about anything until my last breath… but I’d bet that NVIDIA is highly interested and involved in extending ARM into 64-bithood. It’s important to the success of their HPC effort – critical, even. And I don’t see NVIDIA as the sort of company that would just stand on the sidelines and root for someone else to do the work.
The third and last seismic shift also concerns ARM, and specifically its open licensing model. If you want to make your own ARM variant, it’s as easy as getting a license and having TSMC or someone else fab it up. NVIDIA now has their license and will be churning out their own server-optimized ARM chips sometime in the next two years or so.
This is the ‘Project Denver’ initiative that they announced earlier this year (story here). There’s a chart in their announcement presentation that TPM reproduced in his story that shows the scale of ARM production vs. x86. The difference is stark, as are the potential implications. In 2005, there were about 1.75 billion ARM chips shipped compared to maybe 250 million x86 CPUs. In 2009? The scoreboard on the x86 side reads maybe 400 million, but ARM has grown to 4 billion.
That’s some serious growth and volume. Volume plus growth equals pervasiveness. (Well, actually, it equals ‘volumegrowth,’ but you get my point.) The x86 platform has volume, but not on nearly the scale that we see with ARM, and volume is what drives down production costs and drives ecosystem evolution.
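Running the quick arithmetic on those shipment figures (my own math on the numbers in the chart, rounded heavily): ARM unit volume roughly doubled between 2005 and 2009 while x86 grew by about 60 percent, and by 2009 ARM was outshipping x86 by something like ten to one.

```python
# Quick ratios from the shipment figures quoted above (units in millions; rounded).
arm = {2005: 1750, 2009: 4000}
x86 = {2005:  250, 2009:  400}

print(arm[2009] / arm[2005])   # ~2.3x ARM unit growth over four years
print(x86[2009] / x86[2005])   # ~1.6x x86 unit growth over the same span
print(arm[2005] / x86[2005])   # ARM outshipped x86 roughly 7:1 in 2005...
print(arm[2009] / x86[2009])   # ...and roughly 10:1 in 2009
```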
Huang didn’t talk much about ‘Server ARM’; he mainly talked about mobile and consumer applications. But this is an interesting topic to me, and presumably to you IT types out there. Is Server ARM the future of computing? Is it inevitable? I could argue that it is, and point to the way high volumes of low-cost RISC chips and workstations led to RISC-based servers, which took away a huge chunk of the industry from mainframes and minicomputers. Just a few years later, we saw servers based on low-cost x86 processors do the same thing to the RISC-based systems.
But I digress… Huang ended up going way over his allotted time, fueled by his enthusiasm for the topics and questions from the gathered query of analysts. (Tech analysts run in queries, just as wolves run in packs and lions gather in prides.) It wasn’t a polished and pat presentation; it wandered and veered a bit. But speaking for myself, I enjoyed every minute. It isn’t often that I get to see someone like Huang who is knowledgeable and enthusiastic about the technology that his company is bringing to the market, and more concerned about the substance of his presentation than form and appearance.
