I did my scheduled NGDC (Next Generation Data Center) virtualization tutorial session yesterday morning. It went the full three hours, five minutes over in fact, but I scheduled two breaks to keep it from becoming a total PowerPoint death march. We covered a lot of ground, beginning with current data center problems (server sprawl, complexity, power/cooling/floor space challenges, etc.). From there we discussed how virtualization, particularly in an x86 server context, can go a long way toward solving those problems, or at least relieving some of the pain. We looked at the benefits real customers are getting from x86 consolidation, talked through a few case studies, and then took a relaxing break. We then covered the various methods of virtualizing x86 boxes (hypervisor, O/S virtualization, non-native virtualization, etc.) and wrapped up with the GCG vision for data center peace, harmony, and efficiency.
A few interesting points about the session… First of all, it was a great audience; the interaction started slowly, but as we went on, we got a lot of good questions, comments, and reactions from the crowd. They were a pretty sophisticated bunch and run serious data centers. Before the session, I was a little concerned that we might get the stereotypical Linux attendees; you know, 18-year-old Linux radicals for whom a data center is having both a laptop and an Xbox running at the same time. Thankfully, this wasn’t the case.
It seemed like most of the folks had at least some experience with x86 virtualization, and about half said they could see virtualization becoming the dominant x86 usage model in their organization. Several attendees expressed interest in consolidating Linux apps onto non-x86 hardware, primarily mainframe systems. We also discussed how customers are beginning to see I/O bottlenecks on their virtualized systems, to the point where they have had to pay considerable attention to the I/O requirements of individual apps over time (more than their CPU or memory needs) before slotting them onto physical servers. Not an unanticipated situation; after memory, I/O is usually the next bottleneck in a consolidated system. But it was a bit surprising to see that a significant number of attendees have already hit the I/O wall. Of course, it could also be that some of them were virtualizing onto commodity gear rather than enterprise-class systems. In many cases, the more expensive enterprise servers from major vendors offer more I/O capacity through multiple Ethernet adapters and/or more PCI slots.
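To make the placement problem concrete, here's a minimal sketch of the kind of exercise those customers are doing: measuring each app's I/O demand and slotting apps onto physical servers by I/O headroom rather than by CPU or memory alone. Everything here is an illustrative assumption of mine (the app names, the MB/s figures, the 400 MB/s per-server capacity, and the simple first-fit-decreasing heuristic), not anything from the session.

```python
# Illustrative sketch only: placing virtualized apps onto physical servers
# by I/O demand. All names, numbers, and the heuristic are assumptions.

def place_by_io(apps, server_io_capacity):
    """First-fit-decreasing: sort apps by I/O demand (MB/s), then put each
    one on the first server that still has enough I/O headroom."""
    servers = []  # each entry: {"free": remaining I/O MB/s, "apps": [...]}
    for name, io_demand in sorted(apps.items(), key=lambda kv: -kv[1]):
        for s in servers:
            if s["free"] >= io_demand:
                s["free"] -= io_demand
                s["apps"].append(name)
                break
        else:  # no existing server fits; provision a new one
            servers.append({"free": server_io_capacity - io_demand,
                            "apps": [name]})
    return servers

# Hypothetical measured I/O demand per app, in MB/s
apps = {"web": 80, "db": 300, "mail": 120, "batch": 250, "cache": 60}
layout = place_by_io(apps, server_io_capacity=400)
for i, s in enumerate(layout, 1):
    print(f"server {i}: {s['apps']} (I/O headroom left: {s['free']} MB/s)")
```

The point of the toy example is that an I/O-constrained packing can need more physical boxes than a CPU- or memory-constrained one would suggest, which is exactly the wall some of the attendees described hitting.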
Anyway, it was a great event and I appreciated the opportunity. We may post the presentation in the “Recent Research” section of our web site, so if you’re interested, it’ll be a free (after registration) download.
