This is the kind of GTC session I love: a real-world expert talking about how a difficult task is actually accomplished. Not the theory, not how it should work on paper, but what it takes to move a project from Point “A” to Point “We’re done with this”. And if there’s some humor added along the way, all the better.

Ken Fingerlos from Lewan Technology delivered in spades with his “Virtual Is Better than Physical: Delivering a Delightful User Experience from a Virtual Desktop” GTC14 session. “Delightful?” Hmm… In my past lives I’ve had to use some virtual PCs, and my experiences ranged from “absolutely unusable” to “OMG, I hate this.”

It’s easy to see that Ken has been around the block when it comes to VDI. He has all the right credentials, ranging from VMware to Citrix to Microsoft. But more importantly, he’s been there and done it.

In the session, he laid out a solid methodology for planning a VDI implementation. What I particularly liked was how he discussed the importance of intimately knowing your user base. This means far more than knowing what apps they might be using.

You need to know which apps they actually use, how often they use them, and what sort of performance they’re getting out of them. Once you have this as a baseline, you’re in far better shape to take the next step, which is figuring out what sort of VDI infrastructure you truly need.

One of the surest ways to wreck a VDI project is to provide users with a worse experience than they had with their three-year-old mid-range desktop or laptop. This is why it’s crucial to configure and size the infrastructure to give them a demonstrably better experience – or at least one that won’t have them up in arms, or burning the IT staff in effigy in the parking lot.

Ken does a great job of laying out all of the considerations involved in picking a virtualization architecture. One of the first decisions is choosing between VDI (Virtual Desktop Infrastructure), which runs each user's Windows session in its own server-based VM, and SBC (Server Based Computing), which has multiple users sharing a single Windows OS via Remote Desktop Services.

He also discusses the pros and cons of persistent vs. non-persistent system images. While almost everyone wants their own unique desktop, there's a price to pay in more gear and greater complexity. Is it worth it? Or would a hybrid model be the best way to keep users happy while still capturing the cost and complexity reductions?

Performance evaluation pre- and post-virtualization is also a factor. Right now, he suggests using the Windows Experience Index as a quick and easy way to gauge end-user performance. However, this tool is going away in Windows 8, so an alternative PC benchmark will need to be used.
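Ken doesn't prescribe a specific replacement benchmark. Purely as an illustration of the idea, here's a hedged sketch: a tiny Python micro-benchmark (the function names, workload sizes, and iteration counts are all my own invention, not from the session) that times a fixed CPU task and a fixed disk task, so you can run the same script on a physical desktop and later on its virtual replacement and compare the numbers.

```python
import os
import tempfile
import time

def cpu_benchmark(iterations=200_000):
    """Crude CPU micro-benchmark: time a fixed arithmetic workload.

    Iteration count is an arbitrary illustrative choice.
    """
    start = time.perf_counter()
    total = 0
    for i in range(iterations):
        total += (i * i) % 7
    return time.perf_counter() - start

def disk_benchmark(size_mb=8):
    """Crude disk micro-benchmark: time writing then reading a temp file."""
    path = os.path.join(tempfile.gettempdir(), "vdi_bench.tmp")
    chunk = b"x" * (1024 * 1024)  # 1 MB chunk
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
    with open(path, "rb") as f:
        while f.read(1024 * 1024):
            pass
    os.remove(path)
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"CPU workload:  {cpu_benchmark():.3f} s")
    print(f"Disk workload: {disk_benchmark():.3f} s")
```

Run it before the migration to record a baseline, then again on the virtual desktop; a purpose-built PC benchmark will of course give a far more complete picture, but even a crude pre/post comparison like this beats guessing.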

Ken also covers back-end infrastructure design and sizing recommendations. How much storage do you really need? How does network latency affect the user experience? Are endpoints really that important, or can you just throw in any cheap monitor/processor combination?
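The storage question in particular lends itself to a back-of-envelope calculation. The sketch below is my own illustration, not numbers from the session: all of the defaults (image size, profile size, IOPS per user, boot-storm multiplier) are hypothetical placeholders you'd replace with your measured baseline data.

```python
def vdi_sizing(users, os_image_gb=40, persistent=True,
               profile_gb=5, steady_iops_per_user=10, boot_multiplier=3):
    """Back-of-envelope VDI storage and IOPS estimate.

    All default values are illustrative assumptions, not vendor guidance.
    Returns (storage_gb, steady_iops, peak_iops).
    """
    if persistent:
        # Persistent desktops: a full image per user, plus their profile data.
        storage_gb = users * (os_image_gb + profile_gb)
    else:
        # Non-persistent desktops: one shared base image, per-user profiles only.
        storage_gb = os_image_gb + users * profile_gb
    steady_iops = users * steady_iops_per_user
    # Boot/login storms spike IOPS well above steady state; size for the peak.
    peak_iops = steady_iops * boot_multiplier
    return storage_gb, steady_iops, peak_iops

if __name__ == "__main__":
    print(vdi_sizing(100))                    # persistent: 100 full clones
    print(vdi_sizing(100, persistent=False))  # non-persistent: shared image
```

The point of even a toy model like this is visible immediately: with these assumed numbers, 100 persistent desktops need roughly eight times the storage of 100 non-persistent ones, which is exactly the persistent-vs-non-persistent trade-off Ken walks through.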

This is probably the best discussion of virtualizing the desktop that I’ve ever seen. Highly recommended for anyone curious about the topic, but required viewing for those who are in the throes of evaluating or actually implementing a VDI initiative. You can see the session here.
