GTC 2012: "Swimming in Sensors, Drowning in Data"
Tuesday, 15 May 2012

Here at the GTC conference, you see a lot of things that you didn’t think were quite possible yet. Case in point: cleaning up surveillance video.

The standard scene in “24” or any spy thriller is of agents poring over some grainy, choppy, barely lit video that’s so bad you can’t tell whether it’s four humans negotiating an arms deal or two bears having an animated conversation about football. In the Hollywood version, the techno geek says, “Let me work on this a little bit,” and suddenly things clear up to the degree that not only can you see the faces clearly, you can tell when the guys last shaved.

Cleaning up and enhancing video is a tall order, compute-wise – and doing it in real time? Hella hard. But I just saw a demo of exactly that in a GTC12 session run by MotionDSP. Their specialty is processing video streams from mobile platforms (think drones and airplanes) on the fly. We’re talking full-motion video at 30 frames per second, enhanced, cleaned up, and made highly analyzable in real time.

The amount of processing they’re doing is incredible. Lighting is enhanced, edges are enhanced, jitter is taken out, and the on-screen metadata (time, location, speed, etc.) is masked. Again – all in real time.
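For the curious, here’s a rough idea of what those per-frame steps look like. To be clear, this is my own toy sketch in Python with OpenCV, not MotionDSP’s GPU pipeline – the input file name, the simple frame-to-frame stabilization, and the overlay strip I black out are all assumptions on my part:

```python
# Toy sketch of the per-frame operations described above (CPU, OpenCV).
# Not MotionDSP's pipeline; file name and overlay region are made up.
import cv2
import numpy as np

cap = cv2.VideoCapture("surveillance.mp4")   # hypothetical input file
prev_gray = None

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # "Lighting is enhanced": spread out the luminance histogram.
    lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.equalizeHist(l)
    frame = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

    # "Edges are enhanced": simple unsharp masking.
    blurred = cv2.GaussianBlur(frame, (0, 0), sigmaX=3)
    frame = cv2.addWeighted(frame, 1.5, blurred, -0.5, 0)

    # "Jitter is taken out": estimate the rigid shift against the previous
    # frame with phase correlation and translate the frame back.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    if prev_gray is not None:
        (dx, dy), _ = cv2.phaseCorrelate(prev_gray, gray)
        M = np.float32([[1, 0, -dx], [0, 1, -dy]])
        frame = cv2.warpAffine(frame, M, (frame.shape[1], frame.shape[0]))
    prev_gray = gray

    # "On-screen metadata is masked": black out an assumed overlay strip.
    frame[0:30, :] = 0

    cv2.imshow("enhanced", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

Every one of those steps is independent per pixel (or per small neighborhood), which is exactly the kind of work that maps well onto thousands of GPU cores.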

The effect is profound. In the demo, what was once just a vague gray ship (which seemed to be vibrating like a can in a paint shaker) was clarified so that you could easily see what kind of ship it was and also see two suspicious figures milling around on deck. To me, it looked like there were enough pixels to enhance the video even further – to the point where we could identify the figures.

As the folks from MotionDSP explained, processing at this speed simply isn’t possible without using GPUs. Cleaning up a single stream of video to that degree takes 160 gigaflops of processing power. A single GPU card (presumably a Fermi) can handle two simultaneous HD streams or four to six standard-definition streams.
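To put those numbers in perspective, here’s a quick back-of-envelope calculation. The roughly 1 teraflop single-precision peak for a Fermi-class card is my own assumption, not a figure from the session, and I’m assuming the 160 gigaflops applies per HD stream:

```python
# Back-of-envelope on the figures from the MotionDSP session.
# Assumptions (mine, not theirs): ~1 TFLOP/s single-precision peak for a
# Fermi-class card, and 160 GFLOP/s of work per HD stream.
PER_STREAM_GFLOPS = 160
FERMI_PEAK_GFLOPS = 1000
FPS = 30

print(f"Per-frame time budget at {FPS} fps: {1000 / FPS:.1f} ms")
print(f"Streams per card at theoretical peak: {FERMI_PEAK_GFLOPS / PER_STREAM_GFLOPS:.1f}")
print(f"Fraction of peak used by 2 HD streams: {2 * PER_STREAM_GFLOPS / FERMI_PEAK_GFLOPS:.0%}")
```

Two sustained HD streams works out to roughly a third of theoretical peak, which sounds about right once memory traffic and the rest of the pipeline take their cut.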

Not surprisingly, their biggest customers are various branches of the U.S. government (Air Force, Naval Special Warfare Group, and lots of other secret acronym agencies). In fact, the “swimming in sensors, drowning in data” quote is from a general (I think) talking about their struggle to take advantage of the masses of data provided by their sensor platforms.

Check out the demo views on the MotionDSP website; it’s interesting stuff for sure. While the early applications are typically military surveillance, how far off is the day when we’ll see this technology used to make other videos more clear?

I’m thinking about the typical YouTube video shot from a helmet cam worn by some kid on a bike at the top of a huge mountain. What’s always detracted from my viewing experience is the way the video gets so shaky and distorted after he loses his balance and starts to tumble down the mountainside. Sure, the first hit is clear, and maybe the first loop, but once he picks up speed, there’s just too much distortion. Hopefully, MotionDSP will release an edition at a price scaled to the amateur stunt man.

 
