OK, I’m a sucker for anything that does something a lot faster – even if I don’t quite understand how it does it. So I have to blog at least a little bit about IBM’s recent announcement: they’ve lit a fire under the task of assessing data quality with a newly patented (and presumably shiny) algorithm. What got my attention was this result: Using this new method, they were able to accurately validate 9TB of data in 20 minutes – as opposed to the 24-plus hours that traditional methods would have taken on the same hardware.
That’s pretty sporty performance, in my mind, and reading about an improvement on that scale immediately made me want to know more. I went to the paper they submitted, which documents their technique and results, and was faced with a blizzard of phrases such as “Inverse covariance matrices,” “Matrix factorizations,” “Cubic cost,” and this helpful explanation: “First, we turned to stochastic estimation of the diagonal.” All of these terms, plus many others that I also don’t understand, are in just the abstract; the body of the paper seems considerably more technical and complex.
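For what it’s worth, the one piece of jargon I could chase down afterward is “stochastic estimation of the diagonal.” The general idea (and I stress this is the textbook version, not IBM’s actual implementation, which I haven’t seen) is to estimate the diagonal of an enormous matrix using nothing but matrix-vector products, so you never pay the cubic cost of factorizing the whole thing. Here’s a minimal NumPy sketch; the function names and parameters are mine, purely for illustration:

```python
import numpy as np

def estimate_diagonal(matvec, n, num_probes=100, seed=None):
    """Estimate diag(A) using only matrix-vector products A @ v.

    Textbook stochastic (Rademacher-probe) diagonal estimator:
    average v * (A v) over random +/-1 vectors v. It never forms or
    factorizes A, which is what makes it cheap on very large matrices.
    """
    rng = np.random.default_rng(seed)
    diag_sum = np.zeros(n)
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=n)  # random +/-1 probe vector
        diag_sum += v * matvec(v)            # elementwise v * (A v)
    return diag_sum / num_probes

# Toy check against the true diagonal of a random symmetric matrix.
if __name__ == "__main__":
    n = 500
    B = np.random.default_rng(0).standard_normal((n, n))
    A = B @ B.T
    est = estimate_diagonal(lambda v: A @ v, n, num_probes=400, seed=1)
    true_diag = np.diag(A)
    print("relative error:", np.linalg.norm(est - true_diag) / np.linalg.norm(true_diag))
```

Each probe costs just one matrix-vector product, so a few hundred probes on a big sparse matrix is dramatically cheaper than any full factorization, which (as far as I can tell) is the flavor of trick that makes results like the one above possible.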
The one phrase that I fully understood was, “We stress that the techniques presented in this work are quite general and applicable to several other important applications.” And this is an important phrase, because it means that this technique (and others that smart guys are working on right now) should bring orders-of-magnitude improvements to other analytic tasks that use vast amounts of data. It’s always good to see progress. In case you’re interested, here are some pictures of the guys who came up with it – mostly shots of them standing around looking smarter than any of us.
