Ever hear of the Square Kilometer Array? It’s a plan to build the largest radio telescope in the world: 3,000 15-meter dishes whose combined collecting area adds up to a full square kilometer.
Right now, they’re putting the finishing touches on the plan and figuring out where to build it; South Africa and Western Australia are on the short list. They expect to begin preconstruction (ordering parts and stuff) in 2012, with actual construction starting in 2016 and full operation in 2024.
When complete, it’s going to be 10,000 times more sensitive than the best radio telescope today, so it’s expected to generate some profound discoveries. The Big Questions the SKA will help answer include the origins of the universe; the nature of Dark Matter and Dark Energy (which kind of creeps me out); and whether Einstein’s theory of General Relativity holds up, so we’ll finally know whether space is truly bendy or not.
They’ll also be looking around to see what locations might support life and trying to figure out where magnetism comes from. (And yes, the answer is more complicated than “Magnets.”)
This thing is going to generate a fair amount of data. They’re working on a test site that’s 1% the size of the full-on SKA and will spit out raw data at 60 Terabits/sec. After some level of correlation and other processing, the rate settles down to 1GB/sec of data to be stored and analyzed.
When completed, SKA will be 100 times bigger and generate 1TB/sec of pre-processed data, which would equal an Exabyte of data every 13 days. Even with much more aggregation, we’re still talking about Exabytes of data.
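Quick sanity check for the back-of-the-envelope crowd: at 1TB/sec, how long does an Exabyte actually take? It depends on whether you mean a decimal or a binary Exabyte; the ~13-day figure matches the binary reading. Here’s the arithmetic in a few lines of Python:

```python
# Rough sanity check on the data-rate numbers above. Assumption: "1TB/sec"
# means 10**12 bytes per second, and "Exabyte" is read as a binary
# exabyte (2**60 bytes), which is what makes the ~13-day figure work.

RATE_BYTES_PER_SEC = 10**12          # 1 TB/sec of pre-processed data
EXABYTE_BINARY = 2**60               # ~1.15e18 bytes
EXABYTE_DECIMAL = 10**18             # 1e18 bytes
SECONDS_PER_DAY = 86_400

days_binary = EXABYTE_BINARY / RATE_BYTES_PER_SEC / SECONDS_PER_DAY
days_decimal = EXABYTE_DECIMAL / RATE_BYTES_PER_SEC / SECONDS_PER_DAY

print(f"Binary exabyte:  {days_binary:.1f} days")   # ~13.3 days
print(f"Decimal exabyte: {days_decimal:.1f} days")  # ~11.6 days
```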
According to a source on the web (so I know that it’s true), five exabytes is large enough to log every word ever spoken by human beings. I think this also would include short words like ‘a’ and ‘an’, but I’m not sure about grunts or exclamations. Either way, it’s a lot.
So how do you process, transport, and store this much data? According to the authors of SKA Memo 134, “Cloud Computing and the Square Kilometer Array”, cloud storage/computing might handle the load.
They put forward a few scenarios using Amazon EC2; the largest was storage of 1PB of data and continuous use of 1,000 compute nodes. The price tag? $225,000 per month plus an annual payment of $455,000 – which totals a little over $3.1 million per year.
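That annual total is easy enough to check: it’s just the monthly bill times twelve plus the annual payment. Using only the figures quoted from the memo:

```python
# Annual cost of the memo's largest EC2 scenario, using only the
# numbers quoted above (1PB of storage, 1,000 continuous compute nodes).

monthly_charge = 225_000      # dollars per month
annual_charge = 455_000       # yearly payment, dollars

total_per_year = monthly_charge * 12 + annual_charge
print(f"${total_per_year:,} per year")   # $3,155,000 -- a little over $3.1M
```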
They do mention that they might be able to negotiate a volume discount from Amazon, which could reduce costs significantly. I’d also make them throw in free Amazon Prime shipping, free media streaming, and early access to their super-saver items before the general public sees them.
On the compute side, they talk about potentially using a SETI@Home or Folding@Home model to help carry some of the load. According to their calculations, the capacity currently available from folks volunteering their spare cycles is around 5 petaflops, which would, if it were a single system, put it in second place behind the 8-petaflop Japanese K supercomputer.
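If the SETI@Home/Folding@Home model is unfamiliar, the idea is simple: a coordinator chops the job into small work units, volunteers’ idle machines pull units, crunch them, and send results back. Here is a toy sketch of that pattern in Python (not SKA or BOINC code, just an illustration with made-up work units):

```python
# Toy illustration of the volunteer-computing pattern: a coordinator
# hands out work units, "volunteer" workers process them when idle and
# report results back. The work itself (summing squares) is a stand-in
# for real signal processing.
import queue
import threading

work_units = queue.Queue()
results = queue.Queue()

def volunteer(worker_id: int) -> None:
    """Pull work units until the coordinator says there are no more."""
    while True:
        unit = work_units.get()
        if unit is None:                       # sentinel: no more work
            break
        unit_id, data = unit
        result = sum(x * x for x in data)      # pretend this is the hard part
        results.put((unit_id, worker_id, result))

# Coordinator: enqueue 20 fake work units and start 4 volunteers.
for i in range(20):
    work_units.put((i, range(i, i + 100)))

workers = [threading.Thread(target=volunteer, args=(w,)) for w in range(4)]
for t in workers:
    t.start()
for _ in workers:
    work_units.put(None)                       # one sentinel per worker
for t in workers:
    t.join()

while not results.empty():
    unit_id, worker_id, value = results.get()
    print(f"unit {unit_id:2d} done by volunteer {worker_id}: {value}")
```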
Something that captured my imagination was their speculation that the unused or underutilized capacity on multi-core, broadband-attached PCs is something like 100x the combined processing power of the entire Top500 list.
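That 100x figure gets easier to believe with a quick Fermi estimate. Every number below is my own guess, not the memo’s: say a billion broadband-attached PCs, a modest 10 usable GFLOPS apiece, and a Top500 list whose combined throughput was somewhere around 60 petaflops back then:

```python
# Back-of-the-envelope check on the "100x the Top500" speculation.
# Every number here is an assumption of mine, not a figure from the memo.

pcs_online = 1e9              # assumed broadband-attached PCs worldwide
gflops_per_pc = 10            # assumed usable throughput per idle PC (GFLOPS)
top500_total_pflops = 60      # assumed combined Top500 throughput, circa 2011

volunteer_pflops = pcs_online * gflops_per_pc / 1e6   # GFLOPS -> PFLOPS
print(f"Idle-PC pool: ~{volunteer_pflops:,.0f} PFLOPS")
print(f"Ratio to Top500: ~{volunteer_pflops / top500_total_pflops:.0f}x")
```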
What would be a fair price for using that capacity? Perhaps something just north of the cost of data transport plus the incremental cost of electricity, which would still be roughly ten times cheaper than any other processing available today.
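To put a very rough number on “the incremental cost of electricity,” suppose a volunteer PC draws an extra 100 watts for 8 hours a night at $0.12/kWh; all three figures are my own assumptions, not anything from the memo:

```python
# Hypothetical incremental electricity cost of scavenging one PC's idle
# cycles overnight. The wattage, hours, and rate are assumed values.

extra_watts = 100            # assumed additional draw while crunching
hours_per_night = 8
rate_per_kwh = 0.12          # assumed $/kWh

kwh_per_month = extra_watts / 1000 * hours_per_night * 30
cost_per_month = kwh_per_month * rate_per_kwh
print(f"{kwh_per_month:.0f} kWh/month -> ${cost_per_month:.2f} per PC per month")
```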
This is a very interesting concept – maybe a forerunner of future HPC. Would you sign up for free high-speed internet access in exchange for keeping your computer on all night and letting them scavenge your idle cycles? There wouldn’t be any advertising on your screen, and they wouldn’t be tracking your movements and selling them to advertisers.
If they can negotiate low enough rates from the providers, the numbers might just work. It’s a win-win: the user gets free bandwidth, and the sponsoring organization gets their computing tasks done at much lower cost.
Read more about the SKA project here.
