Thomas Hughes of UT Austin and Michael Miller of Johns Hopkins described the new era in disease identification and treatment that HPC is making possible. The big push is for patient-specific models that determine the optimal treatment for a specific individual rather than for a statistical average of the population.

The process begins with medical imaging as we know it today: a CT scan, ultrasound, or MRI. From those images, mathematical models are built. In the case of a cardiac patient, computational mechanics and a cardiovascular modeling toolkit extract the relevant anatomy from the scan; a mesh template – based on technology from computational geometry – then turns it into a 3D geometry. Fluid/structure interaction analyses are run to determine flow patterns, which can pinpoint not only problems that exist but problems in the making: Is there a fatty deposit behind that section of artery wall? Has the aneurysm grown? Is that swirly spot a place where sclerosis could form?
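To get a feel for the kind of quantity such an analysis surfaces, here is a minimal sketch – emphatically not the fluid/structure solver itself, which couples a full 3D flow field to a deforming artery wall. It estimates wall shear stress in an idealized straight artery under steady Poiseuille flow and shows how a plaque-narrowed lumen changes that stress; the viscosity, flow rate, and radii are assumed values chosen purely for illustration.

```python
# Toy illustration (not the full fluid/structure analysis described above):
# wall shear stress in an idealized artery under steady Poiseuille flow.
# A local narrowing sharply raises the shear -- the kind of trouble spot
# a real 3D simulation would flag. All numeric values are assumptions.

import math

MU = 0.0035   # blood viscosity, Pa*s (assumed)
Q = 5e-6      # volumetric flow rate, m^3/s (~300 mL/min, assumed)

def wall_shear_stress(radius_m: float) -> float:
    """Poiseuille wall shear stress: tau = 4*mu*Q / (pi * r^3)."""
    return 4.0 * MU * Q / (math.pi * radius_m ** 3)

healthy_r = 0.002     # 2 mm lumen radius (assumed)
narrowed_r = 0.0013   # ~35% narrowing from a fatty deposit (assumed)

for label, r in [("healthy segment", healthy_r), ("narrowed segment", narrowed_r)]:
    print(f"{label}: wall shear stress ~ {wall_shear_stress(r):.2f} Pa")
```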

Moving massive amounts of data to patients’ bedsides in a useful form requires massively parallel compute power. The output of five simulations per second of whole-heart activity (how many seconds are needed? “Many,” says Miller) requires about 1.6 TB of storage. Running analyses on huge medical data sets requires millions of Gibbs sampling iterations (probability/random-variable stuff – pretend you actually know what it is, like we do), each of which is independent; linear scalability is maintained by spreading them across thousands of CPUs. Miller doesn’t see hardware keeping up with this demand without system-on-chip technology, multi-core processors, and accelerators.
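The independence Miller points to is what makes the scaling linear: each Gibbs chain runs on its own core with no communication between workers. The sketch below illustrates that pattern with independent chains sampling a simple bivariate-normal target; the target distribution, chain count, and iteration count are placeholders, not anything taken from the actual medical pipelines.

```python
# Sketch of why independent Gibbs chains scale linearly: each chain touches
# no shared state, so adding cores (or nodes) simply multiplies chains per
# second. The bivariate-normal target is a stand-in for the much larger
# posteriors in the real medical-imaging analyses.

import random
from multiprocessing import Pool

RHO = 0.8          # correlation of the toy 2D Gaussian target (assumed)
N_ITER = 100_000   # Gibbs iterations per chain (assumed)

def run_chain(seed: int) -> tuple[float, float]:
    """One independent Gibbs chain; returns its estimate of the mean."""
    rng = random.Random(seed)
    x = y = 0.0
    sum_x = sum_y = 0.0
    sd = (1.0 - RHO * RHO) ** 0.5
    for _ in range(N_ITER):
        # Alternate the full conditionals x|y and y|x of the bivariate normal.
        x = rng.gauss(RHO * y, sd)
        y = rng.gauss(RHO * x, sd)
        sum_x += x
        sum_y += y
    return sum_x / N_ITER, sum_y / N_ITER

if __name__ == "__main__":
    # Each worker runs a fully independent chain -- the embarrassingly
    # parallel structure that keeps scaling linear across thousands of CPUs.
    with Pool(processes=8) as pool:
        means = pool.map(run_chain, range(8))
    for i, (mx, my) in enumerate(means):
        print(f"chain {i}: estimated mean ~ ({mx:+.3f}, {my:+.3f})")
```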
