
Do-it-yourself Supercomputing

SLAC computational physicist Tom Devereaux will build his next computing cluster with graphics cards. (Photo by Lauren Schenkman.)

SLAC computational theorists are getting a burst of speed from an unlikely source—graphics processing units, the ordinary hardware that draws the images on a computer screen. Tom Devereaux, a computational physicist at the Stanford Institute for Materials and Energy Science, will be harnessing GPUs together in a new computing cluster that's big on speed, but small in size—and cost.

Scientists usually run simulations on a computer's (or many computers') central processing unit, or CPU. But if CPUs are Swiss army knives, capable of handling a variety of tasks, GPUs are chainsaws, designed to perform a particular type of calculation very, very well.

"You use the same box, and there's a slot where the GPU goes in," Devereaux said. For computations that involve only integers, this simple upgrade would yield a teraflop, or a trillion floating-point operations per second. That's a thousand times faster than the solo CPU. Devereaux doesn't expect such dramatic results; he models unusual electronic behavior in solids, which means handling decimal quantities. Still, he said, "I can get a factor of five better computing power" using the GPUs.

Using a freely available compiler, Devereaux will program a slim graphics card like this one to do the work of the block of rack-mount systems behind it. (Photo by Lauren Schenkman.)

Devereaux envisions the muscle of three six-foot-tall racks of processors condensed into a few boxes that cost a fifth of the price. But he would save more than hardware dollars.

"When you stand behind a rack of computers, you get blasted with hot air," he said. Shrinking a cluster's size also reduces the amount of electricity needed to power and cool it. "If I could get one teraflop out of one machine, that would make the footprint that much smaller."

Graphics cards have been around since the early 1980s, but GPUs were chained to their job of rendering visuals until 2007. That's when Nvidia, a leading GPU manufacturer, released a freely available programming platform and compiler called Compute Unified Device Architecture, or CUDA. CUDA lets anybody program an ordinary GPU and turn its single-minded efficiency to a different purpose, such as modeling superconductivity. In the years since, general-purpose GPU computing has become a sort of grass-roots movement in do-it-yourself supercomputing, spawning an active Web community, seminars at high-performance computing conferences, and even a few university classes.
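To give a flavor of the programming model (an illustrative sketch, not code from any SLAC project; the function name, array sizes, and values here are invented), a minimal CUDA program defines a "kernel" that thousands of GPU threads execute at once, each on its own array element:

    #include <cstdio>
    #include <cuda_runtime.h>

    // The "kernel": every GPU thread runs this same function on its own
    // array element, computing y = a*x + y in parallel across the array.
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;                  // a million elements
        const size_t bytes = n * sizeof(float);

        // Prepare the data on the host (the ordinary CPU side).
        float *hx = new float[n], *hy = new float[n];
        for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

        // Copy it into the graphics card's own memory.
        float *dx, *dy;
        cudaMalloc(&dx, bytes);
        cudaMalloc(&dy, bytes);
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements.
        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        saxpy<<<blocks, threads>>>(n, 2.0f, dx, dy);

        // Copy the result back and spot-check one value.
        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
        printf("y[0] = %f (expect 4.0)\n", hy[0]);

        cudaFree(dx); cudaFree(dy);
        delete[] hx; delete[] hy;
        return 0;
    }

Compiled with Nvidia's nvcc, the same source file runs the sequential setup on the CPU and hands the data-parallel kernel to the GPU.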

Devereaux and SLAC computational chemist Todd Martinez are exploring this movement, using the CUDA compiler to benchmark GPU performance in existing systems. Devereaux is preparing to hitch a team of GPUs together to tackle the large equation sets required to model exotic states such as superconductivity and ferroelectricity.
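The article doesn't say which algorithms Devereaux will use, but solving large equation sets usually comes down to repeating a matrix-vector product many times over, the kind of uniform, independent arithmetic that maps naturally onto GPU threads. A hypothetical sketch, with every name and size invented for illustration:

    #include <cstdio>
    #include <cuda_runtime.h>

    // One thread per row: each thread forms the dot product of its row of
    // the matrix A with the vector x. Iterative solvers repeat this product
    // many times, so it is the step worth moving onto the GPU.
    __global__ void matvec(int n, const float *A, const float *x, float *y) {
        int row = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < n) {
            float sum = 0.0f;
            for (int col = 0; col < n; ++col)
                sum += A[row * n + col] * x[col];
            y[row] = sum;
        }
    }

    int main() {
        const int n = 1024;
        float *A, *x, *y;
        // Unified memory keeps the sketch short; a production code would
        // manage host-to-device copies explicitly.
        cudaMallocManaged(&A, n * n * sizeof(float));
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));

        // Fill A with the identity matrix so the expected answer is x itself.
        for (int i = 0; i < n; ++i) {
            x[i] = float(i);
            for (int j = 0; j < n; ++j) A[i * n + j] = (i == j) ? 1.0f : 0.0f;
        }

        matvec<<<(n + 255) / 256, 256>>>(n, A, x, y);
        cudaDeviceSynchronize();
        printf("y[10] = %f (expect 10.0)\n", y[10]);

        cudaFree(A); cudaFree(x); cudaFree(y);
        return 0;
    }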

"We're always looking for a leg up," Devereaux said of computational scientists. "It seems like you always spend the same amount money to get a better and better laptop. But there's nothing that's a substantial jump. In the last three years, GPUs have started offering that."

—Lauren Schenkman
  
SLAC Today, March 23, 2009