Speed Test

A new measurement standard for supercomputers owes a debt to the work of CMU's Christos Faloutsos

By Jennifer Bails


If you were shopping for a new car, you'd want to know more than just how long it takes for the vehicle to accelerate from zero to 60 mph at full throttle. A savvy buyer would also want some information about fuel economy, handling, braking and other performance metrics.

When you're comparison shopping for supercomputers, you also need performance metrics--but you can't exactly flip through Consumer Reports for the answers.

Traditionally, supercomputers have been ranked on a list known as the TOP500, a twice-yearly worldwide competition based on the Linpack benchmark. The benchmark measures how fast these computers can solve a dense system of linear equations, and results are reported in floating-point operations per second, or "flops." Last fall, China's Tianhe-1A took the TOP500 crown by reaching a processing speed of 2.507 petaflops--or roughly 2,507 trillion calculations each second.
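
For a sense of scale, the prefix arithmetic is worth spelling out: "peta" means quadrillion, so 2.507 petaflops works out to roughly 2,507 trillion operations every second. The short calculation below (an illustration only, with made-up variable names) simply restates that conversion:

petaflops = 2.507                   # Tianhe-1A's reported Linpack result
ops_per_second = petaflops * 1e15   # "peta" = 10^15, about 2.507 quadrillion
print(f"{ops_per_second:,.0f} floating-point operations per second")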

But just as you wouldn't buy a car based on its 0-to-60 mph test alone, a supercomputer's floating-point rate of execution doesn't tell you everything you need to know about its performance. That's especially true now that supercomputing problems increasingly demand more than basic number crunching; they require deep processing of vast datasets. For many real supercomputing applications, the Linpack test has become meaningless.

"Much of supercomputing has focused on computation-intensive applications for 3D physics simulations," says David A. Bader, professor in the School of Computational Science and Engineering and College of Computing at Georgia Institute of Technology. "But as we move toward more data-intensive supercomputing problems, we need a way to measure how machines will perform on these other kinds of workloads."

Bader is a member of the steering committee of a new effort called Graph 500, which is creating a measurement to rank supercomputers in a way that's meaningful for data-intensive applications. This new yardstick will help guide the design of next-generation hardware architectures and software systems, according to Richard Murphy, a principal member of the technical staff at Sandia National Laboratories in New Mexico, who led the founding of Graph 500. "To quote Lord Kelvin, 'If you cannot measure it, you cannot improve it,'" Murphy says.

Graph 500 is a grassroots collaboration among industry, academic and government experts that began over a dinner conversation at the Supercomputing 2009 conference in Portland, Ore. As its name suggests, the benchmark ranks computers by how large a graph they can build, and then how quickly they can sift through that data. Graphs are an ideal way to represent the complicated relationships between individual elements in large datasets, such as those between people on Facebook and Twitter; physical links between locations on transportation routes; patterns of disease outbreak; computer network topologies; and neuronal networks in the human brain. Computers are increasingly being used to analyze pathways through these graphs--for instance, to find the shortest connection between two nodes. The information gleaned by "traversing" (stepping through) such graphs can help an immunologist understand how viruses spread, or allow financial experts to spot fraudulent transactions.
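
As a rough illustration of what "traversing" means in practice--this is a toy sketch, not Graph 500's own code--a breadth-first search fans out from a starting node one hop at a time, so the first time it reaches the target it has found the shortest connection between the two:

from collections import deque

def fewest_hops(graph, start, goal):
    # Breadth-first search: explore neighbors one "hop" at a time, so the
    # first time the goal appears, the hop count is the shortest path length.
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if node == goal:
            return hops
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, hops + 1))
    return None  # the two nodes aren't connected at all

# A tiny, made-up friendship graph
friends = {
    "ana": ["bo", "cy"],
    "bo":  ["ana", "dee"],
    "cy":  ["ana"],
    "dee": ["bo"],
}
print(fewest_hops(friends, "ana", "dee"))   # prints 2

Graph 500 times searches of exactly this flavor, just on graphs with billions of nodes spread across thousands of processors.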

But although these graphs are everywhere, many of the most challenging large-scale examples--the kinds that computers are analyzing in real-world situations--belong to major corporations or government agencies, and they're off-limits to researchers, says Christos Faloutsos, a Carnegie Mellon computer science professor. "People would love to analyze these graphs, but companies often cannot give them out for privacy and competitive reasons," he says. Anonymizing the data isn't enough, either, because if even a single node can be re-identified, the anonymity of the entire graph can be compromised.

So with most real graphs off limits, researchers needed to generate realistic models--and that turned out to be quite a difficult mathematical problem.

"It's extremely hard to model these graphs, let alone have a generator to use that could help us design supercomputers," Bader says.

In 2004, Faloutsos and his colleagues published a paper in the proceedings of a data mining conference describing a recursive matrix--or R-MAT--model for graph generation based on fractal geometry. The R-MAT generator, Faloutsos says, is the first to simulate large-scale graphs with an unlimited number of nodes in a way that's consistent with most properties of real graphs.
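
The idea is easy to sketch: picture the graph's adjacency matrix, divide it into four quadrants, choose one quadrant with a fixed probability, then repeat inside that quadrant until the recursion bottoms out at a single cell, which becomes one edge. The toy generator below illustrates that recursion; the quadrant probabilities shown are the skewed values typically used in Graph 500-style runs, not necessarily those of any particular published experiment:

import random

def rmat_edge(scale, a=0.57, b=0.19, c=0.19, d=0.05):
    # Pick one edge in a 2^scale x 2^scale adjacency matrix by recursively
    # descending into one of four quadrants with probabilities a (both
    # indices stay in the lower half), b (only the destination moves to the
    # upper half), c (only the source moves up), and d (both move up).
    src, dst = 0, 0
    for bit in range(scale):
        r = random.random()
        if r < a:
            pass                      # both indices stay in the lower half
        elif r < a + b:
            dst |= 1 << bit           # destination moves to the upper half
        elif r < a + b + c:
            src |= 1 << bit           # source moves to the upper half
        else:
            src |= 1 << bit           # both move to the upper half
            dst |= 1 << bit
    return src, dst

def rmat_graph(scale, edge_factor=16):
    # 2^scale vertices, with edge_factor edges per vertex
    return [rmat_edge(scale) for _ in range(edge_factor * (1 << scale))]

edges = rmat_graph(scale=10)          # 1,024 vertices, 16,384 edges

Because the same skew is applied at every level of the recursion, the resulting graph is self-similar--communities within communities--which is what gives it the realistic, heavy-tailed structure of networks like the web or a social graph.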

"There's no limit to the size of the graph the generator can handle. You just have to specify how many nodes you want--a billion, a trillion, whatever is next after that--and the generator creates a graph that is highly realistic," says Faloutsos, a data mining expert who has been at the University since 1997 and recently completed a year-long sabbatical at Google. In 2010, he was named an Association for Computing Machinery fellow in recognition of this and other fundamental contributions to computer science.

The Graph 500 steering committee also took notice, realizing the important gap Faloutsos' technology could fill in their effort. Specifically, the benchmark they developed ranks a supercomputer on the size of the R-MAT graph it can generate and on how fast it can search that graph, as measured in gigateps, or billions of traversed "edges" per second. Edges are the connections in a graph between two data points--for example, the link between an Amazon.com customer and an item she has bought, which might help predict her next purchase.
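
The metric itself is simple division: count how many edges the search crossed, divide by the wall-clock time, and divide by a billion to express the rate in gigateps. A toy calculation with made-up numbers:

def gigateps(edges_traversed, seconds):
    # Traversed edges per second, expressed in billions (gigateps)
    return edges_traversed / seconds / 1e9

# Hypothetical run: a search that crossed 330 million edges in 0.05 seconds
print(gigateps(330_000_000, 0.05))   # 6.6 gigateps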

"Christos' work has really let us turn a corner and allowed us to create something realistic with similar characteristics to actual data sets," Bader says. "It has been instrumental in making the Graph 500 benchmark a success."

The first-ever Graph 500 list was unveiled this past fall at the Supercomputing 2010 meeting in New Orleans, and there aren't yet 500 computers on the list. In fact, only nine supercomputers clocked in at gigatep speed, with the U.S. Department of Energy's IBM Blue Gene-based Intrepid system coming out on top--using 8,192 of its 40,960 compute nodes to leap from one graph node to another on an artificially generated graph 6.6 billion times a second.

The next Graph 500 will be announced this summer at the International Supercomputing Conference in Hamburg, Germany. Murphy and his colleagues aim to improve the benchmark and expand the size of the rankings list. While "500" is an ambitious figure, they say it's probably not out of the question in coming years as more and more supercomputers are created to tackle data-intensive applications in everything from cybersecurity to medical informatics.

"Many of us on the Graph 500 steering committee believe that these applications will grow to be bigger than 'traditional HPC' over the next decade or so," Murphy says.

Faloutsos is delighted to see his research play a role in helping to bring about this coming revolution. "People building supercomputers are very happy with our graph generator," he says. "And that makes us very happy and proud."
For More Information: 
Jason Togyer | 412-268-8721 | jt3y@cs.cmu.edu