
There’s a New Test in Town for Ranking Supercomputers

A new benchmark measures how well supercomputers perform on the kinds of tasks most common in today’s applications.

For more than two decades, the benchmark for ranking the top 500 supercomputers has been the High-Performance LINPACK (HPL) program. But many industry experts now say that program is no longer an adequate measure of performance on today’s computational challenges. So researchers at Sandia National Laboratories have devised a new test program: High Performance Conjugate Gradients, or HPCG.

“The LINPACK program used to represent a broad spectrum of the core computations that needed to be performed, but things have changed,” says Sandia researcher Mike Heroux, who created the HPCG program. “The LINPACK program performs compute-rich algorithms on dense data structures to identify the theoretical maximum speed of a supercomputer. Today’s applications often use sparse data structures, and computations are leaner.”

The term “sparse” means that a matrix under consideration has mostly zero values. “The world is really sparse at large sizes,” says Heroux. “Think about your social media connections: There may be millions of people represented in a matrix, but your row, the people who influence you, holds only a few entries. So the effective matrix is sparse. Do other people on the planet still influence you? Yes, but through people close to you.”
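To make the storage argument concrete, here is a toy sketch in Python; the “influence” matrix and all names are invented for illustration, not taken from HPCG. A dictionary keyed by (row, column) stores only the nonzero entries, while dense storage pays for every entry, zero or not:

```python
# A minimal sketch of sparse-matrix storage using a plain Python dict
# keyed by (row, column). All values here are illustrative.

def dense_entry_count(n):
    """A dense n-by-n matrix stores every entry, zero or not."""
    return n * n

# A toy "influence" matrix for 5 people: row i holds the few people
# who directly influence person i. Every other entry is implicitly zero.
sparse = {
    (0, 1): 1.0, (0, 3): 0.5,
    (1, 0): 0.7,
    (2, 4): 0.2,
    (3, 0): 0.9, (3, 2): 0.4,
    (4, 3): 0.6,
}

n = 5
print("dense storage:", dense_entry_count(n), "entries")   # 25
print("sparse storage:", len(sparse), "nonzeros")          # 7
```

At millions of rows the gap becomes decisive: dense storage grows with the square of the matrix size, while sparse storage grows only with the number of actual relationships.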

Similarly, for scientific problems whose solutions require billions of equations, most matrix coefficients are zero. For example, when measuring pressure differentials in a 3D mesh, the pressure at each node depends directly on its neighbors’ pressures; the pressure in faraway places is felt only through those near neighbors. “The cost of storing all matrix terms, as the LINPACK program does, becomes prohibitive, and the computational cost even more so,” notes Heroux. A computer may be fast when computing with dense matrices, and thus score high on the LINPACK test, but the HPCG test is a more realistic measure of how it will perform on practical applications.

To better reflect the practical elements of current supercomputing applications, Heroux built HPCG around a preconditioned iterative method for solving systems of billions of linear equations in billions of unknowns. “Iterative” means the program starts with an initial guess at the solution and then computes a sequence of improved answers. Preconditioning uses other properties of the problem to converge to an acceptably close answer more quickly.
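The iterative idea can be sketched in a few lines of Python. This is not HPCG’s code, just a minimal, unpreconditioned conjugate-gradient loop applied to a tiny tridiagonal system of the kind mesh problems produce; the matrix and right-hand side are invented for illustration:

```python
# Plain conjugate gradients (CG): start from a zero guess and refine it.
# A toy sketch, not the HPCG benchmark code.

def matvec(A, x):
    """Matrix-vector product for a dense list-of-lists matrix."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cg(A, b, tol=1e-10, max_iter=100):
    x = [0.0] * len(b)       # initial guess
    r = b[:]                 # residual of that guess
    p = r[:]                 # search direction
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]       # improve answer
        r = [ri - alpha * api for ri, api in zip(r, Ap)]    # update residual
        rs_new = dot(r, r)
        if rs_new < tol:     # close enough: stop iterating
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Tridiagonal "1D Laplacian": the sparse structure typical of mesh problems.
A = [[2.0, -1.0, 0.0, 0.0],
     [-1.0, 2.0, -1.0, 0.0],
     [0.0, -1.0, 2.0, -1.0],
     [0.0, 0.0, -1.0, 2.0]]
b = [1.0, 1.0, 1.0, 1.0]
print([round(v, 6) for v in cg(A, b)])  # → [2.0, 3.0, 3.0, 2.0]
```

Each pass through the loop is one “improved answer” in Heroux’s description; for a symmetric positive-definite system like this, the residual shrinks until it falls below the tolerance.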

“To solve the problems our mission requires, which might range from a full weapons simulation to a wind farm, we need to describe physical phenomena with high fidelity, such as the pressure differential of a fluid flow simulation,” Heroux explains. “For a mesh in a 3D domain, you need to know, at each grid node, the relations with values at the other nodes. A preconditioner makes the iterative method converge more quickly, so a multigrid preconditioner is applied at each iteration.”
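HPCG itself applies a multigrid preconditioner, which is well beyond a short sketch. As an illustrative stand-in, the Python below uses a much simpler diagonal (Jacobi) preconditioner purely to show where preconditioning enters the iteration; the solver structure, names, and test matrix are all invented for illustration, not taken from HPCG:

```python
# Preconditioned conjugate gradients (PCG). The precond() callback is where
# HPCG would apply multigrid; here a simple Jacobi (diagonal) preconditioner
# stands in to show the structure.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pcg(A, b, precond, tol=1e-12, max_iter=100):
    x = [0.0] * len(b)
    r = b[:]                  # residual
    z = precond(r)            # preconditioned residual
    p = z[:]                  # search direction
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) < tol:
            break
        z = precond(r)        # preconditioner applied at each iteration
        rz_new = dot(r, z)
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# Same mesh-like tridiagonal example as before.
A = [[2.0, -1.0, 0.0, 0.0],
     [-1.0, 2.0, -1.0, 0.0],
     [0.0, -1.0, 2.0, -1.0],
     [0.0, 0.0, -1.0, 2.0]]
b = [1.0, 1.0, 1.0, 1.0]

# Jacobi preconditioner: divide each residual entry by A's diagonal.
jacobi = lambda r: [ri / A[i][i] for i, ri in enumerate(r)]

print([round(v, 6) for v in pcg(A, b, jacobi)])
```

Swapping the `precond` callback is the whole design point: a multigrid cycle, as in HPCG, would slot into the same place in the loop but converge in far fewer iterations on large mesh problems.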

Supercomputer vendors such as NVIDIA Corp., Fujitsu Ltd., IBM, and Intel Corp., along with some Chinese companies, write versions of HPCG optimized for their own platforms.

On the HPCG TOP500 list, Trinity, the Sandia and Los Alamos National Laboratory supercomputer, has risen to No. 3, making it the top Department of Energy system; it ranks No. 7 overall on the LINPACK list. HPCG better reflects Trinity’s design choices.

Heroux invested his time in developing HPCG out of a strong desire to better assure the safety and effectiveness of the U.S. stockpile. The supercomputing community needed a benchmark that better reflected the needs of national security scientific computing.
