Creating Cyberinfrastructure, 2005

Cray XT3 serial #1: The newest stage in the evolution of HPC technology, a major boost for U.S. computational science

Big Ben Goes to Work

PHOTO: Ben Roethlisberger
PAINTING: Ben Franklin

Installed in late 2004 and early 2005, officially unveiled in July, and in full production by October, Big Ben supports research nationwide as part of the NSF TeraGrid. Acquired through a $9.7 million grant from the National Science Foundation in September 2004, Big Ben was the first system shipped in Cray Inc.'s most advanced line of HPC systems, the XT3. It comprises 2,090 processors with an overall peak performance of 10 teraflops: 10 trillion calculations per second.

Named both for the Pittsburgh Steelers quarterback and for Ben Franklin, the nation's first great scientist, Big Ben is a leading-edge computing resource on the TeraGrid. LeMieux, until now PSC's lead system, has been one of the most productive TeraGrid systems, and although LeMieux (3,000 processors, six teraflops) remains a much-used resource, Big Ben has begun to take over its role as the TeraGrid resource best suited for very large-scale, demanding projects.

On a per-processor basis, Big Ben is 2.4 times faster than LeMieux. Its most significant technological advance, however, is not sheer processor speed but inter-processor bandwidth: the speed at which processors share information with each other. Because of this, Big Ben has demonstrated performance nearly 13 times better than LeMieux on key applications when 1,000 or more processors are used. Many areas of research will benefit from this, including nanotechnology, design of new materials, protein-dynamics studies that lead to new therapeutic drugs, modeling of earthquake soil vibration, and severe storm forecasting.
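One intuitive way to see why interconnect bandwidth matters more as processor counts grow is a simple strong-scaling model in which each simulation step costs compute time plus communication time: the compute share shrinks as processors are added, while the communication cost does not. The Python sketch below is purely illustrative; apart from the 2.4x per-processor ratio quoted above, the work sizes, message sizes, and bandwidth figures are made-up placeholders, not measured values for LeMieux or Big Ben.

# Illustrative strong-scaling model (hypothetical numbers only, not measured
# figures for LeMieux or Big Ben): per-step time = compute time + communication
# time. Compute work per processor shrinks as processors are added, while the
# per-step communication cost stays roughly fixed, so interconnect bandwidth
# increasingly sets the pace at large processor counts.

def step_time(total_work, msg_bytes, nproc, flops_per_proc, interconnect_bw):
    """Estimated seconds per simulation step for one processor's share."""
    compute = (total_work / nproc) / flops_per_proc   # shrinks with more processors
    communicate = msg_bytes / interconnect_bw         # boundary exchange, roughly fixed
    return compute + communicate

# Placeholder machine parameters: the 2.4x per-processor ratio comes from the
# text above; the absolute values and bandwidths are invented for illustration.
lemieux = dict(flops_per_proc=1.0e9, interconnect_bw=0.15e9)   # bytes/s
big_ben = dict(flops_per_proc=2.4e9, interconnect_bw=1.0e9)

total_work = 1.0e12   # floating-point operations per step (hypothetical)
msg_bytes = 2.0e8     # bytes exchanged per processor per step (hypothetical)

for nproc in (64, 256, 1024, 2048):
    t_old = step_time(total_work, msg_bytes, nproc, **lemieux)
    t_new = step_time(total_work, msg_bytes, nproc, **big_ben)
    print(f"{nproc:5d} processors: estimated speedup ~{t_old / t_new:.1f}x")

With these placeholder numbers the estimated speedup starts near the 2.4x per-processor ratio at small counts and climbs as communication becomes the bottleneck; the observed 13x on key applications reflects additional effects, such as latency, message patterns, and network topology, that this toy model leaves out.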

Big Ben Ribbon-Cutting (July 20, 2005): NSF Director Comments

LOGO: National Science Foundation

“With this system,” said Arden Bement, director of the National Science Foundation, “we are fulfilling an important national goal — providing one of the fastest computing capabilities to the U.S. research community. We celebrate a significant leap in science and engineering research and education capacity. Richness of data, combined with powerful computing facilities and innovative people, promises a multitude of scientific breakthroughs. The Pittsburgh Supercomputing Center has all of these.”

NSF Awards $52 Million to PSC for TeraGrid Operations

PHOTO: Jim Kasdorf

Jim Kasdorf, PSC director of special projects

In August 2005, the National Science Foundation awarded $52 million over five years to support operations of the Pittsburgh Supercomputing Center (PSC) as a leading partner in the TeraGrid, NSF’s program to provide national cyberinfrastructure for education and research. Built over the last four years, the TeraGrid is the world’s largest, most comprehensive distributed cyberinfrastructure for open scientific research. The award to PSC is part of a five-year, $150 million NSF award to the eight TeraGrid partner institutions.

Much as physical infrastructure such as power grids, telephone lines, and water systems enables modern life, cyberinfrastructure makes possible much of modern scientific research. Through high-performance network connections, the TeraGrid integrates high-performance computers, data resources and tools, and high-end experimental facilities at locations around the country.

A leadership role in creating national cyberinfrastructure

“TeraGrid unites the scientific and engineering community so that larger, more complex scientific questions can be answered,” said Arden Bement, director of the National Science Foundation. “Solving these larger challenges will, in turn, motivate the development of the next generation of cyberinfrastructure.”

“This award represents an opportunity to play an important leadership role,” said PSC co-scientific directors Michael Levine and Ralph Roskies. “We look forward to meeting the challenges ahead with TeraGrid in harnessing the full range of information technologies for coordinated, distributed, productive work enabling leading-edge science.”

Within the TeraGrid, PSC has taken leadership responsibility for user services and cybersecurity, while emphasizing capability computing: the ability to tackle the most computationally demanding problems. PSC leads the TeraGrid user services working group, which coordinates the effort to provide thousands of researchers nationwide with consulting and other support so that they can use the TeraGrid productively, and also leads the TeraGrid security working group, which guides TeraGrid security policy.

ILLUSTRATION: Map of TeraGrid partners
LOGO: TeraGrid
TeraGrid Resource Providers
  • Indiana University
  • National Center for Supercomputing Applications
  • Oak Ridge National Laboratory
  • Pittsburgh Supercomputing Center
  • Purdue University
  • San Diego Supercomputer Center
  • Texas Advanced Computing Center
  • The University of Chicago/Argonne National Laboratory

Big Ben Delivers in Real Time via TeraGrid

VISUALIZATION: Shockwaves (divergence of velocity) produced when two fluids of different density contact each other in shear layers with turbulence

In September 2005, scientists used special PSC-developed software to demonstrate real-time access from a remote location to data from simulations running on Big Ben. The demonstration took place at iGrid2005 in San Diego, a conference on the scientific use of high-performance networks and grid computing. From the iGrid show floor, the researchers, Paul Woodward, David Porter, and colleagues at the University of Minnesota, used Big Ben to simulate turbulent fluid dynamics in shear-driven mixing layers.

As the numbers crunched in Pittsburgh, the researchers volume-rendered images from the data and displayed them in San Diego. Over the course of the 90-minute demonstration, Big Ben delivered a sustained 200 megabits per second via the TeraGrid, with burst rates nearing 800 megabits per second. To accomplish this, the researchers relied on a new PSC capability, Portals Direct I/O (PDIO), that can route simulation data from Big Ben's processors in real time to remote users anywhere on the network. Data fragments written by each Big Ben processor are assembled by PDIO into complete files at the receiving end. PDIO controls the rate of data transfer, even slowing down the application if necessary. With PDIO, Woodward and Porter tested their demonstration over slower networks before using the TeraGrid backbone. PDIO has run stably over both short (minutes) and long (hours) periods of use.
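This article does not describe PDIO's programming interface, so the following Python sketch is only a conceptual illustration of the data flow described above: a hypothetical receiver that collects tagged fragments (file name, byte offset, payload) arriving from many compute processors, reassembles them into complete files, and returns a backpressure delay that senders could use to slow the application when data arrives faster than the target rate. The class, method names, and throttling rule are invented for illustration and are not the actual PDIO software.

# Conceptual sketch (not the actual PDIO code or API): a receiver that
# reassembles per-processor fragments into complete files and feeds a
# rate limit back to the senders. Fragment format, class names, and the
# throttling rule are all invented for illustration.

import time
from collections import defaultdict

class FragmentReceiver:
    """Hypothetical stand-in for a PDIO-style receiving daemon (illustration only)."""

    def __init__(self, max_bytes_per_sec=100e6):
        self.max_bytes_per_sec = max_bytes_per_sec   # target sustained transfer rate
        self.files = defaultdict(dict)               # filename -> {offset: payload}
        self.window_start = time.monotonic()
        self.window_bytes = 0

    def receive(self, filename, offset, payload):
        """Store one fragment; return seconds the sender should wait before
        its next send (a nonzero delay effectively slows the application)."""
        self.files[filename][offset] = payload
        self.window_bytes += len(payload)
        elapsed = time.monotonic() - self.window_start
        target = self.window_bytes / self.max_bytes_per_sec
        return max(0.0, target - elapsed)            # backpressure if arriving too fast

    def assemble(self, filename):
        """Concatenate stored fragments in offset order into the complete file body."""
        parts = self.files[filename]
        return b"".join(parts[off] for off in sorted(parts))

# Toy usage: three "processors" each contribute one fragment of the same file.
rx = FragmentReceiver(max_bytes_per_sec=25e6)
for offset in (0, 4, 8):
    delay = rx.receive("mixing_layer_step042.dat", offset, b"data")
    # a real sender would sleep for `delay` before writing its next fragment
print(rx.assemble("mixing_layer_step042.dat"))       # prints b'datadatadata'

In the real system, rate control and wide-area transport are far more involved; the point of the sketch is only the fragment-reassembly and feedback loop the paragraph describes.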
