Creating Cyberinfrastructure, 2008

The TeraGrid is the world’s most comprehensive distributed cyberinfrastructure for open scientific research. As a major partner in this National Science Foundation program, PSC helps to shape the vision and progress of the TeraGrid.

Ultraviolet: Almost Visible

The National Science Board in April authorized NSF to award grants for PSC to acquire and operate a large-scale system based on Silicon Graphics’ newest shared-memory architecture, called Project Ultraviolet. With this system — to be integrated into the TeraGrid — PSC will help lead U.S. scientists and engineers into the next generation of scientific computing.


PSC scientific visualization specialist Greg Foss created this rendering of the proposed new PSC system.

PSC’s Project Ultraviolet-based system will comprise more than 100,000 next-generation Intel cores, creating a shared-memory system of unprecedented scale (more than 100 terabytes of memory), which will significantly extend TeraGrid capability for data-intensive and non-traditional applications, such as epidemiological modeling, machine learning and game theory.

An advanced SGI high-bandwidth, low-latency interconnect called NUMAlink5 will link the processors, providing both shared-memory support and accelerated message passing among processors to enhance scalability. The mass-storage system will incorporate ZEST, an innovative disk/file system developed at PSC (see sidebar, PSC’s ZEST & NRBSC Win Awards for Innovation).

The new system is one of three being implemented nationally through an NSF initiative to provide “petascale” computing for science and engineering by 2010. Petascale refers to supercomputers whose processors can operate simultaneously on the same task at “petaflop” levels of performance: a quadrillion (10¹⁵) calculations per second, roughly equivalent to about 100,000 of the latest laptop systems. The other two new NSF systems use distributed-memory architectures. Because PSC’s system is a shared-memory system, it will complement the others and extend the capability available to U.S. scientists and engineers.
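
For a rough sense of the arithmetic behind that comparison, the short sketch below divides one petaflop by an assumed sustained laptop rate of about 10 gigaflops; the laptop figure is an illustrative assumption, not a number from this report.

    #include <stdio.h>

    int main(void) {
        /* Back-of-the-envelope check of the petascale comparison above.
           The ~10 gigaflop laptop rate is an assumption for illustration. */
        double petaflop = 1.0e15;   /* 10^15 calculations per second */
        double laptop   = 1.0e10;   /* assumed sustained laptop rate */
        printf("laptops per petaflop: about %.0f\n", petaflop / laptop);
        return 0;
    }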

Delivery of the new system will begin in fall 2009. Earlier in 2009, PSC plans to install a small prototype for testing and optimizing application software to run on the new architecture.

PSC’s ZEST & NRBSC Win Awards for Innovation
Last November at the Supercomputing 2007 conference in Reno, Nevada, HPCwire, a leading electronic news outlet for high-performance computing and communication, awarded two of its 2007 Readers' Choice Awards for innovation to PSC:

  • The National Resource for Biomedical Supercomputing, PSC's biomedical research program (see p. 8), won for "Most Innovative Use of HPC in the Life Sciences," and
  • ZEST, a PSC-developed file system that facilitates scientific computing on very large-scale (petascale) systems, won for “Most Innovative HPC Storage Technology or Product.”
Developed by PSC’s advanced systems group, ZEST is a distributed file-system infrastructure that greatly accelerates write bandwidth, allowing large-scale applications to save their state to disk (“checkpointing”) at higher rates than other available file systems.
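
To illustrate what checkpointing means here, the minimal sketch below shows the kind of periodic state dump whose speed a file system like ZEST is designed to improve; it uses ordinary C file I/O rather than ZEST’s actual interface, and the file name and state size are hypothetical.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* Hypothetical simulation state: 1 MB of doubles. */
        enum { STATE_BYTES = 1 << 20 };
        double *state = calloc(STATE_BYTES / sizeof(double), sizeof(double));
        if (!state) return 1;

        for (int step = 0; step < 100; step++) {
            /* ... advance the simulation one step ... */
            if (step % 10 == 0) {
                /* Checkpoint: write the full state so the run can restart
                   after a failure. At petascale these writes are enormous,
                   which is why write ("checkpointing") bandwidth matters. */
                FILE *f = fopen("checkpoint.bin", "wb");  /* hypothetical path */
                if (f) {
                    fwrite(state, 1, STATE_BYTES, f);
                    fclose(f);
                }
            }
        }
        free(state);
        return 0;
    }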

Pople & Salk: New SGI Shared-Memory Systems

In March, PSC took delivery of two new SGI Altix® 4700 systems. One, named “Pople” — for Nobel-Prize-winning chemist John Pople — features 768 processors, 1.5 terabytes of memory and peak performance of 5.0 teraflops. Pople became a TeraGrid production resource in July, substantially increasing the “shared memory” capability available through NSF for U.S. science and engineering research.

The other, named “Salk” for Jonas Salk, was acquired with support from NIH’s National Center for Research Resources for NRBSC, PSC’s biomedical program. Salk features 144 processors and 288 gigabytes of memory and is devoted exclusively to biomedical research. “To make a system of this scale openly available for biomedical research is unprecedented,” said NRBSC director Joel Stiles.

Both new systems feature shared memory, which means that the system’s main memory can be directly accessed by all processors, as opposed to distributed memory, in which each processor directly accesses only its own memory. Because all processors share a single view of the data, a shared-memory system is relatively easy to program. The usability of these two new systems has attracted new researchers working in data analysis, computer science and other areas.
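
As a concrete illustration of that programming difference, the short sketch below (a generic example, not code from either system) sums an array using OpenMP-style shared-memory threading: every thread reads the same array directly. On a distributed-memory machine the array would instead be partitioned across nodes and the partial sums combined with explicit message passing (for example, MPI).

    #include <stdio.h>

    int main(void) {
        enum { N = 1000000 };
        static double data[N];      /* one array, visible to every thread */
        double sum = 0.0;

        for (int i = 0; i < N; i++)
            data[i] = 1.0;

        /* Shared memory: threads read data[] directly; no messages needed.
           Compiled without OpenMP support, the pragma is ignored and the
           loop simply runs serially. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += data[i];

        printf("sum = %.0f\n", sum);  /* expected: 1000000 */
        return 0;
    }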

PSC and TeraGrid

PSC is actively involved in TeraGrid leadership. Scientific director Ralph Roskies serves on the executive steering committee of the Grid Infrastructure Group that guides TeraGrid. Co-scientific director Michael Levine is PSC’s representative to the TeraGrid Forum — TeraGrid’s principal decision-making group.

PSC TeraGrid team

PSC staff whose work contributes to TeraGrid include (l to r): Anirban Jana, Phillip Blood, Laura McGinnis (seated), Marcela Madrid, Nick Nystrom (seated), Shawn Brown, Kathy Benninger, Shandra Williams, James Marsteller, Derek Simmel, Sergiu Sanielevici; (rear, l to r): Rob Light, Ed Hanna, Joseph Lappa, R. Reddy. Not in picture: Michael Schneider, Josephine Palencia, David O'Neal, Rich Raymond & J. Ray Scott.

Other PSC staff with TeraGrid leadership roles include Sergiu Sanielevici, Area Director for User Support, and Jim Marsteller, head of PSC security, who chairs the TeraGrid Security Working Group. Laura McGinnis plays a lead role in TeraGrid education, outreach and training (EOT) activities. PSC director of systems and operations, J. Ray Scott, leads the TeraGrid effort in Data Movement, and PSC director of strategic applications, Nick Nystrom, leads the TeraGrid Extreme Scalability Working Group, which fosters planning to meet the challenges of deploying extreme-scale resources into the TeraGrid.

PSC staff members serve on all of TeraGrid’s working groups.
