Pittsburgh Supercomputing Center
NEWS RELEASE


FOR IMMEDIATE RELEASE                    CONTACT:
November 30, 1999                               Michael Schneider
                                                Pittsburgh Supercomputing Center
                                                412-268-4960
                                                schneider@psc.edu
                                       
                                                Sergiu Sanielevici
                                                Pittsburgh Supercomputing Center
                                                412-268-4960
                                                sergiu@psc.edu

Planetary Terascale Project Wins SC99 High-Performance Computing Award

PITTSBURGH -- At the HPC Games, held Nov. 17 at SC99 in Portland, Oregon, an intercontinental team of computational scientists and networking and systems specialists in Stuttgart (Germany), Manchester (UK), Pittsburgh (USA) and Tsukuba (Japan) won the top prize for the most challenging scientific application. Executed live from the Portland convention center, the project linked a planetary grid of HPC systems in four countries.

A molecular dynamics simulation with more than two million particles ran concurrently on a Hitachi SR8000 at ETL (Tsukuba) and on CRAY T3Es at the Pittsburgh Supercomputing Center, CSAR (Manchester) and HLRS (Stuttgart). This Ter(r)acomputer, spanning more than 10,000 miles, has a total peak performance of 2.2 TFlops.

Re-entry vehicle simulation

Using a flow solver called URANUS, the team also simulated the crew-rescue vehicle (X-38) of the International Space Station. With 3.6 million grid cells on 1,536 T3E processors, the application computed the flow around the vehicle, which was visualized in a collaborative session with the European Networking Demonstration booth.

Stephen Pickles of Manchester Computing contributed a third application, analyzing radio astronomy data in search of pulsars. For this application, sufficient bandwidth is crucial. Among the three T3E systems, a system of networks consisting of JANET and Teleglobe (UK), DFN (Germany), and Abilene and vBNS (USA) delivered sustained bandwidths in excess of one megabit per second.

The Manchester application adapts to actual bandwidth conditions by varying the amount of work it assigns to each machine. The molecular dynamics and fluid dynamics applications are optimized to mask latency by overlapping communication and computation.
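The sketch below illustrates this latency-masking pattern in plain MPI: each process posts non-blocking boundary exchanges, computes on local data while the wide-area messages are in flight, and completes the boundary work only after the messages arrive. The buffer names, neighbor ranks and loop counts are illustrative placeholders, not taken from the actual molecular dynamics or URANUS codes.

    #include <mpi.h>
    #include <stdlib.h>

    #define HALO 1024                 /* illustrative boundary size, not the real value */

    int main(int argc, char **argv)
    {
        int rank, size, left, right, step;
        double *send_halo, *recv_halo;
        MPI_Request reqs[2];
        MPI_Status  stats[2];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        left  = (rank - 1 + size) % size;    /* neighbors in a simple ring */
        right = (rank + 1) % size;

        send_halo = (double *) malloc(HALO * sizeof(double));
        recv_halo = (double *) malloc(HALO * sizeof(double));

        for (step = 0; step < 100; step++) {
            /* 1. Start the slow, wide-area boundary exchange without blocking. */
            MPI_Irecv(recv_halo, HALO, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
            MPI_Isend(send_halo, HALO, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

            /* 2. Work on interior particles or cells that need no remote data;
                  this computation overlaps the intercontinental latency.       */
            /* compute_interior();  -- hypothetical local-work routine */

            /* 3. Only then wait for the boundary data and finish the step.     */
            MPI_Waitall(2, reqs, stats);
            /* compute_boundary(recv_halo);  -- hypothetical boundary routine */
        }

        free(send_halo);
        free(recv_halo);
        MPI_Finalize();
        return 0;
    }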

Message passing between the heterogeneous machines that make up the Ter(r)acomputer is carried out by PACX-MPI, a library developed at HLRS (Stuttgart). PACX-MPI implements a large subset of the MPI-1 standard, allowing most application codes that use MPI to be "grid-enabled" immediately.
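As a rough illustration of what "grid-enabling" means in practice, the fragment below is an ordinary MPI-1 program that uses only the global communicator and standard point-to-point and collective calls. Because PACX-MPI implements this part of the standard, code written this way can run across the coupled machines without source changes; build and launch details vary by site and are omitted here.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int    rank, size;
        double local = 1.0, global = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* ranks span all coupled sites */
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* A collective over MPI_COMM_WORLD covers every machine in the
           metacomputer; the library carries the inter-site traffic over the WAN. */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d processes = %f\n", size, global);

        MPI_Finalize();
        return 0;
    }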

The SC99 HPC Games invited researchers from around the world to "demonstrate the cool stuff they can do on high performance machines." A team led by Chuck Koelbel of the National Science Foundation judged the presentations on Wednesday, Nov. 17, evaluating projects for speed, distance between processors, and style. Each entry received an individualized award at the Thursday, Nov. 18 awards ceremony.

Details about PACX-MPI, the networks, the applications, and the metacomputing team can be found at http://www.hlrs.de/news/events/1999/sc99/ .

# # #