Pittsburgh Supercomputing Center 

Advancing the state-of-the-art in high-performance computing,
communications and data analytics.


Four research projects involving PSC staff and computing resources have been selected this year as candidates for the 2014 HPCwire Readers' Choice Awards. These projects span the scientific spectrum.


Awards Received by PSC Users and Staff

PSC is very proud of the wide recognition its users and its staff have received as a result of their research here.

2013

HPCwire Readers’ Choice Awards

Selected by vote of HPCwire’s readership

  • Best use of HPC in Life Sciences
    Pittsburgh Supercomputing Center and SGI “Blacklight” – PSC’s Blacklight supercomputer has helped researchers overcome limitations in complex DNA and RNA sequencing tasks, identifying expressed genes in nonhuman primates, petroleum-digesting soil microorganisms and bacterial enzymes that may help convert non-food crops into usable biofuels.
  • Best use of HPC in “Edge” HPC Application
    Pittsburgh Supercomputing Center, the University of Notre Dame and VecNet Cyber Infrastructure (CI) – A project of PSC’s new Public Health Group and collaborators at Notre Dame, VecNet CI is building a computer system that will enable VecNet (a partnership of academic and industrial researchers, local public health officers, and foundation and national decision makers) to test ideas for eradicating malaria before trying them in the real world.

HPCwire Editors’ Choice Awards

Selected by HPCwire editors

  • Best Application of “Big Data” in HPC
    Pittsburgh Supercomputing Center and Cray YarcData Urika “Sherlock” – PSC’s newest supercomputing resource, Sherlock is, in part, hard-wired to solve what are known as graph problems: questions concerning complex networks that can’t be understood in isolated pieces. Sherlock is busy shedding light on cancer protein and gene interactions, as well as performing smarter information retrieval in complex documents such as Wikipedia.
  • Best use of HPC in Financial Services
    XSEDE, PSC SGI “Blacklight” and the San Diego Supercomputer Center (SDSC) Cray “Gordon” – Earlier work on PSC’s Blacklight enabled researchers to prove that high-volume automated traders were exploiting market reporting rules to make “invisible” trades that manipulated the markets. In October, the New York Stock Exchange and NASDAQ changed their rules to close this loophole. Current work with Blacklight and SDSC’s Gordon, through the National Science Foundation’s XSEDE network of supercomputing centers, seeks to make moment-by-moment analysis of market activity possible for regulators.

2011

HPCwire Readers' Choice Award

Selected by vote of HPCwire’s readership

  • Best Use of HPC in an Edge HPC application

The award recognizes PSC for its work with Blacklight, PSC’s SGI® Altix® UV 1000 system, the world’s largest shared-memory system and a resource of XSEDE, the National Science Foundation cyberinfrastructure program. Because of Blacklight’s large amount of shared memory, scientists have been able to access up to 16 terabytes at a time, a capability that has enabled ground-breaking work in several fields, including areas of computer science, such as natural language processing (NLP) and machine learning (ML), that haven’t traditionally made substantial use of HPC.

2009

HPCwire Readers' Choice Award

Selected by vote of HPCwire’s readership

  • Top Supercomputing Achievement

The award recognized PSC's research in H1N1 modeling as part of the National Institutes of Health's Models of Infectious Disease Agent Study (MIDAS) project, which supports research to simulate disease spread and evaluate intervention strategies. In this work, PSC scientist Shawn Brown collaborated with the Pittsburgh MIDAS Center of Excellence, led by Donald Burke, M.D., of the University of Pittsburgh Graduate School of Public Health.

2008

SIGCOMM Test of Time Award

PSC senior network engineering specialist Matt Mathis was awarded the Test of Time Award from the Special Interest Group on Data Communication (SIGCOMM) of the Association for Computing Machinery (ACM) for a 1997 paper, “The macroscopic behavior of the TCP congestion avoidance algorithm”. The paper, co-authored with former PSC staff members Jamshid Mahdavi and Jeff Semke and with Teunis Ott (then at Bellcore), was published in the ACM journal Computer Communication Review.

TG08 Best Demonstration

A PSC team of two scientists and a University of Pittsburgh student won the award for “Best Demonstration” at TG08, the annual conference of the TeraGrid, the National Science Foundation's program of cyberinfrastructure for U.S. science and education. “WiiMD”, an innovative project that merges the video-game technology of the Nintendo Wii with interactive supercomputing, was developed by PSC staff Shawn Brown and Phil Blood and student intern Jordan Soyke.

TG08 Student Science Competition

Three PSC-mentored high school students took first, second, and third prizes in the Science Competition at TG08, the annual conference of the TeraGrid, the National Science Foundation's program of cyberinfrastructure for U.S. science and education. Matthew Stoffregen won first prize; Shivam Verma placed second; and Srihari Seshadri, also a PSC student employee, placed third.

TG08 TeraGrid Student Research Competition

In the TeraGrid Student Research competition at TG08, Maxwell Hutchinson, a Carnegie Mellon University student and PSC student programmer, came in second in undergraduate research.

2007

HPCwire Readers' Choice Awards

Selected by vote of HPCwire’s readership

  • Most Innovative Use of HPC in the Life Sciences

This award honors the National Resource for Biomedical Supercomputing (NRBSC), PSC's biomedical research program.

  • Most Innovative HPC Storage Technology or Product

This award recognizes ZEST, a PSC-developed file system that facilitates scientific computing on very large-scale (petascale) systems.


2006

SC06 Analytics Challenge Award

A team of scientists and engineers from Carnegie Mellon University, the University of Texas, the University of California, Davis, and the Pittsburgh Supercomputing Center won the Analytics Challenge Award at SC06 for their work using PSC's Cray XT3 to realistically simulate earthquake ground motion and thereby better assess the seismic hazard to populated, earthquake-prone basins. SC06 was the 2006 international conference on high-performance computing, networking, data storage and analysis.

The award-winning project was officially titled “Remote Runtime Steering of Integrated Terascale Simulation and Visualization.” The full team comprised Hongfeng Yu, University of California, Davis (technical lead); Tiankai Tu, Carnegie Mellon (team lead); Jacobo Bielak, Carnegie Mellon; Omar Ghattas, University of Texas at Austin; Julio C. Lopez, Carnegie Mellon; Kwan-Liu Ma, University of California, Davis; David O'Hallaron, Carnegie Mellon; Leonardo Ramirez-Guzman, Carnegie Mellon; Nathan Stone, PSC; Ricardo Taborda-Rios, Carnegie Mellon; and John Urbanic, PSC.

CLADE Award

Jeff Gardner, PSC; Vladimir Litvin, California Institute of Technology; and Evan Turner, Texas Advanced Computing Center, won the best paper award at the 2006 CLADE (Challenges of Large Applications in Distributed Environments) workshop in Paris, France.

The paper, "Creating Personal Adaptive Clusters for Managing Scientific Jobs in a Distributed Computing Environment," described a system for aggregating processors on demand from across the distributed resources of the National Science Foundation TeraGrid.

The virtual environment described in the paper was built on top of an existing middleware tool called GridShell. The combination, including the pre-existing middleware, was renamed MyCluster. In production use on the TeraGrid, as of May 2006 it had already handled about 100,000 jobs and 900 teraflops of scientific computation.

2005

HPC Analytics Challenge Award

SPICE (Simulated Pore Interactive Computing Environment), a project led by Peter Coveney, University College London, with PSC's Sergiu Sanielevici as a co-author, won the HPC Analytics Challenge award, given for the first time at SC2005 to recognize innovative techniques in rigorous data analysis, advanced networks and high-end visualization applied to solve a complex, real-world problem.

2004

HPCwire Readers' Choice Awards

Most Innovative Implementation

Most Innovative HPC Technology

Most Important Emerging Technology

PSC's newest system, based on Sandia National Laboratories' "Red Storm" design (now designated "XT3" by Cray), won three HPCwire Readers' Choice Awards: for Most Innovative Implementation, for Most Innovative HPC Technology, and for Most Important Emerging Technology.

2003

Gordon Bell Prize for High Performance Computing

SC2003 Special Accomplishment Based on Innovation

 The "Quake Group", led by Jacobo Bielak, Omar Ghattas, and David O'Hallaron of CMU and George Biros of the University of Pennsylvania won the both the Gordon Bell prize for High Performance Computing and the SC2003 Award for Special Accomplishment Based on Innovation for developing earthquake simulations on the TCS that play an important role in reducing seismic risk. Other group members include Volkan Akcelik, Ioannis Epanomeritakis, Antonio Fernandez, Eui Joong Kim, Julio Lopez, and Tiankai Tu of Carnegie Mellon, and Greg Foss and John Urbanicof PSC.

SC2003 HPC Challenge

Most Innovative Data-Intensive Application

The TeraGyroid project, led by Peter Coveney, University College London, and Bruce Boghosian, Tufts University, coupled cutting-edge grid technologies, high-performance computing, visualization and computational steering capabilities to produce a major leap forward in soft condensed matter simulation. During SC2003, the largest Lattice Boltzmann simulation ever (a 1024³ lattice) was carried out on PSC's TCS, interacting with smaller simulations at Daresbury Lab (512³) and 128³ lattices steered on a host of systems on the UK RealityGrid and on the US TeraGrid. Simulation checkpoints were migrated back and forth across the Atlantic at 300-400 Mbps. Collaborative computational steering and visualization were demonstrated, using the TeraGrid visualization cluster at ANL and SGI Onyx systems in Manchester and Phoenix.

SC2003 Best Student Paper

"A New Parallel Kernel-Independent Fast Multipole Method" by Lexing Ying, George Biros, Denis Zorin, and Harper Langston (New York University) presented a new adaptive fast multipole algorithm and its parallel implementation, which was tested on up to 3000 processors of LeMieux at PSC. The authors solved viscous flow problems with up to 2.1 billion unknowns, reaching 1.6 Tflops in certain parts of the computation, and a sustained rate of 1.13 Tflops. This paper was also a finalist for the Gordon Bell prize.

Computerworld Honors

21st Century Achievement Award Finalist

PSC was one of five finalists for the Computerworld Honors 21st Century Achievement Award. PSC was honored for "using information technology to make great strides toward remarkable social achievement in Science," according to Daniel Morrow, Executive Director of the Computerworld Honors Program.

2002

Gordon Bell Award for Special Accomplishment

James C. Phillips, Gengbin Zheng, Sameer Kumar, and Laxmikant V. Kale, from the University of Illinois at Urbana-Champaign, won the Gordon Bell Award for Special Accomplishment for their work on NAMD, a code that simulates the dynamics of large biological molecules and molecular systems. Using PSC's LeMieux system, NAMD scaled effectively and efficiently to over 2,000 processors.

SC2002 Best Technical Paper

Volkan Akcelik of Carnegie Mellon, George Biros of New York University, and Omar Ghattas of Carnegie Mellon won the award for Best Technical Paper at SC2002. Using PSC's LeMieux, they built on large-scale earthquake simulation work done previously at PSC.

SC2002

Bandwidth Challenge
High Performance Computing Challenge

LeMieux also was used in projects at SC2002 that received the "Bandwidth Challenge Award" and the "High Performance Computing Challenge Awards" for "the most geographically distributed application" and for "the most heterogeneous set of platforms."

1999

Computerworld Smithsonian Award in Science Finalist

Peter Kollman and Yong Duan of the University of California, San Francisco, were finalists in the Computerworld Smithsonian Award (CWSA) in Science for their simulation of the folding action of a protein. Their 1-microsecond simulation, 100 times longer than any before, offers new insights into the folding process and could lead to more effective drugs for diseases believed to be caused by malfunctions in protein folding.

SC1999 High-Performance Computing Award

An intercontinental team of computational scientists and networking and systems specialists in Stuttgart (Germany), Manchester (UK), Pittsburgh (USA) and Tsukuba (Japan) won top prize for the most challenging scientific application in the HPC Games at SC1999. The molecular dynamics simulation, with over two million particles, ran concurrently on a Hitachi SR8000 at ETL (Tsukuba) and on CRAY T3Es at the Pittsburgh Supercomputing Center, CSAR (Manchester) and HLRS (Stuttgart).

1998

Gordon Bell Prize

Best Achievement in High-Performance Computing

Yang Wang of PSC collaborated with scientists at Oak Ridge National Laboratory, the National Energy Research Scientific Computing Center and the University of Bristol, UK, to win the Gordon Bell Prize for Best Achievement in High-Performance Computing at SC1998. Their first-principles simulation of complex magnetic properties was the world's first full-fledged scientific application to sustain more than one teraflop, accomplished on a 1480-processor T3E-1200 system at Cray Research.

SC1998 Most Insightful Application

A group of researchers including PSC's Greg Hood and others from Carnegie Mellon University, the University of Pittsburgh Medical Center (UPMC), Princeton University, the University of Edinburgh (Scotland) and other PSC staff won the Most Insightful Application award at SC1998. They linked an MRI scanner at UPMC with PSC's CRAY T3E over high-speed networks. A series of complex data manipulations converted raw MRI data to 3-D images of the MRI subject's brain, which were then transmitted and displayed on a visualization screen at a remote site, in this case the show floor in Orlando. Producing 3-D images in this kind of research, known as "functional MRI", typically takes a day or more; the Pittsburgh team cut the delay to seconds.

1997

Computerworld Smithsonian Award in Science

Discover Magazine Award for Technological Innovation

Kelvin Droegemeier won both the CWSA in Science and the Discover Magazine Award for Technological Innovation in Computer Software for his storm-forecasting research at PSC.

1996

Computerworld Smithsonian Award in Science

The Center for Light Microscope Imaging and Biotechnology, a National Science Foundation science and technology center based at Carnegie Mellon University, won the 1996 CWSA in Science for developing automated light microscope technology to observe the dynamics of living cells.

Fernbach Award

Gary Glatzmaier of the Los Alamos National Laboratory was awarded the Fernbach Award for the first three-dimensional computer simulation of how the Earth's magnetic field is generated and how it occasionally reverses its direction. The work required 5,000 hours of computing time to simulate 80,000 years of geodynamic history, providing insights into the nature of the Earth's magnetic field.

Computerworld Smithsonian Award Finalists

In addition to the Center for Light Microscope Imaging and Biotechnology (see above), three of the four other finalists in the Science category of the Computerworld Smithsonian Awards were also collaborations with PSC:

  • At Carnegie Mellon University, Ted Russell and colleague Greg McRae, of the Massachusetts Institute of Technology, used PSC's Cray C90 to demonstrate that smog reduction strategies can be improved through selective control of nitrogen oxides and hydrocarbons, produced from automobile emissions and many other manmade sources.
  • Gary Glatzmaier of the Los Alamos National Laboratory was a CWSA finalist for his research into how the Earth's magnetic field is generated and how it occasionally reverses its direction. This same research garnered him the 1996 Fernbach award.
  • Scientists at California Institute of Technology's Scalable Concurrent Programming Laboratory used PSC's CRAY T3D to simulate the aerodynamics of the Delta II satellite launch vehicle. The simulations lower the cost of space operations by providing a means of identifying flaws that cannot be traced through existing test procedures.

SC1996 High-Performance Computing Challenge

A collaborative effort among three of the nation's leading computational research centers won a Gold Medal in the Concurrency category of the High Performance Computing Challenge at Supercomputing '96. A team of physicists, software developers and networking experts from Oak Ridge National Laboratory, Sandia National Laboratories and the Pittsburgh Supercomputing Center linked disparate, highly parallel computing systems at their respective sites and applied this "metacomputing" approach to a large-scale materials science computation. The HPC Challenge judges cited them for demonstrating the ability to solve significant problems using diverse systems linked over high-speed networks.

1995

Computerworld Smithsonian Award Finalist

Mordecai-Mark Mac Low of the University of Chicago was a finalist in the CWSA in Science for his simulations of the impact of comet Shoemaker-Levy 9 on Jupiter.

Fernbach Award

Paul Woodward of the University of Minnesota received the Fernbach Award for his simulation of the turbulent dynamics of the hot gases in the Sun's outer layer.

1994

Computerworld Smithsonian Award for Breakthrough Computational Science

The CWSA for Breakthrough Computational Science was awarded to researchers Charles Peskin and David McQueen of New York University's Courant Institute of Mathematical Sciences for development of a three-dimensional computational model of blood flow in the heart, its nearby valves and major vessels.

Fernbach Award

Charles Peskin of New York University's Courant Institute of Mathematical Sciences received the Fernbach Award for his research on blood flow in the heart.

DISCOVER Magazine Computer Software Award Finalist

University of Pittsburgh biologist John Rosenberg was a finalist for the DISCOVER Magazine Award for Technological Innovation in Computer Software for his research at PSC into the biological process by which proteins recognize and attach to the correct spot on a DNA strand.

1993

Computerworld Smithsonian Award in Science

The Pittsburgh Supercomputing Center itself was the recipient of the CWSA in Science for its efforts to bring high-performance computing to bear on research that improves the quality of human life. The award cited the center's involvement in important biomedical research on interactions between proteins and DNA.

Computers and Thought Award

Hiroaki Kitano, of Carnegie Mellon University's Center for Machine Translation and the Sony Computer Science Laboratory in Japan, received the Computers and Thought Award, the most prestigious award in artificial intelligence for researchers under 35. The award, given by the International Joint Conferences on Artificial Intelligence, honors Kitano's work on simultaneous language translation programs.

SC1993 Most Heterogeneous Application

A distributed volume-rendering application developed by Joel Welling, Pittsburgh Supercomputing Center scientific visualization specialist, was named Most Heterogeneous Application at the Supercomputing '93 Conference.

1992

Computerworld Smithsonian Award in Science

The Westinghouse Electric Corporation received the CWSA in Science for the creation and operation of supercomputing centers at local universities, including PSC.

1991

Discover Award for Computer Software Finalist

John Rosenberg, University of Pittsburgh, was a finalist in the Discover Award for Computer Software for his work to simulate the complicated interaction between DNA and an enzyme.

  • See the Projects in Scientific Computing article, "Kinky DNA"

Forefronts of Large-Scale Computing Award

The team of John Rosenberg, University of Pittsburgh, Peter Kollman, University of California, San Francisco, Robert Swendsen, Carnegie Mellon University, and Shankar Kumar, University of Pittsburgh, won the Forefronts of Large-Scale Computing Award for their work in applying molecular dynamics to DNA research.

  • See the Projects in Scientific Computing article, "Kinky DNA"

1989

Forefronts of Large-Scale Computing Award

Gregory McRae, Massachusetts Institute of Technology, won the first Forefronts of Large-Scale Computing Award for his work in computational modeling of large atmospheric systems.

Blacklight: Incubating Research in Machine Learning and Natural Language Processing

Blacklight: 2x16TB of hardware-enabled coherent shared memory at the Pittsburgh Supercomputing Center

Blacklight, PSC's SGI Altix UV 1000, on which applications can access up to 16 TB of hardware-enabled coherent shared memory, is enabling groundbreaking research in machine learning (ML), natural language processing (NLP), game-theoretic analysis, and related computer science disciplines. These projects are paving the way toward automated reasoning and unprecedented data analytics.

Very large, hardware-enabled coherent shared memory is especially valuable for developing computer science algorithms (e.g., in machine learning), for several reasons:

  • Shared Memory: Being able to access up to 16 TB of shared memory, whether from many threads or from a single thread, frees the programmer from having to distribute data explicitly, as MPI requires on a distributed-memory computer (see the sketch after this list). This is especially valuable for algorithms whose data sizes and access patterns are irregular and difficult or impossible to predict, e.g., graph algorithms. Similarly, staging a full dataset into memory and then performing complex operations on it is much more efficient than repeatedly accessing fragments from distributed disks.
  • Single-System Image / High Thread Count: Each of Blacklight's two 16 TB, 2048-core partitions runs a single system image (SSI), i.e., a single instance of the SUSE Linux operating system. This allows applications with very high thread counts, easily in the thousands and potentially much higher, to express algorithms conveniently and productively, building on computer science's existing, large code base. Algorithms in machine learning and related disciplines are often expressed using POSIX threads (p-threads) or Java threads.
  • High Productivity: Combining SUSE Linux with flexible access to 16 TB of coherent shared memory, from a single thread or from thousands, enables an unusually wide range of familiar, highly productive languages and analysis tools, including Java, Python and other scripting languages, R, and Octave.
  • Scalable Software: Blacklight is currently the only XSEDE resource that supports GraphLab, a parallel framework for machine learning. Developed at Carnegie Mellon, GraphLab implements machine learning algorithms that are scalable, efficient, and provably correct, and it is used by several of the ML projects at CMU.
  • Datasets: Blacklight hosts datasets that are important for machine learning research; for example, ClueWeb09.
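The convenience described in the first bullet can be seen in miniature in the sketch below (the sizes, thread count, and the reduction itself are invented for illustration): several threads traverse one shared in-memory array with no partitioning or message-passing code, where an MPI version would have to scatter the array across ranks and communicate explicitly.

    # Minimal shared-memory sketch: many threads, one in-memory dataset,
    # no explicit data distribution. Toy sizes; on Blacklight the array
    # could approach the 16 TB partition limit.
    import threading
    import numpy as np

    N_THREADS = 8
    data = np.random.rand(10_000_000)   # one array, visible to every thread
    partial = [0.0] * N_THREADS

    def worker(tid):
        # Each thread reads an arbitrary strided slice of the same array
        # directly: no sends/receives, ghost cells, or manual partitioning.
        chunk = data[tid::N_THREADS]
        partial[tid] = float(chunk.sum())   # NumPy releases the GIL in sum()

    threads = [threading.Thread(target=worker, args=(t,)) for t in range(N_THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("total:", sum(partial))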

A few examples of ongoing research in machine learning, natural language processing, and related data analytics are as follows:

Text-Driven Forecasting

Figure 1. Smith's group has looked at predicting the number of downloads of papers published at the National Bureau of Economic Research (an influential repository of working papers in economics and finance), based on the paper's content. With "download" information for each paper, they map the requesting IP addresses to regions of the world, and then determine what aspects of a paper's content correlate with its download "popularity". The chart shows the most informative single-word features for each of seven regions of the world.

Research by Noah Smith's group at the Language Technologies Institute, School of Computer Science, Carnegie Mellon University, spans several topics in statistical natural language processing (NLP). Text-driven forecasting is one new application of NLP: given some text, make a concrete prediction about future measurable events in the world. An example is forecasting the impact of a scientific article from its text content [Yogatama2011]. Impact might be measured as the citation rate or the download rate on the web. The group has been exploring a wide range of text features and forecasting methods for this problem, using datasets of articles from two fields (economics and computational linguistics). They have been able to find trends over time and across geography, using download data for articles in the economics literature as in Figure 1. Because their models are linear in human-intelligible text features, they are understandable (the models "speak English"), making these techniques an excellent way to connect with social scientists. Smith says, "The large amounts of shared-memory available on Blacklight have been crucial to our ability to model things like this, since the models are parameterized with very large numbers of features, and training requires iteratively re-estimating the model parameters."
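As a rough illustration of the approach, and not the group's actual model, features, or data, the sketch below fits a linear model over bag-of-words features to invented download counts. Because the model is linear in word features, its largest weights directly name the most download-predictive words.

    # Hypothetical text-driven forecasting sketch: predict downloads from
    # a paper's words with an interpretable linear model (toy data).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import Ridge

    abstracts = [
        "unemployment and monetary policy in recessions",
        "trade tariffs and exchange rate volatility",
        "housing prices and mortgage credit expansion",
    ]
    downloads = [1200, 340, 860]          # invented per-paper download counts

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(abstracts)     # sparse term-count matrix
    model = Ridge(alpha=1.0).fit(X, downloads)  # linear, hence interpretable

    # The model "speaks English": its highest-weighted terms are the
    # most download-predictive words.
    weights = sorted(zip(model.coef_, vectorizer.get_feature_names_out()),
                     reverse=True)
    print(weights[:5])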

Machine Translation

Another major application of NLP is machine translation: turning text in one language into semantically equivalent text in another language. The statistical approach to translation learns from examples of human-translated sentences, inferring a hidden alignment structure between words and phrases. Smith's group reformulated one of the dominant models of alignment as a conditional random field with latent variables. They showed that well-designed features that are unavailable in more traditional generative models can lead to significant increases in translation quality for Czech-English, Chinese-English, and Urdu-English translation. A paper describing this work was accepted for publication by the Association for Computational Linguistics.

The group also considered the related problem of experimental methodology in translation. The dominant optimization routine used to build translation models from data involves randomness, but most experimental research fails to take this randomness into account when testing significance (i.e., when comparing two systems), so they proposed a computationally intensive solution based on sampling. Using Blacklight to generate a very large number of samples, they demonstrated empirically that estimates obtained with only a few samples (and therefore within range of commodity hardware) can determine with high confidence whether the difference between two experimental conditions is meaningful.
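A toy version of the sampling idea (the scoring function, scores, and noise model below are invented stand-ins for real, expensive training runs) is to replicate each system's randomized optimization many times and estimate how often one system actually beats the other.

    # Sketch of sampling-based significance testing over optimizer runs.
    import random

    def train_and_score(system, seed):
        # Stand-in for one full randomized training run, returning a
        # BLEU-like score; in reality this is an expensive MT training job.
        rng = random.Random(seed)
        base = {"A": 22.4, "B": 21.9}[system]   # invented mean scores
        return base + rng.gauss(0, 0.5)         # optimizer-induced noise

    n = 1000                                    # feasible at Blacklight scale
    scores_a = [train_and_score("A", s) for s in range(n)]
    scores_b = [train_and_score("B", s + n) for s in range(n)]

    # If this fraction is near 1.0 (or 0.0), the observed difference is
    # unlikely to be an artifact of optimizer randomness.
    wins = sum(a > b for a, b in zip(scores_a, scores_b))
    print("P(A beats B) ~=", wins / n)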

Leveraging Supercomputing for Large-scale Game-theoretic Analysis

Automatically determining effective strategies in stochastic environments with hidden information is an important and difficult problem. In multiagent systems, the problem is exacerbated because the outcome for each agent depends on the strategies of the other agents. Consequently, each agent must incorporate into its deliberation the expected actions of the other agents. For many such environments, game theory has proven to be an effective tool, both for modeling the situation and for providing prescriptive solutions. In principle, optimal strategies can be computed for sequential imperfect information games. However, the size of the game trees can be enormous, and in order to compute optimal strategies, the entire game tree must be considered at once. Tuomas Sandholm's recent research advances in automated abstraction and equilibrium-finding algorithms have opened up the possibility of solving two-person zero-sum games many orders of magnitude larger than what was previously possible.

The class of sequential imperfect information games includes poker as a special case. Poker games are well-defined environments exhibiting many challenging properties, including adversarial competition, uncertainty (with respect to the cards the opponent currently holds), and stochasticity (with respect to the uncertain future card deals). Poker has been identified as an important testbed for research on these topics. Consequently, many researchers have chosen poker as an application area in which to test new techniques. In particular, Heads-up Limit Texas Hold'em poker, now a benchmark problem in ML, has recently received a large amount of research attention.

The group introduced two techniques for speeding up any gradient-based algorithm for solving sequential two-person zero-sum games of imperfect information. Both of the techniques decrease the amount of time spent performing the critical matrix-vector product operation needed by gradient-based algorithms. They also specialized their software for running on a ccNUMA architecture like Blacklight, which is becoming ubiquitous in high-performance computing. The two techniques developed can be used together or separately.
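To see why the matrix-vector product dominates, consider a generic first-order method for a zero-sum matrix game. The sketch below is a simplified stand-in for such algorithms, not the group's EGT code; the payoff matrix and step size are invented, and real poker abstractions are astronomically larger and represented implicitly. Every iteration multiplies by the payoff matrix and its transpose, so speeding up those two products speeds up the whole solver.

    # Projected gradient descent-ascent for min_x max_y x^T A y,
    # with iterate averaging (standard for convex-concave problems).
    import numpy as np

    def project_simplex(v):
        # Euclidean projection of v onto the probability simplex.
        u = np.sort(v)[::-1]
        css = np.cumsum(u) - 1.0
        rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
        return np.maximum(v - css[rho] / (rho + 1), 0.0)

    rng = np.random.default_rng(0)
    A = rng.standard_normal((500, 400))    # toy payoff matrix
    x = np.full(500, 1 / 500)              # row player's mixed strategy
    y = np.full(400, 1 / 400)              # column player's mixed strategy
    x_sum, y_sum = np.zeros(500), np.zeros(400)
    eta, T = 0.05, 5000
    for _ in range(T):
        # These two matrix-vector products are the critical kernel that
        # the speedup techniques described above target.
        x, y = (project_simplex(x - eta * (A @ y)),
                project_simplex(y + eta * (A.T @ x)))
        x_sum += x
        y_sum += y

    # Duality gap of the averaged strategies: 0 at an exact equilibrium.
    x_bar, y_bar = x_sum / T, y_sum / T
    print("duality gap:", (A.T @ x_bar).max() - (A @ y_bar).min())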

Sandholm's research group computed the strategies for their bots for the AAAI 2010 Annual Computer Poker Competition using their fast-EGT equilibrium-finding algorithm. Playing Heads-up No-Limit Texas Hold'em poker, their program, Tartanian4, won the Total Bankroll category and placed third in the Bankroll Instant Run-off category.

References

[Yogatama2011] D. Yogatama, M. Heilman, B. O'Connor, C. Dyer, B. Routledge, and N. A. Smith. Predicting a Scientific Community's Response to an Article. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2011. http://aclweb.org/anthology-new/W/W11/W11-22.pdf

Research Programs

PSC staff engage in projects related to many areas of high-performance computing. Fields to which our staff contribute include:

High-Performance Networking

The Advanced Networking group at Pittsburgh Supercomputing Center conducts research on network performance and analysis in support of high-performance computing applications. We also develop software to support heterogeneous distributed supercomputing applications and to implement high-speed interfaces to archival and mass storage systems.

Projects that the Advanced Networking group is involved in include:

  • 3ROX - The Three Rivers Optical Exchange (3ROX) is a regional network aggregation point, also called a GigaPoP, providing high-speed commodity and research network access to sites in western and central Pennsylvania and West Virginia.

  • DANCES - The DANCES project (Developing Applications with Networking Capabilities via End-to-End SDN) is developing tools that allow network resources to be scheduled much as compute resources are scheduled now, with the aim of reducing data transfer bottlenecks, a serious hindrance to scientific research.
  • Web10G - Web10G promises innovations in performance, stability, optimization, and network diagnostics to anyone who wants to fully exploit modern high-bandwidth connections. It provides users with real-time statistics about their network connections and allows them to fine-tune some characteristics of the connection to improve performance (see the sketch after this list).

  • KINBER and PennREN - The Keystone Initiative for Network Based Education and Research (KINBER) serves as coordinator for the construction and management of a Pennsylvania-wide fiber optic network, PennREN (Pennsylvania Research and Education Network).  PennREN will be accessible to educational, research, health care, and economic development partners seeking to aggregate services for their members and subscribers at affordable cost.
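The Web10G idea can be made concrete with standard kernel facilities: Linux already exposes a small subset of such per-connection statistics through the TCP_INFO socket option, which Web10G's kernel instrumentation greatly extends. The sketch below reads a few of those fields; the struct layout follows common kernel versions but is not guaranteed portable, so treat the offsets as illustrative, and note this is not the Web10G API itself.

    # Reading per-connection TCP statistics via TCP_INFO (Linux only).
    import socket
    import struct

    def tcp_stats(sock):
        raw = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 104)
        fields = struct.unpack("=8B24I", raw)   # 8 one-byte fields, 24 u32s
        return {
            "state": fields[0],          # tcpi_state
            "retransmits": fields[2],    # tcpi_retransmits
            "rtt_us": fields[8 + 15],    # tcpi_rtt, microseconds
            "snd_cwnd": fields[8 + 18],  # congestion window, in segments
        }

    with socket.create_connection(("example.org", 80), timeout=5) as s:
        s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.org\r\n\r\n")
        s.recv(1024)                     # exercise the connection a little
        print(tcp_stats(s))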

See the Advanced Networking page for more of the tools, software, and information the group provides.

Biomedical Applications Group

The Biomedical Applications Group at PSC pursues leading-edge research at the intersection of high-performance computing and the life sciences and fosters exchange between PSC's computational science expertise and biomedical researchers nationwide.

Advanced Systems

The Advanced Systems & Operations group conducts research in High Performance Computing systems and data storage. Current projects provide researchers better access to their data via the development of fast parallel filesystems and distributed filesystems for widely distributed computing environments.

In addition, they manage all PSC HPC resources, including security, supercomputing operations, high performance storage, and data management and file systems.

Current research efforts include:

  • Zest - Zest is a parallel storage system specifically designed to meet the ever-increasing demands of HPC application checkpointing. Zest differs from traditional parallel filesystems by using log-structured filesystems on the I/O server in combination with opportunistic data placement; both ideas are sketched after this list. Using these techniques, Zest can drive its disks at 90% of peak bandwidth. Read more about Zest here.

  • SLASH2 is a distributed filesystem that incorporates existing storage resources into a common filesystem domain. It provides system-managed storage tasks for users who work in widely distributed environments. The SLASH2 metadata controller performs inline replication management, maintains data checksums, and coordinates third-party, parallel data transfer between constituent data systems. The SLASH2 I/O service is a portable, user-space process that does not interfere with the underlying storage system's administrative model. Read more about SLASH2 at http://quipu.psc.teragrid.org/slash2.
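The two ideas in the Zest description can be illustrated in miniature (class and variable names here are hypothetical, and this is vastly simplified relative to the real system): every write becomes a sequential append to a log, and each block opportunistically lands on whichever disk currently has the least queued work.

    # Sketch of log-structured writes with opportunistic placement.
    import heapq

    class OpportunisticLog:
        def __init__(self, n_disks):
            # Min-heap of (pending_bytes, disk_id): least-loaded disk first.
            self.disks = [(0, d) for d in range(n_disks)]
            heapq.heapify(self.disks)
            self.log = []   # append-only (disk_id, offset, block) records

        def write(self, block):
            pending, disk = heapq.heappop(self.disks)
            self.log.append((disk, pending, block))   # sequential append
            heapq.heappush(self.disks, (pending + len(block), disk))
            return disk

    store = OpportunisticLog(n_disks=4)
    for i in range(8):
        disk = store.write(b"x" * (1024 * (i % 3 + 1)))
        print("block", i, "-> disk", disk)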

See the Advanced Systems page for more information.

Scientific Applications and User Services

The Scientific Applications and User Services (SAUS) group at PSC promotes groundbreaking scientific research through efficient and inventive use of PSC resources. They often collaborate in research projects to provide expertise in scientific computing.

See the SAUS page for specific examples of the application of their expertise to promote high performance computing.