News & Publications

Pittsburgh Supercomputing Center Collaboration with Harvard Pioneers a New Approach in Brain Study

Prodigious data-transmission, management and image processing made it possible to map the connections between individual brain cells identified according to function

PITTSBURGH, August 16, 2011 — “Untangling Neural Nets” said the big-print headline on the cover of Nature (March 10, 2011), with an image from work by scientists at Harvard and the Pittsburgh Supercomputing Center (PSC). A news comment in the same issue framed their research as “an exciting and pioneering approach . . .” by which the researchers “achieved a new feat . . . a way of directly studying the relationship of a neuron’s function to its connections.”

Pittsburgh Supercomputing Center Scientific Director Appointed to National Library of Medicine Board of Regents

PITTSBURGH, August 5, 2011 — Ralph Roskies, scientific co-director of the Pittsburgh Supercomputing Center, has been appointed to the Board of Regents of the National Library of Medicine. The appointment, for a four-year term, was made by Kathleen Sebelius, U.S. Secretary of Health and Human Services.

New "Memory Advantage Program" on Blacklight at the Pittsburgh Supercomputing Center

PITTSBURGH, July 26, 2011 — Blacklight, the SGI Altix UV 1000 system at the Pittsburgh Supercomputing Center (PSC), available to researchers nationally through the National Science Foundation’s newly announced XSEDE program, has opened new computational capability for U.S. scientists and engineers. Featuring 32 terabytes of shared memory, partitioned into two connected 16-terabyte coherent shared-memory systems, Blacklight is the largest shared-memory system in the world.

PSC is a Lead Partner in XSEDE, NSF Cyberinfrastructure Program

XSEDE project brings advanced cyberinfrastructure, digital services, and expertise to nation’s scientists and engineers

PITTSBURGH, July 25, 2011 — A partnership of 17 institutions today announced the Extreme Science and Engineering Discovery Environment (XSEDE). XSEDE will be the most advanced, powerful, and robust collection of integrated advanced digital resources and services in the world.

Protein Research Leaps Forward with Anton at Pittsburgh Supercomputing Center

Special-purpose supercomputer's stay is extended, with new round of time allocations

PITTSBURGH, July 21, 2011 — Using a special-purpose supercomputer for biomolecular simulation, U.S. scientists have made significant advances in the understanding of protein function. The supercomputer, called Anton, designed by D. E. Shaw Research (DESRES), was made available to researchers through the National Resource for Biomedical Supercomputing (NRBSC) at the Pittsburgh Supercomputing Center (PSC).

Pittsburgh Supercomputing Center Network Exchange Adds Robert Morris University

PITTSBURGH, June 1, 2011 — The Three Rivers Optical Exchange (3ROX), the high-performance Internet hub operated and managed by the Pittsburgh Supercomputing Center (PSC), which serves universities, research sites and K-12 schools in western Pennsylvania and West Virginia, has added Robert Morris University (RMU).

As a result, RMU now has access to Internet2 and to National LambdaRail, high-performance research and education networks that connect universities, corporations and research agencies nationally. Beyond this, said Wendy Huntoon, PSC director of networking, “Robert Morris will also receive all the 3ROX regional routes, thus enabling better connectivity with other universities and area school districts.”

“3ROX is the premier high-speed interconnection point for research and education networks in the region,” said Randy Johnson, RMU senior director of technical services. “We joined 3ROX primarily to gain access to the National LambdaRail TelePresence Exchange for the future U.S. Steel Videoconferencing and Technology Center, which will be based on Cisco TelePresence technology. We will also benefit from ‘peering’ with other exchange members to pass traffic locally instead of utilizing an Internet connection.”

More information about 3ROX: http://www.3rox.net

Pittsburgh Supercomputing Center Accelerates Machine Learning with GPUs

Researchers at the Pittsburgh Supercomputing Center and HP Labs achieve unprecedented speedup in a key machine-learning algorithm.

PITTSBURGH, May 23, 2011 — Computational scientists at the Pittsburgh Supercomputing Center (PSC) and HP Labs are achieving speedups of nearly 10 times with GPUs (graphics processing units) versus CPU-only code (and more than 1000 times versus an implementation in a high-level language) in k-means clustering, a critical operation for data analysis and machine learning.

A branch of artificial intelligence, machine learning enables computers to process and learn from vast amounts of empirical data through algorithms that can recognize complex patterns and make intelligent decisions based on them. For many machine-learning applications, a first step is identifying how data can be partitioned into related groups or “clustered.”
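
The clustering operation at the heart of this work can be illustrated with a minimal serial k-means sketch in plain NumPy. This is illustrative only, with hypothetical names, and is not the GPU implementation developed at HP Labs and PSC; it shows the two steps the algorithm alternates: assigning each point to its nearest centroid, then moving each centroid to the mean of its assigned points.

```python
# Minimal serial k-means sketch (NumPy). Illustrative only — not the
# GPU code discussed in this article.
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize centroids by picking k distinct data points at random.
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assignment step: label each point with its nearest centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its members.
        for j in range(k):
            members = points[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, labels
```

The GPU versions parallelize the distance computations in the assignment step, which dominate the runtime for large, high-dimensional datasets.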

Ren Wu, principal investigator of the CUDA Research Center at HP Labs, developed advanced clustering algorithms that run on GPUs, which have advantages for many data-intensive applications. PSC scientific specialist Joel Welling recently applied Wu’s innovations to tackle a real-world machine-learning problem. Using data from Google’s “Books N-gram” dataset and working together, Wu and Welling were able to cluster all five-word sets of the one thousand most common words (“5-grams”) occurring in all books published in 2005. With this project, representative of many research efforts in natural-language processing and culturomics, the researchers demonstrated an extremely high-performance, scalable GPU implementation of k-means clustering, one of the most used approaches to clustering.

Wu and Welling ran on the latest “Fermi” generation of NVIDIA GPUs. Using MPI between nodes (three nodes, with three GPUs and two CPUs per node), they observed a speedup of 9.8 times relative to running an identical distributed k-means algorithm (written in C+MPI) on all CPU cores in the cluster, and thousands of times faster than the purely high-level language implementation commonly used in machine-learning research. Using their GPU implementation, the entire dataset with more than 15 million data points and 1000 dimensions can be clustered in less than nine seconds. This breakthrough in execution speed will enable researchers to explore new ideas and develop more complex algorithms layered atop k-means clustering.
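
The key idea behind a distributed k-means update like the one described above is that each node needs only partial sums and counts for its shard of the data, which are then combined globally (the step an MPI allreduce performs across nodes). A serial sketch of that decomposition, with illustrative names and no claim to match the PSC/HP Labs code:

```python
# Serial sketch of the data-parallel k-means update: each "node"
# (here, a data shard) computes partial sums and counts, then the
# partials are combined to recompute centroids. Illustrative only.
import numpy as np

def partial_update(shard, centroids, k):
    # Assignment step on one shard of the data.
    dists = np.linalg.norm(shard[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    sums = np.zeros_like(centroids)
    counts = np.zeros(k)
    for j in range(k):
        sums[j] = shard[labels == j].sum(axis=0)
        counts[j] = (labels == j).sum()
    return sums, counts

def combined_update(shards, centroids, k):
    # Combine per-shard partials (the reduction step an MPI allreduce
    # would perform) and recompute centroids from the global totals.
    total_sums = np.zeros_like(centroids)
    total_counts = np.zeros(k)
    for shard in shards:
        s, c = partial_update(shard, centroids, k)
        total_sums += s
        total_counts += c
    new_centroids = centroids.copy()
    nonzero = total_counts > 0
    new_centroids[nonzero] = total_sums[nonzero] / total_counts[nonzero, None]
    return new_centroids
```

Because the combined result is identical regardless of how the data is sharded, the same update can be spread across many GPUs with only the small sums-and-counts arrays exchanged between nodes.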

“K-means is one of the most frequently used clustering methods in machine learning,” says William Cohen, professor of machine learning at Carnegie Mellon University. “It is often used as a subroutine in spectral clustering and other unsupervised or semi-supervised learning methods. Because some of these applications involve many clustering passes with different numbers of means or different randomized starting points, a greatly accelerated k-means clustering method will be useful in many machine learning settings.” Cohen co-leads the Never-Ending Language Learning (NELL) and Read the Web projects (http://rtw.ml.cmu.edu/rtw/). The goal of NELL is to automate inferences based on continually “reading” natural-language text from the Web.

Machine learning is just one example of the exploding field of data analytics, notes PSC scientist Nick Nystrom. Other data-analytic applications range from understanding the results of traditional high-performance computing (HPC) simulations of global climate, engineering, and protein dynamics to emerging fields that need HPC such as genomics, social network analysis, and mining extensive datasets in the humanities.

“A substantial body of major application codes is already being developed specifically for NVIDIA GPUs,” notes Nystrom, PSC director of strategic applications. “Because NVIDIA GPUs are so widespread, those codes can run well on anything from a supercomputer to a netbook.” Nystrom has been instrumental in PSC’s work with advanced technologies for scientifically important, data-intensive problems. This application of NVIDIA GPUs to k-means clustering, he notes, is one example of how a pervasive technology that leverages broad markets can benefit important algorithms in science and data analysis.

This advanced clustering algorithm, notes Wu, also has the advantage of being easy to use, which facilitated rapid implementation with Welling. “I think that the CUDA programming model is a very nice framework,” says Wu, “well balanced in abstraction and expressive power, easy to learn but with enough control for advanced algorithm designers, and supported by hardware with exceptional performance (compared to other alternatives). The key for any high-performance algorithm on modern multi/many-core architecture is to minimize the data movement and to optimize against the memory hierarchy. Keeping this in mind, CUDA is as easy as, if not easier than, any of the alternatives.”

Nystrom concurs and sees an exciting future for software developers: “There’s a rich software ecosystem supporting NVIDIA’s GPUs, ranging from easy-to-use compiler directives to explicit memory management to powerful performance tools. Add to that integration of general-purpose processors in this successful line of architectures, and the potential for developing transformative software architectures is extraordinary.”
