Last Updated on Tuesday, 02 July 2013 09:37
2014 Pennsylvania State Budget Includes $500,000 for Pittsburgh Supercomputing Center
July 2, 2013
The Commonwealth of Pennsylvania budget signed by Gov. Tom Corbett on June 30 includes a $500,000 line item for PSC.
“This is very good news for PSC and for the Commonwealth,” says Ralph Roskies, scientific director for PSC, adding that the state’s return on its past investments in PSC has been excellent. “Since our inception we’ve brought over $500 million in outside funds into Pennsylvania, representing a 14:1 return on state funding for PSC.”
“We’re grateful to the members of the General Assembly, and especially the Allegheny County delegation,” Roskies adds. “The bipartisan support of Senators Randy Vulakovich and Jay Costa and Representatives Mark Mustio and Joe Markosek made this possible.”
The funding, says PSC’s leadership, will benefit the state’s technological and workforce infrastructures as well.
“PSC is responsible for generating 1,600 jobs and over $200 million in annual economic activity,” says Cheryl Begandy, PSC’s director of education, outreach, and training. “In addition, our place on the leading edge of computing technologies at the largest scale enables us to respond quickly to technological developments, giving the state, its researchers and its small and mid-sized companies a leg up in capitalizing on these advances.”
The state line item will also prove valuable to PSC’s ongoing competition for federal research funding. Local funding is often seen by granting agencies as concrete evidence of grassroots support for a research center.
“In our fight for federal awards, we’re competing with some of the best high performance computing centers in the world, many of which enjoy significant state funding,” Roskies says. “The state line item will help us retain a competitive edge over and above the excellence of our proposals themselves.”
The details of the line item have yet to be worked out with the state, Roskies says. Potential projects include
- supporting the Commonwealth’s STEM Education initiative through PSC programs in Computational Reasoning and Bioinformatics
- collaborating with the Pennsylvania State System of Higher Education to support research and education at its 14 state universities
- supporting small and mid-sized manufacturers in Pennsylvania through the introduction of Digital Modeling tools, resources and training
- encouraging workforce development through internships for undergraduate or graduate students
- continuing PSC core management and outreach efforts expected by federal and other granting agencies
Pittsburgh Supercomputing Center, Numascale AS to Collaborate on Improved Memory Systems for Research
June 28, 2013
Pittsburgh Supercomputing Center (PSC), Pennsylvania’s only National Science Foundation high performance computing facility, and Numascale AS, whose products support the construction of low-cost, scalable-server computer systems, have launched a collaborative project to investigate how well Numascale systems serve the many research projects that require more directly addressable memory than is readily available on single, commodity, multi-socket, large-memory servers.
“Rapid advancement in many scientific fields of data-dependent research will be facilitated by the availability of larger memory systems at near commodity prices,” says Michael J. Levine, scientific director, PSC. “Having large amounts of data in directly-addressable memory avoids very time-consuming disk input/output and allows a much more productive programming paradigm.”
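The access pattern Levine describes can be sketched in miniature. The dataset and lookup pattern below are invented for illustration and are not PSC or Numascale code; the point is that scattered random reads are cheap when the data is resident in directly addressable memory, while the same pattern against disk would pay a seek per access.

```python
import random

def random_access_sum(dataset, n_lookups, seed=0):
    """Sum values at random positions in a dataset held entirely in
    memory -- the kind of scattered access pattern that is fast in RAM
    but painfully slow when each lookup means a disk read."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_lookups):
        total += dataset[rng.randrange(len(dataset))]
    return total

# Toy "dataset"; in a real workload this might be a genome index or a
# term co-occurrence table too large for a commodity server's memory.
data = list(range(1_000_000))
result = random_access_sum(data, 10_000)
```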
The field of supercomputing is well known for engineering extreme processing speeds, but increasingly researchers’ calculations are limited not by the speed of processing but by access to and efficient use of vast amounts of data. Application areas that require very large memories include natural language processing, multi-organism genomics and quantum chemistry.
“We see the collaboration with Pittsburgh Supercomputing Center as an important milestone for utilizing NumaConnect™ for a number of applications that have previously been limited by inferior memory capacity in standard servers,” says Einar Rustad, CTO and co-founder of Numascale. “The huge and scalable memory capacity in systems with NumaConnect allows users to operate in the familiar programming and runtime environment they are used to with workstations.”
This, Rustad explains, eliminates the need for explicit message passing and significantly reduces the overall time from idea to solution for a number of important applications in many scientific fields. “PSC's unique expertise will strengthen our focus on applications that are key to advances in major scientific fields and help us to widen the market for Numascale.”
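The programming-model point Rustad makes can be illustrated generically. In a shared-memory system, workers simply index into the same data structure; with explicit message passing, each worker’s slice of the data would first have to be packed and sent to it. This is a plain Python sketch of the shared-memory style, not Numascale or PSC code:

```python
import threading

# One address space shared by all workers -- no data is copied or
# sent as messages; each thread just indexes into the same list.
data = list(range(8))
partial = [0, 0]

def worker(idx, lo, hi):
    # Direct access to shared memory. Under message passing, data[lo:hi]
    # would have to be serialized and transmitted to this worker first.
    partial[idx] = sum(data[lo:hi])

threads = [threading.Thread(target=worker, args=(0, 0, 4)),
           threading.Thread(target=worker, args=(1, 4, 8))]
for t in threads:
    t.start()
for t in threads:
    t.join()
total = sum(partial)
```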
The collaboration between PSC and Numascale seeks to leverage PSC’s unique and extensive experience with very large memory computing systems and Numascale’s NumaConnect memory technology to produce systems capable of handling such large data volumes without memory-retrieval lags. NumaConnect uses commodity servers as building blocks to provide memory capacities and retrieval speeds currently only available through high-end and enterprise-class systems. PSC’s application specialists will work with Numascale engineers and application programmers to find ways the two organizations’ experience and expertise can be combined synergistically.
Opening the Flood Gates
Argonne, PSC Staff Shepherd Internet2 Migration, Give XSEDE Network Bandwidth Needed for Big Data Era
Monday, June 24, 2013
Thanks to personnel at Argonne National Laboratory and PSC — chiefly Linda Winkler, senior network engineer, Argonne; Joseph Lappa, principal network design engineer, PSC; and Kathy Benninger, network performance engineer, PSC — the National Science Foundation’s network of supercomputing sites now has the “pipe capacity” it will need to keep pace with the Big Data era.
XSEDE, the National Science Foundation’s U.S.-wide network of high performance computing centers, which includes Argonne and PSC, has migrated its data network to Internet2, a vastly higher-capacity system than the previous carrier. XSEDE’s improved network will enable sites to achieve connection rates of up to 100 gigabits per second (100 GE) — 10 times faster than currently possible. The architecture of the new system will also enable a number of upgrades that will help the transfer of data through the system.
As part of the Internet2 migration, Lappa has taken on new responsibilities for the XSEDE network. Newly appointed as XSEDE’s operations networking manager, he will be XSEDE’s main contact with Internet2. In this role, he and his team will monitor the performance of the new network, oversee details of transitioning sites to 100 GE, assist with campus bridging and help Internet2’s programmers and service representatives optimize and tailor the network to XSEDE and its users’ needs.
The approaching bottleneck
In 2006, Senator Ted Stevens made the mistake of referring to the Internet as “a series of tubes.” He instantly became the butt of jokes about a guy who grew up in a time when people communicated via post, in cursive script, trying to make sense of an email world. But to be fair, it isn’t such a bad metaphor.
Information — data — is as critical to our economy and society as fresh drinking water is to our homes. Like the plumbing running through our houses, the Internet transports data through “pipes” that are limited both by their size and by the capacity their “faucets” can deliver.
Users at XSEDE sites employ some of the largest, fastest computers in the world to generate vast volumes of data. Moving those data between researchers, the supercomputers and storage sites is no small mission. To accomplish that job, XSEDE originally built what was then one of the highest-capacity, most reliable networks in the world.
“Advanced networking is critical … to support the researchers and educators who are making innovative use of our … resources,” says John Towns, XSEDE project director, noting that XSEDE supplies about 8,000 users with 17 supercomputers, data storage and management tools and networking resources.
In the Information Age, though, technology ages quickly. As the XSEDE network and its demands grew, it began to approach the limits of its infrastructure: in particular, a potential bottleneck between XSEDE sites in Denver and Chicago loomed large.
“As far as the technical reasons for migrating to Internet2, it was the ‘speeds and feeds’ problem,” Lappa says. A factory, for example, can perform an operation on a product quickly (speed). But if it can’t then move the next product up the line (feed) fast enough, that speed is wasted. Similarly, the blinding speed of XSEDE’s computing machines was in danger of being made far less relevant by the approaching difficulty of getting data into and out of them.
Unclogging the pipes
Internet2’s 100 GE backbone proved to be the solution to the problem, Benninger says. “With 100 GE, there is a clearer path to allow us to operate.”
While not all the sites will initially have 100 GE connections to the new backbone, she adds, the system will have room to grow to meet the next three years’ needs. Currently, Indiana University and Purdue University share a 100 GE connection, with a number of other sites planning to upgrade over the next several years.
In addition to supplying the leadership for the migration process, PSC also served as one of the first sites on the new network, testing out and helping Internet2 improve and customize the system to serve XSEDE’s needs.
Internet2’s architecture offers a big plus in terms of managing data flow with what’s known as “dynamic provisioning capability.” If a particular network path between two sites is congested with large data flows, a network engineer can establish a virtual local area network (VLAN) to route additional data transfers over an alternate path.
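The rerouting idea behind dynamic provisioning can be sketched abstractly. The topology, link names and utilization figures below are invented for illustration — in practice a network engineer provisions the VLAN through Internet2’s tools, not application code:

```python
def choose_path(paths, utilization, threshold=0.8):
    """Pick the first candidate path whose busiest link is below the
    congestion threshold -- a toy stand-in for provisioning a VLAN
    over an alternate route when the primary path is saturated."""
    for name, links in paths.items():
        if max(utilization[link] for link in links) < threshold:
            return name
    return None  # every candidate path is congested

# Hypothetical topology: a direct Denver-Chicago link vs. a detour.
paths = {
    "primary": ["den-chi"],
    "alternate": ["den-kc", "kc-chi"],
}
utilization = {"den-chi": 0.95, "den-kc": 0.30, "kc-chi": 0.45}
best = choose_path(paths, utilization)  # the congested primary is skipped
```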
In addition to optimizing the network and helping sites connect with the backbone or upgrade to 100 GE, Benninger and Lappa will support efforts by a number of PSC and XSEDE staff to add new functions that take advantage of the higher bandwidth.
- The XSEDE-wide File System (XWFS) will allow the increasingly large files required by researchers to be moved rapidly between XSEDE sites.
- Web 10G, developed by Chris Rapier, PSC network programmer; Andrew K. Adams, PSC network engineer; and John Estabrook, network programmer at the National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, will monitor data flowing from servers to the network to help pinpoint sources of slowdown even as they happen.
- VLAN (virtual local area network) provisioning will allow any two XSEDE sites to set up a “virtual network” between the two sites that performs as if it were a direct, hard-wire data connection, avoiding the need to set up potentially complex routing through the network.
A Million Little Pictures
Anton-Created “Movies” of Key Nerve Cell Protein Reveal Unexpected Mechanism
June 18, 2013
Consider the humble clothespin.
Merely looking at a static photo of one enables us to see how it works. Its arms pivot around the spring, jaws opening to grab a T-shirt, then pivot closed again.
When we look at the complex molecules inside living cells, though, it isn’t always clear that the static image gives us enough information to get it right. In the case of the aspartate and glutamate neurotransmitter transporters of the brain’s nerve cells, static X-ray pictures had offered a fairly compelling argument for how the transporters work. Trouble is, the pictures were misleading.
Using PSC’s Anton computer, Elia Zomot and Ivet Bahar of the University of Pittsburgh School of Medicine have created “movies” of the transporter that show it simply can’t move the way that the static images had hinted. By gaining additional insight into the true mechanism of action, the researchers have helped improve our understanding of a component involved in conditions marked by the progressive death of nerve cells, such as stroke and Alzheimer’s disease. The researchers reported their results in the Journal of Biological Chemistry in November 2012.
A very interesting machine
The space between a nerve cell and another nerve cell to which it communicates is called the “synapse.” When a nerve cell fires, it floods the synapse with chemical messengers called neurotransmitters. This in turn causes the second cell to fire. The transmitter molecules remaining in the synapse afterward, though, pose a problem.
“It’s important to clear excess neurotransmitter from the synapse,” says Bahar, John K. Vries Chair in Computational & Systems Biology at Pitt. “When you have such an excess … you can develop neurodegenerative diseases” like Alzheimer’s, Huntington’s, epilepsy or the nerve-cell death following stroke or brain trauma.
The glutamate and aspartate transporters are proteins that form channels across the nerve cell’s outer membrane. These transporters pump either glutamate or aspartate out of the synapse and into the cell, using the tendency of sodium ions to flow into the cell as a power source, says Zomot, first author of the journal article and a research associate in Bahar’s laboratory.
The protein has two faces, which pivot open and closed, a little like the opposite ends of a clothespin. It starts with its outside-facing end open. When both a neurotransmitter molecule and two sodium ions attach to binding sites in the channel at the center of the transporter — about where the spring is in a clothespin — the outward face swings closed. This motion opens the transporter’s inward face, releasing both transmitter and sodium ions into the cell.
“It’s a very interesting machine,” Bahar says. “It opens up, lets the transmitter in, then closes down, changing conformation and becoming inward facing so it can release the transmitter without leakage.”
Devil in the details
The problem, though, was in the details. In the static X-ray images, a loop in the transporter protein’s structure, called HP1, appears initially to block the channel near the central pivot point. If that were true, HP1 would work by moving aside and letting the transmitter and sodium ions into the cell when the protein pivots.
“People made inferences that this region was probably functioning as a gate,” Bahar says. “But this was based only on static images.”
Zomot and Bahar used Anton, designed and constructed by D. E. Shaw Research, to simulate the motions of the transporter as it pivoted. PSC hosts Anton for use by the national biomedical community as part of a collaboration between the company and PSC’s National Resource for Biomedical Supercomputing. Zomot and Bahar acquired time on Anton through a grant from the National Center for Multiscale Modeling of Biological Systems, a National Institutes of Health-funded collaboration between PSC, Pitt, Carnegie Mellon University and the Salk Institute.
Anton was very different from other systems Zomot has used. But learning its ways offered serious advantages, he says. “Once you become familiar with it, you find out that it’s really worth the effort of learning the system. You can accomplish much, much more than you could with the other supercomputing clusters that we usually have access to.”
Bahar agrees. “In a few hours, the machine generates what another would in months.”
Worth millions of pictures
Anton created a virtual model of a specific member of the glutamate/aspartate transporter family, an aspartate transporter called GltPh. Using ball-like atoms connected by spring-like molecular bonds, the computer calculated how all the different parts of the transporter would move around as they experienced normal atomic movements and vibrations.
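The “balls connected by springs” picture corresponds to classical molecular dynamics: integrate Newton’s equations for atoms under spring-like bonded forces. Here is a minimal sketch of that idea for a single harmonic bond — the constants, units and integrator settings are illustrative only, and Anton’s actual force fields and hardware-specialized algorithms are vastly more sophisticated:

```python
def simulate_bond(x=1.2, v=0.0, k=100.0, m=1.0, x0=1.0, dt=0.001, steps=5000):
    """Velocity Verlet integration of one 'ball on a spring': a bond of
    rest length x0 and stiffness k, for an atom of mass m starting at
    a stretched length x with velocity v."""
    trajectory = []
    a = -k * (x - x0) / m                # initial acceleration from spring force
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt  # position update
        a_new = -k * (x - x0) / m        # force at the new position
        v += 0.5 * (a + a_new) * dt      # velocity update with averaged force
        a = a_new
        trajectory.append(x)
    return trajectory

# The bond length oscillates around its rest length x0 = 1.0; an MD code
# does this simultaneously for every bonded pair of atoms in the protein.
traj = simulate_bond()
```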
The model delivered a bit of a surprise: HP1, which had been a favorite bet to be the gate based on the X-ray work, was not in the way (see left). Instead another loop, HP2, did seem to function as the gate. This is an important discovery, potentially saving countless hours of drug development revolving around the wrong molecular target.
“Now we know really how it functions,” Bahar says. “If you want to understand function, you need to see how it moves… If a picture is worth a thousand words, a movie is worth millions of pictures.”
The researchers are getting ready to study another neurotransmitter transporter — one similar to those that move the brain neurotransmitters dopamine, serotonin and others out of the synapse.
“We have another allocation on Anton,” Zomot says, and studies of the leucine transporter, which is more distantly related to the glutamate/aspartate transporters, could reveal both common mechanisms and useful differences for drug developers wanting to affect one but not the other. With Anton, “finally we have the option to do this type of study in a reasonable amount of time.”