Pittsburgh Supercomputing Center 

Advancing the state-of-the-art in high-performance computing,
communications and data analytics.


Bridges is a uniquely capable resource for empowering new research communities and bringing together HPC and Big Data. Bridges is designed to support familiar, convenient software and environments for both traditional and non-traditional HPC users. Its richly connected set of interacting systems offers exceptional flexibility for data analytics, simulation, workflows and gateways, leveraging interactivity, parallel computing, Spark and Hadoop. Bridges includes:

  • Compute nodes with hardware-supported shared memory ranging from 128GB to 12TB per node, to support genomics, machine learning, graph analytics and other fields where partitioning data is impractical
  • GPU nodes to accelerate applications as diverse as machine learning, image processing and materials science
  • Database nodes to drive gateways and workflows and to support fusion, analytics, integration and data management
  • Webserver nodes to host gateways and provide access to community datasets
  • Data transfer nodes with 10 GigE connections to enable data movement between Bridges and XSEDE, campuses, instruments and other advanced cyberinfrastructure

We hope the following information will help you understand what Bridges is and how you can use this unique resource in your research.  If you would like to hear more about Bridges' capabilities or discuss how you can take advantage of Bridges, call us at 412-268-4960 or email PSC User Services.

   Is Bridges a good fit for my research?

Bridges is being built to facilitate research ranging from traditional HPC areas such as astronomy and physics, through emerging fields such as genomics, to decision science, natural language processing and the digital humanities.  Bridges could be a good fit for you if:

  • You want to scale your research beyond the limits of your laptop, using familiar software and user environments.
  • You want to collaborate with other researchers with complementary expertise.
  • Your research can take advantage of any of the following:

  • Rich data collections - Rapid access to data collections will support their use by individuals, collaborations and communities.
  • Cross-domain analyses - Concurrent access to datasets from different sources, along with tools for their integration and fusion, will enable new kinds of questions.
  • Gateways and workflows - Web portals will provide intuitive access to complex workflows that run "behind the scenes". 
  • Large coherent memory - Bridges' 3TB and 12TB nodes will be ideal for memory-intensive applications, such as genomics and machine learning.
  • In-memory databases - Bridges' large-memory nodes will be valuable for in-memory databases, which are important due to their performance advantages.
  • Graph analytics - Bridges' hardware-supported shared memory nodes will execute algorithms for large, nonpartitionable graphs and complex data very efficiently.
  • Optimization and parameter sweeps - Bridges is designed to run large numbers of small to moderate jobs extremely well, making it ideal for large-scale optimization problems (see the sketch after this list).
  • Rich software environments - Robust collections of applications and tools, for example in statistics, machine learning and natural language processing, will allow researchers to focus on analysis rather than coding. 
  • Data-intensive workflows - Bridges' filesystems and high bandwidth will provide strong support for applications that are typically I/O bandwidth-bound.  One example is an analysis that runs best with steps expressed in different programming models, such as data cleaning and summarization with Hadoop-based tools, followed by graph algorithms that run more efficiently with shared memory. 
  • Contemporary applications - Applications written in Java, Python, R, MATLAB, SQL, C++, C, Fortran, MPI, OpenACC, CUDA and other popular languages will run naturally on Bridges.
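
As an illustration of the parameter-sweep pattern mentioned above, here is a minimal Python sketch that fans evaluations of a hypothetical objective function across the cores of a single node using only the standard library. The objective and parameter grid are placeholder assumptions; a production sweep on Bridges would more typically submit each chunk of work through the batch system.

```python
# Minimal parameter-sweep sketch (objective and grid are hypothetical).
# On a single shared-memory node, the standard library's multiprocessing
# module can fan independent evaluations out across the node's cores.
from itertools import product
from multiprocessing import Pool

def objective(params):
    """Placeholder objective: score one (x, y) parameter pair."""
    x, y = params
    return params, (x - 3.0) ** 2 + (y + 1.0) ** 2

if __name__ == "__main__":
    grid = list(product([0.0, 1.0, 2.0, 3.0, 4.0],
                        [-2.0, -1.0, 0.0, 1.0]))
    with Pool() as pool:                      # one worker per available core
        results = pool.map(objective, grid)   # independent evaluations
    best_params, best_value = min(results, key=lambda r: r[1])
    print("best parameters:", best_params, "objective:", best_value)
```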

   What is Bridges' hardware architecture?

View the Bridges virtual tour, which shows Bridges' architecture and illustrates features that can be used in various research models. The tour is best viewed in Chrome.

Bridges will comprise three classes of compute nodes, plus additional dedicated nodes for databases, webservers and data transfer.  Several types of filesystems with different functions will be available.  Bridges' components will be interconnected by the Intel Omni-Path Fabric.

Bridges will have three classes of compute nodes:

  • 4 Extreme Shared Memory (ESM) nodes: HP Integrity Superdome X servers, each with 16 Intel Xeon EX-series CPUs and 12TB of RAM
  • Several tens of Large Shared Memory (LSM) nodes: HP DL580 servers, each with 4 Intel Xeon EX-series CPUs and 3TB of RAM
  • Many hundreds of Regular Shared Memory (RSM) nodes, each with 2 Intel Xeon EP-series CPUs and 128GB of RAM

Bridges' database nodes will be dual-socket Xeon servers with 128GB of RAM. Some will contain solid-state disks to deliver high IOPS for latency-sensitive workloads; others will contain banks of hard disk drives for capacity-oriented workloads.

Bridges' webserver nodes will be dual-socket Xeon servers with 128GB of RAM, connected to PSC's wide-area network, including XSEDE and the commodity Internet. They will be implemented as virtual machines to provide security, allow maximum use of Bridges' resources and permit project-specific customization of web server configurations.

Bridges' data transfer nodes will be dual-socket Xeon servers with 128GB of RAM and 10 GigE connections to PSC's wide-area network, enabling high-performance data transfers between Bridges and XSEDE, campuses, instruments and other advanced cyberinfrastructure.

Bridges will support a shared parallel filesystem for persistent storage, node-local filesystems and memory filesystems.

The shared parallel filesystem, named Pylon, will be a high-bandwidth, high-capacity, centralized parallel filesystem cross-mounted across Bridges' nodes.  Pylon is modeled on other PSC production filesystems, including the one on the Data Exacell.  It is entirely disk-based, with high-level RAID providing data safety.  Pylon will have approximately 10PB of storage and approximately 180GB/s of bandwidth to the system.

Node-local filesystems will be available on each compute node. They will provide natural support for Hadoop and the software layers that need it; for applications and frameworks that distribute data or are written to shard; and for applications that benefit from local "scratch" storage.  Node-local filesystems will also improve bandwidth and performance consistency to Pylon.
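
As a sketch of the local "scratch" pattern just described, the Python fragment below stages intermediate output on node-local storage and copies only the final result to the shared filesystem. Both directory paths are hypothetical stand-ins; the actual mount points on Bridges are not specified here.

```python
# Node-local scratch pattern: write intermediate data locally, then
# copy the final result to the shared filesystem in one bulk transfer.
# Both paths below are hypothetical placeholders.
import shutil
import tempfile
from pathlib import Path

NODE_LOCAL = "/tmp"                       # stand-in for a node-local filesystem
SHARED = Path.home() / "pylon_results"    # stand-in for a Pylon directory

with tempfile.TemporaryDirectory(dir=NODE_LOCAL) as scratch:
    result = Path(scratch) / "result.txt"
    # ...many small intermediate writes would happen here, hitting fast
    # local storage instead of the shared parallel filesystem...
    result.write_text("final summary\n")
    SHARED.mkdir(parents=True, exist_ok=True)
    shutil.copy(result, SHARED / "result.txt")   # single copy to Pylon
```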

Memory filesystems will be supported on Bridges' compute nodes, especially the ESM and LSM nodes.  They will provide maximum IOPS and bandwidth to improve the performance of applications such as pipelined workflows, genome sequence assembly and in-memory instantiations of otherwise disk-based databases.
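
To illustrate, on a typical Linux node a memory filesystem appears as an ordinary directory (conventionally the tmpfs mount /dev/shm), so unmodified file-based tools can read and write at RAM speed. The sketch below assumes such a mount exists; the actual mount points on Bridges' nodes may differ.

```python
# Sketch: use a memory filesystem as a drop-in file store so that
# file-based pipeline stages exchange data at RAM speed. /dev/shm is
# the conventional Linux tmpfs mount; Bridges' mounts may differ.
import os
from pathlib import Path

shm = Path("/dev/shm")
workdir = shm / f"pipeline-{os.getpid()}" if shm.is_dir() else Path(".")
workdir.mkdir(parents=True, exist_ok=True)

# A pipelined workflow can pass files between stages through RAM:
stage1_out = workdir / "stage1.dat"
stage1_out.write_bytes(b"intermediate data")   # stage 1 writes its output
data = stage1_out.read_bytes()                 # stage 2 reads it back
print(f"read {len(data)} bytes via {workdir}")
```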

Bridges' components will be interconnected by the Intel Omni-Path Fabric, which delivers 100Gbps line speed, low latency, excellent scalability and improved tolerance to data errors. A unique two-level "island" topology, designed by PSC, will maximize performance for the intended workloads.  Compute islands will provide full bisection bandwidth to applications spanning up to 42 nodes.  Storage islands will take advantage of the Intel Omni-Path Fabric to implement multiple paths and provide optimal bandwidth to the Pylon filesystem.  Storage switches will be cross-linked to all other storage switches and will connect management nodes, database nodes, webserver nodes and data transfer nodes.

   When can I apply?

Requests for Startup allocations are accepted at any time.  They are the easiest way to get started with XSEDE resources and are recommended for all new XSEDE users.  See https://portal.xsede.org/allocations-overview#types-startup for more information.

Requests for Research allocations are accepted four times a year:

  • Mar 15th through Apr 15th
  • Jun 15th through Jul 15th
  • Sep 15th through Oct 15th
  • Dec 15th through Jan 15th
See https://portal.xsede.org/allocations-overview#types-research for more information.

   How can I apply?

Request an allocation on Bridges through the XSEDE User Portal. If you don't already have an XSEDE portal account, you will need to create one before you can apply.

   How do I prepare an allocation request?

There are detailed instructions on the XSEDE User Portal at https://portal.xsede.org/allocation-request-steps explaining how to prepare an allocation request.  These resources may be helpful:

  • A video on writing and submitting a successful XSEDE allocation proposal
  • A sample resource request for Bridges
  • Examples of successful allocation requests for other XSEDE resources

   What do I ask for in an allocation request?

At a minimum, you will request computing time and a storage allocation.

  See an example of an XSEDE resource request for Bridges.  The "Resource Justification" section may be particularly helpful in quantifying your request.

Computing time:  You will request computing time on Bridges' "Regular Memory" nodes or its "Large Memory" nodes, or both, depending on what your application needs.

Bridges' "Regular Memory" nodes are appropriate for applications needing up to 128GB of cache-coherent shared memory. "Large Memory" nodes can accommodate applications requiring up to 12TB of cache-coherent shared memory.

Computing time allocations are given in terms of Service Units (SUs).

If you will use Bridges' "Large Memory" nodes, SUs are defined in terms of memory-hours:

1 Service Unit = 1TB-hour

For Bridges' "Regular Memory" nodes, SUs are defined in terms of core-hours:

1 SU = 1 core-hour
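
To make the SU arithmetic concrete, the following minimal Python sketch applies the two definitions above; the job sizes shown are hypothetical examples, not recommendations.

```python
# SU arithmetic from the definitions above.
# Regular Memory nodes: 1 SU = 1 core-hour.
# Large Memory nodes:   1 SU = 1 TB-hour.

def rm_sus(cores: int, hours: float) -> float:
    """SUs for a Regular Memory job: cores x wall-clock hours."""
    return cores * hours

def lm_sus(memory_tb: float, hours: float) -> float:
    """SUs for a Large Memory job: TB of memory x wall-clock hours."""
    return memory_tb * hours

# Hypothetical examples:
print(rm_sus(cores=28, hours=10))        # 280 SUs on Regular Memory nodes
print(lm_sus(memory_tb=3.0, hours=48))   # 144 SUs on Large Memory nodes
```

Note that Regular Memory SUs scale with the cores you request, while Large Memory SUs scale with memory, reflecting the resource that dominates on each node class.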

Storage allocation: You must also request a storage allocation on Pylon, the parallel filesystem shared across all of Bridges' nodes. The default Pylon allocation is 512GB.  If you need more than that, you should justify the additional storage in your allocation request.

Other: You must also note in your request if you need GPUs, big data frameworks like Hadoop, or virtual machines (if your research requires persistent databases or webservers, e.g., for gateways or community datasets).

   How should I estimate the resources I will need?

If you need experience with large-memory HPC to help you estimate the resources your research requires, you can request a Startup allocation on PSC's Greenfield system for benchmarking.  You can also ask for help from XSEDE's Extended Collaborative Support Service.

If you have questions about resources for persistent databases, gateways or other types of distributed applications, please contact PSC User Services.  


   Do I need to make any special requests in addition to computing time on Bridges?

If your research requires any of the following, be sure to specifically ask for (and justify) them in your allocation request:

  • Big data frameworks like Spark and Hadoop
  • GPUs
  • Virtual machines
  • More than 512GB of storage

   Where can I get more information?

If you would like to hear more about Bridges' capabilities or discuss how you can take advantage of Bridges in your research, call us at 412-268-4960 or email PSC User Services.