Applying for a Bridges account

How to prepare a request; what to request; when requests can be submitted; examples of successful requests. 

Bridges was built to facilitate research ranging from traditional HPC areas such as astronomy and physics, through emerging fields such as genomics, to decision science, natural language processing and digital humanities. Bridges could be a good fit for you if:

You want to scale your research beyond the limits of your laptop, using familiar software and user environments.

You want to collaborate with other researchers with complementary expertise. 

Your  research can take advantage of any of the following:

  • Rich data collections - Rapid access to data collections will support their use by individuals, collaborations and communities.
  • Cross-domain analyses - Concurrent access to datasets from different sources, along with tools for their integration and fusion, will enable new kinds of questions.
  • Gateways and workflows - Web portals will provide intuitive access to complex workflows that run "behind the scenes". 
  • Large coherent memory - Bridges' 3TB and 12TB nodes will be ideal for memory-intensive applications, such as genomics and machine learning.
  • In-memory databases  - Bridges' large-memory nodes will be valuable for in-memory databases, which are important due to their performance advantages.
  • Graph analytics - Bridges' hardware-enabled shared memory nodes will execute algorithms for large, nonpartitionable graphs and complex data very efficiently.
  • Optimization and parameter sweeps - Bridges is designed to run large numbers of small to moderate jobs extremely well, making it ideal for large-scale optimization problems.
  • Rich software environments - Robust collections of applications and tools, for example in statistics, machine learning and natural language processing, will allow researchers to focus on analysis rather than coding. 
  • Data-intensive workflows - Bridges' filesystems and high bandwidth will provide strong support for applications that are typically I/O bandwidth-bound.  One example is an analysis that runs best with steps expressed in different programming models, such as data cleaning and summarization with Hadoop-based tools, followed by graph algorithms that run more efficiently with shared memory. 
  • Contemporary applications - Applications written in Java, Python, R, MATLAB, SQL, C++, C, Fortran, MPI, OpenACC, CUDA and other popular languages will run naturally on Bridges.
on May 16, 2019

Allocations for Bridges are open to faculty members, postdocs, and researchers at U.S.-based institutions. Eligible institutions include federal research labs and commercial organizations; however, special rules may apply if your institution is not a university or a two- or four-year college.

The Principal Investigator (PI) for an allocation may not be a high school or undergraduate student; a qualified advisor, e.g., a high school teacher or faculty member, must serve in this capacity. In most cases, a graduate student may not be a PI. A post-doctoral researcher is eligible to be a PI.

For all others, full details on eligibility are available here.


Three types of allocations for non-proprietary research are available on Bridges:

  • Startup grants are appropriate for groups that are just getting familiarized with XSEDE resources. They are used to prepare a group to submit a full research proposal. We recommend that all new investigators get a startup allocation before submitting a full research proposal.
  • Research grants are for full production research on Bridges.
  • Educational allocations are to be used by students in classroom settings.

Allocations for proprietary research are available through the Corporate Affiliates Program.


View the Bridges virtual tour
The Bridges virtual tour depicts the Bridges architecture and illustrates features that can be used in various research models.

Bridges comprises 5 classes of compute nodes with additional dedicated nodes for databases, webservers and data transfer.  Several types of filesystems with different functions are available.  Bridges components are interconnected by the Intel Omni-Path Fabric.

Bridges has 5 classes of compute nodes: 

  • 752 Regular Shared Memory (RSM) nodes, each with 2 Intel Xeon EP-series CPUs and 128GB of RAM
  • 48 "RSM-GPU" nodes: 16 nodes with 2 NVIDIA Tesla K80 GPUs, 2 Intel Xeon EP-series CPUs, and 128GB of RAM each, and 32 nodes with 2 NVIDIA Tesla P100 GPUs, 2 Intel Xeon EP-series CPUs, and 128GB of RAM  each
  • 10 "GPU-AI" nodes: 9 nodes with 8 Volta V100 GPUs each, 16GB of GPU memory per GPU and 192GB of RAM per node, and a DGX-2 with 16 Volta V100 GPUs, 32GB of GPU memory per GPU and 1.5TB of RAM
  • 42 Large Shared Memory (LSM) nodes, HP DL580 servers with 4 Intel Xeon EX-series CPUs and 3TB of RAM
  • 4 Extreme Shared Memory (ESM) nodes, HP Integrity Superdome X servers with 16 Intel Xeon EX-series CPUs and 12TB of RAM

Bridges' database nodes are dual-socket Xeon servers with 128GB of RAM. Some  contain solid-state disks to deliver high IOPs for latency-sensitive workloads and others contain banks of hard disk drives for capacity-oriented workloads.

Bridges' webserver nodes  are dual-socket Xeon servers with 128GB of RAM and are connected to PSC's wide-area network, including XSEDE and commodity Internet. They are implemented in virtual machines to provide security, allow maximum use of Bridges' resources and grant project-specific customization of the web server configuration.

Bridges' data transfer nodes are dual-socket Xeon servers with 128GB of RAM and 10 GigE connections to PSC's wide-area network, enabling high-performance data transfers between Bridges and XSEDE, campuses, instruments and other advanced cyberinfrastructure.

Bridges supports a shared parallel filesystem for persistent storage, node-local filesystems and memory filesystems.

The shared parallel filesystem, named Pylon, is a high-bandwidth, high-capacity, centralized parallel system cross-mounted across Bridges' nodes. Pylon is modeled on other PSC production filesystems, including the one on the Data Exacell. It is entirely disk-based, with high-level RAID providing data safety. Pylon provides approximately 10PB of storage and approximately 180GB/s of bandwidth to the system.

Node-local filesystems are available on each compute node. They provide natural support for Hadoop and other software layers that need local storage; for applications and frameworks that distribute data or are written to shard; and for applications that benefit from local "scratch" storage. By offloading traffic, node-local filesystems also improve bandwidth and performance consistency to Pylon.

Memory filesystems are supported on Bridges' compute nodes, especially the ESM and LSM nodes.  They provide maximum IOPs and bandwidth to improve the performance of applications such as pipelined workflows, genome sequence assembly and in-memory instantiations of otherwise disk-based databases.
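As an illustration of the kind of use these filesystems enable, a job can stage intermediate files on fast memory-backed or node-local storage and copy only the final results back to persistent storage. This is a generic sketch, assuming a standard Linux tmpfs mount at /dev/shm; the paths and filenames are illustrative, and the Bridges User Guide documents the actual scratch locations.

```python
import os
import shutil
import tempfile

# Use a memory-backed filesystem as fast scratch space if available,
# falling back to the default temporary directory otherwise.
# /dev/shm is a standard Linux tmpfs mount; it is NOT a documented
# Bridges path - consult the user guide for the real scratch locations.
scratch_root = "/dev/shm" if os.path.isdir("/dev/shm") else None
scratch = tempfile.mkdtemp(prefix="myjob.", dir=scratch_root)

# ... run I/O-intensive intermediate steps against the scratch directory ...
intermediate = os.path.join(scratch, "sorted.txt")
with open(intermediate, "w") as f:
    f.write("\n".join(sorted(["b", "a", "c"])))  # stand-in for real work

# Copy only the final result back to persistent storage (e.g., Pylon),
# then clean up the scratch directory.
os.makedirs("results", exist_ok=True)
shutil.copy(intermediate, os.path.join("results", "sorted.txt"))
shutil.rmtree(scratch)
```

The point of the pattern is that the many small intermediate reads and writes hit the fast local filesystem, while the shared filesystem sees only one final copy.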

Bridges' components are interconnected by the Intel Omni-Path Fabric, which delivers 100Gbps line speed, low latency, excellent scalability and improved tolerance to data errors. A unique two-level "island" topology, designed by PSC, maximizes performance for the intended workloads. Compute islands provide full bisection bandwidth to applications spanning up to 42 nodes. Storage islands take advantage of the Intel Omni-Path Fabric to implement multiple paths and provide optimal bandwidth to the Pylon filesystem. Storage switches are cross-linked to all other storage switches and connect management nodes, database nodes, web server nodes and data transfer nodes.

on May 17, 2019

If you are a researcher or educator at a US academic or non-profit research institution, you can apply for Bridges access through the NSF's XSEDE program by requesting an allocation on Bridges through the XSEDE User Portal. If you don't already have an XSEDE portal account, you will need to create one before you can apply.

  • Navigate to the XSEDE User Portal.
  • Click on the “Sign In” button on the top-right of the screen.
  • At the next screen, you will see a link to "Create Account" under the Sign In button.
  • Once you have created the account, log in.
  • Navigate to the “Allocations” tab
  • Click on “Submit/Review Request”.
  • On the next page, you will see the option to begin a submission.  We recommend all new investigators first get a startup allocation before submitting a full research proposal.

If you are doing proprietary research, you can get access to Bridges through the Corporate Affiliates program.


Three types of applications are accepted at any time.

Requests for Research XSEDE allocations are accepted four times a year:
Mar 15 - Apr 15
Jun 15 - Jul 15
Sep 15 - Oct 15
Dec 15 - Jan 15
See for more information.  


At a minimum, you will request two things:

  • computing time 
  • a storage allocation on Pylon, Bridges' persistent storage system

 See an example of an XSEDE resource request for Bridges.  The "Resource Justification" section may be  particularly helpful in quantifying your request.

Computing time: You will request computing time on Bridges "Regular Memory" nodes, "Bridges GPU" nodes, Bridges "GPU-AI" nodes, "Bridges Large" nodes (3TB or 12TB of RAM), or any combination, depending on what your application needs.

If you want to use GPU nodes, be sure to request a GPU allocation.

  • Bridges "Regular memory" nodes are appropriate for applications needing up to 128GB of cache-coherent shared memory.
  • Bridges "GPU" nodes are for applications which want to take advantage of GPU processors.
  • Bridges "GPU-AI" nodes can support highly complex deep learning models with extremely large data sets.
  • Bridges "Large memory" nodes can accommodate applications requiring up to 12TB of cache-coherent shared memory.

Computing time allocations are given in terms of Service Units (SU).

For Bridges' "Regular Memory" nodes, SUs are defined in terms of core-hours:

The use of one core for 1 hour = 1 SU

For "Bridges GPU" nodes, SUs are defined in terms of GPU-hours. Because of the difference in performance between the K80 and P100 GPUs, the charges for the two node types are different.

For Bridges' K80 GPU nodes,  the use of 1 GPU for 1 hour  = 1 SU

For Bridges' P100 GPU nodes, the use of 1 GPU for 1 hour  = 2.5 SUs

For Bridges' "GPU-AI" nodes, SUs are defined in terms of GPU-hours:

The use of one GPU for 1 hour = 1 SU

For "Bridges Large" nodes, SUs are defined in terms of memory-hours:

The use of 1TB of memory for 1 hour = 1 SU

For more information on accounting for Bridges' use and examples of calculating SUs, see the Account Administration section of the Bridges User Guide.
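The rates above can be collected into a single small calculation. This is an illustrative sketch (the function and rate-table names are made up; the rates themselves come from the definitions above):

```python
# Illustrative SU calculator based on the rates defined above:
#   Regular Memory: 1 SU per core-hour
#   K80 GPU:        1 SU per GPU-hour
#   P100 GPU:       2.5 SUs per GPU-hour
#   GPU-AI:         1 SU per GPU-hour
#   Large Memory:   1 SU per TB-hour
RATES = {
    "regular": 1.0,   # per core-hour
    "k80": 1.0,       # per GPU-hour
    "p100": 2.5,      # per GPU-hour
    "gpu_ai": 1.0,    # per GPU-hour
    "large": 1.0,     # per TB-hour
}

def sus_charged(node_type, units, hours):
    """Return the SUs charged for using `units` of the billed resource
    (cores, GPUs, or TB of memory, depending on node type) for `hours` hours."""
    return RATES[node_type] * units * hours

# Example: 28 cores for 10 hours on a Regular Memory node = 280 SUs;
# 2 P100 GPUs for 10 hours = 50 SUs; 3TB of memory for 4 hours = 12 SUs.
```

Note that the billed unit changes with the node type: cores for Regular Memory, GPUs for the GPU node types, and terabytes of memory for Large Memory nodes.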

Storage allocation: You must also request a storage allocation on Pylon, a parallel filesystem shared across all of Bridges' nodes. The default Pylon allocation is 512GB.  If you need more than that, you should justify it in your allocation request.

Other: You must also note in your request if you need big data frameworks like Hadoop, or virtual machines (if your research requires persistent databases or webservers, e.g., for gateways or community datasets).



If  you need experience with large memory HPC to help you estimate the resources your research requires, you can request a Startup allocation on Bridges for benchmarking.  You can also ask for  help from XSEDE's Extended Collaborative Support Service.

If you have questions about resources for persistent databases, gateways or other types of distributed applications, please contact PSC User Services.



Don't forget to request storage space on Pylon, and if your research requires any of the following, be sure to specifically ask for (and justify) them in your allocation request:

  • Big data frameworks like Spark and Hadoop
  • Virtual machines
  • More than 512GB of storage

For startup allocations, the limits are:

  • Bridges Regular Memory: 50,000 core-hours 
  • Bridges GPU: 2,500 GPU-hours
  • Bridges GPU-AI: 1,500 GPU-hours
  • Bridges Large Memory: 1,000 TB-hours 
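To see whether a planned workload fits under these caps, you can total the expected usage for each resource and compare it to the limits. A minimal sketch; the cap values come from the list above, while the function name and the example usage figures are hypothetical:

```python
# Startup allocation caps from the list above.
STARTUP_LIMITS = {
    "regular_core_hours": 50_000,
    "gpu_hours": 2_500,
    "gpu_ai_hours": 1_500,
    "large_tb_hours": 1_000,
}

def fits_startup(planned):
    """Return True if every planned usage figure is within its startup cap.
    `planned` maps the same keys as STARTUP_LIMITS to estimated usage;
    omitted keys count as zero."""
    return all(planned.get(k, 0) <= cap for k, cap in STARTUP_LIMITS.items())

# Hypothetical benchmarking plan:
# 200 runs x 28 cores x 8 hours = 44,800 core-hours, plus 100 GPU-hours.
plan = {"regular_core_hours": 200 * 28 * 8, "gpu_hours": 100}
```

The plan above fits within the startup limits; a plan needing, say, 2,000 GPU-AI hours would not, and would call for a research allocation instead.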

For research allocations, there is no limit on the size of the request. Note, however, that all requests must be extremely detailed and justified.


Startups and educational proposals take about 2 weeks to be reviewed and processed, depending on the size of the request.

Research allocation proposals are reviewed quarterly.  They must be submitted about 3 months prior to the start date.  See "When can I apply?" for the submission deadlines. Decisions are announced a few weeks before the start date. For example, if a proposal is submitted by January 1 for a start date of April 1, a notification regarding the outcome will be sent around the week of March 15.


If you would like to hear more about Bridges' capabilities or discuss how you can take advantage of Bridges in your research, call us at 412-268-4960 or email
