
Data Exacell (DXC): Hardware Infrastructure

Storage Building Blocks - eight storage building blocks, each 300-500TB, that can be configured and reconfigured into multiple filesystems, currently servicing:

  • Crucible, a high-performance, shared, parallel SLASH2 file system instance that extends technologies PSC developed for the Data Supercell. The Crucible filesystem is mounted on all DXC and Greenfield nodes.
  • Hopper, a high-performance, shared, parallel SLASH2 file system instance spread across multiple locations nationwide to demonstrate the wide-area (WAN) capabilities of SLASH2.
  • OpenStack - OpenStack storage, including Cinder and Ceph

MDS Servers - two Intel Xeon based metadata (MDS) servers with 128 GB of RAM and SSDs to support the Storage Building Blocks

Extreme Shared Memory Nodes - two HPE Superdome X servers with 12TB of hardware-enabled coherent shared memory, ideal for memory-intensive applications expressed in Java, Python, R, and other high-productivity programming languages, as well as genome sequence assembly, graph analytics, and other memory-intensive research areas.

Large Shared Memory Nodes - three HPE DL580 servers with 3TB of hardware-enabled coherent shared memory, ideal for memory-intensive applications expressed in Java, Python, R, and other high-productivity programming languages, as well as genome sequence assembly, graph analytics, and other research areas needing up to 3TB of shared memory. Each DXC LSM server has 4 Intel Xeon E7-4880 v2 CPUs (15 cores, 2.5 GHz) for a total of 60 cores (120 hyperthreads) per server.
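
A rough way to gauge whether a workload fits these shared-memory nodes is to size its largest in-memory structure against the node's RAM. The Python sketch below does this for a dense matrix; the problem size, 80% headroom factor, and capacities shown are illustrative assumptions, not DXC benchmarks.

    # Minimal sketch: sizing a memory-hungry NumPy workload against a shared-memory node.
    # The problem size and headroom factor are illustrative, not measured DXC values.
    import numpy as np

    TERABYTE = 1024 ** 4
    node_memory = 3 * TERABYTE       # a Large Shared Memory node; use 12 * TERABYTE for an ESM node

    n = 500_000                      # hypothetical problem size
    needed = n * n * 8               # bytes for a dense n x n float64 matrix

    print(f"Matrix needs {needed / TERABYTE:.2f} TB of the node's "
          f"{node_memory / TERABYTE:.0f} TB of shared memory")

    if needed < 0.8 * node_memory:   # leave headroom for the OS and other data
        matrix = np.zeros((n, n), dtype=np.float64)   # fits entirely in RAM; no out-of-core tricks needed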

Accelerated Nodes - featuring GPUs and Intel Xeon Phi processors, these nodes provide hardware infrastructure for developing highly concurrent applications for machine learning, simulation and other areas.

  • GPU Nodes: 3 servers containing a mix of NVIDIA GeForce GTX980, Tesla K20, Tesla K40 and Tesla K80 GPUs enable application development in CUDA and OpenACC and with accelerated frameworks for deep learning, simulation, visualization and other fields (see the sketch after this list).
  • Knights Landing Nodes: 2 servers contain Intel Knights Landing (KNL) Xeon Phi processors to enable application development in OpenMP and using accelerated frameworks for deep learning, simulation and other fields.
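
As a hedged illustration of GPU offload from a high-productivity language, the Python sketch below uses the CuPy library to move an array to a GPU, compute on it, and copy the result back; whether CuPy is installed on the DXC GPU nodes is an assumption here, and CUDA C or OpenACC codes are equally at home on these nodes.

    # Minimal sketch of offloading array arithmetic to one of the node's GPUs.
    # Assumes the CuPy library is available; illustrative, not a documented part
    # of the DXC software stack.
    import numpy as np
    import cupy as cp

    x_host = np.random.rand(10_000_000).astype(np.float32)

    x_gpu = cp.asarray(x_host)      # copy to GPU memory (e.g., a K40 or K80)
    y_gpu = cp.sqrt(x_gpu) * 2.0    # elementwise kernels run on the device
    result = cp.asnumpy(y_gpu)      # copy the result back to host memory

    print(result[:5])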

Database Nodes - equipped with solid-state disks (SSDs) for high IOPS for serving persistent relational and NoSQL databases. DXC database servers have two Intel Xeon CPUs each and 128-256GB of RAM. A client-side sketch follows the notes below.

  • Each pilot application that requires one or more databases is given its own virtual machine to improve customizability and enhance security.

  • Note: In-memory databases and in-memory implementations of other databases such as Neo4j can be run on the DXC's other large-memory nodes.
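
As a hedged illustration of how a pilot application might talk to its database VM, the Python sketch below assumes a PostgreSQL server and the psycopg2 driver; the hostname, credentials, and table are placeholders, not actual DXC resources.

    # Minimal sketch of a client querying a per-pilot database VM.
    # Hostname, credentials, and table are hypothetical placeholders.
    import psycopg2

    conn = psycopg2.connect(
        host="dxc-db-pilot.example.org",   # hypothetical per-pilot VM
        dbname="pilotdb",
        user="pilot_user",
        password="changeme",
    )

    with conn, conn.cursor() as cur:
        cur.execute("SELECT sample_id, read_count FROM sequencing_runs LIMIT 10;")
        for row in cur.fetchall():
            print(row)

    conn.close()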

Web Server Nodes - for running Apache, Tomcat, and related web infrastructure. DXC web servers each have two Intel Xeon CPUs and 128GB of RAM.

  • Each pilot application that requires web services is given its own virtual machine to improve customizability and enhance security.

Application Server Nodes - each with two Intel Xeon CPUs and 128-384GB of RAM, suited to serving a wide variety of applications.

  • Each pilot application that requires one or more application servers is given its own virtual machine to improve customizability and enhance security.

Hadoop Nodes - each with two Intel Xeon CPUs and 128GB of RAM, customizable to support many different Hadoop workloads. A minimal workload sketch follows.
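
As a hedged illustration of a typical Hadoop workload, the Python script below implements a word-count mapper and reducer in the style used with Hadoop Streaming; input paths, the streaming jar location, and the exact hadoop jar invocation are site-specific and are not documented DXC settings.

    #!/usr/bin/env python
    # Minimal Hadoop Streaming word-count sketch; run as "wordcount.py map" for the
    # mapper and "wordcount.py reduce" for the reducer. Paths and job submission
    # details are site-specific placeholders.
    import sys

    def mapper():
        # Emit one "word<TAB>1" record per word on stdin.
        for line in sys.stdin:
            for word in line.split():
                print(f"{word}\t1")

    def reducer():
        # Hadoop Streaming delivers mapper output sorted by key, so counts for a
        # word arrive contiguously and can be summed in one pass.
        current, total = None, 0
        for line in sys.stdin:
            word, count = line.rstrip("\n").split("\t")
            if word != current:
                if current is not None:
                    print(f"{current}\t{total}")
                current, total = word, 0
            total += int(count)
        if current is not None:
            print(f"{current}\t{total}")

    if __name__ == "__main__":
        mapper() if sys.argv[1] == "map" else reducer()

With Hadoop Streaming, such a script would typically be passed as both the mapper and the reducer of a hadoop jar submission; the exact invocation depends on the cluster's configuration.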

Front End Server - Intel Xeon based server that handles shared home directories and software across the DXC project and serves as a login node for traditional cluster users.

Operational Support - three Intel Xeon based servers for handling logging, configuration and administration tasks for the other servers within the DXC project.

Systems Development and Testing - six Intel Xeon based servers used for active development and testing of the SLASH2 filesystem.

Greenfield - a high-performance computing resource for users running memory-limited scientific applications in fields as diverse as biology, chemistry, cosmology, machine learning and economics. Greenfield comprises 360 cores and 18TB of memory in three nodes: two Large Shared Memory Nodes and an Extreme Shared Memory Node.

The Crucible filesystem and DXC nodes are interconnected by FDR InfiniBand for high performance.

DXC pilot applications have used one or more DXC hardware resources, depending on the applications' requirements.
