Neocortex

Cerebras CS-2, the world’s most powerful AI system, is the main compute element of Neocortex.

Unlocking Interactive AI Development for Rapidly Evolving Research

Neocortex is a highly innovative resource that accelerates AI-powered scientific discovery by greatly shortening the time required for deep learning training and by fostering closer integration of deep learning with scientific workflows. Its revolutionary hardware facilitates the development of more efficient algorithms for artificial intelligence and graph analytics.

With Neocortex, users can apply more accurate models and larger training data, scale model parallelism to unprecedented levels, and avoid the need for expensive and time-consuming hyperparameter optimization.

Questions?

For more information about Neocortex, please email neocortex@psc.edu.

PSC’s Bridges-2 Joins Neocortex Among Elite Artificial Intelligence Computers Allocated through National NAIRR Pilot Project

The Pittsburgh Supercomputing Center’s Bridges-2 supercomputer is now available to scientists through the National AI Research Resource (NAIRR) Pilot Project.

National AI Research Resource Announcement

PSC’s Neocortex Among Elite Artificial Intelligence Computers Selected for National AI Research Resource Pilot Project

Initial Goal of NAIRR Pilot Project, Also Supported by Allocations Software Developed by PSC and ACCESS Partners, Will Be to Explore Trustworthy AI

The topics below describe Neocortex in more detail.

ML models supported

Four types of applications are currently supported on the Neocortex system:

  • Cerebras Model Zoo ML models
  • Models similar to the Cerebras Model Zoo models
  • General-purpose Cerebras SDK applications
  • WFA, the WSE Field-equation API
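These application types are served by different software stacks. As a rough illustration only, a Model Zoo training run on Neocortex is typically submitted as a batch job; the sketch below is a hypothetical Slurm job script (the directives and the `run.py` flags shown are assumptions rather than the documented interface; consult the Neocortex user guide for the exact commands):

```shell
#!/bin/bash
#SBATCH --job-name=modelzoo-train   # hypothetical job script: the directives and
#SBATCH --time=01:00:00             # flags here are assumptions, not Neocortex docs

# Assumed layout: a Cerebras Model Zoo model directory containing run.py and a
# params.yaml configuration that describes the model, optimizer, and input data.
python run.py --mode train --params params.yaml
```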

Research projects

Research on Neocortex spans topics from natural language processing (NLP) to physics, chemistry, and biology. See some of the exciting projects using Neocortex to advance knowledge.

Getting an allocation

See how to apply, including eligibility requirements and the Neocortex Acceptable Use Policy.

Training

Training on the Neocortex system provides an overview of this innovative platform for AI and ML research, along with insights on how best to take advantage of its unique hardware, software, and capabilities.

User guide

Learn about the applications that are supported and how to conduct your research on Neocortex.

System specifications

Neocortex features two Cerebras CS-2 systems and an HPE Superdome Flex HPC server robustly provisioned to drive the CS-2 systems simultaneously at maximum speed and support the complementary requirements of AI and HPDA workflows.

Neocortex is federated with PSC’s flagship computing system, Bridges-2, which provides users with:

  • Access to the Bridges-2 filesystem for management of persistent data
  • General purpose computing for complementary data wrangling and preprocessing
  • High-bandwidth connectivity to other ACCESS sites, campuses, labs, and clouds

Cerebras CS-2

Each CS-2 features a Cerebras WSE-2 (Wafer Scale Engine 2), the largest chip ever built.

AI processor:

Cerebras Wafer Scale Engine 2

  • 850,000 Sparse Linear Algebra Compute (SLAC) cores
  • 2.6 trillion transistors
  • 46,225 mm² of silicon
  • 40 GB of on-chip SRAM memory
  • 20 PB/s aggregate memory bandwidth
  • 220 Pb/s interconnect bandwidth

System I/O:

1.2 Tb/s (12 × 100 GbE ports)
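The system I/O figure follows directly from the port count; a quick unit-arithmetic cross-check (assuming each 100 GbE port carries 100 Gb/s, with 8 bits per byte):

```python
# Cross-check the stated CS-2 system I/O bandwidth from its port count.
ports = 12            # 12 x 100 GbE ports per CS-2 system
gbps_per_port = 100   # 100 GbE carries 100 Gb/s

io_tbps = ports * gbps_per_port / 1000   # terabits per second
io_gBps = ports * gbps_per_port / 8      # gigabytes per second (8 bits/byte)

print(io_tbps)   # 1.2, matching the stated 1.2 Tb/s system I/O
print(io_gBps)   # 150.0, i.e. 150 GB/s
```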

HPE Superdome Flex

Processors:

32 × Intel Xeon Platinum 8280L

  • 28 cores, 56 threads each
  • 2.70-4.00 GHz
  • 38.5 MB cache

Memory:

24 TiB RAM, 4.5 TB/s aggregate memory bandwidth

Local disk:

32 × 6.4 TB NVMe SSDs

  • 204.6 TB aggregate
  • 150 GB/s read bandwidth

Network to CS systems:

24 × 100 GbE interfaces

  • 1.2 Tb/s (150 GB/s) to each Cerebras CS-2 system
  • 2.4 Tb/s aggregate

Interconnect to Bridges-2:

16 Mellanox HDR-100 InfiniBand adapters

  • 1.6 Tb/s aggregate

OS:

Red Hat Enterprise Linux
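The aggregate numbers above are simple products of the per-unit figures; a quick sanity check (assuming 100 Gb/s per 100 GbE interface and per HDR-100 adapter):

```python
# Sanity-check the aggregate figures for the HPE Superdome Flex front end.
sockets, cores_per_socket = 32, 28
total_cores = sockets * cores_per_socket   # 32 CPUs x 28 cores

cs_tbps = 24 * 100 / 1000   # 24 x 100 GbE interfaces to the CS-2 systems
ib_tbps = 16 * 100 / 1000   # 16 x HDR-100 adapters (100 Gb/s each) to Bridges-2

print(total_cores)   # 896 cores in total
print(cs_tbps)       # 2.4 Tb/s aggregate to the CS-2 systems
print(ib_tbps)       # 1.6 Tb/s aggregate to Bridges-2
```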

Acknowledgment in publications

Please use the following citation when acknowledging the use of computational time on Neocortex:

Buitrago P.A., Nystrom N.A. (2021) Neocortex and Bridges-2: A High Performance AI+HPC Ecosystem for Science, Discovery, and Societal Good. In: Nesmachnow S., Castro H., Tchernykh A. (eds) High Performance Computing. CARLA 2020. Communications in Computer and Information Science, vol 1327. Springer, Cham. https://doi.org/10.1007/978-3-030-68035-0_15

This material is based upon work supported by the National Science Foundation under Grant Number 2005597. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.