Former Projects on Neocortex

Mark Anastasio, University of Illinois Urbana-Champaign
Automated sleep states classification for wide-field calcium imaging using deep learning

Project Abstract: 

Wide-field calcium imaging (WFCI) with genetically encoded calcium indicators enables spatial-temporal recording of neuronal depolarization in mice on a sub-second scale, with simultaneous examination of neurovascular coupling and cell-type specificity. When applied to the study of sleep, it requires human experts to manually score hours of WFCI recordings using adjunct electroencephalogram (EEG) and electromyogram (EMG) signals. This process is tedious, time-consuming, invasive, and often suffers from low inter- and intra-rater reliability; an automated method for classifying sleep states from sequential WFCI data is therefore desirable. Given that sleep is a cyclic process and that WFCI offers high temporal resolution, we are interested in investigating deep learning models that exploit temporal dependencies among events to classify sleep states on a large-scale dataset of spatial-temporal sequential WFCI recordings. In addition, uncovering the spatial-temporal features underlying calcium dynamics in mice by use of deep learning may enable future sleep-focused studies with WFCI.
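
As a rough illustration of the kind of temporal model under consideration (a minimal sketch, not the project's architecture; the three-state labeling and all layer sizes are assumptions), a per-frame CNN encoder can feed an LSTM that classifies each WFCI epoch:

```python
import torch
import torch.nn as nn

class SleepStateClassifier(nn.Module):
    """Per-frame CNN encoder + LSTM over the frame sequence (illustrative)."""
    def __init__(self, n_states: int = 3):   # assumed states: wake / NREM / REM
        super().__init__()
        self.encoder = nn.Sequential(         # one WFCI frame -> feature vector
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.temporal = nn.LSTM(32, 64, batch_first=True)  # temporal context
        self.head = nn.Linear(64, n_states)

    def forward(self, frames):                # frames: (batch, time, 1, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.temporal(feats)
        return self.head(out[:, -1])          # score the epoch from the last step

logits = SleepStateClassifier()(torch.randn(2, 20, 1, 64, 64))
print(logits.shape)                           # torch.Size([2, 3])
```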

Gregory Beroza, Stanford University

Earthquake Phase Association with Graph Neural Networks
Project Abstract:

In this work we propose a new Graph Neural Network (GNN) architecture for earthquake phase association, in which we process streaming datasets of estimated seismic wave arrival times (known as “picks”), determine the number and location of earthquakes in a time window, and associate picks to earthquake sources. We train the GNN through supervised learning on synthetic pick datasets, for which ground truth is known and which contain high variability and noise (false picks). The network is not trained for a particular configuration of stations; rather, it is trained to accommodate variable network geometry, numbers of stations, station qualities, and pick rates. By frequently including closely overlapping events in space and time in the training data, the GNN learns to untangle overlapping events. As a mathematical function, the GNN maps sets of sets (sets of discrete picks, on each station) to a continuous, smooth, bounded prediction of source likelihoods in space-time, similar to the traditional back-projection (BP) mapping; however, it greatly suppresses the side-lobes that plague traditional approaches, and large and small earthquakes are mapped to a similar output value (in contrast to BP, where outputs scale with the number of observed picks). The technique has been tested on real data from the NC network of northern California, using machine-learning-produced picks as input, where we recover over 95% of previously reported earthquakes > M1 throughout the interval 2000–2020. Initial applications suggest that the GNN will reveal at least 5x more previously undetected earthquakes < M1, which will reveal active fault structure in unprecedented detail. By enabling us to train more rapidly, the computing capabilities of Neocortex can help us enhance these results significantly further. With the Neocortex computing platform, we will have the capabilities needed to optimize the GNN more thoroughly over the hyperparameter space, tune the synthetic data generator to reduce the covariate shift between synthetic and real data, and add modules to the architecture, such as an initial full-waveform processing layer. We will also be able to perform an ablation analysis to assess the performance of the individual components of the GNN more thoroughly, which can help identify aspects of the architecture that can be improved and assist other researchers in adapting our GNN to their own applications.
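
For context, the sketch below illustrates the classical back-projection mapping that the GNN emulates and improves upon, using toy station geometry, a uniform velocity, and a fixed origin time (all assumptions made for brevity); the side-lobes and pick-count-dependent amplitudes of such a map are exactly the artifacts the GNN suppresses:

```python
import numpy as np

rng = np.random.default_rng(0)
stations = rng.uniform(0, 100, size=(8, 2))      # station coordinates (km)
v = 6.0                                          # toy P-wave speed (km/s)
true_src, t0 = np.array([40.0, 60.0]), 5.0
picks = t0 + np.linalg.norm(stations - true_src, axis=1) / v

# Evaluate source likelihood on a coarse grid (origin time held fixed here)
xs = np.linspace(0, 100, 101)
grid = np.stack(np.meshgrid(xs, xs, indexing="ij"), axis=-1)   # (nx, ny, 2)
dist = np.linalg.norm(grid[:, :, None, :] - stations, axis=-1) # (nx, ny, nsta)
residual = picks - t0 - dist / v
bp = np.exp(-(residual ** 2) / (2 * 0.5 ** 2)).sum(axis=-1)    # stack agreement

ix, iy = np.unravel_index(bp.argmax(), bp.shape)
print("BP peak at", xs[ix], xs[iy], "true source", true_src)
```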

William Bradley, Mirabolic Consulting

Voxel Pretraining for Few-Shot Learning
Project Abstract:

TBA

Vincenzo Carnevale, Temple University

Discovery & Engineering Protein Artificial Intelligence (DEPr-AI) with Manifold-aware Protein Resuscitation (MaPR): Synthetic Bioinformatics in the Age of AlphaFold2

Project Abstract:

Though separated in some cases by over 1B years of evolutionary time, divergent protein families may share remarkable sequence and structural patterns. Although the patterns are complex, and although the evolutionary parameters that generated them are ambiguous, the patterns are detectable given sufficient protein data. Therefore, a model trained on sufficient data could in principle extract the parameters from the patterns, and then parameterize itself to generate synthetic proteins that are statistically indistinguishable from those generated by natural evolutionary processes, but in a controllable way. For the first time, sufficient data are available to train such a model. We propose Discovery and Engineering Protein Artificial Intelligence (DEPr-AI), a BERT-like autoencoder neural network (AENN) generative protein model trained on evolutionary patterns in protein sequence and structure data. Until recently, sufficient volumes of protein structure data were unavailable, and so prior generative protein modeling methods focused primarily on sequence data. Here, we propose to leverage the rich information contained in the sequence-structure map that was previously unaccounted for. The recent release of AlphaFold2 (AF2) by Google’s DeepMind, which can predict structures with atomic precision on par with experimental techniques, makes our proposed work newly feasible and extremely timely. The first part of this research proposal is to use Neocortex to generate hundreds of thousands of protein structures using AF2, a task that would take days or weeks of GPU time on current XSEDE resources such as Bridges-2, but could take days, hours, perhaps even minutes on Neocortex. After AF2 structure generation on Neocortex, we will extend our prior generative protein sequence modeling efforts to characterize the relationship between protein sequence-structure and conformational dynamics using DEPr-AI, which employs a novel joint embedding approach that merges sequences with their corresponding structures into a paired representation. By embedding these joint sequence-structure protein entities into the latent space of an AENN during training, DEPr-AI can learn the sequence-structure-function continuum from the evolutionary patterns in the data, and encode the continuum into the topology of a fitness landscape with improved accuracy and interpretability over current methods. We propose another method, Manifold-aware Protein Resuscitation (MaPR), which DEPr-AI can use to interpolate new synthetic proteins from the latent space by “resuscitating” them along high-probability geodesic paths between known proteins. With MaPR, DEPr-AI, and AF2 all running on Neocortex, we will deliver breakthroughs in protein discovery, engineering, and analysis that were technologically infeasible until now. Further, we have already begun coordinating with experimental collaborators, who will verify that our synthetic proteins have the features predicted by DEPr-AI.

Kenneth Chiu, Binghamton University

Wafer-Scale Geometric Deep Learning on the PSC Neocortex
Project Abstract:

Neural networks, especially deep neural networks, have seen remarkable success in the past decade on a variety of problems. Many problems, however, are naturally modeled as graphs, for which traditional neural networks are not well suited. This has led to the development of graph neural networks (GNNs). GNNs, or Message Passing Neural Networks, are a family of deep learning algorithms based on message passing or graph convolutions, designed for supervised and unsupervised learning on graph-structured data. The message passing or convolution operation is analogous to the filter operation of Convolutional Neural Networks (CNNs) over neighboring pixels. CNNs can be viewed as operating on grid-like or lattice graphs with a consistent number of neighbors. GNNs act over a more generalized set of operations, and thus can have an arbitrary number of neighbors and varied kernel operations. As a result, kernel operations vary depending on the data itself and require generalized sparse scatter/gather communication over the data features. We will use a customized CSR/CSC format with custom kernels to perform efficient reduction across neighboring graph vertices. We will co-develop our implementation with three applications. The first is inverse molecular design. Molecules are naturally represented as graph structures, and deep learning on molecules has been hugely successful in domains such as materials science and drug discovery. Successful incorporation of deep learning in the molecular design loop can lead to exotic materials for energy storage, energy generation, and combating climate change; structure-property prediction is an important part of the design of new materials. Our second application is predicting events in the world’s most popular multiplayer video game, League of Legends. Using high-resolution, large-scale data from thousands of played games, we will learn interactions in complex dynamic graphs that update in real time; such graphs will serve as a case study for accelerated deep learning on real-time graphs. Our third application is identifying state-sponsored disinformation from online interaction graphs.
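
To make the communication pattern concrete, here is a minimal numpy sketch (illustrative only; the proposed wafer-scale kernels are custom) of the CSR-based gather/reduce that a single message-passing step performs:

```python
import numpy as np

def csr_neighbor_mean(indptr, indices, features):
    """Average each vertex's neighbor features (one message-passing step)."""
    out = np.zeros_like(features)
    for v in range(len(indptr) - 1):
        nbrs = indices[indptr[v]:indptr[v + 1]]   # gather neighbor ids
        if len(nbrs):
            out[v] = features[nbrs].mean(axis=0)  # reduce (mean aggregation)
    return out

# 3-vertex path graph 0-1-2 in CSR form
indptr = np.array([0, 1, 3, 4])
indices = np.array([1, 0, 2, 1])
x = np.arange(6, dtype=float).reshape(3, 2)
print(csr_neighbor_mean(indptr, indices, x))
```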

Timothy Chung, University of Pittsburgh

Artificial Intelligence Framework to Predict Wall Stresses on Aneurysms

Project Abstract:

Abdominal aortic aneurysm (AAA) is the progressive, degenerative dilation of the terminal aorta; without treatment, AAAs can rupture, an often-fatal event that is the 13th most common cause of death in the US. Clinical intervention occurs when the maximum diameter exceeds 5.5 cm, beyond which the risk of rupture is thought to be greater than the risk of intervention. A biomechanical tool, the rupture potential index (RPI), was developed by our group [2,3] through computational finite element analysis (FEA) and experimental uniaxial extension testing. The RPI is defined as the ratio of transmural wall stress (driven by systolic pressure) to failure strength (the maximum stress the aneurysm wall can support). However, the RPI has not translated clinically due to its heavy computational requirements, its reliance on manual segmentation methods, and the relatively low number of patient images studied (the combined number across the significant studies investigating peak wall stress is around 348, and the RPI was not always calculated [10]). We use a combination of machine learning techniques to automatically segment aneurysm geometries and to perform predictive modeling of wall stresses based on many computational simulations. Comparisons of shape and biomechanical indices are quantified to determine the reliability of the automatically reconstructed AAA geometry. Preliminary results have shown that we are able to predict wall stresses to within 0.34% from shape indices alone, without the need for computational simulations. An increased sample size will allow us to further develop a clinically translatable tool to predict the biomechanical status of AAA.

Peiwen Cong, Georgia Institute of Technology

Deep learning analysis for single-molecule ligand-receptor interaction

Project Abstract:

The biophysical and biochemical characteristics of ligand-receptor interactions govern many biological processes, particularly cell signal transduction, where extracellular molecular bindings relay messages through membranes to initiate intracellular responses. Our in situ nanotools, the micropipette adhesion frequency assay and the biomembrane force probe, have delivered cutting-edge knowledge about single-molecule ligand-receptor interactions in their native micro-environments. At the core of these nanotools, the ultra-sensitive kinetic measurements rely heavily on the 1-dimensional deformation of the micropipette-aspirated red blood cell (RBC). Here, we propose to improve them with a convolutional neural network (CNN) for feature extraction followed by a recurrent neural network (RNN) for dynamic event detection, potentially leading to more precise quantification, more insightful interpretation, and more accurate decision-making. The unique opportunity created by Neocortex can ease the challenges and accelerate the progress of integrating these deep learning components into our current ligand-receptor kinetic analysis workflows.
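
A minimal sketch of the proposed two-stage pattern, under the assumption that the input is a 1-D RBC deformation trace and the output is a per-time-step event probability (all sizes are illustrative):

```python
import torch
import torch.nn as nn

class EventDetector(nn.Module):
    def __init__(self):
        super().__init__()
        # Conv1d extracts local shape features from the deformation trace
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 8, 7, padding=3), nn.ReLU(),
            nn.Conv1d(8, 16, 7, padding=3), nn.ReLU(),
        )
        # GRU scans the feature sequence to flag dynamic events
        self.rnn = nn.GRU(16, 32, batch_first=True)
        self.head = nn.Linear(32, 1)

    def forward(self, trace):                  # trace: (batch, time)
        feats = self.cnn(trace.unsqueeze(1))   # -> (batch, 16, time)
        hidden, _ = self.rnn(feats.transpose(1, 2))
        return torch.sigmoid(self.head(hidden)).squeeze(-1)  # (batch, time)

probs = EventDetector()(torch.randn(4, 200))
print(probs.shape)                             # torch.Size([4, 200])
```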

Giulia Fanti, Carnegie Mellon University

Privacy-preserving synthetic data from federated clients
Project Abstract:

Synthetic data refers to randomized data drawn from the same (or a similar) distribution as an underlying ground-truth dataset. In recent years, synthetic data has become remarkably high-quality, owing to the growing success of deep generative models. However, deep generative models for synthetic data are typically trained over centralized datasets. In practice, one important use case for synthetic data is to understand data patterns at distributed clients (e.g., in a federated learning (FL) setting). Our goal in this project is to design a method for generating synthetic data at a central server from the data of many distributed, privacy-conscious clients. We propose to achieve this by first computing a privacy-preserving estimate of the mean and covariance of client embeddings under a pre-trained embedder, as described in our upcoming paper at the TrustML Workshop at ICLR 2023. Then, we aim to generate synthetic data at the server side that matches the privately estimated embedding distribution of the client data, by decoding a private embedding into a full sentence. In doing so, we wish to explore whether federated client fine-tuning can be eliminated in some cases, in favor of fine-tuning on privately generated synthetic datasets. Our proposed pipeline will require fine-tuning (or possibly retraining) standard large language models, such as BERT or T5, on benchmark datasets.
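
A minimal sketch of the first stage, assuming a Gaussian-mechanism-style estimator (the clipping norm, noise scales, and privacy accounting here are placeholders, not the paper's exact method):

```python
import numpy as np

def dp_mean_cov(embeddings, clip=1.0, sigma=0.5, rng=np.random.default_rng(0)):
    """Differentially private mean/covariance of client embeddings (sketch)."""
    n, d = embeddings.shape
    # Clip each client embedding to bound its contribution (sensitivity)
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    clipped = embeddings * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    # Noisy mean
    mean = clipped.mean(axis=0) + rng.normal(0, sigma * clip / n, d)
    # Noisy second moment, then covariance
    second = clipped.T @ clipped / n + rng.normal(0, sigma * clip**2 / n, (d, d))
    second = (second + second.T) / 2          # re-symmetrize after noising
    return mean, second - np.outer(mean, mean)

emb = np.random.default_rng(1).normal(size=(1000, 8))
mu, cov = dp_mean_cov(emb)
print(mu.shape, cov.shape)                    # (8,) (8, 8)
```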

Wu Feng, Virginia Tech

ComputeCOVID19++: Accelerating Medical Diagnosis and Monitoring via High-Performance Deep Learning on CT Images
Project Abstract:

ComputeCOVID19++ builds on our existing work with ComputeCOVID19+, a CT-based framework that significantly enhances the speed and accuracy of diagnosing and monitoring COVID-19 (and its variants) via a deep-learning network for CT image enhancement called DDnet, short for DenseNet and Deconvolution network. In this work, we propose to create a new algorithm that is synergistically co-designed with the Cerebras CS-1 hardware and its associated software in the integrated HPE Superdome Flex and Cerebras CS-1 system. Specifically, we seek to improve the regularization and specificity of the DDnet in ComputeCOVID19+, which enhances the quality of any given CT scan, and then map the sparsity of our model onto the Cerebras CS-1 to reduce DDnet training time. In addition, we seek to propose and validate the efficacy of a new set of metrics that can be used to quantify the sparsity of any deep-learning model for different types of layers, such as convolutional and fully connected layers.
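
As one concrete (and assumed) example of such a metric, the sketch below reports the fraction of near-zero weights per layer, separately for convolutional and fully connected layers:

```python
import torch
import torch.nn as nn

def layer_sparsity(model: nn.Module, tol: float = 1e-3):
    """Fraction of weights with |w| < tol, per Conv2d/Linear layer."""
    report = {}
    for name, module in model.named_modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            w = module.weight.detach()
            report[name] = (w.abs() < tol).float().mean().item()
    return report

net = nn.Sequential(nn.Conv2d(1, 8, 3), nn.ReLU(), nn.Flatten(),
                    nn.Linear(8 * 26 * 26, 10))
print(layer_sparsity(net))   # e.g. {'0': ..., '3': ...}
```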


Boniface Fokwa, University of California Riverside

High-throughput and data-mining search for new rare-earth-free permanent magnetic borides
Project Abstract:

The project will focus on applying machine learning to discover new metal borides with high magnetocrystalline anisotropy and high Curie temperatures, with the long-term goal of realizing rare-earth-free permanent magnets (PMs) that can compete with or surpass current PMs. The creation of DFT databases of predicted structures and of accumulated experimental data (e.g., the Materials Project and Citrine Informatics) has opened new avenues for designing materials with targeted properties. In particular, machine learning techniques have made it possible to use these data sets to rapidly predict the preferred crystal structure or physical properties of intermetallics. Specifically, we will use subsets of known and predicted structures as training sets for the machine learning algorithm, while the large databases available (Materials Project and ICSD) will be used to expand the training data, enabling the prediction of new candidate structures.

John Galeotti, Carnegie Mellon University

AI Understanding of Ultrasound Scans: Semantic Segmentation and Diagnosis Trained with Simulation and Genetic/Back-Prop Hybrid Training
Project Abstract:

TBA

Siddhartha Ghosh, National Center for Atmospheric Research

Exploring Wafer Scale Engine on Fluid Dynamics Simulations for Atmospheric and Other Applications

Project Abstract:

Numerical weather prediction (NWP) models are often implemented using well-known finite difference or finite volume numerical schemes characterized as low-arithmetic-intensity algorithms. They are typically limited by memory bandwidth and latency, and are parallelized on an x-y (lat-lon) grid with a small number of vertical levels, on the order of 10. As such, they appear to be a great fit for the WSE architecture. The efforts supported by this allocation request would assess the performance of the WSE architecture for the stencil-based numerical approaches that underpin many NWP codes in existence today.
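
For concreteness, a minimal sketch of the stencil pattern at issue: a five-point Jacobi-style diffusion update in which each output point touches only five inputs, so memory traffic rather than arithmetic dominates (actual NWP schemes are considerably more elaborate):

```python
import numpy as np

def jacobi_step(u, alpha=0.1):
    """One explicit diffusion step on the interior of a 2-D grid."""
    new = u.copy()
    new[1:-1, 1:-1] = u[1:-1, 1:-1] + alpha * (
        u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
        - 4.0 * u[1:-1, 1:-1]
    )
    return new

u = np.zeros((64, 64)); u[32, 32] = 1.0   # point source
for _ in range(10):
    u = jacobi_step(u)
print(u[30:35, 30:35].round(4))           # diffused neighborhood of the source
```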

Rafael Gomez-Bombarelli, Massachusetts Institute of Technology

Improving predictability of anomalies in vitreous silica using uncertainty-based adversarial attack

Project Abstract:

Understanding the structure of glassy materials represents a tremendous challenge for both experiments and computations. One of the most common glass materials, vitreous silica, has been used in a plethora of commercial and scientific applications, but is still not well understood despite decades of research. Sharing the same tetrahedral order as water, vitreous silica is known to exhibit several anomalous behaviors in its physical properties, including a temperature-dependent density minimum around 900°C and a density maximum around 1500°C. Because of these anomalies, many empirical force fields and machine learning interatomic potentials have proven unreliable at predicting physical properties in a way that accurately reflects the mechanical and density anomalies in silica. Here, we exploit an automatic differentiation strategy in graph neural network (GNN) potentials to discover highly uncertain glass configurations, so that the structural configurations responsible for the anomalies in vitreous silica have a higher likelihood of being learned. The strategy works by performing an adversarial attack on a differentiable uncertainty metric. When combined into an active learning loop, only a small number of expensive ab initio molecular dynamics trajectories are needed as the initial training dataset.
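
A minimal sketch of the attack loop, with toy MLPs standing in for the GNN potentials and ensemble variance standing in for the differentiable uncertainty metric (both are assumptions for illustration):

```python
import torch
import torch.nn as nn

ensemble = [nn.Sequential(nn.Linear(6, 32), nn.Tanh(), nn.Linear(32, 1))
            for _ in range(4)]                     # surrogate "potentials"

x = torch.randn(1, 6, requires_grad=True)          # toy configuration vector
opt = torch.optim.Adam([x], lr=0.05)
for step in range(50):
    energies = torch.stack([m(x) for m in ensemble])
    uncertainty = energies.var(dim=0).mean()       # differentiable metric
    opt.zero_grad()
    (-uncertainty).backward()                      # gradient *ascent* on it
    opt.step()
print(float(uncertainty))                          # disagreement after attack
```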

Ruoying He, North Carolina State University

Ocean Reanalysis Data-Driven Deep Learning Forecast
Project Abstract:

In this project, a hybrid model combining empirical orthogonal function (EOF) analysis, complete ensemble empirical mode decomposition (CEEMD), and artificial neural networks (ANNs) will be developed to enable efficient and accurate ocean forecasts for the Northwest Atlantic Ocean. EOF analysis transforms the spatial-temporal prediction problem into a time series prediction problem; it reduces computational effort and dimensionality, captures spatial relationships, and accounts for correlations between different variables. CEEMD then improves the predictability of the nonlinear time series. ANNs are subsequently used to predict the CEEMD-derived time series from the principal components (PCs) corresponding to the EOFs. This work is expected to lay a solid foundation for AI research in oceanography and to provide temporal-spatial predictions of ocean conditions that can be used for marine hazard forecasting and mitigation.
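
A minimal sketch of the EOF stage on synthetic data (the CEEMD and ANN stages are omitted): an SVD of the anomaly field yields the spatial patterns and the principal-component time series that the later stages would forecast:

```python
import numpy as np

rng = np.random.default_rng(0)
field = rng.normal(size=(500, 200))        # (time steps, grid points)
anom = field - field.mean(axis=0)          # remove the time mean

# Rows of vt are EOF spatial patterns; u*s are the PC time series
u, s, vt = np.linalg.svd(anom, full_matrices=False)
pcs = u * s                                # (time, mode)
eofs = vt                                  # (mode, space)

k = 10                                     # retain the leading modes
reconstructed = pcs[:, :k] @ eofs[:k]
explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print(reconstructed.shape, f"{k} modes explain {explained:.1%} of variance")
```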

Han Hu, University of Arkansas

Robust Fault Detection of Cooling Systems using Multimodal Fusion
Project Abstract:

Ever-increasing vehicle electrification has created critical challenges for electronics cooling. High-power pulsed loads may cause cooling-system faults (e.g., boiling crisis) that can eventually lead to overheating and device failure. Due to the stochasticity of the cooling process, traditional physics-based thermal models cannot handle transient heat loads. Deep learning models have been developed for fault detection during two-phase cooling based on single-channel signals, but they suffer from low generalizability and interpretability. Addressing this issue requires creative data-analytic approaches grounded in theoretical mathematics. A promising recent approach is topological data analysis (TDA) and its principal tool, persistent homology (PH). The proposed project seeks to develop an interpretable fusion model for two-phase cooling fault detection that leverages multimodal sensor signals from cooling systems (e.g., temperature, pressure, sound, and images), the pre- and post-processing power and internal deep-learning modeling capabilities of TDA and PH, and attention-based interpretation to improve model accuracy, reliability, and interpretability. Multimodal signals from heterogeneous sources will be collected to create a database of two-phase cooling data. A multimodal fusion network will be developed and trained on this database, with integrated TDA/PH capabilities for data compression and feature engineering, and the interpretability of the network will be examined through attention-map-based analysis.
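
A minimal sketch of the fusion-with-attention idea (a stand-in, not the proposed network; the TDA/PH feature stage is omitted and the modality dimensions are invented): per-modality embeddings are combined under learned attention weights that double as an interpretability signal:

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, dims={"temp": 16, "pressure": 16, "acoustic": 64}):
        super().__init__()
        self.embed = nn.ModuleDict({k: nn.Linear(d, 32) for k, d in dims.items()})
        self.attn = nn.Linear(32, 1)
        self.head = nn.Linear(32, 2)             # e.g., normal vs. boiling crisis

    def forward(self, inputs):
        z = torch.stack([self.embed[k](v) for k, v in inputs.items()], dim=1)
        w = torch.softmax(self.attn(z).squeeze(-1), dim=1)  # modality weights
        fused = (w.unsqueeze(-1) * z).sum(dim=1)
        return self.head(fused), w               # logits + attention map

net = FusionNet()
x = {"temp": torch.randn(4, 16), "pressure": torch.randn(4, 16),
     "acoustic": torch.randn(4, 64)}
logits, attn = net(x)
print(logits.shape, attn.shape)                  # (4, 2) (4, 3)
```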

George Karniadakis, Brown University

Training of conservative physics-informed neural networks (CPINN) to solve the incompressible Navier-Stokes equation at high Reynolds number
Project Abstract:

TBA

Tushar Krishna, Georgia Institute of Technology

Enabling Training and Inference of Large and Sparse Deep Learning Models

Project Abstract:

The end of Moore’s Law has created a need for domain-specific hardware accelerators to efficiently run High Performance Computing (HPC) workloads. The Neocortex platform provides access to the Cerebras wafer-scale engine, an accelerator that supports dataflow execution. The focus of this proposal is to develop and study efficient algorithms for key linear algebra kernels used in HPC workloads. Specifically, we will target Graph Neural Networks as the workload, which involve both dense and sparse matrix multiplications. The PI is also part of the Department of Energy ARIAA center and will leverage the center’s ongoing research into key tensor kernels to identify acceleration mechanisms using Neocortex.

Pin-Kuang Lai, Stevens Institute of Technology

Apply Machine Learning to Predict Antibody Drug Developability
Project Abstract:

The number of monoclonal antibody (mAb) drugs in clinical trials or approved for use has increased rapidly in recent years, with 97 drugs approved by the U.S. Food and Drug Administration (FDA) or the European Medicines Agency (EMA) as of August 2020. In addition to successful antibody binding to the target to stimulate biological responses, the developability properties of mAbs, such as the feasibility of their manufacture, stability in storage, and absence of off-target stickiness, are essential to new drug development. In fact, attrition of therapeutic candidates during clinical development is the major factor in high development costs. However, the developability profiles of antibodies are difficult to assess in early-stage discovery and candidate screening due to the limited number of molecules, limited material availability, and a lack of physical understanding. Predictive tools that can evaluate antibody developability early in the discovery/development process are therefore desirable; key developability properties include a low aggregation rate and low viscosity. Previously, we developed computational tools that use molecular dynamics (MD) simulations, or features extracted from them, to build machine learning models that predict antibody aggregation and viscosity. Two of the key descriptors are the spatial aggregation propensity (SAP) and the spatial charge map (SCM). Calculating SAP and SCM requires building a homology model from the antibody sequence and running MD simulations to obtain ensemble averages, a step that is very time-consuming and requires supercomputing resources. The goal of this project is to train neural networks on MD simulation results, using antibody sequences as inputs. The resulting ML model will speed up the calculation of SAP and SCM scores and facilitate antibody screening in early-stage design.
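
A minimal sketch of the surrogate-model idea (the featurization, architecture, and sequence fragment are all illustrative assumptions): regress the MD-derived SAP and SCM descriptors directly from a one-hot-encoded antibody sequence, bypassing simulation:

```python
import torch
import torch.nn as nn

AA = "ACDEFGHIKLMNPQRSTVWY"                 # 20 standard amino acids

def one_hot(seq: str) -> torch.Tensor:
    x = torch.zeros(len(AA), len(seq))
    for i, ch in enumerate(seq):
        x[AA.index(ch), i] = 1.0
    return x

model = nn.Sequential(
    nn.Conv1d(20, 32, 9, padding=4), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 2),                       # outputs: [SAP score, SCM score]
)
seq = "EVQLVESGGGLVQPGGSLRLSCAAS"           # illustrative fragment only
pred = model(one_hot(seq).unsqueeze(0))
print(pred)                                 # predicted (SAP, SCM); untrained
```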

Jason Larkin, Carnegie Mellon University

Simulation and Benchmarking of Quantum Machine Learning
Project Abstract:

TBA

Yaping Liu, Cincinnati Children’s Hospital Medical Center

Impute cell-free DNA fragmentation pattern from low-coverage whole-genome sequencing

Project Abstract: 

TBA

Ryan Mills, University of Michigan

Molecular Mutagenesis by Biological Graphs
Project Abstract:

Variation in gene expression is a complex process correlated with multiple molecular features, such as chromatin structure, epigenetic marks, gene-gene and protein-protein interactions, and post-transcriptional modifications. The assayable molecular contexts of a locus (such as methylation, histone modification, and chromosome conformation) are suggestive, not causal: no single feature is enough to reveal the entirety of genomic interactions. We are developing new methods that represent genes as tissue-specific, multilayer heterogeneous networks based on regulatory and interaction assays. The graph structure is then trained to regress quantitative measurements of gene expression through an attention-based graph neural network. This approach allows us to mutagenize features within the structure to query the relative impact of molecular changes on expression at tissue-specific and gene-specific resolution. Our goal is to understand and discover the patterns of molecular combinations that come together to affect the regulation of gene expression, and to determine whether the varying impact of the molecular features surrounding a gene creates a type of regulatory language that describes gene function and genomic architecture. To do this, we require advanced GPUs for training large models, because genomics data are notoriously large and heterogeneous. The novel graph structures we are using regularly require more memory than multiple 32GB GPUs can provide.
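
A minimal sketch of the in-silico mutagenesis step, with a stand-in MLP in place of the trained attention-based graph network: zero out one class of molecular features at a time and record the change in predicted expression:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 1))
features = torch.randn(12, 5)     # 12 regulatory nodes x 5 molecular marks
baseline = model(features).sum()

impact = {}
for f in range(features.shape[1]):
    mutated = features.clone()
    mutated[:, f] = 0.0                       # "mutagenize" one feature class
    impact[f] = (model(mutated).sum() - baseline).item()
print(impact)                                 # signed effect per feature class
```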

Lyle Muller, Salk Institute for Biological Studies

Large-scale spiking network models to explain dynamics of visual perception and working memory

Project Abstract:

TBA

Graham Neubig, Carnegie Mellon University

Large-scale Pre-training for Natural Language to Code Generation
Project Abstract:

This project aims to create pre-trained models for natural language to code generation, the task of generating programs from natural language descriptions. This has the potential to make programming easier, and perhaps even to allow command and control of computers by non-programmers. Our research team has a large amount of experience in this area, but lacks the resources to scale models to very large datasets, such as training on the entirety of GitHub, which this proposal aims to address. We also plan to examine novel models for code generation based on non-parametric methods, which look up related examples in a training corpus; this is important both for performance and for interpretability. All models we develop will be made available open source for the community to use.

Bhiksha Raj, Carnegie Mellon University

Unsupervised labelling and learning from large audio datasets

Project Abstract: 

Acoustic scene analysis is fast becoming a standard technology, expected in devices such as smartphones. But the latest solutions are limited by the availability of labelled training data. In this project, we propose to automatically label a very large quantity of audio data, generating what is currently the largest such dataset for use by the research community. This will, however, require the development of algorithms that can iterate over such large amounts of data and iteratively refine their automatically generated labels. On traditional machine learning hardware such as graphics processing units (GPUs), we expect our approach to take several weeks or more of compute time for a single pass through the dataset, leading to unreasonable latencies in research (and development) time. We believe that the Neocortex system can reduce the iteration time by orders of magnitude, enabling us to optimize our unsupervised inference algorithms and release labelled data resources that will be of high value, not just to us, but to the research community at large.

Gail Rosen, Drexel University

Interpretable Deep Modeling of SARS-CoV-2 Sequences
Project Abstract:

We propose to use Neocortex to generate interpretable deep learning models of how SARS-CoV-2 (COVID-19) sequence variation affects viral phenotype, viral evolution, and host phenotype / clinical outcomes. To date, nearly 4.9 million viral genome sequences have been collected and submitted to the GISAID Initiative’s central database (http://www.gisaid.org). This volume of data represents an unprecedented opportunity to learn more about this novel disease and how it is evolving and changing. Building on our research group’s prior work on interpretable deep learning, we employ a Transformer architecture, with an optional CNN filter to reduce model complexity and a distinct sequence-wide attention layer for interpretability. Our framework provides two levels of interpretability, generating both attention graphs that reveal important sequence features and embeddings that can be used to visualize underlying patterns in sequence variation. We will use the Neocortex architecture to analyze larger COVID-19 sequence data sets and improve our deep modeling framework.

Huajie Shao, The College of William & Mary

Exploring Interpretable Deep Learning from Information Theoretic Perspective: Modeling and Applications
Project Abstract:

Despite the great success of AI techniques in many applications, such as computer vision, self-driving cars, and robotics, it is still hard for humans to fully understand and interpret them. The goal of this proposal is to reason about and understand deep learning models by learning disentangled representations. Disentangled representation learning aims to learn a low-dimensional representation that consists of multiple interpretable latent factors of the observations; semantically meaningful latent factors help us better explain which factors affect classification and prediction accuracy. However, learning disentangled representations with Variational Autoencoder (VAE) models poses two major challenges. First, many existing models require prior knowledge of some data generative factors from human annotation to train the model, which costs substantial human labor. Second, there is a trade-off between reconstruction and disentanglement learning. This proposal intends to address these two issues by applying control theory, information bottleneck, self-supervised learning, and causal representation learning. Finally, we plan to apply the disentangled representations from our models to improve downstream tasks, such as image generation, reinforcement learning, and text generation. The proposed solution requires high computing capability, large on-device memory, and high inter-device communication throughput. We believe the CS-1 WSE is a natural fit for our problem and expect it to significantly reduce the number of GPUs required to train the proposed model.
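
One standard formulation of that trade-off is the beta-VAE objective, sketched below (an illustration, not the proposal's model): scaling the KL term by beta > 1 pressures the latent factors toward disentanglement at some cost in reconstruction fidelity:

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """Reconstruction term plus beta-weighted KL(q(z|x) || N(0, I))."""
    recon = F.mse_loss(x_recon, x, reduction="sum")   # reconstruction fidelity
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl        # beta > 1 favors disentanglement pressure

x = torch.randn(8, 784)
mu, logvar = torch.zeros(8, 10), torch.zeros(8, 10)
print(beta_vae_loss(x, x * 0.9, mu, logvar))
```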

Gil Speyer, Arizona State University

Analysis of differential dependency on large-scale RNA expression networks

Project Abstract: 

The dependency between genes within a functional biological pathway can be contrasted between two conditions through the calculated divergence between distributions of dependency networks [1]. EDDY (Evaluation of Differential DependencY) is a statistical test that identifies gene sets (i.e., pathways) that are significantly “rewired” between conditions; by leveraging a probabilistic framework with resampling and permutation, aided by the incorporation of annotated gene sets, it demonstrates superior sensitivity compared to other methods. Further, the abundant, independent computation and manageable memory footprint incurred by this statistical rigor make EDDY an excellent candidate for graphics processing unit (GPU) acceleration [2]. Custom kernels written in CUDA decompose the independence-test loop, network construction, network enumeration, and Bayesian network scoring to accelerate the computation. The algorithm has recently been used to discover novel drugs for pulmonary hypertension, repurposed from small compounds designed for cancer treatment [3]. The Neocortex RFP provides an opportunity to pursue new directions with EDDY analysis, such as the interrogation of larger gene sets and the development of statistical sampling strategies for larger (e.g., single-cell) RNA expression sample sets.

[1] Jung S, Kim S. EDDY: a novel statistical gene set test method to detect differential genetic dependencies. Nucleic Acids Res. 2014 Apr;42(7):e60. doi: 10.1093/nar/gku099. Epub 2014 Feb 5. PMID: 24500204; PMCID: PMC3985670.

[2] G. Speyer, J. Rodriguez, T. Bencomo and S. Kim, “GPU-Accelerated Differential Dependency Network Analysis”, 2018 26th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP), 2018, pp. 410-414, doi: 10.1109/PDP2018.2018.00072.

[3] Negi V, Yang J, Speyer G, Pulgarin A, Handen A, Zhao J, Tai YY, Tang Y, Culley MK, Yu Q, Forsythe P, Gorelova A, Watson AM, Al Aaraj Y, Satoh T, Sharifi-Sanjani M, Rajaratnam A, Sembrat J, Provencher S, Yin X, Vargas SO, Rojas M, Bonnet S, Torrino S, Wagner BK, Schreiber SL, Dai M, Bertero T, Al Ghouleh I, Kim S, Chan SY. Computational repurposing of therapeutic small molecules from cancer to pulmonary hypertension. Sci Adv. 2021 Oct 22;7(43):eabh2794. doi: 10.1126/sciadv.abh2794. Epub 2021 Oct 20. PMID: 34669463.
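
A heavily simplified sketch of the resampling logic (EDDY scores distributions of Bayesian networks; a correlation network and a Frobenius distance stand in here): compute a divergence between condition-specific networks, then a permutation p-value by shuffling condition labels:

```python
import numpy as np

rng = np.random.default_rng(0)

def network_divergence(a, b):
    """Frobenius distance between the correlation networks of two conditions."""
    return np.linalg.norm(np.corrcoef(a.T) - np.corrcoef(b.T))

x1 = rng.normal(size=(40, 6))                   # condition 1 samples x genes
x2 = rng.normal(size=(40, 6))
x2[:, 0] = 0.9 * x2[:, 1] + 0.1 * x2[:, 0]      # "rewire" one dependency
observed = network_divergence(x1, x2)

pooled = np.vstack([x1, x2])
null = []
for _ in range(1000):                           # permute condition labels
    idx = rng.permutation(len(pooled))
    null.append(network_divergence(pooled[idx[:40]], pooled[idx[40:]]))
p_value = (np.sum(np.array(null) >= observed) + 1) / (len(null) + 1)
print(f"divergence={observed:.3f}, p={p_value:.3f}")
```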

Sreeskandarajan Sutharzan, Cincinnati Children’s Hospital Medical Center

A novel deep learning method for discovering genetic mechanisms underlying differential gene regulation
Project Abstract:

Gene regulation is a fundamentally important molecular process that is required for all known forms of life. Gene regulation is defined as the processes underlying the activation or repression of gene expression levels. Transcription factors (TFs) are proteins that play key roles in gene regulation. The human genome encodes >1,600 TFs, each of which plays an important gene regulatory role in particular contexts. TFs act by recognizing short DNA sequences in the genome; upon doing so, they recruit other proteins to ultimately influence gene expression levels. In this sense, TFs are the primary molecules responsible for interpreting the genome. Our lab and many others are currently engaged in understanding this complex “regulatory grammar,” with the ultimate goal of predicting gene expression levels from DNA sequence alone. Achieving this goal would enable a thorough understanding of genome function and of how genetic variation contributes to phenotypes and diseases. Recent advances in deep learning methodologies and capabilities are quickly enabling major progress toward this goal. In this study, we propose to leverage the power of deep learning to address a particularly important question in regulatory genomics: which DNA sequences underlie the differential gene regulatory mechanisms that arise under different cellular conditions?

Xulong Tang, University of Pittsburgh

Characterizing DNN training on Neocortex
Project Abstract:

This project aims to characterize Neocortex, the new hardware designed for AI applications. We will study hardware execution statistics, including execution bottlenecks. The results and observations will help develop better application mappings and improve architecture designs for executing AI applications on Neocortex.

John Wohlbier, Carnegie Mellon University

Identifying Actor Characteristics in State-Linked Information Operations Using Twitter Data and Graph Based Neural Networks
Project Abstract:

TBA

Zhiyong Zhang, Stanford University

An Integrated Machine Learning Platform of GWAS (Genome Wide Association Study) and Epigenetics for Personalized Bladder Cancer Clinical Applications

Project Abstract:

TBA

Contact us

Email us at neocortex@psc.edu

Acknowledgment in publications

Please use the following citation when acknowledging the use of computational time on Neocortex:

Buitrago P.A., Nystrom N.A. (2021) Neocortex and Bridges-2: A High Performance AI+HPC Ecosystem for Science, Discovery, and Societal Good. In: Nesmachnow S., Castro H., Tchernykh A. (eds) High Performance Computing. CARLA 2020. Communications in Computer and Information Science, vol 1327. Springer, Cham. https://doi.org/10.1007/978-3-030-68035-0_15

This material is based upon work supported by the National Science Foundation under Grant Number 2005597. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.