PyTorch

PyTorch is a flexible and intuitive deep learning framework.

Example scripts for GPU and CPU use of PyTorch are available on Bridges in the directory /opt/packages/examples/pytorch.

Singularity containers for PyTorch are available for use on Bridges in the directory /pylon5/containers/ngc/pytorch. Multiple containers are available, for different versions of Python and supporting software. These containers can be used on the Volta or P100 GPU nodes. See Singularity images on Bridges for more information on the available containers.
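
To run one of these containers on a GPU node, you can use singularity exec with the --nv flag so that the container can access the GPU. The command below is only a sketch: the image file name is an assumption, so list the directory to see the exact images available, and my_script.py stands in for your own PyTorch script.

singularity exec --nv /pylon5/containers/ngc/pytorch/pytorch-latest.simg python my_script.py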

Documentation

PyTorch documentation is available at https://pytorch.org/docs/.

Usage

Include these commands in your job script or type them after starting an interactive session. See more on the module command.

Load the PyTorch module

PyTorch requires CUDA, so you must also load the cuda module. Be sure to check which version of CUDA is needed. To load the default PyTorch and CUDA modules, type:

module load pytorch
module load cuda

To see whether other modules are needed, what commands are available, and how to get additional help, type:

module help pytorch

(Optional) To see which versions of PyTorch are available, type:

module avail pytorch

(Optional) To load a non-default version, if more than one is available, use its full name, e.g.,

module load pytorch/version
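
The same module commands can go in a batch job script. Below is a minimal sketch, assuming a SLURM script submitted with sbatch; the partition name, GPU request, time limit, and script name are assumptions, so check the Bridges documentation for the values that match your allocation.

#!/bin/bash
#SBATCH -N 1                     # one node
#SBATCH -p GPU-small             # assumed GPU partition name
#SBATCH --gres=gpu:p100:1        # assumed request for one P100 GPU
#SBATCH -t 00:30:00              # 30-minute walltime

module load pytorch
module load cuda
python my_training_script.py     # stands in for your own PyTorch script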

Example interactive sessions

The following interactive sessions show the use of PyTorch on both GPU and CPU nodes. Commands typed by the user are shown in bold.

Interactive GPU example

First, start an interactive session on a GPU node:

[joeuser@br006]$ interact --gpu
A command prompt will appear when your session begins
"Ctrl+d" or "exit" will end your session

Once you have been allocated a GPU node, use PyTorch as follows:

[joeuser@gpu046]$ module load AI/anaconda3-5.1.0_gpu.2018-08 
[joeuser@gpu046]$ source activate $AI_ENV 
(/opt/packages/AI/anaconda3-5.1.0_gpu.2018-08) [joeuser@gpu046 ]$ python
Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56) 
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.__version__)
0.4.1
>>> torch.cuda.manual_seed(123)
>>> gpu0=torch.device('cuda:0')
>>> A=torch.randn(100,1000,device=gpu0)
>>> x=torch.randn(1000,1,device=gpu0)
>>> y=A.mm(x)
>>> print("|y|^2=%s"%y.pow(2).sum().item())
|y|^2=111037.609375
>>> exit()
(/opt/packages/AI/anaconda3-5.1.0_gpu.2018-08) [joeuser@gpu046]$ source deactivate
[joeuser@gpu046]$ exit
[joeuser@br006]$
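
In your own scripts it is often safer not to hard-code cuda:0. The sketch below, which uses the same calls as the session above, checks torch.cuda.is_available() so the same code falls back to the CPU when no GPU is visible.

import torch

# Use the first GPU if CUDA is visible to this process, otherwise fall back to the CPU
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print("Using device:", device)

A = torch.randn(100, 1000, device=device)
x = torch.randn(1000, 1, device=device)
y = A.mm(x)
print("|y|^2 =", y.pow(2).sum().item())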

 

Interactive CPU example

First, start an interactive session on an RM node:

[joeuser@br006 ]$ interact -N 1 -n 14 -p RM-small
A command prompt will appear when your session begins
"Ctrl+d" or "exit" will end your session

Once you have been allocated an RM node, use PyTorch as follows:

[joeuser@r001]$ module load AI/anaconda3-5.1.0_gpu.2018-08 
[joeuser@r001]$ source activate $AI_ENV
(/opt/packages/AI/anaconda3-5.1.0_gpu.2018-08) [joeuser@r001 ]$ python
Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56) 
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.__version__)
0.4.1
>>> torch.manual_seed(123)
>>> cpu=torch.device('cpu')
>>> cpu.__class__
<class 'torch.device'>
>>> A=torch.randn(100,1000,device=cpu)
>>> A.__class__
<class 'torch.Tensor'>
>>> x=torch.randn(1000,1,device=cpu)
>>> y=A.mm(x)
>>> print("|y|^2=%s"%y.pow(2).sum().item())
|y|^2=136245.03125
>>> exit()
(/opt/packages/AI/anaconda3-5.1.0_gpu.2018-08) [joeuser@r001 ]$ source deactivate
[joeuser@r001]$ exit
[joeuser@br006]$
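
On CPU nodes, PyTorch parallelizes tensor operations across multiple threads. If you want the thread count to match the cores you requested (14 in the interact command above), you can set it explicitly. This is a sketch of one reasonable choice, not a required setting.

import torch

# Match PyTorch's intra-op thread count to the 14 cores requested with "interact -n 14"
torch.set_num_threads(14)
print("Threads:", torch.get_num_threads())

A = torch.randn(100, 1000)
x = torch.randn(1000, 1)
y = A.mm(x)
print("|y|^2 =", y.pow(2).sum().item())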

Technical questions:

Send mail to remarks@psc.edu or call the PSC hotline: 412-268-6350.