Account Administration

In this document

  • Charging for Bridges use
  • Managing multiple grants
  • Tracking your usage
  • Managing your XSEDE allocation
  • Changing your default shell

Charging for Bridges use

Charges for using Bridges depend on the type of node you use, which is determined by the type of allocation you have: "Bridges regular", for Bridges' RSM (128GB) nodes; "Bridges GPU", for Bridges' GPU nodes; or "Bridges large", for Bridges' LSM and ESM (3TB and 12TB) nodes.

Usage is charged in "Service Units" or SUs.  The definition of an SU varies with the type of node being used.

Bridges regular 

The RSM nodes are allocated as "Bridges regular".  This does not include Bridges' GPU nodes.  Each RSM node holds 28 cores, each of which can be allocated separately.  Service Units (SUs) are defined in terms of "core-hours": the use of one core for one hour.

1 core-hour = 1 SU

Because the RSM nodes each hold 28 cores, if you use one entire RSM node for one hour, you will be charged 28 SUs.

28 cores x 1 hour = 28 core-hours = 28 SUs

If you use 2 cores on a node for 30 minutes, you will be charged 1 SU.

2 cores x 0.5 hours = 1 core-hour = 1 SU
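
For example, a batch script along these lines would request 2 cores for 30 minutes and be charged 1 SU.  This is a minimal sketch: the RM-shared partition name, time limit and program name are illustrative, so check the Running Jobs section for the exact options to use.

#!/bin/bash
#SBATCH -p RM-shared            # shared partition where individual cores can be requested (partition name assumed)
#SBATCH -N 1                    # one node
#SBATCH --ntasks-per-node=2     # 2 cores
#SBATCH -t 00:30:00             # 30 minutes
# Charge: 2 cores x 0.5 hours = 1 core-hour = 1 SU
./my_program                    # placeholder for your executable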

Bridges GPU

Bridges contains two kinds of GPU nodes: NVIDIA Tesla K80s and NVIDIA Tesla P100s. Service Units (SUs) for GPU nodes are defined in terms of "GPU-hours": the use of one GPU unit for one hour.

Because the two types of nodes differ in performance, they are charged at different rates.

K80 nodes 

The K80 nodes hold 4 GPU units each, each of which can be allocated separately.  Service Units (SUs) are defined in terms of GPU-hours:

For K80 GPU nodes, 1 GPU-hour = 1 SU

If you use 2 entire K80 nodes for 1 hour, you will be charged 8 SUs.

4 GPU units/node x 2 nodes x 1 hour = 8 GPU-hours = 8 SUs

If you use 2 GPU units for 3 hours, you will be charged 6 SUs.

2 GPU units x 3 hours = 6 GPU-hours = 6 SUs
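
As a sketch, a batch script like the following would request 2 K80 GPU units for 3 hours, for a charge of 6 SUs.  The partition and gres names here are assumptions; see the Running Jobs section for the exact syntax.

#!/bin/bash
#SBATCH -p GPU-shared           # shared GPU partition (partition name assumed)
#SBATCH -N 1                    # one node
#SBATCH --gres=gpu:k80:2        # 2 K80 GPU units (gres type name assumed)
#SBATCH -t 3:00:00              # 3 hours
# Charge: 2 GPU units x 3 hours = 6 GPU-hours = 6 SUs
./my_gpu_program                # placeholder for your executable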

P100 nodes

The P100 nodes hold 2 GPU units each, which can be allocated separately.  Service Units (SUs) are defined in terms of GPU-hours.  Because the P100 nodes are more powerful than the K80 nodes, the charge is higher.

For P100 GPU nodes, 1 GPU-hour = 2.5 SUs

If you use an entire P100 node for one hour, you will be charged 5 SUs.

2 GPU units/node x 1 node x 1 hour = 2 GPU-hours

2 GPU-hours x 2.5 SUs/GPU-hour = 5 SUs

If you use 1 GPU unit on a P100 for 8 hours, you will be charged 20 SUs.

1 GPU unit x 8 hours = 8 GPU-hours

8 GPU-hours x 2.5 SUs/GPU-hour = 20 SUs
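
The same pattern applies to the P100 nodes.  This sketch (again, the partition and gres names are assumptions) requests 1 P100 GPU unit for 8 hours, for a charge of 20 SUs:

#!/bin/bash
#SBATCH -p GPU-shared           # shared GPU partition (partition name assumed)
#SBATCH -N 1                    # one node
#SBATCH --gres=gpu:p100:1       # 1 P100 GPU unit (gres type name assumed)
#SBATCH -t 8:00:00              # 8 hours
# Charge: 8 GPU-hours x 2.5 SUs/GPU-hour = 20 SUs
./my_gpu_program                # placeholder for your executable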

Bridges large

The LSM and ESM nodes are allocated as "Bridges large".  Charging for the LSM and ESM nodes is based on the memory requested for the job.  Service Units (SUs) are defined in terms of "TB-hours": the use of 1TB of memory for one hour.  Note that you are charged based on how much memory you request when the job is submitted, not on how much memory you actually use.  Because the memory is set aside for your use when your job begins, you are charged for what has been allocated to you.

1 TB-hour = 1 SU

If your job requests 3TB of memory and runs for 1 hour, you will be charged 3 SUs.

3TB x 1 hour = 3TB-hours = 3 SUs

If your job requests 8TB and runs for 6 hours, you will be charged 48 SUs.

8TB x 6 hours = 48 TB-hours = 48 SUs 
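
As an illustration, a script like this sketch (the LM partition name is an assumption) would request 3TB of memory for 1 hour and be charged 3 SUs, whether or not the job uses all of that memory:

#!/bin/bash
#SBATCH -p LM                   # large-memory partition (partition name assumed)
#SBATCH --mem=3000GB            # request 3TB; you are charged for what you request, not what you use
#SBATCH -t 1:00:00              # 1 hour
# Charge: 3TB x 1 hour = 3 TB-hours = 3 SUs
./my_large_memory_program       # placeholder for your executable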

Managing multiple grants

If you have more than one grant, be sure to charge your usage to the correct one.  Usage is tracked by group name.

Find your group names

To find your group names, use the id command.

id -Gn

will list all the groups you belong to.

Find your current group

id -gn

will list the group associated with your current session.

Charge to a (non-default) group

Batch jobs and interactive sessions are charged to your primary group by default.  To charge your usage to a different group, specify the appropriate group with the -A groupname option to the SLURM sbatch command, as in the sketch below.  See the Running Jobs section of this Guide for more information on batch jobs, interactive sessions and SLURM.
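
For example, where groupname is a placeholder for one of the group names reported by id -Gn:

sbatch -A groupname myjob.sh    # charge this batch job to groupname instead of your primary group

The -A option can also be given inside the batch script itself as an #SBATCH directive.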

Note that any files created during a job are owned by your primary group, no matter which group is charged for the job.

Change your group for a login session

To change the group which will be charged for usage during a login session, use the newgrp command.

newgrp groupname

Until you log out (or issue another newgrp command), groupname is charged for all usage.  All files created during this time will belong to groupname, and their storage is charged against the quota for groupname.

On the next login, your default group is back in effect, and will be charged for usage.
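
A typical session might look like this sketch, where grantA and grantB are placeholder group names:

id -gn            # reports the group currently being charged, e.g. grantA
newgrp grantB     # from here on, usage is charged to grantB and new files belong to it
id -gn            # confirms that grantB is now the active group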

Change your default group permanently

By default, all usage is charged to your primary group.  To change your primary group, the group to which your SLURM jobs are charged by default, use the change_primary_group command.  Type:

change_primary_group -l

to see all your groups.  Then type

change_primary_group groupname

to set groupname as your default group.

Tracking your usage

There are several methods you can use to track your Bridges usage.  The xdusage command is available on Bridges; see its man page for details.  The projects command will also help you keep tabs on your usage.  It shows grant information, including usage and the pylon directories associated with the grant.

Type:

projects

For more detailed accounting data, you can use the Grant Management System.  You can also track your usage through the XSEDE User Portal.  The xdusage and projects commands and the XSEDE User Portal accurately reflect the impact of a grant renewal, but the Grant Management System currently does not.

Managing your XSEDE allocation

Most account management functions for your XSEDE grant are handled through the XSEDE User Portal.  You can search the Knowledge Base for help with common questions.

Changing your default shell

The change_shell command allows you to change your default shell.   This command is only available on the login nodes.

To see which shells are available, type

change_shell -l

To change your default shell, type

change_shell newshell

where newshell is one of the choices output by the change_shell -l command.  You must use the entire path output by change_shell -l, e.g., /usr/psc/shells/bash.  You must log out and back in again for the new shell to take effect.
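
For instance, using the example path above (the list that change_shell -l prints on Bridges is authoritative):

change_shell -l                     # lists the available shells, e.g. /usr/psc/shells/bash
change_shell /usr/psc/shells/bash   # set bash as your default shell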
