
Greenfield User Guide

 

Running jobs 

Torque, an open source version of the Portable Batch System (PBS), controls all access to Greenfield's compute processors for both batch and interactive jobs. 

To run a job on Greenfield, use the qsub command to submit a job script to the scheduler. A job script consists of PBS directives, comments and executable commands. 
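As a hedged sketch (the core count, walltime, and program name my_program are placeholders, not Greenfield-specific values), a minimal job script might look like this:

#!/bin/bash
#PBS -q batch                 # submit to the batch queue
#PBS -l nodes=1:ppn=15        # request 15 cores on one node
#PBS -l walltime=1:00:00      # request one hour of walltime
#PBS -j oe                    # merge stdout and stderr into a single output file

# move to the directory from which the job was submitted
cd $PBS_O_WORKDIR

# run a (hypothetical) program
./my_program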

Queue structure

There are two queues on Greenfield: the batch queue and the debug queue. Interactive jobs can run in either queue and the method for doing so is discussed below.

The debug queue is used for debugging runs. It is not to be used for production runs. The maximum walltime for jobs in the debug queue is 30 minutes.

The batch queue is for all production runs. Jobs can use a maximum of 168 hours of walltime.

The amount of memory allocated to your job is determined by the number of cores you request.
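For example, you could direct a short test run to the debug queue either on the qsub command line or with directives in the script itself (the 30-minute walltime below is chosen to fit the debug queue's limit):

qsub -q debug myscript.job

or, inside the script:

#PBS -q debug
#PBS -l walltime=30:00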

Qsub command

After you create your batch script, submit it to PBS with the qsub command.

qsub myscript.job

Your batch output (your .o and .e files) is returned to the directory from which you issued the qsub command after your job finishes.
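By the usual Torque naming convention, these files combine the job name (by default, the script name) with the jobid. For example, if myscript.job ran as job 54, you would typically find:

myscript.job.o54    (stdout)
myscript.job.e54    (stderr)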

See an explanation and examples of job scripts.

Interactive access

A form of interactive access is available on Greenfield by using the -I option to qsub. For example, the command

qsub -I -l nodes=1:ppn=15 -l walltime=5:00 -q debug

requests interactive access to 15 cores for 5 minutes in the debug queue. Your qsub -I request will wait until it can be satisfied. If you want to cancel your request, type ^C.

When you get your shell prompt back, your interactive job is ready to start. At that point, any commands you enter will be run as if you had entered them in a batch script. Stdin, stdout, and stderr are connected to your terminal. To run an MPI or hybrid program, use the mpirun command just as you would in a batch script.

When you finish your interactive session, type ^D. When you use qsub -I, you are charged for the entire time you hold your processors, whether you are computing or not, so type ^D as soon as you are done executing commands.
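For example, a complete interactive session might look like this sketch, where my_mpi_program is a hypothetical executable:

qsub -I -l nodes=1:ppn=15 -l walltime=5:00 -q debug
mpirun -np 15 ./my_mpi_program
^D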

X11 connections in interactive use

In order to use any X11 tool, you must also include -X on the qsub command line:

qsub -X -I -l nodes=1:ppn=15 -l walltime=5:00

This assumes that the DISPLAY variable is set. Two ways in which DISPLAY is automatically set for you are:

  1. Connecting to Greenfield with ssh -X greenfield.psc.xsede.org
  2. Enabling X11 tunneling in your Windows ssh tool
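
Combining these steps, a hedged end-to-end example (with xclock standing in for any X11 tool) is:

ssh -X greenfield.psc.xsede.org
qsub -X -I -l nodes=1:ppn=15 -l walltime=5:00
xclock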

Monitoring and Killing Jobs

The qstat and pbsnodes commands provide information about jobs and queues. The qdel command is used to kill a job.

qstat

The qstat command requests the status of jobs or queues. Options include:

-a           Displays the status of the queues, including running and queued jobs. For each job it shows the amount of walltime and the number of cores and processors requested. For running jobs it shows the amount of walltime the job has already used.
-s           Includes comments provided by the batch administrator or scheduler.
-f           Provides a full status display.
-u username  Displays all running or queued jobs belonging to user username.
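
For example (joeuser is a placeholder username and 54 a placeholder jobid):

qstat -a            # status of the queues and all jobs
qstat -u joeuser    # jobs belonging to user joeuser
qstat -f 54         # full status display for job 54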

pbsnodes

The pbsnodes -a command shows details about the nodes.
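For example, assuming the standard Torque output format in which each node reports a state = line, you could count the free nodes with:

pbsnodes -a | grep -c "state = free"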

qdel

The qdel command is used to kill queued and running jobs. For example,

qdel 54

Give the jobid of the job you want to kill (here, 54) as the argument to qdel. The jobid is displayed when the job is submitted. You can also find the jobid with qstat -u username. If you cannot kill a job that you want to kill, contact PSC user support by email.
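A common sequence, again with joeuser as a placeholder username, is to look up the jobid and then kill the job:

qstat -u joeuser    # find the jobid of the job to kill
qdel 54             # kill job 54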

  

 

