The current batch job manager on Dell Linux clusters is PBS. To send a batch job to PBS, users need to write a script, readable by PBS, that specifies their needs. A PBS script is basically a shell script which contains embedded information for PBS. The PBS information takes the form of special comment lines which start with #PBS and continue with PBS-specific options.
Two example scripts, with comments, illustrate how this is done. To set the context, we'll assume the user name is myName, and the script file is named myJob.
To run a serial job with PBS, you might create a bash shell script named myJob with the following contents:
#!/bin/bash
#
# All PBS instructions must come at the beginning of the script, before
# any executable commands occur.
#
# Start by selecting the "single" queue, and providing an allocation code.
#
#PBS -q single
#PBS -A your_allocation_code
#
# To run a serial job, a single node with one process is required.
#
#PBS -l nodes=1:ppn=1
#
# We then indicate how long the job should be allowed to run in terms of
# wall-clock time. The job will be killed if it tries to run longer than this.
#
#PBS -l walltime=00:10:00
#
# Tell PBS the name of a file to write standard output to, and that standard
# error should be merged into standard output.
#
#PBS -o /scratch/myName/serial/output
#PBS -j oe
#
# Give the job a name so it can be found readily with qstat.
#
#PBS -N MySerialJob
#
# That is it for PBS instructions. The rest of the file is a shell script.
#
# PLEASE ADOPT THE EXECUTION SCHEME USED HERE IN YOUR OWN PBS SCRIPTS:
#
#   1. Copy the necessary files from your home directory to your scratch directory.
#   2. Execute in your scratch directory.
#   3. Copy any necessary files back to your home directory.

# Let's mark the time things get started with a date-time stamp.
date

# Set some handy environment variables.
export HOME_DIR=/home/myName/serial
export WORK_DIR=/scratch/myName/serial

# Make sure the WORK_DIR exists:
mkdir -p $WORK_DIR

# Copy files, jump to WORK_DIR, and execute a program called "demo".
cp $HOME_DIR/demo $WORK_DIR
cd $WORK_DIR
./demo

# Mark the time it finishes.
date

# And we're out'a here!
exit 0
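The copy/execute/copy-back scheme above can be tried locally, outside of PBS, to see how the pieces fit together. In this sketch the directories are throwaway temp directories and the "demo" program is a stand-in script, not anything provided by the cluster:

```shell
# Local sketch of the copy/execute/copy-back scheme. HOME_DIR and WORK_DIR
# are temporary stand-ins for /home/myName/serial and /scratch/myName/serial.
HOME_DIR=$(mktemp -d)
WORK_DIR=$(mktemp -d)

# Stand-in for the "demo" executable that lives in the home directory.
printf '#!/bin/sh\necho demo ran\n' > "$HOME_DIR/demo"
chmod +x "$HOME_DIR/demo"

# Copy files, jump to WORK_DIR, and execute -- same steps as in the script.
cp "$HOME_DIR/demo" "$WORK_DIR"
cd "$WORK_DIR"
./demo
```

On the real cluster the copy step matters because jobs should do their I/O in the scratch filesystem, not in the home directory.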
Once the contents of myJob meet your requirements, it can be submitted with the qsub command like so:
qsub myJob
To run a parallel job, you would follow much the same process as the previous example. This time the contents of your file myJob would contain:
#!/bin/bash
#
# Use "workq" as the job queue, and specify the allocation code.
#
#PBS -q workq
#PBS -A your_allocation_code
#
# Assuming you want to run 16 processes, and each node supports 4 processes,
# you need to ask for a total of 4 nodes. The number of processes per node
# will vary from machine to machine, so double-check that you have the right
# values before submitting the job.
#
#PBS -l nodes=4:ppn=4
#
# Set the maximum wall-clock time. In this case, 10 minutes.
#
#PBS -l walltime=00:10:00
#
# Specify the name of a file which will receive all standard output,
# and merge standard error with standard output.
#
#PBS -o /scratch/myName/parallel/output
#PBS -j oe
#
# Give the job a name so it can be easily tracked with qstat.
#
#PBS -N MyParJob
#
# That is it for PBS instructions. The rest of the file is a shell script.
#
# PLEASE ADOPT THE EXECUTION SCHEME USED HERE IN YOUR OWN PBS SCRIPTS:
#
#   1. Copy the necessary files from your home directory to your scratch directory.
#   2. Execute in your scratch directory.
#   3. Copy any necessary files back to your home directory.

# Let's mark the time things get started.
date

# Set some handy environment variables.
export HOME_DIR=/home/myName/parallel
export WORK_DIR=/scratch/myName/parallel

# Set a variable that will be used to tell MPI how many processes will be run.
# This makes sure MPI gets the same information provided to PBS above.
export NPROCS=`wc -l $PBS_NODEFILE | gawk '//{print $1}'`

# Copy the files, jump to WORK_DIR, and execute! The program is named "hydro".
cp $HOME_DIR/hydro $WORK_DIR
cd $WORK_DIR
mpirun -machinefile $PBS_NODEFILE -np $NPROCS $WORK_DIR/hydro

# Mark the time processing ends.
date

# And we're out'a here!
exit 0
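The NPROCS line works because PBS writes one hostname per allocated process slot into the file named by $PBS_NODEFILE, so counting its lines recovers the total process count. Here is a sketch with a mock node file standing in for $PBS_NODEFILE (and awk standing in for gawk), for a hypothetical nodes=2:ppn=4 request:

```shell
# Mock of $PBS_NODEFILE for nodes=2:ppn=4: one hostname per process slot.
NODEFILE=$(mktemp)
printf 'node01\nnode01\nnode01\nnode01\nnode02\nnode02\nnode02\nnode02\n' > "$NODEFILE"

# Same counting trick as in the script: wc -l prints "count filename",
# and awk keeps only the first field (the count).
NPROCS=`wc -l $NODEFILE | awk '//{print $1}'`
echo "$NPROCS"
```

This prints 8, which is the value mpirun would receive via -np.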
Once the file myJob contains all the information for the desired parallel process, it can be submitted with qsub, just as before:
qsub myJob
Users with more experience writing shell scripts can take advantage of additional shell environment variables which are set by PBS when the job begins to execute. Those interested are directed to the qsub man page for a list and descriptions.
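As a small illustration of those variables (the full list is in the qsub man page), PBS sets values such as PBS_JOBID, PBS_O_WORKDIR, and PBS_QUEUE when a job starts. Outside a PBS job they are unset, so this sketch falls back to placeholder values:

```shell
# These variables are set by PBS when a job executes; outside PBS they
# are unset, so fall back to placeholders for illustration.
echo "Job ID:         ${PBS_JOBID:-not-running-under-PBS}"
echo "Submission dir: ${PBS_O_WORKDIR:-$PWD}"
echo "Queue:          ${PBS_QUEUE:-unknown}"
```

A common use of PBS_O_WORKDIR is `cd $PBS_O_WORKDIR` at the top of a script, so the job runs in the directory from which qsub was invoked.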
https://docs.loni.org/wiki/Submitting_a_Batch_Job_in_PBS