Submitting a Batch Job in PBS
Published: 2019-05-26


The current batch job manager on Dell Linux clusters is PBS. To send a batch job to PBS, users need to write a script, readable by PBS, that specifies their needs. A PBS script is basically a shell script which contains embedded information for PBS. The PBS information takes the form of special comment lines which start with #PBS and continue with PBS-specific options.
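Because each #PBS line is also an ordinary shell comment, a PBS script remains a valid bash script that can even be run directly. A minimal sketch of the format (the queue and allocation names here are placeholders, not values for any particular cluster):

```shell
#!/bin/bash
#
# Every line beginning with "#PBS" is read by PBS at submission time,
# but is an ordinary comment to the shell itself.
#PBS -q single                 # queue to use (placeholder)
#PBS -A your_allocation_code   # allocation to charge (placeholder)
#PBS -l nodes=1:ppn=1          # one node, one process
#PBS -l walltime=00:05:00      # kill the job after 5 minutes
#PBS -N SkeletonJob            # name shown by qstat

# The shell script body begins here.
MESSAGE="job body starts here"
echo "$MESSAGE"
```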

Two example scripts, with comments, illustrate how this is done. To set the context, we'll assume the user name is myName, and the script file is named myJob.

A Serial Job Script (One Process)

To run a serial job with PBS, you might create a bash shell script named myJob with the following contents:

#!/bin/bash
#
# All PBS instructions must come at the beginning of the script, before
# any executable commands occur.
#
# Start by selecting the "single" queue, and providing an allocation code.
#
#PBS -q single
#PBS -A your_allocation_code
#
# To run a serial job, a single node with one process is required.
#
#PBS -l nodes=1:ppn=1
#
# We then indicate how long the job should be allowed to run in terms of
# wall-clock time. The job will be killed if it tries to run longer than this.
#
#PBS -l walltime=00:10:00
#
# Tell PBS the name of a file to write standard output to, and that standard
# error should be merged into standard output.
#
#PBS -o /scratch/myName/serial/output
#PBS -j oe
#
# Give the job a name so it can be found readily with qstat.
#
#PBS -N MySerialJob
#
# That is it for PBS instructions. The rest of the file is a shell script.
#
# PLEASE ADOPT THE EXECUTION SCHEME USED HERE IN YOUR OWN PBS SCRIPTS:
#
#   1. Copy the necessary files from your home directory to your scratch directory.
#   2. Execute in your scratch directory.
#   3. Copy any necessary files back to your home directory.

# Let's mark the time things get started with a date-time stamp.
date

# Set some handy environment variables.
export HOME_DIR=/home/myName/serial
export WORK_DIR=/scratch/myName/serial

# Make sure the WORK_DIR exists:
mkdir -p $WORK_DIR

# Copy files, jump to WORK_DIR, and execute a program called "demo".
cp $HOME_DIR/demo $WORK_DIR
cd $WORK_DIR
./demo

# Mark the time it finishes.
date

# And we're out'a here!
exit 0

Once the contents of myJob meet your requirements, it can be submitted with the qsub command like so:

qsub myJob
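On most PBS installations, qsub prints the new job's identifier, typically a number followed by the server's host name. When a later command needs just the numeric part (for use with qstat or qdel), it can be stripped with shell parameter expansion; the identifier below is a made-up example, not output from a real submission:

```shell
# Hypothetical identifier as printed by qsub, e.g. "12345.server1".
JOBID_FULL="12345.server1"

# Strip everything from the first "." onward to keep the numeric part.
JOBID_NUM=${JOBID_FULL%%.*}
echo "$JOBID_NUM"
```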

A Parallel Job Script (Multiple Processes)

To run a parallel job, you would follow much the same process as the previous example. This time the contents of your file myJob would contain:

#!/bin/bash
#
# Use "workq" as the job queue, and specify the allocation code.
#
#PBS -q workq
#PBS -A your_allocation_code
#
# Assuming you want to run 16 processes, and each node supports 4 processes,
# you need to ask for a total of 4 nodes. The number of processes per node
# will vary from machine to machine, so double-check that you have the right
# values before submitting the job.
#
#PBS -l nodes=4:ppn=4
#
# Set the maximum wall-clock time. In this case, 10 minutes.
#
#PBS -l walltime=00:10:00
#
# Specify the name of a file which will receive all standard output,
# and merge standard error with standard output.
#
#PBS -o /scratch/myName/parallel/output
#PBS -j oe
#
# Give the job a name so it can be easily tracked with qstat.
#
#PBS -N MyParJob
#
# That is it for PBS instructions. The rest of the file is a shell script.
#
# PLEASE ADOPT THE EXECUTION SCHEME USED HERE IN YOUR OWN PBS SCRIPTS:
#
#   1. Copy the necessary files from your home directory to your scratch directory.
#   2. Execute in your scratch directory.
#   3. Copy any necessary files back to your home directory.

# Let's mark the time things get started.
date

# Set some handy environment variables.
export HOME_DIR=/home/myName/parallel
export WORK_DIR=/scratch/myName/parallel

# Set a variable that will be used to tell MPI how many processes will be run.
# This makes sure MPI gets the same information provided to PBS above.
export NPROCS=`wc -l $PBS_NODEFILE | gawk '//{print $1}'`

# Copy the files, jump to WORK_DIR, and execute! The program is named "hydro".
cp $HOME_DIR/hydro $WORK_DIR
cd $WORK_DIR
mpirun -machinefile $PBS_NODEFILE -np $NPROCS $WORK_DIR/hydro

# Mark the time processing ends.
date

# And we're out'a here!
exit 0
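The NPROCS line in the script counts the entries in $PBS_NODEFILE, a file PBS generates at job start with one line per requested process (so nodes=4:ppn=4 yields 16 lines). The counting step can be tried outside a real job by building a stand-in node file; the node names here are invented:

```shell
# Build a stand-in node file: 4 nodes x 4 processes per node = 16 lines.
# In a real job, PBS creates this file and points $PBS_NODEFILE at it.
PBS_NODEFILE=$(mktemp)
for node in node01 node02 node03 node04; do
    for slot in 1 2 3 4; do
        echo "$node"
    done
done > "$PBS_NODEFILE"

# Same idea as the script above: one line in the file per MPI process.
NPROCS=$(wc -l < "$PBS_NODEFILE")
echo "NPROCS=$NPROCS"

rm -f "$PBS_NODEFILE"
```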

Once the file myJob contains all the information for the desired parallel process, it can be submitted with qsub, just as before:

qsub myJob

Shell Environment Variables

Users with more experience writing shell scripts can take advantage of additional shell environment variables which are set by PBS when the job begins to execute. Those interested are directed to the qsub man page for a list and descriptions.
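For instance, PBS exports variables such as PBS_JOBID, PBS_QUEUE, and PBS_O_WORKDIR (the directory qsub was run from) into the job's environment. The sketch below fakes their values so it can run outside a job; inside a real job, PBS sets them for you and you would use them directly:

```shell
# These values are set by PBS inside a real job; they are faked here
# so the snippet can run on its own.
export PBS_JOBID="12345.server1"
export PBS_QUEUE="single"
export PBS_O_WORKDIR="/tmp"

# A common first step in a job script: return to the directory
# the job was submitted from.
cd "$PBS_O_WORKDIR"

echo "Job $PBS_JOBID in queue $PBS_QUEUE, started from $PBS_O_WORKDIR"
```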

https://docs.loni.org/wiki/Submitting_a_Batch_Job_in_PBS
