
submitting multi-threaded jobs

If your cluster is set up primarily to run MPI jobs, simply submit a multi-threaded job in the same way as a scalar job. The SGE job scheduler will run the job exclusively on a single compute node, since SGE will have been configured with one job slot per node, e.g.
   qsub  my_2thread_job.sh
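
A minimal sketch of what my_2thread_job.sh itself might contain is shown below; the executable name ./my_program is a hypothetical stand-in for an OpenMP program built as described in the previous section:

   #!/bin/sh
   #$ -cwd                      # run the job from the current working directory
   export OMP_NUM_THREADS=2     # use both processors of the compute node
   ./my_program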

If your cluster is set up primarily as a job farm then SGE will have been set up with 2 job slots per compute node. In this case you need to submit a multi-threaded job to a special parallel environment to ensure that it occupies 2 slots on a single compute node rather than one slot on each of 2 compute nodes, e.g.

   qsub -pe smp 2 my_2thread_job.sh
would submit a job to occupy both processors on a dual-processor compute node.
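
When a job runs under a parallel environment, SGE exports the number of slots granted in the NSLOTS environment variable (2 here), so a sketch of my_2thread_job.sh could pick up the thread count from it instead of hard-coding it; again ./my_program is a hypothetical executable name:

   #!/bin/sh
   #$ -cwd                           # run the job from the current working directory
   export OMP_NUM_THREADS=$NSLOTS    # match the thread count to the slots granted by SGE
   ./my_program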

On such a cluster, if you were to submit a multi-threaded job as a scalar job, it is possible your job would end up sharing a compute node, and its processors, with another scalar job.

Please ask your administrator which way you should submit a multi-threaded job if you are unsure how your cluster is set up.

