8 Replies Latest reply on May 9, 2018 9:56 PM by Intel Corporation

    torque setting

    afshin67

      Hi,

       

      I have faced two problems.

      1- When I set #PBS -l nodes=1:ppn=1, it does not work and produces an error pointing at that line of my submission file. However, #PBS -l nodes=1:ppn=2 is fine. The point is that I do not need two cores, and ppn=1 was still working three days ago.

      2- I can currently run at most 15 jobs at the same time. I believe the limit used to be 30. Has the policy changed?

       

      Thanks,

      Afshin

        • 1. Re: torque setting
          Intel Corporation
          This message was posted on behalf of Intel Corporation

          Hello,

          Thanks for reaching out to us. 

          Could you please share a screenshot of the error you observed during submission? Meanwhile, we will investigate on our end.

          Thanks,
          Dilraj

           

          • 2. Re: torque setting
            afshin67

            Hi,

             

            I do not get an error message as such; the problem is that the submission settings are not applied. I submit:

             

            #!/bin/sh

            #PBS -N $fn

            #PBS -e $addres/reports/errors-$fn.err

            #PBS -o $addres/reports/output-$fn.out

            #PBS -l nodes=1:ppn=2

            #PBS -q batch

            #PBS -l mem=10gb

            #PBS -l vmem=$memo

            #PBS -l walltime=24:00:00

             

            but the submitted job runs for 6:00:00 with 1 node and 2 cores, without any memory limit, which are the default settings for the batch queue.

            Additionally, I can currently run at most 15 jobs at the same time. I believe the limit used to be 30. Has the policy changed?

             

            Thanks,

            Afshin

            • 3. Re: torque setting
              Intel Corporation
              This message was posted on behalf of Intel Corporation

              Hi Afshin,

              We would like to address the problems you raised one by one.

              1. "#PBS -l nodes=1:ppn=1 does not work and also produces an error from that line of my submission file."
              Reply: This does not look like an error to us. It is more of an informational message stating that "Queue manager has overridden the nodes request from #PBS -l nodes=1:ppn=1 to #PBS -l nodes=1:ppn=2". It does not affect the running of the job script. However, we will check and let you know why a value of one for ppn is not accepted.

              2. "I can run at most 15 jobs at a time. I believe it was 30. Has the policy changed?"

              Reply: How did you run the 15 jobs? Did you submit them one after the other? Did it throw an error when the number of submitted jobs exceeded 15? We could not recreate this situation, hence the questions.

              3. Even after giving different PBS settings for wall time and memory, the submitted job ran with the default settings.
              Reply: Do you have any commands before the #PBS directives in the job script?
              Did you export the variables used in them ($fn, $addres/reports/errors-$fn.err, $addres/reports/output-$fn.out, $memo), or are they defined inside the job script?
              Note: #PBS directives should always be placed at the very beginning of the job script file; otherwise they are ignored. A minimal correctly ordered script is sketched below for reference.
              Kindly reply with the output of “qstat -xf <JOB_ID>” after submitting your job.
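              For reference, here is a minimal sketch of a correctly ordered job script; the job name, paths, and program below are placeholders, not your actual values. Note also that the #PBS lines are read by qsub rather than executed by the shell, so variables such as $fn must already be substituted in the file before submission:

              #!/bin/bash
              #PBS -N example_job
              #PBS -q batch
              #PBS -l nodes=1:ppn=2
              #PBS -l mem=10gb
              #PBS -l walltime=24:00:00
              #PBS -e /home/<user>/reports/example.err
              #PBS -o /home/<user>/reports/example.out

              # Shell commands begin only after the last #PBS directive.
              cd "$PBS_O_WORKDIR"
              ./my_program

              After the job is submitted, the resources the scheduler actually recorded can then be checked with “qstat -xf <JOB_ID>”.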
               
              Regards,
              Anju

              • 4. Re: torque setting
                afshin67

                Hi Anju,

                 

                Thanks for your reply. Here are the answers to your questions:

                1- The problem is that from the line #PBS -l nodes=1:ppn=1 onward, the rest of the settings become inactive and the defaults are used instead, i.e. the submitted job runs for 6:00:00 (instead of 24:00:00) with 1 node and 2 cores (instead of 1 node and 1 core), and without any memory limit (instead of 11gb), which are the default settings for the batch queue.

                2- I submitted around 90 jobs and the scheduler started running 15 of them. When I called qstat -q, there were situations where the total number of queued jobs was exactly the number of my jobs with status Q (so no one else had jobs in the queue), while the total number of running jobs was around 30. I know the other running jobs might be very resource-demanding so that mine cannot start, but it is very improbable for that to go on for a whole week. So I suspect the limit on running jobs per person is 15.

                3- I do not have any commands before the #PBS directives. My file is exactly what I posted. I have another script that creates the different settings of the game I want to run, and that script fills in the submission script I posted here; indeed, $fn and $memo come from that script. Once the file is completed, I call qsub to submit it to the queue. A minimal sketch of what that wrapper does (and of how I count my running jobs, relevant to point 2) is shown below.
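                For illustration, the sketch below shows roughly what that wrapper does; the setting values, file names, and the sed-based substitution are stand-ins rather than my actual code, and the last line is only a rough way to count my running jobs:

                #!/bin/bash
                # Hypothetical values produced by the settings script.
                fn="example_setting"
                memo="11gb"
                addres="$HOME/beer_game"

                # Fill in $fn, $addres, and $memo in the template posted above
                # and write the completed submission file.
                sed -e "s|[$]fn|$fn|g" \
                    -e "s|[$]addres|$addres|g" \
                    -e "s|[$]memo|$memo|g" \
                    template.pbs > "runs/runs_torque_$fn.pbs"

                # Submit the completed file to the batch queue.
                qsub "runs/runs_torque_$fn.pbs"

                # Rough count of my currently running jobs.
                qstat -u "$USER" | grep -c " R "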

                Below is the output you asked for, but note that I changed #PBS -l nodes=1:ppn=1 to #PBS -l nodes=1:ppn=2 and submitted the job to the queue to get the results as soon as possible.

                 

                Thanks,

                Afshin

                <Data><Job><Job_Id>75561.c009</Job_Id><Job_Name>27_04_18_4_3_5_5000_2_-2_0_3_1_60000_64_0.9_30_000025_3_180_130_61_2.0_2.0_2.0_2.0_2.0_0.0_0.0_0.0_8.0_8.0_0.0_0.0_5_False_0_0_0_2_2_2_4_2_2_2_0_1_False_100</Job_Name><Job_Owner>u13549@c009.colfaxresearch.com</Job_Owner><job_state>Q</job_state><queue>batch</queue><server>c009</server><Checkpoint>u</Checkpoint><ctime>1524851002</ctime><Error_Path>c009:/home/u13549/beer_game/reports/errors-27_04_18_4_3_5_5000_2_-2_0_3_1_60000_64_0.9_30_000025_3_180_130_61_2.0_2.0_2.0_2.0_2.0_0.0_0.0_0.0_8.0_8.0_0.0_0.0_5_False_0_0_0_2_2_2_4_2_2_2_0_1_False_100.err</Error_Path><Hold_Types>n</Hold_Types><Join_Path>n</Join_Path><Keep_Files>n</Keep_Files><Mail_Points>n</Mail_Points><mtime>1524851002</mtime><Output_Path>c009:/home/u13549/beer_game/reports/output-27_04_18_4_3_5_5000_2_-2_0_3_1_60000_64_0.9_30_000025_3_180_130_61_2.0_2.0_2.0_2.0_2.0_0.0_0.0_0.0_8.0_8.0_0.0_0.0_5_False_0_0_0_2_2_2_4_2_2_2_0_1_False_100.out</Output_Path><Priority>0</Priority><qtime>1524851002</qtime><Rerunable>True</Rerunable><Resource_List><mem>10gb</mem><nodect>1</nodect><nodes>1:ppn=2</nodes><vmem>11200mb</vmem><walltime>24:00:00</walltime></Resource_List><Variable_List>PBS_O_QUEUE=batch,PBS_O_HOME=/home/u13549,PBS_O_LOGNAME=u13549,PBS_O_PATH=/glob/intel-python/python2/bin/:/glob/development-tools/gcc/bin:/glob/intel-python/python3/bin/:/glob/intel-python/python2/bin/:/glob/development-tools/versions/intel-parallel-studio-2018-update2/compilers_and_libraries_2018.2.199/linux/bin/intel64:/glob/development-tools/versions/intel-parallel-studio-2018-update2/compilers_and_libraries_2018.2.199/linux/mpi/intel64/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/u13549/.local/bin:/home/u13549/bin,PBS_O_MAIL=/var/spool/mail/u13549,PBS_O_SHELL=/bin/bash,PBS_O_LANG=en_US.UTF-8,PBS_O_SUBMIT_FILTER=/usr/local/sbin/torque_submitfilter,PBS_O_WORKDIR=/home/u13549/beer_game,PBS_O_HOST=c009,PBS_O_SERVER=c009</Variable_List><euser>u13549</euser><egroup>u13549</egroup><queue_type>E</queue_type><etime>1524851002</etime><submit_args>./runs/runs_torque_27_04_18_4_3_5_5000_2_-2_0_3_1_60000_64_0.9_30_000025_3_180_130_61_2.0_2.0_2.0_2.0_2.0_0.0_0.0_0.0_8.0_8.0_0.0_0.0_5_False_0_0_0_2_2_2_4_2_2_2_0_1_False_100.pbs</submit_args><fault_tolerant>False</fault_tolerant><job_radix>0</job_radix><submit_host>c009</submit_host></Job></Data>

                 

                • 5. Re: torque setting
                  Intel Corporation
                  This message was posted on behalf of Intel Corporation

                  Hi Afshin,
                   
                  From your reply, we understand that “#PBS -l nodes=1:ppn=2” does not encounter any issues with wall time or memory. We can also see this from the job summary you provided.
                  We will check with the concerned team and get back to you on the following issues:

                  1. The issue with setting “#PBS -l nodes=1:ppn=1”.
                  2. The limit of 15 total running jobs per person.
                   
                  Regards,
                  Anju
                  • 6. Re: torque setting
                    Intel Corporation
                    This message was posted on behalf of Intel Corporation

                    Hi Afshin,

                    After checking with the concerned team, their response is given below.

                    1. The issue with setting “#PBS -l nodes=1:ppn=1”: This is expected; we implemented it recently. Requesting 1 slot versus 2 slots does not affect resource allocation, only how the resource manager counts the occupancy of the node. The reason we do not like ppn=1 jobs is that with one job on a node it occupies all cores, but with two jobs per node they compete for cores. This is not ideal, because performance then depends on whether you have a neighbor on the node. Our updated scheme allows only ppn=2, so that queued jobs never have co-tenants on the node. The only case where ppn=1 is used is the Jupyter queue.
                    2. Limit of total running jobs per person is 15: This is correct: we allow up to 15 running jobs per user, and each job can request up to 5 nodes.
                    Hope this clarifies your query. Please let us know.

                    Regards,
                    Anju

                    • 7. Re: torque setting
                      afshin67

                      Hi Anju,

                       

                      Thanks for letting me know.

                       

                      Best,

                      Afshin

                      • 8. Re: torque setting
                        Intel Corporation
                        This message was posted on behalf of Intel Corporation

                        Hi Afshin,

                        We are closing this case.
                        Kindly open a new thread for further queries.

                        Regards,
                        Anju