1 Reply. Latest reply: Feb 15, 2012 3:46 PM by Gergana

Help using Intel MPI Benchmarks

rohitm@engr.uconn.edu Community Member

Hi, I'm trying to run the Intel MPI Benchmarks and am having some difficulties.  We are using Intel MPI Benchmarks 3.2.3 with Intel MPI 4.0.3.008.

We have no problems when using a single node, but runs spanning more than one node fail.  I'm starting with 4 hosts (each having 12 CPU cores), so I boot the MPD ring as follows:

$ mpdboot --totalnum=4 --ncpus=12 -f mpd.hosts
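
For reference, mpd.hosts is just one hostname per line; cn61 through cn64 below are assumptions based on the node names in the logs:

$ cat mpd.hosts
cn61
cn62
cn63
cn64

$ mpdtrace    # sanity check: should list every host in the ring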

 

But when we try a multi-node benchmark, we get problems:

$ mpirun -f mpd.hosts -n 24 ./IMB-MPI1 pingpong
APPLICATION TERMINATED WITH THE EXIT STRING: Hangup (signal 1)

 

Oddly, sometimes it seems to use Hydra and other times MPD (see the note after the error log below):

[hpc-rohit@cn61 imb]$  mpirun -f mpd.hosts -n 24 ./IMB-MPI1 pingpong
APPLICATION TERMINATED WITH THE EXIT STRING: Hangup (signal 1)
[hpc-rohit@cn61 imb]$  mpirun -f mpd.hosts -n 24 ./IMB-MPI1 pingpong
[proxy:0:0@cn62] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:70): assert (!(pollfds[i].revents & ~POLLIN & ~POLLOUT & ~POLLHUP)) failed
[proxy:0:0@cn62] main (./pm/pmiserv/pmip.c:387): demux engine error waiting for event
[mpiexec@cn61] HYDT_bscu_wait_for_completion (./tools/bootstrap/utils/bscu_wait.c:101): one of the processes terminated badly; aborting
[mpiexec@cn61] HYDT_bsci_wait_for_completion (./tools/bootstrap/src/bsci_wait.c:18): bootstrap device returned error waiting for completion
[mpiexec@cn61] HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:521): bootstrap server returned error waiting for completion
[mpiexec@cn61] main (./ui/mpich/mpiexec.c:548): process manager error waiting for completion
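
Presumably we could take the wrapper's guesswork out of the picture by invoking the launchers directly rather than through mpirun; this is only a sketch of what we plan to try:

$ mpiexec.hydra -f mpd.hosts -n 24 ./IMB-MPI1 pingpong    # force the Hydra process manager
$ mpiexec -n 24 ./IMB-MPI1 pingpong                       # force MPD (uses the ring booted above)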

 

We do have success with a 2-process benchmark (even a 12-process run is no problem), but we are really trying to measure the performance of our InfiniBand interconnect.  Any assistance is appreciated.  Thanks!
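
Once the multi-node launch works, I assume we will also want to pin the fabric so traffic really goes over IB rather than falling back to TCP; shm:dapl below is our assumption based on the Intel MPI 4.x fabric-selection docs:

$ export I_MPI_FABRICS=shm:dapl    # shared memory within a node, DAPL over IB between nodes
$ mpirun -f mpd.hosts -n 24 ./IMB-MPI1 pingpong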

 

$ mpirun -n 2 ./IMB-MPI1 pingpong
benchmarks to run pingpong
#---------------------------------------------------
#    Intel (R) MPI Benchmark Suite V3.2.3, MPI-1 part   
#---------------------------------------------------
# Date                  : Tue Feb 14 14:36:28 2012
# Machine               : x86_64
# System                : Linux
# Release               : 2.6.18-238.el5
# Version               : #1 SMP Sun Dec 19 14:22:44 EST 2010
# MPI Version           : 2.1
# MPI Thread Environment:

 

# New default behavior from Version 3.2 on:

 

# the number of iterations per message size is cut down
# dynamically when a certain run time (per message size sample)
# is expected to be exceeded. Time limit is defined by variable
# "SECS_PER_SAMPLE" (=> IMB_settings.h)
# or through the flag => -time

 


# Calling sequence was:

 

# ./IMB-MPI1 pingpong

 

# Minimum message length in bytes:   0
# Maximum message length in bytes:   4194304
#
# MPI_Datatype                   :   MPI_BYTE
# MPI_Datatype for reductions    :   MPI_FLOAT
# MPI_Op                         :   MPI_SUM 
#
#

 

# List of Benchmarks to run:

 

# PingPong

 

#---------------------------------------------------
# Benchmarking PingPong
# #processes = 2
#---------------------------------------------------
       #bytes #repetitions      t[usec]   Mbytes/sec
            0         1000         0.57         0.00
            1         1000         0.57         1.66
            2         1000         0.57         3.32
            4         1000         0.59         6.44
            8         1000         0.60        12.63
           16         1000         0.63        24.24
           32         1000         0.65        46.73
           64         1000         0.65        93.40
          128         1000         0.74       165.08
          256         1000         0.80       305.35
          512         1000         0.84       578.53
         1024         1000         1.03       950.79
         2048         1000         1.29      1509.91
         4096         1000         2.02      1932.30
         8192         1000         3.14      2491.67
        16384         1000         5.87      2662.98
        32768         1000         9.86      3168.55
        65536          640        13.97      4473.80
       131072          320        26.62      4695.69
       262144          160        49.06      5095.90
       524288           80        94.54      5288.91
      1048576           40       186.49      5362.27
      2097152           20       371.30      5386.46
      4194304           10       763.61      5238.30

 


# All processes entering MPI_Finalize
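
One caveat on the run above: with plain mpirun -n 2 both ranks may land on the same node, so these numbers probably reflect shared memory rather than InfiniBand.  Presumably forcing one rank per host would push the pingpong across the interconnect:

$ mpirun -f mpd.hosts -perhost 1 -n 2 ./IMB-MPI1 pingpong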

  • 1. Re: Help using Intel MPI Benchmarks
    Gergana Community Member

    Hey Rohit,

     

    Let me start by saying that if you have any questions regarding components in the Intel® Cluster Studio or Toolkit, you should post them to the Intel® Clusters and HPC Technology forum.  This forum covers the lower-level software (drivers, etc.) rather than the Intel® Software Tools.

     

    Would you be able to copy and paste your question from here to there?  I'm, unfortunately, unable to move it over from my side.  Alternatively, you can submit an issue to our Intel® Premier Support portal; just select the "Intel(R) Cluster Studio [XE] for Linux*" product.

     

    Regards,

    ~Gergana

     

    ==========================
    Gergana Slavova
    Technical Consulting Engineer
    Intel® Cluster Tools
    E-mail: gergana.s.slavova_at_intel.com
