Intel MPI Benchmark
Initial steps
Create a directory structure in the shared applications mount point:
# cd /share/apps
# mkdir benchmarks
# mkdir benchmarks/IMB
# cd benchmarks/IMB
Get the source
IMB is available on the Intel web site: http://www.intel.com/cd/software/products/asmo-na/eng/cluster/clustertoolkit/219848.htm
Once you've downloaded the application, copy it to your master node and unpack it.
# cd /share/apps/benchmarks/IMB
# tar -zxvf IMB_2.3.tar.gz
Build the application
# cd IMB_2.3/src
The IMB Makefile pulls its compiler and MPI settings from an include file. You choose a toolchain by commenting and uncommenting the include lines at the top of the Makefile (or by adding your own include file). The two include files shown further below cover a default Rocks installation using the compute roll (make_gnu) and an installation with the Intel Roll added (make_intel).
The listing below shows the include section at the beginning of the Makefile, with the additions I've made for make_[intel,gnu,path,pgi]. The uncommented include file is the active one.
#### User configurable options #####
#include make_intel
include make_gnu
#include make_path
#include make_pgi
#include make_ia32
#include make_ia64
#include make_sun
#include make_solaris
#include make_dec
#include make_ibm_sp
#include make_sr2201
#include make_vpp
#include make_t3e
#include make_sgi
#include make_sx4
### End User configurable options ###
make_gnu
COMPILER    = gnu
MPICH       = /opt/mpich/gnu
MPI_HOME    = ${MPICH}
MPI_INCLUDE = $(MPI_HOME)/include
LIB_PATH    =
LIBS        =
CC          = ${MPI_HOME}/bin/mpicc
OPTFLAGS    = -O
CLINKER     = ${CC}
LDFLAGS     =
CPPFLAGS    =
make_intel
COMPILER    = intel
MPICH       = /opt/mpich/intel
MPI_HOME    = ${MPICH}
MPI_INCLUDE = $(MPI_HOME)/include
LIB_PATH    =
LIBS        =
CC          = ${MPI_HOME}/bin/mpicc
OPTFLAGS    = -O
CLINKER     = ${CC}
LDFLAGS     =
CPPFLAGS    =
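The make_path and make_pgi files follow the same pattern, pointing at their own MPICH build. As an illustration only, a make_pgi might look like the sketch below; the /opt/mpich/pgi path is an assumption and should be adjusted to wherever your PGI MPICH actually lives:
make_pgi (sketch)
COMPILER    = pgi
MPICH       = /opt/mpich/pgi      # assumed install path; adjust for your cluster
MPI_HOME    = ${MPICH}
MPI_INCLUDE = $(MPI_HOME)/include
LIB_PATH    =
LIBS        =
CC          = ${MPI_HOME}/bin/mpicc
OPTFLAGS    = -O
CLINKER     = ${CC}
LDFLAGS     =
CPPFLAGS    =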
Uncomment the include file you need and build the executable. The example below shows the IMB-MPI1 executable built with the GNU compilers:
# make
# cp IMB-MPI1-gnu ../../
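If you want binaries for more than one toolchain side by side (as the -gnu suffix suggests), one way, sketched here and untested, is to rebuild with each include file active and copy the result out under a distinct name. The sed edit and the IMB-MPI1 output name are assumptions about how your Makefile is set up; adjust them to match yours.
# Build the GNU variant, then the Intel variant, keeping both binaries.
cd /share/apps/benchmarks/IMB/IMB_2.3/src
for c in gnu intel; do
    # make only the include file for this compiler active
    sed -i -e 's/^include make_.*$/include make_'"$c"'/' Makefile
    make clean              # assumes the Makefile provides a clean target
    make
    cp IMB-MPI1 ../../IMB-MPI1-$c   # assumes the default output name IMB-MPI1
done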
Running IMB
Now that IMB is built, there are many ways to run it. I'll show how to use it from an interactive PBS session and from a PBS script.
First, running in an interactive PBS session:
Prepare an output directory at the location where you want to test performance. I'll assume your directories live on the master node and that you want the output files written in your home directory.
$ su - user
$ mkdir output_files
Start the Interactive session:
$ qsub -I -lnodes=[n]:ppn=[p]
$ mpiexec /share/apps/benchmarks/IMB/IMB-MPI1-gnu
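Run this way, IMB-MPI1 executes its full default suite. It also accepts benchmark names on the command line (see the user guide referenced below for the full list), so an interactive run can be limited to just the tests you care about, for example:
$ mpiexec /share/apps/benchmarks/IMB/IMB-MPI1-gnu PingPong Sendrecv Alltoall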
Using a PBS Script:
#!/bin/bash
#PBS -N IMB
#PBS -e IMB.err
#PBS -o IMB.out
#PBS -m aeb
#PBS -M user
#PBS -l nodes=[n]:ppn=[p]
#PBS -l walltime=30:00:00

PBS_O_WORKDIR='/home/user/output_files'
export PBS_O_WORKDIR

# ---------------------------------------
# BEGINNING OF EXECUTION
# ---------------------------------------
echo The master node of this job is `hostname`
echo The working directory is `echo $PBS_O_WORKDIR`
echo This job runs on the following nodes:
echo `cat $PBS_NODEFILE`
# end of information preamble

cd $PBS_O_WORKDIR
cmd="mpiexec /share/apps/benchmarks/IMB/IMB-MPI1-gnu"
echo "running IMB with: $cmd in directory "`pwd`
$cmd >& $PBS_O_WORKDIR/log.IMB.$PBS_JOBID
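Save the script as, say, IMB.pbs (the file name is arbitrary) and submit it from the master node:
$ qsub IMB.pbs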
Running IMB as shown above uses the defaults and runs all tests. To learn more about running specific tests and configurable options, refer to the user guide in /share/apps/benchmarks/IMB/IMB_2.3/doc.
Viewing output
gnuplot may not be included by default in your distribution. Here's one way to add the package:
# up2date --install gnuplot
Once you have gnuplot, you can plot the results; for example:
$ gnuplot
gnuplot> set title "Intel MPI Benchmark Send/Receive & AlltoAll Tests"
gnuplot> set xlabel "Number of Bytes Transferred"
gnuplot> set ylabel "Time in Microseconds"
gnuplot> plot '12p1ppn.intel' u 1:3 w lp lw 2 pt 2, '12p2ppn.intel' u 1:3 w lp lw 2 pt 2
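The data files plotted above ('12p1ppn.intel', '12p2ppn.intel') are just plain-text tables pulled out of the IMB logs; how you produce them is up to you. As a rough, untested sketch, assuming a log named like log.IMB.1234.cluster written by the script above and the usual '# Benchmarking <name>' section headers, you could grab the first Sendrecv table like this (verify the column layout against your own log before plotting):
# Extract the numeric result rows of the first Sendrecv section from an IMB log.
# Replace log.IMB.1234.cluster with your actual log file name.
awk '/^# Benchmarking Sendrecv/ {grab=1; next}   # start of the Sendrecv section
     grab && /^# Benchmarking/  {exit}           # stop when the next benchmark begins
     grab && $1 ~ /^[0-9]+$/    {print}          # keep only the data rows (#bytes ...)
    ' log.IMB.1234.cluster > 12p1ppn.intel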
Here are a couple of examples of the (very simple) plots you can produce this way:
http://www.hpcclusters.org/public/benchmarks/ExampleBenchmark.pdf
http://www.hpcclusters.org/public/benchmarks/SendRecv8Core.pdf
References
- http://software.intel.com/en-us/articles/intel-mpi-benchmarks/
- http://www.mcs.anl.gov/research/projects/mpi/mpi-test/tsuite.html
- https://wiki.rocksclusters.org/wiki/index.php/Intel_MPI_Benchmark