A basic "Hello world" example that outputs text to the console from multiple nodes over a network using MPI.

A cluster at IIIT has four SLURM nodes. We want to run one MPI process on each node, with 32 OpenMP threads per process. In the future, such a setup would allow us to run distributed algorithms that use each node's memory efficiently and minimize communication cost (by keeping most communication within the same node). The output is saved in a gist. Thanks to Semparithi Aravindan for technical help.

> **Note**
> You can just copy `main.sh` to your system and run it.
> For the code, refer to `main.cxx`.
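
For reference, a minimal sketch of a hybrid MPI + OpenMP hello world along these lines is shown below. It is an illustration of the approach and matches the output format in the log, but it is not necessarily identical to `main.cxx`; refer to the repository for the actual code.

```c++
// Minimal sketch of a hybrid MPI + OpenMP "Hello world" (illustrative only).
#include <cstdio>
#include <omp.h>
#include <mpi.h>

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);
  int rank = 0, name_len = 0;
  char name[MPI_MAX_PROCESSOR_NAME];
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Get_processor_name(name, &name_len);
  // Each MPI process (one per node) reports its hostname and thread count.
  printf("P%02d: NAME=%s\n", rank, name);
  printf("P%02d: OMP_NUM_THREADS=%d\n", rank, omp_get_max_threads());
  // Each OpenMP thread on every node prints a hello message.
  #pragma omp parallel
  {
    int t = omp_get_thread_num();
    printf("P%02d.T%02d: Hello MPI\n", rank, t);
  }
  MPI_Finalize();
  return 0;
}
```

Such a program would typically be compiled with an MPI compiler wrapper (e.g., `mpicxx -fopenmp`) and launched across the nodes by the SLURM batch script `main.sh`.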


```bash
$ scl enable gcc-toolset-11 bash
$ sbatch main.sh

# ==========================================
# SLURM_JOB_ID = 3373
# SLURM_NODELIST = node[01-04]
# SLURM_JOB_GPUS =
# ==========================================
# Cloning into 'hello-mpi'...
# [node01.local:2180262] MCW rank 0 is not bound (or bound to all available processors)
# [node02.local:3790641] MCW rank 1 is not bound (or bound to all available processors)
# [node04.local:3758212] MCW rank 3 is not bound (or bound to all available processors)
# [node03.local:3287974] MCW rank 2 is not bound (or bound to all available processors)
# P00: NAME=node01.local
# P00: OMP_NUM_THREADS=32
# P02: NAME=node03.local
# P02: OMP_NUM_THREADS=32
# P03: NAME=node04.local
# P03: OMP_NUM_THREADS=32
# P01: NAME=node02.local
# P01: OMP_NUM_THREADS=32
# P00.T00: Hello MPI
# P00.T24: Hello MPI
# P00.T16: Hello MPI
# P00.T26: Hello MPI
# P00.T05: Hello MPI
# P00.T29: Hello MPI
# P00.T22: Hello MPI
# P00.T06: Hello MPI
# P00.T17: Hello MPI
# P00.T23: Hello MPI
# P00.T25: Hello MPI
# P00.T13: Hello MPI
# P00.T01: Hello MPI
# P00.T09: Hello MPI
# P00.T03: Hello MPI
# P00.T02: Hello MPI
# P00.T31: Hello MPI
# P03.T00: Hello MPI
# P03.T24: Hello MPI
# P03.T05: Hello MPI
# P03.T21: Hello MPI
# P03.T04: Hello MPI
# ...
```

