MPI + SR-IOV KVM cluster



This appliance deploys an MPI cluster of KVM virtual machines built on the MVAPICH2-Virt MPI library and configured with SR-IOV for high-performance communication over InfiniBand.

It accepts the following parameters:

  • key_name: Name of a key pair to enable SSH access to the instance (defaults to "default")
  • reservation_id: ID of the Blazar reservation to use for launching instances
  • total_nodes: Number of physical nodes to launch
  • total_vms: Number of virtual machines to create
  • vcpu_per_vm: Number of VCPUs per virtual machine
  • memory_per_vm: Memory size per virtual machine (in GiB)

The following outputs are provided:

  • first_instance_ip: The public IP address of the first bare-metal instance. Log in with the command 'ssh cc@<first_instance_ip>'.

To check the VM-to-IP mapping, run the following command:

cat /home/cc/vm-ip_mapping.dat
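mpirun_rsh expects a hostfile (vmhosts) listing one VM address per line. The exact layout of vm-ip_mapping.dat is not specified here; assuming a hypothetical two-column "vm-name ip" format, a short script like the following sketches how the mapping could be turned into a hostfile:

```python
def mapping_to_hostfile(mapping_text):
    """Extract the IP column (assumed to be the last field) from each
    non-empty line of the mapping file and return hostfile contents.
    NOTE: the two-column 'vm-name ip' layout is an assumption; adjust
    the parsing if your vm-ip_mapping.dat differs."""
    ips = []
    for line in mapping_text.splitlines():
        fields = line.split()
        if fields:
            ips.append(fields[-1])
    return "\n".join(ips) + "\n"

# Example with a made-up two-line mapping:
sample = "vm0 10.0.0.11\nvm1 10.0.0.12\n"
print(mapping_to_hostfile(sample), end="")
```

On the head node, the output could be redirected into a vmhosts file for use with mpirun_rsh.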

To run an MPI program, first log in to a VM with the command 'ssh root@<vm_ip>', then execute the following, assuming you have compiled a program called mpi.out:

mpirun_rsh -np <nprocs> -hostfile vmhosts MV2_VIRT_USE_IVSHMEM=1 ./mpi.out

In some cases, the library path of the MVAPICH2-Virt package needs to be exported as follows before running MPI programs:

export LD_LIBRARY_PATH=/opt/mvapich2-virt/lib64:$LD_LIBRARY_PATH

Refer to the MVAPICH2-Virt user guide for more details on running MPI programs.



Image IDs

CHI@TACC:  e3ceca5b-2746-478b-a8d1-52a44585431a




Name: Xiaoyi Lu, Network Based Computing Lab, The Ohio State University


Version: 1.0.1
Created By:  luxi on Oct. 31, 2016, 3:15 p.m.
Updated By:  zzhen on Aug. 7, 2018, 8:53 a.m.