MPI and Spack based HPC Cluster

This artifact sets up a ready-to-use HPC cluster in the cloud with a master-worker setup. All nodes come pre-installed with MPICH, OpenMPI, Spack, and Lmod (Lua modules) for running and managing MPI-based applications. Users can optionally enable a shared NFS directory for seamless data access across nodes. Example Jupyter notebooks are included to help users get started.
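As a minimal sketch of how a user might exercise the pre-installed stack once the cluster is up, the commands below list the Lmod modules, load an MPI implementation through Spack, and launch a small MPI job. The package name openmpi and the source file hello.c are assumptions for illustration; check spack find and the artifact's notebooks for the actual names on the image.

# list the modules made available through Lmod
module avail

# load an MPI implementation installed by Spack (package name assumed; adjust to what spack find reports)
spack load openmpi

# compile and run a hypothetical MPI program across 4 processes
mpicc hello.c -o hello
mpirun -np 4 ./hello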

Launch on Chameleon

Launching this artifact will open it within Chameleon’s shared Jupyter experiment environment, which is accessible to all Chameleon users with an active allocation.

Download Archive

Download an archive containing the files of this artifact.

Download with git

Clone the git repository for this artifact, and check out the version's commit.

git clone https://github.com/rohanbabbar04/MPI-Spack-Experiment-Artifact.git
cd MPI-Spack-Experiment-Artifact
git checkout b89447480ce87c9a30ed5649ffdc34a0c02a9ee9
Feedback

Submit feedback through GitHub issues.
