Using Chameleon for Artifact Evaluation

More and more computer science systems conferences are using Chameleon for artifact evaluation. For example, in recent months both SOSP’21 and SC21 used Chameleon in their reproducibility initiatives. There are many good reasons for this: Chameleon supports bare metal reconfigurability, which means that it can support a wide range of systems experiments; it is based on a mainstream cloud implementation (OpenStack) familiar to many users; and it allows users to easily package and share experiments via integration with Jupyter and a sharing platform called Trovi. It also gives users access to a wide range of interesting hardware, including different types of GPUs, FPGAs, storage in interesting configurations (such as NVMe drives and NVDIMMs), innovative interconnect and networking features -- and, most recently, edge devices. Given these advantages, we have been fielding more and more questions about how to use Chameleon for reproducibility; this blog post provides tips both for organizers of these events and for artifact authors.


Tips for Artifact Evaluation Organizers:

  • Create a Chameleon project for artifact evaluation: To do so, first log into Chameleon, go to your profile, and request PI eligibility. Once your PI eligibility is approved, follow this link and give us a few sentences of justification. Artifact evaluation programs are exactly the sort of thing we want to support; you will get approval with an allocation of 20,000 service units (SUs; one SU equals one node hour) that you can extend at any point if you run out of time or cycles. Note that Chameleon is an open resource, so you do NOT need to be associated with an NSF project to be supported. 

  • Add authors/evaluators to the project: This will give everybody associated with your evaluation initiative access to your project allocation and allow them to run experiments/artifacts; they won’t be able to use Chameleon otherwise. We expect that many authors will have created their own Chameleon projects to support their research long term; if this is the case, they need not be added to your project. Other authors may use independent resources for their research but may still want to package their artifacts and make them available via Chameleon; to gain access, they will need to be added to your project. In addition, any evaluators who want to use Chameleon resources will need to be added to your project as well. To do so, go to dashboard/projects, select your evaluation project, and add members in the appropriate section. For simplicity, it is usually best to add all the evaluators to the project at once rather than manage individual requests -- however, if you prefer to add evaluators only on an as-needed basis, that’s also fine. Chameleon also supports a PI delegate feature that allows you to hand off the management of project membership to somebody else. 

  • Familiarize the evaluators (and potentially authors) with the system: Allow sufficient time to familiarize the evaluators -- and potentially the authors as well -- with the system; what works best here will depend on your specific conference deadlines. Since Chameleon is based on OpenStack, which supports standard cloud interfaces similar to those of commercial clouds, many evaluators are likely to be more or less familiar with the system already. For those who are not, there is extensive documentation, including YouTube videos on how to get started and how to use the system for experiments in special topics such as networking. The Chameleon team also provides extensive user support via the help desk. In addition, we will be happy to provide a custom webinar for your evaluators explaining all the features of the system, as well as “office hours” to make sure that all questions are answered -- feel free to contact us directly to arrange such specialized sessions. 

  • Conduct the evaluation: The evaluators will be able to choose from an extensive array of Chameleon hardware, and we will do our best to support their exploration! Bear in mind that the hardware you need may not always be available on demand -- in particular, popular resources, like our GPU nodes, often need to be reserved in advance. You can check Chameleon’s availability calendar for any site (e.g., TACC) and filter on the desired resource type to see what hardware is available at any given time, and what may become available later, to decide when to make an advance reservation (see the sketch below for how to script this step). When you run experiments packaged as Jupyter notebooks, a personal copy of the notebooks is automatically made for you, and you can adapt it as you see fit (for example, to make your reservation under the evaluation project or to adapt the experimental configuration to work with your advance reservation). Authors will not be able to tell who evaluated their artifact (unless, of course, you contact them explicitly), but they can see how many times their artifact has been launched.
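
If you or your evaluators prefer to script this step, the same advance reservation can be made programmatically. The sketch below uses Chameleon’s python-chi library; the project name, site, node type, and reservation window are placeholders, and helper signatures may differ slightly between python-chi releases, so treat it as a starting point and check the current documentation.

```python
# Sketch: make an advance reservation for a popular node type with python-chi.
# Placeholders: the project charge code, site, node_type, and the time window.
import datetime

import chi
from chi import lease

chi.set("project_name", "CHI-XXXXXX")  # your evaluation project's charge code
chi.use_site("CHI@TACC")               # the site whose calendar showed a free slot

# Request one node of the desired type (match what the calendar shows as free).
reservations = []
lease.add_node_reservation(reservations, count=1, node_type="gpu_rtx_6000")

# Reserve a 24-hour window starting three days from now.
start = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(days=3)
end = start + datetime.timedelta(hours=24)

gpu_lease = lease.create_lease(
    "artifact-evaluation-gpu",
    reservations,
    start_date=start,
    end_date=end,
)
print("Lease created:", gpu_lease["id"])
```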


Tips for Artifact Authors: 

Chameleon is one of the most versatile platforms out there for reproducing your experiments. You can also make an evaluator’s work easier by noting the following: 


  • Shared resource: Using Chameleon (or another shared resource) for your experiments in the first place will likely increase their reproducibility, especially if they rely on rare or expensive hardware: a shared system like Chameleon ensures that the evaluator has access to exactly the same hardware you used -- otherwise, your experiment may be hard to reproduce simply for lack of access to a similar platform. 

  • Packaging experimental environments: Using Chameleon (or any other cloud) for your experiments means that much of the work of artifact description is done as a side effect of experimentation, a point we explain in detail here: to use a cloud, you have to prepare and then save/snapshot an image, which provides a convenient record of your experimental environment. Evaluators can then use that snapshotted image to recreate the environment, eliminating many of the sources of error and frustration in repeating experiments. Further, if your experimental environment consists of multiple resources/nodes/instances, orchestration mechanisms like OpenStack Heat or Jupyter notebooks provide a good way of automating environment creation -- for yourself as well as for the evaluator later on (a sketch of recreating an environment this way appears after this list). 

  • Packaging experiments: Once you deploy or re-deploy the experimental environment, you will need to run the actual experiment. While you can package your experiment in any way you see fit, we recommend Jupyter notebooks for several reasons. First, the Jupyter integration in Chameleon allows you to automate/orchestrate the deployment of your experimental environment in an imperative and non-transactional fashion. The latter property is particularly important because if something goes wrong during a re-run, you, or the evaluator, can easily adapt the experiment (leaving comments on how to handle potential brittleness or variability is good practice). Second, a notebook lets you prepare your experimental environment, run your experiment, and then do the data processing, all in one place. While evaluation typically involves re-running the whole experiment, some readers of your paper may be interested in just the data analysis. Finally, Jupyter provides a convenient means of integrating text that explains how to replicate your experiment with the actual packaged process: it is good practice to tell users how long the experiment will take to repeat and how they may adjust what is in the notebook to deal with variation (for example, they will likely need to charge their usage to a different project or look ahead to make an advance reservation for popular resources like GPUs or memory hierarchy nodes). A sketch of the kind of cell such a notebook might end with appears after this list. 

  • Sharing experiments: Chameleon provides an experiment sharing platform, called Trovi, integrated with the system, so that when you use Jupyter to package your experiment, you can save it to Trovi directly. A Trovi artifact is like a Google Drive folder: you can save everything pertinent to an experiment -- your notebook, data, a README, etc. Furthermore, you can share it with others: perhaps just with your team while you are working on it -- but don’t forget to make it public for evaluation! 

  • Publishing experiments: Once you are finished with your experiment, you can publish it on Zenodo with just one click. This will automatically assign a Digital Object Identifier (DOI) to your experiment, which means that your experiment is now citable! You can reference it from your paper to make it easier for others to find and interact with your research.
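
To make the snapshot-based workflow above concrete, here is a minimal sketch of recreating an experimental environment from a saved image using python-chi. It assumes an active lease like the one created in the organizers’ sketch earlier; the lease, image, and server names are hypothetical, and helper names may vary across python-chi versions, so consult the current documentation.

```python
# Sketch: recreate an experimental environment from a snapshotted image.
# Assumes an active lease (see the reservation sketch above); the lease,
# image, and server names below are hypothetical placeholders.
import chi
from chi import lease, server

chi.set("project_name", "CHI-XXXXXX")
chi.use_site("CHI@TACC")

# Look up the node reservation inside the active lease.
lease_id = lease.get_lease_id("artifact-evaluation-gpu")
reservation_id = lease.get_node_reservation(lease_id)

# Boot a bare-metal instance from the image saved earlier with cc-snapshot.
instance = server.create_server(
    "artifact-node",
    reservation_id=reservation_id,
    image_name="my-artifact-image",  # the snapshot recorded during the research
)
server.wait_for_active(instance.id)

# Attach a public IP so the experiment can be driven over SSH.
floating_ip = server.associate_floating_ip(instance.id)
print("Environment ready at", floating_ip)
```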
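
And here is the kind of cell a packaged notebook might end with: running the experiment on the provisioned instance and pulling the results back for local analysis. This sketch uses the paramiko SSH library; the floating IP, script name, and results file are hypothetical stand-ins for whatever your artifact actually provides.

```python
# Sketch: drive the packaged experiment over SSH and fetch its results.
# FLOATING_IP comes from the previous step; run_experiment.sh and
# results.csv are hypothetical names for your artifact's entry point and output.
import paramiko

FLOATING_IP = "129.114.0.0"  # placeholder: use the floating IP attached above

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(FLOATING_IP, username="cc")  # "cc" is the default user on Chameleon images

# Run the experiment and stream its output into the notebook.
_, stdout, stderr = client.exec_command("cd artifact && bash run_experiment.sh")
print(stdout.read().decode())
print(stderr.read().decode())

# Copy the results file back so later cells can analyze and plot it locally.
sftp = client.open_sftp()
sftp.get("artifact/results.csv", "results.csv")
sftp.close()
client.close()
```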


If you are looking for examples of experiments packaged by others, you can go to Chameleon Trovi and filter on the “experiment” tag -- or you can read about them here. Last but not least, remember that artifact evaluators are not the only people who may be interested in your experiments. To facilitate use by others, we support a feature, called Chameleon Daypass, that allows you to grant non-Chameleon users access to the system for the purpose of replication -- keep an eye out for a future tips and tricks article on how to use it. Many of the experiments packaged on Chameleon in the way described above have been replayed hundreds of times in the first year after packaging -- think of the impact your research could have! 

