Artifact for Baleen: ML Admission & Prefetching for Flash Caches (FAST 2024)

Artifact description: Baleen is a flash cache that uses coordinated ML admission and prefetching to reduce peak backend load in bulk storage systems. This artifact contains Python code to reproduce the simulator results and key figures in the Baleen paper.
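To give a flavor of the admission idea, here is a minimal, hypothetical sketch of a threshold-based ML admission policy. All names here are illustrative, not Baleen's actual API, and the toy "model" stands in for a trained predictor; the policy admits an item to flash only when predicted future utility justifies the flash write.

```python
# Hypothetical sketch of ML-driven flash admission (illustrative names,
# not Baleen's actual code). The policy consults a learned model and
# admits an item to flash only if its predicted utility clears a
# tunable threshold, avoiding flash writes for low-value items.

class ThresholdAdmissionPolicy:
    """Admit an item iff the model's predicted utility exceeds a threshold."""

    def __init__(self, model, threshold):
        self.model = model          # callable: feature dict -> predicted utility
        self.threshold = threshold  # admission cutoff

    def should_admit(self, features):
        return self.model(features) > self.threshold


# Usage: a toy model that treats recent access count as predicted utility,
# standing in for a real trained predictor.
policy = ThresholdAdmissionPolicy(model=lambda f: f["recent_accesses"], threshold=2)
print(policy.should_admit({"recent_accesses": 5}))  # → True
print(policy.should_admit({"recent_accesses": 1}))  # → False
```

In the paper, admission and prefetching decisions are coordinated rather than made independently as in this sketch; see the paper for the full design.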


First-timers: you will need an active allocation to launch this artifact on Chameleon. If you are unsure how to get one, please contact the Chameleon Helpdesk.

Baleen: ML Admission & Prefetching for Flash Caches
Daniel Lin-Kit Wong, Hao Wu, Carson Molder, Sathya Gunasekar, Jimmy Lu, Snehal Khandkar, Abhinav Sharma, Daniel S. Berger, Nathan Beckmann, Gregory R. Ganger
Link to paper site

Estimated time to reproduce: 3 hours (setup, small-scale experiment, plotting figures using intermediate results). To re-run all experiments from scratch would take >600 machine-days.

Reproducibility status: awarded Results Reproduced, Artifacts Functional, Artifacts Available badges during FAST 2024 Artifact Evaluation.

Experiment Pattern: This artifact provisions a single node on which to run the simulator and the Jupyter notebooks. (See the notebook for details.)

Support: create a GitHub issue (preferred) or email the authors.

Version published: Feb. 1, 2024, 7:46 PM


Launch on Chameleon

Launching this artifact will open it within Chameleon’s shared Jupyter experiment environment, which is accessible to all Chameleon users with an active allocation.

Download Archive

Download an archive containing the files of this artifact.

Download with git

Clone the git repository for this artifact, and check out this version's commit:

git clone
# cd into the created directory
git checkout 72c3853d4a5753af11b7dfc9f221c5e202675325

Submit feedback through GitHub issues
