Great news in Chameleon-land!
What? Another changelog? Even though we explicitly said last month we wouldn't be giving you one for a little while? Well, this is quite embarrassing. If you must know, we happened to find a few things that we could button up and release for you in January. If you can just quickly read through to the end and then forget this ever happened, I'd be grateful.
New RTX GPU cards at CHI@UC. In case you are not on the Chameleon-users mailing list, last week we announced that 40 NVIDIA RTX 6000 GPUs are installed and available for experimentation at CHI@UC. The GPUs have been installed into 40 of our existing "compute_skylake" nodes at CHI@UC, meaning they retain their ability to participate in advanced networking experiments (e.g., stitching or BYOC SDN). This means that there are effectively fewer nodes of type "compute_skylake"; experiments requiring a large number of nodes may therefore want to leverage model-based constraints when making reservations instead of relying on the node_type field. Our documentation has an example of this (using the "architecture.smp_size" attribute to request a node having a specified number of physical processors), though you are always free to file a support ticket to consult with us on your particular experiment.
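To make the idea of model-based constraints concrete, here is a toy sketch of how a Blazar-style resource_properties filter (the JSON-like expressions used in reservation requests, such as matching on "architecture.smp_size") selects nodes. This is an illustrative local evaluator only, not Blazar's actual matching code; the node dictionaries and comparison semantics are assumptions for the example.

```python
# Toy evaluator for Blazar-style resource_properties filters.
# NOTE: illustrative sketch only -- not the real Blazar implementation.

def matches(expr, node):
    """Return True if `node` (dict of attribute -> string) satisfies `expr`.

    `expr` is a JSON-style filter such as
    ["==", "$architecture.smp_size", "2"] or ["and", <expr>, <expr>].
    """
    op = expr[0]
    if op == "and":
        return all(matches(sub, node) for sub in expr[1:])
    if op == "or":
        return any(matches(sub, node) for sub in expr[1:])
    _, attr, value = expr            # leaf: [op, "$attribute", "value"]
    actual = node.get(attr.lstrip("$"))
    if actual is None:
        return False
    if op == "==":
        return actual == value
    if op == ">=":
        return float(actual) >= float(value)
    if op == "<=":
        return float(actual) <= float(value)
    raise ValueError(f"unsupported operator: {op}")

# Hypothetical inventory: select by processor count instead of node_type.
nodes = [
    {"node_type": "compute_skylake", "architecture.smp_size": "2"},
    {"node_type": "compute_haswell", "architecture.smp_size": "1"},
]
want = [">=", "$architecture.smp_size", "2"]
eligible = [n for n in nodes if matches(want, n)]  # only the skylake node
```

The point of filtering on a hardware attribute rather than node_type is that the reservation keeps matching the nodes you actually need even as the pool of a given node_type shrinks.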
Connect Chameleon experiments directly with AWS via Amazon Direct Connect. It has been possible for some time to create isolated layer-2 circuits between any two ExoGENI stitch ports; this makes it possible to create a network that spans CHI@UC and CHI@TACC, and stitching to other host institutions is possible as well. Now that capability has been extended to commercial clouds, starting with AWS. Direct Connect allows you to establish a dedicated connection between your AWS cloud and another network, and Chameleon is now able to peer with AWS via ExoGENI and Internet2 Cloud Connect. This functionality is in its early stages and still requires some manual steps to get the connection up and running, but we have prepared detailed documentation to guide you through the process. We hope this unlocks more exciting network experiment possibilities!
Lower charge for stitchable VLAN reservations. We have heard from some of you that the allocation charge for our stitchable VLANs is too high and can cause an allocation to rapidly deplete. After reviewing the usage of stitchable VLANs, we've decided to reduce the SU charge from 10 to 4. This should lower your reservation costs while ensuring these VLANs are still treated like the precious shared resource they are.
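As a back-of-envelope illustration of what the new rate means for your allocation (assuming, for this sketch, that the SU charge accrues per VLAN per hour; check the documentation for the exact accounting):

```python
# Rough estimate of SU drain for a stitchable VLAN lease.
# Assumption (for illustration only): charge accrues per VLAN per hour.

def lease_cost_su(hours, n_vlans, su_per_vlan_hour):
    """Total SUs consumed by a lease of `n_vlans` VLANs for `hours` hours."""
    return hours * n_vlans * su_per_vlan_hour

old_cost = lease_cost_su(hours=24, n_vlans=1, su_per_vlan_hour=10)  # 240 SUs
new_cost = lease_cost_su(hours=24, n_vlans=1, su_per_vlan_hour=4)   # 96 SUs
```

Under that assumption, a one-day, one-VLAN lease drops from 240 SUs to 96 SUs, a 60% reduction.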
That’s all, folks! Now we really will stop writing these blogs -- unless you write to us!