OpenStack on Packet: A Twitch Unboxing

John Studarus
4 min read · Aug 24, 2020

Imitation may be the sincerest form of flattery, but a live stream (via Twitch, no less) is the thing that flatters me! Earlier this week, Rain Leander and David McKay live streamed a completely unscripted and unrehearsed “unboxing” of my OpenStack on Packet repo. In their 90-minute stream, they successfully deployed the cloud (with 30 minutes to spare). It’s not all flattery: they did run across some liberal interpretations of the documentation and perhaps a bug or two. Please watch the video! It is both informative and entertaining.

David and Rain having way too much fun unboxing OpenStack on Packet.

Here is some background on the OpenStack on Packet project, how it came to be, and how it has evolved through the years. I’ll also elaborate on some of the items identified during the stream.

Eight years ago, I was stuck. I was leading product management for a software development team building a cloud security solution that supported a number of up-and-coming cloud technologies. Getting access to cloud infrastructure was troublesome. There were long delays in getting private cloud environments stood up, which impacted our timelines. Public clouds weren’t a viable solution since we needed to tailor the cloud infrastructure to our security infrastructure.

That’s when I realized there was a need for private cloud automation atop a deep pool of available bare metal resources (such as a bare metal cloud provider). I needed something that would quickly and reliably deploy a private cloud without the agonizing wait for hardware to be manually provisioned. I also needed a cloud where I could consume hardware as I required it, a consumption-based model, allowing me to scale my hardware footprint up and down while only being charged for hardware that was actively in use.

The first incarnation of this private cloud on demand ran atop Packet infrastructure donated for open source evangelism. These early clouds I built were used at technical conferences, meetups, and workshops to showcase OpenStack functionality. Using the Packet API and the associated Terraform provider, these private cloud environments were quickly and reliably stood up worldwide. Full private cloud environments were spun up the day before an event and then torn down immediately afterwards, releasing the underlying infrastructure.
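
For a flavor of what that automation looks like, here is a minimal Terraform sketch of requesting a single bare metal node through the Packet provider. The hostname, plan, facility, and variable names are placeholders rather than the repo’s actual configuration, and attribute names can differ slightly between provider versions.

```
provider "packet" {
  auth_token = var.auth_token   # Packet API token
}

# One bare metal node; controller and compute nodes are stamped out
# from resources along these lines.
resource "packet_device" "controller" {
  hostname         = "controller"       # placeholder name
  plan             = "c1.small.x86"     # hardware model to provision
  facilities       = ["ewr1"]           # data center to deploy into
  operating_system = "ubuntu_18_04"
  billing_cycle    = "hourly"           # pay only while the node exists
  project_id       = var.project_id     # Packet project to bill against
}
```

Because everything is requested through the API, running “terraform apply” the day before an event and “terraform destroy” afterwards is all it takes to acquire and release the underlying machines.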

For the OpenStack Summit in Vancouver in 2015, the project hit a critical milestone. We migrated away from any proprietary vendor OpenStack version to the pure upstream OpenStack code base and added ARM processor support alongside the existing x86 support. We removed dependencies on downstream modules, allowing the latest, bleeding-edge OpenStack code base to be deployed without waiting for vendor dependencies or releases. With ARM support, full ARM-based clouds (both control and compute nodes), as well as heterogeneous clouds mixing x86 and ARM, could be deployed.

As you can imagine, over such a long project lifetime, things have changed and the documentation hasn’t always kept pace. Rain and David hit on a few of these items, including:

  • The most recent updates include support for new hardware (2nd and 3rd generation Packet systems) and new OpenStack releases (Train and Ussuri).
  • Rain and David noted that the repo only pulls down the virtual machine images required for the processor types in use (i.e., if there are only x86 compute nodes, then only the x86 virtual machine images are downloaded).
  • When setting up serial console access, Rain and David ran into some old documentation referring to “novaconsole,” which they couldn’t find. Well, “novaconsole” has been deprecated and is no longer needed: serial console support is now built into the mainline OpenStack code base, and you can simply pull up the console via the GUI. Sorry about the wild goose chase! Old documentation = wild misdirection.
  • Rain and David had to work around the Terraform remote backend setting by disabling it. The current CI system uses the Terraform Cloud remote backend to process builds triggered by GitHub pull requests. Whoops, sorry about the lack of instructions, but you two figured it out: you turned off the remote backend and ran it all locally (see the sketch after this list).
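
For anyone following along, that workaround boils down to disabling the remote backend so Terraform falls back to local state. A rough sketch is below; the organization and workspace names are placeholders, not the repo’s actual settings.

```
terraform {
  # The CI runs against a Terraform Cloud remote backend. To run locally,
  # comment this block out (or delete it) and re-run "terraform init",
  # which falls back to the default local backend and local state.
  #
  # backend "remote" {
  #   organization = "example-org"          # placeholder
  #   workspaces {
  #     name = "openstack-on-packet-ci"     # placeholder
  #   }
  # }
}
```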

Continuous Integration (CI) is the latest addition to the repo. The first steps are in place, setting up continuous testing of the latest OpenStack release across all the supported Packet hardware models. The vision is to proactively validate correct deployment, cloud-level functionality, and performance for several stock cloud configurations (multiple nodes) across multiple Packet data centers.
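
One way such a test matrix could be expressed, sketched here purely as an illustration (the variable name, plan list, and facility are assumptions and not the repo’s actual CI configuration), is to iterate the deployment over the supported hardware plans:

```
variable "test_plans" {
  type    = list(string)
  default = ["c1.small.x86", "c2.medium.x86", "c3.small.x86"]  # illustrative plans
}

# Deploy one node per hardware model under test.
resource "packet_device" "ci_node" {
  count            = length(var.test_plans)
  hostname         = "ci-node-${count.index}"
  plan             = var.test_plans[count.index]
  facilities       = ["ewr1"]            # placeholder data center
  operating_system = "ubuntu_18_04"
  billing_cycle    = "hourly"
  project_id       = var.project_id
}
```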

Hopefully Rain and David will be willing to do another unboxing… would you like a challenge? In its current state, the repo can move in a number of different directions. I’d like to see example configurations for larger clouds across multiple data centers (the current configuration files are for a maximum of three nodes in a single data center). Such advanced configurations would allow testing of automated failover, live scaling up and down, latency research on large-scale clouds, and validation of edge deployment models. Just imagine a private cloud that is smart enough to scale itself up and down, dynamically requesting additional hardware nodes from the underlying, worldwide bare metal cloud provider to balance performance and cost! Rain, David… I’ll be watching when you’ve got some of these more advanced capabilities up and running!

Get involved! Try it out yourself! If you’ve got any comments or suggestions, please open an issue or a pull request on the OpenStack on Packet GitHub repo.


John Studarus

John is the president of JHL Consulting, which focuses on cloud, networking, and security product consulting.