OpenStack, the open source infrastructure project that aims to give enterprises the equivalent of AWS for their private clouds, today announced the launch of its 17th release, dubbed ‘Queens.’ After all of those releases, you’d think there isn’t much new left for the OpenStack community to add to the project, but just as the large public clouds keep adding new services, so does OpenStack.
“People want to get more out of their cloud,” OpenStack Foundation COO Mark Collier told me. Those users want to run both their legacy workloads and new workloads on the platform, but what those new workloads look like is changing. “For us, what we are seeing in terms of new workloads is a lot of demand for machine learning. That’s a very hot space and people see value in it very quickly.”
It’s probably no surprise then that one of the marquee new features in the Queens release is built-in support for vGPUs, that is, the ability to attach GPUs to virtual machines.
As Collier and OpenStack Executive Director Jonathan Bryce noted, until now, most users would opt for running bare-metal servers with GPUs for this, but that comes with its own administrative overhead for setting these machines up. Now, users can simply boot up a virtual machine with a vGPU and start running their scientific and machine learning workloads.
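For operators, the workflow looks roughly like the sketch below. The flavor, image and network names are hypothetical placeholders; the `resources:VGPU=1` extra spec is the mechanism Queens’ Nova uses to request a virtual GPU, and it assumes the compute hosts have already been configured with a supported GPU type.

```
# Define a flavor that asks the placement service for one virtual GPU.
openstack flavor create --vcpus 8 --ram 16384 --disk 100 vgpu-small
openstack flavor set vgpu-small --property resources:VGPU=1

# Boot a VM against that flavor; image and network names are placeholders.
openstack server create --flavor vgpu-small \
    --image ubuntu-16.04 --network private ml-training-vm
```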
In addition to support for vGPUs, OpenStack is also adding support for other hardware and software acceleration resources (think FPGAs, CryptoCards, etc.) thanks to the new Cyborg project, which can make these resources available either on their own or as part of the core OpenStack virtual machine platform, as well as for bare-metal deployments.
Unsurprisingly, just as in the public cloud space, the various OpenStack groups are also working on making containers a more integral part of the platform. “The containerization of everything continues,” as Collier noted. With this release, that specifically means the launch of the new Zun container service for OpenStack, which allows users to easily start and run containers without having to manage servers or clusters. Using some of the core OpenStack services, Zun handles the networking, storage and authentication necessary to run these containers.
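In practice, the Zun workflow is intentionally minimal. Assuming the service and its OpenStackClient plugin are installed, launching a container looks roughly like the following sketch (the container name, image and command are placeholder examples):

```
# Run a container directly on OpenStack; Zun picks a host and wires up
# Neutron networking and Keystone authentication behind the scenes.
openstack appcontainer run --name demo cirros ping 8.8.8.8

# Inspect and clean up.
openstack appcontainer list
openstack appcontainer delete --force demo
```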
With the Kuryr project, which also makes its debut in this release, OpenStack now adds improved support for Kubernetes, the de facto standard for container orchestration. Kuryr brings some of the native Kubernetes concepts like pods into OpenStack’s network stack.
Related to this, the OpenStack project is also turning to containers to bring OpenStack to the edge of the network. One new project, OpenStack-Helm, offers easier lifecycle management for OpenStack on top of Kubernetes (and lets you run individual OpenStack projects as independent services), while another new project, LOCI, offers container images of these services. Those two features make using OpenStack at the edge easier, though they obviously also help in managing complex OpenStack deployments in general.
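To give a rough idea of how the two projects fit together: LOCI builds a container image per OpenStack service straight from its git repository, and OpenStack-Helm then deploys those services as Helm charts on a Kubernetes cluster. The commands below are a hedged sketch based on the projects’ documented usage; the repository URL, image tag, chart path and namespace are assumptions that will differ in a real deployment.

```
# Build a container image for a single OpenStack service (Keystone) with LOCI.
docker build https://git.openstack.org/openstack/loci.git \
    --build-arg PROJECT=keystone --tag loci/keystone:queens

# Deploy that service onto Kubernetes with its OpenStack-Helm chart.
helm install ./keystone --name=keystone --namespace=openstack
```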
As Collier and Bryce also noted, this new release adds a number of new high-availability features to OpenStack, very much in response to the needs of the project’s users, which include many telcos and large enterprises ranging from eBay to Comcast and the Shenzhen Stock Exchange.
One emerging area the OpenStack teams are still looking at is serverless computing. So far, a few community projects are exploring this space, but there’s no official serverless OpenStack project. Bryce and Collier tell me that they are keeping their eyes open, though, and argue that many of the emerging open source serverless frameworks already rely largely on Kubernetes, which the project is fully embracing anyway.