Thousands of Containers Without Kubernetes?

Marketing departments and analysts love buzzwords and want to hear every vendor using them, whether or not they deliver value. Kubernetes is a fantastic container orchestrator with its origins in Google and a footprint in every public and almost every private cloud in the world. But like any tool, Kubernetes is not the solution to all your problems, not even to every container orchestration problem. Kubernetes is designed to orchestrate groups of containers within a single physical location. Each location needs its own Kubernetes cluster, often with three nodes as a control plane and more nodes for the workload. Tools such as K3s consolidate all the roles onto a single host, but a local Kubernetes cluster is still required to manage the containers. Requiring a cluster per site isn't a big issue at cloud scale, where many containers run at a relatively small number of locations. Managing a dozen clusters that each run thousands of containers is where Kubernetes excels.

Kubernetes becomes a problem with edge deployments, where the numbers are reversed. With a few containers running at each of thousands of locations, the overhead of running Kubernetes at every location becomes significant and the value of Kubernetes decreases. At the edge, managing your containers across locations matters far more than managing the containers within each location. You may need to ensure that edge sites in France get the version of your application that stores its data in a French data center, or that a machine learning application is only delivered to sites with GPUs installed.
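
To make that concrete, here is a minimal sketch in Python of how tag-based placement might decide which sites receive which application version. This is my illustration, not Avassa's actual data model or API; the site names and labels are invented for the example.

```python
# Illustrative sketch of tag-based placement, not Avassa's real data model.
# Each edge site carries labels, some inherited from the hardware inventory
# (cpu, gpu), some set by an administrator (country).

sites = [
    {"name": "paris-01",  "labels": {"country": "fr", "cpu": "x86", "gpu": True}},
    {"name": "lyon-02",   "labels": {"country": "fr", "cpu": "arm", "gpu": False}},
    {"name": "berlin-01", "labels": {"country": "de", "cpu": "x86", "gpu": False}},
]

def matching_sites(sites, required_labels):
    """Return the names of sites whose labels satisfy every required key/value pair."""
    return [
        site["name"]
        for site in sites
        if all(site["labels"].get(k) == v for k, v in required_labels.items())
    ]

# The French build of the app (data stored in a French data center)
# goes only to sites labelled country=fr.
print(matching_sites(sites, {"country": "fr"}))   # ['paris-01', 'lyon-02']

# The machine learning application goes only to sites with a GPU installed.
print(matching_sites(sites, {"gpu": True}))       # ['paris-01']
```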

In the Avassa presentation at Edge Field Day, we saw container deployments managed across fifty edge sites, with tags controlling which applications were deployed to which locations. The tags on each site are either inherited from the hardware inventory, such as x86 or ARM CPU architecture, or set by the platform administrator. We saw Avassa orchestrate the deployment of new applications (containers) and updates to existing containers across a selection of the sites based on tags. I was particularly interested to see Avassa handle canary deployments, requiring manual approval before broader rollout. While the demos used the Avassa web console, most application deployments will be automated, likely driven by a CI/CD pipeline. Most customers will probably handle the canary release through the pipeline, using different Avassa tags to identify the canary and any phased deployment, in keeping with the principles of DevOps and fast application development.
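
As a rough illustration of how a pipeline might drive that pattern, the sketch below uses hypothetical deploy() and canary_healthy() helpers as stand-ins for whatever calls your automation makes to the orchestration API and your monitoring. The tag names and the approval step are assumptions for the example, not Avassa's documented workflow.

```python
# Hypothetical CI/CD canary flow; deploy() and canary_healthy() are placeholders
# for real calls to the orchestration API and monitoring, which are not shown here.

def deploy(app_version: str, site_selector: dict) -> None:
    """Placeholder: push app_version to all sites matching site_selector."""
    print(f"deploying {app_version} to sites matching {site_selector}")

def canary_healthy(app_version: str) -> bool:
    """Placeholder: check metrics and health for the canary sites."""
    return True

def release(app_version: str) -> None:
    # Phase 1: canary sites only, identified by a dedicated tag.
    deploy(app_version, {"deploy-ring": "canary"})

    # Phase 2: gate on health checks plus a manual approval step,
    # which most pipelines model as a hold/approval stage.
    if not canary_healthy(app_version):
        raise RuntimeError("canary unhealthy, aborting rollout")
    input("Canary looks good. Press Enter to approve broader rollout... ")

    # Phase 3: the remaining sites, again selected by tag.
    deploy(app_version, {"deploy-ring": "general"})

release("shop-frontend:2.4.1")
```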

Avassa will manage your containerized applications across your distributed edge, but you will need another product to bootstrap the edge compute with a Linux operating system and the Avassa agent. The agent phones home to the multi-tenant Avassa web console, which controls the application deployment process. There are a couple of ways to associate an agent with a particular customer, including certificates or an ID generated at the edge site and entered manually into the console. Usually, the same method used to deploy the Avassa agent will also handle the association between the agent and the tenant.
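
For the manually entered ID variant, the flow is conceptually simple. Here is a minimal sketch of a site generating a short one-time ID that an operator then types into the console to claim the site for a tenant; this is my illustration of the general pattern, not Avassa's actual enrollment protocol.

```python
import secrets

# Illustrative enrollment-by-ID flow, not Avassa's actual protocol.

pending_sites: dict[str, str] = {}        # one-time ID -> site hostname
tenant_sites: dict[str, list[str]] = {}   # tenant -> claimed site hostnames

def agent_enroll(hostname: str) -> str:
    """On the edge site: generate a one-time ID and display it to the installer."""
    one_time_id = secrets.token_hex(4)    # e.g. 'a3f91c02'
    pending_sites[one_time_id] = hostname
    return one_time_id

def console_claim(tenant: str, one_time_id: str) -> str:
    """In the console: an operator enters the ID to bind the site to a tenant."""
    hostname = pending_sites.pop(one_time_id)   # raises KeyError if unknown
    tenant_sites.setdefault(tenant, []).append(hostname)
    return hostname

shown_id = agent_enroll("store-0042")
print(f"Enter this ID in the console: {shown_id}")
print(console_claim("acme-retail", shown_id))   # 'store-0042'
```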

I like what Avassa showed at Edge Field Day and look forward to getting some more hands-on time when the opportunity arises. If you need to manage container applications at your distributed edge, take a look at Avassa's videos and the other presenters at Edge Field Day.
