let’s edgifi blog
Understanding the limitations of using Kubernetes at the edge
Want to have some fun over the weekend? Try creating a Kubernetes cluster using cell phones as a farm of worker nodes. It’s not fun. I know. I’ve tried it.
Kubernetes is essentially an orchestration framework for Linux containers. Getting a Linux container to run on a cell phone is hard, really hard. All the tools and capabilities that developers enjoy when working with a full installation of Linux on an x86 or even a Raspberry Pi computer are luxuries when working with a cell phone’s operating system. While Android is built on the Linux kernel and iOS descends from Unix by way of BSD, the kernel features and tooling you need in order to run Linux containers are missing on a cell phone. Getting something as commonplace as an nginx container up and running on an iPhone is akin to rocket science, even for someone who understands the details of Kubernetes. For a beginner, fuhgeddaboudit.
Thus, no containers, no Kubernetes. It’s that simple. Wish it were easy, but it’s not. Still, understanding why running Kubernetes on cell phones is so hard is useful information, particularly for those of us, myself included, who harbor such fantasies. As they say, the devil is always in the details, and when it comes to running Kubernetes on cell phones the details count. So, let’s look at them.
The place to start is understanding how containers run on a Linux computer.
Containers are isolated Linux processes
The most important thing to understand is that containers do not run under a container manager such as Docker or Podman. Rather, containers run as independent Linux processes that are virtually isolated from other processes. The container manager is a helper toward that end.
Container isolation is created using features available in the Linux kernel. Thus, you can think of a Linux container as an isolated process that runs on top of the Linux kernel. (See Figure 1, below.)
Figure 1: A container is a Linux process that runs in virtual isolation on top of the Linux kernel
The Linux kernel on which a container runs can be hosted on a virtual machine or on bare metal. The component that does the work of creating and isolating a container process is called the container runtime. Examples of container runtimes are containerd, runc, and rkt. The role of a container manager such as Docker is to present a way for humans or machines to work with the container runtime. Let’s take a look at the work the container runtime does in order to create a container.
Creating a container
As mentioned above, the role of the container runtime is to create and manage the lifecycle of a container. When a container manager such as Docker contacts the container runtime (containerd, for example) to create a container, the runtime does four things.
First, the container runtime creates the container’s Linux process.
Second, it dedicates the Linux process to custom Linux namespaces. According to the Linux manual, a namespace wraps a global system resource in an abstraction that makes it appear to the processes within the namespace that they have their own isolated instance of the global resource. Dedicating a process to its own namespaces creates the essential isolation that a container requires.
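You can try the namespace step yourself from a terminal. The following is a minimal sketch, assuming a Linux host with util-linux’s unshare command and unprivileged user namespaces enabled; the hostname change is visible only inside the new UTS namespace, while the host’s hostname is untouched:

```shell
# Run a command inside fresh user + UTS namespaces (no root needed on
# most modern kernels thanks to --map-root-user).
unshare --user --map-root-user --uts sh -c '
  hostname demo-container   # changes the hostname only in this namespace
  hostname                  # prints "demo-container"
'
hostname                    # the host still reports its original name
```

A real container runtime does the same thing for several namespaces at once (pid, net, mnt, uts, ipc, user), which is what makes the process believe it has a machine to itself.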
Third, after creating the namespaces, the container runtime assigns cgroups (control groups) to the process. A cgroup defines how a particular process can use system resources. For example, you can use cgroups to limit how much memory or CPU a process can use, as well as assign network and disk access priority.
Finally, the container runtime creates an overlay filesystem for the container. The overlay filesystem creates a special layer on the host filesystem that makes it seem as if the container has its own files, even at the OS level. (See Figure 2, below.)
Figure 2: The container creation process executed by the container runtime
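The overlay step can be sketched by hand. This assumes a Linux kernel new enough (5.11+) to allow overlay mounts inside an unprivileged user + mount namespace; the "lower" directory stands in for a read-only image layer, and writes land in "upper" without touching it:

```shell
# Set up the directories an overlay mount needs.
mkdir -p /tmp/ovl/lower /tmp/ovl/upper /tmp/ovl/work /tmp/ovl/merged
echo "from the image layer" > /tmp/ovl/lower/motd

unshare --user --map-root-user --mount sh -c '
  mount -t overlay overlay \
    -o lowerdir=/tmp/ovl/lower,upperdir=/tmp/ovl/upper,workdir=/tmp/ovl/work \
    /tmp/ovl/merged
  cat /tmp/ovl/merged/motd                         # reads through from lower
  echo "written by the container" > /tmp/ovl/merged/motd
'

cat /tmp/ovl/lower/motd   # the "image layer" is unchanged
cat /tmp/ovl/upper/motd   # the container write landed in the upper layer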
In short, as shown in Figure 3 below, give a Linux process a namespace, assign it to cgroups and an overlay filesystem, and you end up with a Linux container.
Figure 3: Linux containers combine Linux kernel features around a Linux process
Now, while the container creation process seems pretty straightforward at the conceptual level, actually making containers involves a lot of hard work on the part of the container runtime, all of it executed in milliseconds.
As recent history shows, Linux containers have caught on like wildfire. Today they’re the cornerstone of the container orchestration technology Kubernetes, which, by the way, has also caught on like wildfire.
But as powerful as containers and their descendant Kubernetes pods are, they are not a one-size-fits-all solution, particularly when it comes to distributed IoT architectures that use mobile phones and tablets.
The challenges of getting containers running on a cell phone are anything but trivial. There are some significant hurdles to overcome.
The first hurdle to overcome in getting a container up and running on a cell phone is installing a container manager and container runtime on the device. Taking the simplest approach, this means that you have to SSH into the phone and download the release files for the container manager and runtime, which are probably in a compressed format. Then you have to install them.
This might be a simple enough task if you had a terminal prompt to work with. But, out of the box on a cell phone, you don’t. So you have to install a terminal app from an app store.
Then, once you get the terminal up and running, there’s no guarantee that your phone will have all the utilities that you need. There’s no wget or curl. There are probably no zip or tar utilities installed to extract the container manager and container runtime from the downloads.
You’ll have to do a lot of work just to get the files. And, once you have the container manager and container runtime on the cell phone, there’s no guarantee they’ll work. Remember, containers rely upon a lot of low-level features in the Linux kernel. They might be there; they might not.
Now, let’s say by some miracle you do get a container to load in your cell phone. You still have a long way to go to actually turn it into a worker node that can be part of a Kubernetes cluster. That’s another bucket of work that’s just as detailed and fraught with potential errors. Kubernetes has more moving parts than containers. If any one of those parts fails to work as expected, you’re in for some hurt.
In short, getting containers and Kubernetes to run on a cell phone is a crapshoot; a time-consuming, labor-intensive crapshoot with little, if any, guarantee of success.
Addressing the issue
So, then what’s to be done?
In terms of getting Linux containers to run on a cell phone or mobile tablet, the question to ask is: why?
Containers in general and Kubernetes in particular have their origins in the datacenter. Containers came about as a way to increase the efficiency of process isolation beyond the capabilities of virtual machines. Containers load very fast, on the order of milliseconds. Loading a VM can take minutes.
Also, the ecosystem for distributing a container is built around the container image repository, of which Docker Hub is the most familiar example. The container image is the template that describes the parts necessary to create a container at runtime. If the container image exists on the local machine, the container manager will use the local copy. If not, the container manager is smart enough to figure out how to get the required container image from a repository on the internet.
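That pull-if-missing behavior is visible from the command line. A sketch, assuming a machine with Docker installed and network access to Docker Hub:

```shell
# The manager checks the local image cache first and only contacts the
# registry when the image is missing.
docker image inspect nginx:alpine >/dev/null 2>&1 \
  || docker pull nginx:alpine            # fetch from Docker Hub if absent

docker run --rm nginx:alpine nginx -v    # subsequent runs reuse the local copy
```

No human intervention is required anywhere in that loop, which is precisely what makes the pattern a good fit for datacenter automation.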
Cell phones and mobile tablets, on the other hand, use the app store model. When you want to add an app to your cell phone you go to the Apple App Store or Google Play and intentionally download the app. It’s not a process that lends itself easily to the type of automation that’s used in a data center. The app store pattern is essentially built around human instigation.
On the other hand, the app store pattern is a lot easier to use than the container image repository pattern. The app store pattern is a click-and-download process. This is its virtue: it’s a hardened process that’s hard to break. Container automation is a lot more fragile.
The long and short of it is that if you’re looking to make mobile devices such as cell phones or tablets part of a distributed architecture, make sure they’re being used in a way that makes sense. For example, there’s a good case to be made that a cell phone can be a valuable contributor to a larger distributed system by providing locale-based face recognition capability. But expecting that cell phone to provide that capability as part of a Kubernetes cluster doesn’t really make sense when you consider the time and labor required to make it happen.
An alternative approach is to devise a distributed architecture that’s compatible with the mobile computing ecosystem, particularly around distributing applications and components.
As they say, when in Rome, do as the Romans do. The adage rings true when thinking about creating distributed architectures that use mobile devices. Or, if you have the time, tolerance, and expertise, you can devote a weekend of your life to trying to get a Linux container to run on a cell phone. If you have both pleasure and success making it all happen, by all means, please let me know. This is a case where I’d love to be proven wrong.
Did you know? In environments that cannot run container daemons (e.g., smartphones), mimik’s edgeEngine provides “light” container capabilities with the ability to download, deploy, and operate microservices.