Kubernetes is a platform for managing, scaling and automating containerized workloads. As the ecosystem grows, more and more use cases become available, and the architecture of Kubernetes allows for nearly limitless customization and extension of its functionality.
Because almost everything in Kubernetes, from the basic building blocks to custom extensions, is driven through its API, infrastructure itself can be abstracted as Kubernetes resources.
All of this makes Kubernetes a very nice fit for Infrastructure as Code endeavors, where it can function as the control plane for the entire cloud infrastructure, not just the workloads that run on the worker nodes. This is especially useful for multi-cloud deployments. So how can we do this? Let’s look at what makes up a modern cloud-native application.
A modern application
Modern, cloud-native applications usually need many separate resources to function: the runtime itself, message queues, datastores, configuration, log storage, metric storage and the credentials to connect to all of them.
All those resources need to be managed and connected, which used to be a tedious and error-prone activity. We have been automating this for quite some time, and most cloud providers offer APIs to help us do so.
What remains troublesome is managing the lifecycle of all these resources. Is the application still running, or can this database be decommissioned? Is this message queue still necessary? And what about these credentials? A CMDB is often used for this, but it usually becomes an administrative hurdle. What if we could automate all of this easily? What if we could define all these resources and their connections within Kubernetes?
Kubernetes Extensions
Kubernetes offers a few extension points that allow us to manage everything through one API. The first is the stream of API events, such as the creation of new objects, changes in state and more. The second is the ability to enhance existing building blocks with new fields, allowing for extra data and behavior on existing resources. Last but not least is the ability to create entirely new resource types through Custom Resource Definitions.
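As a minimal sketch of the first option, the program below uses the client-go library to watch the API server's event stream and react when Pods appear or disappear. The in-cluster configuration and the 30-second resync period are assumptions for the example.

```go
// Sketch: watching API events with a client-go shared informer.
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes the program runs inside a cluster
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Shared informers keep a local cache in sync with the API server
	// and notify us of every change.
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()

	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			pod := obj.(*corev1.Pod)
			fmt.Printf("pod created: %s/%s\n", pod.Namespace, pod.Name)
		},
		DeleteFunc: func(obj interface{}) {
			fmt.Println("pod deleted")
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	<-stop // keep watching until the program is stopped
}
```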
Custom Resource Definitions
Kubernetes allows for customization in several forms. One of these is the Custom Resource Definition, or CRD. CRDs allow new object types to be created inside the Kubernetes API. This can be something simple, used just to store information inside the platform; think of storing the details of the DevOps team responsible for one of the deployments.
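As a sketch of what such a definition could look like in practice: with the kubebuilder/controller-runtime tooling, a custom resource is usually defined as Go types from which the CRD manifest is generated. The Team kind and its fields below are purely illustrative.

```go
// A hypothetical "Team" custom resource, defined as Go types from which the
// kubebuilder tooling generates the actual CustomResourceDefinition manifest.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// TeamSpec holds the information we want to store about a DevOps team.
type TeamSpec struct {
	// Name of the team responsible for a set of deployments.
	Name string `json:"name"`
	// Contact channel, e.g. an e-mail address or chat handle.
	Contact string `json:"contact,omitempty"`
	// Deployments this team owns, by name.
	Deployments []string `json:"deployments,omitempty"`
}

// +kubebuilder:object:root=true

// Team is stored in the Kubernetes API like any built-in resource.
type Team struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec TeamSpec `json:"spec,omitempty"`
}
```

Once the generated CRD is applied to the cluster, Team objects can be created, listed and queried like any built-in resource.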
What makes CRDs truly powerful is combining them with controllers that add new behavior to the Kubernetes ecosystem; many enhancements to Kubernetes follow exactly this pattern.
Controllers and Operators
A controller within Kubernetes makes use of the basic building blocks such as Deployments, Services and Service Accounts; from that perspective it is the same as any other application. What makes controllers special is that they connect to the Kubernetes control plane, where they listen for events such as the creation of a resource in the API, the stopping of a deployment and more.
Controllers can act upon these events and API requests and ensure that our infrastructure is in the required state. A new message queue needs to be created? The controller will make it happen. A deployment is stopped? Perhaps we can decommission or at least pause the cloud resource to save costs.
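A minimal sketch of such a reconcile loop, built on the controller-runtime library, is shown below. The MessageQueue kind (group example.org) and the QueueProvisioner interface are illustrative assumptions standing in for a real CRD and a real cloud API client.

```go
// Sketch: a reconcile loop that keeps a cloud message queue in sync with a
// hypothetical MessageQueue custom resource.
package controller

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// QueueProvisioner stands in for a cloud provider's message queue API.
type QueueProvisioner interface {
	Ensure(ctx context.Context, name string) error
	Delete(ctx context.Context, name string) error
}

type MessageQueueReconciler struct {
	client.Client
	Queues QueueProvisioner
}

// Reconcile is called whenever a MessageQueue resource changes; it drives the
// real queue towards the state declared in the cluster.
func (r *MessageQueueReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var mq unstructured.Unstructured
	mq.SetGroupVersionKind(schema.GroupVersionKind{
		Group: "example.org", Version: "v1alpha1", Kind: "MessageQueue",
	})

	if err := r.Get(ctx, req.NamespacedName, &mq); err != nil {
		if apierrors.IsNotFound(err) {
			// The resource was deleted: decommission the cloud-side queue.
			return ctrl.Result{}, r.Queues.Delete(ctx, req.Name)
		}
		return ctrl.Result{}, err
	}

	// The resource exists: make sure the actual queue matches it.
	return ctrl.Result{}, r.Queues.Ensure(ctx, req.Name)
}
```

In a full setup this reconciler would be registered with a controller-runtime manager that watches MessageQueue objects and calls Reconcile on every change.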
Cloud Provider Support
Since extending Kubernetes is relatively simple, we could write CRDs and controllers to manage cloud resources ourselves. However, all the major cloud providers have recognized this shift from traditional Infrastructure as Code towards a more API-driven approach through the Kubernetes control plane. Azure, AWS and Google Cloud have each created supported operators, CRDs combined with controllers, that allow cloud resources to be managed through Kubernetes.
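From the client side, consuming such an operator means creating just another Kubernetes object; the provider's controller then reconciles the real cloud resource. The sketch below uses a placeholder group, version and spec, since each provider's operator defines its own types.

```go
// Sketch: declaring a cloud resource as a Kubernetes object via the dynamic client.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes in-cluster credentials
	if err != nil {
		panic(err)
	}
	dyn := dynamic.NewForConfigOrDie(cfg)

	// Placeholder resource type standing in for a provider operator's kind.
	gvr := schema.GroupVersionResource{
		Group: "storage.example-cloud.org", Version: "v1alpha1", Resource: "buckets",
	}

	bucket := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "storage.example-cloud.org/v1alpha1",
		"kind":       "Bucket",
		"metadata":   map[string]interface{}{"name": "invoices"},
		"spec":       map[string]interface{}{"region": "eu-west-1"},
	}}

	// The provider's controller picks this object up and creates the real bucket.
	_, err = dyn.Resource(gvr).Namespace("default").Create(context.Background(), bucket, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
```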
So is Kubernetes the new cloud control plane? I believe the current developments are very promising. As the ecosystem grows and matures, we will increasingly see Kubernetes become the focal point of cloud infrastructure.