additional kubernetes controllers from fabric8 you can use with your microservice

James Strachan
Published in fabric8 io · Nov 4, 2016 · 4 min read


The folks from CoreOS just announced their idea of adding Operators to Kubernetes to manage stateful applications with ease. It’s a very compelling idea! We’ve been thinking along similar lines on the fabric8 project — more on that later.

thoughts on CoreOS Operators

I’m particularly looking forward to a ZooKeeper Operator that works like CoreOS’s etcd operator, so you can scale your ZooKeeper cluster up or down just by changing one number via the CLI, REST API or UI, and the Operator figures out the details of updating quorum, services/DNS, state and so forth.
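To make that concrete, here’s a hedged sketch of what such a resource might look like; the apiVersion, kind and field names below are illustrative, not necessarily the etcd operator’s actual schema:

apiVersion: etcd.coreos.com/v1beta1   # illustrative group/version
kind: EtcdCluster
metadata:
  name: example-etcd
spec:
  size: 5           # the one number you change; the Operator reconciles membership, quorum and DNS
  version: "3.1.0"  # illustrative field: the desired etcd version

Changing size and re-applying the resource is the whole scaling operation; everything else is the Operator’s job.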

Though as an end user, I’d kinda like regular Kubernetes Deployment semantics for etcd or ZooKeeper pods, so I could scale them in the usual way (e.g. via kubectl scale) and have the Operator take over and wire up the cluster after the Deployment size changes, rather than scaling by editing the ThirdPartyResource for the Operator. That way Kubernetes feels a bit more polymorphic and it’s easier for tools to work with all scalable Operators. I realise that might be harder, as the Operator may want to do things before the Deployment controller scales up; though could that be done with init containers?

Fabric8 Controllers

We’ve been using the term controllers in fabric8 rather than operators, but I thought I’d mention a few of the ones we’ve created to help us automatically configure and manage our microservices inside the fabric8 developer platform.

Here are the controllers we’ve created so far:

expose controller

The expose controller takes care of externally exposing any service that has the label:

expose=true

Within the Kubernetes ecosystem there are various ways to expose services:

  • Ingress on Kubernetes
  • Routes on OpenShift (which pre-date Ingress and may be replaced long term by Ingress)
  • LoadBalancer Service types on public cloud providers like GKE, which use a cloud-provider-generated public IP (which adds cost)
  • NodePort when developing on your laptop, which uses ports on a host node to avoid needing DNS and/or a load balancer

So rather than hard-coding your app to pick one of the above, we tend to make our microservices just use the expose=true label and run the expose controller microservice as part of fabric8.
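For example, a minimal Service carrying the label might look like this (the name and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    expose: "true"    # the expose controller watches for this label
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080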

Then when running on OpenShift we default to Routes; on Kubernetes to Ingress; and on minikube or minishift to NodePorts, as that avoids having to run routers, wildcard DNS and so forth (which can be flaky on laptops).

Another feature of the expose controller is injecting the external URLs of services into a ConfigMap. For example, when you run a git hosting service like gogs or gitlab, you need to tell it the public URL of the service so that its web application can show you the right git clone command to run from your laptop. Similarly, web consoles that want to use OAuth to log clients in need to know where the OAuth authorize URL is.

So the expose controller can inject these values at runtime if the ConfigMap for an exposed service is annotated with the expose controller annotations.
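As a rough sketch (the annotation key and ConfigMap entry below are assumptions for illustration; check the expose controller documentation for the exact annotation names it supports), an annotated ConfigMap might look like:

apiVersion: v1
kind: ConfigMap
metadata:
  name: gogs
  annotations:
    expose.config.fabric8.io/url-key: "gogs.url"   # assumed annotation: names the data entry to receive the external URL
data:
  gogs.url: ""   # filled in by the expose controller once the service is exposed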

configmap controller

One issue with using the expose controller is that after an app is deployed, the external URL can change as, say, an Ingress is created, leading to a change in the ConfigMap for the app. However, sometimes the values in the ConfigMap are consumed as environment variables, so the pod would keep running with the old configuration.

To fix this the configmap controller will automatically trigger a rolling update of the application whenever its ConfigMap changes.

This turns out to be really handy for any app that uses a ConfigMap to store configuration which is loaded on startup (rather than mounting the ConfigMap as a volume and then watching the file system for changes inside the pod).

To see this in action check out this spring boot web MVC application. It mounts an application.properties from the ConfigMap as a volume, which spring boot then reads on startup. So if we edit the ConfigMap (e.g. via the kubectl command line or via the fabric8 developer console), by default the app would keep running with the old configuration.
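The relevant part of the Deployment’s pod template might look something like this (the names and mount path are illustrative, assuming spring boot is set up to read application.properties from that directory):

spec:
  containers:
  - name: app
    volumeMounts:
    - name: config
      mountPath: /deployments/config   # assumed location spring boot reads application.properties from
  volumes:
  - name: config
    configMap:
      name: foo   # the ConfigMap holding application.properties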

However if we add the configmap.fabric8.io/update-on-change annotation to the Deployment like this:

metadata:
  annotations:
    configmap.fabric8.io/update-on-change: "foo"

Then whenever the ConfigMap’s configuration data changes, the configmap controller will update the Deployment to add or update an environment variable with the configuration contents, which then triggers a rolling upgrade of your application! Neat eh!
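As a rough sketch of the effect (the environment variable name and value format here are assumptions, not necessarily the controller’s exact output), the pod template ends up with something like:

spec:
  template:
    spec:
      containers:
      - name: app
        env:
        - name: FABRIC8_FOO_CONFIGMAP   # assumed name, derived from the ConfigMap "foo"
          value: "..."                  # a digest of the ConfigMap data; any change here forces a rolling update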

git controller

The git controller is like the configmap controller but for when your Deployment mounts a revision of a git repository into a volume — e.g. configuration files or CMS web content. The idea is that it watches all the git repositories used by your Deployments and, if a git commit occurs on the branch of a repo, performs a rolling upgrade of the Deployment by pointing the git repository volume at the new commit.
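For instance, using Kubernetes’ gitRepo volume type, the relevant part of a pod template might look like this (the repository URL and revision are illustrative); the git controller’s job is to advance the revision when new commits land on the branch:

volumes:
- name: content
  gitRepo:
    repository: "https://github.com/example/site.git"   # illustrative repository
    revision: "b6fd1b2"   # the commit the volume is pinned to; the controller updates this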

So it’s very much like the configmap controller, but with the added benefit of a change audit log, history and an easy way to revert changes.

summary

So those are the 3 controllers we’ve created so far on the fabric8 project. I’m sure we’ll create more as and when required.

One of the things I particularly love about Kubernetes is that it promotes a very Unix-like philosophy: create lots of small microservices that each solve a particular problem; then they are super easy to compose and reuse across projects, languages and frameworks!

Have you created any custom controllers for Kubernetes, or are you thinking of any? Any controllers you wish you had? Let us know!


I created Groovy and Apache Camel. I’m currently working at CloudBees on Jenkins X: automated CI/CD for Kubernetes: https://jenkins-x.io/