During my PhD I developed a deep appreciation for declarative programming. When I joined LogicBlox a few years back (two jobs ago, I’m not proud to say), I wrote an article about this: Declare Everything.
At LogicBlox I also did a lot of DevOps-related work in the context of the Nix project, where I worked with NixOS and NixOps. NixOps let you define networks of machines and the software running on them by declaratively specifying the state you wanted, rather than the steps to get there. Making changes then simply involved editing the specification (written in the Nix language) and running “deploy.” NixOps would rebuild the system and figure out which changes were needed to realize the new version of your specification.
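The workflow looked roughly like this (the deployment name and file name here are made up for illustration):

```shell
# Create a deployment from a network specification written in Nix,
# then push it out; NixOps builds the systems and applies the result.
nixops create ./network.nix -d my-network
nixops deploy -d my-network

# After editing network.nix, the same command reconciles the running
# machines with the new specification.
nixops deploy -d my-network
```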
NixOps was (and still is) cool, but it’s very static. That is, you have a bunch of text files that specify the system you want to have, and that’s exactly what it will build. The system doesn’t adapt to its environment automatically: when a node dies, it doesn’t detect that and fix it (without rerunning the deployment, at least). There’s not much intelligent scheduling either. You can’t tell it “hey, here’s a service, I’d like 20 instances, figure out where to deploy them, go make it so!”
And in Kubernetes you can.
If you tell it to run 50 instances of some container, it will attempt to do so. If you already had 100 instances running, it will kill half of them. If you didn’t have anything running yet, it will spin up 50. If one of your cluster’s nodes dies and it happened to be running 8 instances of your service, Kubernetes will select other nodes to run those 8 instances on.
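For example, scaling a service up or down is a single declarative command (the deployment name “my-app” is illustrative):

```shell
# Declare the desired replica count; Kubernetes works out whether
# to start new pods or terminate existing ones to reach it.
kubectl scale deployment my-app --replicas=50
```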
If you want to upgrade your service, you can specify a roll-out strategy, again declaratively. This strategy specifies how Kubernetes should implement upgrades. For instance, here’s an example Kubernetes “Deployment”:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: org/my-app:v1
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
          timeoutSeconds: 2
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
          timeoutSeconds: 2
        ports:
        - containerPort: 8080
This specifies a few things:
- A deployment that should always run 3 instances of “my-app.”
- “my-app” consists of one container, for which both a readiness and a liveness probe are specified. Kubernetes will use these to see if the container is… well, ready, and still alive (if it’s not alive, it’ll kill it and start a new instance).
- The rollingUpdate strategy specifies that upon an upgrade (and I’ll mention how to perform upgrades later) Kubernetes will first spin up 1 (maxSurge) instance of the new version, wait for it to become ready, and only then terminate one of the old containers. When that’s done it will move on to the next one, and so on. This ensures there are always 3 healthy containers running (maxUnavailable: 0).
How do you run this deployment? If you put the spec above in a file called “my-app-deployment.yml,” you can simply run:
kubectl apply -f my-app-deployment.yml
Then, to deploy a new version, simply edit the file, change the “image” to e.g. “org/my-app:v2” and run the same command again. You can watch new containers being spun up, and as they become healthy, old ones being terminated, one by one.
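You can also update the image without editing the file at all, and watch the roll-out progress from the command line (deployment and container names as in the spec above):

```shell
# Change the image declaratively from the command line...
kubectl set image deployment/my-app my-app=org/my-app:v2

# ...and watch Kubernetes execute the rolling update until done.
kubectl rollout status deployment/my-app
```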
Kubernetes makes it so. Coolio.