k8s with cue and kpt

no more text templates

SEAN K.H. LIAO

k8s with cue and kpt

I recently reset my k8s cluster, and with that wiped away my previous experiment of deploying things to it.

Previously, I wanted to do gitops at home, but I didn't want to run argo cd, which I already do for work, and I didn't really want flux either, which is how I ended up using the open source version of Config Sync. It was fine, but also really strict (a validating webhook prevented any modification outside of the gitops flow), and somewhat hard to recover from errors (at least without any ui). I also used it with plain yaml manifests, not liking any of the templating / modification tools at the time.

Looking around this time, I wanted something smarter than kubectl, especially for pruning (the applyset KEP seems to be in limbo atm). Having a strong dislike for Helm, I ruled it and Timoni out of the picture. From the applyset KEP, the other listed options were Carvel kapp and kpt. I liked kpt's approach better, so that's what I chose to move forward with.

While I like the concept behind kpt and config as data, passing resources through a pipeline of KRM functions, I wasn't all that enthused about how they're implemented in practice (docker containers), so I decided I needed something else to generate manifests, and to use kpt just as a smarter applier.

I briefly entertained the idea of defining manifests in Go code and just bundling all the functionality from kpt, but decided it probably wasn't quite worth the effort. The next best thing appears to be generating the manifests from cue: at least there's some level of type checking and reuse, even if it is somewhat clunky to have to run 2 commands every time you change something.

cue

We finally get to using cue: initially I started with a model similar to how timoni is set up, but decided that was too much flexibility for too little help in structuring things. Instead, I landed on a structured tree for how everything would be laid out: rather than repeating apiVersion / kind in every object, they become keys in the tree, along with namespace and name, a bit like function args or terraform resources. With a fixed structure, I could fill in the TypeMeta and partial ObjectMeta for every object, while ensuring that objects are validated against their specs.

package deploy

import (
	corev1 "k8s.io/api/core/v1"
)

k8s: [kgroup=string]: [kversion=string]: [kkind=string]: [kns=string]: [kn=string]: {
	if kgroup == "" {
		apiVersion: kversion
	}
	if kgroup != "" {
		apiVersion: kgroup + "/" + kversion
	}
	kind: kkind
	metadata: name: kn
	if kns != "" {
		metadata: namespace: kns
	}
}

k8s: {
	"": v1: {
		ConfigMap: [kns=string]: [kn=string]:             corev1.#ConfigMap
		LimitRange: [kns=string]: [kn=string]:            corev1.#LimitRange
		PersistentVolumeClaim: [kns=string]: [kn=string]: corev1.#PersistentVolumeClaim
		Pod: [kns=string]: [kn=string]:                   corev1.#Pod
		Secret: [kns=string]: [kn=string]:                corev1.#Secret
		Service: [kns=string]: [kn=string]:               corev1.#Service
		ServiceAccount: [kns=string]: [kn=string]:        corev1.#ServiceAccount

		Namespace: [kns=""]: [kn=string]: corev1.#Namespace
	}

	// other apigroups
}
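
With the schema above in place, declaring an object only needs its own fields. A made-up ConfigMap (not something I actually run) would slot in like this, picking up apiVersion, kind, and metadata from the constraints:

package deploy

// hypothetical example: the tree fills in
// apiVersion: "v1", kind: "ConfigMap",
// metadata: {namespace: "default", name: "app-config"}
k8s: "": v1: ConfigMap: "default": "app-config": {
	data: greeting: "hello"
}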

Generating the manifests for kpt then becomes a matter of flattening the tree into a list and sending that to yaml.MarshalStream. By defining these at the root directory, and creating apps in subdirectories while still sharing the same package name, the resulting command can be called with cue cmd k8smanifests from each subdirectory.

package deploy

import (
	"encoding/yaml"
	"list"
	"tool/file"
	"tool/os"
)

k8slist: list.FlattenN([for _group, versions in k8s {
	[for version, kinds in versions {
		[for kind, namespaces in kinds {
			[for namespace, names in namespaces {
				[for name, obj in names {
					obj
				}]
			}]
		}]
	}]
}], -1)

// declared in a *_tool.cue file so cue cmd can pick it up
command: k8smanifests: {
	env: os.Getenv & {
		SKAFFOLD_IMAGE?: string
	}

	output: file.Create & {
		filename: "kubernetes.yaml"
		contents: yaml.MarshalStream([for obj in k8slist {
			obj & {
				#config: {
					image: env.SKAFFOLD_IMAGE
				}
			}
		}])
	}
}
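
The #config unification only does anything for apps that opt in; since it's a definition field, it's dropped from the yaml output for everything else. An app built locally through skaffold would, roughly, declare and reference the image like this sketch (names made up):

package deploy

// hypothetical app: #config.image gets filled in by the
// k8smanifests command above from $SKAFFOLD_IMAGE
k8s: apps: v1: Deployment: "default": "my-app": {
	#config: image: string
	spec: template: spec: containers: [{
		name:  "my-app"
		image: #config.image
	}]
}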

The pattern I've found to create "functions" looks like the following, where out is unified into the place I want to use it.

#LabelSelector: {
	#args: {
		labels: [string]: string
	}

	out: {
		metadata: labels: #args.labels
		spec: selector: matchLabels: #args.labels
		spec: template: metadata: labels: #args.labels
	}
}

As an example:

package deploy

k8s: apps: v1: Deployment: "kube-system": {
	"softserve": (#LabelSelector & {
		#args: labels: {
			"app.kubernetes.io/name": "softserve"
		}
	}).out
	"softserve": {
		spec: revisionHistoryLimit: 1
		spec: strategy: type: "Recreate"
		spec: template: spec: {
			containers: [{
				image: "ghcr.io/charmbracelet/soft-serve:v0.7.4"
				name:  "softserve"
				ports: [{
					containerPort: 9418
					name:          "git"
				}, {
					containerPort: 23231
					hostPort:      23231
					name:          "git-ssh"
				}, {
					containerPort: 23232
					name:          "git-http"
				}, {
					containerPort: 23233
					name:          "stats"
				}]
				volumeMounts: [{
					mountPath: "/soft-serve"
					name:      "data"
				}]
			}]
			enableServiceLinks: false
			volumes: [{
				hostPath: path: "/opt/volumes/softserve"
				name: "data"
			}]
		}
	}
}
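
For completeness, a companion Service (purely illustrative, not part of my actual config) would reuse the same label as its selector:

package deploy

// hypothetical Service selecting the deployment's pods by the shared label
k8s: "": v1: Service: "kube-system": "softserve": {
	spec: selector: "app.kubernetes.io/name": "softserve"
	spec: ports: [{
		name: "git-http"
		port: 23232
	}]
}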