With terraform resources,
you have things like aws_s3_bucket,
which has a force_destroy argument
to keep you from accidentally destroying a bucket
that still contains important data.
In normal terraform usage,
if you forget to set it,
the destroy errors out on the non-empty bucket;
you can go back, set it to true in your code,
and run it again to destroy things.
At $work, we use hashicorp/terraform-k8s
(it's really bad, do not recommend).
Here, destroy runs are triggered by deleting the Workspace
custom resource,
which leads to the unfortunate situation where,
if you forgot to set force_destroy beforehand,
there's no way to go back and update it to true,
since the Workspace object is already in a deleting state
and the operator won't pick up new changes to apply first.
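For reference, the Workspace objects for these buckets
look roughly like the sketch below
(the apiVersion is what I recall terraform-k8s using,
and the names and values are illustrative;
what matters is spec.module.source and spec.variables,
which the policy further down keys off):

apiVersion: app.terraform.io/v1alpha1
kind: Workspace
metadata:
  name: team-foo-bucket
spec:
  organization: my-org
  module:
    # the policy filters on this containing "s3-bucket"
    source: terraform-aws-modules/s3-bucket/aws
  variables:
    - key: bucket
      value: team-foo-bucket
    - key: force_destroy
      # deletes are blocked while this is still "false"
      value: "false"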
After having to fix this a few times for different teams,
I came up with a kyverno policy
to block deletes that would fail.
We block the delete operation if it affects a Workspace
whose force_destroy variable isn't set to true.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: s3-force-destroy
spec:
  validationFailureAction: Enforce
  background: false
  rules:
    - name: s3-force-destroy
      match:
        any:
          - resources:
              kinds:
                - Workspace
              operations:
                - DELETE
      validate:
        message: |
          force_destroy must be set to destroy buckets
        deny:
          conditions:
            all:
              # NOTE: MUST use double quotes "" outside for yaml
              # and single quotes '' inside for JMESPath
              - key: "{{ request.oldObject.spec.variables[?key == 'force_destroy'].value | [0] }}"
                operator: Equals
                value: "false"
              # filtering because we didn't have the foresight to implement labels properly
              - key: "{{ contains(request.oldObject.spec.module.source, 's3-bucket') }}"
                operator: Equals
                value: true
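A couple of notes on how the policy is put together:
background: false is there because the conditions reference admission request data (request.*),
which only exists at admission time,
and they read request.oldObject rather than request.object
because a DELETE request carries the existing object as oldObject.
With validationFailureAction: Enforce,
the delete gets rejected with the message above,
so you can still set force_destroy to true,
let the operator apply it,
and then delete the Workspace.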