Once you have got started with Kubernetes and are running it in production, new requirements often emerge in existing environments over time. These can be diverse: from consistent labels for deployed applications to security-relevant settings, there are many things that cannot be implemented without some kind of Kubernetes policy.
Due to its modular approach, Kubernetes itself does not offer many options here – network traffic can be regulated with NetworkPolicies via a CNI plugin, and the Pod Security Admission controller offers various predefined profiles for application security, but beyond that you have to rely on external applications.
Two frequently used tools for introducing Kubernetes policies are Kyverno and the Open Policy Agent (OPA) with its Kubernetes implementation, Gatekeeper – today’s tutorial focuses on Gatekeeper.
Follow this tutorial live and in a real Kubernetes environment in our free NWS Playground.
Open Policy Agent and Gatekeeper explained
The Open Policy Agent (OPA) provides policy-based control for cloud-native environments – these are not limited to Kubernetes: OPA provides declarative, contextual support for policies around Kubernetes, Envoy, various HTTP and GraphQL APIs, Kafka and other scenarios.
This variety of application areas is made possible by OPA’s policy language Rego, which is strongly inspired by Datalog. Each rule in Rego defines and checks assumptions about data that is available to OPA at evaluation time, e.g. the content of API requests. A simple Kubernetes policy in Rego could look like this, for example:
package kubernetes.admission

deny contains msg if {
    input.request.kind.kind == "Pod"
    image := input.request.object.spec.containers[_].image
    not startswith(image, "myregistry.com/")
    msg := sprintf("image '%v' comes from untrusted registry", [image])
}

This policy rejects requests to the Kubernetes API to create Pods if all the assumptions defined in the rule evaluate to true. When evaluating a request, OPA iterates over all container entries and checks whether the assumptions hold in each case.
In the example above, the creation of a Pod would therefore be refused if at least one container defined in the Pod wants to pull its image from a registry other than myregistry.com.
The official Rego Playground can be used to test Kubernetes policies for OPA. There, Rego modules can be defined and tested against arbitrary inputs.
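If you prefer the command line, the same check can also be run locally with the opa CLI – a minimal sketch, assuming the policy above is stored in policy.rego and a hand-written, stripped-down stand-in for an admission request in input.json (both file names are placeholders):

# input.json: a minimal stand-in for an AdmissionReview request
cat > input.json <<'EOF'
{
  "request": {
    "kind": {"kind": "Pod"},
    "object": {
      "spec": {
        "containers": [
          {"name": "app", "image": "docker.io/library/mysql:latest"}
        ]
      }
    }
  }
}
EOF

# evaluate the deny rule against the sample input
opa eval -d policy.rego -i input.json "data.kubernetes.admission.deny"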
Gatekeeper is an implementation of OPA specifically for Kubernetes policies, with a few useful extensions:
- an extensible, parameterizable library of Kubernetes policies
- Kubernetes CustomResourceDefinitions (CRDs) for the import and use of existing policies (Constraints)
- Kubernetes CRDs for the definition of new policies (ConstraintTemplates)
- Kubernetes CRDs for the mutation of requests and data (see the sketch after this list)
- audit functionality
- support for external data and data sources
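To give an impression of the mutation CRDs mentioned above, a minimal AssignMetadata sketch that adds an owner label to newly created Pods could look like this – the resource name and label value are arbitrary examples, closely following the shape documented for Gatekeeper’s mutation resources:

apiVersion: mutations.gatekeeper.sh/v1
kind: AssignMetadata
metadata:
  name: pod-owner-label
spec:
  match:
    scope: Namespaced
    kinds:
    - apiGroups: ["*"]
      kinds: ["Pod"]
  location: "metadata.labels.owner"   # AssignMetadata only adds the label if it is not already set
  parameters:
    assign:
      value: "admin"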
Installation of OPA Gatekeeper
If you want to implement Kubernetes policies with OPA Gatekeeper, you must first install it in your environment. This can be done using the official Helm chart, for example:
helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
helm repo update
helm install -n gatekeeper-system gatekeeper gatekeeper/gatekeeper --create-namespace

After a few seconds, the Gatekeeper controller and the audit component can be found in the corresponding namespace:
kubectl get all -n gatekeeper-system
NAME READY STATUS RESTARTS AGE
pod/gatekeeper-audit-8948486cd-754x6 1/1 Running 0 118s
pod/gatekeeper-controller-manager-5f9dfb6899-4m92s 1/1 Running 0 118s
pod/gatekeeper-controller-manager-5f9dfb6899-8ns8j 1/1 Running 0 118s
pod/gatekeeper-controller-manager-5f9dfb6899-drmqn 1/1 Running 0 118s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/gatekeeper-webhook-service ClusterIP 10.254.22.244 <none> 443/TCP 118s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/gatekeeper-audit 1/1 1 1 118s
deployment.apps/gatekeeper-controller-manager 3/3 3 3 118s
NAME DESIRED CURRENT READY AGE
replicaset.apps/gatekeeper-audit-8948486cd 1 1 1 118s
replicaset.apps/gatekeeper-controller-manager-5f9dfb6899 3 3 3 118s

The CRDs mentioned above are also already installed and known to our cluster:
kubectl get crds | grep gatekeeper
assign.mutations.gatekeeper.sh 2025-02-19T10:09:19Z
assignimage.mutations.gatekeeper.sh 2025-02-19T10:09:19Z
assignmetadata.mutations.gatekeeper.sh 2025-02-19T10:09:19Z
configpodstatuses.status.gatekeeper.sh 2025-02-19T10:09:19Z
configs.config.gatekeeper.sh 2025-02-19T10:09:19Z
constraintpodstatuses.status.gatekeeper.sh 2025-02-19T10:09:19Z
constrainttemplatepodstatuses.status.gatekeeper.sh 2025-02-19T10:09:19Z
constrainttemplates.templates.gatekeeper.sh 2025-02-19T10:09:19Z
expansiontemplate.expansion.gatekeeper.sh 2025-02-19T10:09:19Z
expansiontemplatepodstatuses.status.gatekeeper.sh 2025-02-19T10:09:19Z
modifyset.mutations.gatekeeper.sh 2025-02-19T10:09:19Z
mutatorpodstatuses.status.gatekeeper.sh 2025-02-19T10:09:19Z
providers.externaldata.gatekeeper.sh 2025-02-19T10:09:19Z
syncsets.syncset.gatekeeper.sh 2025-02-19T10:09:19Z

The next step is to import existing Kubernetes policies from the OPA Gatekeeper Library or to write your own policies.
Use of existing Kubernetes policies
If you want to integrate existing Kubernetes policies from the OPA Gatekeeper Library, you can do this with a simple kubectl apply. For this blog post, we will look at three existing policies:
- K8sAllowedRepos to restrict the registries from which container images may be obtained
- K8sRequiredLabels to enforce mandatory labels
- K8sContainerLimits to enforce mandatory resource limits
Installation of existing ConstraintTemplates
We can install the corresponding ConstraintTemplates directly from the policy repository using kubectl:
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper-library/master/library/general/allowedrepos/template.yaml
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper-library/master/library/general/requiredlabels/template.yaml
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper-library/master/library/general/containerlimits/template.yaml

The successful installation can also be checked:
kubectl get ConstraintTemplates
NAME AGE
k8sallowedrepos 8s
k8scontainerlimits 8s
k8srequiredlabels 8s

A ConstraintTemplate basically defines two different things (a minimal skeleton follows below):
- the structure of the parameterizable Constraints based on the ConstraintTemplate, as an OpenAPI specification
- the applicable rule(s)
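To make that structure concrete, here is a stripped-down, illustrative ConstraintTemplate skeleton – the kind K8sDenyAll and its single message parameter are made up for demonstration and are not part of the library:

apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sdenyall                  # must match the lowercased kind below
spec:
  crd:
    spec:
      names:
        kind: K8sDenyAll            # the kind that Constraints of this template will use
      validation:
        openAPIV3Schema:            # structure of the Constraints' spec.parameters
          type: object
          properties:
            message:
              type: string
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package k8sdenyall

      violation[{"msg": msg}] {
        msg := input.parameters.message
      }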
Use of installed ConstraintTemplates via Constraints
Once the ConstraintTemplates have been installed, they can be turned into concrete Kubernetes policies via Constraints.
A constraint can look very different depending on the OpenAPI specification. The following examples define and enforce specific Kubernetes policies for the affected cluster based on the three previously installed ConstraintTemplates.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: repo-is-quay
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
    namespaces:
    - "default"
  parameters:
    repos:
    - "quay.io/"

This Constraint only allows container images in the default namespace that are obtained from quay.io.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: all-must-have-team
spec:
  match:
    kinds:
    - apiGroups: ["apps"]
      kinds: ["Deployment"]
  parameters:
    message: "All deployments must have a `team` label that points to your team name"
    labels:
    - key: team
      allowedRegex: "^team-[a-zA-Z]+$"

This Constraint enforces the existence of a team label on all Deployments. The value of the label must also match the pattern team-xxx.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sContainerLimits
metadata:
  name: container-must-have-memory-limits
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
    namespaces:
    - "limited-resources"
  parameters:
    cpu: "-1" # do not enforce CPU limits
    memory: "1Gi"

This Constraint requires containers in the namespace limited-resources to set a memory limit of at most 1Gi (1 gibibyte).
The corresponding YAML blocks can be installed using kubectl apply and then checked for functionality.
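As a quick sketch, assuming the three Constraints above are saved in a file called constraints.yaml (the file name is just an example), applying and listing them could look like this:

kubectl apply -f constraints.yaml

# Gatekeeper puts all Constraint kinds into a shared "constraints" category,
# so they can be listed together:
kubectl get constraints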
Testing the defined Kubernetes policies
The defined policies can be tested by creating allowed and disallowed objects. For the policy K8sAllowedRepos, we create two Pods in the namespace default:
kubectl run -n default denied-pod --image mysql:latest
Error from server (Forbidden): admission webhook "validation.gatekeeper.sh" denied the request: [repo-is-quay] container <denied-pod> has an invalid image repo <mysql:latest>, allowed repos are ["quay.io/"]
kubectl run -n default allowed-pod --image quay.io/fedora/mysql-80
pod/allowed-pod created
Gatekeeper denies the creation of the first Pod, which violates the Constraint, with a specific error message.
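Admission-time denials are only half the picture: Gatekeeper’s audit component periodically scans resources that already exist in the cluster and records found violations in the status of the affected Constraint. A sketch of how to inspect this (the exact status fields may vary between Gatekeeper versions):

# show audit results recorded on the Constraint, including status.violations
kubectl get k8sallowedrepos repo-is-quay -o yaml

# or only the total number of violations found by the last audit run
kubectl get k8sallowedrepos repo-is-quay -o jsonpath='{.status.totalViolations}'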
For the policy K8sRequiredLabels we create the following two Deployments:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: denied-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: quay.io/fedora/mysql-80
        name: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: allowed-deployment
  labels:
    team: team-database
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: quay.io/fedora/mysql-80
        name: mysql

In this scenario, we also receive an error message from Gatekeeper for the Deployment that is not labeled according to the Kubernetes policy:
kubectl apply -f deployments.yaml
deployment.apps/allowed-deployment created
Error from server (Forbidden): error when creating "deployments.yaml": admission webhook "validation.gatekeeper.sh" denied the request: [all-must-have-team] missing required label, requires all of: team
[all-must-have-team] regex mismatch

For the third scenario, we create three more Pods:
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: unaffected-pod
spec:
  containers:
  - image: quay.io/fedora/mysql-80
    name: mysql
---
apiVersion: v1
kind: Pod
metadata:
  namespace: limited-resources
  name: affected-pod-without-limits
spec:
  containers:
  - image: quay.io/fedora/mysql-80
    name: mysql
---
apiVersion: v1
kind: Pod
metadata:
  namespace: limited-resources
  name: affected-pod-with-limits
spec:
  containers:
  - image: quay.io/fedora/mysql-80
    name: mysql
    resources:
      limits:
        memory: 1Gi

Once again, we receive an error message, this time for the Pod affected-pod-without-limits. Note also that the Pod unaffected-pod is created successfully despite the lack of limits – as defined, the Constraint only applies in the namespace limited-resources.
kubectl create namespace limited-resources
kubectl apply -f pods.yaml
pod/unaffected-pod created
pod/affected-pod-with-limits created
Error from server (Forbidden): error when creating "pods.yaml": admission webhook "validation.gatekeeper.sh" denied the request: [container-must-have-memory-limits] container <mysql> has no resource limits

Unit tests for Kubernetes policies
As it rarely makes sense in real scenarios to test all Kubernetes policies manually during operation, OPA Gatekeeper enables automated unit testing of defined policies. The gator CLI of the Gatekeeper project is used for this purpose.
On the one hand, tests can be run via gator test, which checks any number of given Kubernetes objects against a set of ConstraintTemplates and Constraints.
On the other hand, entire test suites consisting of several tests and test cases can be run using gator verify. A test consists of a concrete ConstraintTemplate and Constraint, as well as inputs to be tested in the form of Kubernetes objects (the cases).
This procedure enables automated testing of created Kubernetes policies, e.g. in CI pipelines, without having to test manually in the actual environment. Further explanations and examples can be found in the gator CLI documentation.
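As a rough sketch, a gator verify suite for the K8sRequiredLabels policy from this post could look like the following – all file paths are placeholders for wherever the template, the Constraint and the sample objects are stored:

# suite.yaml
kind: Suite
apiVersion: test.gatekeeper.sh/v1alpha1
metadata:
  name: required-labels
tests:
- name: team-label
  template: requiredlabels/template.yaml      # the K8sRequiredLabels ConstraintTemplate
  constraint: requiredlabels/constraint.yaml  # the all-must-have-team Constraint
  cases:
  - name: allowed-deployment
    object: samples/allowed-deployment.yaml   # carries a valid team label
    assertions:
    - violations: no
  - name: denied-deployment
    object: samples/denied-deployment.yaml    # missing the team label
    assertions:
    - violations: yes

Such a suite is then run with gator verify suite.yaml, which makes it easy to wire into a CI job.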
Kubernetes policies for everyone
OPA Gatekeeper, with its extensive library of ready-made Kubernetes policies for many conceivable scenarios, offers a good starting point for standardizing and securing everyday operations on Kubernetes.
If even more specific policies are required, the Rego Playground and the ability to test created policies independently of the actual environment can help. In this way, secure operation can be guaranteed, even in multi-tenancy scenarios or highly regulated environments.
It is important to note here that OPA Gatekeeper is not the only solution to these problems – applications such as Kyverno also have many users, and combined with applications for runtime protection of your Kubernetes environment such as Tetragon or Falco, a holistic approach to securing Kubernetes environments can be created. We will certainly be discussing one or two solutions here on our blog in the future.
Until then, we wish you every success in implementing the Kubernetes policies that are useful for you – if you need help with this, our MyEngineer® is of course always at your disposal!