Kyverno: Policy as Code for Automated Compliance Checks
TL;DR Kyverno is a Kubernetes-native policy engine that allows you to define security and …
This series systematically explains how modern software is developed and operated in a compliant way, from EU regulations to technical implementation.
Modern software landscapes are highly distributed. In a typical cluster, hundreds to thousands of pods run, managed across dozens of namespaces and teams. In this complexity, it is no longer realistic to manually check every configuration.
Guardrails address exactly this problem. They are predefined, automatically enforced rules that ensure deployments meet certain minimum standards—technical, operational, and regulatory. Technically, in our context, these are Kyverno policies that check every incoming object at the admission level.
The perspective is important: guardrails are not a brake but a safety rail. Developers can move freely within a defined framework without having to reinvent security or compliance details with every change. The platform ensures that no deployment falls outside these guidelines.
With regard to Art. 32 GDPR (“Security of processing”), NIS-2 (to be transposed by EU member states by 17 October 2024), and BSI IT-Grundschutz, such automated controls are today not only sensible but a de facto requirement for robust evidence of compliance.
Kyverno integrates as an admission webhook directly into the Kubernetes API. Every Deployment, Pod, and Namespace is checked against defined policies before it is persisted to etcd. This allows for two main operating modes: audit, in which violations are only reported, and enforce, in which non-compliant requests are rejected outright.

For guardrails, we deliberately treat enforce mode as the standard. It is the technical equivalent of “mandatory controls” in the internal control system: no go-live without fulfillment.
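To make the admission flow concrete, a minimal validation policy might look like the following sketch; the policy name, the team label, and the targeted resource kind are assumptions for illustration, not prescriptions from this article.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label        # hypothetical example policy
spec:
  # "Audit" only reports violations; "Enforce" rejects the request at admission.
  validationFailureAction: Enforce
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Deployment
      validate:
        message: "Every Deployment must carry a 'team' label."
        pattern:
          metadata:
            labels:
              team: "?*"          # "?*" matches any non-empty value
```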
Kyverno supports three basic policy types relevant for guardrails: validate (reject or report non-compliant resources), mutate (patch incoming resources to a compliant default), and generate (create supporting resources such as default network policies). This allows guardrails to be designed so that as much as possible is automated and hard blocks are reserved for cases of real risk.
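For example, instead of rejecting every pod that omits a secure default, a mutate rule can add that default automatically. A minimal sketch, assuming runAsNonRoot is the desired default (the policy name is hypothetical):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-runasnonroot-default  # hypothetical example policy
spec:
  rules:
    - name: default-run-as-non-root
      match:
        any:
          - resources:
              kinds:
                - Pod
      mutate:
        patchStrategicMerge:
          spec:
            securityContext:
              # The +() anchor adds the field only if it is not already set,
              # so teams can still override it deliberately.
              +(runAsNonRoot): true
```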
Security guardrails enforce technical minimum standards that in many organizations exist only on PowerPoint slides and are rarely practiced consistently in day-to-day operations. With Kyverno, you can cast these standards into executable policies.
A core principle of modern container security is that applications run neither as root nor with unnecessarily broad privileges. However, many base images still use root as the default.
A guardrail “no privileged containers” enforces, among other things, that runAsNonRoot or a non-root runAsUser is mandatory (conceptually).

Violations are rejected at deployment. The error message clearly refers to the policy and the affected container, allowing the responsible team to make targeted adjustments.
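A minimal sketch of such a policy, loosely following the community's require-run-as-nonroot pattern (the policy name is assumed, and real-world variants also cover initContainers and ephemeralContainers):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-root-containers  # hypothetical example policy
spec:
  validationFailureAction: Enforce
  rules:
    - name: require-run-as-non-root
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Containers must not run as root; set runAsNonRoot or a non-root runAsUser."
        anyPattern:
          # Either the pod-level securityContext enforces non-root ...
          - spec:
              securityContext:
                runAsNonRoot: "true"
          # ... or every container sets it explicitly.
          - spec:
              containers:
                - securityContext:
                    runAsNonRoot: "true"
```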
From a compliance perspective, this directly supports the limitation of access rights required by Art. 32 GDPR and the BSI IT-Grundschutz modules for secure system and application operation.
Another typical guardrail: Only images from approved registries are allowed, optionally with signature verification.
This reduces the risk that unvetted or tampered images from unknown sources end up in the cluster.

Kyverno policies can enforce that all images come from approved registries and, where configured, carry a valid signature.
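A minimal sketch, assuming a single approved registry at registry.example.com (the registry host and the policy name are placeholders):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registries  # hypothetical example policy
spec:
  validationFailureAction: Enforce
  rules:
    - name: only-approved-registry
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Images must be pulled from registry.example.com."
        pattern:
          spec:
            containers:
              # Every container image must start with the approved registry prefix.
              - image: "registry.example.com/*"
```

Signature verification can be layered on top with Kyverno's image verification rules (Cosign-based verifyImages), which is beyond this sketch.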
Especially in the context of NIS-2, which explicitly requires measures to secure the supply chain, such guardrails are a very effective component.
Without network policies, any pod can communicate with any other pod. For the security architecture of a cluster, this is effectively a “flat network”—the opposite of segmentation.
A guardrail “namespace must have at least one network policy” forces network segmentation to be conceptually considered. In combination with a CNI like Cilium, which efficiently enforces network policies, a zero-trust-like design is created within the cluster.
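One common way to implement this with Kyverno is a generate rule that creates a default-deny NetworkPolicy in every new namespace; a minimal sketch (policy and resource names are placeholders):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-networkpolicy  # hypothetical example policy
spec:
  rules:
    - name: default-deny-per-namespace
      match:
        any:
          - resources:
              kinds:
                - Namespace
      generate:
        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        name: default-deny
        namespace: "{{request.object.metadata.name}}"
        synchronize: true
        data:
          spec:
            podSelector: {}        # selects all pods in the namespace
            policyTypes:
              - Ingress
              - Egress
```

Teams then open up exactly the traffic paths their services need, instead of starting from an open network.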
This reduces the possibilities of lateral movement after a compromise—a central goal of both the BSI IT-Grundschutz recommendations and the requirements from NIS-2.
Security contexts are often underestimated: They define effective user IDs, groups, file system permissions, and other security attributes of a pod.
Guardrails can enforce, among other things, that fsGroup and umask are set.

This makes security by default a lived practice: anyone who wants to deviate from the secure default setting must consciously justify this and go through the governance process.
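A minimal sketch of the fsGroup part (the policy name is assumed; umask is typically handled in the image or entrypoint rather than in the Pod spec):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-fsgroup            # hypothetical example policy
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-fsgroup
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "A pod-level securityContext with fsGroup must be set."
        pattern:
          spec:
            securityContext:
              fsGroup: "?*"        # any non-empty value
```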
Security alone is not enough. Failures due to resource conflicts or uncoordinated maintenance windows are just as critical—not least because Art. 32 GDPR explicitly names availability as a protection goal.
Many cluster failures are ultimately capacity issues: workloads without resource requests overcommit nodes, noisy neighbors starve critical services, and memory pressure ends in evictions and OOM kills.
A guardrail “every container must be assigned CPU and memory requests (and ideally limits)” makes such situations much less common. Without these specifications, a deployment is rejected—again with a specific error message pointing to the missing resource block.
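A minimal sketch following the widely used require-requests-limits pattern (the policy name is assumed):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-requests-limits    # hypothetical example policy
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-container-resources
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "CPU and memory requests and a memory limit are required for every container."
        pattern:
          spec:
            containers:
              - resources:
                  requests:
                    cpu: "?*"
                    memory: "?*"
                  limits:
                    memory: "?*"
```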
For ISO and BSI audits, this is very tangible evidence of lived capacity planning.
Another type of reliability guardrail concerns high availability: critical workloads must run with a minimum number of replicas so that the failure of a single pod or node does not take the service down.
Kyverno can distinguish here based on labels or namespaces and enforce different minimum requirements per environment (e.g., stricter in production than in staging).
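A minimal sketch that requires at least two replicas, but only in namespaces labelled environment=production (the label key and value, like the policy name, are assumptions):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-min-replicas       # hypothetical example policy
spec:
  validationFailureAction: Enforce
  rules:
    - name: min-two-replicas-in-production
      match:
        any:
          - resources:
              kinds:
                - Deployment
              namespaceSelector:
                matchLabels:
                  environment: production   # assumed namespace label
      validate:
        message: "Production Deployments must run with at least 2 replicas."
        pattern:
          spec:
            replicas: ">1"         # pattern operator: value must be greater than 1
```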
Planned maintenance—whether node updates, kernel patches, or rolling upgrades—requires that pods are not terminated arbitrarily at the same time. PodDisruptionBudgets (PDBs) define exactly that.
A guardrail can require that a suitable PDB exists for all deployments of a certain type before they are rolled out into productive namespaces. This makes unintentional downtimes during maintenance work much less likely.
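One way to implement this is a generate rule that creates a conservative default PDB alongside each Deployment. A minimal sketch, assuming workloads carry an app label (all names are placeholders):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-pdb            # hypothetical example policy
spec:
  rules:
    - name: create-default-pdb
      match:
        any:
          - resources:
              kinds:
                - Deployment
      generate:
        apiVersion: policy/v1
        kind: PodDisruptionBudget
        name: "{{request.object.metadata.name}}-pdb"
        namespace: "{{request.object.metadata.namespace}}"
        synchronize: true
        data:
          spec:
            minAvailable: 1
            selector:
              matchLabels:
                # assumes the Deployment's pod template carries an "app" label
                app: "{{request.object.spec.template.metadata.labels.app}}"
```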
In addition to security and reliability, there is a third category: rules that primarily keep operational complexity under control.
In many environments, only platform or ingress controller teams should configure direct exposure to the outside world. If every product team can independently create Service objects of type LoadBalancer, the situation quickly becomes unmanageable.
A Kyverno guardrail can block Services of type LoadBalancer outside of designated platform namespaces, or require an explicitly approved exception, as shown in the sketch below.
This keeps the responsibility for external attack surfaces clearly with the platform team—a key point for risk and accountability models in the sense of NIS-2.
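A minimal sketch, assuming the platform team operates out of a dedicated namespace called platform-ingress (the namespace and policy name are placeholders):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-loadbalancer-services  # hypothetical example policy
spec:
  validationFailureAction: Enforce
  rules:
    - name: no-loadbalancer-outside-platform
      match:
        any:
          - resources:
              kinds:
                - Service
      exclude:
        any:
          - resources:
              namespaces:
                - platform-ingress       # assumed platform-owned namespace
      validate:
        message: "Services of type LoadBalancer may only be created by the platform team."
        pattern:
          spec:
            type: "!LoadBalancer"        # "!" negation: type must not be LoadBalancer
```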
Local storage binds workloads to individual nodes and complicates restart and scaling. In the event of a node failure, the data is often lost or only recoverable with great effort.
Guardrails can block hostPath volumes in productive namespaces and require persistent data to be stored on PersistentVolumeClaims backed by network storage.
This fits well with BSI recommendations for failover safety and supports availability requirements from Art. 32 GDPR.
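A minimal sketch that blocks hostPath volumes, following the community's disallow-host-path pattern (the policy name is assumed):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-host-path         # hypothetical example policy
spec:
  validationFailureAction: Enforce
  rules:
    - name: no-hostpath-volumes
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "hostPath volumes are not allowed; use PersistentVolumeClaims instead."
        pattern:
          spec:
            # =() makes "volumes" optional; X() requires hostPath to be absent on each entry.
            =(volumes):
              - X(hostPath): "null"
```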
Unevenly distributed pods are a frequently underestimated risk: If all replicas of a service land on the same node or in the same availability zone, their failure is directly business-critical.
A guardrail can prescribe that deployments for certain workloads use a topology spread configuration—i.e., deliberately distribute replicas across nodes or zones.
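A minimal sketch requiring at least one topologySpreadConstraint with a topology key (the policy name is assumed):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-topology-spread    # hypothetical example policy
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-topology-spread
      match:
        any:
          - resources:
              kinds:
                - Deployment
      validate:
        message: "Deployments must define topologySpreadConstraints across nodes or zones."
        pattern:
          spec:
            template:
              spec:
                topologySpreadConstraints:
                  - topologyKey: "?*"    # e.g. kubernetes.io/hostname or topology.kubernetes.io/zone
```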
This turns infrastructure failures into performance degradations instead of hard failures—a crucial difference, especially for regulatory-relevant systems.
Imagine a typical scenario:
A development team wants to deploy a new version of a backend. The manifests ignore several of the standards described above.
Without guardrails, this deployment would land in the cluster—until the next vulnerability or load peak shows that fundamental standards were ignored.
With Kyverno-based guardrails, the deployment is instead rejected at admission, and the policy messages point the team to the exact containers and fields that need to be fixed.