The Kubernetes community took a significant step with version v1.24 by digitally signing their container image-based artifacts. With the transition of the corresponding enhancement from alpha to beta in v1.26, signatures for binary artifacts were also introduced. This has inspired other projects to implement image signatures for their releases as well. But how can these signatures be effectively verified?
For developers and DevOps teams, this means they now have the ability to automatically sign container images and verify these signatures. This can be done either within their own CI/CD pipelines, for example, through GitHub Actions, or via the Kubernetes image promotion process, which automatically handles the signing. The prerequisite is that the project is part of the kubernetes or kubernetes-sigs GitHub organization.
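For projects outside those organizations, signing in a CI/CD pipeline typically boils down to a couple of cosign invocations. A minimal sketch follows; the image name is a placeholder, and keyless signing assumes an OIDC identity is available to the job, as it is in GitHub Actions:

```shell
# Keyless signing: cosign obtains a short-lived certificate from Fulcio
# based on the CI job's OIDC identity and uploads the signature
# alongside the image ("--yes" skips the interactive confirmation).
cosign sign --yes registry.example.com/myapp:v1.0.0

# Verification pins the expected signer identity and its OIDC issuer;
# the identity regexp below is an illustrative placeholder.
cosign verify \
  --certificate-identity-regexp 'https://github.com/my-org/my-repo/.*' \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com \
  registry.example.com/myapp:v1.0.0
```

The same two commands work locally, which makes it easy to reproduce what the pipeline does when debugging a failed verification.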
Suppose your project now produces signed container image artifacts. How can you verify these signatures? Doing so manually is possible, but not practical for production environments. This is where tools like the sigstore policy-controller come into play: they provide a higher-level API through Custom Resource Definitions (CRDs) and use integrated admission controllers and webhooks to verify the signatures.
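With the policy-controller, such a verification rule is expressed declaratively as a ClusterImagePolicy resource. A sketch of what a keyless policy for the quay.io/crio/signed images could look like; the issuer and subject are illustrative placeholders that you would replace with the actual signer identity:

```yaml
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: crio-signed-images
spec:
  images:
    # Only images matching this glob are subject to the policy
    - glob: "quay.io/crio/signed**"
  authorities:
    # Keyless verification against the public sigstore infrastructure
    - keyless:
        identities:
          - issuer: https://accounts.example.com   # placeholder OIDC issuer
            subject: maintainer@example.com        # placeholder identity
```

Enforcement is opt-in per namespace: the policy-controller only validates Pods in namespaces labeled with policy.sigstore.dev/include=true.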
The general process for verification by an admission controller is as follows: the controller's validating webhook intercepts Pod creation requests, resolves each referenced container image to its digest, checks the signatures attached to that image against the configured policies, and only then admits or rejects the Pod.
A major advantage of this architecture is its simplicity: a single instance in the cluster validates the signatures before the container runtime on a node, triggered by the kubelet, ever pulls the image. However, this also introduces a separation problem: the node that ultimately pulls the container image is not necessarily the one that performed the admission. This means that if the controller is compromised, cluster-wide enforcement of the policies is no longer guaranteed.
One solution to this problem is to perform policy evaluation directly in the Container Runtime Interface (CRI)-compatible container runtime. The runtime is directly connected to the Kubelet on a node and performs all tasks such as pulling images. CRI-O is one such runtime and will offer full support for verifying container image signatures in v1.28.
How does it work? CRI-O reads a file called policy.json (by default /etc/containers/policy.json), which contains all defined rules for container images. For example, you can define a policy that rejects all images by default and only allows images from quay.io/crio/signed, for any tag or digest, if they carry a valid sigstore signature. The keyPath below is a placeholder for the public key the images were signed with:

{
  "default": [{"type": "reject"}],
  "transports": {
    "docker": {
      "quay.io/crio/signed": [
        {
          "type": "sigstoreSigned",
          "keyPath": "/etc/containers/quay.io-crio-signed.pub"
        }
      ]
    }
  }
}
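With the policy in place, the effect can be checked directly against the runtime. A quick sketch using crictl, assuming CRI-O is running with its default socket; the unsigned image reference is a hypothetical example:

```shell
# Pulling a signed image from the allowed location succeeds
sudo crictl pull quay.io/crio/signed

# Pulling an image that violates the policy is rejected by CRI-O
# with a signature verification error (hypothetical image name)
sudo crictl pull quay.io/crio/unsigned
```

Because the check happens in the runtime itself, it applies to every pull on the node, regardless of whether an admission controller is present or healthy.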
With these new capabilities for verifying container image signatures, developers and DevOps teams can ensure their applications are secure and reliable. ayedo is excited to stand by your side as a partner in the Kubernetes world and support you on your journey.
Source: Kubernetes Blog