TL;DR
- Traditional container builds with a Docker Daemon, root privileges, and docker.sock access in CI systems pose an unnecessary security risk, especially when builds run directly in a Kubernetes cluster.
- Rootless, daemonless tools like Kaniko and Buildah enable secure image builds in pods without privileged rights and without a Docker Daemon, an essential building block for technical guardrails and modern compliance requirements.
- Kaniko is the “Kubernetes-native” approach: declarative, heavily focused on Dockerfiles, with good layer caching via the registry; Buildah offers more flexibility, deep OCI integration, and scriptability—suitable if you need more complex build workflows or custom toolchains.
- For the European context—including the Cyber Resilience Act and signed builds—rootless pipelines are a pragmatic way to embed security-by-design in the software supply chain.
- ayedo consistently relies on rootless builds with Kaniko and Buildah in its platform, integrated into GitLab, Harbor, and GitOps deployment—and supports teams in adopting this architecture in a structured manner.
Why Traditional Container Builds in Kubernetes Become a Risk
Many organizations built their first CI/CD pipelines with a simple assumption: “We install Docker on the runner and call docker build.” In traditional VM setups, this was pragmatic. However, in a Kubernetes cluster, this habit becomes a structural risk.
Typical patterns that are problematic in modern environments:
- CI jobs with direct access to docker.sock
- Build pods running as privileged
- Container builders effectively gaining root privileges on the node
This gives the build process extensive access to the host—and indirectly to other workloads. From an attacker’s perspective, the CI environment is highly attractive: access to source code, credentials, registries, and often production clusters.
From a compliance perspective, such setups also make it harder to argue that you consistently implement the principle of least privilege. Security and audit teams are increasingly asking:
- What privileges do CI jobs actually have?
- Can a compromised build job affect other workloads in the cluster?
- How do you ensure that guardrails like “no privileged pods” are consistently enforced?
The answer almost inevitably leads to rootless, daemonless build tools.
Rootless, Daemonless Builds: Principles Instead of Workarounds
“Rootless” and “daemonless” are more than buzzwords—they describe an architectural pattern:
- Rootless: The build process does not run with root privileges in the container or on the host. User namespaces and user-space file systems ensure that the build “feels” like root but technically does not gain host privileges.
- Daemonless: There is no long-running Docker Daemon communicating with the host and controlled via docker.sock. Instead, a single-process tool builds container layers directly and writes them to a registry or local storage backend.
For Kubernetes-native CI/CD, this means:
- Build jobs run as regular pods with restrictive PodSecurity or admission policies.
- Guardrails like “no privileged pods,” “no host mounts,” or “no hostPID/hostIPC” can be strictly enforced.
- Security zones become clearer: the CI pipeline builds images but does not gain general access to the node.
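These guardrails can be enforced declaratively. A minimal sketch using the built-in Pod Security admission controller, assuming a dedicated namespace for CI build pods (the namespace name is illustrative):

```yaml
# Enforce the "baseline" Pod Security Standard for all CI build pods.
# Baseline blocks privileged containers, host namespaces (hostPID/hostIPC),
# and hostPath mounts -- rootless builders like Kaniko and Buildah still work.
apiVersion: v1
kind: Namespace
metadata:
  name: ci-builds   # illustrative namespace name
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/enforce-version: latest
```

Because the label acts at the namespace level, every build pod the runner schedules there is subject to the same rules, with no per-pipeline exceptions.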
Kaniko and Buildah are two mature projects that implement this pattern—with different focuses.
Kaniko: Kubernetes-native Image Builds Without Docker Daemon
Kaniko was originally developed by Google to build container images in Kubernetes clusters without needing a Docker Daemon. It is inherently tailored to cluster environments and CI/CD.
Architecture and Functionality
Kaniko:
- Runs itself as a container in a pod.
- Reads a Dockerfile and the build context (e.g., from the Git checkout).
- Simulates the Docker build process in user-space, without accessing
docker.sock or host privileges.
- Writes the resulting image layers directly to a registry, such as Harbor.
Key features:
- Rootless: Kaniko does not require root privileges in the pod; policies demanding runAsNonRoot remain intact.
- Daemonless: No Docker Daemon is needed in the cluster; this reduces the attack surface and complexity.
- Multi-Stage Builds: Kaniko supports modern Dockerfile patterns, including multi-stage builds for lean production images.
- Layer Caching: Kaniko can use a dedicated cache repository in the registry. Frequently reused layers (e.g., base dependencies) do not need to be rebuilt with each build.
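Registry-based layer caching is enabled via executor flags. A minimal sketch of a GitLab CI job, assuming a Harbor instance at harbor.example.com and a cache repository named platform/cache (both illustrative; registry authentication is omitted for brevity):

```yaml
# GitLab CI job using the Kaniko executor with registry-based layer caching.
build-image:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug   # debug tag provides a shell
    entrypoint: [""]
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "harbor.example.com/platform/app:${CI_COMMIT_SHORT_SHA}"
      --cache=true
      --cache-repo "harbor.example.com/platform/cache"
```

With `--cache-repo`, cached layers live in the registry rather than on the runner node, so shared or ephemeral runners benefit from earlier builds.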
Strengths of Kaniko
For many teams, Kaniko is the obvious standard:
- Simplicity: Teams with clean Dockerfiles can often use Kaniko without major adjustments.
- Kubernetes Integration: Kaniko executors integrate well with existing GitLab runners on Kubernetes.
- Good Performance Through Remote Caching: Especially in shared runner environments, caching via a registry is more robust than local daemon caches.
The downside of this focus is that Kaniko deliberately does not aim to do everything: it is primarily a Dockerfile interpreter, not a comprehensive container toolkit.
Buildah: Flexible OCI Tooling from the Podman Ecosystem
Buildah originates from the Red Hat environment and is part of the broader OCI ecosystem around Podman. It is not just an “image builder,” but a toolkit for working with container images.
Architecture and Functionality
Buildah:
- Is a CLI tool that can create, modify, tag, and push images.
- Supports both classic Dockerfiles and scripted, step-by-step image manipulation.
- Works OCI-natively and integrates well into toolchains based on open standards.
- Can also be operated rootless and without a Docker Daemon—both directly on Linux workers and in containers within a Kubernetes cluster.
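As a sketch of what scripted, Dockerfile-free image construction can look like in a CI job (base image, registry host, and paths are illustrative; registry credentials are omitted):

```yaml
# GitLab CI job building an image imperatively with Buildah,
# rootless and without a Dockerfile.
build-with-buildah:
  stage: build
  image: quay.io/buildah/stable
  variables:
    BUILDAH_ISOLATION: chroot   # avoids needing privileged pods
  script:
    # Start from a base image and get a working container reference.
    - ctr=$(buildah from docker.io/library/alpine:3.19)
    # Assemble the image step by step instead of via Dockerfile instructions.
    - buildah copy "$ctr" ./app /opt/app
    - buildah config --entrypoint '["/opt/app/run.sh"]'
      --label org.opencontainers.image.source="$CI_PROJECT_URL" "$ctr"
    # Commit the working container to an image and push it.
    - buildah commit "$ctr" harbor.example.com/platform/app:latest
    - buildah push harbor.example.com/platform/app:latest
```

Because each step is a regular CLI call, the same pipeline can branch, loop, or inject build logic dynamically, which is awkward to express in pure Dockerfile syntax.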
Strengths of Buildah
Buildah shines when the requirements for the build process become more complex:
- High Flexibility: In addition to classic Dockerfile builds, you can assemble images step-by-step in scripts—useful for dynamic or generic pipelines.
- Deep OCI Integration: For teams increasingly relying on OCI standards, SBOMs, and signed artifacts, Buildah fits well into corresponding toolchains.
- Advanced Features: Finer control over layers, storage backends, and integration with other tools (e.g., Podman) is possible.
The trade-off is a slightly higher entry barrier: those who “only” want to build Dockerfiles often find Kaniko more accessible.
Both tools are rootless and daemonless, and thus fundamentally suitable for making your build pipelines more secure. The decision is less about “better or worse” than about your context.
Guidelines for Selection
1. Focus on Kubernetes CI with Existing Dockerfiles
- You already rely heavily on Kubernetes-based runners, such as GitLab runners in the cluster.
- Your projects use consistent Dockerfiles, multi-stage builds, and standardized patterns.
- You want a straightforward transition from docker build to a rootless tool.
→ In this setting, Kaniko is usually the pragmatic entry point.
2. Need for Flexible, Scriptable Build Processes
- You want to create images programmatically without strictly adhering to Dockerfile syntax.
- Your toolchain should leverage deeper OCI functionalities (e.g., extended metadata, special storage setups).
- You plan more complex build orchestrations, such as for base image pipelines or multiple product variations.
→ Here, Buildah plays to its strengths.
3. Performance and Caching Considerations
- Kaniko typically uses a registry-based cache repository—ideal for shared runners in clusters.
- Buildah can benefit more from local caching if you have dedicated runner nodes where builds run regularly.
In many organizations, a combination is sensible: Kaniko for standardized app workloads, Buildah for special pipelines and platform builds.
Guardrails and Cyber Resilience Act: Rootless Builds as a Compliance Component
With the European Cyber Resilience Act (CRA), which entered into force in December 2024, the regulatory focus shifts even more towards secure software supply chains. Once the transition periods expire, manufacturers will need to demonstrate:
- that they systematically address security risks,
- that they manage vulnerabilities and updates throughout the lifecycle,
- that they provide transparency about the components used (e.g., SBOMs).
In this context, rootless, daemonless builds are not a “nice to have,” but a tangible manifestation of security-by-design.
Enforcing Guardrails in the Cluster
Modern platform teams implement guardrails such as:
- No privileged pods
- Prohibition of host mounts (e.g., /var/run/docker.sock)
- Requirement for runAsNonRoot and restrictive SecurityContexts
With traditional Docker-based builds, these requirements quickly collide. Kaniko and Buildah, on the other hand, fit into such policies without needing exceptions or special roles.
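Such guardrails can also be backed by a policy engine. A minimal sketch assuming Kyverno is installed in the cluster (policy name and message are illustrative):

```yaml
# Reject any pod that mounts a hostPath volume, which rules out
# mounting /var/run/docker.sock into build pods.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-host-path
spec:
  validationFailureAction: Enforce
  rules:
    - name: host-path
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "hostPath volumes (e.g., docker.sock) are forbidden."
        pattern:
          spec:
            # If volumes exist, none of them may define a hostPath.
            =(volumes):
              - X(hostPath): "null"
```

Rootless builds with Kaniko or Buildah pass such a policy unchanged, whereas Docker-in-Docker setups immediately fail it.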
This simplifies not only the technical implementation but also the documentation for compliance audits: you can clearly show that CI builds are subject to the same security rules as all other workloads.
Signed Builds and Traceability
The CRA explicitly promotes measures such as:
- Signed artifacts
- Traceable build processes
- Traceability of versions and components
Kaniko and Buildah produce OCI-compliant images that can be seamlessly combined with signing tools such as Sigstore’s cosign. Conceptually, a typical pipeline looks like this:
- Git commit is merged in GitLab.
- Rootless build job (Kaniko or Buildah) creates an image and pushes it to Harbor.
- A downstream job signs the image and optionally creates SBOMs and attestations.
- GitOps deployment tools distribute only signed images to target clusters.
This creates a supply-chain-ready architecture that stands up to the requirements of the CRA and internal policies in the long term.
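The signing step from the list above can be sketched as a downstream GitLab CI job; the stage name, image reference, and the COSIGN_PRIVATE_KEY variable are illustrative:

```yaml
# Sign the image pushed by the build job using Sigstore's cosign.
# Assumes a key pair generated with `cosign generate-key-pair` and the
# private key stored as a protected, masked CI variable.
sign-image:
  stage: sign
  image:
    name: gcr.io/projectsigstore/cosign:latest
    entrypoint: [""]
  script:
    - cosign sign --key env://COSIGN_PRIVATE_KEY
      "harbor.example.com/platform/app:${CI_COMMIT_SHORT_SHA}"
```

The signature is stored alongside the image in the registry, so downstream GitOps tooling can verify it before deploying.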
Practical Example: GitLab CI/CD Job with Kaniko in Kubernetes
To make this tangible, it’s worth looking at the setup of a pipeline with GitLab and Kaniko on the ayedo platform—abstracted from specific YAML.
Role Distribution in the Build Process
- GitLab orchestrates the CI/CD pipeline and manages project configuration, branches, and merge requests.
- GitLab Runner on Kubernetes starts a separate pod in the cluster for each job.
- Kaniko runs in these pods as a container, reads the project repository, and generates images.
- Harbor serves as the central registry for images and Helm charts.
No step requires a Docker Daemon or privileged pods in the cluster.
Workflow of a Kaniko Build Job
Conceptually, the following happens in the build job:
- GitLab schedules a “build” job in the pipeline (typically after tests and linting).
- The Kubernetes-based runner starts a pod using the Kaniko executor image.
- The pod receives:
  - Read-only access to the project repository
  - Environment variables and secrets for the build context
  - A writable workspace for temporary files
- Kaniko reads the Dockerfile and build context, builds the image, and pushes it to Harbor.
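Put together, such a build job might look like this; the Harbor hostname, project path, and credential variables (HARBOR_USER, HARBOR_PASSWORD) are illustrative:

```yaml
build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug   # debug tag provides a shell
    entrypoint: [""]
  script:
    # Write registry credentials so Kaniko can push to Harbor.
    - mkdir -p /kaniko/.docker
    - AUTH=$(printf '%s:%s' "$HARBOR_USER" "$HARBOR_PASSWORD" | base64 | tr -d '\n')
    - echo "{\"auths\":{\"harbor.example.com\":{\"auth\":\"$AUTH\"}}}" > /kaniko/.docker/config.json
    # Build from the checked-out repository and push the image.
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "harbor.example.com/platform/app:${CI_COMMIT_SHORT_SHA}"
```

Nothing in this job needs docker.sock, a privileged SecurityContext, or host mounts; it runs as an ordinary pod under the cluster's guardrails.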
This setup ensures that the build process remains secure and compliant, leveraging the strengths of Kubernetes-native tools like Kaniko.