Kaniko vs. Buildah: Rootless, Daemonless Container Builds in Kubernetes
Fabian Peter · 8 minutes reading time

Rootless Container Builds: Kaniko and Buildah as Secure Alternatives
Read the whole series (40 articles)

This series systematically explains how modern software is developed and operated in a compliant way – from EU regulations to technical implementation.

  1. Compliance Compass: EU Regulations for Software, SaaS, and Cloud Hosting
  2. GDPR: Privacy by Design as the Foundation of Modern Software
  3. NIS-2: Cyber Resilience Becomes Mandatory for 18 Sectors
  4. DORA: ICT Resilience for the Financial Sector Starting January 2025
  5. Cyber Resilience Act: Security by Design for Products with Digital Elements
  6. Data Act: Portability and Exit Capability Become Mandatory from September 2025
  7. Cloud Sovereignty Framework: Making Digital Sovereignty Measurable
  8. How EU Regulations Interconnect: An Integrated Compliance Approach
  9. 15 Factor App: The Evolution of Cloud-Native Best Practices
  10. 15 Factor App Deep Dive: Factors 1–6 (Basics & Lifecycle)
  11. 15 Factor App Deep Dive: Factors 7–12 (Networking, Scaling, Operations)
  12. 15 Factor App Deep Dive: Factors 13–15 (API First, Telemetry, Auth)
  13. The Modern Software Development Lifecycle: From Cloud-Native to Compliance
  14. Cloud Sovereignty + 15 Factor App: The Architectural Bridge Between Law and Technology
  15. Standardized Software Logistics: OCI, Helm, Kubernetes API
  16. Deterministically Checking Security Standards: Policy as Code, CVE Scanning, SBOM
  17. ayedo Software Delivery Platform: High-Level Overview
  18. ayedo Kubernetes Distribution: CNCF-compliant, EU-sovereign, compliance-ready
  19. Cilium: eBPF-based Networking for Zero Trust and Compliance
  20. Harbor: Container Registry with Integrated CVE Scanning and SBOM
  21. VictoriaMetrics & VictoriaLogs: Observability for NIS-2 and DORA
  22. Keycloak: Identity & Access Management for GDPR and NIS-2
  23. Kyverno: Policy as Code for Automated Compliance Checks
  24. Velero: Backup & Disaster Recovery for DORA and NIS-2
  25. Delivery Operations: The Path from Code to Production
  26. ohMyHelm: Helm Charts for 15-Factor Apps Without Kubernetes Complexity
  27. Let's Deploy with ayedo, Part 1: GitLab CI/CD, Harbor Registry, Vault Secrets
  28. Let's Deploy with ayedo, Part 2: ArgoCD GitOps, Monitoring, Observability
  29. GitLab CI/CD in Detail: Stages, Jobs, Pipelines for Modern Software
  30. Kaniko vs. Buildah: Rootless, Daemonless Container Builds in Kubernetes
  31. Harbor Deep Dive: Vulnerability Scanning, SBOM, Image Signing
  32. HashiCorp Vault + External Secrets Operator: Zero-Trust Secrets Management
  33. ArgoCD Deep Dive: GitOps Deployments for Multi-Environment Scenarios
  34. Guardrails in Action: Policy-Based Deployment Validation with Kyverno
  35. Observability in Detail: VictoriaMetrics, VictoriaLogs, Grafana
  36. Alerting & Incident Response: From Anomaly to Final Report
  37. Polycrate: Deployment Automation for Kubernetes and Cloud Migration
  38. Managed Backing Services: PostgreSQL, Redis, Kafka on ayedo SDP
  39. Multi-Tenant vs. Whitelabel: Deployment Strategies for SaaS Providers
  40. From Zero to Production: The Complete ayedo SDP Workflow in an Example

TL;DR

  • Traditional container builds with Docker Daemon, root privileges, and docker.sock in CI systems pose an unnecessary security risk—especially when builds run directly in a Kubernetes cluster.
  • Rootless, daemonless tools like Kaniko and Buildah enable secure image builds in pods without privileged mode and without a Docker Daemon—an essential building block for technical guardrails and modern compliance requirements.
  • Kaniko is the “Kubernetes-native” approach: declarative, heavily focused on Dockerfiles, with good layer caching via the registry; Buildah offers more flexibility, deep OCI integration, and scriptability—suitable if you need more complex build workflows or custom toolchains.
  • For the European context—including the Cyber Resilience Act and signed builds—rootless pipelines are a pragmatic way to embed security-by-design in the software supply chain.
  • ayedo consistently relies on rootless builds with Kaniko and Buildah in its platform, integrated into GitLab, Harbor, and GitOps deployment—and supports teams in adopting this architecture in a structured manner.

Why Traditional Container Builds in Kubernetes Become a Risk

Many organizations built their first CI/CD pipelines with a simple assumption: “We install Docker on the runner and call docker build.” In traditional VM setups, this was pragmatic. However, in a Kubernetes cluster, this habit becomes a structural risk.

Typical patterns that are problematic in modern environments:

  • CI jobs with direct access to docker.sock
  • Build pods running as privileged
  • Container builders effectively gaining root privileges on the node

This gives the build process extensive access to the host—and indirectly to other workloads. From an attacker’s perspective, the CI environment is highly attractive: access to source code, credentials, registries, and often production clusters.
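The anti-pattern is easy to recognize in pipeline definitions. A sketch of a typical docker-in-docker job (a minimal example following GitLab's predefined CI variables; for this to work, the runner must additionally be configured with privileged pods enabled, which is exactly the problem):

```yaml
# Anti-pattern (sketch): docker-in-docker build that depends on a privileged runner
build:
  stage: build
  image: docker:27
  services:
    - docker:27-dind   # only works if the runner allows privileged pods
  script:
    - docker build -t "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}" .
    - docker push "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"
```

Every job of this shape inherits the full power of the Docker Daemon, and with it, far more host access than a build actually needs.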

From a compliance perspective, such setups also make it harder to argue that you consistently implement the principle of least privilege. Security and audit teams are increasingly asking:

  • What privileges do CI jobs actually have?
  • Can a compromised build job affect other workloads in the cluster?
  • How do you ensure that guardrails like “no privileged pods” are consistently enforced?

The answer almost inevitably leads to rootless, daemonless build tools.


Rootless, Daemonless Builds: Principles Instead of Workarounds

“Rootless” and “daemonless” are more than buzzwords—they describe an architectural pattern:

  • Rootless: The build process does not run with root privileges in the container or on the host. User namespaces and user-space file systems ensure that the build “feels” like root but technically does not gain host privileges.
  • Daemonless: There is no long-running Docker Daemon communicating with the host and controlled via docker.sock. Instead, a single process tool directly builds container layers and writes them to a registry or local storage backend.

For Kubernetes-native CI/CD, this means:

  • Build jobs run as regular pods with restrictive PodSecurity or admission policies.
  • Guardrails like “no privileged pods,” “no host mounts,” or “no HostPID/HostIPC” can be strictly enforced.
  • Security zones become clearer: the CI pipeline builds images but does not gain general access to the node.
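Such guardrails can be enforced at the namespace level, for example via the built-in Pod Security Admission controller. A sketch (the namespace name is illustrative; the baseline level blocks privileged pods and host namespaces, while whether a given build image also passes restricted depends on the tool):

```yaml
# Namespace for CI build pods: Pod Security Admission enforces "baseline"
apiVersion: v1
kind: Namespace
metadata:
  name: ci-builds   # illustrative name
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/warn: restricted
```

With this in place, a docker.sock-based build pod is rejected at admission time, while rootless, daemonless build pods schedule normally.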

Kaniko and Buildah are two mature projects that implement this pattern—with different focuses.


Kaniko: Kubernetes-native Image Builds Without Docker Daemon

Kaniko was originally developed by Google to build container images in Kubernetes clusters without needing a Docker Daemon. It is inherently tailored to cluster environments and CI/CD.

Architecture and Functionality

Kaniko:

  • Runs itself as a container in a pod.
  • Reads a Dockerfile and the build context (e.g., from the Git checkout).
  • Simulates the Docker build process in user-space, without accessing docker.sock or host privileges.
  • Writes the resulting image layers directly to a registry, such as Harbor.

Key features:

  • Unprivileged: Kaniko needs neither a privileged pod nor host mounts or docker.sock; note, however, that the executor runs as root inside its own container, so guardrails should target privileged mode and host access rather than a blanket runAsNonRoot.
  • Daemonless: No Docker Daemon is needed in the cluster; this reduces the attack surface and complexity.
  • Multi-Stage Builds: Modern Dockerfile patterns are supported, including multi-stage builds for lean production images.
  • Layer Caching: Kaniko can use a dedicated cache repository in the registry. Frequently reused layers (e.g., base dependencies) do not need to be rebuilt with each build.
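The registry-backed cache shows up directly in the executor invocation. A sketch of the relevant flags (flag names follow the Kaniko executor CLI; the cache repository path is an assumption):

```yaml
# Excerpt from a CI job script: Kaniko with a cache repository in the registry
script:
  - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"
      --cache=true
      --cache-repo "${CI_REGISTRY_IMAGE}/cache"
```

Because the cache lives in the registry rather than on a node, it survives pod restarts and is shared across all runners.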

Strengths of Kaniko

For many teams, Kaniko is the obvious standard:

  • Simplicity: Teams with clean Dockerfiles can often use Kaniko without major adjustments.
  • Kubernetes Integration: Kaniko executors integrate well with existing GitLab runners on Kubernetes.
  • Good Performance Through Remote Caching: Especially in shared runner environments, caching via a registry is more robust than local daemon caches.

The downside of this focus is that Kaniko deliberately does not try to do everything: it is primarily a Dockerfile interpreter, not a general-purpose container toolkit.


Buildah: Rootless Container Tooling with Maximum Flexibility

Buildah originates from the Red Hat environment and is part of the broader OCI ecosystem around Podman. It is not just an “image builder,” but a toolkit for working with container images.

Architecture and Functionality

Buildah:

  • Is a CLI tool that can create, modify, tag, and push images.
  • Supports both classic Dockerfile builds and scripted, step-by-step image assembly.
  • Works OCI-natively and integrates well into toolchains based on open standards.
  • Can also be operated rootless and without a Docker Daemon—both directly on Linux workers and in containers within a Kubernetes cluster.
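For the standard case, a Dockerfile build with Buildah in a CI pod can be sketched like this (the image name and variables are assumptions; STORAGE_DRIVER=vfs and BUILDAH_ISOLATION=chroot are the usual settings for running Buildah inside an unprivileged container):

```yaml
build-buildah:
  stage: build
  image: quay.io/buildah/stable
  variables:
    STORAGE_DRIVER: vfs        # avoid overlay-on-overlay in an unprivileged pod
    BUILDAH_ISOLATION: chroot  # no nested user-namespace setup required
  script:
    - buildah build --tag "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}" .
    - buildah push --creds "${CI_REGISTRY_USER}:${CI_REGISTRY_PASSWORD}"
        "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"
```

Functionally this mirrors the Kaniko case; the difference is that Buildah is a general CLI you can also call outside of Dockerfile builds.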

Strengths of Buildah

Buildah shines when the requirements for the build process become more complex:

  • High Flexibility: In addition to classic Dockerfile builds, you can assemble images step-by-step in scripts—useful for dynamic or generic pipelines.
  • Deep OCI Integration: For teams increasingly relying on OCI standards, SBOMs, and signed artifacts, Buildah fits well into corresponding toolchains.
  • Advanced Features: Finer control over layers, storage backends, and integration with other tools (e.g., Podman) is possible.

The trade-off is a slightly higher entry barrier: those who “only” want to build Dockerfiles often find Kaniko more accessible.
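The scripted, step-by-step style mentioned above can be sketched as a CI job as well (base image, package, and paths are purely illustrative):

```yaml
build-scripted:
  stage: build
  image: quay.io/buildah/stable
  variables:
    STORAGE_DRIVER: vfs
    BUILDAH_ISOLATION: chroot
  script:
    # Assemble the image command by command instead of via a Dockerfile
    - ctr=$(buildah from docker.io/library/alpine:3.20)
    - buildah run "$ctr" -- apk add --no-cache ca-certificates
    - buildah copy "$ctr" ./app /usr/local/bin/app
    - buildah config --entrypoint '["/usr/local/bin/app"]' "$ctr"
    - buildah commit "$ctr" "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"
    - buildah push "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"
```

This style is what makes Buildah attractive for dynamic pipelines: every step is an ordinary shell command that can be generated, branched on, or reused.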


Kaniko vs. Buildah: Which Tool Fits Which Team?

Both tools are rootless, daemonless, and thus fundamentally suitable for making your build pipelines more secure. The decision is less about “better or worse,” but about your context.

Guidelines for Selection

1. Focus on Kubernetes CI with Existing Dockerfiles

  • You already rely heavily on Kubernetes-based runners, such as GitLab runners in the cluster.
  • Your projects use consistent Dockerfiles, multi-stage builds, and standardized patterns.
  • You want a straightforward transition from docker build to a rootless tool.

→ In this setting, Kaniko is usually the pragmatic entry point.

2. Need for Flexible, Scriptable Build Processes

  • You want to create images programmatically without strictly adhering to Dockerfile syntax.
  • Your toolchain should leverage deeper OCI functionalities (e.g., extended metadata, special storage setups).
  • You plan more complex build orchestrations, such as for base image pipelines or multiple product variations.

→ Here, Buildah plays to its strengths.

3. Performance and Caching Considerations

  • Kaniko typically uses a registry-based cache repository—ideal for shared runners in clusters.
  • Buildah can benefit more from local caching if you have dedicated runner nodes where builds run regularly.

In many organizations, a combination is sensible: Kaniko for standardized app workloads, Buildah for special pipelines and platform builds.


Guardrails and Cyber Resilience Act: Rootless Builds as a Compliance Component

With the European Cyber Resilience Act (CRA), which entered into force on December 10, 2024, the regulatory focus shifts even more towards secure software supply chains. After the transition periods, manufacturers will need to demonstrate:

  • that they systematically address security risks,
  • that they manage vulnerabilities and updates throughout the lifecycle,
  • that they provide transparency about the components used (e.g., SBOMs).

In this context, rootless, daemonless builds are not a “nice to have,” but a tangible manifestation of security-by-design.

Enforcing Guardrails in the Cluster

Modern platform teams implement guardrails such as:

  • No privileged pods
  • Prohibition of host mounts (e.g., /var/run/docker.sock)
  • Requirement for runAsNonRoot and restrictive SecurityContexts

With traditional Docker-based builds, these requirements quickly collide. Kaniko and Buildah, on the other hand, fit into such policies without needing exceptions or special roles.
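A guardrail like “no privileged pods” can be expressed as a Kyverno policy, for example (a sketch based on Kyverno's pattern syntax; under this policy, Kaniko and Buildah build pods pass without any exceptions):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers
spec:
  validationFailureAction: Enforce
  rules:
    - name: privileged-containers
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Privileged containers are not allowed."
        pattern:
          spec:
            containers:
              - =(securityContext):
                  =(privileged): "false"
```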

This not only simplifies the technical implementation but also the documentation for compliance audits: you can clearly show that CI builds are subject to the same security rules as all other workloads.

Signed Builds and Traceability

The CRA explicitly promotes measures such as:

  • Signed artifacts
  • Traceable build processes
  • Traceability of versions and components

Kaniko and Buildah produce OCI-compatible images that can be seamlessly combined with signature tools (e.g., via Sigstore/cosign). In a typical pipeline, this conceptually looks like this:

  1. Git commit is merged in GitLab.
  2. Rootless build job (Kaniko or Buildah) creates an image and pushes it to Harbor.
  3. A downstream job signs the image and optionally creates SBOMs and attestations.
  4. GitOps deployment tools distribute only signed images to target clusters.

This creates a supply chain-ready architecture that better withstands the requirements of the CRA and internal policies in the long term.
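Step 3 of this chain can be sketched as a signing job with cosign (the image tag and all HARBOR_* / COSIGN_* variable names are assumptions; key and passphrase would come from masked CI variables):

```yaml
sign-image:
  stage: sign
  image:
    name: gcr.io/projectsigstore/cosign:v2.2.4  # assumed tag; any image with the cosign CLI works
    entrypoint: [""]
  script:
    # COSIGN_PRIVATE_KEY and COSIGN_PASSWORD are masked CI variables (assumption)
    - cosign login "${HARBOR_HOST}" -u "${HARBOR_USER}" -p "${HARBOR_PASSWORD}"
    - cosign sign --key env://COSIGN_PRIVATE_KEY
        "${HARBOR_HOST}/${HARBOR_PROJECT}/app:${CI_COMMIT_SHORT_SHA}"
```

Downstream admission policies can then require a valid signature before an image is admitted to a target cluster.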


Practical Example: GitLab CI/CD Job with Kaniko in Kubernetes

To make this tangible, it’s worth looking at the setup of a pipeline with GitLab and Kaniko on the ayedo platform—abstracted from specific YAML.

Role Distribution in the Build Process

  • GitLab orchestrates the CI/CD pipeline and manages project configuration, branches, and merge requests.
  • GitLab Runner on Kubernetes starts a separate pod in the cluster for each job.
  • Kaniko runs in these pods as a container, reads the project repository, and generates images.
  • Harbor serves as the central registry for images and Helm charts.

No step requires a Docker Daemon or privileged pods in the cluster.

Workflow of a Kaniko Build Job

Conceptually, the following happens in the build job:

  1. GitLab schedules a “build” job in the pipeline (typically after tests and linting).
  2. The Kubernetes-based runner starts a pod using the Kaniko executor image.
  3. The pod receives:
    • Read-only access to the project repository
    • Environment variables and secrets for the build context
    • A writable workspace for temporary files
  4. Kaniko reads the Dockerfile and build context, builds the image, and pushes it to Harbor.

This setup ensures that the build process remains secure and compliant, leveraging the strengths of Kubernetes-native tools like Kaniko.
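Put together, a minimal version of this job could look as follows (Harbor host, project, and credential variable names are assumptions; in practice they come from protected CI/CD variables):

```yaml
build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug  # the debug tag ships a shell for CI use
    entrypoint: [""]
  script:
    # Registry credentials for the push to Harbor (variable names are assumptions)
    - mkdir -p /kaniko/.docker
    - >-
      echo "{\"auths\":{\"${HARBOR_HOST}\":{\"auth\":\"$(printf '%s:%s' "${HARBOR_USER}" "${HARBOR_PASSWORD}" | base64)\"}}}"
      > /kaniko/.docker/config.json
    - /kaniko/executor
        --context "${CI_PROJECT_DIR}"
        --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
        --destination "${HARBOR_HOST}/${HARBOR_PROJECT}/${CI_PROJECT_NAME}:${CI_COMMIT_SHORT_SHA}"
```

Note that nothing in this job asks for privileged mode, host mounts, or docker.sock: it is an ordinary pod that cluster-wide guardrails can treat like any other workload.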
