Deploy Kubernetes Apps from the PolyHub: From Idea to Deployment in Minutes
Fabian Peter · 7 minute read


Deploy Kubernetes apps from the PolyHub: leverage official ayedo blocks
Read the full series (24 articles)

This series shows, step by step, how Ansible with Polycrate becomes a structured, shareable, and compliance-ready automation platform – from the basics to enterprise scenarios.

  1. Install Polycrate and Build Your First Ansible Block in 15 Minutes
  2. Blocks, Actions, and Workspaces: The Modular Principle of Polycrate
  3. Linux Servers on Autopilot: System Management with Polycrate and Ansible
  4. Nginx and Let's Encrypt as a Reusable Polycrate Block
  5. Managing Docker Stacks on Linux Servers with Polycrate
  6. Many Servers, One Truth: Multi-Server Management with Polycrate Inventories
  7. Windows Automation with Polycrate: Ansible and WinRM Without Pain
  8. Windows Software Deployment without SCCM: Chocolatey and Ansible
  9. Hybrid Automation: Windows and Linux in the Same Polycrate Workspace
  10. Deploy Kubernetes Apps from the PolyHub: From Idea to Deployment in Minutes
  11. Creating Your Own Kubernetes App as a Polycrate Block: A Step-by-Step Guide
  12. Multi-Cluster Kubernetes with Polycrate: Why One Cluster, One Workspace
  13. SSH Sessions and kubectl Debugging: Polycrate as an Operations Tool
  14. Helm Charts as a Polycrate Block: More Control Over Chart Deployments
  15. Policy as Code: Automating Compliance Requirements with Polycrate
  16. Workspace Encryption: Managing Secrets in GDPR Compliance – Without External Tooling
  17. Managing Raspberry Pi and Edge Nodes with Polycrate in IoT and Edge Computing
  18. Enterprise Automation: Building, Versioning, and Sharing Blocks Within Teams
  19. Polycrate MCP: Connecting AI Assistants with Live Infrastructure Context
  20. Polycrate vs. plain Ansible: What You Gain – and Why It's Worth It
  21. The Polycrate Ecosystem: PolyHub, API, MCP, and the Future of Automation
  22. Your First Productive Polycrate Workspace: A Checklist for Getting Started
  23. Auditable Operations: SSH Sessions and CLI Activities with Polycrate API
  24. Polycrate API for Teams: Centralized Monitoring and Remote Triggering

TL;DR

  • PolyHub functions like an app store for infrastructure: Ready-made ayedo blocks for Kubernetes apps (nginx, cert-manager, external-dns, and many more) can be pulled directly from the registry cargo.ayedo.cloud into your workspace and used without a local Ansible setup.
  • In this post, you’ll build a complete ingress stack (nginx + cert-manager + external-dns) solely by configuring your workspace.poly and executing a few polycrate commands – including clean Kubeconfig handling and versioned blocks.
  • You’ll learn how an official ayedo Kubernetes block is structured (block.poly + Ansible playbook), why version pinning (:0.2.2 instead of :latest) is mandatory in production, and how to leverage the extensive ecosystem on PolyHub for your own workspaces.
  • Polycrate solves the classic Ansible dependency problem by running all playbooks in a predefined container with a complete toolchain (kubectl, Helm, Python, Collections) – identical on every workstation, without Python chaos and without manual setup.
  • ayedo provides a tested, reusable foundation for Kubernetes solutions with Polycrate and the official blocks on PolyHub, which you can directly adopt and adapt to your environment.

PolyHub as an App Store for Kubernetes Infrastructure

If you’ve ever set up an ingress stack manually with plain Ansible or Helm, you know the pattern:

  • Gather values and CRDs for ingress-nginx
  • Configure cert-manager with the correct issuers
  • Connect external-dns to your DNS provider
  • Version and document everything cleanly

With Polycrate and PolyHub, you reverse the principle: Instead of modeling everything yourself, you start with ready-made, tested building blocks.

PolyHub at https://hub.polycrate.io is a registry of Polycrate blocks, stored as OCI images in cargo.ayedo.cloud – similar to container images on Docker Hub, but specifically for automation. For Kubernetes, you’ll find among others:

  • cargo.ayedo.cloud/ayedo/k8s/nginx:* – Ingress Controller
  • cargo.ayedo.cloud/ayedo/k8s/cert-manager:* – TLS Automation
  • cargo.ayedo.cloud/ayedo/k8s/external-dns:* – DNS Automation

The core ideas behind this:

  • Sharable Automation: Once built as a block, it can be reused in any workspace and shared via an OCI registry.
  • Guardrails instead of sprawl: Instead of loosely scattered playbooks, you encapsulate logic in clearly defined blocks with a configuration interface.
  • Containerized Execution: All Ansible playbooks run in a Polycrate container – no local Ansible installation, no Python version conflicts, no manually installed collections.

In this post, we use exactly these building blocks to build a production-ready ingress stack for an example cluster.


Preparing the Workspace: Declare Kubeconfig and Blocks

We start with a workspace acme-corp-automation. It is the logical unit for your automation – whether you’re managing Linux servers, Windows hosts, or Kubernetes clusters.

A minimal workspace.poly could start like this:

name: acme-corp-automation
organization: acme

config: {}
blocks: []
workflows: []

The top-level config field is for workspace and toolchain settings (for example the container image under config.image) – it is not a free-form bag for arbitrary automation variables; those belong in block config or secrets.poly. See Configuration. The empty config: {} here is just a placeholder until you set an image, for example.

Important for Kubernetes: The Kubeconfig of your cluster is always stored as a secret file in the workspace in Polycrate:

artifacts/secrets/kubeconfig.yml

This file is treated as a secret by Polycrate and can be protected with the built-in workspace encryption, see Workspace Encryption.

To activate encryption, simply run on the CLI:

polycrate workspace encrypt

You don’t need an external vault – Polycrate uses age internally and integrates it into your workflow execution, which often significantly simplifies compliance requirements.
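Putting both steps together, the initial setup might look like this – a minimal sketch, assuming you already have an admin kubeconfig for the cluster (the source path ~/.kube/acme-cluster.yml is purely illustrative):

```shell
# Place the cluster's kubeconfig where Polycrate expects it
mkdir -p artifacts/secrets
cp ~/.kube/acme-cluster.yml artifacts/secrets/kubeconfig.yml

# Protect the workspace secrets with the built-in age-based encryption
polycrate workspace encrypt
```

From this point on, the kubeconfig travels with the workspace, but never has to sit unencrypted in your repository.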


Pulling a Kubernetes Block from PolyHub

The official ayedo blocks for Kubernetes apps are available as OCI images on cargo.ayedo.cloud. You can pull any block into your workspace:

polycrate blocks pull cargo.ayedo.cloud/ayedo/k8s/nginx:0.2.2

Here’s what happens:

  • Polycrate loads the nginx block in version 0.2.2 from the registry.
  • The block is stored in the workspace cache; you can reference it from there.
  • No local Ansible, kubectl, or Helm is necessary – all of this comes from the Polycrate container.

Important: The version specification :0.2.2 is not a minor detail – it is a best practice. In production, you should never use :latest; see also the Best Practices in the Documentation.

Now we integrate the block into our workspace.


Integrating and Configuring the Block in workspace.poly

In workspace.poly, you declare which block instances you want to use. For our ingress stack, we add three blocks:

name: acme-corp-automation
organization: acme

blocks:
  - name: k8s-nginx
    from: cargo.ayedo.cloud/ayedo/k8s/nginx:0.2.2
    config:
      namespace: "ingress-nginx"
      ingress_class: "nginx"

  - name: k8s-cert-manager
    from: cargo.ayedo.cloud/ayedo/k8s/cert-manager:0.3.0
    config:
      namespace: "cert-manager"
      email: "admin@acme-corp.com"

  - name: k8s-external-dns
    from: cargo.ayedo.cloud/ayedo/k8s/external-dns:0.4.1
    config:
      namespace: "external-dns"
      provider: "route53"
      domain_filter: "acme-corp.com"

workflows: []

A few important points:

  • from: always references the full block name including version (:0.2.2).
  • name: is the instance name in the workspace – this is how you refer to the block later (polycrate run k8s-nginx deploy).
  • config: contains block-specific values. These later appear as block.config.* in your Ansible playbook.

The beauty of this: you don’t need to know the internal logic of these blocks to use them. They are like well-designed building blocks in a kit – neatly encapsulated and versioned.
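With the blocks declared as above, each instance can already be deployed individually via its deploy action – for example, just the ingress controller:

```shell
# Run the deploy action of the block instance named k8s-nginx
polycrate run k8s-nginx deploy
```

Later we will chain all three instances into a single workflow; running them one by one like this is useful while you iterate on a block's configuration.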


How an ayedo Kubernetes Block is Structured

Let's look at what such an ayedo block looks like inside. Imagine you were developing cargo.ayedo.cloud/ayedo/k8s/nginx:0.2.2 locally. In the block directory, there would be a block.poly:

name: k8s-nginx
version: 0.2.2
kind: generic

config:
  namespace: "ingress-nginx"
  ingress_class: "nginx"

actions:
  - name: deploy
    playbook: deploy.yml

  - name: delete
    playbook: delete.yml

Important details:

  • Kubeconfig: You do not need a kubeconfig_path in block.poly. Polycrate takes the kubeconfig from artifacts/secrets/kubeconfig.yml and sets KUBECONFIG in the action container to the correct path – kubectl, Helm, and Ansible modules (for example community.kubernetes.helm) use it without extra block configuration. Note that .poly files do not support Jinja templating such as {{ workspace.secrets[...] }}.
  • actions define which commands you can execute via polycrate run (deploy, delete, …).
  • The version 0.2.2 is hard-coded in the block.poly and is also used in the registry.

Alongside the block.poly, the block ships an Ansible playbook, e.g., deploy.yml:

- name: Deploy ingress-nginx via Helm
  hosts: localhost
  gather_facts: false

  tasks:
    - name: Install ingress-nginx Helm chart
      community.kubernetes.helm:
        name: ingress-nginx
        chart_ref: ingress-nginx/ingress-nginx
        release_namespace: "{{ block.config.namespace }}"
        create_namespace: true
        values:
          controller:
            ingressClassResource:
              name: "{{ block.config.ingress_class }}"

A few explanations:

  • hosts: localhost is correct here because the playbook only interacts with the Kubernetes API. It does not alter local packages – it runs in the Polycrate container and controls the cluster from there.
  • The module community.kubernetes.helm comes from an Ansible collection that is already installed in the Polycrate container. You don’t have to worry about this dependency. KUBECONFIG is already set in the container – an explicit kubeconfig: task parameter is not required.
  • block.config.* are the values from your workspace.poly configuration for this block instance (here, among others, namespace, ingress_class). You control the deployment entirely from the workspace.
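The delete action declared in block.poly would reference a counterpart playbook. A minimal sketch of what such a delete.yml could look like – this is an illustration in the style of deploy.yml above, not the official block's actual source:

```yaml
- name: Remove ingress-nginx via Helm
  hosts: localhost
  gather_facts: false

  tasks:
    - name: Uninstall ingress-nginx Helm chart
      community.kubernetes.helm:
        name: ingress-nginx
        release_namespace: "{{ block.config.namespace }}"
        state: absent
```

The same pattern holds: the playbook talks only to the Kubernetes API, and the namespace comes from your workspace configuration via block.config.*.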

With plain Ansible, you would need to:

  • Set up a Python environment with matching versions
  • Install kubectl, helm, Ansible plus collections
  • Wire up kubeconfig access yourself (Polycrate does this via KUBECONFIG)
  • Organize your playbooks and roles in a free directory tree

With Polycrate, you get a clearly structured block definition, a standardized execution environment in the container, and a consistent way to pass configuration from the workspace into the playbooks. See also the Ansible Integration in the Documentation.


Executing the Complete Ingress Stack as a Workflow

A major advantage of the block structure is that you can build a reproducible workflow from multiple blocks. In our example:

  • k8s-nginx – Ingress Controller
  • k8s-cert-manager – TLS Certificates
  • k8s-external-dns – DNS Entries for your Ingresses

We define this stack as a workflow in workspace.poly:

name: acme-corp-automation
organization: acme

blocks:
  - name: k8s-nginx
    from: cargo.ayedo.cloud/ayedo/k8s/nginx:0.2.2
    config:
      namespace: "ingress-nginx"
      ingress_class: "nginx"

  - name: k8s-cert-manager
    from: cargo.ayedo.cloud/ayedo/k8s/cert-manager:0.3.0
    config:
      namespace: "cert-manager"
      email: "admin@acme-corp.com"

  - name: k8s-external-dns
    from: cargo.ayedo.cloud/ayedo/k8s/external-dns:0.4.1
    config:
      namespace: "external-dns"
      provider: "route53"
      domain_filter: "acme-corp.com"

workflows:
  - name: k8s-ingress-stack
    actions:
      - block: k8s-nginx
        action: deploy
      - block: k8s-cert-manager
        action: deploy
      - block: k8s-external-dns
        action: deploy

With this, you can roll out the complete stack with:

polycrate workflows run k8s-ingress-stack

Here’s what happens:

  • Polycrate starts a container for each action, where Ansible is executed with the respective block configuration.
  • Your kubeconfig is passed from artifacts/secrets/kubeconfig.yml into the container, without ever having to sit unencrypted in the repository.
  • Each block uses its own tested logic, but all follow the same UX guidelines (uniform actions like deploy, delete, upgrade).
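After the workflow has completed, you can verify the stack with plain kubectl against the same kubeconfig – the namespaces below are the ones configured in workspace.poly above:

```shell
# Check that all three components of the ingress stack are running
kubectl get pods -n ingress-nginx
kubectl get pods -n cert-manager
kubectl get pods -n external-dns
```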

With plain Ansible, you would typically:

  • Set up a folder structure like playbooks/k8s/nginx.yml, playbooks/k8s/cert-manager.yml, playbooks/k8s/external-dns.yml
  • Remember (or document) the order in which you need to call these playbooks
  • Assume a working Ansible environment on every developer laptop

With Polycrate, these playbooks are packaged into reusable building blocks and orchestrated via workflows. This prevents the playbook sprawl that many teams know from organically grown environments, and it is covered in detail in the Best Practices.


Version Pinning: Why :latest is Taboo in Production

A central principle in productive Kubernetes solutions is reproducibility. That’s exactly why :latest is not a good idea for container images – and the same goes for Polycrate blocks.

The registry URL of a block always contains the version:

cargo.ayedo.cloud/ayedo/k8s/nginx:0.2.2

Best practices include:

  • Pin explicitly: always use a specific version (:0.2.2) in workspace.poly.
  • Upgrade consciously: change the pinned version deliberately, review what changed in the block, and test before rolling out to production.
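A conscious upgrade then means changing the pinned tag yourself rather than trusting a moving one. Sketched with a hypothetical newer version 0.2.3 (the tag is illustrative):

```shell
# Pull the new version explicitly ...
polycrate blocks pull cargo.ayedo.cloud/ayedo/k8s/nginx:0.2.3

# ... then update from: in workspace.poly to :0.2.3 and redeploy
polycrate run k8s-nginx deploy
```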
