Creating Your Own Kubernetes App as a Polycrate Block: A Step-by-Step Guide
Fabian Peter · 13 min read


Build your own Kubernetes app as a reusable Polycrate block
Read the whole series (24 articles)

This series shows, step by step, how Ansible with Polycrate becomes a structured, shareable, and compliance-ready automation platform – from the basics to enterprise scenarios.

  1. Install Polycrate and Build Your First Ansible Block in 15 Minutes
  2. Blocks, Actions, and Workspaces: The Modular Principle of Polycrate
  3. Linux Servers on Autopilot: System Management with Polycrate and Ansible
  4. Nginx and Let's Encrypt as a Reusable Polycrate Block
  5. Managing Docker Stacks on Linux Servers with Polycrate
  6. Many Servers, One Truth: Multi-Server Management with Polycrate Inventories
  7. Windows Automation with Polycrate: Ansible and WinRM Without Pain
  8. Windows Software Deployment without SCCM: Chocolatey and Ansible
  9. Hybrid Automation: Windows and Linux in the Same Polycrate Workspace
  10. Deploy Kubernetes Apps from the PolyHub: From Idea to Deployment in Minutes
  11. Creating Your Own Kubernetes App as a Polycrate Block: A Step-by-Step Guide
  12. Multi-Cluster Kubernetes with Polycrate: Why One Cluster, One Workspace
  13. SSH Sessions and kubectl Debugging: Polycrate as an Operations Tool
  14. Helm Charts as a Polycrate Block: More Control Over Chart Deployments
  15. Policy as Code: Automating Compliance Requirements with Polycrate
  16. Workspace Encryption: Managing Secrets in GDPR Compliance – Without External Tooling
  17. Managing Raspberry Pi and Edge Nodes with Polycrate in IoT and Edge Computing
  18. Enterprise Automation: Building, Versioning, and Sharing Blocks Within Teams
  19. Polycrate MCP: Connecting AI Assistants with Live Infrastructure Context
  20. Polycrate vs. plain Ansible: What You Gain – and Why It's Worth It
  21. The Polycrate Ecosystem: PolyHub, API, MCP, and the Future of Automation
  22. Your First Productive Polycrate Workspace: A Checklist for Getting Started
  23. Auditable Operations: SSH Sessions and CLI Activities with Polycrate API
  24. Polycrate API for Teams: Centralized Monitoring and Remote Triggering

TL;DR

  • In this post, you’ll create a complete Polycrate block for your own Kubernetes app – including block.poly, an Ansible playbook, and three Kubernetes templates for Deployment, Service, and Ingress.
  • block.config serves as the single source of truth for image, replicas, namespace, and domain; the upgrade workflow is reduced to: change the image tag in the workspace, run polycrate run myapp install, and you’re done.
  • Thanks to Polycrate’s container execution, you don’t need Ansible, Python, or kubectl locally – the entire toolchain is provided in the container and is identical for the whole team.
  • With the block model featuring actions install, uninstall, and status, you get a clean, reusable interface instead of loosely distributed playbooks; the block can be versioned and shared in an OCI registry.
  • ayedo supports teams with proven Kubernetes solutions, workshops, and best practices around Polycrate and Ansible – from the first app to comprehensive platforms.

Why Creating Your Own Kubernetes Block Makes Sense

Many Kubernetes teams start with Helm charts and a few YAML files in the Git repo. Once multiple internal services, different environments, and compliance requirements come into play, things get confusing:

  • Each app has its own folder structure and conventions.
  • Variables like image, replicas, or domain are spread across multiple files.
  • New colleagues need a lot of context to find “the right” set of manifests.
  • Automation depends on the local setup: Python version, kubectl, Ansible, auth tools.

With Polycrate, you bring order to this automation without giving up Ansible or known Kubernetes patterns:

  • Guardrails through the block model: Each app becomes a block with defined actions (install, uninstall, status). No “playbook sprawl”, just a clear interface.
  • Shareable automation: What you build for your internal app can later be shared as a versioned block via an OCI registry – or published in the PolyHub if it goes beyond internal needs.
  • Dependency Problem Solved: Ansible, Python, kubectl, kubernetes.core collection – everything runs in the Polycrate container. No locally installed tools, no version hell.

Creating your own block is always worthwhile when:

  • Your app does not (yet) exist as an official block in the PolyHub.
  • You want to deploy an internal service (e.g., “billing”, “customer portal”, “reporting”) repeatedly on different clusters.
  • You want to provide a standardized interface for operations teams or other departments: polycrate run myapp install instead of “please read the README and run these five kubectl commands”.

In this post, we’ll build exactly such a block – complete, from structure to registry push.


Initial Scenario: Internal App “myapp” in the Kubernetes Cluster

Our example:

  • Company: ACME Corp.

  • Workspace: acme-corp-automation

  • App: myapp, an internal web service (HTTP on port 8080)

  • Cluster: Already existing, kubeconfig is available locally

  • Domain: myapp.acme-corp.com

  • Goal: A Polycrate block myapp-k8s that we can use with

    polycrate run myapp install

    for different clusters/environments.

Important: We use the Ansible module kubernetes.core.k8s, which communicates directly with the Kubernetes API. The playbook runs inside the Polycrate container with hosts: localhost and connection: local. Nothing is installed in the container itself – all changes happen in the cluster via the API.

Details on Kubernetes integration can also be found in the official documentation:
https://docs.ayedo.de/polycrate/kubernetes/


Preparing the Workspace

First, we define our workspace and integrate the block there. The workspace.poly is located in the root directory of your Git repo:

# workspace.poly
name: acme-corp-automation
organization: acme

blocks:
  - name: myapp
    from: registry.acme.corp/blocks/myapp-k8s:0.1.0
    config:
      image: "registry.acme-corp.com/myapp"
      image_tag: "1.0.0"
      replicas: 2
      namespace: "apps"
      domain: "myapp.acme-corp.com"

A few notes:

  • from: uses registry-style block notation (<registry>/<path>:<version>). registry.acme.corp is a fictitious example; in practice you pull the block with polycrate blocks pull … or from your OCI registry.
  • Under config:, we specify app-specific values. These end up as block.config.* in the block and are our single source of truth.
  • Image versions are intentionally configured in the workspace: This allows you to set different tags per environment later (e.g., dev, staging, prod).

We place the kubeconfig, following Polycrate conventions, in the workspace under artifacts/secrets/kubeconfig.yml. A workspace repo usually holds only configuration and secrets – not a checked-in blocks/myapp-k8s/ directory when the block comes from a registry:

acme-corp-automation/
  workspace.poly
  artifacts/
    secrets/
      kubeconfig.yml

Polycrate sets KUBECONFIG in the action container to point at the workspace kubeconfig; kubectl and Ansible (kubernetes.core) pick it up without any extra path variable. You can encrypt the workspace (including kubeconfig) with the built-in encryption function and age to cleanly cover compliance requirements (see Workspace Encryption) – without any additional tool like Vault.


Block Structure for the Kubernetes App

This is what the block looks like – when developing locally under blocks/myapp-k8s/ or in the workspace cache after pulling from a registry:

blocks/myapp-k8s/
  block.poly
  install.yml
  templates/
    deployment.yml.j2
    service.yml.j2
    ingress.yml.j2

  • block.poly describes the block, its configuration, and actions.
  • install.yml is the central Ansible playbook. We use it for install, uninstall, and status.
  • Under templates/, the Jinja2 templates for the Kubernetes manifests are located.

block.poly: Defining Interface and Configuration

Here is the complete block.poly:

# blocks/myapp-k8s/block.poly
name: myapp-k8s
version: 0.1.0
kind: generic

config:
  image: "registry.acme-corp.com/myapp"
  image_tag: "1.0.0"
  replicas: 2
  namespace: "apps"
  domain: "myapp.acme-corp.com"

actions:
  - name: install
    playbook: install.yml

  - name: uninstall
    playbook: install.yml

  - name: status
    playbook: install.yml

Important notes:

  • config: defines default values. They are overridden by workspace.blocks[].config. In the templates and playbook, we always access them via block.config.*.
  • The actions install, uninstall, and status form a standard interface that is suitable for all Kubernetes apps:
    • install: Creates/updates all resources.
    • uninstall: Removes all resources.
    • status: Shows the current state in the cluster.
  • All three actions use the same playbook install.yml. We later differentiate in the playbook which path to take using the Polycrate variable action.name.

This interface is also valuable for operations teams: no matter which app – the interface remains the same.
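The override behavior described above can be pictured as a simple map merge. This is a hedged Python sketch for illustration only – Polycrate's real merge logic lives in the CLI, and the dict names here are made up:

```python
# Illustrative only: workspace-level config overrides block defaults,
# key by key. Mirrors the precedence described above, not Polycrate's code.
block_defaults = {
    "image": "registry.acme-corp.com/myapp",
    "image_tag": "1.0.0",
    "replicas": 2,
    "namespace": "apps",
    "domain": "myapp.acme-corp.com",
}
# What workspace.blocks[].config might set for this instance:
workspace_overrides = {"image_tag": "1.1.0", "replicas": 3}

# Later keys win, so workspace values shadow the block defaults.
effective = {**block_defaults, **workspace_overrides}
print(effective["image_tag"], effective["replicas"])  # → 1.1.0 3
```

In the templates and playbook, this effective result is what you see as block.config.*.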


Jinja2 Templates for Deployment, Service, and Ingress

Now come the Kubernetes manifests. They are defined as Jinja2 templates and directly access block.config.*. Thus, block.config is our single source of truth.

Deployment Template

# blocks/myapp-k8s/templates/deployment.yml.j2
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: {{ block.config.namespace }}
  labels:
    app: myapp
spec:
  replicas: {{ block.config.replicas }}
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: "{{ block.config.image }}:{{ block.config.image_tag }}"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 20

Service Template

# blocks/myapp-k8s/templates/service.yml.j2
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: {{ block.config.namespace }}
  labels:
    app: myapp
spec:
  type: ClusterIP
  selector:
    app: myapp
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP

Ingress Template

# blocks/myapp-k8s/templates/ingress.yml.j2
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: {{ block.config.namespace }}
  labels:
    app: myapp
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
  ingressClassName: nginx
  rules:
    - host: {{ block.config.domain }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
  tls:
    - hosts:
        - {{ block.config.domain }}
      secretName: myapp-tls

As you can see, all variable values (namespace, replicas, domain, image + tag) come straight from block.config. Upgrades stay simple later because changes happen in exactly one place.
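If you want to sanity-check a template outside of Polycrate, you can render it with a few lines of Python. This is a hedged sketch that assumes the jinja2 package is installed; the nested dict simply mimics what Polycrate exposes as block.config:

```python
# Render a trimmed-down version of the Deployment template locally.
# The "block" dict stands in for the context Polycrate provides.
from jinja2 import Template

template_src = """\
metadata:
  name: myapp
  namespace: {{ block.config.namespace }}
spec:
  replicas: {{ block.config.replicas }}
"""

block = {"config": {"namespace": "apps", "replicas": 2}}
rendered = Template(template_src).render(block=block)
print(rendered)
```

This kind of quick render catches typos in variable paths before you run the block against a real cluster.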


Ansible Playbook install.yml with kubernetes.core.k8s

Now we build the heart: a playbook that executes install, uninstall, or status depending on the action.

Important: The playbook runs inside the Polycrate container on localhost and communicates with the Kubernetes API server via the kubernetes.core collection. Polycrate sets KUBECONFIG; the modules do not need an explicit kubeconfig: parameter on the task.

# blocks/myapp-k8s/install.yml
- name: Manage myapp Kubernetes resources
  hosts: localhost
  connection: local
  gather_facts: false

  vars:
    manifest_dir: "/tmp/myapp-k8s"

  tasks:
    - name: Ensure manifest directory exists (container-local)
      ansible.builtin.file:
        path: "{{ manifest_dir }}"
        state: directory
        mode: "0755"
      when: action.name in ['install', 'uninstall', 'status']

    # INSTALL: Ensure namespace + apply manifests
    - name: Ensure namespace exists
      kubernetes.core.k8s:
        api_version: v1
        kind: Namespace
        name: "{{ block.config.namespace }}"
        state: present
      when: action.name == 'install'

    - name: Render Deployment manifest
      ansible.builtin.template:
        src: "templates/deployment.yml.j2"
        dest: "{{ manifest_dir }}/deployment.yml"
        mode: "0644"
      when: action.name == 'install'

    - name: Render Service manifest
      ansible.builtin.template:
        src: "templates/service.yml.j2"
        dest: "{{ manifest_dir }}/service.yml"
        mode: "0644"
      when: action.name == 'install'

    - name: Render Ingress manifest
      ansible.builtin.template:
        src: "templates/ingress.yml.j2"
        dest: "{{ manifest_dir }}/ingress.yml"
        mode: "0644"
      when: action.name == 'install'

    - name: Apply Deployment
      kubernetes.core.k8s:
        state: present
        src: "{{ manifest_dir }}/deployment.yml"
        wait: true
        wait_timeout: 300
      when: action.name == 'install'

    - name: Apply Service
      kubernetes.core.k8s:
        state: present
        src: "{{ manifest_dir }}/service.yml"
      when: action.name == 'install'

    - name: Apply Ingress
      kubernetes.core.k8s:
        state: present
        src: "{{ manifest_dir }}/ingress.yml"
      when: action.name == 'install'

    # UNINSTALL: Remove resources
    - name: Delete Ingress
      kubernetes.core.k8s:
        state: absent
        api_version: networking.k8s.io/v1
        kind: Ingress
        name: "myapp"
        namespace: "{{ block.config.namespace }}"
      when: action.name == 'uninstall'

    - name: Delete Service
      kubernetes.core.k8s:
        state: absent
        api_version: v1
        kind: Service
        name: "myapp"
        namespace: "{{ block.config.namespace }}"
      when: action.name == 'uninstall'

    - name: Delete Deployment
      kubernetes.core.k8s:
        state: absent
        api_version: apps/v1
        kind: Deployment
        name: "myapp"
        namespace: "{{ block.config.namespace }}"
      when: action.name == 'uninstall'

    # STATUS: Query current state
    - name: Get Deployment status
      kubernetes.core.k8s_info:
        api_version: apps/v1
        kind: Deployment
        namespace: "{{ block.config.namespace }}"
        name: "myapp"
      register: deployment_info
      when: action.name == 'status'

    - name: Show Deployment status
      ansible.builtin.debug:
        var: deployment_info.resources[0].status
      when:
        - action.name == 'status'
        - deployment_info.resources | length > 0

A few key points:

  • Kubeconfig: Polycrate sets KUBECONFIG in the container; kubernetes.core uses it without a kubeconfig: parameter on the task.
  • manifest_dir: A temporary path under /tmp/… is typical for rendered YAML (it goes away with the container). If rendered manifests should persist in the workspace (e.g. for review or GitOps), use something like {{ block.artifacts.path }}/… instead – see Artifacts.
  • For install:
    • We create the namespace first (idempotent).
    • Then we render the three templates into manifest_dir.
    • Then kubernetes.core.k8s applies the files and waits for a ready Deployment.
  • For uninstall, resources are removed explicitly with state: absent.
  • For status, we use kubernetes.core.k8s_info to read the current state.

Polycrate provides the action.name variable automatically (see Actions), so you can branch cleanly in one playbook.
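The status action prints the raw Deployment status object. As a hedged illustration of what you might build on top of it, here is a small Python sketch – the helper name deployment_ready is made up, and the dict mimics the .status fields (replicas, readyReplicas) of the Kubernetes Deployment API:

```python
# Illustrative helper: decide whether a Deployment is fully rolled out,
# based on the status fields that kubernetes.core.k8s_info returns.
def deployment_ready(status: dict) -> bool:
    desired = status.get("replicas", 0)
    ready = status.get("readyReplicas", 0)
    return desired > 0 and ready == desired

# Example status, shaped like a Deployment .status object:
status = {"replicas": 2, "readyReplicas": 2, "updatedReplicas": 2}
print(deployment_ready(status))  # → True
```

A check like this could later feed a CI gate or a monitoring hook, while the block's status action stays a plain, readable debug output.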


Run and Test the Block

With the structure above, you can test the block directly:

# From workspace root
cd /path/to/acme-corp-automation

# Install: create/update resources in the cluster
polycrate run myapp install

Polycrate starts a container with the full toolchain (Ansible, kubernetes.core, Python, optionally kubectl), mounts the workspace, and runs install.yml in the context of the myapp block.

More actions:

# Query status
polycrate run myapp status

# Remove resources again
polycrate run myapp uninstall

Compared to plain Ansible, you do not need to:

  • maintain an ansible.cfg,
  • keep a local Python/Ansible/collection install in sync,
  • document separate “deploy scripts”.

Everything is encapsulated in the block and runs reproducibly in the container.


Quick Look: What Would This Look Like with Plain Ansible?

Without Polycrate, you would typically:

  • Define your own layout (playbooks/, roles/, group_vars/).
  • Install Ansible locally, including the kubernetes.core collection.
  • Decide yourself how to distribute and protect kubeconfig (with Polycrate, KUBECONFIG is set in the container for you).
  • Figure out how other teams should invoke your playbooks:
    • ansible-playbook install.yml -e env=prod?
    • separate scripts per environment?

Sharing playbooks usually means: clone the Git repo, set up the local environment, read the README, type commands by hand.

With Polycrate:

  • Standard interface: polycrate run myapp install|uninstall|status.
  • Containerized toolchain: every teammate gets the same environment with no setup.
  • Shareable block: you can store the block as an OCI image in a registry and reuse it with versions.

Ansible stays the same powerful tool – Polycrate adds structure, reproducibility, and a clear packaging model.


Push the Block to a Registry and Share It

Once you are happy with the result, you can push the block to a registry. That makes it:

  • versionable (:0.1.0, :0.2.0, …),
  • easy to use in other workspaces,
  • optionally visible on PolyHub if you publish it.

Pushing uses polycrate blocks push, not registry push: pass the full registry path of the block (same shape as from:), e.g. cargo.ayedo.cloud/acme/myapp-k8s – not the workspace instance name myapp. The OCI tag comes from version in block.poly (no version suffix on the push command). See CLI reference – polycrate blocks push.

Example: push to an OCI registry (e.g. cargo.ayedo.cloud):

# From workspace root where blocks/myapp-k8s lives (or --workspace <path>)
polycrate blocks push cargo.ayedo.cloud/acme/myapp-k8s
# Short form (alias):
# polycrate push cargo.ayedo.cloud/acme/myapp-k8s

In another workspace you can reference the block like this:

# workspace.poly in another project
name: another-workspace
organization: acme

blocks:
  - name: myapp-prod
    from: cargo.ayedo.cloud/acme/myapp-k8s:0.1.0
    config:
      image: "registry.acme-corp.com/myapp"
      image_tag: "1.0.0"
      replicas: 3
      namespace: "apps-prod"
      domain: "myapp.acme-corp.com"

Always pin versions explicitly (:0.1.0), never use :latest. That is a core Polycrate best practice (see Best Practices).


Upgrade Workflow: Roll Out a New Image Version

The most common day-to-day task: roll out a new app version.

With our setup the workflow stays simple:

  1. Adjust the image tag in the workspace

    # workspace.poly (excerpt)
    blocks:
      - name: myapp
        from: registry.acme.corp/blocks/myapp-k8s:0.1.0
        config:
          image: "registry.acme-corp.com/myapp"
          image_tag: "1.1.0"   # <-- new tag
          replicas: 2
          namespace: "apps"
          domain: "myapp.acme-corp.com"
  2. Run install again

    polycrate run myapp install

    Because Ansible and kubernetes.core.k8s are idempotent, the Deployment is updated with the new image tag. You do not need extra flags or custom “upgrade” commands – idempotency and single source of truth do the rest.

  3. Optionally check status

    polycrate run myapp status

    That way you can see whether the Deployment behaves as expected.

This scales well:

  • For different environments (DEV, STAGE, PROD), define multiple block instances with different config values.
  • For rollbacks, change the tag back and run install again.
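As an illustration of the multi-environment case, one workspace.poly could carry two instances of the same block with different values. This is a hedged sketch – the instance names and values are examples, not a prescribed layout:

```yaml
# workspace.poly (illustrative excerpt) – two instances of the same block
blocks:
  - name: myapp-staging
    from: registry.acme.corp/blocks/myapp-k8s:0.1.0
    config:
      image: "registry.acme-corp.com/myapp"
      image_tag: "1.1.0"
      replicas: 1
      namespace: "apps-staging"
      domain: "myapp-staging.acme-corp.com"

  - name: myapp-prod
    from: registry.acme.corp/blocks/myapp-k8s:0.1.0
    config:
      image: "registry.acme-corp.com/myapp"
      image_tag: "1.0.0"
      replicas: 3
      namespace: "apps-prod"
      domain: "myapp.acme-corp.com"
```

polycrate run myapp-staging install and polycrate run myapp-prod install then address the two environments independently, while the block code stays identical.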

Frequently Asked Questions

Do Kubernetes blocks always require Ansible?

Polycrate supports several technologies, but Ansible is a strong choice for Kubernetes workloads:

  • You benefit from the idempotency of kubernetes.core modules.
  • You can use the same approach for other tasks (e.g. Linux/Windows servers, cloud APIs).
  • Playbooks stay declarative and readable – important for compliance and audits.

If you already rely on Ansible, moving to Kubernetes blocks with Polycrate is a small step – you mainly add Kubernetes-specific tasks.

How do I handle sensitive data like kubeconfig?

Polycrate includes workspace encryption:

  • Files under artifacts/secrets/ – such as our kubeconfig.yml – can be encrypted with age.
  • The key typically lives outside the repo (e.g. in a separate secret store).
  • When actions run, Polycrate decrypts secrets in the container so they are not stored in clear text in Git.

See Workspace Encryption. You do not need an extra vault product – a real advantage for teams that want to move fast yet stay compliant.

Can I use the same block for multiple clusters and environments?

Yes – that is what workspaces and blocks are for:

  • The block describes how to deploy (templates, actions, default configuration).
  • The workspace describes where and with which values to deploy (namespace, domain, image tag, kubeconfig).

For multiple clusters, use multiple workspaces (or multiple block instances in one workspace) with different kubeconfigs and config values. The block code stays unchanged.

More questions? See our FAQ.


From Theory to Practice

With the myapp-k8s block you have a complete, reusable building block:

  • Your Kubernetes app is encapsulated in a clear block structure.
  • block.config is the single source of truth – image, replicas, namespace, and domain are defined centrally.
  • Actions install, uninstall, and status give you a stable interface that operations and compliance can follow.
  • Thanks to Polycrate’s container execution, everything runs in a controlled environment – Ansible, kubernetes.core, and dependencies included. Local setups get much simpler.

In many organizations this first block is the start of a broader automation strategy:

  • More services follow the same standard (install/uninstall/status).
  • Shared patterns move into internal block libraries and are distributed via a registry.
  • Compliance gets easier because deployments are reproducible and versioned.

At ayedo we guide teams on this path – from the first Kubernetes service to full platform automation. In our workshops we show hands-on how to:

  • turn existing Helm charts, YAML stacks, and Ansible playbooks into Polycrate blocks,
  • establish consistent block design for your services,
  • set up registry workflows, workspace encryption, and team collaboration.

If you want to structure your own Kubernetes apps as Polycrate blocks and benefit from proven practices, our Kubernetes block workshop is a good next step:
Kubernetes Block Workshop

Related Articles