Helm Charts as a Polycrate Block: More Control Over Chart Deployments
Fabian Peter · 11 minute read

Orchestrating Helm Charts with Ansible and Polycrate: more control, more reproducibility
Read the whole series (24 articles)

This series shows step by step how Ansible with Polycrate becomes a structured, shareable, compliance-ready automation platform – from the basics all the way to enterprise scenarios.

  1. Install Polycrate and Build Your First Ansible Block in 15 Minutes
  2. Blocks, Actions, and Workspaces: The Modular Principle of Polycrate
  3. Linux Servers on Autopilot: System Management with Polycrate and Ansible
  4. Nginx and Let's Encrypt as a Reusable Polycrate Block
  5. Managing Docker Stacks on Linux Servers with Polycrate
  6. Many Servers, One Truth: Multi-Server Management with Polycrate Inventories
  7. Windows Automation with Polycrate: Ansible and WinRM Without Pain
  8. Windows Software Deployment without SCCM: Chocolatey and Ansible
  9. Hybrid Automation: Windows and Linux in the Same Polycrate Workspace
  10. Deploy Kubernetes Apps from the PolyHub: From Idea to Deployment in Minutes
  11. Creating Your Own Kubernetes App as a Polycrate Block: A Step-by-Step Guide
  12. Multi-Cluster Kubernetes with Polycrate: Why One Cluster, One Workspace
  13. SSH Sessions and kubectl Debugging: Polycrate as an Operations Tool
  14. Helm Charts as a Polycrate Block: More Control Over Chart Deployments
  15. Policy as Code: Automating Compliance Requirements with Polycrate
  16. Workspace Encryption: Managing Secrets in GDPR Compliance – Without External Tooling
  17. Managing Raspberry Pi and Edge Nodes with Polycrate in IoT and Edge Computing
  18. Enterprise Automation: Building, Versioning, and Sharing Blocks Within Teams
  19. Polycrate MCP: Connecting AI Assistants with Live Infrastructure Context
  20. Polycrate vs. plain Ansible: What You Gain – and Why It's Worth It
  21. The Polycrate Ecosystem: PolyHub, API, MCP, and the Future of Automation
  22. Your First Productive Polycrate Workspace: A Checklist for Getting Started
  23. Auditable Operations: SSH Sessions and CLI Activities with Polycrate API
  24. Polycrate API for Teams: Centralized Monitoring and Remote Triggering

TL;DR

  • Deploying Helm charts directly via CLI works – but only with Ansible and Polycrate do deployments become truly idempotent, versioned, and team-friendly.
  • block.config makes your Helm values a central, versioned configuration point – including a Jinja2 template that cleanly generates complex values.yaml structures.
  • With Ansible modules like kubernetes.core.helm_repository, kubernetes.core.helm, kubernetes.core.helm_rollback, you get upgrade, diff, and rollback logic as reusable actions in a Polycrate block.
  • Polycrate solves the dependency problem: Helm, kubectl, Python, and the Ansible collections run in the container – no local installation sprees, no version conflicts – and workspaces can be encrypted.
  • ayedo supports teams with proven patterns, tooling, and workshops around Helm, Ansible, and Polycrate – including tailored Kubernetes solutions.

Why Helm via Ansible and Polycrate?

Helm is the standard tool for deploying complex applications on Kubernetes. Many teams start with:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm upgrade --install postgres bitnami/postgresql -f values-prod.yaml

As long as a single admin manages their cluster, that’s okay. But as soon as multiple environments, multiple people, and compliance requirements come into play, typical problems arise:

  • Where is the valid values.yaml for prod actually located?
  • Who deployed which version of the chart and when?
  • How do I perform a structured rollback without searching through shell histories?
  • How do I ensure that all colleagues use the same Helm/Ansible version?

This is exactly where the combination of Ansible and Polycrate shows its strengths:

  • Ansible brings idempotency, modular tasks, and API integrations (e.g. kubernetes.core.helm).
  • Polycrate gives Ansible a structure through the block model and solves the dependency problem through container execution.
  • block.config makes your Helm values cleanly versioned configurations that can be shared across workspaces, teams, and even via OCI registry.

In this post, you’ll build a complete Polycrate block that deploys a PostgreSQL Helm chart – including:

  • Jinja2 values template
  • Install/Upgrade action
  • Diff action via dry_run
  • Rollback action

And we’ll honestly look at when you should still use Helm directly.


Architecture: Helm via Ansible in the Polycrate Workspace

We start with a workspace:

# workspace.poly
name: acme-corp-automation
organization: acme

blocks:
  - name: postgres
    from: registry.acme-corp.com/acme/helm/postgres-helm:0.1.0
    config:
      namespace: "databases-prod"
      chart_version: "12.1.2"
      postgres:
        storage_size: "50Gi"

Key points:

  • from: points to a block in an OCI registry (here fictional registry.acme-corp.com/...; public examples include cargo.ayedo.cloud or PolyHub). After polycrate blocks pull, the block lives under blocks/registry.acme-corp.com/acme/helm/postgres-helm/ – the version is pinned in from: (:0.1.0).
  • The config under the block entry overrides defaults from block.poly – so you can use the same block for dev, stage, prod with different values.
  • The workspace.poly structure follows Workspaces best practices.

For Kubernetes access, we place the kubeconfig in the workspace:

artifacts/secrets/kubeconfig.yml

Polycrate automatically loads files under artifacts/secrets/ into workspace.secrets. Sensitive content can be protected with the integrated workspace encryption (see Workspace encryption). No kubeconfig_path entry is needed in workspace.poly or block.poly: Polycrate sets KUBECONFIG and K8S_AUTH_KUBECONFIG in the action container to the workspace kubeconfig, and kubernetes.core.helm* and other Kubernetes modules pick that up automatically.
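
Putting this together, the workspace directory looks roughly like this (a sketch; the block directory appears after polycrate blocks pull):

acme-corp-automation/
├── workspace.poly
├── artifacts/
│   └── secrets/
│       └── kubeconfig.yml
└── blocks/
    └── registry.acme-corp.com/acme/helm/postgres-helm/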


The postgres-helm block: configuration and actions

Our block lives under blocks/registry.acme-corp.com/acme/helm/postgres-helm/. The block.poly defines configuration and actions (name = full registry path without tag):

# blocks/registry.acme-corp.com/acme/helm/postgres-helm/block.poly
name: registry.acme-corp.com/acme/helm/postgres-helm
version: 0.1.0
kind: generic
description: "Deploys a PostgreSQL Helm chart via Ansible and kubernetes.core modules"

config:
  helm_release_name: "acme-postgres"
  namespace: "databases"
  chart_repo_name: "bitnami"
  chart_repo_url: "https://charts.bitnami.com/bitnami"
  chart_name: "postgresql"
  chart_version: "12.1.2"

  postgres:
    username: "app"
    database: "appdb"
    storage_size: "10Gi"
    resources:
      requests:
        cpu: "100m"
        memory: "256Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"

actions:
  - name: install
    description: "Install or upgrade PostgreSQL release"
    playbook: install.yml

  - name: upgrade
    description: "Upgrade PostgreSQL Helm release to configured chart_version"
    playbook: upgrade.yml

  - name: rollback
    description: "Rollback PostgreSQL Helm release to previous revision"
    playbook: rollback.yml

  - name: diff
    description: "Show diff of pending changes for PostgreSQL Helm release"
    playbook: diff.yml

Important details:

  • Chart parameters (chart_repo_url, chart_version, postgres.*) live in config and are thus:
    • versioned in the Git repo
    • overridable per block instance in the workspace
    • usable in Jinja2 templates and Ansible playbooks

Through the block model, Polycrate prevents the typical playbook sprawl that many teams experience with plain Ansible. Everything related to this chart sits in one block: block.poly, playbooks, templates.
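
Concretely, the block directory contains exactly the files built in this post:

blocks/registry.acme-corp.com/acme/helm/postgres-helm/
├── block.poly
├── install.yml
├── upgrade.yml
├── diff.yml
├── rollback.yml
└── templates/
    └── values.yml.j2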


Jinja2 template for complex Helm values

Instead of maintaining a static values.yaml for each environment, we generate it from block.config. The template lives in the block:

# blocks/registry.acme-corp.com/acme/helm/postgres-helm/templates/values.yml.j2
global:
  postgresql:
    auth:
      username: "{{ block.config.postgres.username }}"
      database: "{{ block.config.postgres.database }}"

primary:
  persistence:
    size: "{{ block.config.postgres.storage_size }}"

resources:
  requests:
    cpu: "{{ block.config.postgres.resources.requests.cpu }}"
    memory: "{{ block.config.postgres.resources.requests.memory }}"
  limits:
    cpu: "{{ block.config.postgres.resources.limits.cpu }}"
    memory: "{{ block.config.postgres.resources.limits.memory }}"

This achieves:

  • All Helm values are centrally defined in block.config and optionally workspace.blocks[].config.
  • The template can become arbitrarily complex (nested structures, conditions, loops).
  • Changes to the configuration are Git-diffable and traceable.

With plain Helm, you would typically manage values-dev.yaml, values-prod.yaml, and copies of them – often with copy-paste deviations that no one keeps track of.
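
For illustration: with the defaults from block.poly above – and assuming Polycrate deep-merges the workspace override (storage_size: "50Gi") over the block defaults – the rendered values file for the prod instance would come out like this (the artifacts path is a sketch per the conventions described below):

# artifacts/blocks/postgres/acme-postgres-values.yml (rendered – sketch)
global:
  postgresql:
    auth:
      username: "app"
      database: "appdb"

primary:
  persistence:
    size: "50Gi"

resources:
  requests:
    cpu: "100m"
    memory: "256Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"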


Install playbook: Helm repo and chart deployment

The Ansible modules from kubernetes.core talk to the Kubernetes API. Since these are API calls, hosts: localhost with connection: local is correct here – the code runs in the Polycrate container, not on a remote host.

# blocks/registry.acme-corp.com/acme/helm/postgres-helm/install.yml
- name: Install or upgrade PostgreSQL Helm release
  hosts: localhost
  connection: local
  gather_facts: false

  vars:
    release_name: "{{ block.config.helm_release_name }}"
    namespace: "{{ block.config.namespace }}"
    chart_repo_name: "{{ block.config.chart_repo_name }}"
    chart_repo_url: "{{ block.config.chart_repo_url }}"
    chart_name: "{{ block.config.chart_name }}"
    chart_version: "{{ block.config.chart_version }}"
    values_file: "{{ block.artifacts.path }}/{{ release_name }}-values.yml"

  tasks:
    - name: Ensure Helm repository is configured
      kubernetes.core.helm_repository:
        name: "{{ chart_repo_name }}"
        repo_url: "{{ chart_repo_url }}"
        state: present

    - name: Render Helm values from template
      ansible.builtin.template:
        src: "templates/values.yml.j2"
        dest: "{{ values_file }}"

    - name: Install or upgrade PostgreSQL Helm release
      kubernetes.core.helm:
        name: "{{ release_name }}"
        chart_ref: "{{ chart_repo_name }}/{{ chart_name }}"
        release_namespace: "{{ namespace }}"
        chart_version: "{{ chart_version }}"
        values_files:
          - "{{ values_file }}"
        create_namespace: true
        wait: true
        atomic: true
        state: present

Rendered values are written under {{ block.artifacts.path }} (typically artifacts/blocks/<block-name>/), not a hard-coded /workspace/ path – see Artifacts and Best practices.

Once built, it runs with a simple Polycrate command:

polycrate run postgres install

In the background, Polycrate starts a container with a consistent toolchain (Ansible, Python, kubectl, Helm – configurable via Dockerfile), as described in the Ansible integration. For the kubeconfig, Polycrate sets KUBECONFIG / K8S_AUTH_KUBECONFIG, so the kubernetes.core.helm* modules need no kubeconfig: parameter per task. No pip install, no helm binary chaos, no global Python interpreters – this solves the classic dependency problem of automation stacks.

With plain Ansible, you would have to:

  • Install Ansible and kubernetes.core on all laptops/runners.
  • Keep ansible.cfg and Python environments in sync.
  • Install Helm yourself or write scripts to do so.

Polycrate takes care of this and ensures that the actions run identically everywhere.


Upgrade workflow: chart version in block.config, run the action

From Helm’s perspective, an upgrade is just another helm upgrade. We model this as its own action:

# blocks/registry.acme-corp.com/acme/helm/postgres-helm/upgrade.yml
- name: Upgrade PostgreSQL Helm release to configured chart_version
  hosts: localhost
  connection: local
  gather_facts: false

  vars:
    release_name: "{{ block.config.helm_release_name }}"
    namespace: "{{ block.config.namespace }}"
    chart_repo_name: "{{ block.config.chart_repo_name }}"
    chart_repo_url: "{{ block.config.chart_repo_url }}"
    chart_name: "{{ block.config.chart_name }}"
    chart_version: "{{ block.config.chart_version }}"
    values_file: "{{ block.artifacts.path }}/{{ release_name }}-values.yml"

  tasks:
    - name: Ensure Helm repository is configured
      kubernetes.core.helm_repository:
        name: "{{ chart_repo_name }}"
        repo_url: "{{ chart_repo_url }}"
        state: present

    - name: Render Helm values from template
      ansible.builtin.template:
        src: "templates/values.yml.j2"
        dest: "{{ values_file }}"

    - name: Upgrade PostgreSQL Helm release
      kubernetes.core.helm:
        name: "{{ release_name }}"
        chart_ref: "{{ chart_repo_name }}/{{ chart_name }}"
        release_namespace: "{{ namespace }}"
        chart_version: "{{ chart_version }}"
        values_files:
          - "{{ values_file }}"
        create_namespace: true
        wait: true
        atomic: true
        state: present

The upgrade workflow in practice:

  1. Adjust the chart version in block.poly or in the workspace, e.g.:

    # workspace.poly – excerpt
    blocks:
      - name: postgres
        from: registry.acme-corp.com/acme/helm/postgres-helm:0.1.0
        config:
          chart_version: "12.2.0"
  2. Commit & push – the change is traceably versioned.

  3. Run the upgrade:

    polycrate run postgres upgrade

You combine:

  • Versioning of the chart version via Git.
  • Reproducible execution via Polycrate (container + block).
  • Idempotent upgrade logic via Ansible.

Want to share this block with other teams? Push it to an OCI registry, e.g.:

registry.acme-corp.com/acme/helm/postgres-helm:0.1.0

(Public or hosted, e.g. cargo.ayedo.cloud/....) Through the registry documentation and PolyHub, you can share such blocks team- or organization-wide. This is sharable automation instead of local Helm scripts.


Diff action: what would change?

Before you perform an upgrade, you often want to know: what would change? We use dry_run:

# blocks/registry.acme-corp.com/acme/helm/postgres-helm/diff.yml
- name: Show pending changes for PostgreSQL Helm release
  hosts: localhost
  connection: local
  gather_facts: false

  vars:
    release_name: "{{ block.config.helm_release_name }}"
    namespace: "{{ block.config.namespace }}"
    chart_repo_name: "{{ block.config.chart_repo_name }}"
    chart_repo_url: "{{ block.config.chart_repo_url }}"
    chart_name: "{{ block.config.chart_name }}"
    chart_version: "{{ block.config.chart_version }}"
    values_file: "{{ block.artifacts.path }}/{{ release_name }}-values.yml"

  tasks:
    - name: Ensure Helm repository is configured
      kubernetes.core.helm_repository:
        name: "{{ chart_repo_name }}"
        repo_url: "{{ chart_repo_url }}"
        state: present

    - name: Render Helm values from template
      ansible.builtin.template:
        src: "templates/values.yml.j2"
        dest: "{{ values_file }}"

    - name: Run Helm dry-run to show diff
      kubernetes.core.helm:
        name: "{{ release_name }}"
        chart_ref: "{{ chart_repo_name }}/{{ chart_name }}"
        release_namespace: "{{ namespace }}"
        chart_version: "{{ chart_version }}"
        values_files:
          - "{{ values_file }}"
        dry_run: true
        state: present

Run:

polycrate run postgres diff

The output shows what Helm would change without deploying anything. You can archive it in logs, hand it to auditors, or use it as a prerequisite for manual approval.


Rollback action: kubernetes.core.helm_rollback

When an upgrade fails, you want to think as little as possible. The rollback action wraps that in one command:

# blocks/registry.acme-corp.com/acme/helm/postgres-helm/rollback.yml
- name: Rollback PostgreSQL Helm release
  hosts: localhost
  connection: local
  gather_facts: false

  vars:
    release_name: "{{ block.config.helm_release_name }}"
    namespace: "{{ block.config.namespace }}"

  tasks:
    - name: Rollback to previous revision
      kubernetes.core.helm_rollback:
        name: "{{ release_name }}"
        namespace: "{{ namespace }}"

Run:

polycrate run postgres rollback

Without Polycrate/Ansible you would typically:

  • call helm history,
  • pick a revision,
  • type helm rollback postgres 12 by hand.

With Polycrate this becomes an action that:

  • is documented as code,
  • can be shared across the team,
  • works in pipelines the same as on an admin workstation.

When Helm directly, when via Ansible and Polycrate?

Helm directly from the CLI remains an important tool. An honest trade-off:

Helm directly makes sense when:

  • you are trying out a new chart locally or debugging,
  • one person changes something ad hoc in a test cluster,
  • you manage very small, simple deployments without a team context.

Helm via Ansible in Polycrate has the advantage when:

  • multiple environments (dev/stage/prod) exist with their own values,
  • multiple people run deployments and you need traceability of who did what when,
  • you need reusable, versioned building blocks (e.g. a “standard Postgres” for all teams),
  • compliance requirements (e.g. in regulated industries) demand encrypted kubeconfigs and auditable deployments.

Polycrate adds:

  • Containerized execution (no local Helm/Ansible installations).
  • Built-in workspace encryption with age – without external tools like Vault.
  • Good UX via actions (polycrate run postgres install) instead of long Ansible CLI commands.
  • A block model that structures Helm deployments and makes them reusable building blocks.

More on this in Best practices for blocks and Best practices for using Polycrate.


Frequently asked questions

Do I still need Helm installed locally if I use Helm via Polycrate?

For automated deployments via Polycrate: no. Polycrate runs Ansible playbooks in a container that includes Helm, kubectl, Python, and the required collections. You only need Polycrate on your workstation, not Helm itself.

For local experiments, debugging, or quickly trying a new chart, a local Helm is still practical. Many teams use both: Helm CLI locally, Ansible+Polycrate in automation and CI/CD.

How do I handle sensitive values like passwords?

Configuration values like username, database name, or resource limits can live in block.config without issue. For passwords, choose another path:

  • Create them as a K8s Secret consumed by your chart.
  • Or store them in artifacts/secrets/ and access via workspace.secrets[...].
  • Encrypt the workspace with Polycrate (see Workspace encryption) so kubeconfigs and secret files are protected in the Git repo.

The nice part: you don’t need an external Vault for a lightweight but secure setup – Polycrate already brings the encryption you need.
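
A sketch of the second option – note that the file name artifacts/secrets/postgres-password, the lookup key, and the Secret name are illustrative assumptions, not fixed Polycrate conventions; the chart then consumes the Secret via its own values (e.g. an existingSecret-style setting):

# Sketch: create a K8s Secret from a workspace secret (names are illustrative)
- name: Create Secret with the Postgres password
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Secret
      metadata:
        name: acme-postgres-auth
        namespace: "{{ block.config.namespace }}"
      stringData:
        postgres-password: "{{ workspace.secrets['postgres-password'] }}"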

Can I use the same block for multiple clusters and stages?

Yes. Define a separate block entry in workspace.poly per environment with its own config – the same block type from the registry, with a pinned from::

blocks:
  - name: postgres-dev
    from: registry.acme-corp.com/acme/helm/postgres-helm:0.1.0
    config:
      namespace: "databases-dev"
      chart_version: "12.1.2"

  - name: postgres-prod
    from: registry.acme-corp.com/acme/helm/postgres-helm:0.1.0
    config:
      namespace: "databases-prod"
      chart_version: "12.2.0"

The logic (playbooks, template) stays the same; namespace, chart version, and resources differ per instance. Each workspace has exactly one kubeconfig (artifacts/secrets/kubeconfig.yml) – Polycrate integrates it automatically for Ansible. Two entries like above are useful for dev vs. prod in the same cluster (different namespaces). For separate clusters, you typically use one workspace per cluster with its own kubeconfig, but you can reference the same registry block in each workspace.
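
For the separate-cluster case, the resulting layout looks roughly like this (directory names are illustrative):

workspaces/
├── prod-cluster/
│   ├── workspace.poly                      # references postgres-helm:0.1.0
│   └── artifacts/secrets/kubeconfig.yml    # prod cluster kubeconfig
└── dev-cluster/
    ├── workspace.poly                      # references postgres-helm:0.1.0
    └── artifacts/secrets/kubeconfig.yml    # dev cluster kubeconfig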

More questions? See our FAQ.


From theory to practice

With the PostgreSQL example block you have seen how Helm deployments turn into a structured, reusable component:

  • Chart configuration lives in block.config and is thus versioned and team-friendly.
  • A Jinja2 template produces a full values.yaml from that config model.
  • Actions like install, upgrade, diff, and rollback become clearly named, easy-to-run Polycrate actions.
  • Thanks to containerized execution and integrated workspace encryption, dependencies stay manageable and sensitive data protected.

From a platform and Kubernetes team perspective, this is an important step: away from individually maintained Helm scripts, toward reusable blocks shareable via registry that other teams can use immediately – or that live in a central block library (internal or via PolyHub).

ayedo supports exactly this path: from first Helm deployments through structured Polycrate workspaces to organization-wide automation libraries. In our projects we combine Ansible, Polycrate, and Kubernetes solutions to build reusable components for databases, messaging, monitoring, or security – including governance, compliance, and documentation.

If you already use Helm today and want to move toward reproducible, team-friendly deployments, a joint deep dive is often the most efficient path. In our Helm workshop we develop concrete blocks, workspaces, and upgrade strategies with your team – against your real charts and clusters.

Learn more in a personal conversation: Helm workshop
