Multi-Cluster Kubernetes with Polycrate: Why One Cluster, One Workspace
Fabian Peter · 8 minute read

Read the whole series (24 articles)

This series shows, step by step, how Ansible with Polycrate becomes a structured, shareable, and compliance-ready automation platform, from the basics all the way to enterprise scenarios.

  1. Install Polycrate and Build Your First Ansible Block in 15 Minutes
  2. Blocks, Actions, and Workspaces: The Modular Principle of Polycrate
  3. Linux Servers on Autopilot: System Management with Polycrate and Ansible
  4. Nginx and Let's Encrypt as a Reusable Polycrate Block
  5. Managing Docker Stacks on Linux Servers with Polycrate
  6. Many Servers, One Truth: Multi-Server Management with Polycrate Inventories
  7. Windows Automation with Polycrate: Ansible and WinRM Without Pain
  8. Windows Software Deployment without SCCM: Chocolatey and Ansible
  9. Hybrid Automation: Windows and Linux in the Same Polycrate Workspace
  10. Deploy Kubernetes Apps from the PolyHub: From Idea to Deployment in Minutes
  11. Creating Your Own Kubernetes App as a Polycrate Block: A Step-by-Step Guide
  12. Multi-Cluster Kubernetes with Polycrate: Why One Cluster, One Workspace
  13. SSH Sessions and kubectl Debugging: Polycrate as an Operations Tool
  14. Helm Charts as a Polycrate Block: More Control Over Chart Deployments
  15. Policy as Code: Automating Compliance Requirements with Polycrate
  16. Workspace Encryption: Managing Secrets in GDPR Compliance – Without External Tooling
  17. Managing Raspberry Pi and Edge Nodes with Polycrate in IoT and Edge Computing
  18. Enterprise Automation: Building, Versioning, and Sharing Blocks Within Teams
  19. Polycrate MCP: Connecting AI Assistants with Live Infrastructure Context
  20. Polycrate vs. plain Ansible: What You Gain – and Why It's Worth It
  21. The Polycrate Ecosystem: PolyHub, API, MCP, and the Future of Automation
  22. Your First Productive Polycrate Workspace: A Checklist for Getting Started
  23. Auditable Operations: SSH Sessions and CLI Activities with Polycrate API
  24. Polycrate API for Teams: Centralized Monitoring and Remote Triggering

TL;DR

  • In Polycrate, multi-cluster automatically means multi-workspace: one workspace manages exactly one Kubernetes cluster. That keeps responsibilities, kubeconfigs, and access rights clearly separated.
  • Shared logic lives in reusable blocks in an OCI registry. Staging and production use the same block type but different, explicitly pinned versions—that is the promotion mechanism.
  • Deploy workflow: pull the new block version into the staging workspace, test with polycrate run, then pin the same block version in production and run again. Rollback means: revert the version in that workspace and rerun the action.
  • Each block deploys into its own namespace—technical isolation, fewer shared-state effects, and simpler audits per workspace and per block.
  • ayedo supports you with Polycrate, best practices, and services to build multi-cluster setups cleanly as a multi-workspace architecture—from the first demo to production.

Why one workspace = one Kubernetes cluster

Polycrate follows a clear convention: a workspace represents a self-contained automation domain. For Kubernetes solutions, that means: one workspace = one cluster.

There are several practical reasons:

  1. Clear ownership
    A production cluster has different requirements (SLAs, compliance, change processes) than a staging cluster. If both lived in the same workspace, policies, secrets, and audit trails would be mixed.

  2. No kubeconfig chaos
    Polycrate expects exactly one kubeconfig per workspace—under:

    artifacts/secrets/kubeconfig.yml

    That makes it very hard for a playbook to “accidentally” hit the wrong cluster because the kubectl context was wrong. Multi-cluster is not solved with multiple kubeconfigs in one workspace, but with multiple workspaces.

  3. Containerized execution without local dependencies
    Polycrate always runs Ansible in a container. The full toolchain (Python, kubectl, helm, Ansible collections) is encapsulated. No local dependency drift, no Python version chaos, lower supply-chain risk—the classic dependency problem many teams see with plain Ansible.

  4. Workspace encryption and compliance per cluster
    Each workspace can be encrypted separately, including kubeconfig and secrets. That makes it easier to meet audit requirements per cluster. See Workspace encryption.

For more on recommended layouts, the Polycrate best practices describe this “one workspace = one cluster” pattern in detail.


Example setup: acme-staging-1 and acme-production-1

Take a fictional but realistic setup at “Acme Corp”:

  • acme-staging-1: staging cluster for internal tests
  • acme-production-1: production cluster for customer traffic

Each cluster has its own workspace:

acme-staging-1/
  workspace.poly
  inventory.yml
  artifacts/
    secrets/
      kubeconfig.yml
  blocks/
    registry.acme-corp.com/acme/apps/app-deploy/
      block.poly
      deploy.yml
      rollback.yml

acme-production-1/
  workspace.poly
  inventory.yml
  artifacts/
    secrets/
      kubeconfig.yml
  blocks/
    registry.acme-corp.com/acme/apps/app-deploy/
      # same block path after polycrate blocks pull / first run

Important:

  • Each workspace has its own kubeconfig—there are never multiple kubeconfigs in one workspace.
  • Blocks are pulled via from: from an OCI registry; locally they live under blocks/<registry-path>/ (here fictional: registry.acme-corp.com/...). Public or hosted registries include, for example, cargo.ayedo.cloud or PolyHub.

A shared block for both clusters

The shared logic—for example deploying a web service—lives in one Polycrate block. You can develop it locally and share it with other workspaces through a registry.

A simple app-deploy block (after pushing to your registry; name = full registry path without tag, as in other posts):

# blocks/registry.acme-corp.com/acme/apps/app-deploy/block.poly
name: registry.acme-corp.com/acme/apps/app-deploy
version: 1.2.0
kind: generic

config:
  namespace: ""
  image: ""

actions:
  - name: deploy
    description: "Deploy the application to the cluster"
    playbook: deploy.yml

  - name: rollback
    description: "Rollback to a previous version"
    playbook: rollback.yml

There is no Jinja2 in workspace.poly or block.poly—you set namespace and image as literal values per workspace in the block instance config (see next section).

Key points:

  • Namespace isolation: Per environment you set e.g. acme-app-acme-staging-1 vs. acme-app-acme-production-1 in the block config—no overlap between clusters.
  • No kubeconfig path in the block: Polycrate sets KUBECONFIG and K8S_AUTH_KUBECONFIG in the action container to the kubeconfig under artifacts/secrets/kubeconfig.yml. kubernetes.core.k8s picks that up automatically—you do not pass kubeconfig: on tasks and no kubeconfig_path in block.poly.
  • Block versioning: version: 1.2.0 is the block version published to the registry. Staging might test 1.2.0 while production stays on 1.1.0 (pinned via from: …:1.1.0).

When the block is ready, push it to an OCI registry (see registry documentation). Any number of workspaces can use the same block type—each with its own explicitly pinned version.


Configuring workspaces: staging vs. production

Now we wire this block into two workspaces: acme-staging-1 and acme-production-1. Both use the same block instance name and registry URL, but different versions.

workspace.poly for staging

# acme-staging-1/workspace.poly
name: acme-staging-1
organization: acme

blocks:
  - name: app
    from: registry.acme-corp.com/acme/apps/app-deploy:1.2.0
    config:
      namespace: "acme-app-acme-staging-1"
      image: "ghcr.io/acme/myapp:1.2.0"

workspace.poly for production

# acme-production-1/workspace.poly
name: acme-production-1
organization: acme

blocks:
  - name: app
    from: registry.acme-corp.com/acme/apps/app-deploy:1.1.0
    config:
      namespace: "acme-app-acme-production-1"
      image: "ghcr.io/acme/myapp:1.1.0"

Important:

  • The version is always pinned in the from: URL (:1.2.0, :1.1.0). Never use :latest.
  • Settings like namespace and image can differ between staging and production—even when using the same block type.
  • Each workspace has independent state: logs, secrets, kubeconfig, and other artifacts stay separated.

With plain Ansible you would enforce this yourself—separate project trees, different ansible.cfg files, custom kubeconfig tooling, etc. Polycrate provides these guardrails through the block and workspace model out of the box.


Deploy workflow: from staging to production

What does a concrete deploy workflow look like?

Step 1: deploy the new version in staging

Staging already pins 1.2.0. Run the deploy action:

cd acme-staging-1
polycrate run app deploy

The deploy.yml playbook runs in a Polycrate container, not on your laptop. Ansible talks to the Kubernetes API via kubernetes.core.k8s. Polycrate handles cluster auth: KUBECONFIG points at the workspace kubeconfig—you do not set a kubeconfig parameter on the modules.

A full example playbook:

# …/app-deploy/deploy.yml
- name: Deploy application to Kubernetes
  hosts: localhost
  connection: local
  gather_facts: false

  vars:
    namespace: "{{ block.config.namespace }}"
    image: "{{ block.config.image }}"

  tasks:
    - name: Ensure namespace exists
      kubernetes.core.k8s:
        api_version: v1
        kind: Namespace
        name: "{{ namespace }}"
        state: present

    - name: Apply deployment
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: acme-app
            namespace: "{{ namespace }}"
          spec:
            replicas: 3
            selector:
              matchLabels:
                app: acme-app
            template:
              metadata:
                labels:
                  app: acme-app
              spec:
                containers:
                  - name: acme-app
                    image: "{{ image }}"
                    ports:
                      - containerPort: 8080

    - name: Apply service
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Service
          metadata:
            name: acme-app
            namespace: "{{ namespace }}"
          spec:
            selector:
              app: acme-app
            ports:
              - port: 80
                targetPort: 8080

Note: hosts: localhost and connection: local are correct here because we target the Kubernetes API from inside the Polycrate container, not a remote host over SSH.
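After the deploy tasks, it can be useful to wait until the rollout is actually available before declaring staging green. A minimal sketch of such a verification task using kubernetes.core.k8s_info (this task is not part of the deploy.yml above; the retry and delay values are illustrative):

```yaml
# …/app-deploy/deploy.yml – optional verification task (sketch)
- name: Wait until the Deployment reports available replicas
  kubernetes.core.k8s_info:
    api_version: apps/v1
    kind: Deployment
    name: acme-app
    namespace: "{{ namespace }}"
  register: app_status
  # Retry until the Deployment's status shows all three replicas available
  until: >-
    app_status.resources | length > 0 and
    (app_status.resources[0].status.availableReplicas | default(0)) >= 3
  retries: 30
  delay: 10
```

Like the other tasks, this relies on the KUBECONFIG environment variable that Polycrate sets in the action container, so no kubeconfig parameter is needed.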

Step 2: promote to production

Promote the same block version into the production workspace:

  1. Update the block version in production:
    # acme-production-1/workspace.poly (after approval)
    name: acme-production-1
    organization: acme
    
    blocks:
      - name: app
        from: registry.acme-corp.com/acme/apps/app-deploy:1.2.0
        config:
          namespace: "acme-app-acme-production-1"
          image: "ghcr.io/acme/myapp:1.2.0"
  2. Run deploy in production:
    cd acme-production-1
    polycrate run app deploy

That gives you a clean promotion model: staging tests first; production follows after approval; the block is the same artifact—only the version changes.
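In Git terms, the promotion is a small, reviewable diff against acme-production-1/workspace.poly:

```diff
 blocks:
   - name: app
-    from: registry.acme-corp.com/acme/apps/app-deploy:1.1.0
+    from: registry.acme-corp.com/acme/apps/app-deploy:1.2.0
     config:
       namespace: "acme-app-acme-production-1"
-      image: "ghcr.io/acme/myapp:1.1.0"
+      image: "ghcr.io/acme/myapp:1.2.0"
```

That diff is what a reviewer approves before the production run, and what the Git history later answers audit questions with.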


Rollback: revert version, rerun action

  1. You want production back from 1.2.0 to 1.1.0.
  2. Adjust the block reference in acme-production-1/workspace.poly:
    blocks:
      - name: app
        from: registry.acme-corp.com/acme/apps/app-deploy:1.1.0
        config:
          namespace: "acme-app-acme-production-1"
          image: "ghcr.io/acme/myapp:1.1.0"
  3. Run deploy again:
    cd acme-production-1
    polycrate run app deploy

Ansible idempotency brings the cluster to the state described in block version 1.1.0. Rollback logic can also live as another action in the same block (e.g. rollback.yml).
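If you prefer an explicit rollback action over re-pinning the block version, rollback.yml could use the kubernetes.core.k8s_rollback module to revert the Deployment to its previous revision. A minimal sketch, assuming the same localhost pattern as deploy.yml (the block's actual rollback.yml is not shown in this article):

```yaml
# …/app-deploy/rollback.yml (sketch)
- name: Roll back the application deployment
  hosts: localhost
  connection: local
  gather_facts: false

  tasks:
    - name: Roll the Deployment back to its previous revision
      kubernetes.core.k8s_rollback:
        api_version: apps/v1
        kind: Deployment
        name: acme-app
        namespace: "{{ block.config.namespace }}"
```

As with deploy.yml, no kubeconfig parameter is set on the task; Polycrate's KUBECONFIG environment variable covers cluster authentication.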


Namespace isolation per block

In the workspace.poly examples above:

config:
  namespace: "acme-app-acme-staging-1"   # or …-production-1

That yields:

  • Namespace acme-app-acme-staging-1 in the staging cluster
  • Namespace acme-app-acme-production-1 in the production cluster

Benefits:

  • No accidental resource overlap when multiple teams reuse blocks.
  • Clear audit: logs and cluster state show which workspace (and cluster context) a deployment came from.
  • Guardrails against playbook sprawl: namespace lives in block config, not scattered ad-hoc scripts.

The Polycrate best practices recommend namespace isolation per block as a default, especially in regulated environments.


Compliance, encryption, and auditability per workspace

  • Workspace encryption: Secrets such as kubeconfig.yml can be encrypted per workspace with age. See Workspace encryption.
  • Audit paths per cluster: Deployments, promotions, and rollbacks are traceable as Polycrate actions. Together with Git history on workspace.poly, you can answer who rolled which block version to which workspace when.
  • Clean separation of duties: The staging team does not need production secrets—and vice versa. Access is scoped at workspace level.

FAQ

Can I manage multiple Kubernetes clusters in one workspace?

You could theoretically use multiple kubeconfigs in a container, but Polycrate is intentionally different: one workspace has one kubeconfig under artifacts/secrets/kubeconfig.yml. Multi-cluster is modeled with multiple workspaces.

How do I handle staging vs. production differences?

Use the config section per block in workspace.poly—different replica counts, images, resource limits, feature flags, etc. The block stays the same; only per-workspace config changes.
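For example, assuming the block exposed an additional replicas key (hypothetical here; the block.poly shown above only defines namespace and image), the two workspaces could diverge like this:

```yaml
# Hypothetical sketch: same block type, different per-workspace config
# acme-staging-1/workspace.poly
blocks:
  - name: app
    from: registry.acme-corp.com/acme/apps/app-deploy:1.2.0
    config:
      namespace: "acme-app-acme-staging-1"
      image: "ghcr.io/acme/myapp:1.2.0"
      replicas: 1    # hypothetical key: small staging footprint

# acme-production-1/workspace.poly
blocks:
  - name: app
    from: registry.acme-corp.com/acme/apps/app-deploy:1.2.0
    config:
      namespace: "acme-app-acme-production-1"
      image: "ghcr.io/acme/myapp:1.2.0"
      replicas: 3    # hypothetical key: production capacity
```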

How are kubeconfigs and secrets protected?

Workspace secrets live under artifacts/secrets/ and can be encrypted with the built-in workspace encryption. See Workspace encryption.

More questions? See our FAQ.


From theory to practice

  • One workspace = one cluster
  • Shared logic = shared blocks in the registry
  • Promotions and rollbacks = version changes in workspace.poly
  • Isolation = namespaces per block and secrets per workspace

Polycrate removes much of what plain Ansible pushes into custom scripts and conventions: containerized toolchains, a structured block model, registry integration, and workspace encryption.

If you want to see how this could look in your environment—including workspace.poly layout, block design, and registry strategy—a hands-on demo is a good next step.

More formats with ayedo: Workshops.
