Your First Productive Polycrate Workspace: A Checklist for Getting Started
TL;DR A well-named, clearly structured Polycrate workspace is half the battle: a consistent name …
This series shows step by step how Ansible with Polycrate becomes a structured, shareable, and compliance-ready automation platform – from the basics to enterprise scenarios.
Pin a block version in staging and deploy it with polycrate run, then pin the same block version in production and run again. Rollback means: revert the version in that workspace and rerun the action.

Polycrate follows a clear convention: a workspace represents a self-contained automation domain. For Kubernetes solutions, that means: one workspace = one cluster.
There are several practical reasons:
Clear ownership
A production cluster has different requirements (SLAs, compliance, change processes) than a staging cluster. If both lived in the same workspace, policies, secrets, and audit trails would be mixed.
No kubeconfig chaos
Polycrate expects exactly one kubeconfig per workspace—under:

```
artifacts/secrets/kubeconfig.yml
```

That makes it very hard for a playbook to “accidentally” hit the wrong cluster because the kubectl context was wrong. Multi-cluster is not solved with multiple kubeconfigs in one workspace, but with multiple workspaces.
Containerized execution without local dependencies
Polycrate always runs Ansible in a container. The full toolchain (Python, kubectl, helm, Ansible collections) is encapsulated. No local dependency drift, no Python version chaos, lower supply-chain risk—the classic dependency problem many teams see with plain Ansible.
Workspace encryption and compliance per cluster
Each workspace can be encrypted separately, including kubeconfig and secrets. That makes it easier to meet audit requirements per cluster. See Workspace encryption.
For more on recommended layouts, the Polycrate best practices describe this “one workspace = one cluster” pattern in detail.
Take a fictional but realistic setup at “Acme Corp”:
- acme-staging-1: staging cluster for internal tests
- acme-production-1: production cluster for customer traffic

Each cluster has its own workspace:
```
acme-staging-1/
  workspace.poly
  inventory.yml
  artifacts/
    secrets/
      kubeconfig.yml
  blocks/
    registry.acme-corp.com/acme/apps/app-deploy/
      block.poly
      deploy.yml
      rollback.yml

acme-production-1/
  workspace.poly
  inventory.yml
  artifacts/
    secrets/
      kubeconfig.yml
  blocks/
    registry.acme-corp.com/acme/apps/app-deploy/
      # same block path after polycrate blocks pull / first run
```

Important:

- Each workspace has exactly one kubeconfig—there are never multiple kubeconfigs in one workspace.
- Blocks are pulled via from: from an OCI registry; locally they live under blocks/&lt;registry-path&gt;/ (here fictional: registry.acme-corp.com/...). Public or hosted registries include e.g. cargo.ayedo.cloud or PolyHub.

The shared logic—for example deploying a web service—lives in one Polycrate block. You can develop it locally and share it with other workspaces through a registry.
A simple app-deploy block (after pushing to your registry; name = full registry path without tag, as in other posts):
```yaml
# blocks/registry.acme-corp.com/acme/apps/app-deploy/block.poly
name: registry.acme-corp.com/acme/apps/app-deploy
version: 1.2.0
kind: generic
config:
  namespace: ""
  image: ""
actions:
  - name: deploy
    description: "Deploy the application to the cluster"
    playbook: deploy.yml
  - name: rollback
    description: "Rollback to a previous version"
    playbook: rollback.yml
```

There is no Jinja2 in workspace.poly or block.poly—you set namespace and image as literal values per workspace in the block instance config (see next section).
Key points:

- Namespaces: acme-app-acme-staging-1 vs. acme-app-acme-production-1 in the block config—no overlap between clusters.
- No kubeconfig path in the block: Polycrate sets KUBECONFIG and K8S_AUTH_KUBECONFIG in the action container to the kubeconfig under artifacts/secrets/kubeconfig.yml. kubernetes.core.k8s picks that up automatically—you do not pass kubeconfig: on tasks and no kubeconfig_path in block.poly.
- version: 1.2.0 is the block version published to the registry. Staging might test 1.2.0 while production stays on 1.1.0 (pinned via from: …:1.1.0).

When the block is ready, push it to an OCI registry (see registry documentation). Any number of workspaces can use the same block type—each with its own explicitly pinned version.
Now we wire this block into two workspaces: acme-staging-1 and acme-production-1. Both use the same block instance name and registry URL, but different versions.
```yaml
# acme-staging-1/workspace.poly
name: acme-staging-1
organization: acme
blocks:
  - name: app
    from: registry.acme-corp.com/acme/apps/app-deploy:1.2.0
    config:
      namespace: "acme-app-acme-staging-1"
      image: "ghcr.io/acme/myapp:1.2.0"
```

```yaml
# acme-production-1/workspace.poly
name: acme-production-1
organization: acme
blocks:
  - name: app
    from: registry.acme-corp.com/acme/apps/app-deploy:1.1.0
    config:
      namespace: "acme-app-acme-production-1"
      image: "ghcr.io/acme/myapp:1.1.0"
```

Important:

- Versions are pinned explicitly in the from: URL (:1.2.0, :1.1.0). Never use :latest.
- namespace and image can differ between staging and production—even when using the same block type.
- Secrets, kubeconfig, and other artifacts stay separated.

With plain Ansible you would enforce this yourself—separate project trees, different ansible.cfg files, custom kubeconfig tooling, etc. Polycrate provides these guardrails through the block and workspace model out of the box.
What does a concrete deploy workflow look like?
Staging already pins 1.2.0. Run the deploy action:
```shell
cd acme-staging-1
polycrate run app deploy
```

The deploy.yml playbook runs in a Polycrate container, not on your laptop. Ansible talks to the Kubernetes API via kubernetes.core.k8s. Polycrate handles cluster auth: KUBECONFIG points at the workspace kubeconfig—you do not set a kubeconfig parameter on the modules.
A full example playbook:
```yaml
# …/app-deploy/deploy.yml
- name: Deploy application to Kubernetes
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    namespace: "{{ block.config.namespace }}"
    image: "{{ block.config.image }}"
  tasks:
    - name: Ensure namespace exists
      kubernetes.core.k8s:
        api_version: v1
        kind: Namespace
        name: "{{ namespace }}"
        state: present

    - name: Apply deployment
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: acme-app
            namespace: "{{ namespace }}"
          spec:
            replicas: 3
            selector:
              matchLabels:
                app: acme-app
            template:
              metadata:
                labels:
                  app: acme-app
              spec:
                containers:
                  - name: acme-app
                    image: "{{ image }}"
                    ports:
                      - containerPort: 8080

    - name: Apply service
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Service
          metadata:
            name: acme-app
            namespace: "{{ namespace }}"
          spec:
            selector:
              app: acme-app
            ports:
              - port: 80
                targetPort: 8080
```

Note: hosts: localhost and connection: local are correct here because we target the Kubernetes API from inside the Polycrate container, not a remote host over SSH.
Promote the same block version into the production workspace:
```yaml
# acme-production-1/workspace.poly (after approval)
name: acme-production-1
organization: acme
blocks:
  - name: app
    from: registry.acme-corp.com/acme/apps/app-deploy:1.2.0
    config:
      namespace: "acme-app-acme-production-1"
      image: "ghcr.io/acme/myapp:1.2.0"
```

```shell
cd acme-production-1
polycrate run app deploy
```

That gives you a clean promotion model: staging tests first; production follows after approval; the block is the same artifact—only the version changes.
To roll back, pin the previous version again in acme-production-1/workspace.poly:
```yaml
blocks:
  - name: app
    from: registry.acme-corp.com/acme/apps/app-deploy:1.1.0
    config:
      namespace: "acme-app-acme-production-1"
      image: "ghcr.io/acme/myapp:1.1.0"
```

```shell
cd acme-production-1
polycrate run app deploy
```

Ansible idempotency brings the cluster to the state described in block version 1.1.0. Rollback logic can also live as another action in the same block (e.g. rollback.yml).
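Such a rollback action could look like the following sketch. This is an assumption, not shipped code: it shells out to kubectl rollout undo inside the action container (where kubectl is available and KUBECONFIG is already set), and acme-app is the deployment name used in the deploy playbook above.

```yaml
# …/app-deploy/rollback.yml — hedged sketch, not part of the published block
- name: Roll back the application deployment
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    namespace: "{{ block.config.namespace }}"
  tasks:
    # kubectl picks up KUBECONFIG from the action container environment,
    # just like kubernetes.core.k8s does.
    - name: Undo the last rollout of the acme-app deployment
      ansible.builtin.command:
        cmd: "kubectl rollout undo deployment/acme-app -n {{ namespace }}"
      changed_when: true
```

Note that this undoes the last rollout in the cluster, while the version-pin rollback above restores a declared state; the two approaches complement each other.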
In the workspace.poly examples above:

```yaml
config:
  namespace: "acme-app-acme-staging-1"  # or …-production-1
```

That yields:
- acme-app-acme-staging-1 in the staging cluster
- acme-app-acme-production-1 in the production cluster

Benefits: the namespaces cannot collide, and quotas, network policies, and RBAC can be scoped per environment. The Polycrate best practices recommend namespace isolation per block as a default, especially in regulated environments.
- kubeconfig.yml can be encrypted per workspace with age. See Workspace encryption.
- With a version-controlled workspace.poly, you can answer who rolled which block version to which workspace when.
- You could theoretically use multiple kubeconfigs in a container, but Polycrate is intentionally different: one workspace has one kubeconfig under artifacts/secrets/kubeconfig.yml. Multi-cluster is modeled with multiple workspaces.
Use the config section per block in workspace.poly—different replica counts, images, resource limits, feature flags, etc. The block stays the same; only per-workspace config changes.
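As a sketch of that idea (the replicas and feature_flags keys are hypothetical and would have to exist in the block's config schema), the same block instance could be tuned per workspace like this:

```yaml
# acme-staging-1/workspace.poly — hypothetical extra config keys for illustration
blocks:
  - name: app
    from: registry.acme-corp.com/acme/apps/app-deploy:1.2.0
    config:
      namespace: "acme-app-acme-staging-1"
      image: "ghcr.io/acme/myapp:1.2.0"
      replicas: 1            # smaller footprint in staging
      feature_flags:
        new_checkout: true   # try the flag here before production
```

Production would keep the same keys with its own values; the block's playbooks read them via block.config, exactly as with namespace and image above.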
Workspace secrets live under artifacts/secrets/ and can be encrypted with the built-in workspace encryption. See Workspace encryption.
More questions? See our FAQ.
Polycrate removes much of what plain Ansible pushes into custom scripts and conventions: containerized toolchains, a structured block model, registry integration, and workspace encryption.
If you want to see how this could look in your environment—including workspace.poly layout, block design, and registry strategy—a hands-on demo is a good next step.
More formats with ayedo: Workshops.