This series shows, step by step, how Ansible combined with Polycrate becomes a structured, shareable, and compliance-ready automation platform – from the basics to enterprise scenarios.
TL;DR We build a complete, reusable Kubernetes block for Polycrate:

- The block consists of a block.poly, an Ansible playbook, and three Kubernetes templates for Deployment, Service, and Ingress.
- block.config serves as the single source of truth for image, replicas, namespace, and domain; the upgrade workflow is reduced to: change the image tag in the workspace, run polycrate run myapp install, and you’re done.
- Nobody needs Ansible or kubectl locally – the entire toolchain is provided in the container and is identical for the whole team.
- With the actions install, uninstall, and status, you get a clean, reusable interface instead of loosely distributed playbooks; the block can be versioned and shared in an OCI registry.

Many Kubernetes teams start with Helm charts and a few YAML files in the Git repo. Once multiple internal services, different environments, and compliance requirements come into play, things get confusing:

- Everyone needs a locally installed toolchain: kubectl, Ansible, auth tools.

With Polycrate, you bring order to this automation without giving up Ansible or known Kubernetes patterns:

- Every app becomes a block with defined actions (install, uninstall, status). No “playbook sprawl”, just a clear interface.
- Ansible, kubectl, the kubernetes.core collection – everything runs in the Polycrate container. No locally installed tools, no version hell.

Creating your own block is always worthwhile when:

- your team should run polycrate run myapp install instead of “please read the README and run these five kubectl commands”.

In this post, we’ll build exactly such a block – complete, from structure to registry push.
Our example:
Company: ACME Corp.
Workspace: acme-corp-automation
App: myapp, an internal web service (HTTP on port 8080)
Cluster: Already existing, kubeconfig is available locally
Domain: myapp.acme-corp.com
Goal: A Polycrate block myapp-k8s that we can use with
polycrate run myapp install for different clusters/environments.
Important: We use the Ansible module kubernetes.core.k8s, which talks directly to the Kubernetes API. The playbook runs inside the Polycrate container with hosts: localhost and connection: local. Nothing is installed in the container itself; changes are made only via the API in the cluster.
Details on Kubernetes integration can also be found in the official documentation:
https://docs.ayedo.de/polycrate/kubernetes/
First, we define our workspace and integrate the block there. The workspace.poly is located in the root directory of your Git repo:
# workspace.poly
name: acme-corp-automation
organization: acme
blocks:
  - name: myapp
    from: registry.acme.corp/blocks/myapp-k8s:0.1.0
    config:
      image: "registry.acme-corp.com/myapp"
      image_tag: "1.0.0"
      replicas: 2
      namespace: "apps"
      domain: "myapp.acme-corp.com"

A few notes:

- from: uses registry-style block notation (<registry>/<path>:<version>). registry.acme.corp is a fictitious example; in practice you pull the block with polycrate blocks pull … or from your OCI registry.
- Under config:, we specify app-specific values. They end up as block.config.* in the block and are our single source of truth.
- The same block can be reused with different config values per environment (dev, staging, prod).

We place the kubeconfig – Polycrate-compliant – in the workspace under artifacts/secrets/kubeconfig.yml. A workspace repo usually holds only configuration and secrets – not a checked-in blocks/myapp-k8s/ when the block comes from a registry:
acme-corp-automation/
  workspace.poly
  artifacts/
    secrets/
      kubeconfig.yml

Polycrate sets KUBECONFIG in the action container to the workspace kubeconfig; kubectl and Ansible (kubernetes.core) use it without an extra path variable. You can encrypt the workspace (including the kubeconfig) with the built-in encryption feature and age to cleanly cover compliance requirements (see Workspace Encryption) – without any additional tool like Vault.
This is what the block looks like – when developing locally under blocks/myapp-k8s/ or in the workspace cache after pulling from a registry:
blocks/myapp-k8s/
  block.poly
  install.yml
  templates/
    deployment.yml.j2
    service.yml.j2
    ingress.yml.j2

- block.poly describes the block, its configuration, and actions.
- install.yml is the central Ansible playbook. We use it for install, uninstall, and status.
- templates/ holds the Jinja2 templates for the Kubernetes manifests.

Here is the complete block.poly:
# blocks/myapp-k8s/block.poly
name: myapp-k8s
version: 0.1.0
kind: generic
config:
  image: "registry.acme-corp.com/myapp"
  image_tag: "1.0.0"
  replicas: 2
  namespace: "apps"
  domain: "myapp.acme-corp.com"
actions:
  - name: install
    playbook: install.yml
  - name: uninstall
    playbook: install.yml
  - name: status
    playbook: install.yml

Important notes:
- config: defines default values. They are overridden by workspace.blocks[].config; in the templates and playbook we always access them via block.config.*.
- The actions install, uninstall, and status form a standard interface that suits any Kubernetes app:
  - install: creates/updates all resources.
  - uninstall: removes all resources.
  - status: shows the current state in the cluster.
- All three actions point to the same playbook, install.yml. In the playbook, we later branch on the Polycrate variable action.name to decide which path to take.

This interface is also valuable for operations teams: no matter which app, the interface remains the same.
Now come the Kubernetes manifests. They are defined as Jinja2 templates and directly access block.config.*. Thus, block.config is our single source of truth.
# blocks/myapp-k8s/templates/deployment.yml.j2
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: {{ block.config.namespace }}
  labels:
    app: myapp
spec:
  replicas: {{ block.config.replicas }}
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: "{{ block.config.image }}:{{ block.config.image_tag }}"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 20

# blocks/myapp-k8s/templates/service.yml.j2
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: {{ block.config.namespace }}
  labels:
    app: myapp
spec:
  type: ClusterIP
  selector:
    app: myapp
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP

# blocks/myapp-k8s/templates/ingress.yml.j2
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: {{ block.config.namespace }}
  labels:
    app: myapp
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
  ingressClassName: nginx
  rules:
    - host: {{ block.config.domain }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
  tls:
    - hosts:
        - {{ block.config.domain }}
      secretName: myapp-tls

You see: all variable values (namespace, replicas, domain, image + tag) come directly from block.config. Upgrades will be simple later because that is the only place we need to adjust.
install.yml with kubernetes.core.k8s

Now we build the heart: a playbook that executes install, uninstall, or status depending on the action.
Important: The playbook runs inside the Polycrate container on localhost and communicates with the Kubernetes API server via the kubernetes.core collection. Polycrate sets KUBECONFIG; the modules do not need an explicit kubeconfig: parameter on the task.
# blocks/myapp-k8s/install.yml
- name: Manage myapp Kubernetes resources
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    manifest_dir: "/tmp/myapp-k8s"
  tasks:
    - name: Ensure manifest directory exists (container-local)
      ansible.builtin.file:
        path: "{{ manifest_dir }}"
        state: directory
        mode: "0755"
      when: action.name in ['install', 'uninstall', 'status']

    # INSTALL: Ensure namespace + apply manifests
    - name: Ensure namespace exists
      kubernetes.core.k8s:
        api_version: v1
        kind: Namespace
        name: "{{ block.config.namespace }}"
        state: present
      when: action.name == 'install'

    - name: Render Deployment manifest
      ansible.builtin.template:
        src: "templates/deployment.yml.j2"
        dest: "{{ manifest_dir }}/deployment.yml"
        mode: "0644"
      when: action.name == 'install'

    - name: Render Service manifest
      ansible.builtin.template:
        src: "templates/service.yml.j2"
        dest: "{{ manifest_dir }}/service.yml"
        mode: "0644"
      when: action.name == 'install'

    - name: Render Ingress manifest
      ansible.builtin.template:
        src: "templates/ingress.yml.j2"
        dest: "{{ manifest_dir }}/ingress.yml"
        mode: "0644"
      when: action.name == 'install'

    - name: Apply Deployment
      kubernetes.core.k8s:
        state: present
        src: "{{ manifest_dir }}/deployment.yml"
        wait: true
        wait_timeout: 300
      when: action.name == 'install'

    - name: Apply Service
      kubernetes.core.k8s:
        state: present
        src: "{{ manifest_dir }}/service.yml"
      when: action.name == 'install'

    - name: Apply Ingress
      kubernetes.core.k8s:
        state: present
        src: "{{ manifest_dir }}/ingress.yml"
      when: action.name == 'install'

    # UNINSTALL: Remove resources
    - name: Delete Ingress
      kubernetes.core.k8s:
        state: absent
        api_version: networking.k8s.io/v1
        kind: Ingress
        name: "myapp"
        namespace: "{{ block.config.namespace }}"
      when: action.name == 'uninstall'

    - name: Delete Service
      kubernetes.core.k8s:
        state: absent
        api_version: v1
        kind: Service
        name: "myapp"
        namespace: "{{ block.config.namespace }}"
      when: action.name == 'uninstall'

    - name: Delete Deployment
      kubernetes.core.k8s:
        state: absent
        api_version: apps/v1
        kind: Deployment
        name: "myapp"
        namespace: "{{ block.config.namespace }}"
      when: action.name == 'uninstall'

    # STATUS: Query current state
    - name: Get Deployment status
      kubernetes.core.k8s_info:
        api_version: apps/v1
        kind: Deployment
        namespace: "{{ block.config.namespace }}"
        name: "myapp"
      register: deployment_info
      when: action.name == 'status'

    - name: Show Deployment status
      ansible.builtin.debug:
        var: deployment_info.resources[0].status
      when:
        - action.name == 'status'
        - deployment_info.resources | length > 0

A few key points:
- Polycrate sets KUBECONFIG in the container; kubernetes.core uses it without a kubeconfig: parameter on the task.
- manifest_dir: a temporary path under /tmp/… is typical for rendered YAML (it goes away with the container). If rendered manifests should persist in the workspace (e.g. for review or GitOps), use something like {{ block.artifacts.path }}/… instead – see Artifacts.
- On install, the templates are rendered into manifest_dir; kubernetes.core.k8s applies the files and waits for a ready Deployment.
- On uninstall, resources are removed explicitly with state: absent.
- On status, we use kubernetes.core.k8s_info to read the current state.

Polycrate provides the action.name variable automatically (see Actions), so you can branch cleanly in one playbook.
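If you do want rendered manifests to persist, a render task can target the block's artifacts directory instead of /tmp. A minimal sketch – block.artifacts.path is provided by Polycrate, while the exact file layout beneath it is our own choice:

```yaml
# Sketch: persist the rendered manifest in the workspace instead of /tmp.
# The dest layout under block.artifacts.path is illustrative, not prescribed.
- name: Render Deployment manifest into workspace artifacts
  ansible.builtin.template:
    src: "templates/deployment.yml.j2"
    dest: "{{ block.artifacts.path }}/deployment.yml"
    mode: "0644"
  when: action.name == 'install'
```

The rendered file then survives the container run and can be reviewed or committed for GitOps workflows.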
With the structure above, you can test the block directly:
# From workspace root
cd /path/to/acme-corp-automation
# Install: create/update resources in the cluster
polycrate run myapp install

Polycrate starts a container with the full toolchain (Ansible, kubernetes.core, Python, optionally kubectl), mounts the workspace, and runs install.yml in the context of the myapp block.
More actions:
# Query status
polycrate run myapp status
# Remove resources again
polycrate run myapp uninstall

Compared to plain Ansible, you do not need any local setup – no ansible.cfg, no locally installed tools. Everything is encapsulated in the block and runs reproducibly in the container.
Without Polycrate, you would typically:

- build a repo structure by hand (playbooks/, roles/, group_vars/),
- install the kubernetes.core collection locally,
- manage the kubeconfig yourself (Polycrate sets KUBECONFIG in the container),
- remember the right invocation: was it ansible-playbook install.yml -e env=prod?

Sharing playbooks usually means: clone the Git repo, set up the local environment, read the README, type commands by hand.
With Polycrate:
- the whole interface is polycrate run myapp install|uninstall|status.

Ansible stays the same powerful tool – Polycrate adds structure, reproducibility, and a clear packaging model.
Once you are happy with the result, you can push the block to a registry. That makes it:
- versionable (:0.1.0, :0.2.0, …),
- shareable with other teams and workspaces.

Pushing uses polycrate blocks push, not registry push: pass the full registry path of the block (same shape as from:), e.g. cargo.ayedo.cloud/acme/myapp-k8s – not the workspace instance name myapp. The OCI tag comes from version in block.poly (no version suffix on the push command). See CLI reference – polycrate blocks push.
Example: push to an OCI registry (e.g. cargo.ayedo.cloud):
# From workspace root where blocks/myapp-k8s lives (or --workspace <path>)
polycrate blocks push cargo.ayedo.cloud/acme/myapp-k8s
# Short form (alias):
# polycrate push cargo.ayedo.cloud/acme/myapp-k8s

In another workspace you can reference the block like this:
# workspace.poly in another project
name: another-workspace
organization: acme
blocks:
  - name: myapp-prod
    from: cargo.ayedo.cloud/acme/myapp-k8s:0.1.0
    config:
      image: "registry.acme-corp.com/myapp"
      image_tag: "1.0.0"
      replicas: 3
      namespace: "apps-prod"
      domain: "myapp.acme-corp.com"

Always pin versions explicitly (:0.1.0), never use :latest. That is a core Polycrate best practice (see Best Practices).
The most common day-to-day task: roll out a new app version.
With our setup the workflow stays simple:
Adjust the image tag in the workspace
# workspace.poly (excerpt)
blocks:
  - name: myapp
    from: registry.acme.corp/blocks/myapp-k8s:0.1.0
    config:
      image: "registry.acme-corp.com/myapp"
      image_tag: "1.1.0" # <-- new tag
      replicas: 2
      namespace: "apps"
      domain: "myapp.acme-corp.com"

Run install again
polycrate run myapp install

Because Ansible and kubernetes.core.k8s are idempotent, the Deployment is updated with the new image tag. You do not need extra flags or custom “upgrade” commands – idempotency and the single source of truth do the rest.
Optionally check status
polycrate run myapp status

That way you can see whether the Deployment behaves as expected.
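Because the upgrade is a one-line change in workspace.poly, it is also easy to script, e.g. in a CI pipeline. A minimal sketch using sed on an illustrative excerpt (file path and tag values are assumptions, not part of the block):

```shell
# Sketch: bump the image tag non-interactively before re-running install.
# /tmp/ws-excerpt.poly stands in for the real workspace.poly here.
printf 'image_tag: "1.0.0"\n' > /tmp/ws-excerpt.poly
sed -i 's/image_tag: "1.0.0"/image_tag: "1.1.0"/' /tmp/ws-excerpt.poly
cat /tmp/ws-excerpt.poly
# prints: image_tag: "1.1.0"
# then: polycrate run myapp install
```

In a real pipeline you would of course commit the change to Git so the workspace stays the single source of truth.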
This scales well:
- New app version or setting? Adjust the config values.
- Run install again.

Polycrate supports several technologies, but Ansible is a strong choice for Kubernetes workloads:

- mature Kubernetes support via the kubernetes.core modules.

If you already rely on Ansible, moving to Kubernetes blocks with Polycrate is a small step – you mainly add Kubernetes-specific tasks.
Polycrate includes workspace encryption:
- Files under artifacts/secrets/ – such as our kubeconfig.yml – can be encrypted with age.

See Workspace Encryption. You do not need an extra vault product – a real advantage for teams that want to move fast yet stay compliant.
Yes – that is what workspaces and blocks are for:
For multiple clusters, use multiple workspaces (or multiple block instances in one workspace) with different kubeconfigs and config values. The block code stays unchanged.
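For several environments in one workspace, the same block can simply be instantiated more than once. A sketch following the workspace.poly pattern from above – the instance names and config values are illustrative:

```yaml
# workspace.poly (sketch: two instances of the same block)
name: acme-corp-automation
organization: acme
blocks:
  - name: myapp-staging
    from: registry.acme.corp/blocks/myapp-k8s:0.1.0
    config:
      image_tag: "1.1.0"
      replicas: 1
      namespace: "apps-staging"
      domain: "myapp-staging.acme-corp.com"
  - name: myapp-prod
    from: registry.acme.corp/blocks/myapp-k8s:0.1.0
    config:
      image_tag: "1.0.0"
      replicas: 3
      namespace: "apps-prod"
      domain: "myapp.acme-corp.com"
```

Each instance is addressed by its name: polycrate run myapp-staging install and polycrate run myapp-prod install.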
More questions? See our FAQ.
With the myapp-k8s block you have a complete, reusable building block:
- block.config is the single source of truth – image, replicas, namespace, and domain are defined centrally.
- install, uninstall, and status give you a stable interface that operations and compliance can follow.
- Everything runs in the container, with Ansible, kubernetes.core, and dependencies included. Local setups get much simpler.

In many organizations this first block is the start of a broader automation strategy:

- more apps get their own blocks with the same interface (install/uninstall/status).

At ayedo we guide teams on this path – from the first Kubernetes service to full platform automation, and in our workshops we show hands-on how to do it.
If you want to structure your own Kubernetes apps as Polycrate blocks and benefit from proven practices, our Kubernetes block workshop is a good next step:
Kubernetes Block Workshop