Your First Productive Polycrate Workspace: A Checklist for Getting Started
TL;DR A well-named, clearly structured Polycrate workspace is half the battle: a consistent name …
This series shows, step by step, how Ansible with Polycrate becomes a structured, shareable, compliance-ready automation platform – from the basics to enterprise scenarios.
Use one workspace per environment (e.g. web-dev, web-test, web-prod) and list only the hosts for that environment in each inventory.yml. Shared automation lives in blocks (block sharing), not in one giant inventory.

inventory.yml sits in the workspace root and is the single source of truth for all blocks and actions in that workspace. A lean Ansible inventory (e.g. all.vars, clear groups) structures hosts for playbooks within that environment. Polycrate itself has no first-class parameters for Ansible tags or for selecting inventory groups; polycrate run … always applies to the entire workspace.

As long as you're managing only one or two Linux servers, Ansible often feels like a better SSH: one playbook, one host, done.
Problems begin when:

the fleet grows beyond a handful of hosts,
several teams work against the same servers, or
dev, test, and prod have to be kept cleanly apart.
With plain Ansible, it’s tempting to maintain a separate inventory file for each team or use case. This works, but:
inventories drift apart over time: the same host ends up in several files with conflicting variables (e.g. ansible_user set differently).

Polycrate consciously reverses this concept: per workspace, one inventory, a shared understanding for all blocks and teams that use that workspace.
In Polycrate, a workspace is your logical boundary—often an environment (dev, test, prod) or a clearly scoped infrastructure line. It bundles workspace.poly, inventory.yml, secrets, and referenced blocks into one consistent unit.
Classic Ansible pushes you toward one large codebase (roles, groups, tags) and separating environments only via inventory groups or host patterns—so the shared roles tree stays maintainable and nothing is duplicated. That makes sense when code is the expensive part.
With Polycrate, shared functionality belongs in blocks (registry, polycrate blocks push / from:). You segment by workspace: e.g. three workspaces web-dev, web-test, web-prod, each inventory.yml listing only hosts for that stage—instead of one workspace web where you tell dev/test/prod apart purely with Ansible groups and conventions. The latter mirrors old Ansible thinking; the former uses Polycrate as intended: environment and ownership first, then a lean, honest inventory.
In short: workspaces for coarse operational segmentation; blocks for reusable automation; keep inventory.yml with lean groups for readable playbooks within a workspace—not tag-driven “control logic” as a substitute for environment boundaries.
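To make the segmentation concrete, a per-environment workspace.poly could look like the following sketch. The workspace names and the registry path follow the examples in this article; this is an illustration, not a verified schema:

```yaml
# web-prod/workspace.poly (sketch): web-dev and web-test look the same,
# each with its own inventory.yml listing only that stage's hosts.
name: web-prod
organization: acme
blocks:
  - name: server-setup
    from: registry.acme-corp.com/acme/infra/server-setup:1.0.0
```

Promoting a change from dev to prod then means referencing the same block tag in the next workspace, not copying inventory groups and conventions around.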
Key points:
workspace.poly defines the workspace and the blocks it uses; inventory.yml in the workspace root holds the hosts. When executing actions, Polycrate automatically sets ANSIBLE_INVENTORY to this single inventory.yml. More details can be found in the official Ansible integration documentation.
Example: the server-setup block

A simple workspace.poly for the ACME Corporation:
name: acme-corp-automation
organization: acme
blocks:
  - name: server-setup
    from: registry.acme-corp.com/acme/infra/server-setup:1.0.0
    config:
      maintenance_window: "Sundays 02:00-04:00 UTC"

In the block package itself, block.poly uses the same logical name; the published OCI reference matches the from: line above:
# registry.acme-corp.com/acme/infra/server-setup:1.0.0
name: server-setup
version: 1.0.0
kind: generic

Important:
name and optionally organization define the workspace.
Under blocks, you reference blocks with a full OCI registry URL (registry/path/block:version). The version is the tag at the end of the from: line, just like with container images; there is no separate version field on the block instance.
Values set under config are merged with the block definition; details on inheritance can be found in the best practices.

The block is published on the fictional corporate registry registry.acme-corp.com (not ayedo's production registry). You do not need to run polycrate blocks pull first: if the block is missing locally, Polycrate detects that when you run polycrate run … and asks whether to install the block from the registry automatically. After that, the unpacked layout lives under blocks/registry.acme-corp.com/acme/infra/server-setup/ (the path under blocks/ mirrors the reference). The apply action (not to be confused with an action named update) runs the corresponding playbook in the container against inventory.yml in the workspace root, using the maintenance_window set here.
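As an illustration, the resulting on-disk layout might look roughly like this (the file names inside the block directory are assumptions):

```
acme-corp-automation/
├── workspace.poly
├── inventory.yml
└── blocks/
    └── registry.acme-corp.com/
        └── acme/
            └── infra/
                └── server-setup/
                    ├── block.poly
                    └── ...            # hypothetical block contents (playbooks etc.)
```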
The core of our multi-server setup is inventory.yml in the workspace root. It is the single source of truth for:
Consider a classic fleet on Ubuntu 22.04:
10 web servers: web01–web10
3 database servers: db01–db03
2 monitoring hosts: mon01, mon02

The inventory might look like this: every host is defined under all.hosts (with defaults from all.vars and per-host overrides); under children, groups only reference those hosts by name, without duplicating variables in the groups. That way Ansible and polycrate ssh share the same canonical host list (see the previous post: SSH sessions & operations):
all:
  vars:
    ansible_user: "ubuntu"
    ansible_ssh_port: 22
    ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
    ansible_python_interpreter: /usr/bin/python3
  hosts:
    web01.acme-corp.com: {}
    web02.acme-corp.com: {}
    web03.acme-corp.com: {}
    web04.acme-corp.com: {}
    web05.acme-corp.com: {}
    web06.acme-corp.com: {}
    web07.acme-corp.com: {}
    web08.acme-corp.com: {}
    web09.acme-corp.com: {}
    web10.acme-corp.com: {}
    db01.acme-corp.com: {}
    db02.acme-corp.com: {}
    db03.acme-corp.com:
      ansible_user: "dbadmin" # host-specific overrides
      ansible_ssh_port: 2222
    mon01.acme-corp.com: {}
    mon02.acme-corp.com: {}
    bastion.acme-corp.com:
      ansible_user: "admin"
      ansible_ssh_port: 2201
  children:
    webservers:
      hosts:
        web01.acme-corp.com:
        web02.acme-corp.com:
        web03.acme-corp.com:
        web04.acme-corp.com:
        web05.acme-corp.com:
        web06.acme-corp.com:
        web07.acme-corp.com:
        web08.acme-corp.com:
        web09.acme-corp.com:
        web10.acme-corp.com:
    databases:
      hosts:
        db01.acme-corp.com:
        db02.acme-corp.com:
        db03.acme-corp.com:
    monitoring:
      hosts:
        mon01.acme-corp.com:
        mon02.acme-corp.com:

Key points for experienced admins:
In all.vars, define the common defaults for all hosts, e.g. SSH user, port, Python interpreter.
Every host lives under all.hosts; host-specific values (e.g. db03, bastion) are set there and override all.vars as usual.
Under children.*.hosts, list hostnames only for group membership, without repeating variable blocks, so polycrate ssh and Ansible use the same canonical host list.
children are your functional groups (webservers, databases, monitoring) that you target in playbooks.

Polycrate takes care of the rest: this inventory is automatically used for every action. How to centralize SSH keys and connections is described in the SSH documentation of Polycrate.
The inventory is neatly grouped; now comes the real benefit: one playbook, multiple plays, each with its own hosts.
Instead of maintaining three different playbooks for web, DB, and monitoring, we define an update.yml that covers all three groups. Separation is only via each play’s hosts: line—without Ansible tags (tag-heavy playbooks are not recommended for Polycrate operations).
- name: Update Web Servers
  hosts: webservers
  become: true
  tasks:
    - name: Update package index
      ansible.builtin.apt:
        update_cache: true
        cache_valid_time: 3600
    - name: Install security patches
      ansible.builtin.apt:
        upgrade: dist
      register: web_update
    - name: Restart web server if packages were updated
      ansible.builtin.service:
        name: nginx
        state: restarted
      when: web_update is changed

- name: Update Database Servers
  hosts: databases
  become: true
  tasks:
    - name: Update package index
      ansible.builtin.apt:
        update_cache: true
        cache_valid_time: 3600
    - name: Update database packages
      ansible.builtin.apt:
        name: "{{ item }}"
        state: latest
      loop:
        - postgresql
        - postgresql-contrib
      register: db_update
    - name: Restart database service if packages were updated
      ansible.builtin.service:
        name: postgresql
        state: restarted
      when: db_update is changed

- name: Check Monitoring Agents
  hosts: monitoring
  become: true
  tasks:
    - name: Ensure monitoring agent is installed
      ansible.builtin.apt:
        name: acme-monitoring-agent
        state: latest
    - name: Ensure monitoring agent is running
      ansible.builtin.service:
        name: acme-monitoring-agent
        state: started
        enabled: true

Important:
No hosts: localhost and no connection: local, because we want to manage real servers, not the ephemeral Polycrate container.
Each play targets exactly one functional group (webservers, databases, monitoring).

The advantage of Polycrate over plain Ansible is twofold:
You don't have to remember which ansible-playbook parameters to set or where your inventory is located; a polycrate run is sufficient.

For the server-setup block instance (unpacked layout under blocks/registry.acme-corp.com/acme/infra/server-setup/ once the block is available locally), the call looks like this:
polycrate run server-setup apply

Polycrate:
resolves the server-setup block instance (defined in workspace.poly),
loads inventory.yml from the workspace root, and
executes the apply action with Ansible in the container.

Polycrate defines no first-class flags for Ansible tags or inventory groups; the control model is the workspace (and block instances with their config). Extra arguments to polycrate run are not passed through to ansible-playbook. Model environment and role logic through workspaces, blocks, and block configuration, not through tag matrices or extremely complex inventories (see best practices).
If you need to target only web servers in one step, the clean approach is usually not a polycrate run that implicitly filters via Ansible tags. The common pattern is to set something like config.hosts on the block and use hosts: "{{ block.config.hosts }}" in the playbook—so each action run can target groups such as webservers or individual hosts while keeping the same inventory.yml in the workspace. The article Linux servers on autopilot illustrates the same idea with default_target_group. A separate playbook with a fixed hosts: webservers or a dedicated block action can still make sense as an addition.
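A sketch of that pattern, assuming a config.hosts key as described above (the key name and the block.config template variable follow this article and the linked post, not a verified schema):

```yaml
# workspace.poly (excerpt): hypothetical target selector on the block
blocks:
  - name: server-setup
    from: registry.acme-corp.com/acme/infra/server-setup:1.0.0
    config:
      hosts: webservers   # or a single host like "web01.acme-corp.com"
```

```yaml
# playbook (excerpt): the play reads its target from the block config
- name: Update targeted hosts
  hosts: "{{ block.config.hosts }}"
  become: true
```

Changing the target then means editing one line of workspace config instead of maintaining parallel playbooks per group.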
With plain Ansible you would reproduce the same run with local ansible-playbook and a consistent toolchain; Polycrate wraps execution in a standardized container action.
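For comparison, a rough plain-Ansible equivalent of that run could look like the following. The playbook path inside the block is an assumption for illustration; Polycrate performs the equivalent steps inside its container:

```shell
# Run from the workspace root; Polycrate exports ANSIBLE_INVENTORY the same way.
ANSIBLE_INVENTORY="$(pwd)/inventory.yml" \
  ansible-playbook blocks/registry.acme-corp.com/acme/infra/server-setup/update.yml
```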
Many environments today are dynamic: VMs are created and destroyed, auto-scaling groups adjust the number of hosts. Classic Ansible uses “Dynamic Inventory Scripts” or plugins for this.
Polycrate takes a deliberately simple approach here:
Actions always run against the static inventory.yml in the workspace root.
If your fleet changes, something has to update inventory.yml beforehand, e.g. a separate block that queries the cloud API and writes the file.

A minimalist example (pseudocode, concept in focus): a block blocks/inventory-sync could use Python or shell tooling to retrieve the list of web and DB servers from your cloud provider and update inventory.yml.
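To make the concept concrete, here is a minimal, self-contained Python sketch of such an inventory-sync step. fetch_servers() is a stand-in for a real cloud-provider API call, and the rendered structure mirrors the inventory.yml shown earlier; everything here is illustrative, not a Polycrate API:

```python
# Hypothetical sketch for a blocks/inventory-sync block.
from pathlib import Path


def fetch_servers():
    # In reality: query your cloud provider's API; here, static sample data.
    return {
        "webservers": ["web01.acme-corp.com", "web02.acme-corp.com"],
        "databases": ["db01.acme-corp.com"],
    }


def render_inventory(groups, user="ubuntu"):
    """Render a minimal inventory.yml with the all.hosts / children layout."""
    lines = ["all:", "  vars:", f'    ansible_user: "{user}"', "  hosts:"]
    for hosts in groups.values():
        for host in hosts:
            lines.append(f"    {host}: {{}}")
    lines.append("  children:")
    for group, hosts in groups.items():
        lines.append(f"    {group}:")
        lines.append("      hosts:")
        for host in hosts:
            lines.append(f"        {host}:")
    return "\n".join(lines) + "\n"


if __name__ == "__main__":
    # Overwrite the workspace-root inventory before the next polycrate run.
    Path("inventory.yml").write_text(render_inventory(fetch_servers()))
```

The key property: the script only ever rewrites the file; every subsequent action still reads the same static inventory.yml.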
The crucial point: however inventory.yml gets updated, the file in the workspace root remains the single source of truth that every action reads.
If you want to delve deeper into dynamic inventories, it's worth checking out the Ansible integration documentation.