Many Servers, One Truth: Multi-Server Management with Polycrate Inventories
Fabian Peter · 9 min read

Multi-server management with Ansible inventories in Polycrate
Read the whole series (24 articles)

This series shows step by step how Ansible becomes a structured, shareable, and compliance-ready automation platform with Polycrate – from the basics to enterprise scenarios.

  1. Install Polycrate and Build Your First Ansible Block in 15 Minutes
  2. Blocks, Actions, and Workspaces: The Modular Principle of Polycrate
  3. Linux Servers on Autopilot: System Management with Polycrate and Ansible
  4. Nginx and Let's Encrypt as a Reusable Polycrate Block
  5. Managing Docker Stacks on Linux Servers with Polycrate
  6. Many Servers, One Truth: Multi-Server Management with Polycrate Inventories
  7. Windows Automation with Polycrate: Ansible and WinRM Without Pain
  8. Windows Software Deployment without SCCM: Chocolatey and Ansible
  9. Hybrid Automation: Windows and Linux in the Same Polycrate Workspace
  10. Deploy Kubernetes Apps from the PolyHub: From Idea to Deployment in Minutes
  11. Creating Your Own Kubernetes App as a Polycrate Block: A Step-by-Step Guide
  12. Multi-Cluster Kubernetes with Polycrate: Why One Cluster, One Workspace
  13. SSH Sessions and kubectl Debugging: Polycrate as an Operations Tool
  14. Helm Charts as a Polycrate Block: More Control Over Chart Deployments
  15. Policy as Code: Automating Compliance Requirements with Polycrate
  16. Workspace Encryption: Managing Secrets in GDPR Compliance – Without External Tooling
  17. Managing Raspberry Pi and Edge Nodes with Polycrate in IoT and Edge Computing
  18. Enterprise Automation: Building, Versioning, and Sharing Blocks Within Teams
  19. Polycrate MCP: Connecting AI Assistants with Live Infrastructure Context
  20. Polycrate vs. plain Ansible: What You Gain – and Why It's Worth It
  21. The Polycrate Ecosystem: PolyHub, API, MCP, and the Future of Automation
  22. Your First Productive Polycrate Workspace: A Checklist for Getting Started
  23. Auditable Operations: SSH Sessions and CLI Activities with Polycrate API
  24. Polycrate API for Teams: Centralized Monitoring and Remote Triggering

TL;DR

  • Managing a single server with Ansible is quick and easy, but once you add 10, 50, or 200 hosts, the inventory becomes a critical scaling factor. Polycrate enforces a centralized, YAML-based inventory per workspace, preventing sprawl.
  • Think of workspaces as environments: they bundle logically related infrastructure. If you have 100 web servers—10 dev, 10 test, 80 prod—you would ideally create three workspaces (e.g. web-dev, web-test, web-prod) and list only the hosts for that environment in each inventory.yml. Shared automation lives in blocks (block sharing), not in one giant inventory.
  • In Polycrate, inventory.yml sits in the workspace root and is the single source of truth for all blocks and actions in that workspace. A lean Ansible inventory (e.g. all.vars, clear groups) structures hosts for playbooks within that environment—Polycrate itself has no first-class parameters for Ansible tags or for selecting inventory groups; polycrate run … always applies to the entire workspace.
  • Modeling complex conditionals with Ansible tags or an overloaded inventory is not best practice. Express the same intent through workspace and block segmentation and block instance configuration (see best practices).
  • Polycrate always runs Ansible in a container, resolving typical dependency chaos (Python versions, Ansible version, modules). Once defined, workspaces and inventories can be reproducibly used across the entire team.
  • In ayedo’s multi-server workshops, we show admin teams in formats such as Platform Architecture and Platform Operations how to transition existing Ansible automation to Polycrate, consolidate inventories, and perform fleet updates safely and transparently.

From Single Server to Fleet: The Real Scaling Problem

As long as you’re managing only one or two Linux servers, Ansible often feels like a better SSH: one playbook, one host, done.

Problems begin when:

  • Suddenly you have 10 web servers, 3 database servers, 2 monitoring hosts, and a few utility servers.
  • Staging, testing, and production differ only by hostname.
  • Colleagues “quickly” create their own inventory—and after a few months, you no longer know which hosts are managed where.

With plain Ansible, it’s tempting to maintain a separate inventory file for each team or use case. This works, but:

  • Hosts are duplicated across multiple files.
  • Variables conflict (e.g., ansible_user set differently).
  • No one knows which inventory is the “correct” one.

Polycrate deliberately inverts this concept: one inventory per workspace, one shared understanding for all blocks and teams that use that workspace.


One Workspace, One Inventory: Polycrate Convention Instead of Sprawl

In Polycrate, a workspace is your logical boundary—often an environment (dev, test, prod) or a clearly scoped infrastructure line. It bundles workspace.poly, inventory.yml, secrets, and referenced blocks into one consistent unit.

Workspaces instead of “one inventory for everything”

Classic Ansible pushes you toward one large codebase (roles, groups, tags) and separating environments only via inventory groups or host patterns—so the shared roles tree stays maintainable and nothing is duplicated. That makes sense when code is the expensive part.

With Polycrate, shared functionality belongs in blocks (registry, polycrate blocks push / from:). You segment by workspace: e.g. three workspaces web-dev, web-test, web-prod, each inventory.yml listing only hosts for that stage—instead of one workspace web where you tell dev/test/prod apart purely with Ansible groups and conventions. The latter mirrors old Ansible thinking; the former uses Polycrate as intended: environment and ownership first, then a lean, honest inventory.

In short: workspaces for coarse operational segmentation; blocks for reusable automation; keep inventory.yml with lean groups for readable playbooks within a workspace—not tag-driven “control logic” as a substitute for environment boundaries.
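On disk, this segmentation might look as follows. This is a sketch using the web-dev/web-test/web-prod names from above; each workspace carries its own inventory.yml, while shared automation comes from blocks:

```text
web-dev/
├── workspace.poly     # name: web-dev
└── inventory.yml      # the 10 dev hosts only

web-test/
├── workspace.poly     # name: web-test
└── inventory.yml      # the 10 test hosts only

web-prod/
├── workspace.poly     # name: web-prod
└── inventory.yml      # the 80 prod hosts only
```

All three workspaces reference the same published blocks via from:, so the automation code exists exactly once.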

Key points:

  • The workspace configuration is located in workspace.poly.
  • The inventory for all Ansible actions is centrally located in the workspace root as inventory.yml.

When executing actions, Polycrate automatically sets ANSIBLE_INVENTORY to this single inventory.yml. More details can be found in the official Ansible integration documentation.

Minimal Workspace with the server-setup Block

A simple workspace.poly for the ACME Corporation:

name: acme-corp-automation
organization: acme

blocks:
  - name: server-setup
    from: registry.acme-corp.com/acme/infra/server-setup:1.0.0
    config:
      maintenance_window: "Sundays 02:00-04:00 UTC"

In the block package itself, block.poly uses the same logical name; the published OCI reference matches the from: line above:

# registry.acme-corp.com/acme/infra/server-setup:1.0.0
name: server-setup
version: 1.0.0
kind: generic

Important:

  • name and optionally organization define the workspace.
  • Under blocks, you reference blocks with a full OCI registry URL (registry/path/block:version)—the version is the tag at the end of the from line, like container images; there is no separate version field on the block instance.
  • Configurations (config) are merged with the block definition—details on inheritance can be found in the best practices.

The block is published on the fictional corporate registry registry.acme-corp.com (not ayedo’s production registry). You do not need to run polycrate blocks pull first: if the block is missing locally, Polycrate detects that when you run polycrate run … and asks whether to install the block from the registry automatically. After that, the unpacked layout lives under blocks/registry.acme-corp.com/acme/infra/server-setup/ (the path under blocks/ mirrors the reference). The apply action (do not confuse it with an action named update) runs the corresponding playbook in the container against the workspace root inventory.yml, using the maintenance_window set here.
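To make the merge concrete, here is a sketch with hypothetical block defaults: the reboot_allowed key and the default window are invented for illustration, and the direction of precedence shown (workspace over block default) is an assumption; see the best-practices notes on inheritance for the authoritative rules.

```yaml
# block.poly defaults inside the package (hypothetical values):
config:
  maintenance_window: "Saturdays 00:00-06:00 UTC"   # invented default
  reboot_allowed: false                             # invented key

# workspace.poly override (from the example above):
config:
  maintenance_window: "Sundays 02:00-04:00 UTC"

# Effective config for the action, assuming workspace values take precedence:
config:
  maintenance_window: "Sundays 02:00-04:00 UTC"
  reboot_allowed: false
```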


Central Inventory: Clean Use of Groups and all.vars

The core of our multi-server setup is inventory.yml in the workspace root. It is the single source of truth for:

  • Host lists
  • Groups
  • SSH parameters
  • Host and group variables

Example: 10 Web Servers, 3 Database Servers, 2 Monitoring Hosts

Consider a classic fleet on Ubuntu 22.04:

  • Web servers: web01 to web10
  • Databases: db01 to db03
  • Monitoring: mon01, mon02

The inventory might look like this: every host is defined under all.hosts (with defaults from all.vars and per-host overrides), while under children the groups only reference those hosts by name, without duplicating variables. That way Ansible and polycrate ssh share the same canonical host list (see the previous post: SSH sessions & operations):

all:
  vars:
    ansible_user: "ubuntu"
    ansible_ssh_port: 22
    ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
    ansible_python_interpreter: /usr/bin/python3

  hosts:
    web01.acme-corp.com: {}
    web02.acme-corp.com: {}
    web03.acme-corp.com: {}
    web04.acme-corp.com: {}
    web05.acme-corp.com: {}
    web06.acme-corp.com: {}
    web07.acme-corp.com: {}
    web08.acme-corp.com: {}
    web09.acme-corp.com: {}
    web10.acme-corp.com: {}
    db01.acme-corp.com: {}
    db02.acme-corp.com: {}
    db03.acme-corp.com:
      ansible_user: "dbadmin"     # Host-specific overrides
      ansible_ssh_port: 2222
    mon01.acme-corp.com: {}
    mon02.acme-corp.com: {}
    bastion.acme-corp.com:
      ansible_user: "admin"
      ansible_ssh_port: 2201

  children:
    webservers:
      hosts:
        web01.acme-corp.com:
        web02.acme-corp.com:
        web03.acme-corp.com:
        web04.acme-corp.com:
        web05.acme-corp.com:
        web06.acme-corp.com:
        web07.acme-corp.com:
        web08.acme-corp.com:
        web09.acme-corp.com:
        web10.acme-corp.com:

    databases:
      hosts:
        db01.acme-corp.com:
        db02.acme-corp.com:
        db03.acme-corp.com:

    monitoring:
      hosts:
        mon01.acme-corp.com:
        mon02.acme-corp.com:

Key points for experienced admins:

  • Under all.vars, define the common defaults for all hosts—e.g., SSH user, port, Python interpreter.
  • Each host appears once under all.hosts; host-specific values (e.g., db03, bastion) are set there and override all.vars as usual.
  • Under children.*.hosts, list hostnames only for group membership—without repeating variable blocks, so polycrate ssh and Ansible use the same canonical host list.
  • children are your functional groups (webservers, databases, monitoring) that you target in playbooks.

Polycrate takes care of the rest: This inventory is automatically used for every action. How to centralize SSH keys and connections is described in the SSH documentation of Polycrate.
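The "hosts once, groups by name" convention can also be checked mechanically. Here is a minimal sketch in plain Python (stdlib only, no Polycrate involved); the dict mirrors the inventory.yml above instead of parsing the file:

```python
# Sketch: verify that group members exist under all.hosts and carry no
# duplicated variables. The inventory dict mirrors inventory.yml above.

inventory = {
    "all": {
        "hosts": {
            **{f"web{i:02d}.acme-corp.com": {} for i in range(1, 11)},
            **{f"db{i:02d}.acme-corp.com": {} for i in range(1, 4)},
            "mon01.acme-corp.com": {},
            "mon02.acme-corp.com": {},
            "bastion.acme-corp.com": {"ansible_user": "admin"},
        },
        "children": {
            "webservers": {
                "hosts": {f"web{i:02d}.acme-corp.com": None for i in range(1, 11)}
            },
            "databases": {
                "hosts": {f"db{i:02d}.acme-corp.com": None for i in range(1, 4)}
            },
            "monitoring": {
                "hosts": {"mon01.acme-corp.com": None, "mon02.acme-corp.com": None}
            },
        },
    }
}

def check_inventory(inv: dict) -> list[str]:
    """Return a list of convention violations (empty list = clean)."""
    errors = []
    canonical = set(inv["all"]["hosts"])
    for group, body in inv["all"].get("children", {}).items():
        for host, hostvars in body.get("hosts", {}).items():
            if host not in canonical:
                errors.append(f"{group}: {host} missing from all.hosts")
            if hostvars:  # group members must not redefine variables
                errors.append(f"{group}: {host} duplicates variables")
    return errors

print(check_inventory(inventory))  # -> []
```

Wired into CI or a pre-run step, such a check keeps the convention honest as the fleet grows.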


Groups in Practice: One Playbook, Multiple Plays

The inventory is neatly grouped—now comes the real benefit: One playbook, multiple plays, each with its own hosts.

Instead of maintaining three different playbooks for web, DB, and monitoring, we define an update.yml that covers all three groups. Separation is only via each play’s hosts: line—without Ansible tags (tag-heavy playbooks are not recommended for Polycrate operations).

- name: Update Web Servers
  hosts: webservers
  become: true
  tasks:
    - name: Update package index
      ansible.builtin.apt:
        update_cache: yes
        cache_valid_time: 3600

    - name: Install security patches
      ansible.builtin.apt:
        upgrade: dist
      register: web_update

    - name: Restart web server if packages were updated
      ansible.builtin.service:
        name: nginx
        state: restarted
      when: web_update is changed

- name: Update Database Servers
  hosts: databases
  become: true
  tasks:
    - name: Update package index
      ansible.builtin.apt:
        update_cache: yes
        cache_valid_time: 3600

    - name: Update database packages
      ansible.builtin.apt:
        name: "{{ item }}"
        state: latest
      loop:
        - postgresql
        - postgresql-contrib
      register: db_update

    - name: Restart database service if packages were updated
      ansible.builtin.service:
        name: postgresql
        state: restarted
      when: db_update is changed

- name: Check Monitoring Agents
  hosts: monitoring
  become: true
  tasks:
    - name: Ensure monitoring agent is installed
      ansible.builtin.apt:
        name: acme-monitoring-agent
        state: latest

    - name: Monitoring agent is running
      ansible.builtin.service:
        name: acme-monitoring-agent
        state: started
        enabled: true

Important:

  • No hosts: localhost and no connection: local—we want to manage real servers, not the ephemeral Polycrate container.
  • Each play definition targets a group (webservers, databases, monitoring).
  • Thanks to Ansible’s idempotency, it’s safe to run the playbook multiple times—changed packages are detected, unnecessary changes are avoided.

Executing with Polycrate: Actions and Workspace

The advantage of Polycrate over plain Ansible is twofold:

  1. Guardrails and UX: You don’t need to remember which ansible-playbook parameters to set or where your inventory is located. A polycrate run is sufficient.
  2. Consistent Toolchain in the Container: Ansible version, Python, additional tools—all come from the Polycrate container. No local setup chaos, no “works only on my laptop” situations.

For the server-setup block instance (unpacked layout under blocks/registry.acme-corp.com/acme/infra/server-setup/ once the block is available locally), the call looks like this:

polycrate run server-setup apply

Polycrate:

  • Loads the server-setup block instance (defined in workspace.poly).
  • Starts the configured container.
  • Automatically uses inventory.yml from the workspace root.
  • Executes the playbook bound to the apply action with Ansible in the container.

Polycrate defines no first-class flags for Ansible tags or inventory groups—the control model is the workspace (and block instances with their config). Extra arguments to polycrate run are not passed through to ansible-playbook; model environment and role logic through workspaces, blocks, and block configuration, not tag matrices or extremely complex inventories (see best practices).

If you need to target only web servers in one step, the clean approach is usually not a polycrate run that implicitly filters via Ansible tags. The common pattern is to set something like config.hosts on the block and use hosts: "{{ block.config.hosts }}" in the playbook—so each action run can target groups such as webservers or individual hosts while keeping the same inventory.yml in the workspace. The article Linux servers on autopilot illustrates the same idea with default_target_group. A separate playbook with a fixed hosts: webservers or a dedicated block action can still make sense as an addition.
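A sketch of that pattern (the config.hosts key and the block.config.hosts templating follow the description above; the exact variable context exposed to the playbook is an assumption, so check the block you use):

```yaml
# workspace.poly: pass the target group to the block instance
blocks:
  - name: server-setup
    from: registry.acme-corp.com/acme/infra/server-setup:1.0.0
    config:
      hosts: "webservers"   # any group or host pattern from inventory.yml

# In the block's playbook, the play header then reads (sketch):
# - name: Apply server setup
#   hosts: "{{ block.config.hosts }}"
#   become: true
```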

With plain Ansible you would reproduce the same run with local ansible-playbook and a consistent toolchain; Polycrate wraps execution in a standardized container action.


Dynamic Inventories: Translating Cloud Fleets into a Static Inventory

Many environments today are dynamic: VMs are created and destroyed, auto-scaling groups adjust the number of hosts. Classic Ansible uses “Dynamic Inventory Scripts” or plugins for this.

Polycrate takes a deliberately simple approach here:

  • At runtime of actions, Ansible always works with a YAML inventory in the workspace root: inventory.yml.
  • However, you can dynamically generate this inventory.yml beforehand—e.g., through a separate block that queries the cloud API and writes the file.

A minimalist example (pseudocode, concept in focus): A block blocks/inventory-sync could use Python or shell tooling to retrieve the list of web and DB servers from your cloud provider and update inventory.yml.
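A minimal sketch of that idea in plain Python (stdlib only; fetch_cloud_hosts() is a hypothetical stub standing in for a real cloud API call):

```python
# Sketch of an inventory-sync step: fetch hosts from a (stubbed) cloud
# API and render an inventory.yml in the layout used above — hosts once
# under all.hosts, groups referencing them by name only.

def fetch_cloud_hosts():
    """Stub: in reality, query your cloud provider's API here."""
    return [
        ("web01.acme-corp.com", "webservers"),
        ("web02.acme-corp.com", "webservers"),
        ("db01.acme-corp.com", "databases"),
    ]

def render_inventory(hosts):
    """Render a minimal inventory.yml as plain text."""
    lines = ["all:", "  hosts:"]
    for name, _ in sorted(hosts):
        lines.append(f"    {name}: {{}}")
    lines.append("  children:")
    groups = {}
    for name, group in hosts:
        groups.setdefault(group, []).append(name)
    for group in sorted(groups):
        lines.append(f"    {group}:")
        lines.append("      hosts:")
        for name in sorted(groups[group]):
            lines.append(f"        {name}:")
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    # An inventory-sync block would write this to the workspace root:
    with open("inventory.yml", "w") as f:
        f.write(render_inventory(fetch_cloud_hosts()))
```

Every subsequent polycrate run then works against the freshly generated file, keeping generation and use cleanly separated.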

The crucial point:

  • Polycrate clearly separates the generation of the inventory (your logic, e.g., via API) from the use of the inventory (all blocks and actions in the workspace).
  • This creates a central place of truth for teams—even if the hosts are dynamic in the background.

If you want to delve deeper into dynamic inventories, it's worth checking out the Ansible integration documentation.
