Enterprise Automation: Building, Versioning, and Sharing Blocks Within Teams
Fabian Peter · 12-minute read

Read the whole series (24 articles)

This series shows, step by step, how Ansible combined with Polycrate becomes a structured, shareable, compliance-ready automation platform, from the basics all the way to enterprise scenarios.

  1. Install Polycrate and Build Your First Ansible Block in 15 Minutes
  2. Blocks, Actions, and Workspaces: The Modular Principle of Polycrate
  3. Linux Servers on Autopilot: System Management with Polycrate and Ansible
  4. Nginx and Let's Encrypt as a Reusable Polycrate Block
  5. Managing Docker Stacks on Linux Servers with Polycrate
  6. Many Servers, One Truth: Multi-Server Management with Polycrate Inventories
  7. Windows Automation with Polycrate: Ansible and WinRM Without Pain
  8. Windows Software Deployment without SCCM: Chocolatey and Ansible
  9. Hybrid Automation: Windows and Linux in the Same Polycrate Workspace
  10. Deploy Kubernetes Apps from the PolyHub: From Idea to Deployment in Minutes
  11. Creating Your Own Kubernetes App as a Polycrate Block: A Step-by-Step Guide
  12. Multi-Cluster Kubernetes with Polycrate: Why One Cluster, One Workspace
  13. SSH Sessions and kubectl Debugging: Polycrate as an Operations Tool
  14. Helm Charts as a Polycrate Block: More Control Over Chart Deployments
  15. Policy as Code: Automating Compliance Requirements with Polycrate
  16. Workspace Encryption: Managing Secrets in GDPR Compliance – Without External Tooling
  17. Managing Raspberry Pi and Edge Nodes with Polycrate in IoT and Edge Computing
  18. Enterprise Automation: Building, Versioning, and Sharing Blocks Within Teams
  19. Polycrate MCP: Connecting AI Assistants with Live Infrastructure Context
  20. Polycrate vs. plain Ansible: What You Gain – and Why It's Worth It
  21. The Polycrate Ecosystem: PolyHub, API, MCP, and the Future of Automation
  22. Your First Productive Polycrate Workspace: A Checklist for Getting Started
  23. Auditable Operations: SSH Sessions and CLI Activities with Polycrate API
  24. Polycrate API for Teams: Centralized Monitoring and Remote Triggering

TL;DR

  • In many enterprise organizations, each team builds its own Ansible environment—without clear versioning, without central reuse, without governance. This does not scale organizationally.
  • Polycrate turns Ansible playbooks into reusable, versioned blocks that can be published in an internal OCI registry (e.g., Harbor or registry.acme-corp.com) and used by any team.
  • The block model creates guardrails: clear interfaces, semantic versioning, CHANGELOG.poly, and simple actions instead of playbook sprawl—including compliance mechanisms to ensure only approved block versions are used in production.
  • With Polycrate’s containerized execution, local Ansible setups and Python or dependency chaos are eliminated. Every developer workstation uses the same toolchain; blocks can be safely shared and versioned in the registry.
  • ayedo supports you with Platform Engineering, Polycrate expertise, and an Enterprise Platform Workshop to build a company-wide automation ecosystem that empowers teams and meets compliance requirements.

Why Enterprise Sharing with Plain Ansible Rarely Works

If you look at a larger company today, you often see the same pattern:

  • The Linux team has its own Ansible repo.
  • The Windows team has another.
  • The network team uses something else—or no Ansible at all.
  • Departments have ad-hoc playbooks built by individual engineers.

All of them solve similar problems, but with different directory structures, roles, modules, and dependencies. The knowledge lives in people's heads and scattered Git repos, not in a consistent automation product.

Typical symptoms:

  • No unified standard: Directory structure, variable names, inventories, roles—everything is different.
  • No real versioning at the building block level: There may be Git tags for the entire repo, but no clear “VPN block 1.2.0 vs. 2.0.0”.
  • Dependency issues: Different Python versions, modules, Ansible versions on each admin laptop.
  • Poor sharing: “Can you send me your playbook?” is not a sustainable distribution mechanism.

With plain Ansible, for enterprise sharing, you would need to:

  • enforce a central Git monorepo,
  • define conventions for folder structure, roles, and reuse,
  • establish disciplined versioning at the role or collection level,
  • provide all teams with a consistent toolchain (Python, Ansible, modules).

Ansible is an excellent tool—but it does not inherently provide this enterprise governance and sharing model. This is where Polycrate comes in.


Polycrate Blocks as Building Blocks of Your Platform

Polycrate packages Ansible playbooks, configuration, and toolchain into clearly defined blocks. A block is:

  • a clearly defined function (e.g., “create VPN”, “manage AD users”, “patch Linux”),
  • with a stable interface (config parameters in the block),
  • with versioned implementation (semantic versioning like 1.0.0, 1.1.0, 2.0.0),
  • distributed via an OCI registry (Harbor, registry.acme-corp.com, or PolyHub).

For enterprise architects and platform teams, this creates a model they know from the container world: one team builds images (here: blocks), other teams consume them.

Platform Team as Block Producer, Domain Teams as Consumers

In a mature setup, it looks like this:

  • A platform or networking team develops and maintains a “VPN block”.
  • This block is published in the company-wide OCI registry (registry.acme-corp.com).
  • Application teams simply integrate the block via from: in their workspace.poly and execute it using polycrate run.
  • Semantic versioning and a maintained CHANGELOG.poly clearly communicate what changes between versions—including breaking changes.

Polycrate addresses several core issues:

  1. Dependency problem eliminated:
    Ansible runs exclusively in the Polycrate container. Python version, Ansible version, ansible-galaxy collections—everything is part of the container toolchain and thus identical for all teams. No “doesn’t work for me because of Python 3.11”.

  2. Sharable automation via registry:
    Blocks are versioned and shared via an OCI registry—like container images. What the platform team builds, the app team can use in seconds. More on this in the Registry Documentation and the Best Practices.

  3. Guardrails instead of playbook sprawl:
    The block model gives your Ansible automation structure. Instead of “calling any playbook from the repo”, there are clearly defined actions (create, update, delete) with documented parameters.

Enterprise: Audit trail via the Polycrate API (action runs)

For governance and traceability, the Polycrate API matters: with the CLI connected to the API (api.enabled and API key in ~/.polycrate/polycrate.yml), executions of polycrate run … can be submitted to the API as action runs (configurable via submit_action_runs or --api-submit-action-runs, typically on by default). That yields a central work trail per workspace: who ran which action on which block, when, with which exit code and context—instead of only scattered laptop logs. The API/web UI exposes action runs and history—useful for auditability alongside Git and CHANGELOG.poly. See Polycrate API and Audit & Compliance.


Practical Example: VPN Block of the Networking Team

Suppose your networking team centrally operates a firewall/VPN appliance with an HTTP API. The team wants to provide a block that lets departments create site-to-site VPNs on their own, in a controlled and documented way.

The Block: vpn-site2site

After publishing, the block lives under a path such as blocks/registry.acme-corp.com/acme/networking/vpn-site2site/ (after polycrate pull). The block.poly might look like this:

name: registry.acme-corp.com/acme/networking/vpn-site2site
version: 1.0.0
kind: generic

config:
  vpn_name: ""
  peer_cidr: ""
  appliance_api_url: ""
  appliance_api_token: ""

actions:
  - name: create
    description: "Creates a site-to-site VPN on the central appliance"
    playbook: create.yml

  - name: delete
    description: "Removes a site-to-site VPN from the central appliance"
    playbook: delete.yml

Important from an enterprise perspective:

  • The interface is clear: anyone using the block must provide vpn_name, peer_cidr, appliance_api_url, and appliance_api_token.
  • Everything else—API details, error handling, logging—remains the responsibility of the networking team.

The Ansible Playbook in the Block

The create.yml playbook calls the appliance’s API from within the Polycrate container. This is a valid case for hosts: localhost, as the action is deliberately executed in the container (HTTP API call, no SSH login to target hosts):

- name: Create site-to-site VPN
  hosts: localhost
  gather_facts: false

  vars:
    vpn_name: "{{ block.config.vpn_name }}"
    peer_cidr: "{{ block.config.peer_cidr }}"
    api_url: "{{ block.config.appliance_api_url }}"
    api_token: "{{ block.config.appliance_api_token }}"

  tasks:
    - name: Create VPN via appliance API
      ansible.builtin.uri:
        url: "{{ api_url }}/vpn"
        method: POST
        headers:
          Authorization: "Bearer {{ api_token }}"
          Content-Type: "application/json"
        body_format: json
        body:
          name: "{{ vpn_name }}"
          peer_cidr: "{{ peer_cidr }}"
      register: vpn_result

    - name: Print VPN ID
      ansible.builtin.debug:
        msg: "Created VPN {{ vpn_name }} with id {{ vpn_result.json.id }}"

The variables come from block.config.*—thus the block’s interface remains stable, even if something changes internally (e.g., additional API parameters).

CHANGELOG.poly as a Communication Medium

To help other teams understand what changes between versions (and whether there are breaking changes), the networking team maintains a CHANGELOG.poly in the block directory. Format and fields follow Polycrate conventions: a list of entries with version, date, type (feat, fix, chore, breaking), message, and description (multiline); optional author (e.g. team or person). Breaking changes are indicated via type: breaking and the description—not via a bespoke releases: / breaking_changes: schema.

# CHANGELOG.poly (simplified example)
- version: "1.0.0"
  date: "2025-03-10"
  type: feat
  author: "Networking Team <networking@acme-corp.com>"
  message: "Initial VPN site-to-site module"
  description: |
    - Create and delete site-to-site VPNs with name and peer_cidr
    - API integration with central firewall appliance, auth via bearer token

- version: "1.1.0"
  date: "2025-04-22"
  type: feat
  message: "Optional local_cidr, improved API error logging"
  description: |
    - Optional field local_cidr (compatible with 1.0.0)
    - Improved error logging for API errors

- version: "2.0.0"
  date: "2025-06-01"
  type: breaking
  message: "Rename peer_cidr, multiple peer networks"
  description: |
    - **Breaking:** Field peer_cidr renamed to peer_network – migrate workspace configs
    - Support for multiple peer networks in the VPN

Full conventions: Best practices – changelog and versioning / change types.

With semantic versioning, it is clear:

  • 1.1.0 is compatible with 1.0.0 (minor upgrade, new option).
  • 2.0.0 contains breaking changes—consumers must adjust their configuration.
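The semver rules above can be expressed as a small helper that a consuming team (or a CI job) could use to classify a proposed version bump before accepting it. This is an illustrative sketch, not part of the Polycrate CLI; it assumes plain MAJOR.MINOR.PATCH strings as used in block.poly:

```python
# Sketch: classify a Polycrate block version bump under semantic versioning.
# Illustrative helper only; not a Polycrate feature.

def parse_semver(version: str) -> tuple[int, int, int]:
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def classify_upgrade(current: str, target: str) -> str:
    """Return 'breaking', 'feature', 'patch', or 'none' for a proposed upgrade."""
    cur, tgt = parse_semver(current), parse_semver(target)
    if tgt <= cur:
        return "none"
    if tgt[0] > cur[0]:
        return "breaking"   # e.g. 1.1.0 -> 2.0.0: review CHANGELOG.poly first
    if tgt[1] > cur[1]:
        return "feature"    # e.g. 1.0.0 -> 1.1.0: compatible minor upgrade
    return "patch"

print(classify_upgrade("1.0.0", "1.1.0"))  # feature
print(classify_upgrade("1.1.0", "2.0.0"))  # breaking
```

A "breaking" result is the signal to read the CHANGELOG.poly entry and plan a config migration before bumping the from: reference.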

Usage in the App Team Workspace

An application team now wants to set up a VPN to the hosting provider for its CRM application. It uses the block provided by the networking team.

workspace.poly of the App Team

The workspace.poly lives in the workspace root. Important: There is no Jinja or env substitution in workspace.poly (or block.poly)—lines like {{ workspace.secrets[...] }} are not evaluated. Sensitive block values belong in secrets.poly (merged with workspace.poly). The top-level config map is not a free-form key-value store for application logic; documented fields such as environment or optional image (container image) are described in the configuration documentation.

name: acme-corp-automation
organization: acme

config:
  environment: production

blocks:
  - name: vpn-to-crm
    from: registry.acme-corp.com/acme/networking/vpn-site2site:1.0.0
    config:
      vpn_name: "vpn-crm-prod"
      peer_cidr: "10.50.0.0/16"
      appliance_api_url: "https://firewall-prod.acme-corp.com/api"
      appliance_api_token: ""

The API token is maintained in secrets.poly (versioned as secrets.poly.age and protected with workspace encryption), not in the committed workspace.poly:

# secrets.poly – sensitive overrides for the block instance
blocks:
  - name: vpn-to-crm
    config:
      appliance_api_token: "<bearer-token>"

Some important points:

  • from: contains the full registry reference with explicit version :1.0.0—no :latest.
  • The app team consciously decides which version to use. An update to 1.1.0 or 2.0.0 is a change in the workspace and thus traceable.
  • Secrets in secrets.poly and files under artifacts/secrets/ are protected with Polycrate workspace encryption (age), an important compliance aspect under the GDPR. See Workspace encryption.

Inventory: For this example (hosts: localhost, API-only), no inventory.yml is required. If other blocks in the same workspace target SSH hosts, inventory is stored as usual in the workspace root as inventory.yml—see configuration – block instance fields, for example:

# inventory.yml (only when SSH hosts are targeted)
all:
  children:
    app_servers:
      hosts:
        app01.example.com:
          ansible_user: deploy

The app team simply executes the automation:

polycrate run vpn-to-crm create

This starts a container with the predefined toolchain, loads inventory.yml when present, merges workspace.poly and secrets.poly into block.config, and runs the create.yml playbook in the container.
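The merge step can be pictured as a recursive override in which values from secrets.poly win over values from workspace.poly. The sketch below illustrates that general principle with the VPN example's values; it is an assumption about the merge semantics for illustration, not Polycrate's exact implementation:

```python
# Sketch: how per-block config from workspace.poly and secrets.poly could be
# merged into the final block.config. Illustrative only; Polycrate's actual
# merge rules may differ in detail.

def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge override into base; override values win."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Block config as committed in workspace.poly (token intentionally empty)
workspace_block = {
    "vpn_name": "vpn-crm-prod",
    "peer_cidr": "10.50.0.0/16",
    "appliance_api_url": "https://firewall-prod.acme-corp.com/api",
    "appliance_api_token": "",
}
# Sensitive override from secrets.poly
secrets_block = {"appliance_api_token": "<bearer-token>"}

block_config = deep_merge(workspace_block, secrets_block)
print(block_config["appliance_api_token"])  # <bearer-token>
```

The committed workspace.poly stays free of secrets, while the playbook still receives a complete block.config at runtime.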

Comparison: How Would It Look with Plain Ansible?

Without Polycrate, the networking team would need to:

  • Provide a Git repository with roles/playbooks.
  • Document how to call ansible-playbook with which variables.
  • Ensure all teams have the same Python/Ansible/collection versions installed.
  • Organize versioning via Git tags and manual conventions.

The app team would need to:

  • Clone the repo or integrate it via submodule.
  • Create local variable files.
  • Figure out their own way to identify “approved” versions.

With Polycrate, it’s enough to:

  • from: registry.acme-corp.com/acme/networking/vpn-site2site:1.0.0 in the workspace.poly.
  • A polycrate run vpn-to-crm create.

The rest—container toolchain, Ansible version, modules, inventory handling—comes from Polycrate. This significantly reduces friction and error sources.


Block Lifecycle and Governance in the Enterprise Context

For enterprise architects, not only the technology is important but especially the lifecycle and governance.

Build → Test → Tag → Push → Pull

A typical lifecycle in the networking team might look like this:

  1. Build:
    The block is developed locally under blocks/registry.acme-corp.com/acme/networking/vpn-site2site/ (or equivalent after pulling from the registry). Polycrate uses a container in which all necessary tools (Ansible, ansible.builtin.uri, possibly additional collections) are defined.

  2. Test:
    Tests run in a staging environment, e.g., via CI, which executes polycrate run vpn-site2site create with test parameters.

  3. Tag:
    Once a version is stable, the version in block.poly is set to, e.g., 1.1.0 and documented in CHANGELOG.poly.

  4. Push to the Registry:
    The block is uploaded to the internal OCI registry (registry.acme-corp.com). How exactly the push is done is described in the Registry Documentation.

  5. Pull by Other Teams:
    Application or infrastructure teams reference the approved version with from: registry.acme-corp.com/acme/networking/vpn-site2site:1.1.0 in their workspace.poly.

This creates a company-wide ecosystem:

  • The networking team supplies VPN blocks.
  • The Windows team supplies AD/GPO blocks.
  • The Linux team supplies patch-management blocks.
  • The platform team provides generic blocks for Kubernetes, monitoring, or storage—or uses official blocks from PolyHub (hub.polycrate.io/) as a baseline.

Governance: Which Version Is Allowed in Production?

Compliance owners want to ensure only approved block versions are used in production workspaces. Typical mechanisms:

  • Tag whitelisting: Only certain tags (e.g., 1.0.0, 1.1.0) are marked “approved for prod” in the registry UI.

  • Review processes at workspace level: Every change to workspace.poly (especially from: lines) goes through code review. Reviewers immediately see when a block moves from 1.0.0 to 2.0.0—including a look at CHANGELOG.poly.

  • Policy as code: Additional checks in CI/CD pipelines ensure only approved tags are used in production workspaces.
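A policy-as-code check of this kind can be very small. The sketch below scans workspace.poly text for from: references and rejects anything not on an approved list; the allowlist, function names, and regex approach are assumptions for illustration (stdlib-only, so it uses a regex rather than a YAML parser), not a Polycrate feature:

```python
# Sketch: a CI policy check that scans workspace.poly for "from:" block
# references and rejects unapproved tags. Illustrative only; the approved
# list below is an assumed example.
import re

APPROVED = {
    "registry.acme-corp.com/acme/networking/vpn-site2site": {"1.0.0", "1.1.0"},
}

# Matches e.g. "from: registry.acme-corp.com/acme/networking/vpn-site2site:1.0.0"
FROM_RE = re.compile(r"from:\s*([\w./-]+):([\w.-]+)")

def violations(workspace_poly_text: str) -> list[str]:
    problems = []
    for ref, tag in FROM_RE.findall(workspace_poly_text):
        if tag == "latest":
            problems.append(f"{ref}:{tag} uses ':latest', which is not allowed")
        elif tag not in APPROVED.get(ref, set()):
            problems.append(f"{ref}:{tag} is not on the approved production list")
    return problems

sample = """
blocks:
  - name: vpn-to-crm
    from: registry.acme-corp.com/acme/networking/vpn-site2site:2.0.0
"""
for problem in violations(sample):
    print(problem)
```

Wired into the pipeline, a non-empty result fails the build, so a bump from 1.1.0 to 2.0.0 cannot reach production without the tag first being approved.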

Polycrate’s built-in workspace encryption also helps: secrets are stored encrypted in the repo via polycrate workspace encrypt. An external vault is not mandatory—a plus for auditability and simplicity; see the workspace encryption documentation.

Good UX/DX: Actions Instead of CLI Sprawl

For consuming teams, complexity drops:

  • No one needs to know how to invoke ansible-playbook in detail.
  • polycrate run BLOCK ACTION is enough—even for colleagues who are not deep in Ansible.
  • That improves developer experience (DX) and makes automation usable for teams that only occasionally create a VPN or AD account.

The Best Practices help keep block interfaces consistent and naming patterns aligned across the company.


Frequently Asked Questions

Do we need Polycrate if we already have a central Ansible Git repo?

A central Git repo is a good first step, but it does not fully solve three core problems:

  1. Toolchain consistency: Git does not govern which Python/Ansible versions, collections, or CLI tools are installed on developer workstations. Polycrate encapsulates those dependencies in containers—everyone uses the same environment.

  2. Sharable automation as a product: Git roles are not automatically “product-ready” building blocks. Polycrate blocks define a clear interface, actions, and versions—including distribution via an OCI registry.

  3. Governance & compliance: Git alone does not separate approved from experimental building blocks. Through the registry, tags, and CHANGELOG.poly, you can communicate clearly what is production-ready—and what is not.

How do we prevent unapproved block versions from reaching production?

Companies typically combine:

  • Registry governance: Only certain tags are approved for production namespaces.
  • Code review at workspace level: Every change to the from: version in workspace.poly is reviewed by platform or security teams.
  • CI checks: Pipelines validate that only allowed tags are used before deployments run.

Polycrate fits well into existing processes—versions are visible and actions are traceable. Together with Platform Engineering, you get a robust governance layer over your automation.

How does Polycrate fit our existing platform strategy?

Polycrate addresses the “automation building blocks” layer of your platform:

  • The platform team defines which blocks it offers as internal products (e.g., VPN, AD, Kubernetes namespaces, monitoring).
  • Business and application teams consume those blocks in their workspaces.
  • Through the registry and semantic versioning, you get a controlled ecosystem that meshes with CI/CD, GitOps, and central governance.

If you are starting or expanding a platform initiative, we support you with our consulting to integrate Polycrate effectively.

More questions? See our FAQ.


From Theory to Practice

With a block- and registry-centric approach, Ansible automation moves from a hard-to-grasp pile of playbooks to a clear product portfolio: every team knows which blocks exist, which versions are stable, and how to use them. That reduces friction, increases reuse, and makes compliance achievable instead of a drag.

ayedo accompanies organizations on this path:

  • We help platform and architecture teams design a viable block strategy—from the first VPN or AD block to a full ecosystem.
  • Together we define how your internal registry (registry.acme-corp.com or Harbor) serves as the backbone for automation and Platform Engineering.
  • We support governance where semantic versioning, CHANGELOG.poly, workspace encryption, and registry publishing work together.
  • In hands-on formats we work with your networking, Windows, Linux, and application teams to turn today’s playbooks into reusable enterprise blocks.

If you want unified automation, clear building blocks, and reliable compliance rather than leaving them to chance, now is a good time to sharpen your platform strategy.

Overview and registration: Workshops.
