Managing Docker Stacks on Linux Servers with Polycrate
Fabian Peter · 13 minute read


Automate Docker Compose stacks on Linux servers with Polycrate and Ansible
Read the whole series (24 articles)

This series shows step by step how Ansible with Polycrate becomes a structured, shareable, compliance-ready automation platform, from the basics to enterprise scenarios.

  1. Install Polycrate and Build Your First Ansible Block in 15 Minutes
  2. Blocks, Actions, and Workspaces: The Modular Principle of Polycrate
  3. Linux Servers on Autopilot: System Management with Polycrate and Ansible
  4. Nginx and Let's Encrypt as a Reusable Polycrate Block
  5. Managing Docker Stacks on Linux Servers with Polycrate
  6. Many Servers, One Truth: Multi-Server Management with Polycrate Inventories
  7. Windows Automation with Polycrate: Ansible and WinRM Without Pain
  8. Windows Software Deployment without SCCM: Chocolatey and Ansible
  9. Hybrid Automation: Windows and Linux in the Same Polycrate Workspace
  10. Deploy Kubernetes Apps from the PolyHub: From Idea to Deployment in Minutes
  11. Creating Your Own Kubernetes App as a Polycrate Block: A Step-by-Step Guide
  12. Multi-Cluster Kubernetes with Polycrate: Why One Cluster, One Workspace
  13. SSH Sessions and kubectl Debugging: Polycrate as an Operations Tool
  14. Helm Charts as a Polycrate Block: More Control Over Chart Deployments
  15. Policy as Code: Automating Compliance Requirements with Polycrate
  16. Workspace Encryption: Managing Secrets in GDPR Compliance – Without External Tooling
  17. Managing Raspberry Pi and Edge Nodes with Polycrate in IoT and Edge Computing
  18. Enterprise Automation: Building, Versioning, and Sharing Blocks Within Teams
  19. Polycrate MCP: Connecting AI Assistants with Live Infrastructure Context
  20. Polycrate vs. plain Ansible: What You Gain – and Why It's Worth It
  21. The Polycrate Ecosystem: PolyHub, API, MCP, and the Future of Automation
  22. Your First Productive Polycrate Workspace: A Checklist for Getting Started
  23. Auditable Operations: SSH Sessions and CLI Activities with Polycrate API
  24. Polycrate API for Teams: Centralized Monitoring and Remote Triggering

TL;DR

  • Docker Compose remains a sensible, pragmatic solution for many Linux server setups, especially if you are managing individual hosts or small groups and do not wish to introduce Kubernetes.
  • With Polycrate, you can package your Docker Compose stack into a reusable block: block.poly for configuration, docker-compose.yml.j2 as a template, Ansible playbook as an action—all neatly structured and team-friendly. This post uses the same registry convention as Nginx and Let’s Encrypt as a reusable Polycrate block (registry.acme-corp.com/infra/…, version pinned via from: …:0.1.0 in the workspace, publishing with polycrate blocks push …).
  • Secret values do not belong in docker-compose.yml or in the readable workspace.poly; they live in secrets.poly using the same YAML shape as workspace.poly (merged at runtime). That file is protected by workspace encryption; templates reference block.config.db_password after the merge.
  • Rolling updates and (nearly) zero-downtime deployments on Linux servers become reproducible and easy for colleagues to execute with Ansible community.docker.docker_compose, health checks, and Polycrate actions.
  • ayedo supports you with Polycrate, best practices, and custom Docker automation solutions—from local Docker Compose to complete platform setups.

Docker Compose on Linux Servers: When It Makes Sense

Not every team needs Kubernetes right away. Many system admins today operate:

  • a few Linux servers with Docker,
  • applications as simple Docker Compose stacks,
  • perhaps a small HA configuration across two or three hosts.

For such scenarios, Docker Compose is often just right:

  • Easy Entry: Many admins are already familiar with docker-compose up -d.
  • Low Complexity: No control plane cluster, no etcd, no ingress controllers.
  • Direct Control: You see the containers directly on the host, logs via docker logs, volumes in the filesystem.

The problems arise elsewhere:

  • Each team member has a different Docker/Compose/Python version.
  • Playbooks and shell scripts are unstructured across different machines.
  • Secrets are in docker-compose.yml, .env files, or the wiki.
  • Updates are manual SSH sessions and copy-paste of commands.

This is where Polycrate comes in: Ansible runs entirely in the container, automation is structured as a block, secrets are encrypted, and everything is shareable as a reusable unit.

A good overview of the Ansible integration in Polycrate can be found in the official documentation.


Starting Point: Polycrate Workspace and Inventory

We are building an example setup for the fictional company acme-corp.com. Target audience: Linux admins managing a Docker Compose stack (e.g., app + Postgres) on multiple Ubuntu servers.

workspace.poly

First, we define our workspace:

# workspace.poly
name: acme-corp-automation
organization: acme

blocks:
  - name: acme-app-stack
    from: registry.acme-corp.com/infra/docker-stack:0.1.0
    config:
      stack_name: acme-app
      docker_host_group: docker_hosts
      host_port: 80
      app_port: 8080
      image_app: "ghcr.io/acme/app:1.2.3"
      db_image: "postgres:15-alpine"
      db_name: "acmeapp"
      db_user: "acmeapp"
      backup_host: "backup01.acme-corp.com"
      backup_path: "/data/backups/acme-app"

Important:

  • The block comes from your container registry registry.acme-corp.com. The version in from: is explicitly pinned (:0.1.0)—a best practice for reproducible builds (see Registry documentation). On first use, Polycrate pulls the block into blocks/registry.acme-corp.com/infra/docker-stack/ (path mirrors the registry name).
  • Configurations are centrally stored under config and later used in the template and playbook.
  • We name an Ansible host group docker_hosts, which we define in the inventory.

inventory.yml

Polycrate uses a YAML inventory in the workspace root:

# inventory.yml
all:
  hosts:
    docker01.acme-corp.com:
      ansible_user: ubuntu
    docker02.acme-corp.com:
      ansible_user: ubuntu
  children:
    docker_hosts:
      hosts:
        docker01.acme-corp.com:
        docker02.acme-corp.com:

  • Polycrate automatically sets ANSIBLE_INVENTORY to this file.
  • Hosts live under all.hosts; child groups only list host names (no duplicate vars)—this matches Polycrate’s SSH integration (polycrate ssh).
  • We will soon use hosts: "{{ block.config.docker_host_group }}" in the playbook.
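To make the group/host split concrete, here is a small Python sketch of how the `docker_hosts` child group resolves to hosts while variables stay under all.hosts. This is only an illustration of the inventory shape, not Ansible's or Polycrate's actual resolver; the inventory is inlined as a dict to keep the sketch self-contained.

```python
# Inventory inlined as a dict; in reality Ansible parses inventory.yml.
inventory = {
    "all": {
        "hosts": {
            "docker01.acme-corp.com": {"ansible_user": "ubuntu"},
            "docker02.acme-corp.com": {"ansible_user": "ubuntu"},
        },
        "children": {
            "docker_hosts": {
                "hosts": {
                    "docker01.acme-corp.com": None,
                    "docker02.acme-corp.com": None,
                }
            }
        },
    }
}

def resolve_group(inv: dict, group: str) -> list[str]:
    """Return the host names of a group; 'all' covers every host."""
    if group == "all":
        return sorted(inv["all"]["hosts"])
    return sorted(inv["all"]["children"][group]["hosts"])

def host_vars(inv: dict, host: str) -> dict:
    """Host variables are defined once under all.hosts, not per group."""
    return inv["all"]["hosts"].get(host) or {}

print(resolve_group(inventory, "docker_hosts"))
print(host_vars(inventory, "docker01.acme-corp.com"))
```

Because the child group only lists names, adding a host means one entry under all.hosts plus one name per group it belongs to, with no duplicated variables.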

Docker Compose Stack as a Polycrate Block

Now we model the Docker Compose stack as a block. This block contains:

  • block.poly with actions (deploy, backup, remove) and block configuration,
  • a Jinja2 template docker-compose.yml.j2,
  • Ansible playbooks deploy.yml, backup.yml, and remove.yml,
  • in the workspace: secrets.poly for sensitive configuration values (same format as workspace.poly; see below).

block.poly

# blocks/registry.acme-corp.com/infra/docker-stack/block.poly
name: registry.acme-corp.com/infra/docker-stack
version: 0.1.0
kind: generic

config:
  stack_name: acme-app
  docker_host_group: docker_hosts
  host_port: 80
  app_port: 8080
  image_app: "ghcr.io/acme/app:1.2.3"
  db_image: "postgres:15-alpine"
  db_name: "acmeapp"
  db_user: "acmeapp"
  db_container_name: "acme-app-db"
  backup_host: "backup01.acme-corp.com"
  backup_path: "/data/backups/acme-app"

actions:
  - name: deploy
    playbook: deploy.yml
    description: "Deploy or update the Docker Compose stack with rolling update"
  - name: backup
    playbook: backup.yml
    description: "Backup the Postgres database from the Docker container"
  - name: remove
    playbook: remove.yml
    description: "Remove the stack completely (containers, volumes, project directory)"

This provides:

  • Guardrails: The block provides structure, rather than playbooks lying “somewhere”.
  • Sharable Automation: The same block is an OCI artifact in your registry and consumable in other workspaces via from: with a tag; optionally also discoverable via PolyHub.

Registry: publishing the block (polycrate blocks push)

Prerequisites and the naming model (full OCI name without a tag in block.poly, version in the version: field) are explained in Nginx and Let's Encrypt as a reusable Polycrate block. Publishing the block takes a single command:

polycrate blocks push registry.acme-corp.com/infra/docker-stack

That uploads registry.acme-corp.com/infra/docker-stack:0.1.0 to the registry—colleagues use exactly that reference in workspace.poly (from: with tag) without copying playbooks by hand.

The command for the team is always the same:

polycrate run acme-app-stack deploy
polycrate run acme-app-stack backup
polycrate run acme-app-stack remove

No more Ansible CLI confusion—Polycrate actions offer a simple UX, even for colleagues who do not work with Ansible daily.


Jinja2 Template: docker-compose.yml from block.config

Instead of committing a fixed docker-compose.yml, we use a Jinja2 template. This way, ports, images, and volumes dynamically come from block.config (including secrets merged from secrets.poly, such as db_password).

# blocks/registry.acme-corp.com/infra/docker-stack/docker-compose.yml.j2
version: "3.9"

services:
  app:
    image: "{{ block.config.image_app }}"
    container_name: "{{ block.config.stack_name }}-app"
    restart: unless-stopped
    depends_on:
      - db
    environment:
      DATABASE_URL: "postgresql://{{ block.config.db_user }}:{{ block.config.db_password }}@db:5432/{{ block.config.db_name }}"
    ports:
      - "{{ block.config.host_port }}:{{ block.config.app_port }}"
    volumes:
      - "{{ block.config.stack_name }}-app-data:/var/www/data"
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:{{ block.config.app_port }}/health || exit 1"]
      interval: 10s
      timeout: 3s
      retries: 10

  db:
    image: "{{ block.config.db_image }}"
    container_name: "{{ block.config.db_container_name }}"
    restart: unless-stopped
    environment:
      POSTGRES_DB: "{{ block.config.db_name }}"
      POSTGRES_USER: "{{ block.config.db_user }}"
      POSTGRES_PASSWORD: "{{ block.config.db_password }}"
    volumes:
      - "{{ block.config.stack_name }}-db-data:/var/lib/postgresql/data"

volumes:
  {{ block.config.stack_name }}-app-data: {}
  {{ block.config.stack_name }}-db-data: {}

Important:

  • After merging workspace.poly and secrets.poly, db_password appears in block.config and is available in the template as block.config.db_password.
  • The template does not contain plaintext passwords; workspace.poly stays readable without secrets.
  • We use the app’s health check later in the Ansible playbook for zero-downtime updates.
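To see how values travel from block.config into the compose file, here is a minimal Python sketch of the substitution step. Polycrate renders with real Jinja2; this stdlib-only version merely mimics the `{{ block.config.key }}` lookups on a small template fragment to keep the example self-contained.

```python
import re

# A fragment of docker-compose.yml.j2; the full file follows the same pattern.
template = (
    'image: "{{ block.config.image_app }}"\n'
    'ports:\n'
    '  - "{{ block.config.host_port }}:{{ block.config.app_port }}"\n'
)

# Merged configuration (workspace.poly + secrets.poly) as seen by the template.
config = {"image_app": "ghcr.io/acme/app:1.2.3", "host_port": 80, "app_port": 8080}

def render(tpl: str, cfg: dict) -> str:
    """Replace {{ block.config.<key> }} placeholders with configured values."""
    return re.sub(
        r"\{\{\s*block\.config\.(\w+)\s*\}\}",
        lambda m: str(cfg[m.group(1)]),
        tpl,
    )

print(render(template, config))
```

The point of the template is exactly this indirection: ports and images change per workspace, the template stays identical.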

Secrets: secrets.poly uses the same format as workspace.poly

Passwords do not belong in .env or in docker-compose.yml. In Polycrate, sensitive values go into secrets.poly in the workspace root—with the same YAML schema as workspace.poly so Polycrate can merge the files at runtime (overlay order: block.poly, then workspace.poly, then secrets.poly). That keeps workspace.poly readable and free of secrets; secrets.poly is protected by workspace encryption (plaintext typically only locally; encrypted .age artifacts in Git). See Configuration and Workspace encryption.

secrets.poly (workspace root)

# secrets.poly (workspace root, next to workspace.poly)
blocks:
  - name: acme-app-stack
    config:
      db_password: "the-real-postgres-password-here"

The block instance name acme-app-stack matches the entry under blocks: in workspace.poly. After the merge, db_password is available in Ansible/templates as block.config.db_password.

Benefits:

  • No HashiCorp Vault needed for this use case.
  • No plaintext passwords in Git, no .env leaks.
  • In the template, use block.config.db_password (see docker-compose.yml.j2 above).
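The overlay merge described above can be sketched in a few lines of Python. This is an illustration of the layering idea (later files win), not Polycrate's actual merge code, and the values are the example values from this article.

```python
# Overlay order: block.poly defaults, then workspace.poly, then secrets.poly.
block_defaults = {"db_name": "acmeapp", "db_user": "acmeapp", "host_port": 80}
workspace_config = {"image_app": "ghcr.io/acme/app:1.2.3", "host_port": 80}
secret_config = {"db_password": "the-real-postgres-password-here"}

def merge(*layers: dict) -> dict:
    """Shallow overlay merge: later layers override earlier ones."""
    merged: dict = {}
    for layer in layers:
        merged.update(layer)
    return merged

block_config = merge(block_defaults, workspace_config, secret_config)
print(block_config["db_password"])  # contributed by secrets.poly
print(block_config["image_app"])    # contributed by workspace.poly
```

After the merge, templates and playbooks see one flat block.config and never need to know which file a value came from.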

Ansible Playbook: Deploy & Rolling Update with community.docker.docker_compose

The core is the Ansible playbook that Polycrate runs in the container. Important: the playbook executes inside the Polycrate container, but it controls the Linux servers from the inventory via SSH (hosts: docker_hosts). Nothing has to be installed locally; the container already ships the necessary tooling.

deploy.yml

# blocks/registry.acme-corp.com/infra/docker-stack/deploy.yml
- name: Deploy Docker Compose Stack with Rolling Update
  hosts: "{{ block.config.docker_host_group }}"
  become: true
  serial: 1
  vars:
    project_name: "{{ block.config.stack_name }}"
    project_dir: "/opt/{{ project_name }}"
  tasks:
    - name: Create target directory for the stack
      ansible.builtin.file:
        path: "{{ project_dir }}"
        state: directory
        owner: root
        group: root
        mode: "0750"

    - name: Render docker-compose.yml from template
      ansible.builtin.template:
        src: "docker-compose.yml.j2"
        dest: "{{ project_dir }}/docker-compose.yml"
        owner: root
        group: root
        mode: "0640"

    - name: Pull latest images
      community.docker.docker_compose:
        project_src: "{{ project_dir }}"
        files:
          - "docker-compose.yml"
        pull: true
        state: present

    - name: Update stack (Rolling Update per host)
      community.docker.docker_compose:
        project_src: "{{ project_dir }}"
        files:
          - "docker-compose.yml"
        state: present
        remove_orphans: true
      register: compose_result

    - name: Wait for app container to be healthy
      ansible.builtin.uri:
        url: "http://{{ inventory_hostname }}:{{ block.config.host_port }}/health"
        status_code: 200
        timeout: 5
        validate_certs: false
      register: healthcheck
      retries: 30
      delay: 2
      until: healthcheck.status == 200

remove.yml

The remove action tears down the stack on target hosts in a controlled way: it removes containers and named volumes and deletes the project directory—destructive for Postgres data in the volume when using remove_volumes: true.

# blocks/registry.acme-corp.com/infra/docker-stack/remove.yml
- name: Remove Docker Compose stack completely
  hosts: "{{ block.config.docker_host_group }}"
  become: true
  serial: 1
  vars:
    project_name: "{{ block.config.stack_name }}"
    project_dir: "/opt/{{ project_name }}"
  tasks:
    - name: Stop and remove compose project (including volumes)
      community.docker.docker_compose:
        project_src: "{{ project_dir }}"
        files:
          - "docker-compose.yml"
        state: absent
        remove_volumes: true

    - name: Remove project directory on host
      ansible.builtin.file:
        path: "{{ project_dir }}"
        state: absent

A few points:

  • serial: 1 ensures a rolling update across hosts in docker_hosts.
    • docker01 is updated and the health check is awaited.
    • Only then does docker02 follow, and so on.
  • Within a host, community.docker.docker_compose handles “updating without tearing everything down”:
    • state: present with the same project performs the equivalent of docker-compose pull + docker-compose up -d in the background.
  • The health check in the compose file and the uri task minimize downtime:
    • The request only hits “the new” container when it is truly ready.
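The retry semantics of the uri task (retries, delay, until) boil down to a polling loop. Here is a simplified Python illustration of that loop, not Ansible's internals; the probe is simulated so the sketch runs standalone.

```python
import time

def wait_until_healthy(probe, retries: int = 30, delay: float = 2.0) -> bool:
    """Poll `probe` until it reports healthy, mirroring retries/delay/until."""
    for _attempt in range(retries):
        if probe():
            return True
        time.sleep(delay)
    return False

# Simulated probe: the "container" becomes healthy on the third check.
state = {"checks": 0}

def probe() -> bool:
    state["checks"] += 1
    return state["checks"] >= 3

print(wait_until_healthy(probe, retries=5, delay=0))
```

With serial: 1, this wait is what gates the rollout: the next host is only touched once the current one answers its health endpoint.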

More details on using Ansible with Polycrate can be found in the Ansible Integration section of the documentation.

Polycrate Command

Execution is now trivial:

polycrate run acme-app-stack deploy
polycrate run acme-app-stack remove

Polycrate:

  • starts the prepared container with Ansible, Python, and community.docker,
  • mounts your workspace,
  • provides inventory and merged block configuration (including values from secrets.poly),
  • executes deploy.yml or remove.yml for the block acme-app-stack.

No local Ansible, no Python version chaos, no fiddling with ansible.cfg. This is the solution to the classic dependency problem.


Backup Action in the Block: docker exec, pg_dump, tar, rsync

Backups are often what gets “automated later” in everyday life—until it’s too late. The good news: In the same block, you can define a backup action that runs regularly or ad hoc.

backup.yml

# blocks/registry.acme-corp.com/infra/docker-stack/backup.yml
- name: Backup the Postgres database from the Docker container
  hosts: "{{ block.config.docker_host_group }}"
  become: true
  vars:
    project_name: "{{ block.config.stack_name }}"
    backup_dir: "/var/backups/{{ project_name }}"
    timestamp: "{{ ansible_date_time.iso8601_basic }}"
    backup_file: "{{ backup_dir }}/{{ block.config.db_name }}-{{ timestamp }}.sql.gz"
  tasks:
    - name: Create backup directory on host
      ansible.builtin.file:
        path: "{{ backup_dir }}"
        state: directory
        owner: root
        group: root
        mode: "0750"

    - name: Run pg_dump in container and compress
      ansible.builtin.shell: >
        set -o pipefail &&
        docker exec {{ block.config.db_container_name }}
        pg_dump -U {{ block.config.db_user }} {{ block.config.db_name }}
        | gzip > {{ backup_file }}
      args:
        executable: /bin/bash

    - name: Sync backup to backup server
      ansible.builtin.shell: >
        rsync -az {{ backup_dir }}/
        backup@{{ block.config.backup_host }}:{{ block.config.backup_path }}/
      args:
        executable: /bin/bash

Here we deliberately use classic admin tools:

  • docker exec for pg_dump against the database container.
  • gzip for compression.
  • rsync to transfer to a backup server.

This also runs entirely from the Polycrate container against the target hosts. Execute:

polycrate run acme-app-stack backup

You can wire this action into a Polycrate workflow, e.g. “backup then cleanup”. See the documentation on Workflows.
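To run the backup on a schedule instead of ad hoc, a plain cron entry on the machine that executes Polycrate is enough. The following is a hypothetical sketch; the user, workspace path, and schedule are assumptions you would adapt.

```cron
# /etc/cron.d/acme-app-backup (illustrative; adjust user and workspace path)
30 2 * * * ops cd /opt/workspaces/acme-corp-automation && polycrate run acme-app-stack backup
```

Because the action is idempotent and self-contained, the cron line stays trivial: no inline pg_dump, no rsync flags, just the same command a colleague would type by hand.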


Polycrate vs. plain Ansible: same stack, less friction

What would the same setup look like with “plain Ansible”?

  • You install Ansible manually on your laptop (or a jump host).
  • You juggle Python versions, the community.docker collection, possibly docker Python bindings.
  • Everyone on the team has a different setup; “works on my machine” is normal.
  • ansible-vault is used for secrets, but not everyone is comfortable with it.
  • Playbooks and roles live partly on laptops, partly on file shares.

With Polycrate:

  • Everything runs in the container—no drifting local dependencies.
  • Workspace and block structure prevent playbook sprawl.
  • Sensitive values live in secrets.poly (same format as workspace.poly) and are protected with workspace encryption.
  • Actions are clearly named, reusable cases (deploy, backup, remove) you can hand to colleagues who rarely touch Ansible.

The block is intended as an artifact in registry.acme-corp.com as described above; polycrate blocks push registry.acme-corp.com/infra/docker-stack is the recurring step when you release new versions. Additionally: PolyHub and Polycrate best practices.


Docker Compose or Kubernetes? A pragmatic decision guide

Docker Compose and Kubernetes are tools with different strengths. From an admin perspective, this rule of thumb helps:

Docker Compose is often the better choice when:

  • You run one host per stack: Docker Compose does not describe multi-node deployments; pushing the same stack to several machines is orchestration with Ansible/Polycrate (as in this article), not a Compose feature.
  • The number of services is manageable.
  • You mainly have static deployments (e.g. 1–2 releases per month).
  • You have direct SSH access to hosts and want to keep it that way.
  • You do not have complex multi-tenant requirements.

Kubernetes becomes more interesting when:

  • You have many more services (>10) and/or teams.
  • You need to scale quickly (more replicas, autoscaling).
  • You want self-service for development teams.
  • You plan multi-region or hybrid-cloud scenarios.
  • You want to use operators, service mesh, GitOps, and similar.

With Polycrate you can work in both worlds:

  • Today: Docker Compose on classic Linux servers, as in this article.
  • Tomorrow: possibly Kubernetes with the same principles: blocks, workspaces, actions, encrypted secrets. Official blocks are available in PolyHub.

The point: you do not need Kubernetes to get clean, reproducible automation. Polycrate brings order, shareability, and security to existing Docker Compose setups too.


Frequently asked questions

How do I install Docker and the community.docker collection on target hosts?

Install Docker itself (if missing) in the usual way for your OS—e.g. apt on Ubuntu. That is deliberately not part of the playbook above to keep responsibilities clear.

The Ansible collection community.docker is provided inside the Polycrate container. You do not install it on target hosts—only the Docker daemon and docker CLI need to be there. Polycrate ensures Ansible in the container has a matching collection version without changing your local machine.

Alternatively, you can build a separate block (e.g. using apt/dnf and the official Docker repo) that installs Docker on target hosts—and run it before the app stack block. That is intentionally left as a reader exercise; this article focuses on the Compose stack.

What happens to my secrets if I version the workspace in Git?

Sensitive values from secrets.poly are stored encrypted; the encrypted artifacts can live in Git. workspace.poly stays readable without passwords—secrets are only in secrets.poly (or in encrypted form in Git).

As long as you use workspace encryption (see Workspace encryption), no plaintext secrets appear in Git. Only someone with the workspace key can decrypt; in playbooks and templates you use the merged values such as block.config.db_password.

Can I use the same block in several environments (e.g. staging, prod)?

Yes—that is a strength of the block model. Typical approach:

  • One workspace per environment (acme-corp-automation-staging, acme-corp-automation-prod).
  • Both reference the same block (e.g. from a registry).
  • Different config values in each workspace.poly (e.g. other images, ports, host groups).
  • Different secrets per workspace (e.g. separate DB passwords).

Automation stays identical while environment parameters stay cleanly separated and traceable.
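As a sketch, the staging workspace could look like this; the values shown (stack name, port, release-candidate image) are assumptions for illustration, while the block reference stays identical to the one used in this article.

```yaml
# workspace.poly in the staging workspace (illustrative values)
name: acme-corp-automation-staging
organization: acme

blocks:
  - name: acme-app-stack
    from: registry.acme-corp.com/infra/docker-stack:0.1.0
    config:
      stack_name: acme-app-staging
      docker_host_group: docker_hosts
      host_port: 8080
      image_app: "ghcr.io/acme/app:1.3.0-rc.1"
```

Only the config differs; deploy, backup, and remove behave identically in both environments because the block version is pinned.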

More questions? See our FAQ.


From routine to a reproducible platform

In this article you saw how to turn an existing Docker Compose stack on Linux servers into a structured Polycrate block:

  • You created a workspace with inventory and a block instance.
  • You combined block.poly, docker-compose.yml.j2, deploy.yml, backup.yml, remove.yml, and workspace secrets.poly so deployments, backups, and teardown are reproducible, secure, and team-friendly.
  • You used Ansible strengths (community.docker.docker_compose, idempotence, rolling updates) without worrying about local Python or collection versions.
  • You saw how Polycrate and workspace encryption set guardrails without getting in your way.

That is the approach we take at ayedo: we help teams turn existing infrastructure—whether classic Linux servers with Docker Compose or later Kubernetes—into robust, shareable automation. Polycrate is the tool that lets you add structure, security, and reuse step by step without swapping your stack overnight.

If you want to turn your own Docker Compose stacks into similar blocks or migrate an existing Ansible estate to Polycrate, we are happy to help—from first workshops to tailored platform solutions.

Get started with a no-obligation Docker automation demo.
