Polycrate API for Teams: Centralized Monitoring and Remote Triggering
TL;DR The Polycrate API transforms individual workspaces into a team platform: all workspaces, …
This series shows step by step how Ansible with Polycrate becomes a structured, shareable, and compliance-ready automation platform – from the basics to enterprise scenarios.
With polycrate ssh and block actions for kubectl, Polycrate becomes a central operations tool for Linux, Windows, and Kubernetes environments. polycrate run myapp debug runs Ansible playbooks with kubernetes.core.k8s_log and k8s_info in the container – always with the correct kubeconfig from the workspace.

Many teams use Ansible and kubectl primarily for provisioning and deployments. However, day-to-day operations look different:
Questions like “Who was last on web-03?” or “What is currently happening in the payments cluster?” come up daily.

This is where Polycrate comes in. Polycrate not only encapsulates your automation in blocks, it also provides a unified, auditable way to access and inspect the systems it manages.
The following sections show concretely how to map SSH access and Kubernetes debugging with Polycrate – including complete workspace.poly, block.poly, and Ansible playbooks.
## The Inventory: Basis for polycrate ssh

In the Polycrate workspace, the inventory is always located as inventory.yml in the workspace root. Polycrate automatically sets the environment for Ansible and polycrate ssh.
A minimal inventory for our example workspace acme-corp-automation:
```yaml
# inventory.yml
all:
  hosts:
    web-01.acme-corp.com:
      ansible_user: ubuntu
    web-02.acme-corp.com:
      ansible_user: ubuntu
    k8s-node-01.acme-corp.com:
      ansible_user: ubuntu
```

No INI files, no -i parameter, no guessing. Polycrate reads this inventory with every action and also with polycrate ssh.
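Since this is a standard Ansible inventory, hosts can also be organized into groups. The following variant is a hypothetical extension of the example above – the group names webservers and k8s_nodes are illustrative and not part of the original workspace:

```yaml
# inventory.yml – hypothetical variant with groups (group names are illustrative)
all:
  children:
    webservers:
      hosts:
        web-01.acme-corp.com:
        web-02.acme-corp.com:
      vars:
        ansible_user: ubuntu
    k8s_nodes:
      hosts:
        k8s-node-01.acme-corp.com:
      vars:
        ansible_user: ubuntu
```

Playbooks inside your blocks can then target hosts: webservers instead of all, while polycrate ssh still sees every host.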
Our workspace:
```yaml
# workspace.poly
name: acme-corp-automation
organization: acme
blocks:
  - name: k8s-debug
    from: registry.acme-corp.com/acme/ops/k8s-debug:0.1.0
    config:
      namespace: "payments"
      app_label: "app=payments-api"
```

Important:
- The block is pulled via from: from an OCI registry (here the fictional registry.acme-corp.com/...; public examples include cargo.ayedo.cloud or PolyHub). The version is pinned in from: (:0.1.0).
- No kubeconfig_path in workspace.poly or block.poly: Polycrate sets KUBECONFIG and K8S_AUTH_KUBECONFIG in the action container to the workspace kubeconfig (artifacts/secrets/kubeconfig.yml or your configured source). kubernetes.core.k8s* and kubectl pick that up automatically.
- The kubeconfig lives encrypted under artifacts/secrets/kubeconfig.yml; Polycrate decrypts it transparently when needed (details: Workspace encryption).

## polycrate ssh: Host Selection with Tab Completion

With this basis, it's enough to run:
```shell
polycrate ssh
```

Polycrate reads the inventory, shows you the known hosts, and you can:
- Type polycrate ssh web-0<TAB> to jump to web-01.acme-corp.com via tab completion.

No copy-pasting IP addresses, no key searching, no individual ~/.ssh/config variants within the team.
Without Polycrate, it often looks like this:
- Ad hoc ssh ubuntu@web-01.acme-corp.com commands without unified logging

With Polycrate, the inventory brings structure and polycrate ssh provides a unified UX – regardless of whether you manage Linux servers, Windows hosts (via WinRM), or edge nodes. Details on SSH integration can be found in the Polycrate SSH documentation.
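For Windows hosts, the same inventory carries the standard Ansible WinRM connection variables. A sketch – the hostname and the transport choice are illustrative, not part of the example workspace:

```yaml
# inventory.yml – hypothetical Windows host reachable via WinRM
all:
  hosts:
    win-01.acme-corp.com:
      ansible_user: Administrator
      ansible_connection: winrm
      ansible_port: 5986
      ansible_winrm_transport: ntlm
      ansible_winrm_server_cert_validation: ignore  # only acceptable in lab setups
```

Because the connection details live in the inventory, polycrate ssh and block actions treat Windows hosts the same way as Linux servers.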
From a compliance perspective (e.g., ISO 27001 or the GDPR, in effect since 25 May 2018), it's not enough to “somehow” access servers. You must be able to prove who accessed which system, when, and what the outcome was.
Polycrate logs every polycrate ssh session – user, host, start and end time, duration, and exit code – and provides this data via the Polycrate API.
This turns “Admin was on the server” into a clean audit trail.
Details on the API are described in the Polycrate API documentation. Typical use cases:
- “Who was on k8s-node-01 between 02:00 and 03:00?”

Instead of scattered SSH logins from individual terminals, SSH access becomes a centrally controllable and analyzable process – without additional tinkering. For compliance officers, this is invaluable, and for platform teams, it's an important component of a modern platform engineering strategy.
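An audit query like the one above boils down to filtering session records by host and time window. A minimal sketch in Python – the record shape (user, host, started_at, ended_at) is an assumption for illustration, not the actual Polycrate API schema:

```python
from datetime import datetime

# Hypothetical session records, roughly shaped like what an audit API
# might return; the field names here are illustrative only.
sessions = [
    {"user": "alice", "host": "k8s-node-01",
     "started_at": "2024-05-01T02:14:00", "ended_at": "2024-05-01T02:40:00"},
    {"user": "bob", "host": "web-01",
     "started_at": "2024-05-01T02:20:00", "ended_at": "2024-05-01T02:25:00"},
    {"user": "carol", "host": "k8s-node-01",
     "started_at": "2024-05-01T04:05:00", "ended_at": "2024-05-01T04:30:00"},
]

def sessions_on(records, host, start, end):
    """Return sessions on `host` whose lifetime overlaps [start, end]."""
    hits = []
    for s in records:
        if s["host"] != host:
            continue
        began = datetime.fromisoformat(s["started_at"])
        finished = datetime.fromisoformat(s["ended_at"])
        if began <= end and finished >= start:  # interval overlap test
            hits.append(s)
    return hits

window_start = datetime.fromisoformat("2024-05-01T02:00:00")
window_end = datetime.fromisoformat("2024-05-01T03:00:00")
hits = sessions_on(sessions, "k8s-node-01", window_start, window_end)
print([s["user"] for s in hits])  # only alice's session overlaps the window
```

The point is not the ten lines of Python but that the data exists centrally at all – with plain SSH from individual laptops, there is nothing to filter.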
## polycrate k8s debug

Besides repeatable block actions, ad hoc work inside the cluster uses the CLI command polycrate k8s debug. It starts a temporary pod with the Polycrate image, attaches you to a shell in the pod, and cleans up afterward.
According to the documentation, the flow is essentially:
- Creates a ServiceAccount polycrate-admin in the target namespace (if missing)
- Binds cluster-admin to that ServiceAccount
- Starts a pod with the Polycrate image and attaches you to a shell
- On exit, the pod is deleted

The kubeconfig comes from the workspace like all Kubernetes features (typically artifacts/secrets/kubeconfig.yml). The pod includes the same tools as the local Polycrate container (kubectl, helm, ansible, …); see Debug pod and the CLI reference for polycrate k8s debug.
```shell
# Default namespace kube-system (override with -n)
polycrate k8s debug
polycrate k8s debug -n default
```

Security (per documentation): The debug pod runs with cluster-admin. Use it only in trusted environments; see the warning in Debug pod.
Contrast: polycrate k8s debug is the fast interactive cluster session. The block actions below are the repeatable, versioned path for logs, events, and structured checks across the team.
## kubectl and Ansible Playbooks

SSH solves the access side. For Kubernetes debugging, you also need:

- kubectl with the correct kubeconfig

Polycrate bundles this in the block model: the same block that deploys can also contain debugging actions.
### blocks/registry.acme-corp.com/acme/ops/k8s-debug/block.poly

A simple generic block (after pushing to your registry; name = full registry path without tag):
```yaml
# blocks/registry.acme-corp.com/acme/ops/k8s-debug/block.poly
name: registry.acme-corp.com/acme/ops/k8s-debug
version: 0.1.0
kind: generic
config:
  namespace: "default"
  app_label: "app=myapp"
actions:
  - name: kubectl
    command: |
      kubectl -n "{{ block.config.namespace }}" {{ action.args | default('get pods') }}
  - name: debug
    playbook: debug.yml
```

Key points:
- kubectl uses the same environment as Ansible: KUBECONFIG points at the workspace kubeconfig – no --kubeconfig in the action.
- The kubectl action allows quick queries (polycrate run k8s-debug kubectl -- get pods).
- The debug action launches an Ansible playbook that collects logs, events, and pod status.
- Set concrete values like namespace and app_label per workspace in the block instance config (see workspace.poly above), not only in the block's block.poly.
Best practices for block structure can be found in the Polycrate best practices and the overview of blocks.
## kubernetes.core.k8s_log and k8s_info

The Ansible playbook debug.yml is located in the same directory as the block.poly:
```yaml
# blocks/registry.acme-corp.com/acme/ops/k8s-debug/debug.yml
- name: Kubernetes Debugging for an Application
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    namespace: "{{ block.config.namespace }}"
    app_label: "{{ block.config.app_label }}"
  tasks:
    - name: Determine Pods for the Application
      kubernetes.core.k8s_info:
        api_version: v1
        kind: Pod
        namespace: "{{ namespace }}"
        label_selectors:
          - "{{ app_label }}"
      register: pod_info

    - name: Display Found Pods
      ansible.builtin.debug:
        var: pod_info.resources

    - name: Output Logs of the First Pod
      when: pod_info.resources | length > 0
      kubernetes.core.k8s_log:
        namespace: "{{ namespace }}"
        name: "{{ pod_info.resources[0].metadata.name }}"
      register: pod_logs

    - name: Display Pod Logs
      when: pod_info.resources | length > 0
      ansible.builtin.debug:
        var: pod_logs.log

    - name: Retrieve Events in the Namespace
      kubernetes.core.k8s_info:
        api_version: v1
        kind: Event
        namespace: "{{ namespace }}"
      register: ns_events

    - name: Filter Relevant Events (Only Warning)
      ansible.builtin.set_fact:
        warning_events: >-
          {{ ns_events.resources
             | selectattr('type', 'defined')
             | selectattr('type', 'equalto', 'Warning')
             | list }}

    - name: Display Warning Events
      ansible.builtin.debug:
        var: warning_events
```

Important:
hosts: localhost and connection: local are correct here because the playbook does not operate on a remote host but speaks to the cluster via the Kubernetes API. It runs in the Polycrate container; Polycrate sets KUBECONFIG / K8S_AUTH_KUBECONFIG – the kubernetes.core.k8s_info and kubernetes.core.k8s_log modules do not need a kubeconfig: parameter on tasks. Details: Ansible integration.
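The selectattr chain in the playbook is plain list filtering. The same logic in Python, with a hypothetical event list shaped like the Kubernetes Event objects that k8s_info registers in ns_events.resources:

```python
# Hypothetical events, shaped like `ns_events.resources` from k8s_info.
ns_events = [
    {"type": "Normal", "reason": "Scheduled",
     "message": "Successfully assigned pod to node"},
    {"type": "Warning", "reason": "BackOff",
     "message": "Back-off restarting failed container"},
    {"reason": "Pulled", "message": "Event without a type field"},
]

# Equivalent of:
#   selectattr('type', 'defined') | selectattr('type', 'equalto', 'Warning') | list
warning_events = [e for e in ns_events if "type" in e and e["type"] == "Warning"]

for event in warning_events:
    print(event["reason"], "-", event["message"])  # BackOff - Back-off restarting failed container
```

The selectattr('type', 'defined') guard matters: Event objects without a type field would otherwise make the equalto filter fail.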
Typical situation: The deployment is through, but the pod immediately goes into CrashLoopBackOff.
With Polycrate:
```shell
# Overview of Pods in the Namespace
polycrate run k8s-debug kubectl -- get pods -o wide

# Execute Standard Debug Playbook
polycrate run k8s-debug debug
```

Result:
- k8s_info lists all pods with label app=payments-api
- k8s_log retrieves the logs of the first pod
- k8s_info pulls all events in the namespace, filtered on Warning

With one command, you get logs and events in structured form – repeatable, versioned in the block, and with the assurance that the correct kubeconfig is always used.
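The pod data k8s_info returns already carries the crash-loop signal. A sketch of pulling the waiting reason out of a pod resource – the sample dict mirrors the real Kubernetes Pod status schema (status.containerStatuses[].state.waiting.reason), while the pod name and counts are illustrative:

```python
# Sample pod resource, shaped like an entry in `pod_info.resources`.
pod = {
    "metadata": {"name": "payments-api-6f9c"},
    "status": {
        "containerStatuses": [
            {
                "name": "payments-api",
                "restartCount": 7,
                "state": {"waiting": {"reason": "CrashLoopBackOff"}},
            }
        ]
    },
}

def waiting_reasons(pod):
    """Collect (container, reason, restarts) for containers stuck in a waiting state."""
    out = []
    for cs in pod.get("status", {}).get("containerStatuses", []):
        waiting = cs.get("state", {}).get("waiting")
        if waiting:
            out.append((cs["name"], waiting.get("reason"), cs.get("restartCount", 0)))
    return out

print(waiting_reasons(pod))  # [('payments-api', 'CrashLoopBackOff', 7)]
```

A check like this could live as an extra task in debug.yml, turning "stare at kubectl output" into a structured result.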
The payments-api service is available in Kubernetes as ClusterIP, but the application appears “down”.
Possible workflow steps, all mappable as actions in the block:
```shell
# Check Pods
polycrate run k8s-debug kubectl -- get pods -l app=payments-api -o wide

# Check Events
polycrate run k8s-debug debug

# Check Service and Endpoints
polycrate run k8s-debug kubectl -- get svc,ep -l app=payments-api
```

Because everything lives in a block, you can push this debug block to an OCI registry (e.g. the fictional registry.acme-corp.com, or the public cargo.ayedo.cloud) and share it with other teams via PolyHub. Shareable automation applies not only to deployments but also to operations workflows.
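A common cause of a "down" ClusterIP service is a selector no pod matches – the service then has empty endpoints. The matching rule is a simple dict-subset check, sketched here with illustrative labels (the typo'd tier value is deliberate):

```python
def selector_matches(selector, pod_labels):
    """A Kubernetes service selects a pod iff every selector
    key/value pair appears in the pod's labels."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

# Illustrative data – a typo'd selector that silently selects nothing.
service_selector = {"app": "payments-api", "tier": "backend"}
pod_labels = {"app": "payments-api", "tier": "backned"}  # note the typo

print(selector_matches(service_selector, pod_labels))  # False -> empty endpoints
```

This is exactly what the get svc,ep comparison above surfaces: if the endpoints list is empty while pods are running, compare selector and labels field by field.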
Many toolchains distinguish between “deployment” and “operations”. This often leads to separate tools, separate repositories, and separate knowledge silos for each phase.
Polycrate brings these phases together:
- Deployments run via polycrate run.
- polycrate ssh and debug actions live in the same workspace, with the same inventories, secrets, and audit functions.

Because all actions run in a containerized context, local dependency chaos is eliminated:
- Required tools are defined once for the whole team (Dockerfile.poly or a setup script)

The block model also acts as a guardrail: instead of scattered playbooks and shell scripts, you have clearly defined actions (“deploy”, “patch”, “debug”, “logs”) that less specialized colleagues can run safely.
### How is polycrate ssh different from plain ssh?

polycrate ssh uses the workspace inventory (inventory.yml) and the connection parameters and secrets maintained there (e.g. ansible_user). You do not manually type hostnames or IPs, maintain individual SSH configs, or share credentials in chat.
Additionally, all sessions are logged via the Polycrate API – user, host, start/end time, duration, and exit code. That is not achievable with a plain ssh command without custom tooling.
### Can I keep using my existing playbooks and scripts?

Yes. Polycrate wraps existing playbooks and scripts in blocks and actions. You define which playbooks or commands run under which action name. The deployment playbook stays the same, but you gain:
- A unified interface: polycrate run BLOCK ACTION

You can turn existing kubectl one-liners into command actions in the block.
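For instance, a recurring events query could become its own named action. A hypothetical addition to the actions: list of the k8s-debug block above (the sort flag is standard kubectl):

```yaml
# Hypothetical extra action for block.poly
actions:
  - name: events
    command: |
      kubectl -n "{{ block.config.namespace }}" get events --sort-by=.lastTimestamp
```

After that, polycrate run k8s-debug events replaces the one-liner everyone used to keep in their shell history.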
### Does Polycrate replace existing CI/CD or platform tooling?

Polycrate complements established CI/CD and platform approaches instead of replacing them. The strength is that deployments, operations tasks, and debugging all share the same workspace model – the same inventory, secrets, and audit trail.
Especially in a platform engineering context, Polycrate helps deliver standardized but flexible runbooks – with clear guardrails and audit trails.
More questions? See our FAQ.
This article showed how Polycrate works as an operations tool:
- polycrate ssh makes access to servers and nodes repeatable and auditable without manually managing local SSH configs and credentials.
- polycrate k8s debug gives an interactive debug pod with the Polycrate image for quick cluster work.
- Kubernetes debugging with kubernetes.core.k8s_log and k8s_info becomes a defined process you can share and evolve as a team.

If you want to apply these concepts in your environment, we can help: from the first Polycrate evaluation through workspace and block design to an integrated platform engineering approach that aligns operations and compliance.
The next step is simple: book a session where we review your landscape and sketch a fitting setup together.