SSH Sessions and kubectl Debugging: Polycrate as an Operations Tool
Fabian Peter · 11 minute read

SSH sessions and kubectl debugging directly from the Polycrate workspace
Read the full series (24 articles)

This series shows, step by step, how Ansible with Polycrate becomes a structured, shareable, compliance-ready automation platform – from the basics through to enterprise scenarios.

  1. Install Polycrate and Build Your First Ansible Block in 15 Minutes
  2. Blocks, Actions, and Workspaces: The Modular Principle of Polycrate
  3. Linux Servers on Autopilot: System Management with Polycrate and Ansible
  4. Nginx and Let's Encrypt as a Reusable Polycrate Block
  5. Managing Docker Stacks on Linux Servers with Polycrate
  6. Many Servers, One Truth: Multi-Server Management with Polycrate Inventories
  7. Windows Automation with Polycrate: Ansible and WinRM Without Pain
  8. Windows Software Deployment without SCCM: Chocolatey and Ansible
  9. Hybrid Automation: Windows and Linux in the Same Polycrate Workspace
  10. Deploy Kubernetes Apps from the PolyHub: From Idea to Deployment in Minutes
  11. Creating Your Own Kubernetes App as a Polycrate Block: A Step-by-Step Guide
  12. Multi-Cluster Kubernetes with Polycrate: Why One Cluster, One Workspace
  13. SSH Sessions and kubectl Debugging: Polycrate as an Operations Tool
  14. Helm Charts as a Polycrate Block: More Control Over Chart Deployments
  15. Policy as Code: Automating Compliance Requirements with Polycrate
  16. Workspace Encryption: Managing Secrets in GDPR Compliance – Without External Tooling
  17. Managing Raspberry Pi and Edge Nodes with Polycrate in IoT and Edge Computing
  18. Enterprise Automation: Building, Versioning, and Sharing Blocks Within Teams
  19. Polycrate MCP: Connecting AI Assistants with Live Infrastructure Context
  20. Polycrate vs. plain Ansible: What You Gain – and Why It's Worth It
  21. The Polycrate Ecosystem: PolyHub, API, MCP, and the Future of Automation
  22. Your First Productive Polycrate Workspace: A Checklist for Getting Started
  23. Auditable Operations: SSH Sessions and CLI Activities with Polycrate API
  24. Polycrate API for Teams: Centralized Monitoring and Remote Triggering

TL;DR

  • Polycrate is not just a deployment tool: With polycrate ssh and block actions for kubectl, it becomes a central operations tool for Linux, Windows, and Kubernetes environments.
  • SSH sessions run directly from the workspace – with inventory-based host selection, tab completion, and no “Where’s the password?” questions. All sessions are logged in an auditable way via the Polycrate API.
  • Kubernetes debugging becomes a repeatable process: polycrate run myapp debug runs Ansible playbooks with kubernetes.core.k8s_log and k8s_info in the container – always with the correct kubeconfig from the workspace.
  • This solves several problems of classic setups: no local Ansible, no kubectl chaos, no shared kubeconfigs, clear audit trails for SSH access and K8s operations.
  • ayedo supports you with Polycrate, tailored platform engineering and consulting so you can align operations and compliance – from the first demo to production.

Polycrate as an Operations Tool, Not Just a Deployment Helper

Many teams use Ansible and kubectl primarily for provisioning and deployments. However, day-to-day operations look different:

  • “Who can quickly log into web-03?”
  • “Which kubeconfig do I need for the payments cluster?”
  • “Who debugged on the prod node last night?”
  • “Why is the pod crashing even though the last deployment was green?”

This is where Polycrate comes in. Polycrate not only encapsulates your automation in blocks, it provides:

  1. A consistent, containerized toolchain (Ansible, Python, kubectl, Helm …) for everyone, without local setup chaos.
  2. Structure through the block model, which also maps debugging and operational tasks as actions – not just deployments.
  3. SSH and API functions that make operations activities auditable.

The following sections show concretely how to map SSH access and Kubernetes debugging with Polycrate – including complete workspace.poly, block.poly, and Ansible playbooks.


SSH Directly from the Workspace: polycrate ssh

Inventory as Single Source of Truth

In the Polycrate workspace, the inventory is always located as inventory.yml in the workspace root. Polycrate automatically sets the environment for Ansible and polycrate ssh.

A minimal inventory for our example workspace acme-corp-automation:

# inventory.yml
all:
  hosts:
    web-01.acme-corp.com:
      ansible_user: ubuntu
    web-02.acme-corp.com:
      ansible_user: ubuntu
    k8s-node-01.acme-corp.com:
      ansible_user: ubuntu

No INI files, no -i parameter, no guessing. Polycrate reads this inventory with every action and also with polycrate ssh.
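As the host list grows, standard Ansible inventory groups keep it organized – for example, separating web servers from Kubernetes nodes. A sketch (the group names are illustrative, the inventory syntax is standard Ansible):

```yaml
# inventory.yml – grouped variant; group names are illustrative
all:
  children:
    web:
      hosts:
        web-01.acme-corp.com:
        web-02.acme-corp.com:
    k8s:
      hosts:
        k8s-node-01.acme-corp.com:
  vars:
    # applies to every host, so it is not repeated per entry
    ansible_user: ubuntu
```

Variables defined under `all.vars` apply to every host, so connection settings live in one place instead of being repeated per entry.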

Workspace Definition with Registry Block

Our workspace:

# workspace.poly
name: acme-corp-automation
organization: acme

blocks:
  - name: k8s-debug
    from: registry.acme-corp.com/acme/ops/k8s-debug:0.1.0
    config:
      namespace: "payments"
      app_label: "app=payments-api"

Important:

  • The block is pulled via from: from an OCI registry (here fictional registry.acme-corp.com/...; public examples include cargo.ayedo.cloud or PolyHub). The version is pinned in from: (:0.1.0).
  • No kubeconfig_path in workspace.poly or block.poly: Polycrate sets KUBECONFIG and K8S_AUTH_KUBECONFIG in the action container to the workspace kubeconfig (artifacts/secrets/kubeconfig.yml or your configured source). kubernetes.core.k8s* and kubectl pick that up automatically.
  • The kubeconfig typically lives under artifacts/secrets/kubeconfig.yml; Polycrate decrypts it transparently when needed (details: Workspace encryption).
  • The same secret handling applies to SSH (e.g. private SSH keys) without integrating external tools like Vault.

polycrate ssh: Host Selection with Tab Completion

With this in place, it's enough to run:

polycrate ssh

Polycrate reads the inventory, shows you the known hosts, and you can:

  • interactively select a host or
  • simply type polycrate ssh web-0<TAB> to jump to web-01.acme-corp.com via tab completion.

No copy-pasting IP addresses, no key searching, no individual ~/.ssh/config variants within the team.

Comparison Without Polycrate

Without Polycrate, it often looks like this:

  • locally different SSH clients and configs
  • chains of messages like “Can you send me the SSH key for prod?”
  • manual ssh ubuntu@web-01.acme-corp.com commands without unified logging

With Polycrate, the inventory brings structure and polycrate ssh provides a unified UX – regardless of whether you manage Linux servers, Windows hosts (via WinRM), or edge nodes. Details on SSH integration can be found in the Polycrate SSH documentation.


SSH Audit with Polycrate API: Who Was on Which Host and When?

From a compliance perspective (e.g., ISO 27001 or the GDPR, which has applied since 25 May 2018), it’s not enough to “somehow” access servers. You must be able to prove:

  • Who started an SSH session and when?
  • On which host?
  • How long did the session last?
  • How did it end (exit code)?

Polycrate logs every polycrate ssh session and provides this data via the Polycrate API:

  • Session start time
  • Session end
  • Duration
  • User (e.g., via SSO integration)
  • Target host
  • Exit code

This turns “Admin was on the server” into a clean audit trail.

Queries via the Polycrate API

Details on the API are described in the Polycrate API documentation. Typical use cases:

  • Daily export of all prod SSH sessions to the SIEM
  • Ad-hoc analysis: “Who was on k8s-node-01 between 02:00 and 03:00?”
  • Traceability in security incidents

Instead of scattered SSH logins from individual terminals, SSH access becomes a centrally controllable and analyzable process – without additional tinkering. For compliance officers, this is invaluable, and for platform teams, it’s an important component of a modern platform engineering strategy.
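The ad-hoc analysis above (“Who was on k8s-node-01 between 02:00 and 03:00?”) can be sketched against an exported session list. A minimal Python sketch – the record fields (`user`, `host`, `start`, `end`, `exit_code`) are illustrative assumptions, not the actual Polycrate API schema:

```python
from datetime import datetime

# Hypothetical session records, e.g. from a daily export of the
# Polycrate API; field names are illustrative, not the real schema.
sessions = [
    {"user": "alice", "host": "k8s-node-01.acme-corp.com",
     "start": "2024-06-01T02:17:03Z", "end": "2024-06-01T02:41:55Z", "exit_code": 0},
    {"user": "bob", "host": "web-01.acme-corp.com",
     "start": "2024-06-01T02:20:00Z", "end": "2024-06-01T02:25:00Z", "exit_code": 0},
]

def parse(ts: str) -> datetime:
    """Parse an ISO 8601 UTC timestamp like 2024-06-01T02:17:03Z."""
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def sessions_on(records, host, window_start, window_end):
    """Return sessions on `host` that overlap the given time window."""
    ws, we = parse(window_start), parse(window_end)
    return [
        s for s in records
        if s["host"] == host
        and parse(s["start"]) < we
        and parse(s["end"]) > ws
    ]

hits = sessions_on(sessions, "k8s-node-01.acme-corp.com",
                   "2024-06-01T02:00:00Z", "2024-06-01T03:00:00Z")
for s in hits:
    print(s["user"], s["start"], "->", s["end"])
```

The same overlap filter works for the SIEM export case: run it per host and per shift window, then forward the matches.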


Interactive in the Cluster: polycrate k8s debug

Besides repeatable block actions, Polycrate offers the CLI command polycrate k8s debug for ad hoc work inside the cluster. It starts a temporary pod with the Polycrate image, attaches you to a shell in the pod, and cleans up afterward.

According to the documentation, the flow is essentially:

  1. ServiceAccount polycrate-admin in the target namespace (if missing)
  2. ClusterRoleBinding granting cluster-admin to that ServiceAccount
  3. Pod with the Polycrate image, then attach to an interactive shell
  4. After exit, the pod is deleted

The kubeconfig comes from the workspace like all Kubernetes features (typically artifacts/secrets/kubeconfig.yml). The pod includes the same tools as the local Polycrate container (kubectl, helm, ansible, …); see Debug pod and the CLI reference for polycrate k8s debug.

# Default namespace kube-system (override with -n)
polycrate k8s debug

polycrate k8s debug -n default

Security (per documentation): The debug pod runs with cluster-admin. Use it only in trusted environments; see the warning in Debug pod.

Contrast: polycrate k8s debug is the fast interactive cluster session. The block actions below are the repeatable, versioned path for logs, events, and structured checks across the team.


Kubernetes Debugging as a Block: kubectl and Ansible Playbooks

SSH solves the access side. For Kubernetes debugging, you also need:

  • consistent kubectl with the correct kubeconfig
  • repeatable debug steps, instead of “kubectl commands from memory”
  • logs, events, and status in structured form

Polycrate bundles this in the block model: the same block that deploys can also contain debugging actions.

The Debug Block: blocks/registry.acme-corp.com/acme/ops/k8s-debug/block.poly

A simple generic block (once pushed to your registry, the name field is the full registry path without the tag):

# blocks/registry.acme-corp.com/acme/ops/k8s-debug/block.poly
name: registry.acme-corp.com/acme/ops/k8s-debug
version: 0.1.0
kind: generic

config:
  namespace: "default"
  app_label: "app=myapp"

actions:
  - name: kubectl
    command: |
      kubectl -n "{{ block.config.namespace }}" {{ action.args | default('get pods') }}

  - name: debug
    playbook: debug.yml

Key points:

  • The toolchain (kubectl, Python, Ansible) runs in the Polycrate container – no local setup needed.
  • kubectl uses the same environment as Ansible: KUBECONFIG points at the workspace kubeconfig – no --kubeconfig in the action.
  • Action kubectl allows quick queries (polycrate run k8s-debug kubectl -- get pods).
  • Action debug launches an Ansible playbook that collects logs, events, and pod status.

Set concrete values like namespace and app_label per workspace in the block instance config (see workspace.poly above), not only in the block’s block.poly.

Best practices for block structure can be found in the Polycrate best practices and the overview of blocks.


Debug Playbook with kubernetes.core.k8s_log and k8s_info

The Ansible playbook debug.yml is located in the same directory as the block.poly:

# blocks/registry.acme-corp.com/acme/ops/k8s-debug/debug.yml
- name: Kubernetes Debugging for an Application
  hosts: localhost
  connection: local
  gather_facts: false

  vars:
    namespace: "{{ block.config.namespace }}"
    app_label: "{{ block.config.app_label }}"

  tasks:
    - name: Determine Pods for the Application
      kubernetes.core.k8s_info:
        api_version: v1
        kind: Pod
        namespace: "{{ namespace }}"
        label_selectors:
          - "{{ app_label }}"
      register: pod_info

    - name: Display Found Pods
      ansible.builtin.debug:
        var: pod_info.resources

    - name: Output Logs of the First Pod
      when: pod_info.resources | length > 0
      kubernetes.core.k8s_log:
        namespace: "{{ namespace }}"
        name: "{{ pod_info.resources[0].metadata.name }}"
      register: pod_logs

    - name: Display Pod Logs
      when: pod_info.resources | length > 0
      ansible.builtin.debug:
        var: pod_logs.logs

    - name: Retrieve Events in the Namespace
      kubernetes.core.k8s_info:
        api_version: v1
        kind: Event
        namespace: "{{ namespace }}"
      register: ns_events

    - name: Filter Relevant Events (Only Warning)
      ansible.builtin.set_fact:
        warning_events: >-
          {{ ns_events.resources
             | selectattr('type', 'defined')
             | selectattr('type', 'equalto', 'Warning')
             | list }}

    - name: Display Warning Events
      ansible.builtin.debug:
        var: warning_events

Important:
hosts: localhost and connection: local are correct here because the playbook does not operate on a remote host but speaks to the cluster via the Kubernetes API. It runs in the Polycrate container; Polycrate sets KUBECONFIG / K8S_AUTH_KUBECONFIG – the kubernetes.core.k8s_info and kubernetes.core.k8s_log modules do not need a kubeconfig: parameter on tasks. Details: Ansible integration.

Debug Scenario 1: Pod Keeps Crashing

Typical situation: The deployment is through, but the pod immediately goes into CrashLoopBackOff.

With Polycrate:

# Overview of Pods in the Namespace
polycrate run k8s-debug kubectl -- get pods -o wide

# Execute Standard Debug Playbook
polycrate run k8s-debug debug

Result:

  • k8s_info lists all pods with label app=payments-api
  • k8s_log retrieves the logs of the first pod
  • k8s_info pulls all events in the namespace, filtered on Warning

With one command, you get logs and events in structured form – repeatable, versioned in the block, and with the assurance that the correct kubeconfig is always used.

Debug Scenario 2: Service Not Reachable

The payments-api service is available in Kubernetes as ClusterIP, but the application appears “down”.

Possible workflow steps, all mappable as actions in the block:

# Check Pods
polycrate run k8s-debug kubectl -- get pods -l app=payments-api -o wide

# Check Events
polycrate run k8s-debug debug

# Check Service and Endpoints
polycrate run k8s-debug kubectl -- get svc,ep -l app=payments-api

Because everything lives in a block, you can push this debug block to an OCI registry (e.g. fictional registry.acme-corp.com, public cargo.ayedo.cloud) and share it with other teams via PolyHub. Sharable automation applies not only to deployments but also to operations workflows.


Lifecycle Argument: From Install to Debug – All Auditable

Many toolchains distinguish between “deployment” and “operations”. This often leads to:

  • different tools
  • different log paths
  • missing audit trails for manual interventions

Polycrate brings these phases together:

  • Installation / Provisioning – classic Ansible playbooks in blocks, executed via polycrate run.
  • Changes / Deployments – also defined as actions or workflows in the workspace.
  • Live Debugging / Incident Response – polycrate ssh and debug actions in the same workspace, with the same inventories, secrets, and audit functions.

Because all actions run in a containerized context, local dependency chaos is eliminated:

  • No “On my laptop, Ansible is 2.9, on yours 2.15”
  • No different kubectl versions and contexts
  • Lower supply-chain risk because the toolchain is defined in the container (via Dockerfile.poly or a setup script)
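That last point can be made concrete: a minimal Dockerfile.poly sketch that pins one extra tool for the whole team. The base image reference and the added package are illustrative assumptions – check the Polycrate documentation for the actual base image:

```dockerfile
# Dockerfile.poly – minimal sketch; the base image reference is an
# illustrative assumption, not taken from the Polycrate docs
FROM cargo.ayedo.cloud/polycrate/polycrate:latest

# Pin one extra CLI tool for everyone instead of ad-hoc local installs
RUN apt-get update \
    && apt-get install -y --no-install-recommends jq \
    && rm -rf /var/lib/apt/lists/*
```

Because the toolchain is built once and shared, “works on my laptop” differences disappear along with the supply-chain surface of per-developer installs.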

The block model also acts as a guardrail: instead of scattered playbooks and shell scripts, you have clearly defined actions (“deploy”, “patch”, “debug”, “logs”) that less specialized colleagues can run safely.


Frequently Asked Questions

How is polycrate ssh different from plain ssh?

polycrate ssh uses the workspace inventory (inventory.yml) and the connection parameters and secrets maintained there (e.g. ansible_user). You do not manually type hostnames or IPs, maintain individual SSH configs, or share credentials in chat.
Additionally, all sessions are logged via the Polycrate API – user, host, start/end time, duration, and exit code. That is not achievable with a plain ssh command without custom tooling.

Can I reuse my existing kubectl and Ansible scripts?

Yes. Polycrate wraps existing playbooks and scripts in blocks and actions. You define which playbooks or commands run under which action name. The deployment playbook stays the same, but you gain:

  • a consistent, containerized environment (no local setup)
  • unified execution via polycrate run BLOCK ACTION
  • the ability to share the same blocks via registries with other teams

You can turn existing kubectl one-liners into command actions in the block.
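For instance, a rollout-status one-liner could become an additional action in the k8s-debug block – a sketch that mirrors the command-action syntax shown earlier; the action name and the default deployment name are illustrative:

```yaml
# block.poly excerpt – sketch; action name and default value are illustrative
actions:
  - name: rollout-status
    command: |
      kubectl -n "{{ block.config.namespace }}" rollout status \
        deployment/{{ action.args | default('payments-api') }}
```

Then polycrate run k8s-debug rollout-status runs it with the workspace kubeconfig, just like the actions above.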

How does this fit my existing platform or DevOps strategy?

Polycrate complements established CI/CD and platform approaches instead of replacing them. The strength is that deployments, operations tasks, and debugging:

  • use the same workspaces, inventories, and secrets
  • are structured and versioned through the block model
  • can be analyzed and automated via the Polycrate API

Especially in a platform engineering context, Polycrate helps deliver standardized but flexible runbooks – with clear guardrails and audit trails.

More questions? See our FAQ.


From Theory to Practice

This article showed how Polycrate works as an operations tool:

  • polycrate ssh makes access to servers and nodes repeatable and auditable without manually managing local SSH configs and credentials.
  • polycrate k8s debug gives an interactive debug pod with the Polycrate image for quick cluster work.
  • Kubernetes debugging via blocks, actions, and Ansible playbooks with kubernetes.core.k8s_log and k8s_info becomes a defined process you can share and evolve as a team.
  • The containerized toolchain and block model keep deployments, debugging, and compliance evidence on the same foundation – without local version chaos.

If you want to apply these concepts in your environment, we can help: from the first Polycrate evaluation through workspace and block design to an integrated platform engineering approach that aligns operations and compliance.

The next step is simple: book a session where we review your landscape and sketch a fitting setup together.