TL;DR: Azure resource groups, networks, and VMs can be provisioned with azure.azcollection, encapsulated in Polycrate blocks—no local Python or Ansible installation required. Newly created VMs land automatically in a shared inventory.yml so you can continue with polycrate ssh (Linux) or WinRM (Windows) immediately afterward.
Many teams sit somewhere between traditional data centers and the cloud: hundreds of Linux servers on-premise, Active Directory on Windows Server, plus some workloads in Azure. This is where Polycrate shines:
The Azure integration builds on Polycrate's Ansible driver. The Azure modules (azure.azcollection) run inside the Polycrate container and talk to the Azure APIs directly (hosts: localhost, connection: local is correct—you are not connecting to a VM via SSH/WinRM, only to the Azure control plane).
A central goal of this post: the dynamic inventory. Newly created Azure VMs land automatically in inventory.yml in the workspace root—alongside your on-premise hosts—so you can reuse the same Ansible roles you already use on-premise.
First we define a minimal workspace with on-premise hosts and Azure blocks.
# workspace.poly
name: acme-corp-automation
organization: acme
blocks:
- name: azure-infra
from: registry.acme-corp.com/acme/infra/azure-infra:0.1.0
config:
subscription_id: "00000000-0000-0000-0000-000000000000"
location: "westeurope"
resource_group: "rg-acme-hybrid"
vnet_name: "vnet-acme-hybrid"
address_prefix: "10.50.0.0/16"
- name: azure-vms
from: registry.acme-corp.com/acme/infra/azure-vms:0.1.0
config:
subscription_id: "00000000-0000-0000-0000-000000000000"
location: "westeurope"
resource_group: "rg-acme-hybrid"
vnet_name: "vnet-acme-hybrid"
subnet_name: "default"
admin_username: "acmeadmin"
- name: azure-cost-control
from: registry.acme-corp.com/acme/infra/azure-cost-control:0.1.0
config:
subscription_id: "00000000-0000-0000-0000-000000000000"
resource_group: "rg-acme-hybrid"
tag_filter: "env=dev"
- name: azure-backup
from: registry.acme-corp.com/acme/infra/azure-backup:0.1.0
config:
subscription_id: "00000000-0000-0000-0000-000000000000"
resource_group: "rg-acme-backup"
      storage_account: "acmehybridbackup"
Blocks are not loose blocks/<name> directories in the workspace; they are pulled via from: from an OCI registry (registry.acme-corp.com/... is a fictional example; in practice use polycrate blocks pull … or your own registry). from: contains the full reference including a version tag. There is no Jinja2 in workspace.poly or block.poly—paths to files under artifacts/secrets/ are literal strings in the block config; the local admin password for Azure Windows VMs belongs in secrets.poly, as in the hybrid post (merged into block.config). See Configuration.
secrets.poly (excerpt, merged with workspace.poly):
blocks:
- name: azure-vms
config:
      win_admin_password: "…"
Our shared inventory lives in the workspace root as inventory.yml. As in the multi-server post and the hybrid post, every host appears once under all.hosts; under children we assign onprem_linux, onprem_windows, azure_linux, and azure_windows—so Ansible and polycrate ssh share the same canonical host list. Initially we only list on-premise servers; the Azure groups stay empty until the inventory action runs:
# inventory.yml (workspace root)
all:
vars:
ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
ansible_python_interpreter: /usr/bin/python3
hosts:
server01.acme-corp.com:
ansible_user: ubuntu
dc01.acme-corp.com: {}
children:
onprem_linux:
hosts:
server01.acme-corp.com:
onprem_windows:
hosts:
dc01.acme-corp.com:
vars:
ansible_connection: winrm
ansible_winrm_transport: ntlm
ansible_winrm_server_cert_validation: ignore
azure_linux:
hosts: {}
azure_windows:
hosts: {}
vars:
ansible_connection: winrm
ansible_winrm_transport: ntlm
        ansible_winrm_server_cert_validation: ignore
Polycrate sets ANSIBLE_INVENTORY to this file automatically. More detail: Best Practices.
The first block handles the Resource Group and Virtual Network.
# blocks/registry.acme-corp.com/acme/infra/azure-infra/block.poly
# name = from: without tag (full registry path)
name: registry.acme-corp.com/acme/infra/azure-infra
version: 0.1.0
kind: generic
config:
azure_credentials_path: "artifacts/secrets/azure-credentials.json"
subscription_id: ""
location: "westeurope"
resource_group: "rg-acme-hybrid"
vnet_name: "vnet-acme-hybrid"
address_prefix: "10.50.0.0/16"
subnet_name: "default"
subnet_prefix: "10.50.1.0/24"
actions:
- name: provision
driver: ansible
    playbook: provision.yml
Azure credentials (e.g. a service principal JSON file) are stored encrypted under artifacts/secrets/azure-credentials.json. Encryption uses age in Polycrate—see Workspace encryption.
# …/azure-infra/provision.yml
- name: Provision Azure Resource Group and network
hosts: localhost
connection: local
gather_facts: false
vars:
azure_credentials_path: "{{ block.config.azure_credentials_path }}"
subscription_id: "{{ block.config.subscription_id }}"
location: "{{ block.config.location }}"
resource_group: "{{ block.config.resource_group }}"
vnet_name: "{{ block.config.vnet_name }}"
address_prefix: "{{ block.config.address_prefix }}"
subnet_name: "{{ block.config.subnet_name }}"
subnet_prefix: "{{ block.config.subnet_prefix }}"
pre_tasks:
- name: Load Azure credentials
set_fact:
azure_credentials: "{{ lookup('file', azure_credentials_path) | from_json }}"
tasks:
- name: Create Resource Group
azure.azcollection.azure_rm_resourcegroup:
subscription_id: "{{ subscription_id }}"
tenant: "{{ azure_credentials.tenant }}"
client_id: "{{ azure_credentials.client_id }}"
secret: "{{ azure_credentials.client_secret }}"
name: "{{ resource_group }}"
location: "{{ location }}"
- name: Create Virtual Network
azure.azcollection.azure_rm_virtualnetwork:
subscription_id: "{{ subscription_id }}"
tenant: "{{ azure_credentials.tenant }}"
client_id: "{{ azure_credentials.client_id }}"
secret: "{{ azure_credentials.client_secret }}"
resource_group: "{{ resource_group }}"
name: "{{ vnet_name }}"
address_prefixes:
- "{{ address_prefix }}"
- name: Create subnet
azure.azcollection.azure_rm_subnet:
subscription_id: "{{ subscription_id }}"
tenant: "{{ azure_credentials.tenant }}"
client_id: "{{ azure_credentials.client_id }}"
secret: "{{ azure_credentials.client_secret }}"
resource_group: "{{ resource_group }}"
name: "{{ subnet_name }}"
address_prefix: "{{ subnet_prefix }}"
        virtual_network_name: "{{ vnet_name }}"
Run:
polycrate run azure-infra provision
With plain Ansible you would install azure.azcollection locally, align Python versions, and tune ansible.cfg. With Polycrate this playbook runs reproducibly in the container—including the right Azure modules.
# blocks/registry.acme-corp.com/acme/infra/azure-vms/block.poly
name: registry.acme-corp.com/acme/infra/azure-vms
version: 0.1.0
kind: generic
config:
azure_credentials_path: "artifacts/secrets/azure-credentials.json"
subscription_id: ""
location: "westeurope"
resource_group: "rg-acme-hybrid"
vnet_name: "vnet-acme-hybrid"
subnet_name: "default"
admin_username: "acmeadmin"
ssh_public_key_path: "artifacts/secrets/azure-ssh.pub"
actions:
- name: provision
driver: ansible
playbook: provision.yml
- name: inventory
driver: ansible
    playbook: inventory.yml
The Windows admin password for VM creation comes from secrets.poly (see above) and is available in the playbook after merge as block.config.win_admin_password.
# …/azure-vms/provision.yml
- name: Provision Azure Linux and Windows VMs
hosts: localhost
connection: local
gather_facts: false
vars:
azure_credentials_path: "{{ block.config.azure_credentials_path }}"
subscription_id: "{{ block.config.subscription_id }}"
location: "{{ block.config.location }}"
resource_group: "{{ block.config.resource_group }}"
vnet_name: "{{ block.config.vnet_name }}"
subnet_name: "{{ block.config.subnet_name }}"
admin_username: "{{ block.config.admin_username }}"
ssh_public_key: "{{ lookup('file', block.config.ssh_public_key_path) }}"
win_admin_password: "{{ block.config.win_admin_password }}"
linux_vms:
- name: "vm-ubuntu-01"
size: "Standard_B2s"
windows_vms:
- name: "vm-win-01"
size: "Standard_B2s"
pre_tasks:
- name: Load Azure credentials
set_fact:
azure_credentials: "{{ lookup('file', azure_credentials_path) | from_json }}"
tasks:
- name: Create Linux VMs
azure.azcollection.azure_rm_virtualmachine:
subscription_id: "{{ subscription_id }}"
tenant: "{{ azure_credentials.tenant }}"
client_id: "{{ azure_credentials.client_id }}"
secret: "{{ azure_credentials.client_secret }}"
resource_group: "{{ resource_group }}"
name: "{{ item.name }}"
vm_size: "{{ item.size }}"
admin_username: "{{ admin_username }}"
ssh_password_enabled: false
ssh_public_keys:
- path: "/home/{{ admin_username }}/.ssh/authorized_keys"
key_data: "{{ ssh_public_key }}"
image:
offer: "0001-com-ubuntu-server-jammy"
publisher: "Canonical"
sku: "22_04-lts"
version: "latest"
os_type: "Linux"
state: present
started: true
network_interfaces:
- name: "{{ item.name }}-nic"
virtual_network: "{{ vnet_name }}"
subnet: "{{ subnet_name }}"
public_ip_allocation_method: "Dynamic"
loop: "{{ linux_vms }}"
- name: Create Windows VMs
azure.azcollection.azure_rm_virtualmachine:
subscription_id: "{{ subscription_id }}"
tenant: "{{ azure_credentials.tenant }}"
client_id: "{{ azure_credentials.client_id }}"
secret: "{{ azure_credentials.client_secret }}"
resource_group: "{{ resource_group }}"
name: "{{ item.name }}"
vm_size: "{{ item.size }}"
admin_username: "azureadmin"
admin_password: "{{ win_admin_password }}"
image:
offer: "WindowsServer"
publisher: "MicrosoftWindowsServer"
sku: "2022-Datacenter"
version: "latest"
os_type: "Windows"
state: present
started: true
network_interfaces:
- name: "{{ item.name }}-nic"
virtual_network: "{{ vnet_name }}"
subnet: "{{ subnet_name }}"
public_ip_allocation_method: "Dynamic"
      loop: "{{ windows_vms }}"
Run:
polycrate run azure-vms provision
All modules come from azure.azcollection. Polycrate supplies the toolchain in the container—you do not run pip install or ansible-galaxy collection install on your laptop.
Now the interesting part: newly created VMs should land in inventory.yml—in the azure_linux and azure_windows groups.
# …/azure-vms/inventory.yml
- name: Write Azure VMs to inventory
hosts: localhost
connection: local
gather_facts: false
vars:
azure_credentials_path: "{{ block.config.azure_credentials_path }}"
subscription_id: "{{ block.config.subscription_id }}"
resource_group: "{{ block.config.resource_group }}"
pre_tasks:
- name: Load Azure credentials
set_fact:
azure_credentials: "{{ lookup('file', azure_credentials_path) | from_json }}"
tasks:
- name: Gather VM information
azure.azcollection.azure_rm_virtualmachine_info:
subscription_id: "{{ subscription_id }}"
tenant: "{{ azure_credentials.tenant }}"
client_id: "{{ azure_credentials.client_id }}"
secret: "{{ azure_credentials.client_secret }}"
resource_group: "{{ resource_group }}"
register: vm_info
- name: Build host lists for inventory
set_fact:
linux_hosts: "{{ vm_info.virtual_machines | selectattr('os_type', 'equalto', 'Linux') | list }}"
windows_hosts: "{{ vm_info.virtual_machines | selectattr('os_type', 'equalto', 'Windows') | list }}"
- name: Render new inventory
template:
src: inventory.j2
        dest: "{{ workspace.path }}/inventory.yml"
For dest:, use {{ workspace.path }}/inventory.yml: in Polycrate's Ansible actions, workspace.path points to the workspace root, which is more stable than relative paths like ../../inventory.yml that depend on how deep the block tree is.
Template in the same block:
# …/azure-vms/inventory.j2
all:
vars:
ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
ansible_python_interpreter: /usr/bin/python3
hosts:
server01.acme-corp.com:
ansible_user: ubuntu
dc01.acme-corp.com: {}
{% for vm in linux_hosts %}
{{ vm.name }}:
ansible_host: {{ vm.public_ip_address }}
ansible_user: {{ block.config.admin_username }}
{% endfor %}
{% for vm in windows_hosts %}
{{ vm.name }}:
ansible_host: {{ vm.public_ip_address }}
ansible_user: azureadmin
{% endfor %}
children:
onprem_linux:
hosts:
server01.acme-corp.com:
onprem_windows:
hosts:
dc01.acme-corp.com:
vars:
ansible_connection: winrm
ansible_winrm_transport: ntlm
ansible_winrm_server_cert_validation: ignore
azure_linux:
hosts:
{% for vm in linux_hosts %}
{{ vm.name }}:
{% endfor %}
azure_windows:
hosts:
{% for vm in windows_hosts %}
{{ vm.name }}:
{% endfor %}
vars:
ansible_connection: winrm
ansible_winrm_transport: ntlm
        ansible_winrm_server_cert_validation: ignore
For SSH to Azure Linux VMs, Polycrate sets ANSIBLE_PRIVATE_KEY_FILE (among other env vars) in the action container—do not put ansible_ssh_private_key_file in the inventory. WinRM passwords for Azure Windows hosts do not belong in the YAML file; keep them in secrets.poly / block.config and use them in playbooks (see the hybrid post).
Run:
polycrate run azure-vms inventory
After this action, the inventory covers both on-premise hosts and Azure VMs. Compared with a classic dynamic Ansible inventory, the result is a YAML file in the workspace—versionable in Git and readable by any tool that consumes inventory.yml.
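Because the result is a plain file, the usual Git workflow applies. A minimal sketch—illustrative only: the throwaway directory, the file content, and the commit message stand in for your real workspace:

```shell
# Illustrative only: a temporary directory stands in for the Polycrate workspace root.
cd "$(mktemp -d)"
printf 'all:\n  hosts: {}\n' > inventory.yml

# Version the rendered inventory alongside workspace.poly and the blocks.
git init -q
git add inventory.yml
git -c user.name=demo -c user.email=demo@example.com commit -q -m "inventory: sync Azure VMs"
git log --oneline
```

Each run of the inventory action then shows up as a reviewable diff of inventory.yml.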
Linux VMs:
polycrate ssh vm-ubuntu-01
Windows: use regular Ansible playbooks over WinRM, e.g. with a block from the registry:
# workspace.poly (excerpt)
blocks:
- name: windows-hardening
from: registry.acme-corp.com/acme/infra/windows-hardening:0.1.0
    config: {}
polycrate run windows-hardening cis-baseline --limit vm-win-01
More on SSH: SSH with Polycrate.
Many backup tools treat Azure Blob Storage like an S3-style target even though the API differs. With Ansible we use native Azure modules.
# blocks/registry.acme-corp.com/acme/infra/azure-backup/block.poly
name: registry.acme-corp.com/acme/infra/azure-backup
version: 0.1.0
kind: generic
config:
azure_credentials_path: "artifacts/secrets/azure-credentials.json"
subscription_id: ""
resource_group: "rg-acme-backup"
storage_account: "acmehybridbackup"
container_name: "artifacts"
actions:
- name: prepare
driver: ansible
playbook: prepare.yml
- name: upload-artifact
driver: ansible
    playbook: upload.yml
prepare.yml creates the storage account and container:
# …/azure-backup/prepare.yml
- name: Prepare Azure Blob Storage
hosts: localhost
connection: local
gather_facts: false
vars:
azure_credentials_path: "{{ block.config.azure_credentials_path }}"
subscription_id: "{{ block.config.subscription_id }}"
resource_group: "{{ block.config.resource_group }}"
storage_account: "{{ block.config.storage_account }}"
container_name: "{{ block.config.container_name }}"
pre_tasks:
- name: Load Azure credentials
set_fact:
azure_credentials: "{{ lookup('file', azure_credentials_path) | from_json }}"
tasks:
- name: Ensure backup Resource Group
azure.azcollection.azure_rm_resourcegroup:
subscription_id: "{{ subscription_id }}"
tenant: "{{ azure_credentials.tenant }}"
client_id: "{{ azure_credentials.client_id }}"
secret: "{{ azure_credentials.client_secret }}"
name: "{{ resource_group }}"
location: "westeurope"
- name: Create Storage Account
azure.azcollection.azure_rm_storageaccount:
subscription_id: "{{ subscription_id }}"
tenant: "{{ azure_credentials.tenant }}"
client_id: "{{ azure_credentials.client_id }}"
secret: "{{ azure_credentials.client_secret }}"
resource_group: "{{ resource_group }}"
name: "{{ storage_account }}"
kind: "StorageV2"
sku: "Standard_LRS"
- name: Create blob container
azure.azcollection.azure_rm_storageblob:
subscription_id: "{{ subscription_id }}"
tenant: "{{ azure_credentials.tenant }}"
client_id: "{{ azure_credentials.client_id }}"
secret: "{{ azure_credentials.client_secret }}"
resource_group: "{{ resource_group }}"
storage_account_name: "{{ storage_account }}"
container: "{{ container_name }}"
        state: present
A simple upload playbook could push local artifacts from artifacts/ into the blob container. Once the storage is ready, backup software with native Azure support can write to it directly; tools that only speak S3 need a gateway in front.
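A sketch of such an upload.yml, under stated assumptions: azure.azcollection.azure_rm_storageblob can upload a local file via src; the artifact path and blob name here are placeholders, and credentials are loaded as in the other playbooks:

```yaml
# …/azure-backup/upload.yml (sketch — artifact path and blob name are examples)
- name: Upload a local artifact to Azure Blob Storage
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    azure_credentials_path: "{{ block.config.azure_credentials_path }}"
    subscription_id: "{{ block.config.subscription_id }}"
    resource_group: "{{ block.config.resource_group }}"
    storage_account: "{{ block.config.storage_account }}"
    container_name: "{{ block.config.container_name }}"
    artifact_src: "artifacts/backup.tar.gz"   # placeholder path
  pre_tasks:
    - name: Load Azure credentials
      set_fact:
        azure_credentials: "{{ lookup('file', azure_credentials_path) | from_json }}"
  tasks:
    - name: Upload artifact as blob
      azure.azcollection.azure_rm_storageblob:
        subscription_id: "{{ subscription_id }}"
        tenant: "{{ azure_credentials.tenant }}"
        client_id: "{{ azure_credentials.client_id }}"
        secret: "{{ azure_credentials.client_secret }}"
        resource_group: "{{ resource_group }}"
        storage_account_name: "{{ storage_account }}"
        container: "{{ container_name }}"
        blob: "{{ artifact_src | basename }}"
        src: "{{ artifact_src }}"
        state: present
```

Wired up as the upload-artifact action above, this runs as polycrate run azure-backup upload-artifact.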
For smaller teams, cost is often a key reason to use Azure cautiously. A practical pattern: stop non-production VMs overnight and start them in the morning.
# blocks/registry.acme-corp.com/acme/infra/azure-cost-control/block.poly
name: registry.acme-corp.com/acme/infra/azure-cost-control
version: 0.1.0
kind: generic
config:
azure_credentials_path: "artifacts/secrets/azure-credentials.json"
subscription_id: ""
resource_group: "rg-acme-hybrid"
tag_filter: "env=dev"
actions:
- name: stop
driver: ansible
playbook: stop.yml
- name: start
driver: ansible
    playbook: start.yml
stop.yml can shut down all VMs with a given tag:
# …/azure-cost-control/stop.yml
- name: Stop dev VMs
hosts: localhost
connection: local
gather_facts: false
vars:
azure_credentials_path: "{{ block.config.azure_credentials_path }}"
subscription_id: "{{ block.config.subscription_id }}"
resource_group: "{{ block.config.resource_group }}"
tag_filter: "{{ block.config.tag_filter }}"
pre_tasks:
- name: Load Azure credentials
set_fact:
azure_credentials: "{{ lookup('file', azure_credentials_path) | from_json }}"
tasks:
- name: List candidate VMs
azure.azcollection.azure_rm_virtualmachine_info:
subscription_id: "{{ subscription_id }}"
tenant: "{{ azure_credentials.tenant }}"
client_id: "{{ azure_credentials.client_id }}"
secret: "{{ azure_credentials.client_secret }}"
resource_group: "{{ resource_group }}"
register: vm_info
    - name: Filter VMs by tag
      # Derive key and value from tag_filter (e.g. "env=dev") instead of hardcoding them
      vars:
        tag_key: "{{ tag_filter.split('=') | first }}"
        tag_value: "{{ tag_filter.split('=') | last }}"
      set_fact:
        target_vms: >-
          {{ vm_info.virtual_machines
             | selectattr('tags', 'defined')
             | selectattr('tags.' ~ tag_key, 'defined')
             | selectattr('tags.' ~ tag_key, 'equalto', tag_value)
             | list }}
- name: Stop dev VMs
azure.azcollection.azure_rm_virtualmachine:
subscription_id: "{{ subscription_id }}"
tenant: "{{ azure_credentials.tenant }}"
client_id: "{{ azure_credentials.client_id }}"
secret: "{{ azure_credentials.client_secret }}"
resource_group: "{{ resource_group }}"
name: "{{ item.name }}"
        state: present
        started: false
        # 'started: false' only powers the VM off; deallocating is what stops compute billing.
        allocated: false
      loop: "{{ target_vms }}"
start.yml is analogous with started: true.
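For completeness, a sketch of the start task—same play structure and auth parameters as stop.yml, only the final task flips the flags:

```yaml
# …/azure-cost-control/start.yml (sketch — mirrors stop.yml; only this task differs)
    - name: Start dev VMs
      azure.azcollection.azure_rm_virtualmachine:
        subscription_id: "{{ subscription_id }}"
        tenant: "{{ azure_credentials.tenant }}"
        client_id: "{{ azure_credentials.client_id }}"
        secret: "{{ azure_credentials.client_secret }}"
        resource_group: "{{ resource_group }}"
        name: "{{ item.name }}"
        state: present
        allocated: true   # re-allocate in case the stop action deallocated the VM
        started: true
      loop: "{{ target_vms }}"
```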
A cron job or small CI job can run:
polycrate run azure-cost-control stop
at 20:00 and
polycrate run azure-cost-control start
at 07:00. Polycrate keeps the automation consistent—on an admin laptop, a build agent, or a central automation host.
With plain Ansible you would:
- install azure.azcollection locally,
- align Python versions across workstations,
- tune ansible.cfg and distribute credentials per workstation.
With Polycrate you get:
- the toolchain and Azure modules in a versioned container,
- blocks with explicit config parameters, pulled from a registry,
- one command surface: polycrate run <block> <action>.
That reduces friction and helps compliance because you can see which block runs which action.
Sensitive files (e.g. azure-credentials.json, SSH keys, passwords) live under artifacts/secrets/ and are encrypted with age. Polycrate includes this workflow so you do not need a separate secret store. Details: Workspace encryption.
Yes. Blocks such as azure-infra, azure-vms, or azure-cost-control can be versioned and published as OCI images to a registry. With PolyHub (see PolyHub documentation) you can share them across teams or publicly.
In practice: one team builds solid Azure blocks; others reference them with from: cargo.ayedo.cloud/...:0.1.0 (or your registry)—with versioning and explicit config parameters.
More questions? See our FAQ.
With this setup you have:
With this setup you have:
- one shared inventory.yml for on-premise hosts and Azure VMs,
- VM provisioning via azure.azcollection.azure_rm_virtualmachine,
- backup storage and cost control as reusable blocks.
You move away from a sprawl of loose Ansible playbooks toward structured, reusable automation—no local dependency drift, encrypted secrets, and clear guardrails from the block model. That combination is what makes Polycrate a strong hybrid framework: on-premise servers, Azure VMs, container workloads—one operational model.
If you want to evaluate this for your environment, ayedo can help.
Dig deeper on inventories and hybrid setups: Multi-server management and Hybrid automation. Formats with ayedo: Workshops.
Polycrate brings on-premise and Azure together in one workspace: a single operational model for hybrid teams.