
In traditional data processing, “batch processes” dominated for a long time: data was collected throughout the day and processed in large batches overnight. For modern industrial applications, this is too slow. When a turbine in a factory shows anomalies or an eCommerce system needs to react to inventory changes, every second counts.
Apache Kafka has established itself as the standard for event streaming. It acts as a highly available buffer and distribution center, receiving data from producers (sensors, web apps) and forwarding it in real-time to consumers (ClickHouse, ML models, dashboards).
Kafka is known for being complex to operate. It requires precise management of storage capacities, network identities, and broker states. Kubernetes provides the perfect runtime environment here - especially through the use of the Strimzi Operator:
The Strimzi Operator allows us to manage Kafka clusters declaratively. This means we describe the desired state (e.g., “3 brokers, 24 partitions per topic”) in a YAML file, and the operator takes care of deployment, updates, and scaling.
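As an illustration of that declarative style, the desired state above can be sketched as a Strimzi `Kafka` custom resource. It is shown here as a Python dict for readability; in practice it lives in a YAML manifest, and the exact field names (following Strimzi's `kafka.strimzi.io/v1beta2` API) should be checked against the Strimzi documentation:

```python
import json

# Sketch of a Strimzi Kafka custom resource, expressed as a Python dict.
# In practice this is a YAML manifest applied with kubectl; field names
# follow the kafka.strimzi.io/v1beta2 API and serve here as a rough guide.
kafka_cluster = {
    "apiVersion": "kafka.strimzi.io/v1beta2",
    "kind": "Kafka",
    "metadata": {"name": "production-cluster"},
    "spec": {
        "kafka": {
            "replicas": 3,  # "3 brokers"
            "config": {
                "num.partitions": 24,  # "24 partitions per topic"
                "default.replication.factor": 3,
                "min.insync.replicas": 2,
            },
            # Persistent storage, reattached by Kubernetes after a pod failure
            "storage": {"type": "persistent-claim", "size": "100Gi"},
        },
    },
}

print(json.dumps(kafka_cluster["spec"]["kafka"]["config"], indent=2))
```

Once this manifest is applied, the operator continuously reconciles the running cluster against it: deployment, rolling updates, and scaling all follow from edits to this one document.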
Thanks to the Container Storage Interface (CSI) of Kubernetes, Kafka can directly access fast SSD storage (e.g., via CEPH). If a Kafka pod fails, Kubernetes immediately restarts it and reattaches the existing storage volume - without data loss.
Production environments are dynamic. During shift times, significantly more sensor data is generated than on weekends. On Kubernetes, we can horizontally scale Kafka clusters to handle throughput rates of gigabytes per second without bottlenecks.
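The scaling mechanics rest on partitions: within a consumer group, each partition is read by exactly one consumer, so adding consumers spreads the load. A toy round-robin assignment (Kafka's real assignors are more sophisticated, but the principle is the same) makes this concrete:

```python
from itertools import cycle

def assign_partitions(partitions, consumers):
    """Toy round-robin assignment: each partition goes to exactly one
    consumer in the group, so adding consumers spreads partitions out."""
    assignment = {c: [] for c in consumers}
    for partition, consumer in zip(partitions, cycle(consumers)):
        assignment[consumer].append(partition)
    return assignment

# 24 partitions, scaling the consumer group from 2 to 4 members:
few = assign_partitions(range(24), ["c1", "c2"])
many = assign_partitions(range(24), ["c1", "c2", "c3", "c4"])
print(len(few["c1"]), len(many["c1"]))  # 12 6
```

Doubling the consumers halves the partitions (and thus the throughput) each one must handle, which is exactly what happens when a deployment is scaled up for shift times.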
In a modern ayedo architecture, the data flow typically looks like this: producers such as sensors and web apps write events into Kafka topics, and consumers such as ClickHouse, ML models, and dashboards each read from those topics independently and in real-time.
The greatest architectural advantage of Kafka is decoupling. Producers and consumers do not need to know about each other.
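A toy in-memory model (plain Python, no real broker) shows why this works: a topic is an append-only log, and each consumer tracks its own read offset, so producers and consumers never interact directly:

```python
class TinyTopic:
    """Toy append-only log illustrating Kafka's decoupling: producers
    append records, and each consumer reads at its own offset."""

    def __init__(self):
        self.log = []

    def produce(self, record):
        self.log.append(record)

    def consume(self, offset):
        # Returns (records since offset, new offset) - no producer involved.
        return self.log[offset:], len(self.log)

topic = TinyTopic()
topic.produce({"sensor": "turbine-1", "temp": 78})
topic.produce({"sensor": "turbine-1", "temp": 91})

# A dashboard and an ML model consume the same stream independently:
dash_records, dash_offset = topic.consume(0)
ml_records, ml_offset = topic.consume(1)
print(len(dash_records), len(ml_records))  # 2 1
```

Because each consumer owns its offset, new consumers can be attached to an existing stream at any time without any change on the producer side.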
Apache Kafka on Kubernetes forms the backbone for responsive, data-driven companies. It transforms static data silos into vibrant event streams that deliver immediate business value.
Is your data flow stalling, or are you struggling with outdated batch processes? ayedo supports you in implementing a robust Kafka infrastructure on Kubernetes - from the first topic to the company-wide event backbone.
What is the role of the Strimzi Operator? Strimzi is a Kubernetes operator that automates the lifecycle of Apache Kafka clusters. It handles tasks such as managing user permissions, creating topics, and safely performing rolling updates of brokers.
How is data security ensured in Kafka? Through resources managed declaratively in Kubernetes by the Strimzi operator: TLS provides in-flight encryption, while SCRAM or mTLS handles authentication between clients and brokers.
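On the client side, such a SCRAM-over-TLS connection is pure configuration. The sketch below uses the librdkafka-style key names found in clients like confluent-kafka; the hostname, CA path, and credentials are illustrative placeholders:

```python
# Client configuration for TLS-encrypted, SCRAM-authenticated access.
# Key names follow librdkafka conventions (as used by confluent-kafka);
# host, CA path, and credentials are placeholders, typically injected
# from a Kubernetes Secret created by the Strimzi operator.
client_config = {
    "bootstrap.servers": "kafka-bootstrap.example.com:9093",
    "security.protocol": "SASL_SSL",      # TLS in flight + SASL auth
    "sasl.mechanisms": "SCRAM-SHA-512",   # SCRAM, as described above
    "sasl.username": "analytics-consumer",
    "sasl.password": "<from-kubernetes-secret>",
    "ssl.ca.location": "/etc/kafka/certs/cluster-ca.crt",
}
print(client_config["security.protocol"])
```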
Does Kafka still need Zookeeper? In older versions, yes. However, modern Kafka installations increasingly rely on the KRaft mode (Kafka Raft), which makes Zookeeper obsolete. This significantly simplifies operations on Kubernetes as fewer components need to be managed.
What is Kafka Connect? Kafka Connect is a framework for scalable, fault-tolerant data transfer between Kafka and other systems (e.g., databases like PostgreSQL or S3 storage). Data is read and written through configuration rather than custom code.
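"Configuration rather than code" looks like this in practice: a connector is a small JSON document POSTed to the Connect REST API. The sketch below uses Confluent's JDBC sink connector class as an example; the connector name, topic, and connection URL are placeholders:

```python
import json

# Sketch of a Kafka Connect connector definition: pure configuration,
# typically POSTed as JSON to the Connect REST API. The connector class
# is Confluent's JDBC sink; names and the URL are placeholders.
sink_connector = {
    "name": "postgres-sink",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
        "tasks.max": "2",                  # parallelism, scaled by Connect
        "topics": "sensor-readings",       # which Kafka topic to drain
        "connection.url": "jdbc:postgresql://db.example.com:5432/analytics",
        "auto.create": "true",             # let the sink create the table
    },
}
print(json.dumps(sink_connector, indent=2))
```

No consumer code is written here at all: Connect runs and scales the tasks that move every record from the topic into PostgreSQL.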