
Many teams today deliver containerized software, use GitHub, automate builds—and yet don’t feel “platform-ready.” This is rarely due to the code. It’s because CI/CD is understood as pipeline logic, not as a system. As long as deployments directly impact server states from build pipelines, delivery remains fragile: hard to reproduce, hard to audit, hard to operate.
In this post, we illustrate through an anonymized customer project how ayedo guided a software company with a strong project-services business and growing enterprise demands from a GitHub-centered deployment model to a modern Internal Developer Platform (IDP) on Managed Kubernetes. The customer remains anonymous. The approach is transferable—especially for organizations that want to use best-of-breed tooling but ultimately need a consistent, revision-proof basis for delivery and operations.
The customer develops custom software solutions for industry, energy, and the public sector and also operates two SaaS products of its own. It has about 100 employees, roughly 40 of them engineers, organized into eight teams. The organization is large enough that “everyone does it a bit differently” no longer works—and small enough that a dedicated platform team is not automatically present.
Technically, much was already modern. Applications were containerized, development processes established on GitHub. However, the entire lifecycle relied on a structure that quickly becomes fragile as it grows: mono-repositories per customer and environment, GitHub Actions as the central automation mechanism, and deployments pushed directly from pipelines to virtual servers, executed by self-hosted Actions runners.
This works as long as the complexity remains manageable. Once multiple teams roll out in parallel, multiple products and customer contexts exist, and enterprise customers demand auditability, the model collapses.
The problems did not manifest in a spectacular failure but in friction—every day.
Deployments were error-prone, pipelines broke down, manual restarts became routine. As long as delivery is understood as a pipeline event, this is normal: Every small glitch in runners, network, or dependencies impacts the release process. And because deployments directly affect servers from pipelines, a pipeline error is not just a “build problem” but a potentially inconsistent target state.
At the same time, observability was lacking. There was no central view of logs, metrics, or traces. This made operations reactive: Errors were seen when users reported them—not when they occurred. In an enterprise context, this is not only inefficient but also dangerous, as proof obligations and incident processes must be based on data.
Secrets were maintained in GitHub repo variables. Although pragmatic, this is a risk in regulated environments: there is no clear record of who used which secret when, how rotation occurs, or how access can be audited.
The versioning of artifacts was also inconsistent. If container images are not cleanly versioned and propagated to downstream repos, couplings and manual steps arise. These manual steps become painful in audits and incident analyses later: “Which version is really running where?”
Finally, there was the compliance issue: Build and deployment were not separated, there was no revision-proof traceability along a clear delivery chain, and operations were heavily dependent on the GitHub tooling cosmos. With new enterprise customers, it became clear: Independence from the deployment system is not a luxury but a prerequisite.
In modern delivery models, CI/CD is not “a pipeline” but a controlled process with clear responsibilities:
Build creates an artifact. Deploy enforces a desired state. Operate monitors, reacts, and provides evidence.
When these three levels merge into one tool and one step, everything becomes simultaneously more complex: Compliance is harder, rollouts riskier, operations opaque. This was precisely the bottleneck here.
We did not see the task as a “tool change” but as building an internal platform that relieves engineering teams while increasing audit and operational security.
The ayedo Managed Kubernetes Platform became the basis for an Internal Developer Platform (IDP) that enables a unified CI/CD landscape—with clear separation between build, deployment, and operations and with components that have established themselves as standards in enterprise environments.
A key principle was important: Best-of-breed tooling, efficiently integrated and fully managed. The goal is not for teams to have to operate more tooling, but for the platform to encapsulate the tooling so that teams can deliver faster—without operations as a side job.
ArgoCD was introduced as the central deployment system. This shifts the model from “pipeline pushes” to “cluster pulls.” Deployments arise from declarative IaC repositories that describe the desired state. ArgoCD continuously reconciles this state.
This is not only more elegant. It solves several problems simultaneously:
The delivery chain becomes traceable because every change is historized in Git. Rollbacks are controlled because you revert to known states. And the dependency on the build system decreases because build no longer “also deploys” but only produces artifacts.
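The core of this pull model can be sketched in a few lines. The following is an illustrative reduction of the reconciliation idea behind ArgoCD—comparing a Git-defined desired state against the live cluster state and deriving converging actions—not ArgoCD's actual API; all names and structures are assumptions for this sketch.

```python
# Hypothetical sketch of pull-based reconciliation: the cluster compares
# the desired state (from a Git-backed IaC repo) with the live state and
# derives the actions needed to converge. Illustrative only.

def reconcile(desired: dict, live: dict) -> list[str]:
    """Return the actions needed to move `live` toward `desired`.

    Both dicts map resource name -> image/version string.
    """
    actions = []
    for name, version in desired.items():
        if name not in live:
            actions.append(f"create {name}@{version}")
        elif live[name] != version:
            actions.append(f"update {name}: {live[name]} -> {version}")
    for name in live:
        if name not in desired:
            actions.append(f"prune {name}")
    return actions

desired = {"api": "1.4.2", "worker": "1.4.2"}
live = {"api": "1.4.1", "legacy-cron": "0.9.0"}
print(reconcile(desired, live))
```

Because the desired state lives in Git, a rollback is simply a revert to an earlier commit: the reconcile loop then converges the cluster back to that known state.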
We established a full-stack observability approach that covers monitoring, logging, and tracing. VictoriaMetrics provides scalable metrics, VictoriaLogs centralizes logs, Grafana makes dashboards and alerting consumable, Tempo complements the tracing layer.
This creates an operational reality that was previously missing: Teams not only see if something “runs” but how it runs. Latencies, error rates, saturations, outliers—and above all, the connection between deployments and behavior in operation. This is crucial to make deployments more frequent without increasing risk.
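"How it runs" rather than "whether it runs" is ultimately a numbers question. As a minimal sketch—with an assumed 1% error-budget target, not a value from the project—this is the kind of check that central metrics make possible:

```python
# Illustrative SLO check: compute an error rate from request counters
# (as collected by a metrics backend such as VictoriaMetrics) and
# compare it against a target. The 1% budget is an assumption.

def error_rate(total: int, errors: int) -> float:
    """Fraction of failed requests; 0.0 when there was no traffic."""
    return errors / total if total else 0.0

def violates_slo(total: int, errors: int, target: float = 0.01) -> bool:
    """True if the observed error rate exceeds the SLO error budget."""
    return error_rate(total, errors) > target

print(violates_slo(total=50_000, errors=120))  # 0.24% -> within budget
print(violates_slo(total=50_000, errors=900))  # 1.8%  -> violation
```

Correlating such rates with deployment timestamps is what turns a rollout from a leap of faith into an observable event.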
A recurring enterprise blocker is the question: “Which artifacts do we deploy, and how do we ensure they are checked?” Here we integrated Harbor as a central container registry—including CVE scanning and SBOM generation.
The important transition is from “we scan sometime” to “scanning is a gate.” When artifacts are checked and classified in the registry, organizations can define clear rules about which risk levels are allowed in which environments. This makes security part of the delivery process, not a downstream project.
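Such a gate is conceptually small. The sketch below shows the shape of the decision—worst scan severity versus a per-environment tolerance—using severity names common in scanner output (e.g. Harbor with Trivy); the policy mapping itself is a hypothetical example, not the customer's actual rules:

```python
# Illustrative promotion gate: may a scanned artifact enter an
# environment? The per-environment thresholds are assumptions.

SEVERITY_RANK = {"none": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

# Maximum CVE severity tolerated per environment (hypothetical policy).
MAX_ALLOWED = {"dev": "high", "staging": "medium", "production": "low"}

def gate(cve_severities: list[str], environment: str) -> bool:
    """True if the artifact passes the gate for `environment`."""
    worst = max((SEVERITY_RANK[s.lower()] for s in cve_severities), default=0)
    return worst <= SEVERITY_RANK[MAX_ALLOWED[environment]]

print(gate(["low", "medium"], "production"))  # medium > low -> blocked
print(gate(["low", "medium"], "dev"))         # within dev tolerance
```

The point is not the ten lines of logic but where they run: in the delivery path, before deployment, rather than in a quarterly security review.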
With growing team and customer complexity, access becomes a compliance issue. Keycloak was introduced as an identity provider to consistently control SSO and role-based access across platform components.
This not only improves security. It also makes operations more efficient because access does not have to be “built” separately in each application. In audits, this is a strong signal: clear roles, clear responsibilities, clear access path.
The migration to Vault solves a typical compliance and security problem: Secrets no longer exist as “pipeline configuration” but as a controlled, auditable system. Access, rotation, and policies are centrally controlled.
This has direct effects on risk and speed. Teams lose less time with secret sprawl, and at the same time, it can be demonstrated to customers that secrets do not leak uncontrollably into repos, variables, or build logs.
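One concrete thing centralization buys is that rotation becomes checkable. As a hedged sketch: in practice the rotation metadata would come from Vault's secret metadata, while here it is inlined; the 90-day interval is an assumed policy, not a recommendation.

```python
# Illustrative rotation audit: given when each secret was last rotated,
# which ones are overdue? Metadata is inlined here; with Vault it would
# come from the secrets engine's metadata.

from datetime import datetime, timedelta

def overdue_secrets(rotated_at: dict[str, datetime],
                    max_age: timedelta,
                    now: datetime) -> list[str]:
    """Names of secrets whose last rotation is older than `max_age`."""
    return sorted(name for name, ts in rotated_at.items()
                  if now - ts > max_age)

now = datetime(2024, 6, 1)
rotated_at = {
    "db-password": datetime(2024, 1, 10),  # rotated months ago
    "api-token": datetime(2024, 5, 20),    # rotated recently
}
print(overdue_secrets(rotated_at, timedelta(days=90), now))
```

A report like this is exactly the kind of evidence auditors ask for—and it falls out of the system instead of being compiled by hand.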
An important point was that build pipelines do not necessarily have to be replaced. Many teams are productive with GitHub Actions—others prefer GitLab. What matters is not the build tool but the interface: Build creates versioned artifacts and automatically updates downstream IaC repositories.
In practice, this means: Commit-based versioning of images, push to Harbor, update of deployment definitions in a separate IaC repo. From this moment, ArgoCD takes over. This turns “pipeline as a Swiss army knife” into a clean, decoupled process.
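The build-side handoff can be sketched as two small steps: derive an immutable, commit-based image reference, then rewrite the image line in a manifest held in the separate IaC repo. The registry host, project path, and manifest layout below are assumptions for illustration:

```python
# Sketch of the build-to-GitOps handoff: commit-based image tag, then
# an update of the image reference in an IaC-repo manifest. Registry
# host and file layout are hypothetical.

import re

REGISTRY = "harbor.example.com/apps"  # hypothetical Harbor project

def image_ref(app: str, commit_sha: str) -> str:
    """Immutable image reference derived from the commit."""
    return f"{REGISTRY}/{app}:{commit_sha[:7]}"

def bump_manifest(manifest: str, app: str, commit_sha: str) -> str:
    """Rewrite the `image:` line for `app` to the new commit tag."""
    pattern = rf"image: {re.escape(REGISTRY)}/{re.escape(app)}:\S+"
    return re.sub(pattern, f"image: {image_ref(app, commit_sha)}", manifest)

manifest = ("containers:\n"
            "  - name: api\n"
            "    image: harbor.example.com/apps/api:0abc123\n")
print(bump_manifest(manifest, "api", "9f8e7d6c5b"))
```

The pipeline commits that change to the IaC repo and stops there; pulling the new state into the cluster is the deployment system's job, not the pipeline's.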
This reduces lock-in because deployment is no longer dependent on a single tooling ecosystem. And it increases auditability because build and deploy steps create separate, traceable trails.
The new platform also consistently utilized Kubernetes-native operational mechanisms: Liveness and readiness probes, resource requests and limits, standardized health checks, and automated alerting.
This is not “Kubernetes basics.” In practice, it is the difference between operations as constant intervention and operations as controlled routine. When services self-heal and overload states become visible early, the frequency of manual interventions drops drastically. This was precisely a goal because a project and product business can only function in parallel if operations do not constantly slow down development.
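These conventions are enforceable because a container spec is just data. As a simplified sketch—field names follow the Kubernetes pod spec, but quantities are assumed to be pre-normalized integers (CPU in millicores, memory in MiB) rather than Kubernetes quantity strings—a platform-side check might look like this:

```python
# Illustrative conformance check over a container spec: probes present,
# requests not exceeding limits. Quantities are pre-normalized integers
# for simplicity; real specs use quantity strings like "500m"/"256Mi".

def check_container(spec: dict) -> list[str]:
    """Return findings; an empty list means the spec passes the checks."""
    findings = []
    for probe in ("livenessProbe", "readinessProbe"):
        if probe not in spec:
            findings.append(f"missing {probe}")
    requests = spec.get("resources", {}).get("requests", {})
    limits = spec.get("resources", {}).get("limits", {})
    for resource, requested in requests.items():
        if resource in limits and requested > limits[resource]:
            findings.append(f"{resource} request exceeds limit")
    return findings

spec = {
    "livenessProbe": {"httpGet": {"path": "/healthz", "port": 8080}},
    "resources": {"requests": {"cpu": 500, "memory": 256},
                  "limits": {"cpu": 250, "memory": 512}},
}
print(check_container(spec))
```

Running such checks as an admission or review step is what turns the conventions from tribal knowledge into platform guarantees.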
With the introduction of the Internal Developer Platform, delivery at Codehaus became significantly faster without losing stability. Multiple productive deployments per day are possible because rollouts are controlled, versioned, and observable.
Compliance capability increased noticeably because build, deploy, and operate are separated and every change is traceable in a revision-proof way. Secrets are managed via Vault, artifacts are checked via Harbor, and observability provides reliable data for operations and evidence.
At the same time, genuine flexibility emerged: Deployments can be rolled out in cloud and on-premise environments behind firewalls without having to rethink the tooling. This is a crucial argument for enterprise customers—not because on-prem is “better,” but because the option is often a prerequisite in tenders and governance processes.
In the end, the leap was less “Continuous Delivery” and more “Continuous Confidence”: delivering more frequently, with less risk, with more provability.
Many organizations try to solve compliance through process documents. That doesn’t scale. What scales is a platform model that automatically generates evidence: Git histories for changes, ArgoCD for deployments, Harbor for artifact checks, Vault for secrets, observability for operational reality.
When these building blocks are integrated on Managed Kubernetes, best-of-breed emerges without tooling chaos. And teams gain something that always counts in enterprise projects: the ability to deliver frequently and prove it.
We help you realize this use case on your infrastructure—scalable, secure, and GDPR-compliant.