Multi-Tenant App Hosting

Multi-Tenant Kubernetes for eCommerce: How ayedo Guided a Software House Without a DevOps Team from VM Scripting to a Developer Platform


In many eCommerce teams, the bottleneck is not feature development but operations. Not because developers write poor software, but because the operational model doesn’t scale. What starts as “a few servers and a few scripts” becomes a business risk at scale: each customer instance evolves differently, deployments are no longer reproducible, maintenance windows accumulate, and developers’ time is spent firefighting rather than adding product value.

This was exactly the situation for an eCommerce software house developing and operating modular shop and B2B commerce solutions for brands, retailers, and white-label end customers. The client remains anonymized, but the pattern does not: it applies to many SaaS and platform teams that must grow without a dedicated DevOps team.

In this post, we show how ayedo migrated this client’s application lifecycle to a Kubernetes-based multi-tenant platform—including an Internal Developer Platform, standardized container strategy, and flexible location choice across multiple European providers. The result was not just “modern infrastructure” but a new delivery model: faster, more reproducible, auditable, and significantly less support-intensive.


Initial Situation: When Growth from Scripts and VMs Creates Operational Debt

The client is a team of around 20 people, about 10 of whom are in development. There was no dedicated DevOps team. Operations and deployment were carried out by developers—pragmatic but increasingly costly.

The platform was technologically based on a modern stack: JavaScript/TypeScript, PostgreSQL, Redis, and S3. The problem wasn’t the application but the way it was delivered. Deployments were done manually via Bash scripts on virtual servers. This can work for individual projects. However, once you operate many customer instances in parallel, the model collapses.

VMs plus scripts are rarely truly reproducible. Even if everyone uses the same process, deviations occur over time: small configuration changes, different package states, “quick fixes,” divergent cron jobs, specific exceptions for certain customers. This config drift is the silent cost driver in growing hosting setups. It makes deployments unreliable, complicates updates, and increases the risk of incidents because no one can be sure if “Customer A” really has the same system as “Customer B.”

This is exactly what happened here. With increasing demand for individual eCommerce solutions, not only did the projects grow, but so did customer requirements: location specifications, compliance certificates, infrastructure policies. Every new requirement meant new variants in the VM model. New variants meant more drift. More drift meant more effort—and thus longer delivery times and shrinking margins.

What initially looks like an operational problem is actually a strategic bottleneck: if operations are individual per customer, the organization doesn’t scale. It only scales its complexity.


The Core of the Problem: Lack of Standardization at the Application Lifecycle Level

In conversations, a pattern almost always emerges in such situations: developers know how the application should run. However, they cannot roll it out as a standard because the operational model is not declarative.

VM scripting inevitably leads to “deployment” being a sequence of imperative steps. This makes rollbacks difficult, updates risky, and provisioning slow. This becomes particularly critical with white-label end customers because the support effort rarely grows linearly there. A small bug or performance issue can suddenly become a support storm due to the multitude of instances—and the team becomes increasingly reactive.

The team also lacked containerization. Without containerization, there is no uniform artifact that can be cleanly built, tested, and rolled out. You don’t deploy a version; you change a server state. It’s precisely these state changes that become problematic as you grow.
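The "uniform artifact" mentioned above is usually a container image built once and promoted unchanged through environments. Since the post names JavaScript/TypeScript as the stack, a minimal multi-stage Dockerfile could look like this; the file layout, npm scripts, and entry point are illustrative assumptions, not the client's actual setup:

```dockerfile
# Illustrative multi-stage build for a Node.js/TypeScript service.
# Paths, scripts, and the entry point are assumptions, not the client's real project.
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build            # assumes a "build" script compiling TypeScript to dist/

FROM node:20-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY --from=build /app/package*.json ./
RUN npm ci --omit=dev        # production dependencies only
COPY --from=build /app/dist ./dist
USER node                    # drop root privileges at runtime
CMD ["node", "dist/server.js"]
```

The point is that the image, not a server state, becomes the versioned deliverable: it is built and tested once in CI, then rolled out identically everywhere.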

The goal was therefore not to “introduce Kubernetes.” The goal was to standardize the application lifecycle so that deployments are reproducible, operations are automated, and new customer instances can be provisioned quickly and consistently—regardless of which developer has time.


ayedo’s Approach: Multi-Tenant Platform + Internal Developer Platform Instead of “More Scripts”

We understood the project as platform building: not as a single migration but as the introduction of an operational model that relieves developers and simultaneously increases operational quality. This requires two levels.

The first level is the runtime platform: Kubernetes as a multi-tenant operating environment with clear isolation, standardized deployments, and scalable observability. The second level is an Internal Developer Platform (IDP) that translates the lifecycle into a standard process: build, scan, deploy, observability, secrets—without each team having to assemble these building blocks themselves.

In the end, the client should not "operate Kubernetes" but deliver features, while the platform reliably handles the boring, repetitive tasks.


Operational Platform: Operation in ayedo Fleet Clusters—Scalable, Isolated, Highly Available

As a basis, customer environments were operated in one or more ayedo Fleet Clusters. The basic idea is simple: multi-tenancy only works well if isolation and operational standards are part of the platform—not an afterthought.

Each customer is operated in a logically isolated environment. This allows policies, resource limits, and access controls to be clearly defined. If an end customer has load peaks, it should not lead to a noisy neighbor effect. If a customer requires special policies, these should be representable as declarative configuration, not as an exception on a server.
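In Kubernetes, this kind of isolation is typically expressed declaratively, per tenant. A minimal sketch, assuming one namespace per customer; the names, quota values, and default-deny policy are illustrative, not the platform's actual configuration:

```yaml
# One namespace per customer, with a resource quota and default-deny ingress.
# Names and limits are illustrative assumptions.
apiVersion: v1
kind: Namespace
metadata:
  name: customer-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: customer-a-quota
  namespace: customer-a
spec:
  hard:
    requests.cpu: "4"        # caps the noisy-neighbor effect
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: customer-a
spec:
  podSelector: {}            # applies to all pods in the namespace
  policyTypes:
    - Ingress                # only explicitly allowed traffic reaches the tenant
```

Because these are plain manifests, a customer-specific policy becomes a reviewable configuration change rather than an undocumented exception on a server.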

From a company perspective, this is the decisive step away from “we operate servers” to “we operate a platform.” Because once the platform standardizes isolation, it becomes possible to operate many customer instances without the team “chasing” each instance.


Internal Developer Platform: Standardized Building Blocks Instead of a Tool Zoo

In parallel, an Internal Developer Platform was built that provides the operational foundations often missing in small teams—and which are typically distributed and inconsistent in VM worlds.

The key here is not the mere existence of tools but the fact that they are conceived and operated as a cohesive platform. CI/CD, registry, secrets, observability, and IAM are no longer separate projects but part of a unified lifecycle standard.

GitLab takes over build and pipeline standardization. Harbor provides a private container registry and enables security and quality checks before anything goes live. Vault becomes the central source for secrets and configuration, instead of parameters being scattered in scripts or environment variables. Keycloak provides a consistent identity and role model for internal access. VictoriaMetrics, VictoriaLogs, and Grafana deliver the observability layer that makes the difference between “customers report problems” and “we see problems early” in operations.

This creates a core principle that almost always leads to a breakthrough in high-growth product teams: developers build artifacts. The platform operates them.


Base Image Strategy: One Artifact per Version, Parameters per Customer

A typical mistake in multi-tenant eCommerce is containerizing but still building separate images per customer—thus creating drift again, just at the image level. Our approach was therefore a consistent base image strategy.

For each software version, there is a standardized container image. Customer-specific differences arise not through forks but through parameters. These parameters are cleanly managed as configuration and injected via Vault and defined environments. This keeps the software base identical while controlling variability.
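One way to express "one image per version, parameters per customer" is a shared chart with a small per-tenant values file. The sketch below assumes Helm-style values; the registry path, value names, and Vault path are hypothetical placeholders:

```yaml
# values-customer-a.yaml
# Same image version for every customer on this release; only parameters differ.
# Registry path, value names, and the Vault path are illustrative assumptions.
image:
  repository: registry.example.com/shop/app
  tag: "2.14.0"                          # identical across all tenants on this release
config:
  tenantId: customer-a
  locale: de-DE
  features:
    b2bCheckout: true                    # per-customer variability lives here, not in the image
secrets:
  vaultPath: "kv/tenants/customer-a"     # resolved via Vault at deploy time, never baked in
```

The image stays a shared, tested artifact; everything customer-specific is data that travels alongside it.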

This is a crucial lever for operating costs: if all customers run on the same image version, updates become a manageable rollout instead of 80 individual exceptions. If a security fix is necessary, not every customer needs to be “caught up” separately. A new version is rolled out, and parameters continue as before.

The second advantage is debuggability. If a problem occurs, it is much easier to reproduce because the runtime base is the same. And specific regressions can be traced back to versions instead of wondering if “something is different with this customer.”


Automating the Lifecycle: From Manual Deployments to CI/CD as Standard

Once container artifacts, secrets, and observability are standardized, the next step becomes possible: deployments are no longer “done” but occur as the result of a process.

CI/CD became the default. Every change goes through pipeline stages, is checked, builds an image, publishes it in the registry, and rolls it out to the target environment. This eliminates the dependency on individual developers who “know the script.” Operations become reproducible, and rollbacks become a clean state, not an improvised action.
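Since GitLab handles build and pipeline standardization here, the stages above can be sketched as a minimal `.gitlab-ci.yml`; job names, the scan gate, and the Helm-based deploy step are illustrative assumptions, not the client's actual pipeline:

```yaml
# Minimal sketch of the pipeline described above.
# Job names, the scan gate, and the Helm deploy are illustrative assumptions.
stages: [build, scan, deploy]

build-image:
  stage: build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

scan-image:
  stage: scan
  script:
    # Harbor can scan images on push; a gate here would check the scan result
    # before the artifact is allowed to proceed.
    - echo "verify scan status in Harbor before promoting"

deploy:
  stage: deploy
  environment: production
  script:
    - helm upgrade --install shop ./chart --set image.tag="$CI_COMMIT_SHORT_SHA"
```

Every deployment is then the output of the same pipeline, so a rollback is simply redeploying a previously published image tag.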

This is not just efficiency. This is risk management. In eCommerce, release windows and sales campaigns carry real business criticality, and a manual deployment process is structurally too fragile in such contexts.


Location and Certification Requirements: Loopback Cloud Broker as a Scaling Lever

Location requirements and compliance policies were a growth driver, and at the same time a brake in the old setup. Some customers want specific countries, certain certified providers, or defined infrastructure requirements. In a VM-based model, this quickly leads to sprawl because every new requirement forces a new operational variant.

Here, the Loopback Cloud Broker brought a strategic advantage: clusters can be dynamically provisioned at various European cloud providers to meet location or certification requirements without the product team having to build a new operational world for each region. The crucial point is that the lifecycle standard remains identical. The location changes—not the process.

This allows an eCommerce product to become a truly scalable platform offering: same software, same delivery, different operating locations according to customer requirements—without having to reinvent how to deploy, monitor, and operate each time.


Result: Faster Provisioning, Less Drift, More Focus on Product

After the migration, the change was especially noticeable in day-to-day business.

New customer instances can now be provisioned within a few hours—including monitoring, logging, and alerting. This is a massive difference from a VM model, where provisioning and monitoring configuration often take days or are “followed up” later.

Config drift is no longer the norm because deployments arise from standardized artifacts and declarative configurations. Updates are rolled out as standardized releases across identical image versions instead of per-customer interventions, and developers can spend their time on the product again.
