From Staging Bottleneck to Continuous Delivery: How ayedo Built Automated Preview Environments for Vantara


Continuous Delivery is not a tool problem.
It’s a feedback problem.

Vantara Digital is developing a cloud-based platform for digital contract management. Three feature teams work in parallel on the frontend, the backend/API, and integrations. Management's goal: weekly releases, with Continuous Delivery as the long-term target.

On paper, the development process was well-organized: feature branches, pull requests, code reviews, merge into the main branch, deployment to a shared staging environment. A classic Git-Flow model.

In reality, this model became the bottleneck.


Initial Situation: A Staging Environment as the Central Bottleneck

All teams shared a single staging environment. Every pull request had to be deployed and tested there before it could be merged.

What initially seemed controlled created massive friction in everyday operations.

When Team A was testing a major database overhaul, Teams B and C couldn’t test their features in parallel. Pull requests piled up in a merge queue. QA feedback was delayed by two to three days. Developers switched contexts during this time, merge conflicts increased, and code reviews were conducted on outdated states.

The late feedback loop was particularly problematic. Testers could only evaluate features once they were running on staging. UX errors or conceptual weaknesses often became visible only days after implementation. Corrections were correspondingly costly – both technically and mentally.

Product owners and stakeholders were effectively disconnected from the development process. They wanted to see features early but had to wait until they were on staging or even in production.

In some cases, developers manually set up temporary environments on local machines or a dev server to enable reviews. These environments were not reproducible, deviated from the production setup, and disappeared after a short time.

And then there was the recurring problem of the “broken staging environment.” A faulty deployment could render the entire system unusable. The QA team sat idle until someone from development found time to analyze and fix the problem.

The result:
Instead of weekly releases, Vantara effectively managed only one release every two to three weeks. Continuous Delivery remained a strategic goal – but operationally unattainable.


The Turning Point: Parallelizing Feedback Instead of Managing Sequentially

For us, it was quickly clear:
The problem wasn’t QA.
The problem was shared infrastructure.

When multiple teams develop in parallel, they must also be able to test in parallel. A single staging environment forces them into a sequential process.

The solution wasn’t more discipline or faster reviews, but isolated, automated preview environments – per pull request.


The Solution: Fully Automated Preview Environments on ayedo Managed Kubernetes

On the ayedo Managed Kubernetes platform, we built a GitOps-based preview system for Vantara.

The basic idea is simple – the implementation well thought out:
Every pull request gets its own complete environment. Automatically.

GitLab CI as a Trigger

As soon as a pull request is opened, a CI/CD pipeline automatically starts. This pipeline generates a declarative manifest that describes the complete environment:

  • Application with the image of the feature branch
  • Own PostgreSQL database with seed data
  • Dedicated Kubernetes namespace
  • Ingress with an individual URL in the format pr-<number>.preview.vantara.dev

This manifest is not deployed directly; instead, it is committed to a separate GitOps repository.
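
The pipeline step above can be sketched in a few lines. This is an illustrative model, not Vantara's actual code: the registry host, seed file name, and PR number are placeholders, and only the naming conventions (namespace per PR, `pr-<number>.preview.vantara.dev`) come from the text.

```python
# Hypothetical sketch: render the declarative description of one preview
# environment for a pull request. Registry host and seed file are invented.

def render_preview_manifest(pr_number: int, image_tag: str) -> dict:
    """Build the manifest that the CI pipeline commits to the GitOps repo."""
    return {
        # Dedicated Kubernetes namespace per pull request
        "namespace": f"preview-pr-{pr_number}",
        # Application with the image of the feature branch
        "app": {"image": f"registry.example.com/vantara-app:{image_tag}"},
        # Own PostgreSQL database with seed data
        "database": {"engine": "postgresql", "seed": "seed-data.sql"},
        # Ingress with an individual URL per pull request
        "ingress": {"host": f"pr-{pr_number}.preview.vantara.dev"},
    }

manifest = render_preview_manifest(417, "feature-contract-export")
assert manifest["ingress"]["host"] == "pr-417.preview.vantara.dev"
```

In the real pipeline, the resulting structure would be serialized (e.g. to YAML) and pushed to the GitOps repository, where ArgoCD picks it up.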

ArgoCD as Deployment Mechanism

ArgoCD monitors the GitOps repository. As soon as a new manifest appears, ArgoCD rolls out the complete preview environment in the cluster.

This happens within about 90 seconds – without manual intervention.

If the pull request is updated, ArgoCD detects the change and automatically updates the running environment. No re-setup, no waiting, no manual re-deploy.

Namespaces and Resource Isolation

Each preview environment runs in its own namespace with defined resource quotas. No feature branch can block resources of other environments.

This means: three teams can test five or more features in parallel – on production-like, identically configured environments.
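
The isolation described above can be modeled as one quota object per namespace. The concrete CPU and memory limits below are assumptions for illustration, not Vantara's real values.

```python
# Illustrative sketch of per-namespace resource quotas (ResourceQuota-style).
# The hard limits are invented example values.

def preview_quota(pr_number: int) -> dict:
    """Resource limits scoped to a single preview namespace."""
    return {
        "namespace": f"preview-pr-{pr_number}",
        "hard": {
            "requests.cpu": "1",
            "requests.memory": "2Gi",
            "limits.cpu": "2",
            "limits.memory": "4Gi",
        },
    }

# Each PR gets its own quota object, so no feature branch can starve another.
quotas = [preview_quota(n) for n in (101, 102, 103)]
assert len({q["namespace"] for q in quotas}) == 3
```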

Automatic Lifecycle Management

When a pull request is merged or closed, the pipeline removes the manifest from the GitOps repository. ArgoCD then automatically deletes the entire environment.

Namespace, database, ingress – everything is cleaned up neatly.
No forgotten environments, no sprawl.
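
The cleanup step can be sketched as a small CI job that deletes the manifest from the GitOps repository; ArgoCD then prunes the live environment. The `previews/pr-<number>.yaml` layout is an assumption for illustration.

```python
# Hypothetical lifecycle step: on merge or close, remove the preview
# manifest from the GitOps repo. ArgoCD detects the deletion and tears
# down namespace, database, and ingress.
from pathlib import Path

def remove_preview_manifest(repo_root: Path, pr_number: int) -> bool:
    """Delete the manifest file; returns True if something was removed."""
    manifest = repo_root / "previews" / f"pr-{pr_number}.yaml"
    if manifest.exists():
        manifest.unlink()
        return True
    return False
```

In practice, the CI job would commit and push the deletion; ArgoCD's automated sync with pruning enabled does the rest.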

Authentik for Secure Access

All preview environments are secured via Authentik. Developers, QA, product owners, and customer advisors authenticate with their existing company credentials.

If needed, temporary access for external stakeholders can be created – with limited duration and clear access control.
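
Conceptually, temporary external access boils down to a grant with an expiry. Authentik enforces this through its own policies; the sketch below only models the idea and is not an Authentik API.

```python
# Illustrative model of time-limited guest access for external stakeholders.
# Not Authentik code; Authentik implements this via its own policy engine.
from datetime import datetime, timedelta, timezone

def issue_guest_access(email: str, hours: int = 24) -> dict:
    """Grant a guest entry that expires after `hours`."""
    now = datetime.now(timezone.utc)
    return {"email": email, "expires": now + timedelta(hours=hours)}

def is_valid(grant: dict) -> bool:
    """A grant is valid only before its expiry timestamp."""
    return datetime.now(timezone.utc) < grant["expires"]
```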


The Real Effect: Feedback Becomes a Real-Time Process

The biggest difference was not technical – but organizational.

The new process looks like this today:

A developer opens a pull request.
90 seconds later, a complete, isolated environment is ready.
The URL is automatically commented in the pull request.
QA, product owners, and stakeholders test in parallel.
Feedback flows directly back into the same PR.
After the merge, the environment disappears automatically.
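
The "URL is automatically commented" step above maps to one call against the GitLab Notes API (`POST /projects/:id/merge_requests/:iid/notes`). A minimal sketch, with host, project ID, and token as placeholders:

```python
# Hedged sketch: build the request that comments the preview URL on the
# merge request via the GitLab Notes API. All identifiers are placeholders.
import json
import urllib.request

def comment_preview_url(host: str, project_id: int, mr_iid: int,
                        token: str, pr_number: int) -> urllib.request.Request:
    url = (f"https://{host}/api/v4/projects/{project_id}"
           f"/merge_requests/{mr_iid}/notes")
    body = json.dumps(
        {"body": f"Preview ready: https://pr-{pr_number}.preview.vantara.dev"}
    ).encode()
    req = urllib.request.Request(url, data=body, method="POST")
    req.add_header("PRIVATE-TOKEN", token)
    req.add_header("Content-Type", "application/json")
    return req  # the CI job would send this with urllib.request.urlopen(req)
```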

No deployment blocks another.
No team waits on staging.
No context switching due to days-long merge queues.

The development process is no longer sequential – but parallelized.


Result: From Releases Every Two to Three Weeks to True Delivery Capability

The average time from “feature complete” to “QA feedback” dropped from several days to under four hours.

The shared staging environment is now used only for final integration tests. It remains stable because most tests already occur in isolated preview environments.

Design and UX errors are detected early – while they are still small and inexpensive to correct.

Releases now occur reliably weekly. The teams are already working on further shortening the cycles.

And an often underestimated effect:
“Works on my machine” practically no longer exists. Every preview environment is identically configured – same infrastructure, same database seeds, same policies as in production.

Continuous Delivery is no longer a goal on a roadmap.
It is lived practice.


Why This Approach Works

Many companies try to enforce Continuous Delivery through stricter processes. In truth, it requires infrastructural parallelism.

Shared test environments inevitably create wait times.
Isolated, declarative environments eliminate them.

GitOps ensures that every environment is reproducible, versioned, and automatically deployable. Kubernetes provides the isolation. ArgoCD ensures consistency.

The result is not a faster QA process – but a fully decoupled development flow.


Call to Action

If your teams are waiting on a shared staging environment, it’s not an organizational problem – it’s a platform issue.

With automated preview environments on the ayedo Managed Kubernetes platform, we create the foundation for true parallel operation, early stakeholder feedback, and secure, fast releases.

If you want to establish Continuous Delivery not just as a vision but as an operational standard, let’s talk. We analyze your current CI/CD and QA process and show you how isolated, GitOps-based preview environments can sustainably accelerate your development flow.



Implement this use case?

We help you realize this use case on your infrastructure – scalable, secure, and GDPR-compliant.
