Weekly Backlog Week 16/2026
🧠 Editorial This week, tech feels less like progress and more like a dose of reality. The same …

When scaling a DBaaS platform, storage quickly becomes the most critical bottleneck. Databases place two opposing demands on storage infrastructure: they require extremely low latency for read and write operations (I/O), while backups and write-ahead logs (WAL) generate massive amounts of data that must be stored cost-effectively.
Relying on “one-size-fits-all” storage means either paying too much for backup space on expensive SSDs or sacrificing database performance on slow archival disks. The solution for a sovereign European provider lies in an intelligent, software-defined design with Ceph.
Instead of trying to use a single type of storage for everything, we divided the storage into two specialized layers:
For active database volumes, we use Ceph RBD. This is where the actual data that PostgreSQL operates on resides.
For backups and the continuous archiving of Write-Ahead Logs (WAL), we use Ceph RGW, an S3-compatible interface.
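The split between the two layers can be sketched in code. The following is a minimal illustration, not the platform's actual tooling: PostgreSQL's `archive_command` hands each finished WAL segment to a script like this, which pushes it to the S3-compatible RGW layer while the live data files stay on the RBD volume. The bucket name and endpoint URL are assumptions for the example; boto3 works against Ceph RGW because RGW speaks plain S3.

```python
import os
from datetime import datetime, timezone

def wal_object_key(cluster: str, wal_path: str) -> str:
    """Build a deterministic S3 key so WAL segments sort by cluster and day."""
    day = datetime.now(timezone.utc).strftime("%Y/%m/%d")
    return f"{cluster}/wal/{day}/{os.path.basename(wal_path)}"

def archive_wal(cluster: str, wal_path: str) -> None:
    """Upload one WAL segment to the Ceph RGW backup layer (illustrative)."""
    import boto3  # third-party; works against RGW via a custom endpoint_url
    s3 = boto3.client("s3", endpoint_url="https://rgw.example.internal")
    s3.upload_file(wal_path, "pg-backups", wal_object_key(cluster, wal_path))
```

The key layout groups segments per cluster and per day, which keeps lifecycle rules (e.g. expiring old WAL) simple on the RGW side.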
A nightmare for any DBaaS provider is the "noisy neighbor" effect: one customer writes massive amounts of data, overloading the shared storage system and slowing down every other customer's database.
By combining Ceph with Kubernetes resource limits (cgroups), we contain this effect: each tenant's I/O is capped at the container level, so a single heavy writer exhausts only its own budget rather than the cluster's capacity.
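The throttling principle behind those limits can be illustrated with a token bucket: each tenant refills I/O tokens at a fixed rate and may burst up to a cap, after which further requests are rejected. This is a simplified sketch of the idea; on the platform itself, enforcement is done by the kernel's cgroup controllers, not by application code like this.

```python
import time

class TokenBucket:
    """Per-tenant I/O budget: refills at a fixed rate, allows limited bursts."""

    def __init__(self, rate_per_sec: float, burst: float):
        self.rate = rate_per_sec       # tokens added per second
        self.capacity = burst          # maximum burst size
        self.tokens = burst            # start with a full bucket
        self.last = time.monotonic()

    def try_acquire(self, cost: float = 1.0) -> bool:
        """Spend `cost` tokens for one I/O request; False means throttled."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

A tenant with an empty bucket is slowed down without affecting anyone else, which is exactly the isolation property the cgroup limits provide.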
A key feature for European sovereignty is independence from a single location. Our storage design allows backups to be automatically replicated to a second, geographically separate region. Should an entire data center fail, valuable customer data is securely stored in the S3 storage of the second location and can be used there for a quick restart.
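The replication logic itself is simple to state: every backup object present in the primary region's bucket must also exist in the secondary region. The sketch below uses in-memory dicts in place of the two RGW endpoints so it runs without a cluster; in production the same loop would be driven by S3 list/get/put calls against both regions.

```python
from typing import Dict

def replicate(primary: Dict[str, bytes], secondary: Dict[str, bytes]) -> int:
    """Copy every object missing in the secondary region; return the count."""
    copied = 0
    for key, blob in primary.items():
        if key not in secondary:       # idempotent: skip already-synced objects
            secondary[key] = blob
            copied += 1
    return copied
```

Because the loop is idempotent, it can run continuously; after a data-center failure, the secondary region already holds a complete copy for restart.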
A well-thought-out storage design is an economic lever for a DBaaS provider. It allows high performance where needed while keeping the costs of massive data growth (backups) under control. Solving storage systemically builds a platform that not only convinces technically but also scales profitably.
Why not just use the cloud provider’s block storage? A provider’s native block storage is often expensive and ties you technically to that provider. With your own Ceph layer, you retain full control over performance profiles and can, in principle, move the platform to any infrastructure (multi-cloud capability).
How secure is the data with Ceph against hardware failures? Ceph is “self-healing.” We typically configure triple replication. This means that even if two servers fail simultaneously, the data is still available. After a failure, the system immediately begins restoring redundancy on the remaining servers.
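Why triple replication survives two simultaneous failures can be shown with a toy placement model. In Ceph, the CRUSH map deterministically places each object's replicas on distinct hosts; here a simple modulo stands in for CRUSH, purely for illustration, and the host names are made up.

```python
from typing import List, Set

HOSTS = ["host-a", "host-b", "host-c", "host-d", "host-e"]

def placement(obj_id: int, replicas: int = 3) -> List[str]:
    """Place an object's replicas on `replicas` distinct hosts (toy CRUSH)."""
    return [HOSTS[(obj_id + i) % len(HOSTS)] for i in range(replicas)]

def available(obj_id: int, failed: Set[str]) -> bool:
    """An object is readable as long as at least one replica host survives."""
    return any(h not in failed for h in placement(obj_id))
```

With three copies and only two failed hosts, every object keeps at least one live replica, and the self-healing process then re-creates the missing copies on the survivors.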
Do backups affect the performance of the running database? By separating RBD (for the DB) and RGW (for backups), we minimize the impact. Writing backups to the S3 layer uses different resource paths than critical database I/O.
Can storage space for customers grow dynamically? Yes. Thanks to Kubernetes integration, customers can increase their storage space through the portal. The platform expands the volume in the background “on-the-fly” without needing to restart the database.
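Under the hood, "on-the-fly" growth works because Kubernetes expands a PersistentVolumeClaim when its storage request is patched upward, and the Ceph CSI driver then grows the RBD image online. The helper below only builds the patch body; the PVC name and size are illustrative, and applying it would be e.g. `kubectl patch pvc <name> -p '<patch-json>'`.

```python
import json

def pvc_resize_patch(new_size: str) -> str:
    """Return the JSON merge patch that raises a PVC's storage request."""
    return json.dumps(
        {"spec": {"resources": {"requests": {"storage": new_size}}}}
    )
```

Because only the claim's request changes, the database pod keeps running while the filesystem is expanded underneath it.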