Building cloud native software and running it in production is quite a challenge these days. Besides getting the architecture of your software right, you have to deal with things like cloud infrastructure, CI/CD and a whole lot of security concerns – especially if you build your software in teams. Our work at ayedo is focused on helping you get the latter part right with our managed Kubernetes and Applications offerings. One of these applications is Otomi – a self-hosted PaaS that enables organizations to build their own developer platform on top of Kubernetes.
Otomi is open source software for Kubernetes that lets you quickly onboard your teams to a well-integrated developer platform of cloud native tools, taking care of many of the challenges of running applications on Kubernetes:
Git repositories for your code hosting provided by Gitea
This article is part of a series in which we explore the capabilities of Otomi as a developer platform and set it up for use in enterprise environments step by step. We will start by getting Otomi up and running on a standard Kubernetes cluster without any customization. Let’s dive in.
Prerequisites
To follow this tutorial, you'll need two things:

a standard Kubernetes cluster such as Minikube, EKS or ayedo Fleet

the Polycrate CLI installed on your machine
Polycrate works with so-called Workspaces. A workspace is, more or less, a single folder that contains all the code and artifacts needed to build your desired system – in our case: Otomi on top of Kubernetes.
First, create your workspace folder:
```shell
mkdir -p ~/.polycrate/workspaces/otomi-fleet
cd ~/.polycrate/workspaces/otomi-fleet
```
Then, inside your workspace directory, create the workspace configuration file that contains all the settings we need to run Otomi on our Kubernetes cluster:
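A minimal sketch of what that file might look like, based on the settings described below – the field names and nesting are assumptions on my part (Polycrate's workspace file is typically called `workspace.poly`; consult the Polycrate documentation for the exact schema):

```yaml
# Sketch of a workspace configuration – structure assumed from the
# settings explained below, not copied from a verified installation.
name: otomi-fleet
dependencies:
  - ayedo/k8s/otomi
blocks:
  - name: fleet
    # virtual block that provides access to the Kubernetes cluster
  - name: otomi
    from: ayedo/k8s/otomi
    kubeconfig:
      from: fleet
    config:
      adminPassword: "change-me"
```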
Our workspace configuration contains the following settings:
name: the name of the workspace, here: otomi-fleet
dependencies: you can define blocks from the Polycrate registry as dependencies for your workspace. These blocks work like classes that you can instantiate as virtual blocks
blocks: a Polycrate workspace is composed of blocks which can contain arbitrary code
fleet: this is a virtual block that we need to get access to the Kubernetes cluster. We will learn more about this in the next block
otomi: this is a virtual block derived from our dependency ayedo/k8s/otomi
kubeconfig: here we specify the block that holds the Kubeconfig for our cluster. Polycrate will make sure that all Kubernetes-related code will be executed against the cluster defined in that Kubeconfig.
config: the config section holds all Otomi-specific settings. In this case it's only the admin password, since nothing more is needed for the scope of this article
The last thing we have to do before we can install Otomi is to add the Kubeconfig of our cluster to the workspace:
Note: the file must be named kubeconfig.yml for Polycrate to pick it up
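Assuming your kubeconfig sits at kubectl's default location, the copy might look like this – both paths are assumptions, so adjust them to your setup:

```shell
# Assumed source: kubectl's default config location (or $KUBECONFIG if set)
SRC="${KUBECONFIG:-$HOME/.kube/config}"
DEST="$HOME/.polycrate/workspaces/otomi-fleet/kubeconfig.yml"

mkdir -p "$(dirname "$DEST")"
if [ -f "$SRC" ]; then
  # Polycrate expects exactly the filename kubeconfig.yml
  cp "$SRC" "$DEST"
else
  echo "no kubeconfig found at $SRC" >&2
fi
```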
Now that our workspace is assembled, we first inspect the workspace:
```shell
polycrate workspace inspect
```
This will result in an error at first because the dependency ayedo/k8s/otomi has not been installed to the workspace yet:
```
INFO[0000] Successfully installed block to workspace block=ayedo/k8s/otomi version=0.0.4 workspace=otomi-fleet
ERRO[0000] Dependency not found in the workspace block=otomi dependency=ayedo/k8s/otomi missing=1 workspace=otomi-fleet
FATA[0000] Block 'ayedo/k8s/otomi' not found in the Workspace. Please check the 'from' stanza of Block otomi or run 'polycrate block install ayedo/k8s/otomi'
```
As you can see, by executing the above command, Polycrate automatically pulls the dependency from the registry to your workspace, so re-executing polycrate workspace inspect will now show the compiled workspace configuration without error:
The compiled configuration apparently contains a whole lot more than what we configured in our workspace configuration. We will not go into detail here – if you want to learn more about how Polycrate works, please refer to the official documentation.
This should give you an idea of the additional configuration supported by Otomi and Polycrate.
Now that our workspace has all its dependencies prepared, we can deploy Otomi by running the install action of the otomi block:
```shell
polycrate run otomi install
```
This installs Otomi to the configured cluster using Helm – pretty much what the official Otomi docs expect you to do.
The command should finish within roughly 30 seconds, depending on the size and “juice” of your Kubernetes cluster. However, the real installation process has only just started in the background.
How Otomi works
Otomi features a very interesting way of installing and managing the product. Here’s what happens:
the Helm release creates a Job called otomi in the otomi-core namespace
this job creates a Pod called otomi--1-$ID in the same namespace
this Pod runs a container that contains the necessary runtime artifacts used to actually install Otomi to your cluster. It’s a complex construct of scripts and manifests that roughly does the following things:
install ingress-nginx to create a Loadbalancer and then acquire the IPv4/IPv6 address of that Loadbalancer’s Endpoint. This will be used to create a dynamic base-domain for your installation on top of the nip.io wildcard DNS service, e.g. 212-121-243-9.nip.io
install cert-manager and prepare a custom CA to enable TLS for all applications
install Keycloak as a central IDP for Otomi and all apps on keycloak.212-121-243-9.nip.io
install Gitea as a central git repository for the platform on gitea.212-121-243-9.nip.io. Here’s where the magic begins: the Otomi installer will create a repository called otomi-values in Gitea and persists all configuration for the platform inside that repository
install oauth2-proxy on auth.212-121-243-9.nip.io to connect all applications with Keycloak
install Drone CI on drone.212-121-243-9.nip.io and connect it to Gitea. The magic goes on: whenever you make changes to the Otomi configuration – either through the Helm values (i.e. the Polycrate config) or the Web UI – they will be committed to Gitea. Upon push to the otomi-values repository, Drone will run a Pipeline to validate and apply that configuration to the runtime platform. This is a neat GitOps mechanic that allows for safe and easy-to-replicate changes to your Otomi instance.
install the Otomi API and Web UI on otomi.212-121-243-9.nip.io – this is where all of the good stuff happens once the installation has finished. Here you create and manage your teams, applications and policies.
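The dynamic base domain from the first step above is simply the LoadBalancer IP with its dots replaced by dashes – nip.io resolves such names back to the embedded address. A quick sketch, using the example IP from this article:

```shell
# Example LoadBalancer IP as used throughout this article
LB_IP="212.121.243.9"

# nip.io resolves <dashed-ip>.nip.io back to the original IP,
# so dashing the IP yields the base domain for all Otomi apps
BASE_DOMAIN="$(echo "$LB_IP" | tr '.' '-').nip.io"
echo "$BASE_DOMAIN"   # 212-121-243-9.nip.io
```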
The full installation takes roughly 10 minutes to finish. Once done, you can visit https://otomi.212-121-243-9.nip.io which will forward you to Keycloak for login.
NOTE: due to the use of a custom CA you will have to acknowledge that the certificate is “safe” multiple times. We will upgrade Otomi to use a custom domain integrated with a DNS provider like Azure in the next article. This will clear those errors, as cert-manager will then work with Let's Encrypt to provide valid production certificates using DNS01 validation.
Finalize the installation
As described in the Otomi docs, we will have to do an additional, manual step to get the GitOps-Magic going – we will need to activate Drone and connect it to Gitea. This can be done by visiting drone.212-121-243-9.nip.io and simply clicking through the provided prompts without entering anything. You can follow that procedure in the video for more clarity.
Once that is done, all further changes to the platform can be made in the UI and applied by clicking the Deploy Changes link in the left navigation. You can follow the GitOps-Magic in Drone to see how your new applications or settings are applied to the cluster in real time.
Enjoy your internal developer platform
At this point, we have Otomi running with default values – this is viable for development clusters on your machine and production clusters alike, but we recommend following the next part of the series if you're serious about using Otomi as a developer platform for production workloads. In the next part, we will learn how to properly set up a custom domain and valid TLS certificates.
In the video linked above I also explain many of the high-level features Otomi offers for development teams and organizations, and I show how to run a first application – Kubeclarity – by simply enabling it through drag & drop. We will explore what else is possible with Otomi in one of the next parts of the series to highlight the enormous value Otomi delivers as a developer platform for teams building on top of Kubernetes.
Until then, enjoy the ride!