Build your own developer platform on Kubernetes with Otomi

Fabian Peter · 28.08.2022 · ⏳ 9 minutes
Tags: kubernetes · apps


Building cloud native software and running it in production is quite a challenge these days. Besides getting the architecture of your software right, you have to deal with things like cloud infrastructure, CI/CD and a whole lot of security concerns – especially if you build your software in teams. Our work at ayedo is focused on helping you get the latter part right with our managed Kubernetes and Applications offerings. One of these applications is Otomi – a self-hosted PaaS that enables organizations to build their own developer platform on top of Kubernetes.

Otomi is open source software for Kubernetes that allows you to quickly onboard your teams to a well-integrated developer platform of cloud native tools that takes care of many of the challenges of running applications on Kubernetes:

  • Git repositories for your code hosting provided by Gitea
  • CI/CD provided by ArgoCD and Drone
  • Application Performance Monitoring provided by Prometheus, Loki and Grafana
  • Single-Sign-On that integrates with many of the industry’s favorite identity providers, provided by Keycloak
  • FaaS provided by Knative
  • A visual app-store provided by Bitnami Kubeapps
  • A Docker image and Helm chart registry provided by Harbor
  • Distributed tracing provided by Jaeger
  • Secrets management provided by Hashicorp Vault
  • Advanced multi-tenancy through separate namespaces and network policies
  • Management of multiple Kubernetes clusters
  • An easy-to-use web UI
  • Workflows and abstractions to easily run your own applications and expose them through Services and Ingresses
  • Many developer platform self-service features
  • Managed through “Configuration as Code”


This article is part of a series in which we explore the capabilities of Otomi as a developer platform and set it up for use in enterprise environments step by step. We will start by getting Otomi up and running on a standard Kubernetes cluster without any customization. Let’s dive in.

Prerequisites

To follow this tutorial, you’ll need two things:

  • a running Kubernetes cluster and a Kubeconfig with admin access to it
  • the Polycrate CLI installed on your machine
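
Before creating the workspace, it’s worth confirming both are in place. A minimal sanity check, assuming kubectl is installed and $KUBECONFIG points at your cluster (polycrate version is an assumed subcommand – any invocation that prints usage will do):

# confirm the Polycrate CLI is on your PATH
polycrate version

# confirm the cluster behind $KUBECONFIG is reachable
kubectl get nodes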

Set up your Polycrate workspace

Polycrate works with so-called workspaces. A workspace is, more or less, a single folder that contains all the necessary code and artifacts to build your desired system – in our case: Otomi on top of Kubernetes.
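
Judging from the path settings we’ll see in the compiled configuration later (blocksroot, artifactsroot, workspaceconfig), a workspace folder roughly looks like this:

~/.polycrate/workspaces/otomi-fleet/
├── workspace.poly    # workspace configuration (created below)
├── blocks/           # blocks pulled from the registry (blocksroot)
└── artifacts/        # per-block artifacts such as kubeconfigs (artifactsroot)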

First, create your workspace folder:

mkdir -p ~/.polycrate/workspaces/otomi-fleet
cd ~/.polycrate/workspaces/otomi-fleet

Then, inside your workspace directory, create the workspace configuration file that contains all the settings we need to run Otomi on our Kubernetes cluster:

cat <<EOF > workspace.poly
name: otomi-fleet
dependencies:
  - ayedo/k8s/otomi:0.0.4
blocks:
- name: fleet
- name: otomi
  from: ayedo/k8s/otomi
  kubeconfig:
    from: fleet
  config:
    admin_password: otomi1234
EOF

Our workspace configuration contains the following settings:

  • name: the name of the workspace, here: otomi-fleet
  • dependencies: you can define blocks from the Polycrate registry as dependencies for your workspace. These blocks work like classes that you can instantiate as virtual blocks
  • blocks: a Polycrate workspace is composed of blocks which can contain arbitrary code
    • fleet: this is a virtual block that we need to get access to the Kubernetes cluster. We will learn more about this in the next step
    • otomi: this is a virtual block derived from our dependency ayedo/k8s/otomi
      • kubeconfig: here we specify the block that holds the Kubeconfig for our cluster. Polycrate will make sure that all Kubernetes-related code will be executed against the cluster defined in that Kubeconfig.
      • config: the config section holds all Otomi-specific settings. In this case it’s only the admin password, as nothing more is needed for the scope of this article

The last thing we have to do before we can install Otomi is to add the Kubeconfig of our cluster to the workspace:

mkdir -p artifacts/blocks/fleet
cp $KUBECONFIG artifacts/blocks/fleet/kubeconfig.yml

Note: the file must be named kubeconfig.yml for Polycrate to pick it up.
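
To be sure the copied Kubeconfig actually grants access, you can point kubectl at it directly – a quick sanity check, not something Polycrate requires:

kubectl --kubeconfig artifacts/blocks/fleet/kubeconfig.yml get nodes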

Now that our workspace is assembled, let’s inspect it:

polycrate workspace inspect

This will result in an error at first because the dependency ayedo/k8s/otomi has not been installed to the workspace yet:

INFO[0000] Successfully installed block to workspace     block=ayedo/k8s/otomi version=0.0.4 workspace=otomi-fleet
ERRO[0000] Dependency not found in the workspace         block=otomi dependency=ayedo/k8s/otomi missing=1 workspace=otomi-fleet
FATA[0000] Block 'ayedo/k8s/otomi' not found in the Workspace. Please check the 'from' stanza of Block otomi or run 'polycrate block install ayedo/k8s/otomi'

As you can see in the log output, Polycrate automatically pulled the dependency from the registry into your workspace, so re-running polycrate workspace inspect will now show the compiled workspace configuration without errors:

name: otomi-fleet
dependencies:
- ayedo/k8s/otomi:0.0.4
config:
  image:
    reference: ghcr.io/polycrate/polycrate
    version: 0.8.14
  blocksroot: blocks
  blocksconfig: block.poly
  workspaceconfig: workspace.poly
  workflowsroot: workflows
  artifactsroot: artifacts
  containerroot: /workspace
  sshprivatekey: id_rsa
  sshpublickey: id_rsa.pub
  remoteroot: /polycrate
  dockerfile: Dockerfile.poly
  globals: {}
blocks:
- name: fleet
  kubeconfig:
    path: /workspace/artifacts/blocks/fleet/kubeconfig.yml
    localpath: /root/.polycrate/workspaces/otomi-fleet/artifacts/blocks/fleet/kubeconfig.yml
    containerpath: /workspace/artifacts/blocks/fleet/kubeconfig.yml
  artifacts:
    path: /workspace/artifacts/blocks/fleet
    localpath: /root/.polycrate/workspaces/otomi-fleet/artifacts/blocks/fleet
    containerpath: /workspace/artifacts/blocks/fleet
- name: otomi
  actions:
  - name: install
    script:
    - ansible-playbook install.yml
    block: otomi
  - name: uninstall
    script:
    - ansible-playbook uninstall.yml
    block: otomi
  config:
    admin_password: otomi1234
    apps:
      cert_manager:
        email: ""
        issuer: custom-ca
        stage: staging
    chart:
      create_namespace: true
      name: otomi
      repo:
        name: otomi
        url: https://otomi.io/otomi-core
      version: 0.5.18
    cluster:
      domain_suffix: ""
      k8s_version: "1.22"
      name: otomi
      owner: otomi
      provider: custom
    namespace: otomi-core
    oidc:
      admin_group_id: ""
      client_id: ""
      client_secret: ""
      enabled: false
      issuer: ""
      team_admin_group_id: ""
    otomi:
      has_external_dns: false
      has_external_idp: false
    version: main
  from: ayedo/k8s/otomi
  version: 0.0.4
  workdir:
    path: /workspace/blocks/ayedo/k8s/otomi
    localpath: /root/.polycrate/workspaces/otomi-fleet/blocks/ayedo/k8s/otomi
    containerpath: /workspace/blocks/ayedo/k8s/otomi
  kubeconfig:
    from: fleet
  artifacts:
    path: /workspace/artifacts/blocks/otomi
    localpath: /root/.polycrate/workspaces/otomi-fleet/artifacts/blocks/otomi
    containerpath: /workspace/artifacts/blocks/otomi
- name: ayedo/k8s/otomi
  actions:
  - name: install
    script:
    - ansible-playbook install.yml
    block: ayedo/k8s/otomi
  - name: uninstall
    script:
    - ansible-playbook uninstall.yml
    block: ayedo/k8s/otomi
  config:
    admin_password: ""
    apps:
      cert_manager:
        email: ""
        issuer: custom-ca
        stage: staging
    chart:
      create_namespace: true
      name: otomi
      repo:
        name: otomi
        url: https://otomi.io/otomi-core
      version: 0.5.18
    cluster:
      domain_suffix: ""
      k8s_version: "1.22"
      name: otomi
      owner: otomi
      provider: custom
    namespace: otomi-core
    oidc:
      admin_group_id: ""
      client_id: ""
      client_secret: ""
      enabled: false
      issuer: ""
      team_admin_group_id: ""
    otomi:
      has_external_dns: false
      has_external_idp: false
    version: main
  version: 0.0.4
  workdir:
    path: /workspace/blocks/ayedo/k8s/otomi
    localpath: /root/.polycrate/workspaces/otomi-fleet/blocks/ayedo/k8s/otomi
    containerpath: /workspace/blocks/ayedo/k8s/otomi
  artifacts:
    path: /workspace/artifacts/blocks/ayedo/k8s/otomi
    localpath: /root/.polycrate/workspaces/otomi-fleet/artifacts/blocks/ayedo/k8s/otomi
    containerpath: /workspace/artifacts/blocks/ayedo/k8s/otomi
path: /workspace
sync:
  local:
    branch:
      name: main
  remote:
    branch:
      name: main
    name: origin
localpath: /root/.polycrate/workspaces/otomi-fleet
containerpath: /workspace

The compiled configuration contains a whole lot more than what we defined in our workspace configuration file. We will not go into detail here – if you want to learn more about how Polycrate works, please refer to the official documentation. Still, it should give you an idea of the additional configuration that Otomi and Polycrate support.

Now that our workspace has all its dependencies prepared, we can deploy Otomi by running the install action of the otomi block:

polycrate run otomi install

This installs Otomi to the configured cluster using Helm – pretty much what the official Otomi docs expect you to do.
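
For reference, the block’s install action boils down to the plain Helm steps from the Otomi docs. Here is a rough sketch based on the chart settings in the compiled configuration above – the exact values key for the admin password is an assumption, as the block maps its config onto the chart’s values for you:

helm repo add otomi https://otomi.io/otomi-core
helm repo update

helm install otomi otomi/otomi \
  --namespace otomi-core \
  --create-namespace \
  --version 0.5.18 \
  --set otomi.adminPassword=otomi1234  # assumed values key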

The command should finish within roughly 30 seconds, depending on the size and “juice” of your Kubernetes cluster. The real installation process, however, has only just started in the background.

How Otomi works

Otomi features a very interesting way of installing and managing the product. Here’s what happens:

  • the Helm release creates a Job called otomi in the otomi-core namespace

  • this job creates a Pod called otomi--1-$ID in the same namespace

  • this Pod runs a container that contains the necessary runtime artifacts used to actually install Otomi to your cluster. It’s a complex construct of scripts and manifests that roughly does the following things:

    • install ingress-nginx to create a LoadBalancer and then acquire the IPv4/IPv6 address of that LoadBalancer’s endpoint. This will be used to create a dynamic base domain for your installation on top of the nip.io wildcard DNS service, e.g. 212-121-243-9.nip.io
    • install cert-manager and prepare a custom CA to enable TLS for all applications
    • install Keycloak as a central IDP for Otomi and all apps on keycloak.212-121-243-9.nip.io
    • install Gitea as a central git repository for the platform on gitea.212-121-243-9.nip.io. Here’s where the magic begins: the Otomi installer will create a repository called otomi-values in Gitea and persist all configuration for the platform inside that repository
    • install oauth2-proxy on auth.212-121-243-9.nip.io to connect all applications with Keycloak
    • install Drone CI on drone.212-121-243-9.nip.io and connect it to Gitea. The magic goes on: whenever you make changes to the Otomi configuration – either through the Helm values (i.e. the Polycrate config) or the Web UI – they will be committed to Gitea. Upon push to the otomi-values repository, Drone will run a pipeline to validate and apply that configuration to the runtime platform. This is a neat GitOps mechanic that allows for safe and easily replicable changes to your Otomi instance.
    • install the Otomi API and Web UI on otomi.212-121-243-9.nip.io – this is where all of the good stuff happens once the installation has finished. Here you create and manage your teams, applications and policies.

The full installation takes roughly 10 minutes to finish. Once done, you can visit https://otomi.212-121-243-9.nip.io which will forward you to Keycloak for login.
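
While the installer runs, you can follow its progress from the CLI using the Job and namespace names described above (a sketch; the generated Pod name will differ on your cluster):

# watch the installer Job and its Pod
kubectl --kubeconfig artifacts/blocks/fleet/kubeconfig.yml -n otomi-core get jobs,pods

# stream the installer logs until completion
kubectl --kubeconfig artifacts/blocks/fleet/kubeconfig.yml -n otomi-core logs -f job/otomi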

NOTE: due to the use of a custom CA you will have to acknowledge that the certificate is “safe” multiple times. We will upgrade Otomi to use a custom domain integrated with a DNS provider like Azure in the next article. That will clear all those warnings, as cert-manager will then work with Let’s Encrypt to provide valid production certificates using DNS01 validation.

Finalize the installation

As described in the Otomi docs, we have to do one additional manual step to get the GitOps magic going – we need to activate Drone and connect it to Gitea. This can be done by visiting drone.212-121-243-9.nip.io and simply clicking through the provided prompts without entering anything. You can follow that procedure in the video for more clarity.

Once that is done, all further changes to the platform can be made in the UI and applied by clicking the Deploy Changes link in the left navigation. You can follow the GitOps magic in Drone to see how your new applications or settings are applied to the cluster in real time.
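
Once the pipeline has finished, a quick way to list everything the platform exposes – including all the generated nip.io hostnames – is to query the Ingresses (again just a sanity check, not a required step):

kubectl --kubeconfig artifacts/blocks/fleet/kubeconfig.yml get ingress -A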

Enjoy your internal developer platform

At this point, we have Otomi running with default values – this is viable for development clusters on your machine and production clusters alike, but we recommend following the next part of the series if you’re serious about using Otomi as a developer platform for production workloads. There we will learn how to properly set up a custom domain and valid TLS certificates.

In the video linked above I also explain many of the high-level features Otomi offers for development teams and organizations, and I show how to run a first application – Kubeclarity – by simply enabling it through drag & drop. We will explore what else is possible with Otomi in one of the next parts of the series to highlight the enormous value Otomi delivers as a developer platform for teams building on top of Kubernetes.

Until then, enjoy the ride!
