Here’s how I want to deploy my stuff:

Provision hosts and hardware with Terraform and clicks

Provisioning of hosts should ideally happen via Terraform on unreliable hardware (e.g. cloud, a Pi, or old stuff). This makes it quick and easy to re-provision broken stuff, and it also records the spec of the hosts I use. I have a few modules I’ve created for OCI instances and AWS instances.
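As a rough sketch of what consuming one of those modules looks like (the module path, variable names, and values here are illustrative placeholders, not my actual modules):

```hcl
# Hypothetical usage of a local OCI instance module; names and values are placeholders.
module "pi_replacement" {
  source = "./modules/oci-instance"

  name               = "pi-replacement"
  shape              = "VM.Standard.A1.Flex"
  ocpus              = 2
  memory_in_gbs      = 12
  ssh_authorized_key = file("~/.ssh/id_ed25519.pub")
}

output "pi_replacement_ip" {
  value = module.pi_replacement.public_ip
}
```

The nice side effect is that the module call itself documents the spec, so re-provisioning a dead host is just a destroy and apply.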

My current core Terraform code uses terraform-git-backend because, conceptually, I wanted my Terraform state stored in git, but simply committing state files alongside Terraform code commits doesn’t give reliable locking and state management when I’m switching between devices and dev environments. The reality of using this git backend, especially with onlykey-agent, is that it’s a little cumbersome, so I might move the state somewhere else.
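If memory serves, the git-backed state ends up being exposed through Terraform’s standard HTTP backend, which is where the locking comes from. The block below is only a shape sketch, with placeholder URLs rather than the tool’s exact parameters:

```hcl
# Shape sketch only: a standard Terraform HTTP backend block. The real address
# and parameters depend on the git-backend tool's docs; these URLs are placeholders.
terraform {
  backend "http" {
    address        = "http://localhost:6061/state/core"
    lock_address   = "http://localhost:6061/state/core"
    unlock_address = "http://localhost:6061/state/core"
  }
}
```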

For reliable hardware or platforms that I control, for pet-like hosts, I don’t mind doing a clicky-clicky setup. On my physical servers, for example, I’ll just manually install Proxmox, and for some virtual hosts on there I’ll create VMs or LXC containers via the UI, because it’s not hard, frequent, or time-consuming to do myself.

One day I might set up some of the Terraform Proxmox modules floating around, but Proxmox in particular feels like it’s had far more time put into its UI than its API, so automating it is diminishing returns for me at the moment.

Provisioning just needs to get the host up with a reachable IP and an authorized SSH key for whatever is doing the configuring.
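For the cloud hosts, that hand-off is typically just a small bit of cloud-init user data; for a hand-built Proxmox VM it’s whatever key I paste in at creation time. A minimal cloud-init sketch (the user name and key are placeholders):

```yaml
#cloud-config
# Minimal sketch of the provisioning hand-off: create a user the configuration
# tooling can SSH in as. The user name and key below are placeholders.
users:
  - name: ansible
    groups: [sudo]
    sudo: "ALL=(ALL) NOPASSWD:ALL"
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...example ansible@controller
```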

Configuring hosts with Ansible via Gitea Actions and Ansible Semaphore

Configuration of hosts, including logging and overlay network onboarding, for VM, LXC, or bare-metal machines should happen via a base Ansible playbook. I have a couple of different playbooks for cloud vs local, but they share similar roles and could probably be combined.
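The base playbook is conceptually tiny, something along these lines, with the role names being illustrative rather than my actual roles:

```yaml
# base.yml: sketch of the shared base playbook. Role names are placeholders.
- name: Base configuration for all hosts
  hosts: all
  become: true
  roles:
    - common            # users, SSH hardening, packages
    - logging           # ship logs off the host
    - overlay_network   # onboard the host onto the overlay network
```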

I have some playbooks set up to be run via Ansible Semaphore, because Gitea Actions at the time wasn’t quite working well enough. Going forward though, I think I’d like to try running Ansible directly from Gitea Actions (depending on workflow_dispatch support) and remove Semaphore.
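Something like the workflow below is what I have in mind, assuming the Gitea version in use actually supports workflow_dispatch; the runner label, secret name, and paths are placeholders:

```yaml
# .gitea/workflows/configure.yml: sketch of running the base playbook from Gitea Actions.
name: Configure hosts
on:
  workflow_dispatch: {}
  push:
    branches: [main]

jobs:
  ansible:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Ansible
        run: pip install ansible
      - name: Load SSH key from repo secret
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.ANSIBLE_SSH_KEY }}" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
      - name: Run base playbook
        env:
          ANSIBLE_HOST_KEY_CHECKING: "false"
        run: ansible-playbook -i inventory/hosts.yml base.yml
```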

Deploy and run application on Podman hosts with docker compose

Initial application code and config should be deployed as a docker-compose repo via my docker-compose-repo Ansible role. This role basically clones a repo and runs podman compose; the repo is the service’s root folder.
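The heart of that role is roughly the two tasks below; variable names are placeholders rather than the role’s actual interface:

```yaml
# Sketch of a docker-compose-repo style role: clone the service's repo,
# then bring the stack up with podman compose. Variable names are placeholders.
- name: Clone the service's compose repo
  ansible.builtin.git:
    repo: "{{ compose_repo_url }}"
    dest: "{{ compose_repo_dest }}"
    version: "{{ compose_repo_version | default('main') }}"

- name: Bring the stack up
  ansible.builtin.command:
    cmd: podman compose up -d
    chdir: "{{ compose_repo_dest }}"
```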

Starting and stopping of the services is handled by ensuring podman compose is configured to start with the host, with containers restarted according to the compose restart values.
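On the compose side that’s just the standard restart policy per service (the service name and image below are placeholders); on the host side it relies on podman being set up to bring those containers back up at boot, e.g. via podman’s restart unit or a systemd unit the role enables:

```yaml
# Fragment of a compose file: the restart policy is what podman uses to decide
# whether to bring the container back after a crash or reboot. Name/image are placeholders.
services:
  app:
    image: docker.io/library/nginx:stable
    restart: always
    ports:
      - "8080:80"
```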

For stateful services with data beyond an initial startup config, e.g. SQLite databases or binary data, I’ll typically exclude it from the repo and back it up a different way (usually via a process on the container’s host, such as a restic cron job or Syncthing).
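The restic variant of that is easy to hang off the same Ansible run; a sketch, with the repo, paths, and password handling all being placeholders (Syncthing would be the alternative for data that should just replicate elsewhere):

```yaml
# Sketch of a nightly restic cron job on the container's host, managed by Ansible.
# Variable names, schedule, and password handling are placeholders.
- name: Nightly restic backup of the service's data directory
  ansible.builtin.cron:
    name: "restic backup {{ service_name }}"
    minute: "15"
    hour: "3"
    job: >-
      restic --repo {{ restic_repo }}
      --password-file /root/.restic-password
      backup {{ service_data_dir }}
```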

This approach works for me. It’s a combination of various tools and approaches I’ve personally seen, used, or implemented over my career, applied to a depth that aligns with my current wants and needs. The core tools all have open-source cores built around text-based config.

Many of them I started using when they were considered shiny and new, between 5 and 15 years ago. They have reached a critical mass now, and although there may be shinier, newer toys out there, I keep coming back to these ones. Sometimes I go looking for them, sometimes they find me, but we’ve spent a lot of time together.

This is basically how my stuff is deployed, but it has grown into this position, meaning a bunch of stuff wasn’t originally deployed like this, but is migrating to or being maintained in this way.