#DevOps

@jwolynko When it comes to on-prem, GitLab still seems to be the preferred solution (GitHub Enterprise seems to be deprecated, thank the universe for that). I personally run Forgejo wherever possible.

But most of the customers I know run some form of GitLab, and I wanted to make sure I have a working test setup whenever I need one. Right now I want to do some testing of GitLab runners on Kubernetes.

The package-based ones are pretty simple to set up (not talking about maintenance, of course).

codeberg.org/johanneskastl/git
codeberg.org/johanneskastl/git

<rant>
OK, so whoever thought up the structure of the #Gitlab Helm chart was ... creative, to put it politely.

The chart itself has dependencies, as is common with helm charts.
But it also has a charts directory, which contains 5 other charts. Including one called gitlab.
Which again has a charts directory as well as dependencies.

So, depending on which chart you want to configure, it might be chart-name.something or gitlab.chart-name.something. Oh, they also use global.something or global.chart-name.something.

And as if this were not creative enough, some charts are installed if chart-name.install is true. For others it is chart-name.enabled...

But help is near: there is an operator that does the heavy lifting for you. Oh wait, it uses the values from the Helm chart inside its CRD...
</rant>
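To make the nesting above concrete, here is a sketch of how a values file for the GitLab chart ends up looking (key names from memory and for illustration only, so double-check them against the chart's documentation):

```yaml
# Top-level dependency charts: some are toggled with .install ...
certmanager:
  install: false
gitlab-runner:
  install: false

# ... while others use .enabled
registry:
  enabled: true

# Subcharts bundled below the inner "gitlab" chart get another level of nesting
gitlab:
  gitaly:
    enabled: true

# And then there are global values, sometimes again scoped per chart
global:
  hosts:
    domain: example.com
```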

🧑‍🎓Cultivating a Learning Culture: Unlocking the Power of Continuous Development

In just 5 powerful minutes, Marcelo Ancelmo dives into one of the most important ingredients for long-term success in any modern organization:
🔑 A thriving learning culture.

If you're passionate about building resilient, innovative, and engaged teams, this talk is a must-watch.

📺 Watch it now and get inspired: buff.ly/9aQQIhi

🚀 #OdooGCI - Your Swiss army knife for deploying #Odoo projects 🔄
🔥 Rewritten from scratch with Typer CLI + Python 🐍
I am building a powerful tool for managing Odoo repositories like a PRO:
✨ Key features:
📜 JSON power: clone/update multiple repos in one go with a single JSON file (bye bye manual scripts!)
🛠️ Mass maintenance: manage dozens of modules as if they were one
🔐 Token-friendly: native support for GitHub/GitLab authentication
⚡ Coming soon: support for Git submodules (in development)
🔍 Why is this a game changer?
Ideal for companies with complex stacks (50+ modules)
Perfect for controlled migrations between versions
Infrastructure as code applied to Odoo ecosystems
🚧 Currently in private beta - open source coming soon!
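The tool itself is still in private beta, but the JSON-driven clone/update idea can be sketched roughly like this (the file layout, field names, and helper names are my own guesses, not the actual #OdooGCI format):

```python
import json
import subprocess
from pathlib import Path


def git_command(repo: dict, base_dir: Path) -> list[str]:
    """Return the git command to sync one repo: clone if missing, pull otherwise."""
    dest = base_dir / repo["name"]
    if dest.exists():
        return ["git", "-C", str(dest), "pull", "--ff-only"]
    return ["git", "clone", "--branch", repo.get("branch", "main"),
            repo["url"], str(dest)]


def sync_all(config_file: Path, base_dir: Path) -> None:
    """Clone/update every repository listed in a JSON file like
    {"repos": [{"name": "web", "url": "https://...", "branch": "17.0"}, ...]}."""
    repos = json.loads(config_file.read_text())["repos"]
    for repo in repos:
        subprocess.run(git_command(repo, base_dir), check=True)
```

A single JSON file then replaces a pile of per-repo shell scripts, and adding a module is just one more entry in the list.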

Your logs are lying to you - metrics are meaner and better.

Everyone loves logs… until the incident postmortem reads like bad fan fiction.
Most teams start with expensive log aggregation, full-text searching their way into oblivion. So much noise. So little signal. And still, no clue what actually happened. Why? Because writing meaningful logs is a lost art.
Logs are like candles, nice for mood lighting, useless in a house fire.

If you need traces to understand your system, congratulations: you're already in hell.

Let me introduce my favourite method: real-time, metric-driven user simulation aka "Overwatch".

Here's how you do it:

🧪 Set up a service that runs real end-to-end user workflows 24/7. Use Cypress, Playwright, Selenium… your poison of choice.
📊 Every action creates a timed metric tagged with the user workflow and action.
🧠 Now you know exactly what a user did before everything went up in flames.
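Step two above, one timed metric per workflow action, can be sketched as a small context manager that emits an InfluxDB line-protocol point (measurement, tag, and field names are my own choices; real tag values would also need line-protocol escaping):

```python
import time
from contextlib import contextmanager


def to_line_protocol(workflow: str, action: str, duration_ms: float, ok: bool) -> str:
    """Format one timed action as an InfluxDB line-protocol point."""
    return (f"user_workflow,workflow={workflow},action={action} "
            f"duration_ms={duration_ms:.1f},ok={'true' if ok else 'false'}")


@contextmanager
def timed_action(workflow: str, action: str, sink):
    """Time a single workflow step and emit a tagged metric, even on failure."""
    start = time.monotonic()
    ok = True
    try:
        yield
    except Exception:
        ok = False
        raise
    finally:
        duration_ms = (time.monotonic() - start) * 1000
        sink(to_line_protocol(workflow, action, duration_ms, ok))
```

In the simulation service, the body of the `with` block is where Playwright/Cypress/Selenium drives the browser, and `sink` writes the point to InfluxDB instead of a list:

```python
with timed_action("checkout", "add_to_cart", write_point):
    page.click("#add-to-cart")  # hypothetical Playwright step
```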

Use Grafana + InfluxDB (or other tools you already use) to build dashboards that actually tell stories:

* How fast are user workflows?
* Which steps are breaking, and how often?
* What's slower today than yesterday?
* Who's affected, and where?

🎯 Alerts now mean something.
🚨 Incidents become surgical strikes, not scavenger hunts.
⚙️ Bonus: run the same system on every test environment and detect regressions before deployment. And if you made it reusable, you can even run the service to do load tests.

No need to buy overpriced tools. Just build a small service like you already do, except this one might save your soul.

And yes, transform logs into metrics where possible. Just hash your PII data and move on.

Stop guessing. Start observing.
Metrics > Logs. Always.

Backups are only good if they actually work.

I spent this morning restoring the backup of our Mastodon instance burningboard.net to a fresh virtual machine in Proxmox, validating that it is complete and restorable and that the disaster recovery documentation is up to date.

Everything worked perfectly ✅

Next restore-test: 10/2025