After nearly a year on Arch Linux, my workflow has shifted again: from terminal-first minimalism toward a browser-first system powered by web applications, Surfingkeys, and AI agents controlled through web interfaces.
The next step after vibe coding is not removing humans entirely, but removing them from every inner development loop. When agents can verify more of their own work, multi-agent systems stop being blocked by one human reviewer.
A practical breakdown of the two main paths to AI agent token access — pay-per-token API usage and subscription coding plans — with real cost numbers and a concrete recommendation depending on whether you work solo or run multiple agents.
md2video is a small but important experiment in agent-first software: the main input is a prompt file for the AI agent, not a GUI form or SDK call, and the result is a narrated slide video generated from markdown-based source content.
In the agent coding era, the key OS question is no longer desktop polish for humans, but safe and scriptable execution for agents. For serious AI coding workflows, Linux stands out for its permission model, CLI-first ecosystem, and open customization.