I used 2025 to turn AI into muscle memory. Not one big workflow. A stack of small, reliable habits to become more AI-native. Exploration was the theme.

I began with GitHub Copilot inside VS Code. Then I moved to Cursor and stayed there. The reason was simple: fewer steps from intent → outcome. Fast inline completion mattered more than “agent mode” for important professional code, and Cursor Tab is the best I’ve used in those situations.

I sampled other VS Code-fork IDEs too: Windsurf, AWS Kiro, Google Antigravity, and others. None became my daily driver; Cursor remained the default.

Agentic browsers with computer use were another theme: Comet, ChatGPT Atlas, Dia. Mostly one-off usage for tasks that are GUI-heavy.

My most consistent gains came from CLI agents. They’re direct and keyboard-friendly (which I love: it saves time and the skills transfer). They don’t need the UI ceremony of IDEs. Two tools dominated my terminal time: Claude Code in the first half of 2025, OpenAI Codex CLI in the second.

For low-stakes tasks, I leaned on Gemini CLI (and its free or cheap open-source cousins). It’s open-source and intentionally “prompt → model → result”, with a generous free tier. One lesson was cost: agentic coding can burn money quietly, so you have to budget both attention and tokens. Luckily, Claude Code and Codex offer decent rate-limited subscriptions.

I experimented with “agent frameworks” mainly to learn what’s real and what’s just ceremony and benchmark hacking. Vercel AI SDK, pydantic-ai, and CrewAI, to name a few, helped with that, as did the official SDKs from OpenAI and Anthropic. For local model testing, Ollama was my go-to.
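Most of those experiments boiled down to the same shape: send a prompt to a model behind some client, read the text back, swap the backend. Here’s a minimal sketch of that shape in Python, using the OpenAI SDK for a hosted model and Ollama’s OpenAI-compatible endpoint for a local one; the model names and the localhost URL are assumptions about a typical setup, not my exact config.

```python
from openai import OpenAI

# Hosted model: reads OPENAI_API_KEY from the environment.
hosted = OpenAI()

# Local model via Ollama's OpenAI-compatible endpoint (assumed default port).
local = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

def ask(client: OpenAI, model: str, prompt: str) -> str:
    """Send one prompt, return the text of the first reply."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(ask(hosted, "gpt-4o-mini", "What does an agent framework add over a raw client call?"))
print(ask(local, "llama3.1", "Same question, answered by a local model."))
```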

Google’s media models were a big part of my year: Imagen 3/4, Veo 2/3, Nano Banana 2.5/3. I used them in bursts, especially for creative experiments. One fun project: generating chapter- and character-style visuals for public-domain books from Project Gutenberg. Another was this YouTube channel. I also tried Amazon Nova Reel for video; the quality didn’t justify the effort. The CLI agents removed the friction of calling the API endpoints directly. Creative taste and ideas are the bottleneck.
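The book-visuals project was roughly this shape: pull a Project Gutenberg text, ask a model to turn a chapter excerpt into an image prompt, then render it. A rough sketch with the google-genai SDK; the book URL, the model names, and the generate_images call and its response fields are my assumptions from the SDK docs, so treat it as the shape of the pipeline, not the exact script.

```python
import urllib.request
from google import genai

client = genai.Client()  # assumes a Gemini API key is configured in the environment

# Any public-domain plain-text book works; this URL is just an example (Frankenstein).
BOOK_URL = "https://www.gutenberg.org/cache/epub/84/pg84.txt"
text = urllib.request.urlopen(BOOK_URL).read().decode("utf-8")
excerpt = text[5000:9000]  # crude slice standing in for real chapter splitting

# Step 1: turn the excerpt into a single visual prompt.
prompt_resp = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=f"Write one vivid image-generation prompt for this chapter excerpt:\n{excerpt}",
)
image_prompt = prompt_resp.text

# Step 2: render it (generate_images call and Imagen model name assumed from the SDK docs).
image_resp = client.models.generate_images(
    model="imagen-3.0-generate-002",
    prompt=image_prompt,
)
with open("chapter.png", "wb") as f:
    f.write(image_resp.generated_images[0].image.image_bytes)
```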

A surprisingly strong workflow: converting audio/video into text I can think with and ask follow-ups about. NotebookLM was a major helper here, especially when I wanted answers grounded in one or a few original sources. Later, I combined speech-to-text with a rewrite prompt to turn podcasts into readable notes. This fit how I actually learn: pause, reflect, write, connect ideas. AI didn’t replace thinking. It protected time for it: more time to sit with the ideas and ask questions to satisfy curiosity.
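The podcast-to-notes step is simple enough to sketch: transcribe locally, then rewrite. This assumes the open-source whisper package for transcription and the OpenAI SDK for the rewrite; the model names and the rewrite prompt are stand-ins for whatever you prefer.

```python
import whisper
from openai import OpenAI

# Transcribe the episode locally with the open-source Whisper model (needs ffmpeg installed).
stt = whisper.load_model("base")
transcript = stt.transcribe("episode.mp3")["text"]

# Rewrite the raw transcript into readable notes.
client = OpenAI()
notes = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Rewrite this podcast transcript as concise, readable notes with headings and key quotes."},
        {"role": "user", "content": transcript},
    ],
).choices[0].message.content

with open("notes.md", "w") as f:
    f.write(notes)
```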