Blog
Writing about RAG, Agents, AI in production, and things that break.
What It's Like to Feed Yourself to an AI
I'd been logging every hour of my day for seven years. Then I fed all of it to an AI.
The MBTI Guide to Making Slides with AI
Five paths to AI-generated slides, each one a near-perfect fit for a different personality type. The worst AI can do is make the slide. The best it can do is write the code that renders it.
Most of Agent Engineering Has Nothing to Do with AI
The hardest part of building agents isn't prompt engineering — it's context management. When dynamic tool loading and skills both hit dead ends, sometimes you just build the ugly thing that works.
I Was Accidentally Donating to Cloud Providers
I built a smart tool-filtering system for my AI agent to cut costs. Turns out I was silently nuking my KV cache on every request. Here's what Claude Code does instead.
The End of the Six-Figure Engineer?
Tech companies spent two decades selling Wall Street on the asset-light story. Now they're buying hundreds of billions in GPU infrastructure — and the math has to balance somewhere.
The End of the AI Pipeline Era
Why the shift from hardcoded workflows to sandboxed agents isn't just an architectural upgrade — it's a fundamental change in how we think about building AI systems.
AI Search Traffic Is Up 527%. Here's How to Get Cited.
Traditional SEO is losing ground as AI answers replace search clicks. This is a practical guide to GEO — making ChatGPT, Perplexity, and Claude actually cite your content.
Why AI Is Making You More Exhausted, Not Less
A Harvard study tracked 200 tech workers for eight months and found AI made them more burned out, not less. The root cause isn't AI itself — it's that AI rewrote the cost structure of work, but our decision-making hasn't caught up.
Why Your RAG Can't Find What You're Looking For
A brand manager couldn't find an article that was definitely in the database. The problem wasn't a bug — it was an economics problem hiding inside a technical one.
LLM Training Isn't Alchemy. It's a Three-Stage Rocket.
Most people think training a large language model means feeding it data, tuning hyperparameters, and hoping for the best. The reality is an entirely different class of engineering — and the stage everyone obsesses over matters the least.