We Are the Compiler Guys
There's a Chinese saying: 工欲善其事,必先利其器 — to do good work, one must first sharpen one's tools.
I was in a late-night Slack thread last week with a friend who's a serious engineer — the kind who shipped production code back when a bug meant real customers losing real money. He was feeling the weight of it:
"The slop I'm pushing right now is making me depressed. It's really bad, how unreliable the code is in production."
I know that feeling. AI-generated code has a confidence that exceeds its correctness. It fills in the blanks with plausible-looking logic that quietly fails under edge cases you didn't think to test. For someone with battle scars from shipping critical systems, the gap between "looks right" and "is right" is uncomfortable in a way that's hard to shake.
But then I said something that reframed it for both of us.
Think about the researchers who wrote the foundational compiler textbooks. Andrew Appel, for example, wrote Modern Compiler Implementation, the Tiger Book. When he and the pioneers before him were working out parsing algorithms and designing type systems, the tools were primitive and the theory was still being invented in real time. They were working on problems nobody had solved before, with instruments that were more prototype than product.
They weren't just using the tools. They were making them. And in doing so, they wrote the textbooks that everyone who came after would study.
We are those guys.
This is easy to miss when you're staring at a hallucinated function call or debugging logic that almost makes sense. But zoom out: right now, we are the earliest generation of practitioners using AI as a serious engineering tool. The techniques we're developing, such as how to prompt reliably, how to structure tasks for AI agents, how to verify AI output without redoing the work yourself, and when to trust the model versus override it, have not been codified yet.
The material in the Tiger Book took decades of iteration to become textbook knowledge. The algorithms didn't spring fully formed from someone's head. They emerged from engineers who hit real walls, failed in interesting ways, and gradually figured out what actually worked. Then they wrote it down.
We are doing that now. Except the timeline is compressed.
The tool is dull right now. That's not a bug — it's the precondition for the interesting work.
A perfectly sharp tool doesn't need pioneers; it needs users. A dull tool is an invitation.
Every time you catch an AI making a subtle mistake and understand why, you're building intuition that doesn't exist anywhere yet. Every pattern you discover for structuring prompts, decomposing tasks, verifying outputs — that's the proto-textbook accumulating in your head. The vocabulary doesn't fully exist yet. You're inventing it.
My friend's anxiety about unreliable production code is actually an asset here. That hard-won skepticism, the reflex to verify, the discomfort with "looks right" — that's the right posture for this moment. The people who got burned by brittle systems are better equipped to build robust ones. His frustration is calibrated to reality in a way that blind optimism isn't.
工欲善其事,必先利其器.
The old saying assumes the tool already exists and just needs honing. But sometimes you're early enough that you have to build the honing stone itself.
That's where we are. The tools will get sharper. Some of the sharpening will happen in model labs and research papers. But a lot of it will happen in the daily practice of engineers working out what actually works and what doesn't — and eventually, writing it down.
The textbooks don't exist yet. We are writing them.