Seeing is believing, and that's the problem
There's a reason the phrase "I'll believe it when I see it" has stuck around for centuries. We are, at our core, sensory creatures. We experience reality through what we can touch, hear, taste, and more than anything else, what we can see. Sight is the dominant sense for most of us. It's how we navigate the world, how we assess risk, and how we form our deepest convictions about what's real.
This is exactly why the shift to AI-generated code feels so disorienting right now, even when the results speak for themselves.
I've been using Claude to write production code, and at this point I've seen it work enough times that I genuinely believe it's going to be the default way products get built across the industry. But I didn't arrive at that belief overnight. It took repetition. It took seeing it work, over and over, with my own eyes.
We don't trust what we haven't experienced
Think about the first time someone told you about something that sounded too good to be true. Your instinct was probably some version of skepticism. Not because you thought they were lying, but because you hadn't experienced it yourself yet.
That's not a flaw in how we think. It's a feature. Our brains are wired to trust direct experience over secondhand information. The problem is that this wiring creates a lag between when something becomes possible and when we actually believe it's possible.
AI writing production-ready code is sitting right in that gap. The capability is there. The evidence is mounting. But for a lot of people, it still doesn't feel true because they haven't had enough firsthand experience with it yet.
Repetition is what builds trust
I really believe the thing that moves someone from "that's interesting" to "I'm building with this" isn't a single demo or a compelling blog post. It's accumulated experience. It's the tenth time you prompt a model and get back code that actually runs, actually works, and actually solves the problem you had.
You can read every benchmark, every case study, every thread about AI coding agents. But until you've personally watched it turn a prompt into something functional enough times, there's going to be a gap between what you know and what you trust.
Where this is heading
What's happening across the industry right now is a collective version of this same process. Thousands of builders are each accumulating their own experiences. Some are past the threshold. Others are early. The adoption curve isn't going to follow some clean line because it depends on something deeply personal: how many times each individual has seen it work for themselves.
The people and companies who start accumulating those experiences now are going to have a real advantage. Not because the technology is going anywhere (if anything, it's only getting better), but because trust takes time to build, and time is the one thing you can't compress.
So if you're still on the fence, just start using it. Not to prove a point, but to give yourself the direct experience your brain needs to form its own opinion. Prompt it. Build something small. See what happens. Then do it again. The trust will come, or it won't, but you'll be making that call based on what you've actually seen.
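If you want a concrete starting point, here's a minimal sketch using the anthropic Python SDK. It assumes you've run pip install anthropic and have an ANTHROPIC_API_KEY set in your environment, and the model name is illustrative, so check the docs for whatever is current. Ask for something small, run what comes back, and judge the result yourself:

```python
# A minimal sketch: ask Claude for a small, self-contained function.
# Assumes `pip install anthropic` and an ANTHROPIC_API_KEY in your environment.
import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY automatically

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative; swap in a current model
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Write a Python function that deduplicates a list "
                       "while preserving order, with a short docstring and tests.",
        }
    ],
)

# The response is a list of content blocks; print the generated code.
print(message.content[0].text)
```

That whole loop, prompt, read, run, takes a few minutes. Repeat it until you have enough data points to trust your own verdict.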
I'm curious, though. For those of you who have crossed that threshold, what got you there? Was it a specific project, a particular moment, or just enough small wins stacking up? And more importantly, how do we take whatever that was and shape it into something more accessible for the people who are still on the outside looking in, curious but understandably terrified of the AI unknown? I'd love to hear your thoughts.