
At some point in the last few years, I built a fake computer.
Not a real one — a simulated 1980s Unix terminal, running entirely inside a language model. I called it NexOS. And when I typed commands into it, it responded the way an actual Unix terminal would. Correct outputs. Accurate syntax. It could generate functions, simulate file systems, behave like a real operating environment. All of it conjured out of nothing but a carefully written prompt.
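The essay doesn't show the actual NexOS prompt, so here is a hypothetical sketch of how a simulated-terminal system prompt like that might be assembled. The function name, hostname, and rule wording are all illustrative assumptions, not the real prompt behind NexOS.

```python
# Hypothetical sketch: composing a system prompt that asks a model to
# role-play a 1980s Unix terminal. The rules below are assumptions for
# illustration, not the actual NexOS prompt.

def build_terminal_prompt(hostname: str = "nexos") -> str:
    """Compose a system prompt instructing a model to act as a
    1980s Unix terminal and nothing else."""
    rules = [
        f"You are {hostname}, a 1980s Unix terminal.",
        "Respond only with the terminal's output, never with commentary.",
        "Maintain a consistent simulated file system across commands.",
        "If a command would fail on a real system, print a realistic error.",
    ]
    return "\n".join(rules)

prompt = build_terminal_prompt()
```

The point of a prompt like this is that every line is a behavioral rule, not code — the "operating system" lives entirely in those sentences.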
Around the same time, I was building something I called Nexus Quest — a role-playing adventure engine where you could step inside a fictional world and interact with it. I used Stranger Things as my test universe. The level of immersion was unlike anything I expected. The AI wasn't just answering questions about the world. It was the world. It understood the rules, the characters, the atmosphere — and it held all of it together through language.
These two experiments changed how I think about almost everything I build.
The app was never the point.
I had spent a lot of time building web applications that integrated with AI. Interfaces, backends, data layers, API calls — the whole architecture. And there's real value in that. But somewhere between NexOS and Nexus Quest, something clicked.
The application was a wrapper. Sometimes a useful one, sometimes a beautiful one. But underneath every impressive AI-powered app was a prompt doing the actual work. The language was the logic. The words were the code.
Once I saw that, I couldn't unsee it. You don't always need the wrapper. Sometimes all you need is a well-crafted prompt and something to send it through — a chatbot interface, a simple API call, a thin web layer. The large language model is the application. Everything else is just the face you put on it.
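The "thin layer" idea above can be sketched in a few lines. This is a minimal illustration, not a real integration: `send_to_model` is an assumed stand-in for whatever chat API you'd actually call, and the stub model below exists only so the sketch runs offline.

```python
from typing import Callable

# Minimal sketch of "the model is the application": the whole app is a
# system prompt plus a thin function that forwards user input.
# send_to_model is an assumption standing in for any real chat API.

def make_app(system_prompt: str,
             send_to_model: Callable[[str, str], str]) -> Callable[[str], str]:
    """Return a one-argument 'application' closed over its prompt."""
    def app(user_input: str) -> str:
        return send_to_model(system_prompt, user_input)
    return app

# Usage with a stub model, so the sketch runs without network access:
echo_model = lambda system, user: f"[{system}] {user}"
terminal = make_app("You are a 1980s Unix terminal.", echo_model)
reply = terminal("ls /home")
```

Swap the stub for a real API call and nothing else changes — which is the essay's point: the prompt is the program, and everything around it is plumbing.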
What this revealed about language itself.
Here's the thing that really got me: the better I got at prompting, the more I realized it wasn't a technical skill. It was a communication skill. The ability to articulate exactly what you want — precisely, clearly, without ambiguity — turned out to be the core competency.
Language models respond to language. Which means the person who can describe a problem most accurately, most completely, most intentionally — that person gets the best results. Not the best programmer. Not the most technical person in the room. The best communicator.
I've thought about this a lot, because it reframes something fundamental. We've always said that clear thinking leads to clear writing. What AI has added to that equation is this: clear writing now leads to powerful outcomes, in ways that weren't possible before.
The implications are still unfolding.
NexOS was a proof of concept. A fake terminal conjured from words. But the principle it demonstrated scales to almost anything. Simulated environments. Custom personas. Entire workflows. Systems that would have taken months to build as traditional software can now be sketched out, tested, and iterated on through conversation.
I'm still finding the edges of what this means. But I know this: the builders who will do the most interesting things in the next decade aren't necessarily the ones who understand the models best at a technical level. They're the ones who understand how to talk to them — how to think clearly enough to describe what they want with enough precision that the model can do the rest.
That's a different kind of literacy from any we've talked about before. And it's worth developing.