
I once advised a founder who had it all. A brilliant product, a world-class engineering team, and a solution that was objectively better than the competition.
He asked for my opinion. I told him to focus on building an audience. An email list. A community. Something.
He smiled, nodded, and went right back to tweaking his algorithm.
Six months later, his company was dead.
It was a brutal lesson. He and his co-founders were some of the smartest people I’ve ever met. Their tech was flawless. They had the data, the benchmarks, the charts that went up and to the right. They were so sure they were going to win. They were the smart ones.
And they got absolutely obliterated.
Their main competitor, a company with what they considered “inferior” tech, had spent the last two years building a massive, loyal email list. While the founders I advised were perfecting their code, their rivals were writing newsletters. While they were optimizing servers, their rivals were building relationships.
The competitor launched their “inferior” product to a million people on day one. The founders I advised launched their “genius” product to the sound of crickets.
That failure was a painful, expensive lesson. It was the tuition they paid to the market for being naive. And it’s a lesson I’ve never forgotten.
Here’s the thing everyone is missing about AI. The tech is becoming a commodity.
Look, I’m not saying the technology is easy. It isn't. But the gap is closing fast. A few years ago, OpenAI looked like they had an untouchable lead. Now? Google, Anthropic, Meta, a dozen other startups. They all have models that are damn good. The gap between the best and the second-best is shrinking every month. Soon, for most practical purposes, the models will be indistinguishable.
Consider the recent acquisition of Manus by Meta. The headline number was $2 billion. The reaction from the technical elite was predictable. "It's just a wrapper."
They missed the point entirely.
Under the hood, Manus relies on the same foundation models available to any developer. They didn't train a better LLM. They didn't discover a new transformer architecture. By the standard definition of "deep tech," they shouldn't have a moat.
Yet they built a business generating nine figures in revenue in less than a year.
Why? Because they understood the difference between raw intelligence and completed work.
Foundation models are engines. They are powerful, loud, and increasingly commoditized. But you cannot drive an engine to work. You need a chassis, a transmission, a steering wheel, and a comfortable seat. You need a car.
Manus didn't try to build a better engine. They just built a better car.
They focused on the "last mile" of AI. The messy, complex orchestration required to take a vague human instruction and translate it into a tangible result. They built the infrastructure that allows the AI to browse the web, manage files, and execute multi-step workflows without human hand-holding.
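That "last mile" is mostly plumbing, not modeling. As a rough illustration (not Manus's actual architecture, which isn't public), here's a toy orchestration loop in Python. The model call is a stub standing in for any foundation-model API; the point is that the value lives in the loop around it: planning, tool dispatch, and state carried between steps.

```python
# Toy sketch of "last mile" orchestration. Hypothetical names throughout;
# fake_model stands in for a real foundation-model API call.

def fake_model(instruction):
    """Stub planner: a real system would ask an LLM to turn the vague
    instruction into a tool plan. Here we return a fixed plan."""
    return [("search", "flight prices"),
            ("summarize", "results"),
            ("save", "report.txt")]

def dispatch(tool, arg, state):
    """Route one planned step to a tool, threading state between steps."""
    if tool == "search":
        state["data"] = f"raw results for {arg}"
    elif tool == "summarize":
        state["summary"] = f"summary of {state['data']}"
    elif tool == "save":
        state["output"] = f"wrote {state['summary']} to {arg}"
    return state

def run(instruction):
    """The wrapper: plan, execute each step, return a finished result
    instead of handing raw model output back to the user."""
    state = {}
    for tool, arg in fake_model(instruction):
        state = dispatch(tool, arg, state)
    return state["output"]

print(run("book me something cheap"))
```

Every line of that loop is "just a wrapper." It's also the part the user actually experiences.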
I know the counter-argument. "What happens when Claude or GPT-5 does this natively? Won't they just Sherlock the wrappers?"
It's a fair question. But it misunderstands the DNA of these companies. Foundation model labs are research organizations. They build for builders. They build for engineers who live in terminals.
I don't care what anyone says: having to run Claude Code from a terminal is not user-friendly. It's powerful, yes. But it's not a product for the 99% of people who just want the job done.
Manus is.
Meta didn't spend $2 billion for another model. They have Llama. They bought Manus because they solved the product problem. They bought the interface that bridges the gap between "artificial intelligence" and "actual utility."
This is the reality of the application layer. As models converge in capability, the value shifts to the companies that can harness that power to solve specific, high-value problems.
We need to retire "wrapper" as an insult. The wrapper is the product. The wrapper is the workflow. The wrapper is the business.
So if the tech isn’t the moat, what is?
It’s the same thing that crushed the company I advised.
Distribution.
But let's be real. In an AI world, distribution itself is about to be commoditized. We are about to be flooded with infinite content, infinite emails, infinite noise.
So distribution isn't enough. You need a filter. You need something that cuts through the noise.
You need Trust.
Here's what I've come to believe. Capital is a commodity. Attention is not. In a world where anyone can spin up a powerful AI, the only thing that matters is your ability to get that AI in front of people. To build an audience. To earn their trust. That’s the final moat.
Think about it. When every company can spin up a chatbot that sounds human, who do you listen to? When every marketing agency can generate a thousand photorealistic images, which one do you hire? When AI can write a thousand articles, which one do you read?
You read the one you trust. You hire the one you trust. You listen to the one you trust.
Trust is the final moat. It’s the only thing that can’t be copied and pasted. It can’t be automated. It has to be earned. And in the age of AI, it’s going to be more valuable than ever.
And how do you build trust? You start by respecting privacy.
Privacy isn’t a feature. It’s the foundation. It’s the new currency. In a world of data breaches and surveillance capitalism, the companies that protect their users’ privacy will be the ones that win their trust. The ones that see privacy as a box to be checked, or worse, as an obstacle to be overcome, are going to get left behind.
So if you’re building in the AI space, stop worrying about whether your model is slightly better than the next guy’s. It won’t be for long.
Start worrying about your distribution. Start worrying about your brand. Start worrying about earning the trust of your users.
Because in the end, that’s the only thing that matters.
