My daughter asked an AI to help her write a story last week. She's nine. She didn't ask me how to use it. She didn't read a tutorial. She just talked to it like she talks to a friend, negotiated the plot, pushed back when it gave her a boring ending, and produced something genuinely funny about a cat who runs for mayor.
I sat there watching and realised something that stopped me cold. She wasn't learning to use AI. She was growing up inside it. The distinction matters more than anything else I've written about this year.
We talk endlessly about AI adoption — how to get businesses to embrace it, how to train employees, how to overcome resistance. But there's an entire generation for whom the word "adoption" makes no sense. You don't adopt oxygen. You just breathe.
The Gap Nobody's Measuring
Every generation has its native technology. Millennials grew up with the internet. Gen Z grew up with smartphones. But what's happening now is categorically different. The internet was a medium. Smartphones were devices. AI is a thinking partner. It's the first technology that responds, adapts, and meets you where you are.
When a twelve-year-old sits down with an AI tutor that adjusts its explanations in real time based on what the child does and doesn't understand, that's not using a tool. That's a fundamentally different relationship with knowledge itself. The child doesn't experience the frustration of being left behind in a classroom. They don't develop the learned helplessness that comes from raising your hand and being told to wait.
They just learn. At their pace. In their style. With infinite patience from a system that never gets tired, never gets frustrated, and never makes them feel stupid for asking the same question three times.
The first AI-native generation won't understand why learning was ever one-size-fits-all. That's either a miracle or a crisis, depending on what we build around it.
The gap I'm talking about isn't between children and adults. It's between the world these children are growing up in and the systems we're building to govern it. Right now, those two things are evolving at wildly different speeds. The kids are sprinting. The governance is crawling. And every day that gap widens, we're making decisions by default that should be made by design.
What Entrepreneurs Are Actually Building
Here's what keeps me up at night. When entrepreneurs build AI products, we think about users, revenue, market fit, competitive advantage. All the right things for building a business. But we almost never think about what happens when the seven-year-old in the next room grows up inside the system we just shipped.
Every AI product that goes to market becomes part of the environment. It shapes expectations, habits, and mental models. The recommendation algorithm you built to increase engagement is also teaching a teenager what "interesting" means. The chatbot you deployed to reduce support costs is also teaching an eight-year-old what conversation with a non-human entity feels like. The AI assistant you designed for productivity is also showing a child what it means to delegate thinking.
I'm not saying any of this is inherently bad. Some of it is extraordinary. But we need to be honest about the fact that we're not just building products. We're building the cognitive environment for the next generation. And most of us are doing it without giving that reality a single minute of deliberate thought.
Architecture beats ambition. I've said it before. But it takes on a different weight when the architecture you're building isn't just a business system — it's the scaffolding around someone else's childhood.
The Responsibility We're Ignoring
There's a conversation happening in boardrooms and policy offices about AI safety. It tends to focus on existential risk — will AI become uncontrollable? Will it be used for mass manipulation? Will it destabilise geopolitics? These are important questions. But they're also conveniently abstract. They let us feel concerned without actually changing what we do on Monday morning.
The more immediate question is simpler and harder: what does it do to a developing mind when AI is always available? When a child never has to sit with not-knowing? When the answer is always one question away?
We spent decades worrying about screen time. The real question was never how long children stare at screens. It's what stares back.
I don't have clean answers. Nobody does. But I know that the entrepreneurs building AI products right now have more influence over this question than any government committee. Policy moves slowly. Products ship fast. By the time regulation catches up, an entire cohort of children will have formed their relationship with AI based on whatever we decided to build — or decided not to think about.
This isn't a call to stop building. It's a call to build with your eyes open. To add "what does this mean for an eight-year-old user?" to your product review checklist, even if your target market is enterprise B2B. Because the enterprise worker has a kid. And that kid will find your product, or one like it, sooner than you think.
What the Kids Already Know
Here's the part that gives me hope. Children are remarkably good at this. Better than us, in most cases.
When I watch kids interact with AI, I see something that most adults have lost: an intuitive understanding that AI is a collaborator, not an authority. Kids negotiate with AI. They argue with it. They tell it when it's wrong. They don't treat its output as gospel the way many adults do — because they haven't been trained to defer to technology the way we have.
A child says "that's a rubbish answer, try again" without any existential crisis about what it means that they're talking to a machine. An adult agonises about whether it's appropriate to use AI for this task, whether they're being lazy, whether their colleagues will judge them. The child just uses it and moves on.
This is the human-led, AI-amplified future in its purest form. Not because children have read the thought leadership about it. Because they haven't been taught to be afraid of it yet.
The fear comes later. It comes from us. From the anxious articles, the dystopian framing, the school bans that treat AI like contraband instead of curriculum. Every time we respond to AI with panic rather than pedagogy, we're teaching kids that this powerful tool is something to be ashamed of using.
Children don't need to learn to trust AI. They need us to build AI that deserves their trust.
Building for Inheritance
I think about legacy differently than I used to. Early in my career, legacy meant the business you built, the money you left behind, the reputation you earned. Those things still matter. But the legacy I think about most now is the invisible one — the systems, norms, and defaults that our generation is encoding into AI and that the next generation will inherit as assumptions.
If we build AI systems that prioritise engagement over wellbeing, the next generation will assume that's what technology is for. If we build AI systems that centralise power, the next generation will assume that's natural. If we build AI systems without meaningful transparency, the next generation won't know to ask for it.
But the reverse is also true. If we build AI that respects privacy by default, children grow up expecting privacy. If we build AI that explains its reasoning, children grow up demanding explanations. If we build AI that acknowledges uncertainty, children grow up comfortable with not-knowing — which might be the most important cognitive skill of the next century.
Every product decision is a vote for the world you want your children to live in. That's not hyperbole. It's just what happens when technology becomes environment.
The Entrepreneur's Actual Job
I've been saying for years that entrepreneurship is building lifeboats. Building things that help people navigate change, survive disruption, find opportunity in chaos. But when it comes to AI and the next generation, the metaphor shifts. We're not building lifeboats. We're building the ocean.
The AI systems shipping today are the water the next generation swims in. That's an extraordinary amount of power for any group of people to have. And the tech industry's track record with that kind of power is, let's be honest, mixed at best.
Social media was supposed to connect us. It did — and it also gave us an anxiety epidemic among teenagers. The smartphone was supposed to give us freedom. It did — and it also created the attention economy. Every transformative technology delivers on its promise and exacts a cost that nobody budgeted for.
AI will be no different. The question is whether we learn from the pattern or repeat it. And the window for learning is now — not after the first generation has already been shaped by whatever we shipped in a hurry to hit quarterly targets.
What I'd Actually Do
I'm not a policy maker. I'm an entrepreneur. So here's what I think entrepreneurs should actually do, starting this week.
First, assume children will use your product. Even if they're not supposed to. Even if your terms of service say otherwise. Design accordingly. Not by dumbing things down — by building in transparency, consent, and the ability to understand what's happening under the hood.
Second, think about defaults. Your product's default settings are the values you're encoding into the next generation's environment. If the default is maximum data collection, that's a statement. If the default is privacy-first, that's a different statement. Pick the one you'd want for your own kids.
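The point about defaults can be made concrete. Here's a minimal sketch in Python — every name is hypothetical, not drawn from any real product or library — showing what a privacy-first settings schema looks like. The idea is simply that whatever a user gets without touching a single setting is the statement you're making:

```python
from dataclasses import dataclass

# Hypothetical settings schema for an AI product. The defaults below are
# the values a user inherits if they never open a settings menu — which
# is to say, the values most users (and most children) will live with.
@dataclass
class ProductDefaults:
    collect_analytics: bool = False       # opt in, never opt out
    retain_history_days: int = 0          # keep nothing unless asked
    explain_recommendations: bool = True  # show reasoning by default
    profile_minors: bool = False          # no personalisation of young users

# What a user gets if they change nothing:
settings = ProductDefaults()
assert settings.collect_analytics is False
assert settings.retain_history_days == 0
```

The inverted version — analytics on, indefinite retention, no explanations — is the same amount of code. The difference is purely which world you decided was the starting point.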
Third, talk to actual children. Watch them use AI. You'll learn more in thirty minutes than in thirty hours of market research. They'll show you things about your product that no focus group of adults ever will — because they don't have the assumptions, biases, and politeness that make adult feedback useless half the time.
Fourth, build the guardrails before you need them. Not after a scandal. Not after a congressional hearing. Now. Because the best time to install safety systems is when the stakes feel low. By the time they feel high, it's too late to do it well.
The entrepreneurs who shape the next decade aren't the ones who move fastest. They're the ones who build with inheritance in mind.
The World They'll Thank Us For
I keep coming back to my daughter and her cat-mayor story. She didn't just use AI to write it. She used AI to think through a problem — how do you make a story funny without being mean? — and landed on a solution that was better than what either she or the AI would have produced alone.
That's collaboration. Real collaboration. The kind we keep promising AI will enable for businesses but that a nine-year-old figured out in twenty minutes because nobody told her it was supposed to be complicated.
The first AI-native generation will be extraordinary. They'll collaborate with AI in ways we can barely imagine. They'll solve problems we've been staring at for decades. They'll build things that make our best work look like cave paintings.
But only if we give them the right foundation. Only if the systems we build today — the products, the defaults, the guardrails, the values encoded in code — are worthy of what they'll become.
We're not just building businesses. We're building their world. And the least we can do is build it like we mean it.