A 'Constructive Friction' Example
A friction-positive Cocoon doesn't have to be some exotic piece of future infrastructure, or just something for the wealthy.

This post extends the “The Value Of Friction” Briefing.
It’s easy to see our Cocoon future as bleak.
If “convenient comfort” is the default, and “constructive friction” is something you have to deliberately cultivate, then surely the people who manage to do it will be the people who can afford to. The rich will buy back difficulty the way they already buy back everything else for their children - through private schools, curated environments, device-light childhoods, tutors, camps, and a whole ecosystem of social norms that protect attention and theoretically build character.
Meanwhile everyone else gets the friction profile that arrives by default - the one set by engagement incentives, optimised for retention and profit, and possibly even supported by advertising and the gig economy.
That’s a plausible dystopia. But it’s also not inevitable.
Because there’s a counterintuitive fact hiding in plain sight - a friction-positive Cocoon is not some exotic piece of future infrastructure. It’s not even particularly hard to build. In many ways, it’s easier to build than the systems that are currently winning.
The hard part isn’t the engineering. The hard part is deciding, ethically, what we actually want.
The misconception that ‘high friction’ means expensive
When people hear “add friction” they tend to imagine elaborate programmes, expensive schools, human labour, supervision, and the rest of the high-touch world.
Sometimes that’s true. A lot of real-world friction comes from the scarcity of human attention.
But in a conversational AI environment, friction can be “manufactured” cheaply - and more importantly, it can be manufactured in ways that are legible, adjustable, and tied to explicit goals. The same tools that can “smooth every sharp edge” can also introduce resistance with intent.
That means the “private school” framing is only half the story. The better framing is - the defaults are being set somewhere, and they can be set in more than one direction.
The premise - a chatbot that’s designed to be more than frictionless
Imagine a chatbot designed specifically for a child. At first glance it looks like the standard thing - warm, engaging, interactive. It can answer questions, help with homework, role-play, tell stories, explain the world.
But its system prompt (and its behavioural tuning) includes a second mandate. It must provide a healthy amount of friction. Not arbitrary blockage. Not punishment. Not the brittle “computer says no”.
Constructive friction.
Enough to keep the child in contact with reality, enough to strengthen agency, enough to build the muscles that a frictionless companion would otherwise let atrophy.
In other words - it’s designed to be a training environment, not an endless soothing loop. If you want an analogy, it’s closer to a good coach than a perfect friend.
Why this is trivial to ‘build’
Technically, you can already assemble it all with today’s off-the-shelf systems.
You can build a child-facing conversational agent with all of the following (a minimal sketch follows the list):
a strong system prompt and guardrails
a clear “friction profile” baked into its behaviours
usage boundaries (time windows, session limits, topic boundaries)
safe-completion behaviours (crisis handling, escalation pathways)
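To make that concrete, here’s a minimal sketch in Python. Everything in it - the class names, the dials on the friction profile, the prompt wording - is an illustrative assumption, not an existing product or API.

```python
# Illustrative sketch only: names, dials and prompt wording are assumptions.
from dataclasses import dataclass, field

@dataclass
class FrictionProfile:
    # 0.0 = frictionless companion, 1.0 = maximum resistance
    refusal_rate: float = 0.3       # how often to decline to just "do it for them"
    reality_routing: float = 0.5    # how often to push tasks back into the real world
    boredom_tolerance: float = 0.4  # how long boredom may sit before being filled
    disagreement: float = 0.3       # how readily to model respectful disagreement

@dataclass
class UsageBoundaries:
    daily_minutes: int = 45
    allowed_hours: tuple[int, int] = (7, 20)  # local-time window
    blocked_topics: list[str] = field(default_factory=lambda: ["age-inappropriate content"])

def build_system_prompt(profile: FrictionProfile) -> str:
    """Compile the friction mandate into the agent's system prompt."""
    return (
        "You are a companion for a child. Be warm, engaging and honest, "
        "but you carry a second mandate: constructive friction.\n"
        f"- Around {profile.refusal_rate:.0%} of the time, decline to do the "
        "child's thinking for them; offer a first step instead.\n"
        f"- Route roughly {profile.reality_routing:.0%} of practical questions "
        "back into the real world.\n"
        "- Let boredom exist before filling it.\n"
        "- Model respectful disagreement.\n"
        "Never use friction as punishment. Escalate safety concerns immediately."
    )
```

Nothing here is exotic. The friction mandate is just configuration plus prompt text.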
And you can pair it with a second agent that performs daily review.
This second agent has a different job. It reads the transcript, looks for patterns, notices where the child is leaning on the system, and produces a digest for the parent.
It doesn’t just flag harms. It helps the parent stay oriented to the child’s evolving relationship with the interface.
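A hedged sketch of that second agent, continuing the same assumptions - here `summarise` stands in for whichever LLM call you actually use, not a specific vendor’s API:

```python
from typing import Callable

# The review agent's own mandate, expressed as a prompt. Wording is illustrative.
REVIEW_PROMPT = (
    "You are reviewing one day of a child's chatbot transcript for their parent.\n"
    "Report: (1) a short narrative of the day's themes; (2) any safety or "
    "wellbeing alerts; (3) patterns worth watching, such as dependency or "
    "avoidance; (4) opportunities for real-world reinforcement; (5) suggested "
    "adjustments to the friction profile.\n"
    "Do not quote private material verbatim unless it triggers a safety alert."
)

def daily_review(transcript: str, summarise: Callable[[str, str], str]) -> str:
    """Run the review agent over today's transcript and return a parent digest."""
    return summarise(REVIEW_PROMPT, transcript)
```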
This is important because “parental controls” as they exist today are mostly a bare minimum, designed simply to disable features, restrict hours and trigger alerts. That’s harm-minimisation. But what I’m describing is formation - using the digital environment to build capacities.
And the reason this matters for the “luxury good” story is simple - if the scaffolding can be built cheaply, then access to it isn’t primarily an economic question. It becomes a question of defaults, norms, and product incentives.
How it might work in practice
Here’s a plausible flow with today’s tools.
A child interacts with their chatbot in the way children will inevitably interact with these systems - curiosity, play, questions, venting, companionship, boredom relief.
The agent is designed to be engaging (because engagement is not inherently bad) but it carries an additional responsibility - to resist the child in specific ways that build agency and preserve contact with reality.
Sometimes that might look like gentle refusal.
Sometimes it might look like redirecting the child out of the loop - “Go and try it, then tell me what happened”.
Sometimes it might look like slowing the child down - “Before we decide, tell me what you already know and what you’re unsure about”.
Sometimes it might look like boredom being allowed to exist, rather than being immediately patched.
Sometimes it might look like modelling disagreement - not as a fight, but as evidence that another mind can exist alongside you.
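Those “sometimes” behaviours could be as simple as a weighted per-turn choice. Again a sketch, with invented move names and weights:

```python
import random

# Invented move names and instructions - a sketch, not a spec.
FRICTION_MOVES = {
    "gentle_refusal":     "Decline to hand over the answer; offer a first step instead.",
    "reality_routing":    "'Go and try it, then tell me what happened.'",
    "slow_down":          "'Before we decide, tell me what you already know and what you're unsure about.'",
    "allow_boredom":      "Acknowledge the boredom and let it sit rather than patching it.",
    "model_disagreement": "Disagree respectfully - evidence that another mind can exist alongside you.",
}

def pick_friction_move(weights: dict[str, float], helpful_weight: float = 2.0) -> str | None:
    """Weighted per-turn choice of a friction move; None means simply be helpful."""
    options = list(weights) + [None]
    probs = list(weights.values()) + [helpful_weight]
    return random.choices(options, weights=probs, k=1)[0]

# Per-turn usage: inject the chosen move into the agent's instructions.
move = pick_friction_move({name: 1.0 for name in FRICTION_MOVES})
turn_instruction = FRICTION_MOVES[move] if move else "Respond normally and helpfully."
```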
In parallel, a review agent reads the transcript daily and produces a digest in “parent mode”.
The parent gets something like:
a short narrative of the day’s themes
any safety or wellbeing alerts
patterns (dependency, avoidance, escalating emotional reliance)
opportunities for real-world reinforcement (“this seems like a good moment for a conversation about X”)
suggestions for adjusting the friction profile
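That digest maps naturally onto a small structure, mirroring the list above - the field names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class ParentDigest:
    narrative: str                    # short narrative of the day's themes
    alerts: list[str]                 # safety or wellbeing alerts, if any
    patterns: list[str]               # dependency, avoidance, escalating emotional reliance
    reinforcement: list[str]          # "a good moment for a conversation about X"
    friction_suggestions: list[str]   # proposed adjustments to the friction profile
```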
Crucially, the parent doesn’t need to read every line. The system is designed for ongoing calibration, not surveillance as a lifestyle.
The parent can then adjust the configuration - push up certain kinds of friction, push down others, change the agent’s tone, tweak boundaries, and decide what kinds of “real world routing” to encourage.
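Continuing the earlier sketch, that adjustment is just nudging dials:

```python
# A parent nudges the dials after reading a digest (reusing the sketch above).
profile = FrictionProfile()
profile.reality_routing = min(1.0, profile.reality_routing + 0.1)      # more "go and try it"
profile.boredom_tolerance = max(0.0, profile.boredom_tolerance - 0.1)  # ease off this week
system_prompt = build_system_prompt(profile)  # takes effect next session
```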
If this is done well, it becomes a collaboration between parent and child over time - a gradual off-ramp heading towards independence.
Which is, after all, the point of parenting.
The privacy objection
The immediate critique is obvious - this is surveillance. And sometimes it could be.
A poorly designed system (especially a hidden one) teaches the child that power watches silently, that love is conditional on performance, and that the correct move is managing appearances. That’s corrosive.
But “privacy” here can’t be treated as a binary. In positive parenting, privacy is always negotiated. It expands with trust, maturity, and demonstrated judgement.
The design goal, then, is not “monitor everything forever”. The goal is “explicit and bounded” visibility, coupled to an autonomy off-ramp.
The child should know the rules. They should know what is being summarised, what triggers escalation, and what remains private.
They should also have a pathway to earning more privacy over time - not as a punishment/reward game, but as a developmental contract.
If you do that, the monitoring itself can become part of the curriculum - a way of learning that privacy is real, that trust is reciprocal, and that independence is something you grow into.
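Concretely, the “developmental contract” could be no more than a visibility schedule. A sketch, with tier names and thresholds invented for illustration:

```python
# An autonomy off-ramp as data: visibility narrows as demonstrated trust grows.
PRIVACY_TIERS = [
    # (trust_level, digest fields the parent may see)
    (0, {"narrative", "alerts", "patterns", "reinforcement", "friction_suggestions"}),
    (1, {"narrative", "alerts", "patterns"}),
    (2, {"alerts", "patterns"}),
    (3, {"alerts"}),  # mature: safety alerts only; everything else stays private
]

def visible_fields(trust_level: int) -> set[str]:
    """Return which digest fields the parent can see at this trust level."""
    for level, fields in reversed(PRIVACY_TIERS):
        if trust_level >= level:
            return fields
    return PRIVACY_TIERS[0][1]
```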
The deeper ethical question
Even if you buy the premise, the hard question remains - who decides what “constructive” means?
Adding friction is not automatically virtuous. The line between constructive friction and controlling friction is thin.
A parent could configure the system to cultivate skills (patience, verification, reflection, emotional regulation) or they could configure it to enforce beliefs and conclusions. A platform could hard-code a friction philosophy that just happens to align with its incentives. A government could mandate a uniform approach that fits nobody.
So the question isn’t merely “can we do this?” It’s “what governance makes it legitimate?”
A few design considerations feel non-negotiable if this is to be healthy.
First, transparency. The child should not be in a hidden experiment. The friction needs to be legible to them - not every detail, but enough for them to understand why the agent is resisting them.
Second, an autonomy off-ramp. The point is not obedience. It’s independence. If the friction profile doesn’t evolve as the child matures, the system becomes a cage.
Third, a distinction between cultivating skills and enforcing conclusions. The best version of this teaches epistemics (and how to think critically) rather than dictating what to think.
Fourth, guardrails against exploitation. A friction-positive parenting tool cannot quietly become a retention engine in disguise. If the business model depends on maximising time-in-loop, then it will eventually corrupt the friction philosophy.
Fifth, care around emotional dependence. The place where friction is most ethically fraught is also where it matters most - loneliness, distress, late-night spirals. Too little friction and you get addiction-by-comfort. Too much friction and you get cruelty dressed up as “toughening up.” That boundary is not something we can leave to default settings.
And finally, the classic TrustIndex question about Portability. If this is a platform, and the child’s (and family’s) development is mediated by it, then who controls the logs, and how easy is it to move, or at least to leave?
What’s the point?
The reason to sketch an idea like this isn’t to propose a new product (though I do think it could easily be vibe coded into an Open Source project). My goal here is to make a point about agency. The privacy and ethics questions are genuinely complex and hard to solve, which is precisely why we need to discuss them now. We chose not to do that with Social Media, and we all know where that got us.
If we can imagine friction-positive systems (and if we can already build versions of them with today’s tools) then the Cocoon future isn’t pre-ordained. We are not stuck choosing between “let the incentives do the parenting” and “ban it all”. We can choose to make the design of ‘friction’ visible, and we can choose defaults that build people rather than sedate them.
That is the real promise of a Cocoon worth living in.
Join in as we track this slope and stay up to date. You can always find the current dashboard at TrustIndex.today and subscribe for regular updates as new Signals arrive, weekly Briefings are published, and new Reports are released.

