Can You Avoid 'AI Lock-In'?
There’s a real risk of getting locked into a single AI platform as your AI-mediated interface thickens. But taking a different view of 'Portability' may help protect you from this.
The more a system knows about your work, your preferences, your tone, your projects, your history, the more it starts to feel like the most natural place to think. That same continuity then becomes a switching cost. If the platform is also the interface through which you plan, decide, buy, write, organise and delegate, then the lock-in isn’t primarily technical. It’s behavioural.
Portability is sometimes presented as the antidote. But the way we talk about Portability often makes lock-in feel inevitable.
This Briefing reframes what Portability can mean, and why rising Autonomy quietly amplifies it.
The ‘obvious’ Portability frame
Portability is often framed as a “right of exit”. I did that myself in the initial “vision” post that launched the TrustIndex.
This framing is naturally borrowed from places where it feels clean and morally obvious - you keep your phone number when you change telcos (Mobile Number Portability), or you can switch banks (if you’re lucky) without your whole financial life getting trapped in one institution. The intuition is simple - providers shouldn’t be able to hold your data and identity hostage.
It’s a compelling frame, and it maps neatly onto how people tend to think about AI platforms. If one company becomes “the interface” to your life, surely you should be free to take your AI-Mediated Cocoon and move it elsewhere.
The trouble is that this framing quietly suggests lock-in is the natural end state, and that Portability is a kind of bureaucratic afterthought. It also ignores the real complexity around AI memory and platform features that function like an AI equivalent of the ‘social graph’ lock-in relied upon by social media platforms.
If Portability means moving your Cocoon, then the thing you’re trying to move is not a dataset. It’s a living relationship - projects, habits, context, and a growing stack of small decisions that sit underneath your daily work. The more useful a system becomes, the more it encourages you to store those decisions inside it, because it makes the next interaction smoother. Over time, the system isn’t just responding to you. It is shaping how you ask, what you notice, what you tolerate, and what you expect.
Under that framing, Portability starts to look like an export button that can never be good enough. Even if you can download a transcript, you can’t export the interaction quality that came from a specific model’s behaviour. Even if you can export preferences, you can’t export the subtle calibration you built up through months of seeing how it fails. Even if you can export artefacts, you can’t export the way the system has started to become the default surface you use to expand your thinking.
So the old Portability frame tends to land in a predictable place - Portability is hard, lock-in is inevitable, and the best system becomes the least portable simply because it becomes the most used.
That conclusion has been haunting the discussion. But there is another option - what if Portability is already improving, but just not as migration?
The subtle shift people are already making
If you watch what the most capable users actually do, you’ll see a pattern that doesn’t fit the “export and switch” story.
They distribute their thinking.
They’ll ask the same question in more than one place, not because they are indecisive, but because they’ve learnt that model responses vary in quality and depth in ways that matter. A plan that feels sharp in one system can feel vague in another. A confident answer in one can trigger a caveat in another. One model will notice an edge case. Another will miss it entirely.
That variance is the very thing they are using. It’s also one of the most important tools for expanding your thinking, because it reveals where the problem is under-specified, where the model is bluffing, and where your own assumptions are doing the work.
In traditional machine learning, this is the classic logic behind ensembles - when individual outputs are noisy, diversity stabilises the result. You are less exposed to any one model’s blind spots. You get a more robust outcome, not because any single response is perfect, but because the differences between responses reveal structure.
In social contexts, people point to the “wisdom of the crowd”, and in its more formalised form it shows up in prediction markets.
In practice, people already do this as a kind of informal quality control with AI. They treat multiple models the way you might treat multiple colleagues - one is good at synthesis, another is good at critique, another is good at generating options. Then they contrast and compare to see what survives contact with each system.
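This habit can be sketched in a few lines of code. The sketch below is purely illustrative, assuming stubbed model calls with canned answers; in practice each function would be an API call to whichever platform you actually use, and the overlap heuristic is one crude stand-in for noticing disagreement.

```python
# A minimal sketch of the "ask several models, then compare" habit.
# The model calls are stubbed with canned answers.

def ask_model_a(question: str) -> str:
    return "Ship the migration now; the schema change is backwards compatible."

def ask_model_b(question: str) -> str:
    return "Ship the migration, but note the schema change breaks old clients."

def fan_out(question: str, models: dict) -> dict:
    """Pose the same question to every model and collect the answers."""
    return {name: fn(question) for name, fn in models.items()}

def word_overlap(a: str, b: str) -> float:
    """Crude disagreement signal: Jaccard overlap of the words used."""
    def clean(s: str) -> set:
        return set(s.lower().replace(",", " ").replace(";", " ").replace(".", " ").split())
    wa, wb = clean(a), clean(b)
    return len(wa & wb) / len(wa | wb)

answers = fan_out(
    "Should we ship the migration this week?",
    {"model_a": ask_model_a, "model_b": ask_model_b},
)
overlap = word_overlap(answers["model_a"], answers["model_b"])
models_disagree = overlap < 0.8  # low overlap is a prompt to look closer
```

The point is not the heuristic, which is deliberately naive, but the shape: one question, several surfaces, and an explicit comparison step that you own.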
The important point is that this behaviour produces a form of Portability without any formal Portability feature. The user isn’t moving a Cocoon in totality. They’re refusing to let a single platform become the only place their Cocoon can exist.
Once you notice this, the Portability dial starts to look less like a legal right and more like a cognitive capability - one you can control yourself, right now.
Portability as Distributed Cognition
Portability, under this frame, is the ability to distribute and then reconstitute your thinking across platforms.
That sounds abstract until you define the unit that actually moves.
The unit is not “the chat”, and it’s not even “the memory”. The unit is a workflow of thought - you pose a problem, you generate options, you compare and stress-test, you synthesise constraints, you decide, and then you act. The output might be a document, a code change, a plan, an email, a purchase, a meeting agenda, or a decision you want to live with. What makes it yours is not where it was generated, but how you orchestrated its formation.
When Portability is framed this way, you stop waiting for a perfect export mechanism to appear. Instead, you start paying attention to whether your cognitive workflow can move smoothly between surfaces.
The lock-in story changes too. Lock-in is no longer just a matter of data captivity. It becomes a matter of process captivity. If your best outcomes depend on a single platform’s particular personality, interface and memory, then Portability is genuinely low even if you can technically export your data. If, on the other hand, your workflow is designed to run across multiple systems, then Portability can be high even if exports are mediocre, because you are not relying on any one system to hold the whole thread. And if one platform begins to tighten the screws, you can respond by reducing your dependence on it.
This reframing doesn’t magically dissolve lock-in. It relocates the battleground. The question then becomes - how easily can you keep continuity while your tools change?
Autonomy amplifies Portability
At low Autonomy, distributed Portability is mostly a human practice. You run the workflow yourself - you copy, paste, compare, and curate.
As Autonomy rises, that manual work starts to become delegable.
The shift is subtle at first. Asking multiple models becomes delegating to multiple agents. Comparison becomes a task. Synthesis becomes a task. Cross-checking becomes a task. The ensemble becomes active rather than purely interpretive.
This is where Minsky’s “Society of Mind” shape becomes visible - not as an internal architecture inside a single model, but as an external behaviour across platforms. You naturally end up with roles - an agent that proposes, an agent that critiques, an agent that checks constraints, an agent that looks for missing information, and an agent that turns a plan into actions. You don’t have to label or formalise it that way for the pattern to emerge.
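A toy sketch of that role pattern, with each role stubbed out by a plain function (in practice each role could be a different agent on a different platform; all names and canned outputs here are invented):

```python
# Proposer, critic and constraint-checker as separate roles.

def proposer(task: str) -> str:
    return "Plan: migrate users in one big batch over the weekend."

def critic(plan: str) -> list:
    return ["No rollback step is described."]

def constraint_checker(plan: str, constraints: list) -> list:
    """Return the constraints the plan does not even mention."""
    return [c for c in constraints if c.lower() not in plan.lower()]

def run_roles(task: str, constraints: list) -> dict:
    plan = proposer(task)
    return {
        "plan": plan,
        "critiques": critic(plan),
        "unmet_constraints": constraint_checker(plan, constraints),
    }

result = run_roles("Migrate user accounts", constraints=["rollback"])
```

The roles stay stable even if you swap which model plays which part - which is what makes the pattern emerge without anyone formalising it.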

That’s the coupling - rising Autonomy makes it easier to distribute the workflow, so more people do it, so Portability rises in this distributed sense.
And it matters because it pushes Portability away from being a provider-controlled feature and towards being a user-controlled capability.
Cross-model/platform arbitration becomes the bottleneck
Once you have an ensemble, a new scarcity appears. The scarce resource stops being generation. It becomes resolution.
If you involve multiple models and agents across platforms, you need a way to settle disagreements across those platforms. You need to decide which claims to trust, which suggestions to discard, which uncertainties to investigate, and when to act. You need to decide whether a disagreement is a sign of risk, a sign of ambiguity, or simply a difference in style. This is cross-model/platform arbitration.
I want to keep that boundary clear, because there is a separate story about arbitration inside a single model or a single agent system. Here, the pressure comes specifically from coordination across platforms - multiple external models, multiple training histories, multiple incentive gradients, and often multiple interfaces.
Arbitration as mechanism
When cross-platform arbitration is strong, ensembles become stabilising. Variance becomes signal. Disagreement becomes a map of uncertainty. You get better outcomes because you have built a system that can surface weak assumptions, reveal missing constraints, and reduce the probability of marching confidently into the wrong answer.
In practice, strong arbitration usually means you are explicit (sometimes only lightly explicit) about what “good” looks like for the task. Are you optimising for correctness, speed, regret minimisation, value alignment, or preserving optionality? Different tasks legitimately demand different selection signals.
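One way to make that explicit is to name the selection signal and let it drive the arbitration. A sketch with invented candidates and deliberately simple scoring heuristics - real signals would be richer:

```python
# The same candidate answers, arbitrated under different named signals.

candidates = [
    {"source": "model_a", "answer": "Act now", "confidence": 0.9, "caveats": 0},
    {"source": "model_b", "answer": "Gather more data first", "confidence": 0.6, "caveats": 2},
]

SELECTION_SIGNALS = {
    # Speed-biased: take the most confident, actionable answer.
    "speed": lambda c: c["confidence"],
    # Regret-minimising: prefer the answer that surfaced more caveats.
    "regret_minimisation": lambda c: c["caveats"],
}

def arbitrate(candidates: list, signal: str) -> dict:
    """Pick a winner under an explicitly named selection signal."""
    return max(candidates, key=SELECTION_SIGNALS[signal])

fast_pick = arbitrate(candidates, "speed")
careful_pick = arbitrate(candidates, "regret_minimisation")
```

The same inputs produce different winners under different signals - the arbitration rule, not the ensemble, is doing the work.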
Arbitration as failure mode
Most people already have an arbitration rule, even if they’ve never clearly defined it. Often it shows up as a bundle of quick heuristics - we gravitate towards the response that best fits the current task, or the one delivered with the most certainty, or the one coming from the platform we’ve mentally anointed as “the strong one”. On tired days it can be even simpler than that - whatever arrives first and feels actionable becomes the decision.
Those implicit rules explain why distributed setups can produce wildly different outcomes for different users. The ensemble isn’t doing the work on its own. The arbitration rule is doing the work. When the arbitration rule is weak, ensembles inflate confidence without increasing correctness.
This matters because models adapt to us. They mirror our style, our expectations, our inferred literacy, and the level of complexity we seem to want.
If the system is reflecting you, then “porting the interaction” is not just porting content. It is porting the loop between your prompts, your arbitration habits, and the model’s learned response posture.
As Autonomy rises, arbitration becomes more consequential. When agents are tied to actions, resolution is no longer a philosophical preference. It becomes a practical safety mechanism. It is the difference between “we considered multiple views” and “we acted on the most persuasive narrative”.
So the central mechanism in the Portability reframing is straightforward - Portability rises when you can hold a coherent thread across platforms, and that depends on how you arbitrate.
Second-order effects that actually matter
Once Portability is distributed in this way, the second-order effects stop being edge cases.
For one thing, lock-in reduces, but not because an export button got better. It reduces because the skill that matters (designing and running the workflow) sits outside any single platform. That shifts incentives. Compatibility becomes strategic. Interoperability stops being a courtesy and starts being a growth lever.
There is also a robustness gain that is hard to overstate. When model behaviour varies, an ensemble can turn variation into signal. A single model’s blind spot becomes less catastrophic. A single model’s confident mistake becomes easier to spot. A single model’s stylistic bias becomes easier to counterbalance. In the best cases, these differences become signals in their own right.
This has knock-on effects for the Reality dial as well. If one platform is trying to become the default mediation layer through which you see the world, distributed Portability disperses that mediation. You are less likely to have your preferences shaped by a single surface when your workflow naturally moves across surfaces.
But the shadow side is real. Ensembles can become confidence engines.
If a user can always find a model to endorse a plan, distribution becomes a kind of rationalisation machine. You don’t need a social-media-style bubble if you can shop for a satisfying justification across models.
Action-tied selection pressure helps here. When agents act, the world responds. Bad plans create costs. Good plans create value. In principle, that is a stronger feedback loop than pure discourse.
In practice, selection signals are messy. Outcomes are delayed, causality is diffuse, and externalities are often borne by people who didn’t opt in. This creates room for a more sophisticated failure mode - accountability becomes smeared. The ensemble can provide plausible narratives while the environment absorbs the costs.
At the same time, the economics shift. Distributed cognition is expensive by default. Tokens compound. Latency compounds. The friction you removed from “thinking” can reappear as a cost wall. And then there is a risk the memory starts to break.
Distributed workflows fragment context. Each platform may hold a partial history. Each agent may remember a different slice. You can become more robust in reasoning while becoming more fragile in continuity. That is where the Portability reframing becomes operational rather than conceptual.
TrustIndex pressure: Equality and Transparency come under pressure as evaluation and audit become the new separators, and Autonomy turns mistakes into externalities.
Convenience vs Sovereignty
This is the tension that sits underneath everything above.
For most people, the appeal of a thick Cocoon is not ideological. It’s ergonomic. A single, continuous interface makes life easier and reduces cognitive load. It makes work feel smoother. It removes the need to compare, arbitrate, and maintain continuity across surfaces.
Distributed Portability asks you to reintroduce some friction. That friction can be small - running the same question twice, forcing a critique pass, checking disagreement before acting. But it is still friction, and it is still effort. For some users it will feel like prudence. For others it will feel like a tax.
The TrustIndex frame here is measurement - there isn’t one “correct” choice. The point is to recognise the trade and make it visible.
Convenience buys you speed and cognitive relief. Sovereignty buys you optionality and a lower chance of waking up inside a single provider’s mediation layer.
The second-order impact is that this trade can become a separator. If sovereignty requires extra friction, then not everyone will choose it, and not everyone will be able to choose it equally.
TrustIndex pressure: this naturally puts downward pressure on the Equality dial as different people choose different levels of effort vs. comfort.
Memory re-forms outside the platforms
Fragmented memory is not a minor inconvenience. It is the point where distributed Portability forces a new architecture.
If you distribute your cognition, you need a way to preserve continuity across systems. At first, people patch this manually. They summarise. They carry context. They do the translation work between platforms. That’s happening now. But as Autonomy rises, the obvious adaptation is to externalise memory.
Not memory as “a transcript”, but memory as a personal substrate - a semantic store, an evidence trail, a log of decisions and constraints, a record of preferences that can be queried by whichever agent is currently working. This is where people begin to take back real control, because platforms become participants in a workflow rather than owners of the workflow.
Externalising memory also creates a new control surface. It forces explicit choices about what gets written, what gets forgotten, what is shareable, and what remains private. Those choices are not abstract governance talk. They are the practical boundary between a helpful swarm and a swarm that slowly leaks your life.
Memory sovereignty becomes a thing you build, not a thing you request.
TrustIndex pressure: external memory can relieve Portability pressure by providing continuity across platforms, while increasing Transparency and Equality pressure around provenance, access control, and who can write to (or query) the substrate.
How the reframed Portability dial behaves
With this updated framing, the Portability dial stops being mostly about export. It becomes - how easily can your cognition be distributed and then reconstituted?
A system can score poorly on the old Portability story (limited exports, fragile formats), while users are still able to forge independence by distributing their workflows. Another system can score better on exports while still leaving users constrained, because its process and memory features work to keep them inside.
Autonomy tends to push this dial upwards. Delegation makes ensembles easier, orchestration becomes normal, and cross-checking becomes cheap enough to do routinely and automatically.
But Autonomy also raises the governance requirements. When agents act, you need legible provenance. You need to know what was considered, what evidence was used, and how a decision was reached across platforms. This is where Transparency begins to matter in a more structural way - without traceable arbitration, distributed cognition becomes a machine for plausible stories.
Equality shows up sharply here too. Equality is not only access to tools. It is access to outcomes. If distributed Portability depends on cross-platform arbitration and evaluation, then the separator becomes literacy - who can design strong loops, choose strong selection signals, and maintain coherence across platforms. Two people can have the same models and live in entirely different cognitive worlds.
Intermediaries and new lock-in points
Once Portability is framed as cross-platform distributed cognition, it becomes obvious why the orchestration layer is contested.
The major platforms are not well positioned to own cross-platform orchestration, because the strength of the approach lies precisely in running across platforms. A platform can orchestrate inside its boundary, but the moment the ensemble is distributed, the centre of gravity moves elsewhere.

That creates room for new intermediaries - conductors that broker multiple models, manage the workflow, hold memory, and provide arbitration tooling.
It also creates room for something more user-owned. For some users, orchestration may arrive as generated artefacts - workflows, scripts, agent graphs, disposable pieces of coordination created on demand and then thrown away, running locally or in environments the user controls.
These paths have different TrustIndex profiles.
A SaaS conductor lowers the barrier and reduces immediate friction, which makes distributed Portability accessible to more people. It can look like an Equality win in the short run. But it also concentrates metadata, arbitration logic and continuity in a new place. If the conductor becomes the thing you can’t leave, lock-in has simply moved up a layer.
User-owned orchestration preserves sovereignty and keeps the exit options real, but it carries a higher friction cost. That friction can show up as maintenance, security burden, and a need to understand the workflow at a deeper level.
Distributed Portability doesn’t eliminate lock-in - it moves it upward. The gravitational points become the orchestrator, the memory substrate, and the cross-platform arbitration layer. The question is not whether lock-in exists, but where it concentrates and how easily users can step out of it.
TrustIndex pressure: Portability can rise at the model layer while falling at the orchestration layer if conductors become the new choke point. Transparency becomes a competitive axis (arbiter legibility), and Equality shifts towards who can choose and switch orchestration.
Lifecycle trust for swarms
When you take back the orchestration layer, you also inherit the cleanup duty. This is where the story becomes unavoidable. A world of swarms is a world of lifecycle governance.
If you can spawn agents across platforms, you need to know where they run, what permissions they hold, how those permissions are revoked, and what happens when an agent is disposed of. You need to know what “forgetting” means when memory is externalised, and what auditability means when decisions crossed multiple systems.
These are not boutique security questions. They are the trust substrate for distributed Autonomy.
If you can spawn, act, remember and forget safely, distributed Portability becomes a source of resilience.
If you can’t, it becomes a source of silent risk.
TrustIndex pressure: high Autonomy without lifecycle controls is effectively low Transparency with a high blast radius, while Portability depends on safe swarm instantiation and disposal.
What this reframing changes
Reframing Portability as distributed cognition changes where you look.
It shifts attention away from the fantasy of a perfect export and towards the mechanics that actually determine whether you can move - ensembles, cross-platform arbitration, and memory sovereignty.
It also clarifies why Autonomy amplifies Portability in this sense. Delegation makes distribution easy enough to become normal, and once distribution is normal, dependence becomes harder to concentrate. The economics change.
Portability, here, is not a feature. It is a capability - the ability to keep your continuity intact while your tools change, and to hold a coherent thread while your work moves across platforms.
As that capability spreads, the Portability dial rises for reasons that don’t look like traditional Portability at all - and that is exactly why it has been easy to miss.
Seen through the TrustIndex, Portability is no longer an export problem - it’s an orchestration problem, and Autonomy amplifies it fast. If the pressures are shifting this way, how much convenience are you willing to trade for sovereignty?
Join in as we track this slope and stay up-to-date. You can always find the current dashboard at TrustIndex.today and subscribe for regular updates as new Signals arrive, weekly Briefings are published and new Reports are released.

Regarding the topic of the article - this reframing is truly insightful. How might we practically implement such behavioural portability?
Great question Rainbow Roxy. If you're just using chatbot UIs then I'd just test your ideas and discussions across multiple platforms. If you're using coding agents then using Claude Code with the Ollama extension allows you to test and try different underlying models. And if you're experimenting with OpenClaw or similar Autonomous Assistants then they let you try different platforms/models too - but of course they bring their own issues (see the report I just released on that https://latentgeometrylab.robman.fyi/p/feb-2026-autonomous-assistants). Hope that's helpful and I'd love to hear how you experiment with this.