Latent Geometry Lab
LLMs need their heads read!
A 'Best Paper' at NeurIPS makes LLMs faster & more stable by adding a gate on each attention head. This turned attention from a free‑for‑all into a…
Dec 7 • Rob Manson
What Makes LLMs So Fragile (and Brilliant)?
They can feel like geniuses one moment & confused muppets the next. But within their internal geometry it's token-by-token “arbitration” that makes them…
Dec 1 • Rob Manson
November 2025
Does 'Latent Model' Equal 'Understanding'?
It's common to say AIs have ‘world models’ now, but is this ‘understanding’? Let's take a closer look, zooming in on Latent Models as the real units of…
Nov 24 • Rob Manson
What is a 'Latent Model'?
LLMs can create convincing plans or personas - until they suddenly fall apart. The difference isn't magic, just how "internal handles" interact. When…
Nov 16 • Rob Manson
Why aren't video codec intrinsics used to train generative AI?
There doesn't seem to be any research exploiting this existing body of data which provides a video-specific latent space that's already tuned for human…
Nov 3 • Rob Manson
Latent Confusion - The Many Meanings Hidden Behind AI's Favourite Word
‘Latent’ is a widely used word today - but are you using one meaning while other people are hearing a very different one?
Nov 2 • Rob Manson
October 2025
Do NOT Think Of A Polar Bear!
An extended response that situates Anthropic’s new "Introspection" study inside a geometric account of how large language models hold, route, and…
Oct 30 • Rob Manson
Can LLMs get addicted to gambling?
A recent study shows mechanistic evidence that large language models exhibit behavioural patterns and neurological mechanisms similar to human gambling…
Oct 27 • Rob Manson
Can you break your LLM's sense of Cause & Effect?
Try this one‑sentence prompt to test if your model binds "what things are" (cause) to "what their world allows" (effect).
Oct 26 • Rob Manson
Anthropic's Linebreaks add support for Geometric Interpretability
Anthropic's new research on linebreaks in transformer processing provides fascinating support for the Geometric Interpretability framework I've been…
Oct 22 • Rob Manson
Can you beat 17?
A tiny experiment that reveals how LLMs actually “decide” when asked to pick a random number (HINT: They almost always pick 17).
Oct 19 • Rob Manson
The '3-Process' View Of How LLMs Build 'Latent Models' in Context
Why do Large Language Models feel so brilliant one moment and so bafflingly fragile the next?
Oct 12 • Rob Manson