LLMs as we know them are already on their way out. In this video, I break down five breakthroughs that will redefine AI over the next 18 months: Diffusion LLMs (with Stanford’s Stefano Ermon), Power Attention for massive context, hidden/latent-space thinking and private chains of thought, Google’s Nested Learning for continual learning, and the most disruptive shift yet — Continuous Thought Machines (with Jaime Sevilla) as an early break from the Transformer itself.


Looping Transformer
Redundant Neural Net
“If you don’t force the model to think in a human-readable way, it naturally drifts into mixing languages, making up words, and using strange symbols.” (19:00)
