2 Comments
Interesting Engineering ++:

🧠 Ben—quick thoughts on your "healthy, with skepticism" hallucinated-worlds piece 🙏👍👏, which I enjoyed...

1. Hallucinations ≠ Randomness

You’re right—models like Genie 3 hallucinate. But DeepMind sees generation as a test of understanding. If a model can keep objects consistent across frames, that’s not noise—it’s structure. I take these as "evolutionary phases"; things will improve over time. I take more comfort from Demis’ views (vs. Sam’s—sorry to say), given DeepMind’s track record. I’m also impressed with their research-to-product integration of late.

Veo’s fluid dynamics and AlphaFold’s protein predictions suggest these models are picking up real patterns, even without direct supervision.

2. Memory Wall Is Real—but Cracking

Genie 3 struggles with long-term coherence. No argument there. But DeepMind’s quietly optimizing across training and architecture (e.g. AlphaEvolve). Video generation’s progress—from seconds to minutes—shows the wall’s eroding, bit by bit.

Still a journey, I guess. Destination unknown, but I’m hopeful. 😊🤭

3. Modular vs. Omni-Model Isn’t Binary

Modular systems are easier to trust, yup. But Hassabis is betting on fusion—language, video, and simulation in one model. Gemini’s tool use hints at a hybrid future: structured scaffolding with generative reasoning inside. Pieces of a complex puzzle that may (one day, maybe) fit together with flawless integration, as jagged as the edges presently appear.

4. AI Winter Risk Is Legit—but Context Matters

You flagged the risk of overpromising. Fair. But DeepMind’s track record—AlphaGo, AlphaFold, the whole Alpha model range—is built on solving real problems. Hassabis isn’t selling certainty, just possibility. That’s a key difference.

👀 Ummmm

Hallucinated worlds may not be fully ready for deployment—but they’re not just fantasy either. They’re starting to show internal logic. Maybe not 100% trustworthy yet, but definitely worth watching.

I also believe hallucination is a feature we shouldn’t exclude completely—or we risk discarding our venture into novelty. Just a bit more "fine tuning" required, and one day that balance could be productively applied...

Gregory Forché:

The need for a world model is not a new idea in AI; it has been there since the beginning.

The failure to adequately conceive of one, it could be said, led to the AI winter.

The image of a world model characterized as an “Omni model” makes me concerned that we may be headed in the same direction.
