AI • Technology • Innovation

For three years, AI got smarter by reading more.

Felix Ghauri

· 3 min read

For three years, AI got smarter by reading more.

Now the labs are teaching it physics.

The entire boom was built on language models. Feed them enough text and they learn to write, code and even reason.

Impressive, until you ask them to do something physical.

An LLM can describe a glass falling. It cannot calculate the trajectory, the shatter pattern or where the shards will land. It can explain physics but cannot catch a ball.
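To make the contrast concrete: "calculating the trajectory" is a few lines of plain kinematics. This is a toy sketch (illustrative numbers, no air resistance, nothing from any lab's actual system):

```python
# Computing, not describing, where a dropped glass lands.
# Simple projectile kinematics; all values are illustrative assumptions.

G = 9.81  # gravitational acceleration, m/s^2

def landing_point(height_m, vx_ms):
    """Fall time from height_m, and horizontal distance travelled."""
    t = (2 * height_m / G) ** 0.5  # from h = (1/2) * g * t^2
    return t, vx_ms * t

# A glass knocked off a 1 m table with 0.5 m/s of horizontal speed:
t, x = landing_point(1.0, 0.5)
print(f"hits the floor after {t:.2f} s, {x:.2f} m from the table edge")
```

The point is not that this snippet is hard; it is that a language model predicts the *words* of such an answer rather than running the physics, and a world model is meant to close that gap.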

That’s the wall.

And the biggest names in AI are moving to climb over it.

Yann LeCun has spent years arguing that scaling language models alone would hit limits. He is now putting his weight behind world models: systems that learn how objects move through space, how forces interact, and how cause becomes effect.

Fei-Fei Li, who helped teach machines to see through ImageNet, has launched World Labs and shipped one of the first commercial world models.

Google DeepMind is pushing Genie. General Intuition just raised $134 million to teach agents spatial reasoning.

From predicting words to modelling worlds.

💬 Join the conversation on LinkedIn

View on LinkedIn →

Felix Ghauri

Applied AI Practitioner · Founder, Futures Forum

Felix helps organisations navigate AI and exponential change. He writes about technology, geopolitics, and the future of work.

Thinking about AI in your workflow?

Let's discuss what might work for you.

Let's Talk