Discussion about this post

The idea that intelligence is compression rather than expansion really clicks. Current LLMs essentially try to memorize everything instead of abstracting the key patterns, the way humans do with episodic memory. Your example of remembering 'fire burns' as a compressed story, rather than raw sensory data from every encounter, makes the point crystal clear. It's interesting that Chain-of-Thought basically tricks models into building temporary narratives, but you're right that baking this structure directly into the architecture would be far more robust than relying on prompts to coax it out.
