How Geometry Might Unlock Human-Like Intelligence in AI: Why Your Brain Gets Lunch (and Your AI Can’t)

By Frederick d’Oleire Uquillas, Science Communications Fellow for the AI Lab

Say you walk into your favorite diner. You know the drill – find a seat, order a grilled cheese, maybe throw in a side of fries. The tables might be a bit shuffled, the server might be new, but despite these little changes, you still know exactly what’s going on. You’re not confused. You don’t need to relearn the concept of “diner” every single time.

Why? Because your brain is a generalization machine. It’s constantly recognizing patterns, identifying what matters, and discarding the stuff that doesn’t. You’ve built a mental blueprint of “diner” – not the specific people or where the napkins are stacked today, but the abstract idea of it. The same rules generally apply, even if the details shift.

As one might say: the universe rhymes, but never repeats. And your brain is fluent in that rhyme scheme.

Two professors from Princeton University, Tim Buschman (psychology and neuroscience) and Gautam Reddy (physics), alongside Princeton Neuroscience Institute postdoc Adel Ardalan, have received a seed grant from the AI Lab to figure out how that magic works, both in people and in AI. If successful, they could bring us one step closer to artificial general intelligence – the kind of AI that doesn’t get totally confused when asked to do something slightly different from what it was trained for.

Generalization: The Secret Sauce of Intelligence

At the core of this project is the idea of generalization: our ability to apply knowledge from one situation to another. It’s what lets you recognize that your dog and your neighbor’s dog are both dogs, even if one is a Chihuahua and the other looks like it could fight a bear and win.

The researchers draw from a branch of mathematics called group theory to describe two flavors of generalization:

  1. Invariance: Ignore the noise (i.e., irrelevant changes). A grilled cheese is a grilled cheese whether it’s cut diagonally or straight.
  2. Equivariance: Track the changes that do matter. If it’s extra crispy today, that tells you something new; if it’s charred, you might want to skip lunch.
Source: Adobe Firefly

These two modes are how both brains and (potentially) artificial networks learn to recognize, adapt and, crucially, not freak out when things change systematically.
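To make those two flavors concrete, here’s a minimal toy sketch in Python (our illustration, not the researchers’ model). A tiny two-number “sandwich” gets flipped by a transformation g, standing in for an irrelevant change; one feature ignores the flip (invariance), while another tracks it in a predictable way (equivariance).

```python
import numpy as np

# Toy illustration (not the team's model): a 2-pixel "grilled cheese" and a
# transformation g that flips it, standing in for a change in presentation.
def g(x):
    return x[::-1]

def invariant_feature(x):
    # Total "cheesiness" ignores the flip entirely: f(g(x)) == f(x)
    return x.sum()

def equivariant_feature(x):
    # Doubling every pixel commutes with the flip: f(g(x)) == g(f(x))
    return 2 * x

x = np.array([3.0, 5.0])
assert invariant_feature(g(x)) == invariant_feature(x)                    # invariance
assert np.allclose(equivariant_feature(g(x)), g(equivariant_feature(x)))  # equivariance
```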

The Pink Marker Test

Let’s say you’re looking for your favorite pink marker to draw with. Instead, you find a new red marker. Despite the change in color, your brain doesn’t start from scratch. It recognizes: still a marker. Still cylindrical. Still good for writing. What’s wild is how your brain represents this information.

Neurons (or collections of neurons) tuned to color or shape fire in a way that reflects this shift in a sort of geometrical dance across a high-dimensional space. You can think of the brain as mapping concepts not just by what they are, but by how they relate to similar things. When you see a pink marker, your brain represents that concept as a point on a circular, ring-like pattern of neural activity in a complex, multi-dimensional space. When you see a red marker, the spot on the mental ring shifts slightly in that multi-dimensional, abstract space but remains on the ring. Even though the color has changed, the geometry, the form, of the representation stays intact.

This is mind-blowing because it suggests that what matters isn’t just the specifics of an experience (pink or red), but the shape of how the brain organizes knowledge. The brain doesn’t just recognize the object. It recognizes the structure of its own understanding. It treats both the pink and red markers as variations on the same theme. That’s what lets it catch the rhyme – the deeper pattern – even as the lines of experience change.

It’s a bit like hearing the same melody played in a different key – the notes shift, but the structure of the song is intact.
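To see the ring picture in miniature, here’s a toy sketch (our own construction, not the lab’s neural recordings or model). Each hue is a point on a circle that gets embedded, without distortion, into a 50-dimensional “population activity” space. Shifting from pink to red slides the point along the ring, but the ring itself keeps its shape.

```python
import numpy as np

# Toy sketch (our construction, not the team's data): map each hue to a point on
# a circle, then embed that circle isometrically in a 50-dimensional "neural" space.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(50, 2)))   # 50x2 matrix with orthonormal columns
embed = Q.T                                     # 2 -> 50 dimensions, shape-preserving

def neural_code(hue_degrees):
    theta = np.deg2rad(hue_degrees)
    return np.array([np.cos(theta), np.sin(theta)]) @ embed

pink, red = neural_code(330), neural_code(0)

# The hue shift moves the point along the ring (the change is tracked)...
print(round(float(np.linalg.norm(pink - red)), 3))    # small, systematic displacement

# ...but the ring keeps its shape: every hue sits at the same distance from the
# center, so the geometry of the representation stays intact.
radii = [np.linalg.norm(neural_code(h)) for h in (0, 90, 180, 330)]
print(np.allclose(radii, 1.0))                        # True
```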

What This Means for AI (and Why Transformers Might Be Just Guessing)

Today’s AI models, like your friendly neighborhood chatbot, are powerful but quite literal. They need to be trained on thousands, or millions, of examples to “get” anything. Want an AI to know what a grilled cheese is? Better show it every possible variation from every menu on Earth.

But what if we could build inductive biases – a kind of starter pack of how the world works – right into the system? That’s what these researchers are exploring. By learning from how the brain encodes geometry and structure, we might design AI that starts with some flexible expectations about the world.
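One familiar example of such a built-in bias, sketched below as a toy in Python (a standard trick from existing networks, not the specific architecture this team is proposing), is weight sharing: a circular convolution is translation-equivariant by construction, so the network never has to learn from data that a shifted input should produce a shifted output.

```python
import numpy as np

# Toy sketch of one well-known inductive bias (not the grant's proposal): weight
# sharing. A circular convolution is translation-equivariant by construction.
rng = np.random.default_rng(1)
kernel = rng.normal(size=3)

def circular_conv(x, k):
    n = len(x)
    return np.array([sum(k[j] * x[(i + j) % n] for j in range(len(k)))
                     for i in range(n)])

x = rng.normal(size=8)
shifted = np.roll(x, 2)

# Equivariance check: convolving a shifted input equals shifting the convolved output.
print(np.allclose(circular_conv(shifted, kernel),
                  np.roll(circular_conv(x, kernel), 2)))   # True
```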

Imagine giving an AI the ability to recognize the idea of “diner,” instead of just memorizing images of diners from Yelp. That’s the goal here: to make AI that doesn’t just memorize, but understands.

What’s Next?

The team is diving into how natural and artificial neural networks use geometry – ring shapes, transformations in high-dimensional space, all that spicy mathematical goodness – to generalize. The big questions they’re asking:

  • Can we uncover the architectural features (i.e., the brain’s secret sauce) that make generalization possible?
  • Can we then bake those into AI models, maybe even new transformer architectures, to teach them to learn like we do?

If their research pans out, it could help shift AI from being a pattern-matching powerhouse to something more like a flexible thinker – capable of abstract understanding, not just statistical mimicry – and move us closer to true artificial general intelligence. The kind that doesn’t need 40,000 cat photos to recognize a cat.

Transformers are the reigning champions of modern AI. They’re the engine behind chatbots, language models, and many image generators. What makes them special is their ability to focus attention on different parts of the input – like skimming a sentence and zeroing in on the key words to make sense of the meaning. But for all that power, they remain excellent at pattern recognition and not so great at flexible understanding.
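For readers who want to see that “attention” idea in miniature, here’s a sketch of standard scaled dot-product self-attention in Python (the textbook mechanism, not whatever new architecture this project might produce): each token scores every other token for relevance and returns a weighted mix of the inputs.

```python
import numpy as np

# Minimal sketch of the standard attention mechanism behind transformers
# (textbook scaled dot-product attention, not this project's architecture).
def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])                     # relevance of each token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)     # softmax over tokens
    return weights @ V                                          # weighted mix of inputs

rng = np.random.default_rng(2)
tokens = rng.normal(size=(5, 16))          # 5 "words", each a 16-dimensional embedding
out = attention(tokens, tokens, tokens)    # self-attention: tokens attend to each other
print(out.shape)                           # (5, 16)
```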

On the left, the human brain represents the concept of a cat using a fluid, ring-like geometry, suggesting flexible, abstract generalization. A single cat icon floats nearby, enough for the brain to “get it”. On the right, the AI “brain” relies on raw data: a matrix of identical cat images and streams of binary code. Unlike the human side, it requires massive input to build the concept from scratch. The image illustrates the contrast between human cognition’s structure-preserving shortcuts and AI’s brute-force pattern recognition. Source: Ideogram

Why It Matters

This research sits at the intersection of neuroscience, physics, and AI, and it’s the kind of bold, foundational science that doesn’t just patch a hole but reimagines the whole system. If we crack the code of how the brain generalizes, we may be able to train AI to do the same – faster, smarter, and with far fewer examples.

And maybe one day, you’ll ask your AI assistant to book lunch at a new diner, and it’ll understand what that means…without needing a three-week training seminar on sandwiches.

So give a little cheer for seed grants – the tiny sparks that can light big fires in science.

Stay tuned. We’ll be following these researchers as they explore how the mind maps the world, and how AI might follow in its footsteps.


