Did you know there’s a World Memory Championship? It’s a competition where people try to be the best at remembering things.
For example, imagine you have two minutes with a randomly shuffled deck of cards. How much of its order could you memorize? Believe it or not, there are “memory athletes” who can commit the complete order of all 52 cards to memory in just two minutes, then immediately recite it back.
And these people aren’t savants or anything. They just have techniques that they practice a lot.
The most popular one, by far, is called “the memory palace.” To use this, you mentally project yourself into a space that you know well — your childhood home perhaps — and visualize yourself encountering things you need to remember.
Right now, in your head, walk through your front door. Open the coat closet and imagine JFK handing you a nine-iron. This is the jack of clubs. Close the door to see your beloved grandmother with her arms wide open. This is the queen of hearts.
And so on. Try it. Using this technique for the first time, you could easily remember a sequence of a dozen cards in about 10 minutes. With practice, you’d get better.
The Spatial Dimension of Thinking
This works because our brains are spatially wired. We remember things in part by remembering where they are. We remember things from context. Our memory is not just of the thing itself but also of the surroundings in which we encounter it.
So, what happens when we strip out the spatial characteristics from information? When we move information out of its home, re-package it and deliver it, will we change the information in some way? Will we change how we process it?
This shift is already underway with AI. Our relationship to information is changing: Instead of existing in a specific “place” that we seek out, information is increasingly detached from location and simply delivered to us.
In the future, information is probably going to be delivered far more often than it’s found. Over the long term, what will it mean when we start “DoorDashing” all of our information?
I can think of three clear consequences.
How Is AI Reshaping Our Thinking?
The shift from finding information in a specific location to having AI simply deliver it will have three major consequences.
1. Lack of Information Adjacency: This is the loss of serendipitous discovery that accompanies activities like getting lost in Wikipedia. Delivered information strips out the surrounding, related facts, which can diminish context and the ability to connect disparate ideas — a key element of creativity.
2. Murkier Provenance: Information that is simply delivered lacks the contextual clues and effort that accompany searching, making it harder to subconsciously validate its source and credibility. This heightens the risk of manipulation and forgery.
3. Illusion of Clarity: AI often presents answers to specific questions in a clean, narrow way, which can strip out the messy, subjective and disputed nature of reality. This lack of nuance creates the mistaken impression that a topic is more settled than it is.
1. Lack of Information Adjacency
This problem is closely related to one of the core arguments of the anti-AI movement: Getting information delivered will make us all dumber. But let’s break the claim down a bit.
When considered spatially, information has “adjacency,” meaning there’s related information surrounding what we’re looking for. In the process of finding what we want, we see incidental, adjacent information, which fleshes out our target. That adjacent information gives it more dimension or context.
Imagine you’re looking for Elvis’s birthdate. In finding that, you might be incidentally exposed to all of the following information, which you’ll likely internalize at some level.
- Elvis was born in Tupelo, Mississippi.
- Elvis had a stillborn twin.
- Elvis was his parents’ only surviving child.
- There are organizations dedicated to celebrating and venerating Elvis’s life, which implies the magnitude of his stature in 20th-century entertainment history.
None of this is what you were looking for, but these facts give you more context and information. In finding the specific piece of information you were looking for, you’ve gained a greater understanding of it.
I think everyone has gotten lost in Wikipedia at some point. I vaguely remember looking for some information about David Beckham and somehow ending up reading the entry for Pontius Pilate about 20 minutes later. Was this inefficient? Maybe. Was it unproductive? Not in the big picture. I doubtless absorbed some information on that journey, and I enjoyed the general pursuit of learning along the way.
AI will essentially “teleport” information to us without any transitional states. The saying “getting there is half the fun” may be a cliché, but how much would we lose if we could teleport without the need for physical travel? Likewise, if we lose the transitional states that accompany searching for information, what have we lost?
Creativity and the ability to think abstractly are fundamentally a process of connecting disparate ideas. The formation of a new idea literally involves forging new synaptic connections between neurons. Part of an intellectual life involves noticing things. We collect and synthesize experiences over time, then use the resulting matrix of information to develop context around future ideas. Then, every once in a while, we put together two things we noticed and come up with a new idea.
When we teleport information, will it reduce our ability to make these connections? If it does, the question becomes how do we mitigate it? How do we increase our chances of finding serendipitous information?
I think AI vendors are doing a credible job of solving this problem at the moment. Asking a question of an AI engine generally brings back a “wider” scope of information than what was specifically queried. Though some people are annoyed by this, I think there’s value in doing it.
I also like the tendency of some engines to prompt the user with additional information. This isn’t exactly the same as passing through transitional states on the way to finding information yourself, but it’s better than the alternative.
In some senses, this is like eating your vegetables. AI infrastructure would do society well to provoke curiosity. I hope AI vendors continue with this tone, perhaps even more explicitly than they do now. We need to ask how we can envelop the user in the larger context of information and encourage them to explore further transitional states.
2. Murkier Provenance
If you go to your kitchen and make yourself a sandwich, you have a reasonable knowledge of where that sandwich came from. Sure, you didn’t watch the peanuts get harvested or turned into spread, but you made the sandwich and combined the ingredients yourself, so your confidence in the origins and safety of that sandwich is pretty strong.
But what if someone just comes up to you on the street and hands you a sandwich with no context? Where did the sandwich come from? Who made it? What were the ingredients?
To be clear, life is about managing risk, and maybe the person who handed you the sandwich is your town’s beloved Sandwich Bandit. In that case, you have at least some idea that his sandwiches are fine. But even here you have some level of context.
In the process of searching for information, we develop a contextual belief that the information is reliable. We can evaluate sources, not only from the information provided but also from the contextual setting the information is in.
In fine art, there’s a concept called “provenance.” This is the history of ownership of a piece of art since it was created. A complete provenance for a painting will explain who has owned it and where it has hung for the entirety of its existence. This documented history helps establish the authenticity of the work. Art without provenance, or with significant gaps in it, may be suspected of fraud or forgery.
The same is true of information. Do we trust the information we seek out more than the information that’s simply given to us? Does investing time and energy in discovering a fact give us more belief in the provenance of the information?
You might say, “This is all simply about our own belief. It says nothing about whether or not the information is actually valid.” By this line of reasoning, we should simply evolve to trust information that’s delivered to us.
This view carries a significant risk of manipulation, however. When we seek out information, we’re subconsciously validating it. Every detail, contextual clue or small obstacle along the way reinforces its credibility because it shows us how the information is connected, sourced, and situated. The effort we put into finding something gives us a trail of context to evaluate. So, in many ways, we’re checking on the truth of the information without even realizing it.
If you make your own sandwich, you’re much harder to poison.
3. Illusion of Clarity
When I started college, I took Psychology 101. In the first week, we learned all about Sigmund Freud. Then, the next week, we started in on Jung. I remember thinking, “Wait … this isn’t what we learned last week. Why is this totally different?”
And that’s when I learned that a lot of psychology is simply disputed opinions and theories. Freud believed one thing. Jung believed another. The point of the class was to discuss the disputes between the different figures in psychology. The professor was, incidentally, teaching me how to weigh the validity of each theory.
This is part of the context. When we search for information, we often find things we weren’t looking for. We find information that contradicts what we believe. We find Jung when we were searching for Freud.
When information finds us, however, it’s usually in response to a specific question. We ask, “What did Freud believe?” rather than searching for something like, “Freud’s theories on psychology.”
Questions tend to be narrower and more focused, so the answers are similarly narrow. Information delivered this way is clean and clear, with minimal ambiguity, but it can strip out important context and give us the mistaken impression that something is far more settled than it really is.
In reality, information is messy, subjective and often disputed. A key to understanding something is understanding the terms on which it’s debated. There’s the intellectual center of something, and then there are the edges where things get more vague and the shadows where competing theories and disagreements emerge.
“Leveling” this information not only prevents a truly rounded perspective, but it’s also ripe for abuse. When information is presented as fact without nuance, the consumer is at the mercy of whoever controls the perspective of the AI engine itself, or of the biases in whatever corpus of information it was trained on.
The origin of information holds clues to a broader context. When information is processed and delivered, all of that context tends to collapse into itself or fade away in the brighter light of a clear explanation.
We Need to Think About AI Carefully
As AI reshapes how information reaches us, we’ll need to be mindful of what’s lost in the process: the adjacency, the provenance and the ambiguity that give knowledge its depth. Efficiency isn’t the same as understanding, and clarity isn’t the same as truth. The challenge ahead is not just to receive information faster, but to preserve the richness and context that make it meaningful.
