Your Brain Learns by Being Wrong
Dopamine doesn't fire when you get a reward. It fires when you get a reward you didn't expect. This single discovery about prediction errors rewired everything we know about learning, attention, and why habits are so hard to break.
In the 1990s, Wolfram Schultz, then at the University of Fribourg in Switzerland (he later moved to Cambridge), stuck electrodes into the midbrain of monkeys and watched their dopamine neurons fire. What he found broke the most basic assumption about how reward works.
Dopamine neurons don't fire when you get a reward.
They fire when you get a reward you didn't expect. If the reward is fully predicted, the neurons go silent. If a predicted reward fails to show up, they actively suppress their firing rate below baseline. The signal isn't "that was good." The signal is "that was different than I thought."
Schultz, together with Peter Dayan and Read Montague, published this in Science in 1997, and it became one of the most cited findings in modern neuroscience. Not because it told us something interesting about monkeys. Because it told us something fundamental about how brains learn.
They learn from being wrong.
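Schultz's three cases map directly onto the error term used in reinforcement-learning models of dopamine. A minimal sketch in Python (the function name and the 0/1 reward values are illustrative, not from the paper):

```python
# The "dopamine" signal is not the reward itself, but the gap
# between the reward received and the reward predicted.
def prediction_error(reward, predicted):
    return reward - predicted

# Unexpected reward: strong positive signal (neurons fire).
print(prediction_error(reward=1.0, predicted=0.0))   # 1.0
# Fully predicted reward: no signal (neurons go silent).
print(prediction_error(reward=1.0, predicted=1.0))   # 0.0
# Predicted reward omitted: negative signal (firing dips below baseline).
print(prediction_error(reward=0.0, predicted=1.0))   # -1.0
```

Three situations, one number. "That was good" never appears anywhere in the computation; only "that was different than I thought."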
The Error Is the Signal
If your brain is a prediction machine (and the first article in this series laid out the evidence that it is), then the most important signal in the entire system isn't the prediction itself. It's the mismatch. The moment reality deviates from what you expected.
That mismatch is called a prediction error. And it's the only thing that forces your brain to update its model of the world.
Andy Clark, a philosopher at the University of Sussex, described the brain as a system that's constantly trying to minimize prediction error. In his 2013 paper "Whatever Next?" in Behavioral and Brain Sciences, he argued that the brain has essentially two options when it encounters a mismatch. It can update its predictions to match reality. Or it can act on reality to make it match its predictions.
Think about that second one. When you feel cold and put on a jacket, your brain didn't passively receive the signal "cold." It predicted a certain body temperature, detected an error, and then moved your body to eliminate the mismatch. Perception and action are both in service of the same goal. Minimize the error.
Why You Didn't See the Gorilla
Prediction errors explain one of the strangest findings in psychology.
In 1999, Daniel Simons and Christopher Chabris at Harvard ran a study where participants watched a video of people passing a basketball and counted the passes. Halfway through the video, a person in a gorilla suit walked into the frame, faced the camera, beat their chest, and walked off.
Roughly 50% of participants didn't see the gorilla.
Not "didn't notice right away." Didn't see it at all. When asked afterward, they refused to believe it had happened until they rewatched the video.
This isn't a vision problem. Their eyes received the gorilla data just fine. But their brain was predicting basketball passes. It was modeling a scene that contained people throwing a ball. A gorilla didn't fit the model. And because the gorilla-related sensory data didn't generate a strong enough prediction error to override the existing model, it simply never reached conscious awareness.
Your brain doesn't show you reality. It shows you predictions. And only errors that are large enough, surprising enough, actually break through.
Errors Flow Up, Predictions Flow Down
Karl Friston at University College London formalized this in a framework he published in 2005 in the Philosophical Transactions of the Royal Society B. He proposed that the brain is organized as a hierarchy of prediction machines. Higher levels send predictions downward. Lower levels send errors upward.
Each level tries to explain away the errors from the level below it. Only the errors that can't be resolved at lower levels propagate upward to update higher-level beliefs.
Picture it like layers of management. A front-line sensor detects something unexpected. The first level of processing checks whether it can explain the error with a minor adjustment. If it can, the error dies there. If it can't, it gets escalated. Then the next level tries. And the next. Only the really surprising stuff makes it to the top.
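That escalation logic can be sketched as a toy model. The tolerance values and the simple subtract-and-escalate rule here are illustrative assumptions, nothing like Friston's actual equations, but they show the key property: only large surprises reach the top.

```python
# Toy hierarchy: each level absorbs error up to its tolerance
# and escalates whatever remains to the level above.
def escalate(error, tolerances):
    """Return the index of the level where the error is fully absorbed,
    or len(tolerances) if it reaches the top unresolved."""
    for level, tolerance in enumerate(tolerances):
        if error <= tolerance:
            return level          # explained away at this level
        error -= tolerance        # partially absorbed; remainder escalates
    return len(tolerances)        # made it all the way up

tolerances = [0.2, 0.5, 1.0]      # lower levels absorb less
print(escalate(0.1, tolerances))  # small error dies at level 0
print(escalate(0.6, tolerances))  # medium error stops at level 1
print(escalate(5.0, tolerances))  # big surprise reaches the top: 3
```

Notice what this architecture implies: the beliefs at the top almost never see raw evidence. They only see the residue that every level below failed to explain away.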
This is why strong beliefs are so hard to change. Not because people are stupid or stubborn. Because the hierarchical architecture of the brain is literally designed to absorb small contradictions before they reach the belief level. Jakob Hohwy laid this out extensively in The Predictive Mind (2013). Lower-level errors get explained away. Reinterpreted. Absorbed. The belief never sees them.
One study that shows this in a striking way involves the hollow mask illusion. When you look at the concave (inside) surface of a face mask, your brain flips it so it looks convex. It looks like a normal face poking outward, even though it's actually caved inward. Your prior that faces are convex is so strong that your brain literally overrides the visual data.
Danai Dima and colleagues published a study in NeuroImage in 2009 showing that people with schizophrenia don't fall for this illusion. They see the mask as concave, as it actually is. Their prediction machinery appears to weight sensory data more heavily relative to prior expectations. They're not "more wrong" than neurotypical people. In this specific case, they're more right. But the cost of that weakened prediction system shows up everywhere else in their experience.
Why This Matters for Changing Anything
Here's where this stops being abstract neuroscience and starts being a user manual.
To update a deeply held prediction, whether it's a belief, a habit, or an emotional pattern, you need prediction errors that are large enough and persistent enough to propagate through the entire hierarchy.
Small contradictions don't work. Your brain absorbs them. One good experience doesn't override years of expecting failure. One conversation doesn't undo a deeply held assumption about yourself. The lower levels explain it away. Lucky break. Exception to the rule. Fluke.
Michelle Craske at UCLA has studied this directly in the context of anxiety treatment. In a 2014 paper in Behaviour Research and Therapy, she argued that the most effective exposure therapy isn't about "getting used to" the feared thing. It's about maximizing prediction error. You need the experience to violate your expectations as strongly as possible. The bigger the mismatch between what you predicted (panic, disaster, rejection) and what actually happened (nothing, safety, acceptance), the more your brain is forced to update.
This is also why gradually easing into change often fails. A tiny deviation from your comfort zone generates a tiny error. Easily absorbed. Easily explained away. Sometimes the thing that actually rewires the model is the experience that was genuinely shocking in how different it was from what you expected.
I think about this with my own patterns. The moments that actually changed how I operate weren't the small incremental shifts. They were the times reality was so different from my prediction that my brain couldn't ignore it. Moving to a new place. Losing someone. Building something that worked when I was sure it wouldn't. Those errors were too big to explain away.
Dopamine Isn't What You Think
Back to Schultz's dopamine finding, because the implications go further than learning.
The reward prediction error signal he identified turned out to be the same quantity at the heart of modern reinforcement learning. Computer scientists had derived temporal-difference learning independently in the 1980s; Schultz's recordings showed that dopamine neurons compute essentially the same error term. The algorithms behind DeepMind's AlphaGo and its successors learn the same way your neurons do. Not from reward. From unexpected reward. From the error.
But in humans, it also means that predictable pleasures lose their dopamine punch. The first time you try a great restaurant, dopamine fires. By the tenth visit, it's silent. The food didn't get worse. Your prediction caught up. No error, no signal.
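The restaurant effect falls out of a standard Rescorla-Wagner-style update: each visit nudges the prediction toward the actual reward, so the error, and with it the dopamine signal, shrinks toward zero. A toy sketch (the learning rate of 0.5 is an arbitrary choice for illustration):

```python
# Repeated identical rewards: the prediction catches up, the error fades.
predicted, alpha = 0.0, 0.5     # initial prediction, learning rate
for visit in range(1, 6):
    error = 1.0 - predicted     # dopamine-like prediction error
    predicted += alpha * error  # update prediction toward reality
    print(f"visit {visit}: error = {error:.3f}")
```

Each pass halves the error: 1.0, then 0.5, 0.25, 0.125, 0.0625. The meal never changed. Only the surprise did.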
This is why novelty feels so alive and routine feels so flat. Not because new is objectively better. Because new generates prediction errors. Your brain is paying attention when it's wrong. When it's right, it coasts.
Anil Seth explores this in Being You (2021). He describes consciousness itself as the brain's "best guess" about the causes of its sensory signals. And the quality of conscious experience, how vivid, how present, how real something feels, is closely tied to the prediction error dynamics. High error states feel intense. Low error states feel automatic, almost unconscious.
This connects to something Norman Farb and colleagues found in a 2007 study in Social Cognitive and Affective Neuroscience. Experienced meditators showed distinct neural patterns when attending to present-moment sensory experience versus their default narrative mode. Meditation practice appears to shift the balance, giving more weight to incoming sensory prediction errors rather than letting top-down stories dominate. You become more responsive to what's actually happening rather than what you expect to happen.
The Brain Doesn't Want Truth
Here's the uncomfortable part.
Your brain's job isn't to show you reality accurately. Its job is to minimize prediction error. And there are two ways to do that. Update the model to match reality. Or filter reality to match the model.
It does both. Constantly. And it strongly prefers the second option, because updating high-level beliefs is expensive. It cascades through the entire hierarchy. It's destabilizing.
So your brain would rather explain away contradictory evidence than change what it believes. It would rather not show you the gorilla than rebuild its model of the scene. It would rather reinterpret a compliment as sarcasm than update its prediction that people don't like you.
This isn't a bug you can patch with awareness. It's the architecture. Prediction errors are the only signal that forces an update. And the system is specifically designed to suppress them.
The next article in this series explores what happens when this system turns inward, when your brain starts predicting your own body's signals and constructing what you experience as emotion. Turns out, the feelings you think are reactions to the world might be predictions too.
Sources
- A Neural Substrate of Prediction and Reward (Schultz, 1997, Science)
- Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science (Clark, 2013, Behavioral and Brain Sciences)
- A Theory of Cortical Responses (Friston, 2005, Philosophical Transactions of the Royal Society B)
- Gorillas in Our Midst: Sustained Inattentional Blindness for Dynamic Events (Simons & Chabris, 1999, Perception)
- The Predictive Mind (Hohwy, 2013, Oxford University Press)
- Understanding Why Patients with Schizophrenia Do Not Perceive the Hollow-Mask Illusion Using Dynamic Causal Modelling (Dima et al., 2009, NeuroImage)
- Maximizing Exposure Therapy: An Inhibitory Learning Approach (Craske et al., 2014, Behaviour Research and Therapy)
- Being You: A New Science of Consciousness (Seth, 2021, Dutton/Faber & Faber)
- Attending to the Present: Mindfulness Meditation Reveals Distinct Neural Modes of Self-Reference (Farb et al., 2007, Social Cognitive and Affective Neuroscience)
- The Experience Machine: How Our Minds Predict and Shape Reality (Clark, 2023, Pantheon)
Part of the Prediction Machine series. Previous: Your Brain Is Hallucinating Right Now. Next: Your Brain Runs on Probability, Not Facts.



