Doctoral Studies in Seconds | Part 2: Conceptual Belief Change
In Part 1 of this series, I introduced a framework that’s already shifted how I approach strategy and message design: the difference between adaptive and technical challenges. Adaptive challenges are problems for which there is not yet a known, concrete, or consensus solution. They are problems that need new answers, which almost always require new ways of thinking.
That framework laid the groundwork for another powerful concept I’ve encountered in my doctoral work: conceptual belief change (changing the underlying concepts someone is using to think about something), and specifically, categorical shifts, which change the kind of concept someone is applying to their mental model (more on that below!).
This idea comes from cognitive scientist Michelene Chi, who’s studied how people restructure their understanding when faced with new information. In her 2008 paper, she outlines three types of conceptual change: belief revision, mental model transformation, and categorical shifts.
Let’s take a look at each and why that third one might just be the most critical (and frustrating) barrier to real change.
- Belief Revision: Correcting the Simple Stuff
This is the cleanest, most technical kind of change. Belief revision happens when someone simply has the wrong information, and the correction is straightforward. Think: I used to think whales were fish; now I know they’re mammals.
This is the realm of training, fact-checking, and technical fixes. It’s useful, but limited to situations where misinformation is the main barrier and where you’re facing a technical challenge (one with a known, “correct” solution).
Tool to use: Information
- Mental Model Transformation: Completing the Picture
Here, someone isn’t entirely wrong, but they don’t fully grasp how a system works. Maybe they understand part of the mechanism, but not how it fits into the whole.
Take a car, for example. You might believe pressing the gas pedal makes the car go, which is technically true. But a fuller model would include air intake, fuel injection, combustion, and so on.
You’re not fixing an error so much as deepening understanding. This kind of shift requires explanation and scaffolding, but it’s still doable, especially when the foundational belief is compatible with the new insight. I suspect this approach can apply to both adaptive and technical challenges, depending on what the person you’re talking to already does or doesn’t know.
Tool to use: Argument (as in the case for something, not a fight!), because arguments are rationales. They’re a way, your way, to think about the problem, which is what the Core Case does: it makes your reasoning (your mental model!) explicit, which makes it easier for someone to adopt it or expand their own.
- Categorical Shifts: The Hidden Obstacle
Now we get to what is often the most difficult form of belief change: when someone doesn’t just have the wrong fact or an incomplete picture or mental model. They’re thinking about the entire thing in the wrong way. They’ve put the idea in the wrong mental folder.
To use Chi’s example, what is “heat”? Many of us (mistakenly!) think of heat as if it’s a substance—something that can be stored, contained, or transferred like a liquid. But from a scientific standpoint, heat is not a material “thing” at all. It’s a process, and specifically, the process of transferring energy between molecules when there’s a temperature difference.
Miscategorization matters, and not just when it comes to heat! But let’s use that example to see why: if someone thinks heat is a substance, they’d likely try to conserve heat like water, or think they could “use it up” like fuel, neither of which reflects how heat actually works.
Because they’re using “substance” solutions on a “process” problem, their conclusions (and strategies) will be consistently off or completely ineffective.
Sound familiar? I hope it does! Stakeholders and audiences often struggle with a strategy or approach because they’re putting the problem, or some aspect of it, into the wrong category. They don’t see your solution as wrong; they see it as irrelevant, because it doesn’t fit the category they’ve put the problem in.
For example:
- If your audience sees “leadership” as a personal trait (a thing) rather than a practice (a process), they won’t respond to strategies focused on skill development.
- If someone thinks “strategy” is a static plan rather than an ongoing decision-making process, they’ll resist iterative change as inconsistency.
- If your team sees “communication” as a one-time transfer of information, they won’t value feedback as part of the communication loop.
And unless you address that, no amount of explaining will help! So, when your message isn’t landing, ask yourself:
- What kind of thing do they think this is?
- What kind do you think it is?
- Where’s the overlap or the conflict?
(Does this sound familiar, too? It should! This is what the “duck-bunny” of the Red Thread does!)
Thankfully, if your solution or strategy fits a different (and better) category than the one your audience is using, then that categorical shift is the belief change your message needs to focus on.
You’re not just building a bridge between beliefs. You’re building a shared understanding of what kind of thing you’re talking about.
That’s what makes this kind of conceptual shift so powerful and so hard. So let’s make it easier:
Tool to use: Story, because a story is an argument for why a certain series of events produces particular outcomes for those involved, which means a story “solves” for mental model transformation as well (WINNING)! And because it both (a) puts the elements of an argument into an easy-to-understand wrapper and (b) adds the specific details that make a conceptual argument concrete, it creates the perfect mental conditions for conceptual belief change. It’s especially effective when you make the category conflict explicit, which is exactly what the duck-bunny does (MORE WINNING)!
Why All This Matters for Message Design
This is where it gets practical. And, yeah, a little scary, because when you’re trying to help people adopt a new idea, framework, or strategy, there’s a whole HOST of issues you may be up against:
- Missing information
- Incomplete information
- Incorrect or MISinformation
- Missing mental models
- Incomplete mental models
- Incorrect mental models
- Misaligned mental models
- …And miscategorization of any part of the mental model involved (problem, solution, premises), which is often what causes the misalignment
Phew! No wonder changing minds is hard!
But each of these situations is a technical challenge, which is good news! Each of them has a known solution that effectively solves the problem.
- Missing information → More information
- Incomplete information → More information
- Incorrect or MISinformation → Correct information
(If those three solutions aren’t working, you’re likely looking at:)
- Missing mental models → Complete mental model
- Incomplete mental models → Complete mental model
- Incorrect mental models → Complete, corrected mental model
(If those solutions aren’t working, you’re likely looking at:)
- Misaligned mental models → Complete mental model, built with elements you’re already aligned on
- …And miscategorization of any part of the mental model involved (problem, solution, premises), which is often what causes the misalignment → Complete mental model, built with elements you’re aligned on
Thankfully, the Core Case and the Red Thread (the Core Story), either individually or together, solve for all of them, because each of them provides a complete mental model and a framework around which to “hang” the necessary kinds of information.
The key? Understanding why externalizing mental models is so important to belief change in the first place, which is what Part 3 in this series is all about!