Why Probability (Not Persuasion!) is Your Problem

One of the reasons I love ideas and messages so much is that they don’t require math… except, as it turns out, they do. Because when people evaluate your idea, they’re calculating probability (yes, literally) — and they’re doing it fast.

Here’s what I mean: We’ve talked before about how, when you’re presenting a new idea — a product, a strategy, an approach — there’s a foundational building block that has to be present: the conditional, the “If this, then that”-type statement of cause and effect or stimulus and response. 

If you do X, then you can expect Y.
If you want Y, then you should do X.

Without both pieces — the cause AND the effect — people struggle to quickly make sense of an idea. And if it doesn’t make sense (and by “make sense” I mean both intellectually and intuitively), people aren’t gonna do it. People do not do long-term anything that does not make sense to them.

But there’s another problem. As it turns out, you can have both an “if” and a “then,” both your cause and effect, and still not be successful getting that idea out there. Yep, because of probability.

A lot of older information about how people make decisions is based on something called mental model theory. Both it and that older decision-making information are based on the assumption that we humans are perfectly, rationally logical at all times. And you and I both know that just isn’t always true—or maybe I’m the only one who avoids walking under ladders or throws salt over my left shoulder if I spill some.

What’s core here is the (outdated) assumption that we’re logical only in a formal, mathy way. But that isn’t what the data actually show. What the data (and updated theory) show is that people are logical, but in an intuitive way — and the way that shows up is that they assess the probabilities of those If/Then statements.

We don’t break out our statistics calculators and figure it out. We just go based on everything we already know and believe. Does this feel like it could work?

This is one of the places that I see ideas and messages break down all the time: there’s just too big of a gap between the “if” and the “then” for the combination to be intuitively logical.

The Likelihood Gap

One of my favorite examples — which I can share because they shared it, too! — is from my client Databox. One of several defensibly distinctive aspects of their business intelligence software is that it formats data contextually based on the situation, the user, the desired outcome, and so on.

Now, when we first started working together, their “then” (the outcome they said they deliver to their clients) was “grow your business.” That sounds good, I know. But stop and think about it. There’s a Big Ol’ Gap between “business intelligence software” and “grow your business.”

Could it grow a business? Yes. Are there a lot of other things that have to go exactly right in order for that to happen? Also, yes.

This is what I’m talking about. People are assessing very quickly (thanks to the automatic processes of our brains): “How likely is it that what you’re saying is true? That I will get THAT outcome from THIS thing?”  

You may “know” that connection is true, that your approach delivers on the stated outcome, but they don’t know it yet — and you don’t have the luxury of time required to convince the people you’re talking to when they’re hearing an idea for the first time.

So with Databox, we scaled the core question (which produces that IF statement) much closer to what business intelligence software can actually do. In this case, we settled on: “How can I make my data more usable?”

Do you feel the difference between “This business intelligence software will grow your business” and “This business intelligence software will make your data more usable”? The more usable data piece “fits” better with business intelligence software, it feels more…reasonable.

But that doesn’t solve our differentiation problem, does it? This is where the core strategy comes in — Databox’s “contextual formatting.” The answer to “How can I make my data more usable?” is, yes, Databox and their business intelligence software — but more specifically, the claim that contextually formatted data is much more likely to be usable, because it’s formatted based on how people like to see it, who’s using it, what their expertise level is, and so on.

Once you put those two things together — “contextual formatting [approach] makes data more usable [outcome]” — then someone goes, “Ah, that makes sense! That sounds like a reasonable outcome from that approach!” And that puts Databox in the beautiful position to say, “Yes! And that’s exactly what our business intelligence software is designed to do. Let’s show you how….”

The First Test Your Idea Has to Pass

As you’re thinking about presenting your idea, make sure not only that you have that conditional in place, but also that a reasonable human would look at that If/Then statement and say, “Yes, that makes [intuitive and intellectual] sense as a way to achieve that outcome.” After all, once your audience understands the words you’ve said, that probability assessment is the very first test their brain is going to put your idea through.

So, save yourself a heap of trouble and do the test yourself first. Make sure that you’ve got that “if” and that “then” working very tightly together. The best way to do that? Stop trying to reach for the moon (no matter what your marketing and branding team may say).

Instead, name the problem you’re directly solving, and make sure you’re describing not just your idea and what it is, but the core strategy — the Prime Strategy — that’s embedded in how it does that.

And forgive me for sneaking in some math in the middle of talking about ideas and messaging, but you don’t have to do any special calculations. Your brain is already wired to do everything you need it to do.

All you have to do is ask yourself: “How likely is it that this will lead to that? That our product, our idea, will actually lead to that outcome?” And more importantly, ask, “How would a skeptic answer that question?” My personal rule of thumb: anything less than ~90%, even for the skeptic, won’t pass.
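For readers who want the math the “sneaking in some math” aside winks at: in the terms of the Johnson-Laird work listed below, that skeptic’s test amounts to judging the conditional probability of the outcome (the “then”) given the approach (the “if”), and comparing it to a threshold. Using the ~90% rule of thumb above, the test can be sketched as:

```latex
P(\text{then} \mid \text{if}) \gtrsim 0.9
```

The 0.9 threshold is the author’s rule of thumb, not a value from the cited research; the point is simply that the skeptic’s intuitive estimate of that conditional probability has to be high before the idea passes.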

So run that test yourself before your audience does. Because they will.

Until next time,
Tamsen

Want to go deeper?

Johnson-Laird, P. N. (1994). Mental models and probabilistic thinking. Cognition, 50(1–3), 189–209. https://doi.org/10.1016/0010-0277(94)90028-0

Johnson-Laird, P. N., & Byrne, R. M. J. (2002). Conditionals: A theory of meaning, pragmatics, and inference. Psychological Review, 109(4), 646–678. https://doi.org/10.1037/0033-295X.109.4.646

Sloman, S. A. (1996). The empirical case for two systems of reasoning. Psychological Bulletin, 119(1), 3–22. https://doi.org/10.1037/0033-2909.119.1.3