4. Prospect Theory and Risky Choices
Introduce prospect theory's value function and probability weighting, demonstrating why people value gains and losses asymmetrically.
Probability Weighting: Overweighting Small Odds
Probability Weighting Explained: Why We Overweight Small Odds
"You buy a $2 lottery ticket and then insure your $1,000 TV. Welcome to probability weight-lifting: we can't carry objective odds, so we train subjective ones."
We already built the stage: reference points (gains vs. losses) and diminishing sensitivity (the value curve flattens as amounts grow). Now meet the co-star that makes lotteries, insurance, and fear-based headlines work in practice — probability weighting. This is the piece of Prospect Theory that says: your brain does not treat probability like a mathematician.
What probability weighting is (micro explanation)
- Objective probability = the numeric chance (e.g., 1%).
- Decision weight = the subjective weight your brain assigns to that chance.
Prospect Theory (especially its cumulative form) replaces raw probabilities with a weighting function π(p). For many people, π(p) behaves like an inverse S-shaped curve:
- Overweights small probabilities (π(0.01) > 0.01). Small chances feel bigger than they are. Hello, lottery dreams.
- Underweights large probabilities (π(0.95) < 0.95). High chances feel less certain than their math suggests. Hello, skipping earthquake drills.
This is different from Expected Utility Theory (EUT), which multiplies utility by the objective p and expects people to be probability-accurate. Prospect Theory says: we distort p first, then apply the value function.
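The inverse S-shape is easy to see in code. Below is a minimal Python sketch of the weighting function from Tversky and Kahneman's cumulative prospect theory; γ ≈ 0.61 is their 1992 median estimate for gains, an empirical parameter that varies across individuals, not a law:

```python
def tk_weight(p: float, gamma: float = 0.61) -> float:
    """Tversky-Kahneman probability weighting function.

    Produces the inverse S-shape: small p is overweighted,
    large p is underweighted. gamma = 0.61 is the 1992 median
    estimate for gains; individual values differ.
    """
    return p**gamma / (p**gamma + (1 - p) ** gamma) ** (1 / gamma)

print(tk_weight(0.01))  # ~0.055: a 1% chance "feels" like 5.5%
print(tk_weight(0.95))  # ~0.79: a 95% chance feels shakier than it is
```

Note that π(0) = 0 and π(1) = 1: only the chances in between get distorted.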
Why this matters — intuitive examples
Lottery tickets
You pay $2 for a 0.000001 chance at millions. Rational expected value says don’t. But if π transforms that microscopic p into a larger subjective weight, the weighted utility of the jackpot can exceed the cost.
Insurance
You pay a pricey premium to avoid a small chance of a big loss. Overweighting the small probability of catastrophe makes the insurance premium psychologically affordable — even if the actuarial math is shaky.
Media and panic
News highlights rare disasters. If people overweight tiny risks, the headlines produce outsized fear and behavior changes — e.g., refusing to fly because of recent crashes despite flying being statistically safer than driving.
How it connects to what you already learned
From reference points: whether a gamble is framed as a gain or a loss matters. Overweighting small probabilities interacts with that framing — a small chance at a big gain makes you risk-seeking; a small chance at a big loss makes you risk-averse (and insurance-friendly).
From diminishing sensitivity: the value function is concave in gains and convex in losses. Combine that curvature with overweighting and you can predict classic behaviors:
- Small-probability large gains → overweighting + concavity can still make the gamble attractive (lottery buying).
- Small-probability large losses → overweighting + convexity leads to extreme risk aversion (insurance, avoidance).
From biases: loss aversion intensifies these effects. Confirmation and availability biases amplify overweighting because memorable rare events are used to justify distorted weights.
A quick numeric look (yes, the tiny math helps)
Imagine: a sure $50 vs. a 1% chance of $10,000 (and a 99% chance of nothing). Under EUT, the EV of the gamble = 0.01 * 10,000 = $100, so expected value prefers the gamble. But real people's choices depend on value curvature and probability weighting.
If your value function has diminishing sensitivity, the subjective value of $10,000 is far less than 200× the value of $50. But if your weighting function inflates 1% to, say, 5% (π(0.01) = 0.05), that decision weight scales the gamble's subjective value fivefold relative to accurate weighting — sometimes enough to flip the choice.
Rough sketch (not formal):
v(50) ≈ 50^0.5 = 7.07
v(10000) ≈ 10000^0.5 = 100
Decision weight π(0.01) = 0.05
Gamble subjective = 0.05 * 100 = 5
Sure subjective = 1 * 7.07 = 7.07
→ Prefer the sure $50 here.
But tweak numbers slightly (larger jackpot, stronger overweight), and the gamble wins. The point: small tweaks in π(p) radically change choices when probabilities are tiny.
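The rough sketch can be reproduced directly, assuming a square-root value function and a hand-picked decision weight (both illustrative choices, not fitted parameters):

```python
def v(x: float) -> float:
    # illustrative value function: square root = diminishing sensitivity
    return x ** 0.5

pi_001 = 0.05  # assumed decision weight for a 1% chance

gamble = pi_001 * v(10_000)  # 0.05 * 100  = 5.0
sure = 1.0 * v(50)           # 1.00 * 7.07 ≈ 7.07  -> sure $50 wins

# Tweak one number: a 10x bigger jackpot flips the preference.
gamble_big = pi_001 * v(100_000)  # 0.05 * ~316 ≈ 15.8 -> gamble wins
```

The comparison is fragile by design: when p is tiny, small changes in π(p) or the payoff swing the weighted value past the sure thing's.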
Why does the brain do this? (mechanisms and evolutionary flavor)
- Evolutionary selectivity: rare catastrophes matter more for survival than small daily rewards. The brain errs on the side of attention.
- Representativeness & availability: vivid rare events (plane crash clips, lottery winners on TV) inflate perceived frequency.
- Computation shortcut: estimating probability precisely is costly; the brain uses heuristics that bias small-probability assessments upward.
Why do people keep misunderstanding this? Because probabilities are boring and humans are noisy pattern-seekers — we overweight the dramatic and underweight the mundane.
Where probability weighting shows up in the real world
- Marketing: “Only 3 seats left!” leans on our overweighting of the small chance of missing out.
- Gambling industry: casinos and lotteries design payoff structures that exploit distorted decision weights.
- Insurance: insurers sell peace of mind by leveraging overweighted catastrophe probabilities.
- Public policy: risk communication (e.g., vaccination side effects) must consider overweighting to avoid misperception.
Contrasting viewpoint — does probability weighting break rationality?
Classical economists would say it violates normative EUT axioms. But Prospect Theory isn’t a prescription for how you should decide; it’s a descriptive model of how people actually decide. Cumulative prospect theory improves on earlier versions by avoiding some paradoxes and preserving stochastic dominance, but it still embraces the psychological reality of distorted weights.
Step-by-step: How to predict a choice using Prospect Theory
- Set the reference point — is the outcome a gain or a loss?
- Map outcomes through the value function (diminishing sensitivity matters).
- Transform probabilities via the weighting function π(p).
- Multiply weights × values and compare — pick the option with higher summed subjective value.
Try this on a lottery vs. insurance example and watch how small changes in π(p) flip preferences.
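The four-step recipe can be sketched in Python. This is a simplified, separable version of the calculation — each probability is weighted independently rather than through the full rank-dependent cumulative machinery — using Tversky and Kahneman's 1992 median parameter estimates; the dollar figures in the examples are made up:

```python
def tk_weight(p: float, gamma: float = 0.61) -> float:
    # step 3: transform probability into a decision weight
    return p**gamma / (p**gamma + (1 - p) ** gamma) ** (1 / gamma)

def value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    # steps 1-2: x is already coded as gain (+) or loss (-) relative to
    # the reference point; concave for gains, steeper and convex for losses
    return x**alpha if x >= 0 else -lam * (-x) ** alpha

def subjective_value(prospect: list[tuple[float, float]]) -> float:
    # step 4: multiply decision weights by values and sum
    return sum(tk_weight(p) * value(x) for p, x in prospect)

# Lottery: a 1-in-100,000 shot at $1,000,000 vs. a sure $2 in pocket.
lottery = [(0.00001, 1_000_000)]
buys_ticket = subjective_value(lottery) > value(2)

# Insurance: a sure $120 premium vs. a 1% chance of losing $10,000.
premium, uninsured = [(1.0, -120)], [(0.01, -10_000)]
buys_insurance = subjective_value(premium) > subjective_value(uninsured)

print(buys_ticket, buys_insurance)  # both True with these parameters
```

Both "inconsistent" behaviors fall out of the same parameters: overweighting the tiny p makes the jackpot feel worth chasing and the catastrophe worth insuring away.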
Key takeaways
- Probability weighting means we subjectively distort probabilities: small p's often feel bigger; large p's often feel smaller.
- This distortion explains why we both buy lottery tickets and pay for insurance — two behaviors that look inconsistent under classical expected-value reasoning.
- Combine probability weighting with reference points and diminishing sensitivity to predict whether people will be risk-seeking or risk-averse in specific situations.
- Real-world systems (marketing, policy, gambling) exploit these distortions; awareness helps you avoid being nudged without consent.
"If your gut tells you a 1% chance 'feels' like 5% — that's not a math problem, it's a mind problem. But the good news: once you see the function, you're less at the mercy of its tricks."
Quick exercise: think of a time you overpaid for peace of mind or bought an optimistic long-shot. Which part — the reference point, diminishing sensitivity, or probability weighting — was doing the heavy lifting?