Every model in this book has assumed rational agents — consumers who maximize expected utility, firms that minimize costs, agents with consistent time preferences and correct beliefs. This chapter asks: what if these assumptions are systematically wrong?
Behavioral economics documents predictable deviations from the standard model: people are loss-averse, overweight small probabilities, discount the future inconsistently, and are influenced by framing and context. The question is not whether people are "irrational" — it's whether the deviations are systematic enough to improve our models and inform better policy.
Under the axioms of Chapter 10 (completeness, transitivity, continuity) plus the independence axiom, preferences over lotteries can be represented by expected utility:

$$EU = \sum_i p_i u(x_i)$$
Gamble A: \$1,000,000 with certainty. Gamble B: 89% chance of \$1M; 10% chance of \$5M; 1% chance of \$1. Most people choose A.
Gamble C: 11% chance of \$1M; 89% chance of \$1. Gamble D: 10% chance of \$5M; 90% chance of \$1. Most people choose D.
But $A \succ B$ and $D \succ C$ together violate the independence axiom: the two pairs differ only in a common 89% consequence (\$1M in the first pair, \$1 in the second), and independence says a consequence shared by both options in a pair cannot affect the ranking.
Select your preferred gamble in each pair, then see whether your choices are consistent with expected utility theory.
| Gamble | Probabilities & Payoffs |
|---|---|
| A | 100% chance of \$1M |
| B | 89% × \$1M + 10% × \$5M + 1% × \$1 |
| C | 11% × \$1M + 89% × \$1 |
| D | 10% × \$5M + 90% × \$1 |
Figure 17.A. Expected utility of each gamble under power utility $u(x) = x^{1-r}/(1-r)$. The slider varies the risk aversion parameter $r$. If your choices are A and D (the common Allais pattern), no value of $r$ can rationalize both preferences simultaneously — the independence axiom is violated.
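Figure 17.A's claim can also be checked in code. A minimal sketch in Python (function names are ours), using the standard Allais payoffs in which the 10% branches pay \$5M:

```python
def u(x, r):
    """Power utility u(x) = x**(1-r) / (1-r), for r != 1."""
    return x ** (1 - r) / (1 - r)

def allais_pattern_holds(r):
    """True if both A > B and D > C under expected utility with parameter r."""
    eu_a = u(1e6, r)
    eu_b = 0.89 * u(1e6, r) + 0.10 * u(5e6, r) + 0.01 * u(1, r)
    eu_c = 0.11 * u(1e6, r) + 0.89 * u(1, r)
    eu_d = 0.10 * u(5e6, r) + 0.90 * u(1, r)
    return eu_a > eu_b and eu_d > eu_c

# Sweep risk aversion over a grid: no value rationalizes the common (A, D) pattern.
grid = [i / 100 for i in range(1, 100)]  # r in (0, 1)
print(any(allais_pattern_holds(r) for r in grid))  # False
```

Because $EU(A) - EU(B)$ and $EU(D) - EU(C)$ are exact negatives of each other, no increasing utility function, not just no power utility, can deliver both strict preferences.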
An urn contains 30 red balls and 60 balls that are either black or yellow, in unknown proportion. Most people prefer betting on red (probability exactly 1/3) over betting on black (unknown), yet also prefer betting on black-or-yellow (probability exactly 2/3) over red-or-yellow (unknown). No single probability assignment to black and yellow can rationalize both choices. People prefer known probabilities over unknown ones — revealing ambiguity aversion that EU cannot accommodate.
Kahneman and Tversky (1979) proposed prospect theory as a descriptive alternative to expected utility. Outcomes are coded as gains and losses relative to a reference point, and valued by

$$v(x) = \begin{cases} x^\gamma & x \ge 0 \\ -\lambda(-x)^\gamma & x < 0 \end{cases}$$

with $\gamma \approx 0.88$ and $\lambda \approx 2.25$.
The value function is S-shaped: concave for gains (risk aversion), convex for losses (risk seeking), and steeper for losses than gains (loss aversion). Compare to the linear EU value function.
Figure 17.1. The prospect theory value function (blue S-curve) versus expected utility (gray straight line). The kink at the origin reflects loss aversion — the slope is steeper on the loss side. Higher $\lambda$ makes losses more painful; lower $\gamma$ increases curvature. Drag sliders to reshape the function.
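The loss-aversion kink is easy to verify numerically. A short sketch with the parameter values quoted above:

```python
gamma, lam = 0.88, 2.25  # Kahneman-Tversky parameter estimates

def v(x):
    """Prospect theory value function: concave over gains, scaled up over losses."""
    return x ** gamma if x >= 0 else -lam * (-x) ** gamma

print(v(100))   # ≈ 57.5
print(v(-100))  # ≈ -129.5: the equal-sized loss is 2.25x as painful
```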
People do not weight outcomes by their true probabilities. Instead, decision weights follow

$$\pi(p) = \frac{p^\delta}{\left(p^\delta + (1-p)^\delta\right)^{1/\delta}}$$

with $\delta \approx 0.65$. Small probabilities are overweighted, which helps explain both lottery ticket purchases and insurance against rare losses; moderate and large probabilities are underweighted, producing the certainty effect behind the Allais paradox.
Compare the weighted probability $\pi(p)$ to the true probability (the 45-degree line). Where the curve is above the diagonal, people act as if the probability is higher than it really is.
Figure 17.2. The probability weighting function. Above the 45-degree line: overweighting (small probabilities seem larger than they are). Below: underweighting (large probabilities seem smaller). At $\delta = 1$, the curve collapses to the diagonal — no distortion. Drag the slider to explore.
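A quick numerical check of the two regions, using the $\delta \approx 0.65$ value from the text:

```python
delta = 0.65

def pi(p):
    """Probability weighting function, Eq. 17.3."""
    return p ** delta / (p ** delta + (1 - p) ** delta) ** (1 / delta)

print(pi(0.01))  # ≈ 0.047: a 1% chance is weighted almost 5x too high
print(pi(0.90))  # ≈ 0.75: a 90% chance is weighted well below its true value
```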
Prospect theory valuation combines the value function and the decision weights:

$$V = \sum_i \pi(p_i)\, v(x_i)$$
Endowment effect: People demand more to sell an owned object than they'd pay to acquire it. Equity premium puzzle: Myopic loss aversion with short evaluation horizons explains the large stock-bond return gap. Insurance and gambling: The same person buys insurance and lottery tickets; both rest on overweighting small probabilities (of a rare large loss and a rare large gain, respectively).
A gamble offers a 50% chance of winning \$200 and a 50% chance of losing \$100. Compare evaluations.
Expected Utility (CRRA with $r = 0.5$, $W = 1000$): $EU = 0.5 \cdot u(1200) + 0.5 \cdot u(900) = 0.5 \times 1200^{0.5} + 0.5 \times 900^{0.5} = 0.5(34.64) + 0.5(30.00) = 32.32$. Certainty equivalent: $32.32^2 = 1044.6$. Net CE gain: \$44.6 > 0. Take the gamble.
Prospect Theory ($\gamma = 0.88$, $\lambda = 2.25$, $\pi(0.5) = 0.42$):
$V = \pi(0.5) \cdot v(200) + \pi(0.5) \cdot v(-100)$
$= 0.42 \times 200^{0.88} + 0.42 \times (-2.25)(100^{0.88})$
$= 0.42 \times 105.9 + 0.42 \times (-2.25 \times 57.5) = 44.5 - 54.4 = -9.9 < 0$. Reject the gamble.
Key insight: Loss aversion flips the decision. EU says the positive expected value makes this attractive. Prospect theory says the possible \$100 loss looms larger than the possible \$200 gain — consistent with the empirical observation that most people reject even better-than-fair mixed gambles.
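The two evaluations can be reproduced in a few lines of Python; all parameter values are the ones quoted in the worked example, with $\pi(0.5) = 0.42$ taken as given:

```python
w, r = 1000, 0.5                  # initial wealth, CRRA coefficient
gamma, lam, pi_half = 0.88, 2.25, 0.42

# Expected utility over final wealth: +$200 or -$100 with equal odds
eu = 0.5 * (w + 200) ** (1 - r) + 0.5 * (w - 100) ** (1 - r)
ce = eu ** (1 / (1 - r))          # invert u(x) = x**0.5
print(ce - w)                     # ≈ +44.6: the EU agent takes the gamble

# Prospect theory over gains and losses from the reference point
v_gain = 200 ** gamma
v_loss = -lam * 100 ** gamma
pt = pi_half * (v_gain + v_loss)
print(pt)                         # ≈ -9.9: the PT agent rejects it
```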
Quasi-hyperbolic ($\beta$–$\delta$) preferences discount a utility stream as

$$U_0 = u_0 + \beta \sum_{t=1}^{\infty} \delta^t u_t$$

where $\beta < 1$ captures present bias. The discount factor between now and next period is $\beta\delta$, but between any two future periods it is just $\delta$. This creates time inconsistency: today you plan to start exercising tomorrow; tomorrow you prefer the day after.
A task costs 6 utils today but yields 8 utils of benefit arriving in 3 days. A present-biased agent keeps planning to do it "tomorrow" but never does. A sophisticated agent recognizes the pattern.
Figure 17.3. Discounted value of doing the task on each day, as seen from that day (blue) vs. as seen from the day before (orange). The gap is present bias — the task always looks better when it's "tomorrow" than when it's "today." Naive agents keep postponing; sophisticated agents anticipate their future selves' behavior. Drag sliders to explore.
A student must write a paper. Cost of doing it today: $c = 10$ utils. Benefit (received at submission in 7 days): $b = 20$ utils. Parameters: $\beta = 0.6$, $\delta = 0.99$.
Step 1 (Day 1, perspective of Day 1): Do it now: $-10 + \beta\delta^7 \times 20 = -10 + 0.6 \times 0.93 \times 20 = -10 + 11.2 = 1.2 > 0$. Looks worth doing!
Step 2 (Day 1, re-evaluation): Wait until tomorrow: $\beta\delta \times (-10) + \beta\delta^7 \times 20 = 0.6 \times 0.99 \times (-10) + 0.6 \times 0.93 \times 20 = -5.9 + 11.2 = 5.3$. Waiting looks even better! The naive agent delays.
Step 3 (Day 2, from Day 2's perspective): The same comparison repeats, one day closer to the payoff: doing it today is now worth $-10 + 0.6 \times 0.99^6 \times 20 = 1.3$, while waiting is worth $-5.9 + 11.3 = 5.4$. The agent procrastinates again — and again, and again.
Naive outcome: The student never does the paper until the deadline forces action (or misses the deadline entirely).
Sophisticated outcome: Knowing future selves will procrastinate, the sophisticated agent recognizes that "do it tomorrow" means "never." If the deadline binds at Day 7, the sophisticated agent may set an artificial deadline or accept the immediate cost on Day 1.
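The naive agent's day-by-day reasoning can be simulated directly. Following the worked example, we assume the benefit arrives a fixed seven days after Day 1 no matter when the paper is written, and the Day 7 deadline forces action:

```python
beta, delta = 0.6, 0.99
cost, benefit = 10, 20
deadline, benefit_day = 7, 8      # paper due Day 7; payoff lands Day 8

day = 1
while day < deadline:
    lag = benefit_day - day
    do_now = -cost + beta * delta ** lag * benefit
    wait = beta * delta * -cost + beta * delta ** lag * benefit
    if do_now >= wait:            # never true while beta < 1
        break
    day += 1                      # the naive agent postpones
print(day)                        # 7: only the deadline forces action
```

Pushing the cost one day out always shrinks it by the factor $\beta\delta$ while the benefit term is unchanged, so "tomorrow" wins every comparison until the deadline removes the option.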
An agent earns \$1,000/month and wants to save \$100/month for retirement. Parameters: $\beta = 0.7$, $\delta = 0.95$ per month, $r = 5\%$/year.
Without commitment: Each month, the agent plans to save \$100 but faces the temptation to spend. The immediate utility of spending the \$100: $u(100) = 100^{0.5} = 10.0$. The discounted future benefit of saving it for a year: $\beta\delta^{12} \times u(100 \times 1.05) = 0.7 \times 0.54 \times 10.2 = 3.9$. Since $10.0 > 3.9$, the agent spends the \$100 every month.
With commitment device: An illiquid savings account automatically deducts \$100/month, and the agent cannot touch the money for 12 months. Committed savings accumulate to roughly \$1,260 over the year ($100 \times 12 \times 1.05$, a back-of-the-envelope figure that credits a full year's interest to every deposit). The agent's long-run self values this highly.
Value of commitment: The difference between the committed outcome (about \$1,260 saved) and the uncommitted outcome (\$0 saved) is the value of the commitment device. Even in present-biased terms, the agent would pay up to $\beta \times 1{,}260 \approx 882$ to have the option.
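The monthly temptation comparison, in code (square-root utility as in the example):

```python
beta, delta = 0.7, 0.95

def u(c):
    return c ** 0.5               # square-root utility from the example

spend_now = u(100)                                  # enjoy the $100 today
save_for_year = beta * delta ** 12 * u(100 * 1.05)  # its value in 12 months
print(spend_now, round(save_for_year, 1))           # 10.0 vs 3.9: temptation wins
```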
Setup: Player 1 proposes how to split \$10. Player 2 accepts (both get the amounts) or rejects (both get nothing).
Subgame perfect equilibrium: Player 1 offers the smallest positive amount (say \$0.01) and keeps the rest; Player 2 accepts, since even a penny beats \$0.
Actual behavior: Modal offer is 40–50%. Offers below 20% are rejected about half the time. People sacrifice real money to punish unfairness — suggesting utility functions include fairness and reciprocity.
You are Player 1. Propose a split of \$10. The computer (Player 2) accepts or rejects based on a fairness threshold. How much do you need to offer to avoid rejection?
Figure 17.4. Your earnings per round. Green bars: accepted offers. Red bars: rejected offers (\$0 for both players). The rational strategy is to offer just above the threshold — but in real experiments, people offer much more than the minimum. Play multiple rounds to see the pattern.
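A minimal version of the threshold responder from the experiment above; the 20% cutoff is a hypothetical choice, roughly in line with the rejection rates reported for low offers:

```python
PIE = 10.0
THRESHOLD = 0.2   # hypothetical: reject any offer below 20% of the pie

def play_round(offer):
    """Return (proposer, responder) earnings for a given offer."""
    if offer >= THRESHOLD * PIE:
        return PIE - offer, offer   # accepted
    return 0.0, 0.0                 # rejected: both walk away with nothing

print(play_round(0.01))  # (0.0, 0.0): the subgame-perfect offer is rejected
print(play_round(4.00))  # (6.0, 4.0): a typical 40% offer is accepted
```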
| Nudge | Bias Addressed | Outcome |
|---|---|---|
| Default enrollment in 401(k) | Procrastination, status quo bias | Participation: ~50% → ~90% |
| Save More Tomorrow | Present bias | Savings rates nearly quadruple |
| Opt-out organ donation | Status quo bias | Consent: ~15% → ~85% |
| Social norms messaging | Conformity | 2–4% energy reduction |
| Simplified financial aid forms | Complexity aversion | +8pp college enrollment |
Two identical programs — same benefits, same freedom to choose. The only difference is the default. With opt-in, people must actively enroll. With opt-out, people must actively leave. Small switching costs create enormous differences in participation.
Figure 17.5. Participation rates under opt-in vs. opt-out defaults. At zero switching cost, both converge to the "true preference" rate. As switching cost rises, each default becomes stickier — fewer people switch away from whatever the default is. The policy implication: set the default to the socially beneficial option. Drag the slider to vary switching costs.
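The mechanism can be sketched with a toy model. Our own assumptions: each agent's net benefit from participating is a mean-zero normal draw, and an agent deviates from the default only when the gain exceeds the switching cost. The exact rates depend on the assumed distribution; the point is the gap, not the numbers:

```python
import random
random.seed(0)

def participation_rate(default_enrolled, switch_cost, n=100_000):
    """Share of agents participating, given the default and a switching cost."""
    enrolled = 0
    for _ in range(n):
        benefit = random.gauss(0, 3)            # heterogeneous 'true preference'
        if default_enrolled:
            enrolled += benefit > -switch_cost  # stays unless leaving pays
        else:
            enrolled += benefit > switch_cost   # joins only if joining pays
    return enrolled / n

for cost in (0, 3):
    print(cost, participation_rate(True, cost), participation_rate(False, cost))
```

At zero switching cost the two defaults produce the same participation rate; at cost 3 the opt-out rate climbs toward one tail of the distribution and the opt-in rate falls toward the other, reproducing the stickiness in the figure.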
A company with 10,000 employees wants to increase 401(k) participation. Current opt-in rate: 40%. Average contribution rate among participants: 6% of salary.
Step 1 (Diagnosis): The low opt-in rate is consistent with status quo bias and present bias. Employees intend to enroll but procrastinate. The default (not enrolled) is the problem.
Step 2 (Nudge design — auto-enrollment): Change the default to automatic enrollment at a 3% contribution rate. Employees can opt out at any time, preserving the libertarian criterion of libertarian paternalism.
Step 3 (Predicted effect): With switching cost $e = 3$ on a 0–10 scale: opt-out participation $\approx 90\%$ vs. opt-in $\approx 40\%$. The 50-percentage-point gap is entirely due to the default — the economic incentives are unchanged.
Step 4 (Auto-escalation): Add automatic contribution increase of 1% per year until reaching 10%. Present-biased agents do not opt out of gradual increases because each increment is small.
Step 5 (Evidence): Madrian and Shea (2001) found that auto-enrollment raised 401(k) participation from 37% to 86% at one company. Thaler and Benartzi's "Save More Tomorrow" program increased contribution rates from 3.5% to 13.6% over 40 months.
Arguments that markets correct biases: Arbitrageurs exploit mispricing; competition punishes irrational firms; experience teaches better decisions.
Arguments that biases persist: Limits to arbitrage (short-selling constraints, noise trader risk); some biases are robust to experience (loss aversion among professional traders); market prices can reflect aggregate biases (financial bubbles).
The evidence is mixed. Financial markets are approximately efficient for liquid assets, less so for complex or illiquid ones. Consumer markets show persistent behavioral patterns.
Maya added a free cookie with every lemonade. Sales increased 15%. She later removed it. Rational prediction: Customers should be indifferent (if cookie worth \$1.25 and price adjusts). Behavioral prediction: Removing the cookie is a loss, weighted $\lambda \approx 2.25$x. Sales dropped 20% — far more than the 15% gain from introducing it.
Lesson: It is easier to add a benefit than to remove one. Loss aversion means "taking away" is not the mirror image of "giving."
| Label | Equation | Description |
|---|---|---|
| Eq. 17.1 | $EU = \sum p_i u(x_i)$ | Expected utility |
| Eq. 17.2 | $v(x) = x^\gamma$ for gains; $-\lambda(-x)^\gamma$ for losses | Prospect theory value function |
| Eq. 17.3 | $\pi(p) = \frac{p^\delta}{(p^\delta + (1-p)^\delta)^{1/\delta}}$ | Probability weighting |
| Eq. 17.4 | $V = \sum \pi(p_i) v(x_i)$ | Prospect theory valuation |
| Eq. 17.5 | $U_0 = u_0 + \beta\sum_{t=1}^{\infty}\delta^t u_t$ | Quasi-hyperbolic discounting |