Big Question #4

Are people rational?

A psychologist won the Nobel in Economics by proving we’re not. A lawyer and an economist want the government to help. Should it?

Stage 1 of 3

The demolition

Daniel Kahneman, the psychologist who won the Nobel Prize in Economics. His experiments demolished the assumption that people are rational.

Daniel Kahneman never took an economics class. He was a psychologist who ran experiments on how people actually make decisions — and what he found was so devastating to the foundations of economic theory that the field gave him its highest honor in 2002. The core of his work, done with Amos Tversky across two decades, is simple: people don’t evaluate outcomes the way economists assumed they do. They evaluate outcomes relative to a reference point, feel losses more sharply than equivalent gains, and get probabilities systematically wrong.

Prospect theory. Standard economics says a rational agent evaluates final wealth states. Getting \$100 when you have \$900 and getting \$100 when you have \$900,000 produce different utility, but the calculation is always about the total. Kahneman and Tversky (1979) showed people don’t think this way at all. They evaluate changes from where they are. And changes aren’t symmetric: losing \$100 hurts roughly 2.25 times more than gaining \$100 pleases. This is loss aversion — and it explains a bewildering range of behavior that the standard model cannot.

Why do investors hold losing stocks too long and sell winners too quickly? Loss aversion — selling a loser means realizing the loss, making it psychologically real. Why do people reject a coin flip that pays \$150 on heads and costs \$100 on tails, even though the expected value is positive? Because the pain of losing \$100 outweighs the pleasure of gaining \$150. Why do homeowners refuse to sell their house below the price they paid, even in a falling market? The purchase price is the reference point, and selling below it registers as a loss.

Prospect theory’s value function is defined over gains and losses relative to a reference point $r$, not absolute wealth:

$$v(x) = \begin{cases} x^\alpha & \text{if } x \geq 0 \\ -\lambda(-x)^\beta & \text{if } x < 0 \end{cases}$$

Typical estimates are $\alpha \approx \beta \approx 0.88$ and $\lambda \approx 2.25$. The function is concave for gains (risk aversion) and convex for losses (risk seeking). Additionally, people overweight small probabilities and underweight large ones through a probability weighting function $\pi(p)$, which explains both lottery purchases (overweighting a tiny chance of a jackpot) and insurance (overweighting a tiny chance of catastrophe).
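
To make these numbers concrete, here is a minimal Python sketch of the value function under the typical estimates above, paired with an inverse-S weighting function of the form estimated by Tversky and Kahneman (1992). The $\gamma$ value and the simple separable weighting (rather than the full cumulative version) are illustrative assumptions; the coin-flip evaluation at the bottom reproduces the rejection described earlier.

```python
# Sketch of prospect theory's value and weighting functions.
# Parameter values follow the typical estimates quoted above;
# treat them as illustrative, not definitive.

ALPHA = 0.88   # curvature for gains
BETA = 0.88    # curvature for losses
LAM = 2.25     # loss aversion coefficient
GAMMA = 0.61   # weighting curvature (1992 estimate for gains;
               # applied to both gains and losses here for simplicity)

def value(x: float) -> float:
    """Value of a gain/loss x relative to the reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAM * (-x) ** BETA

def weight(p: float) -> float:
    """Inverse-S weighting: overweights small p, underweights large p."""
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

def prospect_value(outcomes) -> float:
    """Weighted value of a list of (probability, payoff) pairs."""
    return sum(weight(p) * value(x) for p, x in outcomes)

# The coin flip from above: +$150 on heads, -$100 on tails.
# Expected value is +$25, but the prospect value is negative,
# so a loss-averse agent rejects the bet.
coin_flip = [(0.5, 150), (0.5, -100)]
print(f"prospect value of the coin flip: {prospect_value(coin_flip):.1f}")
# -> roughly -20: losing $100 looms larger than winning $150
```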

Intuition

The core insight is three-fold. First, you judge outcomes by whether they’re gains or losses relative to where you started, not by where they leave you. Second, losses loom about twice as large as gains. Third, you overreact to small probabilities and underreact to large ones. Together, these three features explain why people buy lottery tickets AND insurance, hold losing investments too long, and refuse good bets when the downside is framed as a loss.

Framing effects. If people were truly rational, it wouldn’t matter how you phrase a question. But Kahneman and Tversky’s “Asian disease” experiment demonstrated otherwise. When told “200 out of 600 people will be saved,” most chose the sure option. When told “400 out of 600 people will die” — the identical outcome — most chose the gamble. The phrasing flipped the reference point from gains to losses, and that flip reversed the decision. This isn’t a minor laboratory curiosity. It violates the most basic requirement of rational choice: that preferences should be invariant to description.

Want the full formal treatment with interactive graphs? Ch 17 §17.1 builds the model from scratch.

Take

“A choice architect has the responsibility for organizing the context in which people make decisions.”

— Richard Thaler & Cass Sunstein, Nudge, 2008

"Should the government nudge people toward better choices?"

If Kahneman is right that our decisions are systematically biased, someone is always structuring the choices we face. Should that someone be the government — deliberately?

Is rationality a useful assumption?

“The idea that agents are rational is so ingrained in economic thinking that many economists would not even see its special character. They simply view the rational-agent model as a description of reality. But it’s a theory, and a bad one at that — people are predictably irrational.”

— Daniel Kahneman, Thinking, Fast and Slow, 2011

Kahneman’s challenge isn’t that people make random errors — noise would average out in markets. His challenge is that errors are systematic: biases that point in one direction, predictably, across populations. Loss aversion makes people too conservative with gains and too reckless with losses. Overweighting small probabilities inflates demand for both lotteries and insurance. These patterns aren’t noise — they’re signal, and they don’t cancel out in aggregate.

“The economic approach does not assume that all participants in any market necessarily have complete information, but it does assume that the desire for information is subject to the same rational calculations as any other good. People invest in information up to the point where its marginal benefit equals its marginal cost.”

— Gary Becker, The Economic Approach to Human Behavior, 1976

Becker’s defense of rationality is subtle: it doesn’t require perfect decisions, only that people optimize given their constraints, including information costs. Not checking every label at the grocery store isn’t irrational — your time has value. The “biases” Kahneman documents might be rational adaptations to the cost of thinking more carefully. This “rational inattention” framework has become a major research program in its own right.

Where this leaves us

Kahneman and Tversky proved that people systematically violate the axioms of rational choice — not occasionally, not only in labs, but in predictable patterns that formal models can capture. The question is no longer whether biases exist. It’s whether markets correct them or whether they persist in the wild. Gary Becker says the rational framework survives if you expand what counts as a “cost.” But that expansion risks making rationality unfalsifiable — anything can be called rational if you posit an unobservable cost. The real test is whether the biases survive competition and aggregation.

Kahneman demolished the rational agent in the lab. But economists had a defense: markets discipline irrationality. Competition weeds out bad decisions. The aggregate behaves rationally even if individuals don’t. Is that true? Before we can test it, we need to ask a deeper question: what does the formal theory of rational choice actually require?

Stage 2 of 3

The defense

“The purely economic man is indeed close to being a social moron. Economic theory has been much preoccupied with this rational fool.”

— Amartya Sen, “Rational Fools,” 1977

Sen asked: can choice theory even tell the difference between rational and confused?

Sen’s attack wasn’t empirical — it was logical. He wasn’t saying people fail to be rational. He was saying the economist’s definition of rationality is so thin that it can’t distinguish a genius from a fool. The entire apparatus of consumer theory, welfare economics, and mechanism design rests on a small set of axioms about preferences. If those axioms are trivially satisfiable, the edifice is built on sand.

The choice axioms. Formal rational choice requires three properties:

  1. Completeness: For any two options A and B, you can rank them — either A is preferred, B is preferred, or you’re indifferent. No option is “unrankable.”
  2. Transitivity: If you prefer A to B and B to C, you prefer A to C. Preferences don’t cycle.
  3. Continuity: Small changes in options produce small changes in preferences. No jumps.

If your preferences satisfy these axioms, the Debreu representation theorem guarantees a utility function exists — a number $u(A)$ assigned to every option such that $u(A) > u(B)$ whenever you prefer A to B. This isn’t a minor technical detail. It’s the foundation of everything: consumer theory, producer theory, general equilibrium, welfare economics, and mechanism design all flow from the assumption that people maximize a utility function.

Revealed preference. The axioms have an empirical counterpart. The Weak Axiom of Revealed Preference (WARP): if you chose bundle A when B was affordable, you shouldn’t choose B when A is affordable at the same prices. The Strong Axiom (SARP) extends this to chains. These are testable: give someone a sequence of budget sets, observe choices, check consistency with any utility function. Violate WARP and you’re not rational in the formal sense, period.

Intuition

Revealed preference says: if you picked the steak when the salad was available and affordable, you shouldn’t pick the salad when the steak is available and affordable at the same price. Your choices should tell a consistent story about what you like. If they don’t, no utility function can describe you.
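
Because the revealed-preference test is purely mechanical, it can be coded directly. Below is a minimal sketch assuming linear budget sets: given a sequence of price vectors and the bundles chosen under them, it flags every pair of observations that is inconsistent with WARP. The two-good data at the bottom are hypothetical.

```python
import numpy as np

def warp_violations(prices, choices):
    """Check the Weak Axiom of Revealed Preference.

    prices[i]  : price vector faced in observation i
    choices[i] : bundle actually chosen in observation i

    If bundle j was affordable when i was chosen (p_i . x_j <= p_i . x_i),
    then i is revealed preferred to j, so i must NOT be affordable when
    j is chosen. Returns the list of offending pairs (i, j).
    """
    prices = np.asarray(prices, dtype=float)
    choices = np.asarray(choices, dtype=float)
    violations = []
    n = len(choices)
    for i in range(n):
        for j in range(n):
            if i == j or np.allclose(choices[i], choices[j]):
                continue
            i_affords_j = prices[i] @ choices[j] <= prices[i] @ choices[i]
            j_affords_i = prices[j] @ choices[i] <= prices[j] @ choices[j]
            if i_affords_j and j_affords_i:
                violations.append((i, j))
    return violations

# Hypothetical data over two goods: at prices (1, 2) the consumer
# chose (2, 2); at prices (2, 1), (3, 1). Each chosen bundle was
# affordable in the other situation, so no single preference
# ordering can rationalize both choices.
prices = [(1, 2), (2, 1)]
choices = [(2, 2), (3, 1)]
print(warp_violations(prices, choices))  # -> [(0, 1), (1, 0)]
```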

Sen’s critique. Here’s the problem. Sen pointed out that the standard framework collapses two different things into one. It assumes your choices reveal your preferences, and your preferences define your welfare. But what about someone who donates a kidney to a stranger? Standard theory says they “prefer” donating — otherwise they wouldn’t do it. But this makes “preference” vacuous. The theory can’t distinguish between genuine self-interest, altruism, moral duty, habit, confusion, or coercion. If every choice is “revealed preference,” the concept has no content.

Where the axioms actually break. And beyond the philosophical problem, the axioms fail empirically. Completeness is implausible for complex choices — people genuinely don’t have rankings over all possible bundles. Choosing between career paths or medical treatments often produces not indifference but undecidedness, which is a different thing. Transitivity fails systematically: Grether and Plott (1979) documented robust preference reversals between gambles. People assign higher dollar values to gamble A but choose gamble B in direct comparison — not occasionally, but reliably across decades of experiments. Context dependence violates independence of irrelevant alternatives: add a clearly inferior “decoy” option and people change which of the original options they pick.

For the full formal treatment of choice axioms and representation theorems, see Ch 10 §10.1.

Take

"Markets can remain irrational longer than you can remain solvent."

-- attributed to John Maynard Keynes (likely apocryphal)

If people aren't rational, how do markets work at all?

The "as if" defense says markets discipline irrationality: irrational agents lose money to rational ones and get driven out. But what if the irrational agents are the billionaires? What if irrational behavior is profitable precisely because everyone else is irrational too?

Do the axioms hold?

“Heuristics are not second best. In many real-world environments — where information is scarce, time is limited, and the future is uncertain — simple heuristics outperform complex optimization. The ‘biases’ documented in the laboratory are often smart adaptations to the structure of real environments.”

— Gerd Gigerenzer, Rationality for Mortals, 2008

Gigerenzer’s “ecological rationality” program directly challenges Kahneman’s framing. Where Kahneman sees bias, Gigerenzer sees adaptation. The “recognition heuristic” — pick the option you recognize — outperforms complex portfolio models in some stock-picking tasks. The “1/N rule” — divide equally among options — beats mean-variance optimization in many real portfolios because it avoids estimation error. Laboratory experiments that demonstrate “irrationality” often use artificial environments that penalize the very heuristics that work well in the wild.
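
The 1/N claim is easy to probe in simulation. The sketch below compares tangency weights estimated from a short sample window against naive equal weights; the market parameters, window length, and normality of returns are all invented simplifying assumptions, and the qualitative point is only that estimation error can swamp the theoretical gains from optimization.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ASSETS = 10   # number of assets (invented)
T_EST = 60      # estimation window, e.g. 60 months of returns
N_TRIALS = 200  # independent estimation windows to average over

# An invented "true" market: similar modest means, correlated noise.
true_mu = rng.uniform(0.004, 0.008, N_ASSETS)
A = rng.normal(0.0, 0.02, (N_ASSETS, N_ASSETS))
true_cov = A @ A.T + np.diag(rng.uniform(0.001, 0.003, N_ASSETS))

def sharpe(w, mu, cov):
    """Population Sharpe ratio of a fixed-weight portfolio."""
    return (w @ mu) / np.sqrt(w @ cov @ w)

w_equal = np.full(N_ASSETS, 1.0 / N_ASSETS)  # the 1/N rule
sharpe_1n = sharpe(w_equal, true_mu, true_cov)

sharpes_mv = []
for _ in range(N_TRIALS):
    sample = rng.multivariate_normal(true_mu, true_cov, T_EST)
    mu_hat = sample.mean(axis=0)
    cov_hat = np.cov(sample, rowvar=False)
    w_mv = np.linalg.solve(cov_hat, mu_hat)  # estimated tangency direction
    w_mv /= w_mv.sum()                       # normalize to sum to 1
    sharpes_mv.append(sharpe(w_mv, true_mu, true_cov))

print(f"1/N rule:                {sharpe_1n:.3f}")
print(f"estimated mean-variance: {np.mean(sharpes_mv):.3f}")
# With only 60 observations to estimate 10 means and 55 covariance
# terms, the optimizer chases noise in mu_hat and cov_hat, and its
# average true Sharpe ratio typically falls below the naive 1/N rule.
```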

“Competition in markets generates rationality. Firms that make irrational decisions go bankrupt. Consumers who consistently overpay lose resources to those who don’t. Evolution — whether biological or economic — selects for behavior that approximates optimization, even if no individual is literally maximizing.”

— Summary of Alchian (1950) / Friedman (1953) “as if” defense

The “as if” defense has been the mainstream’s first line for decades. It concedes that individuals aren’t literally maximizing utility but argues that market outcomes behave as if they were, because competition weeds out irrational agents over time. Vernon Smith’s experimental economics program partially supports this: in simple market experiments with clear price signals, even inexperienced traders converge on equilibrium prices within a few rounds. But the defense weakens in markets with complex products, infrequent transactions, and limited feedback — which describes most of life.

Where this leaves us

The axioms are best understood as a benchmark, not a description. They tell you what consistency requires, and deviations from them are informative — they point to specific psychological mechanisms (loss aversion, framing, present bias) that can themselves be modeled. The “as if” defense works in some domains (simple repeated market decisions with clear feedback) but fails in others (complex infrequent decisions, financial markets with limited arbitrage). The real question is no longer “are people rational?” but “in which domains does rationality fail, and does it matter for outcomes?”

So people aren’t rational in the way the theory requires, and the market doesn’t fully correct for it. This creates an uncomfortable policy question. If people predictably make choices they’d regret on reflection — undersaving for retirement, ignoring organ donation forms, eating themselves into disease — should someone step in? And if so, how?

Stage 3 of 3

The synthesis

“A nudge is any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives.”

— Richard Thaler & Cass Sunstein, Nudge, 2008

If people aren’t rational, should the government help them decide?

Here’s the uncomfortable fact that nudge theory forces you to confront: there is no neutral way to present a choice. The order of options on a ballot affects who gets elected. The default on an organ donation form determines whether 27% or 99% of people consent. Whether a discount is framed as “save \$5” or “avoid losing \$5” changes how many people take it. Someone is always designing the choice environment. The only question is whether the design is deliberate or accidental.

Present bias and the savings crisis. Standard economic theory says people discount the future at a constant rate: a dollar next year is worth, say, 95 cents today, and a dollar in two years is worth 95% of that. But Laibson (1997) showed that people discount the immediate future far more steeply than the distant future. You might prefer \$100 today over \$110 tomorrow, but also prefer \$110 in 31 days over \$100 in 30 days — even though the tradeoff is identical. This isn’t a laboratory curiosity. It’s why half of Americans have less than \$1,000 in retirement savings despite knowing they should save more.

Quasi-hyperbolic discounting introduces a parameter $\beta < 1$ that discounts all future periods relative to the present:

$$U_t = u_t + \beta \sum_{s=1}^{\infty} \delta^s \, u_{t+s}$$

With $\beta \approx 0.7$ and $\delta \approx 0.97$, the model explains time inconsistency: your present self makes plans your future self doesn’t follow. The key distinction for policy is whether people are “sophisticated” (they know they have self-control problems) or “naive” (they believe they’ll follow through). Both types benefit from commitment devices and better defaults.
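
The \$100-versus-\$110 reversal above falls straight out of the formula. Here is a minimal sketch using the quoted $\beta$ and $\delta$, treating each period as one day purely for illustration:

```python
BETA = 0.7    # present bias: extra discount on anything that isn't "now"
DELTA = 0.97  # standard per-period discount factor (illustrative values)

def discounted_utility(amount: float, periods_away: int) -> float:
    """Quasi-hyperbolic (beta-delta) value of a single future payoff.

    U_t = u_t + beta * sum(delta^s * u_{t+s}); for one payoff
    s > 0 periods away this reduces to beta * delta^s * amount.
    """
    if periods_away == 0:
        return amount
    return BETA * DELTA ** periods_away * amount

# Today vs. tomorrow: present bias makes $100 now win.
print(discounted_utility(100, 0), discounted_utility(110, 1))
# -> 100 vs ~74.7: take the $100 today

# The same one-day tradeoff pushed 30 days out: the extra $10 wins.
print(discounted_utility(100, 30), discounted_utility(110, 31))
# -> ~28.1 vs ~29.9: wait the extra day for $110
```

The same agent ranks the identical one-day tradeoff differently depending on whether “now” is involved, which is exactly the time inconsistency the model is built to capture.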

Intuition

Think of present bias as a war between two versions of yourself. Future You plans to start saving next month. Present You, when next month arrives, pushes it off again. This isn’t because you’re stupid — it’s because the immediate cost (less money now) always looms larger than the distant benefit (retirement security). A nudge like auto-enrollment in a 401(k) means Present You saves by default, and only the rare motivated opt-out breaks the pattern.

The evidence. The results are not subtle. When the default for 401(k) enrollment switched from opt-in to opt-out, participation jumped from 49% to 86%. In countries with opt-out organ donation, consent rates are 85–100%; in opt-in countries, 4–27%. Placing fruit at eye level in school cafeterias increases consumption by 25%. Printing energy usage comparisons on electricity bills reduces consumption by 2%. None of these interventions restrict choice. All of them transform outcomes.

Behavioral finance: the market test. The critical question is whether these biases survive the most competitive environment on Earth — financial markets. If arbitrage corrects mispricing, then biases are an individual quirk, not a systemic problem. But DeLong, Shleifer, Summers, and Waldmann (1990) showed that “noise traders” — irrational participants — can survive and affect prices because arbitrage is costly and risky. Short-selling is expensive. Margin calls force positions closed at the worst time. The dot-com bubble persisted for years despite widespread recognition that it was a bubble. Betting against it was ruinous for anyone who was early. If biases survive financial markets, they survive anywhere.

For the full treatment of nudge theory, defaults, and libertarian paternalism, see Ch 17 §17.7. For behavioral finance and limits to arbitrage, see Ch 17 §17.8.

Take

"Should the government nudge people toward better choices?"

Defaults determine whether 27% or 99% of people donate organs. Auto-enrollment determines whether half or nearly all workers save for retirement. If the choice architecture is never neutral, should the government design it deliberately?

Does irrationality survive markets?

“Libertarian paternalism is not an oxymoron. We argue for self-conscious efforts, by private and public institutions, to steer people’s choices in directions that will improve the choosers’ own welfare. A policy is paternalistic if it attempts to influence the choices of affected parties in a way that will make those parties better off. It is libertarian if it preserves freedom of choice.”

— Richard Thaler & Cass Sunstein, American Economic Review, 2003

Thaler and Sunstein’s original argument was carefully constructed: they accepted the standard economic premise that freedom of choice matters, then showed that the concept of a “neutral” choice environment is incoherent. Someone must decide whether the default for organ donation is opt-in or opt-out. Someone must decide the order of foods in a cafeteria. Given that defaults exist and matter enormously, refusing to choose a good default is itself a choice — one that often benefits incumbent arrangements or corporate interests rather than citizens.

“Experimental economics demonstrates that markets work far better than economists had any right to expect. Even with inexperienced subjects, competitive equilibrium is achieved within a few trading periods. The rationality is in the institutions, not necessarily in the individuals.”

— Vernon Smith, Nobel Lecture, 2002

Vernon Smith shared the 2002 Nobel with Kahneman — and reached nearly opposite conclusions. Where Kahneman saw individual irrationality undermining markets, Smith saw market institutions generating rational outcomes from irrational participants. In his double-auction experiments, even traders who couldn’t spell “equilibrium” converged on equilibrium prices within a few rounds. The critical variable wasn’t individual rationality but institutional design: clear rules, transparent prices, repeated interaction. This insight actually supports nudge theory from an unexpected direction — if institutions shape outcomes more than individual cognition, then designing better institutions (including choice architecture) is the right intervention.

The verdict

The evidence is decisive on three points. First, people systematically violate rational choice axioms in ways that formal behavioral models predict precisely. Second, these violations survive in important real-world domains — financial markets, retirement decisions, health choices — where competition and experience don’t fully correct them. Third, simple changes to choice architecture produce enormous effects on outcomes at near-zero cost. The open question is political, not empirical: how much latitude should governments have in designing the choice environment, and who holds them accountable for the defaults they choose?

Where this leaves us

We started with a psychologist demolishing the economist’s most basic assumption. Three stages later, here’s what you now know:

  1. The biases are real and systematic (Stage 1). Kahneman and Tversky didn’t just show that people make mistakes. They showed that mistakes follow precise, predictable, modelable patterns — loss aversion, reference dependence, probability weighting, framing effects. Prospect theory is not a critique of economics; it’s an alternative model that often predicts better than expected utility theory.
  2. The formal theory is thinner than it looks (Stage 2). The axioms of rational choice — completeness, transitivity, continuity — fail empirically, and Sen showed they fail conceptually: revealed preference can’t distinguish rational from confused. The “as if” defense works for simple competitive markets but breaks down for complex, infrequent, high-stakes decisions. Gigerenzer’s ecological rationality offers a partial rescue: heuristics aren’t always biases; sometimes they’re smart adaptations to messy environments.
  3. The policy question is real (Stage 3). If biases are systematic and choice architecture is never neutral, then designing better defaults is not paternalism — it’s rational institutional design. Opt-out organ donation and auto-enrollment in retirement savings are among the most cost-effective policy interventions ever documented. The limit is the “who nudges the nudgers” problem: government regulators are biased too, and the line between helpful defaults and manipulative control requires democratic vigilance.

Economics has evolved from treating rationality as an axiom to treating it as a hypothesis — one that holds approximately in competitive markets with simple goods and fails systematically for complex, infrequent, high-stakes decisions. The rational-choice framework remains indispensable as a benchmark: deviations from it are informative precisely because they point to specific mechanisms that can be modeled, measured, and sometimes corrected. The question is no longer “are people rational?” but “in which domains, under which conditions, and what should we do about the domains where they’re not?”