Chapters 8-9 used the IS-LM model to analyze short-run fluctuations. That model, built on Keynesian foundations, treats aggregate demand as the primary driver of business cycles. In the late 1970s, a methodological revolution challenged this approach. Robert Lucas argued that any model used for policy evaluation must be built from microeconomic foundations — optimizing agents, rational expectations, and market clearing. This is the Lucas critique, and it destroyed the large-scale Keynesian models that had dominated macroeconomics.
The Real Business Cycle (RBC) model, pioneered by Kydland and Prescott (1982), took the Lucas critique seriously. It asks: can an economy with fully flexible prices, rational agents, and technology shocks reproduce the key features of the business cycle? The answer is a qualified yes — and even where the answer is no, the RBC framework became the chassis for all subsequent macroeconomic modeling.
In 1976, Robert Lucas published what may be the most influential methodological paper in macroeconomics. His argument was simple but devastating: if agents are rational, their behavior depends on the policy regime. When policy changes, agents' decision rules change — so parameters estimated under the old regime are invalid under the new one.
The old approach. In the 1960s–70s, central banks and governments used large-scale econometric models (hundreds of equations) to predict the effects of policy changes. These models estimated behavioral parameters — the marginal propensity to consume, the slope of the Phillips curve, the sensitivity of investment to interest rates — from historical data, then simulated "what if" scenarios by changing policy variables.
The critique. Lucas pointed out that these parameters are not structural constants of nature. They reflect agents' optimal responses to the economic environment — including the policy regime. Change the regime, and the parameters change.
A Keynesian model estimates the MPC at 0.8 from historical data and predicts that a \$100 billion tax cut will raise consumption by \$80 billion. But if the tax cut is perceived as temporary, forward-looking consumers may save most of it to pay higher future taxes (Ricardian equivalence, Chapter 16). The MPC under a temporary tax cut is much lower than 0.8.
The Phillips curve appeared to offer a stable tradeoff: the Fed could "buy" lower unemployment by accepting higher inflation. But when the Fed actually tried this in the late 1960s, workers and firms adjusted their inflation expectations upward. The Phillips curve shifted — the tradeoff disappeared. The parameter (the slope) changed because the policy regime changed.
The solution: Build models from structural primitives — preferences, technology, and constraints — that don't change when policy changes. Agents' decision rules are derived from optimization, not assumed. This is the microfoundations approach.
Households maximize expected lifetime utility $E_0 \sum_{t=0}^{\infty} \beta^t u(c_t, 1-l_t)$, where $c_t$ is consumption, $l_t$ is labor supply, and $1 - l_t$ is leisure. Technology: $Y_t = z_t K_t^\alpha l_t^{1-\alpha}$.
Technology shocks follow an AR(1) process:

$$\ln z_t = \rho_z \ln z_{t-1} + \varepsilon_t, \qquad \varepsilon_t \sim N(0, \sigma_\varepsilon^2)$$
Capital accumulation: $K_{t+1} = (1-\delta)K_t + I_t$. Resource constraint: $c_t + I_t = Y_t$.
Euler equation (intertemporal):

$$u_c(c_t) = \beta\, E_t\!\left[u_c(c_{t+1})\left(\alpha z_{t+1} K_{t+1}^{\alpha-1} l_{t+1}^{1-\alpha} + 1 - \delta\right)\right]$$
Intratemporal labor supply (the MRS between leisure and consumption equals the marginal product of labor):

$$\frac{u_{1-l}(c_t, 1-l_t)}{u_c(c_t, 1-l_t)} = (1-\alpha)\, z_t \left(\frac{K_t}{l_t}\right)^{\alpha}$$
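These optimality conditions have no closed-form solution in general. But in a well-known special case — log utility, fixed labor, and full depreciation ($\delta = 1$, not the calibration used in this chapter) — the optimal policy collapses to the saving rule $K_{t+1} = \alpha\beta Y_t$ (the Brock–Mirman result), which makes simulation a short loop. A minimal Python sketch of that special case:

```python
import numpy as np

# Brock-Mirman special case: log utility, fixed labor, delta = 1.
# The optimal policy is then K_{t+1} = alpha*beta*Y_t, and
# consumption is the remainder: C_t = (1 - alpha*beta)*Y_t.
alpha, beta = 0.36, 0.99
rho_z, sigma_eps = 0.95, 0.007
rng = np.random.default_rng(1)

T = 200
ln_z = np.zeros(T)
K = np.zeros(T)
Y = np.zeros(T)
K[0] = (alpha * beta) ** (1 / (1 - alpha))  # deterministic steady-state capital
for t in range(T):
    Y[t] = np.exp(ln_z[t]) * K[t] ** alpha  # production with TFP shock
    if t + 1 < T:
        K[t + 1] = alpha * beta * Y[t]      # optimal saving rule
        ln_z[t + 1] = rho_z * ln_z[t] + sigma_eps * rng.standard_normal()
C = (1 - alpha * beta) * Y
```

The point of the sketch is mechanical: once a policy rule is known, the equilibrium path is just the shock process fed through the production function and the accumulation equation. The quantitative RBC model replaces the closed-form rule with a numerically solved one.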
RBC models introduced calibration: set parameters using external information (long-run averages, microeconomic studies, national accounts), then check whether the model reproduces business cycle features that weren't targeted.
| Parameter | Value | Source / Target |
|---|---|---|
| $\beta$ | 0.99 | Matches 4% annual real interest rate |
| $\alpha$ | 0.36 | Capital share of income |
| $\delta$ | 0.025 | 10% annual depreciation |
| $\rho_z$ | 0.95 | Persistence of Solow residual |
| $\sigma_\varepsilon$ | 0.007 | Volatility of Solow residual innovations |
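The last two rows pin down the driving process. A quick simulation (a Python sketch, not from the chapter) shows how persistent the calibrated process is: the unconditional standard deviation of $\ln z_t$ is $\sigma_\varepsilon/\sqrt{1-\rho_z^2} \approx 0.022$, about three times the innovation volatility.

```python
import numpy as np

# Simulate the AR(1) technology process: ln z_t = rho_z*ln z_{t-1} + eps_t
rho_z, sigma_eps = 0.95, 0.007   # calibrated values from the table
T = 100_000
rng = np.random.default_rng(0)

ln_z = np.zeros(T)
for t in range(1, T):
    ln_z[t] = rho_z * ln_z[t - 1] + sigma_eps * rng.standard_normal()

# Theoretical unconditional std of ln z for a stationary AR(1)
theory_std = sigma_eps / np.sqrt(1 - rho_z**2)   # ~0.0224
```

The simulated standard deviation converges to the theoretical value; with $\rho_z = 0.95$, small quarterly innovations accumulate into sizable, long-lived movements in TFP.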
Define $\hat{x}_t = \ln x_t - \ln x^*$ (log-deviation from steady state). Taylor-expand each equation, keeping first-order terms.
A positive technology shock ($\varepsilon_t > 0$) raises $z_t$. Output rises immediately. Consumption rises by less than output (smoothing). Investment rises sharply (temporarily high returns). Hours worked depend on the balance of substitution and income effects — with persistent shocks, the wealth effect partially offsets the wage incentive.
Adjust the persistence of technology shocks ($\rho_z$) and watch how the impulse response shapes change. At low persistence, shocks die out quickly. At high persistence, effects are nearly permanent.
Figure 14.1. Impulse responses to a one-standard-deviation positive technology shock. Four panels: output, consumption, investment, and hours worked. Drag the slider to see how persistence shapes the dynamics. Hover for exact values.
Compute the steady state for the basic RBC model with $\alpha = 0.33$, $\beta = 0.99$, $\delta = 0.025$, $\phi = 2$ (leisure weight), $z^* = 1$.
Step 1: From the Euler equation at steady state ($c_{t+1} = c_t$): $1 = \beta(\alpha z^* K^{*\alpha-1} l^{*1-\alpha} + 1 - \delta)$. Solving: $\alpha K^{*\alpha-1} l^{*1-\alpha} = 1/\beta - 1 + \delta = 1/0.99 - 1 + 0.025 = 0.0351$.
Step 2: Capital-labor ratio: $(K/l)^{\alpha-1} = 0.0351/0.33 = 0.1064$. So $K/l = 0.1064^{1/(0.33-1)} = 0.1064^{-1.493} \approx 28.4$.
Step 3: Output-capital ratio: $Y/K = (K/l)^{\alpha-1} = 0.1064$. Investment share: $I/Y = \delta(K/Y) = 0.025/0.1064 = 0.235$. Consumption share: $C/Y = 1 - I/Y = 0.765$.
Step 4: From the labor FOC: $\phi/(1-l^*) = (1-\alpha)(K^*/l^*)^\alpha / c^*$. With target $l^* = 1/3$: verify the calibration is internally consistent.
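The arithmetic in Steps 1–4 can be checked in a few lines (a Python sketch; Step 4 assumes the log specification $u = \ln c + \phi \ln(1-l)$, which is what the FOC in Step 4 implies):

```python
alpha, beta, delta, phi = 0.33, 0.99, 0.025, 2.0

# Step 1: steady-state Euler equation pins down the marginal product of capital
mpk = 1 / beta - 1 + delta                  # alpha*(K/l)^(alpha-1) = 0.0351

# Step 2: capital-labor ratio
k_l = (mpk / alpha) ** (1 / (alpha - 1))    # ~28.4

# Step 3: great ratios
y_k = mpk / alpha                           # Y/K = (K/l)^(alpha-1)
i_y = delta / y_k                           # I/Y ~ 0.235
c_y = 1 - i_y                               # C/Y ~ 0.765

# Step 4: the labor FOC phi*c/(1-l) = (1-alpha)*(K/l)^alpha, with c = (C/Y)*Y,
# reduces to l* = (1-alpha) / ((1-alpha) + phi*(C/Y))
l_star = (1 - alpha) / ((1 - alpha) + phi * c_y)   # ~0.305
```

With $\phi = 2$, $l^* \approx 0.305$ — close to, but not exactly, the $1/3$ target; hitting $l^* = 1/3$ exactly would require re-solving for $\phi$ (roughly $\phi \approx 1.75$).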
Trace the response to a positive one-standard-deviation technology shock ($\varepsilon_0 = 0.007$) with $\rho_z = 0.95$.
Impact (t=0): $z_0$ rises by 0.7%. Output jumps immediately: higher TFP means more output from the same inputs. The wage rises (MPL up), and the return to capital rises (MPK up).
Consumption: Rises by less than output (~0.3%). Forward-looking households smooth consumption over the persistent shock. They save a large fraction of the windfall.
Investment: Rises sharply (~2.5%) because the return to capital is temporarily high and households channel saving into capital accumulation.
Hours: The response depends on persistence. The substitution effect (higher wage $\to$ work more) pushes hours up. The wealth effect (richer $\to$ consume more leisure) pushes hours down. With $\rho_z = 0.95$, the wealth effect partially offsets, producing a small positive hours response (~0.2%).
Dynamics (t=1,...,40): All variables decay toward steady state at rate $\rho_z^t$. Capital accumulates slowly (predetermined), keeping output elevated even after $z_t$ has declined.
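The decay rate in these dynamics comes straight from the shock process: after a one-time impulse, the log-deviation of TFP is $\hat{z}_t = \rho_z^t \varepsilon_0$, so its half-life solves $\rho_z^t = 1/2$. A quick check:

```python
import numpy as np

rho_z, eps0 = 0.95, 0.007
t = np.arange(41)                         # quarters 0..40, as in the figure
z_hat = eps0 * rho_z ** t                 # TFP log-deviation after the impulse

# Half-life: rho_z^t = 0.5  =>  t = ln(0.5) / ln(rho_z)
half_life = np.log(0.5) / np.log(rho_z)   # ~13.5 quarters
```

A half-life of roughly 13.5 quarters — over three years — is why output remains elevated long after impact even before accounting for the slow decay of accumulated capital.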
| Feature | U.S. Data | RBC Model |
|---|---|---|
| $\sigma_c/\sigma_y$ | ≈ 0.5 | ✓ ~0.5 |
| $\sigma_i/\sigma_y$ | ≈ 3.0 | ✓ ~3.0 |
| Output persistence | Autocorr. ~0.85 | ✓ From $\rho_z$ |
| Procyclical C and I | $\rho(c,y) > 0$ | ✓ |
| Feature | U.S. Data | RBC Model |
|---|---|---|
| Hours volatility | $\sigma_h/\sigma_y \approx 0.8$ | ✗ ~0.3 |
| Monetary non-neutrality | Money affects real GDP | ✗ Neutral |
| Recessions | Many non-technology causes | ✗ Requires negative tech shocks |
Adjust the model's structural parameters and see how the simulated business cycle moments change. Compare to U.S. data targets — can you find a calibration that matches all moments?
| Moment | U.S. Data | Model | Match? |
|---|---|---|---|
| $\sigma_y$ (%) | 1.72 | 1.72 | ✓ |
| $\sigma_c / \sigma_y$ | 0.50 | 0.50 | ✓ |
| $\sigma_i / \sigma_y$ | 3.00 | 3.00 | ✓ |
| $\sigma_h / \sigma_y$ | 0.80 | 0.31 | ✗ |
| $\text{corr}(c, y)$ | 0.88 | 0.88 | ✓ |
| $\text{autocorr}(y)$ | 0.85 | 0.85 | ✓ |
Figure 14.2. Calibration explorer. Adjust parameters and watch model moments update. Green check = within 20% of target. Red cross = outside 20%. The hours volatility ratio ($\sigma_h/\sigma_y$) is the hardest moment to match — the basic RBC model consistently underestimates it.
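The within-20%-of-target rule used in the explorer is simple to state in code (a sketch; the moment values are those from the table above):

```python
# Data targets and model moments from the comparison table
targets = {
    "sigma_y": 1.72, "sigma_c/sigma_y": 0.50, "sigma_i/sigma_y": 3.00,
    "sigma_h/sigma_y": 0.80, "corr(c,y)": 0.88, "autocorr(y)": 0.85,
}
model = {
    "sigma_y": 1.72, "sigma_c/sigma_y": 0.50, "sigma_i/sigma_y": 3.00,
    "sigma_h/sigma_y": 0.31, "corr(c,y)": 0.88, "autocorr(y)": 0.85,
}

def matches(m, target, tol=0.20):
    """True if the model moment is within tol (20%) of the data target."""
    return abs(m - target) <= tol * abs(target)

report = {k: matches(model[k], targets[k]) for k in targets}
```

Running the check reproduces the table: every moment gets a green check except hours volatility, where 0.31 falls far outside the 20% band around 0.80.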
You now have the full RBC model — technology shocks, calibration, impulse responses, and the confrontation with data. This is the supply-side answer to the recession question, and it's the most radical claim in macroeconomics.
RBC says recessions are efficient responses to negative technology shocks. When $z_t$ falls, the marginal products of labor and capital both decline. Households optimally reduce labor supply (intertemporal substitution of leisure) and smooth consumption by cutting investment sharply. Output falls, but this isn't a market failure — it's the economy doing exactly what a social planner would choose. The First Welfare Theorem applies: the competitive equilibrium replicates the planner's solution. There is no role for stabilization policy because there is nothing to stabilize. The model generates realistic relative volatilities ($\sigma_c/\sigma_y \approx 0.5$, $\sigma_i/\sigma_y \approx 3$) and output persistence from a single shock source.
Against RBC, steelmanned: (a) What are negative technology shocks? The model requires that the economy periodically forgets how to produce — that TFP literally falls. Summers (1986) asked: "What are these shocks? Where's the technology regress?" Measured TFP (the Solow residual) is procyclical, but this may reflect demand-driven capacity utilization, not true technology changes. (b) The labor market: RBC needs large intertemporal substitution of leisure — workers voluntarily choose to work less in recessions. But micro estimates of labor supply elasticity are far too small (Shimer puzzle), and workers in recessions report being involuntarily unemployed. Nobody at a shuttered auto plant is choosing leisure. (c) Money matters: RBC says monetary policy is neutral. But Romer & Romer (1989), Christiano, Eichenbaum & Evans (1999), and decades of evidence show that monetary contractions cause real output declines. RBC has no mechanism for this.
The mainstream made a Solomonic judgment: RBC's methodology won, but its substance lost. The DSGE framework — microfoundations, calibration, rational expectations, general equilibrium — became the universal chassis for macroeconomic modeling. But the claim that recessions are efficient was rejected. The resolution was to take the RBC model and add sticky prices and monopolistic competition, producing the New Keynesian DSGE (Chapter 15). This preserves the discipline of microfoundations while restoring a role for demand shocks and monetary policy.
RBC's lasting contribution is the method, not the message. The evidence strongly supports a role for demand shocks, nominal rigidities, and monetary non-neutrality — all things the basic RBC model denies. But RBC forced the profession to take microfoundations seriously, to calibrate rather than just estimate, and to build general equilibrium models that are internally consistent. Every central bank DSGE model today descends from Kydland and Prescott's original framework. The irony: the model that said policy doesn't matter became the foundation for all modern policy analysis.
If we add sticky prices to the RBC framework, we get New Keynesian DSGE — the modern macro consensus. Chapter 15 (§15.1–15.8) develops the synthesis: a model in which both demand and supply shocks cause recessions, monetary policy has real effects, and the central bank can (sometimes) stabilize the economy. The question flips from "are recessions efficient?" to "how much of each recession is demand versus supply?"
Yield curve inversions, consumer confidence drops, and leading indicators flash warnings. But predicting recessions is notoriously unreliable — the models that explain them after the fact can't reliably predict them in advance.
The multiplier says a bigger stimulus would have kept unemployment below 8%. It hit 10%. Was the model wrong, or was the dose too small?
The smoothing parameter $\lambda$ controls the tradeoff: higher $\lambda$ means smoother trend. Standard: $\lambda = 1600$ for quarterly data.
A simulated GDP series is decomposed into trend and cycle using the HP filter. Drag $\lambda$ to see the tradeoff: low $\lambda$ lets the trend track every wiggle (small cycles), high $\lambda$ forces a smooth trend (large cycles).
Figure 14.3. HP filter applied to simulated log GDP. Top panel: data (blue) and trend (red). Bottom panel: cyclical component (green). Standard $\lambda = 1600$ for quarterly data. Drag the slider to feel why the choice of $\lambda$ matters.
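The filter itself is compact: the trend solves the penalized least-squares problem $\min_\tau \sum_t (y_t - \tau_t)^2 + \lambda \sum_t (\Delta^2 \tau_t)^2$, whose first-order condition is the linear system $(I + \lambda D^\top D)\tau = y$, with $D$ the second-difference matrix. A dense-matrix Python sketch (fine for a few hundred quarters; production code would use sparse matrices):

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott filter: return (trend, cycle) for a 1-D series y.

    Solves (I + lam * D'D) trend = y, where D takes second differences.
    """
    y = np.asarray(y, dtype=float)
    T = len(y)
    D = np.zeros((T - 2, T))          # (T-2) x T second-difference matrix
    for i in range(T - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    trend = np.linalg.solve(np.eye(T) + lam * (D.T @ D), y)
    return trend, y - trend
```

Sanity check: a purely linear series has zero second differences, so the filter returns it unchanged and the cycle is identically zero; raising $\lambda$ forces the trend toward exactly this linear limit.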
Compare the baseline RBC model ($\alpha = 0.36$, $\beta = 0.99$, $\delta = 0.025$, $\rho_z = 0.95$, $\sigma_\varepsilon = 0.007$) to quarterly U.S. data (1947–2019, HP-filtered with $\lambda = 1600$).
| Moment | U.S. Data | RBC Model | Match? |
|---|---|---|---|
| $\sigma_y$ (%) | 1.72 | 1.72 | Yes (targeted) |
| $\sigma_c/\sigma_y$ | 0.50 | 0.52 | Yes |
| $\sigma_i/\sigma_y$ | 3.00 | 2.84 | Yes |
| $\sigma_h/\sigma_y$ | 0.80 | 0.31 | No |
| $\text{corr}(c,y)$ | 0.88 | 0.94 | Approx. |
| $\text{autocorr}(y)$ | 0.85 | 0.86 | Yes |
Key success: Consumption smoothing ($\sigma_c/\sigma_y \approx 0.5$) and investment volatility ($\sigma_i/\sigma_y \approx 3$) emerge naturally from optimal saving.
Key failure: Hours volatility is far too low ($0.31$ vs. $0.80$). The model needs either indivisible labor (Hansen, 1985) or labor market frictions to match the data.
The Lucas critique (1976): Why it destroyed large-scale Keynesian models.
In the 1960s and early 1970s, central banks and treasuries relied on large-scale econometric models — some with hundreds of equations — to forecast the economy and evaluate policy. The Federal Reserve's FRB/MIT/Penn model, the Brookings model, and similar systems estimated behavioral relationships (the marginal propensity to consume, the Phillips curve slope, the interest sensitivity of investment) from decades of historical data.
These models appeared to offer a stable tradeoff between inflation and unemployment. The Phillips curve suggested that the Fed could "buy" a percentage point of lower unemployment by accepting 1–2 percentage points of additional inflation. Policymakers in the Johnson and Nixon administrations exploited this tradeoff.
The critique: Lucas showed that the Phillips curve's slope was not a structural constant but a function of the monetary regime. Under a regime that kept inflation low, workers' inflation expectations were anchored, and surprise inflation could temporarily boost employment. But when the Fed systematically pursued inflationary policy, workers adjusted their expectations. The Phillips curve shifted up — the economy got higher inflation with no employment gain. This is exactly what happened during the stagflation of the 1970s.
The legacy: Lucas's paper redirected all of macroeconomics toward models built from structural primitives — preferences, technology, and equilibrium concepts that are invariant to policy. The RBC model was the first full implementation of this vision. Every DSGE model used by central banks today descends from the methodological revolution Lucas triggered.
A 20% decline in copper prices (40% of exports, 20% of GDP) is modeled as a negative technology shock equivalent to a 1.6% decline in GDP-equivalent productivity.
Output: Falls ~1.6%, partially recovers as resources reallocate. Consumption: Falls by less (smoothing). Investment in copper: Falls sharply. Hours: In copper sector, decline sharply; other sectors may absorb some workers.
The RBC model captures output and consumption dynamics, but misses unemployment dynamics — displaced copper miners don't instantly find jobs in other sectors.
| Label | Equation | Description |
|---|---|---|
| Eq. 14.1 | $E_0 \sum \beta^t u(c_t, 1-l_t)$ | Household preferences |
| Eq. 14.2 | $\ln z_t = \rho_z \ln z_{t-1} + \varepsilon_t$ | Technology shock process |
| Eq. 14.4 | Bellman equation | Value function |
| Eq. 14.5 | Euler equation | Consumption smoothing |
| Eq. 14.6 | $MRS_{leisure,cons} = MPL$ | Intratemporal labor condition |
| Eq. 14.8–14.9 | Log-linearized system | Approximate solution |
| Eq. 14.10 | HP filter | Trend-cycle decomposition |