class: center, middle, inverse, title-slide

.title[
# Choice under Uncertainty (Lecture 1b)
]
.subtitle[
## EC404; Spring 2024
]
.author[
### Prof. Ben Bushong
]
.date[
### Last updated January 18, 2024
]

---
layout: true

<div class="msu-header"></div>

<div style = "position:fixed; visibility: hidden">
`$$\require{color}\definecolor{yellow}{rgb}{1, 0.8, 0.16078431372549}$$`
`$$\require{color}\definecolor{orange}{rgb}{0.96078431372549, 0.525490196078431, 0.203921568627451}$$`
`$$\require{color}\definecolor{MSUgreen}{rgb}{0.0784313725490196, 0.52156862745098, 0.231372549019608}$$`
</div>

<script type="text/x-mathjax-config">
MathJax.Hub.Config({
  TeX: {
    Macros: {
      yellow: ["{\\color{yellow}{#1}}", 1],
      orange: ["{\\color{orange}{#1}}", 1],
      MSUgreen: ["{\\color{MSUgreen}{#1}}", 1]
    },
    loader: {load: ['[tex]/color']},
    tex: {packages: {'[+]': ['color']}}
  }
});
</script>

<style>
.yellow {color: #FFCC29;}
.orange {color: #F58634;}
.MSUGreen {color: #14853B;}
</style>

---
class: inverseMSU
# Reminders and Comments

### This course is very much about time management.

By now you've hopefully opened the first problem set. That's a good start, but I encourage you to engage early and often. The second one will follow immediately on its heels.

--

A helpful mantra:

> There are two possible times to do something: now or never. Tomorrow isn't an option.

---
class: inverseMSU
name: Overview
# Today

### **Our Outline:**

(1) [Allais Paradox](#section1)

(2) [Ellsberg Paradox](#section2)

(3) [Kahneman & Tversky Examples](#section3)

(4) [The Calibration Theorem](#section4)

---
class: MSU
name: section1
# Allais Paradox (Allais 1953)

## Question 1:

| Option A | | | | Option B |
|:---------|:-|:-:|-:|---------:|
| (payout) | (prob) | | (prob) | (payout) |
| $1 Million | 1 | | 0.89 | $1 Million |
| | | | 0.10 | $5 Million |
| | | | 0.01 | $0 |

---
class: MSU
# Allais Paradox (Allais 1953)

## Question 2:

| Option C | | | | Option D |
|:---------|:-|:-:|-:|---------:|
| (payout) | (prob) | | (prob) | (payout) |
| $1 Million | 0.11 | | | |
| | | | 0.10 | $5 Million |
| $0 | 0.89 | | 0.90 | $0 |

Make your picks.

---
class: MSU
# Allais Paradox (Allais 1953)

**The Paradox:** The combination of choosing A over B and choosing D over C violates expected utility.

--

### Allais Paradox (Revealed)

(See board in class)
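--

For reference, a sketch of the algebra (we will reconstruct this on the board). Writing `\(u(\cdot)\)` for utility over payouts in millions, choosing A over B requires

`$$u(1) > 0.89\, u(1) + 0.10\, u(5) + 0.01\, u(0) \iff 0.11\, u(1) > 0.10\, u(5) + 0.01\, u(0)\text{,}$$`

while choosing D over C requires

`$$0.10\, u(5) + 0.90\, u(0) > 0.11\, u(1) + 0.89\, u(0) \iff 0.10\, u(5) + 0.01\, u(0) > 0.11\, u(1)\text{.}$$`

No utility function `\(u\)` can satisfy both inequalities at once.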
---
class: MSU
name: section2
# Ellsberg Paradox (Ellsberg 1961)

## Setup / Environment

Suppose an urn contains 90 balls:

- 30 of the balls are red.
- The other 60 balls are black or yellow, in unknown proportions.
- One ball will be drawn randomly from the urn.

--

**Question 1:**

Option A: You win $100 if the ball is red.

Option B: You win $100 if the ball is black.

---
class: MSU
# Ellsberg Paradox (Ellsberg 1961)

- An urn contains 90 balls. 30 of the balls are red.
- The other 60 balls are black or yellow, in unknown proportions.
- One ball will be drawn randomly from the urn.

--

**Question 2:**

Option C: You win $100 if the ball is either red or yellow.

Option D: You win $100 if the ball is either black or yellow.

--

*The Paradox*: The combination of choosing A over B and choosing D over C violates expected utility --- in particular, it violates the premise that people form stable subjective beliefs.

---
class: MSU
name: section3
# Evidence from Kahneman & Tversky (1979)

A few details on the evidence:

- Asked students and faculty to respond to *hypothetical* choice problems. Originally run in Israel, later replicated in Stockholm and at Michigan (note: the median net monthly family income in Israel at the time was `\(\approx 3000\)` Israeli pounds).
- Respondents faced a series of binary choices between two prospects, with no more than a dozen problems per questionnaire; the usual techniques of varying the order of questions and the positions of options were used.

--

Their notation suppresses zero outcomes --- e.g., "(4000,.8)" means 4000 with probability `\(0.8\)` and zero with probability `\(0.2\)`.

---
class: MSU
# Subproportionality

### Problem 3

| Option A | | Option B |
|:--------:|:-:|:--------:|
| `\((4000,.8)\)` | | `\((3000,1)\)` |
| [ `\(N=95\)` ] | | |

--

### Problem 4

| Option C | | Option D |
|:--------:|:-:|:--------:|
| `\((4000,.2)\)` | | `\((3000,.25)\)` |
| [ `\(N=95\)` ] | | |

---
class: MSU
# Subproportionality

From these and similar examples, Kahneman & Tversky conclude that preferences often exhibit "subproportionality":

- When comparing a larger/less-likely reward to a smaller/more-likely reward, if we scale down the probabilities proportionally, the person becomes more and more likely to choose the larger/less-likely reward.

--

**Formally:** If `\(\left( y,pq\right) \sim \left( x,p\right)\)` then `\(\left( y,pqr\right) \succ \left( x,pr\right)\)`, where `\(y>x>0\)` and `\(p,q,r\in \left( 0,1\right)\)`.

---
class: MSU
# "Reflection Effect"

### Problem 7

| Option A | | Option B |
|:--------:|:-:|:--------:|
| `\((6000,.45)\)` | | `\((3000,.90)\)` |
| [ `\(N=66\)` ] | | |

--

### Problem 7 `\(^{\prime}\)`

| Option C | | Option D |
|:--------:|:-:|:--------:|
| `\((-6000,.45)\)` | | `\((-3000,.90)\)` |
| [ `\(N=66\)` ] | | |

---
class: MSU
# "Reflection Effect"

From these and similar examples, Kahneman & Tversky conclude that preferences often exhibit a "reflection effect":

- Preferences over losses are the opposite of preferences over equivalent gains.
- More specifically, they see risk-averse behavior over gains and risk-loving behavior over losses (except for small probabilities).

---
class: MSU
# "Isolation Effect"

### Problem 10

Consider the following two-stage game. In the first stage, there is a probability of `\(.75\)` to end the game without winning anything, and a probability of `\(.25\)` to move into the second stage. If you reach the second stage you have a choice between

`$$(4000,.80)\qquad \text{and }\qquad (3000,1)\text{.}$$`

Your choice must be made before the game starts, i.e., before the outcome of the first stage is known.

--

Note: we can collapse this to

| Option A | | Option B |
|:--------:|:-:|:--------:|
| `\((4000,.2)\)` | | `\((3000,.25)\)` |

---
class: MSU
# "Isolation Effect"

### "Problem 10"

| Option A | | Option B |
|:--------:|:-:|:--------:|
| `\((4000,.2)\)` | | `\((3000,.25)\)` |
| 22% | | 78% |

--

### Problem 4

| Option C | | Option D |
|:--------:|:-:|:--------:|
| `\((4000,.2)\)` | | `\((3000,.25)\)` |
| 65% | | 35% |

---
class: MSU
# "Isolation Effect"

### Problem 11

You get 1000 for sure. In addition, choose between

`$$(1000,.50)\qquad \text{and }\qquad (500,1)\text{.}$$`

--

### Problem 12

You get 2000 for sure. In addition, choose between

`$$(-1000,.50)\qquad \text{and }\qquad (-500,1)\text{.}$$`
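--

A quick check (my arithmetic, in the deck's notation): in *final-wealth* terms the two problems are identical.

`$$1000 + (1000,.50) \;=\; (2000,.50;\,1000,.50) \;=\; 2000 + (-1000,.50)$$`

`$$1000 + (500,1) \;=\; (1500,1) \;=\; 2000 + (-500,1)$$`

Yet majorities in Kahneman & Tversky's data chose the sure option in Problem 11 and the gamble in Problem 12.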
---
class: MSU
# "Isolation Effect"

From these and similar examples, Kahneman & Tversky conclude that people exhibit an "isolation effect":

- People ignore *seemingly* extraneous parts of the problem.
- In particular, they tend to disregard shared components.

--

Brief aside: There is now a large literature on "framing effects" --- two ways of presenting the **exact same problem** elicit different choices.

- The isolation effect is a natural interpretation of some framing effects --- because for some ways of framing a problem, certain information can seem extraneous.

---
class: MSU
name: section4
# A Theoretical Problem

## From Rabin & Thaler (*JEP* 2001):

A property of risk aversion: people tend to dislike risky prospects even when they offer an expected gain.

- E.g.: a 50-50 gamble of losing $100 vs. gaining $110.

--

Economists' explanation: EU with a concave utility function.

--

**Big Idea:** This explanation doesn't work, because according to EU, anything but virtual risk neutrality over modest stakes implies manifestly unrealistic risk aversion over large stakes.

---
class: MSU
# Illustrating the Problem

Suppose you have wealth $20,000, and you turn down a 50-50 bet to win $110 vs. lose $100.

--

Suppose you have a CRRA utility function

`$$u(x)=\frac{x^{1-\rho }}{1-\rho }\text{.}$$`

--

**Question:** What values of `\(\rho\)` are consistent with you rejecting this bet?

--

Reject if

`$$\frac{1}{2}\,\frac{(20{,}110)^{1-\rho }}{1-\rho }\quad +\quad \frac{1}{2}\,\frac{(19{,}900)^{1-\rho }}{1-\rho }\quad <\quad \frac{(20{,}000)^{1-\rho }}{1-\rho}$$`

--

With a little work, one can show that rejecting the bet implies `\(\rho>18.17026\)`.
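--

If you want to check that cutoff yourself, here is a minimal numerical sketch (my own illustrative code, not from Rabin & Thaler; the function names are made up):

```python
# Sketch: find the CRRA coefficient at which someone with wealth $20,000
# is exactly indifferent to a 50-50 win-$110 / lose-$100 bet.
# (Illustrative only; any larger rho implies rejection.)

def crra(x, rho):
    """CRRA utility u(x) = x^(1 - rho) / (1 - rho), for rho != 1."""
    return x ** (1 - rho) / (1 - rho)

def eu_gain(w, win, lose, rho):
    """Expected-utility gain from taking the 50-50 bet at wealth w."""
    return 0.5 * crra(w + win, rho) + 0.5 * crra(w - lose, rho) - crra(w, rho)

# Bisection: eu_gain is positive (accept) at low rho, negative (reject) at high rho.
lo, hi = 1.5, 50.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if eu_gain(20_000, 110, 100, mid) > 0:
        lo = mid  # still accepts; needs more curvature
    else:
        hi = mid

print(f"critical rho = {hi:.5f}")  # about 18.17, matching the slide
```

The same bisection, run over `\(X\)` with `\(\rho\)` held fixed at 19, generates acceptance thresholds like those on the next slide (exact figures depend on rounding and the assumed wealth level).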
---
class: MSU
# The Calibration Theorem

Suppose you have `\(\rho =19\)`.

--

Again suppose you have wealth $20,000, and consider how you'd feel about a 50-50 bet to lose `\(Y\)` vs. win `\(X\)`.

- For `\(Y=\)` $100, accept if and only if `\(X>111.1\)`

--

- For `\(Y=\)` $200, accept if and only if `\(X>250.2\)`

--

- For `\(Y=\)` $500, accept if and only if `\(X>1038.4\)`

--

- For `\(Y=\)` $750, accept if and only if `\(X>3053.8\)`

--

- For `\(Y=\)` $1000, accept if and only if `\(X>\dots\)` (any guesses?)

--

**Point:** The degree of risk aversion required to explain your rejection of the moderate-stakes gamble implies ridiculous behavior for larger-stakes gambles.

---
class: MSU
# The Calibration Theorem

### The Theorem in Full

In fact, we need not assume *anything* about the specific functional form of `\(u\)`. Here's another example:

- Suppose Johnny is a **risk-averse** EU maximizer ( `\(u^{\prime \prime }\leq 0\)` ).

--

- Suppose that, for any initial wealth, Johnny will reject a 50-50 gamble of losing $100 vs. gaining $110.
- Consider a 50-50 gamble of losing $1000 vs. gaining $ `\(X\)`.

--

**Question:** What is the minimum `\(X\)` such that Johnny might accept?

--

**Answer:** `\(X=\infty\)` --- that is, Johnny will reject for any `\(X\)`.

---
class: MSU
# The Logic

Rabin (2000) identifies a feature of ANY concave utility function over final wealth outcomes: an individual who would reject a 50-50 lose-$100/gain-$110 bet at all wealth levels would reject **all** 50-50 bets up to `\((-1{,}000,\, \infty)\)`.

--

Recall that expected-utility theory operates over final wealth outcomes.

**Point:** Curvature over small stakes implies *implausible* risk aversion at large stakes.

--

**Intuition:** The marginal utility of money must decrease extremely rapidly.

--

**Related Intuition:** Curves look like straight lines when you zoom in close enough.

---
class: MSU
# The Aftermath

Put simply: the Calibration Theorem highlights that small-stakes risk attitudes cannot come from a concave utility function over final wealth outcomes.

--

Barberis, Huang, and Thaler (2006) find that most MBAs at the University of Chicago turn down a gain-$550 / lose-$500 coin flip.

--

- These students have (or will have) huge lifetime wealth relative to this gamble.
- This implies (hilariously) large risk aversion: EU theory says the same person would also turn down a coin flip with stakes of gain $88 trillion vs. lose $10,000.
- (I'm reasonably confident those students would take that gamble.)

---
class: MSU
# The Aftermath (cont.)

Small-stakes risk aversion is intuitively plausible, and we see it all the time in day-to-day behavior. But it's not coming from diminishing marginal utility.

**Question:** How do we make sense of this result?

--

> Indeed, what is empirically the most firmly established feature of risk preferences, loss aversion, is a departure from expected-utility theory that provides a direct explanation for modest-scale risk aversion. Loss aversion says that people are significantly more averse to losses relative to the status quo than they are attracted by gains, and more generally that people’s utilities are determined by changes in wealth rather than absolute levels. Preferences incorporating loss aversion can reconcile significant small-scale risk aversion with reasonable degrees of large-scale risk aversion. (p. 1288)
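--

To make the quote concrete, a minimal sketch, assuming a piecewise-linear gain-loss value function `\(v(x)=x\)` for gains and `\(v(x)=\lambda x\)` for losses, with Tversky & Kahneman's estimate `\(\lambda = 2.25\)`:

`$$\tfrac{1}{2}(110)-\tfrac{1}{2}(2.25)(100) = -57.5 < 0 \quad\Rightarrow\quad \text{reject the small bet at any wealth level,}$$`

`$$\tfrac{1}{2}X-\tfrac{1}{2}(2.25)(1000) > 0 \iff X > 2250 \quad\Rightarrow\quad \text{accept sensible larger-stakes bets.}$$`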