Rational Inattention in Games: Experimental Evidence (with Daniel Martin), Experimental Economics, 27 (2024), No. 4, 715–742.
To investigate whether attention responds rationally to strategic incentives, we experimentally implement a buyer-seller game in which a fully informed seller makes a take-it-or-leave-it offer to a buyer who faces cognitive costs to process information about the offer's value. We isolate the impact of seller strategies on buyer attention by exogenously varying the seller's outside option, which leads sellers to price high more often. We find that buyers respond by making fewer mistakes conditional on value, which suggests that buyers exert higher attentional effort in response to the increased strategic incentives for paying attention. We show that a standard model of rational inattention based on Shannon mutual information cannot fully explain this change in buyer behavior. However, we identify another class of rational inattention models consistent with this behavioral pattern. [Data and Analysis Files] [Appendix]
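For context, a minimal sketch of the benchmark Shannon model referenced above, in generic textbook notation (the prior μ, payoff u, and attention-cost parameter λ are standard symbols, not the paper's exact specification): the buyer chooses a state-dependent stochastic choice rule p(a | v) to maximize expected payoff net of an attention cost that is linear in the mutual information between the offer's value V and the action A,

\[
\max_{p(\cdot \mid \cdot)} \; \sum_{v} \mu(v) \sum_{a} p(a \mid v)\, u(a, v) \;-\; \lambda\, I(V; A),
\qquad
I(V; A) = \sum_{v, a} \mu(v)\, p(a \mid v) \log \frac{p(a \mid v)}{\sum_{v'} \mu(v')\, p(a \mid v')},
\]

where μ is the buyer's prior over values and λ > 0 is the marginal cost of attention. In the experiment, the seller's pricing strategy shapes this prior, which is the channel through which strategic incentives enter the buyer's attention problem.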
Human Responses to AI Oversight: Evidence from Centre Court (with Romain Gauriot, Lionel Page, and Daniel Martin), Under review
Selected Coverage: The Economist - Kellogg Insight - Forbes - CBC Radio - Communications of the ACM
Extended abstract at EC'24; 15-minute presentation at Wharton [Video]
Powered by the increasing predictive capabilities of machine learning algorithms, artificial intelligence (AI) systems have the potential to overrule human mistakes in many settings. We provide the first field evidence that the use of AI oversight can impact human decision-making. We investigate one of the highest visibility settings where AI oversight has occurred: Hawk-Eye review of umpires in top tennis tournaments. We find that umpires lowered their overall mistake rate after the introduction of Hawk-Eye review, but also that umpires increased the rate at which they called balls in, producing a shift from making Type II errors (calling a ball out when in) to Type I errors (calling a ball in when out). We structurally estimate the psychological costs of being overruled by AI using a model of attention-constrained umpires, and our results suggest that because of these costs, umpires cared 37% more about Type II errors under AI oversight.
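To see the mechanism in its simplest form, consider a stripped-down signal-detection sketch (generic notation, abstracting from the paper's attention constraint): an umpire who observes a noisy signal s of the ball's position and bears cost c_I for a Type I error (calling a ball in when out) and c_II for a Type II error (calling a ball out when in) optimally calls the ball out only if

\[
\Pr(\text{out} \mid s) \;\ge\; \frac{c_{II}}{c_{I} + c_{II}}.
\]

Raising c_II, as the estimated 37% increase under AI oversight does, raises this threshold, so the umpire calls more balls in, which is exactly the shift from Type II toward Type I errors documented in the data.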
Texting to Save Lives: Evidence from Cardiovascular Treatment Reform in Mexico (with Ari Bronsoler), Under review
Can low-cost, widely available technologies improve patient outcomes in fragmented healthcare systems? We study the staggered roll-out of a reform in Mexico’s public health system that used group chats on a popular messaging app to improve coordination of inter-hospital transfers for heart attack patients. The program significantly increased survival and transfer rates, with the largest gains among moderately ill patients who were sick enough to benefit from transfer but stable enough to be moved. Survival improvements were most pronounced in general hospitals with larger productivity gaps relative to high-specialty centers. These findings demonstrate that inexpensive, simple, and scalable digital tools can meaningfully improve critical care delivery and patient allocation.
The Memory Premium (with Yuval Salant and Jörg Spenkuch), NBER Working Paper 33649
Selected Coverage: ScienceNews - Kellogg Insight
We explore the role of memory for choice behavior in unfamiliar environments. Using a unique data set, we document that decision makers exhibit a "memory premium." They tend to choose in-memory alternatives over out-of-memory ones, even when the latter are objectively better. Consistent with well-established regularities regarding the inner workings of human memory, the memory premium is associative, subject to interference and repetition effects, and decays over time. Even as decision makers gain familiarity with the environment, the memory premium remains economically large. Our results imply that the ease with which past experiences come to mind plays an important role in shaping choice behavior.
AI Recommendations and Non-instrumental Image Concerns
There is growing enthusiasm about the potential for humans and AI to collaborate by leveraging their respective strengths. Yet in practice, this promise often falls short. This paper uses an online experiment to identify non-instrumental image concerns as a key reason individuals underutilize AI recommendations. I show that concerns about how one is perceived, even when those perceptions carry no monetary consequences, lead participants to disregard AI advice and reduce task performance.