Optimal Memory with Sequential Learning: Signals or Posterior Beliefs
[Working Paper]
[Slides]
Memory of information is essential for problems involving sequential learning under uncertainty. Economic models of sequential learning have two key components: signals and beliefs. However, models differ in their assumptions about the memory of information. Some assume that a decision maker remembers only a posterior belief that is continuously updated as new information arrives, while others assume that a decision maker remembers individual signals and forms a posterior belief only when prompted by a decision. We develop a theoretical framework in which a decision maker can freely choose what information to remember, but remembering information is costly. We study the decision maker's optimal memory problem and analyze the role of the decision environment, focusing on two factors: 1) uncertainty about the decision-relevant dimension, and 2) the number of signals. Across two experimental treatments, lab subjects respond as theory predicts: in an environment with more uncertainty about the upcoming decision and fewer signals, people more frequently remember individual signals, while in an environment with a clear decision and many signals, people more frequently remember posterior beliefs.
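The two memory strategies can be contrasted in a minimal sketch. This is an illustration only, not the paper's model: it assumes a binary state, binary signals of a known accuracy q, and a fixed decision-relevant question, and shows that for that fixed question, remembering a running posterior and remembering the individual signals recover the same belief; the strategies differ in memory cost and in flexibility if the question changes.

```python
from math import isclose

# Hypothetical illustration (not the paper's model): binary state
# theta in {A, B}; each signal matches the state with probability q.
q = 0.7            # assumed signal accuracy
prior = 0.5        # prior P(theta = A)
signals = ["a", "a", "b", "a"]  # observed signal sequence

# Strategy 1: remember only the posterior, updating after each signal.
posterior = prior
for s in signals:
    like_A = q if s == "a" else 1 - q
    like_B = 1 - q if s == "a" else q
    posterior = like_A * posterior / (like_A * posterior + like_B * (1 - posterior))

# Strategy 2: remember the individual signals, form the posterior once.
n_a = signals.count("a")
n_b = len(signals) - n_a
odds = (prior / (1 - prior)) * (q / (1 - q)) ** (n_a - n_b)
posterior_from_signals = odds / (1 + odds)

# For a fixed question, both routes yield the same belief.
assert isclose(posterior, posterior_from_signals)
```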
Belief Updating with Misinformation
[Working Paper]
[Slides]
[Experiment]
Uncertain information is frequently confirmed or retracted after people have initially heard it. A large literature has studied how people change their beliefs in response to new information; how people react to information about previous information, however, is still unclear. We investigate three closely related questions: 1) How do people update their beliefs in response to being told that a previous signal was (un)informative? 2) What is the effect of verifying the informativeness of a signal ex ante rather than checking information ex post? 3) Do past information checks affect how people react to new uncertain information in the future? To answer these questions, we conduct two online experiments using a novel modification of the classical ball-and-urn framework. The setting is deliberately abstract to avoid the influence of motivated reasoning or other situation-specific circumstances. We find that the majority of subjects react to information about information incorrectly. Importantly, we can predict people's beliefs after the uncertain information is retracted or confirmed based on their initial response. For retractions, people who over-reacted initially end up with a belief higher than their initial prior, while people who under-reacted initially end up with a belief lower than before. After multiple consecutive retracted signals, this leads to beliefs that are more dispersed than after equivalent information that is ex ante labeled as uninformative. Confirmations or retractions in the past do not appear to affect how people respond to new uncertain information in our setup.
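The Bayesian benchmark against which such behavior is judged can be sketched in a ball-and-urn setting. The parameters below (signal accuracy q, probability r that the signal is informative) are assumptions for illustration, not the experiment's specification: a signal that is informative only with probability r has a diluted effective accuracy, a retraction restores the pre-signal belief, and a confirmation licenses the full-accuracy update.

```python
# Hypothetical ball-and-urn illustration: two urns, binary signals.
q = 0.7   # assumed accuracy of a (fully) informative signal
r = 0.8   # assumed probability that the signal is informative
prior = 0.5

def bayes_update(p, signal_is_a, accuracy):
    """Posterior P(urn A) after one binary signal of given accuracy."""
    like_A = accuracy if signal_is_a else 1 - accuracy
    like_B = 1 - accuracy if signal_is_a else accuracy
    return like_A * p / (like_A * p + like_B * (1 - p))

# An uncertain signal "a" has effective accuracy r*q + (1-r)*0.5,
# since an uninformative signal is pure noise.
eff = r * q + (1 - r) * 0.5
after_uncertain = bayes_update(prior, True, eff)

# Bayesian benchmarks for the follow-up information:
after_retraction = prior                             # revert to pre-signal belief
after_confirmation = bayes_update(prior, True, q)    # update at full accuracy
```

With these numbers the uncertain signal moves the belief to 0.66, a retraction sends it back to 0.50, and a confirmation pushes it to 0.70; a subject who over-reacted to the initial signal and then under-corrects after a retraction ends up above the prior, which is the pattern the abstract describes.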
Useful Forecasting: Belief Elicitation for Decision-Making [Working Paper] [Slides]
Having information about an uncertain event is crucial for informed decision-making. This paper introduces a simple framework in which 1) a principal uses the reported beliefs of multiple agents to make a decision, and 2) the agents reporting their beliefs are affected by that decision. Naturally, the question arises of how the principal can incentivize the agents to report their beliefs truthfully. I show that in this setting a direct reporting mechanism, using a scoring rule to incentivize belief reports and a fixed decision rule, leads to truthful reporting by all agents as the unique Nash equilibrium under precisely two conditions: preference diversity and no pivotality. In contrast, popular alternative mechanisms, such as the Delphi method and prediction markets, are likely to lead to biased belief reports. Moreover, if the principal can only consult a single agent, the only mechanism that can guarantee truth-telling requires perfect knowledge of the agent's preferences.
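The scoring-rule ingredient can be illustrated with the quadratic (Brier) rule, a standard proper scoring rule; this sketch is an assumption-laden illustration, not the paper's mechanism. When an agent's payoff depends only on the score, i.e. the agent is not pivotal for a decision they care about, the expected score is uniquely maximized by reporting the true belief.

```python
# Quadratic (Brier) scoring rule: a standard proper scoring rule.
def quadratic_score(report, outcome):
    """Score for reporting probability `report`; outcome is 0 or 1."""
    return 1 - (outcome - report) ** 2

def expected_score(report, true_belief):
    """Expected score of a report, from the agent's own perspective."""
    return (true_belief * quadratic_score(report, 1)
            + (1 - true_belief) * quadratic_score(report, 0))

# Searching over a grid of reports, the truthful report wins.
true_belief = 0.7
reports = [i / 100 for i in range(101)]
best = max(reports, key=lambda r: expected_score(r, true_belief))
assert best == 0.7
```

The expected score is 1 - p(1-r)^2 - (1-p)r^2, which is maximized at r = p; the abstract's point is that this incentive property can break down once the agent also cares about, and can influence, the principal's decision.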
Lying and Reputation - An Experimental Study of Reputation Effects