The Value of Information
Research by Matteo Pozzi shows how improper regulation can unintentionally encourage dangerous information-avoidance behavior among infrastructure managers.
We live in the information era: the more information we can collect, the better we can inform our decisions. But what if that’s not always the case?
What if in some scenarios, people may actually find it more desirable not to know?
This seeming contradiction motivated Associate Professor Matteo Pozzi's most recent research, which examines how the collection of information shapes infrastructure decisions, for better and for worse. Pozzi led a team consisting of Carl Malings, his former student in Civil and Environmental Engineering and currently a postdoctoral researcher at the French National Center for Scientific Research, and Associate Professor Andreea Minca of Cornell University. They found that public policy can unintentionally encourage infrastructure decision makers to develop an information bias: overvaluing certain types of information while avoiding others, depending on how that information benefits them as owners.
To reach this conclusion, Pozzi's team first had to devise a way to quantify the Value of Information (VoI) of data collected on infrastructure: the benefit of collecting data weighed against the costs of processing it and of any repairs it triggers. The VoI gave them a quantitative metric for evaluating the relationship between building owners and the policy makers who regulate infrastructure.
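The models in the paper are far richer, but the basic shape of a VoI calculation can be sketched with hypothetical numbers: VoI is the drop in expected cost a decision maker gains by acting on inspection data rather than on prior beliefs alone.

```python
# Illustrative VoI sketch. All numbers are hypothetical, not from the paper.

# Prior belief: probability the structure is damaged.
p_damaged = 0.3

# Costs (arbitrary units): repairing now, and the expected loss
# if a damaged structure is left in service.
c_repair = 10.0
c_failure = 100.0

# Without inspection data, the owner picks the cheaper action in expectation.
cost_repair_now = c_repair                      # repair regardless
cost_do_nothing = p_damaged * c_failure         # risk the failure
cost_prior = min(cost_repair_now, cost_do_nothing)

# With a (here, perfect) inspection, the owner repairs only when
# damage is actually found.
cost_with_info = p_damaged * c_repair

# VoI: expected savings from deciding with data instead of without it.
voi = cost_prior - cost_with_info
```

With these numbers, inspecting lowers the expected cost from 10 to 3 units, so the data is worth 7 units, and collecting it is rational whenever the inspection itself costs less than that.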
While this may seem simple in principle, the authors noted that the way each actor weighed their VoI varied with their circumstances. Constraints placed on owners through regulation may induce them to act in a way that is rational from their position, but not necessarily the safest or most efficient course for society as a whole.
For instance, a regulation might require an owner to repair or restrict access to a building if they know its condition poses a risk to public safety. Rather than collecting data on the building's condition, which may produce findings that would force the owner to act, it may be more advantageous for them simply not to collect that information. This creates a loophole: the owner is perceived as less liable for a risk they remained ignorant of than for one they knew about and recklessly chose not to address. This is information avoidance.
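This loophole can be made concrete with a toy calculation (all figures hypothetical): if documented damage triggers a mandatory repair, while an undetected defect exposes the owner only to limited liability, the owner's private VoI can turn negative even though society, which bears the full failure cost, would value the inspection highly.

```python
# Hypothetical numbers showing how liability rules can make inspection
# data privately undesirable even when it is socially valuable.

p_damaged = 0.2
c_inspect = 1.0
c_repair = 20.0         # mandated once damage is documented
c_liability = 5.0       # owner's limited liability for an *unknown* defect
c_failure_social = 100.0  # full societal cost of a failure

# Owner who stays ignorant bears only the limited liability.
owner_ignorant = p_damaged * c_liability                 # 1.0

# Owner who inspects pays for the inspection and, if damage
# is found, is legally required to repair.
owner_inspects = c_inspect + p_damaged * c_repair        # 5.0

# Private VoI is negative: staying ignorant is cheaper for the owner.
private_voi = owner_ignorant - owner_inspects

# Society bears the full failure cost, so its VoI is strongly positive.
society_ignorant = p_damaged * c_failure_social          # 20.0
society_inspects = c_inspect + p_damaged * c_repair      # 5.0
social_voi = society_ignorant - society_inspects
```

Under these assumed numbers the inspection is worth 15 units to society but costs the owner 4 units, which is exactly the incentive to avoid information that the authors describe.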
In other cases, owners may have access to information, but their assessment of its value is colored by the costs that acting on it could force them to incur. It might then be rational for them as decision makers to overvalue information that indicates low risk, while undervaluing information that paints a more negative picture. Even when regulators set mandatory thresholds for how far a piece of infrastructure may degrade before repair is required, meeting that requirement says nothing about the infrastructure's actual condition or how soon it may be at risk.
After identifying the factors motivating these counterproductive decisions, Pozzi and his co-authors closed their research with a proposal for a new system of regulation. They reasoned that an ideal system must alter the owner's VoI calculation to bring it into agreement with societal expectations of safety. Their proposed solution is a combination of penalties and incentives: fines imposed on owners when their property fails or is at high risk, paired with subsidies for information collection or repair costs.
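The realignment idea can be sketched with a toy calculation (again with hypothetical figures, not the paper's models): a fine on failures raises the cost of staying ignorant, while an inspection subsidy lowers the cost of collecting data, and together they can flip the owner's private VoI from negative to positive.

```python
# Hypothetical penalty-and-incentive sketch: a failure fine plus an
# inspection subsidy can make collecting data privately worthwhile.

p_damaged = 0.2
c_inspect = 1.0
c_repair = 20.0
c_liability = 5.0   # owner's baseline liability for an unknown defect

fine = 95.0         # regulator's penalty when an uninspected asset fails
subsidy = 0.5       # reimburses part of the inspection cost

# Staying ignorant now also risks the fine.
owner_ignorant = p_damaged * (c_liability + fine)              # 20.0

# Inspecting is partially subsidized; repair follows if damage is found.
owner_inspects = (c_inspect - subsidy) + p_damaged * c_repair  # 4.5

# The owner's private VoI is now positive: inspecting is the rational choice.
private_voi = owner_ignorant - owner_inspects
```

In this sketch the fine makes the owner's expected cost of ignorance track the social cost, so the privately rational choice and the socially desirable one coincide.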
Most recently, Pozzi has embarked on a new investigation along similar lines, funded by the National Science Foundation's Division of Social and Economic Sciences. His preceding research focused on the relationship between two agents, owners and policy makers, under external constraint. This new project, by contrast, will focus on multi-agent systems in which owners also interact and compete with one another, as they often must. The team will create a new VoI calculation for multi-agent settings, and ultimately work toward an analysis and mechanism for policy formulation that alleviates issues like information avoidance and overvaluation.
The project includes Assistant Professor Silvia Saccardo of Carnegie Mellon’s Department of Social and Decision Sciences and Associate Professor Maria-Florina Balcan, a Carnegie Mellon computer scientist focused on analyzing multi-agent systems.