Diagnosing human decision failures in infrastructure emergency scenarios
In high-risk infrastructure operations, such as water systems, transportation networks, or power plants, human decision-making plays a pivotal role in safety. Failures, however, often stem from inconsistent or incomplete data, reliance on outdated experience, ambiguous procedures, or distrust of automated systems. This project investigates the root causes of human decision failures in abnormal operating scenarios and explores how AI systems can be designed to mitigate these risks.
The student will analyze documented case studies of past infrastructure accidents and operational failures, focusing on why inconsistencies between data, procedures, and human experience led to poor decisions. A central aspect of this work is the latent interdependencies between machine operations: situations where a single decision triggers multiple downstream effects that may not be immediately visible. These interdependencies often rest on tacit operator knowledge and are difficult to transfer or teach. To address this, the student will explore how human-AI collaboration can represent and communicate such tacit interdependencies using concepts from reinforcement learning (e.g., state-action models), improving situational awareness for both experienced and novice operators.
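To make the state-action idea concrete, the sketch below shows one way such a model could expose hidden interdependencies to an operator. It is a minimal illustration in Python, assuming a toy water-treatment scenario; all state names, actions, and transition probabilities are hypothetical and not drawn from any real case study.

```python
# Minimal sketch of a tabular state-action model. Each (state, action) pair
# maps to a distribution over next states, so downstream effects that an
# experienced operator knows tacitly become explicit and inspectable.
# All names and probabilities below are illustrative assumptions.

STATES = ["normal", "pressure_high", "pump_overload", "supply_disrupted"]
ACTIONS = ["close_valve_A", "reduce_pump_speed", "do_nothing"]

# Transition model P(next_state | state, action). The latent interdependency
# here: closing valve A under high pressure tends to overload the pump.
TRANSITIONS = {
    ("pressure_high", "close_valve_A"): {"pump_overload": 0.7, "normal": 0.3},
    ("pressure_high", "reduce_pump_speed"): {"normal": 0.8, "pressure_high": 0.2},
    ("pressure_high", "do_nothing"): {"supply_disrupted": 0.5, "pressure_high": 0.5},
}

def downstream_effects(state: str, action: str) -> dict:
    """Return the distribution over next states for a candidate decision."""
    # Unknown pairs default to "nothing changes" for this toy example.
    return TRANSITIONS.get((state, action), {state: 1.0})

def explain(state: str) -> None:
    """Surface the non-obvious consequences of each available action."""
    for action in ACTIONS:
        effects = downstream_effects(state, action)
        risky = {s: p for s, p in effects.items() if s != "normal"}
        flag = f"  <- risk: {risky}" if risky else ""
        print(f"{action}: {effects}{flag}")

if __name__ == "__main__":
    explain("pressure_high")
```

In a human-AI collaboration setting, a model of this form could be estimated from operation logs and queried before a decision is committed, making tacit consequences visible to novice operators rather than leaving them implicit in an expert's experience.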
This project is ideal for students interested in human factors, AI safety, or infrastructure resilience. Depending on their background and interests, students may focus on behavioral modeling, interface design, or the development of AI frameworks that support human learning and trust in high-stakes operations.