Artificial intelligence for deep space habitats
NASA is looking to unlock the potential of AI for habitats in deep space. The agency envisions a scenario in which habitats, vehicles, and other assets are operated by crews off planet but may then be left uninhabited for extended periods between missions. AI is well suited to the level of autonomy this requires, provided we can be certain that it adheres to all critical design specifications.
One possible direction involves using weak-supervision and crowdsourcing methods to estimate cognitive states (e.g., trust, mental workload, and situation awareness) of astronauts. The goal of the project is to explore, improve, and implement weak-supervision and crowdsourcing models that estimate unobserved ordinal ground truth from imperfect and partial observations. In its first stage, the application focuses on utilizing observations made by “expert” observers, simulating a crewmember being gauged by observers on the ground. If successful, the project will provide a means of producing human cognitive state estimates without ever having to administer obtrusive questionnaires in operational environments.

Current experiments collect video and audio streams along with physiological data (electrocardiogram, respiration rate, and electrodermal activity), eye-based measures, and embedded measures (e.g., actions taken by the operator). Based on the video and audio data, expert observers estimate each participant’s subjectively reported trust, mental workload, and situation awareness. These expert observations are then modeled with the goal of recovering the unobserved cognitive states that participants subjectively report.
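To illustrate the modeling step, the sketch below shows one simplified way to recover a latent state from several imperfect observer ratings: treat each observer's ratings as noisy measurements of the true value, estimate each observer's noise level, and form a reliability-weighted estimate per item with a small EM-style loop. This is only a minimal sketch under simplifying assumptions: the ordinal scale is treated as approximately continuous, and the function name and example data are hypothetical; the project's actual weak-supervision and crowdsourcing models are not specified here and are likely more sophisticated.

```python
import numpy as np

def estimate_latent_states(ratings, n_iters=50, tol=1e-6):
    """Estimate latent states from noisy observer ratings (EM-style sketch).

    ratings: array of shape (n_items, n_observers); np.nan marks a missing
    rating (an observer who did not rate that item). Ratings are on an
    ordinal scale (e.g., 1-7) treated here as approximately continuous.

    Returns (latent, observer_var): per-item latent state estimates and
    per-observer noise variances (lower variance = more reliable observer).
    """
    ratings = np.asarray(ratings, dtype=float)
    n_items, n_observers = ratings.shape
    mask = ~np.isnan(ratings)

    # Initialize latent states with the plain per-item mean of observed ratings.
    latent = np.nanmean(ratings, axis=1)
    observer_var = np.ones(n_observers)

    for _ in range(n_iters):
        # Update observer noise: variance of each observer's residuals
        # against the current latent estimates (small floor for stability).
        resid = ratings - latent[:, None]
        observer_var = np.array([
            np.var(resid[mask[:, j], j]) + 1e-6
            for j in range(n_observers)
        ])

        # Update latent states: precision-weighted average over observers,
        # so low-variance (more reliable) observers count more.
        weights = np.where(mask, 1.0 / observer_var[None, :], 0.0)
        new_latent = (np.where(mask, ratings, 0.0) * weights).sum(axis=1) / weights.sum(axis=1)

        if np.max(np.abs(new_latent - latent)) < tol:
            latent = new_latent
            break
        latent = new_latent

    return latent, observer_var


if __name__ == "__main__":
    # Hypothetical data: 4 task segments rated for workload on a 1-7 scale
    # by 3 ground observers; np.nan means the observer skipped that segment.
    obs = np.array([
        [5.0, 4.0, 6.0],
        [2.0, 2.0, np.nan],
        [7.0, 6.0, 7.0],
        [3.0, np.nan, 4.0],
    ])
    latent, var = estimate_latent_states(obs)
    print("Estimated workload per segment:", np.round(latent, 2))
    print("Estimated observer noise variances:", np.round(var, 3))
```

A fuller treatment would model the ordinal structure explicitly, for example with observer-specific confusion matrices or ordinal link functions, rather than relaxing the ratings to a continuous scale as done here.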