PI: Asim Smailagic
Co-PI(s): Dan Siewiorek
University: Carnegie Mellon University
Our proposal aims to research and develop novel explainability and interpretability modules that enhance and mediate communication between complex AI/ML systems and their users. Beyond enhancing explainability itself, the proposal emphasizes methodology for evaluating the interpretability of ML models and AI algorithms, bridging the gap between raw measures of interpretability and user-centric measures.

One of the key challenges in building interpretable intelligent systems is the trade-off between explainability and accuracy. The CMU team will pursue a middle ground that capitalizes on the effectiveness of deep networks at learning complex patterns in data while offering the clarity of rule-based systems; a sketch of one such approach appears below. The researchers aim to embed explainability in the development of AI itself, which will contribute to better analysis and understanding of data and help users understand their results.

The next generation of explainable AI methods will use fixed filters in the deep networks, motivated by the team's emerging evidence that convolution with fixed filters improves performance. Methods for evaluating AI models will use trust and transparency tests based on a covariance and invariance representation.
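As a concrete illustration of the explainability/accuracy middle ground, the minimal sketch below distills a trained neural network into a shallow decision tree that exposes human-readable rules. This shows one well-known approach (surrogate-model distillation), not necessarily the proposal's specific method; the dataset, model sizes, and tree depth are illustrative assumptions.

```python
# Hedged sketch: distill an accurate but opaque network into an
# interpretable rule-based surrogate. Assumes scikit-learn is available.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# 1. Train an accurate but opaque model on synthetic data.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                    random_state=0).fit(X, y)

# 2. Fit a shallow tree to mimic the network's predictions, yielding
#    human-readable if-then rules at some cost in fidelity.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, net.predict(X))

# 3. Measure how faithfully the rules reproduce the network's behavior.
fidelity = (surrogate.predict(X) == net.predict(X)).mean()
print(f"surrogate fidelity to network: {fidelity:.2%}")
print(export_text(surrogate))  # the extracted rule set
```

The tree depth is the knob for the trade-off: deeper trees track the network more closely but are harder for users to read.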
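The fixed-filter direction can be illustrated with a convolutional layer whose kernels are hand-designed and frozen rather than learned. The sketch below, assuming PyTorch, builds a small bank of Gabor filters (a common choice of fixed filter; the proposal does not specify which filters) and freezes them so that only later layers would be trained.

```python
# Hedged sketch, not the proposal's actual architecture: a Conv2d layer
# initialized with fixed Gabor kernels and excluded from training.
import math
import torch
import torch.nn as nn

def gabor_kernel(size: int, theta: float, sigma: float = 2.0,
                 lam: float = 4.0) -> torch.Tensor:
    """Build a single oriented Gabor filter of shape (size, size)."""
    half = size // 2
    ys, xs = torch.meshgrid(
        torch.arange(-half, half + 1, dtype=torch.float32),
        torch.arange(-half, half + 1, dtype=torch.float32),
        indexing="ij",
    )
    x_rot = xs * math.cos(theta) + ys * math.sin(theta)
    y_rot = -xs * math.sin(theta) + ys * math.cos(theta)
    envelope = torch.exp(-(x_rot**2 + y_rot**2) / (2 * sigma**2))
    carrier = torch.cos(2 * math.pi * x_rot / lam)
    return envelope * carrier

class FixedFilterConv(nn.Module):
    """Conv2d whose weights are a frozen, hand-designed filter bank."""

    def __init__(self, in_channels: int = 1, size: int = 7, n_orient: int = 4):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, n_orient, kernel_size=size,
                              padding=size // 2, bias=False)
        kernels = torch.stack([
            gabor_kernel(size, theta=i * math.pi / n_orient)
            for i in range(n_orient)
        ]).unsqueeze(1).repeat(1, in_channels, 1, 1)
        with torch.no_grad():
            self.conv.weight.copy_(kernels)
        # Freeze the filter bank; subsequent layers remain trainable.
        self.conv.weight.requires_grad_(False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x)

if __name__ == "__main__":
    layer = FixedFilterConv()
    out = layer(torch.randn(1, 1, 28, 28))
    print(out.shape)  # torch.Size([1, 4, 28, 28])
```

Because the filters are known analytic functions, each output channel has a direct human interpretation (e.g., "edge energy at 45 degrees"), which is what makes the approach attractive for explainability.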
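For the trust and transparency tests, one plausible reading of the covariance/invariance representation is probing whether model outputs stay unchanged under transformations that should not affect the label (invariance) and change predictably under those that should (covariance). The minimal sketch below implements only the invariance side, using a one-pixel image shift as the transformation; the stand-in model and the choice of transform are illustrative assumptions, not the proposal's test suite.

```python
# Hedged sketch of an invariance probe for transparency testing.
import torch
import torch.nn as nn

def invariance_score(model: nn.Module, x: torch.Tensor, transform) -> float:
    """Fraction of inputs whose predicted class survives the transform."""
    model.eval()
    with torch.no_grad():
        base = model(x).argmax(dim=1)
        moved = model(transform(x)).argmax(dim=1)
    return (base == moved).float().mean().item()

def shift_right(x: torch.Tensor) -> torch.Tensor:
    """Illustrative transform: shift each image one pixel right (wrap-around)."""
    return torch.roll(x, shifts=1, dims=-1)

if __name__ == "__main__":
    # Stand-in classifier and random batch, purely for demonstration.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.randn(32, 1, 28, 28)
    score = invariance_score(model, x, shift_right)
    print(f"prediction invariance under 1-px shift: {score:.2%}")
```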