Just as mechanical systems such as cars require regular inspection and occasional maintenance, trained AI models also deteriorate over time and need an appropriate analogue of a tune-up. For example, distribution shift may degrade a model's predictive performance to the point where the model must be retrained. The goal of this project is to develop new ideas for assessing model performance and data quality over time, as well as for maintaining both the model and the data so that predictive performance stays at its peak. Approaches may span both statistical and formal assessment techniques, providing contractual assurances where possible and probabilistic estimates elsewhere. Students working on this project will provide critical information to domain experts who need to know when and how their trained model can be trusted.