Machine learning methods that learn from large amounts of data have achieved impressive results in recent years, with researchers making significant advances on tasks ranging from recognizing objects in images to generating natural language. However, to achieve better results, models have become increasingly complex, and thus more opaque. As a result, it has become progressively harder to understand how machine learning models make their decisions. Opaque models can make decisions for the wrong reasons without their designers knowing, leading to untrustworthy behavior; in many cases, such models can learn to make biased decisions against minorities. Dylan’s research focuses on creating techniques to better understand, or interpret, complex machine learning models in order to mitigate these issues. His research has produced novel methods that generate more confident explanations for large machine learning systems and that mitigate bias in models, and it has uncovered critical issues in current explanation systems that could allow significant biased behavior to go undetected.