I'm a second-year Computer Science Ph.D. student at UC Irvine advised by Sameer Singh. I also collaborate closely with Hima Lakkaraju. I work broadly on machine learning safety, from the perspectives of explainability, debugging, and fairness. I spent last summer at AWS working on model debugging with Krishnaram Kenthapadi.
dslack@uci.edu / @dylanslack20
Papers
-
How Much Should I Trust You? Modeling Uncertainty of Black Box Explanations
Dylan Slack, Sophie Hilgard, Sameer Singh, and Himabindu Lakkaraju
arXiv, 2020
-
Defuse: Debugging Classifiers Through Distilling Unrestricted Adversarial Examples
Dylan Slack, Nathalie Rauschmayr, and Krishnaram Kenthapadi
arXiv, 2020
-
Differentially Private Language Models Benefit from Public Pre-training
Gavin Kerrigan*, Dylan Slack*, and Jens Tuyls*
EMNLP PrivateNLP Workshop, 2020
-
Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods
Dylan Slack*, Sophie Hilgard*, Emily Jia, Sameer Singh, and Himabindu Lakkaraju
AAAI/ACM Conference on AI, Ethics, and Society (AIES), 2020
Also accepted at SafeAI Workshop, AAAI, 2020
[Harvard Business Review, Deeplearning.ai]
-
Fairness Warnings and Fair-MAML: Learning Fairly with Minimal Data
Dylan Slack, Sorelle Friedler, and Emile Givental
ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2020
Also accepted at NeurIPS HCML Workshop, 2019
-
Assessing the Local Interpretability of Machine Learning Models
Dylan Slack, Sorelle A Friedler, Carlos Scheidegger, and Chitradeep Dutta Roy
Workshop on Human-Centric Machine Learning, NeurIPS, 2019
* notes equal contribution