Dylan Slack
dslack@uci.edu

Hello! I am a second-year PhD candidate at the University of California, Irvine, advised by Sameer Singh and co-advised by Hima Lakkaraju.

I work on machine learning and natural language processing as part of UCI NLP, UCI CREATE, and the HPI Research Center. My research is supported by an HPI fellowship. I previously interned at AWS and will be interning at Google AI in Summer 2021 👨‍💻.

CV / Google Scholar / GitHub / Twitter

Research

I work on machine learning, natural language processing, interpretability, and fairness. Much of my research focuses on developing models that are robust, trustworthy, and equitable. * denotes equal contribution.

On the Lack of Robustness of Neural Text Classifier Interpretations
Muhammad Bilal Zafar, Michele Donini, Dylan Slack, Cedric Archambeau, Sanjiv Das, and Krishnaram Kenthapadi
Findings of ACL, 2021
Links forthcoming

Context, Language Modeling, and Multimodal Data in Finance
Sanjiv Ranjan Das, Connor Goggins, John He, George Karypis, Sandeep Krishnamurthy, Mitali Mahajan, Nagpurnanand Prabhala, Dylan Slack, Robert Van Dusen, Shenghua Yue, Sheng Zha, and Shuai Zheng
The Journal of Financial Data Science, 2021
Links forthcoming

How Much Should I Trust You? Modeling Uncertainty of Black Box Explanations
Dylan Slack, Sophie Hilgard, Sameer Singh, and Hima Lakkaraju
arXiv, 2020
code / arXiv / bibtex

Differentially Private Language Models Benefit from Public Pre-training
Gavin Kerrigan*, Dylan Slack*, and Jens Tuyls*
EMNLP PrivateNLP Workshop, 2020
code / arXiv / bibtex

Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods
Dylan Slack*, Sophie Hilgard*, Emily Jia, Sameer Singh, and Hima Lakkaraju
AIES, 2020 (Oral Presentation)
Work also presented at SafeAI Workshop, AAAI, 2020
code / video / arXiv / bibtex
Press: Deeplearning.ai / Harvard Business Review

Fairness Warnings and Fair-MAML: Learning Fairly with Minimal Data
Dylan Slack, Sorelle Friedler, and Emile Givental
FAccT, 2020
Work also presented at HCML Workshop, NeurIPS, 2019
code / video / arXiv / bibtex

Assessing the Local Interpretability of Machine Learning Models
Dylan Slack, Sorelle Friedler, Carlos Scheidegger, and Chitradeep Dutta Roy
HCML Workshop, NeurIPS, 2019
arXiv / bibtex

Patents

Automatic Failure Diagnosis and Correction in Machine Learning Models
Nathalie Rauschmayr, Krishnaram Kenthapadi, and Dylan Slack
Patent Application Filed

Talks

Here are a few recent talks!

Speaking at AISC, virtually.

Speaking at FAccT in Barcelona, Spain.


Source modified from this website.