I am a PhD candidate in the Machine Learning Group at the University of Cambridge, where I am supervised by Adrian Weller. My research interests lie in machine learning, explainable artificial intelligence, and human-machine collaboration.
My PhD research is funded by the Leverhulme Centre for the Future of Intelligence (Trust and Transparency Initiative), with generous donations from DeepMind and the Leverhulme Trust. Currently, I am an Enrichment Student at the Alan Turing Institute and an Advisor at the Responsible AI Institute. Previously, I was a Fellow at the Mozilla Foundation and a Research Fellow at the Partnership on AI.
I completed a joint bachelor's and master's degree in Electrical and Computer Engineering at Carnegie Mellon University, advised by José Moura. During four wonderful years in Pittsburgh, I collaborated with Pradeep Ravikumar on explainable AI, Zico Kolter on automated pothole detection, Fei Fang and Manuela Veloso on robot affect expression in competitive settings, and Radu Marculescu on network science for deep learning.
I grew up in Basking Ridge, New Jersey, USA.
|[Jul 2022]||Joined the Center for Research on Computation and Society at Harvard SEAS as a Summer Fellow|
|[Mar 2022]||Awarded a J.P. Morgan AI PhD Fellowship|
|[Feb 2022]||Launched the Alan Turing Institute's Interest Group on Human-Machine Teams|
|[Dec 2021]||Two papers accepted at AAAI 2022|
|[Sep 2021]||Joined the Alan Turing Institute as an Enrichment Student|
|[Apr 2021]||One paper accepted at AIES 2021|
|[Jan 2021]||Our paper, CLUE, was accepted to ICLR 2021 as an oral presentation, and our paper connecting feature importance and counterfactual explanations was accepted at AAAI 2021|
|[Oct 2020]||Awarded a Mozilla Fellowship|
|[Apr 2020]||Our paper on evaluating explanation methods was accepted to IJCAI 2020|
|[Mar 2020]||Co-organized a workshop, Human Interpretability in ML, at ICML 2020|
|[Jan 2020]||Our paper, On Network Science and Mutual Information for Explaining Deep Neural Networks, was accepted to ICASSP 2020|
|[Jan 2020]||Our paper on concealing model unfairness from explanation methods was accepted to ECAI 2020|
|[Nov 2019]||Our paper, Explainable Machine Learning in Deployment, was accepted to FAT* 2020|
|[Oct 2019]||Joined Cambridge MLG as a Ph.D. Student, moving across the pond in the process 📦|
|[May 2019]||Finished my BS and MS at Carnegie Mellon 🎓|
University of Cambridge
Carnegie Mellon University
Partnership on AI