(OOPSLA 2020) Perfectly Parallel Fairness Certification of Neural Networks
There is growing concern that machine-learning models, which increasingly assist or even automate decision making, reproduce, and in the worst case reinforce, biases present in their training data. The development of tools and techniques for certifying the fairness of these models, or for describing their biased behavior, is therefore critical. In this paper, we propose a \emph{perfectly parallel} static analysis for certifying \emph{causal fairness} of feed-forward neural networks used for classification of tabular data. When certification succeeds, our approach provides definite guarantees; otherwise, it describes and quantifies the biased behavior. We design the analysis to be \emph{sound}, in practice also \emph{exact}, and configurable in terms of scalability and precision, thereby enabling \emph{pay-as-you-go certification}. We implement our approach in an open-source tool and demonstrate its effectiveness on models trained with popular datasets.
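To make the property concrete: causal fairness requires that a classifier's output never change when only a sensitive input attribute is altered. The following is a minimal illustrative sketch (not the paper's static analysis, which certifies the property over the whole input space rather than by sampling); the network weights and the sensitive-feature index are made up for the example.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def classify(x, W1, b1, W2, b2):
    """Tiny feed-forward classifier: one hidden ReLU layer, argmax output."""
    return int(np.argmax(W2 @ relu(W1 @ x + b1) + b2))

def causally_fair_on(x, sensitive_idx, params):
    """True iff flipping the binary sensitive feature leaves the predicted class unchanged."""
    x_flipped = x.copy()
    x_flipped[sensitive_idx] = 1.0 - x_flipped[sensitive_idx]
    return classify(x, *params) == classify(x_flipped, *params)

# Hypothetical random network over 3 features; feature 0 plays the sensitive role.
rng = np.random.default_rng(0)
params = (rng.normal(size=(4, 3)), np.zeros(4),
          rng.normal(size=(2, 4)), np.zeros(2))

samples = rng.uniform(size=(100, 3))
biased = sum(not causally_fair_on(x, 0, params) for x in samples)
print(f"{biased}/100 sampled inputs witness bias")
```

Sampling like this can only find counterexamples; the paper's contribution is an abstract-interpretation-based analysis that either proves the property holds for all inputs or characterizes the biased input regions.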
Wed 15 Jun (displayed time zone: Pacific Time, US & Canada)
10:40 - 12:00

10:40 (20m Talk): (OOPSLA 2020) Perfectly Parallel Fairness Certification of Neural Networks
  SIGPLAN Track. Caterina Urban (Inria & École Normale Supérieure | Université PSL), Maria Christakis (MPI-SWS), Valentin Wüstholz (ConsenSys), Fuyuan Zhang (MPI-SWS)

11:00 (20m Talk): (PLDI 2020) OOElala: Order-of-Evaluation Based Alias Analysis for Compiler Optimization
  SIGPLAN Track. Ankush Phulia (IIT Delhi, India), Vaibhav Bhagee (IIT Delhi, India), Sorav Bansal (IIT Delhi and CompilerAI Labs)

11:20 (20m Talk): (POPL 2021) Simplifying Dependent Reductions with the Polyhedral Model
  SIGPLAN Track. Cambridge Yang (MIT CSAIL), Eric Atkinson (MIT CSAIL), Michael Carbin (Massachusetts Institute of Technology)

11:40 (20m Talk): (POPL 2021) The Fine-Grained and Parallel Complexity of Andersen's Pointer Analysis
  SIGPLAN Track.