(PLDI 2021) Provable Repair of Deep Neural Networks
Wed 15 Jun 2022, 15:50 - 16:10 at Macaw - Neural Networks and Numbers. Chair(s): Madan Musuvathi

Deep Neural Networks (DNNs) have grown in popularity over the past decade and are now being used in safety-critical domains such as aircraft collision avoidance. This has motivated a large number of techniques for finding unsafe behavior in DNNs. In contrast, this paper tackles the problem of correcting a DNN once unsafe behavior is found. We introduce the provable repair problem, which is the problem of repairing a network $N$ to construct a new network $N'$ that satisfies a given specification. If the safety specification is over a finite set of points, our Provable Point Repair algorithm can find a provably minimal repair satisfying the specification, regardless of the activation functions used. For safety specifications over convex polytopes containing infinitely many points, our Provable Polytope Repair algorithm can find a provably minimal repair satisfying the specification for DNNs using piecewise-linear activation functions. The key insight behind both of these algorithms is the introduction of a Decoupled DNN architecture, which allows us to reduce provable repair to a linear programming problem. Our experimental results demonstrate the efficiency and effectiveness of our Provable Repair algorithms on a variety of challenging tasks.

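To make the reduction to linear programming concrete, here is a minimal illustrative sketch, not the authors' Decoupled DNN construction or implementation: if the hidden-layer activations of a finite set of points are held fixed, then finding the smallest (max-norm) change to a single linear output layer that places every point's output inside a specified interval is a linear program. The function name, the interval-style specification, and the single-output restriction are assumptions made purely for this sketch.

# Illustrative sketch only: repairing a single linear output layer over a
# finite point set, phrased as a linear program (see assumptions above).
import numpy as np
from scipy.optimize import linprog

def repair_final_layer(H, w, b, lo, hi):
    """H: (n, d) frozen hidden activations; (w, b): current output layer;
    lo, hi: (n,) per-point output bounds.  Returns a repaired (w', b') whose
    max-norm distance from (w, b) is minimal, or None if no such repair exists."""
    n, d = H.shape
    y = H @ w + b                          # current (possibly unsafe) outputs
    # Decision variables: [dw_1 .. dw_d, db, t]; minimize t >= ||(dw, db)||_inf.
    c = np.zeros(d + 2)
    c[-1] = 1.0
    A, rhs = [], []
    for i in range(n):                     # specification constraints per point:
        # (w + dw)·h_i + (b + db) <= hi_i   and   (w + dw)·h_i + (b + db) >= lo_i
        A.append(np.concatenate([H[i], [1.0, 0.0]]));   rhs.append(hi[i] - y[i])
        A.append(np.concatenate([-H[i], [-1.0, 0.0]])); rhs.append(y[i] - lo[i])
    for j in range(d + 1):                 # max-norm constraints: |var_j| <= t
        row = np.zeros(d + 2); row[j] = 1.0;  row[-1] = -1.0
        A.append(row)
        row = np.zeros(d + 2); row[j] = -1.0; row[-1] = -1.0
        A.append(row)
        rhs += [0.0, 0.0]
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(rhs),
                  bounds=[(None, None)] * (d + 1) + [(0, None)])
    if not res.success:
        return None                        # no final-layer-only repair exists
    return w + res.x[:d], b + res.x[d]

# Tiny usage example: force both points' outputs into the interval [0, 1].
H = np.array([[1.0, 2.0], [3.0, -1.0]])
w, b = np.array([2.0, 0.5]), -1.0
print(repair_final_layer(H, w, b, lo=np.array([0.0, 0.0]), hi=np.array([1.0, 1.0])))

The paper's algorithms go well beyond this: they handle arbitrary layer choices and, via the Decoupled DNN architecture, polytope specifications over infinitely many points. The sketch only illustrates why the finite-point, single-layer case is already expressible as an LP.
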
Wed 15 Jun

Displayed time zone: Pacific Time (US & Canada)

15:30 - 16:50
Neural Networks and Numbers (SIGPLAN Track) at Macaw
Chair(s): Madan Musuvathi (Microsoft Research)
15:30 (20m) Talk, SIGPLAN Track
(OOPSLA 2021) FPL: fast Presburger arithmetic through transprecision
Arjun Pitchanathan (University of Edinburgh), Christian Ulmann (ETH Zurich), Michel Weber (ETH Zurich), Torsten Hoefler (ETH Zurich), Tobias Grosser (University of Edinburgh)
15:50 (20m) Talk, SIGPLAN Track
(PLDI 2021) Provable Repair of Deep Neural Networks
Matthew Sotoudeh (University of California, Davis), Aditya V. Thakur (University of California, Davis)
16:10 (20m) Talk, SIGPLAN Track
(POPL 2022) One Polynomial Approximation to Produce Correctly Rounded Results of an Elementary Function for Multiple Representations and Rounding Modes
Jay P. Lim (Yale University), Santosh Nagarakatte (Rutgers University)
16:30 (20m) Talk, SIGPLAN Track
(POPL 2022) Provably Correct, Asymptotically Efficient, Higher-Order Reverse-Mode Automatic Differentiation
Faustyna Krawiec (University of Cambridge), Simon Peyton Jones (Microsoft Research), Neel Krishnaswami (University of Cambridge), Tom Ellis (Microsoft Research), Richard A. Eisenberg (Tweag), Andrew Fitzgibbon (Graphcore)