Wed 15 Jun 2022, 16:10 - 16:30, at Macaw - Neural Networks and Numbers. Chair(s): Madan Musuvathi

Mainstream math libraries for floating point (FP) do not produce correctly rounded results for all inputs. In contrast, CR-LIBM and RLIBM provide correctly rounded implementations, but only for a specific FP representation with one rounding mode. Using such libraries for a representation with a different rounding mode or a different precision produces incorrect results due to double rounding. This paper proposes a novel method to generate a single polynomial approximation that produces correctly rounded results for all inputs, for multiple rounding modes, and for multiple precision configurations. To generate a correctly rounded library for $n$ bits, our key idea is to generate the polynomial approximation for a representation with $n+2$ bits using the \emph{round-to-odd} mode. We prove that the resulting polynomial approximation produces correctly rounded results for all five rounding modes in the IEEE standard and for every representation with $k$ bits such that $|E|+1 < k \leq n$, where $|E|$ is the number of exponent bits in the representation. Following the RLIBM approach, we approximate the correctly rounded result (rather than the real value of the function) when we generate the $n+2$-bit library with the round-to-odd mode. We generate polynomial approximations by structuring the problem as a linear program, with enhancements to polynomial generation that handle the round-to-odd mode. Our prototype is the first 32-bit float library that produces correctly rounded results for all inputs with all rounding modes in the IEEE standard using a single polynomial approximation. It also produces correctly rounded results for any FP configuration from 10 bits to 32 bits while being faster than mainstream libraries.
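
The round-to-odd intermediate is the key to avoiding double rounding: rounding the exact result once to $n+2$ bits with round-to-odd preserves enough information that a second rounding to any $k \leq n$ bits lands on the same value as rounding the exact result directly. The sketch below is not the paper's implementation; it is a minimal illustration using exact rationals, and the helper names (scale_exponent, round_rne, round_to_odd) are made up for this example. It shows a classic double-rounding trap with a 24-bit target: a round-to-nearest intermediate at 26 bits gives the wrong answer, while a round-to-odd intermediate at 26 bits matches direct rounding.

```python
# A minimal sketch, assuming exact rational inputs, of why a round-to-odd
# intermediate with two extra bits avoids double-rounding errors.
# Helper names are illustrative, not from the paper or the RLIBM code base.
from fractions import Fraction


def scale_exponent(x: Fraction, p: int) -> int:
    """Return e such that x * 2**e lies in [2**(p-1), 2**p), for x > 0."""
    e = 0
    while x * Fraction(2) ** e < 2 ** (p - 1):
        e += 1
    while x * Fraction(2) ** e >= 2 ** p:
        e -= 1
    return e


def round_rne(x: Fraction, p: int) -> Fraction:
    """Round x > 0 to p significant bits, round-to-nearest, ties-to-even."""
    e = scale_exponent(x, p)
    scaled = x * Fraction(2) ** e          # integer part is the p-bit significand
    sig = int(scaled)                      # floor, since x > 0
    frac = scaled - sig
    if frac > Fraction(1, 2) or (frac == Fraction(1, 2) and sig % 2 == 1):
        sig += 1
    return Fraction(sig) / Fraction(2) ** e


def round_to_odd(x: Fraction, p: int) -> Fraction:
    """Round x > 0 to p significant bits with round-to-odd: if the value is
    inexact at p bits, truncate and force the last significand bit to 1."""
    e = scale_exponent(x, p)
    scaled = x * Fraction(2) ** e
    sig = int(scaled)
    if scaled != sig and sig % 2 == 0:
        sig += 1
    return Fraction(sig) / Fraction(2) ** e


if __name__ == "__main__":
    # An exact value just above the midpoint between two adjacent 24-bit
    # significands: a classic double-rounding trap.
    x = Fraction(8388608) + Fraction(1, 2) + Fraction(1, 1024)

    direct = round_rne(x, 24)                     # round the exact value once
    via_rn = round_rne(round_rne(x, 26), 24)      # 26-bit round-to-nearest intermediate
    via_ro = round_rne(round_to_odd(x, 26), 24)   # 26-bit round-to-odd intermediate

    print("direct to 24 bits:          ", direct)   # 8388609
    print("via 26-bit round-to-nearest:", via_rn)   # 8388608 (double rounding goes wrong)
    print("via 26-bit round-to-odd:    ", via_ro)   # 8388609 (matches the direct result)
```

The final step here uses round-to-nearest-even, but the same check can be repeated with the other IEEE rounding modes in its place; the paper's theorem is exactly the statement that the round-to-odd intermediate gives the directly rounded result for all five modes and for every target width $k$ with $|E|+1 < k \leq n$.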

Wed 15 Jun

Displayed time zone: Pacific Time (US & Canada)

15:30 - 16:50
Neural Networks and Numbers (SIGPLAN Track) at Macaw
Chair(s): Madan Musuvathi (Microsoft Research)
15:30 (20m, Talk)
(OOPSLA 2021) FPL: fast Presburger arithmetic through transprecision
SIGPLAN Track
Arjun Pitchanathan (University of Edinburgh), Christian Ulmann (ETH Zurich), Michel Weber (ETH Zurich), Torsten Hoefler (ETH Zurich), Tobias Grosser (University of Edinburgh)
15:50 (20m, Talk)
(PLDI 2021) Provable Repair of Deep Neural Networks
SIGPLAN Track
Matthew Sotoudeh (University of California, Davis), Aditya V. Thakur (University of California, Davis)
16:10 (20m, Talk)
(POPL 2022) One Polynomial Approximation to Produce Correctly Rounded Results of an Elementary Function for Multiple Representations and Rounding Modes
SIGPLAN Track
Jay P. Lim (Yale University), Santosh Nagarakatte (Rutgers University)
16:30 (20m, Talk)
(POPL 2022) Provably Correct, Asymptotically Efficient, Higher-Order Reverse-Mode Automatic Differentiation
SIGPLAN Track
Faustyna Krawiec (University of Cambridge), Simon Peyton Jones (Microsoft Research), Neel Krishnaswami (University of Cambridge), Tom Ellis (Microsoft Research), Richard A. Eisenberg (Tweag), Andrew Fitzgibbon (Graphcore)