Wed 15 Jun 2022 15:30 - 15:50 at Macaw - Neural Networks and Numbers Chair(s): Madan Musuvathi

Presburger arithmetic provides the mathematical core for the polyhedral compilation techniques that drive analytical cache models, loop optimization for ML and HPC, formal verification, and even hardware design. Polyhedral compilation is widely regarded as slow due to the potentially high computational cost of the underlying Presburger libraries. Researchers typically use these libraries as powerful black-box tools, but the perceived internal complexity of these libraries, caused by the use of C as the implementation language and a focus on end-user-facing documentation, holds back broader performance-optimization efforts. With FPL, we introduce a new library for Presburger arithmetic built from the ground up in modern C++. We carefully document its internal algorithmic foundations, use lightweight C++ data structures to minimize memory management costs, and deploy transprecision computing across the entire library to effectively exploit machine integers and vector instructions. On a newly developed comprehensive benchmark suite for Presburger arithmetic, we show a 5.4x speedup in total runtime over the state-of-the-art library isl in its default configuration and 3.6x over a variant of isl optimized with element-wise transprecision computing. We expect that the availability of a well-documented and fast Presburger library will accelerate the adoption of polyhedral compilation techniques in production compilers.
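To illustrate the transprecision idea the abstract describes, here is a minimal, hypothetical C++ sketch, not FPL's actual API: arithmetic runs on fast 64-bit machine integers and widens only when an overflow is detected. The TransInt type and add function are illustrative names chosen for this example, and __int128 (a GCC/Clang extension) stands in for the arbitrary-precision fallback (e.g., GMP integers) a real library would use.

```cpp
// Sketch of element-wise transprecision arithmetic (illustrative only).
#include <cstdint>
#include <iostream>

struct TransInt {
  bool small = true;   // true: value is held in the fast 64-bit field
  int64_t val64 = 0;   // fast path: machine integer
  __int128 valBig = 0; // slow path: stand-in for arbitrary precision

  explicit TransInt(int64_t v) : small(true), val64(v) {}

  static TransInt big(__int128 v) {
    TransInt r(0);
    r.small = false;
    r.valBig = v;
    return r;
  }

  // Widen either representation to the large type.
  __int128 wide() const { return small ? (__int128)val64 : valBig; }
};

// Add two transprecision integers: use checked 64-bit addition when both
// operands are small; on overflow (or if either is already big), widen.
TransInt add(const TransInt &a, const TransInt &b) {
  if (a.small && b.small) {
    int64_t out;
    if (!__builtin_add_overflow(a.val64, b.val64, &out))
      return TransInt(out); // fast path: stays in machine integers
  }
  return TransInt::big(a.wide() + b.wide()); // slow path: widened result
}

int main() {
  TransInt x(INT64_MAX), y(1);
  TransInt z = add(x, y); // overflows int64_t, so the result widens
  std::cout << (z.small ? "small" : "widened") << '\n'; // prints "widened"
}
```

Keeping the decision per element, as sketched here, means the common overflow-free case stays on plain machine integers, which is what lets the fast path be vectorized, matching the abstract's claim of exploiting machine integers and vector instructions.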

https://dl.acm.org/doi/10.1145/3485539

Wed 15 Jun

Displayed time zone: Pacific Time (US & Canada)

15:30 - 16:50
Neural Networks and Numbers (SIGPLAN Track) at Macaw
Chair(s): Madan Musuvathi (Microsoft Research)
15:30
20m
Talk
(OOPSLA 2021) FPL: fast Presburger arithmetic through transprecision
SIGPLAN Track
Arjun Pitchanathan (University of Edinburgh), Christian Ulmann (ETH Zurich), Michel Weber (ETH Zurich), Torsten Hoefler (ETH Zurich), Tobias Grosser (University of Edinburgh)
15:50
20m
Talk
(PLDI 2021) Provable Repair of Deep Neural Networks
SIGPLAN Track
Matthew Sotoudeh (University of California, Davis), Aditya V. Thakur (University of California, Davis)
16:10
20m
Talk
(POPL 2022) One Polynomial Approximation to Produce Correctly Rounded Results of an Elementary Function for Multiple Representations and Rounding Modes
SIGPLAN Track
Jay P. Lim (Yale University), Santosh Nagarakatte (Rutgers University)
16:30
20m
Talk
(POPL 2022) Provably Correct, Asymptotically Efficient, Higher-Order Reverse-Mode Automatic Differentiation
SIGPLAN Track
Faustyna Krawiec (University of Cambridge), Simon Peyton Jones (Microsoft Research), Neel Krishnaswami (University of Cambridge), Tom Ellis (Microsoft Research), Richard A. Eisenberg (Tweag), Andrew Fitzgibbon (Graphcore)