Wed 15 Jun 2022 15:30 - 15:50 at Toucan - Tensors Chair(s): Sreepathi Pai
Thu 16 Jun 2022 03:30 - 03:50 at Toucan - Tensors

While loop reordering and fusion can have a large impact on the constant-factor performance of dense tensor programs, their effects on sparse tensor programs are asymptotic, often leading to orders-of-magnitude performance differences in practice. Sparse tensors also introduce a choice of compressed storage formats that can likewise have asymptotic effects. Research into sparse tensor compilers has led to simplified languages that express these tradeoffs, but the user is still expected to provide a schedule that makes the decisions. This is challenging because schedulers must anticipate the interaction between sparse formats, loop structure, potential sparsity patterns, and the compiler itself. Automating this decision-making process stands to finally make sparse tensor compilers accessible to end users.
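To make the asymptotic gap concrete, here is a minimal Python sketch (an illustration of the general phenomenon, not code from the paper or from TACO; the function names and the tiny CSR arrays are hypothetical). The same sparse matrix-vector product is computed with two loop structures: one visits every coordinate, the other visits only stored nonzeros.

# Two schedules for y = A @ x. Iterating coordinate-by-coordinate
# touches all m*n entries; iterating the stored nonzeros of a
# CSR-like format touches only nnz(A) entries.

def spmv_dense_iteration(A_dense, x):
    """O(m*n): visits every coordinate, zero or not."""
    m = len(A_dense)
    y = [0.0] * m
    for i in range(m):
        for j in range(len(x)):
            y[i] += A_dense[i][j] * x[j]
    return y

def spmv_csr_iteration(pos, crd, val, x, m):
    """O(nnz): visits only the stored entries of a CSR matrix."""
    y = [0.0] * m
    for i in range(m):
        for p in range(pos[i], pos[i + 1]):
            y[i] += val[p] * x[crd[p]]
    return y

# 3x4 matrix with two nonzeros: A[0][1] = 2.0 and A[2][3] = 5.0.
A_dense = [[0, 2.0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 5.0]]
pos, crd, val = [0, 1, 1, 2], [1, 3], [2.0, 5.0]
x = [1.0, 1.0, 1.0, 1.0]
assert spmv_dense_iteration(A_dense, x) == spmv_csr_iteration(pos, crd, val, x, 3)

When nnz(A) is much smaller than m*n, the second loop structure is asymptotically cheaper; this is exactly the kind of difference that a schedule, together with a storage format, determines.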

We present, to the best of our knowledge, the first automatic asymptotic scheduler for sparse tensor programs. We provide an approach to abstractly represent the asymptotic cost of schedules and to choose between them. We narrow the search space down to a manageably small “Pareto frontier” of asymptotically undominated kernels. We test our approach by compiling these kernels with the TACO sparse tensor compiler and comparing them against kernels generated with TACO's default schedules. Our results show that our approach reduces the scheduling space by orders of magnitude and that the generated kernels perform asymptotically better than those produced by the default schedules.
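To illustrate the Pareto-filtering idea in miniature, here is a hedged Python sketch (a simplification under assumed cost semantics, not the paper's actual cost model or implementation; dominates, pareto_frontier, and the candidate costs below are hypothetical). Each schedule's asymptotic cost is abstracted as a set of cost terms, and a schedule survives only if no other schedule's terms form a strict subset of its own.

# Each cost term is a frozenset of index variables, so {"i","j"}
# stands for a term like "iterations over the nonzeros of (i, j)".
# A schedule whose terms are a strict subset of another's is
# asymptotically no worse on every term and strictly better overall.

def dominates(a, b):
    """True if cost-term set `a` is a strict subset of `b`."""
    return a < b

def pareto_frontier(candidates):
    """Keep only the schedules no other candidate dominates."""
    return {name: cost for name, cost in candidates.items()
            if not any(dominates(other, cost)
                       for other_name, other in candidates.items()
                       if other_name != name)}

# Hypothetical candidate schedules for a kernel over indices i, j, k.
candidates = {
    "fused":   {frozenset({"i", "j"})},                         # one sparse pass
    "unfused": {frozenset({"i", "j"}), frozenset({"i", "k"})},  # extra pass
    "dense":   {frozenset({"i", "j"}), frozenset({"i", "j", "k"})},
}
print(pareto_frontier(candidates))  # only "fused" survives

In the real system the cost abstraction and dominance test are richer, but the effect is the same: the search space collapses to a small frontier of asymptotically undominated kernels.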

Wed 15 Jun

Displayed time zone: Pacific Time (US & Canada)

15:30 - 16:50: Tensors (PLDI) at Toucan (mirrored +12h)
Chair(s): Sreepathi Pai (University of Rochester)
15:30 (20m, Talk): Autoscheduling for Sparse Tensor Algebra with an Asymptotic Cost Model (PLDI)
Peter Ahrens (MIT CSAIL), Fredrik Kjolstad (Stanford University), Saman Amarasinghe (MIT CSAIL)
15:50 (20m, Talk): DISTAL: The Distributed Tensor Algebra Compiler (PLDI)
Rohan Yadav (Stanford University), Alex Aiken (Stanford University), Fredrik Kjolstad (Stanford University)
16:10 (20m, Talk): All you need is Superword-Level Parallelism: Systematic Control-Flow Vectorization with SLP (PLDI)
Yishen Chen (Massachusetts Institute of Technology), Charith Mendis (University of Illinois at Urbana-Champaign), Saman Amarasinghe (Massachusetts Institute of Technology)
16:30 (20m, Talk): Warping Cache Simulation of Polyhedral Programs (PLDI)
Canberk Morelli (Saarland University), Jan Reineke (Saarland University)

Thu 16 Jun

Displayed time zone: Pacific Time (US & Canada)

03:30 - 04:50: Tensors (PLDI) at Toucan (mirror session)
03:30 (20m, Talk): Autoscheduling for Sparse Tensor Algebra with an Asymptotic Cost Model (PLDI)
Peter Ahrens (MIT CSAIL), Fredrik Kjolstad (Stanford University), Saman Amarasinghe (MIT CSAIL)
03:50 (20m, Talk): DISTAL: The Distributed Tensor Algebra Compiler (PLDI)
Rohan Yadav (Stanford University), Alex Aiken (Stanford University), Fredrik Kjolstad (Stanford University)
04:10 (20m, Talk): All you need is Superword-Level Parallelism: Systematic Control-Flow Vectorization with SLP (PLDI)
Yishen Chen (Massachusetts Institute of Technology), Charith Mendis (University of Illinois at Urbana-Champaign), Saman Amarasinghe (Massachusetts Institute of Technology)
04:30 (20m, Talk): Warping Cache Simulation of Polyhedral Programs (PLDI)
Canberk Morelli (Saarland University), Jan Reineke (Saarland University)