LCTES 2022

Welcome to the 23rd ACM SIGPLAN/SIGBED International Conference on Languages, Compilers, and Tools for Embedded Systems (LCTES 2022).

LCTES provides a link between the programming languages and embedded systems engineering communities. Researchers and developers in these areas are addressing many similar problems, but with different backgrounds and approaches. LCTES is intended to expose researchers and developers from either area to relevant work and interesting problems in the other area and provide a forum where they can interact.

Important Dates

  • Paper submission deadline: March 14, 2022 (extended from March 7, 2022)
  • Paper notification: April 15, 2022 (extended from April 8, 2022)
  • Artifact submission: April 29, 2022
  • Camera-ready deadline: May 6, 2022
  • Artifact decision: May 9, 2022
  • Conference: June 14, 2022

Tue 14 Jun

Displayed time zone: Pacific Time (US & Canada)

09:00 - 10:00
Keynote (LCTES) at Rousseau Center (mirrored +12h)
Chair(s): Tobias Grosser University of Edinburgh
09:00
60m
Keynote
Domain-specific programming methodologies for domain-specific and emerging computing systems
LCTES
Jeronimo Castrillon TU Dresden, Germany
10:30 - 12:00
Optimization for Compilers and Languages (LCTES) at Rousseau Center (mirrored +12h)
Chair(s): Yousun Ko Yonsei University
10:30
20m
Talk
RollBin: Reducing code-size via loop rerolling at binary level (virtual)
LCTES
Tianao Ge Sun Yat-sen University, Zewei Mo Sun Yat-sen University, Kan Wu Sun Yat-sen University, Xianwei Zhang Sun Yat-sen University, Yutong Lu Sun Yat-sen University
10:50
20m
Talk
Tighten Rust's Belt: Shrinking Embedded Rust Binaries
LCTES
Hudson Ayers Stanford University, Google, Evan Laufer Stanford University, Paul Mure Stanford University, Jaehyeon Park Stanford University, Eduardo Rodelo Stanford University, Thea Rossman Stanford University, Andrey Pronin Google, Philip Levis Stanford University, Johnathan Van Why Google
11:10
20m
Talk
JAX Based Parallel Inference for Reactive Probabilistic Programming
LCTES
Guillaume Baudart IBM Research, USA, Louis Mandel IBM Research, USA, Reyyan Tekin
11:30
20m
Talk
Implicit State Machines (virtual)
LCTES
Fengyun Liu Oracle Labs, Aleksandar Prokopec Oracle Labs
11:50
5m
Talk
(WIP) Scalable Size Inliner for Mobile Applications
LCTES
13:30 - 15:00
Be aware of Hardware (LCTES) at Rousseau Center (mirrored +12h)
Chair(s): Kyoungwoo Lee Yonsei University
13:30
20m
Talk
Optimizing Data Reshaping Operations in Functional IRs for High-Level Synthesis
LCTES
Christof Schlaak University of Edinburgh, Tzung-Han Juang McGill University, Christophe Dubach McGill University
13:50
20m
Talk
Co-Mining: A Processing-in-Memory Assisted Framework for Memory-Intensive PoW Acceleration (virtual)
LCTES
Tianyu Wang The Chinese University of Hong Kong, Zhaoyan Shen Shandong University, Zili Shao The Chinese University of Hong Kong
14:10
20m
Talk
ISKEVA: In-SSD Key-Value Database Engine for Video Analytics Applications (virtual)
LCTES
Yi Zheng The Pennsylvania State University, Joshua Fixelle University of Virginia, Nagadastagiri Challapalle The Pennsylvania State University, Pingyi Huo The Pennsylvania State University, Vijaykrishnan Narayanan The Pennsylvania State University, Zili Shao The Chinese University of Hong Kong, Mircea R. Stan University of Virginia, Zhaoyan Shen Shandong University
14:30
20m
Talk
An Old Friend Is Better Than Two New Ones: Dual-Screen Android (virtual)
LCTES
Zizhan Chen The Chinese University of Hong Kong, Siqi Shang The Chinese University of Hong Kong, Qihong Wu The Chinese University of Hong Kong, Jin Xue The Chinese University of Hong Kong, Zhaoyan Shen Shandong University, Zili Shao The Chinese University of Hong Kong
14:50
5m
Talk
(WIP) Cache-Coherent CLAM (virtual)
LCTES
Chen Ding University of Rochester, Benjamin Reber University of Rochester, Dorin Patru Rochester Institute of Technology
15:30 - 17:00
How to Analyze and Utilize (LCTES) at Rousseau Center (mirrored +12h)
Chair(s): Guillaume Baudart IBM Research, USA
15:30
20m
Talk
Code Generation Criteria for Buffered Exposed Datapath Architectures from Dataflow Graphs (virtual)
LCTES
Klaus Schneider University of Kaiserslautern, Anoop Bhagyanath University of Kaiserslautern, Julius Roob University of Kaiserslautern
Pre-print
15:50
20m
Talk
Trace-and-Brace (TAB): Bespoke Software Countermeasures against Soft Errors
LCTES
Yousun Ko Yonsei University, Alex Bradbury lowRISC C.I.C., Bernd Burgstaller Yonsei University, Robert Mullins University of Cambridge
16:10
20m
Talk
Automated Kernel Fusion for GPU Based on Code Motion
LCTES
Junji Fukuhara Tokyo University of Science, Munehiro Takimoto Tokyo University of Science
16:30
20m
Talk
TCPS: A Task and Cache-aware Partitioned Scheduler for Hard Real-time Multi-core Systems (virtual)
LCTES
Yixian Shen University of Amsterdam, Jun Xiao University of Amsterdam, Andy Pimentel University of Amsterdam
16:50
5m
Talk
(WIP) A Memory Interference Analysis using a Formal Timing Analyzer
LCTES
Mihail Asavoae Univ. Paris-Saclay, CEA List, Oumaima Matoussi Univ. Paris-Saclay, CEA List, Asmae Bouachtala Univ. Paris-Saclay, CEA List, Hai-Dang Vu Univ. Paris-Saclay, CEA List, Mathieu Jan Univ. Paris-Saclay, CEA List

Wed 15 Jun


09:00 - 10:10
Keynote: Emery Berger (PLDI) at Kon-Tiki (mirrored +12h)
Chair(s): Işıl Dillig University of Texas at Austin
09:00
10m
Other
Welcome to PLDI 2022
PLDI
Işıl Dillig University of Texas at Austin, Ranjit Jhala University of California at San Diego; Amazon Web Services
09:10
60m
Keynote
Getting Your Research Adopted
PLDI
Emery D. Berger University of Massachusetts Amherst
Pre-print Media Attached
18:00 - 19:00
PLDI 2022 Reception (PLDI) at Beach North
18:00
60m
Social Event
PLDI 2022 Reception (sponsored by WhatsApp by Meta)
PLDI


Accepted Papers

  • An Old Friend Is Better Than Two New Ones: Dual-Screen Android (virtual)
  • Automated Kernel Fusion for GPU Based on Code Motion
  • Code Generation Criteria for Buffered Exposed Datapath Architectures from Dataflow Graphs (virtual, pre-print available)
  • Co-Mining: A Processing-in-Memory Assisted Framework for Memory-Intensive PoW Acceleration (virtual)
  • Implicit State Machines (virtual)
  • ISKEVA: In-SSD Key-Value Database Engine for Video Analytics Applications (virtual)
  • JAX Based Parallel Inference for Reactive Probabilistic Programming
  • Optimizing Data Reshaping Operations in Functional IRs for High-Level Synthesis
  • RollBin: Reducing code-size via loop rerolling at binary level (virtual)
  • TCPS: A Task and Cache-aware Partitioned Scheduler for Hard Real-time Multi-core Systems (virtual)
  • Tighten Rust's Belt: Shrinking Embedded Rust Binaries
  • Trace-and-Brace (TAB): Bespoke Software Countermeasures against Soft Errors
  • (WIP) A Memory Interference Analysis using a Formal Timing Analyzer
  • (WIP) Cache-Coherent CLAM (virtual)
  • (WIP) Scalable Size Inliner for Mobile Applications

Submission

Submissions must use the ACM SIGPLAN subformat of the acmart format (explained in more detail at http://www.sigplan.org/Resources/Author). Each paper is limited to 10 pages for full papers or 4 pages for work-in-progress papers, excluding the bibliography, in 10pt font. There is no limit on the page count for references. Each reference must list all authors of the paper (do not use et al.). Citations should be in numeric style, e.g., [52]. Submissions should be in PDF format and printable on both US Letter and A4 paper. For papers in the work-in-progress category, please prepend “WIP: ” to the title.
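
As a point of reference, a minimal acmart setup matching these requirements might look like the sketch below; the class options follow the acmart documentation, but you should double-check them against the current SIGPLAN template before submitting.

```latex
% Sketch of an LCTES double-blind submission setup (assumes the acmart class is installed).
% `sigplan' selects the ACM SIGPLAN subformat (10pt body font by default);
% `review' adds line numbers for reviewers; `anonymous' suppresses author names.
\documentclass[sigplan,review,anonymous]{acmart}

\begin{document}
\title{WIP: A Hypothetical Work-in-Progress Title} % "WIP: " prefix only for WIP papers
\maketitle
% Body text goes here; citations are numeric by default in the sigplan
% subformat, e.g., [52].
\end{document}
```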

To enable double-blind reviewing, submissions must adhere to two rules:

  • author names and their affiliations must be omitted; and,
  • references to related work by the authors should be in the third person (e.g., not “We build on our previous work …” but rather “We build on the work of …”).

However, nothing should be done in the name of anonymity that weakens the submission or makes the job of reviewing the paper more difficult (e.g., important background references should not be omitted or anonymized). Papers must describe unpublished work that is not currently submitted for publication elsewhere as discussed here. Authors of accepted papers will be required to sign an ACM copyright release.

Submission site: https://lctes2022.hotcrp.com

Call for Artifacts

Submission Site

Submit your artifacts through https://lctes2022artifacts.hotcrp.com.

Artifact Submission Deadline: 11:59pm April 29, 2022 (AoE)
Artifact Decision Notification: May 9, 2022

General Info

The authors of all accepted LCTES papers (including WIP papers) are invited to submit supporting materials to the Artifact Evaluation process. Artifact Evaluation is run by a separate Artifact Evaluation Committee (AEC) whose task is to assess how well the artifacts support the work described in the papers. This submission is voluntary but strongly encouraged and will not influence the final decision regarding the papers.

At LCTES, we follow ACM’s artifact review and badging policy, version 1.1. ACM describes a research artifact as follows:

By “artifact” we mean a digital object that was either created by the authors to be used as part of the study or generated by the experiment itself. For example, artifacts can be software systems, scripts used to run experiments, input datasets, raw data collected in the experiment, or scripts used to analyze results.

Submission of an artifact does not imply automatic permission to make its content public. AEC members will be instructed that they may not publicize any part of the submitted artifacts during or after completing evaluation, and they will not retain any part of any artifact after evaluation. Thus, you are free to include models, data files, proprietary binaries, and similar items in your artifact.

We expect each artifact to receive three reviews. Papers that go through the Artifact Evaluation process successfully will receive ACM reproducibility badge(s) printed on the papers themselves and available as meta information in the ACM Digital Library.

Artifact evaluation is single-blind. Please take precautions (e.g., turning off analytics and logging) to avoid accidentally learning the identities of reviewers.

Badging

Papers with accepted artifacts will be assigned official ACM artifact evaluation badges, based on the criteria defined by ACM. ACM defines three types of badges to communicate how the artifact has been evaluated: the Availability (green) badge, the Reproducibility (blue) badge, and the Functionality/Reusability (red) badge. A single paper can receive up to three badges. (Please refer to the ACM website for detailed badge information.) Note that artifacts will be evaluated with respect to the claims and presentation in the submitted version of the paper, not the camera-ready version.

The badges will appear on the first page of the camera-ready version of the paper, indicating that the artifact was submitted, evaluated, and found to be functional. Artifact authors will be allowed to revise their camera ready paper after they are notified of their artifact’s publication in order to include a link to the artifact’s DOI.

Note that we do not provide the “Replication” (light blue) badge, just the “Reproducibility” (dark blue) one.

Guidelines

  1. Think carefully about which badge(s) you want.
    • If making your code public is all you want to do, seek only the “Availability” (green) badge. The reviewers will not exercise the artifact for its functionality or validate the claims.
    • If you only plan to reproduce the claims without making your artifact Documented, Consistent, Complete, and exercisable, seek the “Reproducibility” (blue) badge rather than the “Functionality/Reusability” (red) badge.
    • If you do not plan on making the artifact available to the public, do not seek the “Availability” (green) badge but the other one or two.
  2. Minimize the artifact setup overhead.
    • A well-packaged artifact is easily usable by the reviewers, saving them time and frustration and more clearly conveying the value of your work during evaluation. A great way to package an artifact is as a Docker image or a virtual machine that runs “out of the box” with very little system-specific configuration. We encourage authors to include pre-built binaries for all of their code so that reviewers can start with little effort, together with the sources and build scripts needed to regenerate those binaries, to guarantee maximum transparency. Providing pre-built VMs or Docker containers is preferable to providing scripts (e.g., Docker or Vagrant configurations) that build the VM, since this alleviates reliance on external dependencies. Your artifact should have a container or a bootable virtual-machine image with all of the necessary libraries installed. After preparing your artifact, download and test it on at least one fresh machine where you did not prepare the artifact; this will help you find missing dependencies, if any.
    • Giving AE reviewers remote access to your machines with preinstalled (proprietary) software is also possible.

Preparing an Artifact

Your submission should be ONE PDF file consisting only of a URL pointing to a widely available compressed archive format, such as ZIP (.zip), tar and gzip (.tgz), or tar and bzip2 (.tbz2). Ensure the file has the suffix indicating its format. The URL must protect the anonymity of the reviewers (e.g., a Google Drive URL).

The compressed archive should consist of three pieces:

  1. The submission version of your accepted paper.
  2. A README.txt file (PDF or plaintext format) that explains your artifact (details below).
  3. A folder containing the artifact.

The README.txt should consist of two parts:

  1. a Getting Started Guide, and
  2. Step-by-Step Instructions for how you propose to evaluate your artifact (with appropriate connections to the relevant sections of your paper).

The Getting Started Guide should contain setup instructions (including, for example, a pointer to the VM player software, its version, passwords if needed, etc.) and basic testing of your artifact that you expect a reviewer to be able to complete in 30 minutes. Reviewers will follow all the steps in the guide during an initial kick-the-tires phase. The Getting Started Guide should be as simple as possible, and yet it should stress the key elements of your artifact. Anyone who has followed this guide should have no technical difficulties with the rest of your artifact.

The Step-by-Step Instructions explain how to reproduce any experiments or other activities that support the conclusions in your paper. Write them for readers who have a deep interest in your work and are studying it to improve it or compare against it. If your artifact runs for more than a few minutes, point this out and explain how to run it on smaller inputs.

Where appropriate, include descriptions of and links to files (included in the archive) that represent expected outputs (e.g., the log files expected to be generated by your tool on the given inputs); if there are warnings that are safe to be ignored, explain which ones they are.

Please include the following if possible:

  • A list of claims from the paper supported by the artifact, and how/why.
  • A list of claims from the paper not supported by the artifact, and how/why. Examples: performance claims cannot be reproduced in a VM; the authors are not allowed to redistribute specific benchmarks; etc. Artifact reviewers can then center their evaluation around these specific claims.
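
Putting the above together, a README.txt skeleton might look like the following; all bracketed items are illustrative placeholders, not prescribed content.

```text
Getting Started Guide
---------------------
Requirements: [VM player or Docker version; passwords if needed]
Setup:        [commands to import and start the VM/container]
Kick-the-tires (under 30 minutes): [one small run and its expected output]

Step-by-Step Instructions
-------------------------
Experiment 1 (supports Section [X], Figure [Y]):
  [command]; expected output in [file]
  Full run takes [N] hours; a reduced input is provided in [file]
Warnings that are safe to ignore: [list]

Claims supported by the artifact:     [list, with how/why]
Claims not supported by the artifact: [list, with why, e.g., performance
                                       claims cannot be reproduced in a VM]
```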

When packaging your artifact, please keep in mind:

  • how accessible you are making your artifact to other researchers, and
  • the fact that the AEC members will have a limited time in which to make an assessment of each artifact.

We strongly encourage you to use a container (e.g., Docker), which provides a way to make an easily reproducible environment. It also helps the AEC have confidence that errors or other problems cannot harm their machines.

You should make your artifact available as a single archive file and use the naming convention <paper #>.<suffix>, where the appropriate suffix is used for the given archive format. Please use a widely available compressed archive format such as ZIP (.zip), tar and gzip (.tgz), or tar and bzip2 (.tbz2). Please use open formats for documents.
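
To make the layout and naming convention concrete, the packaging step can be sketched as follows; the paper number (42), the staging directory name, and the file contents are all placeholders, while the three-piece layout and the <paper #>.zip name follow the guidelines above.

```python
# Sketch: assemble an artifact archive named <paper #>.zip, here for a
# hypothetical paper #42. Directory names and file contents are placeholders;
# only the three-piece layout and the naming convention come from the call.
import zipfile
from pathlib import Path

PAPER_NO = "42"  # placeholder: use your own HotCRP paper number
staging = Path("artifact-staging")

# The three required pieces: the submitted paper, README.txt, and the artifact folder.
(staging / "artifact").mkdir(parents=True, exist_ok=True)
(staging / "paper.pdf").write_bytes(b"%PDF-1.4\n% placeholder submission PDF\n")
(staging / "README.txt").write_text(
    "Getting Started Guide\n...\n\nStep-by-Step Instructions\n...\n")
(staging / "artifact" / "run.sh").write_text("#!/bin/sh\necho kick-the-tires\n")

# Pack everything into 42.zip, with paths stored relative to the staging root.
archive = Path(f"{PAPER_NO}.zip")
with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
    for path in sorted(staging.rglob("*")):
        zf.write(path, path.relative_to(staging))

print("created", archive)
```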

Artifact Evaluation Committee

Other than the chair, the AEC members are senior graduate students, postdocs, or recent PhD graduates, identified with the help of the LCTES PC and recent artifact evaluation committees. Please check SIGPLAN’s Empirical Evaluation Guidelines for some methodologies to consider during evaluation.

Throughout the review period, reviews will be submitted to HotCRP and will be (approximately) continuously visible to authors. AEC reviewers will be able to interact continuously (and anonymously) with authors for clarifications, system-specific patches, and other logistics to help ensure that the artifact can be evaluated. The goal of continuous interaction is to prevent rejecting artifacts for “wrong library version” types of problems.

During the evaluation process, authors and the AEC may communicate anonymously through the HotCRP system to overcome technical difficulties. Ideally, we hope to see all submitted artifacts successfully pass artifact evaluation.

Call for Papers

Programming languages, compilers, and tools are important interfaces between embedded systems and emerging applications in the real world. Embedded systems are being aggressively adapted for deep neural network applications, autonomous vehicles, robots, healthcare applications, and more. However, these emerging applications impose challenges that conflict with conventional design requirements and increase the complexity of embedded system designs. Furthermore, they exploit new hardware paradigms to scale up multicores (including GPUs and FPGAs) and distributed systems built from many cores. Programming languages, compilers, and tools are therefore becoming more important for addressing issues such as productivity, validation, verification, maintainability, safety, and reliability while meeting both performance goals and resource constraints.

LCTES 2022 solicits papers presenting original work on programming languages, compilers, tools, theory, and architectures that help in overcoming these challenges. Research papers on innovative techniques are welcome, as well as experience papers on insights obtained by experimenting with real-world systems and applications. Papers can be submitted through HotCRP.

Important Dates

  • Paper submission deadline: March 14, 2022 (extended from March 7, 2022)
  • Paper notification: April 15, 2022 (extended from April 8, 2022)
  • Camera-ready deadline: May 6, 2022
  • Conference: June 14, 2022

Paper Categories

  • Full paper: 10 pages presenting original work.
  • Work-in-progress paper: 4 pages presenting original ideas that are likely to trigger interesting discussions.

Accepted papers in both categories will appear in the proceedings published by ACM.

LCTES 2022 provides a journal mode in addition to the usual conference mode. Accepted full papers will be selectively invited to be published in a special issue of the ACM Transactions on Embedded Computing Systems (TECS).

Original contributions are solicited on the topics of interest including, but not limited to:

Programming language challenges, including:

  • Domain-specific languages
  • Features to exploit multicore, reconfigurable, and other emerging architectures
  • Features for distributed, adaptive, and real-time control embedded systems
  • Capabilities for specification, composition, and construction of embedded systems
  • Language features and techniques to enhance reliability, verifiability, and security
  • Virtual machines, concurrency, inter-processor synchronization, and memory management

Compiler challenges, including:

  • Interaction between embedded architectures, operating systems, and compilers
  • Interpreters, binary translation, just-in-time compilation, and split compilation
  • Support for enhanced programmer productivity
  • Support for enhanced debugging, profiling, and exception/interrupt handling
  • Optimization for low power/energy, code/data size, and real-time performance
  • Parameterized and structural compiler design space exploration and auto-tuning

Tools for analysis, specification, design, and implementation, including:

  • Hardware, system software, application software, and their interfaces
  • Distributed real-time control, media players, and reconfigurable architectures
  • System integration and testing
  • Performance estimation, monitoring, and tuning
  • Run-time system support for embedded systems
  • Design space exploration tools
  • Support for system security and system-level reliability
  • Approaches for cross-layer system optimization

Theory and foundations of embedded systems, including:

  • Predictability of resource behavior: energy, space, time
  • Validation and verification, in particular of concurrent and distributed systems
  • Formal foundations of model-based design as the basis for code generation, analysis, and verification
  • Mathematical foundations for embedded systems
  • Models of computations for embedded applications

Novel embedded architectures, including:

  • Design and implementation of novel architectures
  • Workload analysis and performance evaluation
  • Architecture support for new language features, virtualization, compiler techniques, debugging tools
  • Architectural features to improve power/energy, code/data size, and predictability

Mobile systems and IoT, including:

  • Operating systems for mobile and IoT devices
  • Compiler and software tools for mobile and IoT systems
  • Energy management for mobile and IoT devices
  • Memory and IO techniques for mobile and IoT devices

Full papers

  • Please check your presentation schedule in the “Program” tab.
  • Presentations are 17 minutes with an additional 3 minutes for Q&A.
  • Please connect with your session chair before your session, and provide them with your short bio so that they can introduce you.

Work-in-Progress papers

  • Please check your presentation schedule in the “Program” tab.
  • Presentations are 4 minutes with an additional minute for Q&A.
  • Please connect with your session chair before your session, and provide them with your short bio so that they can introduce you.

Slide specs

  • Presentations should be in pptx/pdf format in an aspect ratio of 16:9.

To register for LCTES 2022, go here, click ‘REGISTER’, and then follow the steps shown below.

  • On page 1, choose your registration type (i.e., regular or student, ACM or SIGPLAN membership number).
  • On page 2, select either ‘In-Person’ or ‘Virtual’ for LCTES, along with the other conferences, workshops, and tutorials you wish to register for, depending on your plans. Note that in-person registration only includes LCTES, whereas virtual registration includes access to all the colocated conferences, workshops, and tutorials.
  • On page 3, enter your personal information.
  • On page 4, indicate your mailing preference and agree to the Terms of Service.
  • On page 5, select your diet preference and mentoring options.
  • On page 6, review your registration information and select a payment option.

Please see this link for general information on registering for PLDI and colocated conferences, workshops, and tutorials.

Questions? Use the LCTES contact form.