A paper consists of a constellation of artifacts that extend beyond the document itself: software, proofs, models, test suites, benchmarks, and so on. In some cases, the quality of these artifacts is as important as that of the document itself, yet most of our conferences offer no formal means to submit and evaluate anything but the paper.
Following a trend in our community over the past several years, PLDI 2022 includes an Artifact Evaluation process, which allows authors of accepted papers to optionally submit supporting artifacts. The goal of artifact evaluation is two-fold: to probe further into the claims and results presented in a paper, and to reward authors who take the trouble to create useful artifacts to accompany the work in their paper. Artifact evaluation is optional, but highly encouraged; authors submit their artifacts for evaluation only after their papers have been accepted.
The evaluation and dissemination of artifacts improves reproducibility, and enables authors to build on top of each other’s work. Beyond helping the community as a whole, the evaluation and dissemination of artifacts confers several direct and indirect benefits to the authors themselves.
The ideal outcome for the artifact evaluation process is to accept every artifact that is submitted, provided it meets the evaluation criteria listed below. We will strive to remain as close as possible to that ideal. Some artifacts may nevertheless not pass muster and may be rejected; in every case, we will evaluate in earnest and make our best attempt to follow the authors' evaluation instructions.
The artifact evaluation committee will read each artifact’s paper and judge how well the submitted artifact conforms to the expectations set by the paper. The specific artifact evaluation criteria are:
- Consistency with the paper: the artifact should reproduce the same results, modulo experimental error.
- Completeness: the artifact should reproduce all the results that the paper reports, and should include everything (code, tools, 3rd party libraries, etc.) required to do so.
- Documentation: the artifact should be well documented so that reproducing the results is easy and transparent.
- Ease of reuse: the artifact should provide everything needed to build on top of the original work, including source files together with a working build process that can recreate the binaries provided.
Note that artifacts will be evaluated with respect to the claims and presentation in the submitted version of the paper, not the camera-ready version.
The artifact evaluation committee evaluates each artifact and may award one or both of the following badges:
Functional: This is the basic “accepted” outcome for an artifact. An artifact can be awarded a functional badge if the artifact supports all claims made in the paper, possibly excluding some minor claims if there are very good reasons they cannot be supported. In the ideal case, an artifact with this designation includes all relevant code, dependencies, input data (e.g., benchmarks), and the artifact’s documentation is sufficient for reviewers to reproduce the exact results described in the paper. If the artifact claims to outperform a related system in some way (in time, accuracy, etc.) and the other system was used to generate new numbers for the paper (e.g., an existing tool was run on new benchmarks not considered by the corresponding publication), artifacts should include a version of that related system, and instructions for reproducing the numbers used for comparison as well. If the alternative tool crashes on a subset of the inputs, simply note this expected behavior.
Deviations from this ideal must be for good reason. A non-exclusive list of justifiable deviations includes:
- Some benchmark code is subject to licensing or intellectual property restrictions and cannot legally be shared with reviewers (e.g., licensed benchmark suites like SPEC, or when a tool is applied to private proprietary code). In such cases, all available benchmarks should be included. If all benchmark data from the paper falls into this case, alternative data should be supplied: providing a tool with no meaningful inputs to evaluate on is not sufficient to justify claims that the artifact works.
- Some of the results are performance data, and therefore exact numbers depend on the particular hardware. In this case, artifacts should explain how to recognize when experiments on other hardware reproduce the high-level results (e.g., that a certain optimization exhibits a particular trend, or that, when comparing two tools, one outperforms the other on a certain class of cases).
- In some cases, repeating the evaluation may take a long time; reviewers may not reproduce the full results in such cases.
In some cases, the artifact may require specialized hardware (e.g., a CPU with a particular new feature, a specific class of GPU, or a cluster of GPUs). For such cases, authors should contact the Artifact Evaluation Co-Chairs (Xinyu Wang and Niki Vazou) as soon as possible after round 1 notification to work out how to make evaluation possible. In past years, one outcome was that the authors of an artifact requiring specialized hardware paid for a cloud instance with that hardware, which reviewers could access remotely.
Reusable: This badge may only be awarded to artifacts judged functional. A Reusable badge is given when reviewers feel the artifact is particularly well packaged, documented, designed, etc. to support future research that might build on the artifact. For example, if it seems relatively easy for others to reuse this directly as the basis of a follow-on project, the AEC may award a Reusable badge.
For binary-only artifacts to be considered Reusable, it must be possible for others to directly use the binary in their own research, e.g., a JAR file with high-quality client documentation that someone else could use as a component of their own project.
Artifacts with source can be considered Reusable:
- if they can be reused as components,
- if others can learn from the source and apply the knowledge elsewhere (e.g., learning an implementation or proof/formalization technique for use in a separate codebase), or
- if others can directly modify and/or extend the system to handle new or expanded use cases.
Artifacts given one or both of the Functional and Reusable badges are generally referred to as accepted.
After decisions on the Functional and Reusable badges have been made, authors of any artifacts (including those not reviewed by the AEC, and those reviewed but not found Functional during reviewing) can earn an additional badge by making their artifact durably available:
Available: This badge is automatically earned by artifacts that are made publicly available in an archival location. We strongly suggest, but do not require, that authors of artifacts evaluated as Functional archive the evaluated version. There are two routes for this:
- Authors upload a snapshot of the complete artifact to Zenodo, which provides a DOI specific to the artifact. Note that GitHub and similar services are not adequate for receiving this badge (see FAQ), and that Zenodo provides a way to make subsequent revisions of the artifact available and linked from the specific version.
- Authors can work with Conference Publishing to upload their artifacts directly to the ACM, where the artifact will be hosted alongside the paper.
To maintain the separation of paper and artifact review, authors will only be asked to upload their artifacts after their papers have been accepted. Authors planning to submit to the artifact evaluation should prepare their artifacts well in advance of the artifact submission deadline to ensure adequate time for packaging and documentation.
Throughout the artifact review period, submitted reviews will be (approximately) continuously visible to authors. Reviewers will be able to continuously interact (anonymously) with authors for clarifications, system-specific patches, and other logistical help to make the artifact evaluable. The goal of continuous interaction is to prevent rejecting artifacts for minor issues unrelated to the research, such as a “wrong library version” problem. The conference proceedings will include a discussion of the continuous artifact evaluation process.
Types of Artifacts
The artifact evaluation will accept any artifact that authors wish to submit, broadly defined. A submitted artifact might be:
- mechanized proofs
- test suites
- data sets
- hardware (if absolutely necessary)
- a video of a difficult- or impossible-to-share system in use
- any other artifact described in a paper
Artifact Evaluation Committee
Other than the chairs, the AEC members are senior graduate students, postdocs, or recent PhD graduates, identified with the help of the PLDI PC and recent artifact evaluation committees.
Among researchers, experienced graduate students are often in the best position to handle the diversity of systems that the AEC will encounter. In addition, graduate students represent the future of the community, so involving them in the AEC process early will help push this process forward. The AEC chairs devote considerable attention to both mentoring and monitoring, helping to educate the students on their responsibilities and privileges.
Call for Artifacts
Submit your artifact via HotCRP: https://pldi22ae.hotcrp.com/
A well-packaged artifact is more likely to be easily usable by the reviewers, saving them time and frustration, and more clearly conveying the value of your work during evaluation. A great way to package an artifact is as a Docker image or as a virtual machine that runs “out of the box” with very little system-specific configuration. We encourage authors to include pre-built binaries for all their code, so that reviewers can get started with little effort, together with the source and build scripts that allow reviewers to regenerate those binaries, to guarantee maximum transparency. Providing pre-built VMs or Docker containers is preferable to providing scripts (e.g., Docker or Vagrant configurations) that build the VM, since this alleviates reliance on external dependencies.
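As an illustration only, the sketch below shows one way such a Docker image might be set up. The base image, package list, and file names (including requirements.txt) are placeholders and should be adapted to your artifact.

```
# Sketch of a possible Dockerfile; all names and versions are placeholders.
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive

# Install the system packages your tool needs to build and run.
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential python3 python3-pip git && \
    rm -rf /var/lib/apt/lists/*

# Copy the artifact into the image: source, benchmarks, scripts, pre-built binaries.
COPY . /artifact
WORKDIR /artifact

# Pin dependencies to the versions used for the paper's experiments.
RUN pip3 install -r requirements.txt

# Show the README by default so reviewers see the instructions first.
CMD ["cat", "README.txt"]
```

With a setup like this, reviewers can build and enter the environment with commands such as `docker build -t my-artifact .` followed by `docker run -it my-artifact bash` (the image name here is just an example).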
Submission of an artifact does not imply automatic permission to make its content public. AEC members will be instructed that they may not publicize any part of the submitted artifacts during or after completing evaluation, and they will not retain any part of any artifact after evaluation. Thus, you are free to include models, data files, proprietary binaries, and similar items in your artifact.
Artifact evaluation is single-blind. Please take precautions (e.g., turning off analytics and logging) to help prevent accidentally learning the identities of reviewers.
Generating the Artifact
Your submission should consist of three pieces:
- The submission version of your accepted paper (in PDF format).
- A README.txt file that explains your artifact (details below).
- A Zenodo link (details below).
README.txt should consist of two parts:
- a Getting Started Guide and
- Step-by-Step Instructions for how you propose to evaluate your artifact (with appropriate connections to the relevant sections of your paper).
The Getting Started Guide should contain setup instructions (including, for example, a pointer to the VM player software, its version, passwords if needed, etc.) and basic testing of your artifact that you expect a reviewer to be able to complete in 30 minutes. Reviewers will follow all the steps in the guide during an initial kick-the-tires phase. The Getting Started Guide should be as simple as possible, and yet it should stress the key elements of your artifact. Anyone who has followed the Getting Started Guide should have no technical difficulties with the rest of your artifact.
The Step-by-Step Instructions explain how to reproduce any experiments or other activities that support the conclusions in your paper. Write this section for readers who have a deep interest in your work and are studying it to improve it or compare against it. If your artifact runs for more than a few minutes, point this out and explain how to run it on smaller inputs.
Where appropriate, include descriptions of and links to files (included in the archive) that represent expected outputs (e.g., the log files your tool is expected to generate on the given inputs); if there are warnings that can safely be ignored, explain which ones they are. A sketch of one possible README outline appears below, after the documentation requirements.
The artifact’s documentation should include the following:
- A list of claims from the paper supported by the artifact, and how/why.
- A list of claims from the paper not supported by the artifact, and how/why.
Example: performance claims cannot be reproduced in a VM, the authors are not allowed to redistribute specific benchmarks, etc. Artifact reviewers can then center their evaluation around these specific claims.
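For concreteness, here is one possible README.txt skeleton. The script names, running times, and section numbers are purely illustrative and not prescribed by the AEC.

```
ARTIFACT README (illustrative skeleton; all names and numbers are placeholders)

1. Getting Started Guide (about 30 minutes)
   - Requirements: Docker 20.x (or VirtualBox 6.1), 8 GB RAM, 20 GB disk.
   - VM login, if applicable: user "artifact", password "artifact".
   - Kick-the-tires test: run ./run_example.sh; the expected output is in
     expected/example.log.

2. Step-by-Step Instructions
   - Reproducing Table 2: run ./run_benchmarks.sh (about 2 hours); results are
     written to results/table2.csv.
   - Reproducing Figure 5: run ./plot_figure5.sh and compare against
     expected/figure5.pdf. Warnings about missing fonts can safely be ignored.

3. Claims supported by the artifact
   - Claim 1 (Section 5.1): the tool verifies all 40 benchmarks.

4. Claims not supported by the artifact
   - Claim 2 (Section 5.3): absolute running times depend on hardware; only the
     relative trend is expected to reproduce.
```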
Please create a Zenodo (https://zenodo.org/) repository. If you intend to publish the artifact, you can choose Open Access for the License setting. Please note that this will generate a Zenodo DOI that is permanently public. Alternatively, you can create a “private” repository by checking Restricted Access, which requires you to grant permission to anyone (in our case, the AEC members) who wants to access the repository.
Packaging the Artifact
When packaging your artifact, please keep in mind: a) how accessible you are making your artifact to other researchers, and b) the fact that the AEC members will have a limited time in which to make an assessment of each artifact.
Your artifact should be packaged as a container or a bootable virtual machine image with all of the necessary libraries installed.
We strongly encourage you to use a container (e.g., Docker: https://www.docker.com/).
Using a container or a virtual machine image provides a way to make an easily reproducible environment — it is less susceptible to bit rot. It also helps the AEC have confidence that errors or other problems cannot cause harm to their machines.
You should upload your artifact to Zenodo and submit the Zenodo link. Please use open formats for documents.
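One possible packaging workflow, assuming a Docker-based artifact (the image and file names below are placeholders), is to export the built image and bundle it with the rest of the artifact before uploading to Zenodo:

```
# Build the image from your Dockerfile and export it as a single file.
docker build -t pldi22-artifact .
docker save -o pldi22-artifact.tar pldi22-artifact

# Bundle the image together with the source, README.txt, and expected outputs,
# then upload the resulting archive to Zenodo and submit the Zenodo link.
tar czf artifact.tar.gz pldi22-artifact.tar src/ README.txt expected/
```

Reviewers can then restore the environment with `docker load -i pldi22-artifact.tar`, without rebuilding anything from external sources.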
Discussion with Reviewers
We expect each artifact to receive 3-4 reviews.
Throughout the review period, reviews will be submitted to HotCRP and will be (approximately) continuously visible to authors. AEC reviewers will be able to continuously interact (anonymously) with authors for clarifications, system-specific patches, and other logistics to help ensure that the artifact can be evaluated. The goal of continuous interaction is to prevent rejecting artifacts for “wrong library version”-type problems.
Based on the reviews and discussion among the AEC, one or more artifacts will be selected for Distinguished Artifact awards.
Info for Reviewers
If you want to be part of the Artifact Evaluation committee, you can fill out the self-nomination form: https://forms.gle/pJJfStLXJwaqJmtm6
You can also nominate someone (colleague, student) whom you think would be a good match for the Artifact Evaluation Committee using this form: https://forms.gle/nDtW7wtpkGvU8MQBA
- Author artifact submission: Mar 4
- Reviewer preferences due: Mar 8
- (Phase 1) Basic functionality reviews due: Mar 15
- (Phase 2) Full reviews due: Mar 25
- (Phase 3) Final revised reviews due: Apr 6
- Author notification: Apr 13
We expect the bulk of the review work to take place between March 9 and March 25. During this time, reviewers will complete 3 to 4 reviews. Reviewers can expect each artifact to take about 8 hours on average to review completely.
Phase 1 Deadline
In this phase, reviewers will go through the Getting Started Guide that accompanies each artifact and submit a short review based on the “basic functionality” described in it. These initial reviews will be made available to authors, and they will be able to communicate with you directly through HotCRP in order to debug simple issues that arise. The identity of reviewers remains anonymous (“Reviewer A”, “Reviewer B”, etc.).
Phase 2 Deadline
In this phase, reviewers will go through the Step-by-Step Instructions of each artifact and submit full, complete reviews, extending and expanding upon the Phase 1 reviews as appropriate. As before, these reviews will be made available to authors, and they will be able to communicate with you directly through HotCRP.
Phase 3 Deadline
The majority of artifact evaluations are expected to be completely done after the previous two phases, requiring no additional back-and-forth with the authors. This third phase is for artifacts which posed no problems during Phase 1 (Getting Started) but had issues during Phase 2 (Step-by-Step Instructions). This phase allows such artifacts to be discussed with authors in case there are simple issues that can be addressed that enable the artifact to be evaluated successfully.
See SIGPLAN’s Empirical Evaluation Guidelines for some methodologies to consider during evaluation.
Frequently Asked Questions
This section contains questions frequently asked by authors, and our answers. Do not hesitate to contact us if you have further questions that are not directly answered here.
Q: My artifact requires special hardware (e.g., access to a cluster, GPU, CPU extensions such as TSX or SGX, etc). Can I still submit it for the AEC?
In this case, you may have to provide access to a machine (or machines) with the required hardware resources, and facilitate reviewer access to your hardware.
Q: My artifact uses huge amounts of data and/or my evaluation took a long time to complete (e.g., days or weeks or more). Can I still submit it for the AEC?
Each artifact should be complete to the extent possible and include everything needed to replicate all the experiments in full, together with instructions about how to do it.
Members of the AEC will spend roughly 8 hours per artifact. During this time, each AEC member will check the completeness of the artifact with regard to the results reported in the paper. Including scaled-down versions of the full experiments, or instructions for how to run them, is highly encouraged to assist AEC members in their task (together with a brief discussion of how the scaled-down experiments are representative of the full experiments).
For datasets that are very large (e.g., hundreds of GB or more), authors can submit a subset of the data to be evaluated by the AEC, and make the full datasets available to enable future research.