ARCH21: Editor's Preface

This volume contains the papers presented at the 8th International Workshop on Applied Verification of Continuous and Hybrid Systems (ARCH), as well as the results of the 5th edition of ARCH-COMP, a competition for the formal verification of continuous and hybrid systems. The workshop was held on July 9, 2021, as part of the 7th IFAC Conference on Analysis and Design of Hybrid Systems. Due to the coronavirus pandemic, the workshop was held via video conferencing. Previous editions of the ARCH workshop series were held in Berlin (2014), Seattle (2015), Vienna (2016), Pittsburgh (2017), Oxford (2018), and Montreal (2019); the 2020 edition was held online. The goal of the ARCH workshops is to bring together people from industry with researchers and tool developers interested in applying verification techniques to continuous and hybrid systems. The workshops are accompanied by a collaborative website (cps-vo.org/group/ARCH), which features a curated collection of benchmarks, disseminates results submitted by researchers and tool developers, and provides feedback from practitioners in the form of experience reports. The benchmark repository is intended to serve as a lasting and evolving resource for the research community.

The workshop received four submissions, all of which were accepted by the program committee. Each submission was reviewed by four program committee members, including at least one member from academia and one from industry.

In addition to the workshop papers, these proceedings present the results of the 5th edition of ARCH-COMP. ARCH-COMP is a friendly competition that was carried out online from April to July 2021. ARCH-COMP showcases the participating tools and serves as a testing ground to see which methods are particularly suitable for which types of problems. As a side effect, it aims to establish a consensus for comparing different software implementations in the context of verification, as such comparisons are routinely demanded by reviewers of scientific publications.

All participating tools were represented in the competition jury, headed by the organizers. In the problem phase of the competition, participants submitted problem instances, which were then approved by the jury by consensus. In most categories, participants submitted a code package, and the performance measurements were run centrally under the supervision of Taylor T. Johnson. Participants who were not able to submit executable code carried out the performance measurements themselves, as indicated in the reports. To further establish the trustworthiness of the results, the code with which the results were obtained is available on the ARCH website.

In this 5th edition of ARCH-COMP, 31 tools participated in the competition and 17 tools participated in the repeatability evaluation. The 2021 prize for the best result was awarded, according to a vote by the attending audience, to the tool KeYmaera X.

The problem descriptions and the results are provided in a report for each category, drafted by the category lead together with representatives of the participating tools. Due to the diversity of problems, ARCH-COMP does not provide any ranking of tools. Nonetheless, the presented results arguably provide the most complete assessment of tools for the safety verification of continuous and hybrid systems to date.


Goran Frehse, Matthias Althoff (Program Chairs)
Sergiy Bogomolov (Publicity Chair)
Taylor T. Johnson (Evaluation Chair)
July 9, 2021
Brussels, Belgium