History of VSComp

The Verified Software Competition has its roots in a broader international effort to make formal verification practical. This page traces the competition from its origins in the Verified Software Initiative through each edition, describing how the format evolved, what each year contributed, and how VSComp fits into the wider landscape of the verified software movement. For edition-specific details, see the years index; for background on the competition itself, see the about page.

The Verified Software Initiative

The story of VSComp begins before the competition itself. In the mid-2000s, a coalition of researchers from institutions around the world came together under the banner of the Verified Software Initiative — an ambitious, long-term project aimed at developing the scientific foundations, tools, and practical methodology required to produce software that is demonstrably correct. The initiative recognised a fundamental gap: while formal verification had made substantial theoretical progress, the tools and techniques were not yet routinely applied to real software, and there was no systematic way to evaluate how well they worked.

One response to this gap was the creation of a competition. The idea was simple in outline: publish a set of clearly specified programming tasks, invite participants to prove their implementations correct using whatever verification tools they preferred, and compare the results. This would generate concrete evidence about the state of the art — not in the abstract, but through the practical exercise of making proofs work on real code.

VSTTE and the Competition Format

The Verified Software: Theories, Tools, Experiments (VSTTE) conference series provided the natural institutional home for this effort. VSTTE brought together researchers working on program verification, and the competition gave that community a shared, concrete set of problems to discuss. Several editions of VSComp were co-located with VSTTE events, and competition results were presented and debated in workshop sessions.

The relationship between the competition and the conference shaped the format in important ways. Problems were designed to reflect the research themes of the corresponding VSTTE meeting — if a conference emphasised concurrency verification, the competition was likely to include a concurrency problem. This alignment made the competition directly relevant to ongoing research conversations and ensured that the results contributed to the field rather than existing in isolation.

The proceedings of VSTTE conferences, many of which appear in the Springer Lecture Notes in Computer Science series, contain reports from several VSComp editions. The SpringerLink platform hosts these proceedings and provides access to the published competition reports for researchers with institutional subscriptions. The publications page lists the specific reports associated with each documented edition.

Editions at a Glance

The following timeline summarises each edition of the competition, highlighting the key developments and contributions of each year. Detailed information about problems, solutions, and format specifics is available on the individual year hub pages.

2010

The Inaugural Edition

The first Verified Software Competition launched as a pilot, testing whether the format was viable. Five problems were published, spanning assertional verification (sorting, array partitioning) and data-structure reasoning (binary search trees, linked lists). The submission window was relatively compact, and the evaluation criteria were broad. Participation was encouraging: entries arrived from multiple research groups using a variety of tools, including Dafny, Frama-C, Coq, and Why3. The pilot demonstrated that the concept worked and that the community had an appetite for this kind of exercise.
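To give a concrete sense of what such a task involved, the sketch below shows a small verified method in Dafny, one of the tools used in the pilot. It is an illustration in the assertional style of the early problem sets, not one of the actual 2010 problems: the contract specifies that the method returns the maximum of a non-empty array, and the loop invariants allow the verifier to discharge the proof automatically.

    // Illustrative only: a contract-and-invariant exercise in the style
    // of the early VSComp problems, not an actual competition task.
    method MaxElement(a: array<int>) returns (m: int)
      requires a.Length > 0
      // Every element is bounded by the result...
      ensures forall i :: 0 <= i < a.Length ==> a[i] <= m
      // ...and the result actually occurs in the array.
      ensures exists i :: 0 <= i < a.Length && a[i] == m
    {
      m := a[0];
      var k := 1;
      while k < a.Length
        // The invariants say m is the maximum of the prefix a[0..k],
        // which is exactly what the verifier needs at loop exit.
        invariant 1 <= k <= a.Length
        invariant forall i :: 0 <= i < k ==> a[i] <= m
        invariant exists i :: 0 <= i < k && a[i] == m
      {
        if a[k] > m { m := a[k]; }
        k := k + 1;
      }
    }

A competition submission would pair a program like this with a written explanation of the specification and proof strategy, which is what the evaluators assessed.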

2011

Refining the Format

Building on the pilot, the second edition refined the submission process and expanded the problem set to test a broader range of verification techniques. The organisers introduced more explicit guidance on what a submission should contain, responding to lessons from the first year where some entries lacked sufficient documentation for the evaluators to reproduce the proofs. The 2011 edition also saw wider participation, with submissions from institutions that had not been involved in the pilot.

2012

Data Structures and Concurrency

The third edition introduced problems with greater emphasis on data-structure invariants and, for the first time, concurrent reasoning. This expansion tested verification tools in domains where they had historically been weaker, and the results were instructive. Several tools that handled sequential verification well struggled with concurrency challenges, while others demonstrated emerging capabilities in this area. The edition contributed to a growing body of evidence about where the field's practical capabilities lay.

2014

Maturity and Depth

After a year without a competition, the 2014 edition returned with a more substantial format. Problems were accompanied by detailed specifications, and the evaluation criteria were made more explicit, with the three dimensions of correctness, completeness, and clarity formalised as the assessment framework. The edition also produced one of the more substantial published reports, including a comparative analysis of solution approaches and a discussion of specification ambiguity. This edition is often cited as a turning point in the competition's maturity.

2015

Completeness and Documentation

The 2015 edition placed increased emphasis on completeness and on the documentation accompanying each submission. The organisers had observed that earlier editions sometimes received solutions that proved interesting partial properties but left significant aspects of the specification unaddressed. While partial solutions remained welcome, the evaluation in 2015 weighted completeness more heavily. The edition also introduced more structured expectations for the written explanation of the verification strategy, reflecting the community's recognition that a good proof should be readable, not just checkable.

2017

The Most Recent Edition

The 2017 edition tested concurrent reasoning and refinement techniques alongside traditional assertional verification. The problem set was designed to push the boundaries of what the tools of the day could handle, and the results provided a useful snapshot of the field at that point. The edition report includes a retrospective on the competition series as a whole, discussing trends in tool usage, participation patterns, and the role of VSComp in the broader verified software landscape.

VSComp in the Verified Software Landscape

VSComp is not the only competition or benchmarking effort in the formal-methods world. Other initiatives, such as SV-COMP (for software verification tools), VerifyThis (for on-site verification challenges), and the RERS challenge (for reactive systems), address different aspects of the field. What distinguishes VSComp is its emphasis on complete, documented proofs of program correctness against formal specifications, with evaluation by expert committee rather than automated scoring.

Together, these efforts form an ecosystem of evaluation mechanisms that serve different parts of the community. A tool developer might enter SV-COMP to benchmark their tool's performance on a large set of verification tasks, while a researcher might enter VSComp to demonstrate the expressiveness and clarity of their proof methodology on a smaller number of carefully specified problems. The two are complementary rather than competitive.

The Competition's Legacy

The problems and solutions produced by VSComp have had a life beyond the competition itself. Verification tool developers have adopted competition problems as benchmarks. Instructors have used them as teaching exercises. Researchers have cited them in papers as evidence of what current tools can and cannot do. This secondary use was not the original intent, but it represents a significant contribution: a curated, publicly available set of verification challenges with documented solutions.

The about page describes the community and institutional context in more detail. For the publications associated with the competition, see the publications page. And for direct access to problems and solutions from each edition, the years index is the best starting point.
