VSComp 2010
The inaugural Verified Software Competition took place in 2010, organised in conjunction with the VSTTE 2010 conference and the FoVeOOS workshop on Formal Verification of Object-Oriented Software. Five challenge problems were published, spanning fundamental domains of program verification: sorting, searching, maximum computation, linked data structures, and amortised complexity. This page covers what the 2010 edition involved, lists every problem with links to detailed descriptions and submitted solutions, and notes the artifacts and errata associated with the competition.

The First Verified Software Competition
VSComp 2010 was, to the best of available records, the first online competition dedicated entirely to formal software verification. The idea was straightforward: publish a curated set of challenge problems, give participants a defined submission window, and evaluate the resulting solutions on correctness, completeness, and clarity. Any tool or proof assistant was permitted — the competition was deliberately tool-agnostic.
The connection to the VSTTE (Verified Software: Theories, Tools, Experiments) conference series provided an academic anchor. VSTTE 2010 brought together researchers working on the practical side of formal methods, and the competition served as a natural complement: a concrete arena where the techniques discussed in conference papers could be applied to a shared problem set. The FoVeOOS (Formal Verification of Object-Oriented Software) workshop further emphasised object-oriented specification and verification, influencing the style of several problems.
Participants worked individually or in small teams. Submissions were expected to include annotated source code, a formal specification against which correctness was asserted, and machine-checkable proofs produced by the tool of choice. The organisers evaluated entries on three axes: whether the proof actually discharged all verification conditions, whether the specification captured the full intended behaviour, and whether the submission was clearly documented. For a detailed breakdown of how these criteria developed over subsequent years, see the rules page.
What the 2010 Edition Covered
The five problems fell broadly into two domains: assertional verification of array-based algorithms and reasoning about heap-allocated data structures. This split was intentional. Array algorithms — sorting, searching, computing a maximum — exercise classic loop-invariant reasoning. They require participants to express preconditions, postconditions, and intermediate assertions that a verifier can check step by step through a loop body.
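As an illustration of this loop-invariant style, here is a minimal Python sketch of the maximum-index problem, with the invariant and contracts written as runtime assertions rather than tool-specific annotations. The function name and assertion phrasing are ours, not from the competition materials:

```python
def max_index(a):
    """Return an index m such that a[m] >= a[i] for every valid i."""
    assert len(a) > 0  # precondition: the array is non-empty
    m = 0
    for i in range(1, len(a)):
        # loop invariant: a[m] is a maximum of the prefix a[0..i-1]
        assert all(a[m] >= a[j] for j in range(i))
        if a[i] > a[m]:
            m = i
    # postcondition: the element at the returned index dominates all elements
    assert all(a[m] >= x for x in a)
    return m
```

In a verification tool the invariant would be stated once and proved inductively over the loop, rather than re-checked at runtime as here.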
The data-structure problems, by contrast, demanded reasoning about pointers, aliasing, and structural invariants. Verifying a linked list reversal or an amortised queue is qualitatively different from verifying a sorting routine: the prover must track the shape of heap memory, not just the values stored in contiguous locations. Separation logic and related frameworks were common choices among participants tackling these problems.
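To make the contrast concrete, the following Python sketch reverses a singly linked list in place while checking a shape invariant at each step: the already-reversed prefix, read backwards, followed by the unprocessed suffix always equals the original sequence. The `Node` class, helper, and invariant phrasing are ours and purely illustrative:

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val, self.nxt = val, nxt

def to_list(n):
    """Read a linked list into a Python list (for checking invariants)."""
    out = []
    while n:
        out.append(n.val)
        n = n.nxt
    return out

def reverse(head):
    original = to_list(head)
    prev, cur = None, head
    while cur:
        # shape invariant: reversed prefix + remaining suffix == original list
        assert to_list(prev)[::-1] + to_list(cur) == original
        cur.nxt, prev, cur = prev, cur, cur.nxt
    # postcondition: the result is the original sequence, reversed
    assert to_list(prev) == original[::-1]
    return prev
```

A separation-logic proof would express the same invariant as a heap assertion over two disjoint list segments, which is exactly the kind of aliasing-aware reasoning these problems test.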
Together, the problem set served as a baseline: future editions could measure how far verification tools had advanced by comparing the difficulty and scope of their challenges against this initial set. A full cross-year comparison is available on the problems catalogue.
How Submissions Worked
The 2010 competition operated within a defined submission window during which participants could prepare and submit their solutions. The exact duration and logistics were communicated through the competition channels associated with VSTTE 2010. Any verification tool or proof assistant was acceptable, provided the resulting proofs were machine-checkable.
Each submission typically comprised three components: the annotated source code (with specifications embedded as assertions, contracts, or annotations in the tool's native syntax), a prose explanation of the verification strategy, and the proof artifacts themselves — output logs, checked proof scripts, or equivalent evidence that the tool had discharged all verification conditions.
Evaluation followed the three-pillar model outlined on the rules page: correctness (do the proofs hold?), completeness (does the specification cover the full behaviour?), and clarity (is the submission understandable to a knowledgeable reviewer?).
2010 Problem Set
The inaugural edition included five problems. Each card below summarises the challenge; follow the link to the full problem description in the problems catalogue.
Maximum computation: Given an integer array, compute and verify the index of the maximum element. Requires a loop invariant that tracks the running maximum and proves that the element at the returned index is no less than every element in the array.
Sorting: Prove that a sorting algorithm produces an output that is both a permutation of the input and in non-decreasing order. Participants must formalise the permutation property and the ordering postcondition.
Binary search: Verify a binary search implementation on a sorted array, showing that it returns a correct index when the element is present and a sentinel otherwise. Requires precise reasoning about integer overflow and index boundaries.
Linked list operations: Verify core operations — insertion, deletion, and traversal — on a singly linked list. The challenge tests whether participants can specify and preserve structural invariants across pointer manipulations.
Amortised queue: Prove functional correctness and amortised complexity for a two-stack queue. Requires a potential-function argument or equivalent to show that enqueue and dequeue run in amortised constant time.
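The two-stack queue admits a compact sketch. In the Python version below (names and comments ours, not from the problem statement), taking the potential to be twice the size of the back stack makes each enqueue prepay for the eventual transfer, so both operations are amortised O(1) even though an individual dequeue may move many elements:

```python
class TwoStackQueue:
    """FIFO queue built from two LIFO stacks (Python lists)."""

    def __init__(self):
        self.back = []   # push side: newest elements on top
        self.front = []  # pop side: oldest elements on top (reversed order)

    def enqueue(self, x):
        # O(1) worst case; deposits potential to pay for a later transfer
        self.back.append(x)

    def dequeue(self):
        if not self.front:
            # transfer: each element is moved from back to front at most
            # once in its lifetime, so the cost amortises to O(1)
            while self.back:
                self.front.append(self.back.pop())
        if not self.front:
            raise IndexError("dequeue from empty queue")
        return self.front.pop()
```

A verified solution would additionally prove the functional contract: the sequence of dequeued values equals the sequence of enqueued values, in order.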
Submitted Solutions
Solutions for the 2010 problems are collected in the solutions archive. Because this was the first edition, the range of tools represented in submissions was notably broad — from mature platforms like Boogie/Z3 to emerging frameworks that had not yet been tested against a shared benchmark. Reviewing the 2010 solutions offers a useful snapshot of the state of verification tooling at the time.
Where available, solution entries include the annotated source code, the specification used, and notes on the verification strategy. Some entries also include timing data or proof-size metrics, though reporting conventions varied. The solutions page provides guidance on how to cite individual submissions.
Competition Artifacts
Artifacts associated with the 2010 edition include the original problem statements and any supplementary materials distributed to participants. Where these documents remain available, they are listed below.
Problem Statements & Supplementary Materials
The original problem descriptions for VSComp 2010 were distributed through the VSTTE 2010 conference channels. These documents specified each challenge in precise formal terms, including the expected input/output behaviour and the properties to be verified. Where supplementary materials such as starter code or reference implementations were provided, they accompanied the problem statements.
Some historical materials are provided as-is; where a document is no longer available, the problem summaries on this page and in the problems catalogue describe its format and requirements.
Errata and Clarifications
As with any competition, minor clarifications were issued during the 2010 submission window. These typically addressed edge cases in problem specifications — for instance, whether the maximum computation problem (P1-2010) should handle empty arrays, or how the binary search problem (P3-2010) should behave when multiple elements match the search key.
No significant corrections to the problem statements were required after the competition concluded. The problem set has remained stable and continues to serve as a reference baseline for subsequent editions. Readers interested in how the format and evaluation criteria evolved can compare this edition with the later 2014, 2015, and 2017 hubs.
The Broader Landscape in 2010
The formal methods community in 2010 was at an inflection point. SMT solvers had recently achieved dramatic performance improvements, making automated verification of moderately complex programs feasible for the first time. Tools such as Dafny, Why3, and KeY were gaining traction beyond their originating research groups, and there was growing interest in whether these tools could handle realistic, non-trivial code.
VSComp 2010 provided a structured venue for this question. Rather than relying on individual case studies or benchmark suites, the competition offered a common set of problems against which different tools and techniques could be compared on equal footing. The connection to FoVeOOS added a particular emphasis on object-oriented verification — a domain where specification and reasoning about aliasing remain active research topics.
For a broader view of how the competition fits into the history of formal methods events, see the history page. The publications page lists academic papers and reports that discuss the competition and its outcomes.
Year: 2010
Problems: 5
Domains: Sorting, binary search, maximum computation, linked data structures, amortised queues
Context: Held with VSTTE 2010 and FoVeOOS
Full problem catalogue · Solutions archive · Competition rules · All competition years