Frequently Asked Questions
This page collects the questions we hear most often about the Verified Software Competition. Whether you are considering entering for the first time or looking for specifics about how judging works, the answers here are drawn from how the competition has actually operated across its editions. Topics covered include eligibility, tool choice, problem selection, evaluation criteria, preparation advice, and citation guidance. For the formal framework, see the rules; for challenge details, browse the problems catalogue.

What is the Verified Software Competition?
VSComp is an international competition in which participants produce machine-checked proofs that software meets a formal specification. The organisers publish a set of problems — each one a programming task paired with a precise description of the properties the solution must satisfy — and participants submit implementations together with mechanically verifiable proofs of correctness. It is not a speed-coding contest. The emphasis is on the quality, completeness, and clarity of the verification, not on how quickly you can write code.
The competition grew out of the broader formal verification community and has been associated with the VSTTE conference series. You can read more about its origins on the years index.
Who can participate?
Anyone. There are no restrictions on affiliation, nationality, or career stage. We have had entries from individual PhD students, large research groups, and industry practitioners working with verification tools in their day jobs. Teams of any size are welcome, though each submission is evaluated as a single entry. There is no entry fee.
That said, the problems assume familiarity with formal methods. They are not beginner exercises — participants are expected to know how to write specifications, construct loop invariants, and operate at least one verification tool competently. If you are new to the field, the formal verification overview is a reasonable starting point before attempting competition problems.
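To make that concrete, here is a minimal sketch of a specification with a loop invariant, written as C annotated with ACSL contracts in the style accepted by Frama-C (one of the ecosystems discussed below). The problem and the name max_array are our own illustration, not a competition challenge:

    #include <stddef.h>

    /* Illustrative only: return the maximum element of a non-empty array. */
    /*@ requires n > 0;
        requires \valid_read(a + (0 .. n-1));
        assigns \nothing;
        ensures \forall integer k; 0 <= k < n ==> \result >= a[k];
        ensures \exists integer k; 0 <= k < n && \result == a[k];
    */
    int max_array(const int *a, size_t n) {
        int m = a[0];
        /*@ loop invariant 1 <= i <= n;
            loop invariant \forall integer k; 0 <= k < i ==> m >= a[k];
            loop invariant \exists integer k; 0 <= k < i && m == a[k];
            loop assigns i, m;
            loop variant n - i;
        */
        for (size_t i = 1; i < n; i++) {
            if (a[i] > m)
                m = a[i];
        }
        return m;
    }

The requires/ensures contract is the specification; the loop invariants are what allow a tool to discharge the proof obligations mechanically (here, for instance, by running frama-c -wp on the file). Competition problems demand the same ingredients at a larger scale.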
What verification tools do participants use?
The competition does not prescribe a specific tool. Over the years, submissions have used Dafny, Why3, Frama-C with WP or Jessie, Coq, Isabelle/HOL, Lean, KeYmaera X, SPARK Ada, TLA+, Event-B, and several others. Some participants have even used combinations — for instance, generating verification conditions in one framework and discharging them in another.
The only hard requirement is that the proof must be mechanically checkable. Paper-only arguments, no matter how elegant, do not qualify. The organising committee must be able to reproduce the verification result by running your toolchain. Practical advice on packaging submissions for reproducibility appears on the solutions page.
Can I use my own programming language?
Yes, provided the language is supported by a verification tool that produces machine-checkable proofs. In practice, this means you are choosing a verification ecosystem rather than just a language. If you work in Dafny, you write Dafny. If you prefer Frama-C, you write annotated C. If you use Coq or Isabelle, your “implementation” is typically a functional program within the proof assistant's logic.
There is no list of approved languages. If your tool produces a proof artifact the reviewers can check, it is acceptable. We encourage participants to include clear setup instructions so the evaluation committee can install and run the tool without difficulty.
How are problems chosen?
The organising committee selects problems that test a range of verification techniques. A typical edition includes challenges spanning assertional verification (loop invariants over arrays), data-structure reasoning (trees, linked lists), concurrency or refinement, and sometimes algorithm correctness at a higher level of abstraction. The goal is breadth: a single edition should exercise different parts of the verification toolbox.
Problems are designed to be solvable within the submission window but nontrivial enough that a complete, clear proof represents genuine work. They are not trick questions. The full catalogue is browsable on the problems page.
Where can I find past problems?
The problems catalogue lists every challenge from every edition, organised by year and domain. Each entry includes a summary of what the problem asks, the domain it falls into (assertional verification, concurrency, data structures, etc.), and an editorial difficulty estimate. Where available, links to problem-set documents are provided directly on the relevant year page.
How should I prepare for a future edition?
The single best preparation is to attempt past problems with the tool you intend to use. Pick two or three challenges from the catalogue, work through them fully, and pay attention to the parts that are harder than expected — encoding the specification precisely, finding the right invariants, structuring the proof so the tool can discharge obligations efficiently.
Beyond that, read solutions from previous years. The solutions archive describes approaches other participants have taken, and studying those will give you a sense of what the evaluation committee considers clear and complete. Familiarise yourself with the rules so you know what a submission should contain.
What counts as a valid solution?
A valid solution includes three things: an implementation that addresses the problem, a formal specification of the properties it claims to satisfy, and a machine-checkable proof that the implementation meets the specification. Documentation explaining the verification strategy is strongly encouraged, and later editions have made it an explicit expectation.
Partial solutions are accepted. If you prove that a sorting routine always produces sorted output, but not that the output is a permutation of the input, that is a legitimate entry; it will simply score lower on completeness than a solution that proves both properties. The rules page describes the evaluation criteria in detail.
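As a sketch of what "partial" means in practice (our own illustration in ACSL-annotated C, not an official problem), the contract below specifies only sortedness; a complete solution would add a postcondition stating that the final array contents are a permutation of the initial ones:

    /* Partial specification: sortedness only. The permutation property
       (that the output is a rearrangement of the input) is omitted,
       which is exactly what would cost completeness points. */
    /*@ requires n >= 0;
        requires \valid(a + (0 .. n-1));
        assigns a[0 .. n-1];
        ensures \forall integer i, j; 0 <= i <= j < n ==> a[i] <= a[j];
    */
    void sort(int *a, int n);

Note that without the permutation clause, a function that simply overwrites the array with zeroes would satisfy this contract, which is why the committee weighs completeness.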
How is judging done?
The organising committee evaluates submissions along three dimensions: correctness, completeness, and clarity. Correctness asks whether the proof is mechanically valid and whether the specification faithfully captures the intended behaviour. Completeness asks whether the solution addresses the full scope of the problem or only a subset. Clarity asks whether the proof is structured so that a knowledgeable reader can follow the reasoning.
There is no single numerical score. The committee exercises expert judgement, informed by these criteria, to assess each entry. This reflects a deliberate design choice: verification solutions are difficult to rank on a linear scale because different problems and different tools involve fundamentally different techniques. The process is closer to peer review than to an automated leaderboard.
What happened in a particular year?
Each edition has its own hub page on the years index. Those pages describe the problems posed that year, the submission format, any edition-specific rules, and notes on the solutions received. The competition ran in 2010, 2011, 2012, 2014, 2015, and 2017, with each edition reflecting the state of the art in verification tooling at the time. If you are curious about how the competition evolved, the year pages tell that story chronologically.
How should I cite VSComp solutions or problems?
If you are referencing a specific problem or solution in a research paper, we suggest citing the relevant competition report where one exists. The solutions archive and the year pages note published reports and their venues. For problems, cite the problem-set document or the competition report for that edition.
A reasonable citation format includes the competition name (Verified Software Competition), the edition year, and the publication venue if applicable. For example: “Problem P3 from VSComp 2014, as published in [venue].” If no formal publication exists for a given edition, citing the competition website with the URL of the relevant year page is appropriate.
What is formal verification, exactly?
Formal verification is the practice of using mathematical methods to prove that a piece of software (or hardware) behaves according to a precise specification. Unlike testing, which checks behaviour for specific inputs, verification establishes correctness for all possible inputs within the specification's scope. The techniques involved include Hoare logic, separation logic, model checking, refinement calculi, and interactive theorem proving, among others.
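A small sketch (our own, using ACSL-annotated C as elsewhere on this page) shows the contrast. The contract below claims something about every input the precondition admits, which no finite test suite can establish; the requires clause also marks the specification's scope, here excluding INT_MIN, whose negation would overflow a C int:

    #include <limits.h>

    /* Illustrative only: absolute value with an explicit input scope. */
    /*@ requires x > INT_MIN;
        assigns \nothing;
        ensures \result >= 0;
        ensures \result == x || \result == -x;
    */
    int abs_int(int x) {
        return x < 0 ? -x : x;
    }

A test can confirm that abs_int(-5) returns 5; the verified contract covers every admissible x at once.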
VSComp focuses specifically on software verification: proving that programs meet their specifications. The formal verification page provides a more thorough introduction to the field, its history, and the tools commonly used. If you are coming from a software-engineering background rather than a formal-methods one, that page is designed to bridge the gap.
The competition rules describe the submission format, evaluation criteria, and timeline structure in full detail.
Browse the complete problem catalogue to see every challenge set across all editions.
The solutions archive shows how participants have approached these challenges with different tools and techniques.
Visit the years index for a chronological overview of every edition.