VSComp 2015

The 2015 edition of the Verified Software Competition continued the format established in previous years while shifting the problem selection toward more practical, systems-level verification challenges. Five problems addressed systems-level verification, text processing, amortised complexity, and crash consistency. This page describes the domains covered, the submission format, the complete problem listing with links to the problems catalogue and solutions archive, available artifacts, and errata.


Practical Verification at Scale

The 2015 edition represented a deliberate pivot toward realism. Previous editions had tested fundamental verification skills — sorting algorithms, search trees, concurrent counters — and demonstrated that modern tools could handle them. The question for 2015 was different: could those same tools tackle the kind of messy, stateful, systems-level code that working software engineers encounter in practice?

The answer, as reflected in the problem set, was a qualified yes. Problems drew on scenarios that would be familiar to any systems programmer: a text editor buffer backed by a gap-buffer data structure, components of a DNS resolver, a log-structured storage engine with crash-consistency requirements, and a string-matching algorithm requiring precise index arithmetic. These are not textbook exercises. They involve irregular control flow, complex internal invariants, and interactions between multiple subsystems.

The competition continued to welcome solutions built with any verification tool or proof assistant. Submissions were evaluated on the same three-pillar model — correctness, completeness, and clarity — described on the rules page. Organisers are listed on the year page when available.

What the 2015 Edition Covered

The five problems spanned several distinct verification domains. Three problems (P1-2015, P2-2015, and P4-2015) fell under systems verification, each targeting a different flavour of stateful, systems-level code. The text editor buffer (P1-2015) tested gap-buffer invariants and cursor semantics. The DNS resolver components (P2-2015) required reasoning about message-level protocols. And the log-structured storage engine (P4-2015) introduced crash consistency as a verification target — a property that is notoriously difficult to formalise and prove.

The remaining two problems explored different territory. The amortised-complexity queue (P3-2015) required participants to prove not just functional correctness but also a quantitative bound on the amortised cost of operations — a blend of program verification and complexity analysis that few tools handle natively. The string-matching problem (P5-2015) was the most classical in flavour, requiring careful loop invariants and completeness arguments, but applied to a realistic text-processing scenario.

This mix was intentional. By combining systems-level problems with algorithmic challenges and complexity reasoning, the 2015 edition tested the breadth of a participant's verification toolkit. The problems catalogue provides cross-year filtering to compare domains across editions.

How Submissions Worked

The 2015 edition followed the established submission format. Participants were given a defined submission window during which they could prepare and submit their formally verified solutions. Any verification tool or proof assistant was accepted, and submissions were expected to include annotated source code, formal specifications, proof artifacts, and a written explanation of the verification strategy.

The emphasis on the strategy document, introduced in the 2014 edition, continued. Reviewers found that the written explanation was often as valuable as the proof itself — it helped distinguish between a proof that was mechanically valid but fragile (relying on tool-specific automation that might not generalise) and one that reflected a deep understanding of the verification challenge.

Partial solutions remained acceptable. A submission addressing three of five problems, or proving correctness without the complexity bound for the queue problem, received credit proportional to its demonstrated coverage. The full evaluation criteria are documented on the rules page.

2015 Problem Set

The 2015 edition published five problems. Each card below summarises the challenge; detailed descriptions and the full specifications are available in the problems catalogue.

P1-2015: Text Editor Buffer

Verify the core operations of a gap-buffer–based text editor: insertion, deletion, and cursor movement. The challenge tests whether verification tools can handle realistic stateful data structures with complex internal invariants.

Systems verification · hard
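
To make the card's "complex internal invariants" concrete, here is a minimal gap-buffer sketch in Python. It is purely illustrative and not the competition's reference implementation; the GapBuffer class, its field names, and the runtime assertions are hypothetical stand-ins for the machine-checked invariants a P1-2015 solution would have to maintain.

```python
# Hypothetical gap-buffer sketch for illustration only. The invariant a
# formal proof would maintain: 0 <= gap_start <= gap_end <= len(buf), and
# the logical text is always buf[:gap_start] + buf[gap_end:]. Cells inside
# the gap hold stale garbage, as in a real gap buffer.

class GapBuffer:
    def __init__(self, capacity: int = 16):
        self.buf = [""] * capacity     # gap cells may hold stale data
        self.gap_start = 0             # cursor position == gap_start
        self.gap_end = capacity        # first live cell after the gap

    def _check(self) -> None:
        # Runtime stand-in for the proof obligation.
        assert 0 <= self.gap_start <= self.gap_end <= len(self.buf)

    def text(self) -> str:
        return "".join(self.buf[:self.gap_start] + self.buf[self.gap_end:])

    def move_cursor(self, pos: int) -> None:
        assert 0 <= pos <= len(self.text())
        while self.gap_start > pos:           # slide the gap left
            self.gap_start -= 1
            self.gap_end -= 1
            self.buf[self.gap_end] = self.buf[self.gap_start]
        while self.gap_start < pos:           # slide the gap right
            self.buf[self.gap_start] = self.buf[self.gap_end]
            self.gap_start += 1
            self.gap_end += 1
        self._check()

    def insert(self, ch: str) -> None:
        if self.gap_start == self.gap_end:    # gap exhausted: double capacity
            tail = self.buf[self.gap_end:]
            self.buf = self.buf[:self.gap_start] + [""] * len(self.buf) + tail
            self.gap_end = len(self.buf) - len(tail)
        self.buf[self.gap_start] = ch
        self.gap_start += 1
        self._check()

    def delete(self) -> None:
        if self.gap_start > 0:                # delete char before the cursor
            self.gap_start -= 1
        self._check()

# Smoke test: type four characters, jump to the start, insert one more.
b = GapBuffer(4)
for ch in "edit":
    b.insert(ch)
b.move_cursor(0)
b.insert("x")
assert b.text() == "xedit"
```
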
P2-2015: DNS Server Components

Verify key components of a simplified DNS resolver, including query parsing, cache lookup, and response construction. Participants must specify the expected behaviour at the network-message level.

Systems verification · hard
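
The message-level flavour of the specification can be sketched as a round-trip property. The wire format below is a deliberately toy one invented for illustration, a length-prefixed name followed by a two-byte type, and is not the DNS encoding from the problem statement; Query, serialize, and parse are hypothetical names.

```python
# Toy message format (hypothetical, not the DNS wire format). It illustrates
# the kind of round-trip lemma a P2-2015 specification might state:
# parse(serialize(q)) == q for every well-formed query q.

from dataclasses import dataclass

@dataclass(frozen=True)
class Query:
    name: str        # e.g. "example.org"
    qtype: int       # e.g. 1 for an A record

def serialize(q: Query) -> bytes:
    name = q.name.encode("ascii")
    assert len(name) < 256
    # 1-byte length prefix, the name, then a 2-byte big-endian type.
    return bytes([len(name)]) + name + q.qtype.to_bytes(2, "big")

def parse(msg: bytes) -> Query:
    n = msg[0]
    assert len(msg) == 1 + n + 2, "malformed message"
    name = msg[1:1 + n].decode("ascii")
    qtype = int.from_bytes(msg[1 + n:], "big")
    return Query(name, qtype)

# Executable stand-in for the round-trip lemma.
q = Query("example.org", 1)
assert parse(serialize(q)) == q
```
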
P3-2015: Queue with Amortised Complexity

Prove both functional correctness and amortised O(1) complexity for a two-stack queue implementation. Requires a potential-function argument or equivalent amortised analysis technique formalised in the proof assistant.

Amortised complexity · hard
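
As an executable sketch of the required amortised argument, the snippet below implements the classic two-stack queue and checks the textbook potential function phi = 2 * len(back) at runtime. It is hypothetical, not a submitted solution; a P3-2015 proof would discharge the same inequality, actual cost plus the change in potential bounded by a constant, for all inputs rather than for a sampled workload.

```python
# Two-stack queue with potential function phi = 2 * len(back). The runtime
# assertion checks the amortised inequality
#   actual_cost + phi_after - phi_before <= 3
# for every operation, which is what "amortised O(1)" means here.

class TwoStackQueue:
    def __init__(self):
        self.back = []     # enqueue pushes here
        self.front = []    # dequeue pops here (holds elements reversed)

    def phi(self) -> int:
        return 2 * len(self.back)

    def enqueue(self, x) -> int:
        """Enqueue x; returns the actual cost in stack operations."""
        self.back.append(x)
        return 1

    def dequeue(self):
        """Dequeue the oldest element; returns (value, actual cost)."""
        cost = 0
        if not self.front:
            while self.back:               # reverse back into front
                self.front.append(self.back.pop())
                cost += 2                  # one pop plus one push
        return self.front.pop(), cost + 1

# Check the amortised bound over a workload.
q = TwoStackQueue()
for i in range(100):
    before = q.phi()
    assert q.enqueue(i) + q.phi() - before <= 3
for expected in range(100):
    before = q.phi()
    value, cost = q.dequeue()
    assert value == expected and cost + q.phi() - before <= 3
```
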
P4-2015: Log-Structured Storage

Verify append and compaction operations of a simplified log-structured storage engine. Participants must show crash consistency: the data structure remains valid even if an operation is interrupted.

Systems verification · medium
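
One simple way to see what crash consistency demands is to enumerate crashes exhaustively in a toy model. The sketch below uses a hypothetical record format invented for illustration (length prefix, payload, checksum byte), not the layout from the problem statement: it simulates a crash after every prefix of an appended record and checks that recovery always yields either the old records or the old records plus the complete new one.

```python
# Toy crash-consistency check (hypothetical crash model and record format).
# Recovery scans the log and keeps every complete, checksummed record,
# discarding any torn tail left by an interrupted append.

def encode(payload: bytes) -> bytes:
    assert len(payload) < 256
    checksum = sum(payload) % 256
    return bytes([len(payload)]) + payload + bytes([checksum])

def recover(log: bytes) -> list[bytes]:
    records, i = [], 0
    while i < len(log):
        n = log[i]
        if i + 1 + n + 1 > len(log):
            break                            # torn write: discard the tail
        payload = log[i + 1:i + 1 + n]
        if log[i + 1 + n] != sum(payload) % 256:
            break                            # corrupt record: stop here
        records.append(payload)
        i += 1 + n + 1
    return records

old_log = encode(b"alpha") + encode(b"beta")
new = encode(b"gamma")
for cut in range(len(new) + 1):              # crash after `cut` bytes written
    recovered = recover(old_log + new[:cut])
    assert recovered in ([b"alpha", b"beta"],
                         [b"alpha", b"beta", b"gamma"])
```
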
P5-2015: String Matching

Prove that a string-matching algorithm correctly identifies all occurrences of a pattern within a text. Requires careful index arithmetic and completeness arguments.

Text processing · medium
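
The two proof obligations for P5-2015, soundness (every reported index is a real match) and completeness (no occurrence is missed), can be stated as an executable specification. The sketch below is illustrative only; occurrences and spec are hypothetical names, and checking a handful of inputs stands in for a mechanised proof over all inputs.

```python
# Character-by-character matcher checked against a slice-based specification.
# The inner loop is where the "careful index arithmetic" lives.

def occurrences(text: str, pattern: str) -> list[int]:
    m, n = len(pattern), len(text)
    result = []
    i = 0
    # Invariant at the top of the loop: `result` contains exactly the
    # match positions strictly below i, in increasing order.
    while i + m <= n:
        j = 0
        while j < m and text[i + j] == pattern[j]:
            j += 1
        if j == m:                  # full match at position i
            result.append(i)
        i += 1
    return result

def spec(text: str, pattern: str) -> list[int]:
    # Specification: i is an occurrence iff the slice equals the pattern.
    return [i for i in range(len(text) - len(pattern) + 1)
            if text[i:i + len(pattern)] == pattern]

for text, pattern in [("abababa", "aba"), ("aaaa", "aa"), ("abc", "xy")]:
    assert occurrences(text, pattern) == spec(text, pattern)
```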

Submitted Solutions

Solutions for the 2015 problems are collected in the solutions archive. The systems-level character of the problem set influenced the distribution of tools represented in submissions. Problems involving heap-allocated data structures and crash consistency attracted entries using separation logic frameworks and specialised crash-safety logics, while the string-matching and queue problems saw a wider range of general-purpose verifiers.

Several submissions to the text editor buffer problem (P1-2015) are particularly instructive as case studies in how different tools handle mutable, pointer-rich data structures. The solutions page provides guidance on formats, reproducibility considerations, and how to cite individual submissions.

Competition Artifacts

Artifacts associated with the 2015 edition include the problem statements and any supplementary materials provided to participants. Where original documents remain available, they are listed below.

Problem Statements & Supplementary Materials

The 2015 problem statements were distributed through the competition's established channels. Each problem included a precise specification of the expected behaviour, input constraints, and the properties to be verified. Some problems included starter code or reference implementations to clarify the intended data-structure layout.

Some historical materials are provided as-is; where originals are unavailable, the problem summaries on this page and in the problems catalogue describe the format and requirements.

Solution Bundles

Selected solution submissions have been collected and are available through the solutions archive. These bundles typically include annotated source code, proof scripts, and the participant's verification strategy document.

Errata and Clarifications

The 2015 edition generated several clarification requests during the submission window, particularly around the systems-level problems. For the text editor buffer (P1-2015), participants asked whether the gap buffer was required to support variable-length gaps or only a fixed initial gap size. The organisers clarified that either approach was acceptable, provided the specification accurately reflected the chosen design.

The DNS resolver problem (P2-2015) prompted questions about the scope of the specification: did participants need to verify the full resolver pipeline, or only the specified components? The organisers confirmed that only the listed components — query parsing, cache lookup, and response construction — were in scope.

The crash-consistency requirement for the log-structured storage problem (P4-2015) was the most discussed aspect of the 2015 edition. Participants used a variety of crash models, and the organisers accepted any reasonable formalisation provided it was clearly stated. No corrections to the problem statements were required. For context on how other editions handled errata, see the 2014 and 2017 year hubs.

The Verification Landscape in 2015

By 2015, formal verification was beginning to appear in industrial contexts with increasing frequency. Major software companies had published accounts of using formal methods on production systems — from verified file systems to formally analysed network protocols. This industrial momentum influenced the 2015 problem selection: the challenges were designed to be recognisable to practitioners, not just academic researchers.

Tool ecosystems had also matured. Proof assistants offered better automation, tighter integration with programming languages, and more robust standard libraries for common verification patterns. The result was that problems which would have been prohibitively labour-intensive in 2010 were now within reach of a skilled participant working within a single submission window.

The history page provides broader context on how the competition relates to the formal methods community's trajectory. The publications page lists academic work discussing the 2015 edition and its outcomes.

At a Glance

Year: 2015
Problems: 5
Domains: Systems verification, text processing, amortised complexity, crash consistency
Focus: Practical, systems-level verification challenges