Formal Methods

surfingcomplexity.blog

Introduction:

> Back in August, Murat Demirbas published a [blog post](https://muratbuffalo.blogspot.com/2024/08/linearizability-correctness-condition.html) about the [paper by Herlihy and Wing](https://dl.acm.org/doi/10.1145/78969.78972) that first introduced the concept of *linearizability*. When we move from sequential programs to concurrent ones, we need to [extend our concept of what "correct" means](https://surfingcomplexity.blog/2023/12/31/consistency/) to account for the fact that operations from different threads can overlap in time. Linearizability is the strongest consistency model for single-object systems, which means that it's the one that aligns closest to our intuitions. Other models are [weaker](https://surfingcomplexity.blog/2023/12/29/the-inherent-weirdness-of-system-behavior/) and, hence, will permit anomalies that violate human intuition about how systems should behave.
>
> Beyond introducing linearizability, one of the things that Herlihy and Wing do in this paper is provide an implementation of a linearizable queue whose correctness cannot be demonstrated using an approach known as *refinement mapping*. At the time the paper was published, it was believed that it was always possible to use refinement mapping to prove that one specification implemented another, and this paper motivated [Leslie Lamport](https://lamport.org/) and [Martín Abadi](https://research.google/people/abadi/) to propose the concept of *prophecy variables*.
>
> I have long been fascinated by the concept of prophecy variables, but when I learned about them, I still couldn't figure out how to use them to prove that the queue implementation in the Herlihy and Wing paper is linearizable. (I even [asked Leslie Lamport about it](https://www.youtube.com/watch?v=8nNJw-k8Ma0&t=15m) at the [2021 TLA+ conference](https://conf.tlapl.us/2021/).)
>
> Recently, Lamport published a book called [The Science of Concurrent Programs](https://lamport.azurewebsites.net/tla/science.pdf) that describes in detail how to use prophecy variables to do the refinement mapping for the queue in the Herlihy and Wing paper. Because the best way to learn something is to explain it, I wanted to write a blog post about this.
>
> In this post, I'm going to assume that readers have no prior knowledge about TLA+ or linearizability. What I want to do here is provide the reader with some intuition about the important concepts, enough to interest people in reading further. There's a lot of conceptual ground to cover: understanding prophecy variables and why they're needed for the queue implementation in the Herlihy and Wing paper requires an understanding of *refinement mapping*. Understanding refinement mapping requires understanding the state-machine model that [TLA+](https://lamport.azurewebsites.net/tla/tla.html) uses for modeling programs and systems. We'll also need to cover what *linearizability* actually is.
>
> We're going to start all of the way at the beginning: describing what it is that a program should do.
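To make the definition concrete, here is a brute-force sketch (my own encoding, not from the paper or the post) that checks whether a tiny concurrent queue history is linearizable by trying every sequential ordering of its operations:

```python
from itertools import permutations

# Each operation: (name, invoke_time, response_time, action, value).
# Times give the real-time order; for "deq", value is what was returned.
history = [
    ("A", 0, 3, "enq", 1),  # thread A enqueues 1
    ("B", 1, 2, "enq", 2),  # thread B enqueues 2, overlapping A
    ("C", 4, 5, "deq", 2),  # thread C dequeues and gets 2
]

def respects_real_time(order):
    # If x completed before y was invoked, x must precede y.
    pos = {op[0]: i for i, op in enumerate(order)}
    return all(not (x[2] < y[1] and pos[x[0]] > pos[y[0]])
               for x in order for y in order)

def legal_fifo(order):
    # Replay the candidate sequential order against a real FIFO queue.
    queue = []
    for _, _, _, action, value in order:
        if action == "enq":
            queue.append(value)
        elif not queue or queue.pop(0) != value:
            return False
    return True

def linearizable(hist):
    # Linearizable iff SOME ordering of the operations both respects
    # real time and is legal for a sequential queue.
    return any(respects_real_time(o) and legal_fifo(o)
               for o in permutations(hist))

print(linearizable(history))  # True: the order B, A, C explains the history
```

Real checkers avoid the factorial blow-up, but for small histories this is exactly the definition: find one sequential order that respects real time and is legal for a FIFO queue.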

https://browncs1951x.github.io/static/files/hitchhikersguide.pdf

A 209-page book on logical verification in [Lean](https://lean-lang.org/) 4 (the "new" version), available as a PDF. [Associated class: Brown University cs1951x](https://browncs1951x.github.io).

https://burtonqin.github.io/posts/2024/07/rustcheckers/

List of Rust static and dynamic analysis tools organized by type, with:

- Name
- Description
- IR they analyze (HIR, MIR, LLVM IR, etc.)
- Bug types
- Technology
- Maintenance (1-5 stars, whether they're frequently updated or dead)

surfingcomplexity.blog

Abstract:

> I've been reading Alex Petrov's [Database Internals](https://www.databass.dev/) to learn more about how databases are implemented. One of the topics covered in the book is a data structure known as the [B-tree](https://en.wikipedia.org/wiki/B-tree). Relational databases like Postgres, MySQL, and SQLite use B-trees as the data structure for storing the data on disk.
>
> I was reading along with the book as Petrov explained how B-trees work, and nodding my head. But there's a difference between feeling like you understand something (what the philosopher C. Thi Nguyen calls [clarity](https://philarchive.org/rec/NGUTSO-2)) and actually understanding it. So I decided to model B-trees using [TLA+](https://lamport.azurewebsites.net/tla/tla.html) as an exercise in understanding it better.
>
> TLA+ is traditionally used for modeling concurrent systems: Leslie Lamport originally developed it to help him reason about the correctness of concurrent algorithms. However, TLA+ works just fine for sequential algorithms as well, and I'm going to use it here to model sequential operations on B-trees.
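The kind of invariant such a model checks can be sketched in a few lines of Python (the representation and bounds here are my own simplification, not Petrov's or the post's): keys sorted, keys bounded by the parent's separators, nodes within occupancy limits, and internal nodes with one more child than keys.

```python
# A node is {"keys": [...], "children": [...]} (children empty for leaves).
MAX_KEYS = 3  # toy order; real B-trees derive this from the page size

def well_formed(node, lo=float("-inf"), hi=float("inf"), is_root=True):
    keys, children = node["keys"], node["children"]
    # Keys are sorted and within the range the parent's separators allow.
    if keys != sorted(keys) or any(not lo < k < hi for k in keys):
        return False
    # Occupancy: at most MAX_KEYS keys; non-root nodes stay half full.
    if len(keys) > MAX_KEYS or (not is_root and len(keys) < MAX_KEYS // 2):
        return False
    if children:
        # Internal node: one more child than keys, each child bounded
        # by the keys that surround it.
        if len(children) != len(keys) + 1:
            return False
        bounds = [lo] + keys + [hi]
        return all(well_formed(c, bounds[i], bounds[i + 1], False)
                   for i, c in enumerate(children))
    return True

tree = {"keys": [10], "children": [
    {"keys": [3, 7], "children": []},
    {"keys": [12, 15], "children": []},
]}
print(well_formed(tree))  # True
```

A TLA+ model states the same conditions declaratively and lets the model checker verify that every insert and delete preserves them.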

buttondown.email

Nondeterminism is used very often in formal specifications, and not just when multi-threading or explicit randomness are involved. This article provides examples, insight into why that is, and more.
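One reason nondeterminism shows up so often in specifications: a checker can simply explore every choice. A minimal sketch (my own toy example, not from the article): two processes each increment a shared counter in two nonatomic steps, and a breadth-first search over all interleavings finds the classic lost update.

```python
from collections import deque

# State: (pc1, tmp1, pc2, tmp2, shared). Each process runs
# tmp = shared; shared = tmp + 1 as two steps, so steps interleave.
def steps(state):
    pc1, t1, pc2, t2, x = state
    nxt = []
    if pc1 == 0: nxt.append((1, x, pc2, t2, x))         # p1 reads
    if pc1 == 1: nxt.append((2, t1, pc2, t2, t1 + 1))   # p1 writes
    if pc2 == 0: nxt.append((pc1, t1, 1, x, x))         # p2 reads
    if pc2 == 1: nxt.append((pc1, t1, 2, t2, t2 + 1))   # p2 writes
    return nxt

# Breadth-first exploration of every interleaving, as a model checker would.
seen, frontier, finals = set(), deque([(0, 0, 0, 0, 0)]), set()
while frontier:
    s = frontier.popleft()
    if s in seen:
        continue
    seen.add(s)
    if s[0] == 2 and s[2] == 2:   # both processes finished
        finals.add(s[4])
    frontier.extend(steps(s))

print(sorted(finals))  # [1, 2]: the lost update shows up as final value 1
```

The nondeterminism here is just "which process moves next", and exhaustively checking it is what distinguishes model checking from testing a few schedules.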

https://graydon2.dreamwidth.org/312681.html

> Recently [Boats wrote a blog post](https://without.boats/blog/references-are-like-jumps/) about Rust, mutable aliasing, and the sad story of local reasoning over many decades of computer science. I recommend that post and agree with its main points! Go read it! But I also thought I'd add a little more detail to an area it's less acutely focused on: formal methods / formal verification.
>
> **TL;DR**: support for *local* reasoning is a big factor in the ability to do *automated* reasoning about programs. Formal verification involves such reasoning. Rust supports it better than many other imperative systems languages -- even some impure functional ones! -- and formal verification people are excited and building tools presently. This is not purely by accident, and is worth understanding as part of what makes Rust valuable beyond "it doesn't need a GC".
>
> The rest of this post is just unpacking and giving details of one or more of the above assertions, which I'll try to address in order of plausible interestingness to the present, but I will also throw in some history because I kinda can't help myself.

The author is Graydon Hoare, the "founder" of Rust and technical lead until around 2013.

github.com

From the README:

> At the heart of `coq-of-rust` is the translation of Rust programs to the [proof system Coq 🐓](https://coq.inria.fr/). Once some Rust code is translated to Coq, it can then be verified using standard proof techniques.
>
> Here is an example of a Rust function:
>
> ```rust
> fn add_one(x: u32) -> u32 {
>     x + 1
> }
> ```
>
> Running `coq-of-rust`, it translates in Coq to:
>
> ```coq
> Definition add_one (τ : list Ty.t) (α : list Value.t) : M :=
>   match τ, α with
>   | [], [ x ] =>
>     ltac:(M.monadic
>       (let x := M.alloc (| x |) in
>       BinOp.Panic.add (| M.read (| x |), Value.Integer Integer.U32 1 |)))
>   | _, _ => M.impossible
>   end.
> ```
>
> Functions such as `BinOp.Panic.add` are part of the standard library for Rust in Coq that we provide. We can then express and verify specifications on the code in Coq.

Blog:

- Improvements in the Rust translation to Coq, [part 1](https://formal.land/blog/2024/02/29/improvements-rust-translation), [part 2](https://formal.land/blog/2024/03/08/improvements-rust-translation-part-2), [part 3](https://formal.land/blog/2024/03/22/improvements-rust-translation-part-3)
- [Monadic notation for the Rust translation](https://formal.land/blog/2024/04/03/monadic-notation-for-rust-translation)
- [Translation of Rust's core and alloc crates](https://formal.land/blog/2024/04/26/translation-core-alloc-crates)

The same group also recently started [coq-of-python](https://github.com/formal-land/coq-of-python), which is the same thing but for Python:

- [Translation of Python code to Coq](https://formal.land/blog/2024/05/10/translation-of-python-code)
- [Simulation of Python code in Coq](https://formal.land/blog/2024/05/14/translation-of-python-code-simulations)

Alternate Rust-to-_ translators:

- [Hax](https://github.com/hacspec/hax): Translates Rust to Coq (like above) or F*.
- [Aeneas](https://github.com/AeneasVerif/aeneas) + [Charon](https://github.com/AeneasVerif/charon): Translates Rust's MIR to pure lambda calculus / F* / Coq / HOL / Lean. Currently has some strict limitations, like no nested loops or closures. Paper: [Aeneas: Rust verification by functional translation](https://dl.acm.org/doi/abs/10.1145/3547647).

Other Rust formal verification projects:

- [Creusot](https://github.com/xldenis/creusot): Formally verifies Rust code with annotations representing preconditions/postconditions/assertions, by translating the MIR to [Why3](https://www.why3.org/) and then solving.
- [Kani](https://github.com/model-checking/kani): Formally verifies Rust code with annotations, by using model checking powered by [CBMC](https://github.com/diffblue/cbmc).
- [Verus](https://github.com/verus-lang/verus): Formally verifies a Rust DSL with syntax extensions representing preconditions/postconditions/assertions, by using the [Z3](https://github.com/Z3Prover/z3) SMT solver.
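As a rough illustration of the annotation style these tools share, here are runtime-checked Python stand-ins for preconditions and postconditions (the decorator names are mine; the real tools prove the same conditions statically rather than checking them at runtime):

```python
def requires(pred):
    # Precondition: tools like Creusot, Kani, and Verus discharge these
    # statically; this toy version just asserts at call time.
    def wrap(f):
        def g(*args):
            assert pred(*args), "precondition violated"
            return f(*args)
        return g
    return wrap

def ensures(pred):
    # Postcondition: relates the arguments to the returned result.
    def wrap(f):
        def g(*args):
            result = f(*args)
            assert pred(*args, result), "postcondition violated"
            return result
        return g
    return wrap

@requires(lambda x: x < 2**32 - 1)   # no u32 overflow, as in add_one above
@ensures(lambda x, r: r == x + 1)
def add_one(x):
    return x + 1

print(add_one(41))  # 42
```

The point of the static tools is that the `requires` clause is proved to hold at every call site, so the postcondition is guaranteed without ever running the code.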

https://jesper.cx/posts/agda-core.html

> This blog post is about Agda Core, a project to build a formally specified core language for (a reasonable subset of) Agda, together with a reference implementation of a type checker for it. If you have been following my work, you might know that I have been working on this project on and off for the past few years, yet the current version is still very much a work in progress. Hence, in the second part of this post, I will dive into a few examples of some of the more unexpected and interesting obstacles we encountered while trying to turn Agda Core from a dream into reality.

https://leino.science/papers/krml280.html

Context: [Dafny](https://dafny.org/) is a programming language with a built-in SMT solver that can check formally specified statements (e.g. preconditions, postconditions, assertions). Dafny has the features of most programming languages; for example, [covariant/contravariant/"non-variant" (invariant) type parameters](https://docs.scala-lang.org/tour/variances.html). But in order to keep verification sound and decidable in reasonable time, these features carry extra restrictions and other complexities. This article explains that Dafny has two more "variance modes" than usual, because the "covariant" and "non-variant" variances may each be either "cardinality-preserving" or "not cardinality-preserving":

```dafny
// Most "traditional" languages have 3 variances
type Foo<
  +CovariantParameter,
  NonVariantParameter,
  -ContravariantParameter>

// Dafny has 5
type Bar<
  +CovariantCardinalityPreservingParameter,
  NonVariantCardinalityPreservingParameter,
  -ContravariantParameter,
  *CovariantNonCardinalityPreservingParameter,
  !NonVariantNonCardinalityPreservingParameter>
```
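Dafny's two extra cardinality modes have no analogue in mainstream type systems, but the three "traditional" variances the article starts from can be seen in Python's `typing` module (class names here are mine, purely illustrative):

```python
from typing import TypeVar, Generic

T_co = TypeVar("T_co", covariant=True)
T_contra = TypeVar("T_contra", contravariant=True)
T_inv = TypeVar("T_inv")  # invariant ("non-variant") by default

class Producer(Generic[T_co]):
    # Like +T: only produces values, so Producer[Cat] <: Producer[Animal].
    def get(self) -> T_co: ...

class Consumer(Generic[T_contra]):
    # Like -T: only consumes values, so Consumer[Animal] <: Consumer[Cat].
    def put(self, item: T_contra) -> None: ...

class Cell(Generic[T_inv]):
    # Like plain T: both produces and consumes, so no subtyping either way.
    def get(self) -> T_inv: ...
    def put(self, item: T_inv) -> None: ...
```

Dafny then refines this picture: a covariant or non-variant parameter must additionally declare whether the type constructor preserves cardinality, which matters for the solver's reasoning about sets and maps.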

github.com

Another, very similar verified superset of Rust is [Creusot](https://github.com/creusot-rs/creusot). I'm not sure what the benefits/downsides of each are besides syntax, to be honest. This is also similar to [Kani](https://model-checking.github.io/kani/), which also verifies Rust code. However, Kani is a [model checker](https://research.ibm.com/haifa/Workshops/summerseminar2004/present/intro_mc_and_applications.pdf) like [TLC](http://lamport.azurewebsites.net/tla/toolbox.html?from=https://research.microsoft.com/en-us/um/people/lamport/tla/toolbox.html&type=path) (the model checker for TLA+), while Verus and Creusot are built on [SMT solvers](https://www.open-do.org/wp-content/uploads/2010/06/SMT_provers.pdf), like [Dafny](https://github.com/dafny-lang/dafny). Interestingly, Verus (unlike both Kani and Creusot) has its own language server, a fork of `rust-analyzer` ([`verus-analyzer`](https://github.com/verus-lang/verus-analyzer/)).

github.com

> Narya is eventually intended to be a proof assistant implementing Multi-Modal, Multi-Directional, Higher/Parametric/Displayed Observational Type Theory, but a formal type theory combining all those adjectives has not yet been specified. At the moment, Narya implements a normalization-by-evaluation algorithm and typechecker for an observational-style theory with Id/Bridge types satisfying parametricity, of variable arity and internality. There is a parser with user-definable mixfix notations, and user-definable record types, inductive datatypes and type families, and coinductive codatatypes, with functions definable by matching and comatching case trees.

https://youtu.be/CU1wqF6lQQ0

cross-posted from: https://infosec.pub/post/9524990

> Verifiable Random Functions

buttondown.email

It uses [PRISM](https://www.prismmodelchecker.org/), a "probabilistic model checker", so not your typical theorem prover or SAT solver.
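To give a sense of what a probabilistic model checker computes: PRISM answers queries like `P=? [ F "ok" ]` (the probability of eventually reaching a state) exactly, not by simulation. A toy version of the same idea (the chain and query are mine, not from the article) for a retry protocol with three attempts:

```python
from fractions import Fraction

# A tiny discrete-time Markov chain: states "try1".."try3", "ok", "fail".
# Each attempt succeeds with probability 4/5; otherwise we retry.
p = Fraction(4, 5)
chain = {
    "try1": [(p, "ok"), (1 - p, "try2")],
    "try2": [(p, "ok"), (1 - p, "try3")],
    "try3": [(p, "ok"), (1 - p, "fail")],
    "ok": [], "fail": [],
}

def prob_reach(state, target):
    # Exact probability of eventually reaching `target`, by recursion
    # over this (acyclic) chain; PRISM solves the general cyclic case
    # with linear algebra.
    if state == target:
        return Fraction(1)
    return sum((w * prob_reach(nxt, target) for w, nxt in chain[state]),
               Fraction(0))

print(prob_reach("try1", "ok"))  # 124/125, i.e. 1 - (1/5)**3
```

Using exact rationals rather than floats mirrors how these tools avoid accumulating numerical error in long chains.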

dylibso.com

Exploring how to run formally verified code in a WebAssembly sandbox

https://youtu.be/JFOnoqAplAY

> In the past few years, we witnessed the development of multiple smart contract languages - Solidity, Vyper, Michelson, Scilla, etc. These languages need to enable developers to write correct, predictable-behavior smart contract code. Each language development effort therefore ends up spending resources on building formal verification toolsets, compilers, debuggers, and other developer tools.
>
> In this episode, we are joined by Grigore Rosu, professor of computer science at UIUC [University of Illinois at Urbana-Champaign], for a deep dive into the K framework. The K framework is a mathematical logic and language that enables language developers to formally define all programming languages, such as C, Solidity, and JavaScript. Once a language is formally specified in the K framework, the framework automatically outputs a range of formal verification toolsets, compilers, debuggers, and other developer tools for it. Updates to the language can be made directly in K. This technology has massive implications for smart contract programming language development, and formal verification efforts in the blockchain space.
>
> We also cover his efforts to express the Ethereum virtual machine using the K framework, and to develop a new virtual machine technology, called IELE, specifically tailored to the blockchain space. Check out the episode to understand a game-changing technology in the formal verification and smart contract safety space.
>
> Topics discussed in this episode:
>
> - Grigore's background with NASA and work on formally verified correct software
> - Motivations to develop the K framework
> - Basic principles behind the operation of the K framework
> - How K deals with undefined behavior / ambiguities in a language definition
> - The intersection of the K framework and smart contract technology
> - Runtime Verification's collaboration with Cardano
> - KEVM and IELE, smart contract virtual machines developed by Runtime Verification
> - Broader implications of the K framework for the blockchain industry
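The core K idea, defining a language once as rewrite rules over configurations and deriving tools from that definition, can be caricatured in a few lines (illustrative only; this is not K syntax, and real K rules operate on much richer configurations):

```python
# A toy expression language as nested tuples: ("add", e1, e2) or
# ("mul", e1, e2), rewritten step by step until only an int remains.
def step(e):
    if isinstance(e, int):
        return None  # no rule applies: we've reached a value
    op, l, r = e
    if not isinstance(l, int):
        return (op, step(l), r)       # congruence: reduce the left side
    if not isinstance(r, int):
        return (op, l, step(r))       # then the right side
    return l + r if op == "add" else l * r  # the actual semantic rule

def run(e):
    # The "interpreter for free": apply the rules to a fixpoint.
    while not isinstance(e, int):
        e = step(e)
    return e

print(run(("add", ("mul", 2, 3), 4)))  # 10
```

K generalizes exactly this move: from one set of rewrite rules it derives not just an interpreter but also a symbolic execution engine and program prover.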

dafny.org

I saw this posted on r/ProgrammingLanguages. I hadn't heard of this language before, but it looks neat. https://en.wikipedia.org/wiki/Dafny

jakob.space

> Even if formal methods have a reputation for being costly to adopt, and not reasonably applicable to all parts of the software engineering life cycle, my opinion is that the tooling is becoming increasingly accessible and that software practitioners should, at the very least, be aware of their capabilities. A good engineer knows to pick the right tool for the job, and there are cases where formal methods are just that.

[More discussion in the comments section here](https://news.ycombinator.com/item?id=35511152).

github.com

> This development encodes category theory in Coq, with the primary aim being to allow representation and manipulation of categorical terms, as well as realization of those terms in various target categories.


The community currently has no mods (as the person who requested it in the request community did not want to mod it). If anyone wants to be a mod, feel free to DM me or reply here.
