VU-VFS-2025. Lecture 7 - VC's for functions and pointers [004p]
- December 12, 2025
1. Assertions
Definition 1.1. Assertions & Havoc [004q]
We introduce three new syntactic constructs, each with an associated Hoare rule:
- The statement \(\texttt {assert(F)}\) which \(\texttt {fails}\) if \(F\) evaluates to \(\bot \)
- The statement \(\texttt {assume(F)}\) which tells us that \(F\) evaluates to \(\top \)
- The statement \(\texttt {x := havoc()}\) which assigns a non-deterministic value to a variable \(x\)
1.1.1. Evaluation rules
We introduce a new construct \(\texttt {fail}\) denoting the failure state, as an alternative outcome to a regular memory state \(\sigma \).
For \(\texttt {assume}\) we only have a \(\top \) rule: if the assumption holds, the statement is equivalent to \(\texttt {skip}\); otherwise execution gets stuck, but does not fail. Getting stuck is harmless here: partial correctness only constrains executions that terminate or fail, so stuck executions can simply be ignored.
The final big-step evaluation rule is for the havoc statement:
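The rule itself is not reproduced in these notes; a plausible reconstruction (with \(\texttt{Val}\) as an assumed domain of values) is:

\[ \frac{v \in \texttt{Val}}{\langle x := \texttt{havoc}(),\ \sigma \rangle \Downarrow \texttt{.ok}\ \sigma[x \mapsto v]} \]

That is, for any value \(v\) whatsoever, the statement may step to the memory updated at \(x\).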
1.1.2. Hoare rules
In addition to our big-step evaluation rules we also have Hoare rules.
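As one illustration (the rules themselves are not reproduced in these notes, so this is my reconstruction of the standard formulation), the rule for \(\texttt{havoc}\) quantifies the postcondition over every value the variable might receive:

\[ \{\forall v.\ Q[x \mapsto v]\}\ x := \texttt{havoc}()\ \{Q\} \]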
1.1.3. A more detailed explanation
A central thing to understand, especially with assertions, is that they integrate a \(\texttt{Result}\) monad into the big-step evaluation semantics, in addition to the Hoare logic. So instead of the evaluation relation being denoted by:
\[ (\texttt {Stmt} \times \texttt {Memory}) \to \texttt {Memory} \to \texttt {Prop} \]It's now expressed as
\[ (\texttt {Stmt} \times \texttt {Memory}) \to (\texttt {Result}\ \texttt {Memory}) \to \texttt {Prop} \]where we can model \(\texttt {Result}\) as the following monad:
inductive Result (α : Type) where
  | ok   : α → Result α  -- Normal termination with final state
  | fail : Result α      -- Assertion failure / error state
deriving Repr, DecidableEq
At this point, if you want to be pedantic, the implication is that all other inference rules are lifted into the context of this Result monad; for pedagogical purposes, however, it's enough to read \(\Downarrow \sigma \) as \(\Downarrow \texttt {.ok}\ \sigma \), i.e. evaluation into a non-fail state.
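As a small sketch of what this lifting looks like, a bind on \(\texttt{Result}\) propagates failure through sequencing. The Monad instance below is my own addition for illustration, not something from the lecture:

```lean
-- Failure propagates: the continuation only runs on an `.ok` result.
def Result.bind {α β : Type} : Result α → (α → Result β) → Result β
  | .ok a, f => f a
  | .fail, _ => .fail

-- Hypothetical instance making `Result` usable with do-notation.
instance : Monad Result where
  pure := Result.ok
  bind := Result.bind
```

Sequencing two statements then succeeds only if the first ends in an `.ok` state, which is exactly the behaviour the lifted evaluation rules need.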
The Hoare triple is where things become a bit more interesting. As a reminder, the classical Hoare triple is essentially an alias for the logical formula:
\[ \forall \sigma , \sigma '.\ P(\sigma ) \to \langle s, \sigma \rangle \Downarrow \sigma ' \to Q(\sigma ') \]Since we are no longer evaluating into just \(\sigma '\) but into either an \(\texttt {ok}\) or a \(\texttt {fail}\) result, it's worth asking how we actually represent that as a Hoare triple. We can start in much the same way as for the regular partial-correctness Hoare triple:
\[ \forall \sigma , \sigma _t.\ P(\sigma ) \to \langle s, \sigma \rangle \Downarrow \sigma _t \to (??) \]At this point we face a design choice as to how to proceed; the two main possibilities are:
- We disallow failure, which would mean: \[ (\exists \sigma '.\ \sigma _t = \texttt {.ok}\ \sigma ' \land Q(\sigma ')) \] Here, for a Hoare triple to be proven valid we must explicitly demonstrate, as a side condition, that the program cannot fail; otherwise the triple is invalid.
- We ignore failure, which would mean: \[ (\forall \sigma '.\ \sigma _t = \texttt {.ok}\ \sigma ' \to Q(\sigma ')) \] Another way of seeing this rule is that we filter for success states: we only quantify over executions that end successfully, so failing executions place no constraint on the triple.
The latter option is generally more common in theoretical settings because it doesn't require explicitly accounting for failure states; they are simply filtered out by the logic.
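That second option can be written down directly. A sketch in Lean, with \(\texttt{Stmt}\), \(\texttt{Memory}\), and the big-step relation \(\texttt{BigStep}\) left as assumed placeholders (none of these names are from the lecture):

```lean
-- Placeholders, not from the lecture: a statement type, memories,
-- and a big-step relation producing a `Result Memory`.
axiom Stmt : Type
axiom Memory : Type
axiom BigStep : Stmt → Memory → Result Memory → Prop

-- The "ignore failure" triple: only `.ok` outcomes are constrained.
def Triple (P : Memory → Prop) (s : Stmt) (Q : Memory → Prop) : Prop :=
  ∀ σ σₜ, P σ → BigStep s σ σₜ → ∀ σ', σₜ = Result.ok σ' → Q σ'
```

Note how a `.fail` outcome satisfies the inner implication vacuously, which is exactly the filtering described above.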
Quiz 1.2. Weakest precondition for assert and assume [004r]
Answer the following:
- What is \(\texttt {wp}(\texttt {assert}(P), Q)\)?
- What is \(\texttt {wp}(\texttt {assume}(P), Q)\)?
- Given a statement \(s\), can we transform it into a statement \(s'\) such that: \[ \vDash \{P\}\ s\ \{Q\} \iff \vDash \{\top \}\ s'\ \{\top \} \]
Solution.
Recall that the weakest precondition is simply what we can derive by reasoning backwards from the logical formula underlying a Hoare triple.
- Considering our wp formula we have
\[
\texttt {wp}(\texttt {assert}(b), Q)(\sigma ) \to \langle \texttt {assert}(b), \sigma \rangle \Downarrow \texttt {.ok}\ \sigma \to Q(\sigma )
\]
Reasoning backwards, we can start with a case split on the evaluation of the assertion; this leads to two cases:
- \(b\) evaluates to true: this case trivially implies that the assertion evaluates to \(\texttt {.ok}\ \sigma \), so we must prove \(Q(\sigma )\). The only way to prove this is for the weakest precondition itself to supply \(Q(\sigma )\).
- \(b\) evaluates to false: this contradicts the hypothesis that the assertion evaluates to \(\texttt {.ok}\ \sigma \); more specifically, we now have to derive the contradiction \(\texttt {.ok}\ \sigma = \texttt {.fail}\). By inversion we have the hypothesis \(b \Downarrow ^f \sigma \), so to obtain the contradiction the weakest precondition must also state that \(b \Downarrow ^t \sigma \). Putting the two cases together: \[ \texttt {wp}(\texttt {assert}(b), Q)(\sigma ) \equiv (b \Downarrow ^t \sigma ) \land Q(\sigma ) \]
- For \(\texttt {assume}\) we can use more intuitive reasoning: the weakest precondition characterizes the set of states from which every terminating execution ends in a state satisfying \(Q\). The only way for an assumed fact to lead to something else is for it to imply that something, thus our \(\texttt {wp}\) becomes \[ \texttt {wp}(\texttt {assume}(b), Q)(\sigma ) \equiv (b \Downarrow ^t \sigma ) \to Q(\sigma ) \] In terms of backward reasoning, the idea is that \(\texttt {assume}(b)\) essentially adds a raw hypothesis \((b \Downarrow ^t \sigma )\); that by itself clearly doesn't let us manifest \(Q(\sigma )\), so the only way to get \(Q\) from \(b\) being true is to already have, in the precondition, that \(b\) being true implies \(Q\).
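The third question is not worked out above; a standard construction (sketched here under the failure-disallowing reading of the triple, where \(\{\top \}\ s'\ \{\top \}\) says that \(s'\) never fails) moves the specification into the program itself:

\[ s' \;=\; \texttt{assume}(P);\ s;\ \texttt{assert}(Q) \]

The \(\texttt{assume}\) filters out initial states violating \(P\), and the \(\texttt{assert}\) turns a violation of \(Q\) into a failure, so \(\vDash \{\top \}\ s'\ \{\top \}\) holds exactly when \(\vDash \{P\}\ s\ \{Q\}\) does.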