Skydiver, on 17 March 2015 - 05:47 PM, said:
If it's not obvious, I've been a skeptic since I first heard of formal methods many years ago.
You have to know what these methods are actually trying to prove. Formal methods won't prove that your code will have 99.9999% availability, or will be 100% immune to security bugs (or any kind of bug at all), etc.
They prove things about the program as an abstract entity.
Quote
The main reason for my skepticism is that somebody trying to verify the correctness of a program has to verify correctness not only at the problem domain level, but also at the implementation language level, and perhaps at the machine language level. Compilers do have bugs. Have the compilers been proven correct? How about the libraries used? Have they been proven correct?
Or is this a matter of levels of abstraction? If we can prove that the software is correct at the problem domain level/program design level, then any bugs introduced by compilers, libraries, and programmers who have been up for 48 hours straight do not affect the correctness of the software. If that's an acceptable level of abstraction, is "Works on my machine" an acceptable level as well?
A program is a mathematical entity. You could say a program is anything that is isomorphic to a Turing machine. Turing machines are not bound to any compiler, implementation language, or machine language. Yet we can reason about them, and so we can reason about programs in general.
Formal verification methods try to prove specific propositions about these programs, using the same kind of logic that lets you reason about them. Usually this means modelling the program in a language specific to the formal verification tool (for example Coq or Agda, or some other language/tool like Why3), and then using the theory behind that language/tool to provide the verification you need.
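To make that concrete, here's a minimal sketch in Lean (a proof assistant in the same family as the Coq and Agda mentioned above; the function and property are my own toy example, not anything from the thread). You model the program as a definition, state the proposition you care about, and the tool checks the proof:

```lean
-- Model a tiny "program": list reversal.
def rev {α : Type} : List α → List α
  | []      => []
  | x :: xs => rev xs ++ [x]

-- The proposition we verify about it: rev preserves length.
-- This is a claim about the abstract program, not about any compiled binary.
theorem rev_length {α : Type} (xs : List α) :
    (rev xs).length = xs.length := by
  induction xs with
  | nil => rfl
  | cons x xs ih =>
    simp [rev, List.length_append, ih]
```

The point is that `rev_length` is proven about the mathematical definition of `rev`; whether some compiled implementation faithfully realizes that definition is a separate question, which is exactly the gap discussed next.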
But yes, after you do so, nothing tells you that the "translation" between the implementation language and the verification language isn't faulty (like when extracting Coq into Haskell to run a program), or that your initial modelling of said program wasn't faulty, or that the underlying compiler/machine isn't faulty. All of those things can be proven separately, in their own domains (like proving that a specific compiler works correctly), but they are independent of proving things about the actual program.
You do need "blind" faith in one thing, and that is the typechecker/compiler/runtime of the verification language/tool that does the verification per se. So, if you were to try and prove a proposition in Coq, you'd need to trust that Coq's typechecker works correctly, and doesn't let you do weird shit like proving bottom or stuff like that.
But this isn't really that big of a deal. At the very least you've reduced the scope of what you have to blindly trust, from everything down to just this specific language/compiler.
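As an aside, proof assistants can at least tell you exactly what that remaining trust covers. In Lean (again standing in for the Coq/Agda mentioned above), `#print axioms` lists every axiom a given proof depends on, so the "blind faith" is narrowed to the kernel plus whatever that command reports:

```lean
theorem two_plus_two : 2 + 2 = 4 := rfl

-- Ask the system which unproven assumptions this theorem rests on.
-- For this proof, it reports that no axioms are used: everything
-- reduces to the kernel's definitional computation.
#print axioms two_plus_two
```

You still have to trust the typechecker/kernel itself, as the post says, but the trusted base is explicit and small rather than "the whole toolchain".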
