I refer you to Joel Spolsky (https://www.joelonsoftware.com/2005/01/02/advice-for-compute...) on formal proofs of programs:
[…]
So in the first day of that class, Dr. Zuck filled up two entire whiteboards and quite a lot of the wall next to the whiteboards proving that if you have a light switch, and the light was off, and you flip the switch, the light will then be on.
The proof was insanely complicated, and very error-prone. It was harder to prove that the proof was correct than to convince yourself of the fact that switching a light switch turns on the light. Indeed the multiple whiteboards of proof included many skipped steps, skipped because they were too tedious to go into formally. Many steps were reached using the long-cherished method of Proof by Induction, others by Proof by Reductio ad Absurdum, and still others using Proof by Graduate Student.
For our homework, we had to prove the converse: if the light was off, and it’s on now, prove that you flipped it.
I tried, I really did.
I spent hours in the library trying.
After a couple of hours I found a mistake in Dr. Zuck’s original proof which I was trying to emulate. Probably I copied it down wrong, but it made me realize something: if it takes three hours of filling up blackboards to prove something trivial, allowing hundreds of opportunities for mistakes to slip in, this mechanism would never be able to prove things that are interesting.
I have an MSc in formal methods, and this is really misleading. Admittedly, some proofs cannot be done easily. But then how did Airbus verify that certain classes of errors simply do not exist in their fly-by-wire software (roughly 100 KLOC, implemented in a subset of C)? (I'm sure Boeing also employs these techniques internally.)
Using abstract interpretation. There are tons of formal methods, ranging from type systems to formal proofs. Lots of compromises can be made to make them practical and useful for a particular domain. Look into [1,2] for some quick introductory examples. Going straight into formal proofs is in general a really bad idea.
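To give a flavour of what abstract interpretation does, here is a deliberately tiny sketch (illustrative only, not any particular industrial analyzer): each variable is tracked as an interval that over-approximates every value it could take at run time, and a property such as an array access being in bounds is either proved or flagged as a potential error.

```c
/* A minimal sketch of abstract interpretation over an interval domain.
 * Illustrative toy only: each variable is tracked as a range [lo, hi]
 * that over-approximates every value it can take at run time. */
#include <stdio.h>

typedef struct { int lo, hi; } Itv;          /* abstract value: an interval */

static Itv itv(int lo, int hi) { Itv r = { lo, hi }; return r; }

/* Abstract addition: sound, because a+b always lies in the sum of the ranges. */
static Itv itv_add(Itv a, Itv b) { return itv(a.lo + b.lo, a.hi + b.hi); }

/* Join (least upper bound): used where two control-flow paths merge. */
static Itv itv_join(Itv a, Itv b) {
    return itv(a.lo < b.lo ? a.lo : b.lo, a.hi > b.hi ? a.hi : b.hi);
}

/* Check an access buf[idx] against a buffer of 'len' elements. */
static void check_access(const char *what, Itv idx, int len) {
    int safe = (idx.lo >= 0 && idx.hi < len);
    printf("%s: index in [%d, %d], buffer length %d -> %s\n",
           what, idx.lo, idx.hi, len,
           safe ? "proved in bounds" : "POSSIBLE OUT OF BOUNDS");
}

int main(void) {
    /* Abstractly "run" this fragment without knowing the input:
     *     i = input & 7;            // low three bits, so i in [0, 7]
     *     if (...) j = i; else j = i + 4;
     *     buf[j];                   // buf has 16 elements
     *     buf2[j];                  // buf2 has only 10 elements
     */
    Itv i = itv(0, 7);
    Itv j = itv_join(i, itv_add(i, itv(4, 4)));  /* merge of both branches: [0, 11] */

    check_access("buf  (16 elements)", j, 16);   /* proved safe */
    check_access("buf2 (10 elements)", j, 10);   /* flagged: 10 and 11 are possible */
    return 0;
}
```

Real analyzers add widening for loops, relational domains and much more, but the shape is the same: compute a sound over-approximation, then check the property against it.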
I have worked on railway control systems with really nasty potential race conditions, managed to prove the absence of large classes of errors, and then derived the implementations formally. It's really not that hard. There's even a subfield of CS devoted to verifying formal properties of biological systems [3], which are really complex.
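Not how that industrial work is actually done (it uses proper formal tooling), but the underlying idea can be shown with a toy: model the interlocking as a finite transition system, enumerate every reachable state under every interleaving, and check the safety property in all of them. Something along these lines, with an invented two-train, one-shared-segment model:

```c
/* Toy illustration: exhaustively enumerate every reachable state of a tiny
 * interlocking model and check a safety property in all of them. Because the
 * state space is finite and fully explored, the property is proved for every
 * behaviour of the model, not just tested on some runs. */
#include <stdio.h>

/* Per-train phase: approaching, waiting at the signal, on the shared segment,
 * past it. */
enum { APPROACH, WAITING, ON_SHARED, PAST, NPHASE };

typedef struct { int a, b, lock; } State;   /* lock: 0 = free, 1 = A holds it, 2 = B */

static int encode(State s) { return (s.a * NPHASE + s.b) * 3 + s.lock; }

/* One train's possible move; the interlocking rule is: you may move onto the
 * shared segment only by atomically taking the free lock. */
static int step(int phase, int *lock, int me, int *out) {
    switch (phase) {
    case APPROACH:  *out = WAITING; return 1;
    case WAITING:   if (*lock == 0) { *lock = me; *out = ON_SHARED; return 1; }
                    return 0;
    case ON_SHARED: *lock = 0; *out = PAST; return 1;
    default:        return 0;
    }
}

int main(void) {
    static char seen[NPHASE * NPHASE * 3];
    State queue[NPHASE * NPHASE * 3];
    int head = 0, tail = 0, violations = 0;

    State init = { APPROACH, APPROACH, 0 };
    seen[encode(init)] = 1;
    queue[tail++] = init;

    while (head < tail) {
        State s = queue[head++];

        /* Safety property: the two trains are never on the shared segment
         * at the same time. */
        if (s.a == ON_SHARED && s.b == ON_SHARED) {
            printf("VIOLATION in state (a=%d, b=%d, lock=%d)\n", s.a, s.b, s.lock);
            violations++;
        }

        /* Explore every interleaving: either train may move next. */
        for (int who = 1; who <= 2; who++) {
            State n = s;
            int *phase = (who == 1) ? &n.a : &n.b;
            int next;
            if (step(*phase, &n.lock, who, &next)) {
                *phase = next;
                if (!seen[encode(n)]) { seen[encode(n)] = 1; queue[tail++] = n; }
            }
        }
    }
    printf("%d reachable states explored, %d violations\n", tail, violations);
    return 0;
}
```

If you weaken the guard so that observing the free lock and taking it become two separate steps, the same exhaustive search immediately produces the violating interleaving, which is exactly the kind of race that testing rarely hits.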
I wonder if anyone can follow Feynman’s style of proof?
By the end of that summer of 1983, Richard had completed his analysis of the behavior of the router, and much to our surprise and amusement, he presented his answer in the form of a set of partial differential equations. To a physicist this may seem natural, but to a computer designer, treating a set of boolean circuits as a continuous, differentiable system is a bit strange. Feynman's router equations were in terms of variables representing continuous quantities such as “the average number of 1 bits in a message address.” I was much more accustomed to seeing analysis in terms of inductive proof and case analysis than taking the derivative of “the number of 1’s” with respect to time.
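Hillis doesn't reproduce the actual equations, so purely as an illustration of the style (my notation, not Feynman's): treat the discrete bit count as a continuous quantity and write a rate equation for it.

```latex
% Illustrative only. If N(t) is the average number of 1 bits still set in a
% message address, and each hop clears one of them at rate r, then
\[
\frac{dN}{dt} = -\,r\,N(t)
\qquad\Longrightarrow\qquad
N(t) = N(0)\,e^{-rt}.
\]
```

From equations of that shape you read off quantities like expected delivery time or buffer occupancy directly, instead of doing induction and case analysis over discrete circuit states.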
As explained in this talk by Robert Martin, proving programs correct was the big goal that Edsger W. Dijkstra tried to reach by eliminating GOTO in favor of structured programming (which essentially means having sequencing, if-else selection, and loops in the language as a replacement for all practical uses of GOTO; a small before/after sketch follows the quote below):
Software could be proven correct, but we abandoned that, just gave up on it, it’s too hard. But we can test it. We can use science, and we can write tests that demonstrate that the software is not failing. We treat software like a science, not like mathematics.
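To make the parenthetical above concrete, here is a minimal before/after sketch: the same linear search written with gotos and with structured control flow. The structured version is the one you can attach a loop invariant to, which is what proving programs correct hangs on.

```c
/* Purely illustrative: the same linear search with gotos and with structured
 * control flow. The structured version has one entry and one exit per
 * construct, so you can state and check a loop invariant for it. */
#include <stdio.h>

/* goto style: control can jump around, so there is no single place to state
 * "what is true here on every iteration". */
static int find_goto(const int *a, int n, int key) {
    int i = 0;
top:
    if (i >= n) goto missing;
    if (a[i] == key) goto found;
    i++;
    goto top;
found:
    return i;
missing:
    return -1;
}

/* structured style: the while loop carries an obvious invariant,
 * "key does not occur in a[0..i-1]", which holds on entry and is preserved
 * by every iteration. */
static int find_structured(const int *a, int n, int key) {
    int i = 0;
    while (i < n && a[i] != key)
        i++;
    return (i < n) ? i : -1;
}

int main(void) {
    int a[] = { 3, 1, 4, 1, 5, 9 };
    printf("%d %d\n", find_goto(a, 6, 4), find_structured(a, 6, 4));   /*  2  2 */
    printf("%d %d\n", find_goto(a, 6, 7), find_structured(a, 6, 7));   /* -1 -1 */
    return 0;
}
```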
I think Robert Martin doesn't fully grasp the field of formal methods and program semantics, which has evolved a lot since the early 1980s. Paraphrasing Alan Perlis, beware of the Turing tar pit, where everything is possible but nothing of interest is easy.
So one trick is to work with DSLs whose restricted semantics are good enough for your domain. That makes proofs and other formal techniques, and hence security guarantees, much, much easier.
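As a sketch of what restricted semantics buys you (a made-up mini-DSL, just for illustration): if programs are straight-line lists of statements with no loops or jumps, termination holds by construction, and "every variable is defined before it is used" becomes an exact linear-time check instead of an undecidable question.

```c
/* Hypothetical mini-DSL: programs are a fixed list of SET / COPY / OUTPUT
 * steps, with no loops or jumps. Termination is guaranteed by construction,
 * and def-before-use is decided exactly by one linear pass. */
#include <stdio.h>

enum { NVARS = 8 };
typedef enum { SET, COPY, OUTPUT } Op;
typedef struct { Op op; int dst, src, value; } Instr;   /* one DSL statement */

/* Static verifier: returns the index of the first offending statement,
 * or -1 if the whole program is provably safe. */
static int check_def_before_use(const Instr *prog, int n) {
    int defined[NVARS] = { 0 };
    for (int i = 0; i < n; i++) {
        switch (prog[i].op) {
        case SET:    defined[prog[i].dst] = 1; break;
        case COPY:   if (!defined[prog[i].src]) return i;
                     defined[prog[i].dst] = 1; break;
        case OUTPUT: if (!defined[prog[i].src]) return i; break;
        }
    }
    return -1;
}

int main(void) {
    Instr good[] = { { SET, 0, 0, 42 }, { COPY, 1, 0, 0 }, { OUTPUT, 0, 1, 0 } };
    Instr bad[]  = { { COPY, 1, 0, 0 }, { OUTPUT, 0, 1, 0 } };  /* uses var 0 unset */

    printf("good: first error at %d\n", check_def_before_use(good, 3));  /* -1 */
    printf("bad:  first error at %d\n", check_def_before_use(bad, 2));   /*  0 */
    return 0;
}
```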
But if you insist on Turing-complete languages then, as I explained above, you can e.g. build abstract interpreters or data-flow analyses that are able to prove really sophisticated things. I have implemented a C abstract interpreter for an industrial client that, among many other things, detected potential out-of-bounds array accesses and potential uses of uninitialized pointers. Of course it erred on the safe side, but it was really precise (few false positives). Not a walk in the park, as it relies on Galois connections; see the seminal Cousot & Cousot 1977 paper [1].
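For the curious, the Galois connection behind an interval analysis has the standard textbook shape (generic notation, not tied to any particular tool):

```latex
% Standard formulation, after Cousot & Cousot: gamma concretizes an interval
% to the set of integers it denotes, alpha abstracts a set to an interval.
\[
\gamma([l,u]) = \{\, n \in \mathbb{Z} \mid l \le n \le u \,\},
\qquad
\alpha(S) = [\min S,\ \max S] \quad \text{(for nonempty } S\text{)},
\]
\[
\text{adjunction:}\quad \alpha(S) \sqsubseteq a \iff S \subseteq \gamma(a),
\qquad
\text{soundness of an abstract op } f^\sharp:\quad
f(\gamma(a)) \subseteq \gamma(f^\sharp(a)).
\]
```

That soundness condition is what every abstract operation in such an analyzer has to satisfy; the 1977 paper builds the whole fixpoint framework around it.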