I think that the Curry Howard correspondence is one of the coolest things ever. Philip Wadler’s “Propositions as Types” paper sends chills down my spine. Getting to literally reason about the programs we write and use ~Logic~ on them is fascinating and amazing to me. So what does this all mean? Is it practical? Can we do anything with it?
(please clap)
Here’s the basic idea: when we write a type signature in our programs, we’re being tricked into writing a proposition in logic. When we write an implementation for it, we’re providing evidence for that proposition, and potentially proving it. The more powerful our type system, the more of this logical reasoning we can do.
Let’s take Java for example. Here’s a method signature:
public Integer length(List<Character> string);
This signature is like a promise: “If you give me a List<Character>, then I’ll give you an Integer.”
We can write that in propositional logic as:
\[List_{Character} \implies Integer\]
Now, in order to prove this, we must demonstrate that this works for all lists of characters.
If someone can provide a List<Character> that causes this function to break, then our proposition is false!
Fortunately, lists are simple, and even Java’s null isn’t terribly awful here.
Here’s our proof:
public Integer length(List<Character> string) {
    if (string == null) {
        return 0;
    }
    Integer result = 0;
    for (Character c : string) {
        result += 1;
    }
    return result;
}
Actually, we can provide a proof much more simply: return 0; satisfies the promise in the type signature!
It’s a little easier to see the connection between types and logic with Haskell’s syntax. Compare the Java signature above with this declaration:
length :: [Char] -> Integer
There’s even an arrow!
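To make the parallel concrete, here’s a minimal sketch of that proof in Haskell (I’m calling it length' to dodge the Prelude’s length):
length' :: [Char] -> Integer
length' []       = 0                 -- the empty list still produces an Integer
length' (_ : cs) = 1 + length' cs    -- one more than the length of the tail
And just like in Java, length' _ = 0 would also count as evidence: the type only promises some Integer.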
We can use this sort of logic to prove things false, too.
Let’s say you want to prove ONCE AND FOR ALL that Haskell’s head :: [a] -> a function is just bad.
It’s not OK, and you want to remove it from your codebase, and maybe even get a ruckus going on the Haskell mailing list.
The idea that throwing exceptions in pure code is bad hasn’t been cutting it at the company meetings, so we’ll have to resort to logic.
First, we need a way to translate some ideas to get at their logical counterparts.
How are we going to represent the idea of a linked list in logic?
Our tools in logic are and ($\land$), or ($\lor$), and implies ($\implies$).
We’ve already seen how implies corresponds to functions, so now we just need to figure out or and and, and we can do some THEOREM PROVING.
$A \land B$ is true if $A$ is true and if $B$ is true.
If either of them is false, then the proposition $A \land B$ is false.
Logical and has a few rules that we can look at:
If $A$ is true and $B$ is true, then $A \land B$ is true. That means whenever we know $A$ and $B$, we can introduce $A \land B$. This theorem looks like:
\[A \implies (B \implies A \land B)\]
What’s a Java method signature that corresponds with this?
public <A, B> And<A, B> andIntroduction(A a, B b);
Or Haskell:
andIntroduction :: a -> b -> And a b
If $A \land B$ is true, then we can eliminate the $\land$ and write down both $A$ and $B$ on their own. As a logical proposition, this looks like:
\[A \land B \implies A\]
\[A \land B \implies B\]
Which we can implement in our two favorite programming languages:
public <A, B> A andElimOne(And<A, B> and);
public <A, B> B andElimTwo(And<A, B> and);
andElimOne :: And a b -> a
andElimTwo :: And a b -> b
Given the rules and type signatures for and elimination, I think we can arrive at a suitable class to implement it. It’s the humble pair, or tuple!
public class Tuple<A, B> {
    public final A fst;
    public final B snd;

    public Tuple(A a, B b) {
        this.fst = a;
        this.snd = b;
    }
}
type And a b = (a, b)
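As a quick sanity check, here’s a sketch of all three and rules implemented against that tuple-based And (the names mirror the signatures above):
andIntroduction :: a -> b -> And a b
andIntroduction a b = (a, b)   -- knowing both A and B builds "A and B"

andElimOne :: And a b -> a
andElimOne (a, _) = a          -- from "A and B", recover A

andElimTwo :: And a b -> b
andElimTwo (_, b) = b          -- from "A and B", recover B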
The proposition $A \lor B$ is true if either of $A$ or $B$ are true.
If one of them is false, that’s fine! As long as they aren’t both false.
Now let’s review the rules for or ($\lor$).
This one will be a little trickier.
If we know $A$ is true, then we can say that $A \lor B$ is true. Even if we know that $B$ is false, we can still say $A \lor B$ since $A$ is true. Expressed as a theorem, we have:
\[A \implies A \lor B\]
Which gives us a Java method signature:
public <A, B> Or<A, B> orIntroduction(A a);
and a Haskell type signature:
orIntroduction :: a -> Or a b
Naturally, we can also write b -> Or a b.
Or elimination is a way of rewriting an Or value.
It’s a bit trickier than And elimination, since in And elimination we know we’ve got an A and a B.
If we know that $A \lor B$ is true, then we have two possible cases: either $A$ is true, or $B$ is true.
But we don’t know which!
So we’ll have to be able to handle either case.
So, provided we know how to handle $A \implies Q$ and $B \implies Q$, then we can do some elimination. Because a full type signature might get messy, here’s a list of “requirements” and a guarantee:
- We have $A \lor B$.
- We can handle the first case: $A \implies Q$.
- We can handle the second case: $B \implies Q$.
- Given all of that, we’re guaranteed a $Q$.
Written out, this is:
\[(A \lor B) \implies (A \implies Q) \implies (B \implies Q) \implies Q\]
Translated to Java:
public <A, B, Q> Q orElimination(Or<A, B> or, Function<A, Q> left, Function<B, Q> right);
In Haskell, we’ve got:
orElimination :: (Or a b) -> (a -> q) -> (b -> q) -> q
So we know we can make an $A \lor B$ from either an $A$ or a $B$.
And we know that we can eliminate it, but only if we can handle either an $A$ or a $B$.
This is Haskell’s Either type:
data Either a b = Left a | Right b
type Or = Either
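And, assuming that type Or = Either synonym, here’s a sketch of the or rules (orElimination is really just the Prelude’s either with its arguments shuffled around):
orIntroductionLeft :: a -> Or a b
orIntroductionLeft = Left       -- knowing A is enough for "A or B"

orIntroductionRight :: b -> Or a b
orIntroductionRight = Right     -- and so is knowing B

orElimination :: Or a b -> (a -> q) -> (b -> q) -> q
orElimination (Left a)  left _     = left a    -- we turned out to have an A
orElimination (Right b) _    right = right b   -- we turned out to have a B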
Implementing Either in Java is left as an exercise for the reader.
Alright, we’ve reviewed our logic and got some tools on our belts.
Let’s get back to lists, and how much we hate the head function.
The Haskell list type is defined like:
data List a
  = Nil
  | Cons a (List a)
Now, we need to translate this to just using Or and And.
We can start by deconstructing the | in the top-level sum and replacing it with an Either.
Since Nil is a nullary constructor carrying no information, we can say that the end of a list is like Left ().
type List' = Either () (...)
That leaves the other case.
We’ve got a constructor Cons a (List a), which has two elements.
We can express that as a tuple: (a, List a).
Haskell doesn’t let you have cycles in type synonym declarations, so we can’t actually say:
type List a = Either () (a, List a)
But we can imagine that this is the case.
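If we really wanted to write it down, a recursive newtype would do the trick. Here’s a sketch, where List' and the conversion helpers are names I’ve made up:
newtype List' a = List' (Either () (a, List' a))

toList' :: [a] -> List' a
toList' []       = List' (Left ())               -- Nil becomes Left ()
toList' (x : xs) = List' (Right (x, toList' xs)) -- Cons becomes Right (head, tail)

fromList' :: List' a -> [a]
fromList' (List' (Left ()))       = []
fromList' (List' (Right (x, xs))) = x : fromList' xs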
Now that we have it in terms of Either and (,), we can write it as a logical proposition:
\[List_{A} = () \lor (A \land List_{A})\]
Now, we can plug that directly into the type signature for head, which we’ll put here in logical form:
\[List_{A} \implies A\]
Replacing $List_{A}$ with the definition of a list:
\[() \lor (A \land List_{A}) \implies A\]
Now we can use the and elimination rule as listed above to get rid of the $List_{A}$ term:
\[() \lor A \implies A\]
And, we’re stuck.
We can’t eliminate the $() \lor A$, since the or elimination rule requires more information than we have.
Consider what is meant by or: “I have evidence that either $A$ or $B$ is true.”
If we don’t know what evidence we have, then we certainly can’t say that $A$ is true.
If we had an $A$ available in the environment, then we could prove the $A$ that we’re looking for. Consider:
\[() \lor A \implies A \implies A\]
This is equivalent to changing head’s type signature to:
head :: [a] -> a -> a
head [] a = a
head (x:xs) _ = x
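With that version (which shadows the Prelude’s head), there’s no list we can pass that breaks the promise; the empty list just hands the default back:
head [1, 2, 3] 0  -- evaluates to 1
head []        0  -- evaluates to 0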
Alternatively, we could change that final bit to:
\[() \lor A \implies () \lor A\]
which is also tautologically true.
$() \lor A$ is equivalent to Either () a, which is equivalent to Maybe a, so we’d have:
head :: [a] -> Maybe a
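A minimal sketch of that total version, which is essentially Data.Maybe’s listToMaybe:
head :: [a] -> Maybe a
head []      = Nothing   -- no first element to give back
head (x : _) = Just x    -- otherwise, hand over the first element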
By breaking down head’s real type signature, we’ve proven that it is partial.
Furthermore, by altering the final step in the proof, we can work backwards and determine what a total solution would look like.
We’ve achieved a minor victory today. When functional programmers talk about ‘reasoning about their code’, this is one of the things we’re talking about. Like, literal, actual, reasoning. Not just ‘thinking’, but logical, clear, structured reasoning about how the code is working and what it means.
If you’re interested in learning more, I’d highly recommend reading Type Theory and Formal Proof: An Introduction.
(thanks to @hdgarrood on Twitter for posting some corrections! brackets are hard)
(thanks to /u/ouchthats on Reddit for correcting my faulty use of logic… damn LEM showing up everywhere! As it happens, reasoning and logic are difficult to get exactly right, but they’re the only things we can get exactly right!)