Thursday, November 13, 2014

Curry-Howard for GUIs: Or, User Interfaces via Linear Temporal, Classical Linear Logic

Together with Jennifer Paykin and Steve Zdancewic, we have written a short note about the next phase of the long project to make GUI programming intellectually manageable.

Essentially, the idea is that the natural language of graphical user interfaces is the $\pi$-calculus, typed using classical linear logic (plus some temporal modalities). Furthermore, we think that the implementation strategy of callbacks and event loops can be understood in terms of Paul-André Melliès's tensorial logic. So we think we can:

  1. Explain how there are secretly beautiful logical abstractions inside the apparent horror of windowing toolkits;
  2. Illustrate how to write higher-order programs which automatically maintain complex imperative invariants, and
  3. Write some Javascript programs which we think are actually $\pi$-calculus terms in disguise.
You can download the draft here. If you want to read something with more theorems than this note, I'd suggest looking at Jennifer and Steve's paper about their Linear/Producer/Consumer calculus for CLL.

Tuesday, November 4, 2014

Integrating Linear and Dependent Types at POPL 2015

I'm very happy to announce that Integrating Linear and Dependent Types will appear at POPL 2015! The link above goes to the final version, which (at the behest of the reviewers) has been significantly expanded from the original submission. (Added up, the total length of the reviews was almost half the length of the submission, which says something about the degree of care taken in the process.)

Also, Jeremy Yallop has put together a web page with links to many of the POPL 2015 papers. Some of the freely available ones (as of this writing) which catch my eye are:

  • Formal verification of a C static analyzer, by Jacques-Henri Jourdan, Vincent Laporte, Sandrine Blazy, Xavier Leroy and David Pichardie

    Astrée is one of the big success stories in verification. Apparently they are building a provably correct(!) version.

  • Space-Efficient Manifest Contracts, by Michael Greenberg

    One of the big difficulties in applying contract checking to functional programming is that it breaks tail call optimization. This paper says that you can do it without breaking TCO, which is (a) a real breakthrough, and (b) probably has all kinds of applications.

  • Functors are type refinement systems, by Paul-André Melliès and Noam Zeilberger

    I knew fibrations were important for characterizing inductive definitions in type theory, parametricity, and the semantics of dependent types. Apparently they are also important for characterizing refinement types.

  • Programming up to Congruence, by Vilhelm Sjöberg and Stephanie Weirich

    This is a dependent type theory which works up to the congruence closure of the equality hypotheses in the context, rather than using a judgmental equality. The treatment of equality is the central problem in the design of dependently typed languages, so it's nice to see exploration of the design space. (This approach reminds me a bit of the Girard/Schroeder-Heister equality rule, which semi-secretly underpins GADTs.)

Monday, October 20, 2014

Focusing is not Call-by-Push-Value

Ever since I learned about them, I've thought of call-by-push-value and focusing (aka polarization) as essentially two different views of the same problem: they both give a fine-grained decomposition of higher-order effectful programs which permits preserving the full βη-theory of the language.

Until this morning, I had thought that the differences were merely cosmetic, with CBPV arising from Paul Levy's analysis of the relationship between denotational semantics and operational semantics, and focusing arising from an analysis of the relationship between operational semantics and proof theory (a lot of people have looked at this, but I learned about it from Noam Zeilberger). Both systems decompose a Moggi-style computational monad into a pair of adjoint operators, which mediate between values and computations (in CBPV) and positive and negative types (in focusing). So I thought this meant that “value type” and “positive type” were synonyms, as were “computation type” and “negative type”.

This morning, I realized I was wrong! Focusing and call-by-push-value make precisely the opposite choices in their treatment of variables! To understand this point, let's first recall the syntax of types for a call-by-push-value (on top) and a polarized (on bottom) calculus.

\begin{mathpar}
\begin{array}{llcl}
\mbox{Value Types} &
X,Y,Z & ::= & \Val{A} \bnfalt 0 \bnfalt X + Y \bnfalt 1 \bnfalt X \times Y \\
\mbox{Computation Types} & 
A,B,C & ::= & \F{X} \bnfalt X \to A \bnfalt \top \bnfalt A \With B \\[1em]

\mbox{Positive Types} & 
P,Q & ::= & \Down{N} \bnfalt 0 \bnfalt P + Q \bnfalt 1 \bnfalt P \times Q \\
\mbox{Negative Types} & 
M,N & ::= & \Up{P} \bnfalt P \to N \bnfalt \top \bnfalt M \With N \\
\end{array}
\end{mathpar}

At first glance, these two grammars look identical, save only for the renamings $\Val{-} \iff \Down{-}$ and $\F{-} \iff \Up{-}$. But this is misleading! The typing judgements for the two systems are actually quite different, so if they really are the same idea, the reason has to be much more subtle.

In call-by-push-value, the idea is that $\Val{-}$ is a functor which is right adjoint to $\F{-}$. As a result, values are interpreted in a category of values $\ValueOp$, and computations are interpreted in a category of computations $\CompOp$. The adjunction between values and computations means that the hom-set $\VHom{X}{\Val{A}}$ is isomorphic to the hom-set $\CHom{\F{X}}{A}$. This adjunction gives rise to the two basic judgement forms of call-by-push-value, the value judgement $\judgev{\Gamma}{v}{X}$ and the computation judgement $\judgec{\Gamma}{t}{A}$. The idea is that $\interp{\judgev{\Gamma}{v}{X}} \in \VHom{\Gamma}{X}$ and $\interp{\judgec{\Gamma}{t}{A}} \in \CHom{\F{\Gamma}}{A}$.

The key bit is in the interpretation of contexts in computations, so let me highlight that:

$\interp{\judgec{\Gamma}{t}{A}} \in \CHom{\F{\Gamma}}{A}$

Note that we interpret contexts as $\F{\Gamma}$, and so this says that variables refer to values.
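To make this concrete, here is a small OCaml sketch (mine, purely illustrative, not the official presentation from either line of work) of the CBPV type grammar above, with a typing context that assigns value types to variables. The comments also record how the Moggi-style monad is recovered as the composite $\Val{\F{X}}$.

    (* A sketch of the CBPV types above as OCaml data.  The point to notice
       is the context type at the bottom: hypotheses carry VALUE types, so
       variables always stand for values.  The Moggi-style monad is the
       composite T X = U (F X). *)

    type vtype =                    (* value types X, Y, Z        *)
      | U of ctype                  (* U A : thunked computations *)
      | Zero                        (* 0                          *)
      | Sum of vtype * vtype        (* X + Y                      *)
      | One                         (* 1                          *)
      | Pair of vtype * vtype       (* X x Y                      *)

    and ctype =                     (* computation types A, B, C  *)
      | F of vtype                  (* F X : computations returning X-values *)
      | Arrow of vtype * ctype      (* X -> A                     *)
      | Top                         (* top                        *)
      | With of ctype * ctype       (* A & B                      *)

    (* Contexts: every hypothesis x:X is assigned a value type. *)
    type cbpv_context = (string * vtype) list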

However, in a polarized type theory, we observe that positive types are “left-invertible”, and negative types are “right-invertible”. In proof theory, a rule is invertible when the conclusion implies the premise. For example, the right rule for implication introduction in intuitionistic logic reads

\begin{mathpar}
\inferrule*[]
          {\judgend{\Gamma, S}{T}}
          {\judgend{\Gamma}{S \to T}}
\end{mathpar}

This is invertible because you can prove, as a theorem, that

\begin{mathpar}
\inferrule*[]
          {\judgend{\Gamma}{S \to T}}
          {\judgend{\Gamma, S}{T}}
\end{mathpar}

is an admissible rule of the system. Similarly, sums have a left rule:

\begin{mathpar}
\inferrule*[]
          {\judgend{\Gamma, S}{Q} \\ \judgend{\Gamma, T}{Q}}
          {\judgend{\Gamma, S + T}{Q}}
\end{mathpar}

such that the following two rules are admissible:

\begin{mathpar}
\inferrule*[]
          {\judgend{\Gamma, S + T}{Q}}
          {\judgend{\Gamma, S}{Q}}

\and

\inferrule*[]
          {\judgend{\Gamma, S + T}{Q}}
          {\judgend{\Gamma, T}{Q}}
\end{mathpar}

The key idea behind polarization is that one should specify the calculus modulo the invertible rules. That is, the judgement on the right should fundamentally be a judgement that a term has a positive type, and the hypotheses in the context should be negative. That is, the two primary judgements of a polarized system are the positive introduction judgement

$\judge{\Gamma}{v}{P}$

which explains how introductions for positive types work, and the negative elimination (or spine judgement)

$\spine{\Gamma}{s}{N}{P}$

which explains how eliminations for negative types work. The eliminations for positive types and the introductions for negative types are derived judgements (which end up being rules for pattern matching and lambda-abstraction), chosen so that cut-elimination holds, plus a few book-keeping rules to hook these two judgements together. The critical point is that the grammar for $\Gamma$ consists of negative types:

$\Gamma ::= \cdot \bnfalt \Gamma, x:N$

This is because positive types are (by definition) left-invertible: any positive hypothesis can be eagerly decomposed by its invertible left rule, so there is no reason to permit positive types to appear as hypotheses at all. As a result, the context clearly has a very different character than in call-by-push-value.
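For contrast, here is the same kind of OCaml sketch (again mine, purely illustrative) for the polarized grammar. The datatypes match the CBPV ones constructor for constructor; the only difference that matters for this post is the context, which now assigns negative types to variables.

    (* The polarized types above, in the same style as the CBPV sketch. *)

    type ptype =                    (* positive types P, Q        *)
      | Down of ntype               (* down N : shifted negative  *)
      | PZero                       (* 0                          *)
      | PSum of ptype * ptype       (* P + Q                      *)
      | POne                        (* 1                          *)
      | PPair of ptype * ptype      (* P x Q                      *)

    and ntype =                     (* negative types M, N        *)
      | Up of ptype                 (* up P : shifted positive    *)
      | NArrow of ptype * ntype     (* P -> N                     *)
      | NTop                        (* top                        *)
      | NWith of ntype * ntype      (* M & N                      *)

    (* Contexts: Gamma ::= . | Gamma, x:N -- every hypothesis is negative. *)
    type polarized_context = (string * ntype) list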

I don't have a punchline for this post, in the sense of “and therefore the following weird things happen as a consequence”, but I would be astonished if there weren't some interesting consequences! Both focalization and call-by-push-value teach us that attending to the fine structure of computation pays large dividends, and it's really surprising that they are apparently not looking at the same fine structure, despite arising from the same dichotomy at the type level.

Wednesday, July 9, 2014

Two Drafts on Dependent Types

I have two new draft papers to publicize. The first is a paper with Pierre Pradic and Nick Benton:
  • Integrating Linear and Dependent Types, Neelakantan R. Krishnaswami, Pierre Pradic, Nick Benton. The technical report with proofs is also available.

    In this paper, we show how to integrate linear types with type dependency, by extending the linear/non-linear calculus of Benton to support type dependency.

    Next, we give an application of this calculus by giving a proof-theoretic account of imperative programming, which requires extending the calculus with computationally irrelevant quantification, proof irrelevance, and a monad of computations. We show the soundness of our theory by giving a realizability model in the style of Nuprl, which permits us to validate not only the β-laws for each type, but also the η-laws.

    These extensions permit us to decompose Hoare triples into a collection of simpler type-theoretic connectives, yielding a rich equational theory for dependently-typed higher-order imperative programs. Furthermore, both the type theory and its model are relatively simple, even when all of the extensions are considered.

Sometimes, it seems like every problem in programming languages research can be solved by either linear types, or dependent types. So why not combine them, and see what happens?

  • Mtac: A Monad for Typed Tactic Programming in Coq, Beta Ziliani, Derek Dreyer, Neelakantan R. Krishnaswami, Aleksandar Nanevski, Viktor Vafeiadis.

    A website with Coq source and tutorial is available.

    Effective support for custom proof automation is essential for large-scale interactive proof development. However, existing languages for automation via tactics either (a) provide no way to specify the behavior of tactics within the base logic of the accompanying theorem prover, or (b) rely on advanced type-theoretic machinery that is not easily integrated into established theorem provers.

    We present Mtac, a lightweight but powerful extension to Coq that supports dependently-typed tactic programming. Mtac tactics have access to all the features of ordinary Coq programming, as well as a new set of typed tactical primitives. We avoid the need to touch the trusted kernel typechecker of Coq by encapsulating uses of these new tactical primitives in a monad, and instrumenting Coq so that it executes monadic tactics during type inference.

Since I'm not the main author of this paper, I feel free to say this is really good! Mtac manages to strike a really amazing balance of simplicity, cleanliness, and power. It's really the first tactic language that I want to implement (rather than grudgingly accepting the necessity of implementing).

Tuesday, July 1, 2014

PhD opportunities at the University of Birmingham

My university, the University of Birmingham, is looking for applicants to the CS PhD program. I'm putting our advertisement on my blog, in case you (or your students, if you're a professor) are looking for a graduate program -- well, we're looking for students! We have an imminent funding deadline -- please contact us immediately if you are interested!


We invite applications for PhD study at the University of Birmingham.

We are a group of (mostly) theoretical computer scientists who explore fundamental concepts in computation and programming language semantics. This often involves profound and surprising connections between different areas of computer science and mathematics. From category theory to lambda-calculus and computational effects, from topology to constructive mathematics, from game semantics to program compilation, this is a diverse field of research that continues to provide new insight and underlying structure.

  • See our webpage, with links to individual researchers, here:

    http://www.cs.bham.ac.uk/research/groupings/theory/

  • Information about PhD applications may be found here:

    http://www.cs.bham.ac.uk/admissions/postgraduate-research/

  • If you are considering applying, please contact any of us. We will be very happy to discuss the opportunities available.
    • Martin Escardo (topology, computation with infinite objects, constructive mathematics, intuitionistic type theory)
    • Dan Ghica (game semantics, heterogeneous computing, model checking)
    • Achim Jung (mathematical structures in the foundations of computing: logic, topology, order)
    • Neel Krishnaswami (type theory, verification, substructural logic, interactive computation)
    • Paul Levy (denotational semantics, lambda-calculus with effects, nondeterminism, category theory, game semantics)
    • Uday Reddy (semantics of state, separation logic)
    • Eike Ritter (security protocol verification)
    • Hayo Thielecke (abstract machines, concurrent and functional programming, software security)
    • Steve Vickers (constructive mathematics and topology, category theory and toposes)

Thursday, June 19, 2014

Gödel's Ontological Argument

In his paper Jokes and their Relation to the Cognitive Unconscious, Marvin Minsky argued that a sense of humor would be an essential component of a rational agent. His argument went a bit like this: an AI would need to use logic to deduce consequences from its data about the world. However, any sufficiently large database of facts would inevitably contain inconsistencies, and a blind application of ex falso quodlibet would lead to disaster.

The bus timetable, which you believe to be correct, says the bus arrives at 3:35, but it has actually arrived at 3:37. This is a contradiction, and so by ex falso, you think it's a good idea to give me all your money!

Clearly, the correct response to such an argument is to laugh at it and move on. Whence the sense of humor, according to Minsky.

When I first read Anselm's ontological argument for the existence of God, I had the correct Minskyan reaction — I laughed at it and moved on. However, one of the curiosities of mathematical logic is that Kurt Gödel did not laugh at it. He found all of the gaps in Anselm's reasoning, and then — and here we see what being one of the greatest logicians of all time gets you — he proceeded to repair all the holes in the proof.

That is: Gödel has given a proof of the existence of God. So let's look at the proof!

We begin by axiomatizing a predicate

\[ \Good : (i \to \prop) \to \prop \]

To do this, we first give the following auxiliary definitions.

  • $\Godlike : i \to \prop$
    $\Godlike(x) = \forall \phi:i \to \prop.\; \Good(\phi) \Rightarrow \phi(x)$

  • $\EssenceOf : i \to (i \to \prop) \to \prop$
    $\EssenceOf(x, \phi) = \phi(x) \wedge \forall \psi:i \to \prop.\; \psi(x) \Rightarrow \Box(\forall x.\; \phi(x) \Rightarrow \psi(x))$

  • $\Essential : i \to \prop$
    $\Essential(x) = \forall \phi:i \to \prop.\; \EssenceOf(x,\phi) \Rightarrow \Box(\exists y.\phi(y))$

So something is godlike if it has all good properties. A property $\phi$ is an essence of $x$ if $x$ has property $\phi$, and furthermore every property $x$ has is implied by $\phi$. And $x$ is essential if, whenever $\phi$ is an essence of $x$, something with property $\phi$ necessarily exists. (This sounds like medieval theology already, doesn't it?)
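For readers who prefer to see the modality made explicit, here is a small Lean sketch (mine, not from the post) of the usual possible-worlds reading of these definitions: `W` is a type of worlds, `I` a type of individuals, modal propositions are world-indexed, $\Box$ simply quantifies over all worlds (anticipating the S5 discussion below), and `Good` is left as a parameter.

    -- A world-indexed reading of the definitions above (a sketch, not the
    -- post's official semantics).  Modal propositions are predicates on worlds.
    variable {W I : Type}

    abbrev MProp (W : Type) : Type := W → Prop

    -- With a total accessibility relation (S5), box quantifies over all worlds.
    def box (p : MProp W) : MProp W := fun _ => ∀ v, p v

    -- Godlike(x): x has every good property.
    def Godlike (Good : (I → MProp W) → MProp W) (x : I) : MProp W :=
      fun w => ∀ φ : I → MProp W, Good φ w → φ x w

    -- EssenceOf(x, φ): x has φ, and every property x has necessarily follows from φ.
    def EssenceOf (x : I) (φ : I → MProp W) : MProp W :=
      fun w => φ x w ∧ ∀ ψ : I → MProp W, ψ x w → box (fun v => ∀ y, φ y v → ψ y v) w

    -- Essential(x): for any essence φ of x, something with property φ necessarily exists.
    def Essential (x : I) : MProp W :=
      fun w => ∀ φ : I → MProp W, EssenceOf x φ w → box (fun v => ∃ y, φ y v) w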

Now, we can give the axioms describing the $\Good$ predicate.

  1. $\forall \phi:i \to \prop.\; \Good(\lambda x.\lnot \phi(x)) \iff \lnot \Good(\phi)$
  2. $\forall \phi, \psi:i \to \prop.\; (\forall x:i.\; (\phi(x) \Rightarrow \psi(x))) \Rightarrow \Good(\phi) \Rightarrow \Good(\psi)$
  3. $\Good(\Godlike)$
  4. $\Good(\phi) \Rightarrow \Box(\Good(\phi))$
  5. $\Good(\Essential)$

The first axiom says that for every property, exactly one of it and its negation is good. The second says that if a property is a logical consequence of a good property, it is also a good property. The third says that being godlike is good. The fourth says that every good property is necessarily good. Finally, the fifth says that being essential (as opposed to accidental) is good. I feel like I could argue with any of these axioms (except maybe 2), but honestly I'm more interested in the proof itself.

Below, I give Gödel's result, rearranged a bit and put into a natural deduction style. Loosely speaking, I'm using the natural deduction format from Davies and Pfenning's A Judgmental Reconstruction of Modal Logic. In my proof, I use the phrase “We have necessarily $P$” to mean that $P$ goes into the valid context, and I use “Necessarily:” with an indented block following it to indicate that I'm proving a box'd proposition. The rule is that within the scope of a “Necessarily:”, I can only use “necessarily $P$” results from outside of it.

This style is occasionally inconvenient when proving lemmas, so I'll also use the “necessitation rule” from traditional modal logic, which says that if you have a closed proof of $P$, you can conclude $\Box(P)$. (This rule is implied by the Davies and Pfenning rules, but it's handy in informal proofs.)
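In rule form, using a two-zone judgement $\Delta; \Gamma \vdash P$ in which $\Delta$ holds the valid (“necessarily”) hypotheses and $\Gamma$ the ordinary ones (this notation is mine, not Davies and Pfenning's exact syntax), the principles I'm relying on are roughly:

\begin{mathpar}
\inferrule*[right=$\Box$I]
          {\Delta; \cdot \vdash P}
          {\Delta; \Gamma \vdash \Box P}

\and

\inferrule*[right=$\Box$E]
          {\Delta; \Gamma \vdash \Box P \\ \Delta, P; \Gamma \vdash C}
          {\Delta; \Gamma \vdash C}

\and

\inferrule*[right=Nec]
          {\cdot; \cdot \vdash P}
          {\cdot; \cdot \vdash \Box P}
\end{mathpar}

Reading off: a “Necessarily:” block corresponds to $\Box$I (only the $\Delta$ hypotheses survive into the premise), “we have necessarily $P$” corresponds to $\Box$E, and necessitation is just the special case where both contexts are empty.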

Now, their proof system is basically S4, while the ontological argument uses S5. So I'll also make use (as an axiom) of the fact that $\Diamond(\Box(P))$ implies $\Box(P)$ — that is, that possibly necessary things are actually necessary. This saves having to mention world variables in the proof.

Overall, the proof system needed for the proof is a fully impredicative second-order classical S5 — a constructivist may find this a harder lift than the existence of God! (Can God create a consistent axiomatic system so powerful She cannot believe it?) Jokes aside, it's an interesting proof nonetheless.
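As a sanity check on that S5 step, here is a tiny Lean sketch (mine, not from the post) of why it holds in the possible-worlds reading used above: with total accessibility, $\Diamond P$ at a world is just an existential over worlds and $\Box P$ a universal, so “possibly necessary” collapses to “necessary”.

    -- Minimal sketch: under total accessibility (S5), ◇P is an existential
    -- over worlds and □P a universal, so ◇□P implies □P outright.
    theorem s5_dia_box (W : Type) (p : W → Prop) :
        (∃ _ : W, ∀ v, p v) → ∀ v, p v :=
      fun ⟨_, h⟩ => h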

We begin by showing that any Godlike entity only has necessarily good properties.

Lemma 1. (God is good) $\forall x:i.\; \Godlike(x) \Rightarrow \forall \phi:i \to \prop.\; \phi(x) \Rightarrow \Box(\Good(\phi))$

Proof.

  • Assume $x$ and $\Godlike(x)$, $\phi$ and $\phi(x)$.
  • For a contradiction, suppose $\lnot \Good(\phi)$.
    • Then by axiom 1, $\Good(\lambda x.\lnot \phi(x))$.
    • Unfolding $\Godlike$ and instantiating it with $\lambda x.\;\lnot \phi(x)$, we get $\Good(\lambda x.\lnot \phi(x)) \Rightarrow \lnot \phi(x)$.
    • We know $\Good(\lambda x.\lnot \phi(x))$.
    • Hence $\lnot \phi(x)$
    • This contradicts $\phi(x)$.
  • Therefore $\Good(\phi)$.
  • By Axiom 4, $\Box(\Good(\phi))$.

Then, we show that all of a Godlike entity's properties are entailed by being Godlike. I was initially tempted to dub this the “I am that I am” lemma, but decided that “God has no hair” was funnier. The name comes from the theorem in physics that “black holes have no hair” — they are completely characterized by their mass, charge and angular momentum. Similarly, here being Godlike completely characterizes Godlike entities.

If you accept Leibniz's principle, this implies monotheism, as well. Hindus and Buddhists will disagree, because they often deny that things are characterized by their properties. (For differing reasons, Jean-Paul Sartre might say the same, too, as would Per Martin-Löf!)

Lemma 2. (God has no hair): $\forall x:i.\; \Godlike(x) \Rightarrow \EssenceOf(x, \Godlike)$

Proof.

  • Assume $x$ and $\Godlike(x)$.
  • By definition, $\EssenceOf(x, \Godlike)$ = $\Godlike(x) \wedge \forall \psi:i \to \prop.\; \psi(x) \Rightarrow \Box(\forall x.\; \Godlike(x) \Rightarrow \psi(x))$
    1. $\Godlike(x)$ holds by assumption.
    2. Assume $\psi:i \to \prop$ and $\psi(x)$.
      • By Lemma (God is good), $\Box(\Good(\psi))$.
      • So necessarily $\Good(\psi)$.
      • Necessarily:
        • Assume $x$ and $\Godlike(x)$.
          • By definition of $\Godlike$, $\Good(\psi) \Rightarrow \psi(x)$.
          • But necessarily $\Good(\psi)$.
          • Hence $\psi(x)$
        • Therefore $\forall x.\; \Godlike(x) \Rightarrow \psi(x)$
      • Therefore $\Box(\forall x.\; \Godlike(x) \Rightarrow \psi(x))$
  • Therefore $\forall \psi:i \to \prop.\; \psi(x) \Rightarrow \Box(\forall x.\; \Godlike(x) \Rightarrow \psi(x))$
  • Therefore $\EssenceOf(x, \Godlike)$

Next, we show that if a Godlike entity exists, then a Godlike entity necessarily exists.

Lemma 3. (Necessary Existence): $(\exists x:i.\; \Godlike(x)) \Rightarrow \Box(\exists y:i.\; \Godlike(y))$

Proof.

  • Assume there is an $x$ such that $\Godlike(x)$.
  • By axiom 5, $\Good(\Essential)$.
  • By the definition of $\Godlike$, we have $\Essential(x)$.
  • Unfolding $\Essential$, we get $\forall \phi:i \to \prop. \EssenceOf(x,\phi) \Rightarrow \Box(\exists y.\phi(y))$
  • By Lemma (God has no hair), We know $\EssenceOf(x, \Godlike)$.
  • Hence $\Box(\exists y:i.\; \Godlike(y))$

We can now show that the above implication is itself necessary. We could have stuck all of the above theorems inside a “Necessarily:” block, but that would have been an annoying amount of indentation. So I used the necessitation principle, which is admissible in Davies/Pfenning.

Lemma 4. (Necessary Necessary Existence): $\Box[(\exists x:i.\; \Godlike(x)) \Rightarrow \Box(\exists y:i.\; \Godlike(y))]$

Proof. By inlining the proofs of all the earlier lemmas, we can give a closed proof of the necessary existence lemma (i.e., using nothing but the axioms and definitions).

So by necessitation, we get $\Box[(\exists x:i.\; \Godlike(x)) \Rightarrow \Box(\exists y:i.\; \Godlike(y))]$.

Now, we'll show that it's possible that God exists. I would have called it the “no atheists in foxholes” lemma, except that the US military atheists’ association maintains a list of atheists in foxholes.

Lemma 5. (God is possible): $\Diamond (\exists x.\Godlike(x))$

Proof.

  • For a contradiction, suppose $\Box(\forall y:i.\lnot \Godlike(y))$.
    • Assume $y$ and $\Godlike(y)$.
      • Instantiating the contradiction hypothesis, $\lnot \Godlike(y)$.
    • Therefore $\forall y:i.\; \Godlike(y) \Rightarrow \lnot \Godlike(y)$.
    • By Axiom 2 (together with Axiom 3's $\Good(\Godlike)$), $\Good(\lambda y.\lnot \Godlike(y))$
    • By Axiom 1, $\lnot \Good(\Godlike)$.
    • By Axiom 3, $\Good(\Godlike)$.
    • This is a contradiction.
  • Therefore $\lnot \Box(\forall y.\lnot \Godlike(y))$.
  • By quantifier twiddling, $\lnot \Box(\lnot \exists y.\Godlike(y))$
  • By the classical definition of possibility, $\Diamond (\exists y.\Godlike(y))$.

Now we can finish off Gödel's argument!

Theorem 1. (God necessarily exists): $\Box(\exists y:i.\; \Godlike(y))$

Proof.

  • By Lemma (Necessary Necessary Existence), $\Box[(\exists x:i.\; \Godlike(x)) \Rightarrow \Box(\exists y:i.\; \Godlike(y))]$
  • Necessarily $(\exists x:i.\; \Godlike(x)) \Rightarrow \Box(\exists y:i.\; \Godlike(y))$.
  • By Lemma (God is Possible), $\Diamond ((\exists x:i.\; \Godlike(x)))$.
  • Stipulating $\exists x:i.\; \Godlike(x)$:
    • By using the implication, $\Box(\exists y:i.\; \Godlike(y))$.
  • Therefore $\Diamond \Box(\exists y:i.\; \Godlike(y))$.
  • In S5, $\Diamond \Box P$ implies $\Box P$ for any $P$.
  • Therefore $\Box(\exists y:i.\; \Godlike(y))$

When I set out to understand this proof, I wanted to see where the proof needed classical logic — I initially thought it would be funny to constructivize his argument and announce the existence of a computer program which was a realizer for God. It turns out that this is impossible, since this is a deeply classical argument. However, there might still be some computational content in this proof, since Jean-Louis Krivine has looked at realizability interpretations of very powerful classical systems such as ZF set theory. AFAICT, the tricky part will be finding a specific predicate satisfying the axioms for $\Good$.

However, as far as understanding the proof goes, the use of excluded middle actually turns out not to bother me overmuch. After all, proving that it is not the case that God doesn't exist sort of makes a good case for apophatic theology, which is pretty congenial to me as a cultural (albeit firmly atheist) Hindu!

Instead, the part I find most difficult to swallow is the proof of the possible existence of God, because it runs afoul of relevance. Specifically, the proof that $\Godlike(y) \Rightarrow \lnot \Godlike(y)$ does not use its hypothesis. As a result, I find myself quite dubious that being $\Godlike$ is a $\Good$ property (Axiom 3).