Okay, three weeks of sheer suspense should be enough, it’s about time for some actual content.

A philosopher/logician I have long wanted to familiarize myself with is Jaakko Hintikka, one of the originators of game semantics – a semantics for logic based on game theory. So I picked up Game-Theoretical Semantics (edited by Esa Saarinen) at my library and plunged in. It turns out Hintikka has some interesting things to say about both formal logic and natural language. I originally intended to make a single post, but it seems best to split it into two parts, one dealing with the semantics itself, the other with applying it to natural language.

As you see, I am terrible at writing introductory fluff, so let’s get down to business. This post should be understandable to anyone with at least a cursory knowledge of formal logic. Just to refresh your memory: v, &, ∃x, ∀x and ~ correspond to, respectively, “or”, “and”, “there is an x such that”, “for all x” and “not”. The basic idea behind game semantics is quite straight-forward: the truth-value of a sentence is determined by the outcome of a tug-of-war between ‘Myself’ (trying to prove the sentence true) and ‘Nature’ (trying to prove the sentence false). We fix a domain of individuals D and to every sentence S, we associate a recursively defined game G(S). If the sentence is of the form:

- S1 v S2, I choose S1 or S2 and the game is continued with respect to it
- S1 & S2, Nature chooses S1 or S2 and the game is continued with respect to it
- ∃x S(x), I choose a member of D, give it a proper name if it does not already have one (b, for example), and the game is continued with respect to S(b)
- ∀x S(x), Nature chooses a member of D, gives it a proper name if it does not already have one (b, for example), and the game is continued with respect to S(b)
- ~S, the roles of the two players are switched and the game is continued with respect to S

The rules are called, naturally enough, (G.v), (G.&), (G.E), (G.U), (G.~) respectively. Implication can be translated into disjunction and negation, or a new rule can be added (in F → G, I choose G or ~F). Likewise, we can abandon the negation rule in favour of rewriting sentences using De Morgan’s laws.

Using this process, we ultimately arrive at an atomic sentence (a sentence like F(a,b) which we do not further analyze logically). If it is true, I have won and Nature has lost; if it is false, vice versa (rule (G.A)). Finally, the original sentence S is true iff I have a winning strategy in G(S). (In game theory, a strategy is a function describing which option one is going to choose given the previous ones; a winning strategy is naturally a strategy which guarantees you victory.)
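Over a finite domain, these rules can actually be played out exhaustively. Here is a minimal sketch of such an evaluator (the tuple encoding of formulas and all names are mine, not Hintikka’s): a sentence counts as true exactly when Myself has a winning strategy in the associated game.

```python
# A sketch of the game rules over a finite domain; the encoding of formulas
# as nested tuples and the function names are my own, not Hintikka's.

def myself_wins(f, domain, interp, env=None, swapped=False):
    """True iff the verifier ("Myself", up to role swaps) has a winning
    strategy in the game G(f)."""
    env = env or {}
    op = f[0]
    if op == 'atom':                          # game over: truth decides the winner
        _, pred, args = f
        holds = interp[pred](*(env[v] for v in args))
        return holds != swapped               # after a swap, Nature profits from truth
    if op == 'not':                           # rule (G.~): switch roles
        return myself_wins(f[1], domain, interp, env, not swapped)
    if op in ('or', 'and'):                   # rules (G.v) and (G.&)
        i_move = (op == 'or') != swapped      # I move in 'or' unless roles are swapped
        outcomes = [myself_wins(sub, domain, interp, env, swapped) for sub in f[1:]]
        return any(outcomes) if i_move else all(outcomes)
    if op in ('exists', 'forall'):            # rules (G.E) and (G.U)
        _, var, body = f
        i_move = (op == 'exists') != swapped
        outcomes = [myself_wins(body, domain, interp, {**env, var: d}, swapped)
                    for d in domain]
        return any(outcomes) if i_move else all(outcomes)
    raise ValueError(f'unknown connective: {op}')

# ∀x ∃y ~(x = y): true over a domain with two individuals
two_diff = ('forall', 'x', ('exists', 'y', ('not', ('atom', 'eq', ('x', 'y')))))
print(myself_wins(two_diff, [0, 1], {'eq': lambda a, b: a == b}))   # True
```

Since every play of such a game is finite and both players have finitely many options, checking for a winning strategy reduces to this exhaustive back-and-forth of `any` (my moves) and `all` (Nature’s moves).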

Thus we have defined a semantics for first-order predicate calculus which determines which sentences are true and which are false. That this definition is reasonable should be obvious. Assuming the Axiom of Choice, one can easily use induction to prove that this definition of truth is equivalent to Tarski’s (truth = satisfaction by every assignment of objects to variables). The problematic step is where we have to jump from “for every a in D, I have a winning strategy for G(S(a))” to “I have a winning strategy for G(∀x S(x))” – without the Axiom of Choice, there needn’t be any, and sentences true according to Tarski would no longer have to be true according to Hintikka, resulting in a non-classical logic.

In fact, the easy modifiability to deal with non-classical logics is one of the main advantages of game semantics. For one thing, we might require the strategies to be computable functions. For another, the semantics works for a more general theory than classical logic if we relax a condition we have been tacitly assuming all along: the requirement of perfect information, i.e. that both players are informed about the previous choices made by their adversary (and remember their own) and that their subsequent choices may depend on them. The theory we get when we no longer assume such a dependence is called finite partially-ordered quantification or branching quantification. A typical formula with a branching quantifier might look like this:

(∀x)(∃y)
(∀z)(∃w)  F(x,y,z,w)
This implies that the choice of y depends only on x and the choice of w only on z (the branching may of course be further nested). We can make this dependence explicit by using the so-called Skolem functions f, g:

(∃f)(∃g)(∀x)(∀z) F(x, f(x), z, g(z))
This sentence is not equivalent to any sentence of regular first-order predicate logic. On the other hand, the sentence:

(∀y)
(∃x)  F(x,y)

is equivalent simply to:

(∃x)(∀y) F(x,y).

This, obviously, is an exception rather than the norm – it can be proven that (validity in) this logic of finite partially-ordered quantification is recursively equivalent to (validity in) second-order logic, i.e. there is an algorithm which turns any sentence of second-order logic (where we can quantify over predicates in addition to individuals) into a sentence of this “branching” logic that is valid just in case the original is. And second-order logic is, of course, a much stronger theory than first-order logic.
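Over a finite domain, the Skolem-function reading can be checked by brute force. The sketch below (notation and names are mine) contrasts the branching reading of ∀x ∃y ∀z ∃w F(x,y,z,w), which demands Skolem functions f and g with y = f(x) and w = g(z), with the linear reading, in which w may additionally depend on x and y; already on a two-element domain the two come apart.

```python
from itertools import product

def henkin_true(F, domain):
    """(forall x exists y / forall z exists w) F(x,y,z,w): true iff there
    are Skolem functions f, g with F(x, f(x), z, g(z)) for all x, z."""
    all_functions = [dict(zip(domain, values))
                     for values in product(domain, repeat=len(domain))]
    return any(all(F(x, f[x], z, g[z]) for x in domain for z in domain)
               for f in all_functions for g in all_functions)

def linear_true(F, domain):
    """forall x exists y forall z exists w F(x,y,z,w):
    here w may depend on x and y as well as z."""
    return all(any(all(any(F(x, y, z, w) for w in domain)
                       for z in domain)
                   for y in domain)
               for x in domain)

# F(x,y,z,w) := (w = x): the linear sentence holds (just take w = x), but
# no Skolem function g(z) can track x, which it is not allowed to see
F = lambda x, y, z, w: w == x
print(linear_true(F, [0, 1]), henkin_true(F, [0, 1]))   # True False
```

The search over all |D|^|D| candidate Skolem functions is of course only feasible for toy domains, but it makes the informational independence of w from x concrete.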

What implications all this has for natural language (English in particular) will be discussed next time.

November 10, 2008 at 10:20 pm

Hi Preno,

I can’t quite see how we can abandon the negation rule?

Also, how is

(∀y)
(∃x)  F(x,y)

equivalent to

(∃x)(∀y) F(x,y)

?

Take F to be =, and assume there are two or more objects. Then neither you nor nature has a winning strategy for the first formula. But for the second formula nature has a winning strategy (just pick a different object from you.)
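This claim can be checked directly: in the branching sentence Myself must fix x without seeing Nature’s y, and Nature likewise commits to y blind, so each player’s strategy collapses to a constant. A quick sketch (names mine), assuming F and a finite domain as in the example:

```python
def myself_wins_branching(F, domain):
    # my x may not depend on Nature's y, so my strategy is just a constant
    return any(all(F(x, y) for y in domain) for x in domain)

def nature_wins_branching(F, domain):
    # Nature's y is likewise chosen without seeing x
    return any(all(not F(x, y) for x in domain) for y in domain)

eq = lambda x, y: x == y
# with two objects neither player has a winning strategy...
print(myself_wins_branching(eq, [0, 1]), nature_wins_branching(eq, [0, 1]))  # False False
# ...while with a single object Myself wins, matching (Ex)(Ay) F(x,y)
print(myself_wins_branching(eq, [0]))   # True
```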

November 11, 2008 at 2:11 am

Hi Andrew,

If we want to abandon the negation rule, we just have to add rules to push the negation further in, like:

(N.E) If S’ is of the form ~(∃x) S(x), it is to be rewritten as (∀x) ~S(x).

We also have to rewrite universal quantification similarly and use de Morgan’s laws on ~(A&B) and ~(AvB). Finally, we add a rule to the effect that:

(G.~A) If the game has reached the sentence ~A, A atomic, I have won and Nature has lost if A is false; if A is true, vice versa.
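Spelled out over a tuple encoding of formulas, the rewriting might look like the following sketch (the encoding and names are mine); ~ ends up standing only in front of atomic sentences, where (G.~A) decides the winner:

```python
# A sketch of the rules that eliminate (G.~): (N.E), its dual for the
# universal quantifier, and de Morgan's laws, pushing ~ inward until it
# sits only in front of atomic sentences.

def push_negation(f):
    op = f[0]
    if op == 'not':
        g = f[1]
        if g[0] == 'atom':                     # ~A, A atomic: stop, (G.~A) applies
            return f
        if g[0] == 'not':                      # ~~S  =>  S
            return push_negation(g[1])
        if g[0] == 'exists':                   # (N.E): ~(Ex)S  =>  (Ax)~S
            return ('forall', g[1], push_negation(('not', g[2])))
        if g[0] == 'forall':                   # dual rule: ~(Ax)S  =>  (Ex)~S
            return ('exists', g[1], push_negation(('not', g[2])))
        if g[0] == 'and':                      # de Morgan: ~(A&B)  =>  ~A v ~B
            return ('or',) + tuple(push_negation(('not', s)) for s in g[1:])
        if g[0] == 'or':                       # de Morgan: ~(AvB)  =>  ~A & ~B
            return ('and',) + tuple(push_negation(('not', s)) for s in g[1:])
    if op in ('and', 'or'):
        return (op,) + tuple(push_negation(s) for s in f[1:])
    if op in ('exists', 'forall'):
        return (op, f[1], push_negation(f[2]))
    return f

# ~(Ex)P(x)  =>  (Ax)~P(x)
print(push_negation(('not', ('exists', 'x', ('atom', 'P', ('x',))))))
```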

As for the two sentences, the first one is true iff I have a winning strategy, which I have iff there is a single x such that F(x,y) for all y (in this case, iff there is only one object), which there is iff the second one is true. The negations of the two formulas, however, aren’t equivalent, afaict, which is in line with the failure of the law of the excluded middle in Hintikka’s logic. This corresponds to the fact that the Gale-Stewart theorem, which guarantees one of the players a winning strategy, in general only works (as you have just seen) for games of perfect information.

November 11, 2008 at 11:34 am

Ok, but that’s not a particularly natural sense in which they are equivalent. For example the biconditional B ↔ L (defined in the natural way) is not true in almost every model, which would strike me as a good reason not to call them equivalent (by B I mean the branching formula, and L the linear one.)

Also, I don’t think its quite as simple to get rid of negation as you make out. In addition to (N.E) you need rules that will eliminate it from the branching formulae and I can’t see how you would do that.

November 11, 2008 at 1:21 pm

Well, the natural way to define B ↔ L would be (B → L) & (L → B), right? And that’s true in every model. If there is a single x such that F(x,y) for all y, then I can always win the game for the branching sentence, and vice versa: if I can always win the game for the branching sentence, that means that whatever y Nature chooses, F(x,y) is true for a suitable choice of x independent of y, which is what the linear sentence says.

If you want to define the biconditional using negation (say, (L v ~B) & (B v ~L)), then you’ll probably get something weird, but I think the natural definition is the conjunction of two conditionals.

The negation elimination was only meant for regular first-order predicate logic, not the branching logic. (Or at least Hintikka didn’t outline such a rule for the branching logic anywhere and as you say it’s difficult to see what they could look like.) I see I didn’t make that clear enough in the post.

November 11, 2008 at 2:09 pm

By the natural way I meant (L → B) & (B → L) (which I assume is equivalent to your way.)

This certainly isn’t true in every model. Let’s stick with the example I had above: F is identity, the domain contains 2 or more elements. You don’t have a winning strategy for (L → B) & (B → L): if nature chooses the right conjunct on the first go, you can either choose to play on L (not a good idea, nature has a winning strategy for that) or on ~B. On ~B, like B, neither has a winning strategy. So you don’t have a winning strategy whichever you choose.

November 11, 2008 at 2:41 pm

Note also, reading the conditional that way, (L v ~B) & (B v ~L) is equivalent (in the strong sense) to the natural definition.

November 11, 2008 at 2:54 pm

I’m not so sure they are equivalent if you don’t assume the law of the excluded middle. In your example, B, ~B and L are false, and ~L is true. So B → L and L → B are both true, but L v ~B, and hence the whole conjunction, isn’t. B v ~L, on the other hand, is. The equivalence of F → G to G v ~F would entail the law of the excluded middle, so that cannot be the intended interpretation of → for the branching logic.

At any rate, I didn’t just make this example up, Hintikka explicitly says that they are “equivalent”. And I’d say that makes sense, because they are both saying the same thing: one entails the other and vice versa.

November 11, 2008 at 3:28 pm

I’m not assuming the law of excluded middle, I’m just applying the definitions.

Firstly you write

“In your example, B, ~B and L are false, and ~L is true.”

That’s not true: B and ~B are neither true nor false.

Secondly, according to your own definition:

“in F → G, I choose G or ~F”.

which makes my definition equivalent to yours.

Thirdly you write:

“The equivalence of F → G to G v ~F would entail the law of the excluded middle, so that cannot be the intended interpretation of → for the branching logic.”

Also not true. How did you get that? If no one has a winning strategy for F, then no one has a winning strategy for F v ~F, so it is neither true nor false, and the same goes for F → F.

November 11, 2008 at 3:38 pm

Look, I don’t need to assume they’re equivalent. Suppose the biconditional is (B → L) & (L → B). Suppose your rule for conditionals (“in F → G, I choose G or ~F”).

Assume, again, F is identity, the domain contains 2 or more elements. You don’t have a winning strategy: if nature chooses the left conjunct on the first go, then according to your rule for conditionals, you can either choose to play on L (not a good idea, nature has a winning strategy for that) or on ~B. On ~B, like B, neither has a winning strategy. So you don’t have a winning strategy whichever you choose. (Note: this was essentially my original argument.)

November 12, 2008 at 12:29 am

Well, you have a genuine point there, but I should also clarify that the rule for → (like the alternative negation rules) is only supposed to work for classical logic. This rule is supposed to be a “good first approximation”, but the full treatment of conditionals, which I didn’t go into here (and which Hintikka goes into in a separate paper), is more complicated. Basically, if I claim that A → B, then I have no obligation to attempt to verify B if A is false. So, we first play G(A) with roles reversed, and if Nature wins, then I am forced to attempt to prove B, but my strategy can depend on the strategy Nature chose in G(A). I.e. I win a play if the outcome of G(A) is false, or if it is true and I manage to transform her strategy into a strategy for B and win G(B). My choice of strategy for the conditional can thus be represented as a choice of strategy for A plus a functional which transforms Nature’s strategies for A into my strategies for B. The conditional is then true iff there is such a pair that wins against any pair of Nature’s strategies for A and B. (To deal with nested conditionals, you’d have to consider higher-order functionals.)

I must be overlooking something, as I don’t understand what effect the dependence of my strategy for B on Nature’s strategy for A is supposed to have (as both players know whether A has a winning strategy, and I don’t see how it matters which one Nature chooses). It doesn’t even appear to include A → A as a tautology and doesn’t really fix the problem you’re pointing out. Even by this definition, B ↔ L isn’t true. (In my previous comments, I was operating under the assumption that one only has to verify the consequent if the antecedent has been found to be true, which would make the biconditional true and wouldn’t collapse → into a classical material implication. Now that I’ve gone through the rules for the conditional again, I can’t see what the difference between the amended definition and the classical one is supposed to be. The conditional still seems to be true iff I can falsify A or verify B, and false iff I can verify A and falsify B.)

So I’m assuming that Hintikka was saying that they are “equivalent” in the sense that they are true in the same models, that B entails L and L entails B, but that’s just my guess. The PlanetMath article puts it as “it [B] can be written as an ordinary first-order formula”.