## Undecidability from a syntactic perspective

Robinson arithmetic, a finitely axiomatized fragment of arithmetic, is undecidable; therefore, so is classical first-order predicate logic. This is one usual way of proving the undecidability of first-order logic (with an arithmetical language). What do we mean by undecidability here? That we have no algorithm for deciding whether a formula belongs to a certain theory or not, of course, but how is this notion formalized? Via the notion of Gödel numbering, which allows us to translate formulas into numbers, and the notion of recursive sets of numbers, which in arithmetical terms captures exactly the sets whose membership can be algorithmically decided: a theory is decidable if the Gödel numbers of its formulas form a recursive set. These definitions work out very well, but one might wonder whether, in talking about the undecidability of first-order logic, one has to bring numbers – entities which prima facie have no special connection to the syntactic objects which are the formulas of logic – and number-theoretic coding tricks into the picture. One way to avoid this is, of course, to define computability in terms of Turing machines. Another, closer in spirit to Gödel’s original argument, is to employ a theory dealing with syntactic objects – texts – to play the role that arithmetic plays in Gödel’s proof. This is what the Polish logician Andrzej Grzegorczyk has done in his paper Undecidability without Arithmetization.

Specifically, the theory used is Tarski’s theory of concatenation (TC), with the following axioms:

1. $x \ast (y \ast z) = (x \ast y) \ast z$
2. $x \ast y = z \ast u \rightarrow ((x=z \wedge y=u) \vee$ $\exists w ((x \ast w = z \wedge y = w \ast u) \vee (x = z \ast w \wedge w \ast y = u)))$
3. $\alpha \neq x \ast y$
4. $\beta \neq x \ast y$
5. $\alpha \neq \beta$

Concatenation is associative; there are at least two distinct atomic symbols; and any two ways of dividing a text in two share a common beginning and a common ending, differing only in whether the middle part is the end of the first half or the beginning of the second half. That’s it. This seemingly meager theory, however, turns out to be undecidable, and indeed essentially so.
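To see the editor axiom (axiom 2) in action on concrete texts, here is a small Python sketch – the function name and certificate format are mine, not part of the paper – which, given two splittings of the same text, produces the witness $w$ whose existence the axiom asserts:

```python
def editor_witness(x, y, z, u):
    """Given x + y == z + u, return a certificate matching axiom 2:
    ('equal',)    when x == z and y == u,
    ('left', w)   when x + w == z and y == w + u,
    ('right', w)  when x == z + w and w + y == u."""
    assert x + y == z + u
    if x == z:
        return ('equal',)
    if len(x) < len(z):          # z extends x, so z = x + w
        w = z[len(x):]
        return ('left', w)
    w = x[len(z):]               # x extends z, so x = z + w
    return ('right', w)
```

For instance, the splittings $ab \ast ba = abb \ast a$ of the text $abba$ yield the witness $w = b$ in the first disjunct: `editor_witness("ab", "ba", "abb", "a")` returns `('left', 'b')`.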

Before defining what we mean by undecidable, we need some preliminaries. The intended model of TC (playing the role of $\mathbb{N}$) is the set of all (finite, non-empty) words over the alphabet $\{a,b\}$: $a, b, aa, ab, \ldots$. These will be called the standard texts. The term $\alpha$ designates the text $a$, the term $\beta$ the text $b$. Clearly each element of the intended model is designated by some constant term. The role of numerals ($0, S(0), S(S(0)), \ldots$) is played by a class of constant terms called standard names ($\alpha, \alpha \ast \beta, ((\alpha \ast \beta) \ast \beta), \ldots$). We can easily define inclusion of texts, and limited quantification ($\forall x \leq y$) is replaced by quantification over subtexts ($\forall x \subseteq y$). The equivalent of the crucial schema $x \leq \overline{n} \equiv x=\overline{0} \vee x=\overline{1} \vee \ldots \vee x=\overline{n}$ (where, for example, $\overline{3}$ is the numeral $S(S(S(0)))$) is provable, as are elementary facts about equality of texts. (Transposing the argument to a different setting brings out nicely the exact value of each step in the proof.)
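The notions of standard text, standard name, and subtext are concrete enough to spell out in a few lines of Python (the helper names are mine, not Grzegorczyk's; standard names are built associating to the left, as in the examples above):

```python
from itertools import count, product

def standard_texts():
    """Enumerate the standard texts: all finite non-empty words over {a, b},
    in order of length."""
    for n in count(1):
        for letters in product("ab", repeat=n):
            yield "".join(letters)

def standard_name(text):
    """The standard name of a text, built from alpha, beta and *,
    associated to the left."""
    names = {"a": "α", "b": "β"}
    term = names[text[0]]
    for ch in text[1:]:
        term = f"({term}*{names[ch]})"
    return term

def subtexts(t):
    """All contiguous non-empty subtexts of t -- the analogue of the set of
    numbers below a bound, so quantification over them is a finite check."""
    return {t[i:j] for i in range(len(t)) for j in range(i + 1, len(t) + 1)}
```

That `subtexts(t)` is always a finite set is what makes quantification limited to subtexts algorithmically harmless, just like bounded quantification in arithmetic.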

Now we can define the notions corresponding to primitive and general recursiveness. The class $ED$ of elementarily discernible relations on the set of standard texts is the smallest class (a) including the relations of being identical with $a$, being identical with $b$, identity of two texts, and the ternary relation of $z$ being the concatenation of $x$ and $y$, and (b) closed under adding a redundant argument, identifying two arguments, permuting the order of arguments, negation, conjunction, and quantification limited to subtexts (thus $R(x,y,\ldots)$ yields $S(t,y,\ldots) :\equiv \forall x \subseteq t\, R(x,y,\ldots)$). The class $GD$ of generally discernible relations adds to these inductive conditions closure under dual quantification: $R$ is GD if there are two GD relations $S, T$ such that $R(x,y,\ldots) \equiv \exists t\, S(t,x,y,\ldots)$ and $R(x,y,\ldots) \equiv \forall u\, T(u,x,y,\ldots)$. The second condition is clearly equivalent to $\neg R(x,y,\ldots) \equiv \exists u\, \neg T(u,x,y,\ldots)$, so we have a procedure for deciding whether $R$ holds of any given arguments: going through the standard texts one by one, we will sooner or later hit upon a suitable witness playing the role of $t$ or $u$ (WLOG we can assume that dual quantification has been used only once in defining a GD relation, as the last step). The definition of a generally discernible function is the obvious one, and we can easily observe that the counter-image of a GD set under a GD function is GD, and that the class of GD functions is closed under composition. Every GD relation $R$ can be represented by a formula $F$ in a given consistent extension $T$ of TC, that is, each instance $R(c,d,\ldots)$ holds iff $F(N(c),N(d),\ldots)$ is provable in $T$, where $N(c)$ is the standard name of the standard text $c$; and every ED relation $R$ can be strongly represented by a formula $F$, that is, $R$ is represented by $F$ and $\neg R$ by $\neg F$. 
The above-mentioned schema is appealed to in proving the limited-quantification induction step of these representation theorems. (It should be noted here that the original paper only proves this for $T$ true in the standard model; the extension to arbitrary consistent extensions, and the normal form lemma it relies on, are proved in the paper Undecidability and Concatenation.)
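The decision procedure packed into the definition of GD can be sketched directly (a toy illustration, with the decidable relations $S$ and $T$ standing in as Python predicates; the names are mine):

```python
from itertools import count, product

def texts():
    """Enumerate all standard texts: finite non-empty words over {a, b}."""
    for n in count(1):
        for w in product("ab", repeat=n):
            yield "".join(w)

def decide_gd(S, T, args):
    """Decide a GD relation R, given decidable S and T with
    R(args) iff exists t: S(t, *args)  and  R(args) iff forall u: T(u, *args).
    Going through the standard texts one by one, a t with S(t, *args)
    certifies that R holds, while a u with not T(u, *args) certifies that
    R fails; by the two equivalences, one of the two must turn up."""
    for w in texts():
        if S(w, *args):
            return True
        if not T(w, *args):
            return False

# Degenerate toy example: R(x) := "x contains the subtext aa", with
# witnesses that simply ignore their first argument.
contains_aa = lambda witness, x: "aa" in x
```

Here `decide_gd(contains_aa, contains_aa, ("baab",))` terminates immediately at the first candidate witness; the point is only that the search is guaranteed to halt whenever the two equivalences genuinely hold.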

We also need to code the formulas and other syntactic objects of TC within TC (in other words, as sequences of $a$’s and $b$’s). Since our theory is a theory of texts, we don’t have to rely on any specifically arithmetical tricks like the Chinese remainder theorem here; we simply enumerate the symbols used in TC and assign the codes $aba, abba, abbba, \ldots$ to them (variables are written as, for example, $x|||$, so if the code of $x$ is $abbbba$ and the code of $|$ is $abba$, then the code of this variable is $abbbbaabbaabbaabba$). Thus each formula (as a word over the alphabet $(, ), \ast, \alpha, \ldots$) can also play the role of a text (a word over the alphabet $a, b$). Substitution can then be defined in full generality, but Grzegorczyk simplifies matters by assuming, WLOG, that the free variable we’re substituting for occurs only once in the formula, which makes the definition of substitution particularly simple.
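The coding scheme fits in a few lines of Python. The particular enumeration order of the symbols below is a hypothetical choice of mine, picked only so that the worked example above (where $x$ gets $abbbba$ and $|$ gets $abba$) comes out right:

```python
# Coding TC's symbols as texts over {a, b}, following the scheme
# aba, abba, abbba, ...  The order of SYMBOLS is an illustrative
# assumption, not fixed by the paper.
SYMBOLS = ["(", "|", "*", "x", ")", "=", "α", "β"]
CODE = {s: "a" + "b" * (i + 1) + "a" for i, s in enumerate(SYMBOLS)}

def encode(formula):
    """Code a formula -- a word over TC's alphabet -- as a single text
    over {a, b} by concatenating the codes of its symbols."""
    return "".join(CODE[s] for s in formula)
```

With this ordering, `encode("x|||")` yields `abbbbaabbaabbaabba`, agreeing with the example in the text. Note that since every symbol code starts and ends with $a$ and has no other $a$'s, a coded text can be unambiguously decoded.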

The last step is the self-referential one. Assume that the consistent theory $T$ containing TC is GD, i.e. that the set of the codes of the formulas of $T$ is GD (hence so is its complement). Substitution and coding are GD operations; therefore the operation that assigns the code of the formula $f[Code(f)]$ to the code of a formula $f$ containing one free variable is also GD, and so is the counter-image of the complement of $T$ under this operation, i.e. the set $X$ of (the codes of) formulas $f$ such that $f[Code(f)] \notin T$. By the representation theorem, there is a formula $G$ defining this set: for any standard text $t$, $t \in X \equiv G[Code(t)] \in T$. Substituting (the code of) $G$ for $t$, we get $Code(G) \in X \equiv G[Code(G)] \in T$, but by the definition of $X$ also $Code(G) \in X \equiv G[Code(G)] \notin T$ – contradiction. Therefore $T$ is not GD. And since $t \in TC \equiv (A \rightarrow t) \in L1$, where $A$ is the conjunction of the axioms of TC and $L1$ is the set of logically valid formulas of first-order logic (in our language), we get that $L1$ is not generally discernible either. QED.
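The combinatorial core of the final step is the familiar diagonal construction. As a loose analogy only (with predicates on numbers standing in for formulas applied to their own codes), it can be run in a few lines:

```python
def diagonal(predicates):
    """Given a finite list of predicates (standing in, very loosely, for
    formulas with one free variable, indexed by their codes), return the
    diagonal predicate D(n) := not predicates[n](n).  D disagrees with the
    n-th predicate at its own index n -- the same self-application that
    drives the proof: no listed predicate can equal D."""
    return lambda n: not predicates[n](n)

preds = [lambda n: n % 2 == 0, lambda n: n > 10, lambda n: True]
D = diagonal(preds)
```

In the proof, the role of the list is played by the (assumed GD) enumeration of formulas, and the role of `D` by the set $X$; the contradiction arises because the representation theorem forces $X$ itself to appear on the list as some formula $G$.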