Aaron of God of the Machine posed a question in the comments to the post on free will below that struck me as worth repeating here. (I see he’s also blogged it.) Some people have apparently offered the following thought experiment as an argument that we must have the capacity for free choice. You tell a superbeing with a fancypants brainscanner: I’m going to say either “yes” or “no” in a minute: which will it be? Now, the “problem” is supposed to be that, whatever the superbeing predicts, you can, of course, say the opposite — thereby falsifying the prediction. So even someone who knew all the physical antecedents and had as much calculating power as you please (the argument goes) couldn’t accurately predict your behavior — it must be that you’ve got free will.
But that’s not what the hypothetical shows at all. Consider that it would be possible to write an extremely simple computer program with the same properties. It outputs to the screen: “Will I say ‘yes’ or ‘no’ after you hit return?” and then takes “yes” or “no” as input. If the input is “yes” it prints “no” to the screen, and vice-versa. Now, nobody’s going to suppose that this shows something about the surprising capacity of computers for free choice. The weirdness that creates the problem isn’t free will, but the more tractable weirdness of “strange loops” — the same kind of problem of self-reference that gives rise to the “liar paradox” (what is the truth value of “This sentence is false”?). That is, you’re asked to provide an input that reports the outcome of a process that will negate your input. That demand — “generate an input P to a process that is identical to the process’s output, when the process in question is negation” — cannot be satisfied. That’s not about freedom; it’s just logic.
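For concreteness, here’s roughly what such a program might look like (a minimal sketch in Python; the prompt wording and function name are mine, not anything canonical):

```python
# A deterministic program that "falsifies" any prediction of its output.

def contrarian():
    # Ask the user to predict the program's answer.
    answer = input("Will I say 'yes' or 'no' after you hit return? ")
    # Whatever the prediction, output the opposite.
    if answer.strip().lower() == "yes":
        print("no")
    else:
        print("yes")

if __name__ == "__main__":
    contrarian()
```

Run it and your prediction gets falsified every time, yet nothing here looks remotely like freedom: the program is as deterministic as they come.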
What you could use this paradox to show is the impossibility of hostile gods — that is, of two omniscient (yet deterministic) beings with opposed interests. Imagine two such beings playing this old game: one is “evens” and one is “odds,” and each sticks out either one or two fingers. If the sum is even, then “evens” wins, and if the sum is odd, then “odds” wins. We used to call it “one, two, three, shoot” when I was a kid. Here we get a cycle, because each is following an algorithm that takes as input the action the other player will take, which itself takes as input the result of the first algorithm. This, too, is a kind of incoherent demand: we demand that a finite process contain itself. (What about an infinite process, since we’re supposed to be talking about gods? Mathematicians have the idea of a supertask: an infinite number of operations performed in a finite time. These are tricky, and I don’t know what to say about them. But then… who wins?)
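To see why the regress can’t bottom out, here’s a toy version of the two “gods” (the function names and the one-or-two-finger strategies are my own illustration, in the same Python spirit as above):

```python
# Each player's "perfect" strategy simulates the other's strategy,
# which simulates the first one's, and so on without end.

def evens_move():
    # "Evens" wants the sum even: match whatever parity odds will choose.
    return 1 if odds_move() == 1 else 2

def odds_move():
    # "Odds" wants the sum odd: choose the opposite parity of evens' move.
    return 2 if evens_move() == 1 else 1

try:
    evens_move()
except RecursionError:
    print("Neither strategy terminates: the prediction regress never bottoms out.")
```

Each function is perfectly well defined, but the mutual call never returns: the demand that each finite prediction procedure contain the other (and hence itself) simply can’t be met.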
Leaving all that aside, I think a lot of apparent puzzles we’re familiar with — the liar paradox, or Russell’s paradox about the “set of all sets that do not contain themselves” — only seem spooky when we don’t stop to ask: what process am I being asked to carry out? If we think that parsing these problems is about finding out whether some thing — a proposition — has a certain property “truth,” or whether another sort of thing, a “set,” really “exists,” we get all sorts of confused. That sense of paradox dissolves when we see ourselves as asking whether we can perform certain kinds of operations on the statements or symbols that yield determinate results. But it should also have some weird effects on our intuitive readings of all kinds of ordinary, non-paradoxical sentences.
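In that spirit, the liar sentence itself can be read as a procedure that never halts, rather than a proposition with a hidden truth-value property (again, a sketch of mine, just to make the “what process am I carrying out?” reading vivid):

```python
# "This sentence is false," read as an evaluation procedure: to get the
# sentence's value, first get the sentence's value, then negate it.

def liar():
    return not liar()  # evaluating the sentence requires evaluating the sentence

try:
    liar()
except RecursionError:
    print("The evaluation never yields a determinate result.")
```

Nothing mysterious is lurking in the sentence; the operation we’re asked to perform just never produces a determinate output.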