I think that determinism doesn't entail predictability, even in principle, unless you
are going to help yourself to some principles that are so outlandish
that you might as well invoke the supernatural.
Roughly speaking, a system is deterministic if every event necessarily follows from prior events. Or, to put it another way, every future state of the system is completely determined by its initial conditions together with the laws that govern it.
Intuitively, it seems to follow that every future state of such a system could be predicted, albeit perhaps only in principle. For example, imagine a snooker table at the beginning of a game and suppose the precise location, mass and other physical variables of all the balls are known. Then, given the laws of physics and the momentum and path of the cue ball after cueing off, we might suppose that some supercomputer could show us a picture of where every ball would be on the table at any future time.
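To make that supposition concrete, here is a minimal sketch in Python, not real snooker physics but a toy stand-in: a single ball bouncing around a frictionless rectangular table, with made-up dimensions, time step and initial conditions. The only point it illustrates is that, given exact initial conditions and a fixed update rule, every future state follows, so two runs of the "supercomputer" agree exactly.

```python
# A toy deterministic system (not real snooker physics): one ball on a
# frictionless 1 m x 2 m table, bouncing elastically off the cushions.
# Given exact initial conditions, every future state is fully determined,
# so two independent runs of the same rule produce identical output.

def step(state, dt=0.001):
    """Advance the state (x, y, vx, vy) by one time step."""
    x, y, vx, vy = state
    x, y = x + vx * dt, y + vy * dt
    if not 0.0 <= x <= 1.0:                 # bounce off the short cushions
        vx, x = -vx, min(max(x, 0.0), 1.0)
    if not 0.0 <= y <= 2.0:                 # bounce off the long cushions
        vy, y = -vy, min(max(y, 0.0), 2.0)
    return (x, y, vx, vy)

def predict(initial, steps):
    """The 'supercomputer' prediction: just iterate the rule forward."""
    state = initial
    for _ in range(steps):
        state = step(state)
    return state

initial = (0.2, 0.5, 0.3, -0.7)             # exact position and velocity
print(predict(initial, 100_000))            # run 1
print(predict(initial, 100_000))            # run 2: exactly the same state
```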
So if the whole natural world were considered as a deterministic system, we might conclude that the future is completely predictable. But there seem to be limits on predictability by any method of computation conceived of so far.
First, we would need perfectly precise measurements, or the error in our predictions would grow so fast as to make them uselessly inaccurate. Suppose one such measurement had a value that turned out to be a non-computable irrational number. If it is truncated anywhere, that introduces error; but if it is not truncated, we have an infinite amount of information to deal with.
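To see how quickly truncation error can swamp a prediction, here is another hedged sketch in Python. It substitutes the logistic map x → 4x(1 − x) for the snooker table, since it is a simple deterministic rule known to amplify small differences, and the particular starting value and ten-digit truncation are just stand-ins for a real measurement and its finite precision.

```python
# A sketch of the measurement problem: run the same deterministic rule from
# the 'true' initial value and from a truncated measurement of it, then
# print the gap between the two predictions as it grows.

from decimal import Decimal, getcontext

getcontext().prec = 60                      # ample working precision

def iterate(x, steps):
    """Iterate the deterministic logistic map x -> 4x(1 - x)."""
    for _ in range(steps):
        x = 4 * x * (1 - x)
    return x

true_x0 = Decimal("0.123456789012345678901234567890")  # the 'true' value
measured = Decimal(str(true_x0)[:12])                   # truncated measurement

for n in (10, 25, 50):
    gap = abs(iterate(true_x0, n) - iterate(measured, n))
    print(f"after {n:2d} steps the predictions differ by about {float(gap):.2g}")
```

The exact figures don't matter; the point is that a discrepancy of roughly one part in a hundred billion grows to swamp the prediction entirely within a few dozen steps.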
Second, even if the supercomputer is as computationally powerful as we like, it is still part of the natural world, so its information state is manifest physically. That physical state perturbs the very system being predicted, so the computer must also model the effects of its own modelling, and then the effects of modelling those effects, leading to an infinite regress. Supposing the computer is outside the natural world doesn't help either, as we then face the classic dualist problem of how it could interact with the world to take measurements whilst remaining completely separate.
These are reasons why, although we might have a strong intuition that with enough information, time and computing power every subsequent state of a deterministic system could be specified to within any given margin of error, this is not the case. Even granting a classical deterministic physics model of the world, a precise prediction of a future state of the system is not possible, even in principle.
This thought experiment predates modern computing by a long way. As far back as 1814, Pierre-Simon Laplace, in the introduction to his Essai philosophique sur les probabilités, postulated what would later be known as Laplace's demon: an intellect with enough calculating ability and knowledge to predict the future.
Perhaps a demon is a more apt characterization than a supercomputer. Given the arguments considered above, any such entity looks as though it would have to be a supernatural agent, which is ironic, given that this is one thought experiment that hard determinists use to deny free will.
I have reposted this as the previous version was appearing as a draft on my dashboard, so apologies if it appears to duplicate!
Thursday, April 16, 2020
Sunday, April 12, 2020
The Necessity of Contingency
First, a confession: I haven't read Quentin Meillassoux's After Finitude. I have chewed through Hyperstructuralism's Necessity of Contingency (Chiesa, 2015) and The Necessity of Contingency or Contingent Necessity: Meillassoux, Hegel, and the Subject (Van Houdt, 2011), though. This wasn't just to remind myself how opaque Continental philosophy can be; I had a nagging sense that Meillassoux had over-egged things, but couldn't quite put my finger on exactly why. So thanks to Peter Wolfendale for the following encapsulation: "The question that remains for critics of Meillassoux’s thesis is thus whether it conflates the epistemic contingency of nomological necessity with the logical necessity of nomological contingency." Superb!