Abstract: Nessus's essays inspired me to go and read Max Tegmark's original essay. His theory that physical existence is equivalent to mathematical existence is certainly provocative, but it's also flawed, in ways I'll explain below. In particular, the part of Tegmark's theory that aspires to be falsifiable is either meaningless or already falsified, depending on how you look at it.
(I'm going to be using the phrase "mathematical object" a hell of a lot, so I'm just going to call them "mobjects" from now on. Apologies in advance if this word sounds really dumb.)

The problem that Tegmark's theory is designed to solve is the problem of why our universe has this particular set of natural laws and physical constants rather than any of the infinitely many possible alternatives. Theories of physics, such as general relativity, model the universe as a kind of mobject (in this case a four-dimensional differential manifold with a Lorentzian metric). Tegmark proposes that in fact, the universe
is a mobject. From the point of view of mathematics, there are no 'privileged' mobjects - if they're mobjects at all then they exist, and that's all there is to it. So far so uncontroversial.
In classical, deterministic physics, there is supposed to be precisely one physically existing universe, whereas in modern interpretations of quantum theory, universes form a vast family or 'ensemble'. However, in both cases, there's supposedly a meaningful distinction between a mobject that describes a physically existing universe and one that does not. Tegmark's central idea is to do away with this distinction, and see what follows.
What follows is that we could, a priori, have found ourselves in any member of the class of mobjects containing sentient observers. Effectively, then, our universe is a sample of one drawn from a probability distribution over the 'space' of possible universes (i.e. mobjects) containing minds. From this, it would follow that the universe we see around us should be 'maximally generic' given the constraint of containing minds. In other words, there should be no deep, profound patterns observable in the universe that can't in some way be accounted for by the necessity that the universe should support intelligent life.
The first and most obvious objection is that there is no parameter space of
all mobjects, and no way of assigning them the 'prior probabilities' necessary to do the kind of Bayesian calculations Tegmark needs to make his predictions. To be sure, one can sometimes classify mobjects in a parameter space, but the things you're classifying have to be sufficiently similar to one another to make this meaningful. For instance, one can represent a triangle as a triple of positive real numbers (a,b,c) such that neither a nor b nor c is greater than the sum of the other two. Hence, the parameter space of triangles turns out to be a subset of
R³. Less trivially, I suspect one could also construct some kind of parameter space of smooth closed loops (of which triangles are limiting cases), though such a space would need to be infinite-dimensional. In all cases, though, a parameter space is a mobject designed to classify some distinct family of mobjects. However, there is no 'mobject of all mobjects' (because mobjects can be represented as sets, and there's no set of all sets). Since it doesn't exist, one can't slap a probability distribution on it. And even if you could, what probability distribution would you choose? It couldn't possibly be uniform, and there's no obvious (canonical) choice of non-uniform distribution.
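To make the triangle example concrete, here's a quick Monte Carlo sketch of my own (it isn't part of Tegmark's argument, and I've restricted the side lengths to the unit cube just so there's something to sample uniformly). It checks which points (a,b,c) land inside the triangle parameter space; the valid region turns out to fill exactly half the cube:

```python
import random

def is_triangle(a, b, c):
    # (a, b, c) lies in the parameter space iff no side
    # exceeds the sum of the other two
    return a <= b + c and b <= a + c and c <= a + b

random.seed(0)
trials = 100_000
hits = sum(
    is_triangle(random.random(), random.random(), random.random())
    for _ in range(trials)
)
print(hits / trials)  # ≈ 0.5: the triangle region fills half the unit cube
```

Note that this only works because triangles form a well-behaved family; the whole point of the objection is that no such sampling procedure exists for mobjects in general.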
The second and more damning objection is that the universe has an unbelievably large amount of 'surplus information' over and above what's needed to support intelligent life. For instance, Turing machines (and hence all computers) can be interpreted as mobjects. A computer running a sufficiently accurate simulation of a human in a warm, sparsely furnished, well-lit, windowless, doorless room would thus qualify as a mobject supporting intelligent life. Tegmark's theory is thus unable to explain why the universe doesn't just consist of one person and a warm, sparsely furnished, well-lit, windowless, doorless room. Such a simulation wouldn't even need to take into account quantum and relativistic effects.
So, to sum up, the problems are:
(1) There's no parameter space of mathematical objects.
(2) Even if there were, we wouldn't be able to put a probability distribution on it.
(3) Even if we could, it would turn out (with overwhelming likelihood) that a random universe with intelligent life would be Vastly simpler than our universe.
Appendix: I'm glossing over a lot of other peripheral problems one might have with Tegmark's theory. For instance:
* What exactly is a mathematical object? Even if your preferred answer is just "a set", then there's the whole quagmire of advanced set theory to wade through, with its confusing grey areas between existence and non-existence. (E.g. Some set theorists propose that if we look sufficiently far up the chain of cardinal numbers, we'll eventually find one that's 'measurable'. Other set theorists disagree. This dispute cannot be resolved by proof, for we have proved that if set theory can be done consistently at all, then it can be done consistently with or without 'measurable' cardinals.)
It seems odd to want to exacerbate this sort of thing by making it a matter of
physical (rather than just mathematical) existence.
* There's no rigid boundary between sentient life and not-quite-sentient life. No precise definition can be made of what it means for a mathematical object to 'contain a self-aware substructure'.
* Can the universe necessarily be conceived as a mathematical object at all? Is it not possible (at least in the sense of non-contradiction) that the natural laws of the universe 'go on forever' getting more and more complicated, instead of stopping at, say, superstring theory?
Aftermath:
Now, I suspect one could cut the empirical part of Tegmark's theory free from the rest of it (in other words, ditch the idea of predicting the properties of natural laws on the basis of their being random) but still maintain that physical existence and mathematical existence are equivalent. However, in doing this, you lose any claim to have explained why the universe is like this rather than like that.
In metaphysical matters such as these, I think it's a good idea to keep in the back of one's mind the following 'emergency reality check'. Define "Empirical Physical Existence" (EPE) as the property of something being directly observable 'out there' in 'the real world'. Going back to my earlier example, whether or not we decide to say that unicorns have PE (physical existence) in some possible world or other, we *know* that horses have EPE and unicorns don't. Whether or not PE is equivalent to ME (mathematical existence), we know for sure that EPE and ME are distinct.
Tegmark's theory was designed to solve the traditional problem "why is the universe like this rather than like that?", but what he seems to be ignoring is that this is precisely what science is all about. Progress in physics consists of taking batches of theories and/or experimental data that seem arbitrary or mysterious and finding hidden patterns that explain why they have that form. Once this is done, you can always go on to ask "but why is the hidden pattern like this instead of like that?", and if you're lucky, you can find even deeper patterns that help you explain it. You'll never be able to explain everything, but that's an unrealistic goal to have in the first place.
Having explored Tegmark's website, it appears that he's an extremely smart guy, and a real cosmologist rather than just a person with a 'trippy' armchair theory they're trying to peddle.
Also, on reflection, I think it might just be possible to defend his theory against my second objection (the waters here now seem a lot muddier than before).
Although an explicit description of our universe as it is now would require a fantastic amount of information, it would take very much less information if all we wanted to do was determine which mathematical object the universe is. This is analogous to the fact that, whereas it takes infinitely many bits to write down the complete binary expansion of pi, hardly any space is needed to write down an algorithm for computing pi. The information determining our universe would consist of the laws of physics together with an 'initial condition'1. Presumably, once the laws are specified, almost any randomly chosen initial condition would suffice to give a universe very much like our own, with stars and planets. Also, it wouldn't be that unreasonable to suppose that the overwhelming majority of possible 'initial conditions' lead to universes containing life and minds.
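The pi analogy can be made literal. The 'unbounded spigot' algorithm below (a standard published algorithm due to Gibbons, not something of my own devising) is only a few lines of Python, yet it determines every one of pi's infinitely many digits:

```python
from itertools import islice

def pi_digits():
    """Gibbons's unbounded spigot: yields the decimal digits of pi forever."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n  # the next digit is settled; emit it
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

print(list(islice(pi_digits(), 10)))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

A short program, an infinite object: that's the sense in which 'which mobject the universe is' might take far less information to specify than the universe's explicit contents.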
Hence, all that extra ('surplus') information out there in the universe began life as nothing more than randomness. Now, random information is still information, but we can easily eliminate it by assuming (in a Tegmarkian way) that there is a physically existing universe for each possible set of 'initial conditions'. (This is analogous to the fact that a computer program that counts upwards from 1, printing out each number as it goes, can be written in fewer bits than one that simply prints out N (where N is some enormous, arbitrarily chosen natural number) - and yet both programs eventually print N). Remarkably enough, then, the real universe may turn out to need vastly less information to specify it than would my 'man in an empty room' universe. Unless, of course, one could show that a 'man in an empty room' can emerge from random initial conditions (which seems rather implausible). (Understatement.)
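The counting-program analogy in the parenthesis above can be sketched directly (N here is just a small placeholder value of my choosing, standing in for the "enormous, arbitrarily chosen" one):

```python
def count_up():
    # a fixed, tiny program that enumerates every natural number
    i = 1
    while True:
        yield i
        i += 1

# A program that hard-codes an enormous N must spell out all of N's digits,
# so its length grows with N. The enumerator above eventually produces N
# anyway, at no extra cost in program length.
N = 123_456  # placeholder for "some enormous, arbitrarily chosen natural number"
for value in count_up():
    if value == N:
        break
print(value)  # 123456
```

This is the Tegmarkian move in miniature: generating *all* the 'initial conditions' can be cheaper than specifying any particular one.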
So, it's conceivable that our universe is roughly 'as simple as it could have been' given the emergence of life and conscious thought.
However, I wouldn't go as far as to replace 'conceivable' with 'reasonable'. I cannot see why Dennett's 'inversion of the cosmic pyramid' (whereby order emerges from chaos, then design from order, then mind from design) could not have taken place in a universe with natural laws simpler than ours. Unfortunately, I can't prove this hunch of mine by pointing to some simulated toy universe and saying 'look at this', as our computers aren't (and maybe won't ever be) good enough to do such a simulation on a sufficient scale. I think there's room for debate here - neither side 'obviously' wins (unlike, say, a debate about whether God exists).
1 I don't know precisely what form an 'initial condition' would take, in light of the Big Bang theory, but let's assume that there's some way to make sense of it.
I see it merely as a neat way of explaining the fact that we experience things as existing while completely satisfying the Ockham's razor principle of not multiplying entities, since if the universe arises from a formal mathematical system then (in one sense at least) there are no entities at all.
My take on what a mathematical object is is simply this: given a sufficiently well-defined consistent set of axioms and rules of inference, any conclusions that can be drawn from it are unavoidable, in the sense that their negation cannot be shown in the same system. Therefore the things that have independent "existence" are exactly those kinds of inferences. I reject the idea that mathematics describes some kind of pre-existing objects, and claim instead that the existence of what we describe as mathematical objects follows from their potential to be defined.
The idea of a universe that contains a single sentient being in an isolated room is a genuine problem and I'm not sure how to get around it except by saying that worlds like that exist, and worlds like this exist, and I happen to be in one like this.
I suppose that there might be an infinite number of independent physical laws; if this is the case, and we are not willing to accept such an infinite collection as another kind of mathematical object, then we would have to reject the whole idea. If the laws of physics were infinite in number, though, I don't think we could ever prove it.