

Philosophy on LJ
Sink the Tegmark! 
6th-Feb-2004 08:34 pm
Abstract: Nessus's essays inspired me to go and read Max Tegmark's original essay. His theory that physical existence is equivalent to mathematical existence is certainly provocative, but it's also flawed, in ways I'll explain below. In particular, the part of Tegmark's theory that aspires to be falsifiable is either meaningless or already falsified, depending on how you look at it.



(I'm going to be using the phrase "mathematical object" a hell of a lot, so I'm just going to call them "mobjects" from now on. Apologies in advance if this word sounds really dumb.)

The problem that Tegmark's theory is designed to solve is the problem of why our universe has this particular set of natural laws and physical constants rather than any of the infinitely many possible alternatives. Theories of physics, such as general relativity, model the universe as a kind of mobject (in this case a four-dimensional differential manifold with Lorentzian metric). Tegmark proposes that in fact, the universe is a mobject. From the point of view of mathematics, there are no 'privileged' mobjects - if they're mobjects at all then they exist, and that's all there is to it. So far so uncontroversial.

In classical, deterministic physics, there is supposed to be precisely one physically existing universe, whereas in modern interpretations of quantum theory, universes form a vast family or 'ensemble'. However, in both cases, there's supposedly a meaningful distinction between a mobject that describes a physically existing universe and one that does not. Tegmark's central idea is to do away with this distinction, and see what follows.

What follows is that we could, a priori, have found ourselves in any of the class of mobjects containing sentient observers. Effectively, then, our universe is a sample of one drawn from a probability distribution over the 'space' of possible universes (i.e. mobjects) containing minds. From this, it would follow that the universe we see around us should be 'maximally generic' given the constraint of containing minds. In other words, there should be no deep, profound patterns observable in the universe that can't in some way be accounted for by the necessity that the universe should support intelligent life.

The first and most obvious objection is that there is no parameter space of all mobjects, and no way of assigning them the 'prior probabilities' necessary to do the kind of Bayesian calculations Tegmark needs to make his predictions. To be sure, one can sometimes classify mobjects in a parameter space, but the things you're classifying have to be sufficiently similar to one another to make this meaningful. For instance, one can represent a triangle as a triple of positive real numbers (a,b,c) such that neither a nor b nor c is greater than the sum of the other two. Hence, the parameter space of triangles turns out to be a subset of R^3. Less trivially, I suspect one could also construct some kind of parameter space of smooth closed loops (of which triangles are limiting cases), though such a space would need to be infinite-dimensional. In all cases, though, a parameter space is a mobject designed to classify some distinct family of mobjects. However, there is no 'mobject of all mobjects' (because mobjects can be represented as sets, and there's no set of all sets). Since it doesn't exist, one can't slap a probability distribution on it. And even if you could, what probability distribution would you choose? It couldn't possibly be uniform, and there's no obvious (canonical) choice of non-uniform distribution.
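The triangle example can be made concrete with a membership test. This is a minimal sketch of my own (the function name is mine); it just checks whether a triple (a,b,c) lies in the parameter space described above:

```python
def is_triangle(a, b, c):
    """Does (a, b, c) lie in the parameter space of triangles?

    Three positive side lengths such that neither side is greater than
    the sum of the other two (degenerate, zero-area triangles allowed).
    """
    return (a > 0 and b > 0 and c > 0
            and a <= b + c and b <= a + c and c <= a + b)

print(is_triangle(3, 4, 5))   # True: the familiar right triangle
print(is_triangle(1, 1, 3))   # False: 3 > 1 + 1
```

The predicate carves the subset of R^3 out of the full space of triples, which is exactly what makes "a probability distribution over triangles" meaningful in a way that "a probability distribution over all mobjects" is not.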



The second and more damning objection is that the universe has an unbelievably large amount of 'surplus information' over and above what's needed to support intelligent life. For instance, Turing machines (and hence all computers) can be interpreted as mobjects. A computer running a sufficiently accurate simulation of a human in a warm, sparsely furnished, well-lit, windowless, doorless room would thus qualify as a mobject supporting intelligent life. Tegmark's theory is thus unable to explain why the universe doesn't just consist of one person and a warm, sparsely furnished, well-lit, windowless, doorless room. Such a simulation wouldn't even need to take into account quantum and relativistic effects.


So, to sum up, the problems are:

(1) There's no parameter space of mathematical objects.
(2) Even if there were, we wouldn't be able to put a probability distribution on it.
(3) Even if we could, it would (with overwhelming likelihood) turn out that a random universe with intelligent life would be Vastly simpler than our universe.


Appendix: I'm glossing over a lot of other peripheral problems one might have with Tegmark's theory. For instance:

* What exactly is a mathematical object? Even if your preferred answer is just "a set", then there's the whole quagmire of advanced set theory to wade through, with its confusing grey areas between existence and non-existence. (E.g. Some set theorists propose that if we look sufficiently far up the chain of cardinal numbers, we'll eventually find one that's 'measurable'. Other set theorists disagree. This dispute cannot be resolved by proof, for we have proved that if set theory can be done consistently at all, then it can be done consistently with or without 'measurable' cardinals.)
It seems odd to want to exacerbate this sort of thing by making it a matter of physical (rather than just mathematical) existence.

* There's no rigid boundary between sentient life and not-quite-sentient life. No precise definition can be made of what it means for a mathematical object to 'contain a self-aware substructure'.

* Can the universe necessarily be conceived as a mathematical object at all? Is it not possible (at least in the sense of non-contradiction) that the natural laws of the universe 'go on forever' getting more and more complicated, instead of stopping at, say, superstring theory?


Aftermath:

Now, I suspect one could cut the empirical part of Tegmark's theory free from the rest of it (in other words, ditch the idea about predicting the properties of natural laws on the basis of them being random) but still maintain that physical existence and mathematical existence are equivalent. However, in doing this, you lose all claim to have explained why the universe is like this rather than like that.

In metaphysical matters such as these, I think it's a good idea to keep in the back of one's mind the following 'emergency reality check'. Define "Empirical Physical Existence" (EPE) as the property of something being directly observable 'out there' in 'the real world'. Going back to my earlier example, whether or not we decide to say that unicorns have PE (physical existence) in some possible world or other, we *know* that horses have EPE and unicorns don't. Whether or not PE is equivalent to ME (mathematical existence), we know for sure that EPE and ME are distinct.

Tegmark's theory was designed to solve the traditional problem "why is the universe like this rather than like that?", but what he seems to be ignoring is that this is precisely what science is all about. Progress in physics consists of taking batches of theories and/or experimental data that seem arbitrary or mysterious and finding hidden patterns that explain why they have that form. Once this is done, you can always go on to ask "but why is the hidden pattern like this instead of like that?", and if you're lucky, you can find even deeper patterns that help you explain it. You'll never be able to explain everything, but that's an unrealistic goal to have in the first place.

Comments 
6th-Feb-2004 11:37 pm (UTC)
You do a very good job of explaining both Tegmark's theory and the problems with it. Your explanation of his theory made it seem much more plausible than I had originally given it credit for, and later commentary did a much more thorough job of taking the plausibility right back. This is an excellent essay!
7th-Feb-2004 04:04 pm (UTC)
Ta very much :)

Having explored Tegmark's website, it appears that he's an extremely smart guy, and a real cosmologist rather than just a person with a 'trippy' armchair theory they're trying to peddle.

Also, on reflection, I think it might just be possible to defend his theory against my second objection (the waters here now seem a lot muddier than before).

Although an explicit description of our universe as it is now would require a fantastic amount of information, it would take very much less information if all we wanted to do was determine which mathematical object the universe is. This is analogous to the fact that, whereas it takes infinitely many bits to write down the complete binary expansion of pi, hardly any space is needed to write down an algorithm for computing pi. The information determining our universe would consist of the laws of physics together with an 'initial condition' [1]. Presumably, once the laws are specified, almost any randomly chosen initial condition would suffice to give a universe very much like our own, with stars and planets. Also, it wouldn't be that unreasonable to suppose that the overwhelming majority of possible 'initial conditions' lead to universes containing life and minds.
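The pi analogy can be made literal with a short program. The sketch below is my own illustration (Machin's formula, computed with nothing but Python's arbitrary-precision integers, is my choice of algorithm): a dozen lines of source determine every one of pi's infinitely many digits.

```python
def arctan_inv(x, prec):
    """arctan(1/x) as a fixed-point integer scaled by 10**prec (Gregory series)."""
    one = 10 ** prec
    power = one // x          # (1/x)**1 in fixed point
    total = power
    n = 1
    while power:
        power //= x * x       # next odd power of 1/x
        n += 2
        if n % 4 == 1:
            total += power // n
        else:
            total -= power // n
    return total

def pi_digits(d):
    """First d decimal digits of pi via Machin's formula:
    pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    prec = d + 10             # ten guard digits to absorb truncation error
    pi_fp = 16 * arctan_inv(5, prec) - 4 * arctan_inv(239, prec)
    return str(pi_fp)[:d]

print(pi_digits(20))  # -> 31415926535897932384
```

The program's length is fixed, yet its output is unbounded: the gap between "size of the description" and "size of the thing described" is exactly the gap being appealed to in the argument above.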

Hence, all that extra ('surplus') information out there in the universe began life as nothing more than randomness. Now, random information is still information, but we can easily eliminate it by assuming (in a Tegmarkian way) that there is a physically existing universe for each possible set of 'initial conditions'. (This is analogous to the fact that a computer program that counts upwards from 1, printing out each number as it goes, can be written in fewer bits than one that simply prints out N (where N is some enormous, arbitrarily chosen natural number) - and yet both programs eventually print N). Remarkably enough, then, the real universe may turn out to need vastly less information to specify it than would my 'man in an empty room' universe. Unless, of course, one could show that a 'man in an empty room' can emerge from random initial conditions (which seems rather implausible). (Understatement.)
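The parenthetical comparison of the two programs can be sketched directly. The snippet below is a hypothetical illustration of mine: the source of a program that prints only N must spell out every digit of N, while the counting program's source has a fixed length no matter how large N gets.

```python
N = 10 ** 100  # stand-in for "some enormous, arbitrarily chosen natural number"

# Program A prints only N; its source text must contain all 101 digits of N,
# so its length grows with N.
program_a = f"print({N})"

# Program B counts upward from 1 forever, eventually printing N as well,
# yet its source length is completely independent of N.
program_b = "n = 1\nwhile True:\n    print(n)\n    n += 1"

print(len(program_a), len(program_b))  # A grows with N; B stays constant
```

In the same way, "one universe for every initial condition" can be a shorter specification than "this one particular initial condition".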

So, it's conceivable that our universe is roughly 'as simple as it could have been' given the emergence of life and conscious thought.

However, I wouldn't go as far as to replace 'conceivable' with 'reasonable'. I cannot see why Dennett's 'inversion of the cosmic pyramid' (whereby order emerges from chaos, then design from order, then mind from design) could not have taken place in a universe with natural laws simpler than ours. Unfortunately, I can't prove this hunch of mine by pointing to some simulated toy universe and saying 'look at this', as our computers aren't (and maybe won't ever be) good enough to do such a simulation on a sufficient scale. I think there's room for debate here - neither side 'obviously' wins (unlike, say, a debate about whether God exists).


[1] I don't know precisely what form an 'initial condition' would take, in light of the Big Bang theory, but let's assume that there's some way to make sense of it.
7th-Feb-2004 03:44 pm (UTC)
I can accept Tegmark's premise that the universe arises from a mathematical system (or rather, I came up with it independently some time ago), but I agree with you that one cannot make any probabilistic deductions about what kind of mathematical system it is due to the lack of a parameter space.

I see it merely as a neat way of explaining the fact that we experience things as existing while completely satisfying the Ockham's razor principle of not multiplying entities, since if the universe arises from a formal mathematical system then (in one sense at least) there are no entities at all.

My take on what a mathematical object is is simply this: given a sufficiently well-defined consistent set of axioms and rules of inference, any conclusions that can be drawn from it are unavoidable in the sense that their negation cannot be shown in the same system. Therefore the things that have independent "existence" are exactly those kinds of inferences. I reject the idea that mathematics describes some kind of pre-existing objects, and claim instead that the 'existence' of mathematical objects follows from their potential to be defined.

The idea of a universe that contains a single sentient being in an isolated room is a genuine problem and I'm not sure how to get around it except by saying that worlds like that exist, and worlds like this exist, and I happen to be in one like this.

I suppose that there might be an infinite number of independent physical laws, and if this is the case and we are not willing to accept this as another kind of mathematical object then we would have to reject the whole idea. If the laws of physics were infinite in number I don't think we could ever prove it though.
(Deleted comment)
7th-Feb-2004 03:54 pm (UTC)
Oh, and one more thing: I don't think that the incompleteness problem that you mentioned to do with measurable cardinals is necessarily a problem. It seems to be the case that the universe contains a finite amount of information, in which case if it is a set then it could be a finite one and this would not be an issue. If one considers the universe to be defined by a formal system and the total amount of information that will ever exist in the universe is finite, then the formal system need not be sufficiently powerful to contain arithmetic, so incompleteness need not apply at all.
10th-Feb-2004 02:42 am (UTC) - Re:
"... the universe contains a finite amount of information ..."
If a system contains a finite amount of data, does it necessarily follow that the amount of information contained is also finite?
10th-Feb-2004 03:28 pm (UTC) - Re:
Of course that kind of comment depends on the definition of "information" that is in use, information being one of those catch-all terms that doesn't have a single generally accepted and well-defined meaning. I was thinking of something along the lines of algorithmic (Kolmogorov) information, which is essentially the length of the shortest computer program capable of generating all the data in a system. If the amount of data is finite the algorithmic information is also finite (although the converse is not true: the decimal expansion of pi generates an infinite amount of data which contains only a small amount of information).
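The gap between "amount of data" and "amount of information" in the shortest-program sense can be felt with an off-the-shelf compressor. Compressed size is only a crude upper bound on the shortest-description length, and zlib and the byte counts here are my illustration rather than anything from the discussion, but the contrast is stark:

```python
import os
import zlib

repetitive = b"ab" * 5000       # 10,000 bytes of data, but almost nothing to describe
random_ish = os.urandom(10000)  # 10,000 bytes of (pseudo)random data

# Same amount of data, wildly different description lengths.
print(len(zlib.compress(repetitive)))  # tiny: the pattern collapses
print(len(zlib.compress(random_ish)))  # ~10,000: no pattern to exploit
```

Both inputs are the same size, so finite data always gives a finite bound; what varies is how far below that bound the shortest description sits.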

This means that incompleteness need only affect the physical world if it is not possible in principle for the laws of physics to be completely emulated by a computational process. I hadn't fully appreciated that before so I thank you for making a comment that led me to it.
10th-Feb-2004 04:00 pm (UTC) - Re:
"information being one of those catch-all terms that doesn't have a single generally accepted and well-defined meaning."
Quite so (though I wouldn't be quite so ummmm dismissive); for that reason "the universe contains a finite amount of information" collapses, no?

You're welcome, I'm sure ... but not so sure I buy your justification of "finite information".
Ah yes, ok, I think I've got you: computed using a single criterion, the amount is finite (implying, of course, that another means of computation might very well produce information that is not included in the first set).
10th-Feb-2004 05:25 pm (UTC) - Re:
I didn't mean to sound dismissive, I was just trying to say that we need to be very careful about definitions when discussing things like information, consciousness, existence etc. This doesn't mean that we can't discuss them, just that we must be as clear as we can about what we mean by those words in a given context. So my earlier statement should have read something more like "...if the universe contains a finite amount of algorithmic information..."
10th-Feb-2004 05:39 pm (UTC) - Re:
*blink*
This has gone meta- in a way I find confusing; nothing personal in anything I wrote: "dismissive" applied to information theory is what I meant.
Were we not being careful?

Ahhh, but I don't think it would have been nearly as interesting when fully qualified! Perhaps more correct, precise ... and some might answer more productive ... but didn't this give rise to some clarity?
11th-Feb-2004 07:45 am (UTC) - This might be interesting...
Incompleteness need only affect the physical world if it is not possible in principle for the laws of physics to be completely emulated by a computational process.

This is quite a deep observation, but it needs to be clarified.

What exactly does it mean for incompleteness to affect the physical world? Does it just mean that the physical world is a mathematical object whose existence is unprovable? Even if the information content of the universe is finite, there could still be undecidable questions one could ask about it (as with the natural numbers).

Also, isn't it possible that the laws of physics could be locally (but not globally) computable? One could imagine a universe whose physical laws vary from place to place (in a lawlike manner), such that they are computable in the vicinity of any point, but not across the whole of space and time. Even if there's no such variation, the universe could be 'large' enough to have infinite information content, in such a way that its existence as a mathematical object turns out to be unprovable.

Plus there's a whole debate we haven't even gone into about "what is a natural law?". (This may be important because a universe with computable laws of physics ('axioms') might have higher-order regularities determined by those laws but not derivable from them (true but unprovable statements)).

This all gets rather confusing...
11th-Feb-2004 12:18 pm (UTC) - Re: This might be interesting...
Yes, yes, exactly! I've been thinking about exactly these types of questions for ages - it's why I'm learning all I can about formal systems and incompleteness. You raise a lot of very interesting points and I haven't addressed them all below because they need a lot of careful thinking about.


I guess a lot of it depends on exactly what kind of mathematical object the laws of physics define. Tegmark seems to be thinking of something similar to our current models of the laws of physics, where space is a manifold in some real-valued vector space. Others have proposed that the universe is a cellular automaton, although the evidence against this is overwhelming, and Stephen Wolfram has proposed on the basis of no evidence at all that the universe is a kind of graph-theoretical object upon which a set of well-defined transformations operate over time (although to be fair he does a good job of explaining how things like relativity could arise from this sort of system).

The latter two models are explicitly computable, and although there might be unanswerable questions about what their state would be after an infinite amount of time has passed, the assumption is that we exist at a point where a finite amount of time has passed and hence the state of the universe at the current time is completely determined.


Perhaps one way to think about it might be this: imagine that there is a string of binary digits from which the state of the universe at any given time can somehow be read off. This may or may not be possible, and it may seem as wacky and arbitrary as Wolfram's idea, but it's isomorphic to a lot of other ways of thinking about it so it is reasonable to consider it. For example, if the universe were a cellular automaton then the string could encode in some well-defined way the set of cells which are in each state at each time-step.

If this string is finite then there is no problem: any question we can ask about the universe can just be looked up. If the string is infinite but computable then we can say there is a computer program (or Turing machine) T which can compute it, and we can answer any question about it by running T for a long enough period of time. The finiteness of the program T corresponds to a finite set of axioms that can generate the string, and hence a finite number of physical laws.

Now let's ask what happens if the binary string is well-defined but not computable. This is equivalent to there being true but unprovable statements in the physical universe. Now what happens is that any program or set of axioms that could generate the string would have to be infinite in extent. No matter how many regularities we find in the string there will always be some that we have overlooked.

Thus one possible consequence of uncomputability or unprovability in the laws of physics is that we would experience the universe as having an infinite number of independent physical laws; perhaps this could manifest itself as a new "underlying" theory coming into view whenever we probe matter at a higher energy level...
11th-Feb-2004 12:54 pm (UTC) - Answers to a couple of your questions
Does it just mean that the physical world is a mathematical object whose existence is unprovable?

I don't think so, because if I discover that the existence of a given mathematical object cannot be proven I can simply posit its existence as an additional axiom. Thus whatever mathematical object the universe corresponds to, its existence can be proven from some set of axioms. I think.

Even if the information content of the universe is finite, there could still be undecidable questions one could ask about it (as with the natural numbers).

This is certainly true. If I interpret the number of atoms in my pet cat as the Gödel number of an undecidable statement in some formalization of number theory then I have an undecidable question about my cat. I think the reason this doesn't cause any problems for my cat's existence is that there is no physical process that depends on the answer to that question.
8th-Feb-2004 11:25 am (UTC) - Small correction...
In fact, the consistency of "ZFC + measurable cardinals exist" is almost certainly a lot stronger than just "ZFC". However, I think set theorists believe that "ZFC + measurable cardinals" is still consistent (even if they can't prove it), and so this is still an example of incompleteness.

It isn't quite the same kind of incompleteness as Gödel's theorem establishes. Gödel finds a statement that is 'definitely true' (whatever that means), but unprovable. On the other hand, what we have here is something that can be called 'true' or not and we (presumably) get consistent results either way (rather like how the axioms defining a group don't determine whether the group is abelian).
8th-Feb-2004 03:15 pm (UTC) - Re: Small correction...
This is not quite the case. Gödel's theorem finds a statement that, under a specific interpretation, seems to us to be true but which is actually independent of the axioms. So if we have a formal system of arithmetic T and we find the Gödel statement G then both "T + G" and "T + ~G" are consistent formal systems. If this were not the case then we would be able to prove either G or ~G by reductio ad absurdum, which we can't.

The difference between "T + G" and "T + ~G" is in the interpretation: "T + G" behaves in the way that we expect the natural numbers to, with every number having a numeral. "T + ~G" contains extra numbers, termed the supernatural numbers, which have all the same properties as the natural numbers except that they cannot be represented directly. They are usually interpreted as being larger than all the natural numbers. There is an entire branch of mathematics called "nonstandard analysis" which deals with systems based on "T + ~G"-type sets of axioms. There are some results in fields ranging from number theory to calculus that are easier to prove in nonstandard analysis than in standard analysis. I understand this to be because the infinitesimal quantities involved in calculus can be defined as something like "a natural number divided by a supernatural number" rather than having to think in terms of limits all the time.

So in fact, these two kinds of incompleteness are the same.
8th-Feb-2004 08:45 pm (UTC) - Re: Small correction...
This did cross my mind (I have read GEB too) and so I thought you might bring the point up.

Here's the asymmetry:

There is a canonical model of Peano arithmetic (or TNT if you prefer) and G is true in this model. A looser way of saying the same thing is that "interpreted as a statement about natural numbers, G is true".

On the other hand, there isn't a canonical model of ZFC. Some models are better than others: For instance, apparently it's possible for a model of ZFC to be 'unfounded', despite the 'axiom of foundation' holding true within the system. However, among the models of ZFC, there's no clear 'winner'.


At least I think this is how it works...
8th-Feb-2004 10:34 pm (UTC) - Re: Small correction...
Well, I'm reading GEB at the moment so you have a slight advantage over me :) I had come across these concepts before but they were much less clear.

I don't fully understand what a model is (although I gather it's a well-defined mathematical term) so I can't fully comment on that yet. If you know of anywhere I can find a good introduction it would be really helpful.

Am I right in assuming, though, that the choice of canonical model is not really inevitable? By that I mean that we could find a model of Peano arithmetic for which G is false. Or does "canonical" have a well-defined meaning here as well?
9th-Feb-2004 01:09 am (UTC)
Hmm, I don't know all that much about logic/set theory/model theory. I could try to make up a precise definition of 'model', but even if it served the purpose, it would probably differ a little from the accepted definition, so I'd better not.

Essentially, a model of a formal system is just a way of interpreting its sentences as statements about some mathematical object, such that all theorems have true interpretations.

TNT has many models, but only one standard model (up to isomorphism), that being the natural numbers themselves (and as mathematicians, we must be arrogant enough to presume that we understand precisely what the natural numbers are). 'Canonical' basically just means 'standard'.

If we'd known about (some particular version of) supernatural numbers all along, and it had been our intention to try to formalise them with TNT, then I suppose one could call one of the non-standard models 'canonical'. This would still be a bit perverse though, because the natural numbers are 'singled out' by being contained in every other model - which makes them 'canonical' in an objective sense.


Unfortunately I don't know enough about nonstandard models of arithmetic to know whether any have special characteristics that single them out. This is more-or-less equivalent to asking whether one can construct them without the axiom of choice. (I have no idea).
9th-Feb-2004 11:02 am (UTC)
It's the idea of assuming we know exactly what the natural numbers are in the first place that bothers me. As I see it they are a model (in a loose sense) of the way numbers behave in the physical world. Any axiomatisation we can give behaves in exactly the same way as physical things for finite quantities, and since there seem to be only a finite number of things in the universe this is all we require. How we want our concept of the "natural" numbers to behave when we consider an infinity of them is, to me, purely a matter of taste.

I get the impression from a little googling that a "canonical" model is a kind of minimum possible model that satisfies the axioms. This makes sense because I guess to make a model of "T + ~G" one would have to explicitly include the supernaturals. It makes any argument for the "truth" of G based on a canonical model seem a little tautological though. I definitely need to read more about this.
8th-Feb-2004 10:09 pm (UTC)
I'm actually a student of Tegmark's at Penn, and I was wondering if you'd like me to show him this argument, and, perhaps, get a response? You can reach me at alb2@sas.upenn.edu.

~Adge
8th-Feb-2004 10:13 pm (UTC) - Re:
I hear that he's going to be moving to MIT next year and will be one flight up from me (in the Center for Space Research). Truth or rumor?
10th-Feb-2004 09:55 pm (UTC) - Re:
That's one hot pic.

I talked to "Teggie" (as we call him when he's not around) last night, and he confirmed that he will take a leave of absence from Penn next year to teach at MIT. He quickly added that he'll be returning to us, but he might have been trying to appease a broken-hearted freshman astro-nerd. Oh, well.

~ Adge
10th-Feb-2004 10:40 pm (UTC) - Re:
Well, if it's any consolation, Douglas Hofstadter came to MIT a while back for a year, and ended up going back to Indiana.
9th-Feb-2004 07:53 pm (UTC)
Wow, I guess it is a small world after all.

I guess it would be interesting to see what he makes of these counterarguments, although as you can see, one of them isn't as good as I initially hoped.

However, I still think it's a valid point that there's no way to parameterise *all* mathematical objects, and no non-arbitrary way to put a probability distribution on any sufficiently big set of them.

Neil

(neil (dot) fitzgerald (at) ic (dot) ac (dot) uk)
10th-Feb-2004 11:05 pm (UTC) - Re:
I don't really agree with your other points (I haven't gotten around to saying anything about them yet, but if you don't still hold them, I don't know if it's worth bothering), but I agree that this one might be a problem. Headcube and I were even talking about this very problem somewhere else in this discussion, and it's a worry I've had before when trying to think about David Lewis's Modal Realism, which is somewhat similar to Tegmark's theory. I wonder what Dr. Tegmark would say about it.

I certainly don't know how to pick a Natural number at random, for instance, but let's consider for a moment a world where there are a countably infinite number of people and each person is assigned a unique social security number from the set of natural numbers. Let's also assume that there are no missing Natural numbers in the set of assigned social security numbers. This doesn't seem like a metaphysically impossible situation, especially since cosmologists currently believe the universe to be infinitely large, and to always have been. Now, consider that you live in this world, and you don't know what your social security number is or anything about how it was assigned. It seems like there must be something you can say about your number -- like it's not all that likely to be the number 7. And it's not even likely to be prime.
15th-Feb-2004 02:43 pm (UTC) - Re:
I think that the intuitive reasonability of that situation is an illusion. I think what would happen is that with probability 1 your social security number would be infinite.

I agree with you that intuitively it feels like you should be able to say that there would be a 50-50 chance of having an even number and very little chance of having a prime, but there doesn't seem to be a way to get around the fact that the probability of drawing a prime number from the sequence 1,2,4,3,6,5,8,7,9,11,10,13,12... (non-prime, prime, non-prime...) which includes all the natural numbers, should be 1/2. Perhaps I will work through this formally at some point to make sure I'm getting it right.
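The ordering-dependence is easy to check numerically. The sketch below is my own construction, following the sequence described above: interleaving non-primes with primes makes primes exactly half of every initial segment, even though their density in the usual ordering of 1..N shrinks towards zero.

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i in range(n + 1) if sieve[i]]

N = 100_000
primes = primes_up_to(N)
prime_set = set(primes)
non_primes = [i for i in range(1, N + 1) if i not in prime_set]  # 1 counts as non-prime

# Standard ordering 1, 2, 3, ..., N: the primes thin out as N grows.
standard_density = len(primes) / N

# Alternating ordering 1, 2, 4, 3, 6, 5, 8, 7, ... (non-prime, prime, ...):
# the very same numbers, but every initial segment is half primes.
k = 4000
interleaved = [x for pair in zip(non_primes[:k], primes[:k]) for x in pair]
alt_density = sum(x in prime_set for x in interleaved) / len(interleaved)

print(round(standard_density, 5))  # 0.09592 for N = 100000
print(alt_density)                 # 0.5 exactly, by construction
```

Since the "probability of being prime" depends on how the naturals are enumerated, there is no ordering-free uniform distribution to appeal to, which is the heart of the worry.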

Also, there seems to be a slight hint of the axiom of choice here, as it is needed in order to say that it is possible to uniquely assign the social security numbers in the first place. I don't know what that signifies, but it feels very odd.
15th-Feb-2004 06:03 pm (UTC) - Re:
Actually, forget what I said about the axiom of choice, it isn't needed at all.
21st-Mar-2004 04:18 am (UTC) - Late comment on "Sink the Tegmark"
It's just occurred to me that it could be possible to put a probability distribution on the set of all mathematical objects by assuming that the smaller the amount of information required to specify an object, the higher its probability of being the one we want.

The idea is something like this: assume that the universe is defined by a mathematical object. Now let's define some way to specify each mathematical object by a string of bits (this could include non-computable objects: there would be a certain number of bits required to specify "the set of Turing machines that don't halt", for instance). Now let's assume, due to some kind of Occam's Razor principle, that the mathematical object we live in should have as short a description in this scheme as possible. Then we can put a probability distribution on the mathematical objects that makes the ones with shorter descriptions more likely than the ones with long descriptions, and this would allow us in theory to make predictions in the way Tegmark suggests.

This is very similar to what Chaitin does in defining the halting probability Omega. It is not possible to pick a Turing machine at random from a uniform distribution over the set of all Turing machines due to the lack of a parameter space, so Chaitin uses the concept of self-delimiting binary code. We give our computer a series of bits at random and it tells us when it has a valid program and we can stop. It's more likely to have a valid program sooner than later, so shorter programs have a higher weight in the halting probability than longer ones. We could use the same concept of self-delimiting code in our definition of the probability distribution of mathematical objects.
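The self-delimiting trick can be illustrated with a toy prefix-free 'language'. The codewords below are arbitrary choices of mine; the point is the Kraft inequality: because no program is a prefix of another, the weights 2^-length sum to at most 1, so they form a genuine probability distribution that automatically favours shorter programs.

```python
def is_prefix_free(codes):
    """True if no codeword is a (proper) prefix of another."""
    return not any(a != b and b.startswith(a) for a in codes for b in codes)

# A toy self-delimiting 'language': each bit string is a complete program,
# and the machine can tell on its own where a program ends.
programs = ["0", "10", "110", "1110", "1111"]
assert is_prefix_free(programs)

# Chaitin-style weights: a program of length L has probability 2**-L of being
# produced by tossing a fair coin for each of its bits. Prefix-freeness (via
# the Kraft inequality) guarantees these weights sum to at most 1.
total = sum(2.0 ** -len(p) for p in programs)
print(total)  # 1.0 for this particular set of codewords
```

Note that `["0", "01"]` would fail the prefix-freeness test, and without that property the coin-flip weights can sum past 1 and stop being a probability distribution at all.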

This seems to have the problem that the set of objects that have short definitions depends on our scheme for describing them, but it does seem to be similar to what Tegmark has in mind. I found this note on Wikipedia that says "he has come up with a nice mathematical argument for the multiverse: the computational expression of a single random number between one and zero (with all its infinite decimals) is longer than the computational expression of the whole set of numbers that exist between 1 and 0, so it may be more informationally economic for reality to consist of infinite parallel universes instead of just one. The computer code for such a computation is only two lines long", which does seem to suggest something similar to what I'm saying.

(not saying I agree though. I've pretty much rejected all forms of Platonism now, which means my theory that was similar to Tegmark's is in trouble...)