We shouldn’t be surprised to find that the world is set up in such a way as to allow us to exist. This is obvious, but it turns out to be surprisingly difficult to formalise exactly what we mean by that. From the perspective of human rationality, we care about these kinds of issue because it would be nice to have a mathematical definition of the ideal epistemic rationalist, which we could then try to approximate. We *almost* have one – Bayesian updating. But the concept of Bayesian updating contains a few holes, and this is one of them.

Note that what we’re interested in here is finding out about the state of the world. That is, we’re doing science (admittedly at a very abstract level in this case). We’re not trying to find out the true nature of reality (i.e. what it means to say that something actually exists) or human consciousness or qualia, or self-identity issues. This topic is easily confusing enough before you get on to those kinds of issue, so I’m generally trying to leave them out here.

Also I’m leaving out the issue of *time*. In these examples, all evidence is presented to all observers at the same time and they update instantly. There’s no time dimension. Different anthropic principles handle time differently (e.g. whether they consider the fundamental unit to be the *observer* or the *observer-moment*). This is another thing I don’t want to go into here because it adds confusion.

There will likely be a dumb arithmetic error in here somewhere. Can you spot it?

**Bayesian updating**

In order to be able to update on evidence, we first need a *prior world probability distribution*. For each possible world (represented as a mathematical object), this distribution assigns a probability. This is supposed to represent how intrinsically plausible each world is – i.e. the probability of the real world being that way *before* we go and take a peek at the real world to see what it really looks like.

Obviously this concept is somewhat eyebrow-raising and there are a number of theoretical problems that come up when choosing a good prior. There are two popular approaches here:

- The Solomonoff prior, where the probability of a world relates to the length of the shortest computer program capable of generating it.
- Toy examples, where the prior is basically handed to us in the question.

Usually when applying Bayes’ rule we would talk about “P(evidence | hypothesis) * P(hypothesis)”. In practice that’s useful, but it isn’t really necessary here. If each “hypothesis” describes the *exact state of the entire world* then which evidence we expect to pop up is completely determined. The evidence is part of the world.

So given two inputs:

- The prior probability for each world
- The evidence that we see

We get one output:

- The posterior probability for each world

As a simple “toy” example, consider the following (somewhat clichéd) application of Bayes.

A coin is flipped. If the coin lands heads, a container is filled with marbles at a ratio of 1 red to 3 green. If the coin lands tails, the container is filled at a ratio of 1 red to 1 green. You draw one marble from the container.

The marble is green. What is the probability that the coin landed heads? To work this out, we first eliminate the possible worlds which are contradicted by the evidence. Then we divide the remaining probabilities by the remaining total. In this example, P(Heads | Green) = 0.6.
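This calculation can be checked mechanically. Here is a minimal sketch – the representation of worlds as (coin, marble) pairs is my own encoding, not something from the post:

```python
# Prior over possible worlds, each world being a (coin, marble) pair.
# Heads: marbles are 1 red to 3 green; tails: 1 red to 1 green.
prior = {
    ("heads", "red"):   0.5 * 0.25,
    ("heads", "green"): 0.5 * 0.75,
    ("tails", "red"):   0.5 * 0.5,
    ("tails", "green"): 0.5 * 0.5,
}

# The marble is green: eliminate the worlds contradicted by the evidence...
surviving = {w: p for w, p in prior.items() if w[1] == "green"}
# ...then divide the remaining probabilities by the remaining total.
total = sum(surviving.values())
posterior = {w: p / total for w, p in surviving.items()}

print(posterior[("heads", "green")])  # 0.6
```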

This example can clearly be generalised. But we’re making the implicit assumption here that there’s exactly one observer in each possible world. If the number of observers varies then things get murkier.

**What does evidence mean?**

In the single-observer examples, it’s easy to translate *evidence* into *a subset of possible worlds*. E.g. in the above example, the evidence “green marble” is translated into “the subset of possible worlds in which a green marble is floating in front of the eyeball”.

But what if there are *multiple* observers?

There are at least two plausible interpretations of “green marble”:

- “green marble” -> “there exists an observer which sees a green marble”
- “green marble” -> “there exists an observer which sees a green marble **and that observer is me**”.

I’ll explore what I mean by these and what some of the problems are with each possible interpretation. But first, we need a new example to contemplate. Here a coin is flipped – if it’s heads then we spawn one observer, otherwise we spawn two. A second coin is flipped to determine whether the first observer sees a red or a green marble. In the two-observer case, the other observer always sees a green one. We assume that the observers have no direct way of telling how many other observers there are, and if there are two observers then they have no direct way of telling which one they are.

Imagine being one of these observers (who we assume are all aware of the experimental setup). You see a green marble. What probability should you assign to each hypothesis?

**“There exists an observer”**

In this methodology, we take the evidence “green” to mean “there exists an observer which sees a green marble”. I don’t know whether this methodology even has a name – Bostrom dismisses this class of anthropic principles but doesn’t seem to give them a collective name. (Please correct me if I missed something here).

So what do we do? Simply discard any universes incompatible with the evidence (i.e. universes which don’t contain any green-observing eyeballs). Then renormalise in the same way as we did for Bayes. Each of the three remaining worlds ends up with probability one third.
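As a sketch, with my own encoding of worlds (each world listing the marble seen by each of its observers):

```python
# The four worlds from the example, each listing what its observers see.
prior = {
    ("heads", ("red",)):           0.25,
    ("heads", ("green",)):         0.25,
    ("tails", ("red", "green")):   0.25,  # second observer always sees green
    ("tails", ("green", "green")): 0.25,
}

# "There exists an observer which sees a green marble": discard any
# world with no green-observing eyeball, then renormalise.
surviving = {w: p for w, p in prior.items() if "green" in w[1]}
total = sum(surviving.values())
posterior = {w: p / total for w, p in surviving.items()}

for world, p in posterior.items():
    print(world, p)  # each of the three surviving worlds gets 1/3
```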

It is tempting to go this route – for example, it more or less captures the intuition behind the answer “1/2” in the Sleeping Beauty problem. So why do we reject it?

- Boltzmann Brains. When considering the real universe (as opposed to these toy examples), some hypothetical worlds are *huge*. They are big enough that observers pop into existence at random due to random collisions between atoms in a gas cloud or whatever. In these huge worlds we expect *every possible experience to exist somewhere*. Under this methodology we can never reject these worlds, nor can evidence shift probability weight from one to another. We’re just stuck believing they’re all equally plausible.

**Self-sampling assumption (SSA)**

Here, we engage in a two-step process. First, we take the probability distribution of worlds and turn it into a probability distribution of *centred worlds*. These are basically worlds with a big arrow pointing to some observer saying “THIS IS YOU”.

In SSA, we keep the total probability weight for each world the same. The weight assigned to each *centred* world is therefore the weight of its corresponding uncentred world divided by the number of observers in that world. (0.25 divided by 2 is 0.125).

Once we’ve done that, we reject centred worlds where “YOU” would observe the wrong thing.

Then we divide through by the total. Finally, we have to rub out the “YOU” arrow in order to arrive at a posterior probability distribution of worlds.
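The whole SSA procedure for the running example can be sketched as follows (again, the encoding of worlds is mine):

```python
# Prior over worlds: (coin, tuple of marbles seen by each observer).
prior = {
    ("heads", ("red",)):           0.25,
    ("heads", ("green",)):         0.25,
    ("tails", ("red", "green")):   0.25,  # second observer always sees green
    ("tails", ("green", "green")): 0.25,
}

# Step 1 (SSA): split each world's weight equally among its observers
# to get centred worlds, e.g. 0.25 / 2 = 0.125 for the tails worlds.
centred = {}
for world, p in prior.items():
    observers = world[1]
    for i in range(len(observers)):
        centred[(world, i)] = p / len(observers)

# Step 2: reject centred worlds where "YOU" would observe the wrong thing.
kept = {(world, i): p for (world, i), p in centred.items()
        if world[1][i] == "green"}

# Step 3: divide through by the total, then rub out the "YOU" arrow by
# summing centred weights back into their corresponding worlds.
total = sum(kept.values())
posterior = {}
for (world, i), p in kept.items():
    posterior[world] = posterior.get(world, 0.0) + p / total

p_heads = sum(p for world, p in posterior.items() if world[0] == "heads")
print(p_heads)  # 0.4
```

So on seeing a green marble, an SSA reasoner here assigns probability 0.4 to heads.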

So why do we reject SSA?

- The answer we get depends on our observer reference class. For example, you see that black wall in the picture above? If we had considered that to be an “observer” then we would get *different numerical answers*, even if the eyeballs are capable of observing that they are not actually a wall. (There are non-pathological examples of this – e.g. we get different answers for the Sleeping Beauty problem depending on whether we count people who are not participating in the experiment as “observers”.)
- One obvious choice of reference class – people who have had the exact same experience as you – degenerates into the “there exists an observer” case and hence suffers from the Boltzmann brain problem.

**Self-indication assumption (SIA)**

Here we follow much the same methodology as for SSA except that we use a different function for turning our world-distribution into a centred-world-distribution.

This time the probability weight assigned to each centred world is the same as the weight assigned to the corresponding uncentred world. (So the probabilities won’t add up to 1, but it doesn’t matter because we’re going to renormalise anyway). A consequence of this is that worlds with lots of observers become more “likely”.

Another fun fact here is that SIA is equivalent to SSA with a different choice of prior (although possibly an unnatural one).

Then we proceed in the same way as for SSA: reject centred worlds where “YOU” would observe the wrong thing, renormalise, and rub out the “YOU” arrow.
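For the running example, the only change from the SSA sketch is that we skip the division by observer count (encoding of worlds is mine, as before):

```python
# Same worlds as before: (coin, tuple of marbles seen by each observer).
prior = {
    ("heads", ("red",)):           0.25,
    ("heads", ("green",)):         0.25,
    ("tails", ("red", "green")):   0.25,  # second observer always sees green
    ("tails", ("green", "green")): 0.25,
}

# SIA: each centred world inherits its world's FULL weight (no division
# by observer count), so many-observer worlds get more total weight.
centred = {}
for world, p in prior.items():
    for i in range(len(world[1])):
        centred[(world, i)] = p

# Reject wrong-observation centred worlds and renormalise, as in SSA.
kept = {(world, i): p for (world, i), p in centred.items()
        if world[1][i] == "green"}
total = sum(kept.values())

posterior = {}
for (world, i), p in kept.items():
    posterior[world] = posterior.get(world, 0.0) + p / total

p_heads = sum(p for world, p in posterior.items() if world[0] == "heads")
print(p_heads)  # 0.25
```

An SIA reasoner who sees green thus assigns only 0.25 to heads – lower than SSA’s 0.4, because the two-observer tails worlds are up-weighted.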

Assuming you are capable of observing that you are not a wall, SIA doesn’t suffer from the observer reference class problem that SSA has. Choosing a silly observer will spawn a new centred world which then immediately gets rejected, without affecting any of the arithmetic.

In general the problems with SIA seem slightly less severe than for the alternatives, which I assume is why SIAish anthropic principles seem to be popular right now. But there are still problems:

- SIA cares about whether a system counts as “one” observer or “two” (or more). With human observers that might seem to be obvious (unless you think that because we share information across the globe, the entire human civilisation should count as a single observer). It’s a bit more problematic if the observer is implemented in software though – you can basically divide a computer system in two in a completely continuous fashion, so it really isn’t obvious when something should count as one or two observers. Remember that we have a reductionist view of the world here, and so the probability of getting different worldstates really shouldn’t depend on our (arbitrary) choice of labels such as “observer”.
- There may be convergence issues. Under the Solomonoff prior, the size of a world (and hence the average number of observers) grows *much* more quickly than its probability diminishes (as a Busy Beaver function, in fact). And that’s before you even get into infinite worlds.
- In the many worlds interpretation we want to assign *different* weights to different Everett branches. This doesn’t pose a methodological problem, but it seems to mean that the original prior needs to assign some kind of “reality fluid” distribution to each world. We can do that, but the added complication leads us to suspect we might be confused.

**Conclusion**

We don’t know how to do this stuff yet.

**Anthropic principle link dump**

- Katja Grace has a good summary.
- Nick Bostrom wrote the book, which is generally considered canonical, although I personally find it a little heavy-going.
- Eliezer Yudkowsky covered some of these topics on Less Wrong.
- Yudkowsky also seems particularly bothered by quantum immortality, a related issue which I haven’t covered here (as I wasn’t going to talk about *time*).
