Entropy and unconvincing models

I’m keen to find a model of the world that is able to describe, say, an intelligence explosion. I think it would be an important step towards establishing whether the singularity is just a fantasy. Such a model may be wrong, but there is just something about a mathematical model that gives it more weight in an argument than mere words. Words can be confusing and misleading in all sorts of ways; with maths at least you know what you’re trying to say, and establishing whether it then agrees with reality is a bit more of a mechanical process.

I don’t have a model for you yet. There are people significantly smarter than me working on this kind of stuff and if they haven’t done it yet it’s unlikely I will. So the best I can do for now is to try to imagine some aspects of what such a model would look like. It’s still words but maybe the words will start to mean something a bit more.

The only totally convincing model of an intelligence explosion would be to simulate all the computational processes involved in a recursively self-improving AI. This would be a really bad idea. A simulation of a recursively self-improving AI is a recursively self-improving AI (See the AI box argument, and then also consider the possibility of technology leakage).

At the opposite end of the spectrum, you can model intelligence (or optimisation power or whatever) as some opaque parameter G, and then ponder how exactly G might relate to dG/dt and other variables representing other high-level abstractions. It’s easy to see how you can make this look explosion-shaped. I was going to present a strawman model of this kind but David Chalmers has done that for me. (I was just transcribing that video for the SI – I’ll link to the transcript too when they get round to putting it up). That lecture was actually very good – I’m not dissing it – and I certainly take the idea of the intelligence explosion very seriously. But I think there are valid reasons why people might find that kind of model unconvincing.
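
To make the strawman concrete, here is roughly the kind of toy model I mean (a sketch with made-up numbers, not anything Chalmers actually wrote down): let G feed back into its own growth rate via dG/dt = k·G^p. With p = 1 you get ordinary exponential growth; with p > 1 the solution blows up in finite time, which is about as explosion-shaped as it gets.

    # Toy "intelligence explosion" model: dG/dt = k * G**p.
    # All parameter values here are made up purely for illustration.
    def simulate(p, k=0.1, g0=1.0, dt=0.01, t_max=50.0, cap=1e12):
        g, t = g0, 0.0
        while t < t_max and g < cap:
            g += k * g**p * dt   # crude Euler step
            t += dt
        return t, g

    for p in (1.0, 1.5, 2.0):
        t, g = simulate(p)
        print(f"p={p}: stopped at t={t:.1f} with G={g:.3g}")

    # p=1.0 grows exponentially and is still modest at t_max;
    # p=1.5 and p=2.0 hit the cap in finite time (the "explosion").

The point of the strawman is not the numbers but the shape: everything interesting is hidden inside the choice of p and k, which is exactly the problem discussed below.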

People might say that no such G really exists, that something caps it at around human level, that it ceases to give a competitive advantage or is too expensive beyond human level, that G can only be increased at a very gradual rate, or that increasing G correlates with decreased motivation to increase G any further. And so on. I think I could argue against each of those (and any other biggies that I forgot), but that’s not really the point – the model is so simple, just an opaque real number, that we’re back to words and arguments to establish whether this number describes the real world and how intelligence really behaves. Such a model might be a good way of summing up our intuitions about intelligence (and is valuable for that reason), but it seems unilluminating – it won’t help us discover whether those intuitions are right or wrong.

Another weakness with this kind of intelligence-begets-superintelligence model is that it doesn’t do a very good job of describing what’s already happened. Chalmers attempts to link the intelligence explosion with the evolution of intelligence – I think on some abstract level he may be right (and I’ve got a post planned about that), but in the talk it comes across weakly. He explicitly makes the contrast between biological reproduction and extendible AI creation (neatly summarised as “It’s not like if we just have sex really well then our kids are going to be geniuses”). And the timescales are obviously very different.

Anyway, I’m getting sidetracked. We could certainly extend our model to cope with the creation of life, the evolution of brains and so on, and then bolt a final term on at the end for the intelligence explosion we think is going to happen. But it feels like we’re not really getting more out of this model than we put in – the model will include as many terms as we have pieces of evidence, plus an extra one for what we think will happen in the future. And Bayes tends not to look kindly on this sort of model.
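
To spell out the Bayes complaint (this is just standard model comparison, not anything specific to this post): the evidence for a model M is its marginal likelihood,

    P(D \mid M) = \int P(D \mid \theta, M)\, P(\theta \mid M)\, d\theta .

A model with one free knob per observed event spreads that integral over a huge space of possible histories, so the probability it assigned in advance to the history we actually got is tiny compared with a more constrained model that fits the same data. And the extra term for the future inherits very little support from all that retro-fitting.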

What we want is a model that can incorporate the origin of life, the evolution of brains and the human brain in particular, and maybe events such as the invention of agriculture, the industrial revolution and the growth of information technology – all within the same framework. It seems like a tall order – most of these events are regarded as paradigm shifts, where you have to throw your old model away and start using a new one. And each seems very different in character. But I think that’s what we need if we are to claim to predict another paradigm shift which is itself different in character.

Also the anthropic principle gets in the way but let’s not worry about that just yet.

One starting point I considered was thermodynamics. The second law of thermodynamics, which states that entropy (statistically) always increases, is interesting because it doesn’t depend on any particular law of physics. It’s more a general guiding principle about how the organisation of a system changes over time. (Incidentally, biological evolution is another, less precise, principle of this kind). We know two things about entropy: that in an isolated system it (statistically) never decreases, and that locally it can decrease in order to produce structures that help it to increase overall. (Typical examples being convection currents and life).

Out of all the possible configurations of atoms on the Earth, there are more configurations that contain no living things than there are configurations that contain at least one living thing. Think about that because it’s counterintuitive, and also because I may have got it wrong. If I’m right then life is lower entropy than non-life. Similarly, out of the living configurations, there are more that contain no multicelled organisms than there are ones that contain at least one multicelled organism. So when multicellular life first appeared, the local entropy on Earth dropped a bit.
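
For the record, the textbook way of saying this is Boltzmann’s formula,

    S = k_B \ln \Omega ,

where \Omega is the number of microscopic configurations compatible with the macroscopic description. If “contains at least one living thing” is compatible with fewer configurations than “contains no living things”, then the macrostate with life has the smaller \Omega and hence the lower entropy – which is all I’m claiming.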

So this gestures vaguely in the direction of an anti-second law of thermodynamics:

  • the local entropy of a system like the Earth can decrease in a sequence of steps (as life appears and then gets more sophisticated)
  • these lower-entropy configurations are somehow better at taking the low-entropy energy from the sky and smearing it out into higher-entropy configurations when it gets sent back out into space. This means the overall entropy of the closed system increases at a faster rate (some rough numbers below).
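
The rough numbers behind the second bullet (standard textbook figures; I’m ignoring the photon-gas 4/3 factors, so treat it as order-of-magnitude only): the Earth absorbs sunlight characteristic of the Sun’s surface temperature and re-radiates the same power at roughly its own effective temperature, and because the outgoing temperature is so much lower, the entropy carried away greatly exceeds the entropy delivered.

    # Rough entropy budget of the Earth (order-of-magnitude sketch only).
    # Energy absorbed as sunlight is re-radiated at a much lower temperature,
    # so the entropy flux out exceeds the entropy flux in.
    P = 1.2e17           # W, roughly the solar power the Earth absorbs
    T_sun = 5800.0       # K, effective temperature of the incoming sunlight
    T_earth = 255.0      # K, effective temperature of the outgoing radiation

    s_in = P / T_sun     # entropy delivered per second, W/K
    s_out = P / T_earth  # entropy radiated away per second, W/K
    print(f"entropy in:     {s_in:.2e} W/K")
    print(f"entropy out:    {s_out:.2e} W/K")
    print(f"net production: {s_out - s_in:.2e} W/K")

Anything on the surface that soaks up sunlight and degrades it to heat – convection cells, plants, us – contributes to that net production term; the conjecture in the bullet is that the lower-entropy configurations do this more effectively.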

This isn’t a model yet, but it certainly seems like an interesting idea, and we can imagine dropping from our current level to a lower entropy configuration – from our point of view this may or may not look like “the singularity”. But as with any interesting but unsubstantiated idea, before we take it further it’s important to see if there are any objections, and if the objections are too convincing then maybe give the idea up.

As it turns out, I think the objections win and we need to look for a better idea, at least for now.

So what are the objections that come to mind?

  • The local entropy doesn’t always drop. If an asteroid hits us, I assume it goes back up again?
  • The lower entropy states aren’t always the ones that are better at flattening out the thermal gradient. Being low entropy may be necessary for that, but it’s not sufficient.
  • In particular, the lowest entropy state (presumably where all the atoms in the Earth are sorted into order) is completely lifeless and thermodynamically will behave in much the same way as the original high entropy configuration.
  • Does a recursively self-improving AI fit our pattern?

The last point I find the most interesting, and the most difficult to argue around. We might imagine that a superintelligence would have a lot of internal structure and so have low local entropy. And it might use a lot of power to do all its thinking, so we might expect it to increase overall entropy rapidly.

So far it fits the pattern of a plausible state that our world might drop into. But what if we program its utility function to be the exact opposite? There doesn’t seem to be any obvious reason why it couldn’t just send little probes all over the universe, disrupting any local patches of low entropy that it finds, in defiance of our observed trend.

The particularly interesting thing about this is it seems to apply to any trend-based model. For any hypothetical high-level organisational trend, we have to imagine what would happen if we tried to build an AI which would put that trend into reverse. I don’t think that’s possible for the second law of thermodynamics – that feels too much like an actual physical law. But for things like what I just said? Sure, why not.

So in order to come up with something which might be able to model intelligence explosions, it seems like we’re looking for high-level organisational laws that the AI can’t put into reverse without destroying itself. Otherwise we end up in a peculiar position where we can model AIs that can optimise any conceivable utility function, except for a few over here.

Admittedly, this logic seems to be backwards. We’re looking for models which will test whether an intelligence explosion is plausible – surely we shouldn’t reject them just because they say that no, it isn’t plausible, or that it’s only plausible for certain utility functions? Maybe. But we’re not engaged in logic here, rather in a search for inspiration. It’s easy to produce models that correspond well to the real world and which don’t predict an intelligence explosion. That doesn’t tell us anything – it may just mean that an intelligence explosion would push the real world outside the model’s domain of applicability.

I might have a couple of suggestions, not related to thermodynamics and which more closely hug our intuition of what a superintelligence requires in order to exist. But these belong in separate posts.

For now, I just have one remaining insight on entropy, which doesn’t really have anything to do with the singularity or anything.

I’ve wondered in the past about how entropy should be defined formally. Our standard measure of randomness – Kolmogorov complexity – doesn’t work (at least in a deterministic universe) because the complexity of any state is just the complexity of the initial state + log t + the (constant) complexity of your universe-simulating machine. What I should have thought to myself at the time was, “hey this is a complexity question, I bet Scott Aaronson has an excellent post about that”.
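
To write that bound down properly (with s_t the state of the universe after t steps and U the universe-simulating machine):

    K(s_t) \;\le\; K(s_0) + K(U) + \log_2 t + O(1) ,

since to reproduce s_t you only need the initial state, the machine, and the number of steps to run it for. So plain Kolmogorov complexity can only creep up logarithmically with time, which looks nothing like thermodynamic entropy.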

Aaronson’s key points are:

  • Define entropy as a resource-bounded version of Kolmogorov complexity (i.e. simulating the entire universe is ruled out because it takes too long to run)
  • Define a related notion of “complextropy”. This is the smallest (again, resource-bounded) complexity of a set of states of which our state is a “typical” or “random” member.
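
In rough symbols (my paraphrase, not Aaronson’s exact definitions – in particular he is careful about how “typical” gets certified efficiently, and I’m glossing over that):

    K^t(x) \;=\; \min \{\, |p| : U(p) \text{ outputs } x \text{ within } t(|x|) \text{ steps} \,\}

    \text{complextropy}(x) \;\approx\; \min \{\, K^t(S) : x \text{ is a typical (random-looking) member of the set } S \,\}

The first quantity is large for genuinely random data; the second is small both for very ordered and for very random data, and only large for the interesting stuff in between.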

So based on what I was saying earlier, it sounds like we shouldn’t expect complextropy to always follow a particular trend, once goal-seeking systems are introduced into the world. An AI could be programmed with a utility function that scores highly if complextropy oscillates in cycles, in complete defiance of the model. It doesn’t even need a recursively self-improving AI – it just requires little old me, dribbling some milk into some coffee one drop at a time, watching the complex swirls form and then dissipate. It’s a bit more complicated than that – in particular my brain (plus the rest of the world) is a lot more complex than the cup of coffee will ever be. And I don’t need to start an argument with Aaronson just yet. But you can see where I’m going at least.

The other key point – about the definition of entropy – is interesting because I feel like I had considered that definition before and rejected it for some reason. The reason was probably that any resource bound seems somehow arbitrary. Why should the second law of thermodynamics work with one size of computer and not another? I’m not sure this is fair though – it’s possible that it might work for all possible choices of resource bound.

  • If you set the bound to zero then you have to describe the entire universe pixel by pixel (OK, I’m imagining the universe as a deterministic reversible cellular automaton(wp) for simplicity). Entropy is constant under this definition, hence (non-strictly) increasing.
  • If you set the bound to infinity then you get the unbounded complexity from before, roughly K(s_0) + log t, which also increases, although rather slowly.
  • Anything in between I guess corresponds more to our intuitive idea of what entropy means.

So, not obviously right but not obviously wrong either. And yet something still bothers me.

Suppose you have built a reversible computer(wp). This may or may not be possible in our universe, but in our imagined cellular automaton world it should be. A reversible computer can simulate the function of a (non-reversible) Turing machine, given adequate time and space – this is Bennett’s classic result on reversible computation. So just program it to produce a big blob of pseudorandom data (starting off from empty memory). Entropy, according to our bounded-complexity estimate, will increase from low to high. Then stick the machine in reverse and entropy will return to low again (remember that reversible computers, at least theoretical ones, can run without generating any waste entropy).
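
Here is a toy version of that thought experiment, to make the shape of it concrete. It isn’t a reversible Turing machine – just a trivially invertible memory-filling procedure (each cell is XORed with a hash of its neighbour and the step index) – and I’m using zlib’s compressed size as a crude stand-in for the resource-bounded complexity estimate. So it illustrates the pattern, nothing more.

    import zlib

    def mix(prev, i):
        # Cheap pseudorandom function of the previous cell and the step index.
        x = (prev * 2654435761 + i * 40503 + 12345) & 0xFFFFFFFF
        x ^= x >> 13
        return x & 0xFF

    def forward(mem):
        for i in range(1, len(mem)):
            mem[i] ^= mix(mem[i - 1], i)      # XOR update: trivially invertible

    def backward(mem):
        for i in range(len(mem) - 1, 0, -1):
            mem[i] ^= mix(mem[i - 1], i)      # the same update undoes itself

    def entropy_proxy(mem):
        # Compressed size as a crude stand-in for bounded-complexity entropy.
        return len(zlib.compress(bytes(mem), 9))

    mem = [0] * 4096                          # start from empty (all-zero) memory
    print("start:  ", entropy_proxy(mem))     # tiny: all zeros compress well
    forward(mem)
    print("forward:", entropy_proxy(mem))     # large: memory looks pseudorandom
    backward(mem)
    print("reverse:", entropy_proxy(mem))     # tiny again: back to all zeros

The proxy starts tiny, jumps to roughly the raw memory size after the forward pass, and drops back to tiny after the reverse pass – which is exactly the up-then-down behaviour that a law of non-decreasing bounded-complexity entropy isn’t supposed to allow.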

So, I’m still pondering that one.

 
