Archive Page 2

But I don’t understand it yet

The first thing that I had to learn is that it is OK to want something.

When a frog jumps, does it want to jump? When a tree grows, does it want to grow or is it just built that way? You can know everything about a person and not know what that person wants. What is good and evil? Atoms have no feelings, but a person is atoms and a person has feelings.

That’s something that I still have to learn. What I want doesn’t have to make sense.

The world makes sense but I don’t understand it yet.

When I am confused about something, it says a lot about me and says nothing about the thing I am confused about.

Is a tomato a fruit or a vegetable? Is Pluto a planet? When we ask these questions we are not talking about vegetables or planets. We are talking about ourselves.

I don’t know if I have had an original thought in my life. How many times have you heard that?

Sometimes the man is nasty. I don’t want to say I admire him. He is somewhere where good ideas have settled.

Everything I can see is inside my own head. We talk about heads a lot. Inside my head is a shadow of your head. Inside that is a shadow of mine. Now it fades.

To think that everyone else is right and you are wrong is arrogance – it means you think you know what is right.

Freedom is the freedom to think two plus two equals five. People make good use of this freedom.

I am only making sense to myself here. Just carry on writing and maybe it will come out OK. Someone else said that. You never need to think the same thought twice unless it’s a thought you like thinking. Someone else said that too – but why is it important?

Will I keep my thoughts to myself? How destructive can a thought be?

To stare into the abyss and say “meh”.

I see someone as me if it’s helpful to see him as me. That person who made all those mistakes? Not exactly me.

I know what it is like to be wrong. If I am right now then I was wrong then. If I was right then then I am wrong now. If the person convinces you they are right and they are right then you have won the argument.

It is OK to win.

Anthropic Principle Primer

We shouldn’t be surprised to find that the world is set up in such a way as to allow us to exist. This is obvious, but it turns out to be surprisingly difficult to formalise exactly what we mean by that. From the perspective of human rationality, we care about these kinds of issue because it would be nice to have a mathematical definition of the ideal epistemic rationalist that we can then try to approximate. We almost have one – Bayesian updating. But the concept of Bayesian updating contains a few holes, and this is one of them.
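For reference, here is the uncontroversial core of Bayesian updating (the worked numbers below are mine, purely for illustration). Given a hypothesis H and evidence E, Bayes’ theorem says

    P(H | E) = P(E | H) × P(H) / P(E)

For example, with a prior P(H) = 0.5, P(E | H) = 0.4 and P(E | not-H) = 0.2, we get P(E) = 0.4 × 0.5 + 0.2 × 0.5 = 0.3, so the posterior is P(H | E) = 0.4 × 0.5 / 0.3 ≈ 0.67. Roughly speaking, the anthropic puzzles are about what to plug in for E – and what prior to start from – when part of the “evidence” is the bare fact that you exist and are around to observe anything at all.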

Note that what we’re interested in here is finding out about the state of the world. That is, we’re doing science (admittedly at a very abstract level in this case). We’re not trying to find out the true nature of reality (i.e. what it means to say that something actually exists), and we’re not tackling human consciousness, qualia or self-identity issues. This topic is easily confusing enough before you get on to those kinds of issue, so I’m generally trying to leave them out here.

Also I’m leaving out the issue of time. In these examples, all evidence is presented to all observers at the same time and they update instantly. There’s no time dimension. Different anthropic principles handle time differently (e.g. whether they consider the fundamental unit to be the observer or the observer-moment). This is another thing I don’t want to go into here because it adds confusion.

There will likely be a dumb arithmetic error in here somewhere. Can you spot it?

Continue reading ‘Anthropic Principle Primer’

What we seem to agree on so far

Based on the last Toronto LW singularity discussion, we seem at least mostly agreed on the following things. (I’ve bolded and question-marked things that I’m not quite so sure we agree on or that haven’t been discussed in much detail.) Also, while I’m trying not to just write down my own opinion on everything here, it’s inevitably going to happen to some extent, so I’ll get the group to review this.

  • (The brain is fundamentally a computer and there’s no theoretical reason that our cognitive abilities can’t be reproduced in silicon?)
  • The human brain is probably not near the limit of what’s possible in terms of intelligence.
  • The following concepts are not completely wacky. They’re at least worth spending time thinking about and discussing semi-regularly down the pub. (There’s an obvious selection bias at work on this one).
    • Whole-brain emulations (“ems”)
    • Human-level or above human-level artificial intelligence (“AI”)
    • Intelligence explosion
  • Human extinction or economic/technological collapse is a factor which might prevent any of this.
  • Most of us believe there’s a >50% chance that ems appear before AI.
  • Ems are likely to be
    • Faster than human brains
    • Economically cheaper than human brains
    • Able to make copies of themselves (i.e. direct copies of their mind-state, not “children”)
  • If ems are introduced as economic participants, the global economy will change drastically (“em world”)
    • Likely involving much faster rates of growth (what will be the ultimate limit on this?)
    • Role of humans in this world is unclear and possibly extremely precarious.
  • “Em world” consists of at least two phases: an initial phase where we can expect ems to behave like somewhat normal human beings, and later phases where economic and Darwinian pressures have changed their structure and behaviour. (To what extent do these considerations apply to “AI world” also?)
  • Initial em world depends on who gets access to uploading first, what their motivations are, how quickly everyone else gets access to uploading and whether they want to use it, etc. Lots of uncertainty there; avoid detailed stories.
  • Things that may or may not happen later on in em world:
    • An intelligence explosion
    • Specialisation of ems to different niches, possibly involving drastic self-modification
    • Each niche gets dominated by copies of a single em
  • AIs are really hard to contain 
    • In general, the smarter it is the more likely it is to find loopholes
    • AI-box argument
    • Source code leakage
    • Technological arms-race/waterline – if a perfectly contained AI is developed, it won’t be long before another team comes up with a sloppier one
    • (Does this apply to ems too? SD and GE have touched on em confinement before, but ethics and practicality of this aren’t clear)
  • Unclear to what extent human values would be preserved in em-world, AI-world or intelligence explosion

A few of our values and meta-ethics:

  • Belief/value separation – reasoning about “ought” won’t help us answer questions about “is” or “will be”. (But conversely, “is” might help us answer “ought”).
  • Human extinction is bad?
  • Ems are morally relevant entities?
  • Other things equal, we’d prefer not to have a singleton AI interfering in our business?
  • Different people have different preferences, almost certainly incompatible?
  • Instrumental vs. terminal values – broadly speaking, instrumental values are updated on factual evidence
  • Personal preferences vs. moral values – broadly speaking, a preference is considered moral if other people are involved?

A few tactics for our group:

  • Read lots of stuff (preferably reading material should be diverse, relevant and non-ridiculous)
  • For now I’m sticking with the “opinion database” consisting of notes taken from reading material. I’ll drop this if it turns out not to be useful.
  • In discussions, use clear and concrete language and avoid too much jargon.
    • e.g. WBE (whole-brain emulation) is “waiting for someone to die, then chopping their brain into really thin slices, scanning each slice and performing image processing to determine where each neuron is and how they are connected, then running a computer program which will simulate the behaviour of all the neurons simultaneously”
    • It’s easier to analyse those kinds of concept (for ethics, practicality and consequences) than to analyse something like “uploading a mind into a computer”
  • Obviously I’m keeping these minutes so we can record any interesting ideas that come up and make sure that the discussion is actually progressing
  • I’ll also create “idea posts” ahead of time (both for the singularity discussions and the general discussions) and encourage others to do the same.

Toronto LW Singularity Discussion, 2012-03-09

Present: SB, SD, GE, SF, EJ

Minutes: GE (note that when I’ve written something down that doesn’t make sense to me afterwards, I’m sometimes leaving it out here. Please let me know if you think I’ve missed out an important point)

The starting point of the discussion was Yudkowsky’s notion of an optimisation process, in particular the posts Optimization and the Singularity and Observing Optimization.

Here are my notes from beforehand:

Yudkowsky’s view:

My view (actually I didn’t get to communicate this in the meeting, sorry):

  • Thermodynamics – to hit a small target, must start from smaller platform?
  • There are limits to the kinds of property we can expect from all optimisation processes in general. When it comes to programmable optimisation processes, someone could write one to do the exact opposite of what we expect (e.g. the opposite of hitting a small target would be an entropy-maximiser)
  • Instead define it in terms of who beats whom, or its success in a wide range of environments?

Concepts:

  • optimisation process
  • optimisation power
  • intelligence as optimisation power divided by resource usage (see the rough sketch after this list)
  • recursively self-improving optimisation process
  • which features of self an optimisation process can improve, and how quickly
  • goal stability
  • friendliness
  • “Friendly AI” as a particular approach to friendliness
  • coherent extrapolated volition
  • singleton
  • programmable optimisation process (AGI may be programmable, evolution not)
  • meme (actually I’m interested in a generalised notion – any kind of information that undergoes copying, mutation and selection. Genes would be included here).
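To make “optimisation power” and the power-per-resource idea above a little more concrete, here is a toy sketch in Python, loosely in the spirit of Yudkowsky’s idea of measuring optimisation in bits. The function names, the finite outcome space and the uniform weighting are my own simplifications for illustration, not anything we discussed:

    import math

    def optimisation_power_bits(outcomes, achieved, utility):
        """Optimisation power in bits: how small a slice of the outcome space
        is at least as good (according to utility) as the outcome actually hit.
        Assumes a finite outcome space, weighted uniformly."""
        at_least_as_good = sum(1 for o in outcomes if utility(o) >= utility(achieved))
        return -math.log2(at_least_as_good / len(outcomes))

    def intelligence(outcomes, achieved, utility, resources_used):
        """Crude version of 'optimisation power divided by resource usage'."""
        return optimisation_power_bits(outcomes, achieved, utility) / resources_used

    # Hitting one of the top 4 outcomes out of 1024 equally-weighted ones
    # corresponds to 8 bits of optimisation.
    outcomes = range(1024)
    print(optimisation_power_bits(outcomes, achieved=1020, utility=lambda x: x))  # 8.0

Recursive self-improvement would enter this picture when the process being measured can spend some of its resources on improving its own ability to hit small targets.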

Continue reading ‘Toronto LW Singularity Discussion, 2012-03-09’

Giving What We Can in Cambridge

So, I was in Cambridge (UK) recently on a work jolly, and I found my old hometown abuzz with rationalist activity. A Cambridge Less Wrong group has suddenly sprung up, and it turns out there’s a Cambridge chapter of Giving What We Can. While I was there, GWWC hosted a talk by Toby Ord – their founder, FHI dude and generally all-round nice bloke.

Giving What We Can is like another GiveWell. Both aim to find the charities which produce the most units of goodness per unit of money. GWWC adds a signalling/commitment element: its members pledge to give 10% of their income to an awesomely effective charity. My feeling right now is that I trust GiveWell’s recommendations slightly more (though they’re starting to show some Aumann agreement here – both think SCI is a good buy). On the other hand, GWWC appears to be more of a community. The process of self-modifying into an effective altruist is actually really hard, and a community is what I need (and I guess a lot of other people do too).

Anyway, Ord’s talk…

Continue reading ‘Giving What We Can in Cambridge’

Information domains

This post is intended as a half-baked idea which we will discuss at the local Less Wrong meetup and either discard or improve. In particular, at the last Singularity discussion we seemed to feel it would be useful to come up with a non-confused notion of a meme, and this might start off some kind of a sequence with that goal.

It’s hard for information to cross between different domains:

  • genetic information
  • knowledge in human brains
  • human written knowledge
  • human-readable computer information
  • machine-readable computer information

This is due to differences in storage medium, in bandwidth between systems, and in how information is encoded/interpreted. Something which we might regard as a “paradigm shift” occurs either when a new information domain is created (presumably with some advantage over previous ones) or when a partition is removed and information can flow between previously separate domains.

This idea is supposed to be obvious, so expect nothing profound here. I hope that by stringing together obvious ideas, we can end up with a non-obvious one.

Continue reading ‘Information domains’

Entropy and unconvincing models

I’m keen to find a model of the world that is able to describe, say, an intelligence explosion. I think it would be an important step towards establishing whether the singularity is just a fantasy. Such a model may be wrong, but there is just something about a mathematical model that gives it more weight in an argument than mere words. Words can be confusing and misleading in all sorts of ways; with maths at least you know what you’re trying to say, and establishing whether it then agrees with reality is a bit more of a mechanical process.

I don’t have a model for you yet. There are people significantly smarter than me working on this kind of stuff, and if they haven’t done it yet it’s unlikely I will. So the best I can do for now is to try to imagine some aspects of what such a model would look like. It’s still words, but maybe the words will start to mean something a bit more.

The only totally convincing model of an intelligence explosion would be to simulate all the computational processes involved in a recursively self-improving AI. This would be a really bad idea. A simulation of a recursively self-improving AI is a recursively self-improving AI (see the AI-box argument, and then also consider the possibility of technology leakage).

Continue reading ‘Entropy and unconvincing models’