Toronto LW Singularity Discussion, 2012-01-12

Warning: contains mathematics

Present: RA, SB, SD, GE, MK, JM

Minutes: GE

Host: pub

This discussion was slightly more informal due to a noisy venue and two newcomers. Welcome to Less Wrong, RA and MK! Also great to see familiar faces SB and JM taking an interest in the Singularity discussion.

GE kicks off with a discussion of Minds, Machines and Gödel. GE attempts to explain Gödel’s incompleteness theorem(wp) but I don’t get very far. (This page is a good quick introduction, but I feel it glosses over some important details – Gödel’s incompleteness theorems only talk about whether statements are provable, not whether they’re true. If you get those concepts even slightly confused then you end up in a world of fail).

GE introduces first-order logic(wp) and Peano arithmetic(wp) (again, badly). SD asks the interesting question: why do the Peano axioms need to include multiplication when surely it can be defined in terms of addition? GE says that it’s because first-order logic provides no way to say “apply this particular operation n times” (you can for any particular fixed finite n, but not the general version which is what you need for a useful multiplication predicate).
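(To spell that out a bit more than I managed at the pub: for any fixed n you can just write the sum out longhand – e.g. “y = 3·x” is the formula y = x + x + x – but there is no single first-order formula over 0, successor and + in which n appears as a variable. In fact the theory of the natural numbers with addition only – Presburger arithmetic(wp) – is decidable, so it provably cannot define multiplication; if it could, it would inherit all the undecidability of full arithmetic.)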

The interesting thing though is that addition and multiplication are actually enough to define the concept of “apply this particular operation n times”. And from this you can define predicates for x^y and all sorts of other wacky functions. I explain at the end of this post (which I didn’t have time to do in the meeting obviously).

SD says he disagrees with Minds, Machines and Gödel because people way smarter than him do. (GE says except Roger Penrose(wp)).

SB asks whether whole brain emulation is a counterexample to MMG (even though no-one’s emulated a whole human brain yet, they’re already emulating pieces). GE says not really – the thesis in MMG is that any brain emulation on a computer will necessarily not be completely accurate.

SD brings up Penrose’s quantum theory of consciousness. GE thinks that the point here is that we don’t yet have a complete formalisation of quantum field theory (including gravity), and so the idea maybe is that none is possible – systems which map features of quantum gravity into their high-level function may not be Turing machine-emulatable. (I don’t agree, I just think this is what Penrose was saying in The Emperor’s New Mind).

SB brings up the topic of conceptual/metaphorical reasoning – perception and motor functions used in reasoning – as discussed by George Lakoff in Philosophy in the Flesh. E.g. when we are babies we associate warmth with being close to our mothers, so warmth is used as a metaphor for friendliness or intimacy. MK gives an example of this as being the nonsense words “kiki” and “bouba” (did I get those right?) – when given a choice, people across different cultures assign the first to a spiky shape and the second to a blobby shape. It’s assumed that this comes from the shape the tongue makes in the mouth. SB agrees that the key point here is that the same metaphors crop up across different languages.

SD wonders whether some of this might be innate instead of cultural? E.g. there was an experiment on chimpanzee babies where they were presented with a wire model “mother” which included a feeding apparatus, next to a warm furry (but non-lactating) fake mother. They were trying to train the babies to accept the wire one as the real mother, but what happened instead was they clung to the furry one and sort of reached over to feed from the other one. GE asks whether the most telling experimental results are the unintended ones?

(I have written here “SB: can assert things that are wrong?”. I’ve completely forgotten what that was referring to, sorry)

RA has wondered in the past how Isaac Asimov’s Three Laws of Robotics(wp) would be encoded. This starts a general discussion of Friendly AI topics.

SD brings up:

  • An AI asked to prove the Riemann hypothesis could turn the entire visible universe into a computer designed for that purpose, annihilating humanity in the process (this meme is attributed to Marvin Minsky. Also mentioned e.g. here)
  • “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else” (from Yudkowsky’s Artificial Intelligence as a Positive and Negative Factor in Global Risk)
  • If Gandhi were offered a pill which would make him want to commit murder, he would not take it. (This is also mentioned in Yudkowsky’s paper, but I can’t find the source for the Gandhi version).

SB wonders how to encode human values for an FAI, and about the stability of goals in an intelligence explosion. SD points out that we only want to encode “good” values, not e.g. the occasional tendency towards genocide that humans have been known to display. GE says that this is analogous to the problem of democracy – sometimes it means the “wrong” person wins.

SD lays out something like Robin Hanson’s “series of exponential growth modes” argument for the Singularity. SB mentions Yudkowsky’s classification of Three Major Singularity Schools.

MK asks what if we’re not seeing exponential growth in technology, but instead are in the middle of an S-curve which will fizzle out before we get to the point where machines can design machines? (This is essentially the increase then level-off scenario from the last meeting, which at the time we regarded as less likely. Worth revisiting?)

[Graph: technology shows exponential growth up to the present day, then continues up for a while and eventually levels off before we reach the point where machines can design machines.]

SD says that the brain is just a protein computer and that he expects we’ll be able to do better than what’s coughed up by evolution (note that this is what Minds, Machines and Gödel is arguing against).

RA points to massive Moore’s-Law-style progress going on in medical science right now, e.g. the cost of human genome sequencing is now down to $1000. SD (referring back to something we were discussing at the first meeting) points out that the genome is just a small part of understanding the mind.

We get to talk about destructive vs. nondestructive ways of probing brains in detail. SD says that the expected path to whole brain emulation (according to Anders Sandberg and Nick Bostrom) is going to involve chopping up a brain and scanning the slices. So who will be the first human volunteer? (Presumably someone who is already dead and who has the right paperwork).

SB is interested in the video reconstruction based on fMRI scan (my understanding of how this works is that the subject patiently watches a corpus of video clips with her brain activity being recorded. She is then presented with a test clip and her brain activity is recorded again. The software then finds the 100 best matches from the original corpus and averages them, and that’s what you see in the YouTube clip).
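Out of curiosity, here’s a toy sketch of what I understand that matching step to be – this is only my understanding, not the actual research pipeline, and the array names are made up:

    # Toy sketch of my understanding of the fMRI video reconstruction (the real
    # pipeline is more sophisticated; all array names here are hypothetical).
    import numpy as np

    def reconstruct(test_activity, corpus_activity, corpus_frames, k=100):
        """Average the frames of the k corpus clips whose recorded brain
        activity best matches the activity recorded for the test clip."""
        # distance from the test recording to each corpus recording
        distances = np.linalg.norm(corpus_activity - test_activity, axis=1)
        best = np.argsort(distances)[:k]          # indices of the k closest clips
        return corpus_frames[best].mean(axis=0)   # average their frames

    # corpus_activity: (n_clips, n_voxels), corpus_frames: (n_clips, height, width),
    # test_activity: (n_voxels,)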

SB wonders which jobs will be replaced by robotics? It may be that information workers get replaced by AIs before blue-collar workers get replaced by robots. Paul Krugman says the same thing.

SD wonders about survivorship bias in stockbrokers – are the successful ones just successful because they were lucky, and then get all the attention? MK says this is mentioned in The Drunkard’s Walk by Leonard Mlodinow – essentially the distribution of winnings on the stock market was exactly the same shape as if it had been a result of coin flips. JM asks whether the best (and simultaneously the worst) strategy is to just buy at random. MK says that a monkey was once the most successful at beating the market in Chicago (is this really true??)

At this point GE got annoyed at having to shout over the live music so we split into two smaller groups. I’ll give RA+SB+GE first.

GE discusses the database that he wants to create of everything that we read about for our Singularity discussions.

  • Person (SB wonders which people we pick?)
  • Question (e.g. “Will AI go foom?“)
  • That person’s opinion – yes or no or a probability or something vaguer
  • Citation
  • Possibly a quotation too, unless typing them all in becomes arduous.

GE anticipates quite a lot of people and quite a lot of questions, but also a lot of empty cells because not everyone weighs in on everything.
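Just to make that concrete, here’s a minimal sketch of what GE has in mind, assuming SQLite as the backing store (the table and column names are my own guesses, nothing we agreed at the meeting):

    import sqlite3

    conn = sqlite3.connect("singularity_opinions.db")
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS person   (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE IF NOT EXISTS question (id INTEGER PRIMARY KEY, text TEXT);
    CREATE TABLE IF NOT EXISTS opinion (
        person_id   INTEGER REFERENCES person(id),
        question_id INTEGER REFERENCES question(id),
        stance      TEXT,   -- "yes", "no", a probability, or something vaguer
        citation    TEXT,
        quotation   TEXT    -- optional, in case typing them all in gets arduous
    );
    """)
    conn.commit()

The empty cells just correspond to missing rows in the opinion table, so the sparseness shouldn’t be a problem.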

One such question RA is particularly interested in is the global distribution of Singularity technologies (and the economic systems associated with them – this is assuming a positive Singularity result). Would there be a two-tier system of haves and have-nots? RA was watching a TV program about farming methods through the ages – the point being that they could show a lot of ancient farming methods because they are still in use somewhere in the world. But SB thinks standards of living are rising across the globe?

GE thinks that if some AIs are aggressive and expansionary, and others aren’t, then the expansionary ones will “win” and so the “Singularity” (for better or worse) would be expected to span the globe (and possibly take off into space too). GE can’t help imagining a Friendly AI as a Nanny AI (turns out I’m not the first to use that terminology, though I think I’m using it slightly differently – analogous to the nanny state(wp)). SB thinks that for an AI to be considered friendly it would have to stop people from killing each other. GE says that it could do this either by actively intervening or by manipulating people so that they don’t want to. But in either case it seems like it would have to meddle quite a lot.

RA mentions Slavoj Žižek(wp) and Examined Life – unfortunately I missed the context for that.

Back on the subject of the Singularity opinion database, SB wonders how strongly to weigh different people’s opinions. GE isn’t sure about this one – he thinks it’s more important to see what the opinion landscape looks like before we try and figure out who’s right and who’s wrong. There’s the obvious bias towards weighing people more strongly when we already agree with them. Possibly we should try and rate arguments instead of people? (according to some measure of internal consistency or rating them on the disagreement hierarchy (see also DH7 here)).

RA points out that academia has not really accepted Kurzweil/Yudkowsky in a big way. (This is something I’d also brought up briefly in the last meeting).  SB says that for him personally, the Singularity doesn’t seem to pass the bullshit test.

RA observes that when a number of breakthroughs happen at around the same time in different fields, this can spawn a new field (and gives UAVs(wp) as an example). RA says that 2012 will be the year of 3D printing(wp).

RA asks about optimal philanthropy (which we’d mentioned earlier – I forget exactly when). RA says that people want zero overheads but that really isn’t possible within a capitalist system. GE says he doesn’t care whether overheads are high or low – he just wants results, and points to GiveWell.  GiveWell doesn’t address existential risk though.

RA leaves and we go back to talking as a group, starting off by sharing what we had been discussing in threes.

SD+MK+JM had been discussing cultures such as the Amish and Orthodox Jewish communities who avoid certain technologies, and SD mentions that one of them (I forget which in particular) uses air-driven saws because they are suspicious of electricity. GE asks whether the result is like steampunk?

SD+MK+JM had also been discussing the ethics of bringing back Neanderthals. GE says the Neanderthals provide good proof that a human-like species can become extinct. MK says that we are at an unusual point where there is only one human species. SB describes the different hunting styles used by us and Neanderthals – we would hunt animals by walking them to exhaustion, and they would basically fight things. The suggestion was that Neanderthals got squeezed out at the end of the Ice Age because of the animals they were adapted to hunting. We also agreed that the method of hunting animals by walking them to exhaustion seemed (if not more efficient, then at least) less risky.

GE asked about relative intelligence of us vs. Neanderthals. SD says he thinks intelligence was close, that they had larger brains but a similar brain-to-body mass ratio(wp). GE is sceptical of brain/body ratio as an estimate of intelligence. If you attach the same brain to a larger body, why does that make it stupider? GE wonders if it somehow relates to tradeoffs and selection pressure associated with intelligence/energy/mass but isn’t sure. JM suggests that larger bodies are more complex and so require a bigger brain to drive them. SD suggests in particular a larger surface area implies more sensory nerves. Whales have big brains and in the meeting we weren’t completely sure what they use them for – JM suggests that they need a lot of sensors on their skin in order to deal with fluid dynamics. GE mentions Inside Nature’s Giants – the British TV show where they basically cut up an enormous dead thing.

Back on the subject of AI safety, GE is sceptical of AI ethics as seen by people like Ben Goertzel (although I should really read papers such as that one before coming to hasty judgements). The problem is that an AI can behave in a very ethical way, but if it creates another AI that is less ethical (or if a third party were to create another AI from its source code with some of the ethical safeguards taken out) then you’ve still got problems.

SD mentions the apparent decline in levels of violence over the course of history (this has come up in LW meetings before, possibly prompted by Steven Pinker’s The Better Angels of Our Nature). There are now incentives to be more peaceable. JM says that wealth reduces violence – more to lose, no longer fighting out of desperation etc. SB says we have become more empathic, with the size of the in-group growing from the size of a tribe to the size of a nation. SD says that a Friendly AI could try to accelerate this process. GE isn’t sure whether this could still be seen as a nanny AI – trying to change or manipulate human nature?

Regarding stopping people doing bad stuff – SD says that with terrorist attacks there is a power law connecting damage caused by a terrorist attack with its planning time (i.e. the most devastating attacks take a very long time to plan, and hence are more likely to be prevented by law enforcement or secret services). Unfriendly AI wouldn’t make a good terror weapon because it would take too long to develop. GE has a different perspective – firstly, accidental AI catastrophe seems more of a worry right now than deliberate misuse, and secondly you can’t really reset the clock back to zero with a software project. If an AGI project is disrupted for whatever reason, all the code is still lying around all over the place – especially if it’s an open source project. This means that another team can just continue the project from where it left off. (SB says very few inventions are done by one person).

SD says that the limiting factor is hardware, e.g. IBM’s Watson requires a pretty beefy system to run on. Supply of hardware could be controlled somehow to avoid unregulated AIs popping up. GE makes the analogy with preventing nuclear terrorism by controlling the supply of nuclear fuel. SB says what about risks such as self-replicating nanomachines? We would expect the design for those to consist of quite a small amount of information. GE says that in general SD’s suggestion sounds precarious because new technology will come along, and technology tends to be disruptive.

SD asks whether a “nanny AI making people happy” is a contradiction? GE says that his definition of nanny AI is more to do with how much it meddles than about making people happy or otherwise. In general a successful Friendly AI seems like it would be far from the libertarian ideal where everyone is basically left to do what they like. JM thinks that no (strong) AI is really compatible with the libertarian ideal? GE asks why then do a lot of Singularitarians appear to have libertarian politics? (SD points to Less Wrong survey – 32% libertarian). GE says that with the Singularity as his main issue right now, he kind of feels outside mainstream politics anyway.

Actions

Probably an action on GE to read some stuff and take notes on the opinions I’m reading, in order to kick off the opinion database I described and see whether it seems feasible and useful.

Appendix: sketch of how to define repeated function application from Peano axioms

Peano Arithmetic is a formalisation of number theory – and number theory is particularly powerful because you can encode basically any finite mathematical structure as a natural number if you’re clever enough. What we’re interested in encoding here are lists of natural numbers, and then using each element of the list to encode the next step in the iteration. I’ll try and thrash out how to do this, but I don’t remember being taught how to do it (just picked up bits and pieces here and there), so there are probably some mistakes here. I hope the basic approach is valid though.

1. Define a predicate equivalent to x = p^n (which is valid only for prime p and for n < p).

Construct a number of the form X = 1234…n in base p. I think this can be done easily enough if you observe that

  • (p-1)((p-1)X+n+1)+1 is a power of p (in fact it works out to p^(n+1))
  • (Is X the smallest such number???)

Then ask for x (which will be the 100…0 in base p – i.e. the p^n we’re looking for) such that (worked example below):

  • x > X
  • x ≤ pX
  • p is the only prime divisor of x
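For example (a sanity check I did afterwards, not something we got to at the pub): take p = 7 and n = 3. Then X = 123 in base 7 = 1·49 + 2·7 + 3 = 66, and sure enough (p-1)((p-1)X+n+1)+1 = 6·(6·66+4)+1 = 2401 = 7^4. The only power of 7 that is bigger than X = 66 but no bigger than 7X = 462 is 343 = 7^3 – which is the p^n we wanted.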

2. Define a predicate equivalent to “p is the nth prime”

This will be true iff there exists a number of the form Y = 2^1 * 3^2 * 5^3 * 7^4 * … * p^n.

  • Each prime ≤ p is a divisor of Y.
  • For each prime q < p, where r is the next prime after q, and for every i: q^i | Y iff r^(i+1) | Y.
  • 2 is a divisor of Y and 4 is not.
  • p^n divides Y but p^(n+1) does not. (This is where we use the predicate from (1) to express p^n – and note that n < p here, so that’s valid.)
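For example, “5 is the 3rd prime” is witnessed by Y = 2^1 * 3^2 * 5^3 = 2250: every prime ≤ 5 divides it; 2 divides it but 4 doesn’t; the powers chain correctly (2^1 and 3^2 both divide it, 2^2 and 3^3 both don’t, and likewise for the pair 3 and 5); and 5^3 divides it while 5^4 doesn’t – with n = 3 < p = 5 as required.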

3. We can now define a “finite set of naturals” representation, by using the presence or absence of a particular prime factor to determine whether something is in the set. Define a predicate “n is an element of S”:

  • Let p be the (n+1)th prime (so 2 stands for 0, 3 stands for 1, and so on – that way 0 can be an element, which we’ll need for the list representation below).
  • We’re just asking whether p is a divisor of S.
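For example, the set {0, 2} would be represented by 2 × 5 = 10: “0 is an element of 10” and “2 is an element of 10” come out true (2 and 5 both divide 10), while “1 is an element of 10” is false (3 doesn’t).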

4. We can easily define a “tuple” representation

  • (x,y) == (x^2 + x + y^2 + 3y + 2xy)/2
  • Turns out this gives a different answer for each choice of x and y over the natural numbers (it’s the Cantor pairing function(wp)).
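For example, (2,1) == (4 + 2 + 1 + 3 + 4)/2 = 7, and (checking the small cases by hand) no other pair of naturals maps to 7.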

5. Our list representation is a set of tuples. [a,b,c,…] == {(0,a),(1,b),(2,c),…}

6. Let f(x,y) be some functional predicate (i.e. for each x, there’s exactly one y such that f(x,y) is true). Then we can define a “z = apply f to x, n times” predicate:

  • Is there a list L…
  • such that x is the 0th element of L
  • and for each i<n, if y is the ith element of L and y' is the (i+1)th element, then f(y, y') holds
  • and z is the nth element of L
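Since I couldn’t resist, here’s a quick sanity check of steps 3–6 in ordinary Python – obviously not the first-order version, just a check that the encodings behave as claimed; all the function names are my own:

    def nth_prime(n):
        """Return the (n+1)-th prime, so nth_prime(0) == 2."""
        count, candidate = -1, 1
        while count < n:
            candidate += 1
            if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
                count += 1
        return candidate

    def pair(x, y):
        """Step 4: the pairing (x,y) == (x^2 + x + y^2 + 3y + 2xy)/2."""
        return (x * x + x + y * y + 3 * y + 2 * x * y) // 2

    def encode_set(elements):
        """Step 3: a finite set of naturals as a product of primes."""
        code = 1
        for e in elements:
            code *= nth_prime(e)
        return code

    def is_element(n, set_code):
        """n is in the set iff the (n+1)-th prime divides the code."""
        return set_code % nth_prime(n) == 0

    def encode_list(xs):
        """Step 5: a list as the set {(0, xs[0]), (1, xs[1]), ...}."""
        return encode_set(pair(i, x) for i, x in enumerate(xs))

    def check_apply_n_times(f, x, n, z):
        """Step 6: build the witnessing list, then check the three conditions
        purely by divisibility tests on its code."""
        ys = [x]
        for _ in range(n):
            ys.append(f(ys[-1]))
        L = encode_list(ys)                     # the list L from the predicate
        return (is_element(pair(0, x), L)       # x is the 0th element of L
                and all(is_element(pair(i + 1, f(y)), L)   # consecutive elements related by f
                        for i, y in enumerate(ys[:n]))
                and is_element(pair(n, z), L))  # z is the nth element of L

    # Doubling 5 three times gives 40:
    print(check_apply_n_times(lambda y: 2 * y, 5, 3, 40))   # True
    print(check_apply_n_times(lambda y: 2 * y, 5, 3, 41))   # False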

Phew.
