Toronto LW Singularity Discussion, 2011-11-22

Present: SD, GE (host, minutes)

This was the first meeting and a particularly rainy night, so only two of us. I decided there wasn’t much point having a structured discussion – that can wait until the next one when hopefully we’ll have more people. So this is a bit all over the place; future meetings will have a clearer direction.

SD wants the goal of these discussions to be obtaining an answer to the question: Is the Singularity worth worrying about? (Relative to other things)

GE also has a more specific goal: finding out whether Singularity Institute is the highest expected-utility-per-dollar charity (and if not, then who is). This seems like a much harder question to answer though, and it seems to make sense to tackle SD’s question first. We’ll see if anyone in future meetings has any other goals for these discussions.

There seem to be three (or four) components to the question “is the Singularity worth worrying about?”

  • Is it going to happen? (And will it be good or bad?)
  • Can you do anything about it?
  • How much should we be worrying about other things instead? (This one may be beyond the scope of these discussions though.)

SD asked about the history of existential risk – in particular, have previous high-energy physics experiments posed an x-risk? GE recalls reported fears that the first atom bomb test would somehow ignite the atmosphere – we now know that that isn’t physically possible, but at the time we didn’t really know it, and the reassurance was more a sort of really good hunch. So given the information available at the time, it would have been an x-risk.

SD brought up the issue that there might be a social imperative, and creeping need, for potential Singularity-pathway technologies – for example, cybernetics for amputees and (whatever) for Alzheimer’s sufferers. Technologies might be sold along those lines – it’s good social signalling that’s difficult to argue against. (GE attempted to contrast this with the more free-market economic pressures associated with technological advancement, but I didn’t make my point very well so we’ll gloss over that). There might be less talk about computer intelligences, which people are still scared of.

SD mentioned that people who are worried about AI fall into two camps – Terminator-watchers vs. Less Wrong. The implied distinction being irrational fear due to ignorance (and possibly unfavourable experiences with mundane computer technologies), vs. rational concern due to actually-when-you-really-think-about-it-maybe-there-is-some-kind-of-problem-here-after-all. GE wasn’t sure that the split was that clear – while LW members might know about biases, we’re not immune from them, and at least some of the AI-worry might be old irrational phobias dressed up in the language of rationality. (I’m not saying that it is; it’s just sort of difficult to distinguish.)

GE points out that there’s still disagreement about the Singularity issue on LW – either outright scepticism or disagreement about important details such as the AI-FOOM scenario. SD thought that this lack of Aumann agreement might be due to ignorance of the field. GE agreed that it might but thinks it might also be due to rationality-fail on one or both sides.

GE wants to find someone who’s predicted the future and used math to back it up. He had a vague feeling that The Uncertain Future (an SI project) fitted that description but wasn’t sure. When we went to look, GE’s computer started exhibiting virus-infected behaviour (immediately prior to that, actually – it wasn’t the SI’s doing), so we thought we’d better stop.

SD wants to find out what’s been done in AI research. GE brought up the narrow-AI vs. AGI distinction. (Incidentally, when I say AI on this blog, I could be referring to either of these.) SD tied this into Yudkowsky’s and Hanson’s different views of AI:

  • Yudkowsky: there’s some meaningful thing called general intelligence or optimisation power that we can think of in terms of general laws
  • Hanson: intelligence is module-based: an entity is considered intelligent if it has certain abilities, which are not really related to each other

Note that I’m minuting what we said rather than what Yudkowsky and Hanson actually think, so this may not be completely accurate.

(Offtopic but interesting point from SD: some people learn multiple languages more easily than others. Why?)

GE brought up Kurzweil’s human-genome-size-as-AGI-size-estimate issue (there’s a rough back-of-envelope number after this list). GE’s understanding is that it’s a program-size estimate only, and that it doesn’t address:

  • How easy it would be to actually do
  • Computational complexity – how many bytes of RAM or processing cycles it would need
  • Information introduced by the environment – it assumes that the entity running this software would be exposed to a similar environment to that of a developing human
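
For scale, here’s my own back-of-envelope arithmetic on the number Kurzweil starts from (these figures are mine, not something we worked through at the meeting):

    # Raw information content of the human genome -- the starting point for
    # Kurzweil's program-size estimate. Rough figures, for scale only.
    base_pairs = 3.2e9            # approximate length of the human genome
    bits = base_pairs * 2         # four possible bases = 2 bits per base pair
    megabytes = bits / 8 / 1e6
    print(f"~{megabytes:.0f} MB uncompressed")   # roughly 800 MB; much of the
                                                 # genome is repetitive, so any
                                                 # compressed figure is far smaller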

GE imagines the laws of physics (plus the environment) as a function which maps (organism, genome) -> (organism, genome). He then imagines that if the genome were known but the organism wasn’t, what we’d essentially be searching for is a nontrivial fixed point of this function. Or maybe not quite a fixed point – the organism might wobble around inside some region. GE thinks there is likely to be just one such fixed region (or a very few of them), and SD thinks there might be lots.
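
Here’s a toy sketch of that framing – everything in it is made up for illustration (the real “develop” step would be the laws of physics plus the environment acting on a full physical description of the organism, not a few numbers):

    # Toy fixed-point search: given a known genome, look for an organism that
    # the development map sends (approximately) back to itself.
    def develop(organism, genome):
        # Placeholder dynamics standing in for physics + environment over one step.
        return tuple(0.5 * x + g for x, g in zip(organism, genome)), genome

    def find_fixed_point(genome, guess, steps=1000, tol=1e-9):
        organism = guess
        for _ in range(steps):
            nxt, _ = develop(organism, genome)
            if max(abs(a - b) for a, b in zip(nxt, organism)) < tol:
                return nxt        # development no longer changes the organism
            organism = nxt
        return organism           # or it may just "wobble around" inside some region

    genome = (1.0, 2.0, 3.0)
    print(find_fixed_point(genome, guess=(0.0, 0.0, 0.0)))   # converges to (2, 4, 6)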

SD points out that the way the human brain wires itself up during development (defined here as up to early/mid twenties) involves randomness. GE points out that (pseudo-) randomness is easy on a computer.
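
GE’s point in miniature (a trivial sketch, obviously not a claim about how a brain simulation would actually be seeded):

    # Seeded pseudo-randomness: "random-looking" wiring decisions that are
    # nonetheless perfectly reproducible from run to run.
    import random

    rng = random.Random(42)                                    # fix the seed
    wiring = [rng.random() < 0.1 for _ in range(20)]           # "random" connection choices
    rng2 = random.Random(42)
    assert wiring == [rng2.random() < 0.1 for _ in range(20)]  # identical every time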

We got to talking about whole brain emulations. GE describes how the task could be approached at different levels of resolution/abstraction (see Table 2 here). Lower levels (i.e. simulating individual molecules) require way more computing power, but higher levels require deeper scientific understanding – although the latter is less obviously and straightforwardly true.

Over time, assuming no crisis turns up, both scientific understanding and available computer power will increase. There are two issues here – firstly, which level of abstraction represents the most easily obtained trade-off, i.e. which will be reached first? And secondly, with that as the target, would scientific understanding or computing power prove to be the limiting factor? GE recalls the argument that understanding tends to increase in leaps and bounds, so if that were the limiting factor then WBE would be achieved rather suddenly. If computing power were the limiting factor then it would be achieved more gradually, with mouse brain emulations becoming possible before human ones, and hugely slow and expensive emulations arriving before cheap ones. The thinking went, this might allow society time to adjust before the full-on onslaught of crazy. GE wasn’t sure whether or not this made sense.

Regarding WBE timelines – it seems like the sort of problem where “it’s always harder than you think, even bearing in mind that it’s always harder than you think”.

We discussed em society a little. How would government/economics work in such a world? We really didn’t have any idea. GE wasn’t sure about Hanson’s idea that humans could survive in post-WBE society by owning capital and living off rent/interest. This seems to require property rights to be respected – an issue which Hanson may have addressed somewhere but I haven’t stumbled across it yet.


SD points out that humans might be hostile to ems and want them shut down or made illegal – due to the ugh factor, or not wanting to live in a post-WBE economy. SD thinks this might be like the War On Drugs – legislation vs. market forces. GE wondered whether there would still be ems around, but they’d be hidden (e.g. within organised crime gangs). GE brought up the AI-in-a-box thought experiment. SD brought up the issue of ems-as-slaves – would people slap a convenient “nonperson” label on ems in order to be comfortable with them working for us?

SD considers the em world “weirder” than a paperclipper going FOOM. (Possibly a kind of uncanny valley?) (Also, we weren’t sure about the terminology “superintelligence” – does it just mean something way the hell smarter than a human, or does it mean an intelligence optimised almost to the limit of what’s physically possible, i.e. the conjectured result of FOOMing?)

If the FOOM scenario is possible then it could happen to both Friendly and Unfriendly AIs. In the context of Friendly AI, GE wondered whether a FOOM might itself be considered negative utility – i.e. whether we’d want to stop the FAI from improving itself (beyond a certain point) unless absolutely necessary, e.g. in order to fight a self-improving UFAI. Possible reasons for a “friendly” FOOM being bad:

  • Just that it would freak people out
  • We might have got our “nonperson predicate” askew, and FAI might be experiencing massive suffering for some reason without us realising it (and yet it wouldn’t be motivated to stop this, caring only about humanity’s preferences). It seems plausible that a more sophisticated mind might be capable of more suffering. Can an ant suffer as much as a human?
  • This is my favourite reason: under the Simulation Hypothesis, anything which might plausibly cause the simulation to shut down is an existential risk (at least a risk to human existence). An AI FOOM might massively increase the computing power required to simulate our universe, which seems a plausible shutdown trigger.

SD was curious about Singularity Institute’s take that Friendly AI is essentially a math problem. We didn’t dwell on this much during the meeting, but I think it’s a really interesting question.

SD observed that (re. the Singularity) we only have a small number of models available for something enormously complex:

  • Kurzweil
  • FOOM
  • Ems
  • Event horizon

We got to talking about happiness – GE vaguely recalls that, between countries, happiness is somewhat loosely correlated with wealth. But it seems presumptuous to suppose that the trend will continue for societies that are extremely wealthy.

SD points out that for both lottery winners and paraplegics, happiness tends to revert to the baseline. SD imagines happiness as the long-term average of pleasure. GE imagines happiness and pleasure as being more independent (i.e. you can experience a lot of pleasure, even long-term, without being happy, and vice versa). But GE agrees that pleasure comes in spikes and happiness is more about long-term trends.

SD points out link between happiness and (perceived) status. Status is relative and somewhat zero-summish, so this might make globally optimising happiness somewhat tricky.

GE points out that people don’t act as happiness maximisers. Firstly, while people presumably like being happy, seeking it out doesn’t always seem to be a primary motivation (though I may be wrong about that). Secondly, people aren’t all that agenty – cognitive biases such as anchoring seem to show that people don’t act so as to maximise anything in particular. SD mentions Prospect Theory.
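
Since Prospect Theory only came up in passing, here’s its value function with the commonly cited Tversky–Kahneman parameter estimates (none of this is from the meeting): outcomes are valued relative to a reference point, and losses loom larger than gains, which is one way of not maximising anything as simple as expected utility.

    # Prospect-theory value function, using the standard Tversky-Kahneman (1992)
    # parameter estimates (alpha = beta = 0.88, lambda = 2.25).
    def value(x, alpha=0.88, beta=0.88, lam=2.25):
        # x is a gain or loss relative to a reference point, not total wealth
        return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

    print(value(100))    # ~57.5 -- diminishing sensitivity to gains
    print(value(-100))   # ~-129.4 -- an equal-sized loss hurts about twice as much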

SD describes Less Wrong as cognitive behavioural therapy for non-depressed people.
