What we seem to agree on so far

Based on the last Toronto LW singularity discussion, we seem to be at least mostly agreed on the following things. (I’ve question-marked the things I’m not so sure we agree on, or that haven’t been discussed in much detail). Also, while I’m trying not to just write down my own opinion on everything here, that’s inevitably going to happen to some extent, so I’ll get the group to review this.

  • (The brain is fundamentally a computer and there’s no theoretical reason that our cognitive abilities can’t be reproduced in silicon?)
  • The human brain is probably not near the limit of what’s possible in terms of intelligence.
  • The following concepts are not completely wacky. They’re at least worth spending time thinking about and discussing semi-regularly down the pub. (There’s an obvious selection bias at work on this one).
    • Whole-brain emulations (“ems”)
    • Human-level or above human-level artificial intelligence (“AI”)
    • Intelligence explosion
  • Human extinction, or economic/technological collapse, might prevent any of this from happening.
  • Most of us believe there’s a >50% chance that ems appear before AI.
  • Ems are likely to be
    • Faster than human brains
    • Economically cheaper than human brains
    • Able to make copies of themselves (i.e. direct copies of their mind-state, not “children”)
  • If ems are introduced as economic participants, the global economy will change drastically (“em world”)
    • Likely involving much faster rates of growth (what will be the ultimate limit on this?)
    • The role of humans in this world is unclear, and possibly extremely precarious.
  • “Em world” consists of at least two phases: an initial phase where we can expect ems to behave like somewhat normal human beings, and later phases where economic and Darwinian pressures have changed their structure and behaviour. (To what extent do these considerations apply to “AI world” also?)
  • Initial em world depends on who gets access to uploading first, what their motivations are, how quickly everyone else gets access to uploading and whether they want to use it, etc. Lots of uncertainty there; avoid detailed stories.
  • Things that may or may not happen later on in em world:
    • An intelligence explosion
    • Specialisation of ems to different niches, possibly involving drastic self-modification
    • Each niche gets dominated by copies of a single em
  • AIs are really hard to contain 
    • In general, the smarter it is the more likely it is to find loopholes
    • AI-box argument
    • Source code leakage
    • Technological arms-race/waterline – if a perfectly contained AI is developed, it won’t be long before another team comes up with a sloppier one
    • (Does this apply to ems too? SD and GE have touched on em confinement before, but ethics and practicality of this aren’t clear)
  • Unclear to what extent human values would be preserved in em-world, AI-world or intelligence explosion

A few of our values and meta-ethics:

  • Belief/value separation – reasoning about “ought” won’t help us answer questions about “is” or “will be”. (But conversely, “is” might help us answer “ought”).
  • Human extinction is bad?
  • Ems are morally relevant entities?
  • Other things equal, we’d prefer not to have a singleton AI interfering in our business?
  • Different people have different preferences, almost certainly incompatible?
  • Instrumental vs. terminal values – broadly speaking, instrumental values are updated on factual evidence, while terminal values are not
  • Personal preferences vs. moral values – broadly speaking, a preference is considered moral if other people are involved?

A few tactics for our group:

  • Read lots of stuff (preferably diverse, relevant and non-ridiculous material)
  • For now I’m sticking with the “opinion database” consisting of notes taken from reading material. I’ll drop this if it turns out not to be useful.
  • In discussions, use clear and concrete language and avoid too much jargon.
    • e.g. WBE is “waiting for someone to die, then chopping their brain into really thin slices, scanning each slice and performing image processing to determine where each neuron is and how they are connected, then running a computer program which will simulate the behaviour of all the neurons simultaneously” (see the toy simulation sketch after this list)
    • It’s easier to analyse that kind of concept (for ethics, practicality and consequences) than to analyse something like “uploading a mind into a computer”
  • Obviously I’m keeping these minutes so we can record any interesting ideas that come up and make sure that the discussion is actually progressing
  • I’ll also create “idea posts” ahead of time (both for the singularity discussions and the general discussions) and encourage others to do the same.
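
To make the “simulate the behaviour of all the neurons simultaneously” step concrete, here is a deliberately crude sketch: a small random network of leaky integrate-and-fire neurons stepped forward in time. Everything in it – the neuron model, the random stand-in “connectome”, all the parameter values – is an assumption made purely for illustration; a real emulation would use connectivity recovered from the scan and far more faithful neuron models.

```python
import numpy as np

# Toy "simulate all the neurons simultaneously" loop. Purely illustrative:
# the leaky integrate-and-fire model, the random "connectome" and every
# parameter value here are assumptions for the sketch, not anything an
# actual WBE project would use.
rng = np.random.default_rng(0)

n_neurons = 100
dt = 1.0        # timestep, ms
tau = 20.0      # membrane time constant, ms
v_thresh = 1.0  # firing threshold
v_reset = 0.0   # potential after firing

# In real WBE this weight matrix would come from the brain scan;
# here it's just random noise.
weights = rng.normal(0.0, 0.1, size=(n_neurons, n_neurons))

v = np.zeros(n_neurons)      # membrane potentials
fired = np.zeros(n_neurons)  # which neurons fired on the previous step

for _ in range(1000):
    drive = rng.normal(0.05, 0.02, size=n_neurons)  # external input
    synaptic = weights @ fired                      # input from last step's spikes
    v += (dt / tau) * (-v) + drive + synaptic       # leaky integration
    fired = (v >= v_thresh).astype(float)           # threshold crossing = spike
    v[fired > 0] = v_reset                          # reset neurons that fired

print(f"{int(fired.sum())} of {n_neurons} neurons fired on the final step")
```

Even a toy loop like this makes the practical questions easier to state: how faithful does the neuron model have to be, and at what resolution does the scan need to capture the connectivity, before running the program counts as running the person?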