Toronto LW Singularity Discussion, 2012-04-05

Sorry I haven’t been writing these up – or holding new ones. This will get fixed in the near future.

Present: SD, GE, SM

In this discussion we brainstormed counterarguments to the Singularity Institute worldview. We ranked them according to our vague feeling of how plausible they were – don’t read too much into this classification. Where a rating has a ½ or a +/- it’s generally because there was some disagreement within the group. (Our plausibility tolerance probably also drifted, so I’m listing the arguments in the order they came up so that you can correct for that.)

We also noted whether each argument relates to the TIMESCALE for the emergence of AGI (as opposed to whether it’s possible in the first place), and whether it relates to the CONSEQUENCES. If you reject the multiple discovery(wp) hypothesis and assume that AGI invention occurs infrequently, then arguments suggesting that most AGIs will have mild consequences are also relevant to the timescale for the emergence of destructive or high-impact AI.
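
A rough sketch of that reasoning, with purely illustrative assumptions: suppose AGI inventions arrive as a Poisson process with rate $\lambda$, and each invention is destructive/high-impact with probability $p$. Then the expected wait for the first AGI is $1/\lambda$, while the expected wait for the first destructive one is

$$\mathbb{E}[T_{\text{destructive}}] = \frac{1}{\lambda p},$$

so the smaller $p$ is (i.e. the stronger the CONSEQUENCE arguments), the longer the effective TIMESCALE for destructive AGI.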

4 is the most plausible (according to us) and 1 is the least plausible.

2. Intelligence is inherently biological. (It’s the only example we have so far.)

3 (CONSEQUENCE). AI will have motivational systems that can be hijacked:

  • analogous to TV subverting need for social interaction
  • AI makes itself think it’s satisfied its own goal
  • Fictional example: defeating superbeings in Star Trek

4 (TIMESCALE). Predicting the future is really hard, so don’t be overconfident in your predictions.

4. An intelligence explosion would be a completely new kind of thing, totally unprecedented in the entire history of our visible universe. You need a lot of bits of evidence before you should go around predicting something like that.
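
A back-of-the-envelope version of the “bits of evidence” bookkeeping (just standard Bayesian log-odds arithmetic with made-up numbers): each bit is a factor-of-2 likelihood ratio, and

$$\log_2 \frac{P(H \mid E)}{P(\lnot H \mid E)} = \log_2 \frac{P(H)}{P(\lnot H)} + \log_2 \frac{P(E \mid H)}{P(E \mid \lnot H)},$$

so if your prior odds on “an unprecedented intelligence explosion happens” are, say, $1:2^{20}$, you need roughly 20 bits of cumulative evidence just to reach even odds.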

3. Humans are not capable of inventing AGI. (I think I’ve heard this phrased as “if our brains were simple enough for us to understand, we’d be so simple that we couldn’t understand them”.)

2. Intelligence can only evolve, it can’t be engineered.

3. Fermi paradox(wp) – one possible explanation is that the gate (i.e. the Great Filter) occurs after our current point in development, and most societies extinguish themselves before creating a fooming AI (which would presumably make a mess of its future light cone, so we can observe the absence of such things).

4. Aside from this evidence, we might be concerned that other catastrophes will occur first: some kind of existential disaster or economic collapse/slowdown.

4. Planning fallacy(wp) in the Whole Brain Emulation roadmap. (I don’t have this listed in my notes as TIMESCALE but it probably should be).

2½. The universe is not deterministic enough for a (Bayesian) AGI – it’s not possible to make valid inferences about the universe.

2. Human intelligence is already near the limit. (Does this account for sped up versions of human intelligence?)

3. Or the upper limit is above human but below foom.

1. Quantum immortality means we don’t need to worry about x-risks.

1. AI will necessarily respect us and our values (intelligence implies compassion).

1?. AI goal systems converge (something like Kant’s categorical imperative(wp)).

1+ (CONSEQUENCE). No-one will invent AGI without appropriate safeguards.

1. Gödel-based arguments and John Searle’s Chinese Room(wp).

2½. The human brain relies on quantum phenomena.

2-. The simulation hypothesis – the simulation being shut down due to:

  • Being in an AI box (e.g. if we were taking part in David Chalmers’ leakproof singularity)
  • Simulation reaches its termination condition, e.g. if an AI is interested in the distance to the nearest alien world which has also gone foom
  • Simulation crash

3-. The Singularity wouldn’t be noticed.

2. A superintelligence might still make mistakes (e.g. creating a physics disaster) – some kind of fundamental limit to risk management.

3 (TIMESCALE). Backlash against AI – luddism (reactions against transhumanism, or from people losing jobs).

3. We (i.e. SD, GE and SM) are insane and incapable of reasoning correctly. Acting as Bayesians is hard.

4. Singularity Institute – selection bias in favour of people with this worldview.

4. Selection bias amongst singularitarians in general: not enough people are looking for counterarguments, and people who believe it isn’t going to happen just aren’t interested.

3 (TIMESCALE). We’ve been massively over-optimistic about AI capability in the past. (We had a brief discussion about whether “optimistic” is the right word for high expectations of technological progress if you think it’s ultimately going to end badly; the vague conclusion was that we’re sort of stuck with this usage.)

3- (TIMESCALE). We will be limited by hardware capabilities.

2½. Powerful interests emerge that are capable of stopping AGI research, e.g. a worldwide totalitarian government or coalition.

(Sort of a side note here – Warren Buffett doesn’t like investing in innovation. I can’t remember what the exact relevance was.)

1. Moral uncertainty over e.g. ems – if they take over then it’s OK?

(Another side note – most (?) contrarians are biologists. Biologists have a near view of brain complexity).

TODOs:

  • look up past futurology, wrong predictions made by smart people
  • look for more of these arguments on LW or other places – the number of good arguments that we missed in this discussion gives evidence as to our own arguing ability