Posts Tagged 'lesswrong'

Toronto LW Singularity Discussion (sort of), 2012-04-19

Present: SD, GE, SB

SD was reading an article on Ribbonfarm about “Hackstability”, the equilibrium position between singularity and collapse.

SB wonders how far into the future we can reliably predict, so this discussion is more about the near future (trope) than the singularity, which we hope is still some way off. We pick a five-year timescale and try to predict what we think we will see.

(This post may contain buzzwords)


Toronto LW Singularity Discussion, 2012-04-05

Sorry I haven’t been writing these up – or holding new ones. This will get fixed in the near future.

Present: SD, GE, SM

In this discussion we were brainstorming counterarguments to the Singularity Institute worldview. We ranked these according to our vague feeling of how plausible they were – don’t read too much into this classification. Where an entry says ½ or +/-, it’s generally because there was some disagreement within the group. (Our plausibility tolerance probably also drifted, so I’m listing these in the order they came up so that you can correct for that).

We also noted whether the arguments relate to the TIMESCALE for the emergence of AGI (as opposed to whether it’s possible in the first place), and whether they relate to the CONSEQUENCES. If you reject the multiple discovery (wp) hypothesis and assume that AGI invention occurs infrequently, then arguments suggesting that most AGIs will have mild consequences are also relevant to the timescale for the emergence of destructive or high-impact AI.

4 is the most plausible (according to us) and 1 is the least plausible.


What we seem to agree on so far

Based on the last Toronto LW singularity discussion, we seem to be at least mostly agreed on the following things. (I’ve bolded and question-marked the things that I’m not quite so sure we agree on, or that haven’t been discussed in much detail). Also, while I’m trying not to just write down my own opinion on everything here, it’s inevitably going to happen to some extent, so I’ll get the group to review this.

  • (The brain is fundamentally a computer and there’s no theoretical reason that our cognitive abilities can’t be reproduced in silicon?)
  • The human brain is probably not near the limit of what’s possible in terms of intelligence.
  • The following concepts are not completely wacky. They’re at least worth spending time thinking about and discussing semi-regularly down the pub. (There’s an obvious selection bias at work on this one).
    • Whole-brain emulations (“ems”)
    • Human-level or above human-level artificial intelligence (“AI”)
    • Intelligence explosion
  • Human extinction or economic/technological collapse might prevent any of this from happening.
  • Most of us believe there’s a >50% chance that ems appear before AI.
  • Ems are likely to be
    • Faster than human brains
    • Economically cheaper than human brains
    • Able to make copies of themselves (i.e. direct copies of their mind-state, not “children”)
  • If ems are introduced as economic participants, the global economy will change drastically (“em world”)
    • Likely involving much faster rates of growth (what will be the ultimate limit on this?)
    • Role of humans in this world is unclear and possibly extremely precarious.
  • “Em world” consists of at least two phases: an initial phase where we can expect ems to behave like somewhat normal human beings, and later phases where economic and Darwinian pressures have changed their structure and behaviour. (To what extent do these considerations apply to “AI world” also?)
  • Initial em world depends on who gets access to uploading first, what their motivations are, how quickly everyone else gets access to uploading and whether they want to use it, etc. Lots of uncertainty there; avoid detailed stories.
  • Things that may or may not happen later on in em world:
    • An intelligence explosion
    • Specialisation of ems to different niches, possibly involving drastic self-modification
    • Domination of each niche by copies of a single em
  • AIs are really hard to contain 
    • In general, the smarter an AI is, the more likely it is to find loopholes
    • AI-box argument
    • Source code leakage
    • Technological arms-race/waterline – if a perfectly contained AI is developed, it won’t be long before another team comes up with a sloppier one
    • (Does this apply to ems too? SD and GE have touched on em confinement before, but ethics and practicality of this aren’t clear)
  • Unclear to what extent human values would be preserved in em-world, AI-world or intelligence explosion

A few of our values and meta-ethics:

  • Belief/value separation – reasoning about “ought” won’t help us answer questions about “is” or “will be”. (But conversely, “is” might help us answer “ought”).
  • Human extinction is bad?
  • Ems are morally relevant entities?
  • Other things equal, we’d prefer not to have a singleton AI interfering in our business?
  • Different people have different preferences, almost certainly incompatible?
  • Instrumental vs. terminal values – broadly speaking, instrumental values are updated on factual evidence
  • Personal preferences vs. moral values – broadly speaking, a preference is considered moral if other people are involved?

A few tactics for our group:

  • Read lots of stuff (reading material should preferably be diverse, relevant and non-ridiculous)
  • For now I’m sticking with the “opinion database” consisting of notes taken from reading material. I’ll drop this if it turns out not to be useful.
  • In discussions, use clear and concrete language and avoid too much jargon.
    • e.g. WBE is “waiting for someone to die, then chopping their brain into really thin slices, scanning each slice and performing image processing to determine where each neuron is and how they are connected, then running a computer program which will simulate the behaviour of all the neurons simultaneously” (a toy sketch of that last simulation step appears after this list)
    • It’s easier to analyse those kinds of concept (for ethics, practicality and consequences) than to analyse something like “uploading a mind into a computer”
  • Obviously I’m keeping these minutes so we can record any interesting ideas that come up and make sure that the discussion is actually progressing
  • I’ll also create “idea posts” ahead of time (both for the singularity discussions and the general discussions) and encourage others to do the same.
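
As a concrete illustration of the last step in that WBE description – “running a computer program which will simulate the behaviour of all the neurons simultaneously” – here is a minimal toy sketch in Python. Everything in it is assumed purely for illustration: random weights stand in for a scanned connectome, and the dynamics are a crude leaky integrate-and-fire rule rather than any biologically faithful neuron model. The point is only the shape of the loop, not anything a real emulation would use.

```python
import numpy as np

# Toy sketch only: a random "connectome" and crude leaky integrate-and-fire
# dynamics, standing in for the scanned wiring and real neuron models a
# genuine whole-brain emulation would need.

rng = np.random.default_rng(0)
n_neurons = 100
weights = rng.normal(0.0, 0.5, size=(n_neurons, n_neurons))  # stand-in connectome
potential = np.zeros(n_neurons)        # membrane potential of each neuron
threshold, leak, dt = 1.0, 0.1, 0.1    # arbitrary toy parameters

for step in range(1000):
    fired = potential >= threshold                           # which neurons spike this step
    potential[fired] = 0.0                                   # reset the neurons that spiked
    synaptic_input = weights @ fired.astype(float)           # input from spiking neighbours
    external_input = rng.normal(0.0, 0.2, size=n_neurons)    # stand-in for sensory input
    potential += dt * (-leak * potential + synaptic_input + external_input)

print(f"Simulated {n_neurons} toy neurons for 1000 steps.")
```

Given the scanned wiring, the emulation itself is “just” an enormous version of this kind of state-update loop; the hard parts are the scanning, the neuron models and the scale.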

Toronto LW Singularity Discussion, 2012-03-09

Present: SB, SD, GE, SF, EJ

Minutes: GE (note that when I’ve written something down that doesn’t make sense to me afterwards, I sometimes leave it out here. Please let me know if you think I’ve missed out an important point)

The starting point of the discussion was Yudkowsky’s notion of an optimisation process, in particular the posts Optimization and the Singularity and Observing Optimization.

Here are my notes from beforehand:

Yudkowsky’s view:

My view (actually I didn’t get to communicate this in the meeting, sorry):

  • Thermodynamics – to hit a small target, must start from smaller platform?
  • We are limited in the kinds of properties we can expect from all optimisation processes in general. When it comes to programmable optimisation processes, someone could write one to do the exact opposite of what we expect (e.g. the opposite of hitting a small target would be an entropy-maximiser)
  • Instead, define it in terms of who beats whom, or its success in a wide range of environments?

Concepts:

  • optimisation process
  • optimisation power
  • intelligence as optimisation power divided by resource usage (see the rough sketch after this list)
  • recursively self-improving optimisation process
  • which features of self an optimisation process can improve, and how quickly
  • goal stability
  • friendliness
  • “Friendly AI” as a particular approach to friendliness
  • coherent extrapolated volition
  • singleton
  • programmable optimisation process (AGI may be programmable, evolution not)
  • meme (actually I’m interested in a generalised notion – any kind of information that undergoes copying, mutation and selection. Genes would be included here).
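
Since “optimisation power” is doing a lot of work in the list above, here is a rough sketch of one way it is sometimes quantified in bits (broadly in the spirit of Yudkowsky’s “Measuring Optimization Power” post; the exact formulation and symbols below are my own paraphrase rather than a quotation):

$$\mathrm{OP}(s^\ast) \;=\; -\log_2 \frac{\bigl|\{\, s \in S : U(s) \ge U(s^\ast) \,\}\bigr|}{|S|}$$

where $S$ is a finite set of possible outcomes, $U$ is the optimiser’s preference ordering over them, and $s^\ast$ is the outcome actually achieved. Hitting a smaller, less likely target means more bits of optimisation power, and “intelligence as optimisation power divided by resource usage” then just divides this quantity by some measure of the resources the process consumed.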


Toronto LW Singularity Discussion, 2012-01-12

Warning: contains mathematics

Present: RA, SB, SD, GE, MK, JM

Minutes: GE

Host: pub

This discussion was slightly more informal due to a noisy venue and two newcomers. Welcome to Less Wrong, RA and MK! Also great to see familiar faces SB and JM taking an interest in the Singularity discussion.

GE kicks off with a discussion of Minds, Machines and Gödel. GE attempts to explain Gödel’s incompleteness theorem (wp) but I don’t get very far. (This page is a good quick introduction, but I feel it glosses over some important details – Gödel’s incompleteness theorems only talk about whether statements are provable, not whether they’re true. If you get those concepts even slightly confused then you end up in a world of fail).
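
For reference, here is a rough statement of the first incompleteness theorem (informal, and glossing over technical conditions – Gödel’s original version needs a slightly stronger assumption than plain consistency, which Rosser’s refinement removes): if $T$ is a consistent, effectively axiomatisable theory that includes basic arithmetic, then there is an arithmetical sentence $G_T$ with

$$T \nvdash G_T \qquad\text{and}\qquad T \nvdash \neg G_T.$$

The standard Gödel sentence asserts, in effect, its own unprovability in $T$, so if $T$ is consistent then $G_T$ is unprovable and therefore true in the standard model of arithmetic – true but not provable in $T$, which is exactly the distinction that is easy to blur.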


Toronto LW Singularity Discussion, 2011-12-06

First off, apologies to everyone for the super-late circulation of these minutes. Christmas and my day job have been getting in the way somewhat. Also, as always, if I seem to have misrepresented what you said or what you meant then let me know. A technical note: rather than listing them all at the end, I’ve put Wikipedia links inline with (WP) after them, to let you know they aren’t linking anywhere exciting.

Present: SD (host), GE (minutes), SM

Apologies: SS

The question for this meeting was: how do we go about deciding whether the Singularity is something we should be worried about? What’s the next thing we need to do?

SD says: we need to read lots of stuff. We all generally agreed on this point. So yeah, that answered “what’s the next thing we need to do” pretty quickly. But we still had plenty to talk about.


Toronto LW Singularity Discussion, 2011-11-22

Present: SD, GE (host, minutes)

This was the first meeting and a particularly rainy night, so only two of us. I decided there wasn’t much point having a structured discussion – that can wait until the next one when hopefully we’ll have more people. So this is a bit all over the place; future meetings will have a clearer direction.

SD wants the goal of these discussions to be obtaining an answer to the question: Is the Singularity worth worrying about? (Relative to other things)

GE also has a more specific goal: finding out whether the Singularity Institute is the highest expected-utility-per-dollar charity (and if not, which charity is). This seems like a much harder question to answer, though, and it seems to make sense to tackle SD’s question first. We’ll see if anyone in future meetings has any other goals for these discussions.

There seem to be three (or four) components to the question “is the Singularity worth worrying about?”

