Present: SB, SD, GE, SF, EJ
Minutes: GE (note: where something I wrote down no longer makes sense to me afterwards, I have sometimes left it out here. Please let me know if you think I have missed an important point.)
The starting point of the discussion was Yudkowsky’s notion of an optimisation process, in particular the posts ‘Optimization and the Singularity’ and ‘Observing Optimization’.
Here are my notes from beforehand:
Yudkowsky’s view:
My view (actually I didn’t get to communicate this in the meeting, sorry):
- Thermodynamics: to hit a small target, must one start from an even smaller platform?
- We are limited in the kinds of properties we can expect of optimisation processes in general. When it comes to programmable optimisation processes, someone could write one to do the exact opposite of what we expect (e.g. the opposite of hitting a small target would be an entropy maximiser; see the sketch after this list).
- Instead, define an optimisation process in terms of which processes it beats, or its success across a wide range of environments?
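A toy sketch of the programmability point (my illustration, not something from the meeting; negating the objective here stands in for “doing the exact opposite”, rather than a literal entropy maximiser). The same search machinery can be pointed at a goal or at its reverse just by flipping the sign of the objective; all names below are hypothetical:

```python
import random

def hill_climb(objective, start, neighbours, steps=1000):
    """Generic local search: repeatedly keep a better random neighbour."""
    current = start
    for _ in range(steps):
        candidate = random.choice(neighbours(current))
        if objective(candidate) > objective(current):
            current = candidate
    return current

# Toy search space: length-16 bit strings; each neighbour flips one bit.
def neighbours(s):
    return [s[:i] + (1 - s[i],) + s[i + 1:] for i in range(len(s))]

target = lambda s: sum(s)   # "hit the small target": the all-ones string
anti = lambda s: -sum(s)    # same machinery pointed at the opposite goal

start = tuple(random.randint(0, 1) for _ in range(16))
print(hill_climb(target, start, neighbours))   # drifts towards all ones
print(hill_climb(anti, start, neighbours))     # drifts towards all zeros
```

The point is that nothing in the search procedure itself constrains what it optimises for; the target is just a parameter.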
Concepts:
- optimisation process
- optimisation power
- intelligence as optimisation power divided by resource usage (see the formula sketch after this list)
- recursively self-improving optimisation process
- which features of self an optimisation process can improve, and how quickly
- goal stability
- friendliness
- “Friendly AI” as a particular approach to friendliness
- coherent extrapolated volition
- singleton
- programmable optimisation process (AGI may be programmable, evolution not)
- meme (actually I’m interested in a generalised notion – any kind of information that undergoes copying, mutation and selection. Genes would be included here).
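One way to make the “optimisation power” and “intelligence” entries above concrete, following Yudkowsky’s ‘Measuring Optimization Power’ post: measure optimisation power in bits, as how far into the process’s preference ordering over possible states the achieved outcome falls. The notation below ($S$, $s^{*}$, $R$) is mine, not from the meeting:

```latex
% Optimisation power in bits: the achieved outcome s* lands in the
% top 2^{-OP} fraction of the state space S, ranked by the process's
% preference ordering \succeq.
\[
  \mathrm{OP}(s^{*}) \;=\; -\log_2
    \frac{\bigl|\{\, s \in S : s \succeq s^{*} \,\}\bigr|}{|S|}
\]
% "Intelligence as optimisation power divided by resource usage":
% a crude efficiency ratio, with R the resources consumed.
\[
  \mathrm{Intelligence} \;\sim\; \frac{\mathrm{OP}(s^{*})}{R}
\]
```

On this measure, a process that reliably lands in the top 1/1024 of outcomes exerts 10 bits of optimisation power; dividing by resources is what distinguishes an efficient optimiser from a brute-force one.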