Toronto LW Singularity Discussion, 2011-12-06

First off, apologies to everyone for the super-late circulation of these minutes. Christmas and my day job have been getting in the way somewhat. As always, if I seem to have misrepresented what you said or what you meant then let me know. A technical note: rather than listing them all at the end, I’ve put Wikipedia links inline with (WP) after them, to let you know the link isn’t going anywhere exciting.

Present: SD (host), GE (minutes), SM

Apologies: SS

The question for this meeting was: how do we go about deciding whether the Singularity is something we should be worried about? What’s the next thing we need to do?

SD says: we need to read lots of stuff. We all generally agreed on this point. So yeah, that answered “what’s the next thing we need to do” pretty quickly. But we still had plenty to talk about.

In order to answer the question “what should we read about?”, SD wanted a definition of the “Singularity”. He suggested something analogous to the next industrial revolution or the invention of agriculture (I think this is roughly how Robin Hanson sees it).

(Incidentally, we came up with another interesting definition on another occasion: the Singularity as the point at which you could remove all humans from the economy and it would carry on going).

SD sees AI as where the action is, and so a good thing to research. SD wants us to understand what intelligence really is; GE says that no-one really understands that yet, so the goal should be qualified as “get at least as up to date as the most advanced research”.

GE asks what if something unexpected turns up? (I assume I was referring to something like a new approach to AI which could make it seem drastically simpler).

SD (back on the subject of what to read) brings up cognitive science (WP) (our only current example of human-level intelligence: the human brain).

SM says he has read some of Kurzweil’s The Singularity Is Near, and some Yudkowsky (I didn’t minute exactly what, but I assume at least Artificial Intelligence as a Positive and Negative Factor in Global Risk).

GE believes that Yudkowsky’s ideas have little traction amongst AI researchers; if that’s true, it’s an interesting piece of information to consider.

SM asks: who exactly are the AI researchers, and what are they doing? E.g. at IBM.

SD points out that AI is usually seen as a specific tool for a specific purpose. No grant for building utopia.

GE wonders if that describes the Singularity Institute.

GE asks if there is a strong argument out there against Yudkowsky’s ideas. In particular outside of Less Wrong – do people form counterarguments or simply dismiss Yudkowsky’s ideas?

SD says there is anecdotal evidence for dismissal (I haven’t minuted this, but I assume he was talking about himself). The apparent craziness of an idea is an excuse to stop thinking about it. SM says this is true for him too; the default position for alien sightings, conspiracy theories etc. is scepticism. SD says that the Internet trained him to react this way. GE wonders if dismissal might be the rational response in these cases – if the work required to actually prove something is nonsense is high, then it’s cheaper to just dismiss it (at the cost of maybe missing something really important). SM points out this is especially true if it’s something you can’t do anything about.
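(A toy way to formalise GE’s point – my own gloss while writing up, not something anyone said at the meeting. Let $C$ be the cost of properly evaluating an idea, $p$ the probability that it’s both true and important, and $V$ the value of acting on it if it is. Then dismissal is the “rational” response whenever

$$p \cdot V < C$$

SM’s observation amounts to noting that if you can’t act on the idea anyway, $V \approx 0$ and the inequality almost always holds.)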

SD says that jargon can be used to make a convincing-seeming case for something implausible. (This suggests we should aim for the clearest language that we can).

GE argues that in general, the best position for an outsider to go with is the scientific consensus (e.g. with global warming).

Still on the subject of the traction of the Singularity idea, SM points out it made the cover of Time magazine (also see the video). There’s also some traction in Silicon Valley. (I missed the chance to clarify this in the meeting, but I later realised that my lack-of-traction hypothesis was only about the worldview of Yudkowsky and the SI, which is just a subset of singularitarianism – i.e. the view that the Singularity is likely to be bad, and that it’s less likely to be bad if a deliberate effort is made.)

SD points out that a statement such as (no actual data here, just an example) “60% of physicists believe in the Many-Worlds Interpretation” refers to all kinds of physicists, not just those who are most likely to know about the issue. GE says that the Singularity hypothesis is talking about actual stuff that may or may not happen in the future, and so is somehow more “facty” than interpretations of quantum mechanics. GE says it’s worth finding out if there’s a consensus. Look at surveys (biased, but a start) and published material.
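(An aside from me while writing up, not from the meeting: even before worrying about which physicists get asked, a survey percentage like the one in SD’s example comes with a sampling margin of error that shrinks only with the square root of the number of respondents. A quick sketch, with invented numbers:)

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a survey proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# SD's made-up example: "60% of physicists believe in MWI".
print(margin_of_error(0.6, 100))   # ~0.096, i.e. really 60% +/- ~10 points
print(margin_of_error(0.6, 1000))  # ~0.030, i.e. 60% +/- ~3 points
```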

GE summarises what we hope to learn from reading:

  • Clue up on issues
  • Expertology – statistics on good/bad arguments for/against hypotheses

SD says to keep track of the Less Wrong discussion board, and Hacker News.

(GE is curious about something (somewhat out of the flow of the discussion) – debate visualisation. There are tools for this, but from what I understand of how they work, it’s a kind of arguments-as-soldiers approach showing which arguments counter which. But surely any rational debate is about trying to establish an underlying model of reality – is it possible to illustrate the model rather than the structure of the debate itself? Just a thought).
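(Continuing the aside, here’s a crude sketch – entirely mine, with invented data structures – of the difference I’m gesturing at: debate-mapping tools store attack/support edges between statements, whereas a model-of-reality representation would store the underlying quantities the debaters actually disagree about:)

```python
# Arguments-as-soldiers: a graph of which statements attack or support which.
debate_map = {
    "the Singularity is worth worrying about": {
        "supported_by": ["AI progress is rapid"],
        "attacked_by": ["human-level AI is centuries away"],
    },
}

# Model-of-reality: the shared variables the debate is implicitly about,
# with each participant's estimate attached to the same quantity.
world_model = {
    "years until human-level AI": {"optimist": (10, 40), "sceptic": (100, 500)},
}
```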

SM says one thing made him update in favour of the Singularity hypothesis – pop sci books on the brain: what we know about the brain, combined with a feeling that we can usually do better than natural selection. SD says we understand more about the brain’s hardware than its software (except for protein folding – prions).

GE says that focusing on the very latest science may be bad – that will just show what’s hot right now, rather than what’s well established. SD says reading scientific papers critically is a skill. SM points to The MIT Encyclopedia of the Cognitive Sciences as a good (and hefty) reference for well-established science. GE wonders what kinds of material we should read – academic textbooks seem to go into too much detail; pop sci is biased. SM said something about Luke Muehlhauser (I forget what, but possibly that he’s a good source to read).

SD says that the “null hypothesis” is that the Singularity won’t happen, and assuming no collapse this means incremental change instead of revolution. SM points to Tyler Cowen’s The Great Stagnation, but isn’t sure how much it affects the Singularity scenario – the scenario only requires rapid gains in one sector of the economy, not the economy as a whole. SM points to IBM’s Watson and the healthcare sector. SD is sceptical of this but thinks that Watson might eventually be huge.

(I have rather cryptically written here “SM: Progress – needs to be more specialised. GE: That’s our problem”. Not sure exactly what we were talking about there).

We take a brief medical detour. SM says that it’s hard for actuaries to predict mortality 20 years in the future. GE says that something like organ replacement might be a game changer. SM says anti-aging too. SD claims that the last few years of life often suck, and SM points out there are health-adjusted measures of lifespan such as the quality-adjusted life year (WP). SM also claims that preventative medicine (as it is now) doesn’t save money.
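(For anyone unfamiliar with the measure SM mentioned: a QALY weights each year of life by a health-quality score between 0 (dead) and 1 (full health). A minimal illustration with invented numbers:)

```python
def qalys(spans):
    """Sum quality-adjusted life years over (years, quality_weight) spans."""
    return sum(years * weight for years, weight in spans)

# 20 years in full health followed by 5 rough final years at quality 0.4
# (SD's point about the last few years of life):
print(qalys([(20, 1.0), (5, 0.4)]))  # 22.0 QALYs from 25 calendar years
```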

(We take another somewhat abstract detour on the subject of different species and the structure of societies. I confess to wondering whether chimps, given sufficient time, would develop advanced technology – given their much more limited capacity for planning and abstract thought, any technology would have to “evolve” rather than being consciously designed as such. I wonder whether volatile social structures might be the limiting factor here, rather than just lower intelligence. We also talk about eusociality and parallels between e.g. ant and human society – is this just superficial? The ants in a colony are all closely related, but in humans this isn’t the case – our genetic interests aren’t aligned. But is human behaviour governed more by memes?)

GE attempts to classify the different possible paths for the future global economy.

  • Total collapse. This covers all the eventualities where human society gets wiped out (or most of it does).
  • Partial collapse. The economy collapses (possibly associated with a large loss of life) but society continues on.
  • Technology increases up to a point and then levels off. This seems unlikely, although I think if human-level AGI proved to be beyond the scope of technology, I’d put that in this category.
  • Continued incremental change. This is the sort of economic business-as-usual, SD’s null hypothesis.
  • Knee. Any point where economic or technological growth suddenly shoots upwards may be regarded as some sort of Singularity – the graph doesn’t tell you much about what’s actually happening on the ground, though it doesn’t seem too much of a stretch to suppose it’ll be something unpredictable and possibly weird.

[Figure: the five trajectories sketched – Collapse (up and then down), Partial collapse (up and then somewhat down), Level-off (up and then across), Incremental change (gradual upward curve), Knee (upward curve with a sharp kink)]

On the “incremental change” hypothesis, GE claims that exponential growth can’t carry on forever. (This statement is somewhat vacuous in practice – it’s come up before in LW discussions that people tend to confuse “there exists a theoretical limit” with “and that limit comes soon”, particularly w.r.t. Moore’s law. It’s generally the next century-ish that we’re concerned about here, not thousands of years in the future.)
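(To make the “a limit exists” vs “the limit comes soon” distinction concrete – my own back-of-envelope, with both numbers invented purely for illustration – suppose worldwide computation currently totals ~1e20 ops/s and some hard physical ceiling sits at 1e50 ops/s:)

```python
import math

current, ceiling = 1e20, 1e50              # illustrative guesses, not real estimates
doublings = math.log2(ceiling / current)   # ~100 doublings of headroom
years = doublings * 1.5                    # at a Moore's-law-ish 18-month doubling time
print(f"~{doublings:.0f} doublings, ~{years:.0f} years before the ceiling bites")
```

(Even thirty orders of magnitude of headroom gets eaten in about a century and a half of steady doubling – so whether the ceiling matters “soon” depends entirely on where it actually is.)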

SM quotes Kurzweil on a progression Chemistry -> Biology -> Brains -> Technology (-> AI?).

SD brings up the topic of mindspace. GE thinks that where the dominant entities end up in mindspace may depend on how they come about – if smarter-than-human minds come about by augmenting humans (i.e. posthumans) then they’re likely to undergo some value drift but still be somewhat recognisable (or not – that isn’t a promise). AI is likely to end up somewhere more unpredictable though. GE thinks that the question of which value system dominates – while very important to us – is somewhat orthogonal to the question of the rate of growth of economic activity or technology.

[Figure: the amorphous blob of mindspace, with humans and posthumans nearby and AI off in a completely different corner]

GE points to Muehlhauser’s Criticisms of intelligence explosion (on LW) and IEEE Spectrum’s Tech Luminaries Address Singularity as good pointers towards intelligent Singularity sceptics.

Someone (sorry, forgot to note who) pointed to Minds, Machines and Gödel.

Actions:

  • SD to find surveys on attitudes towards Singularity
  • Everyone to read lots of stuff and share on Google group