Singularity Summit 2011

The Singularity Summit ran on 15-16 October 2011 in New York City. It was organised by our good friends at the Singularity Institute, and featured some big-name speakers with wide-ranging views on a variety of issues. I was in the audience, and three weeks later I've finally got around to writing up my account.

I had a minor quibble before we even got started – the Singularity was defined as the creation of “smarter than human intelligence”. I’m not quite sure how the Singularity should be defined, but I feel it ought to be something which you would notice (if it didn’t kill you). Talking about the creation of smarter than human intelligence (rather than what would happen next) leads people to speculate that the Singularity has already happened (e.g. is the Internet intelligent?), which I feel just confuses things.

Anyway.


Ray Kurzweil was first up, talking about how the Singularity is totally right around the corner. He's a techno-optimist (one of the movement's idols, I think), which ought to lead me to be irritated by him, but it doesn't. He knows a lot of stuff and he's interesting. In fact, I should read his book, The Singularity is Near.

Kurzweil responded to a recent piece by Microsoft co-founder Paul Allen entitled "The Singularity Isn't Near" (http://www.technologyreview.com/blog/guest/27206/). Kurzweil claimed that despite the obvious nod to his book, Allen hadn't actually read it, and that in the book he'd already responded to most of what Allen was saying. I'm withholding judgement on that one – I must admit that I didn't find Allen convincing, but I have my own views and a short article like that is unlikely to sway me even if it's right. I really need to find out if any Singularity sceptics have laid out any more detailed and up-to-date arguments.

A big theme of Kurzweil’s talk seemed to be the robustness and multiply-redundant nature of technological progress. He mentioned a concept I hadn’t heard of before – engineers’ pessimism, whereby the people most closely involved with a particular technology tend to be over-pessimistic about its future progress. The point about technology, Kurzweil says, is that if one approach doesn’t work something else probably will, and there are so many possible approaches (whether it’s to AI, raw processing power or some other challenge). Related to this is that Moore’s law appears to be unaffected by economic downturns.

He talked about AGI as if getting it working wouldn't actually be that big of a deal:

  • The complexity of the brain at the lowest level can't be that much – the human genome is only about 50MB when compressed, and a lot of that is obviously not relevant to how the brain processes information (see the back-of-envelope after this list)
  • The hierarchical structure of the neocortex – we may think there’s something special about us that we can think in abstract, higher-level concepts but it’s starting to look like that’s just additional layers of the same kind of pattern-matching circuit.
  • Biologically-inspired paradigms have already seeped into the design of Watson (more on Watson later).
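
Here's a quick back-of-envelope on where a figure like that comes from – my own arithmetic, not Kurzweil's slides, and the compression factor at the end is just a plausibility argument:

```python
# Rough arithmetic behind "the genome is only ~50MB compressed" (my numbers):
bases = 3.2e9                         # approximate base pairs in the human genome
raw_megabytes = bases * 2 / 8 / 1e6   # 2 bits per base -> bytes -> megabytes
print(raw_megabytes)                  # ~800 MB uncompressed

# The genome is full of repeats, so an order of magnitude of lossless
# compression is plausible, which is roughly how you get down to tens of MB.
```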

Other technologies and trends that he mentioned as important are 3D printing – email someone a violin – and medicine/biology/health becoming information technology.

“Humanity is about going beyond our biology – a transbiological future”.


The next two talks didn’t seem particularly Singularity-ish to me. They were about medical stuff. I imagine they were there to cater to the portion of the audience who want to live forever – it’s easy to see why such people would be drawn to a Singularity conference:

  • Singularity-ish technologies (e.g. whole brain emulation) are obviously of great interest to these people
  • These people have a personal stake in the future, so they need to start learning about the Singularity – if it's indeed going to happen then they're going to have to sit through it.

Stephen Badylak was talking about regenerative medicine. This is about fixing wounds and disease by replacing or regenerating tissues or organs.

Stem cell medicine has been disappointing so far, and right now the big thing is extracellular matrix. I hadn't even heard of that before the talk, so excuse my ignorance. One theme (although he didn't exactly put it like this) seemed to be that you could do it the hacky way and sort of get away with it. When treating humans, they use matrix from pigs' bladders and it still works – it doesn't contain any cells so the body won't reject it, and since it's so fundamental to how the body grows, it's stayed stable across the evolutionary distance between pigs and humans. Interestingly, the cells wouldn't turn into a bladder either – they'd turn into whatever they were supposed to. (Although that doesn't work for whole organs – you need liver matrix if you want to make a liver).

Another theme was that this was a paradigm change and the medical world wasn't quite ready for it yet. Stem cell therapy apparently results in a big red swelling as the stem cells get to work, and so the medical profession assumes it's an infection even if it's actually working just fine. Then there are surgeons who don't quite know what they're doing with the new technology, thus making it look less promising. And then there's regulation (with this crowd, that always gets a groan).

Anyway, this stuff actually seems to work – it's been used to treat oesophageal cancer and to repair somebody's leg muscle. So this is definitely one to watch.


Sonia Arrison was talking about the consequences of life/health extension to 150 years or so (the consequences rather than how we expect to get there).

The talk accompanied her book – 100 Plus – free copies of which were being handed out, compliments of Peter Thiel. H+ magazine likes the book (http://hplusmagazine.com/2011/10/26/sonia-arrisons-100-plus-book-review/) but I have to say I found it somewhat shallow – the problem with living for a very long time is that it's necessarily going to take place in the future, and so predicting what it'll be like becomes a futurology problem. Arrison doesn't seem to be on board with the whole Singularity thing, and the problem with that kind of futurology is that extrapolating one trend doesn't make sense unless you're going to extrapolate the others too. But I think I'm being unnecessarily harsh. People need to start thinking about these things and researching them, even if it happens one step at a time.

She gave a positive spin to most things. The emphasis was not just on longer lives but healthier lives – from which we could expect more wealth, as well as more interesting (multiple) careers, more investment in education, etc. The question of resource usage was waved away – innovation means we'll be able to do more with less. She also addressed the question of whether life extension technology will only be available to the wealthy – generally there has been an accelerating trend in the distribution of technology (46 years for a quarter of the US population to get electricity, 7 years to get Internet access).

The most controversial question, of course, is: should we be doing this? Arrison (and most of the other people in the room) would of course say yes. She phrased the question as "is it natural?" and answered that it's natural to want to improve ourselves. I'm not sure there's any logical answer to this one though – it just seems that different people with different values will come up with different answers. (As for me, I'd just find it really hard to tell someone that they're too old and ought to die now. If I'm worried at all about life extension it's because of any consequences there might be, not the thing itself).


Peter Thiel seems to be a smart guy. He is worried that the pace of innovation is slowing down – not necessarily forever, just at the moment. People just aren't thinking about the future any more, like they were in the '50s and '60s. ("It's strange that the Singularity is considered strange").

An important point (which I guess I was kind of aware of before, but which hadn't quite sunk in somehow) was the difference between globalisation – good ideas spreading around – and technological advance. (He suggested that the future of China would be much like the USA but with a few high-speed trains).

Thiel mentioned that the iPhone may be sophisticated technology, but it’s seen as magic by most people – it exists as part of a fashion culture, not a technologically aware one. He was also disappointed that Steve Jobs’s recent death was seen as such a blow to the industry – like, was he the last innovator left?

Now, I’m a techno-volatile so for me technological advance for its own sake isn’t always a good thing. But Thiel’s central message seems right: if there is a problem here, it’s up to us as individuals to do something about it. What can you do about the Singularity?


I found Michael Shermer’s talk a bit disappointing. His main point was that in the future we’ll need to move beyond tribalism and find new social structures that actually work, and this is something I can’t really disagree with. He was optimistic about it, pointing to the moral principles that naturally pop out of evolution – reciprocal altruism and so on – together with abstract scientific thought leading us to expand the circle of people each of us cares about (the monkeysphere – http://www.cracked.com/article_14990_what-monkeysphere_p1.html). He dubs the new society “Civilization 2.0” or the “Social Singularity”.

Generally not much new here, and abuse of the word “singularity” makes baby Eliezer cry (http://yudkowsky.net/singularity/schools).


James McLurkin has robots!!! Beep beep boop boop they go, interfering with the Wifi and implementing a real-world bubble sort. Their design was inspired by ants. That’s probably all you need to know at this point (except that their absurdly simple design makes them great for education, and that swarming robots which are the spiritual successors of these may well play a big role in the future).


Stephen Wolfram is interesting because (to me) he seems to be 95% of the way towards getting it and then suddenly veers off track at the last moment. He certainly understands cellular automata – you take various simple rules, some of them settle into complete order or pure chaos, but a sizeable few will yield interesting behaviour. (Anyone who hasn't yet played with Conway's Game of Life really should – it's one of the most awesome things to come out of mathematics, in terms of the simplicity of its definition versus how many surprises it generates).
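
If you've never played with it, the whole rule set fits in a few lines of code – here's a minimal sketch (my own toy implementation, nothing to do with Wolfram's software):

```python
import numpy as np

def life_step(grid):
    """One step of Conway's Game of Life on a wrap-around (toroidal) grid."""
    neighbours = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))
    # A live cell survives with 2 or 3 neighbours; a dead cell is born with exactly 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(np.uint8)

# A glider: five cells that crawl across the grid forever.
grid = np.zeros((10, 10), dtype=np.uint8)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[y, x] = 1

for _ in range(4):        # after 4 steps the glider has moved one cell diagonally
    grid = life_step(grid)
print(grid.sum())         # still 5 live cells
```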

A kind of obvious point he raised in the talk is the issue of computational irreducibility – sometimes, to obtain the result of a computation you basically have to just run that computation. This is offered as an explanation for free will in a deterministic universe – it may be possible to determine what action a conscious mind will take, but in order to do that someone needs to simulate that mind, essentially creating another copy of the conscious being.

His approach of “mining the computational universe” sounds interesting. It seems to follow a hacking mindset, but this might be the right approach for some things. I’m not sure whether it’s actually possible to engineer emergent behaviour, or whether you have to just go looking for it.

One of Wolfram’s big ideas is the principle of computational equivalence. This is essentially a converse of the Church-Turing thesis

CT: Any deterministic system can be simulated on a universal Turing machine.

PCE: Practically any deterministic system whose behaviour isn't obviously simple can simulate a universal Turing machine.

It seems possible there's a mathematical conjecture behind those vague words, but Wolfram seems to use it more as a sort of philosophical guideline. (Even if it can't be formalised as a conjecture or a theorem, it may still be useful. In particular, as theorems start getting too difficult to actually prove, it might start becoming acceptable to use Bayesian inference to work out the "probability" that a particular mathematical statement is true. Applying that here, it should be possible to guess the probability that a universal Turing machine can be built within a particular cellular automaton, just by observing the behaviour of that CA when seeded with random dots).

I think he’s taking this too far though. He says in the talk that there’s a fundamental equivalence between e.g. fluid dynamics and the firing of neurons in the brain – that the weather really does have a mind of its own. I don’t think this can be extrapolated from the PCE though. The PCE may suggest that any (interesting) CA may be used to simulate any other. But it will only do so if you carefully engineer a particular pattern. The PCE only seems to suggest an equivalence between rules, not between patterns.

He talked about Wolfram Alpha (wolframalpha.com) of course. I haven’t actually used this yet so won’t pass judgement, though Wolfram seems sure it’s going somewhere. (He says it captures some aspects of artificial intelligence, but is nothing like human intelligence. It’s an aeroplane, not a bird).

His final point was an interesting one about humanity’s purpose. Right now, a lot of our sense of purpose comes from working around our limitations – not everything is possible. But assuming some form of humanity survives the singularity, we may find ourselves in a world where effectively anything is possible. What then would be our purpose, as individuals or as a society? Wolfram suggested that future humans would look back to the pre-singularity age to try and find out what humanity’s values were. So being in an age where a lot of information is being recorded, we may be in a special position to influence the future. This seems to echo Friendly AI theory – instead of an AI we have future humans as an amoral super-powerful optimisation process which we, in the near future, need to program with the right values.


Jason Silva is trying to inspire people about science and technology and get people to use the power of imagination. You seem to just switch him on and he gushes – this seemed apparent when the audience tried to ask him questions. We saw a couple of his videos, which basically show him having a geekgasm with a background of whooshy science stuff. But despite these reservations, I found him quite refreshing to listen to.

Is part of the reason why people find it difficult to take AI risks seriously because they lack imagination?

“As long as mortality exists, everything is escapism”


Dmitry Itskov was talking about a transhumanist movement that he founded called Russia 2045 (2045.com). The plan is:

1. Avatars
2. Replacement bodies
3. Brain emulation/mind-uploading

And 2045 gives some idea of the estimated timescale. So yeah, people really want this to happen. Itskov reckons the mid 21st century is the moment of choice – it’s around this time that he expects machines will surpass human intelligence, and that we’ll need a safe AI. I agree with the sentiment. I’m not sure I believe the timescale (I honestly don’t know what to make of the timescale estimates that come out of this conference, but the people behind them are smart enough that they’re starting to scare me a little).

I’m also not convinced that whole brain emulation is really going to help here (I intend to blog about this). But a lot of really smart people seem to consider brain emulation a win when it comes to protecting humanity’s future. So either a lot of really smart people are right and I’m wrong, or a lot of really smart people are going to collaborate on a project that will end in a ghastly mess. Both seem plausible.


Christof Koch was talking about the neurobiology and mathematics of consciousness. Although Koch didn’t mention it, this is an important topic for anyone interested in Friendly AI, because friendliness would require determining which entities count as “people” deserving of rights.

He mentioned that we have a little nervous system in our gut that operates mostly independently of the brain – that’s really interesting. I didn’t know that. He said that it was “not conscious – or conscious but not telling us”. This seems an important distinction, though I feel he sort of ignored this issue later (for example he claimed without qualification that the cerebellum was not conscious. I’m pretty sure he’s right, but at this point I’m not sure how we would really know).

He talked about neural correlates of consciousness. These are interesting – for example, the primary visual cortex turns out not to be part of the neural correlates of consciousness, which is perhaps not all that surprising.

He then went on to give what he reckoned to be a mathematical test for consciousness, based around the synergy of the network – the extent to which it’s more than the sum of its parts. I wasn’t following the maths, but it sounded like a measure of how much things break if you were to split the network in two.
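
To make the "how much breaks if you cut it in two" intuition concrete, here's a toy version – emphatically not Koch's actual measure, just a crude cousin of it that uses mutual information between the two halves of a small system (all numbers made up):

```python
import numpy as np
from itertools import product

def mutual_information(joint):
    """Mutual information (bits) between two halves of a system, given a joint
    probability table joint[a, b] over the halves' states."""
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    mi = 0.0
    for a, b in product(range(joint.shape[0]), range(joint.shape[1])):
        p = joint[a, b]
        if p > 0:
            mi += p * np.log2(p / (pa[a] * pb[b]))
    return mi

# Two toy systems, each made of two binary halves:
independent = np.array([[0.25, 0.25],
                        [0.25, 0.25]])   # the halves share no information
coupled     = np.array([[0.5, 0.0],
                        [0.0, 0.5]])     # the halves are perfectly correlated

print(mutual_information(independent))   # 0.0 bits -- cutting it loses nothing
print(mutual_information(coupled))       # 1.0 bits -- cutting it destroys structure
```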

I’m a little sceptical of this kind of approach to the “what is consciousness” problem. It sounds a little like a network with enough complexity of a certain sort will magically create epiphenomenal consciousness. This likely isn’t how the world works – and to be fair, I’m not sure that Koch was actually claiming that it was. Even if consciousness turns out not to be a physically meaningful concept, we might still need a definition for it, so we can sort out post-singularity ethics. Koch’s line of thinking might be a step in the right direction here.


Unfortunately I missed the first part of Eliezer Yudkowsky's talk. He was talking about how Gödel's incompleteness theorem causes problems for Friendly AI. Essentially, for an AI to prove itself friendly (according to some reasonable criterion) is analogous to a proof system proving its own consistency. This can't be done, according to Gödel's second incompleteness theorem, and Yudkowsky said that he had so far been unable to come up with a way to bypass this restriction in a friendly AI design. He also talked about decision theory, the prisoner's dilemma and the smoker's lesion.
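
For reference, the theorem being invoked looks like this (the analogy underneath is my paraphrase of the talk, not Yudkowsky's exact formulation):

```latex
% Gödel's second incompleteness theorem: for any consistent, recursively
% axiomatised theory T containing enough arithmetic,
\[
  T \nvdash \mathrm{Con}(T)
\]
% Analogy: an AI whose safety argument amounts to "every action my own proof
% system approves is safe" is, in effect, asking its theory T to prove Con(T).
```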

Normally Yudkowsky’s output is way more fun than this – it seemed a little dry and technical and I don’t know how much most people would have got out of it.


Max Tegmark was thinking big – intelligent life from the perspective of the universe. He thinks in terms of numbers and probabilities, which I like.

He had some interesting thoughts on the existence of alien life. He defined the Universe as everything that we can see (which might be a very small subset of everything which actually exists). With that in mind, he gave a logarithmic prior for the probability of intelligent life appearing on a particular planet. He asserted that there was no other intelligent life in our galaxy (else we'd probably have noticed it), and that it was relatively unlikely that the typical spacing between civilisations would happen to fall in the 10^21-10^26 metre range (galaxy scale to observable-universe scale). I'll have to think about this harder to see whether the argument makes sense but it sounds sort of plausible.
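
My attempt to make the argument concrete (my own toy numbers, not Tegmark's): if the prior on the spacing between civilisations is roughly log-uniform over many orders of magnitude, the awkward galaxy-to-universe window only gets a modest slice of it:

```python
# Toy version of the "why should the spacing land just there?" argument.
low_exp, high_exp = 10, 40   # assume a log-uniform prior on spacing from 10^10 m
window = (21, 26)            # to 10^40 m; the galaxy-to-universe window in metres

prior_mass = (window[1] - window[0]) / (high_exp - low_exp)
print(prior_mass)            # ~0.17 -- a minority outcome, and it shrinks further
                             # the wider you make the prior range
```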

His other interesting point was that if intelligent aliens are indeed missing, then it’s either because they failed to reach our level of development (e.g. life didn’t appear at all, or never advanced beyond single cells), or they reached our level of development and then extinguished themselves before they started interstellar colonisation. (This couldn’t have happened via intelligence explosion or vacuum bubble, as we observe the absence of such an event in our past light cone. Tegmark didn’t bring this particular point up).

Clearly, if lots of aliens reached our level of development and then all died out, it doesn’t look too hopeful for ourselves. So Tegmark hopes that they failed at one of the earlier stages.

Following the existential risk theme, Tegmark observed that estimates of an x-risk event range from about 10^-1 to 10^-4 per decade, and that the world dedicates about 10^-6 of its GDP to x-risk reduction. This is actually really really bad.
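
To see just how bad, here's a crude expected-value comparison with my own numbers (world GDP of roughly $70 trillion, the mid-range 10^-2 per decade risk estimate, and extinction valued – absurdly conservatively – at a single year of world GDP):

```python
world_gdp = 70e12            # rough 2011 world GDP in dollars (my figure)
risk_per_year = 1e-2 / 10    # 10^-2 per decade, taken from the middle of the range
spend_fraction = 1e-6        # fraction of GDP spent on x-risk reduction

expected_loss = risk_per_year * world_gdp   # ~$70bn/year, valuing only one year of output
actual_spend = spend_fraction * world_gdp   # ~$70m/year
print(expected_loss / actual_spend)         # ~1000x mismatch, before even counting the future
```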


Alexander Wissner-Gross was making analogies between AI and quantitative finance, and was pushing the idea that the first superhuman AI might emerge out of the financial system. He actually sounded like he knew what he was talking about, and he had a strong focus on the safety aspects which I approve of.

Interesting aspects of the financial system (considered as AI):

  • Finance is driving network fibre. The NY-London round trip time is 60ms, against a physical limit of about 52.2ms (a quick sanity check of that limit follows this list)
  • He suggested that computational nodes would be placed at strategic points between markets (e.g. in the middle of the ocean) in order to effectively reduce the round trip time further. I’m not sure what the relevance of this was, but it’s awesome.
  • It’s coupled to humans via trade. (although I’m yet to be convinced on this one from the point of view of AI safety)
  • It spans the globe – literally a brain the size of a planet.
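
That 52.2ms figure checks out roughly if you take it to mean light travelling through fibre along the great circle – my own back-of-envelope, not from the talk:

```python
# Round-trip time for light between New York and London, under a few assumptions.
great_circle_km = 5570            # rough NY-London great-circle distance
c_km_per_s = 299_792              # speed of light in vacuum

for n in (1.0, 1.4, 1.47):        # vacuum, and two plausible fibre refractive indices
    round_trip_ms = 2 * great_circle_km * n / c_km_per_s * 1000
    print(f"n = {n}: {round_trip_ms:.1f} ms")
# n = 1.0 gives ~37 ms and n = 1.4 gives ~52 ms, so the quoted limit looks like a
# straight-line-in-fibre figure -- and 60 ms is already pretty close to it.
```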

He made analogies between strategies of financial regulation and strategies for keeping an AI under control. We might already be seeing the first wave of AI regulation! I'm not convinced that financial regulators are sufficiently competent, however. It seems more likely to illustrate the things that don't work (companies can work their way around regulations, rendering them irrelevant, just as a utility-maximising AI would be expected to try to work around any restrictions that are put on it).

I don’t think that a friendly/regulated quantitative-financial-AI will save us. I don’t think that Wissner-Gross thought that either – he suggested that the strategy should be more about delaying the onset of unfriendly AI until we figure out what we’re doing. And in any case, I think this line of thinking will lead to some interesting ideas.


Sharon Bertsch McGrayne was talking about Bayes' theorem and how awesome it is, how the Bayesian approach was neglected and derided for so long, etc. I already knew all that – it's a minor theme in the Less Wrong sequences – though I still feel I need to learn how Bayesian statisticians actually do their work. One interesting point from the talk about why it took so long to take hold was that it was successfully used in World War II, but the work was classified.
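
As a reminder of what the theorem actually buys you, here's the standard diagnostic-test example (my own toy numbers, nothing from the talk):

```python
# Bayes' theorem on a test with 99% sensitivity, a 5% false positive rate,
# and a 1% base rate for the condition.
prior = 0.01
sensitivity = 0.99           # P(positive | condition)
false_positive = 0.05        # P(positive | no condition)

evidence = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / evidence
print(round(posterior, 3))   # ~0.167 -- a positive result is far from a certainty
```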


David Brin’s talk was entitled “So you want to make gods. Now why would that bother anybody?”. Brin is not anti this stuff – rather, the talk was about how to communicate it with outsiders. His main suggestion was mining the Bible – “you argue scripture at them – they can’t argue science back”. Out of the things he pointed out, the one I found most interesting was that God didn’t actually punish the ambitious – Adam and the builders of the Tower of Babel – with flame or anything, he just made their lives more difficult. (Brin: “Ambition isn’t punished as sin, it’s rewarded with challenge!”)


Tyler Cowen was talking about the apparent slowdown in progress since the 1970s – in fact a similar theme to Thiel's talk, though presented differently, so both are well worth a listen. (Again, we're talking just about the developed nations, not those playing catch-up, which I think is fair enough). Some suggested reasons:

  • "A sofa can only get so comfortable" – plateau
  • Working with stuff is a lot easier than working with people (and at current margins, we increasingly have to work with people)
  • Science increasingly (over-)specialised – there’s a limit to what a scientific mind can grasp
  • Rent-seeking – innovations in ripping people off. Healthcare/education/finance.
  • Regulation
  • Culture in education is too egalitarian – top 1% not trained to become next innovators?
  • Most of world’s gains being captured by elderly?
  • Interesting question: what’s the true forcing variable? Why aren’t pro-science forces stronger?

He suggested that advances in medicine and AI might be a way out.

Of course, for those of us worried about a bad technological singularity, any slowdown might be regarded as a good thing. I’m not sure it’s that simple though. In any case, it does look like there’s something real going on here, and it would be worthwhile finding out why because I don’t think anyone really knows just yet.

Michael Vassar and Tyler Cowen then had a chat about this topic (it was listed on the programme as a “debate”, but it turns out Vassar agrees with the main point that there’s been a recent underperformance in tech, so yeah it was more of a chat). My notes are sketchy here so I guess not much stuff turned up, except that Vassar was armed with stats:

  • The average American will never read a book after school
  • There are as many people in prison as have PhDs
  • Unemployment in the US would be 18-19% if measured correctly

John Mauldin was talking about the economic crisis. He seems like someone worth following, in that he seems to be smart and hold a position that I disagree with while not seeming completely crazy. People like that are worth identifying in case they’re right and I’m wrong.

Anyway, he was talking about "the end of the world as we know it", referring to a debt default – possibly an unfortunate choice of phrase given the forum. He referred to Reinhart and Rogoff's "This Time is Different", which analyses financial crises throughout history. I'd heard of that before and it's going on my reading list. Anyway, if you look at the historical data then we're screwed, or something. It would be interesting to figure out whether this is true – quite a few smart friendly people have been telling me that public debt isn't such a problem and I've been believing them, but hey, they're wrong about the Singularity so they could be wrong about this too. It's nice not to feel so emotionally invested in mainstream political issues any more.

This wasn’t Mauldin’s main point though. The main point was that none of this was going to matter, because we’re due some awesomeness over the next 20 years: wireless connectivity, demographic shift, the “sovereign individual”, biotech, the “close to home revolution” including nanotechnology, robotics/AI, some kind of energy revolution.

I’m not sure how many of these would actually solve more problems than they create, but he’s at least right that these are the kinds of things economists should be talking about.

There was the usual question from the audience of "are robots going to take our jobs?". The answer was along the lines of: the markets will solve everything, and if you can't find work then you're just lazy (I'm paraphrasing). I'm sceptical of this kind of argument even in today's economy – when we're talking about humans competing against superintelligent robots it just sounds like he didn't understand the question. A possibly wrong but interesting answer is that we could emulate and upgrade ourselves in order to keep up with the AIs (while maintaining whatever it is we feel gives us our identity) – a technorapture scenario. Another possibly wrong but interesting answer is Robin Hanson's thing about holding capital and living off the interest. We didn't get either of those though.


I’d forgotten Riley Crane’s talk. He helped organise the winning response to the DARPA red balloon challenge – using social networks to locate balloons that DARPA had placed around the USA. So he was talking about how to use social networks to get people to do things. A key thing seemed to be fighting for people’s attention.

Anyway, Crane seemed to feel that a new communication paradigm was on its way, but while the talk was interesting I guess he didn’t make this point sink in for me.


Dileep George and Scott Brown are trying to build an AI. Together they are Vicarious Systems, and they sound like they might know what they’re doing. Their approach lies somewhere between brain emulation and just-a-bunch-of-algorithms. They point out that the flaw with brain emulation is that you’re emulating a whole bunch of stuff you don’t need. Instead they are looking at the structure of the physical world, and how this is mirrored in the structure of the brain (both are hierarchical). They hope to learn the simplifying assumptions about the world that evolution has baked into the structure of our brains, and use that to guide their programming.

Another important difference from brain emulations, of course, is that these AIs will not model particular people’s brains. This means that they need to be trained – the information about how the neurons should be wired up (or whatever the equivalent is in their system) won’t be available. But they reckon that subsystems only need to be trained once and can then be reused, which is interesting.

They’ll be focusing on vision first. They want to do that before adding abstract thinking to their AI, as they reckon that in order to be useful, abstract thinking needs to be grounded in stimuli from the real world.

I’m not in a position to judge whether George and Brown really know what they’re doing. They can certainly give good presentations. But the standard thorny little issues apply here. The first is AI rights – if a system has human abilities but is not given human rights, it’s effectively a slave. The second issue is the economic impact of introducing human-level (or above human-level) AI into the economy. This could lead to us all becoming welfare queens or landed gentry, or it could lead to unmitigated disaster.


Jaan Tallinn was completely on-message. It was awesome. He says that throughout history, people have had to balance their individual needs against those of society. But as new technologies lead to an increasingly volatile and unpredictable future, we now have to choose between:

  • the individual
  • society
  • future society

Tallinn describes the work of addressing future society’s needs as “level 3 challenges”, and the growing movement which is addressing them as the “CL3 generation”. That includes me. It’s a relief that there’s finally a word for what I am. The movement needs a better name, and needs to be more cohesive, and I’m curious as to whether Tallinn sees himself playing a big leadership role in this movement. In any case it’s exciting.

In the talk, Tallinn mentioned Stanislav Petrov, who possibly single-handedly averted a nuclear war by correctly identifying a missile early-warning alert as a false alarm. Tallinn's point was firstly that the future is going to be increasingly determined not by societies but by individuals with big red buttons. His second point was that Petrov's action remains almost unknown and underappreciated – society just doesn't understand these kinds of issues yet.

Tallinn described some of the reasons why society is having trouble coming to terms with level 3 challenges. Conventional actions aimed at addressing the needs of present society – charitable donations or volunteer work, etc. – usually carry positive social status. Tallinn warned that this could lead to a social status reinforcement cycle, where people seek out the psychological reward associated with doing things that society approves of. The problem is that this can lead to actions that are:

  • Short term (seeking frequent rewards)
  • Scope insensitive
  • Easy to understand (people can only approve if they understand what you’re doing)

Tallinn in particular criticised the Gates Foundation for this last one. Short-termness and scope insensitivity are obviously massive burdens when it comes to thinking about level 3 challenges – and in general these issues aren’t easy to understand either. So the social reward mechanisms that evolution gave us are inadequate for many future challenges, and we instead have to resort to logic and reason.

Tallinn doesn’t know what causes somebody to become CL3. I don’t know either, but it’s clearly something personal. It’s interesting that he pointed to the Less Wrong and Overcoming Bias blogs as something that changed his thinking.

What made Tallinn’s talk special for me was not that it contained anything completely new to me, but rather that it explained and clarified an idea which was very important to me but which I would have been unable to put into words in quite that way.


The final three talks were all about this year’s hottest AI story – IBM’s Watson. This (narrow) AI was able to beat the best human contestants in the quiz show Jeopardy!. The software is not only able to parse the questions with reasonable accuracy, but also to parse the large corpus of text (from Wikipedia and such) which it draws its answers from. We had two people from IBM speaking, as well as Ken Jennings, the man who got beaten.

David Ferrucci was the lead developer on the Watson project. He made the valid point that the general public don’t understand what’s easy or what’s difficult for computers – or that Watson isn’t Skynet. You need to know at least a little about computers and AI to know what makes Watson special, and to know what it is and what it isn’t. (I guess that follows one of the themes of this summit, which was disappointment at the level of technological literacy among the general public).

People might look at Google and assume that computers are already pretty good at answering questions. But Google doesn't really answer questions – it just provides you with relevant-seeming information from which you have to extract the answer yourself. Watson has to provide actual answers to questions, and generate a confidence estimate for each one – when the confidence reaches a certain threshold, it will buzz in with that answer. And it can not only do this particular task well enough to be useful, it can do it as well as a highly trained human.

So how does it work? Firstly, it has a lot of grunt: 2880 cores and 15TB of RAM. (Disk apparently wasn't fast enough). I'm interested in whether that could be used as part of an estimate of the amount of computing power required for a human-level AGI. A human brain can obviously do a lot more than answer Jeopardy! questions, but at the same time we might expect algorithms to become more efficient. (Sorry, that wasn't in the talk, just my own speculation).

Secondly, it doesn’t rely on a single algorithm but rather has a lot of algorithms working in parallel, and some way of combining the results at the end. This allowed development work to be distributed – Ferrucci made an interesting point that more progress on AI will be made if people are working together on an integrated system than if PHDs all pursue their ideas in isolation.


Dan Cerutti, also from IBM, is responsible for commercialising this technology. He made a sensible point – is it smarter than a human? He doesn’t care; he’s just thinking about the practical applications. I agree that this approach – thinking first in terms of its role in the economy – is a good one to take when considering the impact of AI technology on society.

Cerutti wanted to find the most non-evil application that Watson was suited for. This led him to healthcare – essentially a diagnosis machine. It seemed to fit Watson's capabilities quite well: it's a problem of high economic value (Watson is an expensive system), it involves making decisions with high frequency, and it can add value because there's a big mismatch between the relevant information that a human brain can hold and the information available in the world.

Obviously they need to make some changes to the Jeopardy-playing system (unless you like receiving your medical diagnosis in the form of a question). The main improvement that he talked about was submitting inquiries (which would include extra information such as a patient’s medical record) in addition to a simple question. It would also be more interactive, where the system would respond with further questions if it needed more information in order to make its diagnosis.

Cerutti is keen to spin this the right way. He says, don't think of it as HAL, think of it as the nice computer from Star Trek which answers everybody's questions. Don't worry, we'd never use this technology for something bad like the military, we'll use it for something good, like health. Also it's there to help existing medical professionals, not take away their jobs. It's not that I don't trust IBM's motives here, but once a technology is developed it can't be undeveloped – and it can be adapted for new purposes it wasn't originally intended for.


Ken Jennings is the Jeopardy! megachampion who lost (along with some other guy) to Watson. (Just in case any of you don't already know, this echoes chess champion Garry Kasparov's 1997 loss to another IBM machine, Deep Blue). IBM were proud of their achievement – their headline after the game was "humans win", a headline they were going to run whichever way the result went.

Jennings is great. He has a sense of humour and he used to be a computer programmer – he actually knows a bit about AI! That's kind of awesome (for example, he knew from what he'd been taught that Watson beating him shouldn't have been possible). He'd had the Achilles/Tortoise dialogues from Gödel, Escher, Bach read to him as bedtime stories when he was in kindergarten.

He says that Watson is the only Jeopardy contestant he’s been inside. The game was held in an IBM building. So the audience were all IBM engineers who were cheering for Watson – it was “an away game for humanity”.

Jennings said that Watson had a few minor advantages. The audio and video questions were left out, as the software was not designed to process them. Watson had an advantage on the longer questions simply because it could read them faster. Watson performed a lot better in some categories than others (just like a human contestant would, though not necessarily for the same reason – with Watson it was about how steep the learning curve was for that subject, whereas for a human it would be more a question of that person's areas of expertise). As such, Watson lost both of the practice games but got lucky with the categories in the live game (he insists it wasn't rigged that way, though). So I think it's fair to say that at this exact point in time, Watson is the equal of the best human players rather than completely dominating them.

Jennings says there is a PR challenge for AI. Any technology will have both friendly and scary applications – and people tend to be scared of technology when they don’t understand it. (I don’t know. I feel I understand this reasonably well and I’m still scared). Jennings says that the job of “quiz show contestant” has now been made obsolete by AI, and that the same will happen to more knowledge sector jobs. According to him, this could lead to job polarisation – AI taking out the middle class. He is also worried about outsourcing parts of his brain – GPS causing the hippocampus to atrophy, etc. He doesn’t believe there will be a singularity event – a big paradigm-changing thing.

A final point. Jennings gave his part of the winnings to a charity – VillageReach. It's kind of awesome that he chose this because it was nominated by GiveWell as one of the most effective charities out there. And most people don't choose their charities that way. Also interesting was his claim that on previous editions of the show, the TV network didn't want him to say that he would give his winnings to charity, and wanted him to say instead that he was going to spend them on a new kitchen or whatever. What's going on there? I'm aware that our culture (or human nature) might introduce anti-rational-charity norms, but anti-charity altogether? That's an odd one. I'll mark it as an outlier for now.


List of videos here.

