Meta: The idea is that I’ll create some kind of database of these, allowing viewpoints to be combined from multiple documents (and then sorted by document/author/topic). In the meantime, here are the bits of Chalmers I consider interesting. This mostly consists of paragraphs lifted from the original document; the headings are my addition, as is anything in brackets.

More meta: Actually, what I’ve got here doesn’t make a very good summary of the paper. I’ve left out the actual arguments and just given his conclusions.

===Header===
David Chalmers, The Singularity: A Philosophical Analysis
http://consc.net/papers/singularityjcs.pdf

Jargon:
AI+: greater-than-human-level AI.
AI++: far-greater-than-human-level AI (superintelligence).
Singularity: intelligence explosion.

===Content===
Singularity:
What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the ‘singularity’.

This intelligence explosion is sometimes combined with another idea, which we might call the ‘speed explosion’.

In principle there could be an intelligence explosion without a speed explosion and a speed explosion without an intelligence explosion. But the two ideas work particularly well together.

the arguments give some reason to think that both speed and intelligence might be pushed to the limits of what is physically possible.
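(Note: a toy sketch of my own, not from the paper, of how a speed explosion compounds once machines do the design work themselves. The figure of two subjective years per hardware doubling is an assumption for illustration only.)

```python
# Toy model of a speed explosion (illustrative assumptions only, not
# Chalmers' figures): suppose each doubling of hardware speed takes two
# years of *subjective* design time, and suppose the design work is done
# by the machines themselves, so the real (wall-clock) time per doubling
# is the subjective time divided by the current speed.

SUBJECTIVE_YEARS_PER_DOUBLING = 2.0  # assumed constant design effort

def speed_explosion(generations: int) -> None:
    speed = 1.0          # speed relative to today's machines
    elapsed_years = 0.0  # real time elapsed
    for g in range(1, generations + 1):
        real_years = SUBJECTIVE_YEARS_PER_DOUBLING / speed
        elapsed_years += real_years
        speed *= 2
        print(f"gen {g:2d}: speed x{speed:6.0f}, "
              f"doubling took {real_years:.3f} yr, total {elapsed_years:.3f} yr")

# Real time per doubling halves each generation (2 + 1 + 0.5 + ...),
# so under these assumptions the total converges to about four years
# no matter how many doublings there are.
speed_explosion(12)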

Academia:
One might think that the singularity would be of great interest to academic philosophers, cognitive scientists, and artificial intelligence researchers. In practice, this has not been the case.

(With some exceptions: Bostrom, Hanson, Hofstadter, Moravec)

Perhaps the highly speculative flavour of the singularity idea has been responsible for academic resistance to it.

Philosophy:
The basic argument for an intelligence explosion is philosophically interesting in itself, and forces us to think hard about the nature of intelligence and about the mental capacities of artificial machines. The potential consequences of an intelligence explosion force us to think hard about values and morality and about consciousness and personal identity.

To determine whether an intelligence explosion will be a good or a bad thing, we need to think about the relationship between intelligence and value. To determine whether we can play a significant role in a post-singularity world, we need to know whether human identity can survive the enhancing of our cognitive systems.

Timescales:
But we might stipulate that ‘before long’ means ‘within centuries’. This estimate is conservative compared to those of many advocates of the singularity, who suggest decades rather than centuries.

(For example, year of prediction -> predicted arrival: Good 1965 -> 2000, Vinge 1993 -> 2005-2030, Yudkowsky 1996 -> 2021, Kurzweil 2005 -> 2030)

SW vs HW:
My own view is that the history of artificial intelligence suggests that the biggest bottleneck on the path to AI is software, not hardware: we have to find the right algorithms, and no-one has come close to finding them yet.

Timescales:
Other estimates (e.g. Kurzweil’s) rely on estimates for when we will be able to artificially emulate an entire human brain. My sense is that most neuroscientists think these estimates are overoptimistic.

Speaking for myself, I would be surprised if there were human-level AI within the next three decades. Nevertheless, my credence that there will be human-level AI before 2100 is somewhere over one-half.

Obstacles:
Potential defeaters include disasters, disinclination, and active prevention.

(e.g. nuclear war, or deciding that the singularity is a bad thing and preventing it)

WBE (whole brain emulation):
(seems to support the argument that the brain is a machine and so can be emulated)

Although I am sympathetic with some forms of dualism about consciousness, I do not think that there is much evidence for the strong form of Cartesian dualism that this objection requires. The weight of evidence to date suggests that the brain is mechanical.

If evolution can produce something in this unintelligent manner, then in principle humans should be able to produce it much faster, by using our intelligence.

First AI:
(WBE, artificial evolution, “direct programming”, machine learning, perhaps others)

direct programming (writing the program for an AI from scratch, perhaps complete with a database of world knowledge),

I doubt that direct programming is likely to be the successful route, but I do not rule out any of the others.

Timescales:
It must be acknowledged that every path to AI has proved surprisingly difficult to date. The history of AI involves a long series of optimistic predictions by those who pioneer a method, followed by a period of disappointment and reassessment.

Many of the optimistic predictions were not obviously unreasonable at the time, so their failure should lead us to reassess our prior beliefs in significant ways.

Extendibility:
For example, the currently standard method of creating human-level intelligence is biological reproduction. But biological reproduction is not obviously extendible. If we have better sex, for example, it does not follow that our babies will be geniuses.

Another method that is not obviously extendible is brain emulation.

It may nevertheless be that brain emulation speeds up the path to AI+. For example, emulated brains running on faster hardware or in large clusters might create AI+ much faster than we could without them. We might also be able to modify emulated brains in significant ways to increase their intelligence. We might use brain simulations to greatly increase our understanding of the human brain and of cognitive processing in general, thereby leading to AI+.

Evolutionary process:
I think that if AI is possible at all (as the antecedent of this premise assumes), then it should be possible to produce AI through a learning or evolutionary process.

Extendibility:
There are also potential paths to greater-than-human intelligence that do not rely on first producing AI and then extending the method. One such path is brain enhancement.

IA (intelligence amplification):
Like other AI+ systems, enhanced brains suggest a potential intelligence explosion.

There are likely to be speed limitations on biological processing, and there may well be cognitive limitations imposed by brain architecture in addition. So beyond a certain point, we might expect non-brain-based systems to be faster and more intelligent than brain-based systems.

Intelligence explosion:
We might call this assumption a proportionality thesis: it holds that increases in intelligence (or increases of a certain sort) always lead to proportionate increases in the capacity to design intelligent systems. Perhaps the most promising way for an opponent to resist is to suggest that this thesis may fail.

(upper limits, diminishing returns, intelligence not correlating with design capacity)

Definition of intelligence:
We can rely instead on the general notion of a cognitive capacity: some specific capacity that can be compared between systems.

(i) G is a self-amplifying parameter (relative to us).
(ii) G loosely tracks cognitive capacity H (downstream from us).
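(Note: my own toy numerical sketch of the proportionality thesis and of a self-amplifying parameter G; the update rules and constants are invented for illustration, not taken from the paper. Strict proportionality gives runaway growth, diminishing returns gives a crawl, and an upper limit gives a plateau, which is roughly the space of outcomes the objections above point at.)

```python
import math
import random

# Toy model of a self-amplifying parameter G (illustrative assumptions,
# not Chalmers'): each generation's value of G determines how big an
# increase it can design into its successor.

def iterate(delta, generations=30, g0=1.0):
    """Return the trajectory of G under G_{n+1} = G_n + delta(G_n)."""
    gs = [g0]
    for _ in range(generations):
        gs.append(gs[-1] + delta(gs[-1]))
    return gs

# Proportionality thesis: each increment of G buys a proportionate design
# gain, so G grows geometrically (an intelligence explosion).
proportional = iterate(lambda g: 0.1 * g)

# Diminishing returns: extra G buys less and less design capacity,
# so growth slows to a crawl rather than exploding.
diminishing = iterate(lambda g: 0.1 * math.log(1.0 + g) / (1.0 + g))

# Upper limit: design gains vanish as G approaches a ceiling, so G levels off.
capped = iterate(lambda g: 0.1 * max(0.0, 5.0 - g))

# A correlated capacity H that "loosely tracks" G (here: G plus noise).
H = [g * random.uniform(0.8, 1.2) for g in proportional]

for name, traj in [("proportional", proportional),
                   ("diminishing", diminishing),
                   ("capped", capped),
                   ("tracked H", H)]:
    print(f"{name:12s}: start {traj[0]:.2f}, end {traj[-1]:.2f}")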

Obstacles:
There might fail to be interesting self-amplifying capacities. There might fail to be interesting correlated capacities. Or there might be defeaters, so that these capacities are not manifested. We might call these structural obstacles, correlation obstacles, and manifestation obstacles respectively.

I am inclined to think that manifestation obstacles are the most serious obstacle.

(doesn’t think we are at peak intelligence or unable to create more intelligent systems)

Even among humans, relatively small differences in design capacities (say, the difference between Turing and an average human) seem to lead to large differences in the systems that are designed (say, the difference between a computer and nothing of importance). And even if there are diminishing returns, a limited increase in intelligence combined with a large increase in speed will produce at least some of the effects of an intelligence explosion.

(doesn’t think we are stuck at a local maximum in intelligence space)

Perhaps there are some areas of intelligence space (akin to inaccessible cardinals in set theory?) that one simply cannot get to by hill-climbing and hill-leaping.

I think that the extent to which we can expect various cognitive capacities to correlate with each other is a substantive open question. Still, even if self-amplifying capacities such as design capacities correlate only weakly with many cognitive capacities, they will plausibly correlate more strongly with the capacity to create systems with these capacities. It remains a substantive question just how much correlation one can expect.

Regarding disasters, I certainly cannot exclude the possibility that global warfare or a nanotechnological accident (‘gray goo’) will stop technological progress entirely before AI or AI+ is reached. I also cannot exclude the possibility that artificial systems will themselves bring about disasters of this sort.

it is likely that foreseeable energy resources will suffice for many generations of AI+, and AI+ systems are likely to develop further ways of exploiting energy resources.

It is entirely possible that there will be active prevention of the development of AI or AI+ (perhaps by legal, financial, and military means), although it is not obvious that such prevention could be successful indefinitely.

When I discussed these issues with cadets and staff at the West Point Military Academy, the question arose as to whether the US military or other branches of the government might attempt to prevent the creation of AI or AI+, due to the risks of an intelligence explosion. The consensus was that they would not, as such prevention would only increase the chances that AI or AI+ would first be created by a foreign power.

Verdict:
So I think that the singularity hypothesis is one that we should take very seriously.

Post-singularity:
If there is AI++, it will have an enormous impact on the world. So if there is even a small chance that there will be a singularity, we need to think hard about the form it will take. There are many different forms that a post-singularity world might take. Some of them may be desirable from our perspective, and some of them may be undesirable.

Values:
(separates into agent-relative value and agent-neutral value, which he calls “objective”)

For example, ending disease and poverty would be good. Destroying all sentient life would be bad. The subjugation of humans by machines would be at least subjectively bad.

(Note: I’ve tried to separate out the terminal values in the following paragraph, and removed instrumental values such as “the very fact of an ongoing intelligence explosion all around one could be subjectively bad, perhaps due to constant competition and instability”)

Many would hold that human immortality would be subjectively and perhaps objectively good, although not everyone would agree. The wholesale replacement of humans by nonhuman systems would plausibly be subjectively bad, but there is a case that it would be objectively good, at least if one holds that the objective value of lives is tied to intelligence and complexity. If humans survive, the rapid replacement of existing human traditions and practices would be regarded as subjectively bad by some but not by others. Enormous progress in science might be taken to be objectively good.

or because certain intellectual endeavours would come to seem pointless.

we do not know what a post-singularity world will be like, and even if we did, it is nontrivial to assess its value.

Friendly AI:
the question that matters is: how (if at all) should we go about designing AI, in order to maximize the expected value of the resulting outcome? Are there some policies or strategies that we might adopt? In particular, are there certain constraints on design of AI and AI+ that we might impose?

Strategy:
It is far from clear that we will be in a position to impose these constraints.

Insofar as the path to AI or AI+ is driven by competitive forces (whether financial, intellectual, or military), then these forces may tend in the direction of ignoring these constraints.

Hazards – war:
An especially bad case is a ‘singularity bomb’: an AI+ designed to value primarily the destruction of the planet (or of a certain population), and secondarily the creation of ever-more intelligent systems with the same values until the first goal is achieved.

Friendly AI:
We might divide the relevant constraints into two classes. Internal constraints concern the internal structure of an AI, while external constraints concern the relations between an AI and ourselves.

First, we might try to constrain their cognitive capacities in certain respects, so that they are good at certain tasks with which we need help, but so that they lack certain key features such as autonomy. For example, we might build an AI that will answer our questions or that will carry specified tasks out for us, but that lacks goals of its own.

such an approach is likely to be unstable in the long run. Eventually, it is likely that there will be AIs with cognitive capacities akin to ours.

Strategy – oracle AI:
Still, it is worth noting that this sort of limited AI and AI+ might be a useful first step on the road to less limited AI and AI+.

Mindspace:
I will subsume all of these under the label of values (very broadly construed). This may be a sort of anthropomorphism: I cannot exclude the possibility that AI+ or AI++ will be so foreign that this sort of description is not useful.

Goal stability:
The values of these systems may well constrain the values of the systems that they create, and may constrain the values of an ultimate AI++. And in a world with AI++, what happens may be largely determined by what an AI++ values.

(Note: the following passage mixes up predictions with normative values; I’ve attempted to tease them apart)

Goal prediction:
The issues regarding values look quite different depending on whether we arrive at AI+ through extending human systems via brain emulation and/or enhancement, or through designing non-human systems.

Under human-based AI, each system is either an extended human or an emulation of a human. The resulting systems are likely to have the same basic values as their human sources.

Values – machine ethics:
There are likely to be many difficult issues here, not least issues tied to the social, legal, and political role of emulations.

Strategy – human-based AI:
(concerning human-based AIs)
Still, the resulting world will at least be inhabited by systems more familiar than non-human AIs, and the risks may be correspondingly smaller.

So brain emulation and brain enhancement have potential prudential benefits. The resulting systems will share our basic values, and there is something to be said more generally for creating AI and AI+ that we understand.
(Note: something to be said? Then please say it)

Another potential benefit is that these paths might allow us to survive in emulated or enhanced form in a post-singularity world, although this depends on difficult issues about personal identity that I will discuss later.

Values – transhuman:
The moral value of this path is less clear: given the choice between emulating and enhancing human beings and creating an objectively better species, it is possible to see the moral calculus as going either way. But from the standpoint of human self-interest, there is much to be said for brain emulation and enhancement.

Friendly AI:
What sort of values should we aim to instil in a non-human-based AI or AI+? There are some familiar candidates. From a prudential point of view, it makes sense to ensure that an AI values human survival and well-being and that it values obeying human commands. Beyond these Asimovian maxims, it makes sense to ensure that AIs value much of what we value (scientific progress, peace, justice, and many more specific values). This might proceed either by a higher-order valuing of the fulfilment of human values or by a first-order valuing of the phenomena themselves. Either way, much care is required. On the first way of proceeding, for example, we need to avoid an outcome in which an AI++ ensures that our values are fulfilled by changing our values. On the second way of proceeding, care will be needed to avoid an outcome in which we are competing over objects of value.

If we create an AI by direct programming, we might try to instil these values directly. For example, if we create an AI that works by following the precepts of decision theory, it will need to have a utility function. We can in effect control the AI’s values by controlling its utility function. With other means of direct programming, the place of values may not be quite as obvious, but many such systems will have a place for goals and desires, which can then be programmed directly.
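(Note: a minimal sketch of my own of what "controlling the AI’s values by controlling its utility function" could look like for a toy decision-theoretic agent. The outcome features, weights, and actions are invented for illustration and are not from the paper.)

```python
# Toy decision-theoretic agent: its "values" just are its utility function,
# so changing the weights changes what it chooses. All names and numbers
# below are made up for illustration.

from typing import Callable, Dict, List, Tuple

Outcome = Dict[str, float]  # features of an outcome, e.g. human well-being

def make_utility(weights: Dict[str, float]) -> Callable[[Outcome], float]:
    """Build a utility function as a weighted sum of outcome features."""
    return lambda outcome: sum(weights[k] * outcome.get(k, 0.0) for k in weights)

def choose(actions: Dict[str, List[Tuple[float, Outcome]]], utility) -> str:
    """Pick the action with the highest expected utility.

    Each action maps to a list of (probability, outcome) pairs."""
    def expected_utility(lottery):
        return sum(p * utility(o) for p, o in lottery)
    return max(actions, key=lambda a: expected_utility(actions[a]))

# The designer's choice of weights is the designer's choice of values
# (roughly the "Asimovian" candidates mentioned above).
asimovian = make_utility({"human_survival": 10.0,
                          "human_wellbeing": 5.0,
                          "obeys_commands": 3.0,
                          "own_power": 0.0})

actions = {
    "help_humans": [(1.0, {"human_survival": 1.0, "human_wellbeing": 1.0})],
    "seize_power": [(0.9, {"own_power": 1.0}), (0.1, {"human_survival": -1.0})],
}

print(choose(actions, asimovian))  # -> "help_humans"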

(hard to control values in evolutionary approach)

Still, we can exert at least some control over values in these systems by selecting for certain sorts of action (in the evolutionary context), or by rewarding certain sorts of action (in the learning context), thereby producing systems that are disposed to produce actions of that sort.

Goal stability:
Of course even if we create an AI or AI+ (whether human-based or not) with values that we approve of, that is no guarantee that those values will be preserved all the way to AI++.

This value might be overcome by other values that take precedence: in a crisis, for example, saving the world might require immediately creating a powerful successor system, with no time to get its values just right. And even if every AI attempts to preserve relevant values in its successors, unforeseen consequences in the creation or enhancement process are always possible.

Law:
[Robin Hanson] argues that it is more important that AI systems are law-abiding than that they share our values. An obvious worry in reply is that if an AI system is much more powerful than us and has values sufficiently different from our own, then it will have little incentive to obey our laws.

Values:
If at any point there is a powerful AI+ or AI++ with the wrong value system, we can expect disaster (relative to our values) to ensue.

If the AI+ value system is merely neutral with respect to some of our values, then in the long run we cannot expect the world to conform to those values.

Strategy – proof:
I think it would be optimistic to expect that [provably friendly AI] will be the path by which we first reach AI or AI++, but it nevertheless represents a sort of ideal that we might aim for.

Another approach is to constrain the internal design of AI and AI+ systems so that any intelligence explosion does not happen fast but slowly, so that we have some control over at least the early stages of the process.

Goal prediction:
(Seems to reject Kant’s view that rationality implies morality, but isn’t certain)

Hazards – robot:
If the systems are created in embodied form, inhabiting and acting on the same physical environment as us, then the risks are especially significant. Here, there are at least two worries. First, humans and AI may be competing for common physical resources: space, energy, and so on. Second, embodied AI systems will have the capacity to act physically upon us, potentially doing us harm.

Strategy – box:
The obvious suggestion is that we should first create AI and AI+ systems in virtual worlds.

Given such a virtual environment, we could monitor it to see whether the systems in it are benign and to determine whether it is safe to give those systems access to our world.

AI box:
If an AI++ is in communication with us and wants to leave its virtual world, it will.

The same goes even if the AI systems are not in direct communication with us, if they have some knowledge of our world. If an AI++ has access to human texts, for example, it will easily be able to model much of our psychology. If it chooses to, it will then be able to act in ways such that if we are observing, we will let it out.

Strategy – box:
We must also prevent information from leaking in. We should not directly communicate with these systems and we should not give them access to information about us.

if we are aiming for a leakproof world, we should seek to minimize quirks of design,

the leakproof singularity is an unattainable ideal. Confining a superintelligence to a virtual world is almost certainly impossible: if it wants to escape, it almost certainly will.

So to increase the chances of a desirable outcome, we should certainly design AI in virtual worlds.

Simulation hypothesis:
If one takes seriously the possibility that we are ourselves in such a simulation (as I do in Chalmers, 2005), one might consequently take seriously the possibility that our own tipping point lies in the not-too-distant future. It is then not out of the question that we might integrate with our simulators before we integrate with our simulatees, although it is perhaps more likely that we are in one of billions of simulations running unattended in the background.

Values – machine ethics:
Switching off the simulation entirely may be out of the question: if the AI systems are conscious, this would be a form of genocide. But there is nothing stopping us from slowing down the clock speed.

Strategy – summary:
1. Human-based AI first (if possible).
2. Human-friendly AI values (if not).
3. Initial AIs negatively value the creation of successors.
4. Go slow.
5. Create AI in virtual worlds.
6. No red pills.
7. Minimize input.

Post-singularity:
what is our place within that world? There seem to be four options: extinction, isolation, inferiority, or integration.

Values:
I think that [isolation from AI+/AI++] will also be unattractive to many: it would be akin to a kind of cultural and technological isolationism that blinds itself to progress elsewhere in the world.

I think a model in which we are peers with the AI systems is much preferable.

Strategy – transhumanism:
On this option, we become superintelligent systems ourselves. How might this happen? The obvious options are brain enhancement, or brain emulation followed by enhancement. This enhancement process might be the path by which we create AI+ in the first place, or it might be a process that takes place after we create AI+ by some other means, perhaps because the AI+ systems are themselves designed to value our enhancement. In the long run, if we are to match the speed and capacity of nonbiological systems, we will probably have to dispense with our biological core entirely. This might happen through a gradual process through which parts of our brain are replaced over time, or it might happen through a process of scanning our brains and loading the result into a computer, and then enhancing the resulting processes. Either way, the result is likely to be an enhanced nonbiological system, most likely a computational system.

(Discusses serial sectioning, nanotransfer, nondestructive uploading)

Values – identity/consciousness summary:
the key question is: will I survive uploading?

First, will an uploaded version of me be conscious?
Second, will it be me?

Values – consciousness:
My own view is that functionalist theories are closer to the truth here.

the default attitude should be that both biological and nonbiological systems can be conscious.

What happens to consciousness during a gradual uploading process? There are three possibilities. It might suddenly disappear, with a transition from a fully complex conscious state to no consciousness when a single component is replaced. It might gradually fade out over more than one replacement, with the complexity of the system’s conscious experience reducing via intermediate steps. Or it might stay present throughout.

by far the most plausible hypothesis is that full consciousness will stay present throughout.

Predicted attitude:
it seems very likely that partial uploading will convince most people that uploading preserves consciousness. Once people are confronted with friends and family who have undergone limited partial uploading and are behaving normally, few people will seriously think that they lack consciousness.

If this is right, we can say that consciousness is an organizational invariant: that is, systems with the same patterns of causal organization have the same states of consciousness.

Values – identity:
On the optimistic view of uploading, the upload will be the same person as the original. On the pessimistic view of uploading, the upload will not be the same person as the original.

the issue between the optimistic and pessimistic view is literally a life-or-death question.

personal identity is not an organizational invariant.

Biological theorists are likely to hold the pessimistic view of teletransportation, and are even more likely to hold the pessimistic view of uploading.

Closest-continuer theorists are likely to hold that the answer depends on whether the uploading is destructive.

I do not have a settled view about these questions of personal identity and find them very puzzling.

My own view is that continuity of consciousness (especially when accompanied by other forms of psychological continuity) is an extremely strong basis for asserting continuation of a person.

Strategy – personal survival:
One possibility is that we can preserve our brains for later uploading (cryonic technology).

The final alternative here is reconstruction of the original system from records.

Values – identity:
The question then arises: is reconstructive uploading a form of survival?

This is the question of whether personal identity involves a further fact. That is: given complete knowledge of the physical state of various systems at various times (and of the causal connections between them), and even of the mental states of those systems at those times, does this automatically enable us to know all facts about survival over time, or are there open questions here?

Still, I think that on a further-fact view, it is very likely that continuity of consciousness suffices for survival.

One could put a pessimistic spin on the deflationary view by saying that we never survive from moment to moment, or from day to day. At least, we never survive in the way that we naturally think we do. But one could put an optimistic spin on the view by saying that this is our community’s form of life, and it is not so bad.

should we care about [hypothetical futures] in the way in which we care about futures in which we survive?

Speaking for myself, I am not sure whether a further-fact view or a deflationary view is correct.

Values – transhuman:
Suppose that before or after uploading, our cognitive systems are enhanced to the point that they use a wholly different cognitive architecture. Would we survive this process?

Strategy – personal survival:
(Possibly joking)
My own strategy is to write about the singularity and about uploading. Perhaps this will encourage our successors to reconstruct me, if only to prove me wrong.

Verdict:
Will there be a singularity? I think that it is certainly not out of the question, and that the main obstacles are likely to be obstacles of motivation rather than obstacles of capacity.

Strategy – summary:
How should we negotiate the singularity? Very carefully, by building appropriate values into machines, and by building the first AI and AI+ systems in virtual worlds.
How can we integrate into a post-singularity world? By gradual uploading followed by enhancement if we are still around then, and by reconstructive uploading followed by enhancement if we are not.
