Archive for September, 2011

Follow whatever is helpful

I have a goal (or more precisely, I strongly prefer some world-outcomes over others). Philosophers can argue over the precise meanings of words, even the meaning of truth itself. But for me, the most important concept is not whether something is true but whether it is helpful to that goal.

  • Truth is a very helpful concept.
  • Scientific truth is an extremely helpful concept.
  • It is helpful to help people with similar goals to mine. It is unhelpful to help people with goals which go against mine.
  • Words are helpful, even poorly defined words. It may be helpful to try to find more precise definitions, but it’s not helpful to define a word too precisely prematurely.
  • The word “helpful” is poorly defined but helpful.
  • It’s helpful to check whether other people agree on what’s true and what’s helpful.
  • My goal is poorly defined. It may be helpful to speculate about what a better-defined goal might be. But it stops being helpful at the point where it’s unlikely to change the decisions I make.
  • I recognise that my goal may change. I don’t want it to change a lot – I’d consider that to be a betrayal by my future self. (Though from the point of view of my future self, it would feel like a change for the better). But some flexibility in my goal system – as a result of new information or reconsidering my position – is probably a good thing. Moral progress is tricky.
  • Mathematical axioms are extremely helpful. Axioms about the real world (such as “subjective experience is real”) usually aren’t. It’s helpful to consider that such axioms may be wrong.
  • Contemplating nonsense (e.g. p-zombies) can be helpful if it unlocks new ways of thinking. Non-logical modes of thinking can be helpful and it’s helpful to realise this.
  • The notion of a utility function, and of a utility-maximising agent, is helpful – not as a model of human behaviour, but as a model of how I want to behave.
  • The notion of bounded rationality is very helpful. I’m not a perfect optimiser; I seek to become more like one, and that requires recognising my failings. I’m not a deduction machine – I’m a chimpanzee with superpowers. So planning out future actions is essentially about placing a souped-up chimpanzee with my goals and memories in some particular future scenario. The chimp might not do what I want.
  • The notion of free will is very helpful, even if it ultimately doesn’t correspond to anything physical.

 


What I Want

There’s a selfish component to what I want, and an altruistic component. I won’t describe the selfish component, but you can probably imagine the sort of thing that’s in there.

The altruistic component is defined as an aggregate of what everyone else wants. But there are some caveats:

  • It’s biased towards people that are closest to me. I’ll explain why in another post – this is a more complicated issue than you might think.
  • This definition is only valid if a relatively small fraction of the world’s population is employing it. If everyone in the world were motivated to help each other and wanted nothing for themselves, it wouldn’t make any sense – no-one would end up wanting anything at all.
  • This system isn’t completely consequentialist. People might want my help, but might not want that help if I have to behave unethically in order to provide it. So I’m separating “ethics” (which sets constraints on behaviour) from “altruism” (which specifies one of my goals).
  • I’m leaving it vague as to exactly how other people’s preferences should be aggregated (especially when they come into conflict with each other).
  • I’m leaving it vague as to exactly what counts as a “person”.
  • I’m leaving it vague as to exactly what it means to “want” something.

This vagueness is a problem – it means that in a lot of cases, I’m unsure what the correct moral decision is. But it might also be a strength, since it avoids locking myself into a system which I later discover has flaws. (In fact, at this point I’m still leaving open the possibility that I’ve got the definition of altruism entirely wrong. The fact that I’m still leaving that open worries me somewhat, in case I’m just being cowardly.)

Examples of vagueness in action:

  • Any moral dilemma that comes down to a dispute between different interested parties is essentially a preference aggregation problem (a minimal sketch of what that aggregation might look like follows this list).
  • Abortion and animal rights issues (plus quite a few transhumanist topics) come down to the problem of what counts as a person.
  • Some of the problems of what it means to “want” something are about balancing short- and long-term preferences. For example, euthanasia: someone might want to die (a short-term preference) but not want to live in a culture that considers suicide an acceptable way out (a long-term preference).
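
To make the aggregation idea a little more concrete, here’s a minimal sketch in Python. Everything in it – the closeness-weighted sum, the particular weights, the function and field names – is something I’ve invented purely for illustration, not a definition I’m committing to.

```python
# Hypothetical sketch: score an outcome as a closeness-weighted sum of how much
# each person prefers it. All weights and preference scores are made up.

def aggregate_preferences(outcome, people):
    """Sum each person's preference for the outcome, weighted by closeness."""
    return sum(p["closeness"] * p["prefers"](outcome) for p in people)

people = [
    {"closeness": 1.0, "prefers": lambda o: 1.0 if o == "A" else 0.0},  # someone close to me
    {"closeness": 0.1, "prefers": lambda o: 1.0 if o == "B" else 0.0},  # a stranger
]

for outcome in ("A", "B"):
    print(outcome, aggregate_preferences(outcome, people))
# "A" scores higher purely because of the closeness bias – which is exactly the
# sort of judgement the list above deliberately leaves vague.
```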

The other deliberate vagueness in my preference system is the balance between the selfish goal and the altruistic one. If these are expressed as utility functions, and the overall utility function is set as a weighted sum of these, what would happen?

U = a·U_altruistic + b·U_selfish

This would obviously depend on what the weights are and whether I have more leverage when it comes to the selfish function or the altruistic one. Right now it looks like the altruistic term wins by some orders of magnitude – while I have more direct and visible control over my own life, there’s only one of me and there are a lot of other people, and in some circumstances their lives can be saved or drastically improved for apparently little cost. So I’d need to be pretty stingy with my coefficients for the selfish function to have much impact at all. In other words, we pretty much get

U ≈ U_altruistic
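
As a toy illustration of why the altruistic term dominates, here’s a back-of-the-envelope sketch in Python. All the numbers are made up purely to show the orders-of-magnitude point – I haven’t actually pinned down the weights or the scale of either term.

```python
# Toy numbers for U = a*U_altruistic + b*U_selfish. Every value here is an
# illustrative assumption, not something measured or claimed.

a, b = 1.0, 1.0              # even with equal weights on the two terms...
delta_selfish = 1.0          # ...the most I can improve my own life (arbitrary units)
delta_altruistic = 1000.0    # ...is dwarfed by what the same effort can do for others

print(a * delta_altruistic)  # 1000.0
print(b * delta_selfish)     # 1.0
# The altruistic term swamps the selfish one, so in practice U ≈ U_altruistic
# unless b is made orders of magnitude larger than a.
```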

But that doesn’t mean I have to punish myself. In order to optimise my altruism, I need to protect my own health, mental health, financial security and relationship with my community. That takes care of quite a lot of the selfish goals for free.

But is it enough? Is there some minimum standard of selfishness that I demand, beyond what is required to maintain my effectiveness at helping other people? It doesn’t seem likely that I could stay effective if I were miserable all the time. But what if it were possible? Would that be what I want to do? I’m not sure.

About Giles

I’m Giles Edkins. I believe in physicalism, and I believe the universe can very likely be described completely by a mathematical model. The universe at its most basic level is essentially an uncaring place, indifferent to our lives or ethics or suffering. We have to impose our own ethics on our own little section of it – which is difficult because ethics mean so many different things to different people.

For me, I imagine laying out different alternative courses that the history of the world could take, and ranking them according to which are nicest. Generally, when things happen to people against their will, that decreases the overall niceness (and decreases it by a lot if it involves intense pain or death). I want to act in such a way as to maximize the amount of niceness, somewhat constrained by the resources which I’m not greedily spending on myself. I’m aware that the utility function I describe is extremely vague and may involve concepts that are not meaningful when the universe is reduced to its physical description. I don’t think this is a fundamental flaw in the approach, but it’s going to come back to bite me at some point.

I believe that statements of fact are either true or false. If I want to help make the world as nice as possible, it’s important to know which is which and to get it right. This means having a set of beliefs which are consistent with each other and with reality as I observe it. For things I’m unsure about (i.e. everything), it means assigning probabilities (although I don’t know how to do that yet). The most important thing is that I don’t want to accept untruths, even comforting ones.

This search for consistency is all well and good, unless it leads to a conclusion that most people are wrong about something important. Then it gets awkward, because either I’m wrong or they are. With that in mind, here are some unusual things that I believe:

1. Humans are not rational animals. Our decisions are mostly made by what we could call the “unconscious” mind – and we don’t really know how it makes those decisions. The “conscious” mind is essentially a justification module – it can cook up a plausible-sounding justification for those decisions in the language that we understand. Sometimes it will be consulted before a decision is made, and will reject that decision if it is unable to produce a plausible justification. This means that learning about how human behaviour is biased doesn’t necessarily help fix those biases – it just gives the conscious mind extra stuff to talk about. It also means that once someone’s mind is made up, it can be really difficult to change it. It’s not enough to simply provide enough evidence to tip the balance – you effectively need to counter every possible argument, because the conscious mind will keep coming up with new arguments for old decisions for as long as it can.

2. Giving to charity brings social benefits to the donor. From an evolutionary perspective, this may be why people do it (although it won’t feel like that’s the reason – see point 1 – and it isn’t necessarily a bad thing). But there’s very little pressure of any kind to donate that money effectively, i.e. to actually help as many people as much as possible per dollar. Again this might not be a bad thing on its own, but a consequence is that some charities are vastly more effective than others. The optimal philanthropy movement is growing, but right now there don’t seem to be all that many people with that exact motivation, and there isn’t much out there to help us make our decisions.

3. There is a reasonable chance that before the end of the century, humanity will be wiped out by superintelligent robots.

4. There are substantial opportunities to make the world a better place by addressing the issues described in points 2 and 3.

The other thing you need to know is that I’m new to blogging, never previously having felt that I had anything to say. To start with, I’ll mainly be using this for my own benefit rather than yours, as a place to dump my ideas. Expect posts to be sporadic, poorly written and poorly researched until I get the hang of it. And what’s the blog going to be about? I expect that a lot will be about thinking of the world as a big system and trying to understand how it all works.

Wikipedia: physicalism, consequentialism, rationalization, bias, intelligence explosion

Less Wrong woke me up.