Follow whatever is helpful

I have a goal (or more precisely, I strongly prefer some world-outcomes over others). Philosophers can argue over the precise meanings of words, even the meaning of truth itself. But for me, the most important question is not whether something is true but whether it is helpful to that goal.

  • Truth is a very helpful concept.
  • Scientific truth is an extremely helpful concept.
  • It is helpful to help people with similar goals to mine. It is unhelpful to help people with goals which go against mine.
  • Words are helpful, even poorly defined words. It may be helpful to try to find more precise definitions, but it’s not helpful to define a word too precisely, too early.
  • The word “helpful” is poorly defined but helpful.
  • It’s helpful to check whether other people agree on what’s true and what’s helpful.
  • My goal is poorly defined. It may be helpful to speculate about what a better-defined goal might be. But it stops being helpful at the point where it’s unlikely to change the decisions I make.
  • I recognise that my goal may change. I don’t want it to change a lot – I’d consider that a betrayal by my future self. (Though from the point of view of my future self, it would feel like a change for the better.) But some flexibility in my goal system – as a result of new information or reconsidering my position – is probably a good thing. Moral progress is tricky.
  • Mathematical axioms are extremely helpful. Axioms about the real world (such as “subjective experience is real”) usually aren’t. It’s helpful to consider that such axioms may be wrong.
  • Contemplating nonsense (e.g. p-zombies) can be helpful if it unlocks new ways of thinking. Non-logical modes of thinking can be helpful and it’s helpful to realise this.
  • The notion of a utility function, and a utility-maximising agent, is helpful. Not as a useful model of human behaviour, but as a model of how I want to behave.
  • The notion of bounded rationality is very helpful. I’m not a perfect optimiser; I seek to become more like one, and that requires recognising my failings. I’m not a deduction machine, I’m a chimpanzee with superpowers. So planning out future actions is essentially about placing a souped-up chimpanzee with my goals and memories in some particular future scenario. The chimp might not do what I want.
  • The notion of free will is very helpful, even if it ultimately doesn’t correspond to anything physical.
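The utility-maximising agent mentioned above can be made concrete with a minimal sketch. Everything here – the actions, the outcome probabilities, and the utility numbers – is an illustrative assumption of mine, not anything from the post; the point is only the shape of the idea: score each action by its probability-weighted utility and pick the best.

```python
def expected_utility(action, outcomes, utility):
    """Sum utility over an action's possible outcomes, weighted by probability."""
    return sum(p * utility(outcome) for outcome, p in outcomes[action])

def choose(actions, outcomes, utility):
    """Return the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))

# Hypothetical toy example: two actions, each with probabilistic outcomes.
outcomes = {
    "safe":  [("small win", 1.0)],
    "risky": [("big win", 0.4), ("loss", 0.6)],
}
utility = {"small win": 1.0, "big win": 3.0, "loss": -1.0}.get

best = choose(["safe", "risky"], outcomes, utility)
print(best)  # prints "safe": 1.0 beats 0.4*3.0 + 0.6*(-1.0) = 0.6
```

A bounded-rationality agent, in these terms, is one that can’t afford to enumerate all the outcomes, or can’t trust its own estimates of `p` and `utility` – which is exactly the gap the post describes between the ideal and the chimpanzee.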