An Unofficial Toronto Bus Map

toronto bus map 2012-06-02


This is an unofficial map – it will be out of date and contain mistakes. But you may find it useful all the same. I drew this a while ago (2012-06-02) but only just got around to sticking it online. Have fun riding around Toronto!

This is part of my “do one nice thing a week” strategy.



Toronto LW Singularity Discussion (sort of), 2012-04-19

Present: SD, GE, SB

SD was reading an article on Ribbonfarm about “Hackstability“, the equilibrium position between singularity and collapse.

SB wonders how far into the future we can reliably predict, so this discussion is more about the near future than the singularity, which we hope is still some way off. We pick a 5-year timescale and try to predict what we think we will see.

(This post may contain buzzwords)

Continue reading ‘Toronto LW Singularity Discussion (sort of), 2012-04-19’

Toronto LW Singularity Discussion, 2012-04-05

Sorry I haven’t been writing these up – or holding new ones. This will get fixed in the near future.

Present: SD, GE, SM

In this discussion we brainstormed counterarguments to the Singularity Institute worldview. We ranked these according to our vague feeling of how plausible they were – don’t read too much into this classification. Where it says ½ or +/-, it’s generally because there was some disagreement within the group. (Our plausibility tolerance probably also drifted, so I’m listing the arguments in the order they came up so that you can correct for that.)

We also noted whether the arguments related to the TIMESCALE for the emergence of AGI (as opposed to whether it’s possible in the first place), and whether they related to the CONSEQUENCES. If you reject the multiple discovery hypothesis and assume that AGI invention occurs infrequently, then arguments suggesting most AGIs will have mild consequences are also relevant to the timescale for the emergence of destructive or high-impact AI.

4 is the most plausible (according to us) and 1 is the least plausible.

Continue reading ‘Toronto LW Singularity Discussion, 2012-04-05’

Just a bit of fun: rock-paper-scissors

So now that I have my few lines of Python written, I obviously want to have a play with it. What would happen if instead of playing Iterated Prisoner’s Dilemma, they were playing Rock Paper Scissors?

There’s no iteration here, and no chance to choose a “mixed” strategy (e.g. randomly choosing each with 1/3 probability). Just three strategies: rock, paper, scissors.

So I figured: either it’s going to go around in cycles, or converge to a point where each strategy is stuck at 1/3. Since the problem feels time-reversible I’d expect cycles. And what do we get?
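The setup above can be sketched in a few lines. This is my own toy reconstruction, not the post’s actual code: a population split across the three pure strategies, with each generation’s share growing in proportion to the exponential of its expected payoff against the current mix (the same exponentiated-utility rule as in the prisoner’s dilemma post).

```python
import math

# Payoff matrix for rock-paper-scissors: rows = my strategy,
# columns = opponent's strategy, order (rock, paper, scissors).
# Win = 1, lose = -1, tie = 0.
PAYOFF = [
    [0, -1, 1],   # rock
    [1, 0, -1],   # paper
    [-1, 1, 0],   # scissors
]

def step(pop):
    """One generation: expected payoff against the mix, exp-weighted growth."""
    payoffs = [sum(PAYOFF[i][j] * pop[j] for j in range(3)) for i in range(3)]
    weights = [pop[i] * math.exp(payoffs[i]) for i in range(3)]
    total = sum(weights)
    return [w / total for w in weights]  # renormalise: total population fixed

pop = [0.5, 0.3, 0.2]  # start away from the 1/3 fixed point
for _ in range(100):
    pop = step(pop)
print(pop)
```

Tracking the three fractions over the iterations (rather than just printing the endpoint) is what reveals whether they cycle or settle at 1/3.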

Continue reading ‘Just a bit of fun: rock-paper-scissors’

Group Selection

This was inspired by last week’s Less Wrong discussion.

First of all, taboo “group selection” – it seems to be responsible for too many cached thoughts.

The game is iterated prisoner’s dilemma. There is a large population, and random individuals from the population have to play IPD against each other. Over time they will accumulate a “utility” (just the sum of all the scores from the different games they played) and this utility is exponentiated to determine the number of asexual offspring that individual will have in the next generation.

We would expect successful strategies to dominate in the population, but also that whether a strategy is “successful” depends on the mix of strategies present in the rest of the population.

The payoffs are as follows:

      D     C
D   1,1   3,0
C   0,3   2,2

So on each turn, if both players defect then they both score 1. If they both cooperate they both score 2. If one cooperates and the other defects then the defector scores 3 and the cooperator scores 0. Each game lasts for 10 turns.

If this were real life we could imagine the creatures evolving all kinds of strategies. But I’m only modelling three possible strategies:

  • Always defect
  • Always cooperate
  • Tit-for-tat. Always cooperate on the first turn and then on subsequent turns, do whatever the other player did on the previous turn.

I’ve made some modelling simplifications here:

  • There is no mutation
  • Individual interactions are not modelled. Instead I compute a matrix of how well each strategy scores against each other, then work out how the populations of different strategies will change based on the mix of other strategies.
  • The total population is held constant. (This would correspond to there being some resource which is both tightly constrained and fully utilised by the population).
  • There are discrete time steps.
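Putting the pieces above together, here is a rough sketch of the model (my own reconstruction under the stated assumptions, not the author’s actual code): score each pure strategy against each other over a 10-turn game, then run discrete generations where each strategy’s share grows in proportion to the exponential of its expected utility, with the total population renormalised to a constant.

```python
import math

C, D = "C", "D"
# (my score, opponent's score) for each pair of moves, matching the table above
PAYOFF = {(D, D): (1, 1), (D, C): (3, 0), (C, D): (0, 3), (C, C): (2, 2)}

def all_defect(opp_history):
    return D

def all_cooperate(opp_history):
    return C

def tit_for_tat(opp_history):
    # Cooperate first, then copy the opponent's previous move.
    return opp_history[-1] if opp_history else C

STRATEGIES = [all_defect, all_cooperate, tit_for_tat]

def game_score(a, b, turns=10):
    """Total score strategy a earns playing strategy b for `turns` turns."""
    hist_a, hist_b = [], []
    score_a = 0
    for _ in range(turns):
        move_a, move_b = a(hist_b), b(hist_a)
        score_a += PAYOFF[(move_a, move_b)][0]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a

# Precomputed matrix: SCORE[i][j] = what strategy i earns against strategy j.
SCORE = [[game_score(a, b) for b in STRATEGIES] for a in STRATEGIES]

def next_generation(pop):
    """Expected utility against the current mix, exponentiated for fitness."""
    utility = [sum(SCORE[i][j] * pop[j] for j in range(3)) for i in range(3)]
    fitness = [pop[i] * math.exp(utility[i]) for i in range(3)]
    total = sum(fitness)  # renormalise: total population held constant
    return [f / total for f in fitness]

pop = [1 / 3, 1 / 3, 1 / 3]  # equal mix of defect, cooperate, tit-for-tat
for _ in range(50):
    pop = next_generation(pop)
print(pop)
```

Note the precomputed score matrix stands in for individual interactions, exactly as in the simplifications listed above; there is no mutation, so a strategy that dies out stays out.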

So what happens?

Continue reading ‘Group Selection’

But I don’t understand it yet

The first thing that I had to learn is that it is OK to want something.

When a frog jumps, does it want to jump? When a tree grows, does it want to grow or is it just built that way? You can know everything about a person and not know what that person wants. What is good and evil? Atoms have no feelings, but a person is atoms and a person has feelings.

That’s something that I still have to learn. What I want doesn’t have to make sense.

The world makes sense but I don’t understand it yet.

When I am confused about something, it says a lot about me and says nothing about the thing I am confused about.

Is a tomato a fruit or a vegetable? Is Pluto a planet? When we ask these questions we are not talking about vegetables or planets. We are talking about ourselves.

I don’t know if I have had an original thought in my life. How many times have you heard that?

Sometimes the man is nasty. I don’t want to say I admire him. He is somewhere where good ideas have settled.

Everything I can see is inside my own head. We talk about heads a lot. Inside my head is a shadow of your head. Inside that is a shadow of mine. Now it fades.

To think that everyone else is right and you are wrong is arrogance – it means you think you know what is right.

Freedom is the freedom to think two plus two equals five. People make good use of this freedom.

I am only making sense to myself here. Just carry on writing and maybe it will come out OK. Someone else said that. You never need to think the same thought twice unless it’s a thought you like thinking. Someone else said that too – but why is it important?

Will I keep my thoughts to myself? How destructive can a thought be?

To stare into the abyss and say “meh”.

I see someone as me if it’s helpful to see him as me. That person who made all those mistakes? Not exactly me.

I know what it is like to be wrong. If I am right now then I was wrong then. If I was right then then I am wrong now. If the person convinces you they are right and they are right then you have won the argument.

It is OK to win.

Anthropic Principle Primer

We shouldn’t be surprised to find that the world is set up in such a way as to allow us to exist. This is obvious, but it turns out to be surprisingly difficult to formalise exactly what we mean by that. From the perspective of human rationality, we care about these kinds of issue because it would be nice to have a mathematical definition of the ideal epistemic rationalist, that we can then try to approximate. We almost do – Bayesian updating. But the concept of Bayesian updating contains a few holes, and this is one of them.
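For contrast with the anthropic cases to come, here is ordinary Bayesian updating in miniature – the baseline that the holes get poked in. The hypotheses and numbers are my own toy example, not anything from the primer:

```python
# Two hypotheses about the world, a prior over them, and the likelihood of
# some observed evidence under each (all values assumed for illustration).
prior = {"world_A": 0.5, "world_B": 0.5}
likelihood = {"world_A": 0.8, "world_B": 0.2}

# Bayes: posterior is proportional to prior * likelihood, then normalised.
unnormalised = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalised.values())
posterior = {h: p / total for h, p in unnormalised.items()}
print(posterior)  # world_A: 0.8, world_B: 0.2
```

The anthropic puzzles arise when it is unclear what the evidence even is – “I exist and am observing this” doesn’t slot neatly into the likelihood terms above.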

Note that what we’re interested in here is finding out about the state of the world. That is, we’re doing science (admittedly at a very abstract level in this case). We’re not trying to find out the true nature of reality (i.e. what it means to say that something actually exists) or human consciousness or qualia, or self-identity issues. This topic is easily confusing enough before you get on to those kinds of issue, so I’m generally trying to leave them out here.

Also I’m leaving out the issue of time. In these examples, all evidence is presented to all observers at the same time and they update instantly. There’s no time dimension. Different anthropic principles handle time differently (e.g. whether they consider the fundamental unit to be the observer or the observer-moment). This is another thing I don’t want to go into here because it adds confusion.

There will likely be a dumb arithmetic error in here somewhere. Can you spot it?

Continue reading ‘Anthropic Principle Primer’

