An interesting article in the NYT got me thinking… The story is about a paper written in 2003 by Nick Bostrom, entitled “Are You Living in a Computer Simulation?” The abstract:
This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a posthuman stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.
So – if humans (or our weakly godlike AI progeny) survive long enough, and the predictions of ever-increasing computing power hold, AND the entities with access to that computing power decide to run any significant number of evolutionary simulations, then we’re very likely to be living in one of those simulations (one ‘real’ world versus a large number of sims – not a bet I want a piece of).
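To make those odds concrete: under a simple indifference assumption (you are equally likely to be any one of the observers with experiences like yours), one base reality plus N ancestor-simulations leaves you only a 1-in-(N+1) chance of being in the real world. A back-of-the-envelope sketch, with made-up simulation counts:

```python
# Odds sketch under a simple indifference assumption:
# one base reality, N simulated histories, and you could be any observer in any of them.
def chance_of_base_reality(num_sims: int) -> float:
    """Probability of being in the one 'real' world out of 1 + num_sims worlds."""
    return 1 / (1 + num_sims)

for n in (1, 1_000, 1_000_000):  # hypothetical numbers of ancestor-simulations
    print(f"{n:>9,} sims -> P(real) = {chance_of_base_reality(n):.6f}")
```

Even a thousand simulated histories drops the chance of being ‘real’ to about a tenth of a percent, which is the whole force of the argument.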
This is not a new observation – a reader on BoingBoing pointed to a 1995 interview in Wired where Hans Moravec waves off the idea as obvious. Charlie Stross has incorporated the idea of simulations into more than one of his yarns; in “A Colder War” Cthulhu runs simulations on the people it absorbs.
Let’s assume for the sake of argument that we are indeed living in a simulation. Are there implications for how we should behave or live our lives? Not that I can think of – if I hurt someone, their experience of pain is not diminished or mitigated by being perfectly simulated rather than real. There are a couple of things I have been thinking about, though – one serious and one less so.
The less serious consideration could be classified as belonging to the Turtles All The Way Down group of speculations. We’re sims, our simulation navigates the posthuman transition successfully and we start – yes, you saw this coming – running evolutionary simulations. How deeply are we nested? Would we run into hardware constraints back in the real world?
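A rough way to think about the hardware question (purely illustrative – every number below is invented): if each simulated world can only spend some fraction of its own compute on the child simulations it hosts, the budget available at each level shrinks geometrically, and the turtle stack bottoms out fast.

```python
# Toy model of nested simulations: each level can spend only a fraction
# of its (simulated) compute budget on the simulations it hosts.
# All numbers here are assumptions for illustration, not estimates.
BASE_FLOPS = 1e40             # hypothetical compute available in the 'real' world
CHILD_FRACTION = 0.01         # assumed share of a world's compute spent on child sims
MIN_FLOPS_FOR_A_WORLD = 1e30  # assumed cost of running one convincing world-sim

depth = 0
budget = BASE_FLOPS
while budget * CHILD_FRACTION >= MIN_FLOPS_FOR_A_WORLD:
    budget *= CHILD_FRACTION
    depth += 1

print(f"With these made-up numbers, nesting runs out of hardware after {depth} levels.")
```

With those particular guesses the turtles only stack a handful deep; every extra level multiplies the bill seen from the basement.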
The big question – and one I’ve been thinking about for quite a while – is how should we treat software entities? I code up a high-fidelity version of a housefly’s nervous system. Then I feed in the same signals from the sensorium that a fly would experience when its wings were pulled off. Why? I don’t know – why do folks pull wings off real flies rather than just smushing them? Calm down, it’s just a simulation of a fly, you say. Okay, a couple of years pass and I code up a puppy… Harlan Ellison’s I Have No Mouth… implemented in software. When do we cross the line from disturbing silliness (I googled ‘the sims torture’ and found this pretty quickly) to real evil?
As flies to wanton boys, are we to the gods; they kill us for their sport.
My Catholo-Buddhist ethos says don’t be cruel, even to sims – the act itself intrinsically corrupts you. And who knows what rains down from above? How would you want ‘them’ to act?
Karma?