As I brought up in my previous post, humanity might have many twin species. They could exist far away in space (if space is sufficiently large) or in other Everett branches (if the many-worlds interpretation of quantum mechanics is correct). What are the implications of this for the simulation argument?
It strikes me as quite likely that we, or at least a significant fraction of our cosmic counterparts (if we have any), will eventually create a superintelligence, or in some other way gain the capability to create ancestor simulations (sentient simulations of people like us). If there are more people like us in ancestor simulations than in real history, we are more likely to be in one of the many simulated histories, according to the simulation argument (1). It seems plausible that many possible civilizations and superintelligences would make these simulations if they could, since they would probably have high instrumental (and recreational) value (2), and advanced civilizations appear to have enormous computing power at their disposal, enough to make trillions of ancestor simulations each. It therefore seems likely that if people like us exist at many places in the multiverse, very many ancestor simulations will exist, unless almost all such civilizations either enforce a universal ban on ancestor simulations or go extinct (in some fashion other than via a misaligned superintelligence) before they can make any.
One reason to think that a ban on these simulations would be enforced is that future civilizations might deem sentient simulations immoral (as I hope they would (3)), but this assumes that just about every civilization succeeds in keeping its AIs and citizens under control for possibly millions of years. A second reason to think that we are not in a simulation is that even unfriendly agents might refrain from making simulations in order to reduce the likelihood that they are in a simulation themselves (4). However, it only takes one in a thousand civilizations making a thousand ancestor simulations each for the simulation hypothesis (5) to become likely. I think it is fairly likely (maybe 40% if I had to pick a number (6)) that we are simulated, especially if there will ever be a large number of civilizations at our stage of development in the base-level multiverse. A final possibility, which I don’t find very plausible, is that it is for some reason impossible to create sentient simulations.
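To make the arithmetic behind that one-in-a-thousand claim concrete, here is a toy calculation using the simple ratio at the heart of Bostrom’s argument. The function name and the input numbers are mine, chosen purely for illustration:

```python
# Toy calculation of the fraction of observers who are simulated,
# using the simple ratio from Bostrom's simulation argument.
# Assumes each simulation contains as many observers as one real history.

def simulated_fraction(f_sim: float, n_sims: float) -> float:
    """f_sim: fraction of civilizations that run ancestor simulations.
    n_sims: average number of ancestor simulations each of those runs."""
    return f_sim * n_sims / (f_sim * n_sims + 1)

# One in a thousand civilizations running a thousand simulations each
# already puts half of all observers inside simulations:
print(simulated_fraction(0.001, 1000))  # -> 0.5
```

Even with such seemingly pessimistic inputs, the odds of being simulated come out even, which is why the argument is hard to dismiss with "most civilizations won’t bother".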
If the simulation hypothesis is correct, our cosmic endowment could turn out to be a giant wallpaper, or the simulation might stop after the computer running it has used up a certain amount of resources. Because of this, and because of potential correlations between our decisions across simulations (if we make the world better in one simulation, it might become more likely that conditions in similar simulations improve as well), Brian Tomasik thinks the relatively short term looks comparatively more important to improve for its own sake than it otherwise would. His argument doesn’t depend on the simulation hypothesis being very likely, which I find counter-intuitive. His reasoning and math seem sound to me, but I don’t think they justify ignoring far-future concerns, and neither does he. Without considering the simulation argument, efforts to improve the “short term” (meaning decades or centuries in this context) for its own sake seem basically negligible in expected impact compared to the far future (spanning millions or billions of years). Taking the simulation argument into account, the “short term” starts to appear comparable to the far future in importance, once appropriately huge error bars are added to the variables in this calculation.
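One toy way to see how those numbers could shake out. Every figure below is an invented placeholder with huge error bars, not an estimate by Tomasik or anyone else; the sketch only shows the shape of the reasoning:

```python
# Toy expected-value comparison, short term vs far future.
# All numbers are invented placeholders for illustration only.

p_sim = 0.4           # assumed probability that we are in a simulation
n_copies = 1e5        # correlated simulated copies whose outcomes track ours
far_multiplier = 1e8  # relative size of the far future, if it actually unfolds

# Short-term value: realized once if we are in base reality, or across
# many correlated copies if we are in one of many similar simulations.
ev_short = (1 - p_sim) * 1 + p_sim * n_copies

# Far-future value: mostly realized only if we are in base reality,
# since simulations may be shut down long before the far future.
ev_far = (1 - p_sim) * far_multiplier

# The far future still dominates with these numbers, but by a factor of
# roughly a thousand rather than the naive hundred million.
print(ev_far / ev_short)
```

The point is not the particular ratio but that the simulation argument can shrink the gap between short-term and far-future impact by many orders of magnitude without closing it entirely.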
Max Tegmark thinks (7) that the simulation argument “logically self-destructs” when you reflect on the fact that the simulated civilizations are themselves likely to make simulations, so we are more likely (according to the simulation argument) to be in a simulation within a simulation, and even more likely to be in a simulation within a simulation within a simulation, and so on. I don’t find this counterargument very persuasive: there is no infinite regress as long as the computer running the simulation doesn’t have limitless computing resources.
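A quick sketch of why the regress terminates: if each nested simulation can only run at some fraction of its host’s computing power, the layers shrink geometrically and bottom out after finitely many steps. The exponents below (powers of ten of FLOPS) are arbitrary assumptions, not physical estimates:

```python
# Sketch: nested simulations bottom out when compute runs short.
# Exponents are powers of ten of FLOPS, e.g. base_exp=40 means 1e40 FLOPS.
# All values are arbitrary assumptions for illustration.

def max_depth(base_exp: int, min_exp: int, overhead_exp: int) -> int:
    """How many nested layers fit, if each layer has `overhead_exp`
    orders of magnitude less compute than the layer hosting it,
    and a civilization-level simulation needs at least `min_exp`."""
    depth = 0
    while base_exp >= min_exp:
        depth += 1
        base_exp -= overhead_exp
    return depth

# 1e40 FLOPS at the base, 1e25 FLOPS needed per simulation,
# a factor of 1e3 overhead per layer of nesting:
print(max_depth(40, 25, 3))  # -> 6 layers, then no further simulations fit
```

However deep the stack goes, it is finite, so the probabilities over layers still sum to one and the argument keeps its force.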
Consider the case where both the many-worlds interpretation in base-level reality and the simulation hypothesis are correct. Depending on what hardware is used to run the simulation, we get different effects. If observed quantum events in the simulation are determined by quantum computations in base-level reality, we should expect the universe where the simulation is running to branch when people inside the simulation perform a quantum measurement or a quantum computation. Quantum computing can be simulated by classical computers (8), but this would use up so much computing power in base-level reality that I suspect it would be impractical. It is also possible in principle to simulate a universe that obeys many-worlds, but this would require huge amounts of computing power unless it was crudely approximated.
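The impracticality claim can be made concrete with the standard back-of-the-envelope estimate for dense state-vector simulation: tracking n qubits classically takes 2^n complex amplitudes, roughly 16 bytes each. This says nothing about what methods hypothetical simulators would actually use; it just shows the exponential blow-up:

```python
# Back-of-the-envelope memory cost of simulating an n-qubit quantum
# computer classically with a dense state vector: 2**n complex
# amplitudes at 16 bytes each (two 64-bit floats per amplitude).

def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (30, 50, 80):
    tib = statevector_bytes(n) / 2**40
    print(f"{n} qubits: {tib:.3g} TiB")
# 30 qubits fit in a workstation (~16 GiB); 50 already need ~16 PiB;
# 80 exceed any plausible classical storage.
```

Each additional qubit doubles the cost, which is why classically emulating quantum events inside a simulated universe would be so expensive for the host.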
If many-worlds is incorrect in base-level reality but space is sufficiently large, we should expect the same simulation to be run elsewhere in that space. If it is run in many places and uses quantum computers, we should get the same effect as if many-worlds were correct (9). The simulation automatically “branches” if it is run on quantum computers and base-level reality is sufficiently large and/or obeys many-worlds. The simulation could also “branch” if it uses pseudo-randomness and is designed to fork under certain conditions, in order to approximate many-worlds or to study counterfactuals.
If we live in a simulation, can we really know anything about the laws of physics in base-level reality? Yes: we can at least infer that, if we are in a simulation, the “real” laws of physics must allow such a simulation to exist. Base-level reality must also permit the existence of people like us and the creation of advanced technology for the simulation argument to hold; this constrains the set of possible physical laws in base-level reality. The simulators probably had some reason(s) for creating the simulation. If they use it to predict the development of life elsewhere, they would presumably want the physics in the simulation to approximate their own laws of nature; if the simulation serves another purpose, it might resemble their reality to a lesser extent. It might make sense to think of oneself not as a particular instance of matter that is either simulated or not, but rather as a decision algorithm implemented in different places across the multiverse, some in base reality and some in simulations.
(1) The physicist Sean Carroll has pointed out an interesting flaw in this logic, but I think it is superficial and does not undermine the core of the argument.
(2) This point relies partly on recognising that simulations are useful to us now, and partly on the idea of “computational irreducibility”: that the behaviour of a complex system usually cannot be predicted without simulating it.
(3) Unless they can prevent more suffering by running the simulations than the simulations create, in which case running them might be morally justified in some cases.
(4) I was not the first to suggest this; see “Multiverse-wide Cooperation via Correlated Decision Making”, page 99.
(5) The simulation argument and the simulation hypothesis should not be confused; see Bostrom’s FAQ, where he also responds to some common counterarguments.
(6) This would be a lot higher if I were more certain that my thinking isn’t completely confused.
(7) Our Mathematical Universe, p. 347. In a recent panel discussion, Tegmark was asked what likelihood he would assign to the simulation hypothesis, to which he replied 17%. David Chalmers, who was also on the panel, tongue-in-cheek gave it a 42% chance.
(8) I might be wrong about this. See “Signal-based classical emulation of a universal quantum computer” and Quantum simulator (Wikipedia).
(9) [PDF] The Multiverse Hierarchy (page 8); see also “Unifying the Inflationary & Quantum Multiverses” (Max Tegmark).