As I brought up in my previous post, humanity might have lots of clone species. They could be far away in space (if space is sufficiently large) and in other Everett branches (if the many-worlds interpretation of quantum mechanics is correct). What are the implications of this for the curious simulation argument?
It strikes me as quite likely that we, or at least a significant fraction of our cosmic counterparts (if we have any), will eventually create a superintelligence, or in some other way gain the capability to create ancestor simulations (sentient simulations of people like us). If there are more people like us in ancestor simulations than in real history, we are more likely to be in one of the many simulated histories, according to the simulation argument (1). It seems plausible that many possible civilizations and superintelligences would make these simulations if they could, since the simulations would probably have high instrumental (and recreational) value (2), and advanced civilizations appear to have enough computing power at their disposal to make trillions of ancestor simulations each. It therefore seems likely that if people like us exist in many places in the multiverse, very many ancestor simulations will exist, unless almost all of those civilizations either enforce a universal ban on ancestor simulations or go extinct (in some fashion other than via a misaligned superintelligence) before they can make any.
One reason to think that a ban on these simulations would be enforced is that future civilizations might deem sentient simulations immoral (as I hope they would (3)), but this assumes that just about every civilization succeeds in keeping its AIs and citizens under control for possibly millions of years. A second reason to think that we are not in a simulation is that even unfriendly agents might refrain from making simulations in order to reduce the likelihood that they are in a simulation themselves (4). However, it only takes one in a thousand civilizations making a thousand ancestor simulations each for the simulation hypothesis (5) to become likely. I think the likelihood that we are simulated is substantial (maybe 40% if I had to pick a number (6)), especially if there will ever be a large number of civilizations at our stage of development in the base-level multiverse. A final possibility is that it is impossible to create simulations with sentient life. I don't find this plausible, since the human brain appears to function in accordance with physical laws, which seem to be computable.
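The one-in-a-thousand arithmetic can be made explicit with a toy calculation. All the numbers below are illustrative assumptions of mine, not claims from any source: suppose 1,000 civilizations reach our stage, each containing one real history of people like us, and just one of them runs 1,000 ancestor simulations.

```python
# Toy version of the "one in a thousand" arithmetic.
# All quantities are illustrative assumptions, not estimates.
civilizations = 1000
real_histories = civilizations          # one real history per civilization
simulating_civs = civilizations // 1000 # one in a thousand runs simulations
sims_per_civ = 1000
simulated_histories = simulating_civs * sims_per_civ

fraction_simulated = simulated_histories / (simulated_histories + real_histories)
print(fraction_simulated)  # 0.5
```

Even in this deliberately sparse scenario, simulated histories equal real ones, so a randomly chosen observer is as likely simulated as not; any more generous assumptions push the fraction toward 1.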
If the simulation hypothesis is correct, our cosmic endowment could turn out to be a giant wallpaper, or the simulation might stop after a certain amount of resources have been used up by the computer running it. Because of this, and because of potential correlations between our decisions across simulations (if we make the world better in one simulation, conditions in similar simulations may become more likely to improve as well), Brian Tomasik thinks the relatively short term looks comparatively more important to improve for its own sake than it otherwise would. His argument doesn't depend on the simulation hypothesis being very likely, which I find counterintuitive. His reasoning and math seem sound to me, but I don't think they justify ignoring far-future concerns, and neither does he. Without considering the simulation argument, efforts to improve the "short term" (meaning decades or centuries in this context) for its own sake seem basically negligible in expected impact compared to efforts aimed at the far future (spanning millions or billions of years). When taking the simulation argument into account, the "short term" starts to appear comparable to the far future in importance, once appropriately huge error bars are added to the variables in this calculation.
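A rough sketch of why the comparison can come out roughly even, with toy numbers of my own choosing (not Tomasik's): if most copies of us are simulated and the simulations end early, a decision's short-term effects are realized in every copy, while its far-future effects are realized only in the rare base-level copies.

```python
# Toy expected-value comparison. Every number here is an assumption
# chosen only to illustrate the shape of the argument.
copies_simulated = 1e6  # simulated copies of "us" per base-level copy
copies_base = 1.0
v_short = 1.0           # short-term value at stake per copy (normalized)
v_far = 1e6             # far-future value, realized only in base-level copies

ev_short = (copies_base + copies_simulated) * v_short
ev_far = copies_base * v_far
print(ev_short / ev_far)  # ~1.0
```

Under these assumptions the short term and the far future land within a factor of about one of each other, which is the point of the error-bars remark above: the ratio swings enormously with the assumed number of simulated copies and the assumed cutoff of simulations.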
Max Tegmark thinks (7) that the simulation argument "logically self-destructs" when you reflect on the fact that the simulated civilizations are themselves likely to make simulations of their own, so we are more likely (according to the simulation argument) to be in a simulation within a simulation, and even more likely to be in a simulation within a simulation within a simulation, and so on. I don't find this counterargument very persuasive: there is no infinite regress as long as the computer running the simulation lacks limitless computing resources.
Consider the case where the simulation hypothesis is correct (we live in a simulation) and the many-worlds interpretation is correct in our simulators' reality (I'll call it base-level reality). Depending on what hardware is used to run the simulation, we get different effects. If observed quantum events in the simulation are determined by quantum computations in base-level reality, we should expect the universe where the simulation is running to branch when people inside the simulation perform a quantum measurement or a quantum computation. Quantum computing can be simulated by classical computers, but doing so would use up so much computing power in base-level reality that I guess it would be impractical (8). It is also possible in principle to simulate a universe which behaves according to many-worlds, but this would require huge amounts of computing power unless it was crudely approximated.
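The impracticality claim (see footnote 8) rests on simple arithmetic: a classical computer storing the full quantum state of n two-level particles needs 2**n complex amplitudes, so the cost doubles with every added particle.

```python
# Exponential cost of classically storing an n-particle quantum state:
# 2**n complex amplitudes for n two-level particles.
for n in (10, 50, 100, 200):
    print(n, 2 ** n)

# At n = 200 the amplitude count already exceeds 10**60, far beyond
# any plausible classical memory.
assert 2 ** 200 > 10 ** 60
```

This is only the storage cost of the naive state-vector representation; clever approximations can do better for some systems, which is why the claim is hedged as a guess in the text.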
If many-worlds is incorrect in base-level reality but space is sufficiently large, we should expect the same simulation to be run elsewhere in that space. If it is run in many places and uses quantum computers, we should get the same effect as if many-worlds were correct (9). That is, the simulation automatically branches if it is run on quantum computers and base-level reality is sufficiently large and/or many-worlds is true. The simulation could also branch locally if it is designed to fork under certain conditions, in order to approximate many-worlds or to study counterfactuals.
If we live in a simulation, can we really know anything about the laws of physics in our simulators' reality? Yes: we can at least infer that if we are in a simulation, their laws of physics must allow for such a simulation to exist. Base-level reality must also permit the existence of intelligent agents and the creation of advanced technology for the simulation argument to hold; this constrains the set of possible physical laws in base-level reality. Remember, the reason we entertain the simulation hypothesis is the simulation argument, which relies on base-level reality once having contained agents in a position similar to ours. The simulators had some reason for creating the simulation; if they use it to predict the development of life elsewhere, they would presumably want to design the physics in the simulation to approximate their own laws of nature. The simulation might serve another purpose, in which case it might resemble their reality to a lesser extent. I nevertheless think we should take seriously the possibility that if we live in a simulation, it might branch as if the many-worlds interpretation were true inside the simulation.
Does it make sense to ask if we live either in a simulation or in base level reality? It might make more sense to think of oneself not as a particular instance of matter that is either simulated or not, but rather as a decision algorithm implemented in different places across the multiverse, some in base level reality and some in simulations.
(1) The physicist Sean Carroll has noticed an interesting flaw in this logic, but my hunch is that it is a superficial flaw that does not override the core of the argument.
(2) This point relies partly on recognising that simulations are useful to us now and partly on the idea called "computational irreducibility": that the behavior of a complex system usually cannot be predicted without simulating it.
(3) Unless they can prevent more suffering by running the simulations than the simulations create; that might be morally justified in some cases.
(4) I was not the first to suggest this; see "Multiverse-wide Cooperation via Correlated Decision Making", page 99.
(5) The simulation argument and the simulation hypothesis should not be confused, see Bostrom’s FAQ, where he also responds to some common counterarguments.
(6) This would be a lot higher if I was more certain that my thinking isn’t completely confused.
(7) (Our Mathematical Universe, p 347) In a recent panel discussion, Tegmark was asked what likelihood he would assign to the simulation hypothesis, to which he replied 17%. David Chalmers, who was also on the panel, gave it a 42% chance tongue-in-cheek.
(8) See Signal-based classical emulation of a universal quantum computer and Quantum simulator (Wikipedia). My reasons for guessing that it would be impractical to simulate observations of quantum events on classical computers come from Scott Aaronson's Quantum Computing Since Democritus: "Describing the state of 200 particles takes more bits than there are particles in the universe" (page 217). It might, however, be easier to fool us into thinking we are observing quantum events than to run the necessary quantum computation.
(9) [PDF] The Multiverse Hierarchy (page 8); see also "Unifying the Inflationary & Quantum Multiverses (Max Tegmark)".