Minds And Design Processes In A Mindless Pattern

Reality — What Is This Thing?

I have written about pretty far-out stuff on this blog: future superintelligent computers going haywire (here), possible multiverses containing our doppelgängers giving us non-local immortality (here), the hypothesis that we live in a simulation (here). You have to have a certain kind of worldview to take these speculations seriously. In this post I will try to paint a picture of what that worldview looks like. This will involve some hand-waving and elaborating on some subjects I'm far from an expert in. I might misrepresent facts in any of these subjects (please let me know if I do). This is not an attempt to make a strong defence of my worldview; I'm just presenting it to make my beliefs explicit and hopefully illuminate the reasons for my focus on future technology and the nature of reality.

Strange times

It's easy to take modern technology for granted, but if you think about it for a second you'll realise that it's really remarkable. Sometimes it can feel like magic. At the risk of rehashing a boring cliché, I will remind you of some of the miracles of modern technology. There are wagons made of metal pulled by invisible superhorses. Stiff giant birds gracefully carry hundreds of people thousands of kilometers (or miles) up in the sky. People touch slabs of lit-up glass in their hands that can send and receive messages, talk to almost anyone regardless of distance, play any music, display any image or video, buy almost anything, get directions to anywhere and distribute information invisibly through the air, accessible to billions of people virtually without delay, each slab powered by billions of small electrical switches, switching on and off billions of times a second. There are robots on Mars, and we landed one on a comet. People are floating in a space station cruising around the Earth faster than the fastest bullet.

It feels like the development of technology has really taken off over the last couple of centuries. I think this is supported by the evidence. For instance, the curve of economic output has gone from almost horizontal to almost vertical when plotted over the last two millennia (1).

[Figure: World GDP since 1 AD]

For the first time in human history, economic growth has outpaced population growth, making most people much richer. The global average income has increased tenfold (2) and the fraction of the world population living in extreme poverty has declined from about 90 percent to 10 percent since the 19th century (3). The effect of modern technology is not always positive; a demonstration of the unchecked and sometimes unintended power of our technology is our impact on the climate. A sign of modern technology can be seen in the carbon dioxide record, which looks normal for hundreds of thousands of years until about 1950, when it shoots straight up. With deliberate climate engineering, a large country could apparently usher in an ice age, if it felt like it, by spending a tiny fraction of its GDP, and moderate climate engineering might be part of the solution to climate change (4). It can feel as though the present is the normal state for things to be in. Decision theorist Eliezer Yudkowsky jokingly points out in one of his talks that the world seems to tend towards greater and greater ”normality” over time. He brings up that women used to be unable to vote, which is odd from our perspective. Going back further, things were even stranger. However, if you could take an outside view of human life since the origin of our species, it would really be in the last few pages of the story that things became strange, and it should not be far-fetched to expect this trend towards greater strangeness to continue.

Our ability to manipulate the world to an ever greater degree is strongly linked to our increasing scientific understanding of the world. Philosopher Rebecca Goldstein says about science that it ”gets reality itself to collaborate with us”, by ”prob[ing] reality so that it will answer us back when we are getting it wrong” (5). Someone has an idea about how some facet of reality works and comes up with a test that will settle whether they're wrong; then others try the experiment themselves to check if the originator of the hypothesis deluded herself. Virtuous scientists should always be on the lookout for subtle ways in which the predictions of a theory fail. For instance, in the 19th century people were confused by Mercury's orbit, because it did not seem to follow the laws of Newtonian physics. When general relativity came along, where gravity is conceptualised as the curvature of spacetime, Mercury's orbit made sense (6). This was a big point in favour of the theory. Scientists make progress by trying to refute themselves and others, and only what survives scrutiny is given credence. This simple falsification-based view of science can be supplemented with the more extensive and somewhat counterintuitive Bayesian account, which I won't cover here. It specifies how your credence in any theory or hypothesis should change as new evidence becomes available.

One seemingly necessary condition for successful science is mathematics. Galileo Galilei, who helped birth the scientific revolution, compared nature to a book written in the language of mathematics (7). Physicist Eugene Wigner wrote about ”The Unreasonable Effectiveness of Mathematics in the Natural Sciences”, the (perhaps surprising) fact that nature follows intelligible mathematical rules. After Maxwell discovered his equations for electromagnetism we mastered electricity, rockets use Newton's law of gravitation to reach orbit, nuclear power as well as weapons were developed through discoveries in particle physics, and the computer industry depends on quantum mechanics in order to make transistors work (8). Engineering students are taught physics and math because we make technologies work by understanding the forces and laws that govern them, although maybe also because those are difficult subjects that can be used to showcase a student's competence. In some cases, the necessity of science for technological development might be overstated: some technologies that came after scientific breakthroughs could perhaps have been invented and perfected without scientific theories, relying only on trial and error and heuristics, but it would probably have taken a lot more time and resources.

Design processes

One way to benchmark the pace of technological progress is to compare it to the first ”design process” (9) in the universe: evolution. Humans have in a few centuries developed technology with abilities that took evolution by natural selection millions of years to acquire and perfect: flying, seeing and hearing (detecting photons and sound waves), and harnessing and storing solar energy, to take a few examples. Sometimes, when we put serious effort into an ability, we have pushed it further than evolution ever has. Airplanes fly faster than any bird, and boring machines can dig through harder material than any animal can. This is partly because we arguably use a more efficient ”design process” than evolution: we make use of our (imperfect) scientific understanding, reasoning and planning ability, whereas evolution lacks any understanding or foresight. And it is partly because we can choose any desired set of features to optimize, whereas evolution can be said to only optimize inclusive fitness. We can also make use of designs unavailable to evolution, because the designs from evolution can only improve gradually. A classic example is the wheel: there is no use in a half-finished pair of wheels, and there are no roads in nature. Nuclear power is another technology that will arguably never be available to the ”design process” of evolution, because it seems to require rare, enriched non-organic materials, large facilities and planning.

A mindless pattern

So far I've suggested that we live in a strange era in which we are rearranging matter into useful stuff much faster than evolution accomplished similar things, in large part thanks to scientific progress. I want to say two things with this. Firstly, it should not be too far-fetched to think that we can keep this pace up and eventually outperform evolution in the design of intelligent things, perhaps not far from now, which should unlock even faster technological development – a telescoping of the future, to use Nick Bostrom's words – a superintelligence could possibly invent in a day what would take humans thousands of years. Intelligence is an evolved ability (governed by the laws of physics, I might add) that we are making progress on understanding and reproducing artificially, and just like with many other technologies we can probably take the desired ability to greater heights than can be found in nature. It has been pointed out (though I can't remember where) that once a certain human cognitive ability has been automated, it can instantly or soon after be done faster and better artificially. Secondly, if you work from the assumption that the world is intelligible and constituted of physical stuff interacting according to mathematical rules, you can make remarkably precise predictions (10) and do remarkably powerful stuff. We have, to my knowledge, never found anything that contradicts this assumption. Lightning was thought to be the work of angry gods, disease and natural disasters were thought to be God punishing sinners, and sacrifice was thought to control the weather. Supernatural accounts of natural phenomena have one by one been overtaken by scientific explanations; never has the refinement of explanation gone the other way. This makes me think that the world might really be constituted entirely of physical stuff following mathematical rules. This means that there would be no purpose at the fundamental level of reality, just clockwork. Furthermore, no one makes sure that whatever goes on is fair, and there is unfortunately no fixed limit on how bad it's allowed to get. We would have no benevolent cosmic father who looks after us and steps in if things get really out of hand (11). It's just particles (or waves in quantum fields) mindlessly following rules without exception. As long as it's allowed by the laws of physics, it can happen. The physicist Sean Carroll describes the world poetically as caught in the grip of an unbreakable pattern: a sequence of states, snapshots, following each other according to a mathematical rule. Each state determines the next, like the states of Conway's game of life evolving according to preprogrammed rules.
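To make the analogy concrete, here is a minimal sketch of the Life rule in Python (my own illustrative code, not anything from Carroll): each grid state fully determines the next, with no purpose or foresight anywhere in the update.

```python
from collections import Counter

def step(alive: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """Apply one tick of Conway's Game of Life to a set of live cells."""
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in alive
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives next tick iff it has 3 live neighbours,
    # or it is alive now and has 2 live neighbours.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

# A "glider" unfolds deterministically forever from these five cells.
state = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    state = step(state)
```

Nothing in the rule refers to gliders, purposes or minds; they are just patterns that the mindless update happens to preserve.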

The world can be described at different levels, with physics being the most fundamental to the best of our knowledge. It's not useful to talk only in terms of fundamental particles and the laws of physics when explaining why someone delivered a pizza or how plate tectonics works, even though everything that happens is determined by them. As Dan Dennett notes in a discussion about this topic, someone who knows everything about the universe at the fundamental level but nothing else, i.e. has no conception of chairs, people and pizza delivery apps (the so-called Laplace's demon), would be surprised by the efficiency of a simple human who can predict with fairly high reliability that a pizza will be delivered to his door in 30 minutes, without knowing the position and velocity of every particle in the universe and without the computing power required to calculate their trajectories. Understanding the world at higher levels of description, with emergent features like money, social trust and language, is indeed indispensable to us, even though these features are not necessary for Laplace's demon to make perfect predictions.

Do we have free will on this view, if we're just collections of particles mindlessly following rules? I'm a compatibilist, which means that I think free will (the ability to make choices, see more in note 12) is compatible with determinism, the theory that every event is the necessary result of what happened before it. I'm not sure whether the universe is deterministic, but free will had better be compatible with determinism for us to have free will, because introducing randomness doesn't seem to help. I'd rather my decisions were determined by my previous experiences, intuition and deliberation than by a roll of the dice (I think).

A concept I find useful is the distinction between the manifest and the scientific image of the world: the world as it appears to our senses and the world as it is unveiled by science. The manifest image and the scientific image aren't always easy to reconcile, as in the case of free will, where the feeling that you could have done otherwise, which some people see as essential for free will, clashes with the scientific view of brains as collections of particles obeying physical laws. In this case, I think the feeling is mistaken and that our notion of free will should be adjusted to fit the scientific view. The manifest image and the scientific image are true in different senses. The manifest image is true in the sense that your experience of the world is a real experience. Even if you're a brain in a vat, you can claim things like “I experience a red car” and not be mistaken, but you can be mistaken about the causes of your experience (the car might exist only as a representation in signals fed into your visual system). The scientific image is our best account of the causes of our experiences, and we can reach beyond our raw senses with microscopes, telescopes, infrared cameras and other tools to get a richer view of reality. I think we should trust our scientific understanding over our hunches and raw perception when they disagree about the cause of our experiences, since we know from experience that our senses can be fooled by simple conjuring tricks and visual illusions. It shouldn't be surprising if the universe violates our common sense; we evolved to deal with the challenges of the African savanna, not to understand the nature of reality (I recommend this talk by Richard Dawkins on the topic).

Mind

The most important thing I can't fit into the scientific, naturalist, view is consciousness. I don't understand how it can exist. But it is the very last thing I would deny. While apparent causes of experiences can be illusory, conscious experience itself, I submit, cannot be an illusion. It's the very stage where illusions can appear. However, it seems to be a hard problem to explain why and how physical processes give rise to subjective experience. ”As physicists work toward completing a theory of the universe and biologists unravel the molecular complexity of life, a glaring incompleteness in this scientific vision becomes apparent. The ‘theory of everything’ that appears to be emerging includes everything but us […] We need a ‘theory of everything’ that does not leave it absurd that we exist,” says the back of Incomplete Nature by Terrence Deacon, which presents his attempt at filling in the missing piece. I've read two book-length accounts of consciousness by naturalists, Incomplete Nature and Dennett's Consciousness Explained, as well as Brian Tomasik's writings on consciousness, which echo Dennett's. Reading these accounts didn't get me closer to feeling satisfied with a resolution, though I can't claim that I fully understand them. Steven Pinker summarises (13) the function that consciousness is believed to play, according to some neuroscientists, as a blackboard in the brain where ”a diverse set of computational modules can post their results in a common format that all the other modules can ’see’”. These modules include perception, memory, language and action planning. This seems like a plausible account of the biological function of consciousness to me, but it doesn't explain the nature of first-person experience.

I think there is a physical process that causes me to believe and say that I'm conscious. I would like to think that this process is informed by the fact that I am conscious; that is, I would like my consciousness to be the reason why I say that I'm conscious. I think that a perfect computer simulation of me would be conscious (it would talk and write about consciousness just like me, and I think these things are caused by actually having consciousness), so I think a computational process can know that there is consciousness associated with the computation. The question is this: how come some computations are accompanied by consciousness? This question seems kind of hopelessly difficult (14), if it is a good question. Would any answer feel satisfactory? To any answer of the type “because of this particular process”, you could reply: why isn't that process going on “in the dark” like any other? I'm tempted to believe that consciousness comes from somewhere else, and that the computation merely summons the consciousness, which exists outside the physical world. But how could a physical system know and report that it is conscious, unless consciousness is interacting with that physical system? The non-physical would have to interact with the physical, but this violates the completeness principle, namely that all physical effects appear to have sufficient physical causes. The only way out seems to be epiphenomenalism, which states that mental events are determined by physical processes, but mental events have no causal role in the physical system. You, the conscious mind, would be a silent, powerless witness. According to epiphenomenalism, the reason why I'm thinking and writing about consciousness would have nothing to do with the fact that I'm conscious, which seems dubious. I'm left thinking that consciousness must be a feature of some physical, and hence computable, processes. However, conscious experience has properties that seem impossible to grow out of a mathematical pattern, since these properties are nowhere to be found in mathematics. This is the biggest question mark about my current worldview. My working assumption is that those who claim to answer or dissolve this question mark are correct and that I don't need to dramatically change worldview; it might however be appropriate to take a quick glance at an alternative worldview.

Cognitive scientist Donald Hoffman has a radically different belief about the world, in which conscious experience is fundamental. Reality, on his view, is composed only of minds, and minds made up of minds. These minds don't see the world as it is but instead see useful, evolutionarily adaptive illusions. ”Snakes and trains”, he says in an interview, ”have no objective, observer-independent features. The snake I see is a description created by my sensory system to inform me of the fitness consequences of my actions.” My problem with theories of reality that have no objective, observer-independent features, by which I take them to mean no true facts, only interpretations, is that the nature of the observers themselves is left unsupported. To say that an observer makes a certain interpretation, is that also only an observer-dependent interpretation? To say that an observer exists, is that only an observer-dependent interpretation? Donald Hoffman's theory doesn't have this flaw: it is based on a mathematical model of observers, so they do have a specified observer-independent nature, even though everything is observers. I don't know what to make of this theory, other than that I think it's surprising that our illusion of reality, designed by evolution for survival, happens to be so consistent and mathematical, because even the theory of relativity and quantum mechanics are illusions on his view. Hoffman's view doesn't explain consciousness, but it at least positions it as fundamental, rather than as something that inexplicably emerges. If his view or something resembling it turned out to be correct, it would possibly have large implications for how to improve the world. Reality would turn out to be very different from how it appears. With an inaccurate world model, we might head in the wrong direction. I'm not sure how we should take into account uncertainty about ontology in altruistic efforts, or how much uncertainty it is reasonable to have in the scientific view, given the question mark surrounding consciousness.

Conclusion

To sum up, science and technology are incredibly powerful according to my worldview: they progress much faster and produce more powerful designs than evolution does for many of the features we care about, although they do not outperform the creations of evolution on all metrics. We live in highly unusual times, economically and technologically, and we should expect things to get increasingly strange. The world runs like Conway's game of life, a mathematical pattern with simple beginnings that grew in complexity over time. It operates with no purpose, no rhyme or reason, and there is no guardian who limits the depth of the abysses in the landscape we traverse. On the plus side, nobody restricts the heights (this is not to say that the depths or heights are infinite). The mindless pattern underlies and determines everything, including the mindless ”design process” of evolution and the human mind. The human mind is the first and only known entity in the universe that has learned about the mindless and unbreakable pattern, and it has come to know the rules the pattern follows in everyday life, i.e. the rules that govern us and our local surroundings. I give credence to this scientific worldview because of its predictive power and the technological progress it has made possible. The big question mark for me is consciousness: how does subjective experience with intrinsic properties emerge from a mindless pattern? The question might be fundamentally confused; I'm certainly open to that possibility too.

I wanted to write this post to explain my focus on future technology, such as superintelligence, and on the nature of reality, like the questions about whether we live in a multiverse or a simulation. In particular, I want to explain why I take these things seriously. With my scientific view, where the world has no obligation to appeal to our common sense, I take seriously hypotheses that appear strange when they are backed by good evidence or rigorous and rational thought; I even expect reality to violate most people's sensibilities. With technology, humanity is exploiting the power inherent in nature revealed by science, and this has made our science and technology rival evolution as the most advanced design process. If evolution can create intelligence, we might outperform it in a tiny fraction of the time (15). It could be important to know if this worldview is inaccurate, as that might profoundly shift what outcome-minded altruists should prioritize. For instance, if I'm wrong about general intelligence being a computable process that is feasible to replicate in machines, I should probably not focus as much on AI. If my assumptions about consciousness turn out to be wrong, that should impact how likely I think it is that we are in a simulation. If I come to realise that I'm too confident in the power of technology and in common-sense-violating scientific theories taken seriously by scientists, that should lower my relatively high credence in things like the multiverse, and in the idea that grand futures are possible and likely to occur in at least some branches of the multiverse, or far away in space on a twin Earth.


Notes

(1) https://ourworldindata.org/economic-growth As I’m required to mention by the CC-license: graph colors are edited.
(2) Page 426 of ”An Introduction to Global Health” by Michael Seear, Obidimma Ezezika
(3) https://ourworldindata.org/extreme-poverty
(4) https://www.youtube.com/watch?v=XkEys3PeseA
(5) https://youtu.be/cypm7hkJ2lQ?t=6m22s
(6) https://en.wikipedia.org/wiki/Tests_of_general_relativity#Perihelion_precession_of_Mercury
(7) Quote by Galileo: “Philosophy [nature] is written in that great bo…”
(8) https://www.quora.com/Why-are-transistors-said-to-be-dependent-on-quantum-mechanics
Cool fact: GPS satellites compensate for Einstein's theory of relativity to transmit the right time. I think this is a surprising application of relativity, although it might not have been needed for GPS to work; arguably we would have figured out the compensation with a space-clock experiment if nobody had yet come up with the theory of relativity. See https://en.wikipedia.org/wiki/Global_Positioning_System#Satellite_frequencies
(9) There is disagreement among naturalists about whether evolution should be called a ”design process” or not, so I use the term in scare quotes.
(10) In quantum electrodynamics, one of the most accurate theories in physics, the agreement between the predictions of the theory and observations is within 10 parts in a billion for certain tests. https://en.m.wikipedia.org/wiki/Precision_tests_of_QED
(11) Unless we are simulated and the supervisor(s) of the simulation are benevolent
(12) There are different definitions of ”free will”. The one I'm using is from Wikipedia: ”Free will is the ability to choose between different possible courses of action unimpeded.” More specifically, I think of free will ”as a psychological capacity, such as to direct one's behavior in a way responsive to reason”; this is the definition used by determinists. If you think the notion of ”could have done otherwise even if the conditions before the decision were exactly the same” is important in the definition of free will, then you would not be a compatibilist. I like this post by Sean Carroll on determinism and free will.
(13) Page 426, Enlightenment Now.
(14) Maybe that’s why it’s called the hard problem.
(15) One wrinkle in this argument is that the fact of our existence can bias us into thinking intelligence is a likelier outcome of evolution than it actually is, because we are only able to observe evolution in places where it did lead to intelligence, namely us. So the argument that ”if evolution can do it, so can we” might not hold for things like human intelligence, where there could be observer selection effects. I still think a weak form of the argument holds when we look at species far away from us in the evolutionary tree that have impressive cognitive abilities, like octopuses. Read more here.

Above Only Wallpaper Sky

As I brought up in my previous post, humanity might have a lot of twin-species. They could be far away in space (if space is sufficiently large) and in other Everett branches (if the many-worlds interpretation of quantum mechanics is correct). What are the implications of this for the curious simulation argument?

It strikes me as quite likely that we, or at least a significant fraction of our cosmic counterparts (if we have any), will eventually create a superintelligence, or in some other way gain the capability to create ancestor simulations (sentient simulations of people like us). If there are more people like us in ancestor simulations than in real history, we are more likely to be in one of the many simulated histories, according to the simulation argument (1). It seems plausible that many possible civilizations and superintelligences would make these simulations if they could, since the simulations would probably have high instrumental (and recreational) value (2) and advanced civilizations appear to have lots of computing power at their disposal, enough to make trillions of ancestor simulations each. It therefore seems likely that if people like us exist at many places in the multiverse, there will exist very many ancestor simulations, unless almost all of these civilizations either have a universal ban on ancestor simulations or go extinct (in some fashion other than by a misaligned superintelligence) before they can make any.

One reason to think that a ban on these simulations would be enforced is that future civilizations might deem sentient simulations immoral (as I hope they would (3)), but this assumes that just about every civilisation succeeds in keeping their AIs and citizens under control for possibly millions of years. A second reason to think that we are not in a simulation is that even unfriendly agents might refrain from making simulations in order to reduce the likelihood that they are in a simulation themselves (4). However, it only takes one in a thousand civilizations making a thousand ancestor simulations each for the simulation hypothesis (5) to become likely, as the back-of-the-envelope calculation below shows. I think it is fairly likely (maybe 40% if I had to pick a number (6)) that we are simulated, especially if there will ever be any large number of civilizations at our stage of development in the base-level multiverse. A final possibility, which I don't find very plausible, is that it is for some reason impossible to create sentient simulations.
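A minimal sketch of that arithmetic in Python, with made-up illustrative numbers (none of these figures are estimates from the literature):

```python
# Illustrative numbers only: how few simulators it takes for the
# simulation hypothesis to become likely.
real_civilizations = 1_000_000   # civilizations at our stage (assumed)
fraction_simulating = 1 / 1000   # one in a thousand runs simulations
sims_per_simulator = 1000        # ancestor simulations per such civilization

simulated = real_civilizations * fraction_simulating * sims_per_simulator
total = real_civilizations + simulated

# If simulated observers can't tell from the inside, our credence in being
# simulated is roughly the fraction of all histories that are simulated.
print(simulated / total)  # 0.5 with these numbers
```

With these inputs, simulated histories exactly match real ones in number, giving even odds; any more simulators, or more simulations each, pushes the figure higher.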

If the simulation hypothesis is correct, our cosmic endowment could turn out to be a giant wallpaper, or the simulation might stop after a certain amount of resources have been used up by the computer running it. Due to this, and due to potential correlations between our decisions across simulations (if we make the world better in one simulation, it might make it more likely that the conditions in similar simulations improve as well), Brian Tomasik thinks the relatively short term looks comparatively more important to improve for its own sake than it otherwise would. His argument doesn't depend on the simulation hypothesis being very likely, which I find counter-intuitive. I think his reasoning and math seem sound, but I don't think it justifies ignoring far-future concerns, and neither does he. Without considering the simulation argument, efforts to improve the ”short term” (meaning decades or centuries in this context) for its own sake seem basically negligible in expected impact compared to the far future (spanning millions or billions of years). When taking the simulation argument into account, the ”short term” starts to appear comparable to the far future in importance, when adding appropriately huge error bars to the values of the variables in this calculation.

Max Tegmark thinks (7) that the simulation argument ”logically self-destructs” when you reflect on the fact that all the simulated civilizations are likely to make simulations of their own, so we are more likely (according to the simulation argument) to be in a simulation within a simulation, and even more likely to be in a simulation in a simulation in a simulation, and so on. I don't find this counterargument very persuasive: there is no infinite regress as long as the computer running the simulation doesn't have limitless computing resources.

Simulation branching

Consider the case where both the many-worlds interpretation in base-level reality and the simulation hypothesis are correct. Depending on what hardware is used to run the simulation, we get different effects. If observed quantum events in the simulation are determined by quantum computations in base-level reality, we should expect the universe where the simulation is running to branch when people inside the simulation do a quantum measurement or quantum computation. Quantum computing can be simulated by classical computers (8), but doing so would use up so much computing power in base-level reality that I guess it would probably be impractical. It's also possible in principle to simulate a universe obeying many-worlds, but this would require huge amounts of computing power unless it were crudely approximated.
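A rough illustration of why (a sketch under textbook assumptions, not an estimate of any particular simulation): classically simulating n qubits in general means tracking 2^n complex amplitudes, so the memory required grows exponentially with the size of the simulated quantum system.

```python
# Memory needed to store the state vector of n qubits on a classical
# computer: 2**n complex amplitudes, 16 bytes each at double precision.
BYTES_PER_AMPLITUDE = 16

for n in (10, 30, 50, 100):
    state_bytes = (2 ** n) * BYTES_PER_AMPLITUDE
    print(f"{n} qubits: ~{state_bytes:.1e} bytes")
# 50 qubits already needs ~10^16 bytes (tens of petabytes);
# 100 qubits is far beyond any conceivable classical machine.
```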

If many-worlds is incorrect in base-level reality but space is sufficiently large, we should expect the same simulation to be run elsewhere in that space. If it is run in many places and uses quantum computers, we should get the same effect as if many-worlds were correct (9). The simulation automatically “branches” if it's run with quantum computers and base-level reality is sufficiently large and/or obeys many-worlds. The simulation could also “branch” if it uses pseudo-randomness and is designed to fork under certain conditions, in order to approximate many-worlds or study counterfactuals.

If we live in a simulation, can we really know anything about the laws of physics in base-level reality? Yes, we can at least infer that if we are in a simulation, the “real” laws of physics must obviously allow for such a simulation to exist. Base-level reality must also permit the existence of people like us and the creation of advanced technology for the simulation argument to hold; this constrains the set of possible physical laws in base-level reality. The simulators probably had some reason(s) for creating the simulation. If they use it to predict the development of life elsewhere, they would presumably want to design the physics in the simulation to approximate their laws of nature. The simulation might serve another purpose, in which case it might resemble their reality to a lesser extent. It might make sense to think of oneself not as a particular instance of matter that is either simulated or not, but rather as a decision algorithm implemented in different places across the multiverse, some in base reality and some in simulations.


Notes

(1) The physicist Sean Carroll has noticed an interesting flaw in this logic, but I think it’s a superficial flaw that does not override the core of the argument.
(2) This point relies partly on recognising that simulations are useful to us now and partly on the idea called ”Computational Irreducibility”, that the behaviour of a complex system usually cannot be predicted without simulating it.
(3) Unless they can prevent more suffering by running the simulations than they create; that might be morally justified in some cases.
(4) I was not the first to suggest this, see ”Multiverse-wide Cooperation via Correlated Decision Making” page 99.
(5) The simulation argument and the simulation hypothesis should not be confused, see Bostrom’s FAQ, where he also responds to some common counterarguments.
(6) This would be a lot higher if I was more certain that my thinking isn’t completely confused.
(7) (Our Mathematical Universe, p 347) In a recent panel discussion, Tegmark was asked what likelihood he would assign to the simulation hypothesis, to which he replied 17%. David Chalmers, who was also on the panel, gave it a 42% chance tongue-in-cheek.
(8) I might be wrong about this. See Signal-based classical emulation of a universal quantum computer and Quantum simulator (Wikipedia).
(9) [PDF] The Multiverse Hierarchy (page 8), see also ”Unifying the Inflationary & Quantum Multiverses” (Max Tegmark).


Extinction in Light of The Multiverse

Human extinction might not be an all-or-nothing thing. Some fraction of the perfect or close copies of humanity in the multiverse (if it exists) might go extinct while others don't. These copies could be very far away in space – outside the observable universe (level I multiverse), in other bubble universes where cosmic inflation has ended (level II), in other branches of so-called Hilbert space (level III) or possibly in other mathematical structures (level IV) (1). This suggests that humanity, in a non-local sense, is unlikely to go extinct for a very long time.

Imagine the following: there is a large number of identical humanities, and you have a choice between a button that reduces the percentage of copies that go extinct, and one that reduces, by the same amount and from the same level, the risk of intense suffering on a scale vastly exceeding all suffering in the history of life. Which button do you push? Choosing between the buttons represents deciding where to put your finite effort and resources. In the real world it's not this clear-cut, obviously, but this is one way to think about the decision between reducing extinction risk and suffering risk when taking the likelihood of a multiverse into account. Since I think preventing prolonged extreme suffering almost always takes precedence over the creation of additional pleasure, I would work towards making the future as good as possible around those humanities that survive, rather than increasing the number of humanities that survive (2).

In case there aren't any close copies of humanity, or not sufficiently many to ensure that at least some of them will survive, preventing human extinction appears more important to me. Nevertheless, focusing on reducing suffering risk looks more robustly beneficial: reducing involuntary suffering is always good (holding everything else constant), but reducing extinction risk might not be, depending on the quality of the future and your values (your E- and N-ratios). Suffering risks are also more neglected than extinction risk (3). As I've written in the previous post, it might turn out that current AI alignment research prevents future suffering as well as reducing the risk of extinction, but this looks uncertain (4).


Notes

Hopefully I don't have to spell this out, but I strongly oppose any attempt to increase the risk of human extinction. In the unlikely case that you need to be persuaded that this is a bad idea, read my following paraphrase of Brian Tomasik: ”If the cause to reduce suffering inspired someone to do something destructive, this could be very bad — not just for other value systems but even for suffering reducers themselves, due to backlash against the cause. Violence by fringe minorities almost always hurts those who perpetrate it.” To get our act together we need global stability (because otherwise people can't afford to think long-term) and cooperation between key organizations (which requires trust).

(1) From Wikipedia. From what I gather, each level is more controversial than the previous one, with the first one being uncontroversial among cosmologists and the last one highly speculative.
(2) This formulation is similar to an idea first expressed by the Foundational Research Institute: ”Rather than focusing exclusively on ensuring that there will be a future, we recommend interventions that improve the future’s overall quality.”
(3) In “Against Wishful Thinking”, Brian Tomasik explains why he thinks people don’t pay enough attention to the risk that suffering could get greatly multiplied in the future.
(4) See this section of ”Cause prioritization for downside-focused value systems” by Lukas Gloor for a thoughtful analysis of whether current AI alignment research is likely to prevent future suffering. I withhold definite judgement about the matter.

An Exercise in Very Large Numbersᵃ

Humanity's cosmic endowment (all the resources available to us if we develop the ability to colonize space) allows for the computation of about 10^85 operations if we convert all the accessible cosmic resources into efficient computers, according to an estimate by philosopher Nick Bostrom (1). What could we do with all that computing power? There have existed about 10^25 (give or take a few orders of magnitude) sentient life forms in the entire history of life on Earth (2). To simulate all their neural activity, every experience that every organism has ever felt, would require about 10^39 to 10^52 operations according to Nick Bostrom's estimate (3). If we take the high estimate, we could run “sentient history” about 10^33 times with our cosmic supercomputers (4). If we assume that consciousness doesn't go away when carbon and neurons are replaced with silicon and transistors, the cosmic endowment seems to allow for the creation of conscious experience comparable to at least a billion trillion trillion copies of the history of life on Earth (5). To put this number into perspective, that's way more copies than there are grains of sand on all the world's beaches, and roughly as many copies as there would be grains of sand on all the beaches in the Milky Way galaxy if every star in it had a planet just like Earth.
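Written out as a sketch in Python (the inputs are the estimates cited in notes 1 and 3, rounded to orders of magnitude):

```python
# Orders-of-magnitude arithmetic behind the figures above.
cosmic_endowment_ops = 1e85   # Bostrom's estimate of available operations (note 1)
peak_flops = 1e44             # high end of Bostrom's range for simulating all
                              # neural activity in one year of computing (note 3)
seconds_per_year = 3e7
sentient_history_ops = peak_flops * seconds_per_year  # ~3e51, i.e. ~10^52

runs = cosmic_endowment_ops / sentient_history_ops
print(f"~{runs:.0e} runs of 'sentient history'")  # ~3e33, about 10^33
```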

What we make of the cosmic endowment might therefore be trillions of trillions of times more morally significant than everything that has ever taken place on Earth (6). There are plenty of uncertainties in this estimation, but the result would have to be off by tens of orders of magnitude for my conclusion to change: ensuring that smarter-than-human AI is as benevolent as possible looks incredibly important. Why? Because it looks as if a superintelligence, which can be viewed as an extremely competent goal-achieving system, with a wrongly specified goal might grab the cosmic endowment (including Earth) and turn it into whatever structures best fulfil its goal. If you want to understand why I think this is plausible enough to be taken seriously as a possibility, I primarily recommend reading the best-seller Superintelligence by Nick Bostrom. If you want the gist without investing several hours, check out Rob Miles's excellent videos, especially these two. The advent of a superintelligence could happen this century (7), and lead to astronomical amounts of suffering in at least three ways: as a side effect if a superintelligence finds instrumental value in suffering, in a potential conflict situation, or, terrifyingly, if the creation of suffering is part of its goal, which might happen due to a faulty implementation of human values into the AI (8).

It seems worthwhile to try to prevent this. Right now, there is one organisation I know of that has as its main focus trying to prevent astronomical suffering, namely the Foundational Research Institute. Another organisation that might be beneficial in my opinion is MIRI, which works on AI alignment research: technical research into ensuring that a future superintelligence produces good outcomes. The reason I say might is that I'm not sure an aligned superintelligence will necessarily produce less suffering than a faulty, or misaligned, superintelligence (9). Furthermore, it has been argued that the worst outcomes might come from an almost aligned superintelligence, and that research on AI alignment might make those outcomes more likely (10). Although I can imagine that you could make the opposite argument, namely that without alignment research we will get almost aligned AI, and that we need alignment research to get out of the ”hyper-existential pit”.

It could turn out that helping to prevent extinction by funding AI alignment research causes vastly more extreme suffering in expectation than a scenario with a misaligned AI. It could also be the case that it reduces vast amounts of extreme suffering in expectation. Neither of these two possibilities seems much more likely than the other. This doesn't seem like a robust way to improve the world, in my opinion. Although if you are okay with high risk, and think current AI alignment work is more than 50 percent likely to be positive and has the highest impact per dollar, I will not try to stop you. My uncertainty about the desirability of AI alignment does not apply to the same degree to suffering-focused AI safety (or worst-case AI safety): efforts to specifically avoid astronomical suffering. It seems like this should have a greater chance of being beneficial. This might not be a very consoling thought, but if you worry about extinction, it might not be an all-or-nothing thing if some kind of multiverse exists.

If you think AI alignment is equally likely to decrease as to increase the amount of future suffering, you might say that it's neutral in regard to expected suffering and worth doing because it makes human extinction less likely. It's like taking a bet that is equally likely to reduce and to increase the total amount of suffering by the same amount, but always decreases extinction risk. Taking the bet seems to make sense from an expected-value perspective, but I would not want to take it. I prefer actions that are less likely to make the world worse, at the expense of lost expected value (I'm not sure this is rational; it might be a bias). More crucially, reducing involuntary extreme suffering is more important to me than reducing extinction risk, and we have finite resources, so it's important to prioritize. My most favoured option would be for humanity to not make any superintelligences — it's just not worth the risks in my opinion — but that might be asking for too much, taking into account the economic incentives and enthusiasm in AI research: “the prospect of discovery is too sweet” (11).


Notes

a: The original piece I wrote in 2016 that evolved into this one was called ”The Moral Significance of Getting AI Right”.

(1) Bostrom's estimate of our cosmic endowment (in Superintelligence, p 102) assumes that there don't exist any other technological civilizations within our cosmic horizons; if they do exist, we would have to cooperate or compete with them for the resources. It also assumes that we don't live in a computer simulation with limited computing resources (Bostrom, Are You Living In a Simulation?). If we choose to aestivate, the computing resources could increase by 30 orders of magnitude.
(2)
Calculating low- and high-end estimates for the number of sentient animals that have existed (a runnable sketch of the arithmetic follows below).
High end:
Current number of animals: about 10^22 (mostly nematodes)
× 52 × 500,000,000 ~ 10^32
(one-week lifespan on average assumed; animals have existed for half a billion years; no change in populations over that time assumed)
Low end:
Total number of mammals that have ever existed: probably about 10^20
Current number of mammals: about 10^11
× 200,000,000 ~ 10^19
(one-year lifespan on average assumed; mammals have existed for 200 million years; no change in populations over that time assumed, so it's likely an overestimate)
It's hard to estimate the total number of sentient life forms that have existed, but it's likely in the range of 10^19 to 10^32. It depends on where we draw the line between sentient and non-sentient; it's closer to 10^30 if we include insects. Numbers from Tomasik, How Many Wild Animals Are There?
See also:
How many animals have ever lived?
How many organisms have ever lived on Earth?
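The same estimates as a runnable sketch (the inputs are the figures above, taken from Tomasik):

```python
# Reproducing the note's low- and high-end estimates.
high_end = 1e22 * 52 * 500_000_000  # all animals, one-week lifespans, 5e8 years
low_end = 1e11 * 200_000_000        # mammals only, one-year lifespans, 2e8 years

print(f"~{low_end:.0e} to ~{high_end:.0e} sentient animals")  # ~2e19 to ~3e32
```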
(3) The number of operations required to simulate “sentient history” is derived from Superintelligence, p 26: “If we were to simulate 10^25 neurons over a billion years of evolution (longer than the existence of nervous systems as we know them), and we allow our computers to run for one year, these figures would give us a requirement in the range of 10^31–10^44 FLOPS.”
There are about 3 × 10^7 seconds in a year, so I multiplied by that. Note that Bostrom's 10^25 estimate is the number of neurons in nature at any given time, while my 10^25 estimate is the number of sentient animals in the history of Earth; his estimate also counts non-sentient animals.
(4) We get this by dividing the 10^85 operations allowed by the cosmic endowment by the 10^52 operations representing all neural activity. Alternatively, we could create 10^58 digital humans with 100-year lifespans, interacting with each other in virtual worlds (Superintelligence, p 25–26, p 102–103).
(5) If we use the high estimate for the number of operations needed to simulate all “sentient history” and we assume moral significance is proportional to the number of operations (an extreme oversimplification), “sentient history” on Earth is morally comparable to 10^25 human lives. If we use the low “sentient history” estimate, “sentient history” is comparable to 10^12 human lives.
(6) I'm assuming here that total moral significance is linear in the amount of conscious experience with moral significance. It doesn't seem far-fetched to think that headaches are twice as bad if they are twice as intense, are experienced for a period that is twice as long, or occur to twice as many people (all else being equal).
(7) Nobody knows when and if superintelligence will be developed, but in a 2016 survey the mean of AI expert opinion was that it is 50 percent likely that AIs will outperform humans in all tasks by around 2060. Superintelligence could also arrive through what's called whole brain emulation. This path should be easier to predict because it doesn't depend on any theoretical breakthrough, ”just” continued incremental progress in computing, microscopy, automatic image recognition and neuroscience. Oxford researcher Anders Sandberg estimates that there is a 50 percent chance of this technology being available in the 2060s, and about 90 percent by 2100. Technological forecasting is very difficult, so one should take these predictions with a big pinch of salt. However, the suggestion that artificial superintelligence will arrive this century doesn't appear ridiculous. I like this quote by researchers Sotala and Yampolskiy: “If the judgment of experts is not reliable, then, probably, neither is anyone else's. This suggests that it is unjustified to be highly certain of AGI being near, but also of it not being near.”
(8) See the paper ”Superintelligence as a Cause or Cure for Risks of Astronomical Suffering” by Kaj Sotala and Lukas Gloor.
(9) See the previous note and this section of ”Artificial Intelligence and Its Implications for Future Suffering” by Brian Tomasik.
(10) See ”Separation from hyperexistential risk” for proposed attempts to mitigate this risk.
(11) Quote by Geoffrey Hinton in the New Yorker.