The cosmic endowment and superintelligence

Update 25/7 2021: I have changed my mind and no longer agree with parts of this post. Keeping it up for archiving purposes. Note below.

Humanity’s cosmic endowment (all the resources available to us if we develop the ability to colonize space) allows for the computation of about 10^85 operations if we convert all the accessible cosmic resources into efficient computers, according to an estimate by philosopher Nick Bostrom (1). What could we do with all that computing power? There have existed about 10^25 (give or take a few orders of magnitude) sentient life forms in the entire history of life on Earth (2). Simulating all their neural activity, every experience that every organism has ever felt, would require about 10^39 to 10^52 operations according to Nick Bostrom’s estimate (3). If we take the high, conservative estimate, we could run “sentient history” about 10^33 times with our cosmic supercomputers (4). If we assume that consciousness doesn’t go away when carbon and neurons are replaced with silicon and transistors, the cosmic endowment seems to allow for the creation of conscious experience comparable to at least a billion trillion trillion copies of the history of life on Earth (5). To put this number into perspective, that’s roughly as many copies as there would be grains of sand on all the beaches in the Milky Way galaxy if every star in it had a planet just like Earth.
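For readers who want to check the headline arithmetic, here is a minimal sketch in Python (my own illustration, using only the estimates cited above and in the notes):

```python
# Rough check of the headline numbers (estimates from notes 1, 3 and 4).
cosmic_endowment_ops = 1e85        # Bostrom's estimate of the operations available to us
sentient_history_ops_high = 1e52   # high (conservative) estimate of the operations needed to
                                   # simulate all neural activity in Earth's history

# How many times could the cosmic endowment rerun "sentient history"?
reruns = cosmic_endowment_ops / sentient_history_ops_high
print(f"Possible reruns of sentient history: about {reruns:.0e}")  # ~1e33
```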

What we make of the cosmic endowment is therefore plausibly trillions of trillions of times more morally significant than everything that has ever taken place on Earth (6). There are plenty of uncertainties in this estimate, but the result would have to be off by tens of orders of magnitude for my conclusion to change: ensuring that smarter-than-human AI is as benevolent as possible looks incredibly important. Why? Because it looks as if a superintelligence, which can be viewed as an extremely competent goal-achieving system, might, if given a wrongly specified goal, grab the cosmic endowment (including Earth) and turn it into whatever structures best fulfill that goal. If you want to understand why I think this is plausible enough to be taken seriously as a possibility, I primarily recommend reading the best-seller Superintelligence by Nick Bostrom. If you want the gist without investing several hours, I can recommend this succinct Vox article and Robert Miles’s excellent videos, especially these two that explain core concepts: The Orthogonality Thesis, Intelligence, and Stupidity and Why Would AI Want to do Bad Things? Instrumental Convergence. The advent of a superintelligence could plausibly happen this century (7) and lead to astronomical amounts of suffering in at least three ways: as a side effect, if a superintelligence finds instrumental value in creating suffering agents; in a conflict situation, for instance if threats of creating intense suffering are used to extort other agents; or if the creation of suffering is part of its goal, which could conceivably happen due to a faulty implementation of human values into the AI (8).

Right now, there is one organization I’m aware of whose main focus is understanding and reducing the risk of astronomical suffering, namely the Center on Long-Term Risk, an organization in the Effective Altruism community. Another organization that might be beneficial, in my opinion, is MIRI, which works on AI alignment: technical research into ensuring that a future superintelligence produces good outcomes. However, it has been speculated that the worst outcomes might come from an almost-aligned superintelligence, and that research on AI alignment might make those outcomes more likely (9). One could also make the opposite argument: without alignment research we might end up with almost-aligned AI by default, and we would need alignment research to get out of the “hyper-existential pit” in the design landscape.

My most preferred option would be for humanity to not make any superintelligences — it’s just not worth the risks in my opinion — but that might be asking for too much, taking into account the economic incentives and enthusiasm in AI research: “the prospect of discovery is too sweet” (10).

Note: The part I now disagree with is the “‘hyper-existential pit’ in the design landscape” idea. The idea is similar to the uncanny valley: AI safety research could get us into the valley but fail to get us past it. I now think that this is overly simplified; there could be other peaks far from the hypothesized valley.


Notes

(1) Bostrom’s estimate of our cosmic endowment (in Superintelligence, p 102) assumes that there don’t exist any other technological civilizations within our cosmic horizons; if they do exist, we would have to cooperate or compete with them for the resources. It also assumes that we don’t live in a computer simulation with limited computing resources (Bostrom, Are You Living In a Simulation?). If we choose to aestivate, that is, postpone computation until the universe has cooled and computation is more efficient, the computing resources could increase by about 30 orders of magnitude.
(2) Calculating low- and high-end estimates for the number of sentient animals that have ever existed (a sketch of the arithmetic follows after this note).
High end:
Current number of animals: about 10^22 (mostly nematodes)
10^22 x 52 x 500,000,000 ~ 10^32
(one-week lifespan on average assumed, animals have existed for half a billion years, no change in populations over that time assumed)
Low end:
Total number of mammals that have ever existed: probably about 10^20
Current number of mammals: about 10^11
10^11 x 200,000,000 ~ 10^19
(one-year lifespan on average assumed, mammals have existed for 200 million years, no change in populations over that time assumed, so it’s likely an overestimate)
It’s hard to estimate the total number of sentient life forms that have existed, but it’s likely in the range of 10^19 to 10^32. It depends on where we draw the line between sentient and non-sentient; it’s closer to 10^30 if we include insects. Numbers from Tomasik, How Many Wild Animals Are There?
See also:
How many animals have ever lived?
How many organisms have ever lived on Earth?
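A minimal sketch of this note’s arithmetic, using the population figures above (my own restatement, not Tomasik’s calculation):

```python
# High-end estimate: all animals (mostly nematodes), one-week average lifespan,
# constant populations over the ~500 million years animals have existed.
current_animals = 1e22
high_end = current_animals * 52 * 500_000_000   # ~2.6e32, i.e. ~10^32

# Low-end estimate: mammals only, one-year average lifespan,
# constant populations over the ~200 million years mammals have existed.
current_mammals = 1e11
low_end = current_mammals * 200_000_000         # ~2e19, i.e. ~10^19 to 10^20

print(f"Sentient animals that have ever existed: {low_end:.1e} to {high_end:.1e}")
```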
(3) The number of operations required to simulate “sentient history” is derived from Superintelligence, p 26: “If we were to simulate 10^25 neurons over a billion years of evolution (longer than the existence of nervous systems as we know them), and we allow our computers to run for one year, these figures would give us a requirement in the range of 10^31-10^44 FLOPS.”
There are about 3 x 10^7 seconds in a year, so I multiplied by that. Note that Bostrom’s 10^25 estimate is the number of neurons in nature at any given time, while my 10^25 estimate is the number of sentient animals in the history of Earth; his estimate also counts non-sentient animals.
(4) We get this by dividing the 10^85 operations allowed by the cosmic endowment by the 10^52 operations representing all neural activity. Alternatively, we could create 10^58 digital humans with 100-year lifespans, interacting with each other in virtual worlds (Superintelligence, p 25–26, p 102–103).
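As a sketch of how notes 3 and 4 fit together (my own restatement of the arithmetic, using the figures above):

```python
# Note 3: convert Bostrom's FLOPS range into total operations over one year of runtime.
seconds_per_year = 3e7                    # about 3 x 10^7 seconds in a year
flops_low, flops_high = 1e31, 1e44        # Bostrom's range for simulating 10^25 neurons
ops_low = flops_low * seconds_per_year    # ~3e38, rounded to ~10^39 in the text
ops_high = flops_high * seconds_per_year  # ~3e51, rounded to ~10^52 in the text

# Note 4: divide the cosmic endowment by the conservative (high) estimate.
reruns_of_sentient_history = 1e85 / 1e52  # ~10^33
print(f"{ops_low:.0e} to {ops_high:.0e} operations; ~{reruns_of_sentient_history:.0e} reruns")
```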
(5) If we use the high estimate for the number of operations needed to simulate all “sentient history” and we assume moral significance is proportional to the number of operations (an extreme oversimplification), “sentient history” on Earth is morally comparable to 10^25 human lives. If we use the low “sentient history” estimate, “sentient history” is comparable to 10^12 human lives.
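A minimal sketch of where these figures come from, assuming note 4’s 10^58 digital lives implies roughly 10^27 operations per 100-year life (my own inference when connecting the two notes, not a figure stated explicitly by Bostrom):

```python
# Note 5: how many human lives is "sentient history" morally comparable to,
# if moral significance is taken as proportional to the number of operations?
ops_per_human_life = 1e85 / 1e58   # from note 4: 10^58 digital lives from 10^85 operations
high = 1e52 / ops_per_human_life   # ~1e25 human lives (high sentient-history estimate)
low = 1e39 / ops_per_human_life    # ~1e12 human lives (low sentient-history estimate)
print(f"Comparable to between {low:.0e} and {high:.0e} human lives")
```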
(6) I’m assuming here that total moral significance is linear in the amount of morally significant conscious experience. It doesn’t seem far-fetched to think that headaches are twice as bad if they are twice as intense, are experienced for a period that is twice as long, or occur to twice as many people (all else being equal).
(7) Nobody knows if and when superintelligence will be developed, but in a 2016 survey, the mean of AI expert opinion was that it is 50 percent likely that AIs will outperform humans in all tasks by around 2060. Superintelligence could also arrive through what’s called whole brain emulation. This path should be easier to predict because it doesn’t depend on any theoretical breakthrough, “just” continued incremental progress in computing, microscopy, automatic image recognition, and neuroscience. Oxford researcher Anders Sandberg estimates that there is a 50 percent chance of this technology being available in the 2060s, and about a 90 percent chance by 2100. Technological forecasting is very difficult, so one should take these predictions with a big pinch of salt. However, the suggestion that artificial superintelligence will arrive this century doesn’t appear ridiculous. I like this quote by researchers Sotala and Yampolskiy: “If the judgment of experts is not reliable, then, probably, neither is anyone else’s. This suggests that it is unjustified to be highly certain of AGI being near, but also of it not being near.”
(8) See the paper “Superintelligence as a Cause or Cure for Risks of Astronomical Suffering” by Kaj Sotala and Lukas Gloor.
(9) See “Separation from hyperexistential risk” for proposed attempts to mitigate this risk.
(10) Quote by Geoffrey Hinton in the New Yorker.
