An Exercise in Very Large Numbersᵃ

Humanity’s cosmic endowment (all the resources available to us if we develop the ability to colonize space) allows for about 10⁸⁵ computational operations if we convert all the accessible cosmic resources into efficient computers, according to an estimate by philosopher Nick Bostrom (1). What could we do with all that computing power? About 10²⁵ sentient life forms (give or take a few orders of magnitude) have existed in the entire history of life on Earth (2). To simulate all their neural activity, every experience that every organism has ever felt, would require about 10³⁹ to 10⁵² operations according to an estimate derived from Bostrom’s figures (3). If we take the high estimate, we could run “sentient history” about 10³³ times with our cosmic supercomputers (4). If we assume that consciousness doesn’t go away when carbon and neurons are replaced with silicon and transistors, the cosmic endowment seems to allow for the creation of conscious experience comparable to at least a billion trillion trillion copies of the history of life on Earth (5). To put this number into perspective, that’s far more copies than there are grains of sand on all the world’s beaches, and roughly as many copies as there would be grains of sand on all the beaches in the Milky Way galaxy if every star in it had a planet just like Earth.
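As a quick sanity check of the arithmetic, here is a minimal sketch in Python using the estimates from the notes below (the variable names are my own):

    # Rough check of the headline numbers (estimates from notes 1 and 3).
    cosmic_endowment_ops = 10**85   # Bostrom's estimate of operations available (note 1)
    sentient_history_ops = 10**52   # high-end cost of simulating all of "sentient history" (note 3)

    copies = cosmic_endowment_ops // sentient_history_ops
    print(f"{copies:.0e}")          # ~1e+33, i.e. a billion trillion trillion (10^9 x 10^12 x 10^12)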

What we make of the cosmic endowment might therefore be trillions of trillions of times more morally significant than everything that has ever taken place on Earth (6). There are plenty of uncertainties in this estimation, but the result would have to be off by tens of orders of magnitude for my conclusion to change: ensuring that smarter-than-human AI is as benevolent as possible looks incredibly important. Why? Because it looks as if a superintelligence, which can be viewed as an extremely competent goal-achieving system, with a wrongly specified goal might grab the cosmic endowment (including Earth) and turn it into whatever structures best fulfil its goal. If you want to understand why I think this is plausible enough to be taken seriously as a possibility, I primarily recommend reading the best-seller Superintelligence by Nick Bostrom. If you want the gist without investing several hours, check out Rob Miles’ excellent videos, especially these two. The advent of a superintelligence could happen this century (7), and it could lead to astronomical amounts of suffering in at least three ways: as a side effect if a superintelligence finds instrumental value in suffering, in a potential conflict situation, or, terrifyingly, if the creation of suffering is part of its goal, which might happen due to a faulty implementation of human values into the AI (8).

It seems worthwhile to try to prevent this. Right now, there is one organisation I know of whose main focus is trying to prevent astronomical suffering, namely the Foundational Research Institute. Another organisation that might be beneficial in my opinion is MIRI, which works on AI alignment research: technical research into ensuring that a future superintelligence produces good outcomes. The reason I say might is that I’m not sure an aligned superintelligence will necessarily produce less suffering than a faulty, or misaligned, superintelligence (9). Furthermore, it has been argued that the worst outcomes might come from an almost aligned superintelligence, so research on AI alignment might make those outcomes more likely (10). That said, I can imagine the opposite argument: without alignment research we will end up with almost aligned AI anyway, and we will need alignment research to get out of the “hyper-existential pit”.

It could turn out that helping to prevent extinction by funding AI alignment research causes vastly more extreme suffering in expectation than a scenario with a misaligned AI. It could also be the case that it reduces vast amounts of extreme suffering in expectation. Neither of these two possibilities seems much more likely than the other. This doesn’t seem like a robust way to improve the world in my opinion. That said, if you are okay with high risk and think current AI alignment work is more than 50 percent likely to be positive and has the highest impact per dollar, I will not try to stop you. My uncertainty about the desirability of AI alignment does not apply to the same degree to suffering-focused AI safety (or worst-case AI safety), efforts to specifically avoid astronomical suffering. It seems like this should have a greater chance of being beneficial. This might not be a very consoling thought, but if you worry about extinction, it might not be an all-or-nothing thing if some kind of multiverse exists.

If you think AI alignment is equally likely to decrease as to increase the amount of future suffering, you might say that it’s neutral in regard to expected suffering and worth doing because it makes human extinction less likely. It’s like taking a bet that is equally likely to reduce or increase the total amount of suffering by the same amount, but that always decreases extinction risk. Taking the bet seems to make sense from an expected value perspective, but I would not want to take it. I prefer actions that are less likely to make the world worse, at the expense of lost expected value (I’m not sure this is rational; it might be a bias). More crucially, reducing involuntary extreme suffering is more important to me than reducing extinction risk, and we have finite resources, so it’s important to prioritize. My most favoured option would be for humanity to not make any superintelligences at all (it’s just not worth the risks in my opinion), but that might be asking for too much, taking into account the economic incentives and enthusiasm in AI research: “the prospect of discovery is too sweet” (11).


Notes

a: The original piece I wrote in 2016 that evolved into this one was called “The Moral Significance of Getting AI Right”.

(1) Bostrom’s estimate of our cosmic endowment (in Superintelligence, p 102) assumes that there are no other technological civilizations within our cosmic horizons; if they do exist, we would have to cooperate or compete with them for the resources. It also assumes that we don’t live in a computer simulation with limited computing resources (Bostrom, Are You Living In a Simulation?). If we choose to aestivate, the computing resources could increase by 30 orders of magnitude.
(2) Calculating low- and high-end estimates for the number of sentient animals that have existed.
High end: the current number of animals is about 10²² (mostly nematodes). Multiplying by 52 weeks per year and 500,000,000 years gives roughly 10³² (assuming a one-week average lifespan, that animals have existed for half a billion years, and no change in populations over that time).
Low end: the total number of mammals that have ever existed is probably about 10²⁰. The current number of mammals is about 10¹¹; multiplying by 200,000,000 years gives roughly 10¹⁹ (assuming a one-year average lifespan, that mammals have existed for 200 million years, and no change in populations over that time, so it’s likely an overestimate).
It’s hard to estimate the total number of sentient life forms that have existed, but it’s likely in the range of 10¹⁹ to 10³². It depends on where we draw the line between sentient and non-sentient; it’s closer to 10³⁰ if we include insects. Numbers from Tomasik, How Many Wild Animals Are There?
See also:
How many animals have ever lived?
How many organisms have ever lived on Earth?
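For reference, the same low- and high-end arithmetic as a minimal Python sketch (using only the figures listed above):

    # High end: ~10^22 animals alive at a time, one-week lifespans, 500 million years.
    high_end = 10**22 * 52 * 500_000_000   # ~2.6e32, rounded to ~10^32
    # Low end: ~10^11 mammals alive at a time, one-year lifespans, 200 million years.
    low_end = 10**11 * 200_000_000         # ~2e19, rounded to ~10^19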
(3) The number of operations required to simulate “sentient history” is derived from Superintelligence, p 26: “If we were to simulate 10²⁵ neurons over a billion years of evolution (longer than the existence of nervous systems as we know them), and we allow our computers to run for one year, these figures would give us a requirement in the range of 10³¹–10⁴⁴ FLOPS.” There are about 3 × 10⁷ seconds in a year, so I multiplied by that. Note that Bostrom’s 10²⁵ estimate is the number of neurons in nature at any given time, while my 10²⁵ estimate is the number of sentient animals in the history of Earth; his estimate also counts non-sentient animals.
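In code, the conversion from Bostrom’s FLOPS range to a total number of operations looks like this (same figures as above):

    seconds_per_year = 3 * 10**7
    low_ops  = 10**31 * seconds_per_year   # ~3e38, which I round to ~10^39 operations
    high_ops = 10**44 * seconds_per_year   # ~3e51, which I round to ~10^52 operations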
(4) We get this by dividing the 10⁸⁵ operations allowed by the cosmic endowment by the 10⁵² operations representing all neural activity. Alternatively, we could create 10⁵⁸ digital humans with 100-year lifespans, interacting with each other in virtual worlds (Superintelligence, p 25–26, p 102–103).
(5) If we use the high estimate for the number of operations needed to simulate all “sentient history” and we assume moral significance is proportional to the number of operations (an extreme oversimplification), “sentient history” on Earth is morally comparable to 10²⁵ human lives. If we use the low “sentient history” estimate, “sentient history” is comparable to 10¹² human lives.
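One way to arrive at these figures (a sketch under the assumption that a 100-year digital human life costs about 10²⁷ operations, i.e. the 10⁸⁵ operations of the cosmic endowment spread over the 10⁵⁸ lives in note 4):

    ops_per_human_life = 10**85 // 10**58   # ~10^27 operations per 100-year digital life (from note 4)
    high = 10**52 // ops_per_human_life     # ~10^25 human-life equivalents
    low  = 10**39 // ops_per_human_life     # ~10^12 human-life equivalents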
(6) I’m assuming here that total moral significance scales linearly with the amount of morally significant conscious experience. It doesn’t seem far-fetched to think that headaches are twice as bad if they are twice as intense, are experienced for a period that is twice as long, or occur to twice as many individuals (all else being equal).
(7) Nobody knows if and when superintelligence will be developed, but in a 2016 survey the mean of AI expert opinion was that it is 50 percent likely that AIs will outperform humans in all tasks by around 2060. Superintelligence could also arrive through what’s called whole brain emulation. This path should be easier to predict because it doesn’t depend on any theoretical breakthrough, “just” continued incremental progress in computing, microscopy, automatic image recognition and neuroscience. Oxford researcher Anders Sandberg estimates that there is a 50 percent chance of this technology being available in the 2060s, and about a 90 percent chance by 2100. Technological forecasting is very difficult, so one should take these predictions with a big pinch of salt. However, the suggestion that artificial superintelligence will arrive this century doesn’t appear ridiculous. I like this quote by researchers Sotala and Yampolskiy: “If the judgment of experts is not reliable, then, probably, neither is anyone else’s. This suggests that it is unjustified to be highly certain of AGI being near, but also of it not being near.”
(8) See the paper “Superintelligence as a Cause or Cure for Risks of Astronomical Suffering” by Kaj Sotala and Lukas Gloor.
(9) See the previous note and this section of “Artificial Intelligence and Its Implications for Future Suffering” by Brian Tomasik.
(10) See “Separation from hyperexistential risk” for proposed attempts to mitigate this risk.
(11) Quote by Geoffrey Hinton in the New Yorker.
