Human extinction might not be an all-or-nothing thing. Some fraction of perfect or near-perfect copies of humanity in the multiverse (if it exists) might go extinct while others don’t. These copies could be very far away in space – outside the observable universe (level I multiverse), in other bubble universes where cosmic inflation has ended (level II), in other branches of the wavefunction in so-called Hilbert space (level III) or possibly in other mathematical structures (level IV) (1). This suggests that humanity, in a non-local sense, is unlikely to go extinct for a very long time.
Imagine the following: there is a large number of identical copies of humanity, and you have a choice between two buttons. One reduces the percentage of copies that go extinct; the other reduces, by the same amount and from the same starting level, the risk of intense suffering on a scale vastly exceeding all suffering in the history of life. Which button do you push? Choosing between the buttons represents deciding where to put your finite effort and resources. In the real world it’s obviously not this clear-cut, but this is one way to think about the decision between reducing extinction risk and suffering risk when taking the likelihood of a multiverse into account. Since I think preventing prolonged extreme suffering almost always takes precedence over creating additional pleasure, I would work towards making the future as good as possible for those copies of humanity that survive, rather than increasing the number of copies that survive (2).
In case there aren’t any close copies of humanity, or not sufficiently many to ensure that at least some of them will survive, preventing human extinction appears more important to me. Nevertheless, focusing on reducing suffering risk looks more robustly beneficial: reducing involuntary suffering is always good (holding everything else constant), but reducing extinction risk might not be, depending on the quality of the future and your values (your E- and N-ratios). Suffering risks are also more neglected than extinction risks (3). As I’ve written in the previous post, it might turn out that current AI alignment research prevents future suffering as well as reducing the risk of extinction, but this looks uncertain (4).
Hopefully I don’t have to spell this out, but I strongly oppose any attempt to increase the risk of human extinction. In the unlikely case that you need to be persuaded that this is a bad idea, read my following paraphrase of Brian Tomasik: ”If the cause to reduce suffering inspired someone to do something destructive, this could be very bad — not just for other value systems but even for suffering reducers themselves, due to backlash against the cause. Violence by fringe minorities almost always hurts those who perpetrate it.” To get our act together we need global stability, because otherwise people can’t afford to think long-term, and we need cooperation between key organizations, which requires trust.
(1) The level terminology is from Wikipedia. From what I gather, each level is more controversial than the previous one, with the first being uncontroversial among cosmologists and the last highly speculative.
(2) This formulation is similar to an idea first expressed by the Foundational Research Institute: ”Rather than focusing exclusively on ensuring that there will be a future, we recommend interventions that improve the future’s overall quality.”
(3) In “Against Wishful Thinking”, Brian Tomasik explains why he thinks people don’t pay enough attention to the risk that suffering could get greatly multiplied in the future.
(4) See this section of ”Cause prioritization for downside-focused value systems” by Lukas Gloor for a thoughtful analysis of whether current AI alignment research is likely to prevent future suffering. I withhold definite judgement about the matter.