Longtermism Contra Schwitzgebel
Blog: Philosophy, et cetera
In 'Against Longtermism', Eric Schwitzgebel writes: "I accept much of Ord's practical advice. I object only to justifying this caution by appeal to expectations about events a million years from now." He offers four objections, which are interesting and well worth considering, but I think ultimately unpersuasive. Let's consider them in turn.

(1) There's no chance humanity will survive long-term:

"All or most or at least many future generations with technological capabilities matching or exceeding our own will face substantial existential risk -- perhaps 1/100 per century or more. If so, that risk will eventually catch up with us. Humanity can't survive existential risks of 1/100 per century for a million years. If this reasoning is correct, it's very unlikely that there will be a million-plus year future for humanity that is worth worrying about and sacrificing for."

This seems excessively pessimistic. Granted, there's certainly some risk that we will never acquire resilience against x-risk. But it's hardly certain. Two possible routes to resilience include: (i) fragmentation, e.g. via interstellar diaspora, so that different pockets of humanity could be expected to escape any given threat; or (ii) universal surveillance and control, e.g. via a "friendly AI" with effectively god-like powers relative to humans, to prevent us from doing grave harm.

Maybe there are other possibilities. At any rate, I think it's clear that we should not be too quick to dismiss the possibility of long-term survival for our species. (And note that any non-trivial probability is enough to get the astronomical expected-value arguments off the ground.)

(2) "The future is hard to see." This is certainly true, but doesn't undermine expected-value reasoning. Schwitzgebel writes:

"It could be that the single best thing we could do to reduce the risk of completely destroying humanity in the next two hundred years is to almost destroy humanity right now... that might postpone our ability to develop even more destructive technologies in the next century. It might also teach us a fearsome lesson about existential risk.... What we do know is that nuclear war would be terrible for us, for our children, and for our grandchildren. That's reason enough to avoid it. Tossing speculations about the million-year future into the decision-theoretic mix risks messing up that straightforward reasoning."

But that isn't really "reason enough to avoid it", because if Schwitzgebel were right that immediate nuclear war was the only way to save humanity, that would obviously change its moral valence. It would be horribly immoral to let humanity go extinct just because saving it would be "terrible for us". When interests conflict, you can't just ignore the overwhelming bulk of them for the sake of maintaining "straightforward reasoning". (I'm sure confederate slaveowners regarded the abolition of slavery as "terrible for us, for our children, and for our grandchildren," but it was morally imperative all the same!)

Of course, I don't really think it's remotely credible that nuclear war has positive expected value in the way that Schwitzgebel speculates. The hope that it "might" teach us a lesson seems far-fetched compared to the more obvious risks of permanently thwarting advanced civilization. (We're not even investing seriously in future pandemic prevention! If we can't learn from the past two years, I'm not confident that a rebuilt civilization centuries or millennia hence would learn anything from tragedies in its distant history. And again, there are serious risks that civilization would never fully rebuild.)

So I think longtermism remains practically significant for raising the moral stakes of existential risk reduction. However important you think it is to avoid nuclear war, it's much more important once you take the long term into account (assuming you share my empirical beliefs about its expected harmfulness).
It also suggests that there's immense expected value to research that would allow us to form better-grounded beliefs about such matters. We shouldn't just pre-emptively ignore them, as Schwitzgebel seemingly recommends. If it's remotely possible that we might find a way to reliably shape the far-future trajectory in a positive direction, it's obviously important to find this out!

(3) "Third, it's reasonable to care much more about the near future than the distant future."

Schwitzgebel stresses that this concern can be relational in form (tied to particular individuals or societies and their descendants), which avoids the problems with pure time discounting. That's an important point. But I don't think any reasonable degree of partiality can be so extreme as to swamp the value of the long-term future.

To see why, consider a Parfitian "depletion" scenario, in which the harms of global warming are delayed by two centuries. Imagine that everyone currently alive (and a couple of generations hence) could reap a bonanza by burning all the planet's fossil fuels, condemning all distant future people to difficult lives in a severely damaged world. Or they could severely limit consumption while investing significantly in renewables, lowering quality of life over these two centuries while protecting the planet for all who come in the further future. Should they choose depletion or preservation? Obviously preservation, right? It's clearly immoral to drastically discount future generations when the trade-offs are made this explicit.

(4) "Fourth, there's a risk that fantasizing about extremely remote consequences becomes an excuse to look past the needs and interests of the people living among us, here and now."

It's always possible that a moral view is self-effacing, but that's no objection to the truth of the view.
Empirically speaking, the people I know to be most concerned about the far future (i.e., effective altruists) are also the people who seem to do the most to help the global poor, factory-farmed animals, etc. So this fear doesn't seem empirically well-grounded.

By contrast, I think there's a much more credible risk that defenders of conventional morality may use dismissive rhetoric about "grandiose fantasies" (etc.) to discourage other conventional thinkers from taking longtermism and existential risks as seriously as they ought, on the merits, to take them. (I don't accuse Schwitzgebel, in particular, of this. He grants that most people unduly neglect the importance of existential risk reduction. But I do find this kind of rhetoric troublingly common amongst critics of longtermism, and I don't think it's warranted or helpful in any way.)

Of course, it's possible that enthusiasts might end up drawn towards bad bets if they exaggerate their likely efficacy at influencing the far future. But that's just more reason to think that it's really important to investigate these questions carefully, and get the empirical estimates right. It's not a reason to reject longtermism wholesale.