Existential risk is the risk of an event that would wipe out humanity. That means not just present lives, but all future ones. Populations can recover from a lot of things, but not from complete extinction. If you value future people at all, you might care a lot about even a small reduction in the probability of an extinction event for humanity.
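To see why even a tiny shift in probability can matter, here's a back-of-the-envelope calculation; the numbers are hypothetical placeholders of mine, not claims from this post:

```python
# Hypothetical numbers, for illustration only.
future_lives = 10**15      # assumed number of potential future people
risk_reduction = 1e-6      # assumed reduction in extinction probability

# Expected number of future lives saved by that tiny reduction.
print(f"{future_lives * risk_reduction:,.0f} expected lives")  # 1,000,000,000 expected lives
```

If you grant anything like those inputs, the expected value of a one-in-a-million reduction is still enormous.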
There are two big problems with reasoning about existential risk:
1) By definition, we have never observed the extinction of humanity.
This means we don't have an uncensored dataset to do statistics on - extrapolating from the past builds anthropic bias into our estimates: people can only observe events that don't wipe out humanity, even though events that do wipe out humanity are possible. Our past observations are therefore a biased sample that makes the universe look safer than it is.
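To make that censoring concrete, here's a toy simulation (my own sketch, with a made-up per-century extinction probability, not anything from the post): whatever the true risk is, every observer who is still around to count catastrophes sees a spotless record, so a naive frequency estimate from the past says the risk is zero.

```python
import random

# Toy simulation of observation selection. Assume a hypothetical true
# extinction risk of 1% per century. Observers can only look back from
# worlds that survived, so the historical record available to any
# observer contains zero extinctions regardless of the true risk.
TRUE_RISK = 0.01      # hypothetical per-century extinction probability
CENTURIES = 100       # length of history each observer looks back over
WORLDS = 100_000      # number of simulated worlds

surviving = sum(
    all(random.random() > TRUE_RISK for _ in range(CENTURIES))
    for _ in range(WORLDS)
)

print(f"True per-century risk:            {TRUE_RISK:.1%}")
print(f"Worlds that still have observers: {surviving / WORLDS:.1%}")
print("Extinctions in any surviving observer's record: 0")
```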
2) Our intuitions are terrible about this sort of thing.
To reason about existential risk, we have to tell stories about the future, think about probabilities (sometimes very small ones), and think about very large numbers of people. Our brains are terrible at all of these things. For some of these beliefs, there's just no way to build a "feedback loop" to test them quickly in small batches - and that's the main way we know how to figure out when we're making a mistake.
Moreover, we have to do this on a topic that evokes strong emotions. We're not talking about the extinction of some random beetle here. We're talking about you, and me, and everyone we know, and their grandchildren. We're talking about scary things that we desperately don't want to believe in, and promising technologies we want to believe are safe. We're talking about things that sound a lot like religious eschatology. We're talking about something weird that the world as a whole hasn't quite yet decided is a normal thing to be worried about.
Can you see how rationality training might be helpful here?
I'm writing this on a flight to Oakland, on my way to CFAR's workshop to test out in-development epistemic rationality material. A few months ago, I expressed my excitement at what they've already done in the field of instrumental rationality, and my disappointment at the lack of progress on training to help people have accurate beliefs.
During conversations with CFAR staff on this topic, it became clear to me that I cared about this primarily because of existential risk, and I strongly encouraged them to develop more epistemic rationality training, because while it's very hard to do, it seems like the most important thing.
In the meantime, I've been trying to figure out the best way to train myself to make judgments about existential risks, and the best thing I can do to help mitigate them.
It turns out that it's really hard to fix something without a very specific idea of how it's broken. So I'm going to use the inadequate tools I have, on this flight, without access to experts or the ability to look up data, to build the best bad estimate I can of: