Orders of Doom

The Ant and the Grasshopper

The Ant knew that food would be hard to come by in the winter, so one hot summer day, as the Grasshopper frittered away the day leaping and dancing and making merry, the Ant thought of nothing but gathering food for the colony. It even hoped to save enough food for the Grasshopper, who was not responsible for an upbringing and genetic makeup that gave it insufficient Conscientiousness.

The Ant focused so completely on gathering food, making its route more efficient and carrying the most efficient load possible, that it missed the shadow that fell over it mid-afternoon. The Anteater's tongue sprang out to carry the Ant to a waiting, hungry mouth. The Ant was delicious.

Meanwhile, a Black Swan ate the Grasshopper. The Grasshopper was even more delicious, having feasted on a variety of treats during its short but pleasant life.

-Not Aesop's Fables

I'm trying to figure out what the important problems are in the world so that I can figure out what I should do about them. But there are a few very different ways the world could be. Depending on which is the case, I might want to do very different things to save the world. This is an enumeration of the cases I've thought of so far.

I'll start with existential risks, because they have the potential to affect the largest number of people.

AI Risk

The basic safety problem in artificial intelligence is that minds are very powerful, and human values are hard to precisely express. If it is feasible to construct a mind more intelligent (or just faster) than humans, the first one built will very quickly improve itself and acquire resources until it transforms as much of the universe as it can get at into whatever best satisfies its preferences. A nuclear bomb isn't malicious, it's just very powerful, and not constrained to protect the people around it. An artificial intelligence powerful enough to control large parts of the world we live in is not "friendly" by default. This is called the problem of "Unfriendly AI," or UFAI.

The folks at MIRI suspect that the best way to remedy this risk is to build a sufficiently powerful AI that has the appropriate safeguards - a "Friendly AI" or FAI. If a sufficiently powerful mind shares our values, it will not only avoid destroying us - it will protect us from UFAI, and any other existential risk, more effectively than we can with our lesser intelligence.

However, we don't know how hard it is to build a UFAI powerful enough to destroy the world, or how hard it is to build an FAI powerful enough to prevent that. There are a few possibilities:

  1. FAI and UFAI are both feasible soon.
  2. UFAI is feasible but FAI is not.
  3. UFAI and FAI are both feasible, but only UFAI is feasible soon.
  4. FAI is feasible, but UFAI is not.
  5. Neither UFAI nor FAI is feasible.
  6. Something Else

1) FAI and UFAI are both feasible soon

This would require the following to be true:

  • Either the insights necessary to create a true general intelligence are similar to the ones needed to work out the appropriate safeguards, or the friendliness part of the problem is comparatively easy.
  • Only a truly general intelligence has a shot at destroying or protecting us; simple machine learning algorithms and other less-than-general minds just aren't smart enough to do much good or harm.

If this is the case, then FAI is the most important problem to work on - if we solve it, we solve everything, and if we don't, then solving other existential risks buys us little, because a UFAI will soon arrive anyway.

2) UFAI is feasible but FAI is not

Imagine a boot stamping on a computer, until 2984

-Not George Orwell

If FAI turns out to be impossible, then we have to use tools other than a powerful AI to suppress UFAI. There are a few ways we could prevent a UFAI short of building an FAI:

  • End Modern Civilization
    This has lots of problems - it causes tremendous suffering, it leaves us unable to mitigate the existential risks that modern technology doesn't itself create, each person still dies of aging, and without space travel, humanity dies off when the sun does. It also seems hard to prevent civilization from restarting anywhere on the globe using only pre-modern means of social control.
  • Voluntary Cooperation
    If everyone knows that building a powerful mind without safeguards is very dangerous, then no one will want to build one; the big risk is destroying the world by accident, not on purpose. However, an arms race, or some smart-stupid people, could easily derail this, so it would require both functional, rational local institutions to supervise research and better international relations and institutions. World peace would be essential.
  • Policing
    If we can't avoid destroying the world through voluntary abstention, then we could organize to forcibly prevent it. Modern technology could permit a future world government to monitor all AI progress and confiscate anything approaching a world-destroying level of AI technology. The costs of a global police state, though, may be high. Some people consider a stable totalitarian world government itself to be a catastrophic global risk.
  • Space Travel
    If a world-ending UFAI need not be very smart - if, for example, something stupider than a human in some respects, too stupid to recursively improve itself and dominate everything in its light cone, but as fast as the fastest computer, is still smart enough to destroy the world - then we might not have to worry about it following us to the stars and colonizing the universe. If we get off this rock, humanity is probably safe. More on this below.

3) UFAI and FAI are both feasible, but only UFAI is feasible soon

If this is true, then FAI is an important problem to work on (since it solves all the others), but nowhere near the most urgent. We may have to figure out how to hold off other catastrophes for quite a while until we can build an FAI. This includes the catastrophe of UFAI. In other words, if FAI is a lot harder than AI, then we have to learn how to suppress AI until it can be made friendly.

4) FAI is feasible, but UFAI is not

It is conceivable - though unlikely enough that I only put this in for the sake of completeness - that intelligence and values are related. There might be a set of values that sufficiently intelligent minds must share, and humans might be sufficiently intelligent to already have values in that space. If this is the case, then any sufficiently powerful AI must be friendly.

Another possible mechanism would be something like divine intervention to prevent world-destroying catastrophes, though this seems unlikely given the amount of suffering and death that has already been permitted.

5) Neither UFAI nor FAI is feasible

If this is the case, then there's no AI Risk and we should just start worrying about something else.

6) Something Else

If you don't want your head to asplode all the time, you'd better have a category for something you didn't think of yet.

Other Imminent Existential Risks

The most important risk may just be the next one, not the one with the biggest impact considered on its own. Even in 1983, Stanislav Petrov might have been able to figure out that FAI could be the most important problem - but the more urgent one was not starting a nuclear war. Petrov admirably refrained from destroying the world, and for that we owe him more thanks than if he had gotten halfway towards designing an FAI.

A brief list of some possible imminent existential risks:

  • Nuclear War
  • Nanotechnology
  • Engineered or Natural Pandemics

One important subcategory of imminent risks is catastrophic risks that are not on their own existential but that could trigger a civilizational collapse. Civilizational collapse is extremely bad for a few reasons. First, there's a lot of suffering and death immediately. Second, it wipes out progress we've already made toward mitigating existential risks. Losing that progress might not be so bad for risks that are themselves products of modern technology - but there are also natural existential risks, like asteroid strikes, massive volcanic eruptions, and the death of the sun. These are probably much less imminent than the technological risks, but they are also less tractable without our current level of technology.

Space Travel

Not unless you can alter time, speed up the harvest or teleport me off this rock.

-Luke Skywalker

Another way to mitigate existential risks, aside from directly reducing the probability of a global catastrophe, is to live in more places than just this one globe. If there are people elsewhere, humanity has a chance at surviving, and eventually at filling the galaxy with people enough like us that we care about them.

Aging

If there aren't any imminent or nearly imminent existential risks, aging becomes a big deal. Most deaths are related to aging, and this costs people a tremendous number of years of life.

Quantum Immortality

If we should never expect to subjectively experience death, we should at least entertain the hypothesis that death doesn't matter, and that we should worry only about suffering. Things like animal welfare and awesome video games become more important, and things like existential risk don't matter at all - because we'll only have subjective experiences in the future worlds where we don't die.

Simulation Hypothesis

Far away across the sea, there is a well-governed kingdom ruled by a king chosen by a secretive circle of electors. In this kingdom there is a special sort of delusion called the King Madness, which produces the belief that one is the king of this land, but preserves the rest of one's mental faculties, and in fact tends to be associated with excess wisdom and prudence. At any given time, there are several thousand people with the King Madness, held in the Royal Asylum, but only one king. Of course, the king is well aware that he is much more likely to be a sufferer of the King Madness than the actual king. So naturally he ensures that they are well treated, since this increases the probability that the actual king is doing the same for him.

Rumor has it that the actual king is also chosen from among the population with the King Madness, on the theory that it is the one role in society for which his belief poses no impediment.
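
The king's policy is just base-rate arithmetic, and it is the same arithmetic the simulation argument below asks us to apply to ourselves. A minimal sketch in Python, with the number of sufferers invented purely for illustration:

    # The king's base-rate reasoning. The count of sufferers is invented
    # for illustration; only the ratio matters.
    sufferers = 3000    # people with the King Madness, all sure they are king
    actual_kings = 1

    p_actually_king = actual_kings / (sufferers + actual_kings)
    print(f"P(actually the king | believes he is king) = {p_actually_king:.4%}")
    # Prints roughly 0.0333%, so whoever sits on the throne should mostly plan
    # as if he were one of the sufferers - which is why treating them well is
    # the prudent policy.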

The simulation argument is that at least one of the following must be true:

  1. The human species is very likely to go extinct before reaching a “posthuman” stage.
  2. Any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof).
  3. We are almost certainly living in a computer simulation.

If 3 is true, then this means that the fundamental nature of the reality we live in is very different from what we might think, in ways that could strongly affect how we should live. For example, it might imply that we should be kind to people we simulate, or that we shouldn't build things that are very costly to continue to simulate.
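
The force of the argument is easiest to see as arithmetic. Below is a minimal sketch in Python, loosely following the formula in Bostrom's paper; the parameter values are invented purely for illustration. Unless the fraction of civilizations that reach a posthuman stage, or the number of ancestor-simulations they run, is very close to zero, almost all observers with experiences like ours are simulated - which is just the trilemma restated.

    # Rough sketch of the simulation-argument arithmetic, loosely following
    # Bostrom's formulation. All parameter values below are made up.

    def simulated_fraction(f_posthuman, sims_per_civ):
        """Approximate fraction of human-like observers who are simulated.

        f_posthuman:  fraction of human-level civilizations that reach a
                      posthuman stage (option 1 says this is ~0).
        sims_per_civ: average number of ancestor-simulations such a
                      civilization runs (option 2 says this is ~0).
        """
        expected_sims = f_posthuman * sims_per_civ
        return expected_sims / (expected_sims + 1)

    for f_p, n in [(0.0, 1000), (0.01, 0), (0.01, 1000), (0.5, 1_000_000)]:
        print(f"f_P={f_p}, sims per civilization={n}: "
              f"simulated fraction = {simulated_fraction(f_p, n):.4f}")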

Wild Animal Suffering

There are lots of animals and they may suffer a lot. (Brian Tomasik has done work on this.) If their experiences carry significant moral weight, they could outweigh human well-being, at least when you consider only the near term and not the far future.

Political Coordination

A tremendous number of very different important problems would be much more tractable if there were effective global cooperation, or even if local decisionmakers could coordinate better. This is upstream of many of the above causes, including some AI risk.

Cause Prioritization

Knowing which causes are most important, most responsive to additional work, and most neglected is necessary in order to reliably work on the most important problem. Katja Grace has done a "shallow investigation" into this here.
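
One common way to make those three criteria concrete is to score each cause on importance, tractability, and neglectedness and multiply. Here is a toy sketch in Python; every cause and every number is invented for illustration, and in practice the real work is arguing over the inputs rather than multiplying them.

    # Toy importance x tractability x neglectedness scoring.
    # All numbers here are invented for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Cause:
        name: str
        importance: float     # how much the world improves if solved (arbitrary units)
        tractability: float   # how responsive the problem is to additional work (0-1)
        neglectedness: float  # how much an extra contributor matters at the margin (0-1)

        def score(self) -> float:
            return self.importance * self.tractability * self.neglectedness

    causes = [
        Cause("AI risk",                100.0, 0.05, 0.90),
        Cause("Pandemic preparedness",   30.0, 0.30, 0.50),
        Cause("Wild animal suffering",   50.0, 0.10, 0.95),
        Cause("Political coordination",  60.0, 0.10, 0.20),
    ]

    for cause in sorted(causes, key=lambda c: c.score(), reverse=True):
        print(f"{cause.name:25s} score = {cause.score():.2f}")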

Something Else

Like I said in the AI risk section: if you don't want your head to asplode all the time, you'd better have a category for something you didn't think of yet.

Comments

Jeffrey

    Notice that the definitions used for AI in all these cases presuppose that the AI cannot be constrained. This mystical thing is always assumed to be able to easily reproduce and control everything else that is at least digitally based, since it is also assumed that the coming AI will be compatible with those systems by design. But honestly, these are separate features. An AI does not need any particular ability other than to learn, but even our basic concepts of learning and intelligence have the concept of discrimination baked in - itself a values-based concept. I take it as axiomatic that there is no intelligence without discrimination, because the only alternative appears to be mere association among observations. But how does a relative of the computer - based as it is on nothing but mathematical logic - get any values besides the ones we give it?

    So anyway, what I think I'm saying is that AI problems are just biological WMDs and nanotechnology all over again: we maliciously or accidentally program the bad behavior AND provide the tools and resources to the golem, then bring it to life. Might as well work on the broader issues instead - either the philosophy of ensuring that humans don't bring about their own destruction, or the defensive side of shielding humanity or spreading it out.
