Should Effective Altruism be at war with North Korea?

Summary: Political constraints cause supposedly objective technocratic deliberations to adopt frames that any reasonable third party would interpret as picking a side. I explore the case of North Korea in the context of nuclear disarmament rhetoric as an illustrative example of the general trend, and claim that people and institutions can make better choices and generate better options by modeling this dynamic explicitly. In particular, Effective Altruism and academic Utilitarianism can plausibly claim to be the British Empire's central decisionmaking mechanism, and as such have more options than their current story can consider.

Context

I wrote to my friend Georgia in response to this Tumblr post.

Asymmetric disarmament rhetoric

Ben: It feels increasingly sketchy to me to call tiny countries surrounded by hostile regimes "threatening" for developing nuclear capacity, when US official policy for decades has been to threaten the world with nuclear genocide.

Strong recommendation to read Daniel Ellsberg's The Doomsday Machine.

Georgia: Book review: The Doomsday Machine

So I get that the US' nuclear policy was and probably is a nightmare that's repeatedly skirted apocalypse. That doesn't make North Korea's program better.

Ben [feeling pretty sheepish, having just strongly recommended a book my friend had already reviewed on her blog]: "Threatening" just seems like a really weird word for it. This isn't about whether things cause local harm in expectation - it's about the frame in which agents trying to organize to defend themselves are the aggressors, rather than the agent insisting on global domination.

Georgia: I agree that it's not the best word to describe it. I do mean "threatening the global peace" or something, rather than "threatening to the US as an entity." But I do in fact think that North Korea building nukes is pretty aggressive. (The US is too, for sure!)

Maybe North Korea would feel less need to defend itself from other large countries if it weren't a literal dictatorship - being an oppressive dictatorship with nukes is strictly worse.

Ben: What's the underlying thing you're modeling, such that you need a term like "aggression" or "threatening," and what role does it play in that model?

Georgia: Something like destabilizing to the global order and not-having-nuclear-wars - it increases risk to people, makes the world more dangerous. With "aggressive" I was responding to your "aggressors" but may have misunderstood what you meant by that.

Ben: This feels like a frame that fundamentally doesn't care about distinguishing what I'd call aggression from what I'd call defense - if they do a thing that escalates a conflict, you use the same word for it regardless. There's some sense in which this is the same thing as being "disagreeable" in action.

Georgia: You're right. The regime is building nukes at least in large part because they feel threatened and as an active-defense kind of thing. This is also terrible for global stability, peace, etc.

Ben: If I try to ground out my objection to that language a bit more clearly, it's that a focus on which agent is proximately escalating a conflict, without distinguishing between escalation that's about controlling others' internal behavior and escalation that's about preventing others from controlling yours, is an implicit demand that everyone immediately submit completely to the dominant player.

Georgia: It's pretty hard to make those kinds of distinctions with a single word choice, but I agree that's an important distinction.

Ben: I think this is exactly WHY agents like North Korea see the need to develop a nuclear deterrent. (Plus the dominant player does not have a great track record for safety.) Do you see how from my perspective that amounts to "North Korea should submit to US domination because there will be less fighting that way," and why I'd find that sketchy?

Maybe not sketchy coming from a disinterested Martian, but very sketchy coming from someone in one of the social classes that benefit the most from US global dominance?

Georgia: Kind of, but I believe this in the nuclear arena in particular, not in general conflict or sociopolitical tensions or whatever. Nuclear war has some very specific dynamics and risks.

Influence and diplomacy

Ben: The obvious thing from an Effective Altruist perspective would be to try to establish diplomatic contact between Oxford EAs and the North Koreans, to see if there's a compromise version of Utilitarianism that satisfies both parties such that North Korea is happy being folded into the Anglosphere, and then push that version of Utilitarianism in academia.

Georgia: That's not obvious. Wait, are you proposing that?

Ben: It might not work, but "stronger AI offers weaker AI part of its utility function in exchange for conceding instead of fighting" is the obvious way for AGIs to resolve conflicts, insofar as trust can be established. (This method of resolving disputes is also probably part of why animals have sex.)
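
[A toy sketch of what "offering part of its utility function" could look like, assuming the simplest possible formalization - a weighted blend of the two utility functions, where the weight stands in for the weaker party's bargaining power. All function names and payoff numbers below are made up for illustration.]

    # Toy model: a stronger agent concedes weight w to a weaker agent's values
    # in exchange for the weaker agent conceding instead of fighting.
    def merged_utility(u_strong, u_weak, w):
        # Blend the two utility functions into one the merged agent optimizes.
        return lambda outcome: (1 - w) * u_strong(outcome) + w * u_weak(outcome)

    # Illustrative payoffs for each side over three possible outcomes.
    u_us = {"status_quo": 1.0, "nk_disarms": 2.0, "war": -10.0}.get
    u_nk = {"status_quo": 1.0, "nk_disarms": -5.0, "war": -10.0}.get

    # With no concession (w=0) the joint optimum is "nk_disarms"; a modest
    # concession (w=0.2) flips it to "status_quo" - both sides avoid war.
    u = merged_utility(u_us, u_nk, w=0.2)
    print(max(["status_quo", "nk_disarms", "war"], key=u))  # -> status_quo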

Georgia: I don't think academic philosophy has any direct influence on like political actions. (Oh, no, you like Plato and stuff, I probably just kicked a hornet's nest.) Slightly better odds on the Oxford EAs being able to influence political powers in some major way.

Ben: Academia has hella indirect influence, I think. I think Keynes was right when he said that "practical men, who believe themselves to be quite exempt from any intellectual influences, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back." Though usually on longer timescales.

FHI is successfully positioning itself as an advisor to the UK government on AI safety.

Georgia: Yeah, they are doing some cool stuff like that, do have political ties, etc, which is why I give them better odds.

Ben: Utilitarianism is nominally moving substantial amounts of money per year, and quite a lot if you count Good Ventures being aligned with GiveWell due to Peter Singer's recommendation.

Georgia: That's true.

Ben: The whole QALY paradigm is based on Utilitarianism. And it seems to me like you either have to believe

(a) that this means academic Utilitarianism has been extremely influential, or

(b) that the whole EA enterprise is profiting from the impression that it's Utilitarian but then doing quite different stuff, in a way that, if not literally fraud, is definitely a bait-and-switch.

Georgia: I'm persuaded that EA has been pretty damn influential and influenced by academic utilitarianism. Wouldn't trying to convince EAs directly or whatever instead of routing through academia be better?

Ben: Good point, doesn't have to be exclusively academic - you'd want a mixture of channels since some are longer-lived than others, and you don't know which ones the North Koreans are most interested in. Money now vs power within the Anglo coordination mechanism later.

Georgia: The other half of my incredulity is that fusing your value functions does not seem like a good silver bullet for conflicts.

Ben: It worked for America, sort of. I think it's more like, rarely tried because people aren't thinking systematically about this stuff. Nearly no one has the kind of perspective that can do proper diplomacy, as opposed to clarity-opposing power games.

Georgia: But saying that an academic push to make a fused value function is obviously the most effective solution for a major conflict seems ridiculous on its face.

Is it coherent to model an institution as an agent?

Ben: I think the perspective in which this doesn't work is one that thinks modeling NK as an agent that can make decisions is fundamentally incoherent, and that taking claims to be doing utilitarian reasoning at face value is incoherent too. Either there are agents with utility functions that can and do represent their preferences, or there aren't.

Georgia: Surely they can be both - like, conglomerations of human brains aren't really perfectly going to follow any kind of strategy, but it can still make sense to identify entities that basically do the decisionmaking and act more-or-less in accordance with some values, and treat that as a unit.

It is both true that "the North Korean regime is composed of multiple humans with their own goals and meat brains" and that "the North Korean regime makes decisions for the country and usually follows self-preservationist decisionmaking."

Ben: I'm not sure which mode of analysis is correct, but I am sure that doing the reconciliation to clarify what the different coherent perspectives are is a strong step in the right direction.

Georgia: Your goal seems good!

Philosophy as perspective

Ben: Maybe EA/Utilitarianism should side with the Anglo empire against NK, but if so, it should probably account for that choice internally, if it wants to be and be construed as a rational agent rather than a fundamentally political actor cognitively constrained by institutional loyalties.

Thanks for engaging with this - I hadn't really thought through the concrete implications of the fact that any system of coordinated action is a "side" or agent in a decision-theoretic landscape with the potential for conflict.

That's the conceptual connection between my sense that calling North Korea's nukes "threatening" is mainly just shoring up America's rhetorical position as the legitimate world empire, and my sense that reasoning about ends that doesn't concern itself with the reproduction of the group doing the reasoning is implicitly totalitarian in a way that nearly no one actually wants.

Georgia: "With the reproduction of the group doing the reasoning" - like spreading their values/reasoning-generators or something?

Ben: Something like that.

If you want philosopher kings to rule, you need a system adequate to keep them in power, when plenty of non-philosophers have an incentive to try to get in on the action, and then that ends up constraining most of your choices, so you don't end up benefiting much from the philosophers' competence!

So you build a totalitarian regime to try to hold onto this extremely fragile arrangement, and it fails anyway. The amount of narrative control you have to exert to prevent people from subverting the system by which the philosophers stay in charge ends up being huge.

(There's some ambiguity here, since part of the reason for control is education into virtue - but if you're not doing that, there's not much point in having philosophers in charge anyway.)

I'm definitely giving you a summary run through a filter, but that's true of all summaries, and I don't think mine is less true than the others - just differently slanted.


Related: On Geopolitical Domination as a Service, Egoism in Disguise

13 thoughts on “Should Effective Altruism be at war with North Korea?”

  1. Daniel Kokotajlo

    1. Like Ben said, I object to your use of "North Korea" to describe the Kim regime. (I feel sheepish for saying this, since normally I don't think arguing over word choice is productive.) But I think in this case, it would be more honest to say "The Kim regime" or "The North Korean regime." Otherwise, it sounds like you think it's the people of North Korea vs. the "Anglos." That makes it sound like an ethnic conflict, which it very much is not.

    2. The USA offering substantial concessions to the Kim regime to keep it from building and deploying nukes or other WMDs is not a good precedent to set. There are many dictators and oppressive regimes in the world. In utility function parlance, it might be a good idea to offer a very small portion of our utility function, smaller than would meaningfully incentivise other nations to go for nukes. But we already do this, IIRC--we give the Kim regime tons of food aid.

    3. "It feels increasingly sketchy to me to call tiny countries surrounded by hostile regimes "threatening" for developing nuclear capacity, when US official policy for decades has been to threaten the world with nuclear genocide." 1. Several countries have developed nuclear capacity besides NK and are not called threatening to nearly the same extent--e.g. India, Pakistan, Israel, France... The reason has to do with the Kim regime's demonstrated willingness to do threatening things like assassinate South Korean presidents, kidnap and assassinate various citizens of other countries in places around the world, unprovokedly invade South Korea with the express purpose of annexing them... 2. I feel like it's a bit of an exaggeration: you make it sound like the US is extorting money from everyone with the threat of nuclear holocaust, which is super implausible. It's not a credible threat--the US would never launch a nuke over a merely economic matter. They might do it during a conventional war, but not a trade war. Moreover to my knowledge it's never been made, even implicitly--common wisdom is that if a country wants to stop trading with the US, or tax the hell out of US products or whatnot, they'll get economic sanctions and maaaaaaaybe funding and weapons given to dissidents within. Not invasion, certainly not global nuclear holocaust! Finally, ideas like this were discussed in the US when nukes were first invented (e.g. by Bertrand Russell, who advocated the US using nukes to maintain their nuclear monopoly) and these ideas did not receive uptake.

    1. Benquo (post author)

      Thanks for switching venues from the Other Place where you originally commented, to the public internet!

      I don't see how I can believe that this isn't at least partly an ethnic conflict, given e.g. the US plans discussed in The Doomsday Machine. Secret US official policy, the last time a whistleblower was in a position to know, was to nuke as many Chinese people as possible if the US got into a battalion-level conventional conflict with the Soviet Union, regardless of whether China was involved. This was the case after China and the Soviet Union had a diplomatic falling-out, and before China had any serious ability to strike the US. The US also publicly threatened nuclear escalation of conventional conflict with the Soviet Union multiple times, in a context where there was only one plan maintained internally - the aforementioned single all-out strike against the Soviet Union and China simultaneously. I'd be pretty surprised if NK got left out of the current analogous plan.

      But I share your sense that the country of North Korea isn't a unified actor here - the regime and the people aren't identical, and there are strong structural factors constraining the regime's behavior. These structural factors are part of the reason for dictators and oppressive regimes, and any analysis of this situation has to take into account the pressures backing countries into that particular corner.

      1. Anonymous

        The plan to nuke China was bad; good point, I had forgotten about that.

        Everything is at least partly an ethnic conflict, but I think in this case it is not very much at all an ethnic conflict. This is a conflict between the US and NK regimes; the common folk don't really have much to do with it.

        1. Benquo (post author)

          Eisenhower's on record as worrying that using nuclear weapons in the Korean War would create the inconvenient impression that America was interested in genocide against East Asians. So that's more evidence that the principals thought this would look a lot like an ethnic conflict to many observers. I agree that this conflict has many other important levels, like a clash of political systems and ideologies, and the more general contest for control and spheres of influence.

    2. Benquo (post author)

      I didn't specifically claim that the US ever threatened a nuclear strike for purely economic extortion reasons, nor do I specifically recall a case of this, but the combination of the US's incumbent status as economic core, its ability and willingness to defend the structures that status depends on, and its ability and willingness to block rivals' coordination to create and defend a workable replacement, means that it never actually has to escalate that far - other countries have to do things the US construes as escalation long before things get to that point. Consider the fact that US courts can seize the Argentinian government's financial assets, for instance. No nukes required.

      Given this context, I think the "incentives for bad behavior" framing is fundamentally confused about what's going on (see ChristianKl's comment on LessWrong). Even if it's not, I see no reason to think that Chinese and Russian leaders, for instance, aren't sincerely worried that Western encirclement or encroachment into buffer zones is a prelude to more direct pressure and interference with their countries' ability to self-govern.

      I'm not saying that one has to automatically grant concessions to anyone who claims to feel aggressed on. But before making that call one way or the other, it seems worth making a serious effort to understand the other side's perspective, which I don't feel like I see here.

      1. Anonymous

        Yes--the US gov doesn't need nukes to get its way in the world. My point proved for me. 🙂

        I don't think the incentives for bad behavior framing is confused at all.

        Of course the Russian and Chinese leaders are sincerely worried about such things. And of course the NK regime's perspective is accurately described by the narrative you've been spinning. (It's too late now but if you had asked me a week ago to write down what I thought the NK regime's perspective was, I probably would have written something very similar to what you wrote.) Everyone is the hero of their own narrative; everyone is the noble victim of unfair aggression, etc.

        As to whether & how much the NK regime's perspective is *correct,* or at least *more correct than the US regime's perspective,* well that would require a lengthy analysis of the history of the last hundred years, I think. Certainly the US regime's perspective is a little biased in their favor. But I still think on balance that the truth is significantly closer to the US regime's perspective than the NK regime's perspective.

        1. Benquo (post author)

          "Yes--the US gov doesn't need nukes to get its way in the world. My point proved for me."

          That's really, really not what I said and not the situation if you look at behavior. The US has openly used the threat of nuclear war to get what it wanted quite a few times! This is also clearly described in The Doomsday Machine. But it doesn't have to make the explicit threat on every occasion, for the same reason that it doesn't have to threaten a nuclear strike to seize Argentina's assets or assassinate a wedding party in Afghanistan. Small rivals can't efficiently route around the lesser control systems, and nukes have been used to intimidate actors that might organize into larger rivals. This is a big part of why the Soviet Union and China prioritized a nuclear deterrent!

        2. Benquo (post author)

          "if you had asked me a week ago to write down what I thought the NK regime's perspective was, I probably would have written something very similar to what you wrote."

          This might still be a useful exercise - if nothing else it would help with the double illusion of transparency. I expect that the emphasis you put on things will differ from the emphasis I would put on things in ways that would, if we followed up on them, clarify some deep model differences.

          I haven't actually given what I consider a full summary of the NK perspective, I've mostly just tried to point out that such a perspective probably exists, and given a bit of detail on what I expect to be common to perspectives of agents outside the US sphere of influence and trying to stay that way. Part of why I'm reluctant to try to represent the NK perspective is that I'm not an expert in Korean culture, I understand very little of it, and nearly all of what I know has been filtered through Anglo sources. I'm worried that claiming to represent that perspective to others will contribute to passing off a very poor copy as the original, and blind people to real differences between them and us that I can't see.


  3. Alexander Appel

    I think the proper nuclear argument is as follows:

    We know from the Cold War that there's a pretty high risk of nuclear accidents, and nuclear weapons also permit a brinksmanship tactic analogous to the game of chicken. The best move is to act like you'll use them, and then not; due to standard human biases, people will underestimate just how much of a risk they're taking. (See the India-Pakistan incident from a few months ago.)

    If there are n countries that have nuclear weapons, then there are (n^2-n)/2 relationships between countries that have the opportunity to degrade to the point where they engage in nuclear brinksmanship, or get an itchy-enough trigger finger that nuclear accidents are taken seriously instead of ignored.

    Also, due to the benefits of deterring invasion, many countries would seek nuclear weapons in the absence of other countries restricting the supply of fissile material.

    Therefore, to minimize the chance of nuclear war, the best move is to minimize the number of countries which have access to nuclear weapons.
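
    [A minimal sketch of that pairwise count - the specific values of n below are arbitrary examples:]

        # Distinct pairs among n nuclear states: n choose 2 = (n^2 - n) / 2.
        def risky_pairs(n: int) -> int:
            return (n * n - n) // 2

        for n in (2, 5, 9, 20):
            print(n, risky_pairs(n))
        # 2 -> 1, 5 -> 10, 9 -> 36, 20 -> 190: each new nuclear state adds
        # as many new risky relationships as there are existing nuclear states.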

  4. Benquo (post author)

    On merging utility functions, here's the relevant quote from Coherent Extrapolated Volition, by Eliezer Yudkowsky:

    Avoid creating a motive for modern-day humans to fight over the initial dynamic.
    One of the occasional questions I get asked is “What if al-Qaeda programmers write an AI?” I am not quite sure how this constitutes an objection to the Singularity Institute’s work, but the answer is that the solar system would be tiled with tiny copies of the Qur’an. Needless to say, this is much more worrisome than the solar system being tiled with tiny copies of smiley faces or reward buttons. I’ll worry about terrorists writing AIs when I am through worrying about brilliant young well-intentioned university AI researchers with millions of dollars in venture capital. The outcome is exactly the same, and the academic and corporate researchers are far more likely to do it first. This is a critical point to keep in mind, as otherwise it provides an excuse to go back to screaming about politics, which feels so much more satisfying. When you scream about politics you are really making progress, according to an evolved psychology that thinks you are in a hunter-gatherer tribe of two hundred people. To save the human species you must first ignore a hundred tempting distractions.

    I think the objection is that, in theory, someone can disagree about what a superintelligence ought to do. Like Dennis [sic], who thinks he ought to own the world outright. But do you, as a third party, want me to pay attention to Dennis? You can’t advise me to hand the world to you, personally; I’ll delete your name from any advice you give me before I look at it. So if you’re not allowed to mention your own name, what general policy do you want me to follow?

    Let’s suppose that the al-Qaeda programmers are brilliant enough to have a realistic chance of not only creating humanity’s first Artificial Intelligence but also solving the technical side of the FAI problem. Humanity is not automatically screwed. We’re postulating some extraordinary terrorists. They didn’t fall off the first cliff they encountered on the technical side of Friendly AI. They are cautious enough and scared enough to double-check themselves. They are rational enough to avoid tempting fallacies, and extract themselves from mistakes of the existing literature. The al-Qaeda programmers will not set down Four Great Moral Principles, not if they have enough intelligence to solve the technical problems of Friendly AI. The terrorists have studied evolutionary psychology and Bayesian decision theory and many other sciences. If we postulate such extraordinary terrorists, perhaps we can go one step further, and postulate terrorists with moral caution, and a sense of historical perspective? We will assume that the terrorists still have all the standard al-Qaeda morals; they would reduce Israel and the United States to ash, they would subordinate women to men. Still, is humankind screwed?

    Let us suppose that the al-Qaeda programmers possess a deep personal fear of screwing up humankind’s bright future, in which Islam conquers the United States and then spreads across stars and galaxies. The terrorists know they are not wise. They do not know that they are evil, remorseless, stupid terrorists, the incarnation of All That Is Bad; people like that live in the United States. They are nice people, by their lights. They have enough caution not to simply fall off the first cliff in Friendly AI. They don’t want to screw up the future of Islam, or hear future Muslim scholars scream in horror on contemplating their AI. So they try to set down precautions and safeguards, to keep themselves from screwing up.

    One day, one of the terrorist programmers says: “Here’s an interesting thought experiment. Suppose there were an atheistic American Jew, writing a superintelligence; what advice would we give him, to make sure that even one so steeped in wickedness does not ruin the future of Islam? Let us follow that advice ourselves, for we too are sinners.” And another terrorist on the project team says: “Tell him to study the holy Qur’an, and diligently implement what is found there.” And another says: “It was specified that he was an atheistic American Jew, he’d never take that advice. The point of the Coherent Extrapolated Volition thought experiment is to search for general heuristics strong enough to leap out of really fundamental errors, the errors we’re making ourselves, but don’t know about. What if he should interpret the Qur’an wrongly?” And another says: “If we find any truly general advice, the argument to persuade the atheistic American Jew to accept it would be to point out that it is the same advice he would want us to follow.” And another says: “But he is a member of the Great Satan; he would only write an AI that would crush Islam.” And another says: “We necessarily postulate an atheistic Jew of exceptional caution and rationality, as otherwise his AI would tile the solar system with American music videos. I know no one like that would be an atheistic Jew, but try to follow the thought experiment.”

    I ask myself what advice I would give to terrorists, if they were programming a superintelligence and honestly wanted not to screw it up, and then that is the advice I follow myself.

    The terrorists, I think, would advise me not to trust the self of this passing moment, but try to extrapolate an Eliezer who knew more, thought faster, were more the person I wished I were, had grown up farther together with humanity. Such an Eliezer might be able to leap out of his fundamental errors. And the terrorists, still fearing that I bore too deeply the stamp of my mistakes, would advise me to include all the world in my extrapolation, being unable to advise me to include only Islam.

    But perhaps the terrorists are still worried; after all, only a quarter of the world is Islamic. So they would advise me to extrapolate out to medium-distance, even against the force of muddled short-distance opposition, far enough to reach (they think) the coherence of all seeing the light of Islam. What about extrapolating out to long-distance volitions? I think the terrorists and I would look at each other, and shrug helplessly, and leave it up to our medium-distance volitions to decide. I can see turning the world over to an incomprehensible volition, but I would want there to be a comprehensible reason. Otherwise it is hard for me to remember why I care.

    Suppose we filter out all the AI projects run by Dennises who just want to take over the world, and all the AI projects without the moral caution to fear themselves flawed, leaving only those AI projects that would prefer not to create a motive for present-day humans to fight over the initial conditions of the AI. Do these remaining AI projects have anything to fight over? This is an interesting question, and I honestly don’t know. In the real world there are currently only a handful of AI projects that might dabble. To the best of my knowledge, there isn’t more than one project that rises to the challenge of moral caution, let alone rises to the challenge of FAI theory, so I don’t know if two such projects would find themselves unable to agree. I think we would probably agree that we didn’t know whether we had anything to fight over, and as long as we didn’t know, we could agree not to care. A determined altruist can always find a way to cooperate on the Prisoner’s Dilemma.

  5. Zack M. Davis

    "(This method of resolving disputes is also probably part of why animals have sex.)"

    Wait, really?? I see the obvious analogy (children are like compromise AIs that maximize a weighted sum of paperclips and staples), but I don't see how that could be the actual reason: I thought sex was about creating variation (populations of "offspring with the parental genes but not the parental genotypes" can acquire information faster and resist parasites better than asexual clones), and sexual dimorphism is about isogamy not being game-theoretically stable. (I've read a lot about this because of reasons.)

    1. Benquo (post author)

      I was talking with a biologist, but in cleaning this up for a general audience perhaps I should have made it clearer that the main, original reason is the one you mention. But it seems like the thing I'm talking about is a secondary repurposing that contributes at least a little to the overall prevalence of creatures that reproduce sexually.

