Against responsibility

I am surrounded by well-meaning people trying to take responsibility for the future of the universe. I think that this attitude – prominent among Effective Altruists – is causing great harm. I noticed this as part of a broader change in outlook, which I've been trying to describe on this blog in manageable pieces (and sometimes failing at the "manageable" part).

I'm going to try to contextualize this by outlining the structure of my overall argument.

Why I am worried

Effective Altruists often say they're motivated by utilitarianism. At its best, this leads to things like Katja Grace's excellent analysis of when to be a vegetarian. We need more of this kind of principled reasoning about tradeoffs.

At its worst, this leads to some people angsting over whether it's ethical to spend money on a cup of coffee when they might have saved a life, and others using the greater good as license to say things that are not quite true, socially pressure others into bearing inappropriate burdens, and make ever-increasing claims on resources without a correspondingly strong verified track record of improving people's lives. I claim that these actions are not in fact morally correct, and that people keep winding up endorsing those conclusions because they are using the wrong cognitive approximations to reason about morality.

Summary of the argument

  1. When people take responsibility for something, they try to control it. So, universal responsibility implies an attempt at universal control.
  2. Maximizing control has destructive effects:
    • An adversarial stance towards other agents.
    • Decision paralysis.
  3. These failures are not accidental, but baked into the structure of control-seeking. We need a practical moral philosophy to describe strategies that generalize better, and that benefit from the existence of other benevolent agents rather than treating them primarily as threats.

Responsibility implies control

In practice, the way I see the people around me applying utilitarianism, it seems to make two important moral claims:

  1. You - you, personally - are responsible for everything that happens.
  2. No one is allowed their own private perspective - everyone must take the public, common perspective.

The first principle is almost but not quite simple consequentialism. But it's important to note that it actually doesn't generalize; it's massive double-counting if each individual person is responsible for everything that happens. I worked through an example of the double-counting problem in my post on matching donations.
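
To make the double-counting concrete, here is a toy sketch of my own (not taken from that post): if each of several contributors to a jointly-caused outcome claims the whole outcome as their personal impact, the claimed totals sum to more than what actually happened.

```python
# Toy illustration of double-counting moral credit (hypothetical numbers, my own construction).
# Suppose a $100 donation is matched by a $100 matcher, producing $200 of giving.
contributions = [100, 100]

actual_impact = sum(contributions)  # $200 actually reaches the charity

# If each contributor reasons "without me, none of this happens, so I caused all of it,"
# then each claims the full $200 as their impact.
claimed_by_each = [actual_impact for _ in contributions]
total_claimed = sum(claimed_by_each)

print(actual_impact)   # 200
print(total_claimed)   # 400 -- the same $200 of impact gets counted twice
```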

The second principle follows from the first one. If you think you're personally responsible for everything that happens, and obliged to act on that rather than merely weigh it against your own tastes – and you also believe that there are ways to have an outsized impact (e.g. that you can reliably save a life for a few thousand dollars) – then in some sense nothing is yours. The money you spent on that cup of coffee could have fed a poor family in the developing world for a day. It's only justified if the few minutes you save somehow produce more value.

One way of resolving this is simply to decide that you're entitled to only as much as the global poor, and try to do without the rest to improve their lot. This is the reasoning behind the notorious demandingness of utilitarianism.

But of course, other people are also making suboptimal uses of resources. So if you can change that, then it becomes your responsibility to do so.

In general, if Alice and Bob both have some money, and Alice is making poor use of money by giving to the Society to Cure Rare Diseases in Cute Puppies, and Bob is giving money to comparatively effective charities like the Against Malaria Foundation, then if you can cause one of them to have access to more money, you'd rather help Bob than Alice.

There's no reason for this to be different if you are one of Bob and Alice. And since you've already rejected your own private right to hold onto things when there are stronger global claims to do otherwise, there's no principled reason not to try to reallocate resources from the other person to you.

What you're willing to do to yourself, you'll be willing to do to others. Respecting their autonomy becomes a mere matter of either selfishly indulging your personal taste for "deontological principles," or a concession made because they won't accept your leadership if you're too demanding - not a principled way to cooperate with them. You end up trying to force yourself and others to obey your judgment about what actions are best.

If you think of yourself as a benevolent agent, and think of the rest of the world and all the people in it as objects with regular, predictable behaviors you can use to improve outcomes, then you'll feel morally obliged - and therefore morally sanctioned - to shift as much of the locus of control as possible to yourself, for the greater good.

If someone else seems like a better candidate, then the right thing to do seems like throwing your lot in with them, and transferring as much as you can to them rather than to yourself. So this attitude towards doing good leads either to personal control-seeking, or support of someone else's bid for the same.

I think that this reasoning is tacitly accepted by many Effective Altruists, and explains two seemingly opposite things:

  1. Some EAs get their act together and make power plays, implicitly claiming the right to deceive and manipulate to implement their plan.
  2. Some EAs are paralyzed by the impossibility of weighing the consequences for the universe of every act, and collapse into perpetual scrupulosity and anxiety, mitigated only by someone else claiming legitimacy, telling them what to do, and telling them how much is enough.

Interestingly, people in the second category are somewhat useful for people following the strategy of the first category, as they demonstrate demand for the service of telling other people what to do. (I think the right thing to do is largely to decline to meet this demand.)

Objectivists sometimes criticize "altruistic" ventures by insisting on Ayn Rand's definition of altruism as the drive to self-abnegation, rather than benevolence. I used to think that this was obnoxiously missing the point, but now I think this might be a fair description of a large part of what I actually see. (I'm very much not sure I'm right. I am sure I'm not describing all of Effective Altruism – many people are doing good work for good reasons.)

Control-seeking is harmful

You have to interact with other people somehow, since they're where most of the value is in our world, and they have a lot of causal influence on the things you care about. If you don't treat them as independent agents, and you don't already rule over them, you will default to going to war against them (and more generally trying to attain control and then make all the decisions) rather than trading with them (or letting them take care of a lot of the decisionmaking). This is bad because it destroys potential gains from trade and division of labor, because you win conflicts by destroying things of value, and because even when you win you unnecessarily become a bottleneck.

People who think that control-seeking is the best strategy for benevolence tend to adopt plans like this:

Step 1 – acquire control over everything.

Step 2 – optimize it for the good of all sentient beings.

The problem with this is that step 1 does not generalize well. There are lots of different goals for which step 1 might seem like an appealing first step, so you should expect lots of other people to be trying, and their interests will all be directly opposed to yours. Your methods will be nearly the same as the methods for someone with a different step 2. You'll never get to step 2 of this plan; it's been tried many times before, and failed every time.

Lots of different types of people want more resources. Many of them are very talented. You should be skeptical about your ability to win without some massive advantage. So, what you're left with are your proximate goals. Your impact on the world will be determined by your means, not your ends.

What are your means?

Even though you value others' well-being intrinsically, when pursuing your proximate goals, their agency mostly threatens to muck up your plans. Consequently, it will seem like a bad idea to give them info or leave them resources that they might misuse.

You will want to make their behavior more predictable to you, so you can influence it better. That means telling simplified stories designed to cause good actions, rather than to directly transmit relevant information. Withholding, rather than sharing, information. Message discipline. I wrote about this problem in my post on the humility argument for honesty.

And if the words you say are tools for causing others to take specific actions, then you're corroding their usefulness for literally true descriptions of things far away or too large or small to see. Peter Singer's claim that you can save a life for hundreds of dollars by giving to developing-world charities no longer means that you can save a life for hundreds of dollars by giving to developing-world charities. It simply means that Peter Singer wants to motivate you to give to developing-world charities. I wrote about this problem in my post on bindings and assurances.

More generally, you will try to minimize others' agency. If you believe that other people are moral agents with common values, then e.g. withholding information means that the friendly agents around you are more poorly informed, which is obviously bad, even before taking into account trust considerations! This plan only makes sense if you basically believe that other people are moral patients, but independent, friendly agents do not exist; that you are the only person in the world who can be responsible for anything.

Another specific behavioral consequence is that you'll try to acquire resources even when you have no specific plan for them. For instance, GiveWell's impact page tracks costs they've imposed on others – money moved, and attention in the form of visits to their website – but not independent measures of outcomes improved, or the opportunity cost of people who made a GiveWell-influenced donation. The implication is that people weren't doing much good with their money or time anyway, so it's a "free lunch" to gain control over these.1 By contrast, the Gates Foundation's Valentine's Day report to Warren Buffett tracks nothing but developing-world outcomes (but then absurdly takes credit for 100% of the improvement).
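
As a rough illustration of the difference between "money moved" and counterfactual impact, here is a toy calculation of my own; the reallocation shares are the approximate figures from the footnote, and every other number is made up for the example.

```python
# Hypothetical illustration: headline "money moved" vs. a crude counterfactual adjustment.
money_moved = 100.0                 # headline metric, in millions of dollars (made up)

share_from_other_developing = 0.25  # would have gone to other developing-world charities anyway
share_from_developed = 0.25         # reallocated from developed-world charities
share_new_giving = 1 - share_from_other_developing - share_from_developed

# Counting only giving that wouldn't otherwise have gone to comparable developing-world causes:
counterfactually_new = money_moved * (share_new_giving + share_from_developed)

print(money_moved)           # 100.0 -- what the impact page reports
print(counterfactually_new)  # 75.0  -- smaller once reallocation is accounted for
```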

As usual, I'm not picking on GiveWell because they're unusually bad – I'm picking on GiveWell because they're unusually open. You should assume that similar but more secretive organizations are worse by default, not better.

This kind of divergent strategy doesn't just directly inflict harms on other agents. It takes resources away from other agents that aren't defending themselves, which forces them into a more adversarial stance. It also earns justified mistrust, which means that if you follow this strategy, you burn cooperative bridges, forcing yourself farther down the adversarial path.

I've written more about the choice between convergent and divergent strategies in my post about the neglectedness consideration.

Simple patches don't undo the harms from adversarial strategies

Since you're benevolent, you have the advantage of a goal in common with many other people. Without abandoning your basic acquisitive strategy, you could try to have a secret handshake among people trying to take over the world for good reasons rather than bad. Ideally, this would let the benevolent people take over the world, cooperating among themselves. But, in practice, any simple shibboleth can be faked; anyone can say they're acquiring power for the greater good.

It's a commonplace in various discussions among Effective Altruists, when someone identifies an individual or organization doing important work, to suggest that we "persuade them to become an EA" or "get an EA in the organization", rather than talking directly about ways to open up a dialogue and cooperate. This is straightforwardly an attempt to get them to agree to the same shibboleths in order to coordinate on a power-grabbing strategy. And yet, the standard of evidence we're using is mostly "identifies as an EA".

When Gleb Tsipursky tried to extract resources from the Effective Altruism movement with straightforward low-quality mimesis, mouthing the words but not really adding value, and grossly misrepresenting what he was doing and his level of success, it took EAs a long time to notice the pattern of misbehavior. I don't think this is because Gleb is especially clever, or because EAs are especially bad at noticing things. I think this is because EAs identify each other by easy-to-mimic shibboleths rather than meaningful standards of behavior.

Nor is Effective Altruism unique in suffering from this problem. When the Roman empire became too big to govern, gradually emperors hit upon the solution of dividing the empire in two and picking someone to govern the other half. This occasionally worked very well, when the two emperors had a strong preexisting bond, but generally they distrusted each other enough that the two empires behaved like rival states as often as they behaved like allies. Even though both emperors were Romans, and often close relatives!

Using "believe me" as our standard of evidence will not work out well for us. The President of the United States seems to have followed the strategy of saying the thing that's most convenient, whether or not it happens to be true, and won an election based on this. Others can and will use this strategy against us.

We can do better

The above is all a symptom of not including other moral agents in your model of the world. We need a moral theory that takes this into account in its descriptions (rather than having to do a detailed calculation each time), and yet is scope-sensitive and consequentialist the way EAs want to be.

There are two important desiderata for such a theory:

  1. It needs to take into account the fact that there are other agents who also have moral reasoning. We shouldn't be sad to learn that others reason the way we do.
  2. Graceful degradation. We can't be so trusting that we can be defrauded by anyone willing to say they're one of us. Our moral theory has to work even if not everyone follows it. It should also degrade gracefully within an individual – you shouldn't have to be perfect to see benefits.

One thing we can do now is stop using wrong moral reasoning to excuse destructive behavior. Until we have a good theory, the answer is that we don't know whether your clever argument is valid.

On the explicit and systematic level, the divergent force is so dominant in our world that sincere benevolent people simply assume, when they see someone overtly optimizing for an outcome, that this person is optimizing for evil. This leads perceptive people who don't like doing harm, like Venkatesh Rao, to explicitly advise others to minimize their measurable impact on the world.

I don't think this impact-minimization is right, but on current margins it's probably a good corrective.

One encouraging thing is that many people using common-sense moral reasoning already behave according to norms that respect and try to cooperate with the moral agency of others. I wrote about this in Humble Charlie.

I've also begun to try to live up to cooperative heuristics even if I don't have all the details worked out, and help my friends do the same. For instance, I'm happy to talk to people making giving decisions, but usually I don't go any farther than connecting them with people they might be interested in, or coaching them through heuristics, because doing more would be harmful: it would destroy information, and I'm not omniscient (if I were, I'd be richer).

A movement like Effective Altruism, explicitly built around overt optimization, can only succeed in the long run at actually doing good with (a) a clear understanding of this problem, (b) a social environment engineered to robustly reject cost-maximization, and (c) an intellectual tradition of optimizing only for actually good things that people can anchor on and learn from.

This was only a summary. I don't expect many people to be persuaded by this alone. I'm going to fill in the details in future posts. If you want to help me write things that are relevant, you can respond to this (preferably publicly), letting me know:

  • What seems clearly true?
  • Which parts seem most surprising and in need of justification or explanation?

(Cross-posted at LessWrong)

References

1 Their annual metrics report goes into more detail and does track this, and finds that about a quarter of GiveWell-influenced donations were reallocated from other developing-world charities (and another quarter from developed-world charities).

18 thoughts on "Against responsibility"

  1. ozymandias

    I disagree with your characterization of the Gleb situation. In my experience, rationalists and effective altruists are particularly bad at dealing with bad actors. (My experience is mostly with rationalists, as I don't commonly interact with non-rationalist effective altruists, but in my limited observations similar dynamics occur in the effective altruism community, and the treatment of Gleb seemed to me to be very similar to the treatment of bad actors in the rationalist community that I am aware of.) We are very averse to witchhunts and to anything that smacks of a callout, so we tend to offer forgiveness, to give people the benefit of the doubt, to try to settle problems privately instead of creating common knowledge, to assume that situations are misunderstandings, and to disbelieve accusations that someone was a bad actor. These are not bad impulses: forgiveness, mercy, and caution in believing bad things of others are virtues. But in the current rationalist and effective altruist community, these impulses make it very hard to deal with people who consistently do bad things.

    1. Benquo (post author)

      I'm confused about where you think our disagreement is. Would you try restating the thing I seem to be saying that you think is wrong here?

    2. Benquo (post author)

      Some thoughts in the meantime:

      I agree that forgiveness, mercy, and careful judgment are virtues. I agree that Rationalists and EAs both seem to have a general problem punishing bad actors. I think the reason for this is that we want to be sure the person's really bad (i.e. the enemy in their heart of hearts), instead of being willing to enforce standards of conduct (which could push away some good-hearted people who simply don't know how to have mutually beneficial interactions with us). Rationalists have at least been willing to dismiss people as boring or stupid sometimes (which perception I think in practice overlaps considerably with causing harm), though I agree there hasn't been enough policing of bad behavior.

      One thing Rationalists and EAs have in common is our tendency to take beliefs seriously, which tends to result (when unchecked) in believing that the labels on things are accurate. As a result, we absurdly expect bad actors to be honest about being bad actors. Science has a similar problem. It turns out that if someone's willing to mislead you, they will often be willing to blatantly lie about whether they are trying to mislead you.

      The kind of thinking where good people behave much like evil people, but for a good cause, makes us extremely dependent on shibboleths. Fortunately, this is not only an inconvenient thing to believe, it's also false. Mostly there aren't evil people - there are just malicious heuristics that people use, or toxic dynamics between people.

    3. hamnox

      Forgiveness, mercy, and caution in believing bad things of others are important virtues in a *closed* system, I think. The commons systems which tend to *survive* the assault of Moloch have a well-defined and stable pool of participants(1), which the rat+EA community patently does not. We can't really avoid people with a short time horizon self-identifying as EAs, defecting on a bunch of long-term coordination attempts, and then leaving before the consequences hit. We also can't fully avoid working with people who don't identify as EA or rationalist at all--and anyone who claims our network has a monopoly on truth and effectiveness is an idiot--flat out refusing to recognize the legitimacy of our agreements and enforcement thereof.

      (1: Ref. Ostrom's Governing The Commons book. Have a summary - http://wikisum.com/w/Ostrom:_Governing_the_commons.)

  2. tcheasdfjkl

    I think the one part I sort of disagree with is this (emphasis mine):

    >1. Some EAs get their act together and make power plays, implicitly claiming the right to deceive and manipulate to implement their plan.

    >2. Some EAs are paralyzed by the impossibility of weighing the consequences for the universe of every act, and collapse into perpetual scrupulosity and anxiety, mitigated only by someone else claiming legitimacy, telling them what to do, and telling them how much is enough.

    >Interestingly, people in the second category are somewhat useful for people following the strategy of the first category, as they demonstrate demand for the service of telling other people what to do. (I think the right thing to do is largely to decline to meet this demand.)

    I think it is good that there exist organizations like GiveWell that provide the service of "telling people what to do" with regard to effective giving. Personally I am willing to give money and not really willing to give time, so it's crucial for me that there be something like GiveWell's ranking of top charities, so that I can spend pretty minimal time deciding what to do. (It's not that I think GiveWell is always necessarily right, it's just that they spend far more time than I can afford figuring out what the best things to do are, so as long as they have similar values to mine they'll do better than I will.)

    I agree that the people providing this service need to be honest and cooperative. In this regard I am reassured by GiveWell's openness about past failures, and I'm also reassured by the presence of EAs who will put a lot more work than me into examining all the details (following such people on Facebook to passively become aware of such arguments is much easier for me than sitting down to do actual research myself).

    Though perhaps I'm talking about a somewhat different situation than you have in mind. I am not paralyzed by the need to figure out the best possible thing to do; I use GiveWell's recommendations as a way to do a good enough thing and a better thing than I would be able to do without those recommendations.

  3. Zeke Sherman

    >The above is all a symptom of not including other moral agents in your model of the world. We need a moral theory that takes this into account in its descriptions (rather than having to do a detailed calculation each time), and yet is scope-sensitive and consequentialist the way EAs want to be.

    There's one answer called rule consequentialism, and it has been around for a long time. It pretty much directly answers all the issues you mentioned about cooperation and adversarial actions.

    But I don't see why we should change moral theories in order to do this. Morality is the most fundamental axiom which drives behavior. Prima facie, it seems that having the wrong moral theory would lead to the highest costs in our ability to identify and locate methods of maximizing value, compared to other solutions.

    There seem to be other things that work as well: acknowledgement of the explicit and implicit costs of failures and harms (under an act consequentialist framework), following heuristics and guidelines (again, very reasonable under an act consequentialist framework), sophisticated decision theory (FDT? Acausal trading?), establishing high reputation costs for bad actions, and loyalty to the EA brand. None of these problems are new; they have been faced in similar forms by governments and social movements of all stripes, and I'm not convinced that the prevalence of utilitarianism makes it that much different. Humans are not automatons blindly following moral algorithms; the way that you construct and maintain a movement affects how people will think and act even if they nominally have the same moral theory. Personally, I think the last solution I mentioned is the best, but it dies a little bit every time someone says that the movement has a structural problem.

    And before you describe the need for a costly solution, have you accurately assessed the size of the problem? I've not seen a significant problem with EAs behaving poorly with regard to each other's values.

    >It needs to take into account the fact that there are other agents who also have moral reasoning. We shouldn't be sad to learn that others reason the way we do.

    I don't think a moral theory can "take into account the fact that there are other agents who also have moral reasoning" in a robust way. Moral theories can't step outside of morality and value other moral theories' morals. By definition, they can only care about what they care about. A better solution would be a metanormative theory, where agents are uncertain about moral theories.

    That a consequentialist would "be sad to learn that others reason the way we do" looks like a serious exaggeration to me; act consequentialists prefer that others are act consequentialists, because they can cooperate with those people and work towards similar ends. I've never seen nor suspected any instance of this sort of sentiment.

    Minor points:

    - Consequentialism in its most basic form says that you are responsible for everything that is within your control, not everything in general. One of the universal principles of moral philosophy is that "ought implies can".

    - The "double counting" you mentioned is a matter of attribution and praise. It has applications and is worth thinking about, but it's not an actual paradox or obstacle to consequentialist theory.

    - It's commonly understood by now that utilitarianism doesn't actually obligate having as little wealth as the global poor, for multiple reasons.

    1. Zeke Sherman

      Something which I neglected to mention above is that it's false that only utilitarianism believes we ought to cheat/steal/lie in order to help the poor. For instance, Peter Unger defended this in his book Living High and Letting Die. Egalitarian views would establish even stronger reasons to help the worst-off at the expense of the well-off. Many consequentialist views can support such actions depending on whether they take rights into account and how broadly they define their rules. And many nonconsequentialists might believe that the need to end poverty or farming is so strong that it overcomes other issues, such as 'threshold' deontology. Proffering the abandonment of utilitarianism as a solution here is dangerous, since it's simply false that other moral theories would never demand similar actions.

      1. Benquo (post author)

        I agree, "abandon utilitarianism" is not sufficient, and in some ways it's not necessary either. I've been trying to point to where I see neglected costs or benefits, and towards a better framework for thinking about this stuff, but the work is very much incomplete.

    2. Benquo (post author)

      >There's one answer called rule consequentialism, and it has been around for a long time.

      I agree that rule consequentialism is promising! I just wish folks would more explicitly work through how to use it on real decisions, instead of coming up with elaborate but badly incomplete quantifications of act-utilitarian considerations. I'm criticizing the implementations I see, not the literal content of the claims the best utilitarian philosophers make.

      >I've not seen a significant problem with EAs behaving poorly with regard to each other's values.

      Some examples:

      • I have trouble finding a coherent model in which GiveWell/OPP aren't leaving huge amounts of value on the table, though their behavior is somewhat consistent with believing that it is morally correct to simply build a giant pile of money and prestige. In any event, it's embarrassing that EA advertises itself using the top charities as examples, there's a $10 billion foundation under EA advisement, and we still can't get the Top Charities fully funded. (That's not necessarily wrong on its own but it's worrying in the context of the other stuff.)
      • There's yield-chasing behavior where EAs exaggerate the benefits of, or certainty around, favored interventions.
      • Peter Singer often strongly implies you can save a life for a couple hundred dollars (though in other contexts he admits this is off by at least an order of magnitude). We should expect this to lead to malinvestment.

      The "double counting" you mentioned is a matter of attribution and praise. It has applications and is worth thinking about, but it's not an actual paradox or obstacle to consequentialist theory.

      Double-counting moral credit in some cases but not others can lead to malinvestment. I worked through an example in an earlier post on matching donation fundraisers. Possibly utilitarians have a different name for this - do you know?

    3. komponisto

      >One of the universal principles of moral philosophy is that "ought implies can"

      This is, alas, not a universal principle of moral philosophy; see the concept of "moral luck", which some philosophers believe in.

  4. G Gordon Worley III

    I'm curious to see what thoughts you have re morals and ethics. It sounds to me, though I may be wrong, like you're headed in the same direction Eliezer, myself, and many others have gone: seeing morality as a pattern built on top of preferences rather than a system for reasoning about what one should prefer.

    That is to say, preferences precede morals, not the other way around. That we feel otherwise seems an error in our naive ontology that exists because it proved useful to our survival to feel compelled to prefer certain things even if we have strong reasons to prefer other things.

    My own take has been that you get different moral theories depending on how you choose to frame your preferences, which is why we get three broad classes of moral theory (roughly deontology, consequentialism, and virtue) depending on what you are trying to do with your preferences, but ultimately you can't see the whole picture until morals are sublimated by preferences.

    I've said some things on this topic already elsewhere, but if you have more specific questions maybe I can answer them to reify the position if you're interested.

    https://mapandterritory.org/nothing-is-forbidden-but-some-things-are-good-b57f2aa84f1b

  5. Pingback: The true outside view – Research Strategy Research

  6. Pingback: The True Outside View – Reason times

  7. Jeff A.

    How do you feel about attempting to determine what is valuable, attempting to convince others that these things are valuable, and that everyone should join together in taking responsibility for maintaining and promoting value in the universe?

    That's still an attempt at universal control. Distributed across benign agents, where benign is loosely defined as agreeing with one's values. Are you against that sort of universal control, too?

    You still end up with an adversarial stance towards other agents; those who aren't convinced that what you value is valuable or that they should avoid destroying that which you value.

    Decision paralysis doesn't strike me as a necessary consequent of taking responsibility. That's a separate malady with a separate cause and a cure that doesn't require abandoning responsibility.

    Is proselytization and collaboration a strategy that benefits from the existence of other benevolent agents rather than treating them primarily as threats, as you request? If not, I'm confused as to what might not be. Agents should only be treated as threats to the extent you're uncertain that they're benevolent.

    1. Benquo (post author)

      Attempts to persuade that try to be persuasive iff true are basically friendly. Trying to be more persuasive by sending an easier to parse message is also a net contribution. If you try to persuade/recruit along dimensions unrelated to truth then you're playing a zero-sum game. The correct share of your effort to allocate to zero-sum games is more than nothing, but in general you should be looking for ways to reduce it. If you try to persuade by causing people to believe falsehoods, then you're burning the epistemic commons as part of your zero-sum game, and that's really unfortunate.

      There's a big difference between not trying to limit the bounds of your influence, and trying to control outcomes in detail. If you're not reducing others' agency as a proxy goal in order to route more decisions through yourself, then you're not doing the thing I'm warning against. (And again, it's not always the wrong choice, it's just always very unfortunate.)

      1. Jeff A.

        Okay, so it sounds like you're not against heroic responsibility, at least not as I understand it. E.g., I don't think HPMOR!Harry was or is trying to route all decisions through him; from a pragmatic standpoint, fixing the world involves getting people on one's team, even aside from the moral reason (which supports "persuade iff true" and which I endorse, aside from exceptions like "this lie will save the world" which I am fuzzier about, because they are dangerous but maybe sometimes the right thing to do anyway). Though reducing others' agency might be a fair thing to do for HPMOR!Harry if they are clearly not value aligned and not going to be persuaded otherwise. Probably depends on how they're misaligned, what actions they're taking against your values, how you'd intervene, etc.

  8. Pingback: The Mechanics Fallacy | Radimentary

  9. Pingback: Moral differences in mediocristan | Compass Rose
