Totalitarian ethical systems

(Excerpt of another conversation with my friend Mack.)

Mack: Do you consider yourself an Effective Altruist (capital letters, aligned with at least some of the cause areas of the current movement, participating, etc.)?

Ben: I consider myself strongly aligned with the things Effective Altruism says it's trying to do, but don't consider the movement and its methods a good way to achieve those ends, so I don't feel comfortable identifying as an EA anymore.

Consider the position of a communist who was never a Leninist, during the Brezhnev regime.

Mack: I am currently Quite Confused about suffering. Possibly my confusions have been addressed by EA, or by people who are also strongly aligned with the stated goals of EA, and I just need to read more. I want people to thrive and this feels important, but I am pretty certain that "suffering," as I think the term is colloquially used, is a really hard thing to evaluate, so "end suffering" might be a dead end as a goal.

Ben: I think the frame in which it's important to evaluate global states using simple metrics is kind of sketchy and leads to people mistakenly thinking that they don't know what's good locally. You have a somewhat illegible but probably coherent sense that capacity and thriving matter to you, and that suffering matters in the context of the whole minds experiencing the suffering, not atomically.

There's not actually a central decisionmaker responsible for all the actions, who has to pick a metric to add up all the goods and bads to decide which actions to prioritize. There are a lot of different decisionmakers with different capacities, who can evaluate or generate specific plans to e.g. alleviate specific kinds of suffering, and counting the number of minds affected and weighting by impact is one thing you might do to better fulfill your values. And one meta-intervention might be centralizing or decentralizing decisions.

Since you wouldn't need to do this if the info were already processed, the best you can really do is try to see (a) how different levels of centralization have worked out in the past, in terms of benefiting from economies of scale vs. suffering costs due to value-distortion, and (b) whether there's a particular class of problem you care about that requires one or the other.

So, for instance, you might notice that factory farming creates a lot of pointless (from the animal's perspective) suffering that doesn't enable growth and thriving, but results from constantly thwarted intentions. This is pretty awful, and you might come up with one of many plans to avert that problem. Then you might, trying to pool resources to enact such a plan, find that other people have other plans they think are better, and try to work out some way to decide which large-scale plans to use shared resources to enact. (Assuming everyone with a large-scale plan thinks it's better than smaller-scale plans, or they'd just do their own thing.)

So, one way to structure that might be hiring specialists like GiveWell / Open Phil - that's one extreme, where a specialized group of plan-comparers is entrusted with the prioritization. At the other extreme there are things like donor lotteries, where if you put in X% of the funds needed to do something, the expected value of participating has to be at least X% of the value of funding the thing. And somewhere in the middle is some combination of direct persuasion and negotiation / trade.
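(To make the donor lottery arithmetic concrete with made-up numbers: suppose ten donors each put $1,000 into a pot, and one of them, chosen with probability proportional to contribution, directs the whole $10,000. Each donor has a 10% chance of directing $10,000, so the expected amount directed is 10% × $10,000 = $1,000 - exactly what they put in - but the winner gets to allocate a grant large enough to justify a single careful investigation, instead of ten donors each doing a tenth of the diligence.)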

Only if you go all the way to the extreme of total central planning do you really need a single totalizing metric, so to some extent proposing such a metric is proposing a totalitarian central planner, or at least a notional one like a god. This should make us at least a little worried about the proposal if it seems like the proposers are likely to be part of the decisionmaking group in the new regime. E.g. Leninism.

Mack: I'm...very cognizant of my uncertainty around what's good for other people, in part because I am often uncertain about what's good for me.

Ben: Yeah, it's kind of funny in the way Book II (IIRC) of Plato's Republic is funny. "I don't know what I want, so maybe I should just add up what everyone in the world wants and do that instead..."

"I don't know what a single just soul looks like, so let's figure out what an ENTIRE PERFECTLY JUST CITY looks like, and then assume a soul is just a microcosm of that."

Mack: Haven't read it, heard his Republic is a bit of a nightmare.

Ben: Well, it's a dialogue Socrates is having with some ambitious young Spartaphilic aristocrats. He points out that their desire to preserve class differences AND have good people in charge requires this totalitarian nightmare, since more narrow-minded people will ALSO want the positions of disproportionate power - to be captain of the Titanic, to use a metaphor from earlier (I actually stole the ship metaphor from Republic) - and be less distracted by questions of "how to steer the ship safely."

He describes how even a totalitarian nightmare like this will break down in stages of corruption, and then suggests that maybe they should just be happy with what they have and mostly leave other people alone.

Mack: That seems like...replacing a problem small enough for the nuance to intimidate you with one large enough that you can abstract away the nuance that would intimidate you if you acknowledged the nuance.

Ben: Yes, and it's not always a bad idea to try. But, like, it's one possible trick for becoming unconfused, and deciding a priori to stick with the result even if it seems kind of awful isn't usually gonna be a good move. You still gotta check that the result seems right and nonperverse when applied to particular cases, using the same criteria that motivated you to want to solve the problem in the first place.


Related: Egoism in Disguise
