Model-building and scapegoating

When talking about undesirable traits, we may want to use simple labels. On one hand, simple labels have the virtue of efficiently pointing to an important cluster of behavioral predictions. On the other, they tend to focus attention on the question of whether the person so described is good or bad, instead of on building shared models about the causal structure underlying the perceived problem.

Slate Star Codex recently posted a dialogue exploring this through the example of the term "lazy." (Ozy's response is also worth reading.) I think that Scott's analysis itself unfortunately focuses attention on the question of whether assigning simple labels to adverse traits is good or bad (or alternately, true or false) instead of on building shared models about the causal structure underlying the perceived problem.

When I call someone lazy, I am doing two things. The first is communicating factual information about that person, which can help others avoid incurring costs by trusting the lazy person with some important tasks. This is shared model-building, and it's going to be more salient if you're focused on allocating resources to mitigate harm and produce things of value. In other words, if you're engaged in a community of shared production.

The second is creating a shared willingness to direct blame at that person. Once there's common knowledge that someone's considered blameworthy, they become the default target for exclusion if the group experiences a threat. This can be as simple as killing them and taking their stuff, so there's more per survivor to go around, but this can also take the form of deflecting the hostility of outsiders to the supposed one bad apple. This dynamic is called scapegoating, and it's going to be more salient when zero-sum dynamics are more salient. 

Even though I may intend to do only one of these, it's actually quite hard to act along one of these dimensions without side effects along the other. For instance, the Protocols of the Elders of Zion was famously intended to cast blame on Jews, but Japanese readers with no cultural history of anti-Semitism concluded that if Jews were so powerful, it would be wise to stay on their good side; consequently, the Japanese government made diplomatic overtures to Jewish leaders in the 1930s, despite Japan's alignment with the Axis powers.

For more on descriptive vs enactive norms, see Actors and Scribes and the prior posts it links to.

If clearly labeling undesirable traits marks the target for scapegoating, then people who don't want to do that may find themselves drawn to euphemism or other alternative phrasings.

When a single standard euphemism is proposed, this leads to the infamous "euphemism treadmill," in which the new term itself becomes a term of abuse, marking the target for blame or attack. For instance, "mentally retarded" was originally a highfalutin euphemism for "imbecilic," but was itself ultimately replaced with "intellectually disabled," and so on.

Perhaps a more stable solution is the one Scott and Ozy seem to suggest, of preferring more precise and detailed language. Using more precise and detailed descriptions helps avoid scapegoating since more precise usage is more likely to differ from case to case, making it harder to identify a consistently blamed party.

However, these solutions are not the same as the problem. There are multiple distinct interests here. You could want language to be more precise and not much care about the scapegoating side effect. Or you could mainly want to avoid contributing to a process in which a group is targeted for a broad array of social attacks, and not much care about what this does to the precision of your language. (Euphemisms sometimes decrease expressive power instead of increasing it.)

Crucially, neither of these interests requires that you find the thing you're describing unobjectionable, only that you don't want to scapegoat.

10 thoughts on "Model-building and scapegoating"

  1. Pingback: Rational Feed – deluks917

  2. J

    > Even though I may intend to do only one of these, it's actually quite hard to act along one of these dimensions without side effects along the other

    Yes, everything here, and especially this, seems correct.

  3. Pingback: MeToo is good | Compass Rose

  4. Pingback: Blame games | Compass Rose

  5. Eli

    > Even though I may intend to do only one of these, it's actually quite hard to act along one of these dimensions without side effects along the other.

    This seems extremely, and importantly, true, and I am quite interested in the project of trying to find techniques that more effectively separate them.

    1. Eli

      A [not even remotely sufficient, or even desirable] first thought is to put a moratorium on punishment, to change the incentive landscape around revealing what happened in some situation: cancel all the debts, and clear the slate. If you can credibly create a context in which people will not be attacked for their previous behaviors, then no one has the incentive to oppose talking about the bad behavior.

      Of course, this lets the bad behavior "off the hook", which we might have moral objections to, and which will be a hard sell to those who were harmed. This makes the approach less desirable, but also less tractable, since lots of people will oppose it, and it is hard to enforce a ban on punishments of all kinds (including social punishments, and "just not interacting with a person further").

      1. Benquo Post author

        Unfortunately, due to the nature of the problem, we have to figure out how to do this along a frontier, not within a community that's already able to deliberate together about policy.

        We can't effectively coordinate to all stop doing violence at once, so people are sometimes going to do things in perceived self-defense. We can discredit claims that punishments enforce standards (since standards are obviously not being followed). And we can develop a shared discourse that accurately distinguishes between not transgressing, transgressing but revealing information about the transgression, and coverups (and gradations between these).

        1. Eli

          > Unfortunately due to the nature of the problem we have to figure out how to do this along a frontier, not within a community that's already able to deliberate together about policy.

          By your own lights, is this problem solved within communities that are able to deliberate together about policy?

          If not, then that seems like a simpler not-quite-toy problem to start with. If yes, then we might want to study those solutions.

          I think I know how to avoid this problem with groups of 3 highly selected people, and have evidence of being able to navigate these dynamics among as many as 20 highly selected people. I posit that the dynamics of doing it with 50 similarly selected people are probably similar to the smaller numbers, but I don't have concrete evidence of that one way or the other.

          Relaxing the "highly selected people" constraint seems daunting. But I could imagine moderately intensive instruction in small batches that conveys the core idea and the technique for dealing with it, to a group of 300 people who are bought into some shared norms. As a case study, the concept of a "crux" has scaled pretty well, such that it can be invoked usefully in 100-person online discussions. Maybe the relevant concepts and strategies for separating these threads could be similarly compressed into a cognitively affordable package?

          The hard part is that these dynamics are triggering to people, in a way that the concept of "crux" is really not. Part of the strategy would need to be people productively responding to their own triggeredness, including in response to others' triggeredness. And while you can teach that skill set, it is much less robust: individuals are much less likely to manage to do that systematically (compared to merely using the concept of a crux, which is mostly neutral, not pushing _against_ some psychological pressure). So some fraction of the people in the discussion are going to fail, and you'll have spirals of triggeredness.

          But maybe you can deal with that, on the macro level, in much the way that one might on the micro level? Designated facilitators mindfully notice the triggeredness in the discussion without getting sucked into being triggered themselves, and guide the conversation back to the process designed to make headway.

          Still seems really hard to do on the public internets, because there will be low-context bystanders, and many people will be (correctly) concerned about the impact on those bystanders, and will therefore be micro-motivated (and maybe macro-motivated?) to disrupt the process.

          So the natural option is to do this not on the public internet, but on a merely semi-private internet.

          It seems like this could work, though it seems like it requires really a lot of leg work...

          This line of thought does somewhat incline me to get back into teaching conversational skills, en masse. I hadn't quite considered that if there is a critical mass of some subset of relevant skills, you can scale discussions much further. I still don't feel enormously enthusiastic, however.

          1. Eli

            (Note that I tried to mark most of this comment (everything after "I think I know how to avoid this problem...") as a digression, using a metaphorical open and close HTML tag. Unfortunately, the website appears to have parsed it as _actual_ HTML and hid my tags.)

  6. Pingback: Can crimes be discussed literally? | Compass Rose
