Judgment, Punishment, and the Information-Suppression Field

There are a lot of senses in which I or the people around me can be considered unsafe. Many-tonned hunks of metal whiz by us on the same streets we have to navigate on foot to buy our groceries. The social infrastructure by which we have access to clean drinking water is gradually being adulterated. Our country is run by increasingly nasty white nationalists. And, of course, The Bomb. But when I hear people talk about feeling unsafe, they are almost never describing a concrete threat to their physical well-being. (As usual, life may be different for the less privileged classes, who have reason to fear the authorities, and behave accordingly.) "Safety" does not come up as a motive for actions taken or avoided in order to mitigate such threats. Instead, it seems that "safety" nearly always means a nonjudgmental context (the exact opposite of what I would naively expect to be needed to ensure clean drinking water or keep the cars from colliding with us), and "feeling unsafe" is generally used to explain only why they're trying to withhold information (mainly "vulnerable," i.e. relevant-to-their-interests, information) in a way that seems out of proportion to actually existing risks and opportunities.

Judgment and Punishment

Consider a simple model: information about social censure consists of two parts. Each socially legible action is assigned a vulnerability score based on how often, empirically, someone responds by revealing the intent to punish the actor. Actions are sometimes defined contextually, so that talking loudly in a crowded bar or on the street is different from talking loudly in a library or theater, but it's not a different action depending on who is present - only impersonal context cues and stereotyped identities matter (e.g. some things are inappropriate "in mixed company"). Vulnerability is a global variable with respect to persons.

If Cato is observed to punish singing but not dancing, and Flaccus is observed to punish dancing but not singing, this is treated as unpredictable random variation - possibly just measurement error. Cato and Flaccus both acquire a reputation for judginess, and both singing and dancing start to feel like vulnerable activities, so people will feel inhibited about doing either activity in the presence of either censor.

At the same time, each known person is evaluated for their generic propensity to punish, or judginess. Some people will physically attack you for violating norms (they often wear dark blue or gray-green), others will just yell at you, still others will politely hint that others might disapprove, and a few are universal receivers, totally nonjudgmental. Revealing others' intent to punish is considered a veiled threat, and is therefore itself a mild form of intent-to-punish. To be nonjudgmental, one must deny others information about what is likely to be punished elsewhere.

We recognize judgmental people not merely by their actual punishment behavior (in the ancestral environment, where ostracism could easily be permanent and deadly, waiting for that evidence might have been cutting things quite a bit too close), but by their posture, the patterns of tension in their voice, and so on.
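To make the moving parts concrete, here is a minimal sketch of the model in Python. The class name, the numeric scores, and the update rules are all illustrative inventions rather than anything specified above; the only point is that vulnerability is tracked per action, judginess per person, and no per-(person, action) cell exists anywhere.

```python
from collections import defaultdict

class CensureModel:
    """Toy sketch of the model above. Names and numbers are illustrative:
    vulnerability is tracked per action, judginess per person, and no
    (person, action) interaction term is kept anywhere."""

    def __init__(self):
        self.vulnerability = defaultdict(float)  # action -> how punishable it feels
        self.judginess = defaultdict(float)      # person -> generic propensity to punish

    def observe_punishment(self, punisher, action, severity=1.0):
        # One observation raises both global scores; there is no cell for
        # "Cato punishes singing specifically."
        self.vulnerability[action] += severity
        self.judginess[punisher] += severity

    def observe_warning(self, warner, action):
        # Relaying others' intent to punish is itself a mild intent-to-punish.
        self.observe_punishment(warner, action, severity=0.25)

    def inhibition(self, audience, action):
        # Felt unsafety scales with the action's vulnerability and the
        # judginess of whoever is watching.
        return self.vulnerability[action] * max(
            (self.judginess[p] for p in audience), default=0.0
        )

m = CensureModel()
m.observe_punishment("Cato", "singing")
m.observe_punishment("Flaccus", "dancing")

# With no person-action interaction term, both activities now feel
# inhibited in front of either censor.
print(m.inhibition({"Cato"}, "dancing"))     # > 0, though Cato never punished dancing
print(m.inhibition({"Flaccus"}, "singing"))  # > 0, though Flaccus never punished singing
```

Note that the Cato/Flaccus confusion falls out of the data structure itself, not from any error in observation.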

I think that, for something so simple, this model fits surprisingly well how people in our society experience a sense of social safety or unsafety. One virtue of this model is that it correctly predicts that "code-switching," i.e. adjusting to variations in standards between different cultures for the same activity in the same context, is more difficult than learning different behaviors for different contexts within a single culture. Code-switching imposes a greater cognitive load because of its strong dependence on theory of mind: since vulnerability is tracked globally with respect to persons, adjusting to a new audience can't be handled by the cheap impersonal-context machinery, and has to be done by modeling particular minds.

The Information-Suppression Field

One important characteristic of this setup is that it structurally advantages information-suppression tactics over clarity-creation tactics.

If I try to judge people adversely for giving me misleading information, I end up complaining a lot, and quickly acquire a reputation for being judgmental and therefore unsafe. Ironically, I get more of the behavior I punish, since being categorized as judgy leads to people avoiding all vulnerable behaviors around me, not just the ones I specifically punished. I cut myself off from a lot of very important information, and in exchange, maybe slightly improve the average punishment function - but this would provide an information subsidy to all other judgy agents, even ones whose interests conflict with mine and who are trying to prevent me from learning some things. And most likely I just add to the morass of learned inhibitions.

On the other hand, if I wish to suppress some information - say, that some enterprise I'm profiting from is fraudulent - and I don't otherwise read as unsafe, then I can very slightly punish talk of it - say, by gently discouraging people from talking about it because it seems likely to be harmful, because it hurts some people's feelings, etc. If I only need to suppress a few pieces of information, and there are other REALLY judgy people out there, then I can externalize most of the enforcement costs onto either the actual judgy people or the imaginations of the people I am manipulating.

A simple example:

Alice has a pervasive sense that she is being cheated in life somehow, and lashes out from time to time at people who seem like they're piling on. Carol has a consistently gentle, positive vibe, and owns a drugstore from which Alice regularly purchases expensive homeopathic medicines. Bob, who knows both of them, starts to tell Carol about how he's done some thinking about it, and homeopathy seems to him like it couldn't possibly work. Carol hints to Bob that this is a sensitive subject. Bob reasons, implicitly, that if even Carol doesn't like him talking about his idea, he had darn well better make sure not to talk about it around Alice.
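Continuing the same toy sketch from above (the CensureModel class, names, and numbers remain illustrative assumptions), the asymmetry Bob is responding to can be run directly: Carol pays only the cost of a gentle hint, while Alice's pre-existing judginess does most of the inhibiting.

```python
m2 = CensureModel()

# Alice already reads as judgy from her past lashing-out.
m2.observe_punishment("Alice", "piling on", severity=3.0)

# Carol, who reads as safe, only gently hints that the topic is sensitive.
m2.observe_warning("Carol", "questioning homeopathy")

# Bob's felt inhibition around Alice is driven mostly by Alice's generic
# judginess, not by any enforcement Carol had to pay for herself.
print(m2.inhibition({"Alice"}, "questioning homeopathy"))  # 0.25 * 3.0 = 0.75
print(m2.inhibition({"Carol"}, "questioning homeopathy"))  # 0.25 * 0.25 = 0.0625
```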

This is an adversarial game that different secretive coalitions can play against each other, at the expense of other people trying to use censure for other reasons. All such moves, however, also benefit nonjudgmental people, who can collect a surplus from living in a society that relies on standards, while accruing a disproportionate amount of information and social capital by never contributing to attempts to track and censure misbehavior.

17 thoughts on “Judgment, Punishment, and the Information-Suppression Field”

  1. James Babcock

    This model predicts that these dynamics only happen in groups where the typical member is within a specific range of intellectual ability: people have to be smart enough to keep track of a judginess level for each person and a tabooness level for each topic, but not smart enough to keep track of person-topic interactions. In your example, if Bob were smarter, he would notice that Carol reacts atypically to the topic of homeopathy ("has a conflict of interest", in common parlance), and keep this separate from his model of how sensitive a subject it is.

    1. Benquo Post author

      It seems like your model implies that no one smart ever does things based on stupid models much simpler than they could construct if they thought about it. If that were true, talk therapy wouldn't ever work, emotions would be totally nonmysterious, and "impulsiveness" would not be considered a negative trait.

  2. Elo

    You would benefit from the ontological separation of the domains of concrete, subtle, and causal.

    Physical (concrete) safety is independent from subtle safety, and I have no idea what causal safety really looks, feels, or is like.

    (Best outlined in the book "Integral Spirituality", but also able to be googled.)

  3. Georgia

    > But when I hear people talk about feeling unsafe, they are almost never describing a concrete threat to their physical well-being. (As usual, life may be different for the less privileged classes, who have reason to fear the authorities, and behave accordingly.) "Safety" does not come up as a motive for actions taken or avoided in order to mitigate such threats.

    The rest of your piece seems to be getting at a real thing, but I wanted to point out that I hear people talk about feeling "unsafe" to refer pretty directly to risks of physical harm all the time, even in social contexts. Women walking alone at night, risks of reprisal from abusers, assault risk from people who are known to be willing to cross personal boundaries, whatever. These are more common and central examples to me than social retribution or even emotional pain.

    (I'm sure at least *sometimes* these fears are disproportionate to the real risk, but they're not trivial, and I simply mean to point out that physical safety often comes up in social contexts.)

      1. Georgia

        I think maybe 80% of the time I hear someone use a word like "unsafe", they're alluding to a physical risk. (Based on your other reply to my comment, I think I may have been unclear, hopefully this clarifies what I meant. This doesn't actually come up in conversation very often.)

        1. Benquo Post author

          Here are a few things I think might be generating this difference:

          1. I try to move things in directions I model as actually-safe, but which people have been conditioned or are motivated to construe as socially-unsafe, a lot more often than you do.
          2. Women generally don't talk about safety with men they're not dating, and usually not even with men they're dating, anywhere near as readily as they do with other women.
          3. The second reason seems to me like it's caused in large part by the general bias towards suppressing "actionable" information because it's likely to hurt someone's feelings. (Please, no one blame the victims for this; basically everyone with any privilege at all would do well to display more courage here, but blaming doesn't help - love's the only engine of survival.)

    1. Benquo Post author

      My sense is that maybe once in every tenth or hundredth potential interaction with someone, something that might have been nice doesn't happen because of someone's physical fear, but the kind of thing I'm describing prevents gains from trade multiple times per actual interaction, before even counting the times it prevents one or the other side from initiating an interaction in the first place.

  4. michael ray

    "in the far off ancestral environment, where ostracism could easily be permanent and deadly" doesn't resemble anything you see around you? asshole

    1. Benquo Post author

      It seems like you're implying that people with enough privilege to have upper-middle-class jobs in a major metropolitan area are sufficiently materially constrained that they mostly can't say things for fear of being punished. In practice, I observe substantial variation in what people feel "safe" saying, and no particularly strong correlation between this and being punished.

      It's much, much more likely that the life path that leads to occupying this sort of position usually involves learning mental habits that cause people to behave as though they faced such constraints. Cf. the story of the elephant and the chain.

      Your willingness to complain at all, however inarticulately - instead of just being totally silenced - implies at least some hope that this is the case. I'm curious why you're angry at me, though. Do you perceive this as a personal attack somehow? Do you think I intend to blame the people who have been silenced?

      I think that we comparatively privileged people should learn through experience that it's safe to say to each other much more than we have been saying, if we're willing to step beyond our conditioning and try to survive on the power of our own minds, and to keep trying even when people mostly-impotently try to punish us by yelling. But first, be kind to yourself and accept that you're afraid and that it couldn't have been otherwise, with the conditioning you've had. This poem by Leonard Cohen seems like the way to start.

  5. lauren

    This seems very messed up in its basic assumptions. People can be miscalibrated, but ben, come on, people are afraid of real threats, and arguing that people are making things up is a direct insult by nature of the belief about their honesty that it states, which is what I mean when I say the normal phrasing of that statement: it's invalidating of their experiences. Also:

    > Revealing others' intent to punish is considered a veiled threat, and is therefore itself a mild form of intent-to-punish. To be nonjudgmental, one must deny others information about what is likely to be punished elsewhere.

    This isn't actually the true rule, it's a low-resolution oversimplification of it that misses a core point: people value *predictability in what will be safe*, which means that giving others information about how actions will be unsafe in front of others *will make them feel safer*, as long as the person giving the information is themselves trusted to not have intent to harm. Society has built up a lot of communication protocols around stating intent to harm, and under an assumption of reasonably good faith, where apologies are not likely to be false and promises to not harm are at minimum trusted to be vaguely true at time of statement (though usually much more than that), it's quite possible to communicate that one thinks something is dangerous because someone else will be harmful about it while also communicating that the other agent's likelihood of being harmful about it is unreasonable.

    You're right that in the average case these averages are what's in play, but brains don't work based on plain averages; they work based on matching a suite of possible exemplar cases, which are vaguely averages of the concrete experiences those exemplars were chosen to combine. It depends heavily on what type of interaction you're having and with whom to predict what level they'll be thinking about this on, and in general the way to speak truth is to communicate that you're trustworthy by learning the language people use to establish trust and using it to say things that demonstrate reliable honesty and friendliness.

    1. Benquo Post author

      > People can be miscalibrated, but ben, come on, people are afraid of real threats, and arguing that people are making things up is a direct insult by nature of the belief about their honesty that it states, which is what I mean when I say the normal phrasing of that statement: it's invalidating of their experiences.

      On the other hand, they're making things up. I don't want to hurt people who are lying to me, but if they insist on construing invalidation of their lies as an attack, it's not good strategy to capitulate and help them hide the truth from themselves.

    2. Benquo Post author

      > and in general the way to speak truth is to communicate that you're trustworthy by learning the language people use to establish trust and using it to say things that demonstrate reliable honesty and friendliness.

      In my relevant experience this just leads to being gaslit more.

    3. Benquo Post author

      > people value *predictability in what will be safe*, which means that giving others information about how actions will be unsafe in front of others *will make them feel safer*, as long as the person giving the information is themselves trusted to not have intent to harm.

      It's unclear that "intent to harm" means anything specific across contexts here, given how often "harm" literally just means someone getting upset about their perceptions of someone else's intent.

  6. Pingback: CPTSD and Attachment | Compass Rose

  7. Pingback: Notes on the Autobiography of Malcolm X | Compass Rose

  8. Pingback: The Debtors’ Revolt | Compass Rose
