We were only pretending to engage with each other. But it wasn’t our fault. We had to pretend, because talking about bad faith is Not OK.
Related: Assuming Positive Intent, Bad intent is a disposition, not a feeling
Dialogue on a friend’s Facebook wall
He wrote: While patriarchy hurts men, it oppresses women.
I asked: What does oppress mean here?
He responded: “The exercise of authority or power in a burdensome, or unjust manner.” Same thing it has always meant.
I asked: Do I understand you right that you think women, but not men, are victims of the exercise of authority or power in a burdensome, or unjust manner?
He responded: Yes. To think otherwise is to be willfully ignorant.
But later in the same comment: I am saying that our society is cruel and unjust to both men and women, but its more cruel and unjust to women.
At this point I called him out for arguing in bad faith. But with a bit of perspective, I can see that I was arguing in bad faith as well.
What went wrong?
Oppression is a loaded term without a clear, consistent definition. There were reasonable things he might have meant, and there were unreasonable things he might have meant, and I wanted to know which he meant. Or so went my narrative. So I asked what it meant in this context.
The point at which I started expecting a bad-faith interaction is the point at which I asked what oppression meant in this context and was given a dictionary definition that clearly wasn’t the definition he was using.
There's no plausible model on which that was a sincere attempt to help me understand what he meant. Corroborating evidence: the definition he gave is the first definition offered on the first Google result I got for the term.
But if he wasn’t trying to help me understand, what was he doing? Offering an Official Definition was a purely defensive move, an attempt to score points for "answering the question" (or avoid social penalties for not doing so) on the assumption that people aren't keeping track of the overall content, but instead only responding to each transaction as a one-off.
One honest alternative would have been to try to think about why I might be confused about the meaning of "oppress" in this context, and explain what it meant here. But defensive moves are often responses to perceived bad faith on the other side. The other thing he could have done, and perhaps the more realistic option, was to actually tell me that he didn't think my question was in good faith, and explain why.
Sadly, being honest about your sense that someone else is arguing in bad faith is Officially Not OK. It is read as a grave and inappropriate attack. And as long as that is the case, he could reasonably expect that bringing it up would lead to getting yelled at by everyone and losing the interaction. So maybe he felt and feels like he has no good options here.
And I’m guilty of the same offense. When I asked my second question, I too was being dishonest. I didn’t expect an honest answer, but was pretending that I did.
I'm so sorry for that ghost I made you be
Only one of us was real and that was me
I heard the snake was baffled by his sin
He shed his scales to find the snake within
But born again is born without a skin
The poison enters into everything
And I wish there was a treaty we could sign
I do not care who takes this bloody hill
I'm angry and I'm tired all the time
I wish there was a treaty, I wish there was a treaty
Between your love and mine
As usual, I think you're being overly fundamentalist about the concept of "good faith".
It's a common discourse move, in our circles, when someone seems to be bullshitting, to ask probing questions that can be answered consistently iff there's a consistent model generating those answers! This is good! It is a truthseeking activity. If the interlocutor is in fact bullshitting, then this fact ought to be exposed to everyone watching the conversation. And if you're wrong and they actually do have a consistent model, then this line of questioning will expose that too.
I think there's something to what Amelia is saying, but I don't think 'probing questions' are universally a good thing.
There's this standard model of honest speech where a person is speaking directly from what they "really believe", and inconsistencies in speech are signs of lying (the bad faith you describe) or of error in something low-level (like you grabbed for the wrong word). If you change your argument in the middle of talking, that's definitely bad faith: you're obviously just changing your argument to a more locally convenient one so you don't lose the argument. That's where you get things like 'trying to catch someone in a contradiction'.
But if you introspect closely on your own process of speech, you'll notice that this model of 'honest speech' doesn't match your own process. This is most obvious when you do focusing where there's a specific step of 'trying on words and noticing when they don't feel right', not just single words but whole sentences. Basically all speech is like the focusing maneuver except there's less error correction happening.
There is unfortunately no direct access to 'what you really believe'; humans are universally rationalizing their reasons for their beliefs.
Now of course you can rationalize better or worse! There's still a fact of the matter about what you believe and why. And your micro-focusing process could be giving you either a correct or an incorrect answer.
The thing that is going on with a lot of people is that they themselves believe that their explicit words are one and the same as their beliefs. This obviously leads to a lot of discomfort with inconsistency and contradiction. The larger the discomfort, the harder actual communication is.
I think some amount of this discomfort at inconsistency is basically inescapable, and your brain is going to try to paper over it. All you can do is try to notice and address it. (I certainly do; if you think you don't do this at all, I'm pretty interested in how you manage that.) Unfortunately, I think most people suffer huge discomfort at inconsistency, which makes them very hard to actually communicate with on non-concrete things.
I would strongly guess the poster suffers a lot of discomfort at their own inconsistency.
A second thing that's probably going on is that the person endorses using the tactics for covering up inconsistency and contradiction because winning the argument is important. This is what I normally think of as 'bad faith', and I think this was also happening with your poster.
When I think someone is arguing in bad faith (and this happens a lot), I tend to not call them on it, and my most salient current explanations for why are:
1) Calling them on it makes it look like I'm in the minority on a position that's popular in our mutual social circle. This feels pretty scary. Maybe they'll gossip about me, and they and other people we know will trust me less in the future. When I read your dialogue with the poster, I feel this fear (I also probably wouldn't have responded as much as you).
2) I think that calling them on it would bring up pain for them which is quite hard to fix. This happens most often when people talk about a job or project they are heavily invested in. I think they might be thinking about a situation incorrectly, possibly willfully, and expect that calling them on this will make them first feel hurt and next try to paper that hurt over and continue with what they're doing.
You discuss accusations of bad faith being "read as grave and inappropriate attack[s]", which I think maps pretty well to the second reason.
But solving either of these problems in even a few cases sounds like a huge relief to me, and I'm guessing others would feel the same. Some ideas:
For 1): Lower the social threat of calling someone out. If the discussion is big and public, taking it small and private might help. Spending most of the early part of a conversation showing that you're willing to change your mind, rather than talking directly about the ideas, might help. (I bet sending many mixed signals messes this up, though.)
For 2): Credibly signal that you'll have the person's back in things even if you disagree with them. I'm not sure how to do this -- just saying "I disagree with you but still support you" tends to feel condescending and like it doesn't work, but my sample size is small.
If this comment is a confused mess, I can try to explain better / understand better. Bad faith generally feels pretty tangly in my head.
Mainstream common sense has become that all interactions are always in bad faith. The idea of etiquette is basically the working out of that: while it pretends to be a set of constraints that merely impede communication, its actual purpose is to block communication entirely, such that any adherence to the set of constraints while still communicating remains a violation of etiquette.
In general, bad faith is for allying with others who are also acting in bad faith. Once you detect it, discourse is over.
Here; let not the fact that striving to understand this is honorable mistake my judgement that the spirit of your understanding is entirely incorrect. I'll respond with my immediate, gut-level interpretation of each line. My understanding is that that will be more helpful to you than a gentler, less directly informative approach; you should correct me if this is wrong.
On reflection, after writing, my immediate interpretations feel correct, though I've noted when I've added to them a little. On reflection, I feel my weak pride has bested me even here, that the strength of my credence in what I have said has joined with it, and that I have let our battle end in a prayer more sorry than sincere. But what I have spoken will have such a limited audience here, that it is best I submit this much, and move on. Now:
"He wrote: While patriarchy hurts men, it oppresses women."
He's primarily communicating a feeling, itself born of a proposition he believes. Communicating the feeling in terms that make it literally sound like a proposition would have been reasonably expected to be more efficient than communicating the feeling in terms that make it literally sound like a feeling, at least to a typical listener.
Added: he's secondarily, de facto, communicating a proposition, too. With less thought, I'd say *I* don't know if he means to or not, but actually, I'd be missing the point in saying as much: most of the time, most *speakers* only figure out whether they actually meant to express a particular secondary implication or feeling after they've finished speaking, once they're (however indirectly) asked to clarify their meaning.
"I asked: What does oppress mean here?"
You literally meant to indicate uncertainty as to his meaning.
"He responded: 'The exercise of authority or power in a burdensome, or unjust manner.' Same thing it has always meant."
Here, he meant to communicate that, yes, he previously intended to communicate a particular feeling about women's rights, as would typically be assumed, rather than having intended to use those words in a different way than normal.
Added: he also intended to check you for not understanding his meaning previously; "same thing it has always meant" socially implies disapproval here.
"I asked: Do I understand you right that you think women, but not men, are victims of the exercise of authority or power in a burdensome, or unjust manner?"
I believe you are still unsure of his meaning, and trying to resolve that uncertainty. I may properly read (rather than just skim) your post to check that.
"He responded: Yes. To think otherwise is to be willfully ignorant.
But later in the same comment: I am saying that our society is cruel and unjust to both men and women, but its more cruel and unjust to women."
First, he's primarily annoyed that he hasn't been understood, and communicates that in a way that superficially sounds like a value judgement. Then, he transitions to, I think, reiterating exactly the meaning of the first line you quote, except (I think) as a qualification to his annoyance, rather than as a statement that stands on its own.
Now.
These gut-level interpretations of his lines are longer than those of yours. There is life behind that observation. There's a certain set of System-1-implemented social habits and skills that will let anyone glean most of the information value of these statements. And most of the information contained in the sorts of statements your Facebook friend was making is information that would, if it's gathered in the first place, be processed by your intention-gauging sense, by your sense of who is unjustly extracting concessions from whom by covert means, by your sense of whether you're being gaslit, by your sense of whether you're being misled or scammed; by that whole cluster of senses.
This speaks to your writing on integrity in EA. In short: you seem to have a few senses for OPP et al.'s fund allocation strategies, typical EA marketing strategies, and so on, that are quite acute, and you have used these to write about those things in eloquent detail. And though I never knew which path you might take, it was always readily intuitively clear to me what conclusions you were coming to before you started, and that those conclusions about integrity and honesty in EA were correct. From my gut sense. I've had the experience that non-EAs will smell that something curious is going on with said fund allocation and marketing strategies when exposed to the same information as EAs; the non-EAs will typically use their gut sense for this, and most of the EAs will discount their gut sense with qualifications about burdens of proof and what a morally reprehensible thing false positives are and so on, when they honor such a gut sense at all.
And this is half of the frustration I have with EA: if you have a poorly developed gut sense for this thing, of course you'll get false positives all over the place. But if your gut sense is well developed, you have lots of human social history to look back on showing that using your gut a bit makes it harder to fall prey to the systematic biases that would otherwise let one interpretation of the data mislead you. It might be interesting to look for an example of the other side of the coin: a historical case where people threw out their gut senses and were all the worse for it.
I might be wrong about exactly how common well-developed gut senses are; I've seen very strong ones in people of the same intellectual caliber as us who are artistic types, in those who are the type to strive for professional success. I get the feeling average people have somewhat more developed gut senses than most of us, though. I've also seen a bunch of people patch over parts of the skill gap with CFAR techniques--focusing, for instance.
And here; I am left to conclude like an adventure without its heroes, for there is not only much for me to perceive in this domain, but much wisdom and courage to first sew together from the dozen parts of me, and the path in even this is unsure. Be well, and sing your own light.
Thanks, this was helpful. This feels broadly consistent with my interpretation of what went on, but your description of his intent to communicate a feeling explains a lot that was sort of a black box to me. I was genuinely confused about what his words meant, but also had a vague sense that he wasn't using words to explicitly communicate structured content, in which case that curiosity is a non-sequitur. I didn't engage with this, and instead pursued the strategy that would have worked in a world more amenable to my preferences, in part because I expected that if I were honest about my sense of unease with what he was doing, I'd be attacked.
On the gut feeling about EA stuff, I think that also tends to generate massive false positives, albeit in a different way. For instance, a lot of people don't even have in their hypothesis space that someone might be trying to follow the rules in good faith.
You're welcome! On the gut feeling about EA stuff: the result I observe is that people with both* an actually well-developed/calibrated gut sense for this stuff, and high-level rationality (in the way people normally use that word), end up being able to figure things out correctly better than people who are just as good at one but not as good at the other. But the claim that gut feelings generate massive false positives feels super true as well. Even so, my itch is that this isn't true in a way that's relevant to the specific cases we ought to be interested in. How might that be?
We could posit a "valley of bad gut-feeling calibration", where at most points, making your gut feelings a bit more calibrated actually makes you worse off (at least if you start relying on them more), because it's so easy to go wrong with gut feelings. But it's possible to do better overall once you've gotten past a certain level of training your gut feelings. I mean, this is necessarily a bit of a metaphor, since "ability to figure things out correctly" is a function of more than just "extent of rationality" and "level of gut-feeling calibration". Not to mention that I mean something more specific than "gut-level feelings" by "gut-level feelings" throughout this comment, and am sort of hoping it is obvious what (but if it isn't, then I'm glad I checked, because that not only tells me I'll need to clarify before this comment can make sense, but that I need to be even more careful with non-literal meanings in the future).**
The idea of a "valley of bad gut-feeling calibration" feels pretty close, but, actually, that's really horrifying. Because rationalists have incentives to disbelieve it in the way I meant it, and I recognize I have strong social motives to believe it myself. I cut a number of paragraphs on the particular ways in which this would be bad, because I think the main reason is obvious enough: figuring out true things was hard enough already.
* This is highly correlated with, but just slightly distinct from, the "has all of the useful rationalist tricks, but acts more socially normal/is professionally successful/ has an immediate feel for people's trustworthiness" archetype. Maybe I need to go introduce more rationalists to even more competent non-rationalists?
** I don't know if I've ever mentioned how non-literal my default conversational style is, and I'm trying to be more literal here, but I think that, for you, it's going to be more useful to frequently ask for clarification with me than it is for you to do so with most people. I'll try to notice when messages aren't getting transmitted reliably, as well.
***My normal approach is to say that "the best way to understand the workings of System 1 is, unless there is reason to think otherwise, through System 1". I think this is actually true, but didn't pull it off here because I was trying to be more literal, and I'm not sure this actually helped.
A lot of my more theoretical posts recently have been helpful to me in developing a good enough model to notice these sorts of problems fast. For instance, I didn't notice the double-counting asymmetry until I thought about matching donations a bunch, but now I can see analogous asymmetries lots of places and have an intuitive sense of when they're being glossed over problematically. How does this relate to the thing you mean by gut feeling? I'm guessing you mean something more like being attuned to more purely social tells that someone's being dishonest or exploitative - is that right?
That's about right, actually--the sense for double-counting asymmetry is closest to what I had in mind, though there are situations I also mean to reference, where something that feels like being attuned to social tells of dishonesty performs a similar enough function as to be hard to distinguish from the former. I'm not decided on whether the latter is fundamentally just the former with a visceral, strong sentiment of oughtness attached.
I don't get it.
I think your interlocutor means,
"Yes. it is willfully ignorant to think that women, but not men, are the victims of the exercise of authority or power in a burdensome or unjust manner __by the patriarchy__".
If you leave out that last clause, the claim is absurd, but if you leave it in then he's not saying anything particularly unusual, and not in bad faith. I think he was (perhaps sloppily) assuming that clause.
I don't see how the purported explanation works any better with that clause added, and I don't think you're holding it to the Bayesian standard of "what would someone with a clear idea they're trying to explain to someone who doesn't already know it be likely to say?", which is what determines the odds ratio.
I think you're holding it to the much weaker standard of "what could I prove was not fair play to a somewhat motivatedly stupid third party?".