Plato’s Gorgias explores the question of whether rhetoric is a “true art” that, when practiced properly, leads to true opinions, or whether it is a mere “knack” for persuading people to assent to any arbitrary proposition. Socrates advances the claim that there exists, or ought to exist, some true art of persuasion that is specifically about teaching people true things, and doesn’t work on arbitrary claims.
(Interestingly, the phrase I found appealing to use in the title of this post, "truth-friendly," is pretty similar to the literal meaning of the Greek word philosophy: "friendliness towards wisdom.")
Six principles of the knack of rhetoric
Robert Cialdini’s Influence is about the science of the “knack” of rhetoric - empirically validated methods of persuading people to agree to arbitrary things, independent of whether or not they are true beliefs or genuinely advantageous actions. He outlines six principles of persuasion:
- Reciprocity - People tend to want to return favors. An example of this with respect to actions is the practice of Hare Krishnas giving people “gifts” like a book or a flower, and then asking for a donation. A special case of this is “reciprocal concessions” - if I make a request and you turn it down, and then I make a smaller request, you’re likely to feel some desire to meet me halfway and agree to the small request.
- Commitment and consistency - People use past behavior and commitments as a guide to present behavior. If you persuade someone that they’re already seen as having some attribute, they’re more likely to want to “live up to” it. If you get people to argue for a point, even without any commitment to believing their argument, they’re more likely to say they believe it in the future. If you get someone to agree in principle to do a thing, they’re more likely to agree to specific requests to do the thing.
- Social proof - People use others’ behavior as a proxy for what’s reasonable. Advertisements exploit this by showing people using a product.
- Authority - People tend to accept the judgment of people who seem respectable and high-status, whether or not they are experts in the field in question.
- Liking - People are more likely to buy things from people they like.
- Scarcity - People are more eager to buy things that appear scarce. “Limited time offers” exploit this.
Six principles of the art of rhetoric
Making it easier for people to avoid these traps seems like a desirable attribute of a discourse, if we want to move more efficiently towards truth. Therefore, a rational rhetoric will have the following six principles, each one countering one of Cialdini's principles of the knack of influence:
- Non-reciprocity - Someone else trying hard to help you understand doesn’t require you to assent to their point of view. Repaying effort with effort is fine, but repaying effort with agreement is neither required nor encouraged. No one is punished socially for continuing to disagree even after others are nice to them. Nor is there any pressure towards “reciprocal concessions.” If I update towards your view based on your arguments, that does not require you to update towards my view based on my arguments. My arguments might be wrong, and should be judged on their persuasiveness, not on how accommodating I’ve been.
- Inconsistency - While holding people to a standard of present consistency is valuable, no one should ever be held to a standard of consistency with their past beliefs. If someone is persuaded that they were wrong, repudiating their past views should be met with respect, not shaming. Even - perhaps especially - if these were seemingly socially desirable views.
- Nonconformity - It must be acceptable to hold beliefs shared by almost no one. The mere fact of a view’s unusualness or idiosyncrasy must not lead to pressure to abandon it, except via reasoned argument.
- Egalitarianism - Someone high-status holding a belief must never be offered, in itself, as a reason to believe something. It’s OK to take track records into account, but the default response to naming an authority figure’s or local celebrity's beliefs as a reason for someone else to believe something is for it to be perceived as an argument from authority. Therefore, the track record argument should be made very explicitly, and with great care, if at all.
- Disagreeableness - Holding different beliefs should not lead to active shunning of any kind. (Bad conduct should, and of course it’s fine to find people who are consistently wrong uninteresting or not want to actively affiliate with them.) It must be acceptable to be good friends with someone when they disagree with you on lots of things. Expressing intellectual disagreement should not be read as an aggression, but at worst as orthogonal to friendliness.
- Abundance - There must be no pressure to “catch on” quickly. Discourse should be open, in the sense that there should be no deadline for becoming an insider. Anyone, at any time, with something to offer, should be listened to. Anyone interested should have access to the state of the art - novel arguments and claims should be published, when at all feasible, so that new people can get up to speed, rather than kept private so that new people have to be accepted as members of the club before they can learn its secrets.
Truth-seeking rhetoric and the Aumann agreement theorem
The Aumann agreement theorem states that rational agents with the same priors can't agree to disagree - once they find out that they disagree, they ought to update their beliefs based on this evidence, until they agree. Robin Hanson generalizes this to agents that don't share the same prior.
Note that a lot of the above suggestions tend to disagree with “Aumanning” as people try to practice it. I think that “Aumann agreement” as humans try to implement it combines poorly with known cognitive biases to produce information cascades: one trusted person shares a belief, which causes a second person to update substantially towards that belief. A third person then observes two people with that belief and adopts it, and so on. By the time someone with a justified belief to the contrary shows up, the "evidence" in the form of consensus is so overwhelming that they decide they must be wrong.
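The cascade dynamic described above can be sketched as a toy simulation. Everything here is an illustrative assumption rather than anything from the post: the agent count, the signal accuracy, and the rule of treating every earlier announcement as an independent signal - which is exactly the mistake that produces the cascade.

```python
import random

def cascade(n_agents=20, p_signal=0.7, true_state=1, seed=0):
    """Simulate a simple binary information cascade.

    Each agent sees a private signal (matching true_state with
    probability p_signal) plus all earlier public announcements, and
    announces whichever state has more "votes" -- treating every
    earlier announcement as if it were an independent signal.
    """
    rng = random.Random(seed)
    choices = []
    for _ in range(n_agents):
        signal = true_state if rng.random() < p_signal else 1 - true_state
        # Count "votes": own private signal plus all public announcements.
        votes_for_1 = choices.count(1) + (signal == 1)
        votes_for_0 = choices.count(0) + (signal == 0)
        if votes_for_1 > votes_for_0:
            choices.append(1)
        elif votes_for_0 > votes_for_1:
            choices.append(0)
        else:
            choices.append(signal)  # tie: follow own signal
    return choices
```

Once one side leads by two announcements, no single private signal can flip the count, so every later agent herds regardless of what they privately observe - their announcements carry no information, yet newcomers still read them as evidence.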
Sharing likelihood ratios of observed evidence instead of probabilities comes closer to an ideal humans can implement - but sharing arguments and specific considerations comes closer still. This means that while learning to feel discomfort at continued disagreement is probably good, it’s also good to learn to live with that discomfort, when the alternative is not being honestly persuaded, but updating based on the mere fact of someone else’s belief. Sometimes you actually should act on someone else’s belief - but when you do so, you should make that explicit. Say something like, “I’m doing this because person X believes it’s good and I trust their judgment”, so you don’t end up being a piece of false evidence in a belief cascade.
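A toy calculation illustrates why sharing likelihood ratios pools cleanly while sharing posteriors double-counts the shared prior. The specific numbers - a 1:4 prior and two observers each holding independent 3:1 evidence - are made up for illustration:

```python
def odds(p):
    return p / (1 - p)

def prob(o):
    return o / (1 + o)

def combine_lrs(prior_odds, lrs):
    """Correct pooling: posterior odds = prior odds times the product
    of the observers' likelihood ratios (valid for independent evidence)."""
    o = prior_odds
    for lr in lrs:
        o *= lr
    return prob(o)

def combine_posteriors_naively(posteriors):
    """Wrong pooling: multiply the odds of each reported posterior.
    Every observer's copy of the shared prior gets counted again."""
    o = 1.0
    for p in posteriors:
        o *= odds(p)
    return prob(o)

prior_odds = 0.25     # shared prior: 1:4 odds in favor of H
lrs = [3.0, 3.0]      # two observers, independent 3:1 evidence each

correct = combine_lrs(prior_odds, lrs)                  # 2.25:1 odds, ~0.69
each_posterior = [prob(prior_odds * lr) for lr in lrs]  # each reports ~0.43
naive = combine_posteriors_naively(each_posterior)      # ~0.36
```

The naive pool lands at a probability dragged back toward the prior, because the prior odds were multiplied in twice - once per observer's report. Sharing the likelihood ratios themselves avoids this, which is part of why sharing arguments and specific considerations (which let you reconstruct the evidence) works better still.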
UPDATE: On Facebook, Sarah Constantin commented:
It's interesting how many of your good practices have to do with the ability to sit with awkwardness. When it comes to ideas, you *don't* have to resolve social debts, eliminate disagreements or inconsistencies, or end uncertainties.
This seems basically right: sitting with awkwardness looks like an important socioepistemic virtue.
I was discussing how to frame the echo chamber/opposite-of-consilience issue with Malcolm Ocean. My proposed solution is to make asking 'whence?' much more available - that is, an explicit query about where a belief or piece of evidence came from, in order to figure out whether you are updating twice on the same piece of evidence. I think this is a large effect in essentially all subcultures.
Related is the mutually reinforced ontology hiding amongst the popular conceptual frameworks the subculture uses. We're fairly familiar at this point with the idea that a hedgehog can have an unfalsifiable belief structure and thus fail to update on evidence. Something I haven't encountered is the fox version of the same phenomenon (maybe there's a name or phrase for this I just don't know), whereby the combination of models is sufficient to explain all observed evidence and thus prevents updates. I think this is a reason to consider decompartmentalization an essential activity and not just a good idea. There should be a drive towards making the models cross-check each other, and towards asking pointed questions if they can't.
Like the opposite of an unsatisfiable ontology. Googling around I see a phrase I like: "an implausibly permissible ontology"
How about "irrational" agents? That would be much more useful - studying agreement as a function of degree of irrationality: when and how can such agents come to agreement, and what could serve as an enabler?