Perhaps much of what appears to be disagreement on how much dishonesty is permissible is in fact disagreement on how much words have meanings. I'll begin with a brief treatment of the reputation considerations for keeping one's word, and then complicate it.
Actors and Quakers
Imagine a world where there are 2 types of people:
- Actors[1] (people whose speech has to be understood in the context of the scene in which they're improvising, driven by the passion of the moment, who embody the enacting spirit of rajas or thumos.)
- Quakers (named for the actual historical Quakers)
Actors are known to be almost-always unreliable, and Quakers almost never lie.
If two Quakers go into business together they feel comfortable assuming that their contract will always be honored no matter what the short-run incentives are. If two Actors deal with each other, they can only trust what they can directly observe. They have to set up their arrangements so that at each step each of them has more incentive to cooperate than to defect.
If an Actor deals with a Quaker, the contract can impose long-run obligations on the Quaker in exchange for short-run actions from the Actor, because Actors know that Quakers can be trusted, but not the other way around. Naturally, Quakers do all the banking, etc. No one would give an Actor their money for the promise to give it back later.
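The constraint on Actor-Actor dealings — that at each step, cooperation must beat defection — can be sketched as a toy check. All numbers and names here are invented for illustration, not anything from the scenario above:

```python
def incentive_compatible(installments, relationship_value):
    """Return True if, at every step of the deal, the one-shot gain from
    defecting (pocketing the current installment without delivering) is
    smaller than the value of continuing (the remaining installments plus
    any ongoing-relationship value)."""
    for i, gain_from_defecting in enumerate(installments):
        value_of_continuing = sum(installments[i + 1:]) + relationship_value
        if gain_from_defecting >= value_of_continuing:
            return False
    return True

# A single 100-unit payment can't be trusted between Actors: whoever
# receives it has nothing left to lose by defecting.
print(incentive_compatible([100], relationship_value=0))       # False

# Ten small installments, plus some value to the ongoing relationship,
# keep cooperation the better option at every step.
print(incentive_compatible([10] * 10, relationship_value=20))  # True
```

Quakers need none of this machinery; their word substitutes for the step-by-step incentive structure, which is exactly why dealing with them is cheaper.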
If I'm a Quaker dealing with an Actor, is it right for me to lie or renege on a promise? After all, if I were forgetful enough to trust their word, they'd renege on their promise readily. But, because I know that, and they know that I know that, in some sense their words don't have the weight of meaning. It's my own fault if I trust them.
On the other hand, my words do have weight, so if they trust me, it's justified. Because I've never broken my word in the past, I enjoy an advantage: the ability to enter into contracts that require them to trust me. I might never be interested in lending to an Actor, but I might want to keep my reputation so that I can borrow from one. They're destroying nothing if they renege on a promise. I'm destroying an important asset if I renege.
I think this neatly divides two very different arguments:
- It's fine to be dishonorable towards people who haven't behaved honorably towards you.
- It's fine to behave dishonorably towards people who don't treat you as honorable.
Claim 1 argues that a Quaker should break their word to an Actor. Claim 2 argues that an Actor should break theirs to a Quaker.
Case 1: Ought a Quaker break their word to an Actor?
Claim 1 seems obviously wrong to me. The Quaker is destroying something of real value by breaking their word. The Actor's more or less in the position of someone who consistently refuses to make promises about the future. Their words about the future have no information-value.
Nor does the case get any less clear if you're not sure whether someone's an Actor or a Quaker. If they break their word, that should make you trust them less, but doesn't reduce the harm done if you break your word.
This suggests a rule that the honorableness of your conduct towards others should not depend on the honorableness of their conduct, although your trust in their honor should.
Case 2: Ought an Actor break their word to a Quaker?
Claim 2 seems like a weird edge case. By construction, Actors are known to be untrustworthy, so they should be surprised to ever be put in the situation of being trusted. If you don't expect to be able to pass as a Quaker in the long-run, then this is just a fluke, and you might as well exploit it for everything you can get.
Long-term contracts and incentives over time
The honorableness of your conduct towards someone else should generally not depend on the honorableness of their conduct. But one important exception to this rule is the case of contracts.
Ordinarily if you make a pledge, sign a contract, or in some other way make a promise about your future behavior, and receive some benefit from another party for this, they’re counting on you to do what you said even if you change your mind about whether you’d like to. To do otherwise would make many long-run agreements impossible. As mentioned above, lending money to an Actor without some sort of third-party enforcement is inadvisable. An Actor might be enthusiastic about the contract at the time, when it involves receiving money from you – but they’re likely to be much less excited later, when honoring the contract would instead mean disbursing money to you. “This no longer seems like a good deal” is not ordinarily a good reason to renege on an agreement.
The situation changes if you find out that you misunderstood your counterparty.
Suppose you sign a contract with someone that requires you to incur an expense now, so that they will do something for you further in the future. Then, after the promise is made, but before you're obligated to act, you learn that they've regularly broken their word and failed to honor contracts. As previously established, you should downgrade your trust in them.
The very same logic that excuses Actors against Quakers in the ordinary case – that their word doesn't mean anything – now justifies a change in your behavior. You did not knowingly agree to a contract with an Actor; you thought you were contracting with a Quaker. The terms of the contract you have are, in practice, worse than the terms you agreed to.
If the contract now seems terrible because you expect them to break their word, it's reasonable to renege. This is not justified as retaliation – it applies whether or not they've broken their word to you in particular – but by the fact that you now have reason to believe you made promises based on false assurances.
You still have an interest in trying to extricate yourself from the contract in a way that doesn't put you ahead of where you'd have been with no contract - i.e. to try to honestly unwind it. Otherwise, it's hard for others (and you!) to distinguish between breaking a contract because it was convenient, and breaking a contract because you no longer trusted the other party to follow through. For the same reason, it's often best to test the other person's willingness to follow through, instead of assuming that your judgment is right.
Wordbreaking as an imagined cost
We’ve moved from the realm of absolutes and binaries where everyone’s either a perfectly unreliable Actor or a perfectly reliable Quaker, to the realm of ordinary life, where people are finitely reliable to differing degrees, and many people make promises that can’t literally always be kept. Sometimes you promise to take out the trash tomorrow and then get hit by a truck. (Well, usually no more than once.)
It’s not really possible to discharge a deontological duty to never lie, if by lying you include making promises that you might not keep. Instead, we have to talk about assigning a cost to breaking one’s word.
People who assign a high cost to breaking their word behave as though there were substantial punishments for wordbreaking. They will be motivated by this internal incentive to spend more effort before making a promise, figuring out whether it’s keepable – and to add qualifiers and caveats beforehand to make the literal exact promise one they can keep. Thus, they reduce the expected cost of oathbreaking by reducing the probability that they will be unable to discharge their obligation. Later, if unforeseen circumstances make it more difficult for them to discharge that obligation, they will honor the contract if the cost of doing so is less than the moral cost of oathbreaking, and renege if the opposite is true.
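The decision rule described above can be written down as a toy model. Treating the moral cost as a single scalar is my simplifying assumption, and the numbers are invented:

```python
def keeps_promise(cost_of_keeping, moral_cost_of_breaking):
    """In the imagined-cost model, a promise is honored exactly when
    keeping it costs less than the internalized penalty for breaking it."""
    return cost_of_keeping < moral_cost_of_breaking

def expected_cost_of_promising(p_cannot_keep, moral_cost_of_breaking):
    """Adding qualifiers and caveats before promising shrinks the chance
    of being unable to deliver, and hence the expected moral cost
    incurred just by making the promise."""
    return p_cannot_keep * moral_cost_of_breaking

# A careful promiser (moral cost 100) hedges a risky promise down to a
# nearly-sure one before making it, cutting the expected cost tenfold:
print(expected_cost_of_promising(0.3, 100))   # roughly 30
print(expected_cost_of_promising(0.03, 100))  # roughly 3
```

This is just the high-cost promiser's behavior restated: effort goes into lowering `p_cannot_keep` up front, and the keep-or-renege comparison only happens when circumstances change.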
People who assign a low cost to breaking their word will be free and easy with their pledges, as authentic expressions of their current disposition:
There’s this idea of authenticity: you know who someone truly is by seeing them in their unguarded moments, seeing uncensored emotions, that’s when you can have a real interaction with them, that’s when you can see their true self.
[…] Even when my immediate reaction to a thing does get read as authentic, it may not use all my knowledge, may not be my endorsed judgment, and may not be the most true thing I know how to say. If I think things through and filter them, I can be more truthful than if I just react without thinking about whether what I’m saying is true.
Lying, to me, is not just telling an untruth that one consciously knows to be untrue. […] I feel that I have an affirmative duty to seek the truth, to follow up on nagging doubts and loose ends and bits of my beliefs that don’t quite line up. Telling someone something that, if I thought a bit harder, I could see to be untrue, feels to me like a kind of lying - telling a knowably-untrue thing, even if the knowledge is only potential.
I think that when people think of authenticity as the state of not having a filter, they have an idea of truthfulness that makes sense in an environment with much less trust. They don’t even consider the idea that the person they’re talking with will be able to reliably tell them true things - they may not even bother fully processing the literal content of verbal statements. They’re looking, instead, for verifiability.
To someone who perceives the cost of breaking contracts as low, having a filter about such things makes it seem like you have something to hide. If you’re guarded with your promises and do things like criticize popular pledges for being bad in some contingencies, you seem weirdly cagey. If the behavior feels like a good idea now, why not say you’ll do it?
Local and global integrity
If we believe the imagined cost model of keeping one’s word, a natural consequence is that one’s propensity for self-serving dishonesty is proportional to the stakes of the conflict. In extreme cases, even people of high integrity will just plain lie. However, this model is a poor fit for the observed facts. Some people lie even for very small gains. Others keep their word even when the stakes are very high. We'd like to be trustworthy even in high stakes situations, so it’s fortunate that it seems possible.
Contracts for Hobbesians
In Leviathan, Hobbes lays out a model in which the legitimacy of the state comes from the consent of the governed. But this sort of consent isn’t a good fit for what we ordinarily think of as consent in environments where we are already subject to the same law. Instead, it’s the kind of consent that arises from a “state of nature” logically (if not temporally) prior to the state.
We’re used to thinking of consent as a sort of contractarian thing that happens after the threat of violence has been removed. But if the state of nature is a state of violence, then the act of consenting was not free in that sense; people “consent” to be governed by deciding that it is less bad than not being governed.
Accordingly, the state, with its origins in violence, has a claim on you only to the extent that it can continue to offer you a better option than resisting it. As Hobbes writes [spelling modernized for your convenience]:
A Man's Covenant Not To Defend Himself, Is Void
A Covenant not to defend myself from force, by force, is always void. For […] no man can transfer, or lay down his right to save himself from death, wounds, and Imprisonment, (the avoiding whereof is the only end of laying down any right,) and therefore the promise of not resisting force, in no covenant transfers any right; nor is obliging. For though a man may covenant thus, "Unless I do so, or so, kill me;" he cannot covenant thus "Unless I do so, or so, I will not resist you, when you come to kill me." For man by nature chooses the lesser evil, which is danger of death in resisting; rather than the greater, which is certain and present death in not resisting. And this is granted to be true by all men, in that they lead criminals to execution, and prison, with armed men, notwithstanding that such criminals have consented to the law, by which they are condemned.
On this model, moral outrage at people lying to protect themselves or their friends from the threat of deadly violence is a sort of category error. If the state has not protected them, then they’re in a state of war towards their surroundings, so the threat of making war against them has no additional teeth.
We can loosely analogize verbal norms to one of many layers of government among people. If it’s hard enough for someone to keep their word, then they may decide that entering into a state of war against “words have meanings” is less bad than submitting to it. If you want honesty, you need to reward honest reports.
Integrity, or a bagel?
The above exposes two important problems with the imagined cost model of keeping one’s word. First, it doesn’t predict the prevalence of deception in low-stakes situations. Second, it doesn’t point towards a way to hold onto integrity in high-stakes situations, which is an important desideratum.
In Prices or Bindings?, Eliezer Yudkowsky describes both these problems with the imagined cost model:
During World War II, Knut Haukelid and three other saboteurs sank a civilian Norwegian ferry ship, the SF Hydro, carrying a shipment of deuterium for use as a neutron moderator in Germany's atomic weapons program. Eighteen dead, twenty-nine survivors. And that was the end of the Nazi nuclear program. Can you imagine a Hollywood movie in which the hero did that, instead of coming up with some amazing clever way to save the civilians on the ship?
Stephen Dubner and Steven Levitt published the work of an anonymous economist turned bagelseller, Paul F., who dropped off baskets of bagels and came back to collect money from a cashbox, and also collected statistics on payment rates. The current average payment rate is 89%. Paul F. found that people on the executive floor of a company steal more bagels; that people with security clearances don't steal any fewer bagels; that telecom companies have robbed him and that law firms aren't worth the trouble.
Hobbes (of Calvin and Hobbes) once said: "I don't know what's worse, the fact that everyone's got a price, or the fact that their price is so low."
If Knut Haukelid sold his soul, he held out for a damned high price—the end of the Nazi atomic weapons program.
Others value their integrity less than a bagel.
One suspects that Haukelid's price was far higher than most people would charge, if you told them to never sell out. Maybe we should stop telling people they should never let themselves be bought, and focus on raising their price to something higher than a bagel?
But I really don't know if that's enough.
The German philosopher Fichte once said, "I would not break my word even to save humanity."
Raymond Smullyan, in whose book I read this quote, seemed to laugh and not take Fichte seriously.
Abraham Heschel said of Fichte, "His salvation and righteousness were apparently so much more important to him than the fate of all men that he would have destroyed mankind to save himself."
I don't think they get it.
If a serial killer comes to a confessional, and confesses that he's killed six people and plans to kill more, should the priest turn him in? I would answer, "No." If not for the seal of the confessional, the serial killer would never have come to the priest in the first place. All else being equal, I would prefer the world in which the serial killer talks to the priest, and the priest gets a chance to try and talk the serial killer out of it.
It’s surprising, if people are implicitly pricing integrity at a constant value, that some set their price so low as a bagel. It’s especially surprising that wealthier people like executives, with a presumably lower marginal utility for money, seem to set an unusually low implied price. This model is also, as Yudkowsky points out, totally inadequate for high-stakes scenarios where we might want to be able to count on someone’s integrity even if whole lives, or the entire world, were at stake.
One response might be a sort of modified incentive model, in which you imagine you’re not paying a constant cost, but whatever damages would be assessed by some imaginary court. So you'll be comparatively relaxed in low-stakes situations, but tighten up as your words are more likely to affect others' actions or well-being. You might still break your word when the costs of keeping it are unreasonably high – but you’ll take into account the high cost of being a wordbreaker on high-stakes things. The imaginary court would want the serial killer to know that there are very high penalties to breaking the seal of confession. But these penalties don’t need to be infinite, which avoids the problem where perfect integrity means never making promises.
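The contrast with the constant-price model can be made explicit in a sketch. The numbers are invented, and `assessed_damages` stands in for whatever the imaginary court would award:

```python
def breaks_word_constant(gain_from_breaking, constant_price):
    """Constant-price model: break your word whenever the gain exceeds
    one fixed, stakes-independent price."""
    return gain_from_breaking > constant_price

def breaks_word_court(gain_from_breaking, assessed_damages):
    """Imaginary-court model: the internal penalty tracks the damages the
    court would assess, so it scales with the stakes of the promise."""
    return gain_from_breaking > assessed_damages

# Low stakes: the court model is comparatively relaxed about a bagel...
print(breaks_word_court(gain_from_breaking=1.0, assessed_damages=0.5))     # True
# ...but not about the seal of the confessional, where the assessed
# damages are enormous (though finite, so promising remains possible).
print(breaks_word_court(gain_from_breaking=1000.0, assessed_damages=1e9))  # False
```

A single `constant_price` can't produce both behaviors at once, which is the model's failure the bagel data exposes.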
This approach fails in a few cases, though. Is there a duty to disclose important information that you think harms people? What about deceiving an individual in order to help them? It’s easy to rationalize your way into imagining that you would persuade the imaginary court not to assess damages.
A related answer to the problem is Paul Christiano’s integrity for consequentialists proposal:
I aspire to make decisions in a pretty simple way. I think about the consequences of each possible action and decide how much I like them; then I select the action whose consequences I like best.
To make decisions with integrity, I make one change: when I imagine picking an action, I pretend that picking it causes everyone to know that I am the kind of person who picks that option.
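A minimal sketch of Christiano's rule, with invented numbers and hypothetical action names:

```python
def choose_with_integrity(actions, value_of_outcome, value_if_policy_known):
    """Score each action as if choosing it also caused everyone to know
    you are the kind of person who picks that action, then take the best."""
    return max(actions,
               key=lambda a: value_of_outcome(a) + value_if_policy_known(a))

# Lying wins a little in the moment, but being known as someone who lies
# in this situation is very costly, so the rule keeps its word.
outcome = {"lie": 5.0, "keep_word": 0.0}
reputation = {"lie": -50.0, "keep_word": 10.0}
best = choose_with_integrity(
    ["lie", "keep_word"],
    value_of_outcome=lambda a: outcome[a],
    value_if_policy_known=lambda a: reputation[a],
)
print(best)  # keep_word
```

The `value_if_policy_known` term is what distinguishes this from plain consequentialism: it is counted even when, in fact, no one would find out.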
This is a more general version of the imaginary court idea. But it still fails to explain why some people would value their integrity at less than the price of a bagel.
Of course they could just be making some mistake. Or they could be unusually selfish or short-termist, and correctly assessing that in some situations there’s little practical chance of getting caught. But I don’t think that’s all of it.
Integrity in the environment
The above models presume that there's a choice as to whether to lie or to tell the truth. But it seems that there's an underlying variation in disposition that makes lying easier for some people, and truth-telling easier for others.
When I want to come up with a speech-act to accomplish a certain goal, the plans my mind generates are ones that accomplish the goal by informing people of reasons for it. If I want to conceal some information from others, that's an additional constraint that's not natural for me to think of. This gives me an advantage in long-run cooperation. By contrast, if I found it more natural to think of the people around me as social objects whose levers can be worked by my words, I'd be better at social manipulation and affiliation, but worse at transferring nonsocial information or evaluating arguments.
My working hypothesis is that some people mainly perceive their environment as one in which words have meanings, while for others speech-acts are primarily construed as social moves. If you're in the first group, you imagine that people are tracking honesty of attempts to inform in a way that contributes to your reputation. If you're in the second, you might think that they're mainly interpreting your words as a statement about your current posture and intent. Are you on their side? What are you about to do next? Where are you trying to point their attention?
When I was in high school, one of my friends introduced me to the Marx Brothers. At some point, he said, "hey, we should put on a Marx Brothers show over the summer." I asked him if he'd be willing to play Harpo. He said yes. We then started "casting" the musical, thinking of people in the theater group who might be good fits for different parts.
Next, I went around, asking the people we'd picked if they'd be up for doing a Marx Brothers musical, and playing such-and-such a part, over the summer. They said yes. I got a pianist to say they'd be up for playing accompaniment.
Next, I secured a venue at the local public library, and the rights to perform the show, and gave everyone involved a copy of the script. I didn't look for a summer job, because that would interfere with what I expected to be the considerable time it would take me as producer to put the show together.
Then, when the summer started, I asked people to show up at such-and-such a time to begin rehearsing. It turned out that various people had gotten summer jobs, no one had taken it upon themselves to learn their lines, etc. At some point in the summer, it became obvious to me that we could not possibly put together a musical in time.
I downgraded my ambitions. I found a short comic play in the public domain we could do, and got a subset of the actors to agree to do that. I got scripts to them. I scheduled a rehearsal, at a time they said was okay. They didn't show up.
What mistake was I making? I think my mistake was to assume that, if I asked someone, "will you play this part in this play?", and they replied "sure!", they had thought about the implied time commitment, and decided to commit to putting in the work it would take to play the part. I think that their interpretation was that they were expressing preliminary interest in a thing, and that they weren't committing to do any of the work implied. My mistake, on this model, was taking their words as assurances about their future actions, rather than as expressions of enthusiasm.
They figured I wasn't all that serious, because I wasn't directing their attention by a process of persistent rapport-building, attunement, and gradual escalation. The correct strategy for me, on this model, would have been to wait until they had demonstrated some willingness to put in work, before putting in more of my own.
No one was deceiving me on purpose. This is not a purely adversarial world I'm describing. Nor is it purely short-run thinking; enthusiasm-based methods of cooperation can last for quite a while, and be very cooperative. But it does make some forms of cooperation much more expensive. To execute the correct strategy, I'd have had to start by calling a meeting, for the sole (unstated) purpose of determining who was willing to do the work of actually showing up for this. And I don't like it when people do that to me; it's condescending, devalues my time, and doesn't take me at my word. It's an insult.
Perhaps the extent to which people use speech this way depends on their actual experience of their environment. It would be surprising if this happened not at all; I definitely perceive some speech as entirely a social move, and other speech as entirely about describing objective reality. Sometimes I mechanically respond to "how are you?" from acquaintances with "fine, how are you?". Other times, if a friend I haven't seen in a while tells me they'd like to know what's been going on in my life, and how I’m doing, I'll take the query literally. It's hard to see how this would happen if I weren't at least partly learning from context when words are meant to have meanings.
But there also seem to be deep differences in how much of speech people are disposed to think of as belonging to each class – even people in my own social set.
Parselmouths and Quaker class consciousness
There's another case not quite covered by the Actor-Quaker distinction: partial, situational integrity. For instance, a group of people may want to be honest with one another, perhaps in one particular domain, but deceitful towards outsiders. Eliezer Yudkowsky's Harry Potter and the Methods of Rationality provides a dramatic illustration; descendants of Salazar Slytherin have a private language called Parseltongue (the language is also a control interface for snakes) in which it is impossible for them to lie:
Harry swallowed. Snakes can't lie. "Two pluss two equalss four." Harry had tried to say that two plus two equalled three, and the word four had slipped out instead.
"Good. When Salazar Slytherin invoked the Parselmouth curse upon himself and all his children, his true plan was to ensure his descendants could trust one another's words, whatever plots they wove against outsiders."
Are there advantages of being a Quaker over being a Parselmouth? I've already argued that in particular cases there can be advantages in being trusted by the untrustworthy. A Quaker bank might not be happy to lend money to Actors, but it should be happy to have them as depositors.
But I don't automatically get credited for my attempts to say what I mean and no more. If the people around me have no idea that this might even be a thing, then what incentive do I have to keep doing it? And yet, I don't find myself smoothly adjusting to my circumstances - I find myself awkwardly trying to say only and exactly what I mean, even in circumstances when people are expected to exaggerate, so I'll be taken to mean much less than I do. I suspect that it's not quite possible for humans to completely fine-tune their honesty case by case. I suspect that it's hard to learn that words have meanings here but not there, that justice is a virtue in this place but a vice in that one.
Instead, I suspect that for the health of the souls of those who are dispositionally inclined towards treating words not as mere reports of current inclinations, but as things designed to stand enduringly, monumental inscriptions meant to be true long after the time in which they were written passes away, these people need an environment where this is in fact globally the case.
If Actors don't know about Quakers - then that destroys the practical incentive for Quakers as a class to keep their word towards Actors. And yet, some actual human, in a world with no Quakers, invented actual Quakerism, and found people who were willing to join him.
Right now, Effective Altruists are arguing about a couple of internal issues:
- How acceptable is it to follow standard marketing practices when promoting Effective Altruist charities? If it's normal, if it's expected, is it even dishonest?
- How seriously should we take a pledge that is not legally binding?
The actual historical Quakers did a few things that were interesting in this regard:
- They refused to follow what was then the standard commercial practice of setting prices higher than the market rate, to be bargained down, even though everyone else expected it.
- They refused to swear oaths in court, because it would deprecate all their other words, which – though they were not legally binding – they felt morally obliged to make equally truthful.
- They had special meetings, in which Quakers could simply talk to a room full of other Quakers.
It's not clear to me that everyone can or should be Quakerlike. Perhaps it is an impossible and therefore unfair standard for ordinary people. But the actual historical Quakers seem to have done a lot of good, and accumulated an impressive reputation, simply by being themselves, and not compromising. They did not adopt a Parselmouth strategy, and I'm pretty sure that this is because the Parselmouth strategy is impossible for humans. Perhaps there are others like me to whom a Quakerlike standard sounds good, like a harbor, a respite from a sickening sea of lies. I think they should probably go for it. Perhaps they should even have special meetings.
(At some point I'm going to go see how the actual Quakers do it too - some of them are still around.)
I'm not advocating full secession. I'm not advocating the equivalent of "going Galt." Not yet, anyway. There are people out there to cooperate with, even if they don't use words the way people like me would like. And they matter. There is only one world, and we have got to learn how to share it with each other.
But it really does seem to me that our society has engaged in a long campaign to deny the existence of people who believe that words have meanings, and have opinions about facts, and think about the exact nature of the obligation before making a promise. Under such circumstances, you should "put your own oxygen mask on first," and only focus on helping others in ways consistent with meeting your own basic needs. I think it's important to ensure your own spiritual survival and safety. Perhaps these Quakerlike types, like the actual historical Quakers, need a safe space to meet with friends – and before that, a way to find each other.
[1] An earlier version called this group Crookeds. I've updated this post to use the less derogatory "Actors" per PDV's suggestion.
"At some point I'm going to go see how the actual Quakers do it too - some of them are still around"
I expect you will be disappointed. Modern Quakerism is pretty similar to modern Unitarianism. If a modern Quaker said "you have my word as a Quaker that ..." I would trust them pretty far, but if someone just said something I wouldn't put more stock in it than something said by a demographically matched control.
(I was raised Quaker)
I expect so too but it seems silly not to check.
"Have opinions about facts."
Doesn't everybody? I'm confused by what you mean.
Otherwise, helpful and insightful. Thank you.
Plenty of things that would be mere factual assertions if you took words literally in practice commit one to political positions that aren't logically entailed by the factual assertions. Citing specific examples of such things is itself taken as strong evidence that you hold the unorthodox view. Happy to try to think of some anyway if this wasn't clear enough.
Something like Paul's idea or an environmental/contextual one seems like a better default to me than Quakerism. Some considerations:
1) I don't really understand your criticism of Paul's concept. I mostly see you arguing that it fails to actually describe human behavior. ("But it still fails to explain why some people would value their integrity at less than the price of a bagel.") "Some people are unethical" seems like a good explanation here. The fact that high schoolers are impulsive and not trustworthy and some people steal bagels when nobody is looking feels like a positive observation that doesn't have direct consequences for your normative claims against Paul.
2) I agree with you that there are some costs to non-Quakerism. The less Quaker our norms are the harder it is to commit to an action that might be really awful for you in the future. But Quakerism also has costs. The more Quaker our norms are, the easier it is to end up accidentally trapped in a contract where breach would be efficient. In general, I don't see how you could form a strong opinion on this without better addressing the issue of efficient breach.
3) We actually do have a lot of real world evidence on how much the "court" concept of integrity degrades trust - from courts themselves and contract law. In particular, U.S. contract law has a pretty strong norm that the remedy for breach of contract is compensatory damages to make the victim whole. "Specific performance" (where the court orders the breacher to actually go through with the terms of the contract) is rare.[1] Economists tend to support this system because it allows for efficient breach. In cases where your expected benefit from breaching greatly outweighs the harm you think it'd do to your counterparty, you can breach the contract and then compensate them.[2]
I think your argument would predict that this would lead to a low trust equilibrium where firms struggle to credibly contract with each other because they can just back out of the contract if it becomes costly. But I think the U.S. economy actually stands out for the amount of trust firms have in each other. Everybody knows that their counterparty cares about its credibility (and possibly its integrity) so we ended up with a high trust equilibrium where we get the benefits of efficient breach when the efficiency gains are really enormous but are still able to fairly credibly contract with each other.
4) Note that in contract law, and in life, you can generally take steps to make it clear that you want to deviate from the default norm on how serious a promise is. In a contract, you can (with some exceptions) specify higher damages for breach up front if it's particularly important for you to be able to rely on the performance of a contract. Outside of the law, you can ask somebody to explicitly specify the conditions under which they'd break a promise. So what we're really talking about here is what the default understanding ought to be. One of the main reasons to have a default is that specifying all of the terms upfront creates transaction costs and can never be done perfectly anyway.

Given the ability to deviate from defaults, I think your concerns about a non-Quaker world making it impossible to make certain types of commitments are overblown. It's basically just a coordination problem where we want to minimize transaction costs by setting default assumptions to cover the most common cases. No matter what norms you settle on, parties will have to bear some extra transaction costs when they deviate from those norms.

So I think we mostly have an empirical question of whether people would typically want to allow for efficient breach at the time they're making most of their promises or whether they would typically want to bind themselves to specific performance. I'd say usually we want to bind ourselves to something in between. We want to commit that we won't breach for the sake of very small efficiency gains but we will breach if performance of the promise becomes extremely onerous. If both parties share that understanding at the time of the deal, I don't think this leads to any kind of degradation of trust.
5) I think people have a contextual understanding of the seriousness of promises so that the bar for breach is different in different circumstances. This also seems OK to me as long as people share the same understanding. It's obviously a problem when people miscommunicate. Quakerism is a Schelling point but I'd say it's also one that imposes incredibly high transaction costs on commitments because it sets a default that's not what most people mean when they make most of their commitments. I think a lot of mutually beneficial commitments that occur today would not occur in a world with a Quaker default because the transaction costs of specifying in advance all the deviations from the Quaker default would be too high.
6) In general, I felt like your framing of this as a disagreement about "how much words have meanings" was kind of derogatory, a straw man, and stacked the deck, because clearly nobody wants to argue that words do not have meanings. This seems to me to be much more of a disagreement about things like: 1) whether the meaning of a word changes in different contexts; 2) what the current social consensus is about the meaning of words like "pledge"; 3) whether "social consensus" is the right way to determine a word's meaning. I don't see anybody treating words as though they don't have meanings at all.
7) I found the "actors" group to be a bit of a straw man. I don't really see the purpose of spending so much time on a type of person who has so little integrity that they cannot make any commitments at all. I think everybody would agree that if this were the result of non-Quakerism, we should set a norm of Quakerism. I don't see anybody using words in a way where I don't feel like I could make a deal with them. I don't find the example of "actors" to be very illuminating for the question of how to trade off ability to commit v. efficient breach at the relevant margins.
1 Sources: https://en.wikipedia.org/wiki/Contract#Remedies_for_breach_of_contract; https://en.wikipedia.org/wiki/Specific_performance
On (1), I think Paul's prescription implicitly claims that humans are the sort of agents who can be expected to make this sort of calculation. My best guess is that some humans are in some cases, but there are a lot of prerequisites that aren't universal, and understanding what those are is important to coming up with a robust solution. The US just elected an Actor president, which is pretty difficult behavior to model if the voting public comprises mainly the sort of agents Paul seems to be assuming. ClearerThinking's survey research on this also seems relevant. (I wrote about it here previously.)
On (6) I don't actually think substantial disbelief in "words have meanings" is a straw man. I think Wittgenstein's Philosophical Investigations makes a pretty good case that "words have meanings" is not quite 100% true. Quine also points to indeterminacy of translation, and thus of explicable meaning, with his "gavagai" example. Everyone begins life as a tiny immigrant who does not know the local language, and has no native language in which "words have meanings" to which they can analogize their adopted tongue. We learn language through mimesis, and while there might be some sort of cognitive machinery that does something qualitatively different with some of our language, where we start to think in terms of pointer-referent and concept-referent relations, it would be surprising if the mimetic / move-in-a-game component of language ever totally went away. Tendencies towards mimesis are hard to resist, and are part of why I think it's so important to push back against falsehoods in any spaces that are meant to be accreting truth.
Another thing that makes the stark Quaker/Actor distinction less easy to draw in real life is that many "scenes" and "roles" are sticky and last way longer than a stage play. My model of admonitions as performance-enhancing drugs would explain why even someone with a comparatively strong Actorlike disposition might expect to be able to affect their behavior with some substantial probability decades from now, by taking a lifelong pledge now, even if they don't bother to actually model likely future scenarios first. In the other direction, humans are really good at imitating other humans, so people imitating those with Quakerlike dispositions will end up using words in pretty similar ways.
If there are some Quakerlike types in the group affecting how words are used, and they have enough ability to shape the discourse (they'll have an unusually strong interest in determining which words get used how), then we should expect the discourse as a whole to resemble, perhaps more than superficially, the kind of discourse you might expect from a pure "Quaker" society. However, systems set up to rely on Quakerlike types can fail if you lean too hard on them, without correspondingly strong pressures to put Quakerlike types in charge. For instance, Trump seems like he's
plausibly [see correction in replies to this comment] in violation of the emoluments clause of the Constitution, and it seems like this basically won't be prosecuted unless Congressional Republicans decide that it's advantageous to perform that particular act of political theater. Basically no one's confused enough to think that what the law says is sufficient to decide what happens. Contrast this with Henry VIII's considerable difficulty leaving his marriage, despite literally being the king, and getting to appoint most of the clergy.
I've read a few business books / articles that contrast national styles of contract negotiation. Some countries such as the US have a style where a contract is meant to be fully binding such that if one of the parties could predict that they will likely break the contract in the future, accepting that version of the contract is seen as substantively and surprisingly dishonest. In other countries (usually the example is Arab countries) this is not seen as terribly unusual - a contract's just an initial guideline to be renegotiated whenever incentives slip too far out of whack. I agree that this is not fully due to court penalties, but I'd count actual reputational penalties as part of the enforcement mechanism.
More generally, some people reward me for thinking carefully before agreeing to do costly things for them or making potentially big promises, and wording them carefully to not overcommit, because it raises their level of trust in me. Others seem to want to punish me for this because it makes them think I don't really want to do the thing / don't really like them. I think this is a good intuitive fit for the Quaker/Actor distinction, and a poor fit for "different costs on different contracts".
I think it's quite possible that Quakerlike norms aren't for everyone. But it seems like the actual historical Quakers did a huge amount of good, and I suspect this is related to their weirdness, and I suspect that recognition that *some* people are that way, and that *some* kinds of progress depend on them, is quite important. I don't think, for instance, in the absence of this type, and the expectation that this type existed, that something like life insurance or Baconian science could have been invented.
"I've read a few business books / articles that contrast national styles of contract negotiation. Some countries such as the US have a style where a contract is meant to be fully binding such that if one of the parties could predict that they will likely break the contract in the future, accepting that version of the contract is seen as substantively and surprisingly dishonest."
I've read this too and actually think it's some of the strongest evidence for my position. I think it's pretty clear that the U.S. business world has ended up in a very high trust equilibrium without going full Quaker. We have a whole contract system premised on the idea that efficient breach is acceptable. We have a bankruptcy system that's relatively generous to debtors. I think we've managed to find an equilibrium that successfully places shame on somebody who breaches a contract if they knew but didn't disclose that there was a very high probability of breach at the time it was signed but is explicitly set up with a presumption that your ex ante probability of breaching is not near zero. This seems like excellent evidence that you can have high integrity equilibria without going so far as to say that a "contract will always be honored no matter what the short-run incentives are."
JTBC, I don't think the U.S. on the whole is in the right equilibrium. I think it has very high reliability and very low reliability communities. I'm just arguing that you don't have to go all the way to Quaker to get the benefits of trust.
"I think it's quite possible that Quakerlike norms aren't for everyone."
I'm actually pretty confused about what your position is on this topic. You've said things like this, but in your other post you argued that lack of Quakerness has caused EA to have a trustworthiness problem, called non-Quakers cavalier towards promises, called the assumption of non-Quaker norms manipulative, etc. So it seems to me like you've been advocating that a community where many/most people have not defaulted to Quakerism should adopt Quaker norms. Maybe you've changed your mind, but if not, arguing for the recognition that "some people are that way and that some kinds of progress depend on them" feels like moving the goalposts.
"something like life insurance or Baconian science could have been invented."
I don't follow the relevance of Baconian science, but I have trouble understanding why you'd believe something so extreme about life insurance. I don't know the actual history here of whether Quakers themselves were involved with the invention of life insurance, but (for reasons Jeff gives) I don't actually think that's relevant, because real-life Quakers aren't actually Quakers in your sense anyway. In any case, I think lots of non-Quakers are still trustworthy enough to invent life insurance. It's a clear case where the default incentives are going to be terrible, so it's a clear case where you'd want to opt into more explicitly binding promises in a culture with non-Quaker norms. Nobody's ever going to want to sign up for life insurance on a handshake deal with a vague promise and trust. And if I have a long, complicated life insurance policy with very explicit terms and a regulatory system, I think I get very little additional information out of knowing that my counterparty only honors 96% of his promises in cases where the terms for breach are not explicit.
Fwiw, I would agree with almost all of your argument if I thought you were saying that it's crucial for people to take commitments very seriously (for some version of "very serious" that's less binding than "their contract will always be honored no matter what the short-run incentives are").
In my prior post, my problem with e.g. Rob's comments was his emphasis on "interpreting" the pledge however turned out best, in a way that seemed to me to imply that it wasn't especially important to take into account the interests of people whom this would predictably mislead. His wording might have simply been sloppy, but it looks to me like it implies a pretty dismal view of the feasibility of promises in general.
I took issue with two other things specific to EA:
I think it's also important to distinguish between these policies:
I think (1) is impossible, so the early Quakers can't have implemented it and I'm not for trying. I think we very much need (2) in order to do joint truth-tracking. It's unclear to me whether (3) needs to be a universal norm in something like EA, but I think that at least some people are constituted such that an arrangement that deprecates such reservations is going to harm them, and we should recognize this and try to take it into account.
Maybe you can say a bit more about what gave you the impression that my prior post was advocating something like a universal early-Quaker norm? If I was unclear or misleading then I should of course clarify. (I think Jeff was specifically talking about present-day Quakers, who are no longer famous for exceptional honesty, or for doing related weird things like refusing to take oaths in court or refusing to set fake "bargaining" prices when it was standard practice.)
The historical Quakers had clear markers that identified them to each other and to everyone:
- special dress so they were visibly identifiable
- restricted trades that they could go into (because they couldn't be clergy, didn't believe in violence so couldn't enter the military, and I believe couldn't enter university in England because that required loyalty oaths to the Crown)
- harsh internal penalties for rule-breaking: could be thrown out of the community for bankruptcy or shady business dealings, marriage outside the religion, or things like going to the theater.
Once they stopped being so very insular, the strict adherence to things like honesty faded as well. I know someone who wore historical Quaker dress so that people would hold him constantly accountable. Short of something like that to make sure that everyone you meet holds you to higher-than-usual standards, I think just internally knowing that you're a Quaker doesn't have nearly as much power if most people you encounter don't know it.
I do think there are some remnants of heightened integrity in modern Quakerism; even though it's been years since I participated in the Quaker community I still occasionally ask myself if Quakers would approve of some action I'm considering or if it's consistent with the Quaker reputation. But I imagine most religions work like that.
Speaking of words having meanings...
For that to be the case (let alone *obviously* the case), there has to exist an instance where Trump, *as president* (so, in the last week), received a prohibited emolument. As far as I'm aware, that isn't the allegation. The actual allegation is that he is in potential danger of violating it in the future, because he has so many business interests.
I fully expect some people to speak as if the clause prohibited anyone with substantial business interests from becoming president. Such people, of course, believe that national government should be restricted to the political class, and are basically asserting as much by making this allegation. For all I know, they may even be right, in the sense that that would be a good policy for the US to adopt. However, they are engaging in passive-aggressive verbal behavior. Not the kind of verbal behavior where words have strict denotations, and thus not the kind of verbal behavior I would generally expect to see in a place like this.
Apologies if this was too much of a diversion (and a potentially mind-killing one at that).
No, this was a good correction; I think I may be wrong. I'll try to speak more carefully on this in the future.
I definitely recall reading in the news about Trump pushing deals through in foreign countries just after the election, advising foreign diplomats to stay in Trump-owned hotels, and we don't know exactly what terms his Russian loans are on. But, I haven't looked into this very carefully.
I’m with you on integrity being super important. Someone once broke up with me because I was insufficiently enthusiastic about their baking; even though I could tell they wanted me to be enthusiastic, I really didn’t want to fake it because it felt dishonest. (I’m sure they also had other reasons for breaking up with me.)
I also didn’t sign the pledge because it felt dishonest.
That said, I really don’t like the repeated use of the phrase ‘words have meaning’. I think it’s an inaccurate description of the problem, and functions as an as-yet-undeserved social cudgel for an idea that is not obviously right.
If it doesn’t seem clear that it has social power, imagine someone saying it with substantial emphasis and a bit of anger as you’re arguing against them. I don’t know about you, but that’s scary as fuck for me.
I like the baking example – it’s a great concrete illustration of the divide between the attitude that enthusiasm in the direction of existing social momentum is prosocial and skepticism is antisocial, and the attitude that taking special care to avoid saying false things is prosocial and saying things mainly to be agreeable is antisocial.
I think “words have meaning” and “words have meanings” are substantially different claims. Basically everyone agrees that words have meaning, in the sense that you generally get a lot of information from which words someone’s using. The question is whether words largely denote concepts that refer to specific classes of objects, events, or attributes in the world, and should be parsed as such. (See my reply to Howie above for more on this.)
I'm updating towards these being the wrong terms, though, since you and Howie both seem to have perceived "words have meanings" as a rhetorical bludgeon. (Interestingly, as far as I can tell Rob seems to have taken it in the spirit I intended it.) I think the right ones will point to the distinction between using words enactively as ways to build or reveal momentum, and to denote the position of things on your world-map. I think this might make it clearer what my problem is with the position of "take the pledge, and then we'll figure out what the best interpretation is."
"I think the right ones will point to the distinction between using words enactively as ways to build or reveal momentum, and to denote the position of things on your world-map."
This was actually pretty helpful in clarifying the point you're making to me. I think I was missing an important part of it.
I might phrase it as words revealing relative direction on your map vs. absolute position on your map. Or something like that.
I think the phrase "words have meanings" both semi-accurately points to the thing you're discussing (though not very precisely, for me) and feels like a rhetorical bludgeon (in that I feel like I'm being pushed into agreeing with something I'm not sure I want to agree with, because otherwise I will obviously be in the wrong).
There's an interesting application of directionality vs. absolute position to the use of "words have meanings" actually!
I think I would find it much more palatable if you made it clear that you're using it directionally rather than absolutely. It legitimately is helpful in pointing to what sort of thing you're talking about and what you think the right attitude towards words is.
For example, if you said "Disagreement on whether words should mostly be used directionally or to point to absolute positions in concept space. In other words, disagreement about how much words have meaning." in your summary instead of "disagreement on whether words have meanings."
Using "words have meanings" in your primary summary sort of implicitly claims that it is about an accurate a statement as you can make. And since it feels like a statement that could easily be used to hurt me, its scary.
Btw, I'm liking this concept of directional/metaphorical vs absolute positional usage of words.
It helps me understand something I was trying to explain to a friend about understanding the concept of 'truth'.
Here clearly there are differences in usage. For example, "Therefore, Theorem 3.1 is true." is one (very absolute positional) usage. And "it's true that the monkey's habitat is destroyed" is another (much more metaphorical or directional) usage.
From this it's clear that you really need both uses of the word. But it's important for people to be able to choose which one to use, and for people to be on the same page about where on the metaphorical/absolute spectrum a given use is. Often you can guess well from context (as above), but in the case of the GWWC Pledge, that was apparently much less true. And in the case of "words have meanings," I think you're accidentally equivocating about where on the spectrum your use is.
I notice I consider putting me in a position to choose between dishonesty and harshing someone's buzz a hostile act, moreso if the person tends to be emotionally fragile or sensitive to momentum.
I suspect the right unit of analysis is hostile / corruption-friendly culture, rather than hostile individuals. Still makes the thing a hostile act, but the agent isn't always an individual human.
Pingback: The humility argument for honesty | Compass Rose
Pingback: Bindings and assurances | Compass Rose
After re-reading the OP, I realized more clearly several interesting points.
1. You seem to represent a set of traits which I propose arises from having a cognitive skillset that is relatively stronger around Kegan levels 2 & 4 (personal interests and systems) than 1 & 3 (emotions and relationships). (Note that what I'm saying is *not* the same as "being on Kegan stage X", or "not being in touch with emotions".) Such a composition might be correlated with being quick to advance through stages, and having less time to integrate the relevant skillsets before they become internally obsolete; but is probably more adequately explained by some pre-existing psychological disposition.
To clarify, from my notes: a "system-level preference" is a preference that sits above emotions and relationships, not by neglecting them, but by recognizing that they are reactions to a world defined on a higher level.
2. The trait I am gesturing towards in 1), call it being a "2-4 person", results in a lot of frustration and tensions around relating to most "normal" people, and even most "normal" rationalists. An interesting bad pattern I saw in myself recently was trying to signal to someone that I respect them on the higher level by bulldozing through the social perimeters and making it clear that I knew I was doing this, but had good intentions anyway. It turns out that's a good way to come across as a jerk. Go figure. Another bad one was when I noticed I was trying to hold on to my principles with the hope of leading by example, but after becoming too frustrated with people just never getting it anyway, acting as if to *pressure* them into principles by making my expression of them more exaggerated.
3. Most of the time, though, things are pretty fine between 2-4 people and their friends. It might require above-average insight and effort, but we can manage fruitful relationships with all sorts of people. However, it will often happen that some well-intentioned 3-people will try to teach us the value of being more social, and well-intentioned 1-people the value of being more connected to emotions. (The 2-people we are no longer talking to, after a long period of frustration and investing way too much in them.) Unfortunately, even though they might have good skills and points that could be steelmanned, the way they talk to us predictably falls short of what we would find persuasive. To adopt a behaviour X, we want to understand why it's useful and moral, how to decide when it applies, and how it's consistent with other desired behaviours. To adopt a change with fewer constraints would be going backward, not forward. And this is regardless of the social and emotional pressures, which makes *them* feel like they are talking to a wall.
4. More speculative: Being a 2-4 person in a "hostile" environment is hard, and we tend to fall back on closed environments (e.g. math, academia) to keep ourselves sane and have a bigger chance of meeting people that we can talk to. This is, however, *not* what we most desire.
5. I'd probably enjoy conversations with you.
Skipping Kegan stage 3 is a way I've independently described the thing. Unusual, high-functioning neurotypes often look like they're skipping a stage.
Pingback: Actors and scribes, words and deeds | Compass Rose
Pingback: Taking integrity literally | Compass Rose
Pingback: Why I am not a Quaker (even though it often seems as though I should be) | Compass Rose
"they will honor the contract if the cost of doing so exceeds the moral cost of oathbreaking, and renege if the opposite is true." This looks backwards to me -- if the moral cost of oathbreaking is lower than the cost of honoring the contract, it seems like you should renege.
I think you're right, will fix it, thanks