Perhaps much of what appears to be disagreement on how much dishonesty is permissible is in fact disagreement on how much words have meanings. I'll begin with a brief treatment of the reputation considerations for keeping one's word, and then complicate it.
- 1 Actors and Quakers
- 2 Long-term contracts and incentives over time
- 3 Wordbreaking as an imagined cost
- 4 Local and global integrity
Actors and Quakers
Imagine a world where there are 2 types of people:
- Actors (people whose speech has to be understood in the context of the scene in which they're improvising, driven by the passion of the moment, who embody the enacting spirit of rajas or thumos)
- Quakers (named for the actual historical Quakers)
Actors are known to be almost always unreliable, and Quakers almost never lie.
If two Quakers go into business together they feel comfortable assuming that their contract will always be honored no matter what the short-run incentives are. If two Actors deal with each other, they can only trust what they can directly observe. They have to set up their arrangements so that at each step each of them has more incentive to cooperate than to defect.
If an Actor deals with a Quaker, the contract can impose long-run obligations on the Quaker in exchange for short-run actions from the Actor, because Actors know that Quakers can be trusted, but not the other way around. Naturally, Quakers do all the banking, etc. No one would give an Actor their money for the promise to give it back later.
If I'm a Quaker dealing with an Actor, is it right for me to lie or renege on a promise? After all, if I were forgetful enough to trust their word, they'd renege on their promise readily. But, because I know that, and they know that I know that, in some sense their words don't have the weight of meaning. It's my own fault if I trust them.
On the other hand, my words do have weight, so if they trust me, it's justified. Because I've never broken my word in the past, I enjoy an advantage: the ability to enter into contracts that require them to trust me. I might never be interested in lending to an Actor, but I might want to keep my reputation so that I can borrow from one. They're destroying nothing if they renege on a promise. I'm destroying an important asset if I renege.
I think this neatly divides two very different arguments:
- It's fine to be dishonorable towards people who haven't behaved honorably towards you.
- It's fine to behave dishonorably towards people who don't treat you as honorable.
Claim 1 argues that a Quaker should break their word to an Actor; claim 2 argues that an Actor should break theirs to a Quaker.
Case 1: Ought a Quaker break their word to an Actor?
Claim 1 seems obviously wrong to me. The Quaker is destroying something of real value by breaking their word. The Actor's more or less in the position of someone who consistently refuses to make promises about the future. Their words about the future have no information-value.
Nor does the case get any less clear if you're not sure whether someone's an Actor or a Quaker. If they break their word, that should make you trust them less, but doesn't reduce the harm done if you break your word.
This suggests a rule that the honorableness of your conduct towards others should not depend on the honorableness of their conduct, although your trust in their honor should.
Case 2: Ought an Actor break their word to a Quaker?
Claim 2 seems like a weird edge case. By construction, Actors are known to be untrustworthy, so they should be surprised to ever be put in the situation of being trusted. If you don't expect to be able to pass as a Quaker in the long run, then this is just a fluke, and you might as well exploit it for everything you can get.
Long-term contracts and incentives over time
The honorableness of your conduct towards someone else should generally not depend on the honorableness of their conduct. But one important exception to this rule is the case of contracts.
Ordinarily if you make a pledge, sign a contract, or in some other way make a promise about your future behavior, and receive some benefit from another party for this, they’re counting on you to do what you said even if you change your mind about whether you’d like to. To do otherwise would make many long-run agreements impossible. As mentioned above, lending money to an Actor without some sort of third-party enforcement is inadvisable, even if the Actor were enthusiastic about the contract at the time when it involved receiving money from you – because they’re likely to be much less excited when honoring the contract would instead mean disbursing money to you. “This no longer seems like a good deal” is not ordinarily a good reason to renege on an agreement.
The situation changes if you find out that you misunderstood your counterparty.
Suppose you sign a contract with someone, that requires you to incur an expense presently, so that they will do something for you in the farther future. Then, after the promise is made, but before you're obligated to act, you learn that they've regularly broken their word and failed to honor contracts. As previously established, you should downgrade your trust in them.
The very same logic that excuses Actors against Quakers in the ordinary case – that their word doesn't mean anything – now justifies a change in your behavior. You did not knowingly agree to a contract with an Actor; you thought you were contracting with a Quaker. The terms of the contract you have are, in practice, worse than the terms you agreed to.
If the contract now seems terrible because you expect them to break their word, it's reasonable to renege. This is not justified as retaliation – it seems valid whether or not they've broken their word to you in particular - but because you now have reason to believe that you made promises based on false assurances.
You still have an interest in trying to extricate yourself from the contract in a way that doesn't put you ahead of where you'd have been with no contract - i.e. to try to honestly unwind it. Otherwise, it's hard for others (and you!) to distinguish between breaking a contract because it was convenient, and breaking a contract because you no longer trusted the other party to follow through. For the same reason, it's often best to try and test the other person's willingness to follow through, instead of assuming that your judgment is right.
Wordbreaking as an imagined cost
We’ve moved from the realm of absolutes and binaries where everyone’s either a perfectly unreliable Actor or a perfectly reliable Quaker, to the realm of ordinary life, where people are finitely reliable to differing degrees, and many people make promises that can’t literally always be kept. Sometimes you promise to take out the trash tomorrow and then get hit by a truck. (Well, usually no more than once.)
It’s not really possible to discharge a deontological duty to never lie, if by lying you include making promises that you might not keep. Instead, we have to talk about assigning a cost to breaking one’s word.
People who assign a high cost to breaking their word behave as though there were substantial punishments for wordbreaking. They will be motivated by this internal incentive to spend more effort before making a promise, figuring out whether it’s keepable – and to add qualifiers and caveats beforehand to make the literal exact promise one they can keep. Thus, they reduce the expected cost of oathbreaking by reducing the probability that they will be unable to discharge their obligation. Later, if unforeseen circumstances make it more difficult for them to discharge that obligation, they will honor the contract if the moral cost of oathbreaking exceeds the cost of doing so, and renege if the opposite is true.
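This decision rule can be sketched as a toy model. Everything here is a hedged illustration: the function names and all the numbers are hypothetical choices of mine, not anything specified in the text.

```python
# Toy model of the "imagined cost" account of promise-keeping.
# All numbers are hypothetical illustrations, not taken from the text.

def expected_cost_of_promising(p_cant_keep, moral_cost_of_breaking):
    """Expected moral cost incurred by making a promise you might
    be unable to keep."""
    return p_cant_keep * moral_cost_of_breaking

def should_honor(cost_to_keep, moral_cost_of_breaking):
    """Once circumstances have changed: keep the promise iff keeping
    it costs less than the internal penalty for breaking it."""
    return cost_to_keep < moral_cost_of_breaking

# Adding qualifiers and caveats lowers the probability of being unable
# to deliver, and so lowers the expected cost of making the promise.
careless = expected_cost_of_promising(p_cant_keep=0.30, moral_cost_of_breaking=100)
careful = expected_cost_of_promising(p_cant_keep=0.05, moral_cost_of_breaking=100)
assert careful < careless

# A high internal penalty means honoring even expensive promises...
assert should_honor(cost_to_keep=80, moral_cost_of_breaking=100)
# ...but not unboundedly expensive ones.
assert not should_honor(cost_to_keep=150, moral_cost_of_breaking=100)
```

Note that the penalty is large but finite, which is what distinguishes this model from a strict deontological prohibition.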
People who assign a low cost to breaking their word will be free and easy with their pledges, as authentic expressions of their current disposition:
There’s this idea of authenticity: you know who someone truly is by seeing them in their unguarded moments, seeing uncensored emotions, that’s when you can have a real interaction with them, that’s when you can see their true self.
[…] Even when my immediate reaction to a thing does get read as authentic, it may not use all my knowledge, may not be my endorsed judgment, and may not be the most true thing I know how to say. If I think things through and filter them, I can be more truthful than if I just react without thinking about whether what I’m saying is true.
Lying, to me, is not just telling an untruth that one consciously knows to be untrue. […] I feel that I have an affirmative duty to seek the truth, to follow up on nagging doubts and loose ends and bits of my beliefs that don’t quite line up. Telling someone something that, if I thought a bit harder, I could see to be untrue, feels to me like a kind of lying - telling a knowably-untrue thing, even if the knowledge is only potential.
I think that when people think of authenticity as the state of not having a filter, they have an idea of truthfulness that makes sense in an environment with much less trust. They don’t even consider the idea that the person they’re talking with will be able to reliably tell them true things - they may not even bother fully processing the literal content of verbal statements. They’re looking, instead, for verifiability.
To someone who perceives the cost of breaking contracts as low, having a filter about such things makes it seem like you have something to hide. If you’re guarded with your promises and do things like criticize popular pledges for being bad in some contingencies, you seem weirdly cagey. If the behavior feels like a good idea now, why not say you’ll do it?
Local and global integrity
If we believe the imagined cost model of keeping one’s word, a natural consequence is that one’s propensity for self-serving dishonesty is proportional to the stakes of the conflict. In extreme cases, even people of high integrity will just plain lie. However, this model is a poor fit for the observed facts. Some people lie even for very small gains. Others keep their word even when the stakes are very high. We'd like to be trustworthy even in high-stakes situations, so it’s fortunate that it seems possible.
Contracts for Hobbesians
In Leviathan, Hobbes lays out a model in which the legitimacy of the state comes from the consent of the governed. But this sort of consent isn’t a good fit for what we ordinarily mean by consent in environments where we are already subject to the same law. Instead, it’s the kind of consent that arises from a “state of nature” logically (if not temporally) prior to the state.
We’re used to thinking of consent as a sort of contractarian thing that happens after the threat of violence has been removed. But if the state of nature is a state of violence, then the act of consenting was not free in that sense; people “consent” to be governed by deciding that it is less bad than not being governed.
Accordingly, the state, with its origins in violence, has a claim on you only to the extent that it can continue to offer you a better option than resisting it. As Hobbes writes [spelling modernized for your convenience]:
A Man's Covenant Not To Defend Himself, Is Void
A Covenant not to defend myself from force, by force, is always void. For […] no man can transfer, or lay down his right to save himself from death, wounds, and Imprisonment, (the avoiding whereof is the only end of laying down any right,) and therefore the promise of not resisting force, in no covenant transfers any right; nor is obliging. For though a man may covenant thus, "Unless I do so, or so, kill me;" he cannot covenant thus "Unless I do so, or so, I will not resist you, when you come to kill me." For man by nature chooses the lesser evil, which is danger of death in resisting; rather than the greater, which is certain and present death in not resisting. And this is granted to be true by all men, in that they lead criminals to execution, and prison, with armed men, notwithstanding that such criminals have consented to the law, by which they are condemned.
On this model, moral outrage at people lying to protect themselves or their friends from the threat of deadly violence is a sort of category error. If the state has not protected them, then they’re in a state of war towards their surroundings, so the threat of making war against them has no additional teeth.
We can loosely analogize verbal norms to one of many layers of government among people. If it’s hard enough for someone to keep their word, then they may decide that entering into a state of war against “words have meanings” is less bad than submitting to it. If you want honesty, you need to reward honest reports.
Integrity, or a bagel?
The above exposes two important problems with the imagined cost model of keeping one’s word. First, it doesn’t predict the prevalence of deception in low-stakes situations. Second, it doesn’t point towards a way to hold onto integrity in high-stakes situations, which is an important desideratum.
In Prices or Bindings?, Eliezer Yudkowsky describes both these problems with the imagined cost model:
During World War II, Knut Haukelid and three other saboteurs sank a civilian Norwegian ferry ship, the SF Hydro, carrying a shipment of deuterium for use as a neutron moderator in Germany's atomic weapons program. Eighteen dead, twenty-nine survivors. And that was the end of the Nazi nuclear program. Can you imagine a Hollywood movie in which the hero did that, instead of coming up with some amazing clever way to save the civilians on the ship?
Stephen Dubner and Steven Levitt published the work of an anonymous economist turned bagelseller, Paul F., who dropped off baskets of bagels and came back to collect money from a cashbox, and also collected statistics on payment rates. The current average payment rate is 89%. Paul F. found that people on the executive floor of a company steal more bagels; that people with security clearances don't steal any fewer bagels; that telecom companies have robbed him and that law firms aren't worth the trouble.
Hobbes (of Calvin and Hobbes) once said: "I don't know what's worse, the fact that everyone's got a price, or the fact that their price is so low."
If Knut Haukelid sold his soul, he held out for a damned high price—the end of the Nazi atomic weapons program.
Others value their integrity less than a bagel.
One suspects that Haukelid's price was far higher than most people would charge, if you told them to never sell out. Maybe we should stop telling people they should never let themselves be bought, and focus on raising their price to something higher than a bagel?
But I really don't know if that's enough.
The German philosopher Fichte once said, "I would not break my word even to save humanity."
Raymond Smullyan, in whose book I read this quote, seemed to laugh and not take Fichte seriously.
Abraham Heschel said of Fichte, "His salvation and righteousness were apparently so much more important to him than the fate of all men that he would have destroyed mankind to save himself."
I don't think they get it.
If a serial killer comes to a confessional, and confesses that he's killed six people and plans to kill more, should the priest turn him in? I would answer, "No." If not for the seal of the confessional, the serial killer would never have come to the priest in the first place. All else being equal, I would prefer the world in which the serial killer talks to the priest, and the priest gets a chance to try and talk the serial killer out of it.
It’s surprising, if people are implicitly pricing integrity at a constant value, that some set their price so low as a bagel. It’s especially surprising that wealthier people like executives, with a presumably lower marginal utility for money, seem to set an unusually low implied price. This model is also, as Yudkowsky points out, totally inadequate for high-stakes scenarios where we might want to be able to count on someone’s integrity even if whole lives, or the entire world, were at stake.
One response might be a sort of modified incentive model, in which you imagine you’re not paying a constant cost, but whatever damages would be assessed by some imaginary court. So you'll be comparatively relaxed in low-stakes situations, but tighten up as your words are more likely to affect others' actions or well-being. You might still break your word when the costs of keeping it are unreasonably high – but you’ll take into account the high cost of being a wordbreaker on high-stakes things. The imaginary court would want the serial killer to know that there are very high penalties to breaking the seal of confession. But these penalties don’t need to be infinite, which avoids the problem where perfect integrity means never making promises.
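The modification can be sketched as a small variation on the flat-penalty toy model. The linear damages rule and all the numbers here are my own hypothetical choices, used only to show how the same person can be relaxed at low stakes and strict at high stakes.

```python
# Toy sketch of the "imaginary court" variant: instead of a flat
# penalty for wordbreaking, damages scale with how much others
# relied on your word. The linear rule and all numbers are hypothetical.

def imagined_damages(reliance_harm, multiplier=2.0):
    """Damages the imaginary court would assess for wordbreaking,
    proportional to the harm caused to those who relied on you."""
    return multiplier * reliance_harm

def should_honor(cost_to_keep, reliance_harm):
    """Keep the promise iff keeping it is cheaper than the damages
    the imaginary court would assess for breaking it."""
    return cost_to_keep < imagined_damages(reliance_harm)

# Low-stakes promise (nobody is really counting on it): relaxed.
assert not should_honor(cost_to_keep=5, reliance_harm=1)
# High-stakes promise (others' actions depend on it): tighten up.
assert should_honor(cost_to_keep=50, reliance_harm=100)
```

Because the assessed damages grow with the stakes rather than being infinite, this version still permits making promises that might occasionally be broken.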
This approach fails in a few cases, though. Is there a duty to disclose important information that you think harms people? What about deceiving an individual in order to help them? It’s easy to rationalize your way into imagining that you would persuade the imaginary court not to assess damages.
A related answer to the problem is Paul Christiano’s integrity for consequentialists proposal:
I aspire to make decisions in a pretty simple way. I think about the consequences of each possible action and decide how much I like them; then I select the action whose consequences I like best.
To make decisions with integrity, I make one change: when I imagine picking an action, I pretend that picking it causes everyone to know that I am the kind of person who picks that option.
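The procedure in the quote can be sketched as a toy comparison between a plain consequentialist and the integrity variant. The payoff numbers and function names are hypothetical illustrations, not part of Christiano's proposal.

```python
# Toy sketch of "integrity for consequentialists" (hypothetical numbers).
# A plain consequentialist scores an action by its direct payoff alone.
# The integrity variant scores it as if choosing it also made everyone
# know you are the kind of person who chooses that option, which shows
# up here as a reputation term added to the payoff.

def plain_value(direct_payoff, reputation_effect):
    return direct_payoff  # ignores what others would learn about you

def integrity_value(direct_payoff, reputation_effect):
    return direct_payoff + reputation_effect

# "Break a promise for a bagel": small direct gain, large reputational
# loss once everyone is assumed to find out what kind of person you are.
break_promise = (1, -10)
keep_promise = (0, 0)

# The plain scorer takes the bagel; the integrity scorer does not.
assert plain_value(*break_promise) > plain_value(*keep_promise)
assert integrity_value(*break_promise) < integrity_value(*keep_promise)
```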
This is a more general version of the imaginary court idea. But it still fails to explain why some people would value their integrity at less than the price of a bagel.
Of course they could just be making some mistake. Or they could be unusually selfish or short-termist, and correctly assessing that in some situations there’s little practical chance of getting caught. But I don’t think that’s all of it.
Integrity in the environment
The above models presume that there's a choice as to whether to lie or to tell the truth. But it seems that there's an underlying variation in disposition that makes lying easier for some people, and truth-telling easier for others.
When I want to come up with a speech-act to accomplish a certain goal, the plans my mind generates are ones that accomplish the goal by informing people of reasons for it. If I want to conceal some information from others, that's an additional constraint that's not natural for me to think of. This gives me an advantage in long-run cooperation. By contrast, if I found it more natural to think of the people around me as social objects whose levers can be worked by my words, I'd be better at social manipulation and affiliation, but worse at transferring nonsocial information or evaluating arguments.
My working hypothesis is that some people mainly perceive their environment as one in which words have meanings, while for others speech-acts are primarily construed as social moves. If you're in the first group, you imagine that people are tracking honesty of attempts to inform in a way that contributes to your reputation. If you're in the second, you might think that they're mainly interpreting your words as a statement about your current posture and intent. Are you on their side? What are you about to do next? Where are you trying to point their attention?
When I was in high school, one of my friends introduced me to the Marx Brothers. At some point, he said, "hey, we should put on a Marx Brothers show over the summer." I asked him if he'd be willing to play Harpo. He said yes. We then started "casting" the musical, thinking of people in the theater group who might be good fits for different parts.
Next, I went around, asking the people we'd picked if they'd be up for doing a Marx Brothers musical, and playing such-and-such a part, over the summer. They said yes. I got a pianist to say they'd be up for playing accompaniment.
Next, I secured a venue at the local public library, and the rights to perform the show, and gave everyone involved a copy of the script. I didn't look for a summer job, because that would interfere with what I expected to be the considerable time it would take me as producer to put the show together.
Then, when the summer started, I asked people to show up at such-and-such a time to begin rehearsing. It turned out that various people had gotten summer jobs, no one had taken it upon themselves to learn their lines, etc. At some point in the summer, it became obvious to me that we could not possibly put together a musical in time.
I downgraded my ambitions. I found a short comic play in the public domain we could do, and got a subset of the actors to agree to do that. I got scripts to them. I scheduled a rehearsal, at a time they said was okay. They didn't show up.
What mistake was I making? I think my mistake was to assume that if I asked someone, "will you play this part in this play?", and they replied "sure!", they had thought about the implied time commitment, and decided to commit to putting in the work it would take to play the part. I think that their interpretation was that they were expressing preliminary interest in a thing, and that they weren't committing to do any of the work implied. My mistake, on this model, was taking their words as assurances about their future actions, rather than as expressions of enthusiasm.
They figured I wasn't all that serious, because I wasn't directing their attention by a process of persistent rapport-building, attunement, and gradual escalation. The correct strategy for me, on this model, would have been to wait until they had demonstrated some willingness to put in work, before putting in more of my own.
No one was deceiving me on purpose. This is not a purely adversarial world I'm describing. Nor is it purely short-run thinking; enthusiasm-based methods of cooperation can last for quite a while, and be very cooperative. But it does make some forms of cooperation much more expensive. To execute the correct strategy, I'd have had to start by calling a meeting, for the sole (unstated) purpose of determining who was willing to do the work of actually showing up for this. And I don't like it when people do that to me; it's condescending, devalues my time, and doesn't take me at my word. It's an insult.
Perhaps the extent to which people use speech this way depends on their actual experience of their environment. It would be surprising if this happened not at all; I definitely perceive some speech as entirely a social move, and other speech as entirely about describing objective reality. Sometimes I mechanically respond to "how are you?" from acquaintances with "fine, how are you?". Other times, if a friend I haven't seen in a while tells me they'd like to know what's been going on in my life, and how I’m doing, I'll take the query literally. It's hard to see how this would happen if I weren't at least partly learning from context when words are meant to have meanings.
But there also seem to be deep differences in how much of speech people are disposed to think of as belonging to each class – even people in my own social set.
Parselmouths and Quaker class consciousness
There's another case not quite covered by the Actor-Quaker distinction: partial, situational integrity. For instance, a group of people may want to be honest with one another, perhaps in one particular domain, but deceitful towards outsiders. Eliezer Yudkowsky's Harry Potter and the Methods of Rationality provides a dramatic illustration; descendants of Salazar Slytherin have a private language called Parseltongue (the language is also a control interface for snakes) in which it is impossible for them to lie:
Harry swallowed. Snakes can't lie. "Two pluss two equalss four." Harry had tried to say that two plus two equalled three, and the word four had slipped out instead.
"Good. When Salazar Slytherin invoked the Parselmouth curse upon himself and all his children, his true plan was to ensure his descendants could trust one another's words, whatever plots they wove against outsiders."
Are there advantages of being a Quaker over being a Parselmouth? I've already argued that in particular cases there can be advantages in being trusted by the untrustworthy. A Quaker bank might not be happy to lend money to Actors, but it should be happy to have them as depositors.
But I don't automatically get credited for my attempts to say what I mean and no more. If the people around me have no idea that this might even be a thing, then what incentive do I have to keep doing it? And yet, I don't find myself smoothly adjusting to my circumstances - I find myself awkwardly trying to say only and exactly what I mean, even in circumstances when people are expected to exaggerate, so I'll be taken to mean much less than I do. I suspect that it's not quite possible for humans to completely fine-tune their honesty case by case. I suspect that it's hard to learn that words have meanings here but not there, that justice is a virtue in this place but a vice in that one.
Instead, I suspect that for the health of the souls of those who are dispositionally inclined towards treating words not as mere reports of current inclinations, but as things designed to stand enduringly, monumental inscriptions meant to be true long after the time in which they were written passes away, these people need an environment where this is in fact globally the case.
If Actors don't know about Quakers - then that destroys the practical incentive for Quakers as a class to keep their word towards Actors. And yet, some actual human, in a world with no Quakers, invented actual Quakerism, and found people who were willing to join him.
Right now, Effective Altruists are arguing about a couple of internal issues:
- How acceptable is it to follow standard marketing practices when promoting Effective Altruist charities? If it's normal, if it's expected, is it even dishonest?
- How seriously should we take a pledge that is not legally binding?
The actual historical Quakers did a few things that were interesting in this regard:
- They refused to follow what was then the standard commercial practice of setting prices higher than the market rate, to be bargained down, even though everyone else expected it.
- They refused to swear oaths in court, because it would deprecate all their other words, which – though they were not legally binding – they felt morally obliged to make equally truthful.
- They had special meetings, in which Quakers could simply talk to a room full of other Quakers.
It's not clear to me that everyone can or should be Quakerlike. Perhaps it is an impossible and therefore unfair standard for ordinary people. But the actual historical Quakers seem to have done a lot of good, and accumulated an impressive reputation, simply by being themselves, and not compromising. They did not adopt a Parselmouth strategy, and I'm pretty sure that this is because the Parselmouth strategy is impossible for humans. Perhaps there are others like me to whom a Quakerlike standard sounds good, like a harbor, a respite from a sickening sea of lies. I think they should probably go for it. Perhaps they should even have special meetings.
(At some point I'm going to go see how the actual Quakers do it too - some of them are still around.)
I'm not advocating full secession. I'm not advocating the equivalent of "going Galt." Not yet, anyway. There are people out there to cooperate with, even if they don't use words the way people like me would like. And they matter. There is only one world, and we have got to learn how to share it with each other.
But it really does seem to me that our society has engaged in a long campaign to deny the existence of people who believe that words have meanings, and have opinions about facts, and think about the exact nature of the obligation before making a promise. Under such circumstances, you should "put your own oxygen mask on first," and only focus on helping others in ways consistent with meeting your own basic needs. I think it's important to ensure your own spiritual survival and safety. Perhaps these Quakerlike types, like the actual historical Quakers, need a safe space to meet with friends – and before that, a way to find each other.