We were only pretending to engage with each other. But it wasn’t our fault. We had to, because talking about bad faith is Not OK.
If you don’t correct errors, you don’t get anything done, because you stay wrong. I don't think we do enough to reward saying oops.
Lately, I’ve been complaining about ways the EA community’s been papering over problems in ways that forgo this sort of learning. But while complaining is important, on its own it doesn’t offer any specific vision for how to do things. At the recent EA Global conference in Boston, I was reflecting with a friend on what sorts of positive norms I would like to see in the discourse.
One example of something I wish I saw more of is people publicly and very clearly saying, “We tried X, it didn’t work, so now we’re stopping.” Or, “I used to believe X, and as a result asked people to do Y, but now I don’t believe X anymore and don’t think Y is a particularly good use of resources.” People often invest a lot of social capital in their current beliefs and plans; admitting that you were wrong can cost you valuable social momentum and mean you have to start over. You might worry that people will associate you with wrongness. We need communities where instead, clear admissions of error or failure are publicly acknowledged as signs of integrity and commitment to communal learning and shared model-building.
So I'm offering a prize. But first, let me give an example of the sort of thing we need to be praising more loudly and more often.
It’s common to think that someone else is arguing in bad faith. In a recent blog post, Nate Soares claims that this intuition is both wrong and harmful:
I believe that the ability to expect that conversation partners are well-intentioned by default is a public good. An extremely valuable public good. When criticism turns to attacking the intentions of others, I perceive that to be burning the commons. Communities often have to deal with actors that in fact have ill intentions, and in that case it's often worth the damage to prevent an even greater exploitation by malicious actors. But damage is damage in either case, and I suspect that young communities are prone to destroying this particular commons based on false premises.
To be clear, I am not claiming that well-intentioned actions tend to have good consequences. The road to hell is paved with good intentions. Whether or not someone's actions have good consequences is an entirely separate issue. I am only claiming that, in the particular case of small high-trust communities, I believe almost everyone is almost always attempting to do good by their own lights. I believe that propagating doubt about that fact is nearly always a bad idea.
If bad intent were really so rare in the relevant sense, it would be surprising that people are so quick to jump to the conclusion that it is present. Why would that be adaptive?
Among the kinds of people are the Actors and the Scribes. Actors mainly relate to speech as action that has effects. Scribes mainly relate to speech as a structured arrangement of pointers that have meanings.
I previously described this as a distinction between promise-keeping "Quakers" and impulsive "Actors," but I think this missed a key distinction. There's "telling the truth," and then there's a more specific thing that's more obviously distinct from even Actors who are trying to make honest reports: keeping precisely accurate formal accounts. This leaves out some other types – I'm not exactly sure how it relates to engineers and diplomats, for instance – but I think I have the right names for these two things now.
On the face of it, everyone agrees that words have meaning: they convey information from the speaker to the listener or reader, and that's all they do. So when I used the phrase “words have meanings” to describe one side of a divide between people who use language to report facts, and people who use language to enact roles, was I strawmanning the other side?
I say no. Many common uses of language, including some perfectly legitimate ones, are not well-described by "words have meanings." For instance, people who try to use promises like magic spells to bind their future behavior don't seem to consider the possibility that others might treat their promises as a factual representation of what the future will be like.
Some uses of language do not simply describe objects or events in the world, but are enactive, designed to evoke particular feelings or cause particular actions. Even when speech can only be understood as a description of part of a model of the world, the context in which a sentence is uttered often implies an active intent, so if we only consider the direct meaning of the text, we will miss the most important thing about the sentence.
Some apparent uses of language’s denotative features may in fact be purely enactive. This is possible because humans initially learn language mimetically, and try to copy usage before understanding what it’s for. Primarily denotative language users are likely to assume that structural inconsistencies in speech are errors, when they’re often simply signs that the speech is primarily intended to be enactive.
I am surrounded by well-meaning people trying to take responsibility for the future of the universe. I think that this attitude – prominent among Effective Altruists – is causing great harm. I noticed this as part of a broader change in outlook, which I've been trying to describe on this blog in manageable pieces (and sometimes failing at the "manageable" part).
I'm going to try to contextualize this by outlining the structure of my overall argument.
Why I am worried
Effective Altruists often say they're motivated by utilitarianism. At its best, this leads to things like Katja Grace's excellent analysis of when to be a vegetarian. We need more of this kind of principled reasoning about tradeoffs.
At its worst, this leads to some people angsting over whether it's ethical to spend money on a cup of coffee when they might have saved a life, and others using the greater good as license to say things that are not quite true, socially pressure others into bearing inappropriate burdens, and make ever-increasing claims on resources without a correspondingly strong verified track record of improving people's lives. I claim that these actions are not in fact morally correct, and that people keep winding up endorsing those conclusions because they are using the wrong cognitive approximations to reason about morality.
Summary of the argument
- When people take responsibility for something, they try to control it. So, universal responsibility implies an attempt at universal control.
- Maximizing control has destructive effects:
  - An adversarial stance towards other agents.
  - Decision paralysis.
- These failures are not accidental, but baked into the structure of control-seeking. We need a practical moral philosophy to describe strategies that generalize better, and that benefit from the existence of other benevolent agents rather than treating them primarily as threats.
Some theater people at NYU wanted to demonstrate how gender stereotypes affected the 2016 US presidential election. So they decided to put on a theatrical performance of the presidential debates – but with the genders of the principals swapped. They assumed that this would show how much of a disadvantage Hillary Clinton was working under because of her gender. They were shocked to discover the opposite – audiences full of Clinton supporters, watching the gender-swapped debates, came away thinking that Trump was a better communicator than they'd thought.
The principals don't seem to have come into this with a fair-minded attitude. Instead, it seems to have been a case of "I'll show them!":
Salvatore says he and Guadalupe began the project assuming that the gender inversion would confirm what they’d each suspected watching the real-life debates: that Trump’s aggression—his tendency to interrupt and attack—would never be tolerated in a woman, and that Clinton’s competence and preparedness would seem even more convincing coming from a man.
Let's be clear about this. This was not epistemic even-handedness. This was a sincere attempt at confirmation bias. They believed one thing, and looked only for confirming evidence to prove their point. It was only when they started actually putting together the experiment that they realized they might learn the opposite lesson:
But the lessons about gender that emerged in rehearsal turned out to be much less tidy. What was Jonathan Gordon smiling about all the time? And didn’t he seem a little stiff, tethered to rehearsed statements at the podium, while Brenda King, plainspoken and confident, freely roamed the stage? Which one would audiences find more likeable?
What made this work? I think what happened is that they took their own beliefs literally. They actually believed that people hated Hillary because she was a woman, so the demonstration they were confident would clearly show this was also a fair test of the claim. Because of this, when things came out the opposite of the way they'd predicted, they noticed and were surprised, because they had actually expected the demonstration to work.
I saw a beggar leaning on his wooden crutch.
He said to me, "You must not ask for so much."
And a pretty woman leaning in her darkened door.
She cried to me, "Hey, why not ask for more?"
-Leonard Cohen, Bird on the Wire
In my series on GiveWell, I mentioned that my mother's friend Charlie, who runs a soup kitchen, gives away surplus donations to other charities, mostly ones he knows well. I used this as an example of the kind of behavior you might hope to see in a cooperative situation where people have convergent goals.
I recently had a chance to speak with Charlie, and he mentioned something else I found surprising: his soup kitchen made a decision not to accept donations online. They only took paper checks. Since they get enough money that way, they don't want to accumulate more money that they don't know how to use.
When I asked why, Charlie told me that it would be bad for the donors to support a charity if they haven't shown up in person to get a sense of what it does.
I have faith that if only people get a chance to hear a lot of different kinds of songs, they'll decide what are the good ones. -Pete Seeger
A lot of the discourse around honesty has focused on the value of maintaining a reputation for honesty. This is an important reason to keep one's word, but it's not the only reason to have an honest intent to inform. Another reason is epistemic and moral humility.
Perhaps much of what appears to be disagreement on how much dishonesty is permissible is in fact disagreement on how much words have meanings. I'll begin with a brief treatment of the reputation considerations for keeping one's word, and then complicate it.
I've promoted Effective Altruism in the past. I will probably continue to promote some EA-related projects. Many individual EAs are well-intentioned, talented, and doing extremely important, valuable work. Many EA organizations have good people working for them, and are doing good work on important problems.
That's why I think Sarah Constantin’s recent writing on Effective Altruism’s integrity problem is so important. If we are going to get anything done in the long run, we have to have reliable sources of information. This doesn't work unless we call out misrepresentations and systematic failures of honesty, and unless these concerns are taken seriously.
Sarah's post is titled “EA Has A Lying Problem.” Some people think this is overstated. This is an important topic to be precise on - the whole point of raising these issues is to make public discourse more reliable. For this reason, we want to avoid accusing people of things that aren’t actually true. It’s also important that we align incentives correctly. If dishonesty is not punished, but admitting a policy of dishonesty is, this might just make our discourse worse, not better.
To identify the problem precisely, we need language that can distinguish making specific assertions that are not factually accurate, from other conduct that contributes to dishonesty in discourse. I'm going to lay out a framework for thinking about this and when it's appropriate to hold someone to a high standard of honesty, and then show how it applies to the cases Sarah brings up.