Sufficiently sincere confirmation bias is indistinguishable from science

Some theater people at NYU wanted to demonstrate how gender stereotypes affected the 2016 US presidential election. So they decided to put on a theatrical performance of the presidential debates – but with the genders of the principals swapped. They assumed that this would show how much of a disadvantage Hillary Clinton was working under because of her gender. They were shocked to discover the opposite – audiences full of Clinton supporters, watching the gender-swapped debates, came away thinking that Trump was a better communicator than they'd thought.

The principals don't seem to have come into this with a fair-minded attitude. Instead, it seems to have been a case of "I'll show them!":

Salvatore says he and Guadalupe began the project assuming that the gender inversion would confirm what they’d each suspected watching the real-life debates: that Trump’s aggression—his tendency to interrupt and attack—would never be tolerated in a woman, and that Clinton’s competence and preparedness would seem even more convincing coming from a man.

Let's be clear about this. This was not epistemic even-handedness. This was a sincere attempt at confirmation bias. They believed one thing, and looked only for confirming evidence to prove their point. It was only when they started actually putting together the experiment that they realized they might learn the opposite lesson:

But the lessons about gender that emerged in rehearsal turned out to be much less tidy. What was Jonathan Gordon smiling about all the time? And didn’t he seem a little stiff, tethered to rehearsed statements at the podium, while Brenda King, plainspoken and confident, freely roamed the stage? Which one would audiences find more likeable?

What made this work? I think what happened is that they took their own beliefs literally. They actually believed that people hated Hillary because she was a woman, and so their idea of something that they were confident would show this clearly was a fair test. Because of this, when things came out the opposite of the way they'd predicted, they noticed and were surprised, because they actually expected the demonstration to work. Continue reading

Bindings and assurances

I've read a few business books and articles that contrast national styles of contract negotiation. In some countries, such as the US, a contract is meant to be fully binding: if one of the parties can predict that they will likely break the contract in the future, accepting that version of the contract anyway is seen as substantively and surprisingly dishonest. In other countries this is not seen as terribly unusual - a contract's just an initial guideline, to be renegotiated whenever incentives slip too far out of whack.

More generally, some people reward me for thinking carefully before agreeing to do costly things for them or making potentially big promises, and wording them carefully to not overcommit, because it raises their level of trust in me. Others seem to want to punish me for this because it makes them think I don't really want to do the thing or don't really like them. Continue reading

Humble Charlie

I saw a beggar leaning on his wooden crutch.
He said to me, "You must not ask for so much."
And a pretty woman leaning in her darkened door.
She cried to me, "Hey, why not ask for more?"

-Leonard Cohen, Bird on the Wire

In my series on GiveWell, I mentioned that my mother's friend Charlie, who runs a soup kitchen, gives away surplus donations to other charities, mostly ones he knows well. I used this as an example of the kind of behavior you might hope to see in a cooperative situation where people have convergent goals.

I recently had a chance to speak with Charlie, and he mentioned something else I found surprising: his soup kitchen made a decision not to accept donations online. They only took paper checks. Since they get enough money that way, they don't want to accumulate more money than they know how to use.

When I asked why, Charlie told me that it would be bad for donors to support a charity if they hadn't shown up in person to get a sense of what it does. Continue reading

Against neglectedness considerations

Effective Altruists talk about looking for neglected causes. This makes a great deal of intuitive sense. If you are trying to distribute food, and one person is hungry, and another has enough food, it does more direct good to give the food to the hungry person.

Likewise, if you are trying to decide on a research project, discovering penicillin might be a poor choice. We know that penicillin is an excellent thing to know about and has probably already saved many lives, but it’s already been discovered and put to common use. You’d do better discovering something that hasn’t been discovered yet.

My critique of GiveWell sometimes runs contrary to this principle. In particular, I argue that donors should think of crowding out effects as a benefit, not a cost, and that they should often be happy to give more than their “fair share” to the best giving opportunities. I ought to explain. Continue reading

GiveWell and the problem of partial funding

At the end of 2015, GiveWell wrote up its reasons for recommending that Good Ventures partially but not fully fund the GiveWell top charities. This reasoning seemed incomplete to me, and when I talked about it with others in the EA community, their explanations tended to switch between what seemed to me to be incomplete and mutually exclusive models of what was going on. This bothered me, because the relevant principles are close to the core of what EA is.

Even a foundation that plans to move around ten billion dollars, relying on advice from GiveWell, wasn't enough to get the top charities fully funded. That's weird and surprising. The mysterious tendency to accumulate big piles of money and then not do anything with most of it seemed like a pretty important problem, and I wanted to understand it before trying to add more money to this particular pile.

So I decided to write up, as best I could, a clear, disjunctive treatment of the main arguments I’d seen for the behavior of GiveWell, the Open Philanthropy Project, and Good Ventures. Unfortunately, my writeup ended up being very long. I’ve since been encouraged to write a shorter summary with more specific recommendations. This is that summary. Continue reading

The humility argument for honesty

I have faith that if only people get a chance to hear a lot of different kinds of songs, they'll decide what are the good ones. -Pete Seeger

A lot of the discourse around honesty has focused on the value of maintaining a reputation for honesty. This is an important reason to keep one's word, but it's not the only reason to have an honest intent to inform. Another reason is epistemic and moral humility. Continue reading

Honesty and perjury

I've promoted Effective Altruism in the past. I will probably continue to promote some EA-related projects. Many individual EAs are well-intentioned, talented, and doing extremely important, valuable work. Many EA organizations have good people working for them, and are doing good work on important problems.

That's why I think Sarah Constantin's recent writing on Effective Altruism's integrity problem is so important. If we are going to get anything done, in the long run, we have to have reliable sources of information. This doesn't work unless we call out misrepresentations and systematic failures of honesty, and unless these concerns are taken seriously.

Sarah's post is titled “EA Has A Lying Problem.” Some people think this is overstated. This is an important topic to be precise on - the whole point of raising these issues is to make public discourse more reliable. For this reason, we want to avoid accusing people of things that aren’t actually true. It’s also important that we align incentives correctly. If dishonesty is not punished, but admitting a policy of dishonesty is, this might just make our discourse worse, not better.

To identify the problem precisely, we need language that can distinguish making specific assertions that are not factually accurate, from other conduct that contributes to dishonesty in discourse. I'm going to lay out a framework for thinking about this and when it's appropriate to hold someone to a high standard of honesty, and then show how it applies to the cases Sarah brings up. Continue reading

Guess culture screens for trying to cooperate

My friend Miri (quoted with permission) wrote this on Facebook a while back:

Midwesterners are intolerably passive aggressive. My family is sitting among some grass in the dunes because it's the only shady place and a park ranger drives by and says, "That grass you're sitting in--we try to protect that." I say the only thing that makes sense to say in response, which is, "Thanks for letting me know! We'll be careful with it." And I go back to my reading.

Then I look up and she's still there. I look at her for a few moments and she says, "You need to get out of there." I'm like, ok. Why can't you just say that the first time? Not everyone grew up in your damn convoluted culture. Say what you fucking mean.

In the comments, someone replied:

One of the best parts of NYC is that no one dances around what they mean to say here. On the contrary, once I heard a guy on the subway say, to confused-looking strangers, "Do you need some fucking help or what?"

This particular incident seems like obnoxious behavior on the part of the park ranger, but it got me curious about why this sort of norm seems to win out over more explicit communication in many places. Continue reading

Honesty, magic, and science

A chocolatier friend posted this to Facebook (quoted with permission):

Just turned down an invite to sell chocolate at an event because they were going to advertise it using *free Tarot readings*

Three reasons:

-Do we as a society need more of this nonsense?

-Do I want to deal with customers that naive?

-Do I trust organizers that are either credulous or unethically pandering?

Nope, nope and nope.

I think that this is an excellent example of sticking up for principles in a way that a lot of the people around me seem to find nonobvious: refusing to sanction something you think is deceptive. This is a good practice, and it needs to be more widespread.

I've previously criticized the practice of crediting "matching donations" drives with gains from controlling others’ behavior, but not the corresponding loss of information they would otherwise have contributed (or the loss from accepting their symmetrical control over you). Similarly, there’s a temptation to count the gains from exploiting an event full of Tarot-credulous customers to sell your actually-high-quality chocolate, but not to count the loss of allowing such an event to exploit you. When you help someone else attract attention to something dishonest, you are imposing costs on others.

That said, I think things like Tarot (and "Magic" in general) are hard to talk about reasonably because people mean such different things when talking about them. Obviously which Tarot cards one draws are determined by a pseudorandom process, and not one meaningfully causally entangled with the future life outcomes of the person for whom the Tarot cards are being read.

However, like many other divination processes, Tarot can serve as a seed around which the reader can signal-boost their own insights about the person being read for. Often we have subtle intuitions about each other that don't make it into consciousness but are fairly insightful. I've done a Tarot reading (once), and while I don't need the cards to weave a story about someone with my intuitions, it's easy for me to imagine someone only having access to that kind of intuition if they're in a headspace where they imagine that the cards are "telling" them the story.

I also wonder whether it's possible to consistently apply this epistemic standard. The replication crisis really happened and we need to update on it - even "science" isn't immune to casual deceptiveness and sloppiness with the facts. Someone giving a TED-style talk on psychology research is also likely to be saying stuff that's intuitive but not based on solid knowledge, and making up a story whereby we "know" these things because an experiment was performed.

(I'm not saying that science isn't real. Science was clearly real at some point in the past, and some forms of science and engineering now seem to be making real progress even to this day. I'm just saying that not ALL contemporary "science" is clearly better than Tarot.)

IF we don't apply this epistemic standard consistently, then what we're actually doing is calling out the out-group for deception, while tolerating in-group hypocrisy. We have cultural cover in our in-group for calling out Tarot as lies, but people would probably look at us funny for refusing to associate with someone giving a talk on power poses for the same reason. This might actually be the right choice, I'm not sure - in practice it's close to what I do - but it seems important to notice when that's what we're doing.