Category Archives: Cooperation

Humble Charlie

I saw a beggar leaning on his wooden crutch.
He said to me, "You must not ask for so much."
And a pretty woman leaning in her darkened door.
She cried to me, "Hey, why not ask for more?"

-Leonard Cohen, Bird on the Wire

In my series on GiveWell, I mentioned that my mother's friend Charlie, who runs a soup kitchen, gives away surplus donations to other charities, mostly ones he knows well. I used this as an example of the kind of behavior you should hope to see from someone with more money than they know how to use.

I recently had a chance to speak with Charlie, and he mentioned something else I found surprising: his soup kitchen made a decision not to accept donations online. They only take paper checks. They get enough money that way, and they don't want to accumulate more money than they know how to use.

When I asked why, Charlie told me that it would be bad for donors to support a charity if they hadn't shown up in person to get a sense of what it does.

At first I was confused. This didn't seem like very consequentialist thinking. I briefly considered the possibility that Charlie was being naïve, or irrationally traditionalist, or simply pattern-matching on his idea of what a good charity looks like. But after thinking about it for a moment, I realized that Charlie was getting something deeply right that almost everyone gets wrong, at least where money is involved. He was trying to maximize benefits rather than costs, in a case where the costs are much easier to measure.

Donations are a cost. Charlie had enough basic humility not to try to impose costs for no specific reason, even though he could easily do so.

Donors who know what the charity they're supporting does, who have personally inspected it, are following a heuristic that actually does something. Donors who give based on branding and buzz are putting their money somewhere that's well-optimized for branding and buzz. What do you do if you want to cooperate with the first heuristic, and not accidentally promote the second? What do you do if you don't want your own attention drawn towards things that will please the second crowd, at the expense of the first? Charlie's strategy seems like a reasonable answer.

Attention is a cost. Charlie's strategy avoids imposing this cost, except where it is most needed.

When I was writing my series on GiveWell, I asked Charlie for permission to tell the story about him. He asked me not to mention the name of his soup kitchen, because if other charities knew he had surplus money to give away, he'd be deluged with solicitations. This would waste their time asking him for money, and his own time responding.

Right now, if someone mentions a giving opportunity to Charlie, it's a high-quality signal. Charlie felt the need to hide his good works, because developing a reputation as someone with money to give away would destroy his ability to find high-quality giving opportunities.

Of course, there are benefits to honest openness as well, if you are humble enough to open yourself up to judgment, both positive and negative. But not everyone can pull that off in every context. Some people just don't like being criticized, for instance. And if you hide the things about yourself or your institution that are most likely to draw criticism, while publicizing the parts that attract positive attention and resources, then you're not really capturing those benefits. You're just engaged in brand image management, part of our society's cost-maximization machinery.

This seems like a promising basic strategy for implementing convergent strategies in a divergent world: Stay humble. Minimize the costs you impose on others. Hide from the machinery of cost-maximization. Send out signals optimized for audience quality rather than quantity, to help like-minded people find you.

Against neglectedness considerations

Effective Altruists talk about looking for neglected causes. This makes a great deal of intuitive sense. If you are trying to distribute food, and one person is hungry, and another has enough food, it does more direct good to give the food to the hungry person.

Likewise, if you are trying to decide on a research project, discovering penicillin might be a poor choice. We know that penicillin is an excellent thing to know about and has probably already saved many lives, but it’s already been discovered and put to common use. You’d do better discovering something that hasn’t been discovered yet.

My critique of GiveWell sometimes runs contrary to this principle. In particular, I argue that donors should think of crowding out effects as a benefit, not a cost, and that they should often be happy to give more than their “fair share” to the best giving opportunities. I ought to explain. Continue reading

GiveWell and the problem of partial funding

At the end of 2015, GiveWell wrote up its reasons for recommending that Good Ventures partially but not fully fund the GiveWell top charities. This reasoning seemed incomplete to me, and when I talked about it with others in the EA community, their explanations tended to switch between what seemed to me to be incomplete and mutually exclusive models of what was going on. This bothered me, because the relevant principles are close to the core of what EA is.

Apparently, a foundation that plans to move around ten billion dollars, relying on advice from GiveWell, isn't enough to get the top charities fully funded. That's weird and surprising. The mysterious tendency to accumulate big piles of money and then not do anything with most of it seemed like a pretty important problem, and I wanted to understand it before trying to add more money to this particular pile.

So I decided to write up, as best I could, a clear, disjunctive treatment of the main arguments I’d seen for the behavior of GiveWell, the Open Philanthropy Project, and Good Ventures. Unfortunately, my writeup ended up being very long. I’ve since been encouraged to write a shorter summary with more specific recommendations. This is that summary. Continue reading

The humility argument for honesty

I have faith that if only people get a chance to hear a lot of different kinds of things, they'll decide what are the good ones.

-Pete Seeger

A lot of the discourse around honesty has focused on the value of maintaining a reputation for honesty. This is an important reason to keep one's word, but it's not the only reason to have an honest intent to inform. Another reason is epistemic and moral humility. Continue reading

Honesty and perjury

I've promoted Effective Altruism in the past. I will probably continue to promote some EA-related projects. Many individual EAs are well-intentioned, talented, and doing extremely important, valuable work. Many EA organizations have good people working for them, and are doing good work on important problems.

That's why I think Sarah Constantin’s recent writing on Effective Altruism’s integrity problem is so important. If we are going to get anything done in the long run, we have to have reliable sources of information. This doesn't work unless misrepresentations and systematic failures of honesty get called out, and unless those concerns are taken seriously.

Sarah's post is titled “EA Has A Lying Problem.” Some people think this is overstated. This is an important topic to be precise on - the whole point of raising these issues is to make public discourse more reliable. For this reason, we want to avoid accusing people of things that aren’t actually true. It’s also important that we align incentives correctly. If dishonesty is not punished, but admitting a policy of dishonesty is, this might just make our discourse worse, not better.

To identify the problem precisely, we need language that can distinguish making specific assertions that are not factually accurate, from other conduct that contributes to dishonesty in discourse. I'm going to lay out a framework for thinking about this and when it's appropriate to hold someone to a high standard of honesty, and then show how it applies to the cases Sarah brings up. Continue reading

Guess culture screens for trying to cooperate

My friend Miri (quoted with permission) wrote this on Facebook a while back:

Midwesterners are intolerably passive aggressive. My family is sitting among some grass in the dunes because it's the only shady place and a park ranger drives by and says, "That grass you're sitting in--we try to protect that." I say the only thing that makes sense to say in response, which is, "Thanks for letting me know! We'll be careful with it." And I go back to my reading.

Then I look up and she's still there. I look at her for a few moments and she says, "You need to get out of there." I'm like, ok. Why can't you just say that the first time? Not everyone grew up in your damn convoluted culture. Say what you fucking mean.

In the comments, someone replied:

One of the best parts of NYC is that no one dances around what they mean to say here. On the contrary, once I heard a guy on the subway say, to confused-looking strangers, "Do you need some fucking help or what?”

This particular incident seems like obnoxious behavior on the part of the park ranger, but it got me curious about why this sort of norm seems to win out over more explicit communication in many places. Continue reading

Exploitation as a Turing test

A friend recently told me that the ghosts that chase Pac-Man in the eponymous arcade game don't vary their behavior based on Pac-Man's position. At first, this surprised me. If, playing Pac-Man, I'm running away from one of the ghosts chasing me, and eat one of the special “energizer” pellets that lets Pac-Man eat the ghosts instead of vice-versa, then the ghost turns and runs away.

My friend responded that the ghosts don't start running away per se when Pac-Man becomes dangerous to them. Instead, they change direction. Pac-Man's own incentives mean that most of the time, while the ghosts are dangerous to Pac-Man, Pac-Man will be running away from them, so that if a ghost is near, it's probably because it's moving towards Pac-Man.

Of course, I had never tried the opposite – eating an energizer pellet near a ghost that was already moving away from me, and seeing whether it changed direction to head towards me. It had never occurred to me that the ghosts might not be optimizing at all.

I'd have seen through this immediately if I'd tried to make my beliefs pay rent. If I'd tried to use my belief in the ghosts' intelligence to score more points, I'd have tried to hang out around them until they started chasing me, collect them all, and lead them to an energizer pellet, so that I could eat it and then turn around and eat them. If I'd tried to do this, I'd have noticed very quickly whether the ghosts' movement was affected at all by Pac-Man's position on the map.

(As it happens, the ghosts really do chase Pac-Man – I was right after all, and my friend had been thinking of adversaries in the game Q-Bert – but the point is that I wouldn’t have really known either way.)

This is how to test whether something's intelligent. Try to make use of the hypothesis that it is intelligent, by extracting some advantage from this fact. Continue reading

Canons (What are they good for?)

People in the Effective Altruist and Rationalist intellectual communities have been discussing moving discourse back into the public sphere lately. I agree with this goal and want to help. There are reasons to think that we need not only public discourse, but public fora. One reason is that there's value specifically in having a public set of canonical writing that members of an intellectual community are expected to have read. Another is that writers want to be heard, and on fora where people can easily comment, it's easier to tell whether people are listening and benefiting from your writing.

This post begins with a brief review of the case for public discourse. For reasons I hope to make clear in an upcoming post, I encourage people who want to comment on that to click through to the posts I linked to by Sarah Constantin and Anna Salamon. For another perspective you can read my prior post on this topic, Be secretly wrong. The second section explores the case for a community canon, suggesting that there are three distinct desiderata that can be optimized for separately.

This is an essay exploring and introducing a few ideas, not advancing an argument. Continue reading

GiveWell: a case study in effective altruism, part 6

This is the last of a series of blog posts examining seven arguments I laid out for limiting Good Ventures funding to the GiveWell top charities. In this post, I articulate what it might look like to apply the principles I've proposed. I then discuss my prior relationship with and personal feelings about GiveWell and the Open Philanthropy Project.

A lot of arguments about effective altruism read to me like nitpicking without specific action recommendations, and give me the impression of criticism for criticism's sake. To avoid this, I've tried to outline here what it might look like to act on the considerations laid out in this series of posts in a principled way. I haven't constructed the arguments in order to favor, or even generate, the recommendations; to the contrary, I had to rewrite this section after working through the arguments. Continue reading