GiveWell: a case study in effective altruism, part 1

Direct critiques of effective altruism have tended to take a form ill-suited to persuade the sort of person who is excited about it. One critique points somewhat vaguely at the virtues of intuition and first-hand knowledge, and implies that thinking is not a good way to make decisions. Others have criticized effective altruism's tendency in practice towards centralization and top-down decisionmaking, and implied that making comparisons across different programs is immoral. What's missing is a critique by someone sympathetic to the things that make effective altruism appealing: a desire to follow the evidence wherever it leads, use explicit methods of evaluation whenever possible, and be sensitive to considerations of scope.

I am going to try to begin that sympathetic critique here by looking at GiveWell, a nonprofit that tries to recommend the best giving opportunities. GiveWell is a good test case because it is now fairly central to the effective altruist movement, and it has been unusually honest and open about its decisionmaking processes. As it has developed and grown, it has had to deal with some of the tensions inherent in the effective altruist project in practice.

In the course of implementing effective altruist ideals, GiveWell has accumulated massive conflicts of interest, along with ever-larger amounts of money, power, and influence. When I hear this discussed, people generally justify it by saying that it is in the service of a higher impact on the world. Such a standard allows for a lot of moral flexibility in practice. If GiveWell wishes to be held to that standard, then we need to actually hold it to that standard - the standard of maximizing expected value - and see how it measures up.

That’s an extremely high standard to meet. GiveWell’s written that you shouldn’t take expected-value calculations literally. Maybe any attempt to maximize impact by explicitly evaluating options should be scope-limited, and moderated by common sense. But if you accept that defense, then the normal rules apply, and we should be skeptical of any organization whose conduct is justified by the fully general mandate to do the most good.

We can’t have it both ways.

GiveWell recently wrote about coordination between donors, to explain why it recommended that a major funder commit, in some circumstances, to not fully funding the charities GiveWell recommends to the public, out of concern about crowding out other donors. My post is largely a response to that piece.

I'm going to start by considering the general question of coordination and trust. The key claims I make here are:

  • If you do not assume that you are extremely special, then you shouldn't be very worried about "crowding out" others' giving.
  • A common sense approach that assumes that other people can be good gets a similar answer, and performs reasonably well.

Next, I will evaluate GiveWell's coordination recommendation on the merits:

  • I try to elucidate GiveWell's public reasoning on this point, and why it's worrying.
  • I articulate seven distinct arguments I've heard used to explain GiveWell's recommendation.

This post is the first in a series. In future posts, I will evaluate each of the seven arguments separately. Then I will explore what it might mean in practice to take my critique to heart, fleshing this out with specific recommendations.

Cooperation between donors

The argument for crowding out

Effective altruist discourse often talks about replaceability as a thing to consider when making career or giving decisions. The simple replaceability argument says that when you consider spending some resource on doing good - say, your own time or money - you should look at how much you end up displacing versus adding to the other resources brought to bear on the problem. Under this argument, any resources you displace are valued at zero: if, in order to heal the sick, you beat out another candidate for admission to medical school, it is as if that other person is deleted from the world, at least as far as their work is concerned. You then compare this to the benefit of investing your effort in other areas. This argument favors spending resources on neglected causes, since you're more likely to be adding to the resources spent on them, rather than crowding people out. (Replacement is not always 1:1, and might be better modeled as an elasticity.)
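To make the simple argument concrete, here is a toy calculation of my own - the function and all the numbers are illustrative, not anything GiveWell or anyone else publishes - with the displaced resources valued at zero and replacement treated as an elasticity rather than strict 1:1 substitution:

```python
def simple_replaceability_value(your_impact, displaced_impact, displacement=1.0):
    """Toy model of the simple replaceability argument (illustrative only).

    your_impact:      good done via the program by your contribution
    displaced_impact: good the contribution you displace would have done here
    displacement:     fraction of your contribution that merely replaces
                      someone else's (an elasticity: 1.0 = fully replaceable,
                      0.0 = purely additive)

    The simple argument values whatever the displaced people would do
    elsewhere at zero, so your counterfactual impact is just your own
    impact minus the impact you displaced.
    """
    return your_impact - displacement * displaced_impact

# A crowded cause: you mostly replace someone nearly as good as you.
print(simple_replaceability_value(100, 95, displacement=0.9))  # 14.5
# A neglected cause: little displacement, so most of your impact is additive.
print(simple_replaceability_value(80, 60, displacement=0.1))   # 74.0
```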

But in the absence of specific evidence to the contrary, you should assume that when your time or money crowds out others' time or money, the people you're considering crowding out don't just go home and sulk - they go work on something else. And since their favorite option is similar to yours, this is evidence that they are looking for similar things - so their next-best option, their opportunity cost, is similar to yours.

At this year's EA Global conference, Jeff Kaufman made this point pretty well:

As effective altruists, we care about the good that comes because of us, not the good that comes through us. This means we're always asking the question, "what will happen otherwise?" If I become a doctor, how much less suffering will there be? Would someone else have been working those same emergency room shifts, treating those same injuries? This is the concept of replacability [sic]: would this still happen if I didn't do it.

In the early days of EA there was a lot of thought around replacability that wrote off many potentially beneficial careers as "fully replacable". We argued that the number of doctors was determined by the number of medical school admission slots, or that the number of non-profit workers was determined by available funding. With this view the benefit of you getting a job doing direct work is minimal, because you're just displacing someone else who would have done about as good a job as you. It places earning to give as a very strong option, because the other people in your position would not be donating anywhere near as much as you do, plus the direct work you can fund wouldn't happen otherwise.

This view applies to funding opportunities, however, just as it applies to earning opportunities. Organizations have an amount of funding they can practically use, the best organizations are likely to fill their funding gaps, my donation just displaces someone else's. With lots of EAs trying to give to where their money will do the most good is there anywhere left to give?

The problem with replacability arguments is that they put you in a frame of mind where you tend to overlook important considerations. If I apply to CEA and get the job, what will the applicant they would have hired instead do with their time? A replacability argument generally assumes they wouldn't do much, which implies the value of my taking the job is the difference between how well they would do it and how well I would do it. But since we both applied to work at an EA organization, if they don't get the job they're probably going to do something pretty valuable with their time, and the argument falls apart. Similarly, if you fill an EA organization's funding gap and so I have more money to give elsewhere, we likely have similar ideas about what's important and the difference between what I end up funding and what you would have funded, had the situations been reversed, may not be large.

Thus, it's a very good rule of thumb to do the best thing you can, considered simply - and assume that the people you replace will do the next best thing. This approximation should be closer to optimal in circumstances where you're contributing a fungible resource, or lack direct information about which opportunities to do good you are uniquely well-placed for.
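Concretely, under this rule of thumb the toy calculation above changes: instead of valuing the people you crowd out at zero, you credit them with their next-best option, judged by your own standards. Again, the function and the numbers are my own illustration, not a formula from anyone's actual methodology.

```python
def value_with_redirection(your_impact, displaced_impact, displaced_next_best,
                           displacement=1.0):
    """Counterfactual impact when the people you displace redirect their effort.

    Illustrative only. `displaced_next_best` is the good the crowded-out
    people do with their time or money instead; if they are roughly aligned
    with you, it is probably not far below `displaced_impact`.
    """
    world_with_you = your_impact + displacement * displaced_next_best
    world_without_you = displacement * displaced_impact
    return world_with_you - world_without_you

# Same crowded cause as before, but the donor you crowd out gives somewhere
# almost as good instead of doing nothing:
print(value_with_redirection(100, 95, displaced_next_best=90, displacement=0.9))  # 95.5
# The simple model above called this same situation 14.5.
```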

According to this heuristic, when you are considering donating to a charity or funding some other beneficial program, you should consider its room for more funding prior to your donation - you should be crowded out by prior donors, and future donors whose behavior you will not affect - but you should not consider the presence of people who might fund the program if you decline to do so. You want to crowd out everyone after you.

This does not imply that you should always make sure to support the best possible program; it can take forever to find the best possible program. Search costs are real. If a program looks set to be adequately funded, it may not be a good use of your time to evaluate it. It may be more efficient to look for programs others have overlooked, than to rehash the same judgments they made. But for the same reason, when you crowd others out, you save them time evaluating the option you've already taken.

People can favor the same action for different reasons. Sometimes there is reason to think that you have different values than other people, even if you agree that a program is worth supporting. This is a valid reason to think that their opportunity cost may be much worse than yours, when judged by your standards. But this kind of misalignment is not axiomatic - it is a positive description of the world that requires evidence. Even when you do have such evidence, others' values are unlikely to be totally orthogonal to yours. It would be a pretty strange coincidence if their best action were the same as yours, but their second-best action would be a terrible idea.

Nor is alignment among humans an all-or-nothing deal. The question you have to ask is how badly misaligned you and others are in the relevant domain, and this question has a quantitative answer, not a binary one - especially once you account for uncertainty. The same considerations that make considering all the options in detail prohibitively expensive also make it expensive to game out exactly how your actions will affect others'. This expense trades off against the expense of value misalignment; it's often worth paying a substantial price in potential misalignment, in order to get on with the work and avoid spending too much time and effort keeping others in line.

A world where those who want to do good typically follow the rule of taking high-value opportunities to do good, and not worrying too much about whether there is some implied "crowding out," looks a lot like a world in which people are looking for the best opportunities to buy the funder's credit for a program. The most unambiguous opportunities would be the ones that most reliably find funding, for the same reason that you'd want to get in early on any other sort of good deal. If you're the only person who thinks a program is good, it's less likely that that program will be funded - but the uniqueness of your opinion is also evidence that you're mistaken in some way, unless you're very, very special.

Can trust work?

I was recently talking with my mother, and she shared a story in which embracing crowding out had exactly the effect one would hope it would have under ideal conditions of cooperation.

A while ago she was introduced to Charlie, who runs a soup kitchen in San Francisco. She has visited the soup kitchen, knows people who benefit from it, and overall knows personally that it does some substantial good with the money it receives. Accordingly, she has from time to time given money to the soup kitchen. My mother is not careful to make sure that the charities she gives to have room for more funding. Instead, she trusts that the people running this soup kitchen are doing their best to use the money for good.

In an apparently unrelated development, my mother found out that a high school friend had started a women's art collective in Haiti, designed to provide an income source for Haitian women by helping them sell their artwork. My mother did not perform an explicit cost-benefit analysis, but she has the common sense to understand that providing an income source is especially valuable in low-income countries. She did not use formal accountability metrics, but talked with someone personally known to her, who is frequently physically present in Haiti running the program, and my mother is satisfied that she understands the basics of how this particular program works. So from time to time she gives to this charity as well.

In both cases, the charity was too small to be worth GiveWell's time to evaluate, but these are the kinds of heuristics I would expect to perform well at finding ways to meaningfully help others.

Since my mother finds helping others interesting, she sometimes mentions opportunities she has found in conversation with friends such as Charlie - not as a pitch to influence his behavior, but because she is excited to have found ways to do good.

During one of their conversations, my mother told Charlie about the charitable work her friend does in Haiti. Charlie asked for contact information. He explained that he does not try to hold onto more than he needs to keep the soup kitchen running, and that at times, when the soup kitchen has had more money than it needed, he has passed the money along to other charities he thinks can use it - such as the women's art collective in Haiti. He had been looking for a charity he could count on to use the money well, and he was grateful when my mother, a trusted friend, recommended one.

Neither my mother nor Charlie identifies as an effective altruist. Neither my mother nor Charlie has studied coordination theory. But they got this simple coordination problem right - they coordinated much as they might have as the result of laborious explicit coordination - by keeping it simple, focusing on nothing more than trying to do good, and trusting their friends to do the same.

GiveWell, Good Ventures, and partial funding

GiveWell is a charity evaluator, committed to finding the most cost-effective, evidence-backed charities. Good Ventures is a foundation started by Dustin Moskovitz and Cari Tuna with a plan to eventually give away about ten billion dollars[1] in the highest-impact way it can, largely advised by the Open Philanthropy Project (a GiveWell spinoff). At the end of 2015, GiveWell recommended that Good Ventures commit to only partially funding GiveWell's top charities, because of coordination theory considerations:

We do not want to be in the habit of – or gain a reputation for – recommending that Good Ventures fill the entire funding gap of every strong giving opportunity we see. In the long run, we feel this would create incentives for other donors to avoid the causes and grants we’re interested in; this, in turn, could lead to a much lower-than-optimal amount of total donor interest in the things we find most promising.

Does this make sense for Good Ventures, if it wants to maximize the impact of its giving?

Splitting: prudent thrift or funding gap theater?

The argument for crowding out suggests that Good Ventures should ignore its effect on future donors, and simply give to the best giving opportunities available, until it runs out of money.

Therefore, if Good Ventures believes that its opportunity cost is lower than the value of GiveWell top charities, then it should be happy to fund the whole of GiveWell's top charities, "crowding out" other donors into their next-best giving opportunities, and continuing to do so until - and only until - it runs out of opportunities that look better than holding onto the money. If, on the other hand, Good Ventures believes that its opportunity cost is higher than the value of today's giving opportunities, it should fund none of GiveWell's top charities.

GiveWell's actual funding recommendations appear to be plausibly consonant with this; GiveWell recommended that in most cases Good Ventures fund GiveWell top charities only up to "capacity-relevant" levels (i.e. the level below which the charity might need to shut its doors or dramatically scale down), after which point we should expect the marginal value of donations to decline. (The one exception is an additional $16.3 million for the Against Malaria Foundation.) But in a comment on the 2015 recommendations post, GiveWell co-founder Holden Karnofsky suggests that GiveWell and the Open Philanthropy Project do not expect to find funding opportunities that are better than the current GiveWell top charities, for Good Ventures to spend its last billion dollars on:

By declining to fund these opportunities, Good Ventures is instead holding onto the funds and will spend them later on the best opportunities possible. We’re extremely unsure how these future opportunities will compare to today’s; our best guess (based on the reasoning I laid out) is that today’s are better, but that’s an extremely tentative guess and we don’t feel strongly about it.

Holden indicates some uncertainty around this estimate, but if we take it as a central estimate, then GiveWell's position appears to be that its top charities are better giving opportunities than the marginal Good Ventures giving opportunity - and yet GiveWell does not recommend that Good Ventures fully fund them, in order to make sure that other donors have an incentive to fund GiveWell's top charities.

The post on coordination theory lays out the reasoning a bit more explicitly:

Encouraging other donors to help support the causes and organizations we’re interested in – and ensuring that they have genuine incentives to do so – will sometimes directly contradict the goal of fully funding the best giving opportunities we see. Thinking about GiveWell’s top charities provides a vivid example. If we recommended that Good Ventures fully fund each of our top charities, GiveWell would no longer recommend these charities to individual donors. In the short run, this could mean forgoing tens of millions of dollars of potential support for these charities from individuals (this is how much we project individuals will give to our top charities this year). In the long run, the costs could be much greater: we believe that individual-donor-based support of GiveWell’s top charities has the ability to grow greatly. A major donor who simply funded top charities to capacity would be – in our view – acting very suboptimally, putting in a much greater share of the funding than ought to be necessary over the long run.

[...] Individuals do most of their giving in December. We could wait until we’ve seen how much support comes in for each of our top charities, and then – in, say, February of 2016 – recommend Good Ventures grants to fill whatever funding gaps remain.

This would have the advantage of fully funding top charities, while not spending more (in the short run) than necessary to do so. However, it would have the disadvantage of creating a long-term incentive for individuals to stop supporting our top charities, since the only effect of their giving (in this scenario) would be to reduce the amount we recommend to Good Ventures. Most individuals would probably not notice this issue unprompted, but it’s very important to us to be open with our audience about the pros and cons of taking our recommendations, and we don’t want our offering to be valuable/attractive only to people who misunderstand it.

[...] Any approach that is designed to ensure that the entire funding gap is always filled will be creating the kind of problematic incentives outlined here.

The basic question is whether Good Ventures should respond to each additional dollar given by other GiveWell donors by giving less ("funging"), more ("matching"), or the same amount ("splitting"). Matching has obvious perverse consequences - Good Ventures's giving to GiveWell top charities would be least in the circumstances where the charities most need it. But what's the problem with funging?
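Before getting to that, it may help to write the three options as explicit response rules. This is my own sketch of the definitions above, with made-up numbers; GiveWell's actual allocation process is more involved.

```python
def funging(gap, other_donations):
    """Give whatever remains of the funding gap after other donors give."""
    return max(gap - other_donations, 0)

def matching(gap, other_donations, ratio=1.0):
    """Give more when others give more, up to the remaining gap."""
    return min(ratio * other_donations, gap)

def splitting(gap, other_donations, share=0.5):
    """Give a fixed share of the estimated gap, ignoring what others do this year."""
    return share * gap  # other_donations deliberately unused

gap = 10_000_000
for other in (2_000_000, 6_000_000):
    print(funging(gap, other), matching(gap, other), splitting(gap, other))
# As other donors give more, funging falls, matching rises, and splitting stays
# flat - which is why matching is most generous exactly when the charity needs
# the money least.
```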

As I understand it, the reasoning here is: GiveWell can potentially influence a large pool of individual donors to give to effective charities, but only if it claims attractive cost-effectiveness numbers. GiveWell is only willing to make this claim if it honestly believes the claim to be true. If Good Ventures fully funds the top charities or commits to filling any funding gap that arises, then the current cost-effectiveness numbers will no longer be meaningfully true, since the marginal effect of giving a dollar to a GiveWell top charity is not an extra dollar allocated to the charity, but one fewer dollar given by Good Ventures. Therefore, if Good Ventures makes such a commitment, GiveWell will not be able to honestly make its cost-effectiveness claims. For this reason, GiveWell is recommending that Good Ventures refuse to fund available opportunities to save lives in the developing world, even though they have enough money to do so and they do not believe that their next-best use for the money is better, in order to provide other donors with a real incentive to fund such charities.

For instance, take the case of GiveWell's top-ranked charity, AMF; GiveWell estimates (with substantial uncertainty) that AMF saves a child from dying of malaria for each ~$3,500 in donations. If Good Ventures fills the funding gap, then other GiveWell donors can no longer save a child's life for $3,500. On the other hand, if Good Ventures does not fully fund AMF, this keeps it true that $3,500 can save a child's life. This also results in more children dying of malaria.

Consequently, GiveWell recommends a "splitting" approach, in which Good Ventures commits to funding its "fair share" of GiveWell top charities' funding gap, as estimated at the beginning of each giving season. The decision rule proposed is a 50-50 split between Good Ventures and all other donors. But as Carl Shulman points out, if Good Ventures's share is estimated as a portion of top charities' room for more funding, then increased donations by other donors will still eventually be partially offset by decreased donations by Good Ventures. To the extent that small donors leave a funding gap in any year, this will contribute to the charity's measured room for more funding in the following year, thus increasing Good Ventures's next-year giving. On the other hand, if small donors give enough that the charity ends the year with a cash reserve, this will reduce the expected room for more funding, thus reducing Good Ventures's expenditure in the next year. Thus, splitting delays and attenuates but does not eliminate the "funging" effect, though the effect is smaller when Good Ventures's responsiveness to the charity's room for more funding is slow.
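Here is a minimal simulation of that dynamic, with made-up numbers and a deliberately crude model of how room for more funding might be re-estimated each year (my own toy model, not GiveWell's process). An extra $1M of small-donor giving in year one ends up reducing Good Ventures's total giving over the following years, but only by a fraction of that $1M.

```python
def total_gv_giving(extra_small_donor_giving, years=5, annual_need=10.0,
                    gv_share=0.5, baseline_small=6.0):
    """Total Good Ventures giving over `years` under a 50-50 splitting rule.

    All figures in $ millions; purely illustrative. Each year the charity's
    measured room for more funding is its annual need minus any cash it
    carried over from the previous year.
    """
    gv_total = 0.0
    carryover = 0.0
    for year in range(years):
        gap = max(annual_need - carryover, 0.0)   # measured room for more funding
        gv = gv_share * gap                       # Good Ventures's "fair share"
        small = baseline_small + (extra_small_donor_giving if year == 0 else 0.0)
        carryover = max(gv + small - annual_need, 0.0)  # surplus shrinks next year's gap
        gv_total += gv
    return gv_total

offset = total_gv_giving(0.0) - total_gv_giving(1.0)
print(offset)  # ~0.31: Good Ventures eventually gives less, but by well under $1M
```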

I am not so worried about funging making cost-effectiveness estimates less meaningful; for the reasons I gave above, I think that it's fine to crowd out other donors, and it may even be bad to try to avoid this. If Good Ventures is able to crowd out other donors, then this is evidence that other donors think they have other good uses for their charity budgets, including (but not necessarily limited to) other good giving opportunities. If donations from other sources continue to grow, crowding out Good Ventures, then this is evidence that other donors do not think they have good alternative uses for the money.

My main concern is with GiveWell's recommendation against Good Ventures fully funding its top charities. Absent some specific claim that the money is better used elsewhere, it is unclear to me how to justify this.

I am somewhat uncomfortable addressing this topic because it's so close to telling other people what to do with their money. I want to draw a distinction here between the obligatory and the supererogatory. I don't approve of guilt-based appeals for giving, and I don't endorse infinite debt. That the money might be better used elsewhere doesn't make it my business what you do with it, so long as it's your money. If you personally have something you'd rather use the money for, then as far as I'm concerned, you don't have to justify that. This is not a criticism of Good Ventures founder-donors Dustin Moskovitz and Cari Tuna personally; everything they're doing is extra, and they should do whatever they're personally motivated to do. I think it's great that this involves giving away a bunch of money in high-impact ways, and think that the grants made via the Open Philanthropy Project as well as to GiveWell top charities are likely to do a lot of good. I think that they are making a serious attempt at doing good, and that they have found exceptionally skilled and well-intentioned people to help them use their money effectively.

My concern is with what those exceptionally skilled and well-intentioned people are recommending in this case. If you recommend that other people, in order to best accomplish their charitable goals, hold onto their money instead of donating it, you have a responsibility to get that call right. And as someone who's given based on GiveWell recommendations in the past and recommended them to friends as a source of good information on how to give, I feel a responsibility to figure out whether this recommendation made sense. If we take "our best guess is that today's [opportunities] are better" literally, it implies that there is an improvement to be made by giving more now, until the marginal value of today's opportunities no longer exceeds that of the expected future ones. In other words, a naive view of the situation would suggest that GiveWell thinks its recommendation underperformed its opportunity cost, and therefore did net harm. Is there a less naive view under which GiveWell's recommendation is justified?

Sophisticated arguments for giving less

I can think of a few reasons why GiveWell might still have been right to recommend that Good Ventures only partially fund the GiveWell top charities. These arguments are not all mutually exclusive, but each one seems to stand on its own. In the spirit of making disjunctions explicit, I will treat each one separately, so I can assess the strengths, weaknesses, and implications of each claim without accidentally switching topics. The arguments are:

  1. Good Ventures can find better opportunities to do good than other GiveWell donors can, because it is willing to accept more unconventional recommendations from the Open Philanthropy Project.
  2. Even if Good Ventures isn't special, it should expect that some of its favorite giving opportunities will be ones that others can't recognize as good ideas, due to different judgment, expertise, and values. If the Open Philanthropy Project does not expect to be able to persuade other future donors, but would be able to persuade Good Ventures, then these opportunities will only be funded in the future if Good Ventures holds onto its money for long enough. Thus, while Good Ventures may currently have a lower opportunity cost than individual GiveWell donors, this will quickly change if it commits to fully funding the GiveWell top charities.
  3. The important part of GiveWell's and the Open Philanthropy Project's value proposition is not the programs they can fund with the giving they're currently influencing, but influencing much larger amounts of charitable action in the future. For this reason it would be bad to get small donors out of the habit of giving based on GiveWell recommendations.
  4. The amount of money Good Ventures will eventually disburse based on the Open Philanthropy Project's recommendations gives them access to potential grantees who would be interested in talking to one of the world's very largest foundations, but would not spend time on exploratory conversations with smaller potential donors who are not already passionate about their program area.
  5. GiveWell, the Open Philanthropy Project, and their grantees and top charities, cannot make independent decisions if they rely exclusively or almost exclusively on one major donor. They do not want Good Ventures to crowd out other donors, because it makes them more dependent on Good Ventures, which will reduce the integrity of their decisionmaking process, and therefore the quality of their recommendations.
  6. If no one else is willing to fund a program, then this is evidence that the program should not be funded. Crowding out other donors destroys this source of independent validation.
  7. If Good Ventures fully funds every high-value giving opportunity it finds, this could lead to other donors preemptively abandoning programs the Open Philanthropy Project is looking into, thus substantially reducing the amount of effective giving in the Open Philanthropy Project's perceived current and potential focus areas.

In the following posts, I will consider each of these arguments for limiting Good Ventures's funding of the GiveWell top charities separately, and try to articulate why someone might believe it and what else it would imply.

Disclosure: in the past I've worked with GiveWell and the Open Philanthropy Project. I have no current institutional affiliation and my opinions here are my own. I'll go into this in more detail in the final post.

References

[1] Dustin Moskovitz and Cari Tuna have publicly stated that they plan to give away the vast majority of their fortune, largely via Good Ventures. Forbes estimates this to be about ten billion dollars as of 2016.

Comments

  1. PDV

    4 is a subset of a stronger point: More money can afford to be better informed than less money. The costs of investigation are largely fixed costs per opportunity, which small donors would pay repeatedly and judge not worthwhile, but Good Ventures would happily pay and judge worthwhile.

    The major concern is: which will be better informed - $1 billion from Good Ventures, or $1 billion from many small donors? And the answer is pretty clear.

  2. Michael Vassar

    Pretty clear to me too, but to me, PDV is pretty clearly wrong. In the fully general case, PDV's argument is an assertion that central planning is better than markets, and should be assumed to be so, even in the presence of conflicts of interest.

    Anyway, the specific point will be addressed in full detail in a later post.

  3. Eli

    I think that I broadly disagree with this post, on a number of counts. [I'm noting that you might deal with these objections in later posts, and if so, feel free to just link me to those posts.]

    I think it does make sense to track (roughly) what money you're displacing with donations - not because of misalignment, but because of epistemic privilege.

    In the model that I'm implicitly working from, different people have different abilities to identify and assess opportunities. This might be because of having special epistemic skills, or domain specific knowledge, or better access to relevant people and institutions, or whatever.

    There are some charities that I think are worth funding, but that I also think many other people can identify as worth funding. And if it seems like those people will fill that charity's funding gap, it makes sense for me to hold back, so that I can allocate resources to other projects that I think are approximately as good, but that other people won't be able to identify.

    For instance, I endorse donating to MIRI, but I also think that these days there are a decent number of people who support MIRI. And many of those folks are software engineers by profession who don't have the time or the access to evaluate many smaller promising projects. But because of the nature of my day-to-day work, I DO have lots of opportunities to talk with people at length about smaller projects. Furthermore, in my own estimation, I have some "lenses" that let me distinguish between promising projects and similar-sounding projects that aren't actually very promising.

    It seems reasonable to me for me to save my funds for those smaller more illegible projects, and let other people fund MIRI. EVEN IF, overall, I think that MIRI is a better charitable investment than those projects.

    And I don't think I'm at "the top of the stack" in this respect. I think that there are opportunities that others can identify that I can't, and I hope that they let me fund the things that we can both see, and I'll leave them to fund the things that only they can see.

    (Indeed, I don't think that it is a well-ordered "stack" at all, since different people can have comparative advantage at assessing different opportunities, on the basis of their own personal epistemic expertise.)

    Overall, this seems akin to "trying to be corrigible". One of the most important skills for someone wanting to help with x-risk is calibration about when they should step up and try to take bold action, and when they should hold back, because, considering the other people on the field and their own level of skill, they're likely to make things worse instead of better.

    (As an analogy, if someone is having a severe mental breakdown near you, sometimes the right move is to begin doing impromptu therapy, and other times the right move is to do nothing more than offer to get the person a drink of water, or help them go for a walk. But knowing which is the right move means being calibrated about your own level of skill, and also about the skill of the other people who might intervene. [Noting of course that you can often talk with each other and coordinate. You don't always need to guess other people's level of skill.])

    Similarly, I think it makes sense for people to track which opportunities they can see compared to others. Where possible they should explicitly coordinate to do this, though that can be tricky in cases where one person has specific epistemic skills that allow them to assess opportunities that others can't see. That sort of coordination requires trust, which depends on high-bandwidth communication - possible with my closest collaborators, but not with most people attempting to help with x-risk.

    ......

    I'm less confident about how this analysis should apply to GiveWell and Good Ventures.

    I hear them saying something like this:

    > GiveWell is in a position of having special epistemic expertise at evaluating charities. Thus, we think it is good if more people follow our recommendations. If Good Ventures fully funded all of the GiveWell recommended charities, other funders would make much worse decisions with their charitable donations, because even if they are aligned with GiveWell, being as good as GiveWell at assessing charities is really hard.

    > We could just continue to make recommendations, and have Good Ventures fully fund every one, until we run out of money. And then from that point on, other funders could decide if they want to follow our recommendations or not. But in practice if we did that, many of those funders would get discouraged, because they were relying on GiveWell to help them identify excellent giving opportunities, and suddenly we would completely stop having those. Now they either have to give much less effectively, or do a lot more cognitive work of figuring out where to give.

    > And importantly, they would no longer have a reason to pay attention to GiveWell's recommendations anymore, because they won't be actionable. So in 30 years (or whenever), when Good Ventures runs out of money, there will be fewer people that will have been paying attention to our recommendations in the meantime, ready to pick up the slack.

    > In the world where Good Ventures fully funds all our recommendations, there's a sharp cut-off between a time when it is of no use for anyone other than Good Ventures to pay attention to GiveWell recommendations, and a time in which, suddenly, we really want other donors to heed those recommendations, because they're unusually effective.

    > With this policy of only funding up to 50%, we're making those two "eras" synchronous, instead of one after the other. We expect that this will result in more people paying attention to GiveWell's recommendations, and increasing the total amount of money that will go to charities assessed to be effective by GiveWell. In the long run, this will save more lives and do more good than the naive alternative.

    A crux here is that GiveWell enjoys an epistemic advantage over its donors who give to GiveWell-recommended charities. If that weren't the case (as it sounds like you're arguing in your post), this policy wouldn't make any sense. But on the face of it, that claim seems really likely to me. Most GiveWell-charity donors don't specialize in evaluating charities full time, and so I would expect that they do a much poorer job at that task on average.

    Basically, I think this...

    > you should assume that when your time or money crowds out others' time or money, the people you're considering crowding out don't just go home and sulk - they go work on something else. And since their favorite option is similar to yours, this is evidence that they are looking for similar things - so their next-best option, their opportunity cost, is similar to yours.

    ...doesn't hold up. Because there is a wide variance in people's ability to assess good opportunities.

    (Also, I do think that there ARE people who basically do "go home and sulk", because they were glad to make the trade of X thousand dollars a year for an unusually good ROI, without needing to think about it much themselves, but are not excited about the trade of spending X thousand dollars AND many of their own hours to try and figure out where to give, and ultimately getting much worse ROI than they would have gotten from a GiveWell-recommended charity.)

    1. Benquo Post author

      I decomposed epistemic privilege into Argument 1 and Argument 2, and it seems like you're making Argument 2.

    2. Benquo Post author

      > Also, I do think that there ARE people who basically do "go home and sulk", because they were glad to make the trade of X thousand dollars a year for an unusually good ROI, without needing to think about it much themselves, but are not excited about the trade of spending X thousand dollars AND many of their own hours to try and figure out where to give, and ultimately getting much worse ROI than they would have gotten from a GiveWell-recommended charity.

      I think what you are saying here is that many GiveWell donors would not voluntarily donate at the true marginal ROI, so GiveWell has an interest in misleading them since it doesn't care about their interests. If that's not what you meant, you could try to clarify.

