This is part of a series of blog posts examining seven arguments I laid out for limiting Good Ventures funding to the GiveWell top charities. My prior post considered the second argument, that even assuming symmetry between Good Ventures and other GiveWell donors, Good Ventures should not fund more than its fair share of the top charities, because it has a legitimate interest in preserving its bargaining power. In this post, I consider the third through fifth arguments:
Argument 3: The important part of GiveWell's and the Open Philanthropy Project's value proposition is not the programs they can fund with the giving they're currently influencing, but influencing much larger amounts of charitable action in the future. For this reason it would be bad to get small donors out of the habit of giving based on GiveWell recommendations.
Argument 4: The amount of money Good Ventures will eventually disburse based on the Open Philanthropy Project's recommendations gives them access to potential grantees who would be interested in talking to one of the world's very largest foundations, but would not spend time on exploratory conversations with smaller potential donors who are not already passionate about their program area.
Argument 5: GiveWell, the Open Philanthropy Project, and their grantees and top charities, cannot make independent decisions if they rely exclusively or almost exclusively on one major donor. They do not want Good Ventures to crowd out other donors, because it makes them more dependent on Good Ventures, which will reduce the integrity of their decisionmaking process, and therefore the quality of their recommendations.
If you already think that GiveWell is doing good, Argument 3 should make you more excited about it - and implies that Good Ventures should be looking for ways to give away money faster in order to build a clear track record of success sooner.
Argument 4 seems plausible at some margin - if Good Ventures gives away most of its money quickly, then it will become a small foundation and have access to fewer potential grantees. But it would be a surprising coincidence if the amount of money Good Ventures will eventually give away were very close to this access threshold. If giving money away freely now will make it difficult to change behavior later once remaining funds are close to the access threshold, this is an argument for communicating this intention in advance, which may require Good Ventures and its donors to make their funding commitments more explicit.
I deal with two components of Argument 5 separately. First, GiveWell's top charities may become less effective if dependent on a single primary donor. Second, GiveWell and the Open Philanthropy Project have a legitimate interest in preserving their own independence.
The top charities independence consideration seems unlikely to uniformly apply to all the GiveWell top charities; each has a different funding situation and donor base, so this seems like a situation worth assessing on a case-by-case basis, not with a blanket 50-50 donation split between Good Ventures and everyone else.
To the extent that Good Ventures becoming the dominant GiveWell donor threatens GiveWell's institutional independence, this problem seems built into the current institutional structure of GiveWell and the Open Philanthropy Project, in ways that aren't materially resolved by Good Ventures only partially funding the top charities.
Argument 3: Influence
GiveWell is able to influence tens of millions of dollars per year in donations aside from Good Ventures - and was able to get the attention of Good Ventures in the first place - because of its track record of careful recommendations to donors. Continuing this track record is likely to lead to more people being influenced in the future. The direct impact of donations influenced by GiveWell and the Open Philanthropy Project now might be much smaller than the potential impact of influencing the donation of many times more money in the future. For this reason, anything in danger of "crowding out" GiveWell donors could be net harmful, because it would cause donors to stop paying attention, derailing the process of building GiveWell's reputation.
One problem with this argument is that "influence" is not a homogeneous substance. GiveWell's reputation is based on honesty and integrity, trying to recommend the best charities it can with its specified methods, and being open about changes in its methods. This is a large part of why people trust GiveWell to tell them where to give. If instead GiveWell started making recommendations, not on the basis of where money does the most good, but on the basis of which recommendation has the best effect on the world, this would erode the information value of their recommendations, and the reason for people to trust them.
To influence people in a particular direction, one must begin as one would go on. If GiveWell follows an influence-maximizing strategy in the short run, then its audience will be the people who like or are most strongly affected by influence-maximizing strategies, and if they follow its example, they will behave likewise. There does not seem to be a shortage of charity donors giving based on what they think looks good to others. GiveWell's current stated strategy of making honest recommendations and not trying to game the system seems like a much better way of promoting a culture of trying to do one's honest best in giving.
The influence argument also suggests that Good Ventures ought to be trying to spend money faster, not slower, in order to build up a track record of demonstrable achievements. For an impact-maximizing organization, trying to increase influence by increasing the extent to which you receive donations is backwards. Maximizing impact means maximizing the ratio of benefits to costs, not the other way around. When credibility comes from evidence of impact, the way to gain influence should be to create, and then demonstrate, impact. This is hard in areas like AI risk, but comparatively easy in things like reducing the global burden of disease, where public health organizations like the WHO are already monitoring outcomes such as disease rates.
Right now, the GiveWell top charities are still fairly small relative to Good Ventures. The maximum estimated funding gap in 2015 was a couple hundred million dollars.1 For a ten billion dollar foundation,2 that's just a couple percent of the total. Under usual financial assumptions, that's less than the expected annual return on investment; Good Ventures would have to spend more than that just to keep the fortune from growing. It's also less than the legal minimum Good Ventures would be required to spend annually, if all the money were already in the foundation.
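For concreteness, here is that arithmetic as a small Python sketch. The fortune size and funding gap are the rough figures cited above; the 5% investment return and the roughly 5% US private-foundation minimum payout are assumptions for illustration, not precise values.

```python
# Back-of-the-envelope check of the claims above; all figures are rough.
fortune = 10e9        # ~$10 billion Good Ventures-scale fortune (footnote 2)
funding_gap = 200e6   # "a couple hundred million dollars" maximum 2015 gap

print(f"Gap as a share of the fortune: {funding_gap / fortune:.0%}")  # ~2%

assumed_return = 0.05  # assumed annual investment return, for illustration
print(f"Assumed annual return: ${fortune * assumed_return / 1e9:.1f} billion")  # ~$0.5 billion

min_payout_rate = 0.05  # US private foundations must pay out roughly 5% of assets
print(f"Approximate legal minimum payout: ${fortune * min_payout_rate / 1e9:.1f} billion")
```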
Substantially more than the present level of funding should be sustainable indefinitely. So the influence strategy mainly matters if there are going to be larger giving opportunities in the future. Either the GiveWell top charities can scale at current cost-effectiveness levels, or they can't.
If they can't scale, then committing to fully fund anything competitive with the other current top charities is easily sustainable for Good Ventures for many years - at current rates of spending, they're not even touching the principal of the Good Ventures fortune, just spending some of the interest.
On the other hand, if they can scale, then it should be possible to very quickly make a huge, obvious, measurable difference in the well-being of the global poor. This seems like the sort of demonstration project that would provide huge amounts of credibility to GiveWell and the Open Philanthropy Project, and motivate other large donors to chip in.
Case 1: Top charities can't scale.
It's plausible to believe that the GiveWell top charities can't scale up at current cost-effectiveness numbers. GiveWell has often struggled to find underfunded scalable interventions, and this seems likely to persist. If the GiveWell top charities can't scale up at current levels of cost-effectiveness, then it's not clear why GiveWell wants to influence a larger pool of donors with its recommendations.
The one plausible exception to the difficulty finding charities that are expected to scale well is GiveDirectly. GiveDirectly is evaluated as much less cost-effective than GiveWell's other top charities, equivalent to a cost per life saved of about $50,000. GiveDirectly is plausibly much more valuable than that as a test of the hypothesis that cash transfers work as well as some simple economic intuitions would suggest, but this does not imply that a GiveDirectly ten times its current size will be anywhere near ten times as good for the world. However, it's conceivable that a successful GiveWell could influence large numbers of people to give to GiveDirectly. It's imaginable that the end result of global poverty oriented effective altruism is to move about five billion dollars annually - about what the United Way moves now - to the null charitable strategy of just giving the world's poorest some money. However, I am not optimistic about the prospects for this; in practice, the appeal of effective altruism seems to depend on a narrative in which ordinary people can save multiple lives each year simply by living frugally and giving the savings to charity.
If this is the end state, then it's crucially important to experimentally test things like the macroeconomic objection to cash transfers (that they'll just reallocate wealth and cause local inflation, rather than providing more real resources to the poor on net), before GiveWell invests much more in promoting them.
Case 2: Top charities can scale.
On the other hand, maybe the interventions can scale. This implies that dramatically successful demonstration projects are possible; world-historically successful projects on the order of the eradication of smallpox or the near-eradication of polio. If this is true, it's puzzling that such projects are not being financed or attempted, by GiveWell or anyone else - or if they have been attempted, that they are not succeeding.
There's an intuition among some global-poverty focused effective altruists that giving to effective charities is net good, but that it's also like throwing resources into a bottomless pit of suffering. You could argue that the global burden of disease is so great that Good Ventures could throw a large share of its resources into the problem at current cost-effectiveness levels without making a dent. But - if GiveWell's cost per life saved estimates are anywhere close - you'd be wrong. GBD 2015 estimates that communicable, maternal, neonatal, and nutritional deaths worldwide amounted to about 10 million in 2015, and they are declining at a rate of about 20% per decade. If top charities could scale up to solve that whole problem at current cost-effectiveness levels, then at an assumed cost of $5,000 per life saved, the whole thing would cost $50 billion. That's more than Good Ventures has on hand - but it's not an order of magnitude more. It's not more than Good Ventures and its donors, the Gates Foundation ($40 billion), and Warren Buffett's planned gifts to the Gates Foundation add up to - and all of those parties seem to be interested in this program area.
[UPDATE: I'm assuming that if you eliminate an infectious disease in a discrete contiguous area for a year, it doesn't come back. This is why I'm using annual and not cumulative deaths.]
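Here is the upper-bound arithmetic as a sketch, using the rough figures above; the $5,000 per life saved number is an assumed GiveWell-style estimate, not a precise cost.

```python
# Rough upper bound: cost to avert all communicable, maternal, neonatal, and
# nutritional deaths for a year at an assumed cost per life saved.
annual_deaths = 10_000_000   # GBD 2015 rough estimate
cost_per_life = 5_000        # assumed cost per life saved

total_cost = annual_deaths * cost_per_life
print(f"${total_cost / 1e9:.0f} billion")        # $50 billion

good_ventures = 10e9
gates_foundation = 40e9
print(total_cost / good_ventures)                # ~5x Good Ventures alone
print(total_cost <= good_ventures + gates_foundation)  # True, even before counting Buffett's planned gifts
```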
That's an extreme upper bound. It's not limited to the developing world, or to especially tractable problems. You almost certainly can't scale up that high at current costs - after all, the GiveWell top charities are supposed to be the ones pursuing the most important low-hanging fruit, tractable interventions for important but straightforward problems. But then, how high can you scale up at similar cost-effectiveness numbers? Can you do a single disease? For one continent? One region? One country? Now we're getting to magnitudes that may fall well within Good Ventures's ability to fund the whole thing. (Starting with a small area where you can show clear gains is not a new idea - it's the intuition behind Jeffrey Sachs's Millennium Villages.) And remember that once you wipe out a communicable disease, it's much cheaper to keep it away; when's the last time people were getting smallpox? Similarly, nutritional interventions such as food fortification tend to be permanent. There's a one-time cost, and then it's standard practice.
GBD 2015 estimates that there are only about 850,000 deaths due to neglected tropical diseases each year, worldwide. At $5,000 per life saved, that's about $4.2 billion to wipe out the whole category. Even less if you focus on one continent, or one region, or one country. To name one example, Haiti is a poor island nation with 0.1% of the world's population; can we wipe out neglected tropical diseases for $4.2 million there? $40 million?
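The same arithmetic for neglected tropical diseases, again treating the $5,000 per life saved figure as an assumption; the Haiti number just naively scales the global total by population share.

```python
# Neglected tropical diseases: same assumed cost per life saved as above.
ntd_deaths = 850_000   # GBD 2015 rough estimate of annual NTD deaths
cost_per_life = 5_000  # assumed

whole_category = ntd_deaths * cost_per_life
print(f"${whole_category / 1e9:.2f} billion")    # ~$4.25 billion worldwide

haiti_population_share = 0.001  # Haiti has roughly 0.1% of the world's population
print(f"${whole_category * haiti_population_share / 1e6:.2f} million")  # ~$4.25 million
```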
It seems improbable to me that GiveWell would have a harder time persuading people to give on its recommendation in the future, if everyone agreed that the money it moved had wiped out neglected tropical diseases in a whole region or even a whole country. If GiveWell designed an experiment like that to get clear, demonstrable, unambiguous results, with predictions made in advance, publicly, then that would be a great example of what money can do to improve outcomes. It would be easy to verify, and extremely motivating to donors looking for a proven organization to fund. The pitch would be simple: "this thing we already did, but everywhere."
You might be skeptical that this can be done. For instance, you might wonder, if it were possible, why the Gates foundation or many other charities focused on the developing world haven't already done it. But to the extent that you're skeptical of that, you're saying that the top charities can't scale to anything near the level that would quickly use up the Good Ventures fortune.
Argument 4: Access
The Open Philanthropy Project's commitment to exploring many potential cause areas before embracing one constitutes an unusual challenge; potential grantees may understandably be reluctant to spend time talking with a dilettantish funder unlikely to end up funding their program area. This is likely offset by its ability to truthfully claim that it will eventually move money consistent with being one of the ten largest charitable foundations in the US, and one of the twenty largest worldwide. A commitment by Good Ventures to fully funding GiveWell top charities could quickly spend this money down to a level where it can no longer support the level of access that the Open Philanthropy Project currently enjoys.
This argument implies that there are substantial advantages to smaller donors creating a donor-advised fund lottery, or simply playing the lottery:
[D]onors giving large amounts may be able to achieve more per dollar with their donations, enjoying economies of scale. When this is true, small donors may be able to do more good by exchanging a donation for a lottery with a 1/n chance of delivering a donation n times as large. In practice, transaction costs and taxation mean the donation will be smaller, a cost which must be compared against scale economies. However, the use of randomization, casino gambling, derivatives, and other institutions can limit lottery costs to a modest factor, lowest when investments are used. So small donors who believe strong scale economies exist can take advantage of them.
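For concreteness, here is a minimal simulation of the expected-value-neutral lottery described in the passage above; the donation size and number of participants are hypothetical, chosen only for illustration.

```python
import random

def donor_lottery(donation: float, n: int) -> float:
    """Each of n donors contributes `donation`; one donor, chosen uniformly at
    random, directs the whole pot. Returns the amount this donor directs."""
    pot = donation * n
    return pot if random.randrange(n) == 0 else 0.0

# Averaged over many trials, the amount a donor directs converges to their
# original donation, so the lottery is expected-value neutral (ignoring the
# transaction costs and taxes the quoted passage mentions).
trials = 100_000
donation, n = 5_000.0, 100
average_directed = sum(donor_lottery(donation, n) for _ in range(trials)) / trials
print(round(average_directed))  # roughly 5000
```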
I am not sure whether this is a good thing to do, but I notice in practice that nearly no one is doing it, suggesting that most effective altruists do not think that it is advisable. If you would advise an effective altruist against playing an expected value neutral lottery, then the reversal test suggests that you think that there are diminishing returns to scale, and that the larger pools of money should be diffused among more people with similar values. (One way to do this would be for Good Ventures to commit to fully funding the GiveWell top charities, effectively freeing up smaller donor money for other uses.)
The reversal test does not strictly demonstrate either that all money should be maximally concentrated or that all concentrations of money should be fully diffused. For instance, there might be important threshold effects, and the optimal distribution of donor size might lie somewhere in between a uniform distribution (perfect diffusion) and perfect concentration. But if there's a threshold, then the argument for concentration only applies once Good Ventures has spent down to this threshold.
If Good Ventures is restricting its current rate of spending in order to reserve a fund of a certain size for ordinary Open Philanthropy Project spending, then it could solve this problem by disambiguating. It could explicitly commit that amount to ordinary Open Philanthropy Project grants, and spend down the remainder in a way unaffected by access considerations. If this would lead to a precipitous decline in giving once the remaining funds are spent, some combination of tapering off when the cliff approaches and clear advance communication could alleviate that problem.
Finally, the access consideration suggests that the Open Philanthropy Project should strongly consider lowering its funding threshold. Grantees don't just want to touch money; they want grants to do work. They care about whether they might get funded this year or next much more than whether they might get funds twenty years from now.
Argument 5: Independence
GiveWell has previously discussed the problem with relying on one funder:
Good Ventures is a major foundation, and it is interested enough in our work on strategic cause selection – for its own purposes in choosing causes – that it would potentially (if it were the only way this work could be done) be willing to commit significant funding to it.
At the same time, both we and Good Ventures agree that it would be a bad idea for GiveWell to draw all – or too great a proportion – of its support from Good Ventures.
One reason for this is that it would put GiveWell in an overly precarious position. While our interests are currently aligned, it is important to both parties that we would be able to go our separate ways in the case of a strong enough disagreement. If Good Ventures provided too high a proportion of support to GiveWell, the consequences of a split could become enormous for us, because we wouldn’t have a realistic way of dealing with losing Good Ventures’s support without significant disruption and downsizing. That would, in turn, put us in a position such that it would be very difficult to maintain our independence.
Another reason is that raising substantial support from individuals keeps us accountable to individuals, both in terms of perception and reality. If we did not raise a substantial part of our support from individuals, our incentives would not be aligned with our mission of serving large numbers of donors. We have hopes of serving many more individuals and institutions (such as Good Ventures) in the future; drawing too much of our budget from Good Ventures could make this more difficult by leading to a perception that serving Good Ventures is our main mission.
Both GiveWell and its top charities have a legitimate interest in not relying excessively on any one donor. Many charities have policies limiting the amount of money they are willing to spend in any one year from any one donor, and it's plausible that GiveWell is implicitly imposing such a limit on behalf of its top charities, as well as attempting to protect its own independence by preventing Good Ventures from displacing other sources of support.
The arguments for protecting top charities' independence and GiveWell's own independence are different enough to justify separate discussions.
Top Charity independence
GiveWell is already in the position of influencing a very large share of the donations received by its top charities.3 This is already enough of a problem that GiveWell likely faces difficulties communicating openly with charities; when GiveWell staff ask questions, or are open about their evaluations of charities' practices, this can amount to accidentally pressuring these charities to accommodate GiveWell's preferences. This pressure could potentially override the best judgment of the people who actually work at the charities GiveWell is evaluating.
This problem may in part be mitigated by the indirect and time-delayed nature of GiveWell's influence. In 2013, when AMF had been removed from the list of GiveWell top charities due to concerns about its room for more funding, it still received millions from GiveWell-influenced donors, suggesting that GiveWell donors are applying some amount of independent judgment. If this money all or mostly came from a single donor, it could exacerbate this problem, or lead top charities to hold some of the money in reserve.
If GiveWell is concerned about this effect, it could ask top charities how much money they would be willing to spend in a year from a single donor. Good Ventures could also negotiate a taper-down schedule for withdrawing funding, to reduce the potential costs of withdrawing a major source of funds.
GiveWell independence
If GiveWell is trying to protect its own independence, its close connection with the Open Philanthropy Project is the main problem, because it conflicts with GiveWell's central mission.
An important problem with recommending charitable actions is that not every consideration you find relevant will persuade others. One way around this is to share information and arguments that both you and the other party consider relevant. But evidence is hard to evaluate without one's own underlying model of the domain where one is trying to reach a conclusion, and building such a model takes a lot of time. Even if you trust a salesperson not to outright lie, you may not know which questions to ask to elicit information that would favor a competitor's product.
For this reason, people often look for advisors to recommend decisions directly. For instance, The Wirecutter tries to recommend the best gadget in each category (e.g. computer, sound bar), and The Sweethome does the same for home goods. GiveWell and the Open Philanthropy Project are also advisors, on how to do the most good through giving money, serving different types of donor. But the usefulness of advice depends crucially on trust in not only the competence of the advisor, but the degree to which their motivations are well-aligned with one's own.
There are different qualitative levels of trust. I can trust a lot of parties to deliver a standardized good, in the quantity they say they will, at the time they say they will. If Amazon frequently just neglected to deliver a product, or depositing money at the bank often didn't lead to increased ability to withdraw money later, or an eight-cookie box bought at the corner bodega turned out to have only seven cookies, I'd expect to find out about it pretty quickly. But I wouldn't necessarily trust any of these institutions to recommend me a book, an investment, or a recipe that meets my needs.
An example of a rater operating at this standardized, commodity level of trust in consumer goods is Consumer Reports, which is famous for not accepting advertising in order to avoid conflicts of interest, and for extensively documented objective testing methodology. However, most people I talk to who have looked at its ratings find them difficult to evaluate, and find that the ratings often overweight factors that are not relevant to their needs. For instance, I was recently talking with someone who pointed out that its washer-dryer recommendations weight energy efficiency heavily relative to things that consumers care about more, like ease of use and effectiveness, so that its top-rated products score much less well on customer reviews than products with much lower ratings. (By contrast, The Wirecutter and The Sweethome think and write extensively about what they think the most important product attributes are for consumers in each category, a less "objective" but more value-relevant method.)
Charity Navigator is a standardized charity rater, operating at the commodity level of trust. It attempts to rate every charity on objective, easy-to-verify metrics, such as the share of a charity's budget allocated to administrative overhead. However, while commodity-quality trust is adequate if I know I want the commodity in question, commodity charity ratings do not seem likely to steer me towards the giving opportunities with the highest impact.
At the other extreme, I can hire an expert in the field to learn my particular preferences and circumstances, and recommend the action that best satisfies them. General-purpose fee-based financial planners (but not salespeople for a particular investment company or product) fall into this category, as do personal shoppers, interior decorators, and medical doctors. However, getting this right on high-stakes issues requires a very high level of both integrity and mutual comprehension.
One friend tells a story of a physician recommending a test in response to a persistent pain. When my friend asked how this would affect treatment, the physician answered that the treatment would be the same either way, and they were just ordering it to reassure my friend that something was being done about the problem. Of course my friend declined the test. But incentive misalignment doesn't have to come from an underlying conflict of interest; differing perceptions of what is relevant can be enough. I recently developed a bad case of airplane ear on a flight, and had another flight scheduled in a few days. I wanted to find out whether, if the condition didn't clear up by then, it was safe to fly. I saw two physicians about this, and it was only near the end of the second doctor's visit that I managed to get an answer to the question I most cared about: will I do long-term damage to myself if I fly while my head still hurts? This wasn't because they didn't know the answer (the second one did), or had some incentive to deny me the information. They just assumed that I was there to manage the short-run pain, when I mainly cared about permanent damage.
Commodity trust scales well, but often answers irrelevant questions. On the other hand, personal trust works very well when it works at all, but it is expensive to find someone trustworthy at this level, and the solution does not scale well. Is there a compromise that scales well like commodity trust, but captures some of the value-alignment benefits of personal trust?
Managers frequently face a related problem. Excellent personal judgment does not scale up well - an important part of why managers are necessary is that the people reporting to them are not necessarily as good at taking into account strategic considerations. For this reason, they often tend to abstract strategic considerations into simplified principles people can act on. Facebook's famous motto "Move Fast and Break Things" is an example of this, and may have successfully exhorted employees to test more hypotheses more quickly instead of obsessing over flawless execution from the beginning. Now that Facebook's business conditions have changed (since it's successfully positioned itself as the dominant social network), upper management has changed its motto, in order to adjust employees' behavior accordingly. Former Southwest Airlines CEO Herb Kelleher famously described how he applied this sort of principle to reduce the complexity of executive decisions:
I can teach you the secret to running this airline in 30 seconds. This is it: We are THE low-fare airline. Once you understand that fact, you can make any decision about this company’s future as well as I can.
Tracey, from marketing, comes into your office. She says her surveys indicate that the passengers might enjoy a light entrée on the Houston to Las Vegas flight. All we offer is peanuts, and she thinks a nice chicken Caesar salad would be popular. ‘What do you say?’
You say ‘Tracey, will adding the chicken Caesar salad make us THE low-fare airline from Houston to Las Vegas? Because if it doesn’t help us become the unchallenged low-fare airline, we’re not serving any damn chicken salad’.
Humans have a tendency to substitute easier questions for hard ones. One way of thinking about management by principle is that it allows this process to be steered by conscious deliberation, providing appropriate simplified substitutes that preserve much of the original question's value.
Katja Grace provides a simplified example of how this might apply to making charity recommendations where incentives may diverge:
Suppose you are in the business of making charity recommendations to others. You have found two good charities which you might recommend: 1) Help Ugly Children, and 2) Help Cute Children. It turns out ugly children are twice as easy to help, so 1) is the more effective place to send your money.
You are about to recommend HUC when it occurs to you that if you ask other people to help ugly children, some large fraction will probably ignore your advice, conclude that this effectiveness road leads to madness, and continue to support 3) Entertain Affluent Adults, which you believe is much less effective than HUC or HCC. On the other hand, if you recommend Help Cute Children, you think everyone will take it up with passion, and much more good will be done directly as a result.
[...]
If you claim to be recommending ‘something good’, or ‘something better than EAA’ or anything that is actually consistent with recommending HCC, then probably you should recommend HCC. (This ignores some potential for benefit from increasing the salience of effective giving to others by recommending especially effective things).
If you claim to be recommending the most effective charity you can find, then recommending HCC is dishonest. I claim one shouldn’t be dishonest, but people do have different views on this. Setting aside any complicated moral, game theoretic and decision theoretic issues, dishonesty about recommendations seems likely to undermine trust in the recommender in the medium run, and so ultimately lead to the recommender having less impact.
You could honestly recommend HCC if you explicitly said that you are recommending the thing that is most effective to recommend (rather than most effective to do). However this puts you at odds with your listeners. If you have listeners who want to be effective, and have a choice between listening to you and listening to someone who is actually telling them how to be effective, they should listen to that other person.
[...]
I think a reasonable approximation of this might be to choose the set of values and epistemological situation you want to cater to based on which choice will do the most good, and then honestly cater to those values and epistemological situation, and say you are. If your listeners won’t donate to HUC because they value feeling good about their donations, and they don’t feel good about helping ugly children, and you still want to cater to that audience, then explicitly add a term for feeling good about donations, say you are doing that, and give them a recommendation that truly matches their values.
[...]
In sum, I think it is dishonest to advertise HCC as the most effective charity, and one shouldn’t do it. Even if you don’t have a principled stance against dishonesty, it seems unsustainable as an advice strategy. However you might be able to honestly advertise HCC as the best charity on a modified effectiveness measure that better matches what your audience wants, and something like that seems promising to me.
The original, complex problem is to find the best giving opportunity. The generalization of Katja's proposed strategy is to find a well-defined subset of the original problem, with the following desirable properties:
- Simplicity - The constrained problem is simpler than the general problem, so that it is easier for both you and members of your audience to verify that you are honestly optimizing within the stated constraints.
- External alignment - The constrained problem represents the genuine preferences of some substantial market segment, such that they will be correctly persuaded to change their behavior based on your recommendations.
- Internal alignment - You believe that it greatly improves outcomes for people in your target market to do better on the constrained problem, so that you feel good about honestly optimizing within this category.
- Noninterference - If people who do not share the constraints of your target market would otherwise make better decisions than the ones you recommend, you are able to communicate the parameters of the constrained problem clearly enough to avoid causing many such people to wrongly adopt your recommendation.
GiveWell seems like a clear attempt to simplify the problem of finding the best charities in such a way:
- The problem of finding the highest-impact use of money is simplified to the constrained problem of finding the best charities with strong empirical evidence that their intervention works and is cost-effective. This allows GiveWell to economize on effort by, e.g., producing an intervention report that successively answers the questions of whether the program works, whether it is cost-effective, and whether there is work still to be done, and only prioritizing the investigation of individual charities within a program area if the intervention appears to be successful.
- GiveWell's constrained problem is externally aligned - many donors have explicitly expressed unwillingness to give to high-expected value but very high uncertainty programs, especially ones where the arguments for their benefits are difficult to empirically verify. The recent Atlantic profile of effective altruism expresses this preference profile:
"I felt drawn to two personal values for my donation: I wanted to prevent premature deaths, and I wanted a high degree of scientific certainty that the money would be spent well.
The most common refrain from experts I consulted was that my priorities pointed in a clear direction: If what you want is to save lives with certainty, several people said, you have to go to GiveWell."
More broadly, the fact that GiveWell is very open about its methodology, and that people give based on its recommendations, is evidence of external alignment.
- GiveWell's constrained problem seems like a good fit for internal alignment. It is the result of GiveWell's founders doing research on charities that they personally thought was valuable, adjusting their methodology based not only on market response but on their own sense of which criteria seemed most informative.
- GiveWell has expended considerable effort trying to limit interference through being open about their methodology and results, but judging purely by outcomes, there are problems with noninterference - effective altruists seem to have some desire for authoritative charity recommendations, and project their own sense of what ought to be the best charity onto GiveWell's actual recommendations, even if GiveWell's own publicly available charity reports show plainly that the recommended charities do not live up to this ideal.
On balance it seems to me that GiveWell has a good thing going. But if a single dominant donor provides the bulk of support for GiveWell and its top charities, this fundamentally changes GiveWell's environment in a way that can lead to severe incentive misalignment. This can happen in two ways.
First, if most of GiveWell's money moved is controlled by a handful of people, it becomes a more appealing value proposition to learn more about their individual preferences, and use this information to become more persuasive. If Good Ventures becomes GiveWell's dominant donor, then any recommendation Good Ventures will not be persuaded by becomes much less useful. There will be a constant temptation to expand scope, when considerations outside the original scope feel relevant to both GiveWell staff and Good Ventures. This turns the constrained problem back into the unconstrained one, and the problem again becomes one of personal trust. Knowing that most of their impact comes from their ability to influence Good Ventures president Cari Tuna,4 GiveWell staff would have to work very hard not to have their epistemic processes be strongly conditioned by her involuntary communication, even if (as I believe is the case) both sides start with a genuine desire to do the most good they can.
Second, GiveWell has generally avoided advising donors how much to give, though co-founder Holden Karnofsky has publicly expressed skepticism of the moral obligation to give all one's spare money to effective charities. GiveWell has even avoided explicitly advising donors whether to give to GiveWell charities or not, at most adopting an even-handed stance listing considerations on both sides. But once GiveWell has a single dominant donor with unified finances, it becomes tempting to take this information into account when making recommendations.
Advising anyone how much to give, under the GiveWell brand, based on anything other than GiveWell's own judgment about the charities themselves, is entirely outside the scope of the constrained problem around which GiveWell has built its brand and accumulated trust. Perhaps it was just a messaging error for GiveWell to make such a recommendation to Good Ventures under the GiveWell brand, instead of the Open Philanthropy Project brand. But this was a hard situation to avoid. GiveWell's model makes sense if donors are diffuse. Advising a foundation holding onto massive amounts of money creates an equally massive conflict of interest, and there's really no way to resolve this so long as GiveWell, the Open Philanthropy Project, and Good Ventures share an office:
The Open Philanthropy Project is a collaboration between Good Ventures and GiveWell in which we identify outstanding giving opportunities, make grants, follow the results, and publish our findings. [...]
The people working to identify giving opportunities for the Open Philanthropy Project are primarily GiveWell staff and Cari Tuna, President of Good Ventures (with input from expert advisors). [...]
Good Ventures and GiveWell share office space. Information is shared relatively freely between the two organizations, so it’s rarely necessary to communicate with us separately about a giving opportunity. Despite our close coordination on research and grantmaking, Good Ventures and GiveWell are separate entities with separate financial and human resources and separate governing bodies. We are in the process of creating an independent organization to house the Open Philanthropy Project’s operations.
If GiveWell's mission as an organization is still relevant, then recommendations about how much a foundation it advises should give to its top charities ought to be disclosed as a potential conflict of interest, but it is not obvious to me that GiveWell needs to own this decision beyond the mere fact that some GiveWell staff were involved in the recommendation. If the Open Philanthropy Project separately has a strategic incentive to explain its reasoning, that's fine, but it's a poor fit for GiveWell. Under this argument for the recommendation, separating GiveWell and the Open Philanthropy Project seems extremely important.
Conclusions from Arguments 3-5
Overall, these arguments favor more explicit commitment or conditionality of funds, and distinctions between institutions. GiveWell's current status as part of an organization serving Good Ventures's institutional interest makes it difficult for GiveWell to pursue its own distinct mission. To continue to lead by example as a charity evaluator, GiveWell may need to make a clean institutional break from Good Ventures.
If it's important to preserve a minimum foundation endowment to enable access, this similarly weighs in favor of making more explicit distinctions between various institutions and allocations of funds. The Open Philanthropy Project seems substantially constrained here by the ambiguity of the commitment by Good Ventures and its donors to give through the Project or on the basis of its recommendations. It is also difficult to distinguish the Open Philanthropy Project's interest in preserving its and Good Ventures's perceived value to potential grant recipients from its institutional interest in increasing the share of Good Ventures's eventual giving under its influence. Good Ventures, not the Open Philanthropy Project or GiveWell, seems like the natural organization to decide, based on Good Ventures's institutional interests, how much to give to the GiveWell top charities.
The other viable option is for GiveWell to give up for now on most public-facing recommendations and become a fully-funded branch of Good Ventures, to demonstrate to the world what GiveWell-style methods can do when applied to a problem where it is comparatively easy to verify results.
These arguments also favor faster giving - and explicitly allocating funds to GiveWell top charities or Open Philanthropy Project grantees sooner rather than later - in order to build a track record sooner.
References
↑1 GiveWell's 2015 top charities post.
↑2 Dustin Moskovitz and Cari Tuna have publicly stated that they plan to give away the vast majority of their fortune, largely via Good Ventures. Forbes estimates this to be about ten billion dollars as of 2016.
↑3 Total money donated, by charity (as of May 2016).
↑4 Cari Tuna is the president of Good Ventures and runs it day to day; co-founder Dustin Moskovitz seems to be focused on his own company, Asana.