This is the last of a series of blog posts examining seven arguments I laid out for limiting Good Ventures funding to the GiveWell top charities. In this post, I articulate what it might look like to apply the principles I've proposed. I then discuss my prior relationship with and personal feelings about GiveWell and the Open Philanthropy Project.
A lot of arguments about effective altruism read to me like nitpicking without specific action recommendations, and give me the impression of criticism for criticism's sake. To avoid this, I've tried to outline here what it might look like to act on the considerations laid out in this series of posts in a principled way. I haven't constructed the arguments in order to favor, or even generate, the recommendations; to the contrary, I had to rewrite this section after working through the arguments.
Any real attempt to learn from this analysis will probably look somewhat different, due to facts I'm currently unaware of. Some of these recommendations may have already been in the process of implementation - if so, I'd be very interested in reading about them!
Also, it's important to acknowledge that I am recommending that people who are not me perform work, which of course trades off against other work of substantive value, so it may make sense not to make progress on all these points immediately.
- 1 Recommendations to GiveWell and related organizations
- 1.1 Conflicts of interest
- 1.2 Consistent standards
- 1.3 Cooperative influence
- 2 Recommendations to other Effective Altruists
- 3 Disclosures and a note on interpretation
- The institutional relationships between GiveWell, the Open Philanthropy Project, and Good Ventures generate substantial conflicts of interest. To resolve this:
- GiveWell should reevaluate the GiveWell project. To the extent that GiveWell continues as an independent organization, it should make recommendations based on its values and preferences, and not try to take Good Ventures's perspective.
- Dustin Moskovitz and Cari Tuna should make the conditionality of their relationship to the Open Philanthropy Project more explicit, taking into account the costs of ambiguity.
- The Open Philanthropy Project should clarify whether it is working as an advisor to Good Ventures or an independent evaluator of grant opportunities.
- In the absence of a specific model of their future ability to put funds to good use, GiveWell and the Open Philanthropy Project have implicitly applied standards to their own ability to use funds very different from the standards they apply to grantees. This inconsistency has likely led to a substantial misallocation of resources. Each of these organizations - and Good Ventures as well - should come to a more explicit opinion about:
- How it expects its marginal cost-effectiveness to compare to that of the GiveWell top charities.
- How it expects returns on giving to increase or diminish with scale.
- Try to influence others in ways that generalize well, prioritizing cooperation over control.
- Prioritize building a track record others can see and evaluate.
- Holding onto money is just as much a funding decision as making a grant. Apply "room for more funding" standards symmetrically, to move money to where it can do the most good.
- GiveWell already has a good track record of reasoning things through publicly. Lean into this by treating open communication as a tool rather than an intrinsic value.
Conflicts of interest
The GiveWell project
GiveWell's processes were designed to recommend charities with room for more funding. Good Ventures is taking GiveWell's advice, and has more money available than the entire funding gap of GiveWell's top charities. This is a substantial change in GiveWell's position. It may have made sense to keep the old model so long as the association between GiveWell and Good Ventures was exploratory, but now that GiveWell is in the position of recommending to Good Ventures how much money to give to the GiveWell top charities, and Good Ventures acts on these recommendations, GiveWell's constitution might no longer be well-suited to its situation.
GiveWell ought to reassess its mission. If it still makes sense to have an organization doing more or less what GiveWell is currently doing, making recommendations to Good Ventures that take into account Good Ventures's institutional interests seems like a massive conflict of interest. Such recommendations should probably come from elsewhere, like the Open Philanthropy Project, and GiveWell should stick to the business of recommending the best evidence-backed highly cost-effective charities with room for more funding that it can find. More separation between the decisionmaking processes of the two organizations would be very valuable in limiting the potential for such conflicts of interest.
If GiveWell does make a grant size recommendation, it should be based only on GiveWell's interests and mission, such as considerations of limiting its top charities' dependence, or its own dependence on a single donor. So long as GiveWell continues to position itself as a neutral arbiter of opportunities to do good by giving, it should not take or pretend to take Good Ventures's perspective here.
Perhaps it is important that GiveWell itself continue to recommend grant sizes to Good Ventures, optimizing for Good Ventures's negotiating position or the Open Philanthropy Project's institutional interests; trying not to do so might lead to insurmountable conflicts of interest. In this case, GiveWell should simply be integrated into either the Open Philanthropy Project or Good Ventures itself, as a department focused on global health and development grants, and any recommendations to external donors should come with a clear, explicit indication of the obvious potential conflict of interest.
Conditional funding and the Open Philanthropy Project
I've discussed a few reasons why the potential of influencing billions of dollars in charitable giving is likely to distort the decision processes of GiveWell and the Open Philanthropy Project. More generally, it seems like conflicts of interest and partial trust go hand in hand.
The corrosive effects of partial trust can be nonobvious because there is more than one kind of influence. In particular, it is very easy to confuse social power with influence over outcomes. If there is a problem with my car and I leave it with a mechanic, my influence on the quality of the work done is limited to the quality of the process used to select a mechanic. If instead, I, who know nothing about cars, stay on-site and repeatedly inspect the mechanic's work, or ask them to explain what they're doing so I can double-check that it makes sense, this might give me more of a feeling of control - but it's fairly likely that I'm not improving the quality of the work. Of course, it's plausible that I might be able to detect outright fraud and blatantly unnecessary work if I'm on-site. But I'm not improving the best-case outcomes much at all.
To the extent that Good Ventures relies on the Open Philanthropy Project's recommendations, the expected quality of its giving decisions is only as good as the quality of the process it used to select the Open Philanthropy Project. To the extent that Good Ventures staff evaluate Open Philanthropy Project recommendations on the merits, and sometimes find problems and correct them, they will have an experience of ongoing control that is related to the quality of decisions being made. But so long as Good Ventures holds onto the money, there will be an additional factor contributing to Good Ventures staff's experience of influence, one unrelated to their contribution to the quality of decisions being made: Open Philanthropy Project staff will be very responsive to their requests.
If Good Ventures were not meaningfully constraining the Open Philanthropy Project's grant recommendations, then it would not be improving the quality of giving decisions, relative to giving them the money. But to the extent that it is affecting (or even seems like it is affecting) grant recommendations, it gives the Open Philanthropy Project an incentive to deviate from its own best judgment. This leads to some combination of a few potential downsides:
- It constrains the Open Philanthropy Project's grants, not to the set they think is best, but to the best ones they think they can persuade Good Ventures to endorse. Since communication among humans is always imperfect, this has to mean that less good gets done in expectation.
- It habituates Open Philanthropy Project staff to optimize for persuasiveness to Good Ventures staff, and not their own judgment, which will corrupt their own sense of truth-seeking.
- It provides the illusion of control, which means that Good Ventures staff will likely think less hard about grant decisions than they would if they were getting advice from a stranger, deciding how much money to give a friend to give away with no accountability, or making an executive decision based on an employee's work.
As I see it, there are three simple potential courses of action that don't suffer from massive hidden incentive problems:
- Hire Open Philanthropy Project staff to explicitly work for Good Ventures.
- Give the Open Philanthropy Project money with no strings attached.
- Keep the money, to spend it by some other means.
The optimal combination of these might involve some explicit conditionality, but is unlikely to involve ambiguity; ambiguity just buys the worst of both worlds.
Good Ventures co-founders (and ongoing donors) Dustin Moskovitz and Cari Tuna might be uncertain how much they personally trust the Open Philanthropy Project or the individuals working there; this is a coherent stance to take. However, as far as I can tell, the correct response to this is to make their uncertainty explicit, and hedge their bets with equal explicitness. If they want to diversify their giving portfolio, then they can commit however much money they think is worth entrusting to the Open Philanthropy Project, and take responsibility for spending the rest themselves, or give it elsewhere. Similarly, they might think that there are diminishing returns to the Open Philanthropy Project's advantage at allocating money. This also suggests a strategy of explicit bifurcation. The same applies if they think that their values are mostly aligned with those of the Open Philanthropy Project, but also want advice in some program areas the Open Philanthropy Project thinks are less important.
If they think that the Open Philanthropy Project gets important access to donors from the impression that it will move billions of dollars, but they are not ready to confidently commit to this, I am not sure what the right thing to do is in the short run - but it seems like it would be a mistake to let the situation stay amorphous indefinitely. More generally, if they expect to be able to make a better decision on how much to allocate to the Open Philanthropy Project after a few more years of track record, then I would advise them to make that explicit, at least privately - commit some amount of money, with the option of committing more later - but remember that while that sort of conditional commitment is less corrosive than approving every grant decision, it still carries that cost to some extent. And of course, outsiders will have greater trust in the independent judgment of the Open Philanthropy Project if that commitment is made public.
Having made a direct recommendation for what someone else should do with their own money, I want to reiterate - I don’t think Dustin Moskovitz and Cari Tuna are morally obligated to give away anything. They certainly don't owe that action to me personally. But, since they have already expressed an admirable interest in giving away their fortune, in the way that does the most good, I am offering my advice on how best to do that. Their giving will be more effective if done in a way that doesn't create massive perverse incentives. I recognize that I am recommending unusual behavior. But Good Ventures is already a weird foundation. Moskovitz and Tuna have already shown readiness to do weird things, if the weird thing seems to them to be the best thing. I hope that they will do what I say if and only if they are personally persuaded.
Finally, while I've addressed the possibility that mixing their judgment with the Open Philanthropy Project's might lead to passing up opportunities that the Open Philanthropy Project alone might identify, the reverse is also true. Moskovitz and Tuna are smart and benevolent people, with cognitive styles, knowledge, personal interests, and backgrounds different from those of Open Philanthropy Project staff. They have given some money away on their own initiative, and an important benefit to making a division of funds at least partially explicit could be that they themselves feel free to do the best thing by their own judgment with whatever money they decide to hold onto, without waiting for respectable authorities to weigh in.
The Open Philanthropy Project is about finding high-impact uses for money. Reserving money for the Open Philanthropy Project is itself a use of money. To recommend that Good Ventures hold off on giving to the GiveWell top charities is, effectively, to recommend that reserving that money for other uses, such as the Open Philanthropy Project, is a better use of the money.
This is, in principle, possible. But the test should be symmetrical. If a higher standard is applied on one side than the other, money will be misallocated to the side with a lower standard.
The only way to apply a symmetrical standard is for the Open Philanthropy Project to explicitly evaluate itself as a program, and compare itself with other ways to distribute money.
The decision processes of the Open Philanthropy Project and Good Ventures would strongly benefit from an explicit judgment on whether their expected opportunity cost is better than the cost-effectiveness of the GiveWell top charities, worse than the GiveWell top charities but better than that of marginal GiveWell donors, or worse than that of marginal GiveWell donors.
If they decide to go ahead with this, then in making this judgment, I expect them to be correctly skeptical of extreme conclusions and to double-check any assumptions that produce them. However, I also expect that they will face the temptation to avoid specifying an implicit claim if it seems arrogant to them. It is very important to resist this temptation, at least in internal communication; the only way to reliably avoid acting on arrogant assumptions is to examine those assumptions explicitly, so that you can evaluate how arrogant they are and whether you really believe them. Normal-seeming behavior is not a reliable safeguard against behavior that makes no sense given your values, especially when you are trying to do much better than the norm.
And of course, there's the very real possibility that the Open Philanthropy Project really can knowably do better than anyone else, in which case it would be a terrible loss to humanity for the Project to assume otherwise, and behave too timidly.
A scenario analysis that explicitly accounts for uncertainty and learning over time is important for building any such model. For instance, suppose the Open Philanthropy Project would be comfortable recommending that Good Ventures spend down its endowment now, if not for a 5% chance of an unanticipated opportunity to cost-effectively spend a billion dollars averting a global catastrophic risk some time in the next ten years. The correct response to this is to take into account the value of that spending, so that each dollar of spending that cuts into the relevant reserve gets penalized at 5% of the value of the hypothetical opportunity. Similarly, if the Open Philanthropy Project expects much more information in five years about the viability of spending large amounts of money in a cost-effective way to fund basic scientific research, with a 50% probability of success, it should compare the value of spending now against the present value of the expected future opportunity - 50% of the value of spending the money on basic scientific research, and 50% of the value of spending the money on opportunities it is more confident will exist.
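The arithmetic in the two hypothetical scenarios above can be made concrete with a short sketch. All of the numbers here are the made-up figures from the example (or placeholder values I've assumed purely for illustration, marked as such); none are real estimates of anyone's cost-effectiveness.

```python
# Illustrative expected-value arithmetic for the hypothetical scenarios above.
# Values are in "good done per dollar" on an arbitrary scale; every number
# below is either from the hypothetical in the text or an assumed placeholder.

# Scenario 1: penalize spending that cuts into a catastrophic-risk reserve.
p_catastrophe_opportunity = 0.05   # 5% chance (from the text) the $1B opportunity appears
value_per_dollar_if_needed = 10.0  # assumed placeholder: value of a reserved dollar then
# Each dollar spent from the reserve is penalized by the probability-weighted value:
penalty_per_dollar = p_catastrophe_opportunity * value_per_dollar_if_needed

# Scenario 2: compare spending now against a 50/50 future research opportunity,
# ignoring discounting for simplicity.
p_research_viable = 0.5   # 50% probability of success (from the text)
value_research = 3.0      # assumed placeholder: value/dollar if basic research pans out
value_fallback = 1.5      # assumed placeholder: value/dollar of confident fallback uses
value_of_waiting = (p_research_viable * value_research
                    + (1 - p_research_viable) * value_fallback)

value_of_spending_now = 2.0  # assumed placeholder: value/dollar of best current option
spend_now = value_of_spending_now > value_of_waiting

print(penalty_per_dollar)  # 0.5
print(value_of_waiting)    # 2.25
print(spend_now)           # False: under these placeholders, waiting wins
```

With these particular placeholder numbers, waiting narrowly beats spending now; the point is not the conclusion but that the comparison becomes checkable once the assumptions are written down.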
It's reasonable to sit on extreme recommendations for a while (but not too long!) before acting, but one learns that one is wrong faster by precisely specifying one's assumptions. Sharing a tentative model, and the actions it seems to imply are best, even before the Open Philanthropy Project and Good Ventures are ready to act on that model, could be an important part of that process. This not only gives their audience the chance to suggest corrections and improvements to the model, but gives people the chance to see whether they agree with it and reallocate their giving accordingly.
If we outsiders really are worse at giving on current margins than the Open Philanthropy Project is, it's important for us to know that, so we can give it our charity budgets and then decide between earning to give and direct work. If we think we're likely to be better at allocating our own money, then we should likewise act accordingly.
Recommendations to Good Ventures about how much to fund the GiveWell top charities, based on its institutional interests, should eventually be informed by this sort of model.
Returns to scale
Coming up with an opinion on one's marginal cost-effectiveness as a funder ought to involve an opinion on whether one faces diminishing, constant, or increasing returns to scale. Constant returns to scale at a cost-effectiveness level similar to that of current GiveWell top charities imply a very large global impact. Diminishing returns imply that more good might be done by giving money to smaller organizations or individuals to regrant, even if they are less good at it. Increasing returns suggest that it would be more promising to look for already-established funders to pool money with.
My impression is that GiveWell is full of smart, impressive, honest people, and that it really has identified a class of giving opportunities substantially better than the typical charity. It has also managed to document its learning process, and change course multiple times when old methods weren't working. The Open Philanthropy Project has identified some very promising program areas and made some interesting grants. It seems quite reasonable to allocate a substantial amount of money to test the hypothesis that something like the Open Philanthropy Project can work well.
But substantial competition does exist. The Gates Foundation has a decent-looking track record in consequentialist giving. DARPA and IARPA have decent track records in funding innovative research, and seem like a good fit for caring about averting global catastrophic risks. JPAL and Evidence Action have been doing good research on developing-world interventions. The Future of Humanity Institute has a track record of finding people doing research important for the future of humanity and supporting them. The Singularity Institute settled on two important focus areas where the Open Philanthropy Project has now made grants (it's now split into CFAR and MIRI, both grantees) long before the Open Philanthropy Project decided to look into these areas. These institutions all have important practical and methodological limitations, but so does the Open Philanthropy Project. I suggest explicitly evaluating the case for delegating some funding decisions to these organizations, even if only to make it clear exactly why giving them the money seems worse in expectation than holding onto it.
Plausible individuals to consider include people mentioned as personally influencing GiveWell co-founder Holden Karnofsky's thinking, in his writeup of how his thinking has changed. I know less about which individuals Cari Tuna and Dustin Moskovitz know and trust, but plausible contenders include Peter Singer (through whom they found GiveWell), Dustin Moskovitz and Cari Tuna personally, and GiveWell founders Holden Karnofsky and Elie Hassenfeld personally, as well as other current employees. Anyone GiveWell has relied on as an expert about a potentially promising intervention seems like a good fit too. Google CEO Larry Page has suggested giving to Elon Musk instead of any charitable organization at all; interestingly, in addition to developing a commercially viable electric car, rockets, and solar panels, Musk has also identified AI risk as an important focus area and funded efforts to alleviate it before the Open Philanthropy Project did.
(I suspect that individuals' personal giving patterns would be substantially different if they were personally allocating large amounts of money unaccountably, than the recommendations they would make to a charitable foundation, which are subject to accountability constraints and the need to persuade an outside party. It is not obvious to me that the latter method is better, and figuring out what method works better seems potentially extremely important.)
Finally, other GiveWell donors have shown an ability to notice GiveWell's charity recommendations and give based on them, a strategy very similar to the one currently used by Good Ventures.
This list is mainly meant as an example, to illustrate a few alternatives to holding onto the money; it is not a strong recommendation to fund all parties mentioned. No one mentioned or alluded to above has asked to be placed on a list like this.
The same mechanism that might lead Good Ventures to conflate personal involvement with influence over outcomes applies to GiveWell and the Open Philanthropy Project. Applying tight standards to others often implies very optimistic assumptions about oneself. The solution to this problem is to pick a standard that can be applied symmetrically, such that you'd be glad if another agent like you used the same standard.
To the extent to which the value proposition of GiveWell and the Open Philanthropy Project is to influence others' giving in the future, rather than optimizing present giving, they should invest in creating information that future similarly motivated donors are likely to want. This means prioritizing creating a clearer track record, over prioritizing direct impact.
Symmetric standards for room for more funding
GiveWell and the Open Philanthropy Project have generally tried not to recommend more funding than they know an organization can use in the short run. This "room for more funding" standard made a lot of sense in GiveWell's early days, when its money moved was comparatively small and was used up every year. However, with the money Good Ventures will eventually move, applying this standard now implies that the Open Philanthropy Project's room for more funding is $10 billion. It seems unlikely to me that this number would be justified if the same standards were applied internally.
The right answer isn't to give more in an unprincipled way, or to somehow decide faster - time really is scarce - but to relax room for more funding standards until Good Ventures can consistently hold onto whatever endowment is left.
This is an unconventional recommendation since it involves spending down the Good Ventures endowment much more quickly - but to the extent to which Good Ventures expects to outperform other foundations, it should expect to be doing unusual things.
The Open Philanthropy Project has already been reevaluating its communication strategy, and the role of transparency. This change is in the right direction, but to capture the full value of such a shift, they should explicitly define the tactical and strategic goals they hope to achieve through communication and disclosure, and then use that to determine where they share information. Some relevant considerations are:
- Affecting others’ donation behavior.
- Demonstrating that their recommendations deserve the credence of the public.
- Checking crucial models and assumptions.
- Sharing info that might be useful for others’ benevolent actions.
These considerations suggest that those organizations should often share less information, in more compact form, but faster. For instance, if an investigation into a program area has been put on hold indefinitely, publicly announcing this decision and the reasons for it may encourage others to pick up the work (if the delay is due to the interference of higher-priority work or a lack of fit between the organization and the program), or either avoid the topic or point out incorrect assumptions (if the program has been dismissed on the merits). As another example, if a major writeup is in progress, clearly indicating this may lead others to delay their decisions and research until they can take this soon-to-be-available information into account, thus avoiding unnecessary duplication of effort.
On the other hand, if there doesn't seem to be any benefit to others acting based on some information, and there's no conflict of interest involved, then there's no need to spend valuable staff time writing it up, potentially delaying the release of information more relevant to coordination.
Recommendations to other Effective Altruists
Not all the work here has to come from inside organizations like GiveWell. I’m hoping to contribute to a broader public conversation about effective giving, one that GiveWell and the Open Philanthropy Project have been contributing to through articles and blog posts and other public documentation of decisions. Much of the work described above can be attempted by someone outside GiveWell and the Open Philanthropy Project almost as well as by someone inside it. However, it would help avoid wasted effort if GiveWell and the Open Philanthropy Project gave some public guidance on which arguments they find persuasive or potentially persuasive, and what further work would be most likely to affect their decisions.
More generally I hope that effective altruists, especially the sort of highly engaged ones likely to read this post, will make more independent decisions as a result of this, and trust their own judgment more, comparing the value of the best options they can find with that of GiveWell's top charities (again using their own judgment), and doing what actually seems best to them.
Disclosures and a note on interpretation
I worked at GiveWell / the Open Philanthropy Project for a year, left on good terms, am friends with people who work there now, and am also friends with people who work for organizations that have received funding from them, directly or indirectly, and it's plausible that one of those organizations will employ me in the future.1
The quotes from official GiveWell and Open Philanthropy Project publications are the only parts of this post that directly reflect their opinions. Any other conjectures I've made are based on those publications, the public record of their actions, and my own interpretation of those things. My opinions here reflect my own private judgment.
I've seen someone summarize my prior post on the limits of GiveWell's case for deworming charities as saying that people should stop taking GiveWell's word as gospel. I am actually trying to make a different point: that the responsible thing for potential donors to do is to take seriously the literal content of what GiveWell is publicly saying, and evaluate it for themselves. We have a wonderful wealth of info provided by a scrupulously careful organization that's trying to be honest and note the downsides as well as the upsides to its recommendations.
I don't mean to deprecate time-constrained donors here either. Some people are legitimately busy doing other things that fulfill their values, and want to do good with their money, but don't have the time to do a lot of their own research on the matter. Some people are still learning. But if you're giving to a charity because GiveWell seems very good and careful, and recommends it - or because a friend you trust told you to give to a GiveWell top charity - then that is the decision process you used, and that is the decision process you should report. What's wrong is to make up another reason, because it feels more respectable.
Similarly, I've pointed out a bunch of problems with the way GiveWell is currently set up, but I don't think this should make anyone think worse of the individuals working at GiveWell. GiveWell is probably more honest than most people I know. It isn't good with perfect reliability, because no one is. Almost no one seems to even be trying to hit that standard. But GiveWell seems to be. They flubbed transparency early on (the astroturfing incident) - and responded with a mistakes page and a very exacting transparency policy. That's what it looks like to genuinely try to do better. I couldn't have made this kind of critique without the help of public information that GiveWell and the Open Philanthropy Project have made available laying out their thinking. This is an excellent practice, and I wish more organizations, inside and outside the effective altruism movement, did likewise. And based on my experience in both public and private conversations, I would not be at all surprised if, of those who read and respond to this critique, some of the most serious engagement comes from GiveWell itself.