A philosopher friend told me about a fundraiser urging philosophers to give to the Against Malaria Foundation (AMF), and asked me for my thoughts on it. They were especially interested in making sure there were multiple public perspectives on this because some philosophers seem to have been responding by giving more than they can afford.
I applaud these philosophers for putting the ideal of taking basic rational argument seriously into practice, and taking responsibility for trying to use this power for good. This fundraiser is part of a broader event called Philosophers Against Malaria, which is affiliated with the Effective Altruism (EA) movement, and it seems like a natural expansion of the ideas and methods of that movement. This is extremely appropriate; philosophers are some of the key founders and proponents of EA, and for good reason – giving a large share of one’s developed-world income to charities focused on health interventions in poor countries is an unconventional action, but follows from clear and simple reasoned arguments based on common moral intuitions.
However, I think there are some limits to the way EA’s recommendations are applied in practice that will predictably lead to underperforming your true potential for doing good. To be a bit more specific, there’s an obvious argument that if you live in a rich country, care about the well-being of the people around you, and don’t have a principled reason to care less about those far away, then it should look like a great deal to give to charities operating in much poorer countries where money goes farther. This is true as stated.
This, however, is often tacitly conflated with the claim that it is morally obligatory to give a large share of your income to such charities – generally the ones endorsed by some specific organization such as GiveWell or Giving What We Can – and that if you commit to doing this, you can stop worrying about your impact on the world. This doesn’t necessarily follow, for a few reasons:
- You may not be the core audience for charity recommenders like GiveWell or Giving What We Can.
- For uncontroversial interventions, money may not be the limiting factor.
Moreover, the broader EA movement that produced these recommendations has some methodological issues that should make you doubt that it’s giving you the most relevant information on how to do good:
- In recommending ways to do good, it centers the role of giving money to charity, implicitly at the expense of more direct ways to do good.
- In evaluating actions, it implicitly uses an act-utilitarian or -consequentialist framework even in cases where rule-utilitarianism would be much more appropriate.
Specific reasons to consider other giving strategies
You may not be the core audience for charity recommenders like GiveWell or Giving What We Can.
I'll focus on GiveWell here because other EA charity recommendations seem to follow their lead on empirical claims, though there's some good independent thought on e.g. the Giving What We Can blog.
GiveWell publishes exceptionally careful research on charities, but you only get the benefit of that carefulness if you pay attention to the details. Not everyone reading GiveWell’s website is the same, and the top charity recommendations are subject to substantial constraints. GiveWell has limited staff capacity, so it simply can’t look into charities below a certain size – it’s focused on recommending charities that can absorb a substantial amount of money from GiveWell donors, who collectively give tens of millions of dollars annually. Thus, individual small donors with time to do research should expect to be able to find highly cost-effective giving opportunities that GiveWell missed.
GiveWell also perceives its audience as uninterested in higher-expected-value but higher-risk options. If you take expected value considerations and logical arguments seriously, and want to do the most good or help the most people, you will find yourself drawn towards stranger-sounding charitable endeavors. Things like preventing human extinction, e.g. by mitigating risks from emerging technologies, in order to save many orders of magnitude more future lives than exist today. Or alleviating wild animal suffering. Or structural change such as political advocacy to let people work and live where they want to.
The Open Philanthropy Project, a spinoff from GiveWell, advises the large foundation Good Ventures on its grantmaking, and has focused on these somewhat more controversial giving areas. GiveWell's recommendations, by contrast, focus on things that require fewer imaginative leaps or controversial claims to endorse. As GiveWell co-founder Holden Karnofsky said about Good Ventures's giving in a comment on GiveWell’s 2015 top charity recommendations announcement:
This portfolio does include, and will continue to include, many gifts that are controversial and highly debatable, such that many of the donors I’ve spoken to are uncomfortable with them. By contrast, our top charities work focuses on finding opportunities whose case is largely concrete and verifiable. Donors who wish to effectively support the work of the Open Philanthropy Project can make unrestricted donations to GiveWell, but we don’t want this to be the default for people supporting top charities.
While many people may not take expected value seriously, and prefer a “certain” (not actually certain, of course) chance of saving one life to a 20% chance of saving ten (even though the latter saves twice as many lives in expectation), I’m guessing that intellectuals such as philosophers are substantially more likely to be willing to make bets like the latter. This is an important advantage, and you should not throw it away. The Open Philanthropy Project, originally called GiveWell Labs, started as an attempt to find charities to recommend to potential donors willing to give to weirder or more speculative charities if it helped them do more good:
GiveWell’s traditional work (the work behind our current top charities) and our work on GiveWell Labs reflect two very different visions of giving.
- The first, giving as consumption, sees giving as analogous to making a purchase. For every $X one spends, one gets some desirable outcome (such as a life saved), and the goal is to find giving opportunities that can deliver these outcomes approximately linearly and with good value-for-money.
- The second, giving as investment, sees giving as analogous to investing in a company. Risk is known to be high, and outcomes hard to foresee in detail (particularly for earlier-stage investments). Rather than asking “What will each $X buy?” one tends to focus more on questions like “Does this organization have a good team and model?” and “Is this organization positioned to have a huge upside, even if one can’t say in advance just what this would look like?”
[...] We believe that GiveWell has played a role in improving the quality of “giving as consumption” opportunities, and will continue to do so. At the same time, we think that the “giving as consumption” framework will likely always apply to only a small fraction of giving opportunities, and broadening our horizons (via GiveWell Labs) is essential.
We think the principal advantages of our current top charities are that:
- They represent the best opportunities we’re aware of to help low-income people with relatively high confidence and relatively short time horizons. If you’re looking to give this year and you don’t know where to start, we’d strongly recommend supporting our top charities.
- Due to the emphasis on thorough vetting, transparency, and following up, our top charities represent excellent learning opportunities, and we feel that one of the most desirable outcomes of giving is learning more that will inform later giving. Supporting our top charities helps GiveWell demonstrate impact and improves our ability to learn, and we are dedicated to sharing what we learn publicly.
- There is an argument for saving money rather than giving, and giving at the point where better information on top giving opportunities is available. We do expect to make substantial progress on GiveWell Labs over the next few years.
- If you have access to other giving opportunities that you understand well, have a great deal of context on and have high confidence in — whether these consist of supporting an established organization or helping a newer one get off the ground — it may make more sense to take advantage of your unusual position and “fund what others won’t,” since GiveWell’s research is available to (and influences) large numbers of people.
If you think you can do better than the typical GiveWell donor with careful thought, you should consider doing your own research. The Open Philanthropy Project provides some food for thought here, in terms of areas where money might make a big difference. They haven’t funded everything they’ve looked into, or even everything they think might be promising. But if you already have promising leads that aren’t on the Open Philanthropy Project’s radar, even better!
If you personally don’t have a lot of money to spare, and don’t think it’s worth your time to optimize a very small amount of money, you can pool your donation with that of like-minded people, and create a charity lottery in which only the “winner” has to decide how to give away the money, so that only one of you has to pay the deliberative cost. This can still operate at a scale much smaller than that of GiveWell, with the corresponding advantage in terms of potential small but highly cost-effective giving opportunities.
(UPDATE: There's now been at least one successful implementation of a donor lottery.)
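The mechanics of such a lottery are simple enough to sketch in a few lines of code. This is an illustrative sketch, not a production implementation, and the donor names and amounts below are hypothetical: each participant wins with probability proportional to their contribution, so everyone’s expected directed donation equals what they put in, while only the winner pays the deliberative cost.

```python
import random

def run_donor_lottery(contributions, rng=None):
    """Pick one participant to direct the entire pooled donation.

    Each donor wins with probability proportional to their contribution,
    so every donor's *expected* directed donation equals what they put in,
    but only the winner has to research where the money should go.
    """
    rng = rng or random.Random()
    donors = list(contributions)
    weights = [contributions[d] for d in donors]
    pool = sum(weights)
    winner = rng.choices(donors, weights=weights, k=1)[0]
    return winner, pool

# Hypothetical example: three donors pool their annual giving.
winner, pool = run_donor_lottery(
    {"alice": 500.0, "bob": 1500.0, "carol": 1000.0},
    rng=random.Random(42),
)
print(winner, pool)  # the winner now decides where the whole $3000 goes
```

Note the design choice: because win probability is proportional to contribution, the lottery is expected-value-neutral for every participant, which is what makes it attractive to anyone who takes expected value seriously.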
For uncontroversial interventions, money may not be the limiting factor.
When people argue that you should give to a well-known charity endorsed by a respectable organization like GiveWell, they are typically assuming that high-impact “retail giving” is possible. In other words, that there exist charities known to have the most cost-effective interventions, and that the main constraint on these charities is the total amount of money people want to give to maximally cost-effective charities.
If you believe that money is scarce in this way, then major foundations with tens of billions of dollars, that are trying to fund evidence-backed interventions, are inexplicably passing up these giving opportunities. One such foundation is the Gates Foundation, with something like $40 billion to give away. Another is Good Ventures, advised by GiveWell itself, with an eventual $10 billion to give away. It's not plausible that GiveWell's top charities could absorb this amount of money, at current cost-effectiveness levels, in a reasonable amount of time.
On the other hand, if you think that these foundations have better uses for their money than the GiveWell top charities such as AMF, you probably have a better use for the money too. You could give directly to these foundations, or – as suggested above – do your own research or pool your donations with those of other like-minded people.
Charity is the main way to do good
EA claims to be a movement about trying to do the most good through any means. There are fine EA organizations such as 80,000 Hours that are not specifically oriented around charity. But EA does give charitable donations a central role in its model of how to do good, and this is not clearly justified by the facts.
Charity is, of course, more explicitly labeled as do-gooding than other endeavors. But if we’re trying to maximize our actual impact on the well-being of others, and not just the social credit we get for being do-gooders, then it behooves us to ask the question: for a unit of time or money I invest, do I do the most good by giving it to a charity, offering a marketable good or service, doing some direct act of service for a friend or stranger without any intermediating institution, or in some other way? A one-size-fits-all answer is unlikely.
Second, there's the moral licensing effect, where doing one good deed makes people less interested in investing effort into doing others. This isn't a fully general argument against doing good-seeming things – it mainly weighs against focusing on things that are especially strongly socially coded as do-gooding. Holding actual impact constant, the more “officially good” your actions are, the more they seem likely to displace other good actions. If your reason for acting is that you have some specific idea of what your actions will do and how, that’s more likely to be net-positive.
Third, as I argued above, money for uncontroversial things does not appear to be the limiting factor for doing good. Coordination and information sharing appear to be much more important constraints.
Philosophers have done a lot of important intellectual work in EA, finding nonobvious ways to do outsized amounts of good. If you as a philosopher want to help, that might be a good way. The work of philosophers like Toby Ord and Nick Bostrom has genuinely advanced humanity’s understanding of how to help people more effectively by examining crucial concerns such as existential risks. Advancing this effort seems pretty valuable.
And let's not forget the value of your day job: corrupting the youth by teaching them critical thinking skills!
More directly relevant to philosophers, EA specifically implements act-utilitarian intuitions around when and how much it is proper to give, in ways that will systematically favor things that are easy to count over things that are hard to count.
There are excellent EA act-utilitarian analyses of these kinds of choices with genuine rigor - see Katja Grace’s work on whether it’s worth the effort to be a vegan - but most people most of the time will fail at doing the act-utilitarian calculus properly. Alyssa Vance recently worked through a simple example of how this can lead to overly general recommendations that are wrong in many cases.
CrowdRise has the tagline “If you don't give back, no one will like you,” which I take to be a joke. But in all seriousness, EA often implicitly encourages people to scrounge up spare cash for charity when they should instead be using that money to free up time for direct work (since the donation’s impact is a rounding error compared with what they can accomplish at work), to invest in themselves to increase their long-run impact, or to invest in local opportunities to do good or help people they know do the same. This is a typical act-utilitarian failure mode: inappropriately demanding in ways that actually make long-run outcomes worse.
We very badly need a clear moral articulation of the general principles we should use as cognitive approximations, and that’s something where philosophers have a lot to add. How do we sincerely try to optimize – to do the most good we can rather than merely some good – in ways that generalize well, without ending up with a morality that tells us to lie, cheat, steal, and immolate ourselves on the altar of charity? We need help figuring this out.
AMF is great and you should consider giving to it.
None of this is meant to deprecate anyone who, having considered these arguments, chooses to give to AMF anyway. As far as I know, it’s genuinely a solid organization that saves kids from dying of malaria, and a pretty good thing to fund. Just don’t impoverish yourself in the process; your talents are needed, and your long-run well-being matters too, so take care of yourself.
Further reading on GiveWell and Effective Altruism
For more information on AMF and other GiveWell top charities, you can go to GiveWell’s website. Despite my criticisms above, I think their research is very good; about as good as what you’d find in a top academic peer-reviewed journal. But to know what’s in it, you have to read it; the mere fact that they recommend something is not a strong argument that you personally should give to that charity, for the reasons discussed above. I especially recommend their charity reports and intervention reports. Their blog also has a lot of great writing on giving, explaining their methods and reasoning clearly. If you want to think better about philanthropy and doing good in the world, it's excellent food for thought.
(Disclosure: I used to work at GiveWell and have close friends working there. I have no current affiliation with GiveWell and my opinions are my own.)
For a more critical take on GiveWell and Effective Altruism, see my case study about GiveWell’s 2015 recommendations here, and discussion of misconceptions about deworming charities here.
Via Facebook, Kelsey commented:
1) Most EAs have not done the work to have a proper estimate of how opportunities to make real improvements locally, where you have firsthand knowledge of how things work, compare with giving abroad, where you're trusting fairly abstract metrics to generalize well in an environment we know much less about.
2a) My best guess is that to the extent that you see what seem like high-value opportunities to improve things that you personally know and understand, they beat "retail charity" recommendations.
3) When you don't perceive any such opportunities, then unless you have a particularly demanding career, you are probably not paying enough attention to your local environment and can do more good on current margins by upregulating that (this is "good citizenship") than by trying to extract more money from yourself to give to "retail charities."
4) That said, some people in some situations do have extra money, and it seems pretty likely that they'll do better giving it to GiveWell top charities than e.g. the Red Cross or United Way. However, if they're basically trusting GiveWell to make the recommendation, they should just give the money to GiveWell. (Same RE Giving What We Can.)
5) The amount of time it takes to find other good charities can be pretty high, but the amount of time it takes to be properly persuaded by GiveWell's research - i.e. to have an independent opinion on whether the GiveWell top charities are good giving opportunities - is similarly high!
6) If you know other like-minded people, donor "lotteries" are a more natural way to solve the time cost problem because they don't suffer from the problem where everyone tries to do the same thing. You have little enough to donate that you're not near diminishing returns, and too little to be worth your time researching a charity, so you pool with other like-minded donors and draw straws, and the loser has to spend time picking a charity.
In this framework, GiveWell is just a huge donor lottery in which Holden and Elie have declared themselves the losers in advance. That approach has obvious flaws (way too big RE diminishing returns, selects for self-promoters & methods that sound better than they are) but is still a reasonable thing to pitch into if you need a donor lottery, and don't have enough like-minded people nearby to form a pool, and think that Holden and Elie are at least somewhat good at picking charities.
On the same principle that small cap stocks are undervalued because they're just too small for huge hedge funds to invest in getting the price exactly right, you should expect smaller giving opportunities to be "underpriced" because they're not worth the time of GiveWell / Open Phil / Good Ventures or the Gates Foundation. But if you don't know enough other like-minded people to pool enough $ to justify the time cost, then just giving to (or based on the recommendation of) an org like GiveWell seems like a good choice.
In the particular case you bring up, I agree that in the short run it's not a good deal for most philosophy grad students to do the research to pick a charity for just their own personal donations.
Another thread from Facebook I think worth preserving (with permission) attached to this post:
Ozy: I'm not sure if I agree with your thoughts about high-value opportunities to improve things you personally know and understand. It's possible I don't entirely understand what you mean-- could you give some examples of the kinds of things you're thinking about?
Me: A couple examples off the top of my head:
1. Giving a loan (or gift) to a cash-poor person you know & trust who has a short-run emergency (e.g. their car broke down and they need to be able to get to work) or opportunity. This seems like it basically is what microcredit pretends to be.
2. Contributing time & money to communal resources. Examples from our community: probably a substantial amount of the Rationalist/EA community's intellectual productivity depends on the maintenance of shared spaces, both physical (e.g. common spaces in group houses, the CFAR/MIRI offices), and online (e.g. Less Wrong during its heyday), and these cost both time and $ to maintain. Friends of ours trying to set up an unschooling collective are also a good example.
Note that these are investments in production, on a continuum with investing in oneself.
Ozy: Hrm. On one hand, I agree that it is likely that this is the comparative advantage of (some) small donors compared to foundations. And I also think that (a) EA is more talent-constrained than funding-constrained and (b) transforming money into talent is something that's often more easily done by small donors than big ones.
But for my personal Dunbar's number of people, it seems to me that there are enough people with spare resources that crises similar to the first type usually get resolved relatively quickly, and that therefore I expect my actions about things like that are very replaceable. I'm uncertain how many people are in a similar situation.
This next paragraph is something I have a fairly high degree of uncertainty on. I feel like a lot of the investment in communal resources, support for friends, etc. I see is not motivated altruistically, but instead through reciprocity/trade (in a broad sense). (For instance: "let's all chip in for a really fast router", "I like so-and-so and I want to give them a place to stay while they get on their feet, because that means I can spend time with them"). I think trade generally works better than altruism. So I'm concerned about people doing those things through altruistic motivations, and that drives out the trade motivation, and leads to worse outcomes overall. I think it is probably better to keep altruism for people it is very difficult to trade with (the global poor, animals, future people).
Me: "I think trade generally works better than altruism. So I'm concerned about people doing those things through altruistic motivations, and that drives out the trade motivation, and leads to worse outcomes overall."
I think I agree here. I'm not quite saying that we should directly use altruistic utilitarian intuitions to motivate investment in goods like a fast router. I'm saying that consequentialist meta-ethics suggests that, on current margins, many EAs should endorse their tradey/citizeney desires to improve their local environment, and indulge them.
There are a few EAs who actually do seem to do trade, self-improvement, etc. for altruistic reasons. Katja Grace is my go-to example of someone who seriously seems to have avoided investing in herself until she was persuaded to do so by altruistic arguments. (See also Paul Christiano's integrity for consequentialists.) But it's tremendously cognitively costly to do the act-utilitarian calculation each time, instead of just engaging your sense of local opportunity directly.