GiveWell: a case study in effective altruism, part 2

In my prior post on this topic, I laid out seven distinct arguments for limiting Good Ventures funding to the GiveWell top charities. In this post, I explore the first of these:

Good Ventures can find better opportunities to do good than other GiveWell donors can, because it is willing to accept more unconventional recommendations from the Open Philanthropy Project.

I'll start by breaking this up into two claims (disjunctions inside disjunctions!): a bold-sounding claim that the Open Philanthropy Project's impact will be world-historically big, and a milder-sounding claim that it can merely do better than other GiveWell donors.

The bold claim seems largely inconsistent with GiveWell's and the Open Philanthropy Project's public statements, but their behavior sometimes seems consistent with believing it. However, if the bold claim is true, it suggests that the correct allocation from Good Ventures to the GiveWell top charities is zero. In addition, since the claim is a bold one, the burden of evidence ought to be fairly high. As things currently stand, the Open Philanthropy Project is not even claiming that it is true, much less providing us with reason to believe it.

The mild claim sounds much less arrogant, is plausibly consistent with GiveWell's public statements, and is consistent with partial funding of the GiveWell top charities. However, the mild claim, when used as a justification for partial funding of the GiveWell top charities, implies some combination of the following undesirable properties.

  • Other GiveWell donors' next-best options are worthless.
  • Good Ventures and other GiveWell donors have an adversarial relationship, and GiveWell is taking Good Ventures's side.

Before addressing the details of these two sub-claims, I'll take a detour into how to make comparisons between different giving opportunities at different times. It's easy to conflate different considerations here, and it's important to understand why and how discount rates are relevant, in order to think clearly about this sort of comparison.

Argument 1: Better giving opportunities

One might imagine that the Open Philanthropy Project has better giving opportunities than small donors do. People who use GiveWell top charity recommendations might not do very well trying to use the money for good without GiveWell; they might, for instance, place a premium on certainty instead of taking expected value calculations seriously. The implied argument here is not merely that the Open Philanthropy Project has an information advantage, but that it has useful information that cannot be effectively and efficiently communicated to other donors. Since Good Ventures's marginal dollar is then better spent on the opportunities that information reveals, Good Ventures ought to try to induce as much top charity funding as possible from other GiveWell donors. Taken to its logical conclusion, this implies that the right thing for the Open Philanthropy Project or Good Ventures to do is acquire as much money as possible from other donors, and that other donors ought to simply give to the Open Philanthropy Project.

There are two versions of this claim, with very different consequences:

  1. The bold claim: advantage over GiveWell top charities. Good Ventures's alternative uses for the money are better than the GiveWell top charities.
  2. The mild claim: advantage merely over other donors. GiveWell top charities are more valuable than what Good Ventures would otherwise do with the money, but other GiveWell donors would do even worse.

If the bold claim is true, then not only should Good Ventures not fully fund the GiveWell top charities - it should not fund them at all, because funding other things is a better use of the money. Or at least it should cut its funding of them to a level where the bold claim is no longer true. I consider the bold claim plausible.

If the mild claim is true, then Good Ventures should fund as little of the GiveWell top charities as it can get away with.


I'm going to consider these claims separately, but first there's some conceptual untangling to do. GiveWell's post explaining the splitting strategy is called "Good Ventures and giving now vs. later", and it treats this as a single question of how much Good Ventures should give right now to the GiveWell top charities. This is better modeled as two separate considerations:

  • For each program, what is the optimal time to give?
  • Which programs should be funded?

How to compare different giving opportunities

I'm going to try to build up an intuition here by working through a series of examples, adding a little bit of complexity at a time. I'm ignoring inflation for simplicity in all these examples.

The time value of money

Suppose you are a philanthropist with a single fund of a billion dollars to give away however you want, and a single giving opportunity that can scale up arbitrarily. Let's say that your giving opportunity is to transfer cash directly to the world's poor, and that the infrastructure is already set up, so that the only benefit is increased consumption by the poor - i.e. they spend the money immediately to make their lives better, in a way that has no compounding gains.

While you hold onto your money, you're investing it to maximize its return. Your fund has an annual return r. Let's say the return on your investment is 6%. If you set aside $1.00 this year, that lets you spend $1.00*1.06 = $1.06 next year, or $1.00*1.06^2 = $1.12 the year after, or $1.00*1.06^10 = $1.79 in ten years. In general, setting aside x this year lets you spend x*(1+r)^n in n years. Equivalently, to spend x in n years, you have to set aside x*(1+r)^-n in today's money. Finance uses the phrase "present value" to describe this - the present value of an x-sized expenditure in n years is x*(1+r)^-n.
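
To make the arithmetic concrete, here's a minimal sketch of both functions, using the 6% return assumed above:

```python
def future_value(x, r, n):
    """Value in n years of x set aside today at annual return r: x*(1+r)**n."""
    return x * (1 + r) ** n

def present_value(x, r, n):
    """Amount to set aside today in order to spend x in n years: x*(1+r)**-n."""
    return x * (1 + r) ** -n

r = 0.06  # the 6% annual return assumed above
print(future_value(1.00, r, 2))    # ~1.12
print(future_value(1.00, r, 10))   # ~1.79
print(present_value(1.79, r, 10))  # ~1.00
```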

What do you want to do, to maximize the good your money does? Well, if your giving opportunity stays the same every year, and r>0, then you might want to hold onto the money this year. But if you hold onto the money this year, you'll want to hold onto the money next year, and so on. The logical consequence of this is to hold onto the money forever.

Economist Robin Hanson has suggested this as a way to vastly increase the impact of your philanthropy:

Imagine that we discovered a “hole in space”, through which we could see an alternate Earth, filled with people recognizably like us, though different in many ways.  Those people could also see us.

While no objects could move from their side of the hole to ours, small items (but not humans) could move from our side to theirs.  Furthermore, the hole had the amazing property of multiplying everything we sent through by a factor F of a million!  That is, if you tossed a gold coin through the hole, a million identical coins would come out the hole on the other side.

How tempted would you be to toss useful items, like food, through the hole?   Remember, the cost to you, relative to the benefit to them, is 1/F, only one part in a million.  When considering the following variations, and their various combinations, consider not only F = a million, but also ponder what fraction F would make you indifferent to tossing or not:

  1. Your gift goes to a random person on the other side.
  2. Your gift goes to a government on the other side, which controls the hole.
  3. You can specify to whom your gift will go, using some simple descriptors like “poor”, “smart” etc.
  4. We could also do other things to help them, such as by studying a problem of theirs and sending them a report with suggested solutions.  But these other actions don’t get multiplied by F; a million copies of the report doesn’t help more than one copy.
  5. The hole isn’t very reliable, and only one time in a thousand do items you toss through the hole actually get to the people on the other side.  But when the hole does work 1000*F items come out the other side.
  6. You have very good theoretical reasons to think that most likely there are people much like us on the other side of the hole, but you can’t actually see through the hole (though they can see us).

The point of this parable is that interest rates would also greatly leverage any gift you gave the distant future folks.  For example, in 1785 a French author wrote a satire about Ben Franklin, the most famous American to Europeans.  While Franklin was famous for his Poor Richard’s Almanac, the satire mocked American optimism by having “Fortunate Richard” leave money in his will to be invested for 500 years before being given to charity.

Franklin responded by leaving £1000 each to Philadelphia and Boston in his will to be invested for 200 years.  He died in 1790, and by 1990 the funds had grown to 2.3, 5M$ [respectively], giving factors of 35, 76 inflation-adjusted gains, for annual returns of 1.8, 2.2%.  Why has Franklin’s example inspired no copy-cats?  Does no one care to help distant future folks through the multiplier hole of compound interest?

Economists generally assume, with some good reason, that financial investments correspond to real investments - i.e. an improvement in the total productive capacity of the economy. Thus, by not spending the money, you're actually increasing the rate at which the economy improves, making the future genuinely richer. Since consumer surplus exists - i.e. investors generally don't capture the full social value of their investments - some of the good your investment does benefits people each year, in a way that compounds. As a simplified example, if car companies reinvest more profits into research instead of paying out dividends to shareholders, they should be able to improve the fuel-efficiency, safety, and other desirable attributes of their cars faster than they otherwise would. In practice, this effect would be diffused across the whole economy in hard-to-pin-down ways, sometimes meaning that an entirely new technology is developed sooner than would have otherwise been the case, and that longer-term investments in the future become more profitable.

Hanson explains:

[T]hese are the likely consequences of allowing donations to the distant future:

1) The fraction of world income saved would increase, relative to consuming not-donated resources immediately. This effect starts small but increases with time, until savings become a large fraction of world income, after which diminishing returns kicks in.

2) While funds are in saving mode, world consumption would be smaller at first, relative to immediately consuming donor resources, but then after a while it would be higher, though it might eventually fall to zero difference. When such funds switch from saving to paying out, or when thieves steal from them, the consumption of thieves and specified beneficiaries would rise.

3) As investment became a large fraction of world income, interest rates would fall, and the market would take a longer term view of the future consequences of current actions.

4) Some would change their behavior in order to qualify for benefits, according to the conditions specified by the original donors and the agents they authorize to later interpret them.

These changes seem good overall, especially if, as I estimate, the future will have many folks in need.

Time-sensitive giving opportunities

However, it's probably not true that your giving opportunity stays the same each year. Let's say it gets worse every year at rate g. In particular, the utilitarian case for giving to the world's poorest is that the money does more good for them than for the world's richest. There is some empirical support for the claim that the same percentage increase in wealth or income - say, 5% - increases your happiness by about the same amount, regardless of how rich or poor you are. Intuitively, a dollar matters a lot more to someone living on the street or in a slum, than to Warren Buffett.

Since the world's poorest countries are getting richer each year, and faster than the world's richest countries are, the value of an extra dollar given to the poor declines every year. What's more - when GiveDirectly looked at what the poor do with cash transfers, they found that they often make investments in physical goods like metal roofs that are longer-lasting and cheaper to maintain than thatch roofs; overall investments made by developing-world recipients of cash transfers seem to have very high returns.1 So plausibly, g>r - a reasonable value for g might be 10%. In this case, each year you delay giving, your assets increase by 6% in value, but each dollar becomes 10% less effective, meaning that the total impact of giving that year decreases by a discount factor of d=(1+g)/(1+r)=1.1/1.06≈1.04 - your giving opportunities get 4% worse every year, which means that far from hoarding your wealth forever (and giving it away when everyone else is already very rich), you should give it as soon as possible.

This seems plausible with a real billion dollars, if the infrastructure already existed. About a billion people live on less than a dollar a day. In practice, many basic goods such as food may be available to them at lower prices than in the developed world, as is borne out by purchasing power parity calculations, so this figure may exaggerate how poor those people are. Even so, one dollar given to any one of them would very likely make a substantial short-term difference in their lives - much more than a dollar given to you or me, or even to someone fairly poor in the developed world.

More generally, your impact scales inversely with d. If r and g are constant, then if r<g (i.e. d>1), you maximize your impact by giving as much of your money as you can, as soon as possible, and if r>g (i.e. d<1), you maximize your impact by reinvesting it forever.
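
Here's a minimal sketch of that rule in code, using the r = 6% and g = 10% figures above (the function name is just for illustration):

```python
# Impact of giving in year n, relative to giving the same fund now, when
# investments return r per year and each dollar given does (1+g)**-n times
# as much good after n years.
def relative_impact(n, r=0.06, g=0.10):
    d = (1 + g) / (1 + r)  # the per-year discount factor, ~1.04 here
    return d ** -n

for n in [0, 1, 10, 50]:
    print(n, relative_impact(n))
# With g > r (so d > 1), impact shrinks every year: give as soon as possible.
# With g < r (so d < 1), impact grows every year: keep reinvesting.
```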

Loss of control

One argument against the case for reinvesting forever even if it appears that r>g is that your money becomes less useful after you die, because you are likely to lose control of it, or any enforceable conditions you stipulate for its use are likely to become increasingly misaligned with your intentions as time goes on. This amounts to a claim that g increases sharply at the time of your death. If the increase is large enough, then r<g and the optimal amount of money to give away suddenly changes from none of it to all of it.

In practice, many wealthy people seem to believe this - they give away comparatively little during their lifetimes, and do most of their giving near or at the end of their lives. This may also be in part a risk-mitigation strategy, if they don't know how much they'll want to spend, but it is some support for this framework. Hanson endorses this strategy:

In the past I’ve used Ben Franklin as an example of the possibility of using trusts to save for a very long time. But I think that distracted from my basic point, which can be made just by suggesting that you wait until the end of your life to donate. Waiting longer might in fact be better, but it has more tax and agency issues; you can’t as easily ensure your money is spent the way you want.

I admit that a good reason to donate now is if you believe that we are quickly running out of worthy recipients of charity, either because the world is getting richer and nicer, the charity world is getting more effective, or we happen to live in an unusual time of great need or danger. People who think that global warming and ecological collapse will soon make the world a hell can’t believe this, nor can those who fear great disruption in an em transition. But others may.

I also agree that tax considerations will change the rate of return you can expect, and that by giving over a period of time you may learn from your early gifts to better pick later gifts. But it should be enough to start this learning process when you are older; your life experience will help you learn faster then.

Diminishing returns

In practice, you can't give away a billion dollars with uniform effectiveness each year. In 2015, GiveWell estimated that its top charities were all together likely to need $96.4 million, that there was only a 20% chance they would be able to spend more than $172.2 million, and that at the upper bound there was only a 5% chance they would be able to spend more than $227.5 million.2 Thus, in each year, GiveWell estimates that there are diminishing marginal returns to giving to its top charities.

Suppose that in the present year, you are looking to divide up your current assets A to give away in a manner that does the most good, setting aside the sum S_t to give away in each year t, such that ΣS_t = A. So for instance, in this year, t=0 and you will give away the exact amount S_0. In the next year, t=1, you will give away S_1 plus any returns on that investment, and so on. In each year, the total impact of your giving is some monotonically increasing function of your giving, I_t = U_t(S_t). Let's say that total well-being scales logarithmically with the total resources spent on the developing world.

A simple way to model this is to assume a constant annual discount factor q such that I_t = q^t ln(S_t). The impact-maximizing allocation has to have the property that the marginal value of setting aside an additional dollar is equal for all periods, dI_t/dS_t = k; otherwise you could do better by transferring a dollar from a period with a lesser marginal impact per dollar allocated, to one with a greater marginal impact per dollar allocated. Therefore, at the impact-maximizing allocation, q^t/S_t = k, so that S_t = q^t/k, and S_t = q*S_{t-1}. In other words, each period receives an allocation equal to q times the prior period's allocation. This gives the intuitive consequence that later periods receive successively smaller allocations.
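
As a concrete sketch, assuming an infinite horizon: summing S_t = q^t/k over all t gives 1/(k(1-q)) = A, so S_t = (1-q)*A*q^t. The q = 0.96 figure below is hypothetical.

```python
def optimal_allocations(total, q, periods=5):
    """First few allocations S_t = (1-q) * total * q**t, which equalize the
    marginal impact q**t / S_t across periods for impact I_t = q**t * ln(S_t)."""
    return [(1 - q) * total * q ** t for t in range(periods)]

A, q = 1e9, 0.96  # a billion-dollar fund; q = 0.96 is a hypothetical discount factor
S = optimal_allocations(A, q)
print(S)                                      # each entry is q times the one before
print([q ** t / s for t, s in enumerate(S)])  # marginal impact k, equal in every period
```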

The above model is easy to work with mathematically but hard to interpret. For the sake of completeness, I'm going to lay out a more complicated but easier to interpret model in a footnote, but I think this is probably fine to skip - the point is that for any giving opportunity, there is an optimal giving allocation over time.3

Learning over time

How does this apply to more unusual or speculative programs? Let's imagine that you've only just heard about something potentially extremely important but hard to evaluate, such as risks due to progress in artificial intelligence. At first, g<0 - your available giving opportunities in AI safety are getting better over time, rather than worse, because you are learning about the field and developing your ability to evaluate programs. Therefore, d<1 and you should hold onto your money, or make "learning grants" only. However, let's imagine that in a few decades AI will approach human level, at which point it is probable that progress will very rapidly accelerate. At this point, the opportunities to spend money on AI safety will rapidly deteriorate, so g>>0 and d>>1 and you should try to spend most or all of your money before this point.

How do we account for uncertainty? As a simple example, suppose that there's a 5% chance of an emergency coming up in the next ten years in which a reserve fund with an initial $100 million allocation would have an outsized impact. The optimal decision here requires a simple scenario analysis. As a simplifying assumption, let's say that the emergency, if it comes up, will occur in exactly ten years. The reserve fund, if spent to avert the emergency, saves an expected million lives. If this doesn't happen, you plan to reallocate the money to GiveDirectly, making the optimal time allocation as above, for the equivalent of an expected thousand lives saved. Therefore, the expected value of this plan is 5%*1,000,000+95%*1,000=50,950 lives saved. If this is greater than the expected impact of spending this money on the best alternative, then the allocation should be made - otherwise, not.

One potential complication here is that a reserve might be held for multiple potential contingencies. If there are two emergencies that would each require $100 million with 5% probability in ten years, then there is a 90.25% chance that neither emergency will occur, a 9.5% chance that exactly one will occur, and only a 0.25% chance that both will occur. This means that the first $100 million provides most of the potential value of holding reserves for both contingencies. Suppose that one emergency expenditure would save a million lives, and the other only a hundred thousand. If both emergencies occur, you want to spend the money on the more important one, so there's a 5% chance of saving a million lives, a 5%-0.25%=4.75% chance of saving a hundred thousand lives, and a 90.25% chance of saving a thousand lives, for an expected impact of 5%*1,000,000+4.75%*100,000+90.25%*1,000=55,652 lives saved. The next $100 million reserved only gets spent on an emergency in the 0.25% of cases where both emergencies occur, so its expected impact is 0.25%*100,000+99.75%*1,000=1,248 lives saved. For that second tranche, you're comparatively more likely to be able to do better by giving the money away immediately.
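
Here's a sketch of the same scenario analysis in code, using the probabilities and payoffs above (the function is just one way to organize it); it also reproduces the single-emergency figure from the previous paragraph:

```python
from itertools import product

P_EMERGENCY = 0.05  # each emergency occurs independently with 5% probability
FALLBACK = 1_000    # lives saved if a $100M tranche goes to GiveDirectly instead

def expected_lives(payoffs, tranches):
    """Expected lives saved by holding `tranches` $100M reserves, spending them
    on the largest outstanding emergencies first; unused tranches go to the
    fallback option."""
    total = 0.0
    for occurs in product([True, False], repeat=len(payoffs)):
        prob = 1.0
        for o in occurs:
            prob *= P_EMERGENCY if o else 1 - P_EMERGENCY
        live = sorted((v for v, o in zip(payoffs, occurs) if o), reverse=True)
        spent = live[:tranches]
        total += prob * (sum(spent) + (tranches - len(spent)) * FALLBACK)
    return total

print(expected_lives([1_000_000], 1))             # 50,950: the single-emergency case
one = expected_lives([1_000_000, 100_000], 1)
two = expected_lives([1_000_000, 100_000], 2)
print(one)        # ~55,652 for the first $100M
print(two - one)  # ~1,248 for the second $100M
```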

Since in practice there's substantial uncertainty about the size and timing of emergency needs, real-life scenario analyses will need to be more extensive, but the principle is the same.

Value of information

Scenario analysis can also account for expected learning about whether a program is promising for grant-making, by explicitly accounting for the value of information. Suppose our billionaire philanthropist thinks that giving to the world's poorest is a sure thing, but that funding basic scientific research has the potential to contribute a much larger amount of value. If so, they'll be able to spend their whole billion dollars on it. They think there's a 50% chance that they can make good grants, and expect to find out after researching the area for five years. The expected value of setting aside money for this purpose is 50% times the expected value of the scientific research if they decide to fund it, plus 50% times the value of, in five years, allocating the money optimally among the remaining options.
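
A minimal sketch of that calculation, with hypothetical impact figures (the paragraph above doesn't specify the value of the research or of the remaining options):

```python
# Expected value of reserving the fund for five years of research into basic-science
# grantmaking, versus giving to the sure thing now. Impact figures are hypothetical,
# in units where giving to the world's poorest today is worth 1.
p_success = 0.5   # chance the philanthropist concludes they can make good grants
v_research = 5.0  # hypothetical impact of the research funding if they can
v_fallback = 0.8  # hypothetical impact of the best remaining option in five years
v_give_now = 1.0

ev_reserve = p_success * v_research + (1 - p_success) * v_fallback
print(ev_reserve, ev_reserve > v_give_now)  # reserve the money iff this is True
```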

Comparing multiple programs

The above series of examples started with a single program, where the only choice is when to give, but finished with implicit comparisons between multiple options. Here's an explicit summary of the method:

For each program and funding level, there is an optimal time-allocation of funds, taking into account both the rate at which one's investments can compound over time, and the rate at which giving opportunities get better or worse over time.

When deciding which of two programs to fund, one should compare the optimally time-allocated versions of those programs. If one such program involves giving now, and another involves holding onto money to give later, or until one has more information, the question is not when to give - it is which program is a better use of the fund allocation. When to give is strictly a downstream consequence of this decision.

If multiple programs interact - for instance, because one program's need for funds is stochastic, and thus affects the money potentially available for another program - then a scenario analysis can explicitly take this into account, but the principle is the same.

The bold claim: advantage over GiveWell top charities

The bold claim of better giving opportunities is a claim that the marginal giving opportunity available to Good Ventures via the Open Philanthropy Project is better than the GiveWell top charities. Or more precisely, that the expected value produced by a dollar held by Good Ventures is greater than the expected value of a dollar given to the GiveWell top charities.

  • Is it plausible that the bold claim is ever true?
  • Is it likely that the bold claim is true in the case of the Open Philanthropy Project?
  • If the bold claim is true, what should the Open Philanthropy Project do? What should they recommend to Good Ventures?

Is the bold claim ever plausible?

Focus areas like funding basic scientific research, improving global decisionmaking, and mitigating global catastrophic risks, especially existential risks, are plausibly much higher expected value than the GiveWell top charities. The long-run survival of humanity is likely to lead to access to a very large pool of resources, many orders of magnitude larger than the resources humans currently have access to, so the number of lives affected by small changes in the quality of likely long-run outcomes is similarly unintuitively large:

The effect on total value, then, seems greater for actions that accelerate technological development than for practically any other possible action. Advancing technology (or its enabling factors, such as economic productivity) even by such a tiny amount that it leads to colonization of the local supercluster just one second earlier than would otherwise have happened amounts to bringing about more than 10^29 human lives (or 10^14 human lives if we use the most conservative lower bound) that would not otherwise have existed. Few other philanthropic causes could hope to mach [sic] that level of utilitarian payoff.

[...] If what we are concerned with is (something like) maximizing the expected number of worthwhile lives that we will create, then in addition to the opportunity cost of delayed colonization, we have to take into account the risk of failure to colonize at all. We might fall victim to an existential risk, one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential. Because the lifespan of galaxies is measured in billions of years, whereas the time-scale of any delays that we could realistically affect would rather be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost. For example, a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.

Therefore, if our actions have even the slightest effect on the probability of eventual colonization, this will outweigh their effect on when colonization takes place. For standard utilitarians, priority number one, two, three and four should consequently be to reduce existential risk. The utilitarian imperative “Maximize expected aggregate utility!” can be simplified to the maxim “Minimize existential risk!”.

It's plausible to me that an organization with the Open Philanthropy Project's priorities and total available funds could have an opportunity cost substantially greater than the value of GiveWell's current top charities, because so much of its focus is on things likely to have a substantial impact on the far future. Existential risk due to emerging technologies is especially likely to have this character. For example, major focus areas are biosecurity and AI safety, both of which may involve mitigating existential risk. This may be true even with otherwise severely diminishing returns, if there are potential future opportunities or emergencies for which it would be useful to have a reserve fund.

Similarly, funding basic scientific research could potentially produce very large real persistent improvements in human well-being, by substantially increasing the total productive capacity of the world economy. Political advocacy also carries the potential for substantial efficiency gains, and even fairly small efficiency gains in percentage terms can amount to a large absolute number.

Is the bold claim likely to be true of the Open Philanthropy Project?

I think it's plausible enough that some person or organization could use ten billion dollars to have a world-historic impact, that I'm not ready to dismiss that sort of claim out of hand. There's also plenty of room in between GiveWell's cost per life saved numbers and the sorts of astronomical numbers suggested above. For example, if you care about factory farmed animals' well-being at some reasonable fraction of the weight you assign to a human's well-being, the former may be substantially easier to help.

However, there are some reasons to doubt its application to the Open Philanthropy Project as currently constituted. One is that most plausible cases for the Open Philanthropy Project's outsized impact end up implying that the case for some other recipient of funds is even stronger. I address this in the section on fine tuning.

The other reason to doubt the bold claim is that the Open Philanthropy Project does not make the bold claim.

The Open Philanthropy Project has not claimed to be substantially more effective on the margin than GiveWell top charities. In the first post in this series, I quoted Open Philanthropy Project co-founder Holden Karnofsky expressing considerable uncertainty about whether the Project's marginal cost-effectiveness was greater at all than that of the GiveWell top charities:

By declining to fund these opportunities, GV is instead holding onto the funds and will spend them later on the best opportunities possible. We’re extremely unsure how these future opportunities will compare to today’s; our best guess (based on the reasoning I laid out) is that today’s are better, but that’s an extremely tentative guess and we don’t feel strongly about it.

This is plainly inconsistent with the bold claim, and as far as I can tell, the Open Philanthropy Project has been quite careful never to make the bold claim. This means that if the bold claim is true, you have to believe that the Open Philanthropy Project is deeply mistaken about its own long-run impact.

What follows from the bold claim?

If the bold claim is true, it does not support partial funding of the GiveWell top charities. Based on the bold claim alone, Good Ventures should not fund the GiveWell top charities at all on current margins, because that would be reallocating money from a more effective program to a less effective program, making things worse on net. This would also imply that people giving to the GiveWell top charities are making a mistake, unless they have a reflectively endorsed preference for interventions backed by conventional scientific evidence, regardless of expected value concerns.

This is worth distinguishing from a similar-sounding argument, that because the expected value of things like averting existential risk is so high, you should give all your money to existing existential risk mitigation charities, since even if there's a tiny chance they work, it's worth it on expected value considerations. That's not how you buy outcomes. Even if it were the best option considered in isolation, once people started acting on such considerations it would create an incentive for any grifter who can talk well to claim that they're an existential risk charity.

What this does imply is that you should focus your attention on getting the focus area right. If you don't have time to think or don't trust your judgment, the best you can do is give to someone whom you trust more than yourself. If you do have time to think and trust your judgment, the thing to do is learn about the field, and come to understand how things work well enough to tell the nonsense from the real thing. The cost of the delay is more than outweighed by the benefit of giving your money preferentially to institutions that are actually doing something. This looks to be consistent with parts of the Open Philanthropy Project's actual strategy - performing shallow investigations into a variety of fields followed by deeper dives into the ones that seem most promising, and meanwhile giving to learn.

If GiveWell recommendations are only the best option for those too conservative to back more speculative Open Philanthropy Project style programs, then I'm confused as to why Good Ventures funded the GiveWell top charities at all.

I have heard effective altruists bring up the possibility that this decision was made, not because GiveWell genuinely believes that its top charities are better than what the Open Philanthropy Project will find, but because it would be embarrassing to so blatantly admit that they don't agree with their donors. I am not going to address the details of that argument. I do not see how any good comes from trying to steelman a hypothetical policy of lying to avoid embarrassment or control others' behavior when the stakes are this high; all it would do is further erode trust in public discourse, and I do not think that GiveWell or the Open Philanthropy Project would thank me for "steelmanning" them in that way.

The mild claim: advantage merely over other donors

The mild claim of better giving opportunities is a claim that Good Ventures has an opportunity cost advantage over other GiveWell donors.

According to this argument, GiveWell top charities are more valuable than Good Ventures's opportunity cost. They are also more valuable than other donors' opportunity cost. Therefore, someone ought to fund them. But Good Ventures's opportunity cost is higher than that of other GiveWell donors, so it's better if other GiveWell donors contribute as much as possible towards the funding gap.

If you assume that the Open Philanthropy Project and other GiveWell donors have similar skill at giving, then the Open Philanthropy Project obviously has a much lower opportunity cost than each donor, because it has to move so much more money. It's possible that GiveWell donors are terrible at giving without GiveWell to guide them, and need to be persuaded by means other than simple argument, but this is pessimistic and adversarial beyond what one might want to assume. It's possible that the Open Philanthropy Project can identify better giving opportunities than the GiveWell top charities, but this implies the bold claim unless some fairly fine-tuned assumptions are true.

A reasonable baseline assumption is that others' opportunity cost is similar to your own. However, at its current rate of giving, Good Ventures will take decades to give away its entire implied endowment, while smaller donors seem unlikely to be similarly capacity constrained. We might reasonably expect substantially diminishing returns from the first to the last billion given away via the Open Philanthropy Project, and at least some of this should be specific to the Open Philanthropy Project's particular methods and capabilities. This implies that Good Ventures's opportunity cost is much lower than its best current giving opportunities. I expect that GiveWell donors are more typically giving small amounts of money relative to GiveWell top charities' room for more funding, giving away their entire charity budget every year, so we should expect their opportunity cost to be similar to the value of their current giving opportunities.

It also seems reasonable to apply some substantial time discounting. Suppose I have to spend $10 billion, and I only have enough time to evaluate $100 million in giving opportunities per year. I can also give away more, based on gut calls, or by empowering others to distribute some of the money without oversight from me. They're not competing against my decisions today - they're competing against my decisions a hundred years from now. At a 2% annual discount rate, the value of their giving would have to be lower than 14% of the value of mine for holding onto the money to be optimal - i.e. to compete I have to find giving opportunities that were more than 7 times as valuable as theirs. At a 5% discount rate, I'd need to find giving opportunities that were more than 130 times as valuable. So much for ordinary giving options.
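
A quick check of those multiples, assuming the hundred-year horizon implied by $10 billion at $100 million per year:

```python
# How much better per dollar must my giving in `years` years be than a
# small donor's giving today, for holding onto the money to win?
def required_multiple(rate, years):
    return (1 + rate) ** years

print(required_multiple(0.02, 100))  # ~7.2x at a 2% discount rate (1/7.2 is ~14%)
print(required_multiple(0.05, 100))  # ~131x at a 5% discount rate
```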

Moreover, the Open Philanthropy Project has already attracted the attention of another couple of wealthy potential donors. It is far from certain that the last billion dollars spent by Good Ventures will be the end of the Open Philanthropy Project's direct influence on charitable giving. This makes the diminishing returns and time discounting arguments even stronger.

On the other hand, if the Open Philanthropy Project expects to learn a lot about giving within the next few years, this makes time discounting less formidable. For instance, suppose the Open Philanthropy Project expects much more information in five years about the viability of spending large amounts of money in a cost-effective way to fund basic scientific research, with a 50% probability of success. In that case, the expected value of reserving money for this purpose is 50% of the time-discounted value of the potential high-value research, and 50% of the value of spending the money on opportunities that the Open Philanthropy Project is more confident will exist. Even if the next-best alternative gets 10% worse each year, such a program only has to do twice as well as current options to be competitive, provided that there really is enough information to make a decision in five years, and that such a decision will really be made.
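
Here's one way to make that break-even point concrete, assuming the failure branch still funds the (by then less effective) fallback; valuing the failure branch at zero instead gives the rougher factor of two:

```python
# How good must the speculative program be, relative to giving now (value 1),
# for a five-year wait-and-learn plan to break even?
p_success = 0.5
fallback_in_5y = 0.9 ** 5  # the next-best alternative gets 10% worse each year

# Break even when: p_success * v + (1 - p_success) * fallback_in_5y = 1
v_breakeven = (1 - (1 - p_success) * fallback_in_5y) / p_success
print(v_breakeven)  # ~1.41; ~2.0 if the failure branch is valued at zero
```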

Another possibility is that the Open Philanthropy Project is willing to consider very speculative giving options with high potential payoff. However, any very high value option is likely to support the bold claim that Good Ventures has better options on current margins than the GiveWell top charities, rather than the mild claim that Good Ventures merely has better options on the margin than other GiveWell donors have. (This argument is made in more detail in the next section.)

You might think that Good Ventures has an information advantage because of the amount of expertise the Open Philanthropy Project has accumulated. But even if small donors are spending fairly small amounts of time on the problem individually, they have more brains on the problem per dollar spent than the Open Philanthropy Project does. The Open Philanthropy Project says that it is bottlenecked on management capacity right now, not money, so a small donor who sees an opportunity to do good might do better funding it directly than channeling the opportunity through the Open Philanthropy Project, even if the Open Philanthropy Project's judgment per researcher-hour is superior, simply because the small donor has the bandwidth to process the information and the Open Philanthropy Project doesn't. Centralized decisionmaking would have to have a huge advantage to outweigh this problem. GiveWell explicitly made a closely related argument back in 2013:

If you have access to other giving opportunities that you understand well, have a great deal of context on and have high confidence in — whether these consist of supporting an established organization or helping a newer one get off the ground — it may make more sense to take advantage of your unusual position and “fund what others won’t,” since GiveWell’s research is available to (and influences) large numbers of people.

One possible objection to this argument is that the sort of person who relies on a GiveWell recommendation is for the most part a retail donor, with very little independent ability and commitment to find the best giving opportunities. (Maybe they're just very busy.) If the GiveWell top charities' funding gap were closed, instead of making an independent call about which charity is best, they might give to the first alternative institution declaring itself an authority - or spend it on personal consumption. This is a fairly pessimistic view of GiveWell donors, but I can't dismiss it out of hand. If GiveWell went away, current top charity donors might reallocate their giving to unaccountable, not very reliably effective charities. They might give to conventional charities not known for cost-effectiveness, such as the United Way (the fruit of the Scientific Charity movement, now a charity so conventional it shows up as the default corporate charity in Dilbert) or the Red Cross (the folks who struggled to spend half a billion dollars in Haiti and ended up with severely limited results). Some GiveWell donors may believe this objection about themselves. It's hard to figure out which things work, and unlike almost anyone else GiveWell at least claims to be trying to do that, so they might as well give where GiveWell says rather than waste the money.

It is a very important life skill to be able to recognize when you don't know something, and people who honestly come to that conclusion have my respect. However, I think many GiveWell donors, if they believed that this was the sort of person for whom GiveWell recommendations were intended, would decide that they can do better.

Doing better can take the form of giving to charities that GiveWell or the Open Philanthropy Project have missed or aren't well set up as institutions to support, but that isn't the only way someone could make better use of the money. A potential GiveWell donor who discovers that the top charities have no funding gap this year because it was filled by Good Ventures, might indeed spend the money on consumption or leisure as economic statistics would measure it, but still do good with it. They might spend the money on saving time, in order to pursue a side project of positive social value. They might switch to a lower-paying but otherwise higher-impact job. They might take an unpaid week off work to volunteer for a land use reform organization or organize friends to write to state representatives about a pending criminal justice reform measure in their state. They might simply give the money to a high-potential friend who has fallen on hard times. These are projects that Good Ventures is very unlikely to fund, not only because the Open Philanthropy Project is capacity constrained, but because it is very difficult to determine on a case-by-case basis whether someone's going to follow through on their promise. Individual donors do not face the same problem making deals with themselves.

If you're willing to spend your own time and attention on this and be really curious, I encourage you to try things. Give in ways where you can observe the outcome somehow. Give in ways where, if you're wrong, you'll learn something new about the world. You'll make mistakes, and then you'll do better. Give money to charities you understand well if that makes sense for you, or time, or connect people who would be able to do something great together. Or do something else I haven't thought of.

Finally, if used to justify taking a hard negotiation stance on the GiveWell top charities, this argument implies an adversarial relationship between Good Ventures and other GiveWell donors; that because simply asking for the money will not work, Good Ventures should limit its funding commitment in order to persuade other donors to step up. If true, this complicates GiveWell's role as a neutral judge of the best giving opportunities, and by accepting support from and advising Good Ventures, GiveWell has put itself in an extremely difficult position; it seems very difficult for GiveWell to avoid a conflict of interest in this case.

Fine-tuning

The Open Philanthropy Project's openness to speculative giving opportunities with very high potential payoffs may allow it to do better than other GiveWell donors, even if their opportunity cost is non-negligible. But such low-probability high-reward future opportunities generally do not support the mild claim per se; they are much more likely to support either the bold claim that money should be reallocated from GiveWell top charities to the Open Philanthropy Project, or a humble claim that while some of the Good Ventures endowment should go to the Open Philanthropy Project, the majority should be reallocated to other existing organizations or individuals. The same is true of other ways in which the Open Philanthropy Project may outperform the options available to other GiveWell donors.

In this section, I first explore the mechanisms by which outperformance might happen. Then I lay out a simple mathematical model for thinking about scenarios in which the Open Philanthropy Project has a focus area that substantially outperforms mundane giving opportunities. In both cases, the circumstances under which the current allocation makes sense are fairly narrow.

Mechanisms for Open Philanthropy Project outperformance

It's imaginable that the Open Philanthropy Project could outperform the GiveWell top charities simply by virtue of being a large foundation, with greater ability to coordinate with itself strategically, seek opportunities to leverage its giving, and support programs with a high minimum viable level of funding, than other GiveWell donors have. If this alone is enough, that casts substantial doubt on the entire GiveWell project; it suggests that GiveWell donors would do better by giving to any decent-seeming large charity like the Red Cross or Oxfam, than by giving to the GiveWell top charities. If GiveWell's top charities are better than such organizations, then the Open Philanthropy Project can only outperform them by outperforming other organizations in its class.

To believe that the Open Philanthropy Project will perform unusually well, you should believe either that the Open Philanthropy Project is doing something new in a way that strongly suggests that it will succeed, or that the Open Philanthropy Project resembles past successful attempts to have this kind of impact.

I am not familiar with any organization constituted similarly to the Open Philanthropy Project - explicitly evaluating focus areas for potential impact per dollar before deciding how to make grants - with a track record of extremely high beneficial impact on the world. My sense is that the near-universal consensus in the charitable world has historically been that donors should follow their passion. (If anyone thinks they have examples, I'd very much like to see this case made.) This is evidence against the argument that the Open Philanthropy Project resembles past successes, but in favor of the argument that the Open Philanthropy Project is doing something new that will plausibly outperform past examples.

However, for most plausible specific reasons to expect the Open Philanthropy Project to outperform with increasing returns to scale, there's an existing option that has already invested in the relevant competency. Where this is true, the default option should be to fund that organization. If there are not increasing returns to scale, then it seems like one gets a better shot at the long tail of results by trying several attempts at something like the Open Philanthropy Project, with different staff, methodology, organization, and focus, and perhaps further funding the most successful attempt.

One place to start is a blog post on the Open Philanthropy Project's grantmaking process that describes the ways it's different from other foundations:

[W]e believe that there are a few aspects of our approach that are somewhat more unusual:

  • We put a great deal of time and thought into selecting our focus areas, as well as into selecting program officers to lead our work in those areas. (The former is more unusual than the latter.) We believe that careful choices at these stages enable us to make grants much more effectively than we otherwise could.
  • The primary decision-makers work closely and daily with grant investigators, and are checked in with more than once throughout the investigation process. This contrasts with some other foundations, where a Board or benefactor reviews proposed grants at periodic meetings, sometimes leading to long timelines for decision-making and difficulty predicting with confidence which grants will be approved.
  • We prioritize information-sharing to a degree that we haven’t seen at any other foundation. This includes publishing notes of our conversations with experts and grantees as much as possible, writing publicly about what grants we are making (including fairly detailed discussion of why we are making each grant and what risks or potential downsides we see), and making posts like this one. We’ve written previously about some of the challenges that we face in being as transparent as we’d like to be. We continue to think that the net benefits of taking this approach are high.
  • As part of this, instead of requesting proposals or applications from grantees, we do the reverse: we write (and publish) our own summaries of the grants we make. We feel this is a more natural division of labor. When grantees are responsible for preparing writeups, they often need to spend large amounts of time preparing documents and tailoring them to different funders with different interests. With our approach, grant decisions are made based primarily on conversations or shorter documents, while the funder (Open Philanthropy Project) is responsible for creating a written description of the grant laying out the reasons it is attractive to us.

Here's how I would summarize these considerations:

  • Focus area selection - A key driver of the variation in competently run foundations' impact is the promise of the causes they decide to focus on. The Open Philanthropy Project is unusually good at finding high-potential focus areas with its "important, tractable, neglected" framework.
  • Process efficiencies - The Open Philanthropy Project has a comparatively fast and lean grantmaking process, partly by requiring less bureaucratic oversight, and partly by pre-processing much grant review work through cause area investigations. This means that they can act earlier in the development of a field, and reach potential grantees who might not otherwise judge it worth their time to apply for grants.

And then there's transparency. The Open Philanthropy Project's writeup of the challenges of transparency also articulates some benefits of transparency:

First, the process of drafting and refining public writeups is often valuable for our own thinking and reflection. In the process of discussing and negotiating content with grantees, we often become corrected on key points and gain better understanding of the situation. Writing about our work takes a lot of time, but much of that time is best classified as “refining and checking our thinking” rather than simply “making our thinking public.”

Second, transparency continues to be important for our credibility. This isn’t because all of our readers check all of our claims (in fact, we doubt that any of our readers check the majority of our claims). Rather, it’s because people are able to spot-check our reasoning. Our blog generally tries to summarize the big picture of why our priorities and recommendations are what they are; it links to pages that go into more detail, and these pages in turn use footnotes to provide yet more detail. A reader can pick any claim that seems unlikely, or is in tension with the reader’s background views, or is otherwise striking, and click through until they understand the reasoning behind the claim. This process often takes place in conversation rather than merely online - for example, see our research discussions. For these discussions, we rely on the fact that we’ve previously reached agreement with grantees on acceptable public formulations of our views and reasoning. Some readers do a lot of “spot-checking,” some do a little, and some merely rely on the endorsements of others. But without extensive public documentation of why we believe what we believe, we think we would have much more trouble being credible to all such people.

Finally, we believe that there is currently very little substantive public discussion of philanthropy, and that a new donor’s quest to learn about good giving is unnecessarily difficult. Work on the history of philanthropy is sparse, and doing new work in this area is challenging. Intellectuals tend to focus their thoughts and discussions on questions about public policy rather than philanthropy, making it hard to find good sources of ideas and arguments; we believe this is at least partly because of the dearth of public information about philanthropy.

Here's how I'd summarize these benefits, which are helped by, but not all entirely dependent on, transparency:

  • Explicit analysis - The general practice of evaluating programs based on explicit models and cost-benefit analysis produces exceptional results. The organizational capacity to do this is hard to build and benefits from scale and learning over time.
  • Verifiable reputation - Because so much of the Open Philanthropy Project's decisionmaking process is public, outside donors can verify directly that the Project's reasoning and actions are doing good.
  • Philanthropy research as a public good - The Open Philanthropy Project is researching how to be an effective foundation. To the extent that it learns how to outperform other foundations at all, and persuades other foundations to do likewise, it is causally responsible for their outperformance as well as its own.

In addition to these process-level considerations, I want to consider the possibility that a few of the Open Philanthropy Project's unique focus areas will allow it to have an outsized impact:

  • Global catastrophic risk mitigation - Making grants to mitigate global catastrophic risks in particular requires a centralized organization in order to accumulate relationships with key players and substantial subject-matter expertise. This also requires deep pockets in order to be able to allocate funds for unexpected emergencies quickly.
  • Expertise in funding basic science - Funding basic scientific research to maximize humanitarian impact requires interdisciplinary expertise in understanding how scientific research works, and assessing the likely impact of scientific advances on human well-being.
  • Ability to fund political advocacy - Because much of the money influenced by the Open Philanthropy Project's research is still the private wealth of Dustin Moskovitz and Cari Tuna, this gives them the flexibility to lawfully spend money directly on political advocacy (more here) that charitable foundations are forbidden to support.

If there's anything missing here, I'd like to see the case made for it.

Focus area selection

GiveWell and the Open Philanthropy Project are effective altruist organizations, and one of the distinguishing characteristics of effective altruism is willingness to be flexible about which areas to focus on, and make such judgments based on arguments and evidence. But GiveWell and Open Philanthropy Project co-founder Holden Karnofsky has publicly written about how you shouldn't expect extreme results based on expected value calculations. It's not clear to me why we should expect a decision process that's explicitly not giving strong priority to extremely high expected values, to yield extremely high expected value actions.

Holden has more recently written about how he's changed his mind about how important strong direct arguments for a focus area are:

I still think these things (feedback loops, selective processes) are very powerful and desirable; that we should be more careful about interventions that don’t involve them; that there is a strong case for preferring charities (such as GiveWell’s top charities) that are relatively stronger in terms of these properties; and that much of the effective altruism community, including the people I’ve been most impressed by, continues to underweight these considerations. However, I have moderated significantly in my view. I now see a reasonable degree of hope for having strong positive impact while lacking these things, particularly when using logical, empirical, and scientific reasoning.

But this is still a very attenuated commitment to seeking exceptionally high expected-value results, and the Open Philanthropy Project still looks like it's largely committed to making comparatively respectable, safe bets.

In a more detailed writeup, Holden mentions specific individuals and organizations which found these focus areas years before the Open Philanthropy Project did. If you think that picking high expected value focus areas based on explicit analysis of expected value is the key to outsized results, this favors moving money to or through those individuals and institutions.

GiveWell Labs (which eventually became the Open Philanthropy Project) started by specializing in investigations into potential focus areas, with little or no grantmaking activity; instead, it simply published its shallow investigations. This model seems plausibly worth pursuing again, as a separate research organization, or part of a newly independent GiveWell. If third parties didn't and don't act on these reports, a natural response would be to invest in making them clearer and more persuasive. Another possible response would be to investigate what kind of information the target audience is most interested in; this sort of attentiveness to audience is what originally led GiveWell to move from trying to evaluate charities in many categories, to focusing on developing-world health interventions, which seems to have been a wise move.

It's plausible that the Open Philanthropy Project could be an unusual combination of an organization moved at all by this sort of expected-value consideration, and an organization competent to administer the day-to-day business of making and managing grants. One reason to think that such integration might be helpful is that the Open Philanthropy Project is primarily staffed with generalists, not attached to one particular focus area. This suggests that while in the short run it will underperform more specialized foundations with staff more familiar with their focus areas, in the long run it could learn methodological competence in more established focus areas, and transfer this knowledge to more neglected ones. For example, the political judgment and connections the Open Philanthropy Project acquires in working on criminal justice reform, where it is collaborating with established organizations, might help it be more effective in an area like land use reform, which has fewer established advocates. This model, however, seems inconsistent with a policy of delegating such focus areas to domain-specific, experienced program officers.

Process efficiencies

Low bureaucratic overhead

The Open Philanthropy Project seems to have less bureaucracy than a typical foundation. However, until we understand why other foundations favor more explicit and extensive controls, it's not obvious why this is a point in the Open Philanthropy Project's favor. One plausible argument is that controls are about making defensible judgments rather than correct ones, and the Open Philanthropy Project is following a hits-based giving model, focused on maximizing its successes rather than minimizing its failures. However, it seems similarly likely that the low amount of bureaucracy is due to the Open Philanthropy Project being a comparatively new organization. In addition, it reports being bottlenecked in terms of management capacity, which suggests that whatever mechanism supports its low bureaucratic overhead is making it difficult for the Open Philanthropy Project to scale up.

The minimum bureaucratic overhead is of course achieved by delegating fund allocation decisions to individuals with excellent track records of finding good programs to run or support, but no formal accountability. (Note that you don't need to be sure who the very best people are for this to be a probable improvement; you only need enough evidence that, if you were an impartial observer, you'd favor their track record over your own.) It is not obvious to me why we should expect this to underperform the Open Philanthropy Project on net, and the comparison seems worth trying.

Grant pre-processing via cause investigations

The other major process efficiency mentioned is the lack of an extensive, laborious grant application process, because the Open Philanthropy Project investigates programs before reaching out to grantees. This does seem like a consideration that would scale well. However, it is not obvious to me why this needs to occur in the same organization that administers grants. To the extent that this is the argument for outsized impact, the obvious way to do this would be to investigate cause areas, and then give money to individuals or organizations already working in or funding the field, to redistribute as they see fit. I give examples in the sections on mitigating global catastrophic risks and funding basic science.

Explicit analysis

It's plausible that cost-benefit analysis of focus areas and particular programs is a generalizable skill. I have had conversations with very smart and analytical friends about programs they were trying to evaluate, where I was able to do a quick back-of-the-envelope calculation using skills I'd learned during my time at GiveWell and the Open Philanthropy Project, and they couldn't.

While this model favors GiveWell and the Open Philanthropy Project over most organizations, it also favors the Gates Foundation, with its greater accumulated expertise, over GiveWell. Billionaire Warren Buffett gave his money to Gates instead of starting his own foundation, explicitly on the grounds that he wanted to take advantage of the Gates Foundation's expertise. If there are increasing returns to scale on this basis, the obvious right move seems to be either to further fund the Gates Foundation, or to articulate why and how the Open Philanthropy Project will do better.

Verifiable reputation

While enabling outsiders to verify an organization's reasoning or track record via transparency is a laudable goal, it's not an independent reason to think the organization will be exceptionally high-impact - we should expect to have some specific idea, by means of that open communication, of exactly how the organization will outperform others. Therefore, I don't consider this a separate reason to think the Open Philanthropy Project might have an extremely high impact.

Philanthropy research as a public good

This is effectively the same as Argument 3, that the Open Philanthropy Project's main benefit is its influence on the broader giving culture, which will be covered in a later post.

Global catastrophic risk mitigation

It makes sense for a single organization to take a leading role in mitigating global catastrophic risks, but there are already major organizations tasked with such a role, which have accumulated substantial expertise in security issues and have a great deal of official support and connections. They are even allowed to use lethal force to deal with potential risks. I'm talking about the national security apparatus of major world governments. The obvious comparison class here includes governmental organizations like IARPA.

IARPA has funded excellent research such as Philip Tetlock's Good Judgment Project. IARPA's plausibility as an organization well-placed to have an unusually high positive impact was clear enough to effective altruist Jason Matheny that he bet his career on it, deciding that he could do more good working for the official legitimate authorities than working outside the system. He's in charge of IARPA now.4

It's plausible that the Open Philanthropy Project could fill an important gap by supporting important risk mitigation programs in other countries, which IARPA might have institutional reasons not to do, but in practice much of its giving in these areas has been in the US anyway.

The CDC is another organization with a similar level of legitimacy, ability to call on the resources of other legitimate authorities, and accumulated expertise, specific to biosecurity. Nor is this list exhaustive.

Expertise in funding basic science

It takes expertise to fund scientific research that has a reasonable chance at success. It takes a different sort of generalist skill to assess the impact of potential scientific advances. The Open Philanthropy Project has invested in both. However, the NIH has also invested in both, and has been funding research for a lot longer.

On the other hand, you might imagine that the NIH is excessively bureaucratic and short-termist, and the Open Philanthropy Project has more flexibility to use individual judgment. But does individual judgment scale well? Perhaps the first few tens of millions of dollars allocated by the Open Philanthropy Project's scientific advisors will vastly outperform the same money as allocated by the NIH. But billions of dollars are likely to be bottlenecked on the Open Philanthropy Project's ability to evaluate programs, and this is likely to penalize programs that are hard for them to evaluate regardless of merit.

In addition, there is no particular reason to think that the process used to select the Open Philanthropy Project's scientific advisors is better at finding people who can pick promising scientific research projects, than the process used to select winners of the Nobel Prizes in the sciences. The latter have exceptionally strong track records in identifying promising projects to work on. Even though you should expect an extremely strong force towards regression to the mean, and doing Nobel-quality work often takes a substantial portion of one's life, winning a Nobel prize is a pretty good predictor of personally doing future Nobel-quality work; an individual Nobel prize winner in the sciences has about a 0.7% chance of going on to win another Nobel prize.5 If you insist on working through institutions, why is keeping the money better than making unrestricted grants to the academic institutions that produce the most Nobel prize winners?

Ability to fund political advocacy

It's plausible that a dispassionate cost-benefit analysis would show big productivity gains to be had on neglected issues, if you're willing to advocate for important changes when there's strong reason to believe they are likely to work out very well. A good example of this is economist Paul Romer's idea of charter cities.

I’m not sure why we should expect the Open Philanthropy Project to outperform established research organizations like the RAND Corporation on research in this area, but it’s plausible that explicit prioritization of focus areas on the basis of impact could give the Open Philanthropy Project a substantial advantage, and that the integration of focus area research and implementation could make such an organization worth developing.

Extraordinary impact scenarios

Some of the Open Philanthropy Project's current or potential focus areas directly suggest an outsized impact. From the size of the potential benefits of such a program area, we can back out the total cost implied by a cost per life saved competitive with current GiveWell top charities - rather than either much better (which would imply that the bold claim is true) or much worse (in which case the program is also likely much worse than GiveWell donors' next-best options). In many cases, these costs seem improbably high. The true cost can only be lower under conditions of sharply diminishing returns, or increasing returns. Therefore, one of the following four things is likely true:

  • The program has diminishing returns to additional funding.
  • The program has increasing returns to additional funding.
  • The program isn't feasible.
  • The supposed benefits are not valuable.

If a program is infeasible or its benefits are not valuable (e.g. because one does not care about the far future, or animals), then it is not a way for the Open Philanthropy Project to do better than other donors. If a program has increasing returns, then the practice of funding it up to the level where its marginal value per dollar is similar to that of other programs is exactly backwards - it should likely either be funded at the maximum possible level, or not at all. Diminishing returns to funding are very plausible, and they fade off into "negligible marginal impact" well before they imply the sort of absurd cost-benefit proposition implied by constant returns. But for very high impact giving, a diminishing returns model implies that the first few dollars allocated are extremely valuable. This suggests that if there are other potential fund-allocators with a different set of abilities to evaluate expenditures within such a program area, or different information about it, it makes sense to delegate some large amount of fund allocation to them, even if they are substantially less skillful than the Open Philanthropy Project.


I'll show that costs seem implausibly high, then show the problems with the diminishing returns argument. I consider the other points too obvious to explain.

Implausibly high costs

I'll consider three types of potential benefit that are plausibly neglected by more conservative donors, and that might constitute a substantially better value proposition than what such donors might otherwise be willing to spend money on:

  • Advance scientific or economic progress
  • Avert existential risk
  • Help animals

Assuming constant costs, what does such a program have to cost to be roughly competitive with the GiveWell top charities on current margins? GiveWell roughly estimates that GiveDirectly does the equivalent of saving a life-year for about $1,000, or a life for $30,000, and that AMF does the equivalent of saving a life-year for about $100, or a life for $3,400. Since these calculations are very rough, let's say for the sake of simplicity that it costs about $300 to save a life-year and $10,000 to save a whole life.

Advance progress

Nick Bostrom estimates that advancing total human economic and scientific progress so that we colonize the stars one second earlier adds 10^29 human lives (with 10^14 as an extremely conservative lower bound), conditioned on humanity ever making it that far. Let's say we have a 10% chance of reaching the stars - that gives us a lower bound of 10^13 lives per second of additional progress.

It seems reasonable that some large amount of money could buy a 1% chance of increasing total production by 0.1% over trend, either by funding important scientific advances (so they arrive substantially sooner than they otherwise would) or by advocacy for efficiency-enhancing regulatory changes such as labor mobility, upzoning in high-wage urban areas, or macroeconomic stabilization. Global GDP grows by about 3% annually, so a 0.1% increase is about a thirtieth of a year of growth, or a million seconds. A 1% chance of this is then in expectation about ten thousand seconds. Multiplied by a fairly conservative 10% chance that economic progress is going anywhere good in the long run, the spending amounts to a thousand of Bostrom's seconds, or 10^16 human lives.

At $10,000 per life saved, the cost would have to be $10^20. By comparison, annual gross world product is about $100 trillion, or $10^14. It seems absurd that the economic reforms or scientific advances necessary to raise world production by a tenth of a percentage point, once, would cost six orders of magnitude more than the current GDP of the world, so the true cost is probably many orders of magnitude lower, if such a thing is feasible at all.
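
To make this arithmetic easy to check, here's a minimal back-of-the-envelope sketch in Python. Every input is one of the rough estimates above, not a settled figure:

    # Rough inputs from the text; all are illustrative estimates.
    lives_per_second = 1e13   # Bostrom's 1e14 lower bound x 10% chance of reaching the stars
    boost_in_seconds = 1e6    # a 0.1% output gain ~ 1/30 year of 3% growth, in seconds
    p_success = 0.01          # 1% chance the funded work actually produces the boost
    p_progress_good = 0.10    # 10% chance long-run progress goes anywhere good

    expected_lives = lives_per_second * boost_in_seconds * p_success * p_progress_good
    print(f"{expected_lives:.0e} expected lives")                 # ~1e16

    cost_per_life = 1e4       # the $10,000/life GiveWell-competitive benchmark above
    print(f"${expected_lives * cost_per_life:.0e} implied cost")  # ~$1e20, vs ~$1e14 gross world product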

Reduce existential risk

Consider the case of averting existential risk. The big category people seem worried about is anthropogenic risk due to recent and emerging technologies, such as nuclear weapons and AI. There wasn't a plausible way for this to lead to human extinction before about 1950, about 65 years ago, so by Laplace's law of succession, this implies something like a 1.5% annual probability that humanity is wiped out. Estimating it a second way, surveys of AI experts have a median estimate of about 30 years to human-level AI. This suggests about a 2% annual probability. If there's a 50% chance this goes well for us and a 50% chance it wipes us out, that also gives a 1% annual risk of a world-ending catastrophe from AI alone, which seems to be the main technological existential risk factor for humanity as far as I can tell.

The world contains about 7 billion people, so this means that preventing an unfriendly intelligence explosion for just one year is worth 1% * 7 billion = 70 million life-years. How much does that have to cost to be in the same ballpark as current GiveWell top charities? At $300 per life-year, making sure we don't kill ourselves off would have to cost about $20 billion per year, and a 10% reduction in risk would have to cost $2 billion per year. (For comparison, the total cost of the Manhattan Project was about $26 billion in 2016 dollars.) But this is just to delay by a year the date at which humanity destroys itself with its own tools, not to actually ensure a good outcome.
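
Spelled out as a sketch, with every input a rough guess from the text rather than a measured quantity:

    # Two rough routes to an annual probability of extinction-level catastrophe.
    laplace = 1 / (65 + 2)                 # Laplace's rule over ~65 years without extinction: ~1.5%/year
    from_ai_surveys = 1 - 0.5 ** (1 / 30)  # 50% chance of human-level AI within 30 years: ~2.3%/year

    annual_risk = 0.02 * 0.5               # ~2%/year arrival x 50% chance it wipes us out = 1%/year
    life_years_at_stake = annual_risk * 7e9        # ~70 million life-years per year of delay
    cost_at_benchmark = life_years_at_stake * 300  # $300/life-year benchmark from above
    print(f"${cost_at_benchmark:.1e} per year")    # ~$2.1e10, i.e. roughly $20 billion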

If buying us a year improves our chances of passing some significant existential risk filter, the numbers get much, much bigger. Bostrom estimates a lower bound of 10^16 human lives if we manage to keep going until the sun blows up but assume no space colonization at all; 10^34 human life-years (let's round that down to the equivalent of 10^33 modern human lives) if space colonization but only biological humans; and 10^54 life-years (or 10^53 lives) if uploads.

That means that a one-tenth of one percent improvement in humanity's chances of long-run survival saves in expectation something in the range of 10^13, 10^30, or 10^50 human lives, depending on the scenario. Taking the lowest of those, at $10K per life saved, this would imply an expected cost of $10^17. Even the lowest estimate implies a cost three orders of magnitude above annual gross world product. This again seems silly. If the world had the ability to work on nothing but increasing its chance of survival for a hundred years, do you really think we wouldn't buy a tenth of a percentage point chance of not destroying ourselves?

And taking the more realistic middle or upper estimates (since long-run survival seems unlikely if we don't colonize the stars), the implied cost is much, much higher.

Help animals

About nine billion broiler chickens are raised and slaughtered in fairly bad conditions annually, in the US alone. It's quite plausible that they suffer enough that it would be much better for them never to have lived. They typically live about 42 days. If we assume that a broiler chicken's entire horrific 42-day life is as bad as a single day lived by a human is good, then this wipes out about twenty-five million life-years' worth of value each year. Animal welfare advocates are working on ending this through a combination of advocating for reforms to chickens' living and slaughter conditions (improving the extent to which their lives are worth living), reduced meat consumption, and promoting the development of lab-grown meat to replace factory farmed meat. Animal advocacy seems much easier than averting existential risk or basic scientific research, and I think it's reasonable for a funder to think that they can buy a 10% chance at accelerating the process by a year, amounting to an expected improvement of 2.5 million life-years. At $300 per life-year equivalent this implies a cost of about $750 million.
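
The same arithmetic as a sketch, with the text's moral-weight assumption made explicit:

    # Illustrative assumptions from the text, not measured welfare figures.
    chickens_per_year = 9e9     # US broilers raised and slaughtered annually
    human_days_per_chicken = 1  # assume one 42-day chicken life offsets one good human day

    life_years_lost = chickens_per_year * human_days_per_chicken / 365
    print(f"{life_years_lost:.1e}")   # ~2.5e7: about twenty-five million life-years per year

    p_accelerate = 0.10               # 10% chance of ending the practice a year sooner
    expected_gain = p_accelerate * life_years_lost   # ~2.5 million life-years
    print(f"${expected_gain * 300:,.0f}")            # ~$750 million at $300/life-year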

This seems on the high end of plausibility, given that the entire amount of money currently spent on animal welfare is something like $50 million annually. This is way less improbable than the other program areas, though. The cosmic commons considerations don't really apply, as it seems unlikely that we'll take massive animal suffering with us to the stars. Wild animal suffering potentially adds a few orders of magnitude to the estimate, but seems much less feasible to make progress on in the short run.

Diminishing returns

Suppose you think that we can make some progress in these program areas at a reasonable cost, but that we can't spend an indefinite amount of money, because of diminishing returns. What do we have to assume for them to be roughly competitive with the GiveWell top charities at current margins?

I'll try applying a simple logarithmic model, and a simple polynomial model, to the case of accelerating progress. In both cases, diminishing returns end up implying that the first few dollars are extraordinarily valuable.

Logarithmic returns

Suppose impact is a logarithmic function of money spent, I(x) = a*ln[x + $1]. Then dI/dx = a/(x + $1). If this is competitive with the GiveWell top charities at current margins (i.e. after spending the ten billionth dollar), then I'($10B) = 1/$10K, so a/($10B + $1) = 1/$10K, implying that a = ($10B + $1)/$10K ≈ 1M.

This implies a total impact of I($10B) = 1M*ln[10B] ≈ 23M, or the equivalent of 23 million lives saved - a fairly small dent in total progress, but not a ridiculous result. What are the implied marginal costs per life saved at lower total expenditures? At $1 billion, the cost per life saved is $1,000. At $10 million, it is $10.
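
To see what this calibration implies concretely, here's a short sketch of the model; the $10K benchmark and $10B budget are the figures from the text, and the functional form is of course only an illustrative guess:

    import math

    # Logarithmic model: impact (in lives) I(x) = a * ln(x + 1), x in dollars.
    # Calibrate so the marginal dollar at $10B buys 1/$10K of a life.
    budget = 1e10
    a = (budget + 1) / 1e4            # from I'(budget) = a/(budget + 1) = 1/1e4, so a ~ 1e6

    def marginal_cost_per_life(x):
        """Dollars per marginal life saved after x dollars have been spent."""
        return (x + 1) / a            # reciprocal of I'(x) = a/(x + 1)

    print(f"{a * math.log(budget + 1):.1e}")   # total impact: ~2.3e7 lives
    print(marginal_cost_per_life(1e9))         # ~$1,000 per life at $1 billion spent
    print(marginal_cost_per_life(1e7))         # ~$10 per life at $10 million spent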

Let's say that the next-best alternative is only 10% as good as the Open Philanthropy Project, in expectation, at picking things in the focus area. The humanitarian case for giving a few selected people or institutions $10 million each to distribute is huge, if you can find anyone qualified at all - that amounts to a marginal cost per life saved of $100. "Rolling the dice again" in this way, by delegating some giving to someone with a different set of capacities and able to see different kinds of low-hanging fruit, seems like a really good bet to take if you believe you live in a world with such massively diminishing returns.

Fractional polynomial returns

In the general polynomial case, I(X) = a*X^(1/b), so that I'(X) = (a/b)*X^(1/b - 1). If b=2 then we have a square-root function, a simple first guess. Then I(X) = a*√X, and I'(X) = a/(2√X). What does the value of a have to be for this to be competitive with the GiveWell top charities on current margins? I'($10B) = a/(2√($10B)) = a/$200K = 1/$10K, so that a = 20. Then the total impact is I($10B) = 20*√(10B) = 2M. This implies an average cost per life saved of $5,000.

What's the marginal impact at lower total expenditures? At $100 million, the implied cost per life saved is $1,000, and at $1 million, the implied marginal cost per life saved is $100. This still suggests that the best course of action is to delegate some substantial portion of the giving to others, if anyone qualified can be found.
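
The same check for the square-root variant, under the same calibration:

    import math

    # Square-root model: I(X) = a * sqrt(X), so I'(X) = a / (2 * sqrt(X)).
    # Calibrating at the $10B margin: a / (2 * sqrt(1e10)) = 1/1e4 gives a = 20.
    a = 2 * math.sqrt(1e10) / 1e4     # = 20

    def marginal_cost_per_life(x):
        return 2 * math.sqrt(x) / a   # reciprocal of I'(x)

    print(f"{a * math.sqrt(1e10):.0e}")   # total impact: 2e6 lives, ~$5,000 average per life
    print(marginal_cost_per_life(1e8))    # $1,000 per life at $100 million spent
    print(marginal_cost_per_life(1e6))    # $100 per life at $1 million spent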

This argument applies even more strongly to the case of averting existential risk, and less strongly to the case of helping animals.

The ordinary case

If we rule out spectacularly successful moonshots in unusual focus areas as support for the mild claim, what support is left for it?

The Open Philanthropy Project might continue as it began, as a search process, finding a range of new giving opportunities each year with the same marginal impact per present dollar. This could justify partial funding for the GiveWell top charities. What does this imply about the world?

If investments appreciate faster than giving opportunities deteriorate (r>g), then giving should be delayed for as long as possible. If the opposite is true (r<g), giving should be frontloaded to maximize impact. Only if, by some coincidence, r=g does it make sense to spread out giving over time. If r=g, then initial funds should be allocated equally among all time periods. For foundations with the intent to spend down their endowment within a specified length of time, this means that spending increases exponentially each year with the rate of return on investments, but buys the same quantity of improvement in the world each year, since improvements get correspondingly more expensive.
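
As a sanity check on the r = g case, here is a tiny sketch; the endowment size, horizon, and rate are illustrative only:

    # With r = g, equal *initial* allocations imply nominal spending that grows at r
    # while buying a constant amount of (correspondingly more expensive) improvement.
    endowment, years, r = 1e9, 10, 0.05
    initial_per_year = endowment / years          # equal initial allocation per period
    for t in range(years):
        print(t, f"${initial_per_year * (1 + r) ** t:,.0f}")   # disbursement in year t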

Conclusions from Argument 1

If the Open Philanthropy Project has a giving advantage over other GiveWell donors, this either requires very pessimistic assumptions about other GiveWell donors, or implies that the Open Philanthropy Project probably has a very large giving advantage over the GiveWell top charities.

If the Open Philanthropy Project has a very large giving advantage, there are almost certainly either increasing or decreasing returns to scale. If returns to scale are increasing, then there is likely a more established organization with a stronger track record that should be funded. If there are diminishing returns to scale, then the impact-maximizing number of Open Philanthropy Projects for Good Ventures to distribute its funds among is probably substantially more than one.

References
1 GiveWell's intervention report on cash transfers to the developing world reports:

What return on investment do cash-transfer recipients earn?

[...]

A variant of GiveDirectly

Haushofer and Shapiro 2013, an RCT of a variant of GiveDirectly's program, found that cash transfers increased the likelihood of owning an iron roof by 23 percentage points. The study estimated that iron roofs have annual investment returns of 19% [...].

GiveDirectly also conducted a survey on roof costs. This survey found costs that imply an annual investment return of 48%. [...]

In addition, Haushofer and Shapiro 2013 Policy Brief estimates an annual investment return of 7% or 14% depending on whether thatched roofs have to be replaced once every 2 years or once a year. [...]

Two studies of unconditional wealth transfers in Uganda both found high annual returns of 30%-39% on the original grant. [...]

Conditional cash transfers

The only randomized controlled trial of ongoing cash transfers that discusses the returns that recipients earn on their investments is based on the Oportunidades conditional cash transfer program in Mexico, described above. Gertler, Martinez, and Rubio-Codina (2012) estimates that, four years after the control group began to receive treatment and five and a half years after the treatment group began to receive transfers, the treatment group continued to have consumption 5.6% higher than the control group. This implies, by our calculation, a 1.7% monthly return, and a 21% annual return, on the transfers, which further implies a 3.6% monthly, and 42.6% annual, rate of return on investment.

Using a different method, Gertler, Martinez, and Rubio-Codina (2012) estimates that for every hundred dollars transferred more than two years ago, recipients continue to earn $1.60 per month in additional income, for an annual return of 19.2%, even though they estimate that only 26% of transfers are invested. Since only 26% of transfers are invested, this implies an even higher rate of return on investments, of roughly 75% per year, or 6% per month.

Business grants

[...] In a series of experiments in Sri Lanka, Mexico, and Ghana, researchers giving grants on the order of $100 to micro-enterprises without any paid employees have found high returns on investment, in the range of 6%-46% per month:

A series of papers by de Mel, McKenzie, and Woodruff based on a randomized controlled trial of one-time grants to micro-enterprises in Sri Lanka have found large positive effects on profits for male owners. [...] This translates to a 6-12% monthly real return amongst male-owned businesses (with no measured benefits amongst businesses owned by women). [...]

In a similar randomized experiment conducted in Ghana, with a larger sample size and shorter follow-up period, Fafchamps et al. (2011) found comparable large effects on microenterprise profits for in-kind transfers (~20% return per month), but effects on business profits for cash were statistically indistinguishable from zero. [...]

A similar randomized controlled trial in Mexico, which gave cash or in-kind grants of about $140 to retail micro-enterprises (all owned by men and without paid employees), found returns to capital of 28 to 46% per month over a 3-12 month follow-up period, with indistinguishable differences between cash and in-kind grants.

2 GiveWell's 2015 top charities post:

“Capacity-relevant” funds can include (a) funds that are explicitly targeted at growth (e.g., funds to hire fundraising staff); (b) funds that enable a charity to expand into areas it hasn’t worked in before, which can lead to important learning about whether and how the charity can operate in the new location(s); and (c) funds that would be needed in order to avoid challenging contractions in a charity’s activities which could jeopardize the charity’s long-term growth and funding prospects.

Execution funding allows charities to implement more of their core program but doesn’t appear to have substantial benefits beyond the direct good accomplished by this program. We’ve separated this funding into three levels:

  • Level 1: the amount we expect a charity to need in the coming year. If a charity has less funding than this level, we think it is more likely than not that it will be bottlenecked (or unable to carry out its core program to the fullest extent) by funding in the coming year.
  • Level 2: if a charity has this amount, we think there is an ~80% chance that it will not be bottlenecked by funding.
  • Level 3: if a charity has this amount, we think there is a ~95% chance that it will not be bottlenecked by funding.

[...]

| Priority | Charity | Amount ($M) | Type | Recommendation to Good Ventures ($M) | Comments |
|---|---|---|---|---|---|
| 1 | DtWI | 7.6 | Capacity-relevant | 7.6 | DtWI and AMF are strongest overall |
| 1 | AMF | 6.5 | Capacity-relevant | 6.5 | See above |
| 1 | GD | 1.0 | Incentive | 1.0 | Ensuring each top charity receives at least $1 million |
| 1 | SCI | 1.0 | Incentive | 1.0 | Ensuring each top charity receives at least $1 million |
| 2 | GD | 8.8 | Capacity-relevant | 8.8 | Not as cost-effective as bednets or deworming, so lower priority, but above non-capacity-relevant gaps |
| 2 | DtWI | 3.2 | Execution Level 2 / possibly capacity-relevant | 3.2 | Level 1 gap already filled via “capacity-relevant” gap. See footnote for more** |
| 2 | AMF | 43.8 | Execution Level 1 | 16.3 | Exhausts remaining recommendations to Good Ventures |
| 3 | SCI | 4.9 | Execution Level 1 | 0 | Not as strong as DtWI and AMF in isolation, so ranked below them for same type of gap |
| 3 | AMF | 24.0 | Execution Level 2 | 0 | |
| 4 | DtWI | 8.2 | Execution Level 3 | 0 | |
| 4 | AMF | 24.0 | Execution Level 3 | 0 | |
| 4 | SCI | 11.6 | Execution Level 2 | 0 | |
| 5 | GD | 24.8 | Execution Level 1 | 0 | |
| 5 | SCI | 8.8 | Execution Level 3 | 0 | |
| 6 | GD | 20.9 | Execution Level 2 | 0 | |
| 7 | GD | 28.6 | Execution Level 3 | 0 | |

3 Assuming constant r and g: in the present year, you are looking to divide up your current assets A to give away in a manner that does the most good, setting aside the sum S_t to give away in each year t, such that Σ S_t = A. So for instance, in this year, t=0 and you will give away S_0. In the next year, t=1 and you will give away (1+r)S_1, and so on. In each year, the total impact of your giving is some monotonically increasing function of your giving, I_t = U_t(S_t). Let's say that total well-being scales logarithmically with the total resources spent on the developing world. But you're not the only source of resources available to the developing world - their available resources are expanding at rate g, so that I_t = ln[(1+r)^t S_t + R(1+g)^t].

The impact-maximizing allocation has to have the property that the marginal value of setting aside an additional dollar is equal for all periods, dI_t/dS_t = k; otherwise you could do better by transferring a dollar from a period with a lesser marginal impact per dollar allocated, to one with a greater marginal impact per dollar allocated. Making the substitutions b = (1+r)^t and c = (1+g)^t, we have I_t = ln[b S_t + cR]. Therefore k = b/(b S_t + cR), so that S_t = 1/k - cR/b = 1/k - ((1+g)/(1+r))^t R. Substituting d = (1+g)/(1+r) and a = 1/k yields the formulation S_t = a - d^t R. This has the intuitive consequence that money set aside for later periods scales up with the rate of return on one's investment (i.e. how many future dollars a present dollar can buy), and down with the rate at which other resources brought to bear on the problem will increase (since your money does the most at the times when the problem is most neglected).

This is not a universal solution; it gives us relative sizes, but Σ S_t does not converge. If the deflator d < 1 (i.e. r > g), then S_t increases in every period and the total implied allocation grows without bound - but that's a scenario in which it makes sense to hold onto the money forever anyway, since the marginal impact of the endowment only increases over time. On the other hand, if d > 1, eventually S_t drops below zero - effectively allocating negative aid in the future to finance aid in the present when it's more needed. This is not so intuitively unreasonable - many countries and private firms borrow in order to finance needed capital investments or present expenditures, expecting that their future ability to pay will increase, and that the marginal dollar will be worth less in the future. Since philanthropists can't borrow against developing countries' future GDP, we can interpret this as meaning that after some period of time, even the first dollar allocated for aid is less valuable than the marginal dollar allocated for any prior period, so the correct allocation is nothing. In other words, S_t = max[a - d^t R, 0]. Thus, so long as d > 1, this formula gives us the ordinary result - money allocated drops off smoothly over time, up until some time in the future when it no longer makes sense to allocate money.

This formula gives us a simple way to check whether we've reached the cutoff point. We can calculate the difference between the allocations for any two periods: S_t - S_{t+x} = (a - d^t R) - (a - d^{t+x} R) = d^t R(d^x - 1). For t=0 this gives a lower bound on the implied allocation in the first period, R(d^x - 1), if you assume that the allocation is nonzero up to period x. The lower bound on the total implied allocation can be expressed as Σ_{t=0}^{x-1} d^t R(d^{x-t} - 1). This can be used to check, for any period x, whether there is an implied allocation - if the total implied allocation exceeds the size of the fund, then that period is unfunded.

Let's illustrate this with some specific numbers - let's say g ≈ 0.1 and r ≈ 0.05, so that d = 1.1/1.05 ≈ 1.05. These numbers mean that the resources available for consumption by the bottom billion increase by about 10% per year, and the billion-dollar philanthropic fund grows by about 5% per year. You are still considering a plan to give money to the bottom billion, getting by on about a dollar a day, so suppose R is equal to $365 billion. Then at x=1, the implied allocation in the first period is at least R(d - 1) = $17,380,952,381 - substantially in excess of a billion. This implies that the full billion dollars should be given away immediately. Intuitively, this makes sense - even though there's diminishing marginal utility, at $365 billion in annual resources, utility is nearly linear for an addition of a mere $1 billion - and this only gets worse in each successive year - so if the first year is a better deal for the first dollar, it's a better deal for the billionth dollar as well.

However, it's not realistic to assume that you have the ability to reach a billion people. GiveDirectly can't reach that many now, and probably won't over the next few years. So let's say instead that they can reach about a million people, so that R is equal to $365 million. Then the difference between the first two allocations is only about $17 million, well under a billion, and the first period that receives zero allocation is t=8, since any allocation in that period would imply a total allocation of $1,245,578,242. The optimal allocation is:

Year Initial funds allocated
0 $195,678,092
1 $177,428,092
2 $158,265,592
3 $138,144,967
4 $117,018,311
5 $94,835,322
6 $71,543,184
7 $47,086,438

Details of the calculation are in this spreadsheet.
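
For readers who prefer code to a spreadsheet, here is a minimal sketch that reproduces the table above, taking the footnote's parameters as given (with d rounded to 1.05, which is what the table's figures appear to use, and the eight funded periods t = 0 through 7) rather than deriving the cutoff period itself:

    # Equal-marginal-value allocation: S_t = a - d^t * R, with the S_t summing to the fund.
    budget, R, d, funded_periods = 1e9, 365e6, 1.05, 8

    a = (budget + R * sum(d**t for t in range(funded_periods))) / funded_periods
    for t in range(funded_periods):
        print(t, f"${a - R * d**t:,.0f}")   # $195,678,092 at t=0 down to $47,086,438 at t=7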

4 I haven't talked with Matheny about this and don't know him personally, but this story is clear enough to me based on the public record.
5 Wikipedia reports four individuals as having won multiple Nobel prizes, all in the sciences. The Nobel prize has been awarded to 201 people for physics, 172 for chemistry, and 210 for medicine, so the base rate is roughly 4/583 ≈ 0.7%.

13 thoughts on “GiveWell: a case study in effective altruism, part 2”

  1. Pingback: GiveWell: a case study in effective altruism, part 1 | Compass Rose

  2. Pingback: GiveWell: a case study in effective altruism, part 3 | Compass Rose

  3. Carl Shulman

    [Disclosure: I work for FHI, which is mentioned in the post, and consult for OpenPhil, but am not speaking for either.]

    There are a lot of good points and nice analysis in this post, but I still see a lot of problems, including the following:

    "Estimating it a second way, surveys of AI experts have a median estimate of about 30 years to human-level AI, and AI. This suggests about a 3% annual probability."

    Where are you getting that number from? 1-0.5^(1/30)=2.284%.

    Also because of response bias and selection effects, etc, I don't think we can simply take those as a great sample (one would have to zoom in on the more representative surveys, and they had weak response rates; fortunately better surveys are coming).

    " If there's a 50% chance this goes well for us and a 50% chance it wipes us out"
    The surveys cited above give substantially lower numbers for 'existential catastrophe' outcomes, more like 5-10%. I think the numbers would change with more relevant information, and if we had a sample with both AI expertise and superforecaster skills, but one should be consistent in how one uses the survey information across questions (or explain the differences).

    " Let's say we have a 10% chance of reaching the stars - that gives us a lower bound"

    Conversely, you could assign a 10% credence to there being astronomically many simulations of planets like ours that don't get access to all those apparent interstellar resources, and so a policy that improves well-being in those simulations also generates astronomical impacts. E.g. if I eat an apple and enjoy it, then astronomically many almost-identical beings in other simulations will logically do the same thing.

    So the expectation of astronomical stakes can apply to actions not directly focused on the future, just less so (maybe a dozen orders of magnitude cleaving things well-targeted at the long-run vs actions with no long-run impact, but definitely not 10^50 or whatnot).

    I think it's right that at current margins there are enormous gains to be had from targeting neglected impacts on the long-run, and that this means it is possible to spend many billions of dollars with higher expected impact on well-being than AMF. But I would also like to set the standard of explicitness about how the case for focus on the long-run doesn't keep increasing indefinitely with larger universes, as the size of the universe enters both sides of the equation (seemingly local acts and seemingly long-run-focused acts).

    "The CDC is another organization with a similar level of legitimacy, ability to call on the resources of other legitimate authorities, and accumulated expertise, specific to biosecurity."

    The CDC and biosecurity agencies often display a different focus that does not value the tail GCR risks as highly as I think they deserve, e.g. the relative emphasis on small-scale terrorist attacks with agents that cannot spread worldwide contagiously and relative emphasis on existing agents vs emerging technologies and their use to create GCR-level (or elevated x-risk) organisms. An outside funder without the constraints on a government agency may be in a much better position to fund outside research and advocacy that improves prioritization within government (including by causing current and future employees to want to improve it, as with the IARPA example). And the CDC has a budget of $7BB per annum. So I really don't see the case for simply donating billions of dollars to its budget (which also might be clawed back by Congress in the next cycle of appropriations).

    The existence of such agencies (and analogous ones in nuclear risk, etc) is good, but it does not close off all opportunities to do what they are constrained from doing (often by political considerations), or to take action that improves their ability and motivation to address key problems. So I think I disagree a lot with the thrust of the GCR section.

    "This seems on the high end of plausibility, given that the entire amount of money currently spent on animal welfare is something like $50 million annually. "

    Politics can consume very large amounts of money, as can advertising, as can scientific research to improve welfare (animal product substitutes, development of higher welfare breeds, better measurement of farmed and wild animal welfare). Even with a large discount on the welfare of animals with simpler nervous systems, I'd say one can ramp up to spend $10BB to buy (short-run, assuming no technological revolution or GCR, etc) farmed animal QALYS more cheaply than through AMF.

    And if one considers climate change, land use, and similar there are tremendous impacts that could be had on wild animals too.

    1. Benquo Post author

      Thanks for the critique 🙂

      >"Estimating it a second way, surveys of AI experts have a median estimate of about 30 years to human-level AI, and AI. This suggests about a 3% annual probability."
      >Where are you getting that number from? 1-0.5^(1/30)=2.284%.

      I was assuming that each year, assuming the thing hasn't happened yet, there's the same probability p that it happens. In other words, the geometric distribution. The mean of the geometric distribution is 1/p=30, so p=1/30, or about 3.33%.

      I agree that the survey results can't be taken straightforwardly at face value, and look forward to looking at the new survey data when it's made public. It would also be good to look into whether predictors of wild success during past AI summers lost anything by making obviously wrong claims, or whether making false claims is a good strategy for success in AI; that should affect how reliable present predictions are, and I think it's quite plausible that these timelines are too short.

      Simulation hypothesis considerations are substantially more important here, and it's plausible to me that we should be doing more figuring out how to think about that. I'm not sure how much work has already been done on this, that's publicly available, beyond Bostrom's original paper - where should I look for more on this?

      Have you written up your thinking wrt feasibility of spending $10BB on cheap animal QALYs?

    2. Benquo Post author

      On IARPA and the CDC, which have ~$3 billion and ~$7 billion budgets respectively:

In either case it seems to me like if you have $10 billion and are interested in security risks such as AI, one obvious thing to do is offer IARPA $30 million / year for 5 years to run programs specifically about tail existential risks such as an intelligence explosion. This is a level of funding you can pull off pretty much indefinitely if it works out - though if it works out obviously enough, probably it'll attract other funders too. It's 1% of their current budget, so it's not an absurd amount of extra funding for IARPA to absorb. If it works out unusually well, you can scale the commitment up to 10% of their budget ($300 million) without drawing down your wealth too fast.

If they say no for what seem like basically silly reasons or accidental institutional reasons, or you try the program and it fails, then of course you try something else, like moving more money through your own foundation and/or other people working on this. If they say no because they think they're already doing all the important things, or are management capacity bottlenecked, then you should believe that there are diminishing returns at this scale. This suggests that there are diminishing returns at your scale too, so you should be interested in exploring the possibility of spreading decisionmaking power more widely by delegating funding decisions to many separate individuals and institutions.

      If your prior is already on strongly diminishing returns to scale, then you skip the first step and just start trying to figure out how to delegate the majority of your spending to others, with your foundation as one among many institutions and individuals that largely share your values, managing similarly sized endowments.

      If it simply doesn't occur to you to do the above, and it doesn't occur to the people advising you to do the above either, then you should reconsider to what extent your whole setup is a good way to do something about x-risk, vs other more universal human goals like prestige and political influence and feeling good about yourself.

      Same argument applies with respect to the CDC and biorisk, with slightly different illustration numbers.

      This argument is less valid when you think that there are very large missed opportunities due to political constraints, that you expect not to be bound by. In that case, if you think there are positive returns to scale you ought to ask an institution like Skoll Global Threats fund to spend some of your money. If you expect diminishing returns to scale, you ought to try spreading your funds among many decisionmakers. If you're uncertain, then you ought to try a bit of both and do more of whichever one seems to have a better track record.

      But I haven't seen any *specific* claim about what kinds of GCR spending are unavailable to government bodies. Thoughts on this?

      1. Carl Shulman

        ">"Estimating it a second way, surveys of AI experts have a median estimate of about 30 years to human-level AI, and AI. This suggests about a 3% annual probability."
        >Where are you getting that number from? 1-0.5^(1/30)=2.284%.

        I was assuming that each year, assuming the thing hasn't happened yet, there's the same probability p that it happens. In other words, the geometric distribution. The mean of the geometric distribution is 1/p=30, so p=1/30, or about 3.33%."

        You said that the median estimate (of 50% probability) is 30 years, but you used a calculation for the mean time instead of for the median time (which I gave above).

        "In either case it seems to me like if you have $10 billion and are interested in security risks such as AI, one obvious think to do is offer IARPA $30 million / year for 5 years to run programs specifically about tail existential risks such as an intelligence explosion."

        Now you're not just talking about writing a check, but finding people willing and able to pursue an aligned agenda (which the organization as a whole had not autonomously prioritized) in the organization and donating in a way that lets that happen (and doesn't backfire, a risk whose investigation one cannot delegate post-grant). That is a complex and challenging process requiring work to find out how to give money in a way to plug those gaps. And sometimes aligned people tell you they would rather see money used to do what they can't do.

        These biosecurity grants actually fit well into this framework:

        http://www.openphilanthropy.org/focus/global-catastrophic-risks/biosecurity/blue-ribbon-study-panel-biodefense-grant#Room_for_more_funding
        http://www.openphilanthropy.org/focus/global-catastrophic-risks/biosecurity/upmc-center-health-security-emerging-leaders-biosecurity-initiative

Likewise the big successes of the Nuclear Threat Initiative (such as funding removal of weapons material when there was a dispute between the US and Russia over who would pay, and financially kickstarting an IAEA fuel reserve to remove a roadblock to counterproliferation efforts).

        https://en.wikipedia.org/wiki/Nuclear_Threat_Initiative

        And here are some example grants from the Gates Foundation, giving to the CDC Foundation:

        http://www.cdcfoundation.org/blog-entry/cdc-foundation-receives-13.5-million-grant-from-gates-foundation

        I agree that the potential to give large amounts through regranting agencies with good records is attractive, but just writing a check to the agencies isn't at the frontier of effectiveness, and the more sophisticated versions require substantial work (which is worth doing, but has to be done) to find and succeed with.

        "Skoll Global Threats Fund"

        A reasonable candidate, and worth investigating and working with. However, I would note that Skoll has $4 billion and has pledged to donate almost all of it (and has already donated half his wealth). See his Giving Pledge statement. Good Ventures giving $3 billion to Skoll to equalize at $7 billion each would be rather odd. One could cofund and expand on Skoll grants in areas that one thought were higher priority than Skoll (e.g. if you prioritize nuclear war more relative to water security than does Skoll), but that is no longer simply trusting an aligned party. I would say it's a good idea to try such things but again it's not a case of just writing a check (without absurdity).

        1. Benquo Post author

          Oops RE 3% vs 2%, you're right, that was sloppy of me. Updating the post using the 2%. Thanks for the correction.

        2. Benquo Post author

Also - I agree that this stuff is not trivial in time expense. If my claim were "here's an extra thing you can do with the money without straining your management capacity further," I think this would be an extremely relevant criticism. But I'm not quite trying to claim that this is the answer to the question "which programs should an Open Phil program officer allocate funds to?".

          I'm claiming that IF you believe that increasing returns to scale are reasonably likely, then once you identify something like GCRs as an exceptionally good potential focus area, asking e.g. IARPA or CDC or Skoll whether they'll take your money to do things about tail risk seems like it ought to be more appealing than allocating the same money to building out the Open Philanthropy Project's own ability to evaluate specific programs in that area.

          This is even more true if you think that Open Phil has a strong advantage at focus area selection, because selecting programs via Open Phil involves reallocating extremely scarce staff time from focus area selection, where they have an unusually good track record, to the details of program evaluation and grantmaking, a very different domain.

          You might argue that grantmaking is itself an opportunity to learn about a focus area and check one's assumptions that it's promising, but I'm not sure why that shouldn't be optimized separately from allocating the bulk of the funds to high-impact programs.

  4. Pingback: Matching-donation fundraisers can be harmfully dishonest | Compass Rose

  5. Pingback: GiveWell: a case study in effective altruism, part 6 | Compass Rose

  6. Pingback: GiveWell and partial funding | Compass Rose

  7. Pingback: Totalitarian ethical systems | Compass Rose

  8. Pingback: A drowning child is hard to find | Compass Rose
