Claim explainer: donor lotteries and returns to scale

Sometimes, new technical developments in the discourse around effective altruism can be difficult to understand if you're not already aware of the underlying principles involved. I'm going to try to explain the connection between one such new development and an important underlying claim. In particular, I'm going to explain the connection between donor lotteries (as recently implemented by Carl Shulman in cooperation with Paul Christiano)1 and returns to scale. (This year I’m making a $100 contribution to this donor lottery, largely for symbolic purposes to support the concept.)

I'm not sure I'm adding much to Carl's original post on making bets to take advantage of returns to scale with this explainer. Please let me know whether you think this added anything or not.

What is a donor lottery?

Imagine ten people each have $1,000 to give to charity this year. They pool their money, and draw one of their names out of a hat. The winner gets to decide how to give away all $10,000. This is an example of a donor lottery.

More generally, a donor lottery is an arrangement where a group of people pool their money and pick one person to give it away. This selection is randomized so that each person has a probability of being selected proportional to their initial contribution.
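The selection rule is simple enough to sketch in a few lines. Here's a minimal illustration in Python — the `draw_winner` helper and the ten-donor pool are invented for the example; `random.choices` with proportional weights stands in for whatever fair randomization the organizers actually use:

```python
import random

def draw_winner(contributions, rng=random):
    """Pick one donor, with probability proportional to their contribution.

    contributions: dict mapping donor name -> dollars contributed.
    """
    donors = list(contributions)
    weights = [contributions[d] for d in donors]
    return rng.choices(donors, weights=weights, k=1)[0]

# Ten donors put in $1,000 each, so each has a 10% chance of winning,
# and the winner allocates the full $10,000 pot.
pool = {f"donor_{i}": 1_000 for i in range(10)}
winner = draw_winner(pool)
pot = sum(pool.values())
```

With unequal contributions the same function still works: someone who put in $2,000 against nine $1,000 donors gets a 2/11 chance.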

Selfish reasons to gamble

Let's start with the case of a non-charitable expenditure. Usually, for consumption decisions, we have what economists call diminishing marginal utility. This is because we have limited ability to consume things, and also because we make the best purchases first.

Food is an example of something we have limited appetite for. After a certain point, we just aren't hungry anymore. But we also buy the more important things first. Your first couple dollars a day make the difference between going hungry and having enough food. Your next couple dollars a day go to buying convenience or substituting higher-quality foods, which is a material improvement, but nowhere near as big as the difference between starving and fed.

To take a case that's less universal, but maybe easier to understand the principle in, let's say I'm outfitting a kitchen, and own no knives. I can buy one of two knives – a small knife or a large one. The large knife can do a good job cutting large things, and a bad job cutting small things. The small knife can do a good job cutting small things, and a bad job cutting large things. If I buy one of these knives, I get the benefit of being able to cut things at all for both large and small items, plus the benefit of being able to cut things well in one category. If I buy the second knife, I only improve the situation by the difference between being able to cut things poorly in one category, and being able to cut them well. This is a smaller difference. I'd rather have one knife with certainty, than a 50% chance of getting both.

But sometimes, returns to consumption are increasing. Let's say that I have a spare thousand dollars after meeting all my needs, and there's only one more thing in the world I want that money can buy – a brand-new $100,000 sports car, unique enough that there are no reasonable substitutes. The $1,000 does me no good at all, $99,000 would do me no good at all, but as soon as I have $100,000, I can buy that car.

One thing I might want to do in this situation is gamble. If I can go to a casino and make a bet that has a 1% chance of multiplying my money a hundredfold (ignoring the house's cut for simplicity), then this is a good deal. Here's why. In the situation where I don't make the bet, I have a 100% chance of getting no value from the money. In the situation where I do make the bet, I have a 99% chance of losing the money, which I don't mind since I had no use for it anyway, but a 1% chance of being able to afford that sports car.

But since in practice the house does take a cut at casinos, and winnings are taxed, I might get a better deal by pooling my money together with 99 other like-minded people, and selecting one person at random to get the car. This way, 99% of us are no worse off, and one person gets a car.

The sports car scenario may seem far-fetched, especially once you take into account the prospect of saving up for things, or unexpected expenses. But it's not too far from the principle behind the susu, or ROSCA:

Susus are generally made up of groups of family members or friends, each of whom pledges to put a certain amount of money into a central pot each week. That pot, presided over by a treasurer, whose honesty is vouched for by his or her respected standing among the participants, is then given to one member of the group.

Over the course of a susu's life, each member will receive a payout exactly equal to the total he has put in, which could range from a handful of dollar bills to several thousand dollars; members earn no interest on the money they set aside. After a complete cycle, the members either regroup and start over or go their separate ways.

In communities where people either don't have access to savings or don't have the self-control to avoid spending down their savings on short-run emergencies, the susu is the opposite of consumption smoothing - it enables participants to bunch their spending together to make important long-run investments.2

A susu bears a strong resemblance to a partially randomized version of a donor lottery, for private gain.

Gambling for the greater good

Similarly, if you’re trying to do the most good with your money, you might want to take into account returns to scale. As in the case of consumption, the "normal" case is diminishing returns to scale, because you're going to want to fund the best things you know of first. But you might think that the returns to scale are increasing in one of two ways:

  • Diminishing marginal costs
  • Increasing marginal benefits

Diminishing marginal costs

Let’s say that your charity budget for this year is $5,000, and your best guess is that it will take about five hours of research to make a satisfactory giving decision. You expect that you’ll be giving to charities for which $5,000 is a small amount, so that they have roughly constant returns to scale with respect to your donation. (This matters because what we care about are benefits, not costs.) In particular, for the sake of simplicity, let’s say that you think that the best charity you’re likely to find can add a healthy year to someone’s life for $250, so your donation can buy 20 life-years.

Under these circumstances, suppose that someone you trust offers you a bet with a 90% probability of getting nothing, and a 10% probability of getting back ten times what you put in. In this case, if you make a $5,000 bet, your expected giving is 10% * 10 * $5,000 = $5,000, the same as before. And if you expect the same impact per dollar up to $50,000, then if you win, your donation saves $50,000 / $250 = 200 life-years for beneficiaries of this charity. Since you only have a 10% chance of winning, your expected impact is 20 life-years, same as before.

But you only need to spend time evaluating charities if you win, so your expected time expenditure is 10% * 5 = 0.5 hours. This is strictly better – you have the same expected impact, for a tenth the expected research time.
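The whole comparison fits in a few lines of Python. All the numbers here are the toy figures from above, not real estimates:

```python
budget = 5_000              # this year's charity budget, in dollars
cost_per_life_year = 250    # toy cost-effectiveness of the best charity
research_hours = 5          # time to make a satisfactory decision
p_win = 0.10
multiplier = 10             # winnings are 10x the stake

# Without the bet: give $5,000 after 5 hours of research.
baseline_impact = budget / cost_per_life_year       # 20 life-years
baseline_hours = research_hours

# With the bet: a 10% chance of giving $50,000, and you only do the
# research if you win.
lottery_impact = p_win * (multiplier * budget) / cost_per_life_year
lottery_hours = p_win * research_hours

assert lottery_impact == baseline_impact   # same expected impact: 20 life-years
assert lottery_hours < baseline_hours      # 0.5 expected hours vs 5
```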

These numbers are made up and in practice you don’t know what the impact of your time will be, but the point is that if you’re constrained by time to evaluate donations, you can get a better deal through lotteries.

Increasing marginal benefits

The smooth case

Of course, if you’re giving away $50,000, you might be motivated to spend more than five hours on this. Let’s say that you think you can find a charity that’s 10% more effective if you spend ten hours on it. Then in the winning scenario, you’re spending an extra five hours to save an extra 20 life-years, not a bad deal. Your expected life-years saved is then 22, higher than in the original case, and your expected time allocation is 1 hour, still much less than before.
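Continuing the toy arithmetic — the 10%-better-charity-for-ten-hours figure is just the assumption from the paragraph above:

```python
p_win = 0.10
life_years_if_win = 200     # $50,000 at $250 per healthy life-year
boost = 1.10                # a 10% more effective charity...
hours_if_win = 10           # ...found with ten hours of research

expected_life_years = p_win * life_years_if_win * boost  # 22, up from 20
expected_hours = p_win * hours_if_win                    # 1, down from 5
```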

The lumpy case

Let’s say that you know someone considering launching a new program, which you believe would be a better value per dollar than anything else you can find in a reasonable amount of time. But they can only run the program if they get a substantial amount of initial funds; for half as much, they can’t do anything. They’ve tried a “kickstarter” style pledge drive, but there aren’t enough other donors interested. You have a good reason to believe that this isn’t because you’re mistaken about the program.

You’d fund the whole thing yourself, but you only have 10% of the needed funds on hand. Once again, you’d want to play the odds.

Lotteries, double-counting, and shared values

One objection I’ve seen potential participants raise against donor lotteries is that they’d feel obliged to take into account the values of other participants if they won. This objection is probably related to the prevalence of double-counting schemes to motivate people to give.

I previously wrote about ways in which "matching donation" drives only seem like they double your impact because of double-counting:

But the main problem with matching donation fundraisers is that even when they aren't lying about the matching donor's counterfactual behavior, they misrepresent the situation by overassigning credit for funds raised.

I'll illustrate this with a toy example. Let's say that a charity - call it Good Works - has two potential donors, Alice and Bob, who each have $1 to give, and don't know each other. Alice decides to double her impact by pledging to match the next $1 of donations. If this works, and someone gives because of her match offer, then she'll have caused $2 to go to Good Works. Bob sees the match offer and reasons similarly: if he gives $1, this causes another $1 to go to Good Works, so his impact is doubled - he'll have caused Good Works to receive $2.

But if Alice and Bob each assess their impact as $2 of donations, then the total assessed impact is $4 - even though Good Works only receives $2. This is what I mean when I say that credit is overassigned - if you add up the amount of funding each donor is supposed to have caused, you get a number that exceeds the total amount of funds raised.

If you tried to justify donor lotteries this way, it would look like this: Let's say you and nine other people each put in $10,000. You have a 10% chance of getting to give away $100,000. But if you lose, the other nine people still want to give to something that fulfills your values at least somewhat. So you are giving away more than $10,000 in expectation. This is double-counting because if you apply it consistently to each member of the group in turn, it assigns credit for more funding than the entire group is responsible for. It only works if you think you're getting one over on the other people if you win.
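The bookkeeping problem can be made concrete with a throwaway calculation. The 50% "fulfills my values somewhat" figure is invented purely for illustration:

```python
n = 10
stake = 10_000
total = n * stake            # $100,000 actually donated
p_win = 1 / n

# Double-counting: each donor credits themselves with the full pot if
# they win, plus partial credit for the others' giving if they lose.
fulfillment_if_lose = 0.5    # hypothetical "fulfills my values somewhat"
claimed_each = p_win * total + (1 - p_win) * fulfillment_if_lose * total
assert n * claimed_each > total   # credit assigned exceeds funds raised

# Honest accounting: value the others' planned use at zero, as if they
# all just wanted sports cars. Expected giving is exactly your own stake.
honest_each = p_win * total
assert abs(n * honest_each - total) < 1e-6
```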

For instance, maybe you'd really spend your winnings on a sports car, but giving the money to an effective charity seems better than nothing, so they're fulfilling your values, but you're not fulfilling theirs.

Naturally, some people feel bad about getting one over on people, and consequently feel some obligation to take their values into account.

There are some circumstances under which this could be reasonable. People could be pooling their donations even though they're risk-averse about charities, simply in order to economize on research time. But in the central case of donor lotteries, everyone likes the deal they're getting, even if they estimate the value of other donors' planned use of the money at zero.

The right way to evaluate the expected value of a donor lottery is to only take the deal if you'd take the same deal from a casino or financial instrument where you didn't think you were value-aligned with your counterparty. Assume, if you will, that everyone else just wants a sports car. If you do this, you won't double-count your impact by pretending that you win even if you lose.

Claim: returns to scale for individual donations

Donor lotteries were originally proposed as a response to an argument based on returns to scale:

  • Some effective altruists used “lumpy” returns to scale (for instance, where extra money matters only when it tips the balance over to hiring an additional person) to justify favoring charities that turn funds into impact more smoothly.
  • Some effective altruists say that small donors should defer to GiveWell’s recommendations, because given the time it makes sense to spend allocating a small donation, they shouldn’t expect to do better than GiveWell.

In his original post on making use of randomization to increase scale, Carl Shulman summarizes the case against these arguments:

In a recent blog post Will MacAskill described a donation opportunity that he thought was attractive, but less so for him personally because his donation was smaller than a critical threshold:

This expenditure is also pretty lumpy, and I don’t expect them to get all their donations from small individual donations, so it seems to me that donating 1/50th of the cost of a program manager isn’t as good as 1/50th of the value of a program manager.

When this is true, it can be better to exchange a donation for a 1/50 chance of a donation 50 times as large. One might also think that when donating $1,000,000 rather than $1 one can afford to spend more time and effort in evaluating opportunities, get more access to charities, and otherwise enjoy some of the advantages possessed by large foundations.

Insofar as one believes that there are such advantages, it doesn't make sense to be defeatist about obtaining them. In some ways resources like GiveWell and Giving What We Can are designed to let the effective altruist community mimic a large effectiveness-oriented foundation. One can give to the Gates Foundation, or substitute for Good Ventures to keep its cash reserves high.

However, it is also possible to take advantage of economies of scale by purchasing a lottery (in one form or another), a small chance of a large donation payoff. In the event the large donation case arises, then great efforts can be made to use it wisely and to exploit the economies of scale.

There's more than one reason you might choose to trust the recommendations of GiveWell or Giving What We Can, or directly give to either, or to the Gates Foundation. One consideration is that there are returns to scale for delegating your decisions to larger organizations. Insofar as this is why donors give based on GiveWell recommendations, GiveWell is serving as a sort of nonrandomized donor lottery in which the GiveWell founders declared themselves the winners in advance. The benefit of this structure is that it's available. The obvious disadvantage is that it's hard to verify shared values.

Of course, there are other good reasons why you might give based on GiveWell's recommendation. For instance, you might especially trust their judgment based on their track record. The proposal of donor lotteries is interesting because it separates out the returns to scale consideration, so it can be dealt with on its own, instead of being conflated with other things.

Even if your current best guess is that you should trust the recommendations of a larger donor, if you are uncertain about this, and expect that spending time thinking it through would help make your decision better, then a donor lottery allows you to allocate that research time more efficiently, and make better delegation decisions. There's nothing stopping you from giving to a larger organization if you win, and decide that's the best thing.

So, the implications of a position on returns to scale are:

  • If you think that there are increasing returns to scale for the amount of money you have to allocate, then you should be interested in giving money to larger donors who share your values, or giving based on their recommendations. But you should be even more interested in participating in a donor lottery.
  • If you think that there are diminishing returns to scale for the amount of money you have to move, then you should not be interested in giving money to larger donors, participating in a donor lottery, accepting money from smaller donors, or making recommendations for smaller donors to follow.

With those implications in mind, here are some claims it might be good to argue about:

(Cross-posted to LessWrong and Arbital.)

References
1 This phrasing was suggested by Paul. Here's how Carl describes their roles: "I came up with the idea and basic method, then asked Paul if he would provide a donor lottery facility. He did so, and has been taking in entrants and solving logistical issues as they come up."
2 More on susus here and here. More on ROSCAs here, here, here, and here.

When I was trying to find where I'd originally heard about these and didn't remember what they were called, I Googled for poor people in developing countries using lotteries as savings, but most relevant-looking results were about using lotteries to trick poor people into saving. Almost none were about what poor people were already doing to solve their actually existing problems. It turns out, sometimes the poor can do financial engineering when they need to. The global poor aren't necessarily poor because they're stupid or helpless. Seems pretty plausible that in many cases, they're poor because they haven't got enough money.

9 thoughts on “Claim explainer: donor lotteries and returns to scale”

  1. Daniel Dewey

    Nice post!

    On first blush, I don't think your overassigned-credit argument works. In general, I'm skeptical of reductios that end with "then total credit > total impact", because AFAIK it's sometimes correct to overassign credit: http://www.stafforini.com/txt/Parfit%20-%20Five%20mistakes%20in%20moral%20mathematics.pdf

    I do think the point you're arguing for is roughly correct, but I'd argue for it this way:

    Suppose A and B each put $1 into a lottery. If A expects B to "be nice" (take A's values into account if B wins), but A will ignore B's values if A wins (A is not nice), then A "should" play the lottery to take advantage of B. However, this is terribly non-cooperative, and not good for EAs to do.

    If A and B are equally nice, A's incentive goes away. Suppose that A and B each have a niceness of .25; that is, each will spend 25% = $0.5 of the $2 pool on the other's values if they win. Assuming linear returns, that means that A wins -> $1.5 of A-value, B wins -> $0.5 of A-value, for an expected value of $1 A-value. Therefore A shouldn't play the lottery for "we're all nice" reasons.

    1. Benquo Post author

      Thanks for the link and argument. I suspect that the argument from overassignment works for all and only those cases where you're dealing with something that scales linearly. Note that Parfit's Second Rescue Mission is an example of gains from scale:

      The Second Rescue Mission. As before, the lives of a hundred people are in danger. These people can be saved if I and three other people join in a rescue mission. We four are the only people who could join this mission. If any of us fails to join, all of the hundred people will die. If I fail to join, I could go elsewhere and save, single-handedly, fifty other lives.

      One doesn't just count up the time investment here and say the person involved is quadrupling their impact. They're tipping the balance on a stepwise function. The right thing to do here, which Parfit does, is a scenario analysis:

      On the Revised Share-of-the-Total View, I ought to go elsewhere and save these fifty others. If instead I join this rescue mission, my share of the benefit produced is only the equivalent of saving twenty-five lives. I can therefore do more good if I go elsewhere and save the fifty. This is clearly false, since if I act in this way fifty more lives will be lost. I ought to join this rescue mission.

      I originally talked about overassignment as an argument against double-counting inputs where you're controlling (and being controlled by) others' actions. Seems to me that if you can measure the outputs (including opportunity cost for others' contributions) then you should just do that and not worry about your "share" of the inputs.

      1. Daniel Dewey

        Makes sense -- the linear scaling comment seems right to me (though I haven't thought super carefully). Thanks!

  2. Aceso Under Glass

    This isn't a counterargument, more an example that illustrates the principle: I think I am an example of a person who would not benefit from a donor lottery. I'm supporting Tostan with both money and writing that induces other people to donate. A lottery doesn't save me any research time because I need to do it for my writing anyway (I was originally planning on stopping at one but there are now compelling reasons to keep going). And to the extent my donations are acting as a credible signal that I believe what I'm writing, I think many people will find it more compelling than buying a ticket in a donor lottery. This rhymes with the double counting of matching, but I don't think it is the same. I explicitly did not hold my money back for matching, because I would donate to the organization without it (although if I hadn't moved so much money through writing, I would scale back on that).

    1. Aceso Under Glass

      Also relevant: you can often get medium size charities to talk to you for a surprisingly small donation ($1,000-5,000). I don't think that argues either way: you could play the lottery so you could make a bunch of small donations at once, or you could make small donations every year until you find one you like and then play the lottery.

      1. Benquo Post author

        Seems like that's an argument for roughly constant or slightly diminishing returns to scale once you hit that threshold, if your impact mainly comes from building a relationship with the charity. (i.e. a sufficiently large "test" donation to one charity ought to be at least as good as a 10% chance of ten "test" donations.)

        1. Aceso Under Glass

          Depends. I think one of the most important things is using your comparative advantage. If you have secret knowledge of 3 charities (like I did with Tostan), funding for test donations for a 4th has greatly diminished marginal returns.

    2. Benquo Post author

      Agreed, this is a great example of when a donor lottery is NOT a good fit for your giving. Negative examples are really important for clarifying concepts, thanks for providing one!

