Category Archives: Effective Altruism

Matching-donation fundraisers can be harmfully dishonest

Anna Salamon, executive director of CFAR (named with permission), recently wrote to me asking for my thoughts on fundraisers using matching donations. (Anna, together with co-writer Steve Rayhawk, has previously written on community norms that promote truth over falsehood.) My response made some general points that I wish were more widely understood:

  • Pitching matching donations as leverage (e.g. "double your impact") misrepresents the situation by overassigning credit for funds raised.
  • This sort of dishonesty isn't just bad for your soul, but can actually harm the larger world - not just by eroding trust, but by causing people to misallocate their charity budgets.
  • "Best practices" for a charity tend to promote this kind of dishonesty, because they're precisely those practices that work no matter what your charity is doing.
  • If your charity is impact-oriented - if you care about outcomes rather than institutional success - then you should be able to do substantially better than "best practices".

Continue reading

GiveWell: a case study in effective altruism, part 5

This is part of a series of blog posts examining seven arguments I laid out for limiting Good Ventures funding to the GiveWell top charities. My prior post considered the third through fifth arguments, on influence, access, and independence. In this post, I consider the sixth and seventh arguments:

Argument 6: If no one else is willing to fund a program, then this is evidence that the program should not be funded. Crowding out other donors destroys this source of independent validation.

Argument 7: If Good Ventures fully funds every high-value giving opportunity it finds, this could lead to other donors preemptively abandoning programs the Open Philanthropy Project is looking into, thus substantially reducing the amount of effective giving in the Open Philanthropy Project's perceived current and potential focus areas.

Argument 6 is sometimes an important consideration, but is a poor fit for the GiveWell top charities, to the extent that most donors are already largely moved by GiveWell's recommendations. Argument 7 points to a real problem, and one that reflects poorly on the effective altruism movement, but the problem is not mainly concentrated in the area of funding. Continue reading

GiveWell: a case study in effective altruism, part 4

This is part of a series of blog posts examining seven arguments I laid out for limiting Good Ventures funding to the GiveWell top charities. My prior post considered the second argument, that even assuming symmetry between Good Ventures and other GiveWell donors, Good Ventures should not fund more than its fair share of the top charities, because it has a legitimate interest in preserving its bargaining power. In this post, I consider the third through fifth arguments:

Argument 3: The important part of GiveWell's and the Open Philanthropy Project's value proposition is not the programs they can fund with the giving they're currently influencing, but influencing much larger amounts of charitable action in the future. For this reason it would be bad to get small donors out of the habit of giving based on GiveWell recommendations.

Argument 4: The amount of money Good Ventures will eventually disburse based on the Open Philanthropy Project's recommendations gives them access to potential grantees who would be interested in talking to one of the world's very largest foundations, but would not spend time on exploratory conversations with smaller potential donors who are not already passionate about their program area.

Argument 5: GiveWell, the Open Philanthropy Project, and their grantees and top charities, cannot make independent decisions if they rely exclusively or almost exclusively on one major donor. They do not want Good Ventures to crowd out other donors, because it makes them more dependent on Good Ventures, which will reduce the integrity of their decisionmaking process, and therefore the quality of their recommendations.

If you already think that GiveWell is doing good, Argument 3 should make you more excited about it - and implies that Good Ventures should be looking for ways to give away money faster in order to build a clear track record of success sooner.

Argument 4 seems plausible at some margin - if Good Ventures gives away most of its money quickly, then it will become a small foundation and have access to fewer potential grantees. But it would be a surprising coincidence if the amount of money Good Ventures will eventually give away were very close to this access threshold. If giving money away freely now will make it difficult to change behavior later once remaining funds are close to the access threshold, this is an argument for communicating this intention in advance, which may require Good Ventures and its donors to make their funding commitments more explicit.

I deal with two components of Argument 5 separately. First, GiveWell's top charities may become less effective if dependent on a single primary donor. Second, GiveWell and the Open Philanthropy Project have a legitimate interest in preserving their own independence.

The top charities' independence consideration seems unlikely to apply uniformly to all the GiveWell top charities; each has a different funding situation and donor base, so this seems like a situation worth assessing on a case-by-case basis, not with a blanket 50-50 donation split between Good Ventures and everyone else.

To the extent that Good Ventures becoming the dominant GiveWell donor threatens GiveWell's institutional independence, this problem seems built into the current institutional structure of GiveWell and the Open Philanthropy Project, in ways that aren't materially resolved by Good Ventures only partially funding the top charities. Continue reading

Costs are not benefits

You're about to brush your teeth but you're all out of toothpaste, so you walk over to the drugstore. They're out of your favorite toothpaste too, but there's a shampoo available for the same price. On the efficient market hypothesis, you expect that the market prices already contain all relevant information about the products, so you have no reason to think the shampoo is less valuable to you. So you buy it, go home, and wash your hair.

What's wrong with this story?

The problem is that you're using a marginalist heuristic. The efficient market hypothesis applies to markets in which people have roughly the same preferences. Everyone wants roughly the same thing from their financial investments - to make more money. So at any given level of risk, you should expect to evaluate tradeoffs the same way anyone else does.

In the case of the drugstore, you have a lot of information about whether you prefer shampoo or toothpaste that is unlikely to be reflected in the price. The efficient market hypothesis suggests that you shouldn't expect to get a much better deal in a nearby store, but not that you should be indifferent between all similarly priced goods. You value toothpaste over shampoo a lot more than any price difference is likely to reflect, because you have what is called an inframarginal preference: you need toothpaste, and you've already got enough shampoo.

Critch just reposted an old argument in favor of voting, based on a back-of-the-envelope calculation of its expected impact. The model is perfectly fine, but to estimate the value, he uses a related cost. This only seems like a reasonable thing to do if you're making the shampoo-for-toothpaste error. Continue reading
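The structure of a calculation like Critch's can be sketched in a few lines: multiply the probability that your vote is decisive by the value difference between outcomes. All numbers below are hypothetical placeholders for illustration, not figures from Critch's post:

```python
# Back-of-the-envelope expected value of a vote: the chance your vote
# swings the outcome, times the value difference between outcomes.
# Every number here is a made-up placeholder, not a real estimate.

def expected_value_of_vote(p_decisive, stakes):
    """Expected impact of one vote, given the probability of being
    decisive and the value at stake between outcomes."""
    return p_decisive * stakes

# Hypothetical inputs: a 1-in-10-million chance of being decisive,
# and a $100 billion value difference between outcomes.
p_decisive = 1e-7
stakes = 1e11

print(expected_value_of_vote(p_decisive, stakes))  # 10000.0
```

The post's objection is about what goes in `stakes`: it must be a benefit - a difference in outcomes you actually value - not a related cost, such as the size of a budget the winner would control.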

GiveWell: a case study in effective altruism, part 3

This is part of a series of blog posts examining seven arguments I laid out for limiting Good Ventures funding to the GiveWell top charities. My prior post considered the first argument, that the Open Philanthropy Project, and thus Good Ventures, has superior judgment to that of GiveWell donors. In this post, I consider the second argument:

Even if Good Ventures isn't special, it should expect that some of its favorite giving opportunities will be ones that others can't recognize as good ideas, due to different judgment, expertise, and values. If the Open Philanthropy Project does not expect to be able to persuade other future donors, but would be able to persuade Good Ventures, then these opportunities will only be funded in the future if Good Ventures holds onto its money for long enough. Thus, while Good Ventures may currently have a lower opportunity cost than individual GiveWell donors, this will quickly change if it commits to fully funding the GiveWell top charities.

This post is my most direct response to GiveWell's blog post explaining the reasoning behind its "splitting" recommendation.

Argument 2: Bargaining power

In its blog post on giving now vs later, GiveWell discusses potential policies it might have recommended to Good Ventures on funding the GiveWell top charities' funding gap. Good Ventures and individual GiveWell donors may have very different opinions on what else their money should be spent on, but still agree that the optimal allocation of resources should prioritize the GiveWell top charities.

Without holding the view that Good Ventures currently has a higher opportunity cost than individual GiveWell donors, GiveWell might still believe that committing to fully funding the GiveWell top charities' funding gaps would be a mistake on the part of Good Ventures. GiveWell might believe that this commitment would be bad because it cedes all of Good Ventures's bargaining power to other GiveWell donors.

GiveWell begins with a principled argument, asking whether Good Ventures should respond to each additional dollar given by other GiveWell donors by giving less ("funging"), more ("matching"), or the same amount ("splitting"). GiveWell recommends splitting, and in the first major section, I explore the principled case for this, assuming the conditions of symmetry laid out above. I argue that the principled case for splitting is only coherent under very pessimistic assumptions about effective altruists' ability to cooperate with one another. These assumptions may be justified, but as far as I can tell, haven't been seriously tested.

GiveWell goes on to make a specific recommendation that Good Ventures's "fair share" of the GiveWell top charities is 50% of the top charities' total room for more funding. In the second major section of this post, I see whether this recommendation seems intuitively fair, trying a couple of different simple back-of-the-envelope quantitative comparisons. I argue that the most intuitive relative allocation assigns substantially more of the funding burden to Good Ventures at present. Continue reading

GiveWell: a case study in effective altruism, part 2

In my prior post on this topic, I laid out seven distinct arguments for limiting Good Ventures funding to the GiveWell top charities. In this post, I explore the first of these:

Good Ventures can find better opportunities to do good than other GiveWell donors can, because it is willing to accept more unconventional recommendations from the Open Philanthropy Project.

I'll start by breaking this up into two claims (disjunctions inside disjunctions!): a bold-sounding claim that the Open Philanthropy Project's impact will be world-historically big, and a milder-sounding claim that it can merely do better than other GiveWell donors.

The bold claim seems largely inconsistent with GiveWell's and the Open Philanthropy Project's public statements, but their behavior sometimes seems consistent with believing it. However, if the bold claim is true, it suggests that the correct allocation from Good Ventures to the GiveWell top charities is zero. In addition, as a bold claim, the burden of evidence ought to be fairly high. As things currently stand, the Open Philanthropy Project is not even claiming that this is true, much less providing us with reason to believe it.

The mild claim sounds much less arrogant, is plausibly consistent with GiveWell's public statements, and is consistent with partial funding of the GiveWell top charities. However, the mild claim, when used as a justification for partial funding of the GiveWell top charities, implies some combination of the following undesirable properties.

  • Other GiveWell donors' next-best options are worthless.
  • Good Ventures and other GiveWell donors have an adversarial relationship, and GiveWell is taking Good Ventures's side.

Continue reading

GiveWell: a case study in effective altruism, part 1

Direct critiques of effective altruism have tended to take a form ill-suited to persuade the sort of person who is excited about it. One critique points somewhat vaguely at the virtues of intuition and first-hand knowledge, and implies that thinking is not a good way to make decisions. Others have criticized effective altruism's tendency in practice towards centralization and top-down decisionmaking, and implied that making comparisons across different programs is immoral. What's missing is a critique by someone sympathetic to the things that make effective altruism appealing: a desire to follow the evidence wherever it leads, use explicit methods of evaluation whenever possible, and be sensitive to considerations of scope.

I am going to try to begin that sympathetic critique here by looking at GiveWell, a nonprofit that tries to recommend the best giving opportunities. GiveWell is a good test case because it is now fairly central to the effective altruist movement, and it has been unusually honest and open about its decisionmaking processes. As it has developed and grown, it has had to deal with some of the tensions inherent in the effective altruist project in practice.

In the course of implementing effective altruist ideals, GiveWell has accumulated massive conflicts of interest, along with ever-larger amounts of money, power, and influence. When I hear this discussed, people generally justify it by saying that it is in the service of a higher impact on the world. Such a standard allows for a lot of moral flexibility in practice. If GiveWell wishes to be held to that standard, then we need to actually hold it to that standard - the standard of maximizing expected value - and see how it measures up.

That’s an extremely high standard to meet. GiveWell’s written that you shouldn’t take expected-value calculations literally. Maybe any attempt to maximize impact by explicitly evaluating options should be scope-limited, and moderated by common sense. But if you accept that defense, then the normal rules apply, and we should be skeptical of any organization whose conduct is justified by the fully general mandate to do the most good.

We can’t have it both ways.

GiveWell has recently written about coordination between donors. GiveWell wrote that up to explain why it recently recommended that a major funder commit, in some circumstances, to not fully funding the charities GiveWell recommends to the public, based on concerns of crowding out other donors. My post is largely a response to this. Continue reading

Minimum viable impact purchases

Several months ago I did some work on a trial basis for AI Impacts. It went well enough, but the process of agreeing in advance on what work needs to be done felt cumbersome. It's not uncommon that midway through a project, it turns out that it makes sense to do a different thing than what you'd originally envisioned - and because I was doing this for someone else, I had to check in at each such point. This didn't just slow down the process, but made the whole thing less motivating for me.

Later, I did my own research project. When natural pivot points came up, this didn't trigger a formal check-in - I just continued to do the thing that made the most sense. I think that I did better work this way, and steered more quickly towards the highest-value aspect of my research. Part of this is because, since I wasn't accountable to anyone else for the work, I could follow my own inner sense of what needed to be done.

I was talking with Katja about my work, and she mentioned that AI Impacts might potentially be interested in funding some of this work. I explained the motivation problem mentioned in the prior paragraphs, and wondered out loud whether AI Impacts might be interested in funding projects retrospectively, after I'd already completed them. Katja responded that in principle this sounded like a much better deal than funding projects prospectively, in large part because it would take less management effort on her part. This also felt like a much better deal to me than being funded prospectively, again because I wouldn't have to worry so much about checking in and fulfilling promises.

I've talked with friends about this consideration, and a few mentioned that sometimes people are hired as researchers with a fairly vague or flexible research mandate, or prefunded to do more work along the lines of their prior work, in the hope that they'll produce similarly valuable work in the future. But making promises like that, even if very abstract, also makes it difficult for me to proceed in a spirit of play, discovery, and curiosity, which is how I do some of my best work.

It also offends my sense of integrity to accept money for the promise to do one thing, or even one class of thing, when my real plan is to adopt a flexible stance - my best judgment might tell me to radically change course, and at this stage I fully intend to listen to it. For instance, I might decide that I should switch from research to writing and advocacy (what I'm doing now). I might even learn something that persuades me to make a bigger commitment to another course of action, starting or joining some organization with a better-defined role.

What doesn't offend my sense of integrity is to accept money explicitly for past work, with no promises about the future.

Then it clicked - this is the logic behind impact certificates. Continue reading

Lessons learned from modeling AI takeoffs, and a call for collaboration

I am now able to think about things like AI risk and feel like the concepts are real, not just verbal. This was the point of my modeling the world project. I’ve generated a few intuitions around what’s important in AI risk, including a few considerations that I think are being neglected. There are a few directions my line of research can be extended, and I’m looking for collaborators to pick this up and run with it.

I intend to write about much of this in more depth, but since EA Global is coming up I want a simple description to point to here. These are just loose sketches, so I'm trying to describe rather than persuade.

New intuitions around AI risk

  • Better cybersecurity norms would probably reduce the chance of an accidental singleton.
  • Transparency on AI projects’ level of progress reduces the pressure towards an AI arms race.
  • Safety is convergent - widespread improvements in AI value alignment improve our chances at a benevolent singleton, even in an otherwise multipolar dynamic.

Continue reading

Effective Altruism is not a no-brainer

Ozy writes that Effective Altruism avoids the typical failure modes of people in developed countries intervening in developing ones, because it is evidence-based, humble, and respects the autonomy of the recipients of the intervention. The basic reasoning is that Effective Altruists pay attention to empirical evidence, focus on what's shown to work, change what they're doing when it looks like it's not working, and respect the autonomy of the people for whose benefit they're intervening.

Effective Altruism is not actually safe from the failure modes alluded to:

  • Effective Altruism is not humble. Its narrative in practice relies on claims of outsized benefits in terms of hard-to-measure things like life outcomes, which makes humility quite difficult. Outsized benefits probably require going out on a limb and doing extraordinary things.
  • Effective Altruism is less evidence based than EAs think. People talk about some EA charities as producing large improvements in life outcomes with certainty, but this is often not happening. And when the facts disagree with our hopes, we seem pretty good at ignoring the facts.
  • Effective Altruism is not about autonomy. Some EA charities are good at respecting the autonomy of beneficiaries, but this is nowhere near central to the movement, and many top charities are not about autonomy at all, and are much better fits for the stereotype of rich Westerners deciding that they know what's best for people in poor countries.
  • Standard failure modes are standard. We need a model of what causes them, and how we're different, in order to be sure we're avoiding them.

Continue reading