A parent I know reports (some details anonymized):
Recently we bought my 3-year-old daughter a "behavior chart," in which she can earn stickers for achievements like not throwing tantrums, eating fruits and vegetables, and going to sleep on time. We successfully impressed on her that a major goal each day was to earn as many stickers as possible.
This morning, though, I found her just plastering her entire behavior chart with stickers. She genuinely seemed to think I'd be proud of how many stickers she now had.
The Effective Altruism movement has now entered this extremely cute stage of cognitive development. EA is more than three years old, but institutions age differently than individuals.
What is a confidence game?
In 2009, investment manager and con artist Bernie Madoff pled guilty to running a massive fraud, with about $50 billion in fake returns on investment, having outright embezzled around $18 billion of the $36 billion investors put into the fund. Only a couple of years earlier, when my grandfather was still alive, I remember him telling me that Madoff was a genius, getting his investors a consistently high return, and that he wished he could be in on it, but Madoff wasn't accepting additional investors.
What Madoff was running was a classic Ponzi scheme. Investors gave him money, and he told them that he'd gotten them an exceptionally high return on investment, when in fact he had not. But because he promised to be able to do it again, his investors mostly reinvested their money, and more people were excited about getting in on the deal. There was more than enough money to cover the few people who wanted to take money out of this amazing opportunity.
Ponzi schemes, pyramid schemes, and speculative bubbles are all situations in which investors' expected profits are paid out of the money paid in by new investors, rather than from any independently profitable venture. Ponzi schemes are centrally managed – the person running the scheme represents it to investors as legitimate, and takes responsibility for finding new investors and paying off old ones. In pyramid schemes such as multi-level marketing and chain letters, each generation of investors recruits new investors and profits from them. In speculative bubbles, there is no formal structure propping up the scheme, only a common, mutually reinforcing set of expectations among speculators driving up the price of something that was already for sale.
The general situation in which someone sets themself up as the repository of others' confidence, and uses this as leverage to acquire increasing investment, can be called a confidence game.
Some of the most iconic Ponzi schemes blew up quickly because they promised wildly unrealistic growth rates. This had three undesirable effects for the people running the schemes. First, it attracted too much attention – too many people wanted into the scheme too quickly, so they rapidly exhausted sources of new capital. Second, because their rates of return were implausibly high, they made themselves targets for scrutiny. Third, the extremely high rates of return themselves caused their promises to quickly outpace what they could plausibly return to even a small share of their investor victims.
Madoff was careful to avoid all these problems, which is why his scheme lasted for decades. He promised returns (around 10% annually) that were only plausibly high for a successful hedge fund, especially one illegally engaged in insider trading – not the sort of implausibly high returns typical of more blatant Ponzi schemes. (Charles Ponzi promised to double investors' money in 90 days.) Madoff also showed reluctance to accept new clients, like any other fund manager who doesn't want to get too big for their trading strategy.
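The difference between those two promised growth rates compounds dramatically, which is why the blatant schemes blow up fast and the modest ones can run for decades. A rough sketch (illustrative numbers only, with no withdrawals or new deposits):

```python
# Illustrative only: how fast a scheme's promised liabilities grow when
# there are no real underlying profits, for two promised growth rates.

def liability_after(principal, annual_rate, years):
    """Promised balance after compounding at annual_rate for `years` years."""
    return principal * (1 + annual_rate) ** years

# Madoff-style: a plausible ~10% per year on a hypothetical $1M.
madoff_style = liability_after(1_000_000, 0.10, 10)

# Charles Ponzi promised to double money every 90 days: ~4 doublings a year.
ponzi_style = 1_000_000 * 2 ** (4 * 10)

print(f"10%/yr for 10 years:     ${madoff_style:,.0f}")
print(f"2x/90 days for 10 years: ${ponzi_style:,.0f}")
```

The first promise leaves you owing investors about $2.6 million; the second leaves you owing over a quintillion dollars, which no inflow of new victims can cover.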
He didn't plaster stickers all over his behavior chart – he put a reasonable number of stickers on it. He played a long game.
Not all confidence games are inherently bad. For instance, the US national pension system, Social Security, operates as a kind of Ponzi scheme, but it is not obviously unsustainable, and many people continue to be glad that it exists. Nominally, when people pay Social Security taxes, the money is invested in the Social Security Trust Fund, which holds interest-bearing financial assets that will be used to pay out benefits in their old age. In this respect it looks like an ordinary pension fund.
However, the financial assets are US Treasury bonds. There is no independently profitable venture. The Federal Government of the United States of America is quite literally writing an IOU to itself, and then spending the money on current expenditures, including paying out current Social Security benefits.
The Federal Government, of course, can write as large an IOU to itself as it wants. It could make all tax revenues part of the Social Security program. It could issue new Treasury bonds and gift them to Social Security. None of this would increase its ability to pay out Social Security benefits. It would be an empty exercise in putting stickers on its own chart.
If the Federal Government loses the ability to collect enough taxes to pay out Social Security benefits, there is no additional capacity to pay represented by US Treasury bonds. What we have is an implied promise to pay out future benefits, backed by the expectation that the government will be able to collect taxes in the future, including Social Security taxes.
There's nothing necessarily wrong with this, except that the mechanism by which Social Security is funded is obscured by financial engineering. This misdirection, however, should raise at least some doubts about the underlying sustainability or desirability of the commitment. In fact, the scheme was adopted specifically to give people the impression that they had some sort of property rights over their Social Security pension, in order to make the program politically difficult to eliminate. Once people have "bought in" to a program, they will be reluctant to treat their prior contributions as sunk costs, and will be willing to invest additional resources to salvage their investment, in ways that may make them increasingly reliant on it.
Not all confidence games are intrinsically bad, but dubious programs benefit the most from being set up as confidence games. More generally, bad programs are the ones that benefit the most from being allowed to fiddle with their own accounting. As Daniel Davies writes, in The D-Squared Digest One Minute MBA - Avoiding Projects Pursued By Morons 101:
Good ideas do not need lots of lies told about them in order to gain public acceptance. I was first made aware of this during an accounting class. We were discussing the subject of accounting for stock options at technology companies. […] One side (mainly technology companies and their lobbyists) held that stock option grants should not be treated as an expense on public policy grounds; treating them as an expense would discourage companies from granting them, and stock options were a vital compensation tool that incentivised performance, rewarded dynamism and innovation and created vast amounts of value for America and the world. The other side (mainly people like Warren Buffett) held that stock options looked awfully like a massive blag carried out by management at the expense of shareholders, and that the proper place to record such blags was the P&L account.
Our lecturer, in summing up the debate, made the not unreasonable point that if stock options really were a fantastic tool which unleashed the creative power in every employee, everyone would want to expense as many of them as possible, the better to boast about how innovative, empowered and fantastic they were. Since the tech companies' point of view appeared to be that if they were ever forced to account honestly for their option grants, they would quickly stop making them, this offered decent prima facie evidence that they weren't, really, all that fantastic.
However, I want to generalize the concept of confidence games from the domain of financial currency, to the domain of social credit more generally (of which money is a particular form that our society commonly uses), and in particular I want to talk about confidence games in the currency of credit for achievement.
If I were applying for a very important job with great responsibilities, such as President of the United States, CEO of a top corporation, or head or board member of a major AI research institution, I could be expected to have some relevant prior experience. For instance, I might have had some success managing a similar, smaller institution, or serving the same institution in a lesser capacity. More generally, when I make a bid for control over something, I am implicitly claiming that I have enough social credit – enough of a track record – that I can be expected to do good things with that control.
In general, if someone has done a lot, we should expect to see an iceberg pattern: a small, easily visible part that suggests a lot of solid but harder-to-verify substance under the surface. One might be tempted to make a habit of imputing a much larger iceberg from the combination of a small floaty bit and promises. But a small, easily visible part with claims of a lot of harder-to-see substance is easy to mimic without actually doing the work. As Davies continues:
The Vital Importance of Audit. Emphasised over and over again. Brealey and Myers has a section on this, in which they remind callow students that like backing-up one's computer files, this is a lesson that everyone seems to have to learn the hard way. Basically, it's been shown time and again and again; companies which do not audit completed projects in order to see how accurate the original projections were, tend to get exactly the forecasts and projects that they deserve. Companies which have a culture where there are no consequences for making dishonest forecasts, get the projects they deserve. Companies which allocate blank cheques to management teams with a proven record of failure and mendacity, get what they deserve.
If you can independently put stickers on your own chart, then your chart is no longer reliably tracking anything externally verified. If forecasts are not checked and tracked, or forecasters are not subsequently held accountable for their forecasts, then there is no reason to believe that assessments of future, ongoing, or past programs are accurate. Adopting a wait-and-see attitude – insisting on audits of actual results (not just predictions) before investing more – will certainly slow down funding for good programs. But without it, most of your funding will go to worthless ones.
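The audit loop Davies describes is conceptually very simple: every forecast eventually gets compared against a measured result, and an unmeasured forecast counts as unaudited, not as a success. A minimal sketch, with hypothetical program names and numbers (none of these are real figures from any funder):

```python
# Minimal sketch of a forecast audit: compare each program's original
# projection against its measured result, and flag forecasts that were
# never tested at all. Names and numbers are hypothetical.

projects = [
    # (name, predicted cost per outcome, measured cost per outcome)
    ("program_a", 1_000, 2_500),
    ("program_b", 3_000, 3_200),
    ("program_c", 500, None),  # never measured: the forecast is untested
]

for name, predicted, measured in projects:
    if measured is None:
        print(f"{name}: UNAUDITED (prediction of ${predicted} never tested)")
    else:
        ratio = measured / predicted
        print(f"{name}: actual cost was {ratio:.1f}x the forecast")
```

The point of the loop is the `None` branch: a chart where untested predictions are silently counted alongside audited results is a chart you can put stickers on yourself.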
Open Philanthropy, OpenAI, and closed validation loops
The Open Philanthropy Project recently announced a $30 million grant to the $1 billion nonprofit AI research organization OpenAI. This is the largest single grant it has ever made. The main point of the grant is to buy influence over OpenAI’s future priorities; Holden Karnofsky, Executive Director of the Open Philanthropy Project, is getting a seat on OpenAI’s board as part of the deal. This marks the second major shift in focus for the Open Philanthropy Project.
The first shift (back when it was just called GiveWell) was from trying to find the best already-existing programs to fund (“passive funding”) to envisioning new programs and working with grantees to make them reality (“active funding”). The new shift is from funding specific programs at all, to trying to take control of programs without any specific plan.
To justify the passive funding stage, all you have to believe is that you can know better than other donors, among existing charities. For active funding, you have to believe that you're smart enough to evaluate potential programs, just like a charity founder might, and pick ones that will outperform. But buying control implies that you think you're so much better that, even before you've evaluated any programs, if someone's doing something big, you ought to have a say.
When GiveWell moved from a passive to an active funding strategy, it was relying on the moral credit it had earned for its extensive and well-regarded charity evaluations. The thing that was particularly exciting about GiveWell was that they focused on outcomes and efficiency. They didn't just focus on the size or intensity of the problem a charity was addressing. They didn't just look at financial details like overhead ratios. They asked the question a consequentialist cares about: for a given expenditure of money, how much will this charity be able to improve outcomes?
However, when GiveWell tracks its impact, it does not track objective outcomes at all. It tracks inputs: attention received (in the form of visits to its website) and money moved on the basis of its recommendations. In other words, its estimate of its own impact is based on the level of trust people have placed in it.
So, as GiveWell built out the Open Philanthropy Project, its story was: We promised to do something great. As a result, we were entrusted with a fair amount of attention and money. Therefore, we should be given more responsibility. We represented our behavior as praiseworthy, and as a result people put stickers on our chart. For this reason, we should be advanced stickers against future days of praiseworthy behavior.
Then, as the Open Philanthropy Project explored active funding in more areas, its estimate of its own effectiveness grew. After all, it was funding more speculative, hard-to-measure programs, and a multi-billion-dollar donor – which was largely relying on the Open Philanthropy Project's opinions to assess efficacy (including its own efficacy) – continued to trust it.
What is missing here is any objective track record of benefits. What this looks like to me, is a long sort of confidence game – or, using less morally loaded language, a venture with structural reliance on increasing amounts of leverage – in the currency of moral credit.
Version 0: GiveWell and passive funding
First, there was GiveWell. GiveWell’s purpose was to find and vet evidence-backed charities. However, it recognized that charities know their own business best. It wasn’t trying to do better than the charities; it was trying to do better than the typical charity donor, by being more discerning.
GiveWell’s thinking from this phase is exemplified by co-founder Elie Hassenfeld’s Six tips for giving like a pro:
When you give, give cash – no strings attached. You’re just a part-time donor, but the charity you’re supporting does this full-time and staff there probably know a lot more about how to do their job than you do. If you’ve found a charity that you feel is excellent – not just acceptable – then it makes sense to trust the charity to make good decisions about how to spend your money.
GiveWell similarly tried to avoid distorting charities’ behavior. Its job was only to evaluate, not to interfere. To perceive, not to act. To find the best, and buy more of the same.
How did GiveWell assess its effectiveness in this stage? When GiveWell evaluates charities, it estimates their cost-effectiveness in advance. It assesses the program the charity is running through experimental evidence, in the form of randomized controlled trials. GiveWell also audits the charity to make sure it is actually running the program, and to figure out how much the program costs as implemented. This is an excellent, evidence-based way to generate a prediction of how much good will be done by moving money to the charity.
As far as I can tell, these predictions are untested.
One of GiveWell’s early top charities was VillageReach, which helped improve vaccine delivery logistics in Mozambique. GiveWell estimated that VillageReach could save a life for $1,000. But this charity is no longer recommended. The public page says:
VillageReach (www.villagereach.org) was our top-rated organization for 2009, 2010 and much of 2011 and it has received over $2 million due to GiveWell's recommendation. In late 2011, we removed VillageReach from our top-rated list because we felt its project had limited room for more funding. As of November 2012, we believe that this project may have room for more funding, but we still prefer our current highest-rated charities above it.
GiveWell reanalyzed the data it based its recommendations on, but hasn’t published an after-the-fact retrospective of long-run results. I asked GiveWell about this by email. The response was that such an assessment was not prioritized because GiveWell had found implementation problems in VillageReach's scale-up work as well as reasons to doubt its original conclusion about the impact of the pilot program. It's unclear to me whether this has caused GiveWell to evaluate charities differently in the future.
I don't think someone looking at GiveWell's page on VillageReach would be likely to reach the conclusion that GiveWell now believes its original recommendation was likely erroneous. GiveWell's impact page continues to count money moved to VillageReach without any mention of the retracted recommendation. If we assume that the point of tracking money moved is to track the benefit of moving money from worse to better uses, then repudiated programs ought to be counted against the total, as costs, rather than towards it.
GiveWell has recommended the Against Malaria Foundation for the last several years as a top charity. AMF distributes long-lasting insecticide-treated bed nets to prevent mosquitos from transmitting malaria to humans. Its evaluation of AMF does not mention any direct evidence, positive or negative, about what happened to malaria rates in the areas where AMF operated. (There is a discussion of the evidence that the bed nets were in fact delivered and used.) In the supplementary information page, however, we are told:
Previously, AMF expected to collect data on malaria case rates from the regions in which it funded LLIN distributions: […] In 2016, AMF shared malaria case rate data […] but we have not prioritized analyzing it closely. AMF believes that this data is not high quality enough to reliably indicate actual trends in malaria case rates, so we do not believe that the fact that AMF collects malaria case rate data is a consideration in AMF’s favor, and do not plan to continue to track AMF's progress in collecting malaria case rate data.
The data was noisy, so they simply stopped checking whether AMF’s bed net distributions do anything about malaria.
If we want to know the size of the improvement made by GiveWell in the developing world, we have their predictions about cost-effectiveness, an audit trail verifying that work was performed, and their direct measurement of how much money people gave because they trusted GiveWell. The predictions on the final target – improved outcomes – have not been tested.
GiveWell is actually doing unusually well as far as major funders go. It sticks to describing things it's actually responsible for. By contrast, the Gates Foundation, in a report to Warren Buffett claiming to describe its impact, simply described overall improvement in the developing world, a very small rhetorical step from claiming credit for 100% of that improvement. GiveWell at least sticks to facts about GiveWell's own effects, and this is to its credit. But it focuses on costs it has been able to impose, not benefits it has been able to create.
The Centre for Effective Altruism's William MacAskill made a related point back in 2012, though he talked about the lack of any sort of formal outside validation or audit, rather than focusing on empirical validation of outcomes:
As far as I know, GiveWell haven't commissioned a thorough external evaluation of their recommendations. […] This surprises me. Whereas businesses have a natural feedback mechanism, namely profit or loss, research often doesn't, hence the need for peer-review within academia. This concern, when it comes to charity-evaluation, is even greater. If GiveWell's analysis and recommendations had major flaws, or were systematically biased in some way, it would be challenging for outsiders to work this out without a thorough independent evaluation. Fortunately, GiveWell has the resources to, for example, employ two top development economists to each do an independent review of their recommendations and the supporting research. This would make their recommendations more robust at a reasonable cost.
GiveWell's page on self-evaluation says that it discontinued external reviews in August 2013. This page links to an explanation of the decision, which concludes:
We continue to believe that it is important to ensure that our work is subjected to in-depth scrutiny. However, at this time, the scrutiny we’re naturally receiving – combined with the high costs and limited capacity for formal external evaluation – make us inclined to postpone major effort on external evaluation for the time being.
- If someone volunteered to do (or facilitate) formal external evaluation, we’d welcome this and would be happy to prominently post or link to criticism.
- We do intend eventually to re-institute formal external evaluation.
Four years later, assessing the credibility of this assurance is left as an exercise for the reader.
Version 1: GiveWell Labs and active funding
Then there was GiveWell Labs, later called the Open Philanthropy Project. It looked into more potential philanthropic causes, where the evidence base might not be as cut-and-dried as that for the GiveWell top charities. One thing they learned was that in many areas, there simply weren’t shovel-ready programs ready for funding – a funder has to play a more active role. This shift was described by GiveWell co-founder Holden Karnofsky in his 2013 blog post, Challenges of passive funding:
By “passive funding,” I mean a dynamic in which the funder’s role is to review others’ proposals/ideas/arguments and pick which to fund, and by “active funding,” I mean a dynamic in which the funder’s role is to participate in – or lead – the development of a strategy, and find partners to “implement” it. Active funders, in other words, are participating at some level in “management” of partner organizations, whereas passive funders are merely choosing between plans that other nonprofits have already come up with.
My instinct is generally to try the most “passive” approach that’s feasible. Broadly speaking, it seems that a good partner organization will generally know their field and environment better than we do and therefore be best positioned to design strategy; in addition, I’d expect a project to go better when its implementer has fully bought into the plan as opposed to carrying out what the funder wants. However, (a) this philosophy seems to contrast heavily with how most existing major funders operate; (b) I’ve seen multiple reasons to believe the “active” approach may have more relative merits than we had originally anticipated. […]
- In the nonprofit world of today, it seems to us that funder interests are major drivers of which ideas get proposed and fleshed out, and therefore, as a funder, it’s important to express interests rather than trying to be fully “passive.”
- While we still wish to err on the side of being as “passive” as possible, we are recognizing the importance of clearly articulating our values/strategy, and also recognizing that an area can be underfunded even if we can’t easily find shovel-ready funding opportunities in it.
GiveWell earned some credibility from its novel, evidence-based, outcome-oriented approach to charity evaluation. But this credibility was already – and still is – a sort of loan. We have GiveWell's predictions or promises of cost-effectiveness in terms of outcomes, and we have figures for money moved, from which we can infer how much we were promised in improved outcomes. As far as I know, no one's gone back and checked whether those promises turned out to be true.
GiveWell then leveraged this credibility by extending its methods into more speculative domains, where less was checkable and donors had to put more trust in the subjective judgment of GiveWell analysts. This was called GiveWell Labs. At the time, this sort of compounded leverage may have been sensible, but it's important to track whether a debt has been paid off or merely rolled over.
Version 2: The Open Philanthropy Project and control-seeking
Finally, the Open Philanthropy Project made its largest-ever single grant to purchase its founder a seat on a major organization’s board. This represents a transition from mere active funding to overtly purchasing influence:
The Open Philanthropy Project awarded a grant of $30 million ($10 million per year for 3 years) in general support to OpenAI. This grant initiates a partnership between the Open Philanthropy Project and OpenAI, in which Holden Karnofsky (Open Philanthropy’s Executive Director, “Holden” throughout this page) will join OpenAI’s Board of Directors and, jointly with one other Board member, oversee OpenAI’s safety and governance work.
We expect the primary benefits of this grant to stem from our partnership with OpenAI, rather than simply from contributing funding toward OpenAI’s work. While we would also expect general support for OpenAI to be likely beneficial on its own, the case for this grant hinges on the benefits we anticipate from our partnership, particularly the opportunity to help play a role in OpenAI’s approach to safety and governance issues.
Clearly the value proposition is not increasing available funds for OpenAI, if OpenAI’s founders’ billion-dollar commitment to it is real:
Sam, Greg, Elon, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), Infosys, and YC Research are donating to support OpenAI. In total, these funders have committed $1 billion, although we expect to only spend a tiny fraction of this in the next few years.
The Open Philanthropy Project is neither using this money to fund programs that have a track record of working, nor to fund a specific program that it has prior reason to expect will do good. Rather, it is buying control, in the hope that Holden will be able to persuade OpenAI not to destroy the world, because he knows better than OpenAI’s founders.
How does the Open Philanthropy Project know that Holden knows better? Well, it’s done some active funding of programs it expects to work out. It expects those programs to work out because they were approved by a process similar to the one used by GiveWell to find charities that it expects to save lives.
If you want to acquire control over something, that implies that you think you can manage it more sensibly than whoever is in control already. Thus, buying control is a claim to have superior judgment – not just over others funding things (the original GiveWell pitch), but over those being funded.
In a footnote to the very post announcing the grant, the Open Philanthropy Project notes that it has historically tried to avoid acquiring leverage over organizations it supports, precisely because it’s not sure it knows better:
For now, we note that providing a high proportion of an organization’s funding may cause it to be dependent on us and accountable primarily to us. This may mean that we come to be seen as more responsible for its actions than we want to be; it can also mean we have to choose between providing bad and possibly distortive guidance/feedback (unbalanced by other stakeholders’ guidance/feedback) and leaving the organization with essentially no accountability.
This seems to describe two main problems introduced by becoming a dominant funder:
- People might accurately attribute causal responsibility for some of the organization's conduct to the Open Philanthropy Project.
- The Open Philanthropy Project might influence the organization to behave differently than it otherwise would.
The first seems obviously silly. I've been trying to correct the imbalance where Open Phil is criticized mainly when it makes grants, by criticizing it for holding onto too much money.
The second really is a cost as well as a benefit, and the Open Philanthropy Project has been absolutely correct to recognize this. This is the sort of thing GiveWell has consistently gotten right since the beginning and it deserves credit for making this principle clear and – until now – living up to it.
But discomfort with being dominant funders seems inconsistent with buying a board seat to influence OpenAI. If the Open Philanthropy Project thinks that Holden’s judgment is good enough that he should be in control, why only here? If he thinks that other Open Philanthropy Project AI safety grantees have good judgment but OpenAI doesn’t, why not give them similar amounts of money free of strings to spend at their discretion and see what happens? Why not buy people like Eliezer Yudkowsky, Nick Bostrom, or Stuart Russell a seat on OpenAI’s board?
On the other hand, the Open Philanthropy Project is right on the merits here with respect to safe superintelligence development. Openness makes sense for weak AI, but if you’re building true strong AI, you want to make sure you’re cooperating with all the other teams in a single closed effort. I agree with the Open Philanthropy Project’s assessment of the relevant risks. But it's not clear to me how often joining the bad guys to prevent their worst excesses is a good strategy, and it seems like it must often be a mistake. Still, I’m mindful of heroes like John Rabe, Chiune Sugihara, and Oskar Schindler. And if I think someone has a good idea for improving things, it makes sense to reallocate control to them from people who have worse ideas, even if some better allocation remains possible in principle.
On the other hand, is Holden Karnofsky the right person to do this? The case is mixed.
He listens to and engages with the arguments from principled advocates for AI safety research, such as Nick Bostrom, Eliezer Yudkowsky, and Stuart Russell. This is a point in his favor. But I can think of other people who engage with such arguments. For instance, OpenAI founder Elon Musk has publicly praised Bostrom’s book Superintelligence, and founder Sam Altman has written two blog posts summarizing concerns about AI safety reasonably cogently. Altman even asked Luke Muehlhauser, former executive director of MIRI, for feedback before publication, and has met with Nick Bostrom. That suggests a substantial level of direct engagement with the field, although Holden has engaged for longer, more extensively, and more directly.
Another point in Holden’s favor, from my perspective, is that under his leadership, the Open Philanthropy Project has funded the most serious-seeming programs for both weak and strong AI safety research. But Musk also managed to (indirectly) fund AI safety research at MIRI and by Nick Bostrom personally, via his $10 million FLI grant.
The Open Philanthropy Project also says that it expects to learn a lot about AI research from this, which will help it make better decisions on AI risk in the future and influence the field in the right way. This is reasonable as far as it goes. But remember that the case for positioning the Open Philanthropy Project to do this relies on the assumption that the Open Philanthropy Project will improve matters by becoming a central influencer in this field. This move is consistent with reaching that goal, but it is not independent evidence that the goal is the right one.
Overall, there are good narrow reasons to think that this is a potential improvement over the prior situation around OpenAI – but only a small and ill-defined improvement, at considerable attentional cost, and with the offsetting potential harm of increasing OpenAI's perceived legitimacy as a long-run AI safety organization.
And it’s worrying that Open Philanthropy Project’s largest grant – not just for AI risk, but ever (aside from GiveWell Top Charity funding) – is being made to an organization at which Holden’s housemate and future brother-in-law is a leading researcher. The nepotism argument is not my central objection. If I otherwise thought the grant were obviously a good idea, it wouldn’t worry me, because it’s natural for people with shared values and outlooks to become close nonprofessionally as well. But in the absence of a clear compelling specific case for the grant, it’s worrying.
Altogether, I'm not saying this is an unreasonable shift, considered in isolation. I’m not even sure this is a bad thing for the Open Philanthropy Project to be doing – insiders may have information that I don’t, and that is difficult to communicate to outsiders. But as outsiders, there comes a point when someone’s maxed out their moral credit, and we should wait for results before actively trying to entrust the Open Philanthropy Project and its staff with more responsibility.
EA Funds and self-recommendation
The Centre for Effective Altruism is actively trying to entrust the Open Philanthropy Project and its staff with more responsibility.
The concerns of CEA’s CEO William MacAskill about GiveWell have, as far as I can tell, never been addressed, and the underlying issues have only become more acute. But CEA is now working to put more money under the control of Open Philanthropy Project staff, through its new EA Funds product – a way for supporters to delegate giving decisions to expert EA “fund managers” by giving to one of four funds: Global Health and Development, Animal Welfare, Long-Term Future, and Effective Altruism Community.
The Effective Altruism movement began by saying that because very poor people exist, we should reallocate money from ordinary people in the developed world to the global poor. Now the pitch is in effect that because very poor people exist, we should reallocate money from ordinary people in the developed world to the extremely wealthy. This is a strange and surprising place to end up, and it’s worth retracing our steps. Again, I find it easiest to think of three stages:
- Money can go much farther in the developing world. Here, we’ve found some examples for you. As a result, you can do a huge amount of good by giving away a large share of your income, so you ought to.
- We’ve found ways for you to do a huge amount of good by giving away a large share of your income for developing-world interventions, so you ought to trust our recommendations. You ought to give a large share of your income to these weird things our friends are doing that are even better, or join our friends.
- We’ve found ways for you to do a huge amount of good by funding weird things our friends are doing, so you ought to trust the people we trust. You ought to give a large share of your income to a multi-billion-dollar foundation that funds such things.
Stage 1: The direct pitch
At first, Giving What We Can (the organization that eventually became CEA) had a simple, easy-to-understand pitch:
Giving What We Can is the brainchild of Toby Ord, a philosopher at Balliol College, Oxford. Inspired by the ideas of ethicists Peter Singer and Thomas Pogge, Toby decided in 2009 to commit a large proportion of his income to charities that effectively alleviate poverty in the developing world.
Discovering that many of his friends and colleagues were interested in making a similar pledge, Toby worked with fellow Oxford philosopher Will MacAskill to create an international organization of people who would donate a significant proportion of their income to cost-effective charities.
Giving What We Can launched in November 2009, attracting significant media attention. Within a year, 64 people had joined the society, their pledged donations amounting to $21 million. Initially run on a volunteer basis, Giving What We Can took on full-time staff in the summer of 2012.
In effect, its argument was: "Look, you can do huge amounts of good by giving to people in the developing world. Here are some examples of charities that do that. It seems like a great idea to give 10% of our income to those charities."
GWWC was a simple product, with a clear, limited scope. Its founders believed that people, including them, ought to do a thing – so they argued directly for that thing, using the arguments that had persuaded them. If it wasn't for you, it was easy to figure that out; but a surprisingly large number of people were persuaded by a simple, direct statement of the argument, took the pledge, and gave a lot of money to charities helping the world's poorest.
Stage 2: Rhetoric and belief diverge
Then, GWWC staff were persuaded you could do even more good with your money in areas other than developing-world charity, such as existential risk mitigation. Encouraging donations and work in these areas became part of the broader Effective Altruism movement, and GWWC's umbrella organization was named the Centre for Effective Altruism. So far, so good.
But this left Effective Altruism in an awkward position; while leadership often personally believe the most effective way to do good is far-future stuff or similarly weird-sounding things, many people who can see the merits of the developing-world charity argument reject the argument that because the vast majority of people live in the far future, even a very small improvement in humanity’s long-run prospects outweighs huge improvements on the global poverty front. They also often reject similar scope-sensitive arguments for things like animal charities.
Giving What We Can's page on what we can achieve still focuses on global poverty, because developing-world charity is easier to explain persuasively. However, EA leadership tends to privately focus on things like AI risk. Two years ago many attendees at the EA Global conference in the San Francisco Bay Area were surprised that the conference focused so heavily on AI risk, rather than the global poverty interventions they’d expected.
Stage 3: Effective altruism is self-recommending
Shortly before the launch of the EA Funds, I was told in informal conversations that they were a response to demand. Giving What We Can pledge-takers and other EA donors had told CEA that they trusted it to make giving decisions on their behalf. CEA was responding by creating a product for the people who wanted it.
This seemed pretty reasonable to me, and on the whole good. If someone wants to trust you with their money, and you think you can do something good with it, you might as well take it, because they’re estimating your skill above theirs. But not everyone agrees, and as the Madoff case demonstrates, "people are begging me to take their money" is not a definitive argument that you are doing anything real.
In practice, the funds are managed by Open Philanthropy Project staff:
We want to keep this idea as simple as possible to begin with, so we’ll have just four funds, with the following managers:
- Global Health and Development - Elie Hassenfeld
- Animal Welfare – Lewis Bollard
- Long-run future – Nick Beckstead
- Movement-building – Nick Beckstead
(Note that the meta-charity fund will be able to fund CEA; and note that Nick Beckstead is a Trustee of CEA. The long-run future fund and the meta-charity fund continue the work that Nick has been doing running the EA Giving Fund.)
It’s not a coincidence that all the fund managers work for GiveWell or Open Philanthropy. First, these are the organisations whose charity evaluation we respect the most. The worst-case scenario, where your donation just adds to the Open Philanthropy funding within a particular area, is therefore still a great outcome. Second, they have the best information available about what grants Open Philanthropy are planning to make, so have a good understanding of where the remaining funding gaps are, in case they feel they can use the money in the EA Fund to fill a gap that they feel is important, but isn’t currently addressed by Open Philanthropy.
In past years, Giving What We Can recommendations have largely overlapped with GiveWell’s top charities.
In the comments on the launch announcement on the EA Forum, several people (including me) pointed out that the Open Philanthropy Project seems to be having trouble giving away even the money it already has, so it seems odd to direct more money to Open Philanthropy Project decisionmakers. CEA’s senior marketing manager replied that the Funds were a minimum viable product to test the concept:
I don't think the long-term goal is that OpenPhil program officers are the only fund managers. Working with them was the best way to get an MVP version in place.
This also seemed okay to me, and I said so at the time.
[NOTE: I've edited the next paragraph to excise some unreliable information. Sorry for the error, and thanks to Rob Wiblin for pointing it out.]
After they were launched, though, I saw phrasings that were not so cautious at all, instead making claims that this was generally a better way to give. As of writing this, if someone on the effectivealtruism.org website clicks on "Donate Effectively" they will be led directly to a page promoting EA Funds. When I looked at Giving What We Can’s top charities page in early April, it recommended the EA Funds "as the highest impact option for donors."
This is not a response to demand, it is an attempt to create demand by using CEA’s authority, telling people that the funds are better than what they're doing already. By contrast, GiveWell's Top Charities page simply says:
Our top charities are evidence-backed, thoroughly vetted, underfunded organizations.
This carefully avoids any overt claim that they're the highest-impact option available to donors. GiveWell avoids saying that because there's no way they could know it, so saying it wouldn't be truthful.
The wording has since been qualified with “for most donors”, which is a good change. But the thing I’m worried about isn’t just the explicit exaggerated claims – it’s the underlying marketing mindset that made them seem like a good idea in the first place. EA seems to have switched from an endorsement of the best things outside itself, to an endorsement of itself. And it's concentrating decisionmaking power in the Open Philanthropy Project.
Effective altruism is overextended, but it doesn't have to be
There is a saying in finance that was old even when Keynes said it: if you owe the bank a million dollars, then you have a problem. If you owe the bank a billion dollars, then the bank has a problem.
In other words, if someone extends you a level of trust they could survive writing off, then they might call in that loan. As a result, they have leverage over you. But if they overextend, putting all their eggs in one basket, and you are that basket, then you have leverage over them; you're too big to fail. Letting you fail would be so disastrous for their interests that you can extract nearly arbitrary concessions from them, including further investment. For this reason, successful institutions often try to diversify their investments, and avoid overextending themselves. Regulators, for the same reason, try to prevent banks from becoming "too big to fail."
The Effective Altruism movement is concentrating decisionmaking power and trust as much as possible, in a way that's setting itself up to require ever-increasing investments of confidence to keep the game going.
The alternative is to keep the scope of each organization narrow, overtly ask for trust for each venture separately, and make it clear what sorts of programs are being funded. For instance, Giving What We Can should go back to its initial focus of global poverty relief.
Like many EA leaders, I happen to believe that anything you can do to steer the far future in a better direction is much, much more consequential for the well-being of sentient creatures than any purely short-run improvement you can create now. So it might seem odd that I think Giving What We Can should stay focused on global poverty. But, I believe that the single most important thing we can do to improve the far future is hold onto our ability to accurately build shared models. If we use bait-and-switch tactics, we are actively eroding the most important type of capital we have – coordination capacity.
If you do not think giving 10% of one's income to global poverty charities is the right thing to do, then you can't in full integrity urge others to do it – so you should stop. You might still believe that GWWC ought to exist. You might still believe that it is a positive good to encourage people to give much of their income to help the global poor, if they wouldn't have been doing anything else especially effective with the money. If so, and you happen to find yourself in charge of an organization like Giving What We Can, the thing to do is write a letter to GWWC members telling them that you've changed your mind, and why, and offering to give away the brand to whoever seems best able to honestly maintain it.
If someone at the Centre for Effective Altruism fully believes in GWWC's original mission, then that might make the transition easier. If not, then one still has to tell the truth and do what's right.
And what of the EA Funds? The Long-Term Future Fund is run by Open Philanthropy Project Program Officer Nick Beckstead. If you think that it's a good thing to delegate giving decisions to Nick, then I would agree with you. Nick's a great guy! I'm always happy to see him when he shows up at house parties. He's smart, and he actively seeks out arguments against his current point of view. But the right thing to do, if you want to persuade people to delegate their giving decisions to Nick Beckstead, is to make a principled case for delegating giving decisions to Nick Beckstead. If the Centre for Effective Altruism did that, then Nick would almost certainly feel more free to allocate funds to the best things he knows about, not just the best things he suspects EA Funds donors would be able to understand and agree with.
If you can't directly persuade people, then maybe you're wrong. If the problem is inferential distance, then you've got some work to do bridging that gap.
There's nothing wrong with setting up a fund to make it easy. It's actually a really good idea. But there is something wrong with the multiple layers of vague indirection involved in the current marketing of the Far Future fund – using global poverty to sell the generic idea of doing the most good, then using CEA's identity as the organization in charge of doing the most good to persuade people to delegate their giving decisions to it, and then sending their money to some dude at the multi-billion-dollar foundation to give away at his personal discretion. The same argument applies to all four Funds.
Likewise, if you think that working directly on AI risk is the most important thing, then you should make arguments directly for working on AI risk. If you can't directly persuade people, then maybe you're wrong. If the problem is inferential distance, it might make sense to imitate the example of someone like Eliezer Yudkowsky, who used indirect methods to bridge the inferential gap by writing extensively on individual human rationality, and did not try to control others' actions in the meantime.
If Holden thinks he should be in charge of AI safety research, then he should ask Good Ventures for funds to actually start an AI safety research organization. I'd be excited to see what he'd come up with if he had full control of and responsibility for such an organization. But I don't think anyone has a good plan to work directly on AI risk, and I don't have one either, which is why I'm not directly working on it or funding it. My plan for improving the far future is to build human coordination capacity.
(If, by contrast, Holden just thinks there needs to be coordination between different AI safety organizations, the obvious thing to do would be to work with FLI on that, e.g. by giving them enough money to throw their weight around as a funder. They organized the successful Puerto Rico conference, after all.)
Another thing that would be encouraging would be if at least one of the Funds were not administered entirely by an Open Philanthropy Project staffer, and ideally an expert who doesn't benefit from the halo of "being an EA." For instance, Chris Blattman is a development economist with experience designing programs that don't just use but generate evidence on what works. When people were arguing about whether sweatshops are good or bad for the global poor, he actually went and looked by performing a randomized controlled trial. He's leading two new initiatives with J-PAL and IPA, and expects that directors designing studies will also have to spend time fundraising. Having funding lined up seems like the sort of thing that would let them spend more time actually running programs. And more generally, he seems likely to know about funding opportunities the Open Philanthropy Project doesn't, simply because he's embedded in a slightly different part of the global health and development network.
Narrower projects that rely less on the EA brand and more on what they're actually doing, and more cooperation on equal terms with outsiders who seem to be doing something good already, would do a lot to help EA grow beyond putting stickers on its own behavior chart. I'd like to see EA grow up. I'd be excited to see what it might do.
1. Good programs don't need to distort the story people tell about them, while bad programs do.
2. Moral confidence games – treating past promises and trust as a track record to justify more trust – are an example of the kind of distortion mentioned in (1), that benefits bad programs more than good ones.
3. The Open Philanthropy Project's Open AI grant represents a shift from evaluating other programs' effectiveness, to assuming its own effectiveness.
4. EA Funds represents a shift from EA evaluating programs' effectiveness, to assuming EA's effectiveness.
5. A shift from evaluating other programs' effectiveness, to assuming one's own effectiveness, is an example of the kind of "moral confidence game" mentioned in (2).
6. EA ought to focus on scope-limited projects, so that it can directly make the case for those particular projects instead of relying on EA identity as a reason to support an EA organization.
7. EA organizations ought to entrust more responsibility to outsiders who seem to be doing good things but don't overtly identify as EA, instead of trying to keep it all in the family.
(Cross-posted at LessWrong and the EA Forum.)
(Disclosure: I know many people involved at many of the organizations discussed, and I used to work for GiveWell. I have no current institutional affiliation to any of them. Everyone mentioned has always been nice to me and I have no personal complaints.)
Related: Is Stupidity Strength?, Is Stupidity Strength? Part 2: Confidence, Is Stupidity Strength? Part 3: Evolutionary Game Theory, Is Stupidity Strength? Part 4: Are VCs Stupid?
Claim 1: Good programs don't need to distort the story people tell about them, while bad programs do.
If you want to discuss this claim, I encourage you to do it as a reply to this comment.
I don't think this is that straightforward. You can play the hyperbole game, or you can point out that the hyperbole game is flawed. These are somewhat mutually exclusive, and the audience for the second, more sophisticated claim is smaller. The trade-off of audience size vs. audience quality and longer-term incentives is non-obvious IMO.
There are lots of people who will only listen to distorted stories because undistorted stories are complicated. I prefer your previous assertion that distortions work equally well for everyone, and the truth only works for good programs. Whether this translates into a "need" to distort depends on what you mean. There probably aren't enough nuance-appreciating people in the world to maintain MSF's current funding in the way they get it now, with small donations. They could switch to foundations, but then they would have to answer to foundations in a way they don't with small donors.
Hypothetically, an organization run by really awesome people funded through small donations driven by a somewhat distorted story could end up with *more* integrity than an organization that is funded by foundations looking at the literal truth, because it gives them freedom from oversight. Obviously that freedom is only good if the people running the organization are both good and competent, but I think it's worth keeping in mind the distinction between accountability as a terminal vs instrumental goal.
I think this strategy fails in the long run the same way Hillary Clinton's presidential campaign failed - someone totally shameless can outcompete someone trying to tell only a mildly distorted story. Trump could adopt any posture he wanted based on short-run political expedience, Clinton actually seemed more fake because she had neither the actorly virtues, nor the hard-to-fake signs of honest intent to inform.
I'm not saying orgs need to push all the details and nuances on people who aren't interested. I'm saying that the honest thing to do is to try to communicate the most decision-relevant info, instead of clever arguing. If you do the former, you can gracefully scale up for people who want more detail, and it's easy to explain when you've changed your mind (because you've already told people what some of your cruxes are, so "this evidence challenged my crux" is likely a short inferential gap).
Eliezer's Protected From Myself is relevant here, and I think it applies to all distortions, not just outright lying:
Obviously compression of some sort has to be OK, but distortions and losses of info are always a cost because you might not be right about which actions are preferable.
Claim 2: "Moral confidence games" – treating past promises and trust as a track record to justify more trust – are an example of the kind of distortion mentioned in Claim 1, that benefits bad programs more than good ones.
If you want to discuss this claim, I encourage you to do it as a reply to this comment.
Claim 3: The Open Philanthropy Project's Open AI grant represents a shift from evaluating other programs' effectiveness, to assuming its own effectiveness.
If you want to discuss this claim, I encourage you to do it as a reply to this comment.
This and the move towards less public facing analysis is also a move away from epistemic humility. This is obviously complicated (see the long discussion on the EA forum recently), but overall I have a negative reaction to it.
Not exactly related to the claim but to your arguments in support of the claim.
You quote GiveWell and summarize it as "The data was noisy, so they simply stopped checking whether AMF’s bed net distributions do anything about malaria." Sure. You seem to imply that this is bad and we should do something about it (what?). Why not take what they said at face value and believe that it is simply too hard/costly/impossible to measure the effects? Especially in the case of GiveWell, it seems weird to not choose that default position and instead demand that they produce evidence of impact.
Later, you say "In the meantime, GiveWell then leveraged this credibility by extending its methods into more speculative domains, where less was checkable, and donors had to put more trust in the subjective judgment of GiveWell analysts." What donors? Doesn't all, or nearly all of the money come from Good Ventures? Obviously Good Ventures has much more access to GiveWell/OPP staff, which could easily justify the extra trust.
Attacking my arguments for the claim is perfectly on-topic, so thanks - and thanks for the opportunity to clarify my reasoning 🙂
At first, GiveWell Labs wasn't exclusively a Good Ventures thing, or at least wasn't described as such. Perhaps Dustin and Cari encouraged it, though.
Good Ventures didn't find GiveWell by chance - GiveWell had (and largely still has) a reputation as skeptical and trustworthy, people like Peter Singer were approvingly mentioning it in books, EAs were excited about it. If it hadn't been actually moving money with its recommendations, there would have been much less social proof for trusting it.
Now, EA Funds is trying to move more money to Open Philanthropy Project decisionmakers.
Was it possible to donate to GiveWell Labs? I thought they recommended grants that were then funded by Good Ventures. I wasn't following GiveWell at the time though, so I could easily be mistaken. Certainly at the moment it sounds like all of GiveWell Experimental and GiveWell Incubation Grants and the Open Philanthropy Project grants are funded by Good Ventures. Is there anywhere where I can support any of these things directly (besides EA Funds)?
Of course Dustin and Cari found GiveWell based on their reputation, but their continued support is not just based on the public-facing side of GiveWell. As an analogy, the first time I try out a restaurant, it's because of their public reputation (Yelp reviews + friend's recommendations). Every subsequent time, it's almost entirely based on my past experiences at that restaurant.
EA Funds definitely does require donors to put more trust into OPP decisionmakers, and I think this is a reason why the decisionmakers go to some lengths to explain what they will do with funds -- it's not an easy task to convince someone "Giving money for X to allocate is the best thing to do", because there are so many beliefs and values that could make that not true. On the other hand, "Giving money to X so that they can allocate it according to these guidelines" is a lot easier to make a case for, because people can easily understand guidelines. Continuing with the restaurant analogy, you don't (usually) ask the chef to surprise you with whatever he thinks you would like best, because you have a lot of tastes and preferences that the waiter couldn't possibly know about. But on the other hand, you would still let the chef make your food, because given your selection from the menu the chef will do a better job making it than you would. (Obviously that's not the only reason but it is one of them.)
I think there are a couple of important disanalogies between restaurants and charitable programs.
In a restaurant, once you've had the food, it's easy to know whether you've gotten what you want (excepting things like food safety, which is why there are professional food safety inspectors, part of why there are review websites, etc). But if I follow an erroneous charity recommendation, how do I find out afterwards?
In a restaurant, tastes differ a lot from person to person, and taste is largely what matters, so it can make a lot of sense to hold onto control of what the chef makes for you. If I'm giving away money for altruistic reasons, what matters is the beneficiaries' taste, not mine. I don't get the same gains from holding onto control here.
I don't think these disanalogies apply here.
For the first one, I agree that for the average donor it would absolutely be a problem that they can't tell whether a specific intervention actually worked. I expect it is far, far less of a problem for Good Ventures, since they can talk to people at OPP to understand their reasoning. In addition, they are currently only spending a tiny fraction of Good Ventures' money, and most of their grants can be evaluated on a ~2-year timescale (they've even made quantitative predictions on recent ones). This means that they'll be able to evaluate how well they're doing after they've spent probably < 10% of the amount they eventually intend to donate.
In any case, even if I did believe that Good Ventures is trusting Open Phil's reasoning too much based on past unevaluated trust, that still doesn't sound like "EA is self-recommending" -- my model of how this happened is that Dustin and Cari approached GiveWell, asked about some sort of partnership, and they worked out the details from there. In such a scenario, what would you have wanted Open Phil to do -- say "we haven't really evaluated ourselves yet, so you shouldn't put too much trust in us, you're going to be better off finding someone else to do this"?
For the second one, I think it's pretty obvious that different people have different "tastes" for charities -- it depends strongly on what you value. Even if you are a moral realist, empirically speaking, EAs have different enough views on ethics that you cannot find one thing to do that would be the best according to everyone (even if the exact consequences of every action possible were known). And then of course, EAs' empirical beliefs are sufficiently different (because we aren't ideal Bayesian reasoners) that again you can't find one thing that everyone will agree is the best.
Claim 4: EA Funds represents a shift from EA evaluating programs' effectiveness, to assuming EA's effectiveness.
If you want to discuss this claim, I encourage you to do it as a reply to this comment.
Claim 5: A shift from evaluating other programs' effectiveness, to assuming one's own effectiveness, is an example of the kind of "moral confidence game" mentioned in Claim 2.
If you want to discuss this claim, I encourage you to do it as a reply to this comment.
Claim 6: EA ought to focus on scope-limited projects, so that it can directly make the case for those particular projects instead of relying on EA identity as a reason to support an EA organization.
If you want to discuss this claim, I encourage you to do it as a reply to this comment.
Thanks for tracking this topic so closely, and for writing this piece. I'm still digesting its implications.
To me, claims 6&7 seem like your central prescriptions. Would you hold up any EA organizations as getting these two things *right*, and should get kudos for it?
Some EA orgs that get parts of this right:
Charity Science has a clear mission to move money to efficient charities. They started with a grant writing experiment, the results were less than they'd hoped, and they stopped trying in order to focus on other things they might be better at. Charity Science Health also managed to pick one thing and try to do it. So did Wave.
I mentioned GWWC in its very early years as a clear example of this, when it was mainly just Toby Ord and eventually William MacAskill telling people what they thought was right.
Many EA-recommended charities are better at this than EA organizations. For instance, AMF and GiveDirectly are doing well-defined tasks, and trying to be extremely easy to evaluate so you don't have to take their word for much.
I can't think of as many successes for (7), though that critique doesn't matter so much if you succeed at (6) anyhow. Open Philanthropy Project has been hiring program officers from outside EA, and that seems potentially promising, depending on how much free rein they actually have.
I think 80,000 Hours could very easily become the right sort of organization by splitting up its brand - for instance, there's a place for an org specializing in earning-to-give advocacy. It would, very explicitly, have to accept that it's not for everybody - which I think is the case for GWWC too - but admitting that you have a particular perspective makes it much easier to be honest about your biases.
You give GWWC as an example of an organisation which has overextended and should go back to its original focus, but I think you missed an important step in your account of what happened. The story as you tell it is:
1. GWWC is founded in 2009 with a mission of encouraging a 'tithe' directed at eliminating Global Poverty. Its founders themselves do this and believe in it.
2. The founders gradually shift towards thinking other areas might be higher impact. You didn't put a date on this, but if I had to I would say this was around 2013 - 2014. I'm curious if you disagree, because this will become important later.
3. There is now a disconnect between GWWC encouraging a Global-Poverty-only pledge while its founders/managers do not themselves donate 10% to Global Poverty charities.
Hence setting up a situation where you say "If you do not think giving 10% of one's income to global poverty charities is the right thing to do, then you can't in full integrity urge others to do it – so you should stop.". I agree with the quote as given. But actually this isn't the current state of affairs; GWWC's pledge doesn't mention Global Poverty anymore, and it hasn't since December 2014 (not coincidentally, this is close to the end of the date range I gave in 2). So setting the history aside, the actual situation now is:
1. A large fraction (though by no means all) of GWWC's leadership think areas other than Global Poverty are relatively more important *right now*.
2. However, many of them think donating 10% of your income to effective charities is a good idea. The pledge urges people to give to effective charities. There's no tension there; they are doing a thing and urging others to do it as well.
3. In practice, the vast majority of money donated via GWWC members goes to Global Poverty charities (citation needed: there used to be a nice graphic on GWWC's webpage showing the breakdown, but it seems to be gone now. I expect we will agree on this anyway).
4. GWWC recommends GiveWell's top charities as good evidence-backed choices for its members to donate to, and these are all Global Poverty charities for largely orthogonal reasons.
5. If, hypothetically, GWWC's members were donating significant amounts to Animal charities and GWWC had similar confidence in ACE, I agree GWWC should openly and prominently list ACE's top charities alongside GiveWell rather than engaging in 'bait and switch', and to hell with the persuasive implications. But that isn't the case. Right now Global Poverty is prominently displayed *because in practice it's what the members are actually doing*.
I should say that I think this post is overall really excellent and I'm still working through the full implications. The GWWC pledge changing from Global Poverty to Cause Neutral happens to be something I thought very hard about and gave feedback on at the time though, and I think the arguments for doing so were and are notably stronger than this post gives them credit for. Some of these were:
1. In practice, people donating 10% to the most effective animal charities and people donating 10% to the most effective Global Poverty charities have a lot in common, and it doesn't really make sense to divide them into separate camps when they can work together. My housemate is Earning to Give (like me) in Finance (like me) but thinks AI Risk is by far the most important thing (unlike me). In practice the first two points of overlap matter *far* more for our ability to coordinate than the third point of difference. In fact the third point is almost an added bonus rather than a problem; I can point AI-inclined people to him and he can point Poverty-inclined people to me.
2. (1) is doubly true given the long-term nature of the pledge. Right now I am a 'Global Poverty' person. But maybe in 10 years I will be an 'Animal charities' person. Forcing me to take and untake pledges as my beliefs change creates needless friction; as GWWC is currently structured this doesn't force me to do anything. To make it worse, this problem was actively preventing large numbers of people from taking the pledge, weakening our coordination ability.
So overall, I'm cautiously in favour of 'scope-limited' projects, but then I think GWWC's project should be 'Encouraging people for whom it is appropriate to donate 10% of their income to effective charities'. I don't see positive value in going more narrow than that.
The main reason I'm against that is that, empirically, GWWC seems to have trouble doing it in a way that doesn't end up implicitly promising one thing and delivering another. This suggests that the different strategies are not good fits for one another.
On the object level, I think things like AI risk are a very poor fit for the retail charity model, while developing-world charity is a comparatively good fit. Earning to give because you looked at how to do the most good with your career, found an organization doing good work, considered working for them, but concluded they were cash-limited (which I think happened with many major MIRI donors) is a very different path than the one GWWC is leading people along.
I sort of see your point in the second paragraph, but it seems weak compared to other considerations. If those major MIRI donors aren't a good fit for either form of GWWC (e.g. because they might want to stop donating and work directly at any moment when cash floods in), they shouldn't join GWWC in either case. In the current state Try Giving would at least be a good fit for them and it wouldn't in your suggested alternative.
Can you expand on your first paragraph? I'm guessing our 'empirical' observations are likely to be different, but while I know what mine are (and I tried to indicate what they are in my second (1)), I don't really know what yours are or where they differ. To elaborate on mine, I see GWWC's function as providing community support to its members; people are more likely to do something (and keep doing it) when they see others doing it and have positive relationships with those others. The support and reinforcement I need to keep donating significantly while donating to global poverty and EA movement building is much the same as what I needed when far more of my donations went to poverty, and it'll be much the same if I decide to donate to other areas in future. To state the obvious differently, the impact on me of my donations is almost entirely a function of how much money I donate, not where it goes. I would be genuinely surprised if I were unusual in that respect, and so dividing people up by where the money goes doesn't make much sense to me. I have some idea of what we lose by doing that, but what is gained?
Claim 7: EA organizations ought to entrust more responsibility to outsiders who seem to be doing good things but don't overtly identify as EA, instead of trying to keep it all in the family.
If you want to discuss this claim, I encourage you to do it as a reply to this comment.
This is one dimension of the calcification that I mentioned previously. Every time I engage with people from EA orgs on this issue, they claim that they've thought about it, but without offering any specifics. Based on my own base rate for behaviors when I tell someone I have 'thought about something a lot', I continue to be fairly skeptical. There's a MASSIVE underappreciation of declarative vs procedural knowledge in communities populated by analytically gifted people.
> If the Open Philanthropy Project thinks that Holden’s judgment is good enough that he should be in control, why only here? If he thinks that other Open Philanthropy Project AI safety grantees have good judgment but OpenAI doesn’t, why not give them similar amounts of money free of strings to spend at their discretion and see what happens? Why not buy people like Eliezer Yudkowsky, Nick Bostrom, or Stuart Russell a seat on OpenAI’s board?
At first glance, I believe both these things are good ideas and Open Phil should seriously consider doing them. Open Phil probably can't reasonably get more people board seats, because board seats have increasing marginal costs, but it can give lots of money to other organizations like MIRI and FHI, which would probably buy a lot more value per $30 million.
Also, if they're buying a board seat, I'd rather they gave it to an expert on AI safety like Yudkowsky or Bostrom and not a relative outsider like Holden. I can understand why Open Phil wants to give board seats to their own people, but that must be weighed against the value of giving the board seat to the most suitable person, whether or not they're associated with Open Phil. (But I expect there are a lot of politics involved in getting a board seat that make this non-trivial.)
Pingback: Bad intent is a behavior, not a feeling | Compass Rose
I asked a version of “Why not buy people like Eliezer Yudkowsky, Nick Bostrom, or Stuart Russell a seat on OpenAI’s board?” on Open Phil's blog & Holden replied:
"Hi Milan, we did discuss different possibilities for who would take the seat, and considered the possibility of someone outside Open Phil, but I’m going to decline to go into detail on how we ultimately made the call. I will note that I’ve been looping in other AI safety folks (such as those you mention) pretty heavily as I’ve thought through my goals for this partnership, and I recognize that there are often arguments for deferring to their judgment on particular questions."
Pingback: Why I am not a Quaker (even though it often seems as though I should be) | Compass Rose
Pingback: On the construction of beacons | Compass Rose
Pingback: Totalitarian ethical systems | Compass Rose
Pingback: A drowning child is hard to find | Compass Rose
Pingback: Alarm fatigue and systematic critique | Compass Rose
> We promised to do something great. As a result, we were entrusted with a fair amount of attention and money. Therefore, we should be given more responsibility. We represented our behavior as praiseworthy, and as a result people put stickers on our chart. For this reason, we should be advanced stickers against future days of praiseworthy behavior.
That doesn't seem quite fair. One could (and many did) evaluate GiveWell by assessing the quality of GiveWell's arguments. GW publishes blog posts explaining their reasoning, people read those blog posts and evaluate the quality of the reasoning, and on _that_ basis award GW social credit or not.
Given the extent to which people I talk to are surprised when I point out which things GiveWell simply never checked, I don't think this is happening for very many people.
Pingback: Approval Extraction Advertised as Production | Compass Rose
Pingback: Bob the Builder, and the Neo-Puritan Deal | Compass Rose
Pingback: Aphorisms from Twitter | Compass Rose