Category Archives: Effective Altruism

Oops Prize update

I'm overdue to publish an update on the Oops Prize. It fell off my priority list because I received exactly one nomination. I followed up with the nominee and couldn't get enough clarification to confirm eligibility, but my sense was that while the nominee clearly changed their mind, it wasn't a particularly clear case of public error correction as specified in the prize criteria.

Since the Oops Prize remains unclaimed, I'm offering it again this year. To clarify, I don't think the prize amount is enough to incentivize overt error-correction on its own, but it might be enough to give people an incentive to bother informing me if such error correction is in fact happening.

If anyone at an EA Global conference this year publicly repudiates an old belief, and the efforts they made and asked others to make on this basis, and explains what they're doing differently, then I'd like to celebrate this. Since talk is cheap, I'm offering $1,000 in prize money for the best example of such error-correcting: $900 to the person who most clearly reports changing their mind about something big they'd already invested their time, money, or credibility in and asked others to invest in, and $100 to the first person to nominate them. Self-nomination is encouraged.

To qualify, an entry has to have the following attributes:

  • It is explicitly error correction, not an account that spins things to look like a series of successes evolving over time, or "I used to think X, and now I think X'."
  • The nominee successfully encouraged a public commitment of resources based on the original belief (e.g. funds raised or volunteer hours).
  • There is a record of the error-correction statement. If it's not a recorded talk, an independent witness (neither the nominator nor the nominee) is enough evidence.
  • It happened at EA Global, and either was part of a scheduled talk, or an independent witness (neither the nominator nor the nominee) believes that at least ten people were present.

Anyone who speaks at EA Global this year is eligible for the prize, including leaders of EA organizations such as CEA, EAG leadership, and GiveWell / Open Philanthropy Project staff. If no qualifying entries are submitted, then no prize will be awarded. I am the sole, unaccountable judge of this, but will get people to check my work if I don't think anyone's eligible or feel like I'm too close to the person I think should win.

You can send nominations to me by email at benjaminrhoffman@gmail.com. If the error-correction is already publicly available, or if the nominee gives me permission, I’ll announce the winner by the end of the year. If there is no public recording and the nominee isn’t OK with the error-correction being publicized in this way, then I reserve the right to award them only a partial prize or none at all.

Humans need places

If, when you try to improve the world, you think about people but not about communities, you will tend to favor unsustainable net outflows of resources from your community. I wrote about this in Why I am not a Quaker. Effective Altruist (EA) and Rationalist communities such as the one in the San Francisco Bay Area suffer from this problem. Occasionally individuals - more often than not women, more often than not uncompensated and publicly unacknowledged - do something constructive about this problem. I’m now aware of one such effort where the person involved (Sarah Spikes) is publicly willing to accept support: The Berkeley REACH. The fundraiser page is here.


Cash transfers are not necessarily wealth transfers

Here’s a common argument:

The problem with the poor is that they haven’t got enough money. There’s ample empirical evidence backing this up. Therefore, the obviously-correct poverty intervention is to simply give the poor cash. You might be able to do better than this, but it’s a solid baseline and you should often expect to find that interventions are worse than cash.

There are technical reasons to be skeptical of cash transfers - which is why it is so important that the cash transfer charity GiveDirectly is carefully researching what actually happens when they give people cash - but until fairly recently, these objections seemed to me like abstruse nitpicks about an intervention that was almost analytically certain to be strongly beneficial.

But they’re not just nitpicks. Cash isn’t actually the same thing as human well-being, and the assumption that it can be simply exchanged into pretty much anything else is not obviously true.

Of course, saying “X is possibly wrong” isn’t very helpful unless we have a sense of how it’s likely to be wrong, and under what circumstances. It’s no good to treat cash transfers just the same as before while simply feeling gloomier about them.

I’m going to try to communicate an underlying model that generates the appropriate kind of skepticism about interventions like cash transfers, in a way that’s intuitive and not narrowly technical. I’ll begin with a parable, and then talk about how it relates to real-world cases.

Why I am not a Quaker (even though it often seems as though I should be)

In the past year, I have noticed that the Society of Friends (also known as the Quakers) came to the right answer long before I or most people did, on a surprising number of things, in a surprising range of domains. And yet, I do not feel inclined to become one of them. Giving credit where credit is due is a basic part of good discourse, so I feel that I owe an explanation.

The virtues of the Society of Friends are the virtues of liberalism: they cultivate honest discourse and right action, by taking care not to engage in practices that destroy individual discernment. The failings of the Society of Friends are the failings of liberalism: they do not seem to have the organizational capacity to recognize predatory systems and construct alternatives.

Fundamentally, Quaker protocols seem like a good start, but more articulated structures are necessary, especially more closed systems of production.

Oops Prize

If you don’t correct errors, you don’t get anything done, because you stay wrong. I don't think we do enough to reward saying oops.

Lately, I’ve been complaining about ways the EA community’s been papering over problems in ways that forgo this sort of learning. But while complaining is important, on its own it doesn’t offer any specific vision for how to do things. At the recent EA Global conference in Boston, I was reflecting with a friend on what sorts of positive norms I would like to see in the discourse.

One example of something I wish I saw more of is people publicly and very clearly saying, “We tried X, it didn’t work, so now we’re stopping.” Or, “I used to believe X, and as a result asked people to do Y, but now I don’t believe X anymore and don’t think Y is a particularly good use of resources.” People often invest a lot of social capital in their current beliefs and plans; admitting that you were wrong can cost you valuable social momentum and mean you have to start over. You might worry that people will associate you with wrongness. We need communities where, instead, clear admissions of error or failure are publicly acknowledged as signs of integrity and of commitment to communal learning and shared model-building.

So I'm offering a prize. But first, let me give an example of the sort of thing we need to be praising more loudly more often.

Effective Altruism is self-recommending

A parent I know reports (some details anonymized):

Recently we bought my 3-year-old daughter a "behavior chart," in which she can earn stickers for achievements like not throwing tantrums, eating fruits and vegetables, and going to sleep on time. We successfully impressed on her that a major goal each day was to earn as many stickers as possible.

This morning, though, I found her just plastering her entire behavior chart with stickers. She genuinely seemed to think I'd be proud of how many stickers she now had.

The Effective Altruism movement has now entered this extremely cute stage of cognitive development. EA is more than three years old, but institutions age differently than individuals.

An OpenAI board seat is surprisingly expensive

The Open Philanthropy Project recently bought a seat on the board of the billion-dollar nonprofit AI research organization OpenAI for $30 million. Some people have said that this was surprisingly cheap, because the price in dollars was such a low share of OpenAI's eventual endowment: 3%.

To the contrary, this seat on OpenAI's board is very expensive, not because the nominal price is high, but precisely because it is so low.

If OpenAI hasn’t extracted a meaningful-to-it amount of money, then it follows that it is getting something other than money out of the deal. The obvious thing it is getting is buy-in for OpenAI as an AI safety and capacity venture. In exchange for a board seat, the Open Philanthropy Project is aligning itself socially with OpenAI, by taking the position of a material supporter of the project. The important thing is mutual validation, and a nominal donation just large enough to neg the other AI safety organizations supported by the Open Philanthropy Project is simply a customary part of the ritual.

By my count, the grant is larger than all the Open Philanthropy Project's other AI safety grants combined.

(Cross-posted at LessWrong.)

OpenAI makes humanity less safe

If there's anything we can do now about the risks of superintelligent AI, then OpenAI makes humanity less safe.

Once upon a time, some good people were worried about the possibility that humanity would figure out how to create a superintelligent AI before they figured out how to tell it what we wanted it to do. If this happened, it could lead to literally destroying humanity and nearly everything we care about. This would be very bad. So they tried to warn people about the problem, and to organize efforts to solve it.

Specifically, they called for work on aligning an AI’s goals with ours - sometimes called the value alignment problem, AI control, friendly AI, or simply AI safety - before rushing ahead to increase the power of AI.

Some other good people listened. They knew they had no relevant technical expertise, but what they did have was a lot of money. So they did the one thing they could do - throw money at the problem, giving it to trusted parties to try to solve the problem. Unfortunately, the money was used to make the problem worse. This is the story of OpenAI.

Against responsibility

I am surrounded by well-meaning people trying to take responsibility for the future of the universe. I think that this attitude – prominent among Effective Altruists – is causing great harm. I noticed this as part of a broader change in outlook, which I've been trying to describe on this blog in manageable pieces (and sometimes failing at the "manageable" part).

I'm going to try to contextualize this by outlining the structure of my overall argument.

Why I am worried

Effective Altruists often say they're motivated by utilitarianism. At its best, this leads to things like Katja Grace's excellent analysis of when to be a vegetarian. We need more of this kind of principled reasoning about tradeoffs.

At its worst, this leads to some people angsting over whether it's ethical to spend money on a cup of coffee when they might have saved a life, and others using the greater good as license to say things that are not quite true, socially pressure others into bearing inappropriate burdens, and make ever-increasing claims on resources without a correspondingly strong verified track record of improving people's lives. I claim that these actions are not in fact morally correct, and that people keep winding up endorsing those conclusions because they are using the wrong cognitive approximations to reason about morality.

Summary of the argument

  1. When people take responsibility for something, they try to control it. So, universal responsibility implies an attempt at universal control.
  2. Maximizing control has destructive effects:
    • An adversarial stance towards other agents.
    • Decision paralysis.
  3. These failures are not accidental, but baked into the structure of control-seeking. We need a practical moral philosophy to describe strategies that generalize better, and that benefit from the existence of other benevolent agents rather than treating them primarily as threats.
