At the formation of the Berkeley REACH in April of this year, I wrote in support of projects like it, and announced that I'd personally be contributing to it. Now that I've decided to discontinue the latter, I feel that I owe a public accounting of my reasons.
For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken away even that which he hath.
- The Gospel according to Matthew
r > g
- Thomas Piketty, Capital in the Twenty-First Century
From Jesus to Piketty, it is a commonplace that wealth is a positive feedback loop.
Under one model, differential ability to steward capital, plus compounding gains, implies that perfectly benevolent people with more money than most should keep it more often than a naive expected utility maximization would suggest. On the other hand, conquering empires also experience compounding gains; the ability to leverage force into more force implies that this is a harmful positive feedback loop.
I'm overdue to publish an update on the Oops Prize. It fell off my priority list because I received exactly one nomination. I followed up with the nominee and couldn't get enough clarification to confirm eligibility, but my sense was that while the nominee clearly changed their mind, it wasn't a particularly clear case of public error correction as specified in the prize criteria.
Since the Oops Prize remains unclaimed, I'm offering it again this year. To clarify, I don't think the prize amount is enough to incentivize overt error-correction on its own, but it might be enough to give people an incentive to bother informing me if such error correction is in fact happening.
If anyone at an EA Global conference this year publicly repudiates an old belief, and the efforts they made and asked others to make on this basis, and explains what they're doing differently, then I'd like to celebrate this. Since talk is cheap, I'm offering $1,000 in prize money for the best example of such error-correcting: $900 to the person who most clearly reports changing their mind about something big they'd already invested their time, money, or credibility in and asked others to invest in, and $100 to the first person to nominate them. Self-nomination is encouraged.
To qualify, an entry has to have the following attributes:
- It is explicitly error correction, not an account that spins things to look like a series of successes evolving over time, or "I used to think X, and now I think X'."
- The nominee successfully encouraged a public commitment of resources based on the original belief (e.g. funds raised or volunteer hours).
- There is a record of the error-correction statement. If it's not a recorded talk, an independent witness (neither the nominator nor the nominee) is enough evidence.
- It happened at EA Global, and either was part of a scheduled talk, or an independent witness (neither the nominator nor the nominee) believes that at least ten people were present.
Anyone who speaks at EA Global this year is eligible for the prize, including leaders of EA organizations such as CEA, EAG leadership, and GiveWell / Open Philanthropy Project staff. If no qualifying entries are submitted, then no prize will be awarded. I am the sole, unaccountable judge of this, but will get people to check my work if I don't think anyone's eligible or feel like I'm too close to the person I think should win.
You can send nominations to me by email at email@example.com. If the error-correction is already publicly available, or if the nominee gives me permission, I’ll announce the winner by the end of the year. If there is no public recording and the nominee isn’t OK with the error-correction being publicized in this way, then I reserve the right to award them only a partial prize or none at all.
If, when you try to improve the world, you think about people but not about communities, you will tend to favor unsustainable net outflows of resources from your community. I wrote about this in Why I am not a Quaker. Effective Altruist (EA) and Rationalist communities such as the one in the San Francisco Bay Area suffer from this problem. Occasionally individuals - more often than not women, more often than not uncompensated and publicly unacknowledged - do something constructive about this problem. I’m now aware of one such effort where the person involved (Sarah Spikes) is publicly willing to accept support: The Berkeley REACH. The fundraiser page is here.
Here’s a common argument:
The problem with the poor is that they haven’t got enough money. There’s ample empirical evidence backing this up. Therefore, the obviously-correct poverty intervention is to simply give the poor cash. You might be able to do better than this, but it’s a solid baseline and you should often expect to find that interventions are worse than cash.
There are technical reasons to be skeptical of cash transfers - which is why it is so important that the cash transfer charity GiveDirectly is carefully researching what actually happens when they give people cash - but until fairly recently, these objections seemed to me like abstruse nitpicks about an intervention that was almost analytically certain to be strongly beneficial.
But they’re not just nitpicks. Cash isn’t actually the same thing as human well-being, and the assumption that it can be simply exchanged into pretty much anything else is not obviously true.
Of course, saying "X is possibly wrong" isn't very helpful unless we have a sense of how it's likely to be wrong, and under what circumstances. It's no good to treat cash transfers just the same as before while simply feeling gloomier about them.
I’m going to try to communicate an underlying model that generates the appropriate kind of skepticism about interventions like cash transfers, in a way that’s intuitive and not narrowly technical. I’ll begin with a parable, and then talk about how it relates to real-world cases.
This is a compact account of my current working hypothesis for what's wrong with our culture and what needs to be done.
In the past year, I have noticed that the Society of Friends (also known as the Quakers) has come to the right answer long before I or most people did, on a surprising number of things, in a surprising range of domains. And yet, I do not feel inclined to become one of them. Giving credit where credit is due is a basic part of good discourse, so I feel that I owe an explanation.
The virtues of the Society of Friends are the virtues of liberalism: they cultivate honest discourse and right action, by taking care not to engage in practices that destroy individual discernment. The failings of the Society of Friends are the failings of liberalism: they do not seem to have the organizational capacity to recognize predatory systems and construct alternatives.
Fundamentally, Quaker protocols seem like a good start, but more articulated structures are necessary, especially more closed systems of production.
If you don’t correct errors, you don’t get anything done, because you stay wrong. I don't think we do enough to reward saying oops.
Lately, I’ve been complaining about ways the EA community’s been papering over problems in ways that forgo this sort of learning. But while complaining is important, on its own it doesn’t offer any specific vision for how to do things. At the recent EA Global conference in Boston, I was reflecting with a friend on what sorts of positive norms I would like to see in the discourse.
One example of something I wish I saw more of is people publicly and very clearly saying, “We tried X, it didn’t work, so now we’re stopping.” Or, “I used to believe X, and as a result asked people to do Y, but now I don’t believe X anymore and don’t think Y is a particularly good use of resources.” People often invest a lot of social capital in their current beliefs and plans; admitting that you were wrong can cost you valuable social momentum and mean you have to start over. You might worry that people will associate you with wrongness. We need communities where, instead, clear admissions of error or failure are publicly acknowledged as signs of integrity and of commitment to communal learning and shared model-building.
So I'm offering a prize. But first, let me give an example of the sort of thing we need to be praising more loudly more often.
A parent I know reports (some details anonymized):
Recently we bought my 3-year-old daughter a "behavior chart," in which she can earn stickers for achievements like not throwing tantrums, eating fruits and vegetables, and going to sleep on time. We successfully impressed on her that a major goal each day was to earn as many stickers as possible.
This morning, though, I found her just plastering her entire behavior chart with stickers. She genuinely seemed to think I'd be proud of how many stickers she now had.
The Effective Altruism movement has now entered this extremely cute stage of cognitive development. EA is more than three years old, but institutions age differently than individuals.
The Open Philanthropy Project recently bought a seat on the board of the billion-dollar nonprofit AI research organization OpenAI for $30 million. Some people have said that this was surprisingly cheap, because the price in dollars was such a low share of OpenAI's eventual endowment: 3%.
To the contrary, this seat on OpenAI's board is very expensive, not because the nominal price is high, but precisely because it is so low.
If OpenAI hasn’t extracted a meaningful-to-it amount of money, then it follows that it is getting something other than money out of the deal. The obvious thing it is getting is buy-in for OpenAI as an AI safety and capacity venture. In exchange for a board seat, the Open Philanthropy Project is aligning itself socially with OpenAI, by taking the position of a material supporter of the project. The important thing is mutual validation, and a nominal donation just large enough to neg the other AI safety organizations supported by the Open Philanthropy Project is simply a customary part of the ritual.
By my count, the grant is larger than all the Open Philanthropy Project's other AI safety grants combined.
(Cross-posted at LessWrong.)