Category Archives: Cooperation

Financial investment is just a symbolic representation of investment projected onto a low-dimensional space inside a control system run by the US government

Nassim Nicholas Taleb recommends that, rather than the balanced portfolio prescribed by portfolio theory, we follow a "barbell" strategy of putting most of our assets in a maximally safe, stable investment and making small, sustainable bets with very high potential upside. If taken literally, this can't work, because no such safe asset class exists. Continue reading

Culture, interpretive labor, and tidying one's room

While tidying my room, I felt the onset of the usual cognitive fatigue. But this time, I didn't just want to bounce off the task - I was curious. When I inspected the fatigue to see what it was made of, it felt similar to when I'm trying to thread a rhetorical needle - for instance, between striking too neutral a tone for anyone to understand the relevance of what I'm saying, and too bold a tone for my arguments to be taken literally. In short, I was shouldering a heavy burden of interpretive labor.

Why would tidying my room involve interpretive labor? Continue reading

Oops Prize update

I'm overdue to publish an update on the Oops Prize. It fell off my priority list because I received exactly one nomination. I followed up with the nominee and couldn't get enough clarification to confirm eligibility, but my sense was that while the nominee clearly changed their mind, it wasn't a particularly clear case of public error correction as specified in the prize criteria.

Since the Oops Prize remains unclaimed, I'm offering it again this year. To clarify, I don't think the prize amount is enough to incentivize overt error-correction on its own, but it might be enough to give people an incentive to bother informing me if such error correction is in fact happening.

If anyone at an EA Global conference this year publicly repudiates an old belief, along with the efforts they made and asked others to make on that basis, and explains what they're doing differently, then I'd like to celebrate this. Since talk is cheap, I'm offering $1,000 in prize money for the best example of such error correction: $900 to the person who most clearly reports changing their mind about something big they'd already invested their time, money, or credibility in and asked others to invest in, and $100 to the first person to nominate them. Self-nomination is encouraged.

To qualify, an entry has to have the following attributes:

  • It is explicitly error correction, not an account that spins things to look like a series of successes evolving over time, or "I used to think X, and now I think X'."
  • The nominee successfully encouraged a public commitment of resources based on the original belief (e.g. funds raised or volunteer hours).
  • There is a record of the error-correction statement. If it's not a recorded talk, the testimony of an independent witness (neither the nominator nor the nominee) is enough evidence.
  • It happened at EA Global, and either was part of a scheduled talk, or an independent witness (neither the nominator nor the nominee) believes that at least ten people were present.

Anyone who speaks at EA Global this year is eligible for the prize, including leaders of EA organizations such as CEA, EAG leadership, and GiveWell / Open Philanthropy Project staff. If no qualifying entries are submitted, then no prize will be awarded. I am the sole, unaccountable judge of this, but I will get people to check my work if I don't think anyone's eligible, or if I feel like I'm too close to the person I think should win.

You can send nominations to me by email at benjaminrhoffman@gmail.com. If the error-correction is already publicly available, or if the nominee gives me permission, I’ll announce the winner by the end of the year. If there is no public recording and the nominee isn’t OK with the error-correction being publicized in this way, then I reserve the right to award them only a partial prize or none at all.

Humans need places

If, when you try to improve the world, you think about people but not about communities, you will tend to favor unsustainable net outflows of resources from your community. I wrote about this in Why I am not a Quaker. Effective Altruist (EA) and Rationalist communities such as the one in the San Francisco Bay Area suffer from this problem. Occasionally individuals - more often than not women, more often than not uncompensated and publicly unacknowledged - do something constructive about it. I'm now aware of one such effort where the person involved (Sarah Spikes) is publicly willing to accept support: the Berkeley REACH. The fundraiser page is here.

Continue reading

Nightmare of the Perfectly Principled

My actual literal nightmares about civilizational collapse somehow manage to be insanely optimistic about human nature.

I dreamt that in response to the news of the Trumps’ probable successful intimidation or bribery of their New York prosecutors, the US devolved into a lawless hellscape, since the last shreds of pretense of “we’re punishing you because it’s what the law says” were gone. In my dream, I successively wished I’d transferred more of my assets to paper, then money, then gold, then firearms, as I realized how far things had gone.

If I’d been thinking sanely, the thing I should have wished I’d accumulated is the only real source of safety in a state of war: a bigger, better gang. But fundamentally, I should have known better than to imagine that things would collapse quickly.

What was I getting wrong? I was tacitly assuming that the majority of people were perfectly principled. Continue reading

Defense against discourse

So, some writer named Cathy O'Neil wrote about futurists' opinions about AI risk. The piece focuses on futurists as social groups with different incentives, and doesn't really engage with the content of their arguments. Instead, she points out considerations like this:

First up: the people who believe in the singularity and are not worried about it. […] These futurists are ready and willing to install hardware in their brains because, as they are mostly young or middle-age white men, they have never been oppressed.

She doesn’t engage with the content of their arguments about the future. I used to find this sort of thing inexplicable and annoying. Now I just find it sad but reasonable. Continue reading

On the construction of beacons

I am afraid of the anglerfish. Maybe this is why the comments on my blog tend to be so consistently good.

Recently, a friend was telling me about the marketing strategy for a project of theirs. They favored growth in a way that I worried would destroy value. I struggled to articulate my threat model, until I hit upon the metaphor of that old haunter of my dreamscape, the anglerfish. Continue reading

Why I am not a Quaker (even though it often seems as though I should be)

In the past year, I have noticed that the Society of Friends (also known as the Quakers) came to the right answer long before I or most people did, on a surprising number of things, in a surprising range of domains. And yet, I do not feel inclined to become one of them. Giving credit where credit is due is a basic part of good discourse, so I feel that I owe an explanation.

The virtues of the Society of Friends are the virtues of liberalism: they cultivate honest discourse and right action, by taking care not to engage in practices that destroy individual discernment. The failings of the Society of Friends are the failings of liberalism: they do not seem to have the organizational capacity to recognize predatory systems and construct alternatives.

Fundamentally, Quaker protocols seem like a good start, but more articulated structures are necessary, especially more closed systems of production. Continue reading