Category Archives: Discourse norms

Authoritarian empiricism

(Excerpts from a conversation with my friend Mack, very slightly edited for clarity and flow, including getting rid of most of the metaconversation.)

Ben: Just spent 2 full days offline for the holiday - feeling good about it, I needed it.

Mack: Good!

Ben: Also figured out some stuff about acculturation I got and had to unlearn, that was helpful

Mack: I'm interested if you feel like elaborating

Ben: OK, so, here's the deal.

I noticed over the first couple days of Passover that the men in the pseudo-community I grew up in seem to think there's a personal moral obligation to honor contracts, pretty much regardless of the coercion involved. The women seem to get that this increases the amount of violence in the world by quite a lot relative to optimal play, but they don't really tell the men. This seems related somehow to a thing where the men feel anxious about the prospect of modeling people as autonomous subjects - political creatures - instead of just objectifying them, but when they slap down attempts to do that, they pretend they're insisting on rigor and empiricism.

Which I'd wrongly internalized, as a kid, as good-faith critiques of my epistemics.

MeToo is good

In Locker room talk, I suggested that apparent coordination to shield sexual assaulters, harassers, or abusers might be much more local than it seemed. Since then, Donald Trump won the presidential election by a narrow margin, and the MeToo movement took off. The way the two phenomena have played out seems like strong evidence for the hypothesis that there were multiple strong coalitions with very different priorities, hidden from each other.

Half the country was at least willing to hold their noses for Trump, which I felt was a somewhat surprising display of tolerance for unambiguously awful behavior. But the apparently entrenched Harvey Weinstein was quickly dethroned, and a sitting Senator was pressured into resigning, suggesting that in some places the coalition against sexual abuse has great power.

What's amazing to me, though, is how discriminating the MeToo phenomenon has been, and how resistant it's been to spurious scapegoating dynamics.


Model-building and scapegoating

When talking about undesirable traits, we may want to use simple labels. On one hand, simple labels have the virtue of efficiently pointing to an important cluster of behavioral predictions. On the other, they tend to focus attention on the question of whether the person so described is good or bad, instead of on building shared models about the causal structure underlying the perceived problem.

Slate Star Codex recently posted a dialogue exploring this through the example of the term "lazy." (Ozy's response is also worth reading.) I think that Scott's analysis itself unfortunately focuses attention on the question of whether assigning simple labels to adverse traits is good or bad (or alternately, true or false) instead of on building shared models about the causal structure underlying the perceived problem.

When I call someone lazy, I am doing two things. The first is communicating factual information about that person, which can help others avoid the costs of trusting the lazy person with important tasks. This is shared model-building, and it's going to be more salient if you're focused on allocating resources to mitigate harm and produce things of value. In other words, if you're engaged in a community of shared production.

The second is creating a shared willingness to direct blame at that person. Once there's common knowledge that someone's considered blameworthy, they become the default target for exclusion if the group experiences a threat. This can be as simple as killing them and taking their stuff, so there's more per survivor to go around, but this can also take the form of deflecting the hostility of outsiders to the supposed one bad apple. This dynamic is called scapegoating, and it's going to be more salient when zero-sum dynamics are more salient.

Oops Prize update

I'm overdue to publish an update on the Oops Prize. It fell off my priority list because I received exactly one nomination. I followed up with the nominee and couldn't get enough clarification to confirm eligibility, but my sense was that while the nominee clearly changed their mind, it wasn't a particularly clear case of public error correction as specified in the prize criteria.

Since the Oops Prize remains unclaimed, I'm offering it again this year. To clarify, I don't think the prize amount is enough to incentivize overt error-correction on its own, but it might be enough to give people an incentive to bother informing me if such error correction is in fact happening.

If anyone at an EA Global conference this year publicly repudiates an old belief, along with the efforts they made and asked others to make on its basis, and explains what they're doing differently, then I'd like to celebrate this. Since talk is cheap, I'm offering $1,000 in prize money for the best example of such error correction: $900 to the person who most clearly reports changing their mind about something big they'd already invested their time, money, or credibility in and asked others to invest in, and $100 to the first person to nominate them. Self-nomination is encouraged.

To qualify, an entry has to have the following attributes:

  • It is explicitly error correction, not an account that spins things to look like a series of successes evolving over time, or "I used to think X, and now I think X'."
  • The nominee successfully encouraged a public commitment of resources based on the original belief (e.g. funds raised or volunteer hours).
  • There is a record of the error-correction statement. If it's not a recorded talk, an independent witness (neither the nominator nor the nominee) is enough evidence.
  • It happened at EA Global, and either was part of a scheduled talk, or an independent witness (neither the nominator nor the nominee) believes that at least ten people were present.

Anyone who speaks at EA Global this year is eligible for the prize, including leaders of EA organizations such as CEA, EAG leadership, and GiveWell / Open Philanthropy Project staff. If no qualifying entries are submitted, then no prize will be awarded. I am the sole, unaccountable judge of this, but will get people to check my work if I don't think anyone's eligible or feel like I'm too close to the person I think should win.

You can send nominations to me by email at benjaminrhoffman@gmail.com. If the error-correction is already publicly available, or if the nominee gives me permission, I’ll announce the winner by the end of the year. If there is no public recording and the nominee isn’t OK with the error-correction being publicized in this way, then I reserve the right to award them only a partial prize or none at all.

Defense against discourse

So, some writer named Cathy O’Neil wrote about futurists’ opinions about AI risk. The piece focused on futurists as social groups with different incentives, and didn’t really engage with the content of their arguments. Instead, she pointed out considerations like this:

First up: the people who believe in the singularity and are not worried about it. […] These futurists are ready and willing to install hardware in their brains because, as they are mostly young or middle-age white men, they have never been oppressed.

She doesn’t engage with the content of their arguments about the future. I used to find this sort of thing inexplicable and annoying. Now I just find it sad but reasonable.

On the construction of beacons

I am afraid of the anglerfish. Maybe this is why the comments on my blog tend to be so consistently good.

Recently, a friend was telling me about the marketing strategy for a project of theirs. They favored growth, in a way that I was worried would destroy value. I struggled to articulate my threat model, until I hit upon the metaphor of that old haunter of my dreamscape, the anglerfish.