


Specific Techniques for Inclusion

One lovely thing about having a bunch of rationalist friends is that if I whine about a problem, I get a bunch of specific ideas about how to fix it. Sometimes the whining has to be very specific, though.

What I Complained About

Some people don't feel comfortable on Less Wrong or in other rationalist communities. Apophemi wrote about why they don't identify with the rationalist community: some of the language and topics under discussion feel to them like personal threats. Less Wrong discussed the post here, though sadly I think a lot of people got mindkilled.

Apophemi's post was most directly a response to some stuff on Scott's blog, Slate Star Codex. Scott's response was basically that yes, there's a need for fora where particular groups of people can feel safe - but Less Wrong and the rationalist community are supposed to be a place that's safe for rationalists - where you won't get banned or ostracized or hated for bringing up an unpopular idea just because the evidence supports it. Implicitly Scott was modeling the discussion as considering two options: the status quo, or ban certain controversial topics entirely because they make some people uncomfortable.

Ben Kuhn then responded that Scott was ignoring the middle ground, and there are plenty of things rationalists can do to make the community feel more welcoming to people who are being excluded, without banning discussion of controversial topics.

Sounds reasonable enough. What's my problem with that? Not a single example.

It's easy enough to claim there's a middle ground - but there are reasons it might not be feasible in practice. For example, in some cases it really could be the existence of a discussion of the topic that's offensive, not the way people discuss it. (I think Apophemi feels this way about some things.) In others, there's very little gain from partial compliance. If right now 25% of Less Wrong commenters consistently and avoidably misgender people, and as a result of a campaign to educate people on how not to do this, half of them learn how to do it right, that's still 12.5% of commenters misgendering people - more than enough that it's still going to be a consistent low-level annoyance to people who don't identify with a traditional gender, or women who don't have obviously feminine pseuds, etc. So that's kind of a wasted effort.

So I whined about this, on Facebook. To my amazement and delight, I got some actual specific responses, from Robby and Ruthie. Since there was some overlap, I've tried to aggregate the discussion into a single list of ideas, plus my attempt to explain what these mean and why they might help. I combined ones that I think are basically the same idea, and dropped ones that are either about banning stuff (since the whole point was to find out whether there is in fact a feasible middle ground) or everyone refraining from bad behavior (I don't think that's a feasible solution unless we ban defectors, which also fails to satisfy the "middle ground" requirement).

Trigger Warnings

(Robby and Ruthie)

Some topics reliably make some people freak out. You might have had a very bad experience with something and find it difficult to discuss, or certain words might be associated with actual threats to your safety, in your experience. If you have enough self-knowledge to know that you will not be able to participate in those discussions rationally (or that you could, but the emotional cost is higher than you're willing to bear), then it would be helpful to have a handy warning that the article you're about to read contains a "trigger" that's relevant to you.

This concept can be useful outside of personal traumatic events too. There's a lot we don't know for sure about the human ancestral environment, but one thing that's pretty likely is that the part of the brain with social skills didn't evolve to deal with political groups with millions of members. Any political opinion favoring something that threatens you is going to feel like a meaningful threat to your well-being, to some extent, unless you unlearn this (if that's even possible). Since politics is the mind-killer, you're likely to have this response even if people are just discussing opinions that are often signs of affiliation with your group's political enemies. This is possible to unlearn, but it could be really helpful to know what kind of discussion you're going into, in advance.

For example, I'm Jewish by birth. When people start saying nice things about Hitler and the Nazis, it makes me feel sad, and a little threatened. If it's just a discussion of pretty uniforms or monetary policy, and not really about killing Jews at all, then there's no reason at all to construe it as a direct threat to my safety - but it's still helpful for me to be able to steel myself for the inevitable emotional reaction in advance.

Content warnings have the advantage of being fairly unambiguous. Someone who believes in "human biodiversity" might not agree that their discussion about it is threatening to black people - but I bet they'd agree that the discussion involves making generalizations about people based on racial categories. Someone who wants to vent about bad experiences involving white men might not agree that they are calling me a bad person - but I bet they'd agree that they are sharing anecdotes that are not necessarily representative, about people in a certain demographic.

The other nice thing about this solution is that right now it basically hasn't been done at all on Less Wrong. It seems reasonably likely that if a few prominent posters modeled this behavior (or a few commenters consistently suggested the addition of trigger or other offensive content warnings at the top of certain posts), it would be widely adopted.

The downside is that some uses of trigger warnings, while widespread on the internet (so there may be an off-the-shelf solution), would require a technical implementation, which means someone actually has to modify the site's code. This limits the set of people who can implement that part, but it's not insurmountable.

I'm not really sure this one has any clear disadvantages, except that some people may find content warnings themselves offensive.

Add a tag system for common triggers, so people can at a glance see where an information-hazard topic or conversation thread has arisen, and navigate the site safely. This is a really easy and obvious solution to Apophemi and Scott's dispute, and it benefits both of them (since it can be used both to tag politics/SJ discussion and to tag e.g. rape discussions), so I'm amazed this proposal hasn't been the central object of discussion in the conversation so far.


Widely implemented elsewhere. We can help people who acknowledge that they don't want to be around certain topics stay away from them. It also gives those who want to be part of overly frank discussions a response to give to those who criticize them for being overly frank.
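To make the suggestion concrete, here is a minimal sketch of what a tag-based content-warning filter could look like. Everything here - the post structure, the tag names, the function - is invented for illustration; a real version would hook into the site's existing post model and per-user settings.

```python
# Hypothetical sketch of a tag-based content-warning system.
# Posts carry a set of warning tags; readers list tags they want to avoid,
# and the site hides any post carrying one of those tags.

POSTS = [
    {"title": "Monetary policy under the Nazis", "tags": {"nazism"}},
    {"title": "Meetup announcements", "tags": set()},
    {"title": "Gender ratios on LW", "tags": {"gender"}},
]

def visible_posts(posts, muted_tags):
    """Return only the posts that carry none of the reader's muted tags."""
    return [p for p in posts if not (p["tags"] & muted_tags)]

safe = visible_posts(POSTS, muted_tags={"nazism"})
print([p["title"] for p in safe])
# → ['Meetup announcements', 'Gender ratios on LW']
```

The point of the sketch is how little machinery is involved: the hard part is social (agreeing on tag conventions and getting posters to use them), not technical.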


Make it Explicit That People From Underrepresented Groups are Welcome

(Robby and Ruthie)

The downside of this one is that for women, at least, it's kind of already been done. A few years ago there were a bunch of front-page posts on the topic of what if anything needed to change to make sure women weren't unnecessarily pushed away by Less Wrong. But apparently Eliezer's old post on the topic actually offended some women, who felt stereotyped and misunderstood by it. A post with the same goal that didn't cause those reactions might do better.

I don't feel like this is a very good summary so I'm going to quote Robby and Ruthie directly:

Express an interest in women joining the site. Make your immediate reaction to the idea of improved gender ratios 'oh cool that means we get more people, including people with importantly different skills and backgrounds', not 'why would we want more women on this site?' or a change of topic to e.g. censorship.

- Robby

If more women posted and commented they might move the overall tone of discourse in a direction more appealing for other women. Maybe not. You could do blinded studies (have women and men write anonymized posts about anything, ask women and men which they would upvote, downvote). Again, this would be hard to do well.

- Ruthie

Put in an extra effort to draw women researchers, academics, LW-post-writers, speakers, etc.


Recruit More Psychologists


I can't substantively improve on the original:

If LW is primarily a site about human rationality (as opposed to being primarily a site about Friendly Artificial Intelligence), then it should be dominated by psychologists, not by programmers. Psychologists are mostly women. Advertising to psych people would therefore simultaneously make this site better at human-rationality scholarship and empiricism, and better at gender equity.




Appoint an Ombudsman

An "Ombudsman" is someone who works for an institution, and whose primary responsibility is listening to people's complaints and working with the institution to resolve them. A dedicated person is important for two reasons. First, it can be easier to communicate a complaint to someone who wasn't directly involved in doing the thing you're complaining about. Second, the site/community leaders may not have the time, attention, willingness, or expertise to listen to or understand a particular kind of complaint - maybe their comparative advantage is in building new things, not listening to people's problems.

I have no idea how this would work, but it was suggested to help solve problems on the EA Facebook group and seems to have traction at least as an idea there. If they implement it and are successful, LW could follow suit.


Write Rationalist-Friendly Explanations

It would be silly if rationalists weren't at least a little bit better about rationality than everyone else. Unfortunately, this means everyone else is a little bit worse, on average. Including feminists. That doesn't mean they're wrong, but it does mean that many popular explanations of feminist, antiracist, and social justice concepts may mix together some good points with some real howlers. These explanations may also come across as outright hostile to the typical Less Wrong demographic. So as a result, many rationalists will not read these things, or will read them and reject them as making no sense (and this is sometimes a correct judgment).

The problem is that some of these ideas are true or helpful even if someone didn't argue for them properly, and feminists or others on Less Wrong might have to explain the whole thing all over again every single time they want to have a productive discussion with a new person using a concept like sexism. This is a lot of extra work, and understandably frustrating. A carefully-argued account of some key relevant concepts would be extremely valuable, and might even be an appropriate addition to the Sequences. Brienne's post on gender bias is a great start, and there's probably lots of other great stuff out there hiding in the other ninety percent.

Build resources (FAQs, blog posts, etc.) educating LWers about e.g. gender bias and accumulation of advantage. Forcing women to re-argue things like 'is sexism a thing?' every time they want to treat it as a premise is exhausting and alienating.


Get Data

This one's a real head-slapper - Less Wrong is supposed to be all about this. There's a problem and we don't know how to solve it. How about we get more information about what's causing it? Find the people who would be contributing to or benefiting from the rationalist community if only they didn't feel pushed away or excluded by some things we do. (And the people who only just barely think it's worth it - they're probably similar to the people who just barely think it's not worth it.)

Collect and analyze more-than-anecdata on women and minority behavior around LW

The existing survey data may have a lot of insight. Adding more targeted questions to next year's survey could help more. It's hard to give surveys to the category of people who feel like they were turned away from LW, but if anyone can think of a good way to reach this group, we may be able to learn something from them.
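As a concrete illustration of the kind of analysis the survey data could support, here is a minimal sketch that compares self-reported scores across demographic groups. The data, field names, and the "welcome score" question are all invented for illustration, not taken from any actual LW survey.

```python
# Hypothetical sketch of a subgroup comparison on survey data.
# Each row is one (invented) survey response; we average a hypothetical
# "how welcome do you feel here?" score within each demographic group.

from collections import defaultdict
from statistics import mean

responses = [
    {"group": "women", "welcome_score": 3},
    {"group": "women", "welcome_score": 4},
    {"group": "men", "welcome_score": 5},
    {"group": "men", "welcome_score": 4},
]

def mean_by_group(rows):
    """Average the welcome score within each demographic group."""
    buckets = defaultdict(list)
    for row in rows:
        buckets[row["group"]].append(row["welcome_score"])
    return {group: mean(scores) for group, scores in buckets.items()}

print(mean_by_group(responses))
# → {'women': 3.5, 'men': 4.5}
```

A real analysis would of course need far more care - representative sampling, confidence intervals, and some way to reach the people who already left - but even a crude cut like this beats pure anecdote.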

Try to find out more about how people perceive different kinds of rhetoric

This would be hard, but I'd be really interested in the outcome. Some armchair theories about how friendly different kinds of people expect discourse to be strike me as plausible. If there are really differences, offense might be prevented by using different words to say the same things. If not, we could stop throwing this accusation around.


Go Meta


Less Wrong is supposed to be all about this one too. Some people consistently think other people are unreasonable and find it difficult to have a conversation with them - and vice versa. Maybe we should see if there are any patterns to this? Like the illusion of transparency, or taking offense being perceived as an attack on the offender's status.

One of my favorite patterns is when person A says that behavior X (described very abstractly) is horrible, and person B says how can you possibly expect people to refrain from behavior X. Naturally, they each decide that the other is a bad person, and also wrong on the internet. Then after much arguing, person A gives an example, and person B says "That's what you were talking about the whole time? People actually do that?! No wonder you're so upset about it!" Or person B gives an example of the behavior they think is reasonable, and person A says "I thought it went without saying that your example is okay. Why would you think anyone objected to that? It's perfectly reasonable!" It's kind of a combination of the illusion of transparency and generalizing from one example, where you try to make sense of the other person's abstract language by mapping it onto the most similar event you have personally experienced or heard about.

I bet there are lots of other patterns that, if we understood them better, we could build shortcuts around.

If well-intentioned people understood why conversations about gender so often become so frustrating before having a conversation about gender, it might lead to higher quality conversations about gender.


Taboo Unhelpful Words More


Rationalist Taboo is when, if you seem to disagree about what a word means, you stop using it and use more specific language instead. Sometimes this can dissolve a disagreement entirely. In other cases, it just keeps the conversation substantive, about things rather than definitions. I definitely recall reading discussions on Less Wrong and thinking, "somebody should suggest tabooing the word 'feminist' here" (or "sexist" or "racist"). Guess what? I'm somebody! I'll try to remember to do that next time; I think a few people committed to helping on this one could be super helpful.

Taboo words

Possibly on a per-conversation basis. "Feminist" is a pretty loaded word for me, and people say things like this which don't apply closely to me, and I feel threatened because I identify with the word.

Scott Alexander also suggested this in the same context in his response to Apophemi on his blog (a bit more than halfway down the page). It can improve the quality of discourse simply by forcing people to use relevant categories instead of easy ones.

Higher standards of justification for sensitive topics

A lot of plausible-but-badly-justified assertions about gender are thrown around, and not always subjected to much scrutiny. These can put harmful ideas in people's minds without at least giving us reason to believe that they're true, and they're slippery to argue against. Saying exactly what you mean and justifying it is probably the best way to defend against unreasonable accusations of sexism. If people accuse you of sexism, they'll at least be reasonable. I think taboo words can go a long way towards achieving this.


Build a Norm That You Can Safely Criticize and Be Criticized For "Offensive" Behavior


I have no idea how hard or easy this is. Less Wrong seems like it's already an unusually safe place to say "oops, I was wrong." But somehow people seem not to do a good job becoming more curious about certain things like sexism. If I understand correctly (her wording's a little telegraphic to me), Ruthie suggested a stock phrase for people correcting their own language, "let me try again." It would be nice to come up with a similarly friendly way to say that you think someone is talking in an unhelpful way, but don't intend to thereby lower their status - you just want to point it out so they will change their behavior to stop hurting you.

Better ways to call people out for bad behavior

Right now, talking about gender in almost any form is asking for a fight. I hold my tongue about a lot of minor things that bother me, because calling people out causes people to get defensive instead of considering the correction. A strong community norm of taking criticism in a certain form seriously could help us not quarrel about minor things. Someone I know suggested "let me try again" as a template for correcting offensive speech, and I like the idea a lot.

Successfully correcting when called out can also help build goodwill. If you are sometimes willing to change your rhetoric, then on the occasions when you say it's important and decline to change it, I take you more seriously.

Our only current mechanism is downvoting, but it's hard to tell why a thing has been downvoted.


A Call For Action

If you are at all involved or interested in the rationalist community: The next time you are tempted to spend your precious time or energy complaining about how the community excludes people, or complaining about how the people who feel excluded want complete control over what is talked about instead, consider spending that resource on advancing one of these projects instead, to make the problem actually go away.