Tag Archives: Scott Alexander

On the construction of beacons

I am afraid of the anglerfish. Maybe this is why the comments on my blog tend to be so consistently good.

Recently, a friend was telling me about the marketing strategy for a project of theirs. They favored growth, in a way that I was worried would destroy value. I struggled to articulate my threat model, until I hit upon the metaphor of that old haunter of my dreamscape, the anglerfish.

Why I am not a Quaker (even though it often seems as though I should be)

In the past year, I have noticed that the Society of Friends (also known as the Quakers) came to the right answer long before I or most people did, on a surprising number of things, in a surprising range of domains. And yet, I do not feel inclined to become one of them. Giving credit where credit is due is a basic part of good discourse, so I feel that I owe an explanation.

The virtues of the Society of Friends are the virtues of liberalism: they cultivate honest discourse and right action, by taking care not to engage in practices that destroy individual discernment. The failings of the Society of Friends are the failings of liberalism: they do not seem to have the organizational capacity to recognize predatory systems and construct alternatives.

Fundamentally, Quaker protocols seem like a good start, but more articulated structures are necessary, especially more closed systems of production.

The Appearances and The Things Themselves

Here's a neat puzzle by Scott:

My dermatology lecture this morning presents: one of those Two Truths and a Lie games. You choose which two you think are true and – special house rule – give explanations for why. The explanations do not require specialized medical knowledge beyond the level of a smart amateur. Answers tomorrow-ish.

1. Significantly more Americans get skin cancer on the left half of the face than on the right half.

2. People who had acne as children live on average four years longer than those who did not.

3. In very early studies, Botox has shown great promise as a treatment for depression.

My thoughts are below the fold; you may want to guess first.


Null Results

Humans tend to look for evidence that reinforces our beliefs, not evidence that contradicts them. This is called confirmation bias. A related problem is that people tend to publish results that show various treatments or interventions working, but not results that show them failing to work, because the latter are less interesting. This is called publication bias.

In the spirit of combating those things, I'm going to share some things that didn't work for me.

Rolfing

Rolfing is basically a kind of massage (though Rolfers insist there's a difference), with a slight amount of evidence that it produces feelings of relaxation over longer periods of time than other massage methods. Rolfers are certified by the Rolf Institute, and put you through a series of ten sessions. Then you're done, permanently. They say it takes a few months for the changes to become manifest, and it's been a few months, so I'm now ready to talk about what it did for me.

Rolfing is famously uncomfortable. During sessions I was asked to give lots of feedback about how intense the pressure was on a scale from 1 to 10 (the target was something like 6 or 7), and since I am a tough-guy idiot I was reluctant to actually say "hey, that's an 8," so I probably endured some unnecessary pain. Some parts of the massage are also kind of weird-feeling, like the session that focused on my chest - I'm not used to strong pressure on my ribs or sternum - but ultimately it was bearable, and I felt very good after the sessions.

During the sessions, when the pressure was particularly intense and a little painful I turned it into kind of a mindfulness practice. I would focus on the sensation, in detail, and to some extent that defused the distress. This should be familiar to many people who have meditated. And it's positive evidence for the effectiveness of meditation in teaching a certain kind of mental control.

The really nice thing about Rolfing was that I finally got a massage with a sufficient amount of pressure. Even though it was sometimes painful, it was a satisfying experience. I think this preference for massages with a lot of pressure might be hereditary. When I was a kid I'd give my mom shoulder massages so hard my hands hurt, and that was just barely enough pressure for her. I'm totally the same way - I've never gotten a shoulder massage that had too much pressure - although I have had just barely enough, during Rolfing. If you don't like lots of pressure you probably won't like Rolfing.

Some minor aspects of my posture seemed to be improved, and I got unsolicited compliments on my posture from people who didn't know I was going through a course of Rolfing, but now, several months later, I seem to have backslid significantly. For example, my feet, which used to point outwards but after Rolfing pointed straight ahead, point outwards again. I haven't experienced the kind of enhanced bodily awareness I'd heard about anecdotally, which was what initially interested me in Rolfing.

On the whole it wasn't a huge win, but it might have just barely been worth the time and money for me. If you don't have or make much money, and don't have reasons beyond general well-being to try it, I can't recommend it; it's too costly an experiment.

Anonymous Feedback Form

On Less Wrong, Gwern wrote about putting up a personal anonymous feedback form. I thought the idea was really cool, so I put one up myself on January 16th. There are links on my about page, on Twitter, on Facebook, and here. Aside from my own tests, I have received zero responses.

Hypotheses:

1) I'm not doing anything wrong, so nobody needs to get in touch with me.

2) Nobody really cares what I do so they're not motivated to provide feedback.

3) The feedback form is insufficiently visible.

4) Somehow the link or form is broken. (I had a friend test it out and it seemed to work.)

5) I'm so much less popular than Gwern that zero responses over two months is not far from the statistically expected result (a quick sanity check of this below).
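To see why hypothesis 5 is plausible, here's a minimal back-of-the-envelope sketch treating feedback submissions as a Poisson process. Both input numbers are hypothetical, purely for illustration - I don't actually know Gwern's response rate or our relative traffic:

```python
# Toy model: feedback submissions arrive as a Poisson process.
# Both rate numbers below are made up for illustration.
from math import exp

gwern_responses_per_month = 10  # hypothetical rate for a much-read site
traffic_ratio = 0.01            # hypothetical: my readership vs. Gwern's
months = 2

expected = gwern_responses_per_month * traffic_ratio * months
p_zero = exp(-expected)  # Poisson: P(X = 0) = e^(-lambda)

print(f"Expected responses over {months} months: {expected:.2f}")  # 0.20
print(f"Probability of zero responses: {p_zero:.0%}")              # 82%
```

With numbers in that ballpark, zero is the single most likely outcome, so the silence is only weak evidence about hypotheses 1 through 4.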

Love

"Has your kind really evolved separate information-processing mechanisms for deoxyribose nucleic acid versus electrochemical transmission of synaptic spikes?"

"I don't really understand the question's purpose," Akon said.  "Our genes are made of deoxyribose nucleic acid.  Our brains are made of neurons that transmit impulses through electrical and chemical -"

The fake man's head collapsed to his hands, and he began to bawl like a baby.

[...]

The fake man suddenly unfolded his head from his hands.  His cheeks were depicted as streaked with tears, but the face itself had stopped crying.  "To wait so long," the voice said in a tone of absolute tragedy.  "To wait so long, and come so far, only to discover that nowhere among the stars is any trace of love."

"Love?" Akon repeated.  "Caring for someone else?  Wanting to protect them, to be with them?  If that translated correctly, then 'love' is a very important thing to us."

"But!" cried the figure in agony, at a volume that made Akon jump.  "But when you have sex, you do not untranslatable 2!  A fake, a fake, these are only imitation words -"

"What is 'untranslatable 2'?" Akon said; and then, as the figure once again collapsed in inconsolable weeping, wished he hadn't.

"They asked if our neurons and DNA were separate," said the Ship's Engineer.  "So maybe they have only one system. [...]"

"They share each other's thoughts when they have sex," the Master of Fandom completed.  "Now there's an old dream.  And they would develop emotions around that, whole patterns of feeling we don't have ourselves...  Huh.  I guess we do lack their analogue of love."

-Three Worlds Collide

I used to think that traditional literary descriptions of the emotion of sexual love were just hyperbole for dramatic or comic effect, and really people just felt caring or lust or both. Recently I found out that some people say they actually have those sensations in that way, though I still can't quite alieve it.

Here's an interesting take on the emotion of love. Read the whole thing. It's good.

What I want to know about this is - do you recognize this emotion? Forget the author's opinions about what you should do about love, and what love means, for a moment - forget it's even called love. Does the author's description of how the emotion manifests physically ring true? Is this one specific recognizable feeling that you have actually experienced?

I don't recognize this sensation at all:

Love is a feeling. It's hot and fluttery and tingly. I get it in my guts and chest and face. The feeling is accompanied by a series of enthusiastic thoughts, such as "This person is the greatest person ever", "I wonder how I can make this person feel good", and/or "I want to climb onto this person and put my face close to their face and smoosh my body onto their body."

I know what feeling goes along with the first thought. And I know what feeling goes along with the third. (The second is ambiguous.) They're completely different feelings. I don't think my experience is vanishingly rare, either - but I'm not sure, which is why I want to hear from you, whether or not you've experienced the emotion described in the linked article.

I've had deep feelings of caring toward some people. I've experienced admiration. I've experienced lust. I've had all these feelings toward some people. But for me they are totally different feelings: they feel completely different. And I never thought of using the word "love" to describe any of them, alone.

Generally, when I say I love someone, I am talking about a more permanent disposition, over a longer period of time than a single emotion. I mean that I generally feel caring toward them, and don't expect that disposition to change in the foreseeable future.

I think there's a specific emotion or combination of feelings that some people experience, and mean when they talk about love, that other people never experience. It's hard for me to write this, because even thinking about this makes me worry that I'm defective, that I'll never love the people I care about the way they would want to be loved.

I think people who talk about love often talk past each other because of the typical mind fallacy, and the illusion of transparency. People who have that "love" emotion see other people who don't bonding romantically and talking about love, and assume that they feel the same thing inside. People who don't have that emotion see people who do making long-term commitments and talking about love, and assume that it's just a word for the behavior.

I don't like summarizing anything from Wittgenstein's Philosophical Investigations because, unlike with some other books, an adequate summary would be nearly as long as the whole thing. But in one place Wittgenstein gives an analogy for purely personal subjective experience.

Imagine that everyone carries around a small box. And no one looks into any box aside from their own. But people say "there is a beetle in my box", and refer to the thing in their box as a beetle. Now, is it meaningful to ask whether someone else's box really has a beetle? It's not a falsifiable statement - after all, you're not going to look inside someone else's box to find out whether their "beetle" looks like yours. The subjective experience of love is very similar to this beetle in a box.

Not quite, though - the description in the linked article is specific enough that I can tell whether I've had that experience. It would be nice to have a more precise vocabulary of emotions and sensations, to avoid this type of confusion. I wonder what other confusions are caused by a large number of people missing out on some "universal human experiences."

Of course, there's the alternative hypothesis: that people who describe love as an emotion are the mistaken ones, and that they're just giving another name to lust, or to caring, experienced under special circumstances. But limerence is apparently a real thing, and if that's real, then the milder feelings of romantic love are less improbable, so they're likely real as well.

Specific Techniques for Inclusion

One lovely thing about having a bunch of rationalist friends is that if I whine about a problem, I get a bunch of specific ideas about how to fix it. Sometimes the whining has to be very specific, though.

What I Complained About

Some people don't feel comfortable on Less Wrong or in other rationalist communities. Apophemi wrote that they don't identify with the rationalist community because some of the language and topics under discussion feel to them like personal threats. Less Wrong discussed the post here, though sadly I think a lot of people got mindkilled.

Apophemi's post was most directly a response to some stuff on Scott's blog, Slate Star Codex. Scott's response was basically that yes, there's a need for fora where particular groups of people can feel safe - but Less Wrong and the rationalist community are supposed to be a place that's safe for rationalists - where you won't get banned or ostracized or hated for bringing up an unpopular idea just because the evidence supports it. Implicitly Scott was modeling the discussion as considering two options: the status quo, or ban certain controversial topics entirely because they make some people uncomfortable.

Ben Kuhn then responded that Scott was ignoring the middle ground, and there are plenty of things rationalists can do to make the community feel more welcoming to people who are being excluded, without banning discussion of controversial topics.

Sounds reasonable enough. What's my problem with that? Not a single example.

It's easy enough to claim there's a middle ground - but there are reasons it might not be feasible in practice. For example, in some cases it really could be the existence of a discussion of the topic that's offensive, not the way people discuss it. (I think Apophemi feels this way about some things.) In others, there's very little gain from partial compliance. If right now 25% of Less Wrong commenters consistently and avoidably misgender people, and as a result of a campaign to educate people on how not to do this, half of them learn to do it right, that's still 12.5% of commenters misgendering people - more than enough that it's still going to be a consistent low-level annoyance to people who don't identify with a traditional gender, or women who don't have obviously feminine pseuds, etc. So that's kind of a wasted effort.

So I whined about this, on Facebook. To my amazement and delight, I got some actual specific responses, from Robby and Ruthie. Since there was some overlap, I've tried to aggregate the discussion into a single list of ideas, plus my attempt to explain what these mean and why they might help. I combined ones that I think are basically the same idea, and dropped ones that are either about banning stuff (since the whole point was to find out whether there is in fact a feasible middle ground) or everyone refraining from bad behavior (I don't think that's a feasible solution unless we ban defectors, which also fails to satisfy the "middle ground" requirement).

Trigger Warnings

(Robby and Ruthie)

Some topics reliably make some people freak out. You might have had a very bad experience with something and find it difficult to discuss, or certain words might be associated with actual threats to your safety, in your experience. If you have enough self-knowledge to know that you will not be able to participate in those discussions rationally (or that you could, but the emotional cost is higher than you're willing to bear), then it would be helpful to have a handy warning that the article you're about to read contains a "trigger" that's relevant to you.

This concept can be useful outside of personal traumatic events too. There's a lot we don't know for sure about the human ancestral environment, but one thing that's pretty likely is that the part of the brain that handles social reasoning didn't evolve to deal with political groups with millions of members. Any political opinion favoring something that threatens you is going to feel, to some extent, like a meaningful threat to your well-being. Since politics is the mind-killer, you're likely to have this response even when people are just discussing opinions that are often signs of affiliation with your group's political enemies. It may be possible to unlearn this reaction, but even so, it can be really helpful to know in advance what kind of discussion you're going into.

For example, I'm Jewish by birth. When people start saying nice things about Hitler and the Nazis, it makes me feel sad, and a little threatened. If it's just a discussion of pretty uniforms or monetary policy, and not really about killing Jews at all, then there's no reason to construe it as a direct threat to my safety - but it's still helpful for me to be able to steel myself for the inevitable emotional reaction in advance.

Content warnings have the advantage of being fairly unambiguous. Someone who believes in "human biodiversity" might not agree that their discussion about it is threatening to black people - but I bet they'd agree that the discussion involves making generalizations about people based on racial categories. Someone who wants to vent about bad experiences involving white men might not agree that they are calling me a bad person - but I bet they'd agree that they are sharing anecdotes that are not necessarily representative, about people in a certain demographic.

The other nice thing about this solution is that right now it basically hasn't been done at all on Less Wrong. It seems reasonably likely that if a few prominent posters modeled this behavior (or a few commenters consistently suggested the addition of trigger or other offensive content warnings at the top of certain posts), it would be widely adopted.

The downside is that some uses of trigger warnings, while widespread on the internet (so there may be an off-the-shelf solution), would require a technical implementation, which means someone actually has to modify the site's code. This limits the set of people who can implement that part, but it's not insurmountable.

I'm not really sure this one has any clear disadvantages, except that some people may find content warnings themselves offensive.

Add a tag system for common triggers, so people can at a glance see where an information-hazard topic or conversation thread has arisen, and navigate the site safely. This is a really easy and obvious solution to Apophemi and Scott's dispute, and it benefits both of them (since it can be used both to tag politics/SJ discussion and to tag e.g. rape discussions), so I'm amazed this proposal hasn't been the central object of discussion in the conversation so far.

-Robby

Widely implemented elsewhere. We can help people who acknowledge that they don't want to be around certain topics stay away from them. It also gives those who want to be part of overly frank discussions a response to give to those who criticize them for being overly frank.

-Ruthie
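For what it's worth, the mechanical part of such a tag system could be quite simple; the hard part is social, not technical. Here's a minimal sketch in Python - the data model and names are hypothetical, with no claim about how Less Wrong's actual codebase works:

```python
# Minimal sketch of a per-user content-warning filter.
# The data model and names here are hypothetical - this is not
# Less Wrong's actual code, just an illustration of the idea.

from dataclasses import dataclass, field

@dataclass
class Post:
    title: str
    body: str
    content_warnings: set[str] = field(default_factory=set)

@dataclass
class User:
    name: str
    muted_warnings: set[str] = field(default_factory=set)

def visible_posts(posts: list[Post], user: User) -> list[Post]:
    """Hide any post tagged with a warning this user has muted."""
    return [p for p in posts if not (p.content_warnings & user.muted_warnings)]

posts = [
    Post("On calibration", "..."),
    Post("A heated object-level debate", "...", {"politics"}),
]
reader = User("alice", muted_warnings={"politics"})
print([p.title for p in visible_posts(posts, reader)])  # ['On calibration']
```

The filtering itself is a one-liner; as noted above, the real work would be getting posters to apply the tags consistently.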

Make it Explicit That People From Underrepresented Groups are Welcome

(Robby and Ruthie)

The downside of this one is that for women, at least, it's kind of already been done. A few years ago there were a bunch of front-page posts on the topic of what if anything needed to change to make sure women weren't unnecessarily pushed away by Less Wrong. But apparently Eliezer's old post on the topic actually offended some women, who felt stereotyped and misunderstood by it. A post with the same goal that didn't cause those reactions might do better.

I don't feel like this is a very good summary so I'm going to quote Robby and Ruthie directly:

Express an interest in women joining the site. Make your immediate reaction to the idea of improved gender ratios 'oh cool that means we get more people, including people with importantly different skills and backgrounds', not 'why would we want more women on this site?' or a change of topic to e.g. censorship.

-Robby

If more women posted and commented they might move the overall tone of discourse in a direction more appealing for other women. Maybe not. You could do blinded studies (have women and men write anonymized posts about anything, ask women and men which they would upvote, downvote). Again, this would be hard to do well.

-Ruthie

Put in an extra effort to draw women researchers, academics, LW-post-writers, speakers, etc.

-Robby

Recruit More Psychologists

(Robby)

I can't substantively improve on the original:

If LW is primarily a site about human rationality (as opposed to being primarily a site about Friendly Artificial Intelligence), then it should be dominated by psychologists, not by programmers. Psychologists are mostly women. Advertising to psych people would therefore simultaneously make this site better at human-rationality scholarship and empiricism, and better at gender equity.

-Robby

Ombudsperson

(Ruthie)

An "Ombudsman" is someone who works for an institution, and whose primary responsibility is listening to people's complaints and working with the institution to resolve them. A dedicated person is important for two reasons. First, it can be easier to communicate a complaint to someone who wasn't directly involved in doing the thing you're complaining about. Second, the site/community leaders may not have the time, attention, willingness, or expertise to listen to or understand a particular kind of complaint - maybe their comparative advantage is in building new things, not listening to people's problems.

I have no idea how this would work, but it was suggested to help solve problems on the EA facebook group and seems to have traction at least as an idea there. If they implement it and are successful, LW could follow suit.

-Ruthie

Write Rationalist-Friendly Explanations

(Robby)

It would be silly if rationalists weren't at least a little bit better about rationality than everyone else. Unfortunately, this means everyone else is a little bit worse, on average. Including feminists. That doesn't mean they're wrong, but it does mean that many popular explanations of feminist, antiracist, and social justice concepts may mix together some good points with some real howlers. These explanations may also come across as outright hostile to the typical Less Wrong demographic. So as a result, many rationalists will not read these things, or will read them and reject them as making no sense (and this is sometimes a correct judgment).

The problem is that some of these ideas are true or helpful even if someone didn't argue for them properly, and feminists or others on Less Wrong might have to explain the whole thing all over again every single time they want to have a productive discussion with a new person using a concept like sexism. This is a lot of extra work, and understandably frustrating. A carefully-argued account of some key relevant concepts would be extremely valuable, and might even be an appropriate addition to the Sequences. Brienne's post on gender bias is a great start, and there's probably lots of other great stuff out there hiding somewhere in the ninety percent.

Build resources (FAQs, blog posts, etc.) educating LWers about e.g. gender bias and accumulation of advantage. Forcing women to re-argue things like 'is sexism a thing?' every time they want to treat it as a premise is exhausting and alienating.

-Robby

Get Data

(Ruthie)

This one's a real head-slapper - Less Wrong is supposed to be all about this. There's a problem and we don't know how to solve it. How about we get more information about what's causing it? Find the people who would be contributing to or benefiting from the rationalist community if only they didn't feel pushed away or excluded by some things we do. (And the people who only just barely think it's worth it - they're probably similar to the people who just barely think it's not worth it.)

Collect and analyze more-than-anecdata on women and minority behavior around LW

The existing survey data may have a lot of insight. Adding more targeted questions to next year's survey could help more. It's hard to give surveys to the category of people who feel like they were turned away from LW, but if anyone can think of a good way to reach this group, we may be able to learn something from them.

Try to find out more about how people perceive different kinds of rhetoric

This would be hard, but I'd be really interested in the outcome. Some armchair theories about how friendly different kinds of people expect discourse to be strike me as plausible. If there are really differences, offense might be prevented by using different words to say the same things. If not, we could stop throwing this accusation around.

-Ruthie

Go Meta

(Ruthie)

Less Wrong is supposed to be all about this one too. Some people consistently think other people are unreasonable and find it difficult to have a conversation with them - and vice versa. Maybe we should see if there are any patterns to this? Like the illusion of transparency, or taking offense being perceived as an attack on the offender's status.

One of my favorite patterns is when person A says that behavior X (described very abstractly) is horrible, and person B says how can you possibly expect people to refrain from behavior X. Naturally, they each decide that the other is a bad person, and also wrong on the internet. Then after much arguing, person A gives an example, and person B says "That's what you were talking about the whole time? People actually do that?! No wonder you're so upset about it!" Or person B gives an example of the behavior they think is reasonable, and person A says "I thought it went without saying that your example is okay. Why would you think anyone objected to that? It's perfectly reasonable!" It's kind of a combination of the illusion of transparency and generalizing from one example, where you try to make sense of the other person's abstract language by mapping it onto the most similar event you have personally experienced or heard about.

I bet there are lots of other patterns that, if we understood them better, we could build shortcuts around.

If well-intentioned people understood why conversations about gender so often become so frustrating before having a conversation about gender, it might lead to higher quality conversations about gender.

-Ruthie

Taboo Unhelpful Words More

(Ruthie)

Rationalist Taboo is when, if you seem to disagree about what a word means, you stop using it and use more specific language instead. Sometimes this can dissolve a disagreement entirely. In other cases, it just keeps the conversation substantive, about things rather than definitions. I definitely recall reading discussions on Less Wrong and thinking, "somebody should suggest tabooing the word 'feminist' here" (or "sexist" or "racist"). Guess what? I'm somebody! I'll try to remember to do that next time; I think a few people committed to helping on this one could be super helpful.

Taboo words

Possibly on a per-conversation basis. "Feminist" is a pretty loaded word for me, and people say things like this which don't apply closely to me, and I feel threatened because I identify with the word.

Scott Alexander also suggested this in the same context in his response to Apophemi on his blog (a bit more than halfway down the page). It can improve the quality of discourse simply by forcing people to use relevant categories instead of easy ones.

Higher standards of justification for sensitive topics

A lot of plausible-but-badly-justified assertions about gender are thrown around, and not always subjected to much scrutiny. These can put harmful ideas in people's minds without at least giving us reason to believe that they're true, and they're slippery to argue against. Saying exactly what you mean and justifying it is probably the best way to defend against unreasonable accusations of sexism; then if people do accuse you of sexism, the accusation will at least be a reasonable one. I think taboo words can go a long way towards achieving this.

-Ruthie

Build a Norm That You Can Safely Criticize and Be Criticized For "Offensive" Behavior

(Ruthie)

I have no idea how hard or easy this is. Less Wrong seems like it's already an unusually safe place to say "oops, I was wrong." But somehow people seem not to do a good job becoming more curious about certain things like sexism. If I understand correctly (her wording's a little telegraphic to me), Ruthie suggested a stock phrase for people correcting their own language, "let me try again." It would be nice to come up with a similarly friendly way to say that you think someone is talking in an unhelpful way, but don't intend to thereby lower their status - you just want to point it out so they will change their behavior to stop hurting you.

Better ways to call people out for bad behavior

Right now, talking about gender in almost any form is asking for a fight. I hold my tongue about a lot of minor things that bother me, because calling people out causes people to get defensive instead of considering correcting themselves. A strong community norm of taking criticism in a certain form seriously could help us not quarrel about minor things. Someone I know suggested "let me try again" as a template for correcting offensive speech, and I like the idea a lot.

Successfully correcting when called out can also help build goodwill. If you are sometimes willing to change your rhetoric, I take you more seriously on the occasions when you say a particular wording is important and decline to change it.

Our only current mechanism is downvoting, but it's hard to tell why a thing has been downvoted.

-Ruthie

A Call For Action

If you are at all involved or interested in the rationalist community: The next time you are tempted to spend your precious time or energy complaining about how the community excludes people, or complaining about how the people who feel excluded want complete control over what is talked about instead, consider spending that resource on advancing one of these projects instead, to make the problem actually go away.