Tag Archives: Less Wrong

My life so far: motives and morals

This is the story of my life, through the lens of motivations, of actions I took to steer myself towards long-term outcomes, of the way the self that stretches out in causal links over long periods of time produced the self I have at this moment. This is only one of the many ways to tell the story of my life.

The Appearances and The Things Themselves

Here's a neat puzzle by Scott:

My dermatology lecture this morning presents: one of those Two Truths and a Lie games. You choose which two you think are true and – special house rule – give explanations for why. The explanations do not require specialized medical knowledge beyond the level of a smart amateur. Answers tomorrow-ish.

1. Significantly more Americans get skin cancer on the left half of the face than on the right half.

2. People who had acne as children live on average four years longer than those who did not.

3. In very early studies, Botox has shown great promise as a treatment for depression.

My thoughts are below the fold; you may want to guess first.


Specific Techniques for Inclusion

One lovely thing about having a bunch of rationalist friends is that if I whine about a problem, I get a bunch of specific ideas about how to fix it. Sometimes the whining has to be very specific, though.

What I Complained About

Some people don't feel comfortable on Less Wrong or in other rationalist communities. Apophemi wrote about why they don't identify with the rationalist community: some of the language and topics under discussion feel to them like personal threats. Less Wrong discussed the post here, though sadly I think a lot of people got mindkilled.

Apophemi's post was most directly a response to some stuff on Scott's blog, Slate Star Codex. Scott's response was basically that yes, there's a need for fora where particular groups of people can feel safe - but Less Wrong and the rationalist community are supposed to be places that are safe for rationalists, where you won't get banned or ostracized or hated for bringing up an unpopular idea just because the evidence supports it. Implicitly, Scott was modeling the discussion as a choice between two options: the status quo, or banning certain controversial topics entirely because they make some people uncomfortable.

Ben Kuhn then responded that Scott was ignoring the middle ground, and that there are plenty of things rationalists can do to make the community feel more welcoming to people who are being excluded, without banning discussion of controversial topics.

Sounds reasonable enough. What's my problem with that? Not a single example.

It's easy enough to claim there's a middle ground - but there are reasons it might not be feasible in practice. For example, in some cases it really could be the existence of a discussion of the topic that's offensive, not the way people discuss it. (I think Apophemi feels this way about some things.) In others, there's very little gain from partial compliance. If right now 25% of Less Wrong commenters consistently and avoidably misgender people, and as a result of a campaign to educate people on how not to do this, half of them learn how to do it right, that's still 12.5% of commenters misgendering people - more than enough that it's still going to be a consistent low-level annoyance to people who don't identify with a traditional gender, or women who don't have obviously feminine pseuds, etc. So that's kind of a wasted effort.

So I whined about this, on Facebook. To my amazement and delight, I got some actual specific responses, from Robby and Ruthie. Since there was some overlap, I've tried to aggregate the discussion into a single list of ideas, plus my attempt to explain what these mean and why they might help. I combined ones that I think are basically the same idea, and dropped ones that are either about banning stuff (since the whole point was to find out whether there is in fact a feasible middle ground) or everyone refraining from bad behavior (I don't think that's a feasible solution unless we ban defectors, which also fails to satisfy the "middle ground" requirement).

Trigger Warnings

(Robby and Ruthie)

Some topics reliably make some people freak out. You might have had a very bad experience with something and find it difficult to discuss, or certain words might be associated with actual threats to your safety, in your experience. If you have enough self-knowledge to know that you will not be able to participate in those discussions rationally (or that you could, but the emotional cost is higher than you're willing to bear), then it would be helpful to have a handy warning that the article you're about to read contains a "trigger" that's relevant to you.

This concept can be useful outside of personal traumatic events too. There's a lot we don't know for sure about the human ancestral environment, but one thing that's pretty likely is that the part of the brain with social skills didn't evolve to deal with political groups with millions of members. Any political opinion favoring something that threatens you is going to feel like a meaningful threat to your well-being, to some extent, unless you unlearn this (if that's even possible). Since politics is the mind-killer, you're likely to have this response even if people are just discussing opinions that are often signs of affiliation with your group's political enemies. Even if this response can be unlearned, it could be really helpful to know what kind of discussion you're going into, in advance.

For example, I'm Jewish by birth. When people start saying nice things about Hitler and the Nazis, it makes me feel sad, and a little threatened. If it's just a discussion of pretty uniforms or monetary policy, and not really about killing Jews at all, then there's no reason at all to construe it as a direct threat to my safety - but it's still helpful for me to be able to steel myself for the inevitable emotional reaction in advance.

Content warnings have the advantage of being fairly unambiguous. Someone who believes in "human biodiversity" might not agree that their discussion about it is threatening to black people - but I bet they'd agree that the discussion involves making generalizations about people based on racial categories. Someone who wants to vent about bad experiences involving white men might not agree that they are calling me a bad person - but I bet they'd agree that they are sharing anecdotes that are not necessarily representative, about people in a certain demographic.

The other nice thing about this solution is that right now it basically hasn't been done at all on Less Wrong. It seems reasonably likely that if a few prominent posters modeled this behavior (or a few commenters consistently suggested the addition of trigger or other offensive content warnings at the top of certain posts), it would be widely adopted.

The downside is that some uses of trigger warnings would require a technical implementation, which means someone actually has to modify the site's code - though since they're widespread on the internet, there may be an off-the-shelf solution. This limits the set of people who can implement that part, but it's not insurmountable.

I'm not really sure this one has any clear disadvantages, except that some people may find content warnings themselves offensive.

Add a tag system for common triggers, so people can at a glance see where an information-hazard topic or conversation thread has arisen, and navigate the site safely. This is a really easy and obvious solution to Apophemi and Scott's dispute, and it benefits both of them (since it can be used both to tag politics/SJ discussion and to tag e.g. rape discussions), so I'm amazed this proposal hasn't been the central object of discussion in the conversation so far.

-Robby

Widely implemented. We can help people who acknowledge that they don't want to be around certain topics stay away from them. It also gives those who want to be part of overly frank discussions a response to give to those who criticize them for being overly frank.

-Ruthie

Make it Explicit That People From Underrepresented Groups are Welcome

(Robby and Ruthie)

The downside of this one is that for women, at least, it's kind of already been done. A few years ago there were a bunch of front-page posts on the topic of what, if anything, needed to change to make sure women weren't unnecessarily pushed away by Less Wrong. But apparently Eliezer's old post on the topic actually offended some women, who felt stereotyped and misunderstood by it. A post with the same goal that didn't cause those reactions might do better.

I don't feel like this is a very good summary so I'm going to quote Robby and Ruthie directly:

Express an interest in women joining the site. Make your immediate reaction to the idea of improved gender ratios 'oh cool that means we get more people, including people with importantly different skills and backgrounds', not 'why would we want more women on this site?' or a change of topic to e.g. censorship.

- Robby

If more women posted and commented they might move the overall tone of discourse in a direction more appealing for other women. Maybe not. You could do blinded studies (have women and men write anonymized posts about anything, ask women and men which they would upvote, downvote). Again, this would be hard to do well.

- Ruthie

Put in an extra effort to draw women researchers, academics, LW-post-writers, speakers, etc.

-Robby

Recruit More Psychologists

(Robby)

I can't substantively improve on the original:

If LW is primarily a site about human rationality (as opposed to being primarily a site about Friendly Artificial Intelligence), then it should be dominated by psychologists, not by programmers. Psychologists are mostly women. Advertising to psych people would therefore simultaneously make this site better at human-rationality scholarship and empiricism, and better at gender equity.

-Robby

Ombudsperson

(Ruthie)

An "Ombudsman" is someone who works for an institution, and whose primary responsibility is listening to people's complaints and working with the institution to resolve them. A dedicated person is important for two reasons. First, it can be easier to communicate a complaint to someone who wasn't directly involved in doing the thing you're complaining about. Second, the site/community leaders may not have the time, attention, willingness, or expertise to listen to or understand a particular kind of complaint - maybe their comparative advantage is in building new things, not listening to people's problems.

I have no idea how this would work, but it was suggested to help solve problems on the EA facebook group and seems to have traction at least as an idea there. If they implement it and are successful, LW could follow suit.

-Ruthie

Write Rationalist-Friendly Explanations

It would be silly if rationalists weren't at least a little bit better about rationality than everyone else. Unfortunately, this means everyone else is a little bit worse, on average. Including feminists. That doesn't mean they're wrong, but it does mean that many popular explanations of feminist, antiracist, and social justice concepts may mix together some good points with some real howlers. These explanations may also come across as outright hostile to the typical Less Wrong demographic. So as a result, many rationalists will not read these things, or will read them and reject them as making no sense (and this is sometimes a correct judgment).

The problem is that some of these ideas are true or helpful even if someone didn't argue for them properly, and feminists or others on Less Wrong might have to explain the whole thing all over again every single time they want to have a productive discussion with a new person using a concept like sexism. This is a lot of extra work, and understandably frustrating. A carefully-argued account of some key relevant concepts would be extremely valuable, and might even be an appropriate addition to the Sequences. Brienne's post on gender bias is a great start, and there's probably lots of other great stuff out there hiding in the other ninety percent.

Build resources (FAQs, blog posts, etc.) educating LWers about e.g. gender bias and accumulation of advantage. Forcing women to re-argue things like 'is sexism a thing?' every time they want to treat it as a premise is exhausting and alienating.

-Robby

Get Data

(Ruthie)

This one's a real head-slapper - Less Wrong is supposed to be all about this. There's a problem and we don't know how to solve it. How about we get more information about what's causing it? Find the people who would be contributing to or benefiting from the rationalist community if only they didn't feel pushed away or excluded by some things we do. (And the people who only just barely think it's worth it - they're probably similar to the people who just barely think it's not worth it.)

Collect and analyze more-than-anecdata on women and minority behavior around LW

The existing survey data may have a lot of insight. Adding more targeted questions to next year's survey could help more. It's hard to give surveys to the category of people who feel like they were turned away from LW, but if anyone can think of a good way to reach this group, we may be able to learn something from them.

Try to find out more about how people perceive different kinds of rhetoric

This would be hard, but I'd be really interested in the outcome. Some armchair theories about how friendly different kinds of people expect discourse to be strike me as plausible. If there are really differences, offense might be prevented by using different words to say the same things. If not, we could stop throwing this accusation around.

-Ruthie

Go Meta

(Ruthie)

Less Wrong is supposed to be all about this one too. Some people consistently think other people are unreasonable and find it difficult to have a conversation with them - and vice versa. Maybe we should see if there are any patterns to this? Like the illusion of transparency, or taking offense being perceived as an attack on the offender's status.

One of my favorite patterns is when person A says that behavior X (described very abstractly) is horrible, and person B says how can you possibly expect people to refrain from behavior X. Naturally, they each decide that the other is a bad person, and also wrong on the internet. Then after much arguing, person A gives an example, and person B says "That's what you were talking about the whole time? People actually do that?! No wonder you're so upset about it!" Or person B gives an example of the behavior they think is reasonable, and person A says "I thought it went without saying that your example is okay. Why would you think anyone objected to that? It's perfectly reasonable!" It's kind of a combination of the illusion of transparency and generalizing from one example, where you try to make sense of the other person's abstract language by mapping it onto the most similar event you have personally experienced or heard about.

I bet there are lots of other patterns that, if we understood them better, we could build shortcuts around.

If well-intentioned people understood why conversations about gender so often become so frustrating before having a conversation about gender, it might lead to higher quality conversations about gender.

-Ruthie

Taboo Unhelpful Words More

(Ruthie)

Rationalist Taboo is when, if you seem to disagree about what a word means, you stop using it and use more specific language instead. Sometimes this can dissolve a disagreement entirely. In other cases, it just keeps the conversation substantive, about things rather than definitions. I definitely recall reading discussions on Less Wrong and thinking, "somebody should suggest tabooing the word 'feminist' here" (or "sexist" or "racist"). Guess what? I'm somebody! I'll try to remember to do that next time; I think a few people committed to helping on this one could be super helpful.

Taboo words

Possibly on a per-conversation basis. "Feminist" is a pretty loaded word for me, and people say things like this which don't apply closely to me, and I feel threatened because I identify with the word.

Scott Alexander also suggested this in the same context in his response to Apophemi on his blog (a bit more than halfway down the page). It can improve the quality of discourse simply by forcing people to use relevant categories instead of easy ones.

Higher standards of justification for sensitive topics

A lot of plausible-but-badly-justified assertions about gender are thrown around, and not always subjected to much scrutiny. These can put harmful ideas in people's minds without at least giving us reason to believe that they're true, and they're slippery to argue against. Saying exactly what you mean and justifying it is probably the best way to defend against unreasonable accusations of sexism. If people do accuse you of sexism, the accusations will at least be reasonable ones. I think tabooing words can go a long way towards achieving this.

-Ruthie

Build a Norm That You Can Safely Criticize and Be Criticized For "Offensive" Behavior

(Ruthie)

I have no idea how hard or easy this is. Less Wrong seems like it's already an unusually safe place to say "oops, I was wrong." But somehow people seem not to do a good job becoming more curious about certain things like sexism. If I understand correctly (her wording's a little telegraphic to me), Ruthie suggested a stock phrase for people correcting their own language, "let me try again." It would be nice to come up with a similarly friendly way to say that you think someone is talking in an unhelpful way, but don't intend to thereby lower their status - you just want to point it out so they will change their behavior to stop hurting you.

Better ways to call people out for bad behavior

Right now, talking about gender in almost any form is asking for a fight. I hold my tongue about a lot of minor things that bother me, because calling people out causes them to get defensive instead of considering the correction. A strong community norm of taking criticism in a certain form seriously could help us not quarrel about minor things. Someone I know suggested "let me try again" as a template for correcting offensive speech, and I like the idea a lot.

Successfully correcting when called out can also help build goodwill. If you are sometimes willing to change your rhetoric, I take you more seriously when you say it's important on the occasions when you aren't willing to change it.

Our only current mechanism is downvoting, but it's hard to tell why a thing has been downvoted.

-Ruthie

A Call For Action

If you are at all involved or interested in the rationalist community: The next time you are tempted to spend your precious time or energy complaining about how the community excludes people, or complaining about how the people who feel excluded want complete control over what is talked about instead, consider spending that resource on advancing one of these projects instead, to make the problem actually go away.

CFAR - Second Impression and a Bleg

TLDR: CFAR is awesome because they do things that work. They've promised to do more research into some of the "epistemic rationality" areas that I'd wished to see more progress on. They have matched donations through 31 January 2014; please consider giving if you can.

UPDATE: CFAR now has a post up on Less Wrong explaining what they are working on and why you should give. Here's the official version: http://lesswrong.com/lw/jej/why_cfar/

Second Thoughts on CFAR

You may have seen my first-impression review of the Center For Applied Rationality's November workshop in Ossining, NY. I've had more than a month to think it over, and on balance I'm pretty impressed.

For those of you who don't already know, CFAR (the Center For Applied Rationality) is an organization dedicated to developing training to help people overcome well-studied cognitive biases, and thus become more effective at accomplishing their goals. If you've never heard of CFAR before, you should check out their about page before continuing here.

The first thing you need to understand about CFAR is that they teach stuff that actually works, in a way that works. This is because they have a commitment to testing their beliefs, abandoning ideas that don't work out, and trying new things until they find something that works. As a workshop participant I benefited from that: it was clear that the classes were far better honed, more specific, and more action-oriented than they'd been in 2011.

At the time I expressed some disappointment that a lot of epistemic rationality stuff seemed to have been neglected, postponed, or abandoned. Even though some of those things seem objectively much harder than some of the personal effectiveness training CFAR seems to have focused on, they're potentially high-value in saving the world.

The Good News

After my post, Anna Salamon from CFAR reached out to see if we could figure out some specific things they should try again. I think this was a helpful conversation for both of us. Anna explained to me a few things that helped me understand what CFAR was doing:

1) Sometimes an "epistemic rationality" idea turns into a "personal effectiveness" technique when operationalized.

For example, consider the epistemic rationality idea of beliefs as anticipations, rather than just verbal propositions. The idea is that you should expect to observe something differently in the world if a belief is true than if it's false. Sounds pretty obvious, right? But the "Internal Simulator," where you imagine how surprised you will be if your plan doesn't work out, is a non-obvious application of that idea.

2) Some of the rationality techniques I'd internalized from the Sequences at Less Wrong, that seemed obvious to me, are not obvious to a lot of people going to the workshops, so some of the epistemic rationality training going on was invisible to me.

For example, some attendees hadn't yet learned the Bayesian way of thinking about information - that you should have a subjective expectation based on the evidence, even when the evidence isn't conclusive yet, and there are mathematical rules governing how you should treat this partial evidence. So while I didn't get much out of the Bayes segment, that's because I've already learned the thing that class is supposed to teach.
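For concreteness, here's a minimal sketch of the kind of update that way of thinking licenses. The scenario and numbers are my own made-up example, not anything from CFAR's class:

```python
# A minimal illustration (made-up numbers, not CFAR's material) of treating
# partial evidence with Bayes' rule: a positive result from an imperfect test
# shouldn't make you certain, but it should move your subjective probability.

prior = 0.01             # P(widget is defective) before seeing any evidence
p_pos_given_bad = 0.90   # P(test positive | defective)
p_pos_given_good = 0.05  # P(test positive | not defective)

# Total probability of a positive test, then Bayes' rule.
p_pos = p_pos_given_bad * prior + p_pos_given_good * (1 - prior)
posterior = p_pos_given_bad * prior / p_pos

print(round(posterior, 3))  # ~0.154: the evidence is inconclusive, but your
                            # expectation should still shift from 1% to ~15%
```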

3) CFAR already tried a bunch of stuff.

They did online randomized trials of some epistemic rationality techniques and published the results. They tried a bunch of ways to teach epistemic rationality stuff and found that it didn't work (which is what I'd guessed). They'd found ways to operationalize bits of epistemic rationality.

4) The program is not just the program.

Part of CFAR's mission is the actual rationality-instruction it does. But another part is taking people possibly interested in rationality, and introducing them to the broader community of people interested in existential risk mitigation or other effective altruism, and epistemic rationality. Even if CFAR doesn't know how to teach all these things yet, combining people who know each of these things will produce a community with the virtues the world needs.

In the course of the conversation, Anna asked me why I cared about this so much - what was my "Something to Protect"? This question helped me clarify what I really was worried about.

In my post on effective altruism, I mentioned that a likely extremely high-leverage way to help the world was to help people working on mitigating existential risk. The difficulty is that the magnitude of the risks, and the impact of the mitigation efforts, is really, really hard to assess. An existential risk is not something like malaria, where we can observe how often it occurs. By definition we haven't observed even one event that kills off all humans. So how can we assess the tens or hundreds of potential threats?

A while before, Anna had shared a web applet that let you provide your estimates for, e.g., the probability each year of a given event like global nuclear war or the development of friendly AI, and it would tell you the probability that humanity survived a certain number of years. I tried it out, and in the process, realized that:

Something Is Wrong With My Brain and I Don't Know How to Fix It

For one of these rates, I asked myself the probability in each year, and got back something like 2%.

But then I asked myself the probability in a decade, and got back something like 5%.

A century? 6%.

That can't be right. My intuitions seem obviously inconsistent. But how do I know which one to use, or how to calibrate them?
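To see why those answers can't all come from one consistent model, here's a minimal sketch (my own arithmetic, not the applet Anna shared) of how a constant annual risk compounds over longer horizons:

```python
# A minimal sketch (my own arithmetic, not the applet mentioned above):
# if an event has a constant 2% chance of happening in any given year, and
# years are independent, the chance it happens at least once in n years is
# 1 - (1 - p)^n.

def prob_within(years, annual_prob=0.02):
    """Probability the event occurs at least once within `years` years."""
    return 1 - (1 - annual_prob) ** years

print(round(prob_within(1), 3))    # 0.02  -- the 2% I gave for one year
print(round(prob_within(10), 3))   # 0.183 -- ~18% per decade, not 5%
print(round(prob_within(100), 3))  # 0.867 -- ~87% per century, not 6%
```

Either the per-year intuition or the per-decade and per-century intuitions have to give; the applet can do the compounding, but it can't tell me which intuition to trust.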

Eliezer Yudkowsky started writing the Sequences to fix whatever was wrong with people's brains that was stopping them from noticing and doing something about existential risk. But a really big part of this is gaining the epistemic rationality skills necessary to follow highly abstract arguments, modeling events that we have not and cannot observe, without getting caught by shiny but false arguments.

I know my brain is inadequate to the task right now. I read Yudkowsky's arguments in the FOOM Debate and I am convinced. I read Robin Hanson's arguments and am convinced. I read Carl Shulman's arguments and am convinced. But they don't all agree! To save the world effectively - instead of throwing money in the direction of the person who has most recently made a convincing argument - we need to know how to judge these things.

In Which I Extract Valuable Concessions from CFAR in Exchange for Some Money

Then it turned out CFAR was looking for another match-pledger for their upcoming end/beginning of year matched donations fundraiser. Anna suggested that CFAR might be willing to agree to commit to certain epistemic rationality projects in exchange. I was skeptical at first - if CFAR didn't already think these were first-best uses of its money, why should I think I have better information? - but on balance I can't think of a less-bad outcome than what we actually got, because I do think these things are urgently needed, and I think that if CFAR isn't doing them now, it will only get harder to pivot from its current program of almost exclusively teaching instrumental rationality and personal effectiveness.

We hashed out what kinds of programs CFAR would be willing to do on the Epistemic Rationality front, and agreed that these things would get done if enough money is donated to activate my pledge:

  • Participate in Tetlock's Good Judgment Project to learn more about what rationality skills help make good predictions, or would help but are missing
  • Do three more online randomized experiments to test more epistemic rationality techniques
  • Do one in-person randomized trial of an epistemic rationality training technique
  • Run three one-day workshops on epistemic rationality, with a mixture of old and new material, as alpha tests
  • Bring at least one epistemic rationality technique up to the level where it goes into the full workshops

And of course CFAR will continue with a lot of the impressive work it's already been doing.

Here are the topics that I asked them to focus on for new research:

Here are the major "epistemic rationality" areas where I'd love to see research:
  • Noticing Confusion (& doing something about it)
  • Noticing rationalization, and doing something to defuse it, e.g. setting up a line of retreat
  • Undistractability/Eye-on-the-ball/12th virtue/"Cut the Enemy"/"Intent to Win" (this kind of straddles epistemic and instrumental rationality AFAICT but distractions usually look like epistemic failures)
  • Being specific / sticking your neck out / being possibly wrong instead of safely vague / feeling an "itch" to get more specific when you're being vague
Here are some advanced areas that seem harder (because I have no idea how to do these things) but would also count:
  • Reasoning about / modeling totally new things. How to pick the right "reference classes."
  • Resolving scope-insensitivity (e.g. should I "shut up and multiply" or "shut up and divide"). Especially about probabilities *over time* (since there are obvious X-Risk applications).
  • How to assimilate book-learning / theoretical knowledge (can be broken down into how to identify credible sources, how to translate theoretical knowledge into procedural knowledge)

If you're anything like me, you think that these programs would be awesome. If so, please consider giving to CFAR, and helping me spend my money to buy this awesomeness.

The Bad News

For some reason, almost one month into their two-month fundraiser, CFAR has no post up on Less Wrong promoting it. As I was writing this post, CFAR had raised less than $10,000 compared to a total of $150,000 in matching funds pledged. (UPDATE: CFAR now has an excellent post up explaining their plan and the fundraiser is doing much better.)

CFAR Fundraiser Progress Bar

Huge oopses happen, even to very good, smart organizations, but it's relevant evidence about operational competence. Then again, I kind of have an idiosyncratic axe to grind with respect to CFAR and operational competence, as is obvious if you read my first-impression review. But it's still a bad sign, for an organization working on a problem this hard, to fail some basic tests like this. You should probably take that into account.

It's weak evidence, though.

CFAR Changed Me for the Better

The ultimate test of competence for an organization like CFAR is not operational issues like whether people can physically get to and from the workshops or whether anyone knows about the fundraiser. The test is, does CFAR make people who take its training better at life?

In my case there was more than one confounding factor (I'd started working with a life coach a few weeks before and read Scott Adams's new book a few weeks after - Less Wrong review here), but I have already benefited materially from my experience:

I had three separate insights related to how I think about my career that jointly let me actually start to plan and take action. In particular, I stopped letting the best be the enemy of the good, noticed that my goals can be of different kinds, and figured out which specific component of my uncertainty was the big scary one and took actual steps to start resolving it.

A couple of things in my life improved immediately as if by magic. I started working out every morning, for example, for the first time since college. I'm still not sure how that happened. I didn't consciously expend any willpower.

Several other recent improvements in my life of comparable size are partially attributable to CFAR as well. (The other main contributors are my excellent life coach, Scott Adams's book, and the cumulative effect of everything else I've done, seen, heard, and read.)

Several of the classes that seemed hard to use at the time became obviously useful in hindsight. For example, I started noticing things where a periodic "Strategic Review" would be helpful.

In addition, I learned how to be "greedy" about asking other people questions and asking for advice when I thought it would be helpful. This has been tremendously useful already.

I'll end the way I began, with a summary:

The problems humanity is facing in this century are unprecedented in both severity and difficulty. To meet these challenges, we need people who are rational enough to sanely evaluate the risks and possible solutions, effective enough to get something done, and good enough to take personal responsibility for making sure something happens. CFAR is trying to create a community of such people. Almost no one else is even trying.

CFAR is awesome because they do things that work. They've promised to do more research into some of the "epistemic rationality" areas that I'd wished to see more progress on. They have a fundraiser with matched donations through 31 January 2014; please consider giving if you can.

Wait vs Interrupt Culture

At this past weekend's CFAR Workshop (about which, by the way, I plan to have another post soon with less whining and more serious discussion), someone mentioned that they were uncomfortable with pauses in conversation, and that got me thinking about different conversational styles.

Growing up with friends who were disproportionately male and disproportionately nerdy, I learned that it was a normal thing to interrupt people. If someone said something you had to respond to, you'd just start responding. Didn't matter if it "interrupted" further words - if they thought you needed to hear those words before responding, they'd interrupt right back.

Occasionally some weird person would be offended when I interrupted, but I figured this was some bizarre fancypants rule from before people had places to go and people to see. Or just something for people with especially thin skins or delicate temperaments, looking for offense and aggression in every action.

Then I went to St. John's College - the talking school (among other things). In Seminar (and sometimes in Tutorials) there was a totally different conversational norm. People were always expected to wait until whoever was talking was done. People would apologize not just for interrupting someone who was already talking, but for accidentally saying something when someone else looked like they were about to speak. This seemed totally crazy. Some people would just blab on unchecked, and others didn't get a chance to talk at all. Some people would ignore the norm and talk over others, and nobody interrupted them back to shoot them down.

But then a few interesting things happened:

1) The tutors were able to moderate the discussions, gently. They wouldn't actually scold anyone for interrupting, but they would say something like, "That's interesting, but I think Jane was still talking," subtly pointing out a violation of the norm.

2) People started saying less at a time.

#1 is pretty obvious - with no enforcement of the social norm, a no-interruptions norm collapses pretty quickly. But #2 is actually really interesting. If talking at all is an implied claim that what you're saying is the most important thing that can be said, then polite people keep it short.

With 15-20 people in a seminar, this also meant that no one could try to force the conversation in a certain direction. When you're done talking, the conversation is out of your hands. This can be frustrating at first, but with time, you learn to trust not your fellow conversationalists, but the conversation itself, to go where it needs to. If you haven't said enough, then you trust that someone will ask you a question, and you'll say more.

When people are interrupting each other - when they're constantly tugging the conversation back and forth between their preferred directions - then the conversation itself is just a battle of wills. But when people just put in one thing at a time, and trust their fellows to only say things that relate to the thing that came right before - at least, until there's a very long pause - then you start to see genuine collaboration.

And when a lull in the conversation is treated as an opportunity to think about the last thing said, rather than an opportunity to jump in with the thing you were holding onto from 15 minutes ago because you couldn't just interrupt and say it - then you also open yourself up to being genuinely surprised, to seeing the conversation go somewhere that no one in the room would have predicted, to introducing ideas that no one brought with them when they sat down at the table.

By the time I graduated, I'd internalized this norm, and the rest of the world seemed rude to me for a few months. Not just because of the interrupting - but more because I'd say one thing, politely pause, and then people would assume I was done and start explaining why I was wrong - without asking any questions! Eventually, I realized that I'd been perfectly comfortable with these sorts of interactions before college. I just needed to code-switch! Some people are more comfortable with a culture of interrupting when you want to, and accepting interruptions. Others are more comfortable with a culture of waiting their turn, and courteously saying only one thing at a time, not trying to cram in a whole bunch of arguments for their thesis.

Now, I've praised the virtues of wait culture because I think it's undervalued, but there's plenty to say for interrupt culture as well. For one, it's more robust in "unwalled" circumstances. If there's no one around to enforce wait culture norms, then a few jerks can dominate the discussion, silencing everyone else. But someone who doesn't follow "interrupt" norms only silences themselves.

Second, it's faster and easier to calibrate how much someone else feels the need to talk, when they're willing to interrupt you. It takes willpower to stop talking when you're not sure you were perfectly clear, and to trust others to pick up the slack. It's much easier to keep going until they stop you.

So if you're only used to one style, see if you can try out the other somewhere. Or at least pay attention and see whether you're talking to someone who follows the other norm. And don't assume that you know which norm is the "right" one; try it the "wrong" way and maybe you'll learn something.

 

Cross-posted at Less Wrong.