
Ask Ben: Seafood in DC

A friend writes:

DC food question for you: a friend of mine will be visiting DC this weekend from Idaho and was hoping to have seafood while in town. Do you have any restaurant recommendations? Thank you!

If they can get up to Silver Spring, Crisfield is fantastic. Not fancy at all, not very expensive, but the seafood is fresh and excellent. They do it all - crab, fish, oysters.

In general terms, Pesce is the best place for cooked fish in DC; it's somewhat upscale but not extremely pricey. A bit less expensive is Grillfish.

Other restaurants that will do great things with seafood include Tosca (a white-tablecloth Italian place that might be the best restaurant in DC, and again upscale but not extremely so), and Acadiana and DC Coast.

For sushi, my go-to is Sushi Taro, but I have also heard great things about Kotobuki Sushi.

There are lots of great places for oysters; Pearl Dive is probably the best regarded, though Clyde's is also fine. I've heard good things about Hank's Oyster Bar, and Johnny's Half-Shell is supposed to be good too.

Specific Techniques for Inclusion

One lovely thing about having a bunch of rationalist friends is that if I whine about a problem, I get a bunch of specific ideas about how to fix it. Sometimes the whining has to be very specific, though.

What I Complained About

Some people don't feel comfortable on Less Wrong or in other rationalist communities. Apophemi wrote about why they don't identify with the rationalist community because some of the language and topics under discussion feel to them like personal threats. Less Wrong discussed the post here, though sadly I think a lot of people got mindkilled.

Apophemi's post was most directly a response to some stuff on Scott's blog, Slate Star Codex. Scott's response was basically that yes, there's a need for fora where particular groups of people can feel safe - but Less Wrong and the rationalist community are supposed to be a place that's safe for rationalists - where you won't get banned or ostracized or hated for bringing up an unpopular idea just because the evidence supports it. Implicitly Scott was modeling the discussion as considering two options: the status quo, or ban certain controversial topics entirely because they make some people uncomfortable.

Ben Kuhn then responded that Scott was ignoring the middle ground, and there are plenty of things rationalists can do to make the community feel more welcoming to people who are being excluded, without banning discussion of controversial topics.

Sounds reasonable enough. What's my problem with that? Not a single example.

It's easy enough to claim there's a middle ground - but there are reasons it might not be feasible in practice. For example, in some cases it really could be the existence of a discussion of the topic that's offensive, not the way people discuss it. (I think Apophemi feels this way about some things.) In others, there's very little gain from partial compliance. If right now 25% of Less Wrong commenters consistently and avoidably misgender people, and as a result of a campaign to educate people on how not to do this, half of them learn how to do it right, that's still 12.5% of commenters misgendering people - more than enough that it's still going to be a consistent low-level annoyance to people who don't identify with a traditional gender, or women who don't have obviously feminine pseuds, etc. So that's kind of a wasted effort.

So I whined about this, on Facebook. To my amazement and delight, I got some actual specific responses, from Robby and Ruthie. Since there was some overlap, I've tried to aggregate the discussion into a single list of ideas, plus my attempt to explain what these mean and why they might help. I combined ones that I think are basically the same idea, and dropped ones that are either about banning stuff (since the whole point was to find out whether there is in fact a feasible middle ground) or everyone refraining from bad behavior (I don't think that's a feasible solution unless we ban defectors, which also fails to satisfy the "middle ground" requirement).

Trigger Warnings

(Robby and Ruthie)

Some topics reliably make some people freak out. You might have had a very bad experience with something and find it difficult to discuss, or certain words might be associated with actual threats to your safety, in your experience. If you have enough self-knowledge to know that you will not be able to participate in those discussions rationally (or that you could, but the emotional cost is higher than you're willing to bear), then it would be helpful to have a handy warning that the article you're about to read contains a "trigger" that's relevant to you.

This concept can be useful outside of personal traumatic events too. There's a lot we don't know for sure about the human ancestral environment, but one thing that's pretty likely is that the part of the brain that handles social skills didn't evolve to deal with political groups with millions of members. Any political opinion favoring something that threatens you is going to feel like a meaningful threat to your well-being, to some extent, unless you unlearn this (if that's even possible). Since politics is the mind-killer, you're likely to have this response even if people are just discussing opinions that are often signs of affiliation with your group's political enemies. Whether or not you can unlearn the reaction, it could be really helpful to know what kind of discussion you're going into in advance.

For example, I'm Jewish by birth. When people start saying nice things about Hitler and the Nazis, it makes me feel sad, and a little threatened. If it's just a discussion of pretty uniforms or monetary policy, and not really about killing Jews at all, then there's no reason at all to construe it as a direct threat to my safety - but it's still helpful for me to be able to steel myself for the inevitable emotional reaction in advance.

Content warnings have the advantage of being fairly unambiguous. Someone who believes in "human biodiversity" might not agree that their discussion about it is threatening to black people - but I bet they'd agree that the discussion involves making generalizations about people based on racial categories. Someone who wants to vent about bad experiences involving white men might not agree that they are calling me a bad person - but I bet they'd agree that they are sharing anecdotes that are not necessarily representative, about people in a certain demographic.

The other nice thing about this solution is that right now it basically hasn't been done at all on Less Wrong. It seems reasonably likely that if a few prominent posters modeled this behavior (or a few commenters consistently suggested the addition of trigger or other offensive content warnings at the top of certain posts), it would be widely adopted.

The downside is that some uses of trigger warnings would require a technical implementation, which means someone actually has to modify the site's code. Such features are widespread on the internet, so there may be an off-the-shelf solution, but this still limits the set of people who can implement that part. It's not insurmountable, though.

I'm not really sure this one has any clear disadvantages, except that some people may find content warnings themselves offensive.

Add a tag system for common triggers, so people can at a glance see where an information-hazard topic or conversation thread has arisen, and navigate the site safely. This is a really easy and obvious solution to Apophemi and Scott's dispute, and it benefits both of them (since it can be used both to tag politics/SJ discussion and to tag e.g. rape discussions), so I'm amazed this proposal hasn't been the central object of discussion in the conversation so far.

-Robby

Widely implemented. We can help people who acknowledge that they don't want to be around certain topics stay away from them. It also gives those who want to be part of overly frank discussions a response to give to those who criticize them for being overly frank.

-Ruthie

Make it Explicit That People From Underrepresented Groups are Welcome

(Robby and Ruthie)

The downside of this one is that for women, at least, it's kind of already been done. A few years ago there were a bunch of front-page posts on the topic of what if anything needed to change to make sure women weren't unnecessarily pushed away by Less Wrong. But apparently Eliezer's old post on the topic actually offended some women, who felt stereotyped and misunderstood by it. A post with the same goal that didn't cause those reactions might do better.

I don't feel like this is a very good summary so I'm going to quote Robby and Ruthie directly:

Express an interest in women joining the site. Make your immediate reaction to the idea of improved gender ratios 'oh cool that means we get more people, including people with importantly different skills and backgrounds', not 'why would we want more women on this site?' or a change of topic to e.g. censorship.

- Robby

If more women posted and commented they might move the overall tone of discourse in a direction more appealing for other women. Maybe not. You could do blinded studies (have women and men write anonymized posts about anything, ask women and men which they would upvote, downvote). Again, this would be hard to do well.

- Ruthie

Put in an extra effort to draw women researchers, academics, LW-post-writers, speakers, etc.

-Robby

Recruit More Psychologists

(Robby)

I can't substantively improve on the original:

If LW is primarily a site about human rationality (as opposed to being primarily a site about Friendly Artificial Intelligence), then it should be dominated by psychologists, not by programmers. Psychologists are mostly women. Advertising to psych people would therefore simultaneously make this site better at human-rationality scholarship and empiricism, and better at gender equity.

-Robby

Ombudsperson

(Ruthie)

An "Ombudsman" is someone who works for an institution, and whose primary responsibility is listening to people's complaints and working with the institution to resolve them. A dedicated person is important for two reasons. First, it can be easier to communicate a complaint to someone who wasn't directly involved in doing the thing you're complaining about. Second, the site/community leaders may not have the time, attention, willingness, or expertise to listen to or understand a particular kind of complaint - maybe their comparative advantage is in building new things, not listening to people's problems.

I have no idea how this would work, but it was suggested to help solve problems on the EA facebook group and seems to have traction at least as an idea there. If they implement it and are successful, LW could follow suit.

-Ruthie

Write Rationalist-Friendly Explanations

It would be silly if rationalists weren't at least a little bit better about rationality than everyone else. Unfortunately, this means everyone else is a little bit worse, on average. Including feminists. That doesn't mean they're wrong, but it does mean that many popular explanations of feminist, antiracist, and social justice concepts may mix together some good points with some real howlers. These explanations may also come across as outright hostile to the typical Less Wrong demographic. So as a result, many rationalists will not read these things, or will read them and reject them as making no sense (and this is sometimes a correct judgment).

The problem is that some of these ideas are true or helpful even if someone didn't argue for them properly, and feminists or others on Less Wrong might have to explain the whole thing all over again every single time they want to have a productive discussion with a new person using a concept like sexism. This is a lot of extra work, and understandably frustrating. A carefully-argued account of some key relevant concepts would be extremely valuable, and might even be an appropriate addition to the Sequences. Brienne's post on gender bias is a great start, and there's probably lots of other great stuff out there hiding in between the ninety percent.

Build resources (FAQs, blog posts, etc.) educating LWers about e.g. gender bias and accumulation of advantage. Forcing women to re-argue things like 'is sexism a thing?' every time they want to treat it as a premise is exhausting and alienating.

-Robby

Get Data

(Ruthie)

This one's a real head-slapper - Less Wrong is supposed to be all about this. There's a problem and we don't know how to solve it. How about we get more information about what's causing it? Find the people who would be contributing to or benefiting from the rationalist community if only they didn't feel pushed away or excluded by some things we do. (And the people who only just barely think it's worth it - they're probably similar to the people who just barely think it's not worth it.)

Collect and analyze more-than-anecdata on women and minority behavior around LW

The existing survey data may have a lot of insight. Adding more targeted questions to next year's survey could help more. It's hard to give surveys to the category of people who feel like they were turned away from LW, but if anyone can think of a good way to reach this group, we may be able to learn something from them.

Try to find out more about how people perceive different kinds of rhetoric

This would be hard, but I'd be really interested in the outcome. Some armchair theories about how friendly different kinds of people expect discourse to be strike me as plausible. If there are really differences, offense might be prevented by using different words to say the same things. If not, we could stop throwing this accusation around.

-Ruthie

Go Meta

(Ruthie)

Less Wrong is supposed to be all about this one too. Some people consistently think other people are unreasonable and find it difficult to have a conversation with them - and vice versa. Maybe we should see if there are any patterns to this? Like the illusion of transparency, or taking offense being perceived as an attack on the offender's status.

One of my favorite patterns is when person A says that behavior X (described very abstractly) is horrible, and person B says how can you possibly expect people to refrain from behavior X. Naturally, they each decide that the other is a bad person, and also wrong on the internet. Then after much arguing, person A gives an example, and person B says "That's what you were talking about the whole time? People actually do that?! No wonder you're so upset about it!" Or person B gives an example of the behavior they think is reasonable, and person A says "I thought it went without saying that your example is okay. Why would you think anyone objected to that? It's perfectly reasonable!" It's kind of a combination of the illusion of transparency and generalizing from one example, where you try to make sense of the other person's abstract language by mapping it onto the most similar event you have personally experienced or heard about.

I bet there are lots of other patterns that, if we understood them better, we could build shortcuts around.

If well-intentioned people understood why conversations about gender so often become so frustrating before having a conversation about gender, it might lead to higher quality conversations about gender.

-Ruthie

Taboo Unhelpful Words More

(Ruthie)

Rationalist Taboo is when, if you seem to disagree about what a word means, you stop using it and use more specific language instead. Sometimes this can dissolve a disagreement entirely. In other cases, it just keeps the conversation substantive, about things rather than definitions. I definitely recall reading discussions on Less Wrong and thinking, "somebody should suggest tabooing the word 'feminist' here" (or "sexist" or "racist"). Guess what? I'm somebody! I'll try to remember to do that next time; I think a few people committed to helping on this one could be super helpful.

Taboo words

Possibly on a per-conversation basis. "Feminist" is a pretty loaded word for me, and people say things like this which don't apply closely to me, and I feel threatened because I identify with the word.

Scott Alexander also suggested this in the same context in his response to Apophemi on his blog (a bit more than halfway down the page). It can improve the quality of discourse simply by forcing people to use relevant categories instead of easy ones.

Higher standards of justification for sensitive topics

A lot of plausible-but-badly-justified assertions about gender are thrown around, and not always subjected to much scrutiny. These can put harmful ideas in people's minds without at least giving us reason to believe that they're true, and they're slippery to argue against. Saying exactly what you mean and justifying it is probably the best way to defend against unreasonable accusations of sexism. If people accuse you of sexism, they'll at least be reasonable. I think taboo words can go a long way towards achieving this.

-Ruthie

Build a Norm That You Can Safely Criticize and Be Criticized For "Offensive" Behavior

(Ruthie)

I have no idea how hard or easy this is. Less Wrong seems like it's already an unusually safe place to say "oops, I was wrong." But somehow people seem not to do a good job becoming more curious about certain things like sexism. If I understand correctly (her wording's a little telegraphic to me), Ruthie suggested a stock phrase for people correcting their own language, "let me try again." It would be nice to come up with a similarly friendly way to say that you think someone is talking in an unhelpful way, but don't intend to thereby lower their status - you just want to point it out so they will change their behavior to stop hurting you.

Better ways to call people out for bad behavior

Right now, talking about gender in almost any form is asking for a fight. I hold my tongue about a lot of minor things that bother me because calling people out causes people to get defensive instead of considering correcting themselves. A strong community norm of taking criticism in a certain form seriously could help us not quarrel about minor things. Someone I know suggested "let me try again" as a template for correcting offensive speech, and I like the idea a lot.

Successfully correcting when called out can also help build goodwill. If you are sometimes willing to change your rhetoric, I take you more seriously when you say it's important when you aren't.

Our only current mechanism is downvoting, but it's hard to tell why a thing has been downvoted.

-Ruthie

A Call For Action

If you are at all involved or interested in the rationalist community: The next time you are tempted to spend your precious time or energy complaining about how the community excludes people, or complaining about how the people who feel excluded want complete control over what is talked about instead, consider spending that resource on advancing one of these projects instead, to make the problem actually go away.

Wall, Staircase, Hallway: Obstacles Are Scary

When I try to make long-term plans, my attention generally slides most easily towards plans that involve things that I have already done, or that I know how to do. It makes sense to focus on the things known to be easy first, but what doesn't make sense is the degree to which I am averse to obstacles.

Right now, I see obstacles at three levels:

Wall

At a distance, every obstacle or new challenge looks like an impassable wall. I can get around it, maybe under or over it, but it just doesn't make sense to go through it. When I think about going from my bedroom to my bathroom, punching me-sized holes in the walls between them and walking through just doesn't seem like a viable plan, so I don't think about it.

This makes sense for literal walls (usually), but not for obstacles like a task that requires a skill I don't have yet. When I make plans at this high level with any attention at all paid to feasibility, I end up excluding every possible plan that would involve learning a skill or otherwise doing or figuring out something new.

This is a problem.

 

Staircase

Sometimes, if I focus in on one particular obstacle, it turns out to have a bunch of different parts, many of which are soluble, and only a few of which are actually hard, by which I mean they require attention or willpower. Then instead of a wall, it looks like a long, steep staircase. I know I can get up to the top, but it's work.

 

Hallway

Finally, if I focus on the staircase, sometimes it's just a series of tasks, none of which requires much willpower in the moment; it's just sticking to it that requires willpower. If I can set up the series in a manageable way, then the staircase flattens to a hallway and doesn't feel like it's work at all, just another path I can take.

 

Obstacles are Scary

I don't seem to gradually see my obstacles with greater resolution as I think about them more. Instead, it feels like a discontinuous, sudden shift in perspective. Moreover, my brain seems to think that there really are three different kinds of things that can appear to be obstacles. If I look more closely at a wall and it turns out to be a staircase or a hallway, it doesn't feel like I just have a higher-resolution picture - it feels like I was objectively mistaken about what kind of obstacle this was.

I think this is because obstacles are scary. More precisely, plans that involve a route through an obstacle are scary. Making a commitment I don't know how to execute yet feels like making a promise I know I can't keep (because it would involve walking through a wall). I really, really don't like doing that. 

Making a commitment I know will be a lot of work feels like, well, a lot of work.

So if I realize that a wall is just a staircase, it feels like all the routes through that staircase have switched from lies into promises I can keep.

I'd like to be able to think about this differently. I'd like to be able to imagine obstacles probabilistically, with a certain chance I'll be able to figure it out - and to automatically think of "think about how to get past this obstacle" as a step in the plan, instead of either making "impossible" plans or avoiding opportunities for growth altogether. I suppose the next step is to find out whether anyone else has had success making this change.

Have you?

Don't Worry, Be Canny

Oops

My girlfriend is [...] triggered [...] by many discussions of charity – whenever ze hears about it, ze starts worrying ze is a bad person for not donating more money to charity, has a mental breakdown, and usually ends up shaking and crying for a little while.

I just wrote a post on giving efficiently.

I just wrote another asking people to give to CFAR.

And I'm pretty sure I mentioned both to the person in question.

Oops.

Of course I put a disclaimer up front about how I'm not talking about how much to give, just how to use your existing charity budget better. But of course that doesn't matter unless it actually worked - which it likely didn't.

Of course I would have acted differently if I'd had more information up front - but I don't get extra points for ignorance; the expected consequence is just as bad.

I'm going to try and write an antidote to the INFINITE GUILT that can feel like the natural response to Peter Singer style arguments. It probably won't work, but I doubt it will hurt. (If it does, let me know. If there's bad news, I want to hear it!)

 

You Don't Have To Be a Good Person To Be a Good Person

What are you optimizing for, anyway, being a good person or helping people?

If you care about helping people, then you should think of yourself as a manager, with a team of one. You can't fire this person, or replace them, or transfer them to another department. All you can do is try to motivate them as best you can.

Are you going to try to work this person into the ground, use up 100% of their capacity every day, helping others? No! The mission of the firm is "helping people," but that's not necessarily your employee's personal motivation. If they burn out and lose motivation, you can't replace them - you have to build them back up again. Instead, you should try really, really hard to keep this person happy. This person, of course, being you.

If telling them they should try harder gets them motivated, then fine, do that. But if it doesn't - if it makes them curl up into a ball and be sad instead, then try something else. Ask them if they need to give up on some of the work. Ask them if there's anything they need that they aren't getting. Because if your one employee at the firm of You isn't happy to be there, you'd better figure out how to make that happen. That's your number one job as manager - because without you, you don't have anyone.

That doesn't make the firm any less committed to helping people. As your own manager, you are doing your best to make sure helping-people activities happen, as much and as effectively as possible. But that means treating yourself like a human being, with basic decency and respect for your own needs.

 

Alright, suppose you do care about "being good." Maybe you believe in virtue ethics or deontology or have some other values where you have an idea of what a good person is, independent of maximizing a utilitarian consequence.

The same result follows. You should take whatever action maximizes your "goodness," but again, you don't have perfect control over yourself. You're a manager with one permanent employee. There's no point in asking more than they can do, unless they like that (some people say they do) - look for the things that actually do motivate them, and make sure their needs get met. That's the only way to keep them motivated to work towards being a "good person" in the long term; all the burnout considerations still apply.

 

What Do You Mean By "You"?

There's not really just one you. You have lots of parts! The part that wants to help people is probably distinct from the part that wants to feel like a good person, which is in turn distinct from the part that has needs like physical well-being. You all have to come to some sort of negotiated agreement if you want to actually get anything done.

In my own life, it was a major breakthrough, for example, to realize that my desire to steer the world toward a better state - my desire to purchase "altruons" with actions or dollars - is distinct from my desire to feel good about getting things done and be validated for doing good work that makes a clear difference. Once I realized these were two very different desires, I could at least try to give each part some of what it wanted.

Pretending your political opponents don't exist is not a viable strategy in multiple-person politics. It's no better in single-person politics. You have three options:

1) Crush the opposition.

If exercising is a strong net positive for you, but part of you is whining "I'm tired, I don't wanna," you can just overpower it with willpower.

In politics, there are all sorts of fringe groups that pretty much get totally ignored. For example, legalization of cocaine doesn't seem to have gone anywhere in the US, even though I'm sure there are a few people who feel very, very strongly about it. No concessions whatsoever seem to have been made.

The advantages of this strategy are that you get what you think you want, without giving up anything in exchange, and that you get practice using your willpower (which may get stronger with use).

The disadvantages are that you can't do it without a majority, that some parts of you don't get their needs met, and that if you're tired or distracted the government may be overturned by the part of yourself that has been disenfranchised.

2) Engage in "log-rolling."

Sometimes the part of you that's resisting may want something that's easy to give it. For example, I just finished the first draft of a short story. Prior to that I hadn't finished a work of fiction in at least ten years. I'd started plenty, of course, so clearly there was some internal resistance.

My strategy this time was to get used to writing anything at all, regularly, and "finishing" (i.e. giving up and publishing) things, whether I think they're good or not. Get used to writing at all, and worry about getting good once I've installed the habit of writing.

But I stalled out anyway when writing fiction. Eventually, instead of just fighting myself with willpower when I noticed that I was stalling, I engaged myself in dialogue:

"Why don't you want to keep writing?"

"I can't think of what to write next."

"You literally can't think of what to write? Or you don't like your ideas?"

"I don't like the ideas."

"Why not?"

"Because I think they're bad. I'm trying to write something good, like you asked, but all I have is bad ideas."

"Darn it, self, I didn't ask you to write something good. I asked you to write something at all. Go ahead and write the bad version. We'll worry about writing something good later."

"Oh, is that all you wanted? That's easy!"

And I happily went back to work and kept writing.

Sometimes the best you can do is give everyone just part of what they want, though. There are people who believe that the rich US should give much of its excess wealth to poor people. If you believe this, what's a better strategy? Start a magazine called "America Is Bad And It Should Feel Bad", or try to expand our guest-worker visa program? One, and only one, of these will increase the wealth of poor foreigners at all.

The advantages of this approach are that it probably maximizes your short-term happiness, more of your needs get met, and it saves willpower for things where this approach is not viable.

3) Lose.

If you can't crush the opposition, and you can't trade with them, then you lose. If you're losing, and you have spent five minutes thinking about it and can't think of either a viable way to win or an idea-generating method you expect to work, then give up. Stop expending willpower on it, accept the bad consequence, and get on with your life.

I'm a bad person? Okay, I'm a bad person. I'd still like to help people, though. What's for lunch?


The Bottom Line


KIRK: I wish I were on a long sea voyage somewhere. Not too much deck tennis, no frantic dancing, and no responsibility. Why me? I look around that Bridge, and I see the men waiting for me to make the next move. And Bones, what if I'm wrong?
MCCOY: Captain, I
KIRK: No, I don't really expect an answer.
MCCOY: But I've got one. Something I seldom say to a customer, Jim. In this galaxy, there's a mathematical probability of three million Earth-type planets. And in all of the universe, three million million galaxies like this. And in all of that, and perhaps more, only one of each of us. Don't destroy the one named Kirk.

Pay Today or Pay More Tomorrow

I am 27 years old. I recently bought a life insurance policy with a face value of $100,000. This policy will last my whole life - in other words, no matter when I die, the payout happens. It cost me roughly $10,000 in today's money. If this is surprising to you, or you think the insurance company got a bad deal, then read this.

Everyone makes choices about whether they'd rather have something now, or something else later. Almost no one understands the economic concepts that describe these tradeoffs: they're called "present value" and "discount rate."

I will start by describing some simple examples that use these concepts, without using the jargon. Then I will explain what these all have in common. I'm not going to explain how to use these in real-life situations, but if you're interested, please let me know in the comments and I'll write a follow-up post.

Return on Investment

I'll start with a simplified example, with made-up numbers. Abby has a bank account with a bunch of money in it earning 2% guaranteed interest per year. She also owns a bond that would pay out $1,000 if she cashes it out now, or $1,030 if she cashes it out in a year. Should she cash it out now, or a year later?

Let's say that in any case she wouldn't use the money until a year from now. Then if she cashes out the bond now, she can immediately deposit the money, and in a year, she'll have $1,020. But that's less than the $1,030 she'd get if she held onto the bond for a year.

On the other hand, suppose she wants to use the money right now. Then if she cashes out the bond now, she has an immediate $1,000 to spend. On the other hand, let's say she holds onto the bond, and withdraws $1,000 from her bank account. Then in a year, she has $1,020 less in her account than she would have, but an extra $1,030 from the bond, putting her $10 ahead of the first strategy. So in this case too she should hold onto the bond for another year.

It should be easy to see that if the bond only returned $1,010 in a year, Abby comes out ahead by cashing out now, again regardless of whether she wants to use the money now or later, because the bond gives her a lower return on investment (1%) than her savings account does (2%).

Then suppose the bond pays out $1,030 in a year, but her bank account offers 4% interest this year. Then Abby also comes out ahead by cashing out now, because the bond's return (3%) is less than the interest she gets on her bank account.

Cost of Funds

Brian doesn't have any savings - he's a student. But he has a good credit rating and is able to borrow at 5% interest per year, and is allowed to pay off his loans at any time.

He is deciding whether to rent a textbook for $100, or buy it for $150 and sell it back used to his school's bookstore in a year for $55.

If Brian rents his textbook, then after a year, he will owe $105, including interest, and have no textbook. On the other hand, if he buys his textbook, then after a year, he will owe $157.50. He can then sell his textbook back to the bookstore for $55, use that to pay down his debt, and owe only $102.50. So buying the textbook is a better deal.

Suppose instead Brian can only borrow at 10% interest. Then if Brian rents his textbook, after a year, he will owe $110. On the other hand, if he buys his textbook, then after a year, he will owe $165-$55=$110. So he should be indifferent between the two alternatives.

If Brian has to pay 15% interest, then if he rents his textbook, after a year he owes $115, but if he buys, then after a year he owes $172.50-$55=$117.50, so he comes out ahead by renting.

On the other hand, suppose at the 5% rate of interest, Brian can only collect $50 for his textbook after a year. Then instead of owing $102.50 at the end of a year, he'd owe $107.50, more than the $105 he'd owe if he rented, so in that case renting again becomes more advantageous.
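
Here's a minimal sketch of that comparison in Python, just re-running the three interest-rate scenarios above (the $100 rental, $150 purchase, and $55 buyback are the made-up numbers from the example):

for rate in (0.05, 0.10, 0.15):
    rent_debt = 100 * (1 + rate)        # the rental fee, borrowed and repaid after a year
    buy_debt = 150 * (1 + rate) - 55    # the purchase price borrowed, minus the $55 buyback
    print(rate, round(rent_debt, 2), round(buy_debt, 2))
# 0.05: 105.0 vs 102.5 (buying wins); 0.10: 110.0 vs 110.0 (a tie); 0.15: 115.0 vs 117.5 (renting wins)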

Present Value

In each of the above examples, a future amount of money was related to a present amount of money, by either how much money you'd have if you used the current money in the best way available (either investing or paying off debt), or how much money you would have to have now, to produce the future money. The first is called the "future value" of money, and the second is called the "present value" of money.

When Abby is choosing between $1,000 now and $1,030 in a year, the "future value" of $1,000 is how much money she'd have at the end of a year if she put the money in her bank account yielding 2%. To get this, you multiply by (100%+2%=1.00+0.02=1.02): $1,000 * 1.02 = $1,020. This is less than the one-year future value of $1,030 in a year, which is of course $1,030.

The "present value" of the year-later $1,030 is the amount Abby would need today to produce that amount in a year. To calculate the value a year in the past, you do the opposite of what you did when calculating the value a year in the future: you divide by (100%+2%=102%=1.02), to get $1,030/1.02=$1,009.80, more than the present value of $1,000 today (which is of course $1,000).

Another way to show this is algebraically:
PV*1.02=FV
PV=FV/1.02
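
If you prefer code to algebra, here's a minimal sketch of the same two calculations in Python, using Abby's made-up numbers (a 2% rate, $1,000 now versus $1,030 in a year); the helper names are just for illustration:

def future_value(amount, rate, years=1):
    # What an amount grows to if invested at the given rate for that many years.
    return amount * (1 + rate) ** years

def present_value(amount, rate, years=1):
    # How much you would need today to end up with that amount after that many years.
    return amount / (1 + rate) ** years

print(round(future_value(1000, 0.02), 2))   # 1020.0 - less than the bond's $1,030
print(round(present_value(1030, 0.02), 2))  # 1009.8 - more than the $1,000 on offer today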

Now let's look at the first example involving Brian. Brian is comparing making a single payment today, with making a payment today plus receiving a payment in a year.

Since Brian has to pay 5% interest on money he borrows, the future value of the textbook rental expense is how much Brian will owe in a year if he borrows the money, or $100*1.05=$105. The future value of the purchase price of the textbook is $150*1.05=$157.50, and the future value of the $55 Brian will receive for his textbook in a year is just $55. So the net future value of Brian's textbook expenses if he buys is $157.50-$55.00=$102.50, less than the $105 future value of the rental fee.

The present value of the renting option, $100 today, is of course $100. The present value of the textbook's price today is also the same as the price, $150. The present value of getting $55 in a year is the amount of debt he'd have to pay off now, to owe $55 less in a year: $55/1.05=$52.38. So the present value of the cost of buying and selling back later is $150-$52.38=$97.62, less than the $100 textbook rental fee. So the buying option costs less, in present value terms, as well.
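
The same comparison as a quick Python sketch, with the example's numbers (a 5% borrowing rate, the $100 rental, the $150 purchase, and the $55 buyback):

borrow_rate = 0.05
rent_cost_pv = 100.0                        # the rental fee is paid today, so it's already a present value
buyback_pv = 55 / (1 + borrow_rate)         # about $52.38: today's value of getting $55 in a year
buy_cost_pv = 150 - buyback_pv              # about $97.62
print(rent_cost_pv, round(buy_cost_pv, 2))  # buying costs less in present-value terms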

The key here is that by converting each value, whether positive or negative, into the equivalent value for a single time period - whether the present or the future - we end up with numbers that can be directly added and subtracted to find out which amount is higher on net.


Discount Rate


You may have noticed that in Abby's case we were using the rate at which she could expect return on her savings to equate future and present amounts, but in Brian's case we looked at the interest rate he'd have to pay to borrow money. These might seem like quite different things, but in finance, there's little difference between spending saved money and borrowing money; in both cases money today can be traded for a larger amount of money in the future, and we assume a fixed conversion factor. Instead of calling it a cost of borrowing sometimes and an expected return on investment at other times, economics abstracts this into the more general term "discount rate", which is basically the extra share you can demand if you get your money in a year instead of today, or the share of your money you should expect to give up if you get your money today instead of a year from now.

This is related to the economic concept of "opportunity cost," which I will cover in a future post.

I will also cover how to deal with a series of future payments in a future post - and in the process show you that if you believe in discount rates, the future isn't as big a deal as it seems.

Which means, of course, that this is the first post in a series.

Fire, Telepathy, Bandwidth

Literacy is an amazing power. But it comes at a terrible price. And no, I don't just mean memory.

Writing is Magic

Through the magic of psychometric tracery we are able to share the thoughts of fellow literates across great distances of time and space, just by reading their inscriptions. Moreover, psychometric tracery has a permanence that memory does not, so we can preserve our own thoughts more completely and precisely, for longer, by writing them down, than by remembering them. The modern bureaucratic state and firm owe their existence to writing - the world would collapse without it. This has probably been true ever since the first great cities learned the Art.

But Great Magic Comes at a Great Price

Just like meetings summon a very knowledgeable demon at the price of the temporary suspension of their participants' minds, writing comes at a price as well. The most common criticism is that literate people have worse memories. As usual, Plato said it best. I'm just going to paraphrase; if you want the original, I highly recommend reading the Phaedrus.

In Phaedrus, Plato has Socrates tell a story about the invention of writing. He says that Theuth, the god-inventor, presented his inventions to the god-king Thamus, and among them was writing, which Theuth praised as an aid to both wisdom and memory. Thamus replied that Theuth was too optimistic; writing was a drug that counterfeited memory, and actively harmed wisdom. People would be able to "recite" many true opinions that they just looked up, but out of prolonged reliance on reference texts, would have less of the understanding that would have enabled them to generate these opinions in the first place.

Elsewhere in Phaedrus, Socrates says that the true practice of philosophy cannot be written down, because to teach philosophy you cannot speak in the same way to everyone. Philosophy is not a set of opinions, it is more like a fire burning in the soul of a person, which can only be transmitted by prolonged contact in which the other person's soul can catch fire. Plato says much the same, writing in his own voice, in the Seventh Letter, which I recommend less but has the virtue of being short.

Of course, everyone ignores this and goes on to assume Plato put his philosophy into writing. Well, almost everyone.

Was Plato Right?

I don't actually think the degradation of memory is a problem. If anything, it's freed up mental space for better things to remember. Instead of memorizing facts, we can keep track of a large number of ways to obtain facts. We've increased our total power to obtain true opinions.

The understanding thing is a little more problematic.

Talking to Yourself

When you think something through in your own mind, you have access to all your own thoughts. You know what you mean by all the words you use. You can communicate with yourself in any mode - visual, auditory, tactile, nonverbal.

Verbal conversation with another person is necessarily lower bandwidth - meaning that less information is communicated at a time. In exchange, you get two separate minds, with different strengths, processing the information simultaneously. A clarifying question from your interlocutor can help you notice that actually, no, you don't quite understand what you mean by that word, or the nonverbal assumptions you were making aren't ones you endorse, or the big fuzzy thing you were confused about seems clearer when you break it down into pieces small enough to talk about.

Another problem with verbal communication is error. Disagreements about definitions or word usage often derail substantive conversations. This can be (but rarely is) addressed by frequent stopping or interrupting at the first moment someone uses a term that seems unclear. The underlying disposition of curiosity that makes this possible, and the readiness to abandon or discard words to try to ascend to the things themselves, is part of the philosophical attitude Plato believed it would be impossible to convince someone of by writing down correct opinions.

Latency and Throughput

Verbal communication of any kind has serious problems, and writing has even more. A big one is latency.

I am borrowing the concepts of latency and throughput from computing. They are two measurements of how fast you can transfer information. Throughput measures the overall rate of information transfer over time. Latency measures how long it takes to move a small piece of information and get a response.

Written communication generally has high throughput but high latency. This is obviously true for things like physical letters in envelopes, but tends to be true electronically as well, because people tend to wander off and do something else instead of waiting for a response. So even some short conversations can extend over days, months, or even years.

One common response to this problem is to try to use higher throughput to compensate for latency. Instead of saying just one thing, people make long, structured arguments, explicitly defining terms and anticipating counterarguments or questions instead of waiting for them. In other words, they try to take the conversation as far as they can with a simulation of the other person inside their heads.

In cases where the questions or objections are easy or simple ones, this is effective - it is a convenient shortcut with a long and glorious tradition, dating back even to the days when such arguments were communicated by speechmaking rather than writing, for example in politics and other adversarial environments where you could not trust your interlocutor to ask fair questions and work with you to get to answers constructively. But for the hard questions, people just end up talking past each other, and have debates instead of conversations.

Good Conversation Takes Practice

This is especially problematic because it increases the opportunity cost of difficult conversations. Easy conversations get cheaper with writing (where the potential throughput is basically unlimited), so we have more of them - but the hard conversations are almost no cheaper at all by comparison. So we have very few. After all, the difficulties you have with a novel concept may be very different from the difficulties I have with it, requiring conversations that go in totally different directions, or at different speeds, or examining different parts of our vocabulary. Because of this, even if you do manage to make the points I need to hear, it doesn't necessarily scale up well - republishing the original won't reliably communicate the same thing again.

But wait - it gets worse. Good conversation about difficult things takes practice. Most people are never properly trained, because proper training is expensive and the benefits are unobvious, so they don't know what to do when the opportunity arises to learn something difficult - and instead just try to have a debate, linking to articles, citing research, making long structured arguments and explicit definitions, and trying to anticipate counterarguments before they come up. If they've started out on the wrong track, it's exhausting for even a skilled conversational partner to apply the brakes, especially because someone trained in the art of good philosophical conversation is specifically acculturated not to try to exert a disproportionate influence over the conversation.

My hope is that simply making more people aware of this failure mode will help them avoid it, but I'm not very confident this will help.

Doubt, Science, and Magical Creatures

Doubt

I grew up in a Jewish household, so I didn't have Santa Claus to doubt - but I did have the tooth fairy.

It was hard for me to believe that a magical being I had never seen somehow knew whenever any child lost their tooth, snuck into their house unobserved without setting off the alarms, for unknown reasons took the tooth, and for even less fathomable reasons left a dollar and a note in my mom's handwriting.

On the other hand, the alternative hypothesis was no less disturbing: my parents were lying to me.

Of course I had to know which of these terrible things was true. So one night, when my parents were out (though I was still young enough to have a babysitter), I noticed that my tooth was coming out and decided that this would be...

A Perfect Opportunity for an Experiment.

I reasoned that if my parents didn't know about the tooth, they wouldn't be able to fake a tooth fairy appearance. I would find a dollar and note under my pillow if, but only if, the tooth fairy were real.

I solemnly told the babysitter, "I lost my tooth, but don't tell Mom and Dad. It's important - it's science!" Then at the end of the night I went to my bedroom, put the tooth under the pillow, and went to sleep. The next morning, I woke up and looked under my pillow. The tooth was gone, and in its place there was a dollar and a note from the "tooth fairy."

This could have been the end of the story. I could have decided that I'd performed an experiment that would come out one way if the tooth fairy were real, and a different way if the tooth fairy were not. But I was more skeptical than that. I thought, "What's more likely? That a magical creature took my tooth? Or that the babysitter told my parents?"

I was furious at the possibility of such an egregious violation of experimental protocol, and never trusted that babysitter in the lab again.

An Improvement in Experimental Design

The next time, I was more careful. I understood that the flaw in the previous experiment had been failure to adequately conceal the information from my parents. So the next time I lost a tooth, I told no one. As soon as I felt it coming loose in my mouth, I ducked into the bathroom, ran it under the tap to clean it, wrapped it in a tissue, stuck it in my pocket, and went about my day as if nothing had happened. That night, when no one was around to see, I put the tooth under my pillow before I went to sleep.

In the morning, I looked under the pillow. No note. No dollar. Just that tooth. I grabbed the incriminating evidence and burst into my parents' bedroom, demanding to know:

"If, as you say, there is a tooth fairy, then how do you explain THIS?!"

What can we learn from this?

The basic idea of the experiment was ideal. It was testing a binary hypothesis, and was expected to perfectly distinguish between the two possibilities. However, if I had known then what I know now about rationality, I could have done better.

As soon as my first experiment produced an unexpected positive result, just by learning that fact, I knew why it had happened, and what I needed to fix in the experiment to produce strong evidence. Prior to the first experiment would have been a perfect opportunity to apply the "Internal Simulator," as CFAR calls it - imagining in advance getting each of the two possible results, and what I think afterwards - do I think the experiment worked? Do I wish I'd done something differently? - in order to give myself the opportunity to correct those errors in advance instead of performing a costly experiment (I had a limited number of baby teeth!) to find them.

Cross-posted at Less Wrong.

CFAR - Second Impression and a Bleg

TLDR: CFAR is awesome because they do things that work. They've promised to do more research into some of the "epistemic rationality" areas that I'd wished to see more progress on. They have matched donations through 31 January 2014, please consider giving if you can.

UPDATE: CFAR now has a post up on Less Wrong explaining what they are working on and why you should give. Here's the official version: http://lesswrong.com/lw/jej/why_cfar/

Second Thoughts on CFAR

You may have seen my first-impression review of the Center For Applied Rationality's November workshop in Ossining, NY. I've had more than a month to think it over, and on balance I'm pretty impressed.

For those of you who don't already know, CFAR (the Center For Applied Rationality) is an organization dedicated to developing training to help people overcome well-studied cognitive biases, and thus become more effective at accomplishing their goals. If you've never heard of CFAR before, you should check out their about page before continuing here.

The first thing you need to understand about CFAR is that they teach stuff that actually works, in a way that works. This is because they have a commitment to testing their beliefs, abandoning ideas that don't work out, and trying new things until they find something that works. As a workshop participant I benefited from that: it was clear that the classes were way better honed, specific, and action-oriented than they'd been in 2011.

At the time I expressed some disappointment that a lot of epistemic rationality stuff seemed to have been neglected, postponed, or abandoned. Even though some of those things seem objectively much harder than some of the personal effectiveness training CFAR seems to have focused on, they're potentially high-value in saving the world.

The Good News

After my post, Anna Salamon from CFAR reached out to see if we could figure out some specific things they should try again. I think this was a helpful conversation for both of us. Anna explained to me a few things that helped me understand what CFAR was doing:

1) Sometimes an "epistemic rationality" idea turns into a "personal effectiveness" technique when operationalized.

For example consider the epistemic rationality idea of beliefs as anticipations, rather than just verbal propositions. The idea is that you should expect to observe something differently in the world if a belief is true, than if it's false. Sounds pretty obvious, right? But the "Internal Simulator," where you imagine how surprised you will be if your plan doesn't work out, is a non-obvious application of that technique.

2) Some of the rationality techniques I'd internalized from the Sequences at Less Wrong, that seemed obvious to me, are not obvious to a lot of people going to the workshops, so some of the epistemic rationality training going on was invisible to me.

For example, some attendees hadn't yet learned the Bayesian way of thinking about information - that you should have a subjective expectation based on the evidence, even when the evidence isn't conclusive yet, and there are mathematical rules governing how you should treat this partial evidence. So while I didn't get much out of the Bayes segment, that's because I've already learned the thing that class is supposed to teach.

3) CFAR already tried a bunch of stuff.

They did online randomized trials of some epistemic rationality techniques and published the results. They tried a bunch of ways to teach epistemic rationality stuff and found that it didn't work (which is what I'd guessed). They'd found ways to operationalize bits of epistemic rationality.

4) The program is not just the program.

Part of CFAR's mission is the actual rationality-instruction it does. But another part is taking people possibly interested in rationality, and introducing them to the broader community of people interested in existential risk mitigation or other effective altruism, and epistemic rationality. Even if CFAR doesn't know how to teach all these things yet, combining people who know each of these things will produce a community with the virtues the world needs.

In the course of the conversation, Anna asked me why I cared about this so much - what was my "Something to Protect"? This question helped me clarify what I really was worried about.

In my post on effective altruism, I mentioned that a likely extremely high-leverage way to help the world was to help people working on mitigating existential risk. The difficulty is that the magnitude of the risks, and the impact of the mitigation efforts, is really, really hard to assess. An existential risk is not something like malaria, where we can observe how often it occurs. By definition we haven't observed even one event that kills off all humans. So how can we assess the tens or hundreds of potential threats?

A while before, Anna had shared a web applet that let you provide your estimates for, e.g., the probability each year of a given event like global nuclear war or the development of friendly AI, and it would tell you the probability that humanity survived a certain number of years. I tried it out, and in the process, realized that:

Something Is Wrong With My Brain and I Don't Know How to Fix It

For one of these rates, I asked myself the probability in each year, and got back something like 2%.

But then I asked myself the probability in a decade, and got back something like 5%.

A century? 6%.

That can't be right. My intuitions seem obviously inconsistent. But how do I know which one to use, or how to calibrate them?
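
To make the inconsistency concrete: if the 2%-per-year figure were right, and the risk in each year were independent, the implied decade and century probabilities would come out far higher than my gut's 5% and 6%. A quick sketch with those made-up numbers:

p_year = 0.02
p_decade = 1 - (1 - p_year) ** 10      # about 0.18, not 0.05
p_century = 1 - (1 - p_year) ** 100    # about 0.87, not 0.06
print(round(p_decade, 3), round(p_century, 3))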

Eliezer Yudkowsky started writing the Sequences to fix whatever was wrong with people's brains that was stopping them from noticing and doing something about existential risk. But a really big part of this is gaining the epistemic rationality skills necessary to follow highly abstract arguments, modeling events that we have not and cannot observe, without getting caught by shiny but false arguments.

I know my brain is inadequate to the task right now. I read Yudkowsky's arguments in the FOOM Debate and I am convinced. I read Robin Hanson's arguments and am convinced. I read Carl Shulman's arguments and am convinced. But they don't all agree! To save the world effectively - instead of throwing money in the direction of the person who has most recently made a convincing argument - we need to know how to judge these things.

In Which I Extract Valuable Concessions from CFAR in Exchange for Some Money

Then it turned out CFAR was looking for another match-pledger for their upcoming end/beginning of year matched donations fundraiser. Anna suggested that CFAR might be willing to agree to commit to certain epistemic rationality projects in exchange. I was skeptical at first - if CFAR didn't already think these were first-best uses of its money, why should I think I have better information? - but on balance I can't think of a less-bad outcome than what we actually got, because I do think these things are urgently needed, and I think that if CFAR isn't doing them now, it will only get harder to pivot from its current program of almost exclusively teaching instrumental rationality and personal effectiveness.

We hashed out what kinds of programs CFAR would be willing to do on the Epistemic Rationality front, and agreed that these things would get done if enough money is donated to activate my pledge:

  • Participate in Tetlock's Good Judgment Project to learn more about what rationality skills help make good predictions, or would help but are missing
  • Do three more online randomized experiments to test more epistemic rationality techniques
  • Do one in-person randomized trial of an epistemic rationality training technique
  • Run three one-day workshops on epistemic rationality, with a mixture of old and new material, as alpha tests
  • Bring at least one epistemic rationality technique up to the level where it goes into the full workshops

And of course CFAR will continue with a lot of the impressive work it's already been doing.

Here are the topics that I asked them to focus on for new research.

The major "epistemic rationality" areas where I'd love to see progress:
  • Noticing Confusion (& doing something about it)
  • Noticing rationalization, and doing something to defuse it, e.g. setting up a line of retreat
  • Undistractability/Eye-on-the-ball/12th virtue/"Cut the Enemy"/"Intent to Win" (this kind of straddles epistemic and instrumental rationality AFAICT but distractions usually look like epistemic failures)
  • Being specific / sticking your neck out / being possibly wrong instead of safely vague / feeling an "itch" to get more specific when you're being vague
Here are some advanced areas that seem harder (because I have no idea how to do these things) but would also count:
  • Reasoning about / modeling totally new things. How to pick the right "reference classes."
  • Resolving scope-insensitivity (e.g. should I "shut up and multiply" or "shut up and divide"). Especially about probabilities *over time* (since there are obvious X-Risk applications).
  • How to assimilate book-learning / theoretical knowledge (can be broken down into how to identify credible sources, how to translate theoretical knowledge into procedural knowledge)

If you're anything like me, you think that these programs would be awesome. If so, please consider giving to CFAR, and helping me spend my money to buy this awesomeness.

The Bad News

For some reason, almost one month into their two-month fundraiser, CFAR has no post up on Less Wrong promoting it. As I was writing this post, CFAR had raised less than $10,000 compared to a total of $150,000 in matching funds pledged. (UPDATE: CFAR now has an excellent post up explaining their plan and the fundraiser is doing much better.)

[Image: CFAR fundraiser progress bar]

Huge oopses happen, even to very good, smart organizations, but it's relevant evidence about operational competence. Then again, I kind of have an idiosyncratic axe to grind with respect to CFAR and operational competence, as is obvious if you read my first-impression review. But it's still a bad sign, for an organization working on a problem this hard, to fail some basic tests like this. You should probably take that into account.

It's weak evidence, though.

CFAR Changed Me for the Better

The ultimate test of competence for an organization like CFAR is not operational issues like whether people can physically get to and from the workshops or whether anyone knows about the fundraiser. The test is: does CFAR make people who take its training better at life?

In my case there was more than one confounding factor (I'd started working with a life coach a few weeks before and read Scott Adams's new book a few weeks after - Less Wrong review here), but I have already benefited materially from my experience:

I had three separate insights related to how I think about my career that jointly let me actually start to plan and take action. In particular, I stopped letting the best be the enemy of the good, noticed that my goals can be of different kinds, and figured out which specific component of my uncertainty was the big scary one and took actual steps to start resolving it.

A couple of things in my life improved immediately as if by magic. I started working out every morning, for example, for the first time since college. I'm still not sure how that happened. I didn't consciously expend any willpower.

Several other recent improvements in my life of comparable size are partially attributable to CFAR as well. (The other main contributors are my excellent life coach, Scott Adams's book, and the cumulative effect of everything else I've done, seen, heard, and read.)

Several of the classes that seemed hard to use at the time became obviously useful in hindsight. For example, I started noticing things where a periodic "Strategic Review" would be helpful.

In addition, I learned how to be "greedy" about asking other people questions and asking for advice whenever I thought it would help. This has been tremendously useful already.

I'll end the way I began, with a summary:

The problems humanity is facing in this century are unprecedented in both severity and difficulty. To meet these challenges, we need people who are rational enough to sanely evaluate the risks and possible solutions, effective enough to get something done, and good enough to take personal responsibility for making sure something happens. CFAR is trying to create a community of such people. Almost no one else is even trying.

CFAR is awesome because they do things that work. They've promised to do more research into some of the "epistemic rationality" areas that I'd wished to see more progress on. They have a fundraiser with matched donations through 31 January 2014; please consider giving if you can.

Sharks are Forever

Back in the 5th or 6th grade my science teacher was telling the class about sharks. She said something about how sharks are an example of a perfected product of evolution, and that some sharks have been around basically unchanged for thousands of years. I'm now quite sure that she meant some species of shark. But at the time, I thought:

If she meant "species," surely she would have said "species." Therefore, if she didn't, by modus tollens, she must mean that some individual sharks have been around for thousands of years. Unchanging. Undying. All-consuming.

I'm sure that this was like many subtle childhood misunderstandings, insofar as it didn't affect my day-to-day life very much. I don't interact with elderly sharks very often. I've never had to take a shark's vital readings, or card a shark at a bar. There's basically nothing in my life where I would need to know how old a shark is. Until Freshman year of college, that is.

In Freshman Lab (non-Johnnies can think of it as intro biology), my tutor (professor) Mr. K made some point about aging - in particular, about how animals that reproduce sexually instead of by cell division don't destroy the original in the process of making copies. He noted that it seems like all such animals have a natural aging process. They only get so old before they start declining with age, and they can only age so long before they die. But I had the perfect counterexample.

"Excuse me," I said, "but what about sharks?"

"Well, what about sharks?" responded Mr. K.

"We all know that sharks are immortal, right?"

...