Monthly Archives: December 2013

Fire, Telepathy, Bandwidth

Literacy is an amazing power. But it comes at a terrible price. And no, I don't just mean memory.

Writing is Magic

Through the magic of psychometric tracery we are able to share the thoughts of fellow literates across great distances of time and space, just by reading their inscriptions. Moreover, psychometric tracery has a permanence that memory does not, so we can preserve our own thoughts more completely and precisely, for longer, by writing them down, than by remembering them. The modern bureaucratic state and firm owe their existence to writing - the world would collapse without it. This has probably been true ever since the first great cities learned the Art.

But Great Magic Comes at a Great Price

Just like meetings summon a very knowledgeable demon at the price of the temporary suspension of their participants' minds, writing comes at a price as well. The most common criticism is that literate people have worse memories. As usual, Plato said it best. I'm just going to paraphrase; if you want the original, I highly recommend reading the Phaedrus.

In Phaedrus, Plato has Socrates tell a story about the invention of writing. He says that Theuth, the god-inventor, presented his inventions to the god-king Thamus, and among them was writing, which Theuth praised as an aid to both wisdom and memory. Thamus replied that Theuth was too optimistic; writing was a drug that counterfeited memory, and actively harmed wisdom. People would be able to "recite" many true opinions that they just looked up, but out of prolonged reliance on reference texts, would have less of the understanding that would have enabled them to generate these opinions in the first place.

Elsewhere in Phaedrus, Socrates says that the true practice of philosophy cannot be written down, because to teach philosophy you cannot speak in the same way to everyone. Philosophy is not a set of opinions, it is more like a fire burning in the soul of a person, which can only be transmitted by prolonged contact in which the other person's soul can catch fire. Plato says much the same, writing in his own voice, in the Seventh Letter, which I recommend less but has the virtue of being short.

Of course, everyone ignores this and goes on to assume Plato put his philosophy into writing. Well, almost everyone.

Was Plato Right?

I don't actually think the degradation of memory is a problem. If anything, it's freed up mental space for better things to remember. Instead of memorizing facts, we can keep track of a large number of ways to obtain facts. We've increased our total power to obtain true opinions.

The understanding thing is a little more problematic.

Talking to Yourself

When you think something through in your own mind, you have access to all your own thoughts. You know what you mean by all the words you use. You can communicate with yourself in any mode - visual, auditory, tactile, nonverbal.

Verbal conversation with another person is necessarily lower bandwidth - meaning that less information is communicated at a time. In exchange, you get two separate minds, with different strengths, processing the information simultaneously. A clarifying question from your interlocutor can help you notice that actually, no, you don't quite understand what you mean by that word, or the nonverbal assumptions you were making aren't ones you endorse, or the big fuzzy thing you were confused about seems clearer when you break it down into pieces small enough to talk about.

Another problem with verbal communication is error. Disagreements about definitions or word usage often derail substantive conversations. This can be (but rarely is) addressed by frequent stopping or interrupting at the first moment someone uses a term that seems unclear. The underlying disposition of curiosity that makes this possible, and the readiness to abandon or discard words to try to ascend to the things themselves, is part of the philosophical attitude Plato believed it would be impossible to convince someone of by writing down correct opinions.

Latency and Throughput

Even verbal communication has serious problems, and writing has more still. A big one is latency.

I am borrowing the concepts of latency and throughput from computing. They are two different measurements of how fast you can transfer information.
Throughput measures the overall rate of information transfer over time. Latency measures how long it takes to move a small piece of information and get a response.

Written communication generally has high throughput but high latency. This is obviously true for things like physical letters in envelopes, but tends to be true electronically as well, because people tend to wander off and do something else instead of waiting for a response. So even some short conversations can extend over days, months, or even years.
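To make the tradeoff concrete, here's a toy model (all numbers are invented for illustration): a conversation's total time is transfer time plus waiting time, and once an exchange takes many short round trips, latency dominates everything else.

```python
# Toy model: total conversation time = time spent transferring words
# plus time spent waiting for replies. All figures are made up.

def exchange_time(round_trips, words_per_message, words_per_second, latency_seconds):
    """Time (seconds) for a conversation of `round_trips` question/answer cycles."""
    transfer = round_trips * words_per_message / words_per_second
    waiting = round_trips * latency_seconds
    return transfer + waiting

# Spoken conversation: slow transfer (~2 words/sec), near-zero latency.
spoken = exchange_time(round_trips=20, words_per_message=50,
                       words_per_second=2, latency_seconds=1)

# Email: faster to read, but hours of latency per reply.
email = exchange_time(round_trips=20, words_per_message=50,
                      words_per_second=5, latency_seconds=6 * 3600)

print(spoken / 60)    # minutes of talking
print(email / 86400)  # days of emailing - latency, not throughput, dominates
```

With these made-up numbers, twenty spoken round trips take under ten minutes, while the same exchange by email stretches to about five days, almost entirely spent waiting.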

One common response to this problem is to try to use higher throughput to compensate for latency. Instead of saying just one thing, people make long, structured arguments, explicitly defining terms and anticipating counterarguments or questions instead of waiting for them. In other words, they try to take the conversation as far as they can with a simulation of the other person inside their heads.

In cases where the questions or objections are easy or simple ones, this is effective - a convenient shortcut with a long and glorious tradition, dating back to the days when such arguments were communicated by speechmaking rather than writing at all, for example in politics and other adversarial environments where you could not trust your interlocutor to ask fair questions and work with you constructively toward answers. But for the hard questions, people just end up talking past each other, and have debates instead of conversations.

Good Conversation Takes Practice

This is especially problematic because it increases the opportunity cost of difficult conversations. Easy conversations get cheaper with writing (where the potential throughput is basically unlimited), so we have more of them - but the hard conversations are hardly any cheaper by comparison, so we have very few. After all, the difficulties you have with a novel concept may be very different from the difficulties I have with it, requiring conversations that go in totally different directions, at different speeds, or examining different parts of our vocabulary. Because of this, even if you do manage to make the points I need to hear, the result doesn't necessarily scale up well - republishing the original won't reliably communicate the same thing again.

But wait - it gets worse. Good conversation about difficult things takes practice. Most people are never properly trained, because proper training is expensive and the benefits are unobvious, so they don't know what to do when the opportunity arises to learn something difficult - and instead just try to have a debate, linking to articles, citing research, making long structured arguments and explicit definitions, and trying to anticipate counterarguments before they come up. If they've started out on the wrong track, it's exhausting for even a skilled conversational partner to apply the brakes, especially because someone trained in the art of good philosophical conversation is specifically acculturated not to try to exert a disproportionate influence over the conversation.

My hope is that simply making more people aware of this failure mode will help them avoid it, but I'm not very confident this will help.

Doubt, Science, and Magical Creatures

Doubt

I grew up in a Jewish household, so I didn't have Santa Claus to doubt - but I did have the tooth fairy.

It was hard for me to believe that a magical being I had never seen somehow knew whenever any child lost their tooth, snuck into their house unobserved without setting off the alarms, for unknown reasons took the tooth, and for even less fathomable reasons left a dollar and a note in my mom's handwriting.

On the other hand, the alternative hypothesis was no less disturbing: my parents were lying to me.

Of course I had to know which of these terrible things was true. So one night, when my parents were out (though I was still young enough to have a babysitter), I noticed that my tooth was coming out and decided that this would be...

A Perfect Opportunity for an Experiment.

I reasoned that if my parents didn't know about the tooth, they wouldn't be able to fake a tooth fairy appearance. I would find a dollar and note under my pillow if, but only if, the tooth fairy were real.

I solemnly told the babysitter, "I lost my tooth, but don't tell Mom and Dad. It's important - it's science!" Then at the end of the night I went to my bedroom, put the tooth under the pillow, and went to sleep. The next morning, I woke up and looked under my pillow. The tooth was gone, and in its place there was a dollar and a note from the "tooth fairy."

This could have been the end of the story. I could have decided that I'd performed an experiment that would come out one way if the tooth fairy were real, and a different way if the tooth fairy were not. But I was more skeptical than that. I thought, "What's more likely? That a magical creature took my tooth? Or that the babysitter told my parents?"

I was furious at the possibility of such an egregious violation of experimental protocol, and never trusted that babysitter in the lab again.

An Improvement in Experimental Design

The next time, I was more careful. I understood that the flaw in the previous experiment had been failure to adequately conceal the information from my parents. So the next time I lost a tooth, I told no one. As soon as I felt it coming loose in my mouth, I ducked into the bathroom, ran it under the tap to clean it, wrapped it in a tissue, stuck it in my pocket, and went about my day as if nothing had happened. That night, when no one was around to see, I put the tooth under my pillow before I went to sleep.

In the morning, I looked under the pillow. No note. No dollar. Just that tooth. I grabbed the incriminating evidence and burst into my parents' bedroom, demanding to know:

"If, as you say, there is a tooth fairy, then how do you explain THIS?!"

What can we learn from this?

The basic idea of the experiment was ideal. It was testing a binary hypothesis, and was expected to perfectly distinguish between the two possibilities. However, if I had known then what I know now about rationality, I could have done better.

As soon as my first experiment produced an unexpected positive result, just by learning that fact, I knew why it had happened, and what I needed to fix in the experiment to produce strong evidence. The moment before the first experiment would have been a perfect opportunity to apply the "Internal Simulator," as CFAR calls it - imagining in advance getting each of the two possible results, and what I would think afterwards - do I think the experiment worked? Do I wish I'd done something differently? - in order to give myself the chance to correct those errors in advance, instead of performing a costly experiment (I had a limited number of baby teeth!) to find them.

Cross-posted at Less Wrong.

CFAR - Second Impression and a Bleg

TLDR: CFAR is awesome because they do things that work. They've promised to do more research into some of the "epistemic rationality" areas that I'd wished to see more progress on. They have matched donations through 31 January 2014; please consider giving if you can.

UPDATE: CFAR now has a post up on Less Wrong explaining what they are working on and why you should give. Here's the official version: http://lesswrong.com/lw/jej/why_cfar/

Second Thoughts on CFAR

You may have seen my first-impression review of the Center For Applied Rationality's November workshop in Ossining, NY. I've had more than a month to think it over, and on balance I'm pretty impressed.

For those of you who don't already know, CFAR (the Center For Applied Rationality) is an organization dedicated to developing training to help people overcome well-studied cognitive biases, and thus become more effective at accomplishing their goals. If you've never heard of CFAR before, you should check out their about page before continuing here.

The first thing you need to understand about CFAR is that they teach stuff that actually works, in a way that works. This is because they have a commitment to testing their beliefs, abandoning ideas that don't work out, and trying new things until they find something that works. As a workshop participant I benefited from that: it was clear that the classes were way more honed, specific, and action-oriented than they'd been in 2011.

At the time I expressed some disappointment that a lot of epistemic rationality stuff seemed to have been neglected, postponed, or abandoned. Even though some of those things seem objectively much harder than some of the personal effectiveness training CFAR seems to have focused on, they're potentially high-value in saving the world.

The Good News

After my post, Anna Salamon from CFAR reached out to see if we could figure out some specific things they should try again. I think this was a helpful conversation for both of us. Anna explained to me a few things that helped me understand what CFAR was doing:

1) Sometimes an "epistemic rationality" idea turns into a "personal effectiveness" technique when operationalized.

For example consider the epistemic rationality idea of beliefs as anticipations, rather than just verbal propositions. The idea is that you should expect to observe something differently in the world if a belief is true, than if it's false. Sounds pretty obvious, right? But the "Internal Simulator," where you imagine how surprised you will be if your plan doesn't work out, is a non-obvious application of that technique.

2) Some of the rationality techniques I'd internalized from the Sequences at Less Wrong, that seemed obvious to me, are not obvious to a lot of people going to the workshops, so some of the epistemic rationality training going on was invisible to me.

For example, some attendees hadn't yet learned the Bayesian way of thinking about information - that you should have a subjective expectation based on the evidence, even when the evidence isn't conclusive yet, and there are mathematical rules governing how you should treat this partial evidence. So while I didn't get much out of the Bayes segment, that's because I've already learned the thing that class is supposed to teach.
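For readers who haven't encountered it, the core of that Bayesian idea fits in a few lines. This is a minimal sketch with invented numbers, not anything from CFAR's curriculum:

```python
# Bayes' rule: update a subjective probability on a piece of
# inconclusive evidence. All numbers here are invented for illustration.

def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after observing the evidence."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# An observation that's 80% likely if the hypothesis is true and 30% likely
# if it's false, applied to a 50/50 prior:
posterior = update(0.5, 0.8, 0.3)
print(round(posterior, 3))  # 0.727 - belief shifts, but the evidence isn't conclusive
```

The point of the exercise is exactly the one in the paragraph above: partial evidence should move your subjective expectation by a definite, rule-governed amount, even before anything is conclusive.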

3) CFAR already tried a bunch of stuff.

They did online randomized trials of some epistemic rationality techniques and published the results. They tried a bunch of ways to teach epistemic rationality stuff and found that it didn't work (which is what I'd guessed). They'd found ways to operationalize bits of epistemic rationality.

4) The program is not just the program.

Part of CFAR's mission is the actual rationality-instruction it does. But another part is taking people possibly interested in rationality, and introducing them to the broader community of people interested in existential risk mitigation or other effective altruism, and epistemic rationality. Even if CFAR doesn't know how to teach all these things yet, combining people who know each of these things will produce a community with the virtues the world needs.

In the course of the conversation, Anna asked me why I cared about this so much - what was my "Something to Protect"? This question helped me clarify what I really was worried about.

In my post on effective altruism, I mentioned that a likely extremely high-leverage way to help the world was to help people working on mitigating existential risk. The difficulty is that the magnitude of the risks, and the impact of the mitigation efforts, is really, really hard to assess. An existential risk is not something like malaria, where we can observe how often it occurs. By definition we haven't observed even one event that kills off all humans. So how can we assess the tens or hundreds of potential threats?

A while before, Anna had shared a web applet that let you provide your estimates for, e.g., the probability each year of a given event like global nuclear war or the development of friendly AI, and it would tell you the probability that humanity survived a certain number of years. I tried it out, and in the process, realized that:

Something Is Wrong With My Brain and I Don't Know How to Fix It

For one of these rates, I asked myself the probability in each year, and got back something like 2%.

But then I asked myself the probability in a decade, and got back something like 5%.

A century? 6%.

That can't be right. My intuitions seem obviously inconsistent. But how do I know which one to use, or how to calibrate them?
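For what it's worth, one consistency check is available here: a constant annual probability compounds geometrically over time, so a 2% annual rate implies roughly an 18% chance per decade and an 87% chance per century - far from my gut's 5% and 6%. A minimal sketch:

```python
# If an event has a constant annual probability p, the chance it happens
# at least once within n years is 1 - (1 - p)**n.

def prob_within(p_annual, years):
    return 1 - (1 - p_annual) ** years

p = 0.02
print(round(prob_within(p, 1), 3))    # 0.02
print(round(prob_within(p, 10), 3))   # 0.183
print(round(prob_within(p, 100), 3))  # 0.867
```

This doesn't tell me which intuition to trust - maybe the per-year figure is wrong rather than the per-century one - but it does prove that at least one of them has to give.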

Eliezer Yudkowsky started writing the Sequences to fix whatever was wrong with people's brains that was stopping them from noticing and doing something about existential risk. But a really big part of this is gaining the epistemic rationality skills necessary to follow highly abstract arguments, modeling events that we have not and cannot observe, without getting caught by shiny but false arguments.

I know my brain is inadequate to the task right now. I read Yudkowsky's arguments in the FOOM Debate and I am convinced. I read Robin Hanson's arguments and am convinced. I read Carl Shulman's arguments and am convinced. But they don't all agree! To save the world effectively - instead of throwing money in the direction of the person who has most recently made a convincing argument - we need to know how to judge these things.

In Which I Extract Valuable Concessions from CFAR in Exchange for Some Money

Then it turned out CFAR was looking for another match-pledger for their upcoming end/beginning of year matched donations fundraiser. Anna suggested that CFAR might be willing to agree to commit to certain epistemic rationality projects in exchange. I was skeptical at first - if CFAR didn't already think these were first-best uses of its money, why should I think I have better information? - but on balance I can't think of a less-bad outcome than what we actually got, because I do think these things are urgently needed, and I think that if CFAR isn't doing them now, it will only get harder to pivot from its current program of almost exclusively teaching instrumental rationality and personal effectiveness.

We hashed out what kinds of programs CFAR would be willing to do on the Epistemic Rationality front, and agreed that these things would get done if enough money is donated to activate my pledge:

  • Participate in Tetlock's Good Judgment Project to learn more about what rationality skills help make good predictions, or would help but are missing
  • Do three more online randomized experiments to test more epistemic rationality techniques
  • Do one in-person randomized trial of an epistemic rationality training technique
  • Run three one-day workshops on epistemic rationality, with a mixture of old and new material, as alpha tests
  • Bring at least one epistemic rationality technique up to the level where it goes into the full workshops

And of course CFAR will continue with a lot of the impressive work it's already been doing.

Here are the major "epistemic rationality" areas where I asked them to focus new research:
  • Noticing Confusion (& doing something about it)
  • Noticing rationalization, and doing something to defuse it, e.g. setting up a line of retreat
  • Undistractability/Eye-on-the-ball/12th virtue/"Cut the Enemy"/"Intent to Win" (this kind of straddles epistemic and instrumental rationality AFAICT but distractions usually look like epistemic failures)
  • Being specific / sticking your neck out / being possibly wrong instead of safely vague / feeling an "itch" to get more specific when you're being vague
Here are some advanced areas that seem harder (because I have no idea how to do these things) but would also count:
  • Reasoning about / modeling totally new things. How to pick the right "reference classes."
  • Resolving scope-insensitivity (e.g. should I "shut up and multiply" or "shut up and divide"). Especially about probabilities *over time* (since there are obvious X-Risk applications).
  • How to assimilate book-learning / theoretical knowledge (can be broken down into how to identify credible sources, how to translate theoretical knowledge into procedural knowledge)

If you're anything like me, you think that these programs would be awesome. If so, please consider giving to CFAR, and helping me spend my money to buy this awesomeness.

The Bad News

For some reason, almost one month into their two-month fundraiser, CFAR has no post up on Less Wrong promoting it. As I was writing this post, CFAR had raised less than $10,000 compared to a total of $150,000 in matching funds pledged. (UPDATE: CFAR now has an excellent post up explaining their plan and the fundraiser is doing much better.)

CFAR Fundraiser Progress Bar

Huge oopses happen, even to very good, smart organizations, but it's relevant evidence about operational competence. Then again, I kind of have an idiosyncratic axe to grind with respect to CFAR and operational competence, as is obvious if you read my first-impression review. But it's still a bad sign, for an organization working on a problem this hard, to fail some basic tests like this. You should probably take that into account.

It's weak evidence, though.

CFAR Changed Me for the Better

The ultimate test of competence for an organization like CFAR is not operational issues like whether people can physically get to and from the workshops or whether anyone knows about the fundraiser. The test is, does CFAR make people who take its training better at life?

In my case there was more than one confounding factor (I'd started working with a life coach a few weeks before and read Scott Adams's new book a few weeks after - Less Wrong review here), but I have already benefited materially from my experience:

I had three separate insights related to how I think about my career that jointly let me actually start to plan and take action. In particular, I stopped letting the best be the enemy of the good, noticed that my goals can be of different kinds, and figured out which specific component of my uncertainty was the big scary one and took actual steps to start resolving it.

A couple of things in my life improved immediately as if by magic. I started working out every morning, for example, for the first time since college. I'm still not sure how that happened. I didn't consciously expend any willpower.

Several other recent improvements in my life of comparable size are partially attributable to CFAR as well. (The other main contributors are my excellent life coach, Scott Adams's book, and the cumulative effect of everything else I've done, seen, heard, and read.)

Several of the classes that seemed hard to use at the time became obviously useful in hindsight. For example, I started noticing things where a periodic "Strategic Review" would be helpful.

In addition, I learned how to be "greedy" about asking other people questions and asking for advice when I thought it would be helpful. This has been tremendously useful already.

I'll end the way I began, with a summary:

The problems humanity is facing in this century are unprecedented in both severity and difficulty. To meet these challenges, we need people who are rational enough to sanely evaluate the risks and possible solutions, effective enough to get something done, and good enough to take personal responsibility for making sure something happens. CFAR is trying to create a community of such people. Almost no one else is even trying.

CFAR is awesome because they do things that work. They've promised to do more research into some of the "epistemic rationality" areas that I'd wished to see more progress on. They have a fundraiser with matched donations through 31 January 2014; please consider giving if you can.

Sharks are Forever

Back in the 5th or 6th grade my science teacher was telling the class about sharks. She said something about how sharks are an example of a perfected product of evolution, and that some sharks have been around basically unchanged for thousands of years. I'm now quite sure that she meant some species of shark. But at the time, I thought:

If she meant "species," surely she would have said "species." Therefore, if she didn't, by modus tollens, she must mean that some individual sharks have been around for thousands of years. Unchanging. Undying. All-consuming.

I'm sure that this was like many subtle childhood misunderstandings, insofar as it didn't affect my day-to-day life very much. I don't interact with elderly sharks very often. I've never had to take a shark's vital readings, or card a shark at a bar. There's basically nothing in my life where I would need to know how old a shark is. Until Freshman year of college, that is.

In Freshman Lab (non-Johnnies can think of it as intro biology), my tutor (professor) Mr. K made some point about aging - in particular, about how animals that reproduce sexually instead of by cell division don't destroy the original in the process of making copies. He noted that it seems like all such animals have a natural aging process. They only get so old before they start declining with age, and they can only age so long before they die. But I had the perfect counterexample.

"Excuse me," I said, "but what about sharks?"

"Well, what about sharks?" responded Mr. K.

"We all know that sharks are immortal, right?"

...

Bye Grandma

My first two memories of my grandmother:

1) When I was a baby, she loved to hold me up and say "SOOO big!" She even bought a statue of this scene.

2) Up until I was 15 or so, whenever my family went out to dinner she would order a "Dewars, on the rocks, with a twist." Word for word.

Precise, elegant, complete. That was my grandmother, in a glass.

My grandmother cared about being an elegant lady. Though she never lost some Great Depression-era thrifty habits, she appreciated fine things: good art, good music, good food, the city of New York. She never really liked the suburbs she lived most of her life in; a native of Washington Heights, she missed Manhattan.

She treated me like a grownup as early as she could, and never dumbed things down for me. If she wanted to make a witty remark, but knew it would go right over my little head, she said it anyway. When I was too young to know that card games were anything other than what I played with grandma and grandpa, she told me, "I used to think that playing cards was for degenerates, but then I found out that I liked it." I get it now. Another time (she loved to tell this story), when I was little, my mother was off her feet for some reason so my grandma had to pick me up from school. She took me to the ice skating rink for my skating lesson, but I didn't know where I was supposed to go because my mom had always gotten me where I needed to be, and my grandmother didn't know because she'd never been to the place before. It was an unhappy afternoon, and I must have hated it, because the next day when I saw my grandma come by to pick me up from school, I lay down on the floor and started kicking, yelling, "I'm not going! I'm not going ice skating!" Another person might have tried to scold me into compliance, or to wait out the tantrum, or to soothe me with gentle words. My grandma knew me better than that: I had simply made an error of fact, which she immediately corrected. "Ben," she told me, "we're not going ice skating." "Oh," I replied, and got up and followed her out.

My grandfather was very different. He had a big personality, and he would be off singing and playing with me and the other children, while my grandmother sat with the other adults in conversation, because she found it more interesting. I feel like I spent the first 20 years of my life getting to know my grandfather, and wish I could have spent the next 20 getting to know my grandmother, but she would have been the first to point out the practical flaw in this plan: when I was 20, she was 83.

My grandmother was always forthcoming with advice, whether it was wanted or not. "You shouldn't eat that." "I don't like your hair that way, you should cut it shorter." "You should talk to so-and-so about a job." She loved her family and wanted us to put our best feet forward, look good, and do well, and nothing made her happier than to learn of and talk about our successes.

Her honesty made her easy to buy gifts for. A love for fine food - and in particular for excellent chocolates - is one thing we shared. One year, I found some wonderful chocolates to send her, and when she called me about them she was over the moon. The next year, those chocolates had been discontinued, so I found another brand recommended by the same source. When she called she said, "I wanted to thank you for the chocolates, but I thought you'd want to know, last year's were better."

She knew what she liked, and what she didn't, and she lived only as long as she was able to enjoy the things she liked. A few weeks before her death, she played bridge with friends. She was so physically exhausted by it that she declared it her last - but she came out ahead and took home money.

My grandmother died on the morning of Thursday, December 12th, 2013. She was 90 years old. I will miss her honesty, her elegance, and her love.


Give Smart, Help More

This post is about helping people more effectively. I'm not going to try to pitch you on giving more. I'm going to try to convince you to give smarter.

There's a summary at the bottom if you don't feel like reading the whole thing.

Do you want to help people? At least a little bit?

Imagine that there is a switch in front of you, in the middle position. It can only be flipped once. Flip it up, and one person somewhere on the other side of the world is cured of a deadly disease. Flip it down, and ten people are cured. You don't know any of these people personally; they are randomly selected. You will never see their faces or hear their stories. And this isn't a trick question - they're not all secretly Pol Pot or something.

What do you do? Do you flip it up, flip it down, or leave it as it is? Make sure you think of the answer before you look ahead.

...

...

...

...

...

...

...

...

...

...

I'm going to assume you flipped the switch down. If you didn't, this post is not for you.

If you did, then why did you do that? Not because down is easier or more pleasant. Because it helps more people, and costs you nothing more. So if you made that choice and did it for that reason, you want to help people. Even people you don't know and will never meet. This might not be a preference that is particularly salient or relevant in your life right now, but when you chose between a world where more people are helped and a world where fewer people are helped, you chose the one where more people are helped. To summarize:

We agree that it is good to help people, and better to help more people, even if they're strangers or foreigners.

You probably already donate to charity. Why?

Most Americans give at least some money to charity. In 2010, in a Pew Research Center study, 95% of Americans said that they gave to a charitable organization specifically to help with the earthquake in Haiti. So when you add people who give, but didn't give for that, you end up with nearly everyone. And if you look at tax returns, the IRS reports that in 2011, out of 46 million people who itemized deductions, 38 million listed charitable contributions. That's 82%. So either way, most people give. Which means you probably do. (I'm assuming that most of my readers are in countries sufficiently similar to America for the conclusion to transfer.)

Why do you give to charity? That's actually a complicated question. People give for lots of reasons. You might be motivated by the simple fact that people will be helped, yes. But there are lots of other valid reasons to give to charity. You could want to support a cause that someone you care about is involved in, like sponsoring someone's charitable walk. You could want to express your solidarity with and membership in an institution like a church or community center. You could just value the warm fuzzy feeling that comes along with the stories you hear about what the charity does.

So here are some reasons why we give:

  • Warm fuzzies.
  • Group affiliation
  • Supporting friends
  • The simple preference for people to be helped.

All of these things are okay reasons to give, and I'm going to repeat that later for emphasis. I'm going to say some things that sound like I'm hating on warm fuzzies, but I'm really not. To be clear: Warm fuzzies are nice! They feel good! You should do things that feel good! They just shouldn't be confused with other things that are good for different reasons.

The Power of Smarter Giving

I'm going to make up some numbers here.

Imagine three people: Kelsey, Meade, and Shun. They have the same kind of job, which they all enjoy, and each make $50,000 per year. They each give $1,000 per year to charity, 2% of their income. But they want to help people more.

Let's say that they give to a charity that tries to save lives by providing health care to people who can't access it. Each of their $1,000 donations purchases interventions that collectively add one year to one person's life, on average. That's actually a pretty good deal already - I'd certainly buy a year of extra life for myself, for that kind of money. I'm going to call that "helping one person," though we understand that it's just an average.

But now they each want to help more people. Kelsey decides to just give more, by cutting back on other expenses. Less savings, more meals at home, shorter vacations. Kelsey's able to scrape together an extra $1,000, so Kelsey's now giving $2,000, adding a year to two people's lives on average. On the other hand, Kelsey has fewer of other enjoyable things.

Meade decides instead of cutting back on expenses, to put in extra hours to get promoted to a job that's more stressful but pays better. After six months of this, let's say Meade is successful, and gets a 10% pay bump. Then Meade gives all that extra money to charity. That's $6,000 now that Meade is giving, adding on average a year of life each to 6 people.

Now how about Shun? Shun is lazy, like me. Shun decides that they don't want to work hard to help people. But Shun is willing to do 3 hours of research online, to find the best way to save lives. Shun finds a charity where outside researchers agree that a $1,000 donation on average adds a year of life to each of 10 people. Maybe because they focus on the cheapest treatments, like vaccines. Maybe because they operate in poor countries where expenses are lower, and there's more low-hanging health care fruit. Either way, Shun spent 3 hours doing research and now Shun's $1,000 per year adds a year of life to each of 10 people.

To summarize: Kelsey is scraping by to give $2,000 to give 2 people an extra year of life. Meade put in six months' extra hours at work - and has a more stressful job - and their $6,000 gives 6 people an extra year of life. Shun spent just 3 hours doing research on the internet, still has the job Shun loves and gets to live the way Shun likes, and their $1,000 now gives 10 people an extra year of life each.

Kelsey - 2

Meade - 6

Shun - 10

Who would you rather be?

I don't want to deprecate any of these strategies. Sometimes your situation is different. Kelsey's a great person for trying to help people. There are a lot of reasons that Meade's strategy could be better than it sounds. But Shun went for the low-hanging fruit - and helped the most people while sacrificing the least.
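The comparison above is just arithmetic, so here's a quick sketch that makes it explicit. All the numbers come from the made-up story, not from real charity data:

```python
# Back-of-the-envelope comparison of the three made-up giving strategies.
# All figures are the toy numbers from the story above, not real charity data.

strategies = {
    # name: (dollars given per year, life-years added per $1,000, extra effort)
    "Kelsey": (2000, 1, "ongoing belt-tightening"),
    "Meade": (6000, 1, "six months of extra hours"),
    "Shun": (1000, 10, "3 hours of research"),
}

for name, (dollars, years_per_1000, effort) in strategies.items():
    life_years = dollars / 1000 * years_per_1000
    print(f"{name}: ${dollars} -> {life_years:.0f} life-years added ({effort})")
```

Running it prints 2 life-years for Kelsey, 6 for Meade, and 10 for Shun - the same ranking as the story, with Shun's impact coming almost entirely from effectiveness per dollar rather than from dollars given.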

If my numbers are realistic, then researching different charities' effectiveness is an incredibly cheap way to help more people.

Why is this the case? Because in the numbers I made up, there was an order of magnitude effectiveness difference between two charities. One charity helped ten times as many people per dollar as another did.

This is sometimes true in the real world. Some charitable activities work better than others.

GiveWell, an organization that evaluates how effectively charities produce positive outcomes, thinks that there is a difference in effectiveness between two of their top-rated charities by a factor of between 2 and 3.

To repeat: one of GiveWell's top-rated charities is 2-3 times as effective as another. GiveWell only has three top-rated charities.

Then think about how different these numbers must be, on average, from the non-top-rated charities - or unrateable ones that don't try to measure outcomes at all. So a factor of 10 isn't unrealistic - but even if it's a factor of 2, that's a better return on time invested than Meade got - they might have worked more than three extra hours every week!

How do I do the research?

Was Shun's three hours of research a realistic estimate? It wouldn't be if nobody were already out there helping you - but fortunately there are now several organizations designed to help you figure out where your money does the most good.

The most famous one is probably still Charity Navigator. Charity Navigator basically reports on charities' finances, which is helpful in figuring out whether your money is going toward the programs you think it is, or whether it is going toward executives' paychecks and fancy gala fundraisers. Charity Navigator is a good first step, if all you want to do is weed out charities that are literally scams.

But we should be more ambitious. Remember, we don't just want to be not cheated. We're happiest if people actually get helped. And to know that, we don't just need to know how much program your money buys - we need to know if that program works.

GiveWell, AidGrade, Giving What We Can, and The Life You Can Save are all organizations that try to evaluate charities not just by how much work they do, but whether they can show that their work improves outcomes in some measurable way. All four seem to have mutual respect for one another, and I know there have been some friendly debates between GWWC and GiveWell on methodology.

If you want to search for more stuff on this, a good internet search term is "Effective Altruism".

If you really, really don't feel like spending a few hours doing research, you'll do fine giving to one of GiveWell's top 3.

Existential Risk: A Special Case

I want to put in a special plug here for a category of charity that gets neglected, where I think you can get a lot of bang for your buck in terms of results, and that's charities that try to mitigate existential risk.

An existential risk is something that might be unlikely - or hard to estimate - but if it happens, it would wipe out humanity. Even a small reduction in the chance of an extinction event could help a lot of people - because you'd be saving not only people at the time, but future generations. Giving What We Can has recently acknowledged this as a promising area for high-impact giving, and GiveWell's shown some interest as well.

Examples of existential risk are:

  • Nuclear Weapons
  • Biotechnology
  • Nanotechnology
  • Asteroids
  • Artificial Intelligence

Organizations that focus on existential risk include:

  • The Future of Humanity Institute (FHI) takes an academic approach, mostly focused on raising awareness and assessing risks.
  • The Lifeboat Foundation - I actually used to give to them, but I'm not sure quite what they really do, so I put that on hold - I may pick it up later if I learn something encouraging.
  • The Machine Intelligence Research Institute (MIRI) is working on the specific problem of avoiding an unfriendly intelligence explosion - by building friendly artificial intelligence. They believe this will also help solve many other existential risks.

In particular, MIRI is holding a fundraiser where new large donors (someone who has not yet given a total of $5,000 to MIRI) who make a donation of $5,000 or more, are matched 3:1 on the whole donation. Please consider it if you think MIRI's work is important. [UPDATE: This was a success and is now over.]

But Didn't You Say Meade Had a Good Strategy Too?

Yes. If you are super serious about helping people a lot, you might want to consider making career choices partly on that basis. I don't have a lot to say about this personally, but 80,000 hours specializes in helping people with this kind of thing.

One thing I can add is that it's easy to get intimidated by the difficulty of the optimal career choice for helping and thereby avoid making a knowably better choice. Don't do that. Better is better. Don't worry about making the perfect choice - you can always change your mind later when you think things through more.

Leveraged Giving and Meta-Charity

When we talk about leverage in giving, people usually take it literally and think about matched donations. Matched donations are fine, they double effectiveness and that's great, but a factor of 5-10 from research will be more important than a factor of 2 from matched giving.

But there's another kind of leverage - giving in ways that increases the effectiveness or quality of others' giving. For example, you could give to GWWC, AidGrade, or GiveWell, and this would mean that everyone else who gives based on their recommendations makes a slightly more effective choice - or that they're able to convince more people to give at all. You could probably do a quick back-of-the-envelope Fermi estimate to figure out what the impact is - whether there's a multiplier effect or not. Giving What We Can actually gives some numbers themselves - and I know that if GiveWell thinks they can't use the money, they'll just pass it along to their top-rated charities.
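Here's the shape of the Fermi estimate I have in mind. Every input is a made-up assumption for illustration - not a real figure about GiveWell or anyone else - but the structure shows what you'd need to estimate:

```python
# Toy Fermi estimate of the "meta-charity" multiplier.
# All three inputs are made-up assumptions, not real data about any evaluator.

donation = 1000  # your donation to a charity evaluator, in dollars
money_moved_per_dollar = 5  # assumed: each $1 of evaluator budget shifts $5 of others' giving
effectiveness_gain = 0.5  # assumed: shifted donations do 50% more good than they otherwise would

# Good done (in "effective dollars") via leverage, vs. just giving directly
leveraged_good = donation * money_moved_per_dollar * effectiveness_gain
direct_good = donation

print(f"direct: ${direct_good}, via evaluator: ${leveraged_good:.0f}")
```

With these particular assumptions the multiplier comes out to 2.5x, but the point is the structure: if the evaluator moves little money, or barely improves where it goes, the multiplier drops below 1 and you'd do better giving directly.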

There's also a special case of leverage, and that's the Center For Applied Rationality, or CFAR. CFAR is trying to help people think better and more clearly, and act more effectively to accomplish their goals. A large part of their motivation for this is to create a large community of people interested in effective altruism, with the skills to recognize the high-impact causes, and the personal effectiveness to actually do something to help. If your lifetime donations just create one highly motivated person, then you've "broken even" - in other words you've helped at least as many people as you would have, giving directly. But right now it's a much more leveraged opportunity, because CFAR plans to eventually become self-sustaining, but for their next few years they'll still probably depend on donations to supplement any fees they can charge for their training.

This year I'm part of the group matching donations for CFAR's end-of-year fundraiser. If you want to spend some of my money to try to build a community of true guardians of humanity, please do! [UPDATE: This fundraiser also concluded, successfully.]

So I should give all my charity budget to the one most effective charity?

Probably not.

Now, that's not because of "diversification". The National Center for Charitable Statistics (NCCS) estimates that there are about half a million charities in the US alone. That's plenty of diversity - I don't think anything's at risk of being neglected just because you give your whole charity budget to the best one.

The reason why you don't want to give everything to the charity you think helps the most, is those four reasons people give:

  • Warm fuzzies.
  • Group affiliation
  • Supporting friends
  • The simple preference for people to be helped.

And there are probably lots of others, but for now I'll group everything except the last - the simple preference for people to be helped - under "warm fuzzies" for the sake of brevity.

If you force yourself to pretend that you only care about helping, you'll feel bad about missing out on your warm fuzzies, and eventually you'll find an excuse to abandon the strategy.

I want to be clear that all of these are okay reasons to give! Some people, when they hear this argument, assume that it means, "Some of my donations are motivated by my selfish desire for warm fuzzies. This is wrong! I should just give to charity to help people. I shouldn't spend any charity money on feeling good about myself."

You are a human being and you deserve to be happy. Also you probably won't stick with a strategy that reliably makes you feel bad. So unfortunately, the exact optimal helping-strategy is unlikely to work for you (though if it does, that's fine too).

Fortunately, we can get most of the way to a maximum-help strategy without giving up on your other motivations, because of:

One Weird Trick to Get Warm Fuzzies on the Cheap

The human brain has a defect called scope insensitivity (but don't click through until you read this section, there's a spoiler). It basically means that the part of us that has feelings doesn't understand about very large or very small quantities. So while you intellectually might have a preference for helping more people over fewer, you'll get the same feel-good hit from helping one person and hearing their touching story, as you would from helping a group of ten.

In a classic experiment, researchers told people, assigned randomly into three groups, about an ecological problem that was going to kill some birds, but could be fixed. They asked participants how much they would personally be willing to pay to fix the problem. The only thing they changed from group to group, was how many birds would be affected.

One group was told 2,000 birds were affected, and they were willing to pay on average $80 each. The other two groups were told 20,000 and 200,000 birds were affected, respectively. How much do you think they were willing to pay? Try to actually guess before you look at the answer.

...

...

...

...

...

...

...

...

...

...

Here's how much the average person in each group was willing to pay:

2,000 birds: $80

20,000 birds: $78

200,000 birds: $88

So, basically the same, with some random variation.

Why do we care about this? Because it suggests that you should be able to get your warm fuzzies with a very small donation. Your emotions don't care how much you helped - they care whether you helped at all.

So you should consider setting aside a small portion of your charity budget for the year, and spreading it equally between everything it seems like a good idea to give to. It probably wouldn't cost you much to literally not say no to anything - just give every cause you like a dollar! You might even get more good vibes this way than before, when you were trying to accomplish helping and warm fuzzies with the exact same donations.

Then give the rest to charity you think is most effective.
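As a concrete illustration of the split (the budget and the slice sizes are made up; pick your own):

```python
# Splitting a hypothetical $1,000 annual charity budget:
# a small "warm fuzzies" pool spread thinly, the rest to your top pick.

budget = 1000
fuzzy_pool = 50  # small slice reserved for feel-good giving
fuzzy_causes = 25  # every cause you like gets a couple of dollars
per_cause = fuzzy_pool / fuzzy_causes
effective_donation = budget - fuzzy_pool

print(f"${per_cause:.2f} each to {fuzzy_causes} causes, "
      f"${effective_donation} to the most effective charity")
```

Because of scope insensitivity, the $2 donations buy you roughly the same good feeling as big ones would, while 95% of the budget still goes where it helps most.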

Summary:

You probably already want to help people you don't know, and give to charity. Researching charities' effectiveness in producing outcomes is a cheap way of making your donation help more people.

These organizations can help with your research:

  • GiveWell
  • AidGrade
  • Giving What We Can
  • The Life You Can Save
  • Charity Navigator (for basic financial screening)

Because of scope insensitivity, you should try to get your warm fuzzies and your effective helping done separately: designate a small portion of your charity budget for warm fuzzies, and give a tiny bit to every cause you'll feel good about.

You may also be interested in some higher leverage options. CFAR is trying to create more people who care about effective altruism and are effective enough to make a difference, and they have a matched donations fundraiser going on right now, which I'm one of the matchers for. [UPDATE: This fundraiser was successful, and is now over.]

Existential Risk is another field where beneficial effects are underestimated and you should consider giving, especially to FHI or MIRI.

MIRI in particular has a matched donations fundraiser going on now, where new large donors (>$5,000) will be matched at a 3:1 rate. [UPDATE: This fundraiser was successful, and is now over.]

Cross-posted at the Effective Altruism Society of DC blog.

Two Tones, One Mouth

One known technology for producing a musical tone with the human body is to use the vocal cords and hum or sing.

Another is to use the lips and whistle.

Since these methods are partially independent, it seemed to me as if I ought to be able to produce two-part harmony on my own.

Step one was to be able to whistle and sing, separately, which I've been able to do since I was little. (If anyone wants help learning how to whistle, feel free to ask; I can at least try to help you).

Step two was to learn how to, while making one tone with my voice, whistle. That wasn't much harder. I just sang a note, and while I was doing that, moved my lips into whistling position. It didn't take long to produce two distinct sounds at the same time - although there is some interference.

Step three was to figure out how to change the pitch of the whistle while sustaining the vocal tone. This was also pretty easy - if you can change the pitch of your whistle, you can change the pitch of your whistle while singing at a single pitch. It feels pretty much the same.

Step four was to be able to alter the vocal pitch while keeping the whistle constant. This is actually hard, because the whistle tone and the vocal tone seem to interact somehow. So I'd have to change my lip position just to keep the whistle at the same pitch as I altered my vocal pitch. Eventually I got it, after a day or two of annoying everyone around me.

Step five was to be able to alter them simultaneously, so as to sistle in two part harmony. This was really, really hard. I think what makes this the hardest step is that it's not about learning the physical positions - it's about the cognitive ability to track the two melodies simultaneously.

The way you sing or whistle of course is not by consciously consulting a giant lookup table between pitches and physical behaviors. Instead, you learn to associate a tone with a bodily behavior, so that when you think of the tone, your body prepares to produce that tone automatically. I think the same tone-memory in my mind is linked to both a whistling and singing behavior. So when I tried to keep track of two pitches, I didn't have the cognitive skill of remembering which tone went with which part of my behavior. Instead I got leakage - I'd try to move my vocal pitch and move my whistling pitch instead, or vice versa. (This is also part of what made step four a little hard.)

On top of that there are range problems - it's harder to whistle below your vocal pitch than above it, and you're stuck with your maximum comfortable vocal and whistling ranges - minus a little bit, since you have less room to, for example, help your whistle by changing your mouth shape.

However, after having a bunch of fun with that, eventually I got to the point where I could do some mediocre two-part harmony. Here are some examples I recorded today - they sound pretty terrible, but the point is that I can do it at all:

Happy Birthday

By the Waters of Babylon

Dona Nobis Pacem

Rex Coeli

I'm Sorry but I Kant Let You Do That

A friend linked this article in the New York Times. This passage is an idea that I had seen mentioned, but never actually explained, and it drove me bonkers - the idea that certain kinds of traditional Western ideas of rationality and freedom were sexist. This is a really clear explanation and the idea makes a lot more sense to me now:

[Most feminist philosophers] argue that, among other things, [Immanuel Kant] is committed to a conception of personhood that unfairly and inaccurately privileges our rationality and autonomy over the social, interdependent, embodied, and emotional aspects of our lives.  This misrepresentation of human nature encourages us to think about people as fundamentally independent, and this, they argue, leads to the exploitation of those people (usually women) who are responsible for caring for those people who are not independent (children, the elderly, the disabled and infirm — all of us at some point in our lives, really).

The article is about an incident David Foster Wallace described in his essay, “Getting Away from Already Being Pretty Much Away from It All”:

David Foster Wallace describes a visit to the Illinois State Fair. The friend who accompanies him, whom he calls Native Companion because she’s a local, gets on one of the fair’s rides. While she’s hanging upside down, the men operating the ride stop it so that her dress falls over her head and they can ogle her. After she gets off the ride, Wallace and Native Companion have a heated discussion about the incident. He thinks she’s been sexually harassed and thinks something should be done about it.

Wallace's companion replies that she doesn't think it's a big deal, she can either ignore it and feel okay, or get angry and let it ruin her day. The article points out that there are sound Kantian reasons to believe that the woman had a duty to object.

I didn't much care for the rest of the article's analysis, though, as it seems to describe every plausible response as justified on Kantian principles, including doing nothing at all:

The obligation to resist oppression is this sort of duty: there are lots of things one can do to fulfill it. Native Companion could confront the carnies directly. She could lodge a formal complaint with the fair’s management. We might even think that she actually is resisting her oppression—that by refusing to feel humiliated, refusing to let the carnies dictate when and how she can have fun, and refusing to believe that their sexually objectifying her demeans her moral status in any way—she’s actually resisting her oppression internally.

How does that provide any moral guidance at all?

My guess is that women have a Kantian duty to other women (& themselves as women), all else being equal, to discourage actions that oppress women considered as a class, whether or not the action displeases the particular woman involved. (Just a guess and I am not sure that Kantian morality is correct either; if not, then whether you have a Kantian duty to do something doesn't determine whether you ought to do it.)

There's something in Wallace's story that mucks this up a bit, though - it's not clear whether Native Companion really was totally fine with what happened or whether that's just the story she told herself. Let's turn up the contrast a bit: suppose I were walking down the street, and felt a sudden craving for chocolate. As I pass a chocolate store, an employee runs out and shoves a free sample into my mouth. What is my duty?

Even though I was not harmed (and was even benefited) by this action, as a pedestrian it is my duty to express indignation, because the salesperson could not have reasonably expected that their behavior would be welcome. To thank them would be harming pedestrians with food allergies or other strict dietary preferences, or who simply don't enjoy chocolate, and even my future pedestrian self if the next surprise sample is not to my taste. So while I have been helped as a chocolate craver, I have been harmed as a pedestrian more deeply, and must scold the salesperson.

Handshakes. What's New?

Handshakes

I had some recent conversational failures online, that went roughly like this:

"Hey."
"Hey."
"How are you?"
The end.

At first I got upset at the implicit rudeness of my conversation partner walking away and ignoring the question. But then I decided to get curious instead and posted the exchange on Facebook with a request for feedback. Unsurprisingly I learned more this way.

Some kind friends helped me troubleshoot this, and in the process of figuring out how online conversation differs from in-person conversation, I realized what these things do in live conversation. They act as a kind of implicit communication protocol by which two parties negotiate how much interaction they're willing to have.

Consider this live conversation:

"Hi."
"Hi."
The end.

No mystery here. Two people acknowledged one another's physical presence, and then the interaction ended. This is bare-bones maintenance of your status as persons who can relate to one another socially. There is no intimacy, but at least there is acknowledgement of someone else's existence. A day with "Hi" alone is less lonely than a day without it.

"Hi."
"Hi, how's it going?"
"Can't complain. And you?"
"Life."

This exchange establishes the parties as mutually sympathetic - the kind of people who would ask about each other's emotional state - but still doesn't get to real intimacy. It is basically just a drawn-out version of the example with just "Hi". The exact character of the third and fourth lines doesn't matter much, as there is no real content. For this reason, it isn't particularly rude to leave the question totally unanswered if you're already rounding a corner - but if you're in each other's company for a longer period of time, you're supposed to give at least a pro forma answer.

This kind of thing drives crazy the people who actually want to know how someone is, because the question is so often assumed to be insincere. I'm one of the people driven crazy. But this kind of mutual "bidding up" is important because sometimes people don't want to have a conversation, and if you just launch into your complaint or story or whatever it is, you may end up inadvertently cornering someone who doesn't feel like listening to it.

You could ask them explicitly, but people sometimes feel uncomfortable turning down that kind of request. So the way to open a substantive topic of conversation is to leave a hint and let the other person decide whether to pick it up. So here are some examples of leaving a hint:

"Hi."
"Hi."
"Anything interesting this weekend?"
"Oh, did a few errands, caught up on some reading. See you later."

This is a way to indicate interest in more than just a "Fine, how are you?" response. What happened here is that one party asked about the weekend, hoping to elicit specific information to generate a conversation. The other politely technically answered the question without any real information, declining the opportunity to talk about their life.

"Hi."
"Hi."
"Anything interesting happen over the weekend?"
"Oh, did a few errands, caught up on some reading."
"Ugh, I was going to go to a game, but my basement flooded and I had to take care of that instead."
"That's tough."
"Yeah."
"See you around."

Here, the person who first asked about the weekend didn't get an engaged response, but got enough of a pro forma response to provide cover for an otherwise out of context complaint and bid for sympathy. The other person offered perfunctory sympathy, and ended the conversation.

Here's a way for the recipient of a "How are you?" to make a bid for more conversation:

"Hi."
"Hi."
"How are you?"
"Oh, my basement flooded over the weekend."
"That's tough."
"Yeah."
"See you around."

So the person with the flooded basement provided a socially-appropriate snippet of information - enough to be a recognizable bid for sympathy, but little enough not to force the other person to choose between listening to a long complaint or rudely cutting off the conversation.

Here's what it looks like if the other person accepts the bid:

"Hi."
"Hi."
"How are you?"
"Oh, my basement flooded over the weekend."
"Wow, that's tough. Is the upstairs okay?"
"Yeah, but it's a finished basement so I'm going to have to get a bunch of it redone because of water damage."
"Ooh, that's tough. Hey, if you need a contractor, I had a good experience with mine when I had my kitchen done."
"Thanks, that would be a big help, can you email me their contact info?"

By asking a specific follow-up question the other person indicated that they wanted to hear more about the problem - which gave the person with the flooded basement permission not just to answer the question directly, but to volunteer additional information / complaints.

You can do the same thing with happy events, of course:

"Hi."
"Hi."
"How are you?"
"I'm getting excited for my big California vacation."
"Oh really, where are you going?"
"We're flying out to Los Angeles, and then we're going to spend a few days there but then drive up to San Francisco, spend a day or two in town, then go hiking in the area."
"Cool. I used to live in LA, let me know if you need any recommendations."
"Thanks, I'll come by after lunch?"

So what went wrong online? Here's the conversation again so you don't have to scroll back up:

"Hey."
"Hey."
"How are you?"
The end.

Online, there are no external circumstances that demand a "Hi," such as passing someone (especially someone you know) in the hallway or getting into an elevator.

If you import in-person conversational norms, the "Hi" is redundant - but instead online it can function as a query as to whether the other person is actually "present" and available for conversation. (You don't want to start launching into a conversation just because someone's status reads "available" only to find out they're in the middle of something else and don't have time to read what you wrote.)

Let's say you've mutually said "Hi." If you were conversing in person, the next thing to do would be to query for a basic status update, asking something like, "How are you?". But "Hi" already did the work of "How are you?". Somehow the norm of "How are you?" being a mostly insincere query doesn't get erased, even though "Hi" does its work - so some people think you're being bizarrely redundant. Others might actually tell you how they are.

To be safe, it's best to open with a short question apropos to what you want to talk about - or, since it's costless online and serves the same function as "Hi", just start with "How are you?" as your opener.

What's New?

I recently had occasion to explain to someone how to respond when someone asks "what's new?", and in the process, ended up explaining some stuff I hadn't realized until the moment I tried to explain it. So I figured this might be a high-value thing to explain to others here on the blog.

Of course, sometimes "what's new?" is just part of a passing handshake with no content - I covered that in the first section. But if you're already in a context where you know you're going to be having a conversation, you're supposed to answer the question, otherwise you get conversations like this:

"Hi."
"Hi."
"What's new?"
"Not much. How about you?"
"Can't complain."
Awkward silence.

So I'm talking about cases where you actually have to answer the question.

The problem is that some people, when asked "What's New?", will try to think about when they last met the person asking, and all the events in their life since then, sorted from most to least momentous. This is understandably an overwhelming task.

The trick to responding correctly is to think of your conversational partner's likely motives for asking. They are very unlikely to want a complete list. Nor do they necessarily want to know the thing in your life that happened that's objectively most notable. Think about it - when's the last time you wanted to know those things?

Instead, what's most likely the case is that they want to have a conversation about a topic you are comfortable with, are interested in, and have something to say about. "What's New?" is an offer they are making, to let you pick the life event you most feel like discussing at that time. So for example, if the dog is sick but you'd rather talk about a new book you're reading, you get to talk about the book and you can completely fail to mention the dog. You're not lying, you're answering the question as intended.

Sacrificial Rituals

One way of understanding what things cost is to imagine them as sacrificial rituals.

Blood doping is a sacrificial ritual whereby a drop of blood is permanently sacrificed for a future drop of blood.

Meetings are a sacrificial ritual whereby multiple victims are simultaneously suspended in purgatory for a length of time, to summon a demon with the sum of their knowledge, but intelligence equal only to that of the average member, divided by the number of victims.

Employment is a sacrificial ritual whereby the subject is imperfectly enthralled for the bulk of their day by a demon, and in exchange receives a substance that may itself be sacrificed to enthrall lesser demons through economancy, with powers proportional to the amount of substance used. Many wizards use an even more powerful spell called Full-Time Employment, in which they commit to a long period of enthrallment in exchange for a more than proportionally larger amount of the enthralling substance.

Many other economantic spells have a similar structure to this.