Tag Archives: Effective Altruism

Effective Altruism is not a no-brainer

Ozy writes that Effective Altruism avoids the typical failure modes of people in developed countries intervening in developing ones, because it is evidence-based, humble, and respects the autonomy of the recipients of the intervention: Effective Altruists pay attention to empirical evidence, focus on what's shown to work, change what they're doing when it looks like it's not working, and respect the autonomy of the people for whose benefit they're intervening.

Effective Altruism is not actually safe from the failure modes alluded to:

  • Effective Altruism is not humble. Its narrative in practice relies on claims of outsized benefits in terms of hard-to-measure things like life outcomes, which makes humility quite difficult. Outsized benefits probably require going out on a limb and doing extraordinary things.
  • Effective Altruism is less evidence-based than EAs think. People talk about some EA charities as producing large improvements in life outcomes with certainty, but this is often not the case. And when the facts disagree with our hopes, we seem pretty good at ignoring the facts.
  • Effective Altruism is not about autonomy. Some EA charities are good at respecting the autonomy of beneficiaries, but this is nowhere near central to the movement, and many top charities are not about autonomy at all, and are much better fits for the stereotype of rich Westerners deciding that they know what's best for people in poor countries.
  • Standard failure modes are standard. We need a model of what causes them, and how we're different, in order to be sure we're avoiding them.

Continue reading

My life so far: motives and morals

This is the story of my life, through the lens of motivations, of actions I took to steer myself towards long-term outcomes, of the way the self that stretches out in causal links over long periods of time produced the self I have at this moment. This is only one of the many ways to tell the story of my life. Continue reading

The performance of pain as a political tactic

This post uses activism around factory farming as an example, but I don’t mean to criticize animal welfare activism in particular. It’s just an especially available example to me of a broader pattern. My selection of example is maybe even biased towards better causes - or causes I approve of more - since I tend to associate with people doing things I approve of. Animals on factory farms seem to suffer a lot, this can probably be changed at fairly little cost, and we should do so.

This is also not the opinion of my employer. I want to make that absolutely clear. This is my private opinion, it’s not based on the opinion of anyone else where I work as far as I know, and it’s not indicative of my employer's future actions.

The Personal

Before a recent Effective Altruist event in San Francisco, some potential participants complained about the plan to serve meat. There were two main types of arguments made against serving animal products. One was the utilitarian argument against eating meat. Factory farmed meat, so the argument goes, provides much less enjoyment to the eater than suffering to the eaten. I find this argument plausible, though difficult to judge.

The second argument was that the presence of meat would make vegans (and many people associated with the Effective Altruist movement are vegans) uncomfortable. It would make them feel unwelcome. Some said it would be offensive - that it would make them feel the way a barbecue featuring a roasted two-year-old human would make me feel. This complaint seemed pretty valid to me on the face of it, and presumably the organizers agreed - the food ended up being animal-free. However, something about the argument made and still makes me uneasy.

Continue reading

Want to Summon Less Animal Suffering?

I've been thinking about Julia Galef's chart of how much of an animal's life a unit of food costs. Her summary of her approach:

If you’re bothered by the idea of killing animals for food, then going vegetarian might seem like an obvious response. But if you want your diet to kill as few animals as possible, then eschewing meat is actually quite an indirect, and sometimes even counterproductive, strategy. The question you should be asking yourself about any given food is not, “Is this food animal flesh?” The question you should be asking yourself is, “How many animal lives did this food cost?”

She ends up with this chart:

But as we know, from a hedonic utilitarian perspective, the moral cost of consuming animals is not measured in their deaths, but in the net suffering over their lives - and lives are made of days.

I am not a hedonic utilitarian, but I think that for big-picture issues, utilitarianism is an important heuristic for figuring out what the right answer is, since it at least handles addition and multiplication better than our unaided moral intuitions.

So I looked up how long each of these animals typically lives, to estimate how many days of each animal's life you are responsible for when you consume 1,000 calories of that animal. The Food and Agriculture Organization of the United Nations says that beef cattle are slaughtered at 36 months old (i.e. 1,096 days), and pork hogs at 6 months old (183 days). Wikipedia says that chickens raised for meat typically live about 6 weeks (42 days), and laying hens about 2 years (730 days), and that dairy cows live an average of four years (1,461 days) before dying or being culled and sold as beef.
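To make the arithmetic concrete, here's a minimal sketch of the day-count calculation. The lifespans are the figures cited above; the per-1,000-calorie yields are made-up placeholders, since Julia's underlying numbers aren't reproduced here, so only the structure of the calculation is meaningful.

```python
# Sketch: animal-days of life per 1,000 calories consumed.
# Lifespans are the figures cited above; the yields below are hypothetical
# placeholders, NOT Julia's data - swap in real numbers before relying on this.

lifespan_days = {
    "beef": 1096,    # slaughtered at ~36 months
    "pork": 183,     # slaughtered at ~6 months
    "chicken": 42,   # broilers live ~6 weeks
    "eggs": 730,     # laying hens live ~2 years
    "dairy": 1461,   # dairy cows live ~4 years
}

# Hypothetical: what fraction of one animal's lifetime output 1,000 calories
# of that food uses up.
animals_per_1000_kcal = {
    "beef": 0.002,
    "pork": 0.005,
    "chicken": 0.35,
    "eggs": 0.02,
    "dairy": 0.0005,
}

for food, days in lifespan_days.items():
    animal_days = days * animals_per_1000_kcal[food]
    print(f"{food:8s}{animal_days:7.1f} animal-days per 1,000 calories")
```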

Using those figures yields the following:

[Chart: Days per 1,000 Calories]

Eggs now appear to be even worse than eating chicken flesh, since broiler chickens live very short lives compared to laying hens. Similarly, beef loses its advantage relative to pork, since pigs are smaller but faster-maturing animals. Dairy still seems pretty okay, comparatively.

A few complicating factors need to be explored before this becomes a reliable guide, though.

First, not all animal suffering is of equal intensity. It may actually be net good for well-treated animals to exist. It is still not obvious that factory farmed animals would rather never have been born, or how to count their suffering.

Second, it is not obvious how to attribute responsibility for animals that are not themselves raised to feed humans. For example, Julia divided laying hens' yield in half to account for killed male chicks, but the chicks obviously live almost no time. If you double the yield back to what it would have been before this adjustment, eggs come out about the same as chicken. Similarly, if a dairy cow is killed for beef, it seems like this should lower milk drinkers' and cheese eaters' day counts, since beef eaters contribute to the viability of raising dairy cattle too.

Finally, there may be different elasticities for different animal products; because lowered demand leads to lower prices, which in turn increase demand, the industry might reduce production by less than a full chicken for each chicken you abstain from buying, and this rate may differ by animal.

What am I going to do? I like eggs a lot, and think they're cheap and pretty good for me, and I basically believe Katja Grace's argument that I shouldn't put in a lot of work to change my behavior on this, so I'm going to keep eating them, though I'll continue to preferentially buy free range eggs where available. I have a rationalization that laying hens don't suffer nearly as much as broilers, so I'll flinch a little bit each time I consider eating chicken. I was favoring eating beef over pork due to Julia's analysis, but I will stop doing that now that I know the two are pretty much equivalent in terms of animal-days.

[UPDATE: Brian Tomasik has an unsurprisingly somewhat more thoroughly worked out treatment of this line of thinking here.]

Birthday Wish

Dear Friends,

For those of you who were able to come celebrate my birthday with me, thank you. And for those of you who couldn't make it, you were missed, but not loved the less for it.

On the topic of presents – while none will be turned away, I’m fortunate to mostly have enough things in my life. If you’d like to do something for me to celebrate my birthday, I’m going to ask you to take an action instead. I’m going to ask you to take an action to help others.
Continue reading

Whatever Is Not Best Is Forbidden

At this year's CFAR Alumni Reunion, Leah Libresco hosted a series of short talks on Effective Altruism. She now has a post up on an issue Anna Salamon brought up, the disorienting nature of some EA ideas:

For some people, getting involved in effective altruism is morally disorienting — once you start translating the objects and purchases around you into bednets, should you really have any of them? Should you skip a gruel diet so you can keep your strength up, work as an I-banker, and “earn to give” — funneling your salary into good causes? Ruminating on these questions can lead to analysis paralysis — plus a hefty serving of guilt.

In the midst of our discussion, I came up with a speculative hypothesis about what might drive this kind of reaction to Effective Altruism. While people were sharing stories about their friends, some of their anxious behaviors and thoughts sounded akin to Catholic scrupulosity. One of the more exaggerated examples of scrupulosity is a Catholic who gets into the confessional, lists her sins, receives absolution, and then immediately gets back into line, worried that she did something wrong in her confession, and should now confess that error.

Both of these obviously bear some resemblance to anxiety/OCD, period, but I was interested in speculating a little about why. In Jonathan Haidt’s The Righteous Mind, he lays out a kind of factor analysis of what drives people’s moral intuitions. In his research, some moral foundations (e.g. care/harm) are pretty common to everyone, but some (sanctity/degradation or “purity”) are more predictive in some groups than others.

My weak hypothesis is that effective altruism can feel more like a "purity" decision than other modes of thought people have used to date. You can be inoculated against moral culture shock by previous exposure to other purity-flavored kinds of reasoning (deontology, religion, etc.), but if not (and maybe if you're also predisposed to anxiety), the sudden clarity about a best mode of action - one that is both very important and very unlikely for you to pull off every day - may trigger scrupulosity.

EAs sometimes seem to think of the merit of an action as a binary quality: either it is obligatory, because it has the "bestness" attribute and outweighs the opportunity cost, or it is forbidden, because it doesn't. You're allowed to take care of yourself, and to do the best known thing given imperfect information, but only if it's "best." This framing is exhausting and paralyzing because you're never doing anything positively good; everything is either obligatory or forbidden.

It doesn't have to be that way; we can distinguish between intrapersonal and interpersonal opportunity cost.

I'm not a public utility, I'm a person. If I help others in an inefficient way, or with less of my resources than I could have employed, then I've helped others. If last year I gave to a very efficient charity, but this year I switched to a less efficient charity, then I helped others last year, and helped others again this year. Those are things to celebrate.

But if I pressure or convince someone else to divert their giving from a more efficient to a less efficient charity, or support a cause that itself diverts resources from more efficient causes, then I have actually harmed others on net.

Cross-posted at the Effective Altruism Society of DC blog.

Don't Worry, Be Canny

Oops

My girlfriend is [...] triggered [...] by many discussions of charity – whenever ze hears about it, ze starts worrying ze is a bad person for not donating more money to charity, has a mental breakdown, and usually ends up shaking and crying for a little while.

I just wrote a post on giving efficiently.

I just wrote another asking people to give to CFAR.

And I'm pretty sure I mentioned both to the person in question.

Oops.

Of course I put a disclaimer up front about how I'm not talking about how much to give, just how to use your existing charity budget better. But of course that doesn't matter unless it actually worked - which it likely didn't.

Of course I would have acted differently if I'd had more information up front - but I don't get extra points for ignorance; the expected consequence is just as bad.

I'm going to try and write an antidote to the INFINITE GUILT that can feel like the natural response to Peter Singer style arguments. It probably won't work, but I doubt it will hurt. (If it does, let me know. If there's bad news, I want to hear it!)


You Don't Have To Be a Good Person To Be a Good Person

What are you optimizing for, anyway, being a good person or helping people?

If you care about helping people, then you should think of yourself as a manager, with a team of one. You can't fire this person, or replace them, or transfer them to another department. All you can do is try to motivate them as best you can.

Are you going to try to work this person into the ground, use up 100% of their capacity every day, helping others? No! The mission of the firm is "helping people," but that's not necessarily your employee's personal motivation. If they burn out and lose motivation, you can't replace them - you have to build them back up again. Instead, you should try really, really hard to keep this person happy. This person, of course, being you.

If telling them they should try harder gets them motivated, then fine, do that. But if it doesn't - if it makes them curl up into a ball and be sad instead, then try something else. Ask them if they need to give up on some of the work. Ask them if there's anything they need that they aren't getting. Because if your one employee at the firm of You isn't happy to be there, you'd better figure out how to make that happen. That's your number one job as manager - because without you, you don't have anyone.

That doesn't make the firm any less committed to helping people. As your own manager, you are doing your best to make sure helping-people activities happen, as much and as effectively as possible. But that means treating yourself like a human being, with basic decency and respect for your own needs.


Alright, suppose you do care about "being good." Maybe you believe in virtue ethics or deontology, or have some other values where you have an idea of what a good person is, independent of maximizing a utilitarian consequence.

The same result follows. You should take whatever action maximizes your "goodness," but again, you don't have perfect control over yourself. You're a manager with one permanent employee. There's no point in asking more than they can do, unless they like that (some people say they do) - look for the things that actually do motivate them, and make sure their needs get met. That's the only way to keep them motivated to work towards being a "good person" in the long term; all the burnout considerations still apply.


What Do You Mean By "You"?

There's not really just one you. You have lots of parts! The part that wants to help people is probably distinct from the part that wants to feel like a good person, which is in turn distinct from the part that has needs like physical well-being. You all have to come to some sort of negotiated agreement if you want to actually get anything done.

In my own life, it was a major breakthrough, for example, to realize that my desire to steer the world toward a better state - my desire to purchase "altruons" with actions or dollars - is distinct from my desire to feel good about getting things done and be validated for doing good work that makes a clear difference. Once I realized these were two very different desires, I could at least try to give each part some of what it wanted.

Pretending your political opponents don't exist is not a viable strategy in multiple-person politics. It's no better in single-person politics. You have three options:

1) Crush the opposition.

If exercising is a strong net positive for you, but part of you is whining "I'm tired, I don't wanna," you can just overpower it with willpower.

In politics, there are all sorts of fringe groups that pretty much get totally ignored. For example, legalization of cocaine doesn't seem to have gone anywhere in the US, even though I'm sure there are a few people who feel very, very strongly about it. No concessions whatsoever seem to have been made.

The advantages of this strategy are that you get what you think you want, without giving up anything in exchange, and that you get practice using your willpower (which may get stronger with use).

The disadvantages are that you can't do it without a majority, that some parts of you don't get their needs met, and that if you're tired or distracted the government may be overturned by the part of yourself that has been disenfranchised.

2) Engage in "log-rolling."

Sometimes the part of you that's resisting may want something that's easy to give it. For example, I just finished the first draft of a short story. Prior to that I hadn't finished a work of fiction in at least ten years. I'd started plenty, of course, so clearly there was some internal resistance.

My strategy this time was to get used to writing anything at all, regularly, and "finishing" (i.e. giving up and publishing) things, whether I think they're good or not. Get used to writing at all, and worry about getting good once I've installed the habit of writing.

But I stalled out anyway when writing fiction. Eventually, instead of just fighting myself with willpower when I noticed that I was stalling, I engaged myself in dialogue:

"Why don't you want to keep writing?"

"I can't think of what to write next."

"You literally can't think of what to write? Or you don't like your ideas?"

"I don't like the ideas."

"Why not?"

"Because I think they're bad. I'm trying to write something good, like you asked, but all I have is bad ideas."

"Darn it, self, I didn't ask you to write something good. I asked you to write something at all. Go ahead and write the bad version. We'll worry about writing something good later."

"Oh, is that all you wanted? That's easy!"

And I happily went back to work and kept writing.

Sometimes the best you can do is give everyone just part of what they want, though. There are people who believe that the rich US should give much of its excess wealth to poor people. If you believe this, what's a better strategy? Start a magazine called "America Is Bad And It Should Feel Bad", or try to expand our guest-worker visa program? One, and only one, of these will increase the wealth of poor foreigners at all.

The advantages of this approach are that it probably maximizes your short-term happiness, more of your needs get met, and it saves willpower for situations where this approach is not viable.

3) Lose.

If you can't crush the opposition, and you can't trade with them, then you lose. If you're losing, and you have spent five minutes thinking about it and can't think of either a viable way to win or an idea-generating method you expect to work, then give up. Stop expending willpower on it, accept the bad consequence, and get on with your life.

I'm a bad person? Okay, I'm a bad person, I'd still like to help people, though. What's for lunch?


The Bottom Line


KIRK: I wish I were on a long sea voyage somewhere. Not too much deck tennis, no frantic dancing, and no responsibility. Why me? I look around that Bridge, and I see the men waiting for me to make the next move. And Bones, what if I'm wrong?
MCCOY: Captain, I
KIRK: No, I don't really expect an answer.
MCCOY: But I've got one. Something I seldom say to a customer, Jim. In this galaxy, there's a mathematical probability of three million Earth-type planets. And in all of the universe, three million million galaxies like this. And in all of that, and perhaps more, only one of each of us. Don't destroy the one named Kirk.

CFAR - Second Impression and a Bleg

TLDR: CFAR is awesome because they do things that work. They've promised to do more research into some of the "epistemic rationality" areas that I'd wished to see more progress on. They have matched donations through 31 January 2014; please consider giving if you can.

UPDATE: CFAR now has a post up on Less Wrong explaining what they are working on and why you should give. Here's the official version: http://lesswrong.com/lw/jej/why_cfar/

Second Thoughts on CFAR

You may have seen my first-impression review of the Center For Applied Rationality's November workshop in Ossining, NY. I've had more than a month to think it over, and on balance I'm pretty impressed.

For those of you who don't already know, CFAR (the Center For Applied Rationality) is an organization dedicated to developing training to help people overcome well-studied cognitive biases, and thus become more effective at accomplishing their goals. If you've never heard of CFAR before, you should check out their about page before continuing here.

The first thing you need to understand about CFAR is that they teach stuff that actually works, in a way that works. This is because they have a commitment to testing their beliefs, abandoning ideas that don't work out, and trying new things until they find something that works. As a workshop participant I benefited from that: it was clear that the classes were far better honed, more specific, and more action-oriented than they'd been in 2011.

At the time I expressed some disappointment that a lot of epistemic rationality stuff seemed to have been neglected, postponed, or abandoned. Even though some of those things seem objectively much harder than some of the personal effectiveness training CFAR seems to have focused on, they're potentially high-value in saving the world.

The Good News

After my post, Anna Salamon from CFAR reached out to see if we could figure out some specific things they should try again. I think this was a helpful conversation for both of us. Anna explained to me a few things that helped me understand what CFAR was doing:

1) Sometimes an "epistemic rationality" idea turns into a "personal effectiveness" technique when operationalized.

For example, consider the epistemic rationality idea of beliefs as anticipations, rather than just verbal propositions. The idea is that you should expect to observe something different in the world if a belief is true than if it's false. Sounds pretty obvious, right? But the "Internal Simulator," where you imagine how surprised you will be if your plan doesn't work out, is a non-obvious application of that idea.

2) Some of the rationality techniques I'd internalized from the Sequences at Less Wrong, that seemed obvious to me, are not obvious to a lot of people going to the workshops, so some of the epistemic rationality training going on was invisible to me.

For example, some attendees hadn't yet learned the Bayesian way of thinking about information - that you should have a subjective expectation based on the evidence, even when the evidence isn't conclusive yet, and there are mathematical rules governing how you should treat this partial evidence. So while I didn't get much out of the Bayes segment, that's because I've already learned the thing that class is supposed to teach.

3) CFAR already tried a bunch of stuff.

They did online randomized trials of some epistemic rationality techniques and published the results. They tried a bunch of ways to teach epistemic rationality stuff and found that it didn't work (which is what I'd guessed). They'd found ways to operationalize bits of epistemic rationality.

4) The program is not just the program.

Part of CFAR's mission is the actual rationality-instruction it does. But another part is taking people possibly interested in rationality, and introducing them to the broader community of people interested in existential risk mitigation or other effective altruism, and epistemic rationality. Even if CFAR doesn't know how to teach all these things yet, combining people who know each of these things will produce a community with the virtues the world needs.

In the course of the conversation, Anna asked me why I cared about this so much - what was my "Something to Protect"? This question helped me clarify what I really was worried about.

In my post on effective altruism, I mentioned that a likely extremely high-leverage way to help the world was to help people working on mitigating existential risk. The difficulty is that the magnitude of the risks, and the impact of the mitigation efforts, is really, really hard to assess. An existential risk is not something like malaria, where we can observe how often it occurs. By definition we haven't observed even one event that kills off all humans. So how can we assess the tens or hundreds of potential threats?

A while before, Anna had shared a web applet that let you provide your estimates for, e.g., the probability each year of a given event like global nuclear war or the development of friendly AI, and it would tell you the probability that humanity survived a certain number of years. I tried it out, and in the process, realized that:

Something Is Wrong With My Brain and I Don't Know How to Fix It

For one of these rates, I asked myself the probability in each year, and got back something like 2%.

But then I asked myself the probability in a decade, and got back something like 5%.

A century? 6%.

That can't be right. My intuitions seem obviously inconsistent. But how do I know which one to use, or how to calibrate them?
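Here's the consistency check made explicit: under the simple assumption of a constant annual rate, a 2% yearly probability implies numbers nowhere near 5% per decade or 6% per century.

```python
# If an event has a constant probability p of occurring each year, the
# probability it occurs at least once in n years is 1 - (1 - p) ** n.
p_annual = 0.02

for years in (1, 10, 100):
    p_at_least_once = 1 - (1 - p_annual) ** years
    print(f"{years:3d} years: {p_at_least_once:.1%}")

# Prints 2.0%, 18.3%, and 86.7% -- versus my intuitive 2%, 5%, and 6%.
```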

Eliezer Yudkowsky started writing the Sequences to fix whatever was wrong with people's brains that was stopping them from noticing and doing something about existential risk. But a really big part of this is gaining the epistemic rationality skills necessary to follow highly abstract arguments, modeling events that we have not and cannot observe, without getting caught by shiny but false arguments.

I know my brain is inadequate to the task right now. I read Yudkowsky's arguments in the FOOM Debate and I am convinced. I read Robin Hanson's arguments and am convinced. I read Carl Shulman's arguments and am convinced. But they don't all agree! To save the world effectively - instead of throwing money in the direction of the person who has most recently made a convincing argument - we need to know how to judge these things.

In Which I Extract Valuable Concessions from CFAR in Exchange for Some Money

Then it turned out CFAR was looking for another match-pledger for their upcoming end/beginning of year matched donations fundraiser. Anna suggested that CFAR might be willing to agree to commit to certain epistemic rationality projects in exchange. I was skeptical at first - if CFAR didn't already think these were first-best uses of its money, why should I think I have better information? - but on balance I can't think of a less-bad outcome than what we actually got, because I do think these things are urgently needed, and I think that if CFAR isn't doing them now, it will only get harder to pivot from its current program of almost exclusively teaching instrumental rationality and personal effectiveness.

We hashed out what kinds of programs CFAR would be willing to do on the Epistemic Rationality front, and agreed that these things would get done if enough money is donated to activate my pledge:

  • Participate in Tetlock's Good Judgment Project to learn more about what rationality skills help make good predictions, or would help but are missing.
  • Do three more online randomized experiments to test more epistemic rationality techniques.
  • Do one in-person randomized trial of an epistemic rationality training technique.
  • Run three one-day workshops on epistemic rationality, with a mixture of old and new material, as alpha tests.
  • Bring at least one epistemic rationality technique up to the level where it goes into the full workshops.

And of course CFAR will continue with a lot of the impressive work it's already been doing.

Here are the major "epistemic rationality" areas where I asked them to focus new research:
  • Noticing Confusion (& doing something about it)
  • Noticing rationalization, and doing something to defuse it, e.g. setting up a line of retreat
  • Undistractability/Eye-on-the-ball/12th virtue/"Cut the Enemy"/"Intent to Win" (this kind of straddles epistemic and instrumental rationality AFAICT but distractions usually look like epistemic failures)
  • Being specific / sticking your neck out / being possibly wrong instead of safely vague / feeling an "itch" to get more specific when you're being vague
Here are some advanced areas that seem harder (because I have no idea how to do these things) but would also count:
  • Reasoning about / modeling totally new things. How to pick the right "reference classes."
  • Resolving scope-insensitivity (e.g. should I "shut up and multiply" or "shut up and divide"). Especially about probabilities *over time* (since there are obvious X-Risk applications).
  • How to assimilate book-learning / theoretical knowledge (can be broken down into how to identify credible sources, how to translate theoretical knowledge into procedural knowledge)

If you're anything like me, you think that these programs would be awesome. If so, please consider giving to CFAR, and helping me spend my money to buy this awesomeness.

The Bad News

For some reason, almost one month into their two-month fundraiser, CFAR has no post up on Less Wrong promoting it. As I was writing this post, CFAR had raised less than $10,000 compared to a total of $150,000 in matching funds pledged. (UPDATE: CFAR now has an excellent post up explaining their plan and the fundraiser is doing much better.)

[Image: CFAR fundraiser progress bar]

Huge oopses happen, even to very good, smart organizations, but this is relevant evidence about operational competence. Then again, I kind of have an idiosyncratic axe to grind with respect to CFAR and operational competence, as is obvious if you read my first-impression review. But it's still a bad sign, for an organization working on a problem this hard, to fail basic tests like this. You should probably take that into account.

It's weak evidence, though.

CFAR Changed Me for the Better

The ultimate test of competence for an organization like CFAR is not operational issues like whether people can physically get to and from the workshops or whether anyone knows about the fundraiser. The test is, does CFAR make people who take its training better at life?

In my case there was more than one confounding factor (I'd started working with a life coach a few weeks before and read Scott Adams's new book a few weeks after - Less Wrong review here), but I have already benefited materially from my experience:

I had three separate insights related to how I think about my career that jointly let me actually start to plan and take action. In particular, I stopped letting the best be the enemy of the good, noticed that my goals can be of different kinds, and figured out which specific component of my uncertainty was the big scary one and took actual steps to start resolving it.

A couple of things in my life improved immediately as if by magic. I started working out every morning, for example, for the first time since college. I'm still not sure how that happened. I didn't consciously expend any willpower.

Several other recent improvements in my life of comparable size are partially attributable to CFAR as well. (The other main contributors are my excellent life coach, Scott Adams's book, and the cumulative effect of everything else I've done, seen, heard, and read.)

Several of the classes that seemed hard to use at the time became obviously useful in hindsight. For example, I started noticing things where a periodic "Strategic Review" would be helpful.

In addition, I learned how to be "greedy" about asking other people questions and for advice whenever I thought it would help. This has been tremendously useful already.

I'll end the way I began, with a summary:

The problems humanity is facing in this century are unprecedented in both severity and difficulty. To meet these challenges, we need people who are rational enough to sanely evaluate the risks and possible solutions, effective enough to get something done, and good enough to take personal responsibility for making sure something happens. CFAR is trying to create a community of such people. Almost no one else is even trying.

CFAR is awesome because they do things that work. They've promised to do more research into some of the "epistemic rationality" areas that I'd wished to see more progress on. They have a fundraiser with matched donations through 31 January 2014; please consider giving if you can.

Give Smart, Help More

This post is about helping people more effectively. I'm not going to try to pitch you on giving more. I'm going to try to convince you to give smarter.

There's a summary at the bottom if you don't feel like reading the whole thing.

Do you want to help people? At least a little bit?

Imagine that there is a switch in front of you, in the middle position. It can only be flipped once. Flip it up, and one person somewhere on the other side of the world is cured of a deadly disease. Flip it down, and ten people are cured. You don't know any of these people personally; they are randomly selected. You will never see their faces or hear their stories. And this isn't a trick question - they're not all secretly Pol Pot or something.

What do you do? Do you flip it up, flip it down, or leave it as it is? Make sure you think of the answer before you look ahead.

...

...

...

...

...

...

...

...

...

...

I'm going to assume you flipped the switch down. If you didn't, this post is not for you.

If you did, then why did you do that? Not because down is easier or more pleasant. Because it helps more people, and costs you nothing more. So if you made that choice and did it for that reason, you want to help people. Even people you don't know and will never meet. This might not be a preference that is particularly salient or relevant in your life right now, but when you chose between a world where more people are helped and a world where fewer people are helped, you chose the one where more people are helped. To summarize:

We agree that it is good to help people, and better to help more people, even if they're strangers or foreigners.

You probably already donate to charity. Why?

Most Americans give at least some money to charity. In 2010, in a Pew Research Center study, 95% of Americans said that they gave to a charitable organization specifically to help with the earthquake in Haiti. So when you add people who give, but didn't give for that, you end up with nearly everyone. And if you look at tax returns, the IRS reports that in 2011, out of 46 million people who itemized deductions, 38 million listed charitable contributions. That's 82%. So either way, most people give. Which means you probably do. (I'm assuming that most of my readers are in countries sufficiently similar to America for the conclusion to transfer.)

Why do you give to charity? That's actually a complicated question. People give for lots of reasons. You might be motivated by the simple fact that people will be helped, yes. But there are lots of other valid reasons to give to charity. You could want to support a cause that someone you care about is involved in, like sponsoring someone's charitable walk. You could want to express your solidarity with and membership in an institution like a church or community center. You could just value the warm fuzzy feeling that comes along with the stories you hear about what the charity does.

So here are some reasons why we give:

  • Warm fuzzies.
  • Group affiliation.
  • Supporting friends.
  • The simple preference for people to be helped.

All of these things are okay reasons to give, and I'm going to repeat that later for emphasis. I'm going to say some things that sound like I'm hating on warm fuzzies, but I'm really not.  To be clear: Warm fuzzies are nice! They feel good! You should do things that feel good! They just shouldn't be confused with other things that are good for different reasons.

The Power of Smarter Giving

I'm going to make up some numbers here.

Imagine three people: Kelsey, Meade, and Shun. They have the same job, which they all enjoy, and each makes $50,000 per year. They each give $1,000 per year to charity, 2% of their income. But they want to help people more.

Let's say that they give to a charity that tries to save lives by providing health care to people who can't access it. Each of their $1,000 donations purchases interventions that collectively add one year to one person's life, on average. That's actually a pretty good deal already - I'd certainly buy a year of extra life for myself, for that kind of money. I'm going to call that "helping one person," though we understand that it's just an average.

But now they each want to help more people. Kelsey decides to just give more, by cutting back on other expenses. Less savings, more meals at home, shorter vacations. Kelsey's able to scrape together an extra $1,000, so Kelsey's now giving $2,000, adding a year to two people's lives on average. On the other hand, Kelsey has fewer of other enjoyable things.

Meade decides, instead of cutting back on expenses, to put in extra hours to get promoted to a job that's more stressful but pays better. After six months of this, let's say Meade is successful, and gets a 10% pay bump. Then Meade gives all that extra money to charity. That's $6,000 now that Meade is giving, adding on average a year of life each to 6 people.

Now how about Shun? Shun is lazy, like me. Shun decides that they don't want to work hard to help people. But Shun is willing to do 3 hours of research online, to find the best way to save lives. Shun finds a charity where outside researchers agree that a $1,000 donation on average adds a year of life to each of 10 people. Maybe because they focus on the cheapest treatments, like vaccines. Maybe because they operate in poor countries where expenses are lower, and there's more low-hanging health care fruit. Either way, Shun spent 3 hours doing research, and now Shun's $1,000 per year adds a year of life to each of 10 people.

To summarize: Kelsey is scraping by to give $2,000 to give 2 people an extra year of life. Meade put in six months' extra hours at work - and has a more stressful job - and their $6,000 gives 6 people an extra year of life. Shun spent just 3 hours doing research on the internet, still has the job Shun loves and gets to live the way Shun likes, and their $1,000 now gives 10 people an extra year of life each.

Kelsey - 2

Meade - 6

Shun - 10

Who would you rather be?

I don't want to deprecate any of these strategies. Sometimes your situation is different. Kelsey's a great person for trying to help people. There are a lot of reasons that Meade's strategy could be better than it sounds. But Shun went for the low-hanging fruit, and helped the most people while sacrificing the least.

If my numbers are realistic, then researching different charities' effectiveness is an incredibly cheap way to help more people.

Why is this the case? Because in the numbers I made up, there was an order of magnitude effectiveness difference between two charities. One charity helped ten times as many people per dollar as another did.

This is sometimes true in the real world. Some charitable activities work better than others.

GiveWell, an organization that evaluates how effectively charities produce positive outcomes, thinks that there is a difference in effectiveness between two of their top-rated charities by a factor of between 2 and 3.

To repeat: one of GiveWell's top-rated charities is 2-3 times as effective as another. GiveWell only has three top-rated charities.

Then think about how different these numbers must be, on average, from the non-top-rated charities - or unrateable ones that don't try to measure outcomes at all. So a factor of 10 isn't unrealistic - but even if it's a factor of 2, that's a better return on time invested than Meade got - they might have worked more than three extra hours every week!
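To put rough numbers on that return-on-time comparison, here's a sketch using the story's made-up figures, plus my own guess that Meade's promotion took five extra hours a week for six months; nothing below is real data.

```python
# Extra life-years bought per extra hour spent, using the story's made-up
# numbers. Meade's hours are a guess (5 hrs/week for 26 weeks).
baseline = 1                # life-years the original $1,000 bought

meade_gain = 6 - baseline   # extra life-years from the $6,000
meade_hours = 5 * 26        # hypothetical extra hours worked

shun_gain = 10 - baseline   # extra life-years from the better charity
shun_hours = 3              # hours of charity research

print(f"Meade: {meade_gain / meade_hours:.3f} life-years per hour")
print(f"Shun:  {shun_gain / shun_hours:.1f} life-years per hour")
# Meade: 0.038; Shun: 3.0 -- roughly two orders of magnitude apart.
```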

How do I do the research?

Was Shun's three hours of research a realistic estimate? It wouldn't be if nobody were already out there helping you - but fortunately there are now several organizations designed to help you figure out where your money does the most good.

The most famous one is probably still Charity Navigator. Charity Navigator basically reports on charities' finances, which is helpful in figuring out whether your money is going toward the programs you think it is, or whether it is going toward executives' paychecks and fancy gala fundraisers. Charity Navigator is a good first step, if all you want to do is weed out charities that are literally scams.

But we should be more ambitious. Remember, we don't just want to be not cheated. We're happiest if people actually get helped. And to know that, we don't just need to know how much program your money buys - we need to know if that program works.

GiveWell, AidGrade, Giving What We Can, and The Life You Can Save are all organizations that try to evaluate charities not just by how much work they do, but by whether they can show that their work improves outcomes in some measurable way. They all seem to have mutual respect for one another, and I know there have been some friendly debates between GWWC and GiveWell on methodology.

If you want to search for more stuff on this, a good internet search term is "Effective Altruism".

If you really, really don't feel like spending a few hours doing research, you'll do fine giving to one of GiveWell's top 3.

Existential Risk: A Special Case

I want to put in a special plug here for a category of charity that gets neglected, where I think you can get a lot of bang for your buck in terms of results, and that's charities that try to mitigate existential risk.

An existential risk is something that might be unlikely - or hard to estimate - but if it happens, it would wipe out humanity. Even a small reduction in the chance of an extinction event could help a lot of people - because you'd be saving not only people at the time, but future generations. Giving What We Can has recently acknowledged this as a promising area for high-impact giving, and GiveWell's shown some interest as well.

Examples of existential risk are:

  • Nuclear Weapons
  • Biotechnology
  • Nanotechnology
  • Asteroids
  • Artificial Intelligence

Organizations that focus on existential risk include:

  • The Future of Humanity Institute (FHI) takes an academic approach, mostly focused on raising awareness and assessing risks.
  • The Lifeboat Foundation - I actually used to give to them, but I'm not sure quite what they really do, so I put that on hold - I may pick it up later if I learn something encouraging.
  • The Machine Intelligence Research Institute (MIRI) is working on the specific problem of avoiding an unfriendly intelligence explosion - by building friendly artificial intelligence. They believe this will also help solve many other existential risks.

In particular, MIRI is holding a fundraiser where new large donors (someone who has not yet given a total of $5,000 to MIRI) who make a donation of $5,000 or more, are matched 3:1 on the whole donation. Please consider it if you think MIRI's work is important. [UPDATE: This was a success and is now over.]

But Didn't You Say Meade Had a Good Strategy Too?

Yes. If you are super serious about helping people a lot, you might want to consider making career choices partly on that basis. I don't have a lot to say about this personally, but 80,000 Hours specializes in helping people with this kind of thing.

One thing I can add is that it's easy to get intimidated by the difficulty of the optimal career choice for helping and thereby avoid making a knowably better choice. Don't do that. Better is better. Don't worry about making the perfect choice - you can always change your mind later when you think things through more.

Leveraged Giving and Meta-Charity

When we talk about leverage in giving, people usually take it literally and think about matched donations. Matched donations are fine - they double effectiveness, and that's great - but a factor of 5-10 from research will be more important than a factor of 2 from matched giving.

But there's another kind of leverage - giving in ways that increase the effectiveness or quality of others' giving. For example, you could give to GWWC, AidGrade, or GiveWell, and this would mean that everyone else who gives based on their recommendations makes a slightly more effective choice - or that they're able to convince more people to give at all. You could probably do a quick back-of-the-envelope Fermi estimate, like the sketch below, to figure out what the impact is - whether there's a multiplier effect or not. Giving What We Can actually gives some numbers themselves - and I know that if GiveWell thinks they can't use the money, they'll just pass it along to their top-rated charities.
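Here's the shape of that estimate, with entirely invented inputs; the point is the structure of the multiplier, not the specific numbers.

```python
# Toy Fermi estimate of meta-charity leverage: does $1 given to a charity
# evaluator produce more than $1 of direct-giving value? All inputs invented.
evaluator_budget = 1_000_000   # hypothetical annual budget, $
money_moved = 10_000_000       # hypothetical donations following its advice, $
effectiveness_gain = 0.5       # hypothetical: recommended charities do 50%
                               # more good per dollar than the donors'
                               # counterfactual choices

multiplier = (money_moved * effectiveness_gain) / evaluator_budget
print(f"$1 to the evaluator ~= ${multiplier:.2f} given directly to a top charity")
# Above 1, the meta-donation wins; below 1, you'd do better giving directly.
```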

There's also a special case of leverage, and that's the Center For Applied Rationality, or CFAR. CFAR is trying to help people think better and more clearly, and act more effectively to accomplish their goals. A large part of their motivation for this is to create a large community of people interested in effective altruism, with the skills to recognize the high-impact causes, and the personal effectiveness to actually do something to help. If your lifetime donations just create one highly motivated person, then you've "broken even" - in other words, you've helped at least as many people as you would have by giving directly. But right now it's a much more leveraged opportunity: CFAR plans to eventually become self-sustaining, but for the next few years they'll probably still depend on donations to supplement any fees they can charge for their training.

This year I'm part of the group matching donations for CFAR's end-of-year fundraiser. If you want to spend some of my money to try to build a community of true guardians of humanity, please do! [UPDATE: This fundraiser also concluded, successfully.]

So I should give all my charity budget to the one most effective charity?

Probably not.

Now, that's not because of "diversification". The National Center for Charitable Statistics (NCCS) estimates that there are about half a million charities in the US alone. That's plenty of diversity - I don't think anything's at risk of being neglected just because you give your whole charity budget to the best one.

The reason you don't want to give everything to the charity you think helps the most is those same four reasons people give:

  • Warm fuzzies.
  • Group affiliation.
  • Supporting friends.
  • The simple preference for people to be helped.

And there are probably lots of others, but for now I'll just group them all together as "warm fuzzies" for the sake of brevity.

If you force yourself to pretend that you only care about helping, you'll feel bad about missing out on your warm fuzzies, and eventually you'll find an excuse to abandon the strategy.

I want to be clear that all of these are okay reasons to give! Some people, when they hear this argument, assume that it means, "Some of my donations are motivated by my selfish desire for warm fuzzies. This is wrong! I should just give to charity to help people. I shouldn't spend any charity money on feeling good about myself."

You are a human being and you deserve to be happy. Also you probably won't stick with a strategy that reliably makes you feel bad. So unfortunately, the exact optimal helping-strategy is unlikely to work for you (though if it does, that's fine too).

Fortunately, we can get most of the way to a maximum-help strategy without giving up on your other motivations, because of:

One Weird Trick to Get Warm Fuzzies on the Cheap

The human brain has a defect called scope insensitivity (but don't click through until you read this section, there's a spoiler). It basically means that the part of us that has feelings doesn't understand about very large or very small quantities. So while you intellectually might have a preference for helping more people over fewer, you'll get the same feel-good hit from helping one person and hearing their touching story, as you would from helping a group of ten.

In a classic experiment, researchers told people, assigned randomly into three groups, about an ecological problem that was going to kill some birds, but could be fixed. They asked participants how much they would personally be willing to pay to fix the problem. The only thing they changed from group to group, was how many birds would be affected.

One group was told 2,000 birds were affected, and they were willing to pay on average $80 each. The other two groups were told 20,000 and 200,000 birds were affected, respectively. How much do you think they were willing to pay? Try to actually guess before you look at the answer.

...

...

...

...

...

...

...

...

...

...

Here's how much the average person in each group was willing to pay:

2,000 birds: $80

20,000 birds: $78

200,000 birds: $88

So, basically the same, with some random variation.

Why do we care about this? Because it suggests that you should be able to get your warm fuzzies with a very small donation. Your emotions don't care how much you helped - they care whether you helped at all.

So you should consider setting aside a small portion of your charity budget for the year, and spreading it equally between everything it seems like a good idea to give to. It probably wouldn't cost you much to literally not say no to anything - just give every cause you like a dollar! You might even get more good vibes this way than before, when you were trying to accomplish helping and warm fuzzies with the exact same donations.

Then give the rest to the charity you think is most effective.

Summary:

You probably already want to help people you don't know, and give to charity. Researching charities' effectiveness in producing outcomes is a cheap way of making your donation help more people.

These organizations can help with your research:

  • GiveWell
  • AidGrade
  • Giving What We Can
  • The Life You Can Save

Because of scope insensitivity, you should try to get your warm fuzzies and your effective helping done separately: designate a small portion of your charity budget for warm fuzzies, and give a tiny bit to every cause you'll feel good about.

You may also be interested in some higher-leverage options. CFAR is trying to create more people who care about effective altruism and are effective enough to make a difference, and they have a matched donations fundraiser going on right now, which I'm one of the matchers for. [UPDATE: This fundraiser was successful, and is now over.]

Existential Risk is another field where beneficial effects are underestimated and you should consider giving, especially to FHI or MIRI.

MIRI in particular has a matched donations fundraiser going on now, where new large donors (>$5,000) will be matched at a 3:1 rate. [UPDATE: This fundraiser was successful, and is now over.]

Cross-posted at the Effective Altruism Society of DC blog.