Tag Archives: Effective Altruism

An OpenAI board seat is surprisingly expensive

The Open Philanthropy Project recently bought a seat on the board of the billion-dollar nonprofit AI research organization OpenAI for $30 million. Some people have said that this was surprisingly cheap, because the price in dollars was such a low share of OpenAI's eventual endowment: 3%.

To the contrary, this seat on OpenAI's board is very expensive, not because the nominal price is high, but precisely because it is so low.

If OpenAI hasn’t extracted a meaningful-to-it amount of money, then it follows that it is getting something other than money out of the deal. The obvious thing it is getting is buy-in for OpenAI as an AI safety and capacity venture. In exchange for a board seat, the Open Philanthropy Project is aligning itself socially with OpenAI, by taking the position of a material supporter of the project. The important thing is mutual validation, and a nominal donation just large enough to neg the other AI safety organizations supported by the Open Philanthropy Project is simply a customary part of the ritual.

By my count, the grant is larger than all the Open Philanthropy Project's other AI safety grants combined.

(Cross-posted at LessWrong.)

Against responsibility

I am surrounded by well-meaning people trying to take responsibility for the future of the universe. I think that this attitude – prominent among Effective Altruists – is causing great harm. I noticed this as part of a broader change in outlook, which I've been trying to describe on this blog in manageable pieces (and sometimes failing at the "manageable" part).

I'm going to try to contextualize this by outlining the structure of my overall argument.

Why I am worried

Effective Altruists often say they're motivated by utilitarianism. At its best, this leads to things like Katja Grace's excellent analysis of when to be a vegetarian. We need more of this kind of principled reasoning about tradeoffs.

At its worst, this leads to some people angsting over whether it's ethical to spend money on a cup of coffee when they might have saved a life, and others using the greater good as license to say things that are not quite true, socially pressure others into bearing inappropriate burdens, and make ever-increasing claims on resources without a correspondingly strong verified track record of improving people's lives. I claim that these actions are not in fact morally correct, and that people keep winding up endorsing those conclusions because they are using the wrong cognitive approximations to reason about morality.

Summary of the argument

  1. When people take responsibility for something, they try to control it. So, universal responsibility implies an attempt at universal control.
  2. Maximizing control has destructive effects:
    • An adversarial stance towards other agents.
    • Decision paralysis.
  3. These failures are not accidental, but baked into the structure of control-seeking. We need a practical moral philosophy to describe strategies that generalize better, and that benefit from the existence of other benevolent agents rather than treating them primarily as threats.

Continue reading

Effective Altruism is not a no-brainer

Ozy writes that Effective Altruism avoids the typical failure modes of people in developed countries intervening in developing ones, because it is evidence-based, humble, and respects the autonomy of the recipients of the intervention. The basic reasoning is that Effective Altruists pay attention to empirical evidence, focus on what's shown to work, change what they're doing when it looks like it's not working, and respect the autonomy of the people for whose benefit they're intervening.

Effective Altruism is not actually safe from the failure modes alluded to:

    • Effective Altruism is not humble. Its narrative in practice relies on claims of outsized benefits in terms of hard-to-measure things like life outcomes, which makes humility quite difficult. Outsized benefits probably require going out on a limb and doing extraordinary things.
    • Effective Altruism is less evidence based than EAs think. People talk about some EA charities as producing large improvements in life outcomes with certainty, but this is often not happening. And when the facts disagree with our hopes, we seem pretty good at ignoring the facts.
    • Effective Altruism is not about autonomy. Some EA charities are good at respecting the autonomy of beneficiaries, but this is nowhere near central to the movement, and many top charities are not about autonomy at all, and are much better fits for the stereotype of rich Westerners deciding that they know what's best for people in poor countries.
    • Standard failure modes are standard. We need a model of what causes them, and how we're different, in order to be sure we're avoiding them.

Continue reading

My life so far: motives and morals

This is the story of my life, through the lens of motivations, of actions I took to steer myself towards long-term outcomes, of the way the self that stretches out in causal links over long periods of time produced the self I have at this moment. This is only one of the many ways to tell the story of my life. Continue reading

The performance of pain as a political tactic

This post uses activism around factory farming as an example, but I don’t mean to criticize animal welfare activism in particular. It’s just an especially available example to me of a broader pattern. My selection of example is maybe even biased towards better causes - or causes I approve of more - since I tend to associate with people doing things I approve of. Animals on factory farms seem to suffer a lot, this can probably be changed at fairly little cost, and we should do so.

This is also not the opinion of my employer. I want to make that absolutely clear. This is my private opinion, it’s not based on the opinion of anyone else where I work as far as I know, and it’s not indicative of my employer's future actions.

The Personal

Before a recent Effective Altruist event in San Francisco, some potential participants complained about the plan to serve meat. There were two main types of arguments made against serving animal products. One was the utilitarian argument against eating meat. Factory farmed meat, so the argument goes, provides much less enjoyment to the eater than suffering to the eaten. I find this argument plausible, though difficult to judge.

The second argument was that the presence of meat would make vegans (and many people associated with the Effective Altruist movement are vegans) uncomfortable. It would make them feel unwelcome. Some said it would be offensive, it would make them feel the way a barbecue featuring roasted two-year-old human would make me feel. This complaint seemed pretty valid to me on the face of it, and presumably the organizers agreed - the food ended up being animal-free. However, something about the argument made and still makes me uneasy.

Continue reading

Want to Summon Less Animal Suffering?

I've been thinking about Julia Galef's chart of how much of an animal's life a unit of food costs. Her summary of her approach:

If you’re bothered by the idea of killing animals for food, then going vegetarian might seem like an obvious response. But if you want your diet to kill as few animals as possible, then eschewing meat is actually quite an indirect, and sometimes even counterproductive, strategy. The question you should be asking yourself about any given food is not, “Is this food animal flesh?” The question you should be asking yourself is, “How many animal lives did this food cost?”

She ends up with this chart:

But as we know, from a hedonic utilitarian perspective, the moral cost of consuming animals is not measured in their deaths, but in the net suffering in their lives - and lives are made of days.

I am not a hedonic utilitarian, but I think that for big-picture issues, utilitarianism is an important heuristic for figuring out what the right answer is, since at least it handles addition and multiplication better than our unaided moral intuitions.

So I looked up how long each of these animals typically lives, to estimate how many days of each animal's life you are responsible for when you consume 1,000 calories of that animal. The Food and Agriculture Organization of the United Nations says that beef cattle are slaughtered at 36 months old (i.e. 1,096 days), and pork hogs at 6 months old (183 days). Wikipedia says that chickens raised for meat typically live about 6 weeks (42 days), and laying hens about 2 years (730 days), and that dairy cows live an average of four years (1,461 days) before dying or being culled and sold as beef.

Using those figures yields the following:

[Chart: Days per 1,000 Calories by animal product]
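The arithmetic behind the chart is simple enough to sketch. Below is a minimal, hedged version: the lifespans are the FAO and Wikipedia figures above, but the animals-per-1,000-calories inputs are invented placeholders standing in for Julia's chart values, which aren't reproduced here.

```python
# Days of animal life per 1,000 calories: lifespan in days multiplied by
# the number of animals consumed per 1,000 calories.
# Lifespans are the FAO/Wikipedia figures cited above.
lifespan_days = {
    "beef": 1096,     # beef cattle, slaughtered at ~36 months
    "pork": 183,      # hogs, ~6 months
    "chicken": 42,    # broilers, ~6 weeks
    "eggs": 730,      # laying hens, ~2 years
    "dairy": 1461,    # dairy cows, ~4 years
}

# PLACEHOLDER yields, NOT Julia's actual chart values.
animals_per_1000_kcal = {
    "beef": 0.0011, "pork": 0.0015, "chicken": 0.15,
    "eggs": 0.0085, "dairy": 0.0002,
}

for food in lifespan_days:
    days = lifespan_days[food] * animals_per_1000_kcal[food]
    print(f"{food:8s}{days:7.2f} animal-days per 1,000 kcal")
```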
Eggs now appear to be even worse than eating chicken flesh, since broiler chickens live very short lives compared to laying hens. Similarly, beef loses its advantage relative to pork, since pork comes from smaller but faster-maturing animals. Dairy still seems pretty okay, comparatively.

A few complicating factors need to be explored before this becomes a reliable guide, though.

First, not all animal suffering is of equal intensity. It may actually be net good for well-treated animals to exist. It is still not obvious that factory farmed animals would rather never have been born, or how to count their suffering.

Second, it is not obvious how to attribute responsibility for animals that are not themselves raised to feed humans. For example, Julia divided laying hens' yield in half to account for killed male chicks, but the chicks obviously live almost no time. If you double the yield back to what it would have been before this adjustment, eggs come out about the same as chicken. Similarly, if a dairy cow is killed for beef, it seems like this should lower milk drinkers' and cheese eaters' day counts, since beef eaters contribute to the viability of raising dairy cattle too.

Finally, elasticity may differ between animal products: lowered demand leads to lowered prices, which in turn increase demand, so the industry might reduce production by less than one chicken for each chicken you abstain from buying, and this pass-through rate may vary by animal.

What am I going to do? I like eggs a lot, and think they're cheap and pretty good for me, and I basically believe Katja Grace's argument that I shouldn't put in a lot of work to change my behavior on this, so I'm going to keep eating them, though I'll continue to preferentially buy free range eggs where available. I have a rationalization that laying hens don't suffer nearly as much as broilers, so I'll flinch a little bit each time I consider eating chicken. I was favoring eating beef over pork due to Julia's analysis, but I will stop doing that now that I know the two are pretty much equivalent in terms of animal-days.

[UPDATE: Brian Tomasik has an unsurprisingly somewhat more thoroughly worked out treatment of this line of thinking here.]

Birthday Wish

Dear Friends,

For those of you who were able to come celebrate my birthday with me, thank you. And for those of you who couldn't make it, you were missed, but not loved the less for it.

On the topic of presents – while none will be turned away, I’m fortunate to mostly have enough things in my life. If you’d like to do something for me to celebrate my birthday, I’m going to ask you to take an action instead. I’m going to ask you to take an action to help others.
Continue reading

Whatever Is Not Best Is Forbidden

At this year's CFAR Alumni Reunion, Leah Libresco hosted a series of short talks on Effective Altruism. She now has a post up on an issue Anna Salamon brought up, the disorienting nature of some EA ideas:

For some people, getting involved in effective altruism is morally disorienting — once you start translating the objects and purchases around you into bednets, should you really have any of them? Should you skip a gruel diet so you can keep your strength up, work as an I-banker, and “earn to give” — funneling your salary into good causes? Ruminating on these questions can lead to analysis paralysis — plus a hefty serving of guilt.

In the midst of our discussion, I came up with a speculative hypothesis about what might drive this kind of reaction to Effective Altruism. While people were sharing stories about their friends, some of their anxious behaviors and thoughts sounded akin to Catholic scrupulosity. One of the more exaggerated examples of scrupulosity is a Catholic who gets into the confessional, lists her sins, receives absolution, and then immediately gets back into line, worried that she did something wrong in her confession, and should now confess that error.

Both of these obviously bear some resemblance to anxiety/OCD, period, but I was interested in speculating a little about why. In Jonathan Haidt’s The Righteous Mind, he lays out a kind of factor analysis of what drives people’s moral intuitions. In his research, some moral foundations (e.g. care/harm) are pretty common to everyone, but some (sanctity/degradation or “purity”) are more predictive in some groups than others.

My weak hypothesis is that effective altruism can feel more like a "purity" decision than other modes of thought people have used to date. You can be inoculated against moral culture shock by previous exposure to other purity-flavored kinds of reasoning (deontology, religion, etc.), but, if not (and maybe if you're also predisposed to anxiety), the sudden clarity about a best mode of action, one that is both very important and very unlikely for you to pull off every day, may trigger scrupulosity.

EAs sometimes seem to think of the merit of an action as a binary quality, where either it is obligatory because it has the "bestness" attribute and outweighs the opportunity cost, or it is forbidden because it doesn't. You're allowed to take care of yourself, and do the best known thing given imperfect information, but only if it's "best." This framing is exhausting and paralyzing because you're never doing anything positively good: everything is either obligatory or forbidden.

It doesn't have to be that way; we can distinguish between intrapersonal and interpersonal opportunity cost.

I'm not a public utility, I'm a person. If I help others in an inefficient way, or with less of my resources than I could have employed, then I've helped others. If last year I gave to a very efficient charity, but this year I switched to a less efficient charity, then I helped others last year, and helped others again this year. Those are things to celebrate.

But if I pressure or convince someone else to divert their giving from a more efficient to a less efficient charity, or support a cause that itself diverts resources from more efficient causes, then I have actually harmed others on net.

Cross-posted at the Effective Altruism Society of DC blog.

Don't Worry, Be Canny

Oops

My girlfriend is [...] triggered [...] by many discussions of charity – whenever ze hears about it, ze starts worrying ze is a bad person for not donating more money to charity, has a mental breakdown, and usually ends up shaking and crying for a little while.

I just wrote a post on giving efficiently.

I just wrote another asking people to give to CFAR.

And I'm pretty sure I mentioned both to the person in question.

Oops.

Of course I put a disclaimer up front about how I'm not talking about how much to give, just how to use your existing charity budget better. But of course that doesn't matter unless it actually worked - which it likely didn't.

Of course I would have acted differently if I'd had more information up front - but I don't get extra points for ignorance; the expected consequence is just as bad.

I'm going to try and write an antidote to the INFINITE GUILT that can feel like the natural response to Peter Singer style arguments. It probably won't work, but I doubt it will hurt. (If it does, let me know. If there's bad news, I want to hear it!)


You Don't Have To Be a Good Person To Be a Good Person

What are you optimizing for, anyway, being a good person or helping people?

If you care about helping people, then you should think of yourself as a manager, with a team of one. You can't fire this person, or replace them, or transfer them to another department. All you can do is try to motivate them as best you can.

Are you going to try to work this person into the ground, use up 100% of their capacity every day, helping others? No! The mission of the firm is "helping people," but that's not necessarily your employee's personal motivation. If they burn out and lose motivation, you can't replace them - you have to build them back up again. Instead, you should try really, really hard to keep this person happy. This person, of course, being you.

If telling them they should try harder gets them motivated, then fine, do that. But if it doesn't - if it makes them curl up into a ball and be sad instead, then try something else. Ask them if they need to give up on some of the work. Ask them if there's anything they need that they aren't getting. Because if your one employee at the firm of You isn't happy to be there, you'd better figure out how to make that happen. That's your number one job as manager - because without you, you don't have anyone.

That doesn't make the firm any less committed to helping people. As your own manager, you are doing your best to make sure helping-people activities happen, as much and as effectively as possible. But that means treating yourself like a human being, with basic decency and respect for your own needs.


Alright, suppose you do care about "being good." Maybe you believe in virtue ethics or deontology or have some other values where you have an idea of what a good person is, independent of maximizing a utilitarian consequence.

The same result follows. You should take whatever action maximizes your "goodness," but again, you don't have perfect control over yourself. You're a manager with one permanent employee. There's no point in asking more than they can do, unless they like that (some people say they do) - look for the things that actually do motivate them, and make sure their needs get met. That's the only way to keep them motivated to work towards being a "good person" in the long term; all the burnout considerations still apply.


What Do You Mean By "You"?

There's not really just one you. You have lots of parts! The part that wants to help people is probably distinct from the part that wants to feel like a good person, which is in turn distinct from the part that has needs like physical well-being. You all have to come to some sort of negotiated agreement if you want to actually get anything done.

In my own life, it was a major breakthrough, for example, to realize that my desire to steer the world toward a better state - my desire to purchase "altruons" with actions or dollars - is distinct from my desire to feel good about getting things done and be validated for doing good work that makes a clear difference. Once I realized these were two very different desires, I could at least try to give each part some of what it wanted.

Pretending your political opponents don't exist is not a viable strategy in multiple-person politics. It's no better in single-person politics. You have three options:

1) Crush the opposition.

If exercising is a strong net positive for you, but part of you is whining "I'm tired, I don't wanna," you can just overpower it with willpower.

In politics, there are all sorts of fringe groups that pretty much get totally ignored. For example, legalization of cocaine doesn't seem to have gone anywhere in the US, even though I'm sure there are a few people who feel very, very strongly about it. No concessions whatsoever seem to have been made.

The advantages of this strategy are that you get what you think you want, without giving up anything in exchange, and get practice using your willpower (which may get stronger with use).

The disadvantages are that you can't do it without a majority, that some parts of you don't get their needs met, and that if you're tired or distracted the government may be overturned by the part of yourself that has been disenfranchised.

2) Engage in "log-rolling."

Sometimes the part of you that's resisting may want something that's easy to give it. For example, I just finished the first draft of a short story. Prior to that I hadn't finished a work of fiction in at least ten years. I'd started plenty, of course, so clearly there was some internal resistance.

My strategy this time was to get used to writing anything at all, regularly, and "finishing" (i.e. giving up and publishing) things, whether I think they're good or not. Get used to writing at all, and worry about getting good once I've installed the habit of writing.

But I stalled out anyway when writing fiction. Eventually, instead of just fighting myself with willpower when I noticed that I was stalling, I engaged myself in dialogue:

"Why don't you want to keep writing?"

"I can't think of what to write next."

"You literally can't think of what to write? Or you don't like your ideas?"

"I don't like the ideas."

"Why not?"

"Because I think they're bad. I'm trying to write something good, like you asked, but all I have is bad ideas."

"Darn it, self, I didn't ask you to write something good. I asked you to write something at all. Go ahead and write the bad version. We'll worry about writing something good later."

"Oh, is that all you wanted? That's easy!"

And I happily went back to work and kept writing.

Sometimes the best you can do is give everyone just part of what they want, though. There are people who believe that the rich US should give much of its excess wealth to poor people. If you believe this, what's a better strategy? Start a magazine called "America Is Bad And It Should Feel Bad", or try to expand our guest-worker visa program? One, and only one, of these will increase the wealth of poor foreigners at all.

The advantages of this approach are that it probably maximizes your short-term happiness, that more of your needs get met, and that it saves willpower for things where this approach is not viable.

3) Lose.

If you can't crush the opposition, and you can't trade with them, then you lose. If you're losing, and you have spent five minutes thinking about it and can't think of either a viable way to win or an idea-generating method you expect to work, then give up. Stop expending willpower on it, accept the bad consequence, and get on with your life.

I'm a bad person? Okay, I'm a bad person. I'd still like to help people, though. What's for lunch?


The Bottom Line


KIRK: I wish I were on a long sea voyage somewhere. Not too much deck tennis, no frantic dancing, and no responsibility. Why me? I look around that Bridge, and I see the men waiting for me to make the next move. And Bones, what if I'm wrong?
MCCOY: Captain, I
KIRK: No, I don't really expect an answer.
MCCOY: But I've got one. Something I seldom say to a customer, Jim. In this galaxy, there's a mathematical probability of three million Earth-type planets. And in all of the universe, three million million galaxies like this. And in all of that, and perhaps more, only one of each of us. Don't destroy the one named Kirk.

CFAR - Second Impression and a Bleg

TLDR: CFAR is awesome because they do things that work. They've promised to do more research into some of the "epistemic rationality" areas that I'd wished to see more progress on. They have matched donations through 31 January 2014; please consider giving if you can.

UPDATE: CFAR now has a post up on Less Wrong explaining what they are working on and why you should give. Here's the official version: http://lesswrong.com/lw/jej/why_cfar/

Second Thoughts on CFAR

You may have seen my first-impression review of the Center For Applied Rationality's November workshop in Ossining, NY. I've had more than a month to think it over, and on balance I'm pretty impressed.

For those of you who don't already know, CFAR (the Center For Applied Rationality) is an organization dedicated to developing training to help people overcome well-studied cognitive biases, and thus become more effective at accomplishing their goals. If you've never heard of CFAR before, you should check out their about page before continuing here.

The first thing you need to understand about CFAR is that they teach stuff that actually works, in a way that works. This is because they have a commitment to testing their beliefs, abandoning ideas that don't work out, and trying new things until they find something that works. As a workshop participant I benefited from that: it was clear that the classes were way better honed, more specific, and more action-oriented than they'd been in 2011.

At the time I expressed some disappointment that a lot of epistemic rationality stuff seemed to have been neglected, postponed, or abandoned. Even though some of those things seem objectively much harder than some of the personal effectiveness training CFAR seems to have focused on, they're potentially high-value in saving the world.

The Good News

After my post, Anna Salamon from CFAR reached out to see if we could figure out some specific things they should try again. I think this was a helpful conversation for both of us. Anna explained to me a few things that helped me understand what CFAR was doing:

1) Sometimes an "epistemic rationality" idea turns into a "personal effectiveness" technique when operationalized.

For example, consider the epistemic rationality idea of beliefs as anticipations, rather than just verbal propositions. The idea is that you should expect to observe something different in the world if a belief is true than if it's false. Sounds pretty obvious, right? But the "Internal Simulator," where you imagine how surprised you will be if your plan doesn't work out, is a non-obvious application of that idea.

2) Some of the rationality techniques I'd internalized from the Sequences at Less Wrong, that seemed obvious to me, are not obvious to a lot of people going to the workshops, so some of the epistemic rationality training going on was invisible to me.

For example, some attendees hadn't yet learned the Bayesian way of thinking about information - that you should have a subjective expectation based on the evidence, even when the evidence isn't conclusive yet, and that there are mathematical rules governing how you should treat this partial evidence. So while I didn't get much out of the Bayes segment, that's because I'd already learned the thing that class is supposed to teach.
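For readers who haven't seen it, here is a toy version of the kind of update that segment teaches. All the numbers are invented for the example.

```python
# Toy Bayesian update: inconclusive evidence should still move your
# subjective probability by a definite, rule-governed amount.
prior = 0.3              # P(H): credence in the hypothesis beforehand
p_e_given_h = 0.8        # P(E | H): chance of seeing the evidence if H is true
p_e_given_not_h = 0.4    # P(E | not H): chance of seeing it anyway

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e

print(f"prior {prior:.2f} -> posterior {posterior:.3f}")  # prior 0.30 -> posterior 0.462
```

The evidence isn't conclusive - the posterior is nowhere near 0 or 1 - but Bayes' rule still says exactly how far your credence should shift.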

3) CFAR already tried a bunch of stuff.

They did online randomized trials of some epistemic rationality techniques and published the results. They tried a bunch of ways to teach epistemic rationality stuff and found that it didn't work (which is what I'd guessed). They'd found ways to operationalize bits of epistemic rationality.

4) The program is not just the program.

Part of CFAR's mission is the actual rationality-instruction it does. But another part is taking people possibly interested in rationality and introducing them to the broader community of people interested in existential risk mitigation, other forms of effective altruism, and epistemic rationality. Even if CFAR doesn't know how to teach all these things yet, combining people who know each of these things will produce a community with the virtues the world needs.

In the course of the conversation, Anna asked me why I cared about this so much - what was my "Something to Protect"? This question helped me clarify what I really was worried about.

In my post on effective altruism, I mentioned that a likely extremely high-leverage way to help the world was to help people working on mitigating existential risk. The difficulty is that the magnitude of the risks, and the impact of the mitigation efforts, is really, really hard to assess. An existential risk is not something like malaria, where we can observe how often it occurs. By definition we haven't observed even one event that kills off all humans. So how can we assess the tens or hundreds of potential threats?

A while before, Anna had shared a web applet that let you provide your estimates for, e.g., the probability each year of a given event like global nuclear war or the development of friendly AI, and it would tell you the probability that humanity survived a certain number of years. I tried it out, and in the process, realized that:

Something Is Wrong With My Brain and I Don't Know How to Fix It

For one of these rates, I asked myself the probability in each year, and got back something like 2%.

But then I asked myself the probability in a decade, and got back something like 5%.

A century? 6%.

That can't be right. My intuitions seem obviously inconsistent. But how do I know which one to use, or how to calibrate them?
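A quick calculation makes the inconsistency concrete. Assuming, as a simplification, that the event is equally likely in each year and that years are independent, an annual probability p implies a probability of 1 - (1 - p)^n over n years:

```python
# If an event has independent probability p each year, the chance it
# happens at least once in n years is 1 - (1 - p)**n.
p_annual = 0.02
for years in (1, 10, 100):
    print(f"{years:>3} years: {1 - (1 - p_annual) ** years:.1%}")
# prints:  1 years: 2.0%   10 years: 18.3%   100 years: 86.7%
```

A 2%-per-year intuition implies about 18% per decade and 87% per century, wildly out of line with my gut's 5% and 6%. At least two of the three answers have to go.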

Eliezer Yudkowsky started writing the Sequences to fix whatever was wrong with people's brains that was stopping them from noticing and doing something about existential risk. But a really big part of this is gaining the epistemic rationality skills necessary to follow highly abstract arguments, modeling events that we have not and cannot observe, without getting caught by shiny but false arguments.

I know my brain is inadequate to the task right now. I read Yudkowsky's arguments in the FOOM Debate and I am convinced. I read Robin Hanson's arguments and am convinced. I read Carl Shulman's arguments and am convinced. But they don't all agree! To save the world effectively - instead of throwing money in the direction of the person who has most recently made a convincing argument - we need to know how to judge these things.

In Which I Extract Valuable Concessions from CFAR in Exchange for Some Money

Then it turned out CFAR was looking for another match-pledger for their upcoming end/beginning of year matched donations fundraiser. Anna suggested that CFAR might be willing to agree to commit to certain epistemic rationality projects in exchange. I was skeptical at first - if CFAR didn't already think these were first-best uses of its money, why should I think I have better information? - but on balance I can't think of a less-bad outcome than what we actually got, because I do think these things are urgently needed, and I think that if CFAR isn't doing them now, it will only get harder to pivot from its current program of almost exclusively teaching instrumental rationality and personal effectiveness.

We hashed out what kinds of programs CFAR would be willing to do on the Epistemic Rationality front, and agreed that these things would get done if enough money is donated to activate my pledge:

  • Participate in Tetlock's Good Judgment Project to learn more about what rationality skills help make good predictions, or would help but are missing.
  • Do three more online randomized experiments to test more epistemic rationality techniques.
  • Do one in-person randomized trial of an epistemic rationality training technique.
  • Run three one-day workshops on epistemic rationality, with a mixture of old and new material, as alpha tests.
  • Bring at least one epistemic rationality technique up to the level where it goes into the full workshops.

And of course CFAR will continue with a lot of the impressive work it's already been doing.

Here are the major "epistemic rationality" areas where I asked them to focus new research:
  • Noticing Confusion (& doing something about it)
  • Noticing rationalization, and doing something to defuse it, e.g. setting up a line of retreat
  • Undistractability/Eye-on-the-ball/12th virtue/"Cut the Enemy"/"Intent to Win" (this kind of straddles epistemic and instrumental rationality AFAICT but distractions usually look like epistemic failures)
  • Being specific / sticking your neck out / being possibly wrong instead of safely vague / feeling an "itch" to get more specific when you're being vague
Here are some advanced areas that seem harder (because I have no idea how to do these things) but would also count:
  • Reasoning about / modeling totally new things. How to pick the right "reference classes."
  • Resolving scope-insensitivity (e.g. should I "shut up and multiply" or "shut up and divide"). Especially about probabilities *over time* (since there are obvious X-Risk applications).
  • How to assimilate book-learning / theoretical knowledge (can be broken down into how to identify credible sources, how to translate theoretical knowledge into procedural knowledge)

If you're anything like me, you think that these programs would be awesome. If so, please consider giving to CFAR, and helping me spend my money to buy this awesomeness.

The Bad News

For some reason, almost one month into their two-month fundraiser, CFAR has no post up on Less Wrong promoting it. As I was writing this post, CFAR had raised less than $10,000 compared to a total of $150,000 in matching funds pledged. (UPDATE: CFAR now has an excellent post up explaining their plan and the fundraiser is doing much better.)

[Image: CFAR fundraiser progress bar]

Huge oopses happen, even to very good, smart organizations, but this is relevant evidence about operational competence. Then again, I kind of have an idiosyncratic axe to grind with respect to CFAR and operational competence, as is obvious if you read my first-impression review. But it's still a bad sign, for an organization working on a problem this hard, to fail basic tests like this. You should probably take that into account.

It's weak evidence, though.

CFAR Changed Me for the Better

The ultimate test of competence for an organization like CFAR is not operational issues like whether people can physically get to and from the workshops or whether anyone knows about the fundraiser. The test is, does CFAR make people who take its training better at life?

In my case there was more than one confounding factor (I'd started working with a life coach a few weeks before and read Scott Adams's new book a few weeks after - Less Wrong review here), but I have already benefited materially from my experience:

I had three separate insights related to how I think about my career that jointly let me actually start to plan and take action. In particular, I stopped letting the best be the enemy of the good, noticed that my goals can be of different kinds, and figured out which specific component of my uncertainty was the big scary one and took actual steps to start resolving it.

A couple of things in my life improved immediately as if by magic. I started working out every morning, for example, for the first time since college. I'm still not sure how that happened. I didn't consciously expend any willpower.

Several other recent improvements in my life of comparable size are partially attributable to CFAR as well. (The other main contributors are my excellent life coach, Scott Adams's book, and the cumulative effect of everything else I've done, seen, heard, and read.)

Several of the classes that seemed hard to use at the time became obviously useful in hindsight. For example, I started noticing things where a periodic "Strategic Review" would be helpful.

In addition, I learned how to be "greedy" about asking other people questions and asking for advice when I thought it would be helpful. This has been tremendously useful already.

I'll end the way I began, with a summary:

The problems humanity is facing in this century are unprecedented in both severity and difficulty. To meet these challenges, we need people who are rational enough to sanely evaluate the risks and possible solutions, effective enough to get something done, and good enough to take personal responsibility for making sure something happens. CFAR is trying to create a community of such people. Almost no one else is even trying.

CFAR is awesome because they do things that work. They've promised to do more research into some of the "epistemic rationality" areas that I'd wished to see more progress on. They have a fundraiser with matched donations through 31 January 2014; please consider giving if you can.