Tag Archives: CFAR

The order of the soul

In standard three-part models of the soul, bias maps well onto the middle part. Symmetry maps well onto the "upper" part in ancient accounts, but not modern ones. This reflects a real change in how people think. It is a sign of damage. Damage wrought on people's souls – especially among elites – by formal schooling and related pervasive dominance relations in employment.

Solve your problems by fantasizing

The problem with most goal-driven plans is that most goals are fake, and so are most plans. One way to fix this is to fantasize.

My life so far: motives and morals

This is the story of my life, through the lens of motivations, of actions I took to steer myself towards long-term outcomes, of the way the self that stretches out in causal links over long periods of time produced the self I have at this moment. This is only one of the many ways to tell the story of my life.

CFAR - Second Impression and a Bleg

TLDR: CFAR is awesome because they do things that work. They've promised to do more research into some of the "epistemic rationality" areas where I'd hoped to see more progress. They are matching donations through 31 January 2014; please consider giving if you can.

UPDATE: CFAR now has a post up on Less Wrong explaining what they are working on and why you should give. Here's the official version: http://lesswrong.com/lw/jej/why_cfar/

Second Thoughts on CFAR

You may have seen my first-impression review of the Center For Applied Rationality's November workshop in Ossining, NY. I've had more than a month to think it over, and on balance I'm pretty impressed.

For those of you who don't already know, CFAR (the Center For Applied Rationality) is an organization dedicated to developing training to help people overcome well-studied cognitive biases, and thus become more effective at accomplishing their goals. If you've never heard of CFAR before, you should check out their about page before continuing here.

The first thing you need to understand about CFAR is that they teach stuff that actually works, in a way that works. This is because they have a commitment to testing their beliefs, abandoning ideas that don't work out, and trying new things until they find something that works. As a workshop participant I benefited from that: it was clear that the classes were way better honed, more specific, and more action-oriented than they'd been in 2011.

At the time I expressed some disappointment that a lot of epistemic rationality stuff seemed to have been neglected, postponed, or abandoned. Even though some of those things seem objectively much harder than some of the personal effectiveness training CFAR seems to have focused on, they're potentially high-value in saving the world.

The Good News

After my post, Anna Salamon from CFAR reached out to see if we could figure out some specific things they should try again. I think this was a helpful conversation for both of us. Anna explained to me a few things that helped me understand what CFAR was doing:

1) Sometimes an "epistemic rationality" idea turns into a "personal effectiveness" technique when operationalized.

For example, consider the epistemic rationality idea of beliefs as anticipations, rather than just verbal propositions. The idea is that you should expect to observe something different in the world if a belief is true than if it's false. Sounds pretty obvious, right? But the "Internal Simulator," where you imagine how surprised you will be if your plan doesn't work out, is a non-obvious application of that idea.

2) Some of the rationality techniques I'd internalized from the Sequences on Less Wrong, which seemed obvious to me, are not obvious to a lot of the people going to the workshops, so some of the epistemic rationality training going on was invisible to me.

For example, some attendees hadn't yet learned the Bayesian way of thinking about information - that you should have a subjective expectation based on the evidence, even when the evidence isn't conclusive yet, and that there are mathematical rules governing how you should treat this partial evidence. So while I didn't get much out of the Bayes segment, that's because I'd already learned the thing that class is supposed to teach.
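To make that concrete, here is a minimal sketch of that kind of bookkeeping - not CFAR's material, just Bayes' rule with made-up numbers - showing how inconclusive evidence still yields a definite subjective probability:

```python
# Toy Bayesian update with illustrative (made-up) numbers: a prior belief
# plus one piece of inconclusive evidence still gives a definite
# subjective probability afterwards.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# The hypothesis starts out unlikely (1%); the observed evidence is
# 8x more likely if the hypothesis is true than if it is false.
posterior = bayes_update(prior=0.01, p_evidence_if_true=0.8, p_evidence_if_false=0.1)
print(f"posterior = {posterior:.3f}")  # ~0.075: more likely than before, still far from certain
```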

3) CFAR already tried a bunch of stuff.

They did online randomized trials of some epistemic rationality techniques and published the results. They tried a bunch of ways to teach epistemic rationality stuff and found that it didn't work (which is what I'd guessed). They'd found ways to operationalize bits of epistemic rationality.

4) The program is not just the program.

Part of CFAR's mission is the actual rationality instruction it does. But another part is taking people who might be interested in rationality and introducing them to the broader community of people interested in existential risk mitigation, other effective altruism, and epistemic rationality. Even if CFAR doesn't know how to teach all these things yet, combining people who know each of these things will produce a community with the virtues the world needs.

In the course of the conversation, Anna asked me why I cared about this so much - what was my "Something to Protect"? This question helped me clarify what I really was worried about.

In my post on effective altruism, I mentioned that helping people who work on mitigating existential risk is likely an extremely high-leverage way to help the world. The difficulty is that the magnitude of the risks, and the impact of the mitigation efforts, is really, really hard to assess. An existential risk is not something like malaria, where we can observe how often it occurs. By definition we haven't observed even one event that kills off all humans. So how can we assess the tens or hundreds of potential threats?

A while before, Anna had shared a web applet that let you enter your estimates for, e.g., the annual probability of a given event like global nuclear war or the development of friendly AI, and it would tell you the probability that humanity survived a certain number of years. I tried it out, and in the process realized that:

Something Is Wrong With My Brain and I Don't Know How to Fix It

For one of these rates, I asked myself the probability in each year, and got back something like 2%.

But then I asked myself the probability in a decade, and got back something like 5%.

A century? 6%.

That can't be right. My intuitions seem obviously inconsistent. But how do I know which one to use, or how to calibrate them?
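I don't know how the applet was implemented, but the compounding it depends on is simple arithmetic; the hard part is that my gut doesn't respect it. Here is a minimal sketch, using my 2%-per-year answer as the only input, of what a constant annual probability implies over longer horizons:

```python
# If an event has a constant 2% chance per year (and years are independent),
# what chance does that imply per decade and per century?
# Compare with my gut answers of ~5% and ~6%.

annual_p = 0.02

def prob_within(years, p_per_year):
    """P(the event happens at least once within `years` years)."""
    return 1 - (1 - p_per_year) ** years

print(f"per decade:  {prob_within(10, annual_p):.1%}")   # ~18.3%, vs. my gut's ~5%
print(f"per century: {prob_within(100, annual_p):.1%}")  # ~86.7%, vs. my gut's ~6%
```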

Eliezer Yudkowsky started writing the Sequences to fix whatever was wrong with people's brains that was stopping them from noticing and doing something about existential risk. And a really big part of this is gaining the epistemic rationality skills necessary to follow highly abstract arguments and to model events that we have not observed and cannot observe, without getting caught by shiny but false arguments.

I know my brain is inadequate to the task right now. I read Yudkowsky's arguments in the FOOM Debate and I am convinced. I read Robin Hanson's arguments and am convinced. I read Carl Shulman's arguments and am convinced. But they don't all agree! To save the world effectively - instead of throwing money in the direction of the person who has most recently made a convincing argument - we need to know how to judge these things.

In Which I Extract Valuable Concessions from CFAR in Exchange for Some Money

Then it turned out CFAR was looking for another match-pledger for their upcoming end/beginning of year matched donations fundraiser. Anna suggested that CFAR might be willing to agree to commit to certain epistemic rationality projects in exchange. I was skeptical at first - if CFAR didn't already think these were first-best uses of its money, why should I think I have better information? - but on balance I can't think of a less-bad outcome than what we actually got, because I do think these things are urgently needed, and I think that if CFAR isn't doing them now, it will only get harder to pivot from its current program of almost exclusively teaching instrumental rationality and personal effectiveness.

We hashed out what kinds of programs CFAR would be willing to do on the Epistemic Rationality front, and agreed that these things would get done if enough money is donated to activate my pledge:

  • Participate in Tetlock's Good Judgment Project to learn more about what rationality skills help make good predictions, or would help but are missing.
  • Do three more online randomized experiments to test more epistemic rationality techniques.
  • Do one in-person randomized trial of an epistemic rationality training technique.
  • Run three one-day workshops on epistemic rationality, with a mixture of old and new material, as alpha tests.
  • Bring at least one epistemic rationality technique up to the level where it goes into the full workshops.

And of course CFAR will continue with a lot of the impressive work it's already been doing.

Here are the topics that I asked them to focus on for new research:

Here are the major "epistemic rationality" areas where I'd love to see research:
  • Noticing Confusion (& doing something about it)
  • Noticing rationalization, and doing something to defuse it, e.g. setting up a line of retreat
  • Undistractability/Eye-on-the-ball/12th virtue/"Cut the Enemy"/"Intent to Win" (this kind of straddles epistemic and instrumental rationality AFAICT but distractions usually look like epistemic failures)
  • Being specific / sticking your neck out / being possibly wrong instead of safely vague / feeling an "itch" to get more specific when you're being vague
Here are some advanced areas that seem harder (because I have no idea how to do these things) but would also count:
  • Reasoning about / modeling totally new things. How to pick the right "reference classes."
  • Resolving scope-insensitivity (e.g. should I "shut up and multiply" or "shut up and divide"). Especially about probabilities *over time* (since there are obvious X-Risk applications).
  • How to assimilate book-learning / theoretical knowledge (can be broken down into how to identify credible sources, how to translate theoretical knowledge into procedural knowledge)

If you're anything like me, you think that these programs would be awesome. If so, please consider giving to CFAR, and helping me spend my money to buy this awesomeness.

The Bad News

For some reason, almost one month into their two-month fundraiser, CFAR had no post up on Less Wrong promoting it. As I was writing this post, CFAR had raised less than $10,000 of the $150,000 in matching funds pledged. (UPDATE: CFAR now has an excellent post up explaining their plan, and the fundraiser is doing much better.)

[Image: CFAR fundraiser progress bar]

Huge oopses happen, even to very good, smart organizations, but this is relevant evidence about operational competence. Then again, I kind of have an idiosyncratic axe to grind with respect to CFAR and operational competence, as is obvious if you read my first-impression review. But it's still a bad sign, for an organization working on a problem this hard, to fail basic tests like this. You should probably take that into account.

It's weak evidence, though.

CFAR Changed Me for the Better

The ultimate test of competence for an organization like CFAR is not operational issues like whether people can physically get to and from the workshops or whether anyone knows about the fundraiser. The test is, does CFAR make people who take its training better at life?

In my case there was more than one confounding factor (I'd started working with a life coach a few weeks before and read Scott Adams's new book a few weeks after - Less Wrong review here), but I have already benefited materially from my experience:

I had three separate insights related to how I think about my career that jointly let me actually start to plan and take action. In particular, I stopped letting the best be the enemy of the good, noticed that my goals can be of different kinds, and figured out which specific component of my uncertainty was the big scary one and took actual steps to start resolving it.

A couple of things in my life improved immediately as if by magic. I started working out every morning, for example, for the first time since college. I'm still not sure how that happened. I didn't consciously expend any willpower.

Several other recent improvements in my life of comparable size are partially attributable to CFAR as well. (The other main contributors are my excellent life coach, Scott Adams's book, and the cumulative effect of everything else I've done, seen, heard, and read.)

Several of the classes that seemed hard to use at the time became obviously useful in hindsight. For example, I started noticing things where a periodic "Strategic Review" would be helpful.

In addition, I learned how to be "greedy" about asking other people questions and asking for advice when I thought it would be helpful. This has been tremendously useful already.

I'll end the way I began, with a summary:

The problems humanity is facing in this century are unprecedented in both severity and difficulty. To meet these challenges, we need people who are rational enough to sanely evaluate the risks and possible solutions, effective enough to get something done, and good enough to take personal responsibility for making sure something happens. CFAR is trying to create a community of such people. Almost no one else is even trying.

CFAR is awesome because they do things that work. They've promised to do more research into some of the "epistemic rationality" areas where I'd hoped to see more progress. They have a fundraiser with matched donations through 31 January 2014; please consider giving if you can.

Words and Techniques

How do you learn a behavior? How do you teach it?

Well, let's say you want someone to, when in potentially dangerous situations, scan their environment for threats. You could just tell them that. But what happens? If you tell them once, it's just a thing someone told them. If you tell them many times, they'll get a little voice in their head that says, occasionally, "when you're in a potentially dangerous situation, scan your environment for likely threats." That's the behavior you've taught - to rehearse the admonition. At most, if "potentially dangerous" is really understood, they might remember your admonition when they're already scared, and look around one more time.

So the problem with general verbal admonitions is that they aren't very good at producing helpful behavior. What you really need to do is tell them, "before you cross the street, look both ways."

Why is that better? Because it prescribes a specific action, and a specific situational cue to execute the behavior. That's how people actually learn to do things. Concepts like "potentially dangerous" are too abstract to trigger a stereotyped response, though maybe "feeling scared" is specific enough. But even in that case, it's not actually the same thing - if I'm scared of the dark, should I look around my dark hallway at night for threats? No.

Here are some more examples:

Too general: My car is broken -> fix it

Better: A light on the dashboard turned on -> go to the mechanic

Too general: Eat healthier

Better: If it's not mealtime, I don't eat anything but fresh vegetables. At mealtime, I always start with a small amount of something protein-heavy and some vegetables, and then wait a few minutes to determine whether I'm still hungry.

Notice that the first example obviously doesn't cover all cases, and the second one has a very specific behavior that won't be appropriate for everyone. So you might want to think about how to teach more generalized good behaviors.

I can think of two ways to train more generalized behavior. You can teach a behavior-generating behavior or an explicit diagnostic tool.

What's a behavior-generating behavior? Well, you could teach someone how to use a map when they need to figure out how to get somewhere. Then every time they have the situational cue "I don't know how to get there," they can pull out their map and design a new route they've never taken before.

What's a diagnostic tool? You could learn to recognize feeling frustrated, and train the habit of asking what you can do in the future to fix the problem instead of trying to assign blame.
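To make the distinction concrete, here is a toy sketch - the cues and actions are hypothetical illustrations, not CFAR material - of fixed trigger-action pairs versus a behavior-generating behavior, where the cue triggers a procedure that produces a new specific action:

```python
# Toy model (hypothetical, for illustration): habits as cue -> action rules.
# A plain trigger-action pair maps a specific cue to a fixed action;
# a behavior-generating behavior maps a cue to a procedure that
# produces a new specific action for the situation at hand.

# Fixed trigger-action pairs: specific cue, specific action.
trigger_actions = {
    "about to cross the street": "look both ways",
    "dashboard warning light is on": "go to the mechanic",
}

# Behavior-generating behavior: the cue triggers a procedure.
def plan_route(destination):
    """Stand-in for 'pull out the map and design a route'."""
    return f"consult the map and plan a route to {destination}"

generators = {
    "I don't know how to get there": plan_route,
}

def respond(cue, context=None):
    if cue in trigger_actions:
        return trigger_actions[cue]
    if cue in generators:
        return generators[cue](context)
    return None  # no trained habit fires

print(respond("about to cross the street"))
print(respond("I don't know how to get there", context="the workshop venue"))
```

Note that a general admonition like "be careful" has no entry in either table; that's the point - it never fires on any specific cue.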

This has helped me understand a lot of the changes in the curriculum at CFAR since the 2011 Minicamp.

Rationality is often divided up into "epistemic" and "instrumental" rationality, where epistemic rationality is about having explicit verbal beliefs that are accurate, and instrumental rationality is about taking actions that accomplish your goals.

At Minicamp, we spent a lot of time on "epistemic rationality," but most of it didn't really rise above the level of verbal admonitions. I spent a while figuring out how to put these into Anki cards so I'd at least have the cached thoughts in my head. Here are a few of them (I've omitted the cloze-deletion):

  • When I notice defensiveness/pride/positive affect about a belief, I reward myself for noticing, and ask myself what new evidence would change my mind.
  • When I notice that I am considering changing my mind, I reward myself for noticing, and write it down.
  • Once I have decided whether to change my mind, I write down the outcome.
  • When I avoid thinking about what happens if I am wrong, I might be overconfident.
  • If I am not asking experts what they think, I might not be curious enough.
  • My beliefs should predict some outcomes and prohibit others.
  • When I think of or ask for an example, I reward myself.
  • When I can't think of any examples, I consider changing my mind.
  • When I keep finding different reasons to avoid thinking about or doing something, but do not feel a salient negative affect, I notice that I have a strong aversion.
  • When I notice an opportunity to make a prediction in advance, I make a prediction and reward myself for noticing.
  • When someone whose opinion I respect disagrees with me, I consider changing my mind.

I now have these cached thoughts, but I'm not sure they've affected my behavior much. There were also some things that were so general I didn't even know how to write an Anki card that I expected might work.

There was basically none of this at this past weekend's CFAR workshop. Instead, we had techniques that applied some of these principles in specific, well-described situations.

For example, a big part of epistemic rationality is the idea that beliefs should cash out to anticipated experiences, and you should test your beliefs. We didn't cover this at a high level anywhere, but we did talk about using the "inner simulator" to troubleshoot plans in advance. Basically, imagine that someone tells you your plan failed, after the fact. How surprised do you feel? That's just a special case of noticing your subjective anticipation beforehand, to give you the opportunity to reconcile it with your explicit belief.

"Inner simulator" also gives you an opportunity to make excuses in advance, by asking your imagined future self why the plan failed.

The sad thing about this technique is that my brain didn't connect it automatically with the admonition "test your beliefs!" The nice thing about this technique is:

I might actually use it.

In my "first impression" review of the CFAR workshop I spent a lot of time talking about the things that were missing. Well, aside from the fact that it was less than half the length of the Minicamp, the workshop had another good reason to drop that stuff: the actually existing epistemic rationality training just didn't work yet. The folks at CFAR did a lot of testing, and it turned out that people basically don't change their lives in response to high-level epistemic rationality admonitions. So they had to choose between two options:

1) Do what works

2) Fix the broken thing

This is a pretty common decision to have to make, and it's often not obvious which is the right one. The advantage of "Do what works" is that it doesn't take much extra effort once you've identified where the problems are - you just stay away from them!

The upside of "Fix the broken thing" is that it is often the only way to get extraordinary results. Chances are, someone else has already tried "do what works," though that's not so likely that it's not worth testing. It's an uphill climb to fix something that doesn't work: you'll have to try a lot of things, most of them won't work, you'll be really frustrated and want to pull your hair out, and then you'll stumble on something so obvious in hindsight that you'll feel like an idiot. You'll have to give up a whole bunch of beautiful ideas that all seemed like just the insight you needed, because they didn't actually work.

So why put up with all that? It depends on what you think the world needs. If the world needs a bunch of people who are all just a little more effective, then by all means stick with the things that work.

But if the world has big, important problems that must be solved, but can only be solved by going against the grain, by reaching into solution space for something far away from our local maximum - then only the hard way can save the world. Not just making people who have power over their own emotions, habits, and behaviors, which makes them more effective, but making people whose explicit verbal reasoning is correct as well, so they can notice the alternative that's 10 or 1,000 times as good as the default action.