On the fetishization of money in Galt’s Gulch

Ayn Rand’s Atlas Shrugged is set in a world in which the death dance of capitalism has reached its final stages, the state itself becoming an instrument of direct appropriation of surplus value generated by the workers. As industrialists become aware of the extractive nature of the process in which they are participating, one by one, they convert to the radical anarchism of an agitator named John Galt,* and “go on strike” to a utopian community hidden in the mountains of Colorado: Galt’s Gulch.

In Galt’s Gulch, resources are allocated to whoever can use them most productively, in an informal process; since everyone can see how their interests converge, levels of trust are high, and hoarding and shirking are basically nonproblems. People pick up whatever tasks seem needed, regardless of their profession or the ability such tasks might give them to extract rents.

This raises the obvious question: Why does anyone use money in Galt’s Gulch?

Words and Techniques

How do you learn a behavior? How do you teach it?

Well, let's say you want someone to, when in potentially dangerous situations, scan their environment for threats. You could just tell them that. But what happens? If you tell them once, it's just a thing someone told them. If you tell them many times, they'll get a little voice in their head that says, occasionally, "when you're in a potentially dangerous situation, scan your environment for likely threats." That's the behavior you've taught - to rehearse the admonition. At most, if "potentially dangerous" is really understood, they might remember your admonition when they're already scared, and look around one more time.

So the problem with general verbal admonitions is that they aren't very good at producing helpful behavior. What you really need to do is tell them, "before you cross the street, look both ways."

Why is that better? Because it prescribes a specific action, and a specific situational cue to execute the behavior. That's how people actually learn to do things. Concepts like "potentially dangerous" are too abstract to trigger a stereotyped response, though maybe "feeling scared" is specific enough. But even in that case, it's not actually the same thing - if I'm scared of the dark, should I look around my dark hallway at night for threats? No.

Here are some more examples:

Too general: My car is broken -> fix it

Better: A light on the dashboard turned on -> go to the mechanic

Too general: Eat healthier

Better: If it's not mealtime, I don't eat anything but fresh vegetables. At mealtime, I always start with a small amount of protein-heavy food and some vegetables, and then wait a few minutes to determine whether I'm still hungry.

Notice that the first example obviously doesn't cover all cases, and the second one has a very specific behavior that won't be appropriate for everyone. So you might want to think about how to teach more generalized good behaviors.

I can think of two ways to train more generalized behavior. You can teach a behavior-generating behavior or an explicit diagnostic tool.

What's a behavior-generating behavior? Well, you could teach someone how to use a map when they need to figure out how to get somewhere. Then every time they have the situational cue "I don't know how to get there," they can pull out their map and design a new route that they've never done before.

What's a diagnostic tool? You could learn to recognize feeling frustrated, and train the habit of asking what you can do in the future to fix the problem instead of trying to assign blame.

This has helped me understand a lot of the changes in the curriculum at CFAR since the 2011 Minicamp.

Rationality is often divided up into "epistemic" and "instrumental" rationality, where epistemic rationality is about having explicit verbal beliefs that are accurate, and instrumental rationality is about taking actions that accomplish your goals.

At Minicamp, we spent a lot of time on "epistemic rationality," but most of it didn't really rise above the level of verbal admonitions. I spent a while figuring out how to put these into Anki cards so I'd at least have the cached thoughts in my head. Here are a few of them (I've omitted the cloze deletions):

  • When I notice defensiveness/pride/positive affect about a belief, I reward myself for noticing, and ask myself what new evidence would change my mind.
  • When I notice that I am considering changing my mind, I reward myself for noticing, and write it down.
  • Once I have decided whether to change my mind, I write down the outcome.
  • When I avoid thinking about what happens if I am wrong, I might be overconfident.
  • If I am not asking experts what they think, I might not be curious enough.
  • My beliefs should predict some outcomes and prohibit others.
  • When I think of or ask for an example, I reward myself.
  • When I can't think of any examples, I consider changing my mind.
  • When I keep finding different reasons to avoid thinking about or doing something, but do not feel a salient negative affect, I notice that I have a strong aversion.
  • When I notice an opportunity to make a prediction in advance, I make a prediction and reward myself for noticing.
  • When someone whose opinion I respect disagrees with me, I consider changing my mind.

I now have these cached thoughts, but I'm not sure they've affected my behavior much. There were also some things that were so general I didn't even know how to write an Anki card that I expected might work.

There was basically none of this at this past weekend's CFAR workshop. Instead, we had techniques that applied some of these principles, in specific, well-described situations.

For example, a big part of epistemic rationality is the idea that beliefs should cash out to anticipated experiences, and you should test your beliefs. We didn't cover this at a high level anywhere, but we did talk about using the "inner simulator" to troubleshoot plans in advance. Basically, imagine that someone tells you your plan failed, after the fact. How surprised do you feel? That's just a special case of noticing your subjective anticipation beforehand, to give you the opportunity to reconcile it with your explicit belief.

"Inner simulator" also gives you an opportunity to make excuses in advance, by asking your imagined future self why the plan failed.

The sad thing about this technique is that my brain didn't connect it automatically with the admonition "test your beliefs!" The nice thing about this technique is:

I might actually use it.

In my "first impression" review of the CFAR workshop I spent a lot of time talking about the things that were missing. Well, aside from the fact that it was less than half the length of the Minicamp, the workshop had another good reason to drop that stuff: the actually existing epistemic rationality training just didn't work yet. The folks at CFAR did a lot of testing, and it turned out that people basically don't change their lives in response to high-level epistemic rationality admonitions. So they had to choose between two options:

1) Do what works

2) Fix the broken thing

This is a pretty common decision to have to make, and it's often not obvious which is the right one. The advantage to "Do what works" is it doesn't take much extra effort once you've identified where the problems are - you just stay away from them!

The upside to "Fix the broken thing" is that it is often the only way to get extraordinary results. Chances are, someone else has already tried "do what works" - though not so certainly that it isn't worth testing. It's an uphill climb to fix something that doesn't work: you'll have to try a lot of things, most of them won't work, you'll be really frustrated and want to pull your hair out, and then you'll stumble on something so obvious in hindsight that you'll feel like an idiot. You'll have to give up a whole bunch of beautiful ideas that all seemed like just the insight you needed, because they didn't actually work.

So why put up with all that? It depends on what you think the world needs. If the world needs a bunch of people who are all just a little more effective, then by all means stick with the things that work.

But if the world has big important problems that must be solved, but can only be solved by going against the grain, by reaching into solution space for something far away from our local maximum - then only the hard way can save the world. Not just making people who have power over their own emotions, habits, and behaviors, which makes them more effective, but people whose explicit verbal reasoning is correct as well, so they can notice the alternative that's 10 or 1,000 times as good as the default action.