Monthly Archives: November 2013

Can God Make a Rock So Big He Can't Pick it Up? Or, Why Does My Calculus Textbook Start With This Chapter About Unions and Intersections?

Can God create a rock so big that He can't pick it up? To understand the problem, we need to understand set theory. But I don't really want to talk about Russell's paradox quite yet - a big problem with set theory as it's taught is that it doesn't respond to a felt need, it's just plopped down at the beginning of a calculus or logic textbook without explanation. Here's a bunch of self-evident stuff! Go calculate what the union of the intersections is!

I'm not going to tell you how to do set theory here. You can look that up if you want. I'm just going to try to explain a little bit about why it matters, why you should be interested in it, and how to apply some set-theory-ish rules of thumb to your own thoughts.

Think about the difference between these two arguments:


The king of Freedonia is Phillip III.

The husband of Mary Teller is Phillip III.

Therefore, the king of Freedonia is the husband of Mary Teller.


Milk is white.

Snow is white.

Therefore, milk is snow.


The second argument looks just like the first one - but the first one works and the second one doesn't. Why?

Well, I've deliberately made it tricky by using the verb "is" in each case. "Is" is one of those tricky verbs whose meaning is very context dependent. Here's a more precise formulation of the arguments:


The king of Freedonia is the same as Phillip III.

The husband of Mary Teller is the same as Phillip III.

Therefore, the king of Freedonia is the same as the husband of Mary Teller.


Milk is one of the things that are always white.

Snow is one of the things that are always white.

Therefore, milk ??? snow.


It's not even clear which spurious consequence is supposed to follow from the second argument anymore. Is this a specious proof that milk and snow are identical, or that all milk is snow, or that all snow is milk, or just that some things are both milk and snow?

Here's another paired example:


A shark is an aquatic animal.

An aquatic animal is a living thing.

Therefore, a shark is a living thing.


A knife is an item in my silverware drawer.

An item in my silverware drawer is a spoon.

Therefore, a knife is a spoon.


And with more specific wording:


Every shark is an aquatic animal.

Every aquatic animal is a living thing.

Therefore, every shark is a living thing.


At least one knife is an item in my silverware drawer.

At least one item in my silverware drawer is a spoon.

Therefore, ???


Or better yet:


There exists at least one item that is both a knife and in my silverware drawer.

There exists at least one item that is both in my silverware drawer and a spoon.

Therefore, ???


Set theory is a way to force yourself to use statements more explicit than "X is Y", to prevent you from accidentally equivocating and "proving" that knives are spoons. Since math is all about proving possibly counterintuitive things, this is kind of important in math. But it's also important whenever you're making explicit compounded arguments of the (A, B, THEREFORE C) style.

In set theory you never say "X is Y." You instead are always talking about whether something is a member of a set. For now, think of a set as nothing more specific than a collection of things. There's a problem with this, but I'll get to it later.

You can say that something is a member of a set, or that if something is a member of one set, then it must be a member of another, or that there is at least one thing that is both a member of set A and a member of set B, etc. You can also negate these things - you can say that there are no things that are both members of set A and set B. Think about these sentences, and how to make them more precise:

  • A mouse is in this cage.
  • A mouse is an animal.
  • This mouse is Pinky.
  • Pinky is in this cage.
  • Dallas's football team is heavier than the people in China.
  • A dragon is not real.
  • WEF wrestling is fake.

Here are some formulations that are a little more set theory-ish:

  • There exists at least one thing that is both a member of the set (is a mouse) and a member of the set (things in this cage).
  • Every member of the set (is a mouse) is a member of the set (is an animal).
  • Every member of the set (this mouse) is a member of the set (Pinky). Also, every member of the set (Pinky) is a member of the set (this mouse).

(A pithier way to say that one is: Something is a member of the set (this mouse) if and only if it is a member of the set (Pinky). This is an "identity" relation.)

  • Every member of the set (Pinky) is a member of the set (in this cage).
  • The average of the weights of all the members of the set (members of Dallas's football team) is higher than the average of the weights of all the members of the set (the people in China).

(This one is tricky - the original statement is ambiguous, because it's worded as a statement about the set, but what exactly are we saying is heavier than what? Are we saying that each Dallas Cowboy is heavier than each person in China? Or that the Dallas Cowboys, weighed all together, are heavier than the people in China, weighed all together? Or that the average weight of a member of the first set is greater than that of a member of the second? It's important to be specific about things like this when talking about group characteristics.)

  • There are no members of the set (dragons) that are members of the set (real things).
  • Every member of the set (WEF wrestling matches) is a member of the set (fake things).

Do you get the pattern? You never simply talk about how something "is" or "is not" something else, only about whether a member of set A is never, sometimes, or always a member of set B, and whether an assertion is true or false.
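This pattern maps directly onto Python's built-in sets, if you want to play with it; the particular sets and members below are my own illustrative inventions, not anything from the examples above.

```python
# A minimal sketch of the quantified statements above, using Python sets.
# The members are made up for illustration.
mice = {"pinky", "field_mouse"}
animals = {"pinky", "field_mouse", "rex_the_dog"}
things_in_cage = {"pinky", "water_bottle"}
dragons = set()
real_things = animals | things_in_cage

# "There exists at least one mouse in this cage": the intersection is nonempty.
print(bool(mice & things_in_cage))      # True

# "Every mouse is an animal": subset relation.
print(mice <= animals)                  # True

# The "milk is snow" error: sharing members with a set is not identity.
print(mice == animals)                  # False

# "No dragon is real": the sets are disjoint.
print(dragons.isdisjoint(real_things))  # True
```

Note that each statement is about *never, sometimes, or always* being a member, exactly as above, and each one is unambiguously true or false.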

This can be helpful in avoiding getting into stupid arguments. If someone says, "a mouse is an animal," do they mean that there is at least one mouse that is an animal, or that every mouse is an animal, or that something is a mouse if and only if it's an animal?

If they mean that there's at least one mouse that's an animal, then finding a mouse that's not an animal (like a computer mouse, or a robotic mouse) is not evidence against their point - all they have to do to prove it's true is find at least one mouse that is an animal. But if you phrase it explicitly like that, it's harder for them to equivocate and "prove" that a computer mouse is an animal.

Or maybe more realistically, if I "prove" that wiggins are thieves by showing you one wiggin who steals something (which only proves that there is at least one wiggin who is a thief), I might then pretend that you should draw the inference that some other wiggin is also a thief (which would only be valid if I had proved that every member of set "is a wiggin" is a member of set "is a thief").

If they mean that every mouse is an animal, then finding an example of a mouse that is not an animal is a counterexample, but finding an example of an animal that is not a mouse, like a dog, is not a counterexample. If they've shown to your satisfaction that all members of set "mouse" are members of set "animal", then you can go on and assume that's true for each new mouse you encounter - but it doesn't imply that all members of set "animal" are members of set "mouse".

Finally, if they mean "if and only if," then you would have been able to prove them wrong just by showing them a dog. But if they convince you of this, then - and only then - should you accept the inference both ways.

It's easy to lose track of this when you say things like "mice are animals" or "wiggins are thieves", so it can be helpful to use set-theoretic language (which is almost as compact), like "MICE is a subset of ANIMALS."

OK, so what does this have to do with God's rocks? Well, sets are important, right? And we want to be correct when talking about important things - and sets help us be correct. So we want to describe sets using other sets. And talk about sets of sets!

Like you might want to talk about the properties of "sets that have no members." Or "sets that have a finite number of members." This is fine. But there are limits.

Let's walk through one of them - the rock paradox. It's usually stated as:

God is omnipotent. That means God can do any thing.

Making a rock so big that God can't pick it up is a thing.

Therefore, God can make a rock so big that God can't pick it up.

But picking up an arbitrary object that exists is also a thing.

Therefore God can pick up an arbitrary object that exists.

Now, let that arbitrary object be "a rock so big that God can't pick it up."

Then, God can pick up a rock so big that God can't pick it up.

Now, if the existence of such a rock were impossible, then this wouldn't be a problem. But we just said that God can make one.

But it's not really a rock so big that God can't pick it up, if God can pick it up.

Thus, the omnipotence of God implies a contradiction.

Therefore, there can be no omnipotent God.

The problem here seems to be using omnipotence in the definition of one of the powers. If you don't allow that, then there's no way to get the contradiction.

This brings up another set-theoretic principle: the "things" a set can be a collection of have to be well-defined, before we define any of the sets. So if we're talking about puppies, and we already know what puppies are, without using sets of puppies in the definition, then we can talk about sets of puppies. But we can't just define a collection of "puppies and sets of puppies," before we know what the sets of puppies are. And the sets of puppies can't themselves be defined until the puppies are defined.

So does the rock paradox follow this rule? No.

"God is omnipotent" can be rephrased as:

For every ability X, let there be a set (entities that have ability X).

Every omnipotent being is a member of every such set.

God is an omnipotent being.

Therefore, for every ability X, God is a member of the set (entities that have ability X.)

Now, this works for abilities like "walk on water" or "use set-theoretic notation" or "make ten commands". Because those things are well-defined even if we don't know about God.

How about "make a rock so big that God can't pick it up." Is this well-defined before we start talking about sets of abilities? No, because the ability is defined by a reference to what God can do, and what God can do is defined by a particular set of abilities. So a collection of abilities that includes "make a rock so big that God can't pick it up" is simply not a well-defined collection that we can take sets of.

In fact, "make a rock so big that [someone] is not a member of set (entities that have the ability to pick up a rock of that size)" is never a first-order ability.

A set-theoretically valid definition of omnipotence would be something more like this:

Define some collection of "abilities," none of which reference other powers or omnipotence directly.

Define omnipotence as the set of all these abilities.

Now, maybe "make an arbitrarily large rock" is one of the powers. And maybe "pick up an arbitrarily large rock" is a power. But none of the powers refer to each other, or to sets of powers, no matter how indirectly. So "make a rock so big that God can't pick it up" isn't an ability.

We can then think of sets of abilities, like the set of rock-making and rock-picking-up. Omnipotence is the ability-set that contains all abilities.

Now we need to use a concept called a "subset." X is a subset of Y if every member of set X is also a member of set Y. For example, "Puppies" is a subset of "Animals," and "Animals" is also a subset of "Animals," but "Animals" is not a subset of "Puppies."

So every ability-set is a subset of omnipotence.

Of course, that doesn't mean that no one can make a rock so big that someone else can't pick it up. Or even a rock so big that they themselves can't pick it up. But that's a statement about combinations of abilities and inabilities.

So what if you wanted to describe all the collections of abilities that don't include certain abilities? Well, that's a second-order set. Call it a schmet. So you might have a schmet of ability-sets that include walking on water, but not swimming. Or making a 32kg rock, but not picking it up.

Now let's get back to that paradox. Can God make a rock so big that He can't pick it up? How does that cash out when thinking about sets of abilities?

If someone can make a rock so big they can't pick it up, that means that their ability-set is a member of a certain schmet. In particular, it's the schmet that includes ability-sets where for some size X, they include the ability "can make a rock of size X", and also do not include any ability "can pick up a rock of up to size Y", for any Y>=X.

So the question is, is God's ability set (omnipotence) a member of that schmet? The answer is no: omnipotence is not a member of the schmet "can make a rock so big you can't pick it up."

There's no paradox, because a schmet is not an ability. Remember, we had to define all the abilities before defining any of the ability-sets, and we had to define the ability-sets before defining the schmets. So there can't be an ability that refers to a schmet! And omnipotence is an ability-set, so its definition can't refer to schmets either - it's just the ability set that includes all abilities.
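The abilities / ability-sets / schmets hierarchy can be sketched in Python, respecting the ordering rule: abilities are defined first (as plain strings, with no reference to sets), then ability-sets, then schmets. The ability names and rock sizes are made up for illustration.

```python
# Abilities: defined first, first-order, no references to other powers.
ABILITIES = {
    "make_rock_1kg", "make_rock_32kg", "make_rock_1000kg",
    "lift_rock_1kg", "lift_rock_32kg", "lift_rock_1000kg",
    "walk_on_water",
}
SIZES = [1, 32, 1000]

# Ability-sets: frozensets of abilities (frozen so they can be set members).
omnipotence = frozenset(ABILITIES)
mortal = frozenset({"make_rock_32kg", "lift_rock_1kg"})

# The schmet "can make a rock so big you can't pick it up": ability-sets
# that include making a rock of some size X but lack lifting any size >= X.
def in_rock_schmet(ability_set):
    for x in SIZES:
        if f"make_rock_{x}kg" in ability_set:
            if not any(f"lift_rock_{y}kg" in ability_set
                       for y in SIZES if y >= x):
                return True
    return False

rock_schmet = {s for s in [omnipotence, mortal] if in_rock_schmet(s)}

print(in_rock_schmet(mortal))       # True: can make a 32kg rock, can't lift it
print(in_rock_schmet(omnipotence))  # False: lifts every size it can make
```

Notice that `in_rock_schmet` is a test on ability-sets, not an ability itself, so nothing in `ABILITIES` can refer to it: that's the type discipline doing exactly the work described above.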

If you look up explanations of Russell's paradox, you will find a similar exposition, except it's less fun because it isn't about God and rocks.

Rationality Cocktails

Sphex on the Beach

1) Assemble bottles of vodka, peach schnapps, creme de cassis, orange juice, and cranberry juice, and an orange slice and a maraschino cherry.

2) Rinse glass.

3) Put ingredients aside to make another cocktail.

4) Go to step 1.


Bayesian Update Martini

1) Start with 2 ounces of the last Bayesian Update Martini. If this is your first Bayesian Update Martini, start with one ounce of gin and one ounce of vermouth.

2) Ask the customer for their preferred gin:vermouth ratio.

3) Add 2 ounces of gin and vermouth, in the requested ratio.

4) Pour out 2 ounces into a vessel with ice, and shake or stir, then serve. Reserve the other 2 ounces for the next Bayesian Update Martini.
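For the arithmetically inclined: each round mixes equal parts old martini and fresh pour, so the gin fraction moves halfway toward the customer's requested ratio every time. A quick sketch (the 4:1 customer is hypothetical):

```python
# Each round: 2 oz of the old martini + 2 oz at the requested ratio,
# so the new gin fraction is the average of the old fraction and the request.
def next_fraction(old, requested):
    return (2 * old + 2 * requested) / 4.0

f = 0.5        # first martini: one ounce gin, one ounce vermouth
target = 0.8   # a customer who wants 4:1 gin:vermouth
for round_num in range(1, 6):
    f = next_fraction(f, target)
    print(round_num, round(f, 4))
# The remaining error halves each round, converging on the requested ratio.
```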

Words and Techniques

How do you learn a behavior? How do you teach it?

Well, let's say you want someone to, when in potentially dangerous situations, scan their environment for threats. You could just tell them that. But what happens? If you tell them once, it's just a thing someone told them. If you tell them many times, they'll get a little voice in their head that says, occasionally, "when you're in a potentially dangerous situation, scan your environment for likely threats." That's the behavior you've taught - to rehearse the admonition. At most, if "potentially dangerous" is really understood, they might remember your admonition when they're already scared, and look around one more time.

So the problem with general verbal admonitions is that they aren't very good at producing helpful behavior. What you really need to do is tell them, "before you cross the street, look both ways."

Why is that better? Because it prescribes a specific action, and a specific situational cue to execute the behavior. That's how people actually learn to do things. Concepts like "potentially dangerous" are too abstract to trigger a stereotyped response, though maybe "feeling scared" is specific enough. But even in that case, it's not actually the same thing - if I'm scared of the dark, should I look around my dark hallway at night for threats? No.

Here are some more examples:

Too general: My car is broken -> fix it

Better: A light on the dashboard turned on -> go to the mechanic

Too general: Eat healthier

Better: If it's not mealtime, I don't eat anything but fresh vegetables. At mealtime, I always start with a small amount of something protein-heavy and some vegetables, and then wait a few minutes to determine whether I'm still hungry.

Notice that the first example obviously doesn't cover all cases, and the second one has a very specific behavior that won't be appropriate for everyone. So you might want to think about how to teach more generalized good behaviors.

I can think of two ways to train more generalized behavior. You can teach a behavior-generating behavior or an explicit diagnostic tool.

What's a behavior-generating behavior? Well, you could teach someone how to use a map when they need to figure out how to get somewhere. Then every time they have the situational cue "I don't know how to get there," they can pull out their map and design a new route that they've never done before.

What's a diagnostic tool? You could learn to recognize feeling frustrated, and train the habit of asking what you can do in the future to fix the problem instead of trying to assign blame.

This has helped me understand a lot of the changes in the curriculum at CFAR since the 2011 Minicamp.

Rationality is often divided up into "epistemic" and "instrumental" rationality, where epistemic rationality is about having explicit verbal beliefs that are accurate, and instrumental rationality is about taking actions that accomplish your goals.

At Minicamp, we spent a lot of time on "epistemic rationality," but most of it didn't really rise above the level of verbal admonitions. I spent a while figuring out how to put these into Anki cards so I'd at least have the cached thoughts in my head. Here are a few of them (I've omitted the cloze-deletion):

  • When I notice defensiveness/pride/positive affect about a belief, I reward myself for noticing, and ask myself what new evidence would change my mind.
  • When I notice that I am considering changing my mind, I reward myself for noticing, and write it down.
  • Once I have decided whether to change my mind, I write down the outcome.
  • When I avoid thinking about what happens if I am wrong, I might be overconfident.
  • If I am not asking experts what they think, I might not be curious enough.
  • My beliefs should predict some outcomes and prohibit others.
  • When I think of or ask for an example, I reward myself.
  • When I can't think of any examples, I consider changing my mind.
  • When I keep finding different reasons to avoid thinking about or doing something, but do not feel a salient negative affect, I notice that I have a strong aversion.
  • When I notice an opportunity to make a prediction in advance, I make a prediction and reward myself for noticing.
  • When someone whose opinion I respect disagrees with me, I consider changing my mind.

I now have these cached thoughts, but I'm not sure they've affected my behavior much. There were also some things that were so general I didn't even know how to write an Anki card that I expected might work.

There was basically none of this at this past weekend's CFAR workshop. Instead, we had techniques that applied some of these principles in specific, well-described situations.

For example, a big part of epistemic rationality is the idea that beliefs should cash out to anticipated experiences, and you should test your beliefs. We didn't cover this at a high level anywhere, but we did talk about using the "inner simulator" to troubleshoot plans in advance. Basically, imagine that someone tells you your plan failed, after the fact. How surprised do you feel? That's just a special case of noticing your subjective anticipation beforehand, to give you the opportunity to reconcile it with your explicit belief.

"Inner simulator" also gives you an opportunity to make excuses in advance, by asking your imagined future self why the plan failed.

The sad thing about this technique is that my brain didn't connect it automatically with the admonition "test your beliefs!" The nice thing about this technique is:

I might actually use it.

In my "first impression" review of the CFAR workshop I spent a lot of time talking about the things that were missing. Well, aside from the fact that it was less than half the length of the Minicamp, the workshop had another good reason to drop that stuff: the actually existing epistemic rationality training just didn't work yet. The folks at CFAR did a lot of testing, and it turned out that people basically don't change their lives in response to high-level epistemic rationality admonitions. So they had to choose between two options:

1) Do what works

2) Fix the broken thing

This is a pretty common decision to have to make, and it's often not obvious which is the right one. The advantage to "Do what works" is it doesn't take much extra effort once you've identified where the problems are - you just stay away from them!

The upside to "Fix the broken thing" is that it is often the only way to get extraordinary results. Chances are, someone else has already tried "do what works," though that's not so likely that it's not worth testing. It's an uphill climb to fix something that doesn't work, you'll have to try a lot of things, most of them won't work, you'll be really frustrated and want to pull your hair out, and then you'll stumble on something so obvious in hindsight that you'll feel like an idiot. You'll have to give up a whole bunch of beautiful ideas that all seemed like just the insight you needed, because they didn't actually work.

So why put up with all that? It depends on what you think the world needs. If the world needs a bunch of people who are all just a little more effective, then by all means stick with the things that work.

But if the world has big important problems that must be solved, but can only be solved by going against the grain, by reaching into solution space for something far away from our local maximum - then only the hard way can save the world. Not just making people who have power over their own emotions, habits, and behaviors, which makes them more effective, but people whose explicit verbal reasoning is correct as well, so they can notice the alternative that's 10 or 1,000 times as good as the default action.

Wait vs Interrupt Culture

At this past weekend's CFAR Workshop (about which, by the way, I plan to have another post soon with less whining and more serious discussion), someone mentioned that they were uncomfortable with pauses in conversation, and that got me thinking about different conversational styles.

Growing up with friends who were disproportionately male and disproportionately nerdy, I learned that it was a normal thing to interrupt people. If someone said something you had to respond to, you'd just start responding. Didn't matter if it "interrupted" further words - if they thought you needed to hear those words before responding, they'd interrupt right back.

Occasionally some weird person would be offended when I interrupted, but I figured this was some bizarre fancypants rule from before people had places to go and people to see. Or just something for people with especially thin skins or delicate temperaments, looking for offense and aggression in every action.

Then I went to St. John's College - the talking school (among other things). In Seminar (and sometimes in Tutorials) there was a totally different conversational norm. People were always expected to wait until whoever was talking was done. People would apologize not just for interrupting someone who was already talking, but for accidentally saying something when someone else looked like they were about to speak. This seemed totally crazy. Some people would just blab on unchecked, and others didn't get a chance to talk at all. Some people would ignore the norm and talk over others, and nobody interrupted them back to shoot them down.

But then a few interesting things happened:

1) The tutors were able to moderate the discussions, gently. They wouldn't actually scold anyone for interrupting, but they would say something like, "That's interesting, but I think Jane was still talking," subtly pointing out a violation of the norm.

2) People started saying less at a time.

#1 is pretty obvious - with no enforcement of the social norm, a no-interruptions norm collapses pretty quickly. But #2 is actually really interesting. If talking at all is an implied claim that what you're saying is the most important thing that can be said, then polite people keep it short.

With 15-20 people in a seminar, this also meant that no one could try to force the conversation in a certain direction. When you're done talking, the conversation is out of your hands. This can be frustrating at first, but with time, you learn to trust not your fellow conversationalists, but the conversation itself, to go where it needs to. If you haven't said enough, then you trust that someone will ask you a question, and you'll say more.

When people are interrupting each other - when they're constantly tugging the conversation back and forth between their preferred directions - then the conversation itself is just a battle of wills. But when people just put in one thing at a time, and trust their fellows to only say things that relate to the thing that came right before - at least, until there's a very long pause - then you start to see genuine collaboration.

And when a lull in the conversation is treated as an opportunity to think about the last thing said, rather than an opportunity to jump in with the thing you were holding onto from 15 minutes ago because you couldn't just interrupt and say it - then you also open yourself up to being genuinely surprised, to seeing the conversation go somewhere that no one in the room would have predicted, to introduce ideas that no one brought with them when they sat down at the table.

By the time I graduated, I'd internalized this norm, and the rest of the world seemed rude to me for a few months. Not just because of the interrupting - but more because I'd say one thing, politely pause, and then people would assume I was done and start explaining why I was wrong - without asking any questions! Eventually, I realized that I'd been perfectly comfortable with these sorts of interactions before college. I just needed to code-switch! Some people are more comfortable with a culture of interrupting when you want to, and accepting interruptions. Others are more comfortable with a culture of waiting their turn, and courteously saying only one thing at a time, not trying to cram in a whole bunch of arguments for their thesis.

Now, I've praised the virtues of wait culture because I think it's undervalued, but there's plenty to say for interrupt culture as well. For one, it's more robust in "unwalled" circumstances. If there's no one around to enforce wait culture norms, then a few jerks can dominate the discussion, silencing everyone else. But someone who doesn't follow "interrupt" norms only silences themselves.

Second, it's faster and easier to calibrate how much someone else feels the need to talk, when they're willing to interrupt you. It takes willpower to stop talking when you're not sure you were perfectly clear, and to trust others to pick up the slack. It's much easier to keep going until they stop you.

So if you're only used to one style, see if you can try out the other somewhere. Or at least pay attention and see whether you're talking to someone who follows the other norm. And don't assume that you know which norm is the "right" one; try it the "wrong" way and maybe you'll learn something.


Cross-posted at Less Wrong.

What Nietzsche Said to Me

Nietzsche famously wrote that he was writing to be understood only by his friends, which raises the obvious question of why so many people who don't like what they think he says claim to understand him. This weekend I listened to a few conversations that seemed to get him totally wrong. I resisted the urge to correct them at the time since it wasn't completely material to the conversation, so I'm sublimating that urge into a blog post to get writing practice.

Note that Nietzsche didn't write this way, presumably for a good reason. You may superficially understand what I'm saying but fail to internalize it, unless you follow up by reading the original until you understand how this is the same thing as that.

According to Nietzsche, in the beginning, there were people and power relations.

Words are Powerful

Words are one of the main ways people interpret, keep track of, and interact with their world. Words like "one" and "two" and "tree" and "sheep" are important tools of agriculture, trade, etc. But words like "good," "wicked," "proud," "sinful," "man," "woman," "justice," and "sexism" also affect people's behavior in profound ways. One simple example of this is that in standard English the default pronoun for one person is always either male or female. This makes it much more natural to make statements about men or women rather than humans, and it cuts against the grain to make sex-neutral statements. For another, consider the Christian sin - but Aristotelian virtue - of pride. For more on this, read 1984 by George Orwell.

But they're Made Up

The framework of ideas we use to understand our world is not an attribute of the things themselves. It is a behavior of our minds. It's made up! And someone made it up. Whoever made up the thoughts you use determined not which propositions you affirm or deny, but which ones are thinkable in the first place.

The ancients seem alien and incomprehensible because their basic ideas are so different from ours that only a truly deep thinker can understand them. The Greek "soul" is not necessarily separable from the body, or entirely rational in nature - Aristotle thought a soul was something a body did, even an animal's or plant's body - but the moderns think either that there are no souls ("Huh? Do the bodies just lie there motionless or something?" - Aristotle) or that only humans have them and they go to heaven or hell after we die.

Now Everyone is a Wizard

What's new about modernity (the legacy of Hobbes, Machiavelli, Locke, Descartes, Hume, etc.) is not that it's the first time anyone said that the people should rule. That's old. These are the features of modern ideas:

Baconian science means that you can add to our stock of known truths about nature without understanding your tools.

Algebra means you can perform lots of calculations without understanding math.

Liberalism means that lots of people are allowed to talk about different "moralities" and choose a god, ethos, and role in society as one might choose a shirt. We don't have a unified cultural elite controlling how we're allowed to talk about things. Instead, our elite believe in and endorse total freedom of speech. Which means that anyone can play around with the lens through which humans think about their world and decide right from wrong.

You can't get arrested for killing the gods, because after all, it's only words. Not that it makes the gods any less dead.

With no unified control over language, controversy over what to call things is a power struggle more akin to war than to politics, because the goal is not to enact a set of preferred practical policies, but to permanently destroy the enemy's ability to fight, by ripping out their tongues. At the same time, seeing that all values are questionable, people lose faith in words about rightness and wrongness, the just and the true and the good, so nothing holds them back from this return to the war of all against all.

The Nietzschean Hero

You can't fix this with arguments about what the good should be. Arguments are just another piece in the Game of Words. Which set of ideas you use determines which combinations of words you evaluate as true propositions. Aristotle is correct when he says that animals have souls, but Descartes is correct when he says they don't.

Is there a way out? Not an easy or a likely one. We're probably doomed to this forever. But if someone were to make up - and popularize, at least among the elite - a new set of ideas, one with a new set of values appropriate for our times and circumstances, who would that person have to be?

They would need a sufficiently deep understanding to know that the words they have received are not the only words that can be, that to make a new thing you have to destroy, distort, or forget the past.

And they would have to be profoundly creative. Creative enough to be able to come up with a totally new set of ideas adequate to give modern people the power they need, while taking away the curse of infinitely malleable values.

That is the Nietzschean superman.

A First Impression Review of the CFAR NY Workshop

UPDATE: This review is old. My revised take is here.

I'm writing this on the train on my way back from the Center for Applied Rationality's workshop in Ossining, NY, a little less than an hour north of the city. Because I was wondering what to do and my brain wanted to do this instead of just reading. So that's a good sign.

Please bear in mind that this is just a first impression and that I am likely to change my mind about both the good and the bad over the next few months. If you read this and a lot of time has passed, feel free to contact me directly to find out what I think then.

I went to the equivalent workshop back in 2011, when the institution was still part of the Singularity Institute (now called the Machine Intelligence Research Institute, or MIRI), and was excited to see what had changed and improved as a result of CFAR's extensive testing and iteration. I was hoping the training would resemble the Jeffreyssai stories much more than it did then.

My literal first and last impressions were actually pretty bad. I found out only a few weeks before the event that it wouldn't actually be in NYC like I'd thought, but in the much harder-to-reach Ossining, 45 minutes north of the city. (By contrast the site in Berkeley was within walking distance of a BART station.) Then about a week before the workshop I finally got the promised email with logistic information, which said there would be pickups provided at the Ossining train station. Amtrak only goes to the nearby Croton-Harmon station, and I didn't want to go all the way across Manhattan to transfer to the Metro-North commuter train that does go to Ossining, so I asked if they could pick me up from there. It's a lot closer than the airports they promised pickup from, so I figured this would be a reasonable request. No response. A few days later I got an email asking me to fill out a survey, which also asked what my transportation situation was. Again I asked about pickup from Croton-Harmon. Again no response, though I did get another email asking me to fill out the survey. Finally, the day before the workshop, I sent another email asking what was going on, and got a response that they had gotten my survey answer and could pick me up from the Croton-Harmon station, and to send a text message when I arrived.

When I got to the station, I sent the text message, and got a response saying wait 30-45 minutes and they'd be able to pick me up. Half an hour later I got a text that said "Here now. In parking lot. Where are you?" I was looking at the parking lot. After a few confused texts back and forth, I called and it turned out that they were at Ossining, not Croton-Harmon where I was, the shuttle was full, and they wouldn't be able to pick me up. They said they had lost a driver and might not be able to come back soon. They suggested I try to get a cab. ... At least they reimbursed me for that one, and at the end of the workshop they told me with a bit more warning that I had no ride (but no reimbursement). But you'd really hope people running a workshop on cognitive bias would know to make sure the first and last parts of the experience are extra good.

Whining aside, things were better once I finally got there. We were busy pretty much all day for four days straight, and in nearly every session I got either a technique I'm excited about applying to my life or a major insight about a skill I need to develop. I really want to try:

  • goal factoring, to figure out whether my current behavior is well suited to the goals it's trying to satisfy or whether there's a more efficient or effective solution I'd be happy with
  • aversion modeling, to figure out why I'm not doing things I think I want to do
  • offline habit training, to "practice" a new habit with the power of imagination
  • urge propagation, to build positive urges to do things that accomplish outcomes I like
  • value of information calculations, to learn when I should spend resources on gaining information to optimize my life
  • building an emotional library, to gain control over my emotions
  • practicing moving between sympathetic and parasympathetic nervous system modes
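One of those techniques lends itself to a concrete sketch: a value of information calculation just compares the expected value of deciding now with the expected value of deciding after learning something. Here's a minimal toy example; the scenario and numbers are my own hypothetical illustration, not from the workshop:

```python
# Toy expected-value-of-information calculation.
# Decision: buy gadget A (a safe bet) or gadget B (better only if reliable).

p_reliable = 0.5       # prior probability that B is reliable
value_a = 100          # payoff from A, known for sure
value_b_good = 150     # payoff from B if it turns out reliable
value_b_bad = 40       # payoff from B if it turns out unreliable

# Acting on priors alone: take whichever option has the higher expected value.
ev_b = p_reliable * value_b_good + (1 - p_reliable) * value_b_bad   # 95.0
ev_without_info = max(value_a, ev_b)                                # 100 -> buy A

# With perfect information, we learn B's reliability first, then choose.
ev_with_info = (p_reliable * max(value_a, value_b_good)             # reliable -> B
                + (1 - p_reliable) * max(value_a, value_b_bad))     # unreliable -> A

# The value of (perfect) information is what that learning is worth.
voi = ev_with_info - ev_without_info
print(voi)  # 25.0
```

So in this made-up case, spending anything under 25 payoff-units to find out B's reliability before buying is a good deal; anything over that, and you should just decide on your priors.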

I also learned that I don't have a good memory for the bodily sensations associated with my emotions, so I'll need to practice that. And I dissolved some confusion around long term planning, when I realized that my goals were actually a bunch of different things: urges/desires, behaviors, plans, and preferences about future world states.

Now for the bad stuff, especially by comparison to the Singularity Institute minicamp in 2011. These core epistemic rationality issues were barely covered, if at all:

  • Noticing confusion
  • Noticing rationalization or motivated cognition
  • Doing literature research effectively in order to use existing scientific knowledge
  • Becoming curious and noticing curiosity, and what to do when you're curious about a factual question (gather data, ask an expert, review the literature, etc)
  • How to change your mind
  • Why and how to "stick your neck out" and make testable predictions (though there were prediction markets, which was great).
  • The relationship between beliefs and anticipation - making beliefs pay rent (though there was a related instrumental rationality segment called "internal simulator")

This felt like a rational self-improvement workshop, not a rationality workshop. To be fair, the epistemic rationality segments in 2011 were the worst segments - I agreed with the content but didn't learn any skills. But the thing to do is make them better, not drop them entirely!

A lesser disappointment was that nearly everything was in a "class" format, except for Comfort Zone Expansion, or CoZE, where we went out separately to practice with little accountability or real-time feedback. Some of the units made sense this way, but for example there should have been drills in Being Specific and in sticking your neck out (exposing yourself to the possibility of being wrong), since many participants seemed to lack that skill on the 5-second level. And in a lot of other areas I would have benefited from paired practice or something else that would have put me on the spot and forced me to execute one step from a technique.

Most of the classes focused on a single technique, and were specific about what situations the techniques were for. I loved this level of specificity and it made the knowledge feel more genuinely procedural and usable. But for a few of the classes, it took a while to figure out exactly which techniques were applicable to which problems, because the techniques' intended results were often described vaguely. For example, a class on how to develop habits turned out to be a class on how to develop a tendency to remember to do something, when there wasn't an aversion stopping you. (Whereas I'd figured at first, not unreasonably I think, that a habit is just a regularly repeated behavior.) In one case there was the opposite problem - the technique was so universal and high-level that it seemed difficult to translate it into specific actions.

All in all, it was a very fun experience, and I think it will turn out to have been well worth my time. The instructors were great, the missing pieces were mostly things I already had, and I think what I learned will make me much more effective.

The workshop also comes with 6 follow-up sessions via Skype, which is a great idea; one of the best things about the minicamp in 2011 was the follow-ups the participants did with each other. I'm really looking forward to that too.