Reading, writing, and thinking, with your brain

In a recent blog post I pointed to the idea that your brain has a sort of implied query language, and there are more and less efficient ways to ask it questions:

I think an important abstraction here is that when you ask your brain a question, it’s often not enough to ask it something that specifies logically what you want – you also have to give it some clues as to where to look for the answer. I call this shaping the query.

This is a roundup of principles I’ve found helpful for using my brain effectively - committing things to memory, finding ideas, and thinking about things.

Getting ideas

The key thing to remember when querying your brain for information is that your brain can’t efficiently sort through everything you have any memory of, to find the things that meet your conditions. Your memories aren’t stored as a list, they’re stored in a network. You need to tell your brain how to crawl through the network to find what you want. You’re not just giving it a success criterion; you’re specifying a search method.

There are two main categories of problem I’ve encountered when searching my mind for ideas:

  • Blankness - I can’t think of anything.
  • Perversity - I’m fixated on an idea that is counterproductive.

Remedies for blankness

Sometimes when I’m at a loss for ideas, it’s because I’ve named the target in some formal sense, but not really primed my brain with a search method. In this case the thing to do is to add vividness and specificity to the query somehow. Other times, it feels like my brain’s locked up, tight somehow, and what I need to do is relax some internal censor that’s ruling out ideas before they enter consciousness.

More vivid searches

What search methods can the brain use? I agree with David Hume’s schema from A Treatise of Human Nature - three things govern which thought follows another:

  • Resemblance - we are likely to think of things similar to what we were experiencing or thinking about before.
  • Contiguity - if we’re accustomed to seeing two things adjoin each other in time or space, then when we experience or think about one, we’re likely to think about the other.
  • Causality - if we think of one thing as causing another, then when we experience or think about either, the other is likely to come to mind.

I make the further claim - also a key Humean one - that vividness is key for getting the brain to do anything. The impressions that strike us with more force are more likely to trigger associated ideas and actions.

The way you find more ideas is by making more specific requests, even if the specificity is arbitrary. For example, the book Superhuman Social Skills prescribes this exercise:

If you don't feel like you have a lot of interesting stories, a good exercise is to take a sheet of paper and write the letters of the alphabet down the left side. Then come up with a short description of a story that begins with each letter. So mine might be A- Alaskan Motorcycle Trip B- Bank Robbery C- Crashed Motorcycle, all the way down to Z.

You could also search for embarrassing stories, things to brag about, a story for each year of your life - or go through the items on your desk and look for a story somehow related to each item.

One common idea-generation method relies specifically on the causality relation: pick some particular person you know or know about, and ask what ideas they would come up with.

You might imagine that adding constraints makes it harder to think of ideas - after all, there are many more stories I could tell, than stories I could tell about something beginning with the letter “Z” - but sometimes the opposite is true. When your brain looks for something based on a query you give it, it doesn’t treat the content of your request as a logical test, go through all the things it remembers, and output the ones that pass the test. Instead, it looks for things that are connected in some way - resemblance, contiguity, or causality - to the thing you’re asking for. Counterintuitively, adding keywords to the search gives a boost to ideas that might otherwise be neglected, but feel related to the additional priming you added.
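
To make that contrast concrete, here is a toy sketch in Python - not a model of the brain, just an analogy, with every name and "memory" invented for illustration - of the difference between testing every stored item against a predicate and crawling outward from the cues you were primed with:

```python
from collections import deque

# Toy memory network: each remembered item points to associated items.
# The edges stand in for resemblance, contiguity, and causality.
memories = {
    "letter Z": ["zip-lining accident", "zoo trip"],
    "zip-lining accident": [],
    "zoo trip": ["lost my wallet at the zoo"],
    "lost my wallet at the zoo": [],
    "motorcycle": ["Alaskan motorcycle trip"],
    "Alaskan motorcycle trip": [],
}

def logical_search(predicate):
    """What a database does: test every stored item against the request."""
    return [m for m in memories if predicate(m)]

def associative_search(cues, max_hops=2):
    """Closer to the picture above: start from the cues you were primed with
    and follow associations outward, collecting whatever you reach."""
    found = set()
    frontier = deque((cue, 0) for cue in cues if cue in memories)
    while frontier:
        node, depth = frontier.popleft()
        if node in found or depth > max_hops:
            continue
        found.add(node)
        frontier.extend((neighbor, depth + 1) for neighbor in memories[node])
    return found

print(logical_search(lambda m: "story" in m))  # [] - nothing literally matches
print(associative_search(["a good story"]))    # set() - blankness: nowhere to start
print(associative_search(["letter Z"]))        # everything reachable from the cue
```

The only point of the sketch is that the associative version returns nothing until you hand it somewhere to start - which is exactly what an arbitrary cue like "stories beginning with Z" provides.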

This means that being greedy in your initial request can actually be a better strategy than being modest - it gives you more to go on when searching. A friend mentioned to me recently that while asking people if they knew of any jobs he could apply for was not a very successful job-hunting strategy, asking if they knew of any analytics jobs was a little more successful, and asking whether they knew of any jobs doing machine learning for marketing got him a job doing machine learning for sales. He shaped the query so that it was easy for them to find matches; the additional specificity just made the best ideas seem even more relevant.

When I was in DC trying to get the EA meetup group going, I separately wanted to have more dinner parties with high-quality conversations. So I asked myself how I could achieve both at the same time, and got the obvious answer.

Other ways to prime involve assuming part of the solution as an anchor and arranging other things around it. For instance, if I’m trying to figure out how to arrange the items in my room, I might assume that the desk goes in that corner, not because I’m sure it will, but because that will make it easier to come up with a schema, since other things will naturally fall into place (e.g. I need space for a chair at the desk, which might constrain where the bed can go, etc.).

Censor circumvention

Sometimes my brain’s throwing away perfectly good ideas because it’s too risk-averse. I’ve written about this before:

Let’s say I have something I want to do, and I can’t think of any good ways it can be done. Like improving my emotional vocabulary – I want to figure out what exercises I can do that will increase the number of emotions I can recognize and name in the moment, and the rate at which I remember them afterwards. At first I thought I couldn’t think of anything good.

Then I tried to come up with ten terrible ideas.

My working model of how this happens is that I implicitly have a stack of ideas, and my idea-fetcher assumes that the top of the stack is probably the best idea, so when I query my mind for “ideas about how to do X” the fetcher inspects the top item, finds it terrible, and decides that there are no ideas. If I ask again, the fetcher goes back to the stack, inspects the same top item, judges it unacceptable, and returns “no results” again.

So why does asking for terrible ideas fix this? Because it’s not actually possible to query my mind for terrible ideas. Appending the word “terrible” doesn’t actually suppress the good ideas – it just stops me from suppressing the bad ones. And once I’ve retrieved the top idea from the stack (even though it often is pretty terrible), my fetcher will turn up something different when I query it again. So I can inspect the second, and third, etc. Often, in my list of ten “terrible” ideas, some will obviously be good ones, and some others will be bad but improvable. And you can make a lot more improvements to a bad idea you are considering, than a bad idea you aren’t even thinking of.
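
As a toy illustration of the stack-and-fetcher picture above - a metaphor rather than a mechanism, with every name and "idea" below invented - here is a minimal Python sketch of why the same query can return "no results" twice, while dropping the censor lets later items surface:

```python
ideas = [
    "practice naming emotions with flashcards",   # top of the stack
    "keep a feelings journal",
    "say the emotion out loud whenever I notice one",
]

def censored_fetcher(stack, acceptable):
    """Inspects only the top item; if the censor judges it terrible,
    reports that there are no ideas at all."""
    return stack[0] if stack and acceptable(stack[0]) else None

def uncensored_fetcher(stack):
    """Asking for 'terrible ideas' disables the filter: items actually come
    off the stack, so later ones become reachable on the next query."""
    return stack.pop(0) if stack else None

internal_censor = lambda idea: False  # a harsh censor that rejects everything

print(censored_fetcher(ideas, internal_censor))  # None - "I can't think of anything"
print(censored_fetcher(ideas, internal_censor))  # None - same top item, same verdict

print(uncensored_fetcher(ideas))  # the top idea, terrible or not
print(uncensored_fetcher(ideas))  # and now the second one is reachable
```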

It helps to give yourself explicit permission to come up with unworkable or undesirable ideas - or even to lean into that and try to come up with ideas that only just barely count, ideas that anyone could see are impossible, ideas that are deliberately perverse, ideas that are outrageous. If I’m trying to solve a problem, I like to say that if one of my ideas isn’t at least as ridiculous as going out into the woods, whittling a stick into a magic wand, and trying to cast a spell to produce the desired result, then I’m not stumped yet.

Sometimes I get there through a sense of determination - other times I get there through a sense of playfulness.

Remedies for perversity

There are a lot of ways mental queries can be malformed. I’m going to focus on two:

Failed negation - when we ask for “not X”, we often get especially bad examples of “X”. For instance, asking one’s brain to generate non-awkward behavior often leads to thinking of the most inappropriate things to say, not the most appropriate.

Biased sampling - when we want to know the relative frequencies of X and Y, we often end up generating examples in proportion to how easy they are to search for, which may be unrelated to how common they actually are. Wikipedia’s page on the availability heuristic gives a good example:

In Tversky & Kahneman's first examination of availability heuristics, subjects were asked, "If a random word is taken from an English text, is it more likely that the word starts with a K, or that K is the third letter?" They argue that English-speaking people would immediately think of many words that begin with the letter "K" (kangaroo, kitchen, kale), but that it would take a more concentrated effort to think of any words in which "K" is the third letter (acknowledge, ask). Results indicated that participants overestimated the number of words that began with the letter "K" and underestimated the number of words that had "K" as the third letter. Tversky and Kahneman concluded that people answer questions like these by comparing the availability of the two categories and assessing how easily they can recall these instances. In other words, it is easier to think of words that begin with "K", more than words with "K" as the third letter. Thus, people judge words beginning with a "K" to be a more common occurrence. In reality, however, a typical text contains twice as many words that have "K" as the third letter than "K" as the first letter.

Negation

One way queries can have perverse consequences is through a naive reliance on negation. It follows straightforwardly from the Humean model of association-based thinking that negations don’t do much in our queries to the brain. In a logical query, X produces one set of results, and not-X produces everything not in the first set. In an associative query, not-X produces things with some combination of the core attributes of X, and a “not” feeling - usually aversion (so, the worst cases of X). (Unless, of course, you’re used enough to thinking of “not-X” that it feels like a primitive concept. For instance, “non-kosher” immediately brings to my mind the idea of a pig.)

One way to get around this is, instead of searching for non-X, to find a category that’s not positively related to X and search for examples in that category:

Let’s say I am going into a social interaction and am nervous that it will be awkward because I’m not good with strangers. We now know that “don’t be awkward” is not a query that will produce useful plans. Even “be socially skilled” is a problem – if you’re worried about being awkward, you don’t necessarily have as strong and vivid an image of what a generic successful conversation looks like – but you sure know what an awkward one looks like. Even if the explicit verbal instruction you give your mind is “tell me how to be socially skilled in this conversation,” it will get parsed as “tell me how to be not awkward” and your [idea-]fetcher will in turn parse that as “be awkward” and helpfully suggest ways to accomplish that goal.

Instead, you might want to make the other person laugh, or get some information from them, or ask them for a favor, or just let them know that you like them and want to be their friend. Pick a goal – or more than one – that is sideways relative to awkwardness, and optimize for that. Your conversation won’t be perfect, but it will be a lot less awkward than if you spend all your energy thinking about how to be awkward.

This might lead to some ideas that actually fail to satisfy the negation criterion, but you can apply that censor after you’ve generated ideas, rather than before. One approach that might be even more efficient is to find a category that feels like a primitive concept and that implies the negation of “X”.

Another way to deal with this type of situation is to lean into the nominal bad thing - instead of figuring out “how not to be awkward”, figure out “how to be awkward in a delightful and awesome way”, for example. Then, see if the part of your plan causing awkwardness is actually necessary, and if not, you can remove it.

Sampling

It’s easy to think of examples of words that start with “K”. It’s hard to think of words where the third letter is “K”. So if you go by one-off examples, you’ll tend to overestimate how likely a word is to start with “K”, relative to how likely it is to have it as the third letter. One way to get around this is to simply think of sentences, without regard to letters - and then count the number of words with a “K” in each of those two positions. For instance, not counting the times I mention it, the letter “K” appears in this paragraph not at all as the first letter, and twice as the third. This sacrifices a lot in the way of efficiency (“K” isn’t totally absent as a first letter, so the sample is way too small), but can be useful when accuracy is more important.
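
Here is a minimal sketch of that counting approach, with a made-up sample sentence standing in for "ordinary text you think of without regard to letters":

```python
import re

# Take a stretch of ordinary text and tally both positions directly,
# instead of trying to recall examples of each kind of word.
sample = ("Take any book you like, ask a friend to pick a page, "
          "and check how often words like these turn up.")

words = re.findall(r"[a-z']+", sample.lower())
first = sum(1 for w in words if w[0] == "k")
third = sum(1 for w in words if len(w) >= 3 and w[2] == "k")

print(f"k as first letter: {first}, k as third letter: {third}")
```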

The sidewaysness of this solution is similar to the kind of sidewaysness that can prevent us from becoming fixated on adverse attractors (e.g. accidentally looking for ways to be socially awkward).

I gave an example of solving a different kind of sampling problem in my prior post on query-shaping:

Brienne told me about Eliezer’s ambition-calibrating heuristic that if you can’t think of a time you’ve failed in the last 6 months, you’re not trying hard enough things. At first, I couldn’t think of anything – and began to tell myself a story in which I just don’t classify things as failures but instead just think about cases where I’ve redirected my efforts. But then I posed a different query – “what’s a recent project I undertook?” – and immediately thought of one where I’d failed, multiple times, in the last 6 months. Because my record of tries is efficiently searchable by project, but not by month or whether I failed.

Writing to memory

When trying to commit something to memory, it’s important to understand exactly what you want to remember, in what context - and how different presentations of the information affect how you’ll remember it. Three major categories of memory are:

  • Propositional memory
  • Spatial memory (e.g. “memory palaces”)
  • Situational memory (e.g. habits, implementation intentions)

Writing to propositional memory

I don’t have much to say about propositional memory. (The concept of declarative memory is similar but I’m not sure it lines up exactly with what I’m trying to point to). It’s the most intuitive place for most people to go when they try to memorize something - often by putting it in their verbal loop and repeating it a few times: “Remember to buy milk, remember to buy milk, remember to buy milk.”

Declarative memory on its own has a few important disadvantages:

  • It is difficult to store information through brute memorization.
  • It is difficult to retrieve information that is not linked to any searchable index.
  • Even if information has been properly stored and indexed, you may not remember to search for it.

In Surely You’re Joking, Mr. Feynman!, Richard Feynman gives an example of how declarative memory - even an extensive library of formal statements that can be related to each other - can be difficult to access using the kind of intuitive keyword search you’d apply in situations where you might want to apply that knowledge:

I was teaching a group of students who would ultimately become teachers [...]. These students had already had many courses, and this was to be their most advanced course in electricity and magnetism – Maxwell’s equations, and so on.

I discovered a very strange phenomenon: I could ask a question, which the students would answer immediately. But the next time I would ask the question – the same subject, and the same question, as far as I could tell – they couldn’t answer it at all! For instance, one time I was talking about polarized light, and I gave them all some strips of polaroid.

Polaroid passes only light whose electric vector is in a certain direction, so I explained how you could tell which way the light is polarized from whether the polaroid is dark or light.

We first took two strips of polaroid and rotated them until they let the most light through. From doing that we could tell that the two strips were now admitting light polarized in the same direction – what passed through one piece of polaroid could also pass through the other. But then I asked them how one could tell the absolute direction of polarization, for a single piece of polaroid.

They hadn’t any idea.

I knew this took a certain amount of ingenuity, so I gave them a hint: “Look at the light reflected from the bay outside.”

Nobody said anything.

Then I said, “Have you ever heard of Brewster’s Angle?”

“Yes, sir! Brewster’s Angle is the angle at which light reflected from a medium with an index of refraction is completely polarized.”

“And which way is the light polarized when it’s reflected?”

“The light is polarized perpendicular to the plane of reflection, sir.” Even now, I have to think about it; they knew it cold! They even knew the tangent of the angle equals the index!

I said, “Well?”

Still nothing. They had just told me that light reflected from a medium with an index, such as the bay outside, was polarized; they had even told me which way it was polarized.

I said, “Look at the bay outside, through the polaroid. Now turn the polaroid.”

“Ooh, it’s polarized!” they said.

After a lot of investigation, I finally figured out that the students had memorized everything, but they didn’t know what anything meant. When they heard “light that is reflected from a medium with an index,” they didn’t know that it meant a material such as water. They didn’t know that the “direction of the light” is the direction in which you see something when you’re looking at it, and so on. Everything was entirely memorized, yet nothing had been translated into meaningful words. So if I asked, “What is Brewster’s Angle?” I’m going into the computer with the right keywords. But if I say, “Look at the water,” nothing happens – they don’t have anything under “Look at the water”!

This is why it’s important to understand how to write usable memories.

Writing to spatial memory

The most famous mnemonic technique is the method of loci, or memory palace. The technique has two basic parts:

  • Each item to remember is symbolized as a vivid, unique, easy-to-remember image.
  • Each item to remember is stored in your imagination at a unique location inside some structured area you are familiar with, such as a room in your childhood home, or a neighborhood you know well.

Classical orators such as Cicero report using the method of loci in place of written notes when giving speeches.

I don’t have much to say on implementation, and I rarely use memory palaces as such. You can find similar mnemonic techniques, and expansions on them, in books such as Moonwalking with Einstein and The Memory Book.

The method of loci can store memories in a structured way because it makes use of parts of our brain that are already well-adapted to remembering structured data. We’re used to persistently tracking the relative locations of a large number of objects in a space. I’ve complained that people seem not to have “object permanence” for facts about other people, but we have literal object permanence about literal objects from a very young age.

Using the power of association (the principles of resemblance, contiguity, and causality), you can select an image that reminds you of an arbitrary memory. Using the power of vividness, you can make that image memorable. Using the power of imagination, you can situate that image in a location. Using the power of spatial reasoning and object permanence, you can “find” that image - and the related memory - when you “walk through” your memory palace.
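
As a loose structural sketch - a toy data representation, not a claim about how the brain stores anything, with all the loci and images invented - the technique amounts to an ordered mapping from familiar locations to vivid images, and recall amounts to a walk through the locations in order:

```python
# A memory palace as an ordered route through familiar locations, each holding
# one vivid invented image that stands for the thing to remember.
palace = [
    ("front door",     "a cow wedged in the doorway",        "buy milk"),
    ("hallway mirror", "the mirror shattered by a baguette", "buy bread"),
    ("kitchen table",  "a hen nesting in the fruit bowl",    "buy eggs"),
]

def walk_through(palace):
    """Recall = revisit the locations in their fixed order and read off
    whatever image was left at each one."""
    for location, image, meaning in palace:
        print(f"At the {location}: {image} -> {meaning}")

walk_through(palace)
```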

A much easier (automatic) but weaker form of this technique is to use physical media like paper to read and write on. I have multiple friends - including myself - who report that if they’ve read something in a book, it feels like the information has a place it’s located in. They can often find a bit they’re thinking about, in order to look up the details, by remembering about how far into the book it was and what part of the page it was on. This is probably at least part of the reason why taking notes on paper improves retention.

Marcello Herreshoff proposes a generalization of this principle - your brain has a native architecture for performing some types of cognition, and if you leverage this, you can think much more efficiently:

Some things the brain can do quickly and intuitively, and some things the brain has to emulate using many more of the brain’s native operations.  Sometimes thinking in metaphors is a good idea, if you’re human.

In particular, visualizing things is part of the brain’s native architecture, but abstract symbolic manipulation has to be learned.  Thus, visualizing mathematics is usually a good idea.

When was the last time you made a sign error?

When was the last time you visualized something upside-down by mistake?

I thought so.

[...] One example of this is the incident that Eliezer recounted as follows:

I once had an exchange which sticks in my mind, and illustrates this point fairly well.  I was pondering utility functions, and said:  "Utility functions are unique up to a positive affine transformation; what kind of information does that preserve? It preserves ordering, but it’s more than just that. It doesn’t preserve proportions…"  And the one who was listening, acting as my person-to-bounce-ideas-off-of, said, "It preserves relative intervals."  And lo, I immediately knew exactly what it meant that the information in a utility function consisted of proportions between intervals between outcomes.

But the flip side of this is that any time I spent studying things like evolutionary biology, evolutionary psychology, neuroscience, cognitive psychology, heuristics and biases, etcetera etcetera, I did not spend studying math, and so I did not know off the top of my head that an affine transformation preserves relative intervals.

Actually, this wasn’t something I knew off the top of my head.  Eliezer had needed to define the word "affine" for me right before that.  I had not studied much linear algebra before working for SIAI. Instead, I instinctively tried to visualize a positive affine transformation.

I visualized positive affine transformations as ways to move and uniformly stretch a rubber band with some ink-blots on it.  If you visualize that, you will *see* that positive affine transformations preserve relative intervals. It didn’t so much take prior knowledge of mathematics, as prior experience coming up with good mathematical visualizations. [...]

The principle of "use the native architecture" extends beyond visualizing mathematics.  Back in my senior year of high school, Eliezer once mentioned to me that Chinese speakers were able to memorize longer strings of digits because each digit is a single syllable in Chinese.  As a computer programmer, it occurred to me that there was nothing stopping me from picking another encoding – and I have perfect pitch, so I picked musical notes.  Middle C is 1, the D above that is 2, and so on up the scale; 0 is the B below Middle C.

Thus, when my psychology teacher put up a string of twenty digits on the board and asked us to memorize them, I was able to do it.  In fact, I still know that string of digits, as well as several phone numbers I used this trick on [...].
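
Marcello describes the encoding precisely enough to write down. Here is a small sketch of it, assuming the scale runs up from the B below middle C; the function name and the use of octave-numbered note names are my own:

```python
# Digit-to-pitch encoding as described above: 0 is the B below middle C,
# 1 is middle C, and each further digit is the next step up the C-major scale.
NOTES = ["B3", "C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5", "D5"]

def digits_to_melody(digits):
    """Re-encode a digit string as a sequence of scale notes, so that someone
    with good pitch memory can store it as a short melody."""
    return [NOTES[int(d)] for d in digits if d.isdigit()]

print(digits_to_melody("31415926535"))
# ['E4', 'C4', 'F4', 'C4', 'G4', 'D5', 'D4', 'A4', 'G4', 'E4', 'G4']
```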

Writing to situational memory

Mnemonic techniques like memory palaces are great for storing information, if you’ll know to look for it later. But what about when you won’t know to look for it?

The key to writing memories that will come up when you need them is to be sure to write the correct links, and not just the explicit content. For instance, if I need to remember the milk, memorizing “remember the milk” won’t help much. There’s nothing about that that triggers my recollection as I pass by the grocery store. But if I memorize, “when I leave work today, go to the grocery store and get milk,” that might be more useful - because part of the thing memorized (“when I leave work today”) is associated with the situation in which I want to remember it.

The technical term for this sort of thing is implementation intentions:

While goal intentions (goals) have the structure “I intend to reach Z!” with Z relating to a desired future behavior or outcome, implementation intentions have the structure “If situation X is encountered, then I will perform the goal-directed response Y!” Thus, implementation intentions define when, where, and how one wants to act on one’s goal intentions. In order to form an implementation intention, individuals need to identify a goal-relevant situational cue (such as a good opportunity to act, or an obstacle to goal striving) and link it to an instrumental goal-directed response. [...] For instance, a person with the goal to reduce alcohol consumption might form the following implementation intention: “And whenever a waiter suggests ordering a second drink, then I’ll ask for mineral water!” Empirical data supports the assumption that implementation intentions help close the gap between holding goals and attaining them. A meta-analysis based on close to a hundred studies shows a medium to large effect on increased rate of goal attainment (Gollwitzer & Sheeran, 2006).

Implementation intentions are often framed as a way to bolster your willpower, but I find it more natural to think of them as a bare minimum criterion for really having a plan at all.

All this was specific to plans for action - how is this related to memorizing useful facts?

When a friend told me that he is allergic to onions, I didn’t just verbally recite the bare fact. Doing that would only be useful if, when thinking about food I might serve him, I already expected to ask myself what he might be allergic to. (As it happens, I already have a habit of thinking through this sort of thing - though not reliably - which activates my dietary constraint and preference memories for the people involved.) What I did instead was think of onions, and foods containing onions, and mentally rehearsed linking this with a bad outcome for my friend. (If you’re new to this, you might try vividly picturing the trigger and its relation to the bad outcome.) So now, every time I invite him to dinner, if I consider making something with onions, that immediately reminds me that it would make him sick.

Similarly, when Feynman’s Brazilian students were learning about refractive indices, polarized light, and Brewster’s Angle, they would have done well to look for real-life situations where this knowledge could be applied, in order to connect their learned propositions to other parts of their understanding.

The principle here is to consider linking the fact to the appropriate triggers as part of the process of learning the fact. If I don’t have an idea when I might use a piece of information, then I don’t consider myself to understand it. This follows from the principle that beliefs ought to “pay rent” in anticipated experiences.
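
As a toy contrast - the dictionary keys and helper below are invented for illustration, not a model of memory - the difference between the two ways of filing a fact looks something like this:

```python
# The same fact filed two ways: under its content, versus under the
# situational cue in which it will actually be needed.
by_content = {"my friend's allergy": "onions"}

by_trigger = {
    "about to cook with onions": "this dish would make my friend sick",
    "leaving work": "stop at the store for milk",
}

def what_comes_to_mind(situation, memory):
    """Recall only fires if the current situation matches a stored trigger."""
    return memory.get(situation, "nothing in particular")

# The content-keyed version never surfaces, because nothing in the situation
# matches the key it was filed under.
print(what_comes_to_mind("about to cook with onions", by_content))
print(what_comes_to_mind("about to cook with onions", by_trigger))
```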

Thinking

Sometimes you don’t just ask your brain for the final idea and get it, fully formed - you need to think it through on the spot. One important way to think things through effectively is to drop unnecessary mental processes that take up attention.

Here are some mental shifts you might consider trying when you’re having trouble thinking something through:

  • Formal systems vs intuition
  • Accountable vs unaccountable language
  • Engaging different sensory modes
  • Concretizing vs generalizing

 

Formal systems vs intuition

Sometimes you know a bunch of somewhat abstract or disconnected facts and have no gut sense of what they add up to - or are overwhelmed trying to hold them all together, or feel like you’re always missing part of the problem. In these cases, it can be helpful to formalize your thinking.

Mathematical proofs, schedules, lists of pros and cons, and belief mapping are all formalizations that make your thinking more explicit.

One benefit of formal thinking is that it can pull considerations out of working memory, into writing. If you trust that your reasoning process is reliable, this means that you can focus on doing one step at a time, instead of trying to make a big leap of judgment all at once. Another is that it’s easier to see your unexamined assumptions or gaps in your search strategy if you lay it all out in front of you in something like a belief map or numbered argument. However, formal thinking alone can often skip over important intuitions that don’t have a corresponding explicit verbal proposition, which is why it can be helpful to gut-check your formal models.

On the other hand, sometimes it’s hard to list out all the relevant facts, but some part of your mind has been paying attention and learning, and might be wiser than you think. This is why it’s important to query your unconscious mind directly. Eugene Gendlin’s Focusing is a good example of this. I’ve also had good results from simply relaxing my need to review thoughts before saying them, and trying to speak directly from my tacit beliefs.

Accountable vs unaccountable language

Sometimes a problem feels impossible, only to become easy to solve when we explain it to someone else. This can work even if the other person is only notional - many programmers “explain” their problems to a rubber duck before consulting a live person, a practice called “rubber duck debugging” or rubber ducking:

A very simple but particularly useful technique for finding the cause of a problem is simply to explain it to someone else. The other person should look over your shoulder at the screen, and nod his or her head constantly (like a rubber duck bobbing up and down in a bathtub). They do not need to say a word; the simple act of explaining, step by step, what the code is supposed to do often causes the problem to leap off the screen and announce itself.

It sounds simple, but in explaining the problem to another person you must explicitly state things that you may take for granted when going through the code yourself. By having to verbalize some of these assumptions, you may suddenly gain new insight into the problem. [...]

Why “rubber ducking”? While an undergraduate at Imperial College in London, Dave did a lot of work with a research assistant named Greg Pugh, one of the best developers Dave has known. For several months Greg carried around a small yellow rubber duck, which he’d place on his terminal while coding. It was a while before Dave had the courage to ask. . . .

Sometimes I’ll rehearse little speeches in my head explaining a thing I’m thinking about, getting progressively more detailed and sophisticated as I repeat it. I think this helps me link the things I’m thinking through to their implications and assumptions.

On the other hand, sometimes the need to articulate exactly why you believe what you believe, and all the steps of reasoning you’ve taken, can impede cognition if what you’re trying to do is novel concept formation. Often I seem to develop a sort of private language when conceptualizing something for the first time, where I’ll use words in nonstandard ways as handles, letting my unconscious mind pick the word with the right connotations, not worrying so much about definitions.

When I’m engaging in idea exploration with other people, it goes easier when they don’t get hung up on making sure the words are exactly right, but instead try and directly perceive the thing I’m pointing towards - look at the moon, not the finger.

It’s been helpful when talking to other people, if they seem like they have something to say but are at a loss for words, to encourage them to say it in their private language, and not worry about whether it’s intelligible to me on the first try. (It usually is anyway.)

Accountable language techniques such as rubber ducking recruit our social communication skills to help with the task of thinking a problem through. Unaccountability lets us disengage from that module, freeing up attention for some other mode of thinking.

Engaging different sensory modes

Marcello’s original example of using the brain’s native architecture to solve problems was using a visualization to quickly develop a mathematical intuition. Our capacity for spatial reasoning is what makes memory palaces work. It’s not obvious to me whether visual and kinesthetic reasoning tend to have very different strengths, or whether some people just lean towards one or the other when using their spatial reasoning. They both, however, seem very different from verbal reasoning.

Another important sort of native reasoning is inhibition-based reasoning. Unless you’re a sociopath, you have a natural talent for solving logic problems, but only when they’re worded as problems about social rules or precautions against risk. (Normal people and sociopaths do equally poorly on isomorphic logic problems that aren’t worded to engage our sense of social or prudential inhibition.)

I suspect, with low confidence, that social and other inhibition-based reasoning is linked to auditory processing (although presumably face-reading should be linked to this too). I often find I have to avoid listening to words in order to access other modes of reasoning.

The key affordance here is one for switching modes. A secondary affordance is for paying attention to similar problems you’ve been able to solve, and what sensory mode you were in for that:

Recently I was talking with Brienne face-to-face, and she noted that a question I’d asked her would be much easier for her to answer if we were talking remotely over a text channel:

Neat thing I learned from Ben Hoffman today: If I imagine that I’m typing at a computer while I’m actually talking to someone in person, I can use my brain better than I usually can in face-to-face conversation. I think the two key thoughts here were, “How would I think about this if I were at a computer with an Internet connection?” and “Imagining seeing the question I’m trying to think about written out in text form.” –Brienne

[...] Brienne had mentioned that answering my question was hard because we were talking in person but would have been easy over text communication. This heuristic triggered a question from me about what her cognitive process would have been over text. In response, she figured out that she’d be using some sort of visual processing, so I suggested that she just do that right then and there.

Concretizing vs generalizing

An important technique for engaging a native mode of reasoning is to make the ideas you’re thinking with more vivid - and this can often be done by making them more concrete.

In Surely You’re Joking, Mr. Feynman!, Richard Feynman talks about how it’s often easier for him to think about something if he moves from the abstract description to a concrete example:

I had a scheme, which I still use today when somebody is explaining something that I'm trying to understand: I keep making up examples.

For instance, the mathematicians would come in with a terrific theorem, and they're all excited. As they're telling me the conditions of the theorem, I construct something which fits all the conditions. You know, you have a set (one ball)-- disjoint (two balls). Then the balls turn colors, grow hairs, or whatever, in my head as they put more conditions on.

Finally they state the theorem, which is some dumb thing about the ball which isn't true for my hairy green ball thing, so I say "False!" [and] point out my counterexample.

This is a fairly abstract example of concretizing, of course - generally you want to think of a specific case to which you’d actually want to apply the general reasoning. It’s often easier to think about an example of a thing than the whole class at once - this makes it more vivid and engages your intuitions more strongly.

Sometimes if I’m trying to solve a problem in generality, it helps to think through one particular case. For instance, someone who’s lonely, instead of asking why they don’t spend more time with people, might select one particular person they want to spend time with, and ask themselves for specific reasons why they don’t spend time with that particular person. This could generate answers like “because neither of us ever asks the other to hang out”, which can be expanded back into a more general hypothesis.

Eliezer Yudkowsky gives a few examples that helped me operationalize the advice to habitually check my beliefs with examples:

Even with people who've had moderate amounts of exposure to Less Wrong, a fair amount of my helping them think effectively often consists of my saying, "Can you give me a specific example of that?" or "Can you be more concrete?"

A couple of formative childhood readings that taught me to be specific:

"What is meant by the word red?"

"It's a color."

"What's a color?"

"Why, it's a quality things have."

"What's a quality?"

"Say, what are you trying to do, anyway?"

You have pushed him into the clouds.  If, on the other hand, we habitually go down the abstraction ladder to lower levels of abstraction when we are asked the meaning of a word, we are less likely to get lost in verbal mazes; we will tend to "have our feet on the ground" and know what we are talking about.  This habit displays itself in an answer such as this:

"What is meant by the word red?"

"Well, the next time you see some cars stopped at an intersection, look at the traffic light facing them.  Also, you might go to the fire department and see how their trucks are painted."

-- S. I. Hayakawa, Language in Thought and Action

and:

"Beware, demon!" he intoned hollowly.  "I am not without defenses."

"Oh yeah?  Name three."

-- Robert Asprin, Another Fine Myth

And now, no sooner does someone tell me that they want to "facilitate communications between managers and employees" than I say, "Can you give me a concrete example of how you would do that?"  Hayakawa taught me to distinguish the concrete and the abstract; and from that small passage in Asprin, I picked up the dreadful personal habit of calling people's bluffs, often using the specific phrase, "Name three."

“Name three” is great, but first, if you’re stuck reasoning about a class of problems, name one specific instance.

On the other hand, sometimes a problem that feels really thorny can feel much more tractable once you see the forest for the trees, going up a level of abstraction. Let’s say I’m hosting a dinner party, and one dish is taking longer than anticipated, and another is ruined, and I haven’t showered yet, and a guest just let me know they’ll be early. This is a fairly overwhelming number of problems to solve all at once, but if instead I try to abstract away from the details, the problem is that I expect to have to entertain before things are ready. This immediately brings to mind some general solutions like just telling guests it will be late, triaging the remaining tasks to cut my losses, and asking my co-host to entertain while I finish prep.

Throughout this post, if it felt like the most natural way to write one of these sections was by describing the principle directly, I tried to flesh it out with at least a minor example, in order to invoke Humean association and help you connect it with the rest of your knowledge. If the most natural presentation felt like walking through an example, I tried to finish by describing the principle explicitly, to help you see which aspects were essential.

5 thoughts on “Reading, writing, and thinking, with your brain”

  1. Romeo Stevens

    That people seem to find fairly diverse conceptual metaphors more or less emotionally salient helps explain why sub-cultures need to reinvent the wheel all the time instead of just porting in from experts elsewhere. They both have a need to hear it in their own language/conceptual metaphors and feel a sense of ownership from invented-here syndrome.

    For example when I read this list:
    Formal systems vs intuition
    Accountable vs unaccountable language
    Engaging different sensory modes
    Concretizing vs generalizing

    I reflexively converted each one into the closest conceptual metaphor that I favor for the distinction I think each one is trying to make. I do worry that something like 'potentially specious crosslinking' could be a curiosity-stopping phenomenon though.

    1. Benquo (post author)

      I think an important sign of epistemic health is sometimes having the experience of someone explaining a thing, and you try to understand, and repeatedly try to put it into your own conceptual vocabulary, and you just.. don't... get it. And you notice this, and are frustrated by it, and try harder, and try something different. Like I've done with the Kegan Constructive Developmental Theory stuff.

      I'm curious what your success rate was converting those to your own language - presumably you checked them against the detail I provided below the list to verify that you were looking at the same things?

