There is a kind of explanation that I think ought to be a cornerstone of good pedagogy, and I don't have a good word for it. My first impulse is to call it a historical explanation, after the original, investigative sense of the term "history." But in the interests of avoiding nomenclature collision, I'm inclined to call it "zetetic explanation," after the Greek word for seeking, an explanation that embeds in itself an inquiry into the thing.
Often in "explaining" a thing, we simply tell people what words they ought to say about it, or how they ought to interface with it right now, or give them technical language for it without any connection to the ordinary means by which they navigate their lives. We can call these sorts of explanations nominal, functional, and formal.
In my high school chemistry courses, for instance, there was lots of "add X to Y and get Z" plus some formulas, and I learned how to manipulate the symbols in the formulas, but this bore no relation whatsoever to the sorts of skills used in time-travel or Robinson Crusoe stories. Overall I got the sense that chemicals were a sort of magical thing produced by a mysterious Scientific-Industrial priesthood in special temples called laboratories or factories, not things one might find outdoors.
It's only in the last year that I properly learned how one might get something as simple as copper or iron, reading David W. Anthony's The Horse, the Wheel, and Language and Vaclav Smil's Still the Iron Age, both of which contain clear and concrete summaries of the process. Richard Feynman's explanation of triboluminescence is a short example of a zetetic explanation in chemistry, and Paul Lockhart's A Mathematician's Lament bears strong similarities in the field of pure mathematics.
I'm going to work through a different example here, and then discuss this class of explanation more generally.
What is yeast? A worked example
Recently my mother noted that when, in science class, her teacher had explained how bread was made, it had been a revelation to her. I pointed out that while this explanation removed bread from the category of a pure product, to be purchased and consumed, it still placed it in the category of an industrial product requiring specialized, standardized inputs such as yeast. My mother observed that she didn't really know what yeast was, and I found myself explaining.
Seeds, energy storage, and coevolution
Many plants store energy in chemicals such as proteins and carbohydrates around their seeds, to help them start growing once they're in wet ground. Some animals seek out the seeds with the most extra energy, and poop the occasional seed elsewhere. Sometimes this helps the plant reproduce more than it otherwise would have; in such cases, the plant may coevolve with the animals that eat it, often investing much larger amounts of energy in or around the seed, since the most calorific seeds get eaten most eagerly.
Humans coevolved with a sort of grass. If you've seen wild grass, you may have observed stalks with seed pods on them, that look sort of like tiny heads of wheat. Grain is basically a grass that coevolved with us to produce plump, overnourished seeds.
Of course, there's only so much we can do to select for digestibility. Often even plants that store a lot of surplus energy need further treatment before they're easy to digest. Some species evolved to specialize in digesting a certain sort of plant matter efficiently; for instance, ruminants such as cattle and sheep have multiple stomachs to extract the free energy in plant matter. Humans, with unspecialized omnivorous guts, learned other ways to extract energy from plants.
One such way is cooking. If you heat up the starches inside a kernel of wheat, they'll often transform into something easier to digest. But bread made this way can still be difficult to digest, as many eaters of matzah or hardtack have learned. Soaking or sprouting seeds also helps. And a third way to make grains more digestible is fermentation.
Where there's dense storage of energy, there's often leakage. Sometimes a seed gets split open for some reason, and there's a bit of digestible carbohydrate exposed on the surface. Where there's free energy like this, microbes evolve to eat it.
Some of these microbes, especially fungal ones, produce byproducts that are toxic to us. But others, such as some bacteria and yeasts, break down hard-to-digest parts of wheat into substances that are easier for us to digest. Presumably at some point, people noticed that if they wet some flour and left it out for a day or two before cooking it, the resulting porridge or cracker was both tastier and more digestible. (Other fermented products such as sauerkraut may have been discovered in a similar way.)
Of course, while grain-eating microbes will often tend to be found on grain, allowing for such accidental discoveries, there is no guarantee that they'll be the kind we like. Since they mostly just eat accidental discharges of energy, there also just aren't very many of them, compared to the amount of energy available to them once the flour is ground up and mixed with water. It takes a while for them to eat and reproduce enough to process the whole batch.
Eventually, people realized that if they took part of a good batch of dough or porridge and didn't cook it, but instead added it to the next batch, this would yield an edible product both more reliably (because the microbes in the starter would have a head start relative to any potentially harmful microbes) and more quickly (again, because they'd be starting with more microbes relative to the amount of grain they needed to process). This is what we call a sourdough "culture" or "starter".
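The "head start" argument above can be made quantitative with a toy exponential-growth model. The numbers below (doubling time, target population, inoculum sizes) are illustrative assumptions, not measured values; the point is only that fermentation time shrinks with the logarithm of the starting population, which is why a spoonful of active starter processes a batch so much faster than the few microbes that land on flour by chance.

```python
import math

DOUBLING_TIME_H = 1.5    # assumed yeast doubling time, in hours
TARGET_CELLS = 10**9     # assumed population needed to ferment the batch

def hours_to_ferment(initial_cells):
    """Hours of exponential growth needed to reach TARGET_CELLS."""
    doublings = math.log2(TARGET_CELLS / initial_cells)
    return doublings * DOUBLING_TIME_H

wild = hours_to_ferment(10**3)     # a few microbes arrive by chance
starter = hours_to_ferment(10**7)  # a spoonful of active starter

print(f"wild inoculum: {wild:.0f} h, with starter: {starter:.0f} h")
# wild inoculum: 30 h, with starter: 10 h
```

A ten-thousand-fold head start saves only about two thirds of the time here, because growth is exponential - but a day's difference is exactly the gap between reliable daily bread and a risky multi-day wait.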
(You can make a sourdough starter at home by mixing some flour, preferably wholemeal, with water, covering it, and adding some more flour and water each day until it gets bubbly. Supposedly, a regularly fed starter can stay active for generations.)
Breads are particularly convenient foods for a few reasons. First, grains have a very high maximum caloric yield per acre, allowing for high population density. Second, dry grains or flour can be stored for a long time without going bad; as a result, stockpiles can tide people over in lean seasons or years, and be traded over large distances. Third, a loaf of bread itself has some amount of more local portability and durability, relative to a porridge.
One of the microbes found in a sourdough culture, yeast, has a particularly simple metabolism with two main byproducts. It pisses alcohol, and farts carbon dioxide. Carbon dioxide is a gas that can leaven or puff up dough, which makes it nicer to eat. Alcohol is a psychoactive drug, and some people like how it makes them feel. Many food cultures ended up paying special attention to grain products that used one or the other of these traits: beer and leavened bread.
In the 19th century CE, people figured out how to isolate the yeast from the rest of the sourdough culture, which allowed for industrial, standardized production of beer and bread. If you know exactly how much yeast you're adding to the dough, you can standardize dough rising times and temperatures, allowing for mass production on a schedule, reducing potentially costly surprises.
The price of this innovation is twofold. First, when using standardized yeast to bake bread, we forgo the digestive and taste benefits of the other microbes we would find in a sourdough starter. Second, we become alienated from a crucial part of the production of bread, to the point where many people only relate to it as a recipe composed of products you can buy at a store, rather than something made of components you might find out in the wild or grow self-sufficiently.
Additional thoughts on explanation
I'm having some difficulty articulating exactly what seems distinct about this sort of explanation, but here's a preliminary attempt.
Zetetic explanations will tend to be interdisciplinary, as they will often cover a mixture of social and natural factors leading up to the isolation of the thing being explained. This naturally makes it harder to be an expert in everything one is talking about, and requires some minimal amount of courage on the part of the explainer, who may have to risk being wrong. But they're not merely interdisciplinary. You could separately talk about the use of yeast as a literary motif, the chemistry of the yeast cell, and the industrial use in bread, and still come nowhere close to giving people any real sense of why yeast came into the world or how we found it.
Zetetic explanations are empowering. First, the integration of concrete and model-based thinking is checkable on multiple levels - you can look up confirming or disconfirming facts, and you can also validate it against your personal experience or sense of plausibility, and validate the coherence and simplicity of the models used. Second, they affirm the basic competence of humans to explore our world. By centering the process of discovery rather than a finished product, such explanations invite the audience to participate in this process, and perhaps to surprise us with new discoveries.
Of course, it can be hard to know where to stop in such explanations, and it can also be hard to know where to start. This post could easily have been twice as long. Ideally, an explainer would attend to the reactions of their audience, and try to touch base with points of shared understanding. Such explanations also require patience on both sides. Another difficulty this approach raises is that plain-language explanations rooted in everyday concepts may not match the way things are referred to in technical or scientific literature, although this problem should not be hard to solve.
In some cases, one might want to forwards-chain from an interesting puzzle or other thing to play with, rather than backwards-chaining from a product. Lockhart seems to favor exploration over explanation for mathematics, and of course there's no particular reason why one can't use both. In particular, the explanation paradigm seems useful for deciding which explorations to propose.
Related: Truly Part Of You, The Steampunk Aesthetic
Interesting stuff! A lot of my chemistry books and classes actually did take a historical approach to how various particles or effects were discovered - looking at the original experiments that discovered the nucleus, the electron, the wave/particle nature of light. These weren't immediately useful because so much of the work came from later refinements and repetitions. The illustrations weren't clear enough that I could make the leap from "okay, they fired particles at gold foil and saw that some bounced back" to "this is caused by a nucleus and every single atom has one of these" ("why didn't they think it was just electrons bouncing off of each other? Why did they think every element had these?") and would just have to return to memorization for the rest of what was eventually learned and derived from it.
I was also taught historical models of the atom in increasing order of complexity, at the beginning of each year. Loose Greek conception -> aether/etc -> Dalton's theory -> Plum pudding model -> Rutherford model -> quantum model. In practice, we used the Rutherford model of the atom for most purposes. It was a little confusing. The full field of chemistry feels more resistant to explanations like this than other fields. Biology is a good one, because evolution comes in a nice prepackaged story. (Although there are some important concepts like diffusion for which I've never actually found an intuitive metaphor or story.)
I think that most instruction about the atom is just wasted effort, most students who don't have a special interest in chemistry never really learn what atoms are at all, and a decent holistic understanding appropriate for ordinary high schoolers doesn't have to go into the details of different approximations used over time. (College is different and the appropriate kind of instruction really depends on the social purpose of it, which is controversial.)
Atoms seem like an odd point of focus before the undergraduate level, but you could explain via showing how substances tend to react in simple whole-number ratios under some circumstances. Of course, along the way you'd need to learn how to find substances that can react but are hard to reduce into components, and how to measure the relevant things precisely enough, and probably why people would have bothered to find these. In other words, centering history in the sense of the process of seeking it out, rather than chronology.
Another natural set of things to explore in a single course might be acids and bases. Another might be electricity and magnetism. You can learn a lot about each of these before fitting them together (e.g. I think you can understand Faraday and Maxwell without the atom), and until you know multiple related subdomains it's not very helpful to have a unified model of the atom.
Not sure where periodicity fits in because I actually don't know enough about applied chemistry to understand how it shows up (except that I can make guesses about which arrangements of letters describe things with similar behavior to other arrangements of letters).
I like this concept. It seems to have strong ties to actually knowing how to do or make a thing - knowing that you need to utilize a package of yeast from the store doesn't help you make bread if you're trying to recreate 21st-century cuisine in 10,000 BC, but knowing that you need to cultivate the organisms that can be cultured in water and wheat seeds under the right conditions actually gives you a foothold to try and build from. At least to me, there's a distinct qualitative threshold of understanding where I feel like "Okay, I could actually stand a decent chance of building this thing from the ground up (and do so having a model for why the steps involved work)." With math, "could I prove this with a formal theorem-prover, given enough time to think through the intuition?" With many real-world concepts, "could I apply this toolbox to a novel situation and obtain new understanding from it?" With yeast, "could I learn to make bread with my current understanding?"
Does this line up with your perception of a zetetic explanation, or are there examples you'd consider to disagree with this classification?
Also curious to hear of any other instances of this sort of thing being done well - Feynman is about the only good source I know of outside of mathematics.
I think the questions & criteria you listed are clustered tightly around the thing I'm trying to point to.
Aside from the specific examples I gave in the post, here are some that come to mind. Charles Petzold's Code and Hofstadter's Gödel, Escher, Bach seem to be aiming for this. From a biography of Alan Turing I learned about Edwin Tenney Brewster's book Natural Wonders Every Child Should Know, which does a great job with some parts of biology, explaining systematic information with reference to things a child's likely to already know about or be able to check. Brewster wrote another book on diet, which is mostly devoted to explaining that calories are a thing, that it's a good idea to have some in your diet, and how to do that. Alicorn's Improvisational Soup does this a bit - her explanation of roux is a brief but reasonably central case. The best journalists do this well - Gary Taubes's Good Calories, Bad Calories has a pretty good explanation of metabolism that I could operationalize to draw useful conclusions other than the ones he was arguing for (with the benefit of a little additional corroborating research).
What makes for a good metaphysics? I posit that it draws one's attention towards relevant features of a situation. Relevant features include precedents, antecedents, or anything else that allows one to pull levers to affect the variables one might care about. And this includes letting you know which variables you might want to care about. A good explanation, then, is one that helps construction, maintenance, and updates of such a metaphysics. The broader the situations in which the framework creates concretely relevant actionable distinctions the better. The best ones are also meta, delineating their own bounds of applicability in ways that prevent you from overfitting.
There's a really excellent textbook which uses this approach to teach Euclidean and non-Euclidean geometry (and mathematical reasoning more generally). See: https://www.amazon.com/Euclidean-Non-Euclidean-Geometries-Development-History/dp/0716799480
My father did that a lot when I was a kid. I remember him telling me that Europeans wanted to trade with India, but Turks were in the way. So they tried going the other way, and when Columbus landed in America he called the natives "Indians".
I remembered this kind of stuff infinitely better than school stuff, and I was actually surprised early in school that it was so dry. I had expected much more of the same, and was looking forward to it.