Response to Discursive Games, Discursive Warfare
The discursive distortions you discuss serve two functions:
1. Narratives can only serve as effective group identifiers by containing fixed elements that deviate from what naive reason would conclude. In other words, something about the shared story has to be a costly signal of loyalty, and therefore a sign of a distorted map. An undistorted map would be advantageous for anyone regardless of group membership; a distorted map is advantageous only for people using it as an identifying trait. Commercial mapmakers will sometimes include phantom towns so that they (and courts) can distinguish competitors who plagiarized their work from competitors who independently mapped the same terrain. "Point deer, make horse" (Zhao Gao's loyalty test: calling a deer a horse at court to see who would go along) can catalyze the formation of a faction because it reduces motive ambiguity in a way that "point deer, make deer" could not.
"Not Invented Here" dynamics are part of this. To occupy territory, an intellectual faction has to exclude alternative sources of information. I think you're talking about this when you write:
> LessWrong rationalism might be able to incorporate ideas from analytic into its own framework, but the possibility of folding LessWrong rationalism into analytic, and in some sense dissolving its discursive boundaries, transforms the social and epistemic position of rationalist writers, to being more minor players in a larger field, on whose desks a large pile of homework has suddenly been dumped (briefing on the history of their new discursive game).
2. Individuals and factions can rise to prominence by fighting others. You can make a debate seem higher-stakes and therefore more attractive to spectators by exaggerating the scope of disagreement.
The opposition to postmodernist thought on LessWrong is enacting this sort of strategy. Analytic philosophy attracts attention in part by its opposition to Continental philosophy, and vice versa. LessWrong is broadly factionally aligned with the Analytic party, in favor of Modernism and therefore against its critics, in ways that don't necessarily correspond to propositional beliefs that would change in the face of contrary evidence. Eliezer can personally notice when Steven Pinker is acting in bad faith against him, but the LessWrong community is mood-affiliated with Steven Pinker, and therefore implicitly against people like Taleb and Graeber.
These two functions can mutually reinforce.
You can force a disagreement to persist by arguing for claims that are in your opponent's group-identity blind spot and preferentially arguing against the people with the most exaggerated blind spots. (There's a tradeoff, though. You get more attention by arguing against people who won't try to learn from you, but you also get more attention by arguing against people who are more prestigious because their arguments make more sense. We see a variety of niches at different levels of prestige.) You can attract more attention by exaggerating those claims. And you can form an identity around this (and thus gain narrative control over followers) by forming a reciprocal blind spot around your exaggerations.
This is the essence of the Hegelian dialectic. It is a conflict strategy that expropriates not from its nominal enemy, but from people who mistake the kayfabe for either a genuine disagreement or a true conflict. The movie Battle of Wits (AKA Battle of Warriors) is the best representation I've seen of this dynamic - a Mohist (Chinese utilitarian) is invited to help defend a city, but gradually discovers the belligerents on both sides are not actually acting on self-interest or trying to win the conflict, but are instead committed to playing out their roles, even when this kills them. They interpret his constructive attempts to save lives as power grabs, and the regime he's trying to help repeatedly acts to thwart him. His attempts to save the lives of the enemy soldiers and leaders are also thwarted, partly by their own actions. By the end of the movie the city has been burnt to the ground by the armies supposedly fighting over it, and the Mohist hero is leading away the local children, who aren't old enough to have been initiated into a Hegelian death cult.
You bring up Marx as an example of someone who tried and failed to control the reception of his own ideas. But such "control" only makes sense in the context of brand management. Marx didn't only write the Communist Manifesto, which defined his factional brand; he also wrote Capital, an explanation of class dynamics within a basically Ricardian frame.
Capital won Marx a lot of prestige because it seemed intellectually credible: it could account for itself in Ricardian terms, and Ricardo was widely regarded as intellectually credible. This is related to the fact that there is no Ricardian faction; he's tacitly accepted on the right as well as the left, because he didn't also try to catalyze an adversarial political movement - he simply advanced an explanatory theory. Marx modeled his strategy on that of Hegel (he explicitly described his materialist dialectic as "Hegel turned on his head," a perfectly Hegelian move), and Hegel identified as a Spinozan (another foundational figure who, like Ricardo, was widely accepted but not identifiable with any major political faction).
What's not wrong on purpose is persuasive but does not become a factional identity. What becomes a factional identity is wrong on purpose.
Applying this to LessWrong: Plenty of people read the Sequences, improved their self-models and epistemic standards, and went on to do interesting things not particularly identified with LessWrong. Also, people formed an identity around Eliezer, the Sequences, and MIRI, which means that the community clustered around LessWrong is - aside from a few very confused people who until recently still thought it was about applying the lessons of the Sequences - committed not to Eliezer's insights but to exaggerated versions of his blind spots.
The people who aren't doing that mostly aren't participating in the LessWrong identity. Factions like that are hostile to the confused people who behave as though they're part of a community trying to become less wrong, but they are also parasitic on such people, claiming credit for their intellectual contributions. When such participation is fully extinguished, the group begins to decay, having nothing distinctive to offer - unless it has become too big to fail, in which case it's just another component of one political faction or another.
> An undistorted map would be advantageous for anyone regardless of group membership; a distorted map is advantageous only for people using it as an identifying trait.
I take your overall point here, but I'll add a nitpick that this isn't strictly correct.
The other mechanism by which a distorted map can be advantageous is that other actors might update on your beliefs. So if your model of the world is inaccurate in a way that would benefit your interests if others shared that inaccurate picture, there's an incentive to hold false or miscalibrated beliefs in those areas.
A particularly common instantiation of this is adaptive overconfidence, used to scam others into investing in you in various ways (a startup founder convincing a VC to fund their company, a seducer convincing a woman to sleep with him). Another common example is believing that a problem is more severe or more urgent than it in fact is, in order to extract resources from others (donations for a non-profit, extra levels of effort from your employees, etc.).
I think most forms of intentional self-deception also have this character, with the caveat that sometimes the person you're trying to extract from is *yourself* (e.g. convincing your System 1 to believe the singularity is 15 years away, to get yourself to work harder).
There's also just non-adaptive self-deception, due to some beliefs being uncomfortable or painful to hold.
But I think these two mechanisms, signaling loyalty and extracting resources by influencing others' beliefs, might exhaust the whole space of reasons why having a distorted map can be adaptive?