On Twitter, Freyja wrote:
> Things capitalism is trash at:
>
> - Valuing preferences of anything other than adults who earn money (i.e. future people, non-humans)
> - Pricing non-standardisable goods (i.e. information)
> - Playing nicely with non-quantifiable values + objectives (i.e. love, ritual)
>
> Things capitalism is good at:
>
> - Incentivising the production of novel goods and services
> - Coordinating large groups of people to produce complex bundles of goods
> - The obvious: making value fungible
>
> Anyone know of work on -
>
> a) integrating the former into existing economic systems, or
> b) developing new systems to provide those things while including capitalism's existing benefits?
This intersected well enough with my current interests and those of the people I've been discoursing with most closely that I figured I'd try my hand at a quick explanation of what we're doing, which I've lightly edited into blog post form below. This is only a loose sketch; I think it outlines the argument reasonably precisely, but many readers may find that there are substantial inferential leaps. Questions in the comments are strongly encouraged.
Any serious attempt at (b) will first have to unwind the disinformation that claims that the thing we have now is capitalism, or remotely efficient.
The short version of the project: learning to talk honestly within a small group about how power works, both systemically and as it applies to us, without trying to hold onto information asymmetries. (There's pervasive temptation to withhold political information as part of a zero-sum privilege game, like Plato's philosopher-kings.)
Some background: post-WWII elite institutions (e.g. large corporations) are competitive to enter, but not under performance pressure, because of US government policy. This strongly selects for zero-sum games, which mimic discourse but wreck it. (See Moral Mazes for more, especially the case studies that make up most of the book, starting around chapter 3.)
This creates opportunity in two ways.
First, institutions are mostly too stupid to model their environment beyond the zero-sum games they specialize in, so a small group that's able to maintain information hygiene and not turn on each other should be able to take & hold territory. "And not turn on each other" turns out to be really hard, because all our role models and intuitions for how to survive in this world involve doing that all the time. But we're learning!
(A mundane example of a decisive advantage due to information hygiene: Paul Graham writes about how his startup did better because it used an elegant programming language. That's only information hygiene on the purely technical level, but that was enough to outmaneuver huge corporations with a strong perceived incentive to ruin them, for quite a while. For a less mundane example, the story of how Elisha outmaneuvered multiple ruling dynasties is a personal favorite - 2 Kings 5-10. The narrative distorts the "miracles" a bit but it's not hard to reconstruct how he actually did it.)
Second, because most supposed productive activity is done in the context of huge stable corporations, people are trying to maximize the number of jobs and complexity per unit of output. This implies that many things can be done much more easily.
So that implies that if we can have good enough information hygiene and group cohesion not to fall victim to the perverse impulse to do the kind of make-work or artificial scarcity that creates much of cost disease, we can learn how to build a nearly full-stack civilization in a small city-state. Obviously there are many steps between here and there, but since lots of them involve getting collectively smarter, a detailed plan would be inappropriate.
What does good information hygiene and group cohesion look like? The game Werewolf is a good example. Players are secretly assigned the identity of Villager (initially the majority) or Werewolf (minority). Each round, all players publicly vote one player out, and the Werewolves secretly eliminate one more. There are other details that allow Villagers to make some inferences about who the Werewolves are. But they have to play the first few rounds right or they lose.
Optimal play for Werewolves involves (a) targeting whichever villagers are the most helpful to public deliberation, for exclusion, and (b) during public deliberation, being as unhelpful as they can get away with while appearing to try to help at other times. I realized a lot of things about how social skills feel from the inside when I finally figured out how to play correctly as a Werewolf.
Optimal play for Villagers involves creating as much clarity as possible, as soon as possible, and being willing to assume that people who seem to be foolishly gumming up the works are Werewolves if there's no other clear target.
With optimal play, Villagers usually win, but in practice, at best one or two people try to create clarity and are picked off in the first round by the Werewolves. The other Villagers are resigned to trying to die last, so they lose.
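The voting dynamics above can be sketched as a toy Monte Carlo simulation. Everything in it is my own illustrative assumption rather than anything from the post: the player counts, the win condition (Werewolves win once they equal the Villagers), and the `p_spot` parameter standing in for how reliably coordinated Villagers identify a Werewolf in the day vote.

```python
import random

def play(n_players=10, n_wolves=2, p_spot=None, rng=None):
    """Play one abstract game of Werewolf.

    p_spot is the chance the day vote lands on a Werewolf. If None,
    Villagers vote at random, so the chance is wolves / players alive.
    """
    rng = rng or random
    villagers, wolves = n_players - n_wolves, n_wolves
    while True:
        # Day: everyone votes one player out.
        p = p_spot if p_spot is not None else wolves / (villagers + wolves)
        if rng.random() < p:
            wolves -= 1
        else:
            villagers -= 1
        if wolves == 0:
            return "villagers"
        if wolves >= villagers:
            return "werewolves"
        # Night: the Werewolves secretly eliminate a Villager.
        villagers -= 1
        if wolves >= villagers:
            return "werewolves"

def win_rate(trials=10000, seed=0, **kwargs):
    """Fraction of games the Villagers win over many trials."""
    rng = random.Random(seed)
    wins = sum(play(rng=rng, **kwargs) == "villagers" for _ in range(trials))
    return wins / trials
```

Under these assumptions, Villagers who reliably create clarity (say `p_spot=0.8`) tend to win, while resigned, random-voting Villagers tend to lose, which matches the observation that a few clarity-creators are not enough once they can be picked off.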
The thing I said about elite culture favoring zero-sum games can be recast as: the social environment favors playing Werewolf over playing Villager. In case it's not obvious, optimal real-world play for Villagers can often involve leaving the Werewolves alone. In real life there are better things to do than murder your enemies, like hang out. Villagers just need to defend themselves if and when they're actually threatened.
We're trying to learn how to play the Villager strategy successfully, in a context where we've mostly been acculturated to play as Werewolves, especially among elites. This has to involve figuring out how to do interpersonal fault analysis (identify when people are being Werewolfy) without scapegoating (assuming that fault -> blame -> exclusion).
In other words, justice seeks truth, but intends to leave no one behind; people who can't contribute need to feel safe admitting that, and people who hurt the group need the option to repent & heal the breach.
We don't have great finesse yet but optimal play in our world seems to be some fluid integration of talking about politics, healing personal trauma, and intersubjective openness.
Havel's The Power of the Powerless describes a similar (but less self-aware) strategy, which he calls "dissidence." He (accurately, I think) predicts that the situation in Capitalist countries will be more difficult than in Communist ones, because Capitalist ideology is more persuasive: it's more plausibly true.
> This has to involve figuring out how to do interpersonal fault analysis (identify when people are being Werewolfy) without scapegoating (assuming that fault -> blame -> exclusion).
I don't have time to say more at the moment, but want to be sure I register: I currently do not trust you or your coalition to successfully avoid scapegoating in trying to do this.
Neither do I! Why would you? Why would anyone?
It seems correct, for now, for the answer to be "yeah, we have not yet done anything you should regard as credible."
But, a startup should have *some* inside view that says that they are special (even if it's hard to communicate that to others). Ben's prior comment feels similar to the sort of unhelpful modesty that Eliezer critiqued in the sequences.
A response that would have been useful to me would have been "yeah, we don't yet have legible reasons we expect to be credible to you about why our project would succeed. A very rough summary of why we're somewhat optimistic includes [whatever crude handle you have for why you're optimistic]."
Also FWIW, a good thing I could imagine coming out of your project is "enough self-documentation, eventually presented well enough, that even if you fail, you leave behind useful information that the next group of idealistic people can use to get a head start." (there's "standing on the shoulders of giants" and there's "standing on the corpses of giants" which isn't quite as good but better than nothing)
(this does require better distillation than I think is typical for your posts, even some of your more effortful ones, but also obviously isn't that much of a priority until after you have more information than you currently do)
"I currently do not trust you or your coalition to [do the thing you said you still need to figure out how to do]" basically seems like too much of a nonsense objection to respond to in more detail. It falsely implies that we're asking for that level of trust, when actually what we're asking for is the courage to try the thing *before* we've got it all figured out. It doesn't make any specific claim that we're likely to be worse at this than any of the groups that haven't even explicitly described the problem.
If it were my project, I basically wouldn't want to hear feedback like that from random people on the internet, but I would want to hear it from people with whom I (Ray Arnold) have the level of trust that I have with Nick T. (I don't know about your own relationship with Nick, or your relationship with people in his reference class, or the equivalent reference class for people you trust about as much as I trust Nick, which is "medium".)
[hey Nick, this is as good a time as any to reveal that the amount I trust you is "medium"]
I didn't see it as an objection, just as a low-friction "hey, fwiw, you have not yet passed my bar of 'your project makes enough sense for me to have confidence in it.'" This seems useful precisely because if I'd communicated about as much about my project as you have, I could imagine the alternate, positive version of Nick's comment being useful feedback that I was on the right track.
[Or, rather, at _some_ point in the process of me braindumping the way you are currently braindumping in a series of posts, the braindump might cross the threshold where people I moderately trust would say 'hmm, Ray has now communicated his ideas and principles well enough for me to think he has a decent chance of getting it right', and it'd be useful to know when that was]
I probably wouldn't stop-drop-and-respond-in-detail to the current, negative version of the comment. Just nod and be like "cool, that makes sense. Some day I might have thought enough and written enough and demonstrated enough that it'd matter more whether you trust me at this point, but now is not that time."
(I care about this exchange because it seems *really* important to have a sense of what counts as good versus useless feedback on a project like this. The sort of pedantic feedback that is [alas] still moderately common on LessWrong is mostly worthless. Refraining from any feedback is also worthless.
I don't have a very clear sense of the optimal norms, but it seems like we should be trying to figure them out as we go, with principles as robust as we can manage.)
Ray, what about a conditional pitch?
If someone thinks that the 2009 Less Wrong rationalists were promising, but think that the 2019 community is mostly just an above-average Bay Area cult (in the sense that everything is a cult), then this project should be legibly promising in the form of "that, except learning what we think went wrong the first time," where our first achievement is noticing that something went wrong. (Harder than it sounds!)
If someone thinks that the rationalists were never promising, or still promising, then we have not yet done anything they should regard as credible.
Do you have more words on Elisha, or a link to an explanation of what you mean?
I've only played Werewolf in a small group once. I was a Werewolf and we won. The thing that seemed to matter was that the other Werewolf and I formed a voting bloc; the two of us voting in concert was enough to keep the elimination votes landing on "not us". (Villager strategy: watch for suspicious voting patterns?)
Villager = Crewmate
Werewolf = Impostor