I’m currently coming to terms with just how much of human communication is marketing, like unto the Hobbesian war of all against all. I want to figure out a way for human beings to coordinate and create value when the dominant society is like that. That task is too big for me, so I need a team. I don’t know how to find the right people, but my best guess is that if I can clearly articulate what is needed, the right people have a good chance of recognizing my project as the one they want to be part of. But I can’t write about this in a reasonable way, because I can’t think about this in a reasonable way, because my intuitions are still all applying way too high a level of implicit trust, which means that I’m effectively deluged with spam and buying everything.
So, I want to get away, physically, to buy myself a little breathing room and get my head screwed on straight. A cabin, somewhere where I am not implicitly obligated to engage in more than incidental contact with other human beings, and can live in a reasonable amount of comfort for a while (e.g. includes basic temperature control and running water) is the essential thing. Other desiderata that are nice to have but not essential include:
A high vantage point overlooking something, whether it be coastal cliffs, a mountainside, or something similar. One goes to a mountaintop to receive the law, so this feels aesthetically appropriate in a way that “a cabin in the woods” or something out in the more level desert does not.
A land line telephone I can use.
Drivable distance from the SF Bay Area or otherwise convenient for me to get to.
Internet access, either on-site or within an hour of the site.
Let me know if you have something like this to offer either free or for money, know of someone who does, or have advice beyond checking AirBnB for how to find something like this.
My best guess is that I should try this out for two weeks and then figure out what to do longer-term.
Simple consequentialist reasoning often appears to imply that you should trick others for the greater good. Paul Christiano recently proposed a simple consequentialist justification for acting with integrity:
I aspire to make decisions in a pretty simple way. I think about the consequences of each possible action and decide how much I like them; then I select the action whose consequences I like best.
To make decisions with integrity, I make one change: when I imagine picking an action, I pretend that picking it causes everyone to know that I am the kind of person who picks that option.
If I’m considering breaking a promise to you, and I am tallying up the costs and benefits, I consider the additional cost of you having known that I would break the promise under these conditions. If I made a promise to you, it’s usually because I wanted you to believe that I would keep it. So you knowing that I wouldn’t keep the promise is usually a cost, often a very large one.
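The quoted procedure is effectively an argmax with one extra cost term. Here is a minimal toy sketch of that amendment; all function names, utilities, and the transparency-cost numbers are hypothetical illustrations, not anything from the post:

```python
# Toy sketch of the "integrity" amendment to plain consequentialism
# described above. Numbers and names are made up for illustration.

def plain_consequentialist(actions, value_of_consequences):
    """Pick the action whose direct consequences are liked best."""
    return max(actions, key=value_of_consequences)

def with_integrity(actions, value_of_consequences, cost_if_known):
    """Same, but pretend that picking an action causes everyone to know
    you are the kind of person who picks it, and count that as a cost."""
    return max(actions, key=lambda a: value_of_consequences(a) - cost_if_known(a))

# Example: breaking a promise has a small direct gain, but a large cost
# once you imagine everyone knowing you'd break it under these conditions.
value = {"keep_promise": 0, "break_promise": 5}.get
known_cost = {"keep_promise": 0, "break_promise": 20}.get

actions = ["keep_promise", "break_promise"]
print(plain_consequentialist(actions, value))        # picks "break_promise"
print(with_integrity(actions, value, known_cost))    # picks "keep_promise"
```

The single added term is the whole change: the naive procedure takes the promise-breaking gain at face value, while the integrity version nets out the imagined reputational knowledge and flips the choice.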
Overall this seems like it’s on the right track – I endorse something similar. But it only solves part of the problem. In particular, it explains interpersonal integrity such as keeping one's word, but not integrity of character. Continue reading →
Our higher cognitive functions have two modes: a drive to bias nature towards certain outcomes, and an appreciation of structural symmetry in the arrangement of the universe. In standard three-part models of the soul, bias maps well onto the middle part. Symmetry maps well onto the "upper" part in ancient accounts, but not modern ones. This reflects a real change in how people think. It is a sign of damage: damage wrought on people's souls – especially among elites – by formal schooling and related pervasive dominance relations in employment. Continue reading →
When people talk about general intelligence in humans, they tend to talk about measured IQ. While a lot of variation in IQ is really just variation in brain health, and probably related to variation in general health, there are at least two distinct modes of general intelligence in humans: fluid intelligence and crystallized intelligence.
Fluid intelligence is pretty much anything you can use a spatial metaphor to think about, and is measured pretty directly by Raven's Progressive Matrices. It's used for puzzle-solving.
Crystallized intelligence, on the other hand, relies on your conceptual vocabulary. You can do analogical reasoning with it – so it lends itself to a fortiori style arguments.
I don't think it's just a coincidence that I know of two main ways people have discovered disjunctive, structural reasoning – once in geometry, and once in the courts. Continue reading →
If you don’t correct errors, you don’t get anything done, because you stay wrong. I don't think we do enough to reward saying oops.
Lately, I’ve been complaining about ways the EA community’s been papering over problems in ways that forgo this sort of learning. But while complaining is important, on its own it doesn’t offer any specific vision for how to do things. At the recent EA Global conference in Boston, I was reflecting with a friend on what sorts of positive norms I would like to see in the discourse.
One example of something I wish I saw more of is people publicly and very clearly saying, "we tried X, it didn’t work, so now we’re stopping.” Or, “I used to believe X, and as a result asked people to do Y, but now I don’t believe X anymore and don’t think Y is a particularly good use of resources.” People often invest a lot of social capital in their current beliefs and plans; admitting that you were wrong can cost you valuable social momentum and mean you have to start over. You might worry that people will associate you with wrongness. We need communities where, instead, clear admissions of error or failure are publicly acknowledged as signs of integrity and commitment to communal learning and shared model-building.
So I'm offering a prize. But first, let me give an example of the sort of thing we need to be praising more loudly more often. Continue reading →
Scott Alexander argues that Silicon Valley's reputation – that it's decayed into bubbles around nonsense products such as the San Francisco-based Juicero – is undeserved, and that there's still plenty of innovation around. But I think what's really going on is that Silicon Valley doesn't exist anymore. Continue reading →
It’s common to think that someone else is arguing in bad faith. In a recent blog post, Nate Soares claims that this intuition is both wrong and harmful:
I believe that the ability to expect that conversation partners are well-intentioned by default is a public good. An extremely valuable public good. When criticism turns to attacking the intentions of others, I perceive that to be burning the commons. Communities often have to deal with actors that in fact have ill intentions, and in that case it's often worth the damage to prevent an even greater exploitation by malicious actors. But damage is damage in either case, and I suspect that young communities are prone to destroying this particular commons based on false premises.
To be clear, I am not claiming that well-intentioned actions tend to have good consequences. The road to hell is paved with good intentions. Whether or not someone's actions have good consequences is an entirely separate issue. I am only claiming that, in the particular case of small high-trust communities, I believe almost everyone is almost always attempting to do good by their own lights. I believe that propagating doubt about that fact is nearly always a bad idea.
It would be surprising, if bad intent were so rare in the relevant sense, that people would be so quick to jump to the conclusion that it is present. Why would that be adaptive? Continue reading →
Among the kinds of people are the Actors and the Scribes. Actors mainly relate to speech as action that has effects. Scribes mainly relate to speech as a structured arrangement of pointers that have meanings.
I previously described this as a distinction between promise-keeping "Quakers" and impulsive "Actors," but I think this missed a key distinction. There's "telling the truth," and then there's a more specific thing that's more obviously distinct from even Actors who are trying to make honest reports: keeping precisely accurate formal accounts. This leaves out some other types – I'm not exactly sure how it relates to engineers and diplomats, for instance – but I think I have the right names for these two things now.
Everyone agrees that words have meaning; they convey information from the speaker to the listener or reader. That's all they do. So when I used the phrase “words have meanings” to describe one side of a divide between people who use language to report facts, and people who use language to enact roles, was I strawmanning the other side?
I say no. Many common uses of language, including some perfectly legitimate ones, are not well-described by "words have meanings." For instance, people who try to use promises like magic spells to bind their future behavior don't seem to consider the possibility that others might treat their promises as a factual representation of what the future will be like.
Some uses of language do not simply describe objects or events in the world, but are enactive, designed to evoke particular feelings or cause particular actions. Even when speech can only be understood as a description of part of a model of the world, the context in which a sentence is uttered often implies an active intent, so if we only consider the direct meaning of the text, we will miss the most important thing about the sentence.
Some apparent uses of language’s denotative features may in fact be purely enactive. This is possible because humans initially learn language mimetically, and try to copy usage before understanding what it’s for. Primarily denotative language users are likely to assume that structural inconsistencies in speech are errors, when they’re often simply signs that the speech is primarily intended to be enactive. Continue reading →