Author Archives: Benquo

Actors and scribes, words and deeds

Among the kinds of people are the Actors and the Scribes. Actors mainly relate to speech as action that has effects. Scribes mainly relate to speech as a structured arrangement of pointers that have meanings.

I previously described this as a distinction between promise-keeping "Quakers" and impulsive "Actors," but I think this missed a key distinction. There's "telling the truth," and then there's a more specific thing that's more obviously distinct from even Actors who are trying to make honest reports: keeping precisely accurate formal accounts. This leaves out some other types – I'm not exactly sure how it relates to engineers and diplomats, for instance – but I think I have the right names for these two things now.


Everyone agrees that words have meaning; they convey information from the speaker to the listener or reader. That's all they do. So when I used the phrase “words have meanings” to describe one side of a divide between people who use language to report facts, and people who use language to enact roles, was I strawmanning the other side?

I say no. Many common uses of language, including some perfectly legitimate ones, are not well-described by "words have meanings." For instance, people who try to use promises like magic spells to bind their future behavior don't seem to consider the possibility that others might treat their promises as a factual representation of what the future will be like.

Some uses of language do not simply describe objects or events in the world, but are enactive, designed to evoke particular feelings or cause particular actions. Even when speech can only be understood as a description of part of a model of the world, the context in which a sentence is uttered often implies an active intent, so if we only consider the direct meaning of the text, we will miss the most important thing about the sentence.

Some apparent uses of language’s denotative features may in fact be purely enactive. This is possible because humans initially learn language mimetically, and try to copy usage before understanding what it’s for. Primarily denotative language users are likely to assume that structural inconsistencies in speech are errors, when they’re often simply signs that the speech is primarily intended to be enactive.

Effective Altruism is self-recommending

A parent I know reports (some details anonymized):

Recently we bought my 3-year-old daughter a "behavior chart," in which she can earn stickers for achievements like not throwing tantrums, eating fruits and vegetables, and going to sleep on time. We successfully impressed on her that a major goal each day was to earn as many stickers as possible.

This morning, though, I found her just plastering her entire behavior chart with stickers. She genuinely seemed to think I'd be proud of how many stickers she now had.

The Effective Altruism movement has now entered this extremely cute stage of cognitive development. EA is more than three years old, but institutions age differently than individuals.

An OpenAI board seat is surprisingly expensive

The Open Philanthropy Project recently bought a seat on the board of the billion-dollar nonprofit AI research organization OpenAI for $30 million. Some people have said that this was surprisingly cheap, because the price in dollars was such a low share of OpenAI's eventual endowment: 3%.

To the contrary, this seat on OpenAI's board is very expensive, not because the nominal price is high, but precisely because it is so low.

If OpenAI hasn’t extracted a meaningful-to-it amount of money, then it follows that it is getting something other than money out of the deal. The obvious thing it is getting is buy-in for OpenAI as an AI safety and capacity venture. In exchange for a board seat, the Open Philanthropy Project is aligning itself socially with OpenAI, by taking the position of a material supporter of the project. The important thing is mutual validation, and a nominal donation just large enough to neg the other AI safety organizations supported by the Open Philanthropy Project is simply a customary part of the ritual.

By my count, the grant is larger than all the Open Philanthropy Project's other AI safety grants combined.

(Cross-posted at LessWrong.)

OpenAI makes humanity less safe

If there's anything we can do now about the risks of superintelligent AI, then OpenAI makes humanity less safe.

Once upon a time, some good people were worried about the possibility that humanity would figure out how to create a superintelligent AI before they figured out how to tell it what we wanted it to do.  If this happened, it could lead to literally destroying humanity and nearly everything we care about. This would be very bad. So they tried to warn people about the problem, and to organize efforts to solve it.

Specifically, they called for work on aligning an AI’s goals with ours - sometimes called the value alignment problem, AI control, friendly AI, or simply AI safety - before rushing ahead to increase the power of AI.

Some other good people listened. They knew they had no relevant technical expertise, but what they did have was a lot of money. So they did the one thing they could do - throw money at the problem, giving it to trusted parties to try to solve the problem. Unfortunately, the money was used to make the problem worse. This is the story of OpenAI.

Against responsibility

I am surrounded by well-meaning people trying to take responsibility for the future of the universe. I think that this attitude – prominent among Effective Altruists – is causing great harm. I noticed this as part of a broader change in outlook, which I've been trying to describe on this blog in manageable pieces (and sometimes failing at the "manageable" part).

I'm going to try to contextualize this by outlining the structure of my overall argument.

Why I am worried

Effective Altruists often say they're motivated by utilitarianism. At its best, this leads to things like Katja Grace's excellent analysis of when to be a vegetarian. We need more of this kind of principled reasoning about tradeoffs.

At its worst, this leads to some people angsting over whether it's ethical to spend money on a cup of coffee when they might have saved a life, and others using the greater good as license to say things that are not quite true, socially pressure others into bearing inappropriate burdens, and make ever-increasing claims on resources without a correspondingly strong verified track record of improving people's lives. I claim that these actions are not in fact morally correct, and that people keep winding up endorsing those conclusions because they are using the wrong cognitive approximations to reason about morality.

Summary of the argument

  1. When people take responsibility for something, they try to control it. So, universal responsibility implies an attempt at universal control.
  2. Maximizing control has destructive effects:
    • An adversarial stance towards other agents.
    • Decision paralysis.
  3. These failures are not accidental, but baked into the structure of control-seeking. We need a practical moral philosophy to describe strategies that generalize better, and that benefit from the existence of other benevolent agents rather than treating them primarily as threats.


Dominance, care, and social touch

John Salvatier writes about dominance, care, and social touch:

I recently found myself longing for male friends to act dominant over me. Imagining close male friends putting their arms over my shoulders and jostling me a bit, or squeezing my shoulders a bit roughly as they come up to talk to me felt good. Actions that clearly convey ‘I’m in charge here and I think you’ll like it’.

I was surprised at first. After all, aren’t showy displays of dominance bad? I don’t think of myself as particularly submissive either.

But my longing started to make more sense when I thought about my high school cross country coach.

[...] Coach would walk around and stop to talk to individual students. As he came up to you, he would often put his hand on your shoulder or sidle up alongside you and squeeze the nape of your neck. He would ask you - How are you? How did the long run feel yesterday? What are you aiming for at the meet? You’d tell him, and he would tell you what he thought was good - Just shoot to have a good final kick; don’t let anyone pass you.

And it felt really good for him to talk to you like that. At least it did for me.

It was clear that you were part of his plans, that he was looking out for you and that he wanted something from you. And that was reassuring because it meant he was going to keep looking out for you.

I think there are a few things going on here worth teasing apart:

Some people are more comfortable with social touch than others, probably related to overall embodiment.

Some people are more comfortable taking responsibility for things that they haven't been explicitly tasked with and given affordances for, including taking responsibility for things affecting others.

Because people cowed by authority are likely to think they're not allowed to do anything by default, and being cowed by authority is a sort of submission, dominance is correlated with taking responsibility for tasks. (There are exceptions, like service submissives, or people who just don't see helpfulness as related to their dominance.)

Because the things that cause social ineptness also cause discomfort or unfamiliarity with social touch, comfort with and skill at social touch are correlated with high social status.

Personally, I don't like much casual social touch. Several years ago, the Rationalist community decided to try to normalize hugging, to promote bonding and group cohesion. It was correct to try this, given our understanding at the time. But I think it's been bad for me on balance; even after doing it for a few years, it still feels fake most of the time. I think I want to revert to a norm of not hugging people, in order to preserve the gesture for cases where I feel authentically motivated to do so, as an expression of genuine emotional intimacy.

I'm very much for the sort of caring where you proactively look after the other person's interests, outside the scope of what you've been explicitly asked to do - of taking it upon yourself to do things that need to be done. I just don't like connecting this with dominance or ego assertion. (I've accepted that I do need to at least inform people that I'm doing the thing, to avoid duplicated effort or allay their anxiety that it's not getting done.)

Sometimes, when I feel let down because someone close to me dropped the ball on something important, they try to make amends by submitting to me. This would be a good appeasement strategy if I mainly felt bad because I wanted them to assign me a higher social rank. But, the thing I want is actually the existence of another agent in the world who is independently looking out for my interests. So when they respond by submitting, trying to look small and incompetent, I perceive them as shirking. My natural response to this kind of shirking is anger - but people who are already trying to appease me by submitting tend to double down on submission if they notice I'm upset at them - which just compounds the problem!

My main strategy for fixing this has been to avoid leaning on this sort of person for anything important. I've been experimenting with instead explicitly telling them I don't want submission and asking them to take more responsibility, and this occasionally works a bit, but it's slow and frustrating and I'm not sure it's worth the effort.

I don't track my social status as a quantity much at all. A close friend once described my social strategy as projecting exactly enough status to talk to anyone in the room, but no more, and no desire to win more status. This may be how I come across inside social ontologies where status is a quantity everyone has and that is important to interactions, but from my perspective, I just talk to people I want to talk to if I think it will be a good use of our time, and don't track whether I'm socially entitled to. This makes it hard for some people, who try to understand people through their dominance level, to read me and predict my actions. But I think fixing this would be harmful, since it would require me to care about my status. I care about specific relationships with individuals, reputation for specific traits and actions, and access to social networks. I don't want to care about dominating people or submitting to them. It seems unfriendly. It seems divergent.

I encourage commenting here or at LessWrong.

New York culture

Recently, a friend looking to support high-quality news sources by subscribing asked for recommendations. I noted that New York Magazine had been doing some surprisingly good journalism.

I'd sneered at that sort of magazine in the past – the sort that people mainly buy to see who's on the annual top doctors list or top restaurants list. But my sneering was inconsistent. I'd assumed that such an obviously gameable metric must already be corrupt – but when I lived in DC, Washingtonian Magazine's restaurant picks were actually pretty good, and my girlfriend found a really good doctor on the Top Doctors list. Nor was he an expensive concierge doctor – he took her fairly ordinary health insurance. I'd assumed there would be paid placement, but there wasn't. The methodology of such lists is actually fairly clever: they survey doctors, asking for each specialty – if you needed to see a doctor other than yourself in this specialty, whom would you go to? Now I live in Berkeley, and the last time I needed to see an ear doctor, I found one on the list just a few blocks from my house – and he was excellent.

But even after correcting for my prejudices, New York Magazine is special. They recently published some of the best science reporting I've seen – it's nominally about the Implicit Association Test, but it's really about the sorts of bad science that contributed to the replication crisis. Here are some excerpts I thought were especially clear:

Sufficiently sincere confirmation bias is indistinguishable from science

Some theater people at NYU wanted to demonstrate how gender stereotypes affected the 2016 US presidential election. So they decided to put on a theatrical performance of the presidential debates – but with the genders of the principals swapped. They assumed that this would show how much of a disadvantage Hillary Clinton was working under because of her gender. They were shocked to discover the opposite – audiences full of Clinton supporters, watching the gender-swapped debates, came away thinking that Trump was a better communicator than they'd thought.

The principals don't seem to have come into this with a fair-minded attitude. Instead, it seems to have been a case of "I'll show them!":

Salvatore says he and Guadalupe began the project assuming that the gender inversion would confirm what they’d each suspected watching the real-life debates: that Trump’s aggression—his tendency to interrupt and attack—would never be tolerated in a woman, and that Clinton’s competence and preparedness would seem even more convincing coming from a man.

Let's be clear about this. This was not epistemic even-handedness. This was a sincere attempt at confirmation bias. They believed one thing, and looked only for confirming evidence to prove their point. It was only when they started actually putting together the experiment that they realized they might learn the opposite lesson:

But the lessons about gender that emerged in rehearsal turned out to be much less tidy. What was Jonathan Gordon smiling about all the time? And didn’t he seem a little stiff, tethered to rehearsed statements at the podium, while Brenda King, plainspoken and confident, freely roamed the stage? Which one would audiences find more likeable?

What made this work? I think what happened is that they took their own beliefs literally. They actually believed that people hated Hillary because she was a woman, so a demonstration they were confident would show this clearly was, from their perspective, a fair test. Because of this, when things came out the opposite of the way they'd predicted, they noticed and were surprised, because they had actually expected the demonstration to work.

Bindings and assurances

I've read a few business books and articles that contrast national styles of contract negotiation. In some countries, such as the US, a contract is meant to be fully binding: if one of the parties could predict that it would likely break the contract in the future, accepting that version of the contract is seen as substantively and surprisingly dishonest. In other countries this is not seen as terribly unusual - a contract's just an initial guideline, to be renegotiated whenever incentives slip too far out of whack.

More generally, some people reward me for thinking carefully before agreeing to do costly things for them or making potentially big promises, and for wording them carefully so as not to overcommit, because it raises their level of trust in me. Others seem to want to punish me for this, because it makes them think I don't really want to do the thing or don't really like them.

Humble Charlie

I saw a beggar leaning on his wooden crutch.
He said to me, "You must not ask for so much."
And a pretty woman leaning in her darkened door.
She cried to me, "Hey, why not ask for more?"

-Leonard Cohen, Bird on the Wire

In my series on GiveWell, I mentioned that my mother's friend Charlie, who runs a soup kitchen, gives away surplus donations to other charities, mostly ones he knows well. I used this as an example of the kind of behavior you might hope to see in a cooperative situation where people have convergent goals.

I recently had a chance to speak with Charlie, and he mentioned something else I found surprising: his soup kitchen made a decision not to accept donations online. They only took paper checks. Since they get enough money that way, they don't want to accumulate more money than they know how to use.

When I asked why, Charlie told me that it would be bad for donors to support a charity without having shown up in person to get a sense of what it does.