Category Archives: Cooperation

Request: Cabin [RESOLVED]

UPDATE: I'm currently staying in a cabin at a Quaker retreat center. This is basically the thing I needed. Thanks to everyone who reached out with suggestions or offers.

I’m currently coming to terms with just how much of human communication is marketing, like unto the Hobbesian war of all against all. I want to figure out a way for human beings to coordinate and create value when the dominant society is like that. That task is too big for me, so I need a team. I don’t know how to find the right people, but my best guess is that if I can clearly articulate what is needed, the right people have a good chance of recognizing my project as the one they want to be part of. But I can’t write about this in a reasonable way, because I can’t think about this in a reasonable way, because my intuitions are still all applying way too high a level of implicit trust, which means that I’m effectively deluged with spam and buying everything.

So I want to get away, physically, to buy myself a little breathing room and get my head screwed on straight. The essential thing is a cabin, somewhere I am not implicitly obligated to engage in more than incidental contact with other human beings, where I can live in a reasonable amount of comfort for a while (e.g. with basic temperature control and running water). Other desiderata that are nice to have but not essential include:

  • A high vantage point overlooking something, whether it be coastal cliffs, a mountainside, or something similar. One goes to a mountaintop to receive the law, so this feels aesthetically appropriate in a way that “a cabin in the woods” or something out in the more level desert does not.
  • A land line telephone I can use.
  • Within drivable distance of the SF Bay Area, or otherwise convenient for me to get to.
  • Inexpensive.
  • Internet access, either on-site or within an hour of the site.

Let me know if you have something like this to offer either for free or for money, know of someone who does, or have advice beyond checking Airbnb for how to find something like this.

My best guess is that I should try this out for two weeks and then figure out what to do longer-term.

Taking integrity literally

Simple consequentialist reasoning often appears to imply that you should trick others for the greater good. Paul Christiano recently proposed a simple consequentialist justification for acting with integrity:

I aspire to make decisions in a pretty simple way. I think about the consequences of each possible action and decide how much I like them; then I select the action whose consequences I like best.

To make decisions with integrity, I make one change: when I imagine picking an action, I pretend that picking it causes everyone to know that I am the kind of person who picks that option.

If I’m considering breaking a promise to you, and I am tallying up the costs and benefits, I consider the additional cost of you having known that I would break the promise under these conditions. If I made a promise to you, it’s usually because I wanted you to believe that I would keep it. So you knowing that I wouldn’t keep the promise is usually a cost, often a very large one.
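
To make the quoted rule concrete, here is a minimal sketch in Python. The action names, utility numbers, and the known_to_pick_cost term are hypothetical illustrations of the idea, not anything from Christiano's post:

    # A toy sketch with made-up numbers, just to show how the integrity
    # adjustment can change which action wins.

    def plain_value(action, direct_value):
        # Simple consequentialism: how much I like the direct consequences.
        return direct_value[action]

    def value_with_integrity(action, direct_value, known_to_pick_cost):
        # Same, minus the cost of everyone knowing I'm the kind of person
        # who picks this action.
        return direct_value[action] - known_to_pick_cost[action]

    direct_value = {"keep_promise": 5, "break_promise": 8}
    known_to_pick_cost = {"keep_promise": 0, "break_promise": 6}

    print(max(direct_value, key=lambda a: plain_value(a, direct_value)))
    # -> break_promise
    print(max(direct_value, key=lambda a: value_with_integrity(a, direct_value, known_to_pick_cost)))
    # -> keep_promise

The toy numbers are only there to show that the second rule reverses the first one's choice whenever the reputational term is large enough, as in the promise-breaking case.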

Overall this seems like it’s on the right track – I endorse something similar. But it only solves part of the problem. In particular, it explains interpersonal integrity such as keeping one's word, but not integrity of character. Continue reading

Against responsibility

I am surrounded by well-meaning people trying to take responsibility for the future of the universe. I think that this attitude – prominent among Effective Altruists – is causing great harm. I noticed this as part of a broader change in outlook, which I've been trying to describe on this blog in manageable pieces (and sometimes failing at the "manageable" part).

I'm going to try to contextualize this by outlining the structure of my overall argument.

Why I am worried

Effective Altruists often say they're motivated by utilitarianism. At its best, this leads to things like Katja Grace's excellent analysis of when to be a vegetarian. We need more of this kind of principled reasoning about tradeoffs.

At its worst, this leads to some people angsting over whether it's ethical to spend money on a cup of coffee when they might have saved a life, and others using the greater good as license to say things that are not quite true, socially pressure others into bearing inappropriate burdens, and make ever-increasing claims on resources without a correspondingly strong verified track record of improving people's lives. I claim that these actions are not in fact morally correct, and that people keep winding up endorsing those conclusions because they are using the wrong cognitive approximations to reason about morality.

Summary of the argument

  1. When people take responsibility for something, they try to control it. So, universal responsibility implies an attempt at universal control.
  2. Maximizing control has destructive effects:
    • An adversarial stance towards other agents.
    • Decision paralysis.
  3. These failures are not accidental, but baked into the structure of control-seeking. We need a practical moral philosophy to describe strategies that generalize better, and that benefit from the existence of other benevolent agents rather than treating them primarily as threats.

Continue reading

Dominance, care, and social touch

John Salvatier writes about dominance, care, and social touch:

I recently found myself longing for male friends to act dominant over me. Imagining close male friends putting their arms over my shoulders and jostling me a bit, or squeezing my shoulders a bit roughly as they come up to talk to me felt good. Actions that clearly convey ‘I’m in charge here and I think you’ll like it’.

I was surprised at first. After all, aren’t showy displays of dominance bad? I don’t think of myself as particularly submissive either.

But my longing started to make more sense when I thought about my high school cross country coach.

[...] Coach would walk around and stop to talk to individual students. As he came up to you, he would often put his hand on your shoulder or sidle up alongside you and squeeze the nape of your neck. He would ask you - How are you? How did the long run feel yesterday? What are you aiming for at the meet? You’d tell him, and he would tell you what he thought was good - Just shoot to have a good final kick; don’t let anyone pass you.

And it felt really good for him to talk to you like that. At least it did for me.

It was clear that you were part of his plans, that he was looking out for you and that he wanted something from you. And that was reassuring because it meant he was going to keep looking out for you.

I think there are a few things going on here worth teasing apart:

Some people are more comfortable with social touch than others, probably related to overall embodiment.

Some people are more comfortable taking responsibility for things that they haven't been explicitly tasked with and given affordances for, including taking responsibility for things affecting others.

Because people cowed by authority are likely to think they're not allowed to do anything by default, and being cowed by authority is a sort of submission, dominance is correlated with taking responsibility for tasks. (There are exceptions, like service submissives, or people who just don't see helpfulness as related to their dominance.)

Because the things that cause social ineptness also cause discomfort or unfamiliarity with social touch, and social ineptness tends to lower social status, comfort with and skill at social touch are correlated with high social status.

Personally, I don't like much casual social touch. Several years ago, the Rationalist community decided to try to normalize hugging, to promote bonding and group cohesion. It was correct to try this, given our understanding at the time. But I think it's been bad for me on balance; even after doing it for a few years, it still feels fake most of the time. I think I want to revert to a norm of not hugging people, in order to preserve the gesture for cases where I feel authentically motivated to do so, as an expression of genuine emotional intimacy.

I'm very much for the sort of caring where you proactively look after the other person's interests, outside the scope of what you've been explicitly asked to do - of taking it upon yourself to do things that need to be done. I just don't like connecting this with dominance or ego assertion. (I've accepted that I do need to at least inform people that I'm doing the thing, to avoid duplicated effort or allay their anxiety that it's not getting done.)

Sometimes, when I feel let down because someone close to me dropped the ball on something important, they try to make amends by submitting to me. This would be a good appeasement strategy if I mainly felt bad because I wanted them to assign me a higher social rank. But the thing I want is actually the existence of another agent in the world who is independently looking out for my interests. So when they respond by submitting, trying to look small and incompetent, I perceive them as shirking. My natural response to this kind of shirking is anger - but people who are already trying to appease me by submitting tend to double down on submission if they notice I'm upset at them - which just compounds the problem!

My main strategy for fixing this has been to avoid leaning on this sort of person for anything important. I've been experimenting with instead explicitly telling them I don't want submission and asking them to take more responsibility; this occasionally works a bit, but it's slow and frustrating and I'm not sure it's worth the effort.

I don't track my social status as a quantity much at all. A close friend once described my social strategy as projecting exactly enough status to talk to anyone in the room, but no more, and no desire to win more status. This may be how I come across inside social ontologies where status is a quantity everyone has and is important to interactions, but from my perspective, I just talk to people I want to talk to if I think it will be a good use of our time, and don't track whether I'm socially entitled to do so. This makes it hard for some people, who try to understand people through their dominance level, to read me and predict my actions. But I think fixing this would be harmful, since it would require me to care about my status. I care about specific relationships with individuals, reputation for specific traits and actions, and access to social networks. I don't want to care about dominating people or submitting to them. It seems unfriendly. It seems divergent.

I encourage commenting here or at LessWrong.

Bindings and assurances

I've read a few business books and articles that contrast national styles of contract negotiation. In some countries, such as the US, a contract is meant to be fully binding: if one party could predict that they would likely break the contract in the future, accepting that version of the contract anyway is seen as substantively and surprisingly dishonest. In other countries this is not seen as terribly unusual; a contract is just an initial guideline, to be renegotiated whenever incentives slip too far out of whack.

More generally, some people reward me for thinking carefully before agreeing to do costly things for them or making potentially big promises, and wording them carefully to not overcommit, because it raises their level of trust in me. Others seem to want to punish me for this because it makes them think I don't really want to do the thing or don't really like them. Continue reading

Humble Charlie

I saw a beggar leaning on his wooden crutch.
He said to me, "You must not ask for so much."
And a pretty woman leaning in her darkened door.
She cried to me, "Hey, why not ask for more?"

-Leonard Cohen, Bird on the Wire

In my series on GiveWell, I mentioned that my mother's friend Charlie, who runs a soup kitchen, gives away surplus donations to other charities, mostly ones he knows well. I used this as an example of the kind of behavior you might hope to see in a cooperative situation where people have convergent goals.

I recently had a chance to speak with Charlie, and he mentioned something else I found surprising: his soup kitchen decided not to accept donations online; they only take paper checks. Since they get enough money that way, they don't want to accumulate more money than they know how to use.

When I asked why, Charlie told me that it would be bad for donors to support a charity without having shown up in person to get a sense of what it does. Continue reading

Against neglectedness considerations

Effective Altruists talk about looking for neglected causes. This makes a great deal of intuitive sense. If you are trying to distribute food, and one person is hungry, and another has enough food, it does more direct good to give the food to the hungry person.

Likewise, if you are trying to decide on a research project, discovering penicillin might be a poor choice. We know that penicillin is an excellent thing to know about and has probably already saved many lives, but it’s already been discovered and put to common use. You’d do better discovering something that hasn’t been discovered yet.

My critique of GiveWell sometimes runs contrary to this principle. In particular, I argue that donors should think of crowding out effects as a benefit, not a cost, and that they should often be happy to give more than their “fair share” to the best giving opportunities. I ought to explain. Continue reading

GiveWell and the problem of partial funding

At the end of 2015, GiveWell wrote up its reasons for recommending that Good Ventures partially but not fully fund the GiveWell top charities. This reasoning seemed incomplete to me, and when I talked about it with others in the EA community, their explanations tended to switch between what seemed to me to be incomplete and mutually exclusive models of what was going on. This bothered me, because the relevant principles are close to the core of what EA is.

Somehow, a foundation that plans to move around ten billion dollars, relying on advice from GiveWell, isn't enough to get GiveWell's top charities fully funded. That's weird and surprising. The mysterious tendency to accumulate big piles of money and then not do anything with most of it seemed like a pretty important problem, and I wanted to understand it before trying to add more money to this particular pile.

So I decided to write up, as best I could, a clear, disjunctive treatment of the main arguments I’d seen for the behavior of GiveWell, the Open Philanthropy Project, and Good Ventures. Unfortunately, my writeup ended up being very long. I’ve since been encouraged to write a shorter summary with more specific recommendations. This is that summary. Continue reading

The humility argument for honesty

I have faith that if only people get a chance to hear a lot of different kinds of songs, they'll decide what are the good ones.

-Pete Seeger

A lot of the discourse around honesty has focused on the value of maintaining a reputation for honesty. This is an important reason to keep one's word, but it's not the only reason to have an honest intent to inform. Another reason is epistemic and moral humility. Continue reading