Category Archives: Philosophy

On the fetishization of money in Galt’s Gulch

Ayn Rand’s Atlas Shrugged is set in a world in which the death dance of capitalism has reached its final stages, the state itself becoming an instrument of direct appropriation of surplus value generated by the workers. As industrialists become aware of the extractive nature of the process in which they are participating, one by one, they convert to the radical anarchism of an agitator named John Galt,* and “go on strike,” retreating to a utopian community hidden in the mountains of Colorado: Galt’s Gulch.

In Galt’s Gulch, resources are allocated, through an informal process, to whoever can use them most productively; since everyone can see how their interests converge, levels of trust are high, and hoarding and shirking are basically nonproblems. People pick up whatever tasks seem needed, regardless of their profession or the ability such tasks might give them to extract rents.

This raises the obvious question: Why does anyone use money in Galt’s Gulch? Continue reading

Why I am not a Quaker (even though it often seems as though I should be)

In the past year, I have noticed that the Society of Friends (also known as the Quakers) arrived at the right answer, long before I or most people did, on a surprising number of things across a surprising range of domains. And yet, I do not feel inclined to become one of them. Giving credit where credit is due is a basic part of good discourse, so I feel that I owe an explanation.

The virtues of the Society of Friends are the virtues of liberalism: they cultivate honest discourse and right action, by taking care not to engage in practices that destroy individual discernment. The failings of the Society of Friends are the failings of liberalism: they do not seem to have the organizational capacity to recognize predatory systems and construct alternatives.

Fundamentally, Quaker protocols seem like a good start, but more articulated structures are necessary, especially more closed systems of production. Continue reading

Taking integrity literally

Simple consequentialist reasoning often appears to imply that you should trick others for the greater good. Paul Christiano recently proposed a simple consequentialist justification for acting with integrity:

I aspire to make decisions in a pretty simple way. I think about the consequences of each possible action and decide how much I like them; then I select the action whose consequences I like best.

To make decisions with integrity, I make one change: when I imagine picking an action, I pretend that picking it causes everyone to know that I am the kind of person who picks that option.

If I’m considering breaking a promise to you, and I am tallying up the costs and benefits, I consider the additional cost of you having known that I would break the promise under these conditions. If I made a promise to you, it’s usually because I wanted you to believe that I would keep it. So you knowing that I wouldn’t keep the promise is usually a cost, often a very large one.

Overall this seems like it’s on the right track – I endorse something similar. But it only solves part of the problem. In particular, it explains interpersonal integrity such as keeping one's word, but not integrity of character. Continue reading

Against responsibility

I am surrounded by well-meaning people trying to take responsibility for the future of the universe. I think that this attitude – prominent among Effective Altruists – is causing great harm. I noticed this as part of a broader change in outlook, which I've been trying to describe on this blog in manageable pieces (and sometimes failing at the "manageable" part).

I'm going to try to contextualize this by outlining the structure of my overall argument.

Why I am worried

Effective Altruists often say they're motivated by utilitarianism. At its best, this leads to things like Katja Grace's excellent analysis of when to be a vegetarian. We need more of this kind of principled reasoning about tradeoffs.

At its worst, this leads to some people angsting over whether it's ethical to spend money on a cup of coffee when they might have saved a life, and others using the greater good as license to say things that are not quite true, socially pressure others into bearing inappropriate burdens, and make ever-increasing claims on resources without a correspondingly strong verified track record of improving people's lives. I claim that these actions are not in fact morally correct, and that people keep winding up endorsing those conclusions because they are using the wrong cognitive approximations to reason about morality.

Summary of the argument

  1. When people take responsibility for something, they try to control it. So, universal responsibility implies an attempt at universal control.
  2. Maximizing control has destructive effects:
    • An adversarial stance towards other agents.
    • Decision paralysis.
  3. These failures are not accidental, but baked into the structure of control-seeking. We need a practical moral philosophy to describe strategies that generalize better, and that benefit from the existence of other benevolent agents rather than treating them primarily as threats.

Continue reading

New York culture

Recently, a friend looking to support high-quality news sources by subscribing asked for recommendations. I noted that New York Magazine had been doing some surprisingly good journalism.

I'd sneered at that sort of magazine in the past – the sort people mainly buy to see who's on the annual top doctors list or top restaurants list. But my sneering was inconsistent. I'd assumed that such an obviously gameable metric must already be corrupt – but when I lived in DC, Washingtonian Magazine's restaurant picks were actually pretty good, and my girlfriend found a really good doctor on its Top Doctors list. Nor was he an expensive concierge doctor – he took her fairly ordinary health insurance. I'd assumed there would be paid placement, but there wasn't. The methodology of such lists is actually fairly clever: the magazine surveys doctors, asking in each specialty, “If you needed to see a doctor in this specialty other than yourself, whom would you go to?” Now I live in Berkeley, and the last time I needed to see an ear doctor, I found one on the list just a few blocks from my house – and he was excellent.

But even after correcting for my prejudices, New York Magazine is special. They recently published some of the best science reporting I've seen – it's nominally about the Implicit Association Test, but it's really about the sorts of bad science that contributed to the replication crisis. Here are some excerpts I thought were especially clear: Continue reading

Bindings and assurances

I've read a few business books and articles that contrast national styles of contract negotiation. In some countries, such as the US, a contract is meant to be fully binding: if one of the parties could predict that they will likely break the contract in the future, accepting that version of the contract is seen as substantively and surprisingly dishonest. In other countries this is not seen as terribly unusual – a contract is just an initial guideline, to be renegotiated whenever incentives slip too far out of whack.

More generally, some people reward me for thinking carefully before agreeing to do costly things for them or making potentially big promises, and wording them carefully to not overcommit, because it raises their level of trust in me. Others seem to want to punish me for this because it makes them think I don't really want to do the thing or don't really like them. Continue reading

Humble Charlie

I saw a beggar leaning on his wooden crutch.
He said to me, "You must not ask for so much."
And a pretty woman leaning in her darkened door.
She cried to me, "Hey, why not ask for more?"

-Leonard Cohen, Bird on the Wire

In my series on GiveWell, I mentioned that my mother's friend Charlie, who runs a soup kitchen, gives away surplus donations to other charities, mostly ones he knows well. I used this as an example of the kind of behavior you might hope to see in a cooperative situation where people have convergent goals.

I recently had a chance to speak with Charlie, and he mentioned something else I found surprising: his soup kitchen decided not to accept donations online, only paper checks. They get enough money that way, and they don't want to accumulate more money than they know how to use.

When I asked why, Charlie told me that it would be bad for donors to support a charity when they haven't shown up in person to get a sense of what it does. Continue reading

Against neglectedness considerations

Effective Altruists talk about looking for neglected causes. This makes a great deal of intuitive sense. If you are trying to distribute food, and one person is hungry, and another has enough food, it does more direct good to give the food to the hungry person.

Likewise, if you are trying to decide on a research project, discovering penicillin might be a poor choice. We know that penicillin is an excellent thing to know about and has probably already saved many lives, but it’s already been discovered and put to common use. You’d do better discovering something that hasn’t been discovered yet.

My critique of GiveWell sometimes runs contrary to this principle. In particular, I argue that donors should think of crowding out effects as a benefit, not a cost, and that they should often be happy to give more than their “fair share” to the best giving opportunities. I ought to explain. Continue reading

Between honesty and perjury

I've promoted Effective Altruism in the past. I will probably continue to promote some EA-related projects. Many individual EAs are well-intentioned, talented, and doing extremely important, valuable work. Many EA organizations have good people working for them, and are doing good work on important problems.

That's why I think Sarah Constantin’s recent writing on Effective Altruism’s integrity problem is so important. If we are going to get anything done in the long run, we have to have reliable sources of information. This doesn't work unless we call out misrepresentations and systematic failures of honesty, and unless these concerns are taken seriously.

Sarah's post is titled “EA Has A Lying Problem.” Some people think this is overstated. This is an important topic to be precise on – the whole point of raising these issues is to make public discourse more reliable. For this reason, we want to avoid accusing people of things that aren’t actually true. It’s also important that we align incentives correctly. If dishonesty is not punished, but admitting a policy of dishonesty is, this might just make our discourse worse, not better.

To identify the problem precisely, we need language that can distinguish making specific assertions that are not factually accurate, from other conduct that contributes to dishonesty in discourse. I'm going to lay out a framework for thinking about this and when it's appropriate to hold someone to a high standard of honesty, and then show how it applies to the cases Sarah brings up. Continue reading