Against neglectedness considerations

Effective Altruists talk about looking for neglected causes. This makes a great deal of intuitive sense. If you are trying to distribute food, and one person is hungry, and another has enough food, it does more direct good to give the food to the hungry person.

Likewise, if you are trying to decide on a research project, discovering penicillin might be a poor choice. We know that penicillin is an excellent thing to know about and has probably already saved many lives, but it’s already been discovered and put to common use. You’d do better discovering something that hasn’t been discovered yet.

My critique of GiveWell sometimes runs contrary to this principle. In particular, I argue that donors should think of crowding out effects as a benefit, not a cost, and that they should often be happy to give more than their “fair share” to the best giving opportunities. I ought to explain.

Neglectedness in the kitchen

Imagine a group of friends living together in a house. There are some tasks necessary for the maintenance of common spaces that benefit everyone, such as taking out the trash. There are also some tasks that each housemate disproportionately benefits from, such as tidying their own room.

I see that the dishes are all clean and neatly stacked in the cabinets. I decide not to wash any dishes, because this task is not neglected at all. Then I see that the kitchen garbage can is just about full. What are the considerations for and against taking out the garbage and replacing the bag myself?

Private and public interests

It takes me about a minute to do this public task. That implies an opportunity cost: I might instead use that minute to do some private task that would otherwise be neglected, like tidying my room. If I care more about the public benefit of the kitchen having an empty garbage can, then I’ll want to perform the public task. On the other hand, if I care more about the private benefit of my room’s increased tidiness, then I’ll want to perform the private task.

Of course, I might not only care about my personal well-being. I might also like and care about my housemates, and want them to have the experience of living in a kitchen with a garbage can that isn’t overflowing. In my private calculus, this simply gets counted towards the benefits to me of doing the public task.

I might even be a utilitarian with a principled objection to treating my private interests as more important than my housemates’ interests. But, of course, my tidier room might make me more productive in the long run, freeing up energy to do more public tasks in the future, so I’ll want to count that as well.

Neglectedness in expectation

If I reflect further, I might notice that if I’m not the only person who cares about the public task, I’m also not the only person who might do it. The public task is neglected as of now, but it is not prospectively neglected. What are the implications of this?

On one hand, this takes away some of the benefit to me of taking out the garbage. I have an interest in emptying the trash, but so do my housemates. However, while I care about the private task of tidying my room, they don’t. (Perhaps they don’t understand how much it would improve long-run outcomes by way of my productivity, and I expect it to be difficult to persuade them.) If I were to spend the minute taking out the garbage, the garbage would be taken out but my room wouldn’t get correspondingly tidied. On the other hand, if I were to spend that minute tidying my room, chances are that another housemate would notice that the garbage can was full and empty it on their own. So in the long run I could expect that both tasks would get done.
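
To make this trade-off concrete, here is a minimal expected-value sketch in Python. All of the numbers – the utilities and the probability that a housemate takes out the garbage – are illustrative assumptions of mine, not anything implied by the argument:

```python
# Minimal sketch of the one-minute trade-off. All numbers are illustrative
# assumptions, not measurements.

u_public = 1.0     # how much I value the garbage being taken out
u_private = 0.8    # how much I value a minute of tidying my room
p_housemate = 0.9  # chance a housemate empties the garbage if I don't

# Option A: I take out the garbage. The public task is done for sure,
# but my room stays untidied (no one else will do that for me).
ev_garbage = u_public

# Option B: I tidy my room. The private task is done for sure, and the
# public task still gets done with probability p_housemate.
ev_tidy = u_private + p_housemate * u_public

print(f"take out garbage: {ev_garbage:.2f}")  # 1.00
print(f"tidy my room:     {ev_tidy:.2f}")     # 1.70
# The public task is neglected now, but not prospectively neglected,
# so under these assumptions the private task wins.
```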

On the other hand, by spending a minute on a public task, I advance that minute to my housemates. If others follow the same heuristic, I should expect that this gives them another free minute to attend to public tasks that they have noticed and I haven’t, which results in more overall provision of public goods.

Moreover, I have an information advantage right now – I’ve found out that the trash needs emptying. If I simply take out the trash, I’ve paid the direct cost of taking out the trash, but avoided the overhead of communicating this information to others. If I were to avoid taking out the garbage, I’d still have paid the cost of noticing, and another housemate would have again paid the cost of noticing, and so on, enduring the inconvenience of an increasingly overfull garbage can, until someone finally bothered to empty it.

I might also have an advantage at some tasks over others. I don’t mind loading or unloading the dishwasher at all, but dislike hand-washing even one dish. Sometimes, as a houseguest, I’ve unloaded and loaded the dishwasher when I had no other reason to interact with the kitchen, because it was a cheap-for-me way to push my balance of trade with the household in a favorable direction. So, the case for my doing the task is stronger if it’s one where I have an advantage, and weaker if it’s one where I have a disadvantage in this way.

Similarly, I might be better at noticing when some particular tasks need to be done, implicitly reducing the cognitive overhead of actually doing them.

Habits, helping or hoarding

There are two classes of considerations here: convergent considerations, which point towards contributing to the commons, and divergent considerations, which point towards hoarding and withholding resources. Depending on which type of consideration dominates, you can get one of two very different situations.

Convergence

If convergent considerations dominate, you can get a stable situation where everyone’s trying to pay it forward. I have friendships like this, where we each feel like we’re getting a lot more than we’re giving. This makes it appealing to find new ways to help my friend; if I invest resources in a thing, I expect to get more back on net. I don’t have a lot of first-hand knowledge, but open source software projects seem like they’re often like this. So does Wikipedia, the online encyclopedia that relies on volunteer contributors and editors, and so does Genius, the song-lyric (and other things) annotation website.

A cooperative equilibrium is easier to achieve in conditions that feel like abundance and compounding returns, where the commons keep spitting out dividends, so it would actually be pretty difficult to put in so much that you neglected your private interests. It feels like being big relative to the size of the common problems. It’s also easier if the players have strong values in common, caring more about their convergent goals than their divergent ones.

Divergence

If divergent considerations dominate, you can get a stable situation where each person is trying to contribute as little as they can to public projects. This can lead to the tragedy of the commons: underinvestment in (or overextraction of) public goods.

A divergent dynamic doesn’t always destroy the ability to coordinate. Market economies, for example, manage pretty impressive feats of coordination relative to what one might naively expect, based only on mutually beneficial transactions plus emergent price signals. This is Adam Smith’s famous “invisible hand” effect. The finance business has a particularly strong transactional ethic, because no matter what action you're taking, there's always someone on the other side of the trade.

However, markets may depend on adherence to underlying business norms that are not supported by explicit incentives. As Matthew Yglesias points out (summarizing a paper by Shleifer and Summers), the private equity industry plausibly makes a fair amount of its profits from violating tacit agreements within firms:

The strong case against private equity comes from an old 1988 paper from Lawrence Summers and Andrei Shleifer titled "Breach of Trust in Hostile Takeovers" [...].

Their starting point is Ronald Coase's observation that the existence of companies ("firms") is left a bit mysterious by econ 101 reasoning. Why don't free agents just contract with one another for services as a means of economic cooperation? One reason, he argues convincingly, is that trying to spell out each and every obligation in contractual terms would be both laborious and absurdly inflexible. If you think about the way your workplace actually functions, people have roles and obligations vis-a-vis each other that are considerably richer and more nuanced than what's spelled out in legal documents. These roles evolve over time in various ways, but they also have some stability to them. The point is to create a space of collaborative endeavor that isn't dominated by constant lawyerly bickering.

Summers & Shleifer observe that this often creates substantial arbitrage opportunities. You can buy up a company and then exploit your formal rights as owner to the hilt completely ignoring inumerable [sic] tacit bargains and promises. Indeed, since you the new owner didn't actually make the promises you may feel that you're not bound by them.

The big socialized loss in the case of this kind of "breach of trust" scenario is loss of trust and economy-wide loss of ability of managers and workers to form flexible implicit arrangements with one another. Summers and Shleifer write that it's difficult to assess the systematic impact of this because to do so "we must analyze a world in which people trust each other less, workers are not loyal to firms, and spot market transactions are more common than they are at this time." That's a difficult task. But we do know something about what an economy like that looks like. It looks like Greece or Italy where firms are much smaller and less productive in part as a coping mechanism in a low-trust environment. Interestingly, since the time "Breach of Trust" was published, American firm size has gone into decline.

Some households try to solve the shirking problem by explicitly assigning and tracking chores. More generally, people often try to solve this sort of problem through central planning and command economies, with explicit punishments for shirking. Likewise, much of the market economy is clustered into firms in which people can build trust beyond the level of a single transaction.

A divergent equilibrium is more likely under conditions of scarcity, where the commons feel like an infinite effort pit, so you have to hold onto your stuff or you’ll just totally neglect your private interests. It’s also more likely among groups that don’t have a lot in common in terms of outlook and values.

Mixed situations

In practice, most situations aren't purely one or purely the other. We relate to each other with a combination of proactively cooperative modes of interaction, transactional modes where everyone looks out for their own interests, and customs like language where we mostly follow standard protocols because we have limited cognitive capacity and can't optimize every little thing we do.

Public spirit and strategic incapacity

But these situations are self-reinforcing. The convergent dynamic causes participants to interpret their goals as more convergent. If I see a way to promote the public good that I'm especially well-placed to take advantage of, I’m excited about it. As I become accustomed to seeing these efforts pay off and lead to more investment in public goods by others, it will be increasingly appealing for me to use the public good as a proxy goal for my own interests.

The divergent dynamic similarly causes participants to interpret their goals as more divergent.
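
To illustrate how both dynamics can be self-reinforcing, here’s a toy simulation in Python. The adjustment rule and every parameter are arbitrary assumptions of mine, not a model from the post or the literature; the point is only that a single rule yields two different stable outcomes depending on how much trust there is to start with:

```python
# Toy model of self-reinforcing convergent/divergent dynamics. All
# parameters are arbitrary; the point is the two distinct equilibria.

def run(initial_contribution, rounds=100, responsiveness=0.1, multiplier=1.2):
    """Average contribution drifts toward whichever equilibrium is nearer."""
    c = initial_contribution  # group's average contribution, in [0, 1]
    for _ in range(rounds):
        # Players contribute more when the commons has been paying off
        # relative to the private alternative (valued here at 0.5), less
        # when it hasn't.
        payoff_signal = multiplier * c - 0.5
        c = min(1.0, max(0.0, c + responsiveness * payoff_signal))
    return c

print(run(0.6))  # high-trust start: contributions climb to 1 (convergence)
print(run(0.3))  # low-trust start: contributions decay to 0 (divergence)
```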

In the case of the kitchen garbage, prospective neglectedness considerations might lead me to put off taking out the garbage, because I trust that someone else will do it. However, if my housemates are slobs, it starts looking more appealing to me to do it myself. Many women living with a male partner find themselves in this position. Men may do less housework because they care less about the housework – but it's not clear that caring less is in good faith. Perhaps they care less about the housework in part because they can get away with it.

In The Strategy of Conflict, Thomas Schelling gives the example of two people dropped off at different points on an island, who have to locate each other. They each have a two-way radio. The island is pretty big, and neither person wants to go through the hassle of trekking to the other person's location. What should you do to extract as much as you can from the situation? The answer is to destroy your radio's receiver or speaker, so that you can only send signals, not receive them. Then say you're going to stay at your location, and describe it as best you can. The other player has no choice but to come find you. They can't talk to you, so they can't threaten not to cooperate unless you do some of the walking.
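
Here’s a small sketch of why the broken radio works, with invented payoff numbers (Schelling’s example doesn’t come with any): once one player is credibly committed to staying put, the other’s best response flips from staying to walking.

```python
# Toy payoffs for the island game; the numbers are invented for illustration.
# Meeting is worth 10, walking the whole way costs 6, half the way costs 3.
payoffs = {  # (my_action, your_action) -> my_payoff
    ("stay", "stay"): 0,   # we never meet
    ("stay", "walk"): 10,  # you bear the whole cost of the trek
    ("walk", "stay"): 4,   # I bear the whole cost: 10 - 6
    ("walk", "walk"): 7,   # we meet in the middle: 10 - 3
}

def my_best_response(your_action):
    return max(("stay", "walk"), key=lambda mine: payoffs[(mine, your_action)])

# If you might walk, I'd rather stay (10 > 7). But once you destroy your
# receiver and announce you're staying, "stay" is your only credible action,
# and my best response is to walk (4 > 0).
print(my_best_response("walk"))  # stay
print(my_best_response("stay"))  # walk
```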

A related problem is Hollywood accounting. It's a commonplace observation among people in the motion picture business that it's a mistake to accept a percentage of a movie's net profit, and if you're the sort of person who might get paid a share of the money the movie earns, it's important to hold out for a percentage of the movie's gross receipts. This is because studios regularly manipulate the accounting to make it look like movies took a net loss, so people counting on a share of the profits get nothing, while it's harder to manipulate a simpler figure like revenue.

When a process like this becomes common, it can lead to cascading breakdowns of trust. Airline bankruptcies are common, probably at least in part because there are multiple relevant unions, and the unions have learned not to believe claims by management that there's no extra profit to be had, so they keep pushing until the airline actually goes broke.

When I consider contributing less to a public good in order to induce someone else to contribute more, I'm considering a sort of extortionary tactic. This gives others an incentive not to notice the opportunity to contribute to a public good, or not to be very good at it, as a precommitment strategy. If they don't have the opportunity to contribute, then I have to be the one to do so. This destroys important epistemic capital, because it leads to everyone trying to understand as little as possible about how their well-being depends on the maintenance of public goods, and to invest as little as possible in producing resources that can be extracted by others.

(Related: Eight Short Studies on Excuses)

Prospective neglectedness and donor strategy

How should a charitable donor think about prospective replaceability? It matters a lot which norm holds in the domain you're working in – the convergent one or the divergent one.

If you're in a convergence-norm domain, then when you see an opportunity to save others resources by taking care of something yourself – let's say, an unambiguously great program to fund – you should generally be inclined to do so, as plainly as possible, without asking others to pay their “fair share,” trusting that they'll do the same when they see opportunities. The things that need to get done will get done more reliably this way.

If the domain you're working in has a divergent dynamic, then this strategy will quickly be exhausted by an efficient market. Your willingness to contribute to public goods becomes an exploitable asset. You're giving others an incentive to manufacture apparently great giving opportunities until you're tapped out – and beyond, as they compete to redirect your funding.

It's obvious how this could be bad, if participants can mislead you about how good their program is. Charity employees don't necessarily have an overt intention to lie, but people working for charities have an incentive to notice things that show that the charity is cost-effective, and fail to notice things in the other direction. One of the important services a charity evaluator like GiveWell provides is to check cost-effectiveness claims, so we can have a better idea what we're getting for our money.

But even if we somehow manage to get accurate information about the value of the interventions, we're not necessarily out of trouble. Instead, we then give those funding the most valuable public programs an incentive to withdraw funding. By withdrawing funding, they create an actually high value-for-money giving opportunity for us to fund.

This dynamic is not inevitable. But it is not unlikely. So we should be on the lookout for information that indicates whether we're working in a convergent or a divergent domain. If we're in a divergent domain, then the impact per dollar of the apparent marginal giving opportunity will be an overly optimistic measurement of the value of adding additional funding sources.

In 2015, GiveWell believed that Good Ventures should not commit to fully funding the top charities, and moreover recommended that Good Ventures commit to a fixed level of funding, in large part because doing otherwise might discourage other donors. This is one such sign.

(Discussion on LessWrong as well as here in the comments.)

15 thoughts on “Against neglectedness considerations”

  1. Romeo Stevens

    Consider comparative advantage. OpenPhil rightly thinks that they will, by virtue of hiring a bunch of researchers and working on it full time, have more information about good giving opportunities than the average EtG EA. If they crowd out EtG donors from the high visibility giving opportunities, it seems likely that it is not the case that those donors will then go hunting for better opportunities, but that their motivation to give will actually decrease substantially.

    1. Benquo Post author

      it seems likely that it is not the case that those donors will then go hunting for better opportunities, but that their motivation to give will actually decrease substantially.

      This is plausible. I don't think Open Phil is crazy to think it's operating in a domain with a divergent dynamic. I do think it's crazy that this hasn't been a primary focus of discussion in EA for the last few years, and that people have been going on as though it weren't a huge problem. GiveWell was right to promote the idea that "coordination theory" is relevant.

      In general, thinking that your collaborators are not as clever or good as you is a divergent force. Sometimes it also happens to be true.

  2. Romeo Stevens

    After thinking about it for a bit longer, I notice I'm confused. If I imagine myself as OpenPhil, and I believe that my last dollar will be of higher use than GiveWell's marginal dollar but I also have some uncertainty about this, my giving to GiveWell might come to be dominated by other considerations, such as optimal signaling to EAs and prospective EAs, and how much money is left over to fulfill foundation financial requirements after I have distributed as much as I can to the pilot projects I am running during the year. This would predict that once I discover a promising giving opportunity, I will decrease my giving to GiveWell. If I know that this is true in advance, it further implies that I should not make GiveWell top charities dependent on an unpredictable giving source (whiplash funding effects).

    1. Romeo Stevens

      (and it feels like I'm just rethinking/recombining points made in your original longer post on crowding out, because I haven't thought them all through.)

    2. Benquo Post author

      This would predict that once I discover a promising giving opportunity I will decrease my giving to GiveWell.

      Seems to me like you're pretty likely to think you're in one of these worlds:

      1. Open Phil won't come up with enough superior giving opportunities to materially cut into its GiveWell Top Charities budget for a long time (so it should basically not be worried about this, since the local future's so uncertain).
      2. There are enough superior giving opportunities that Open Phil shouldn't fund GiveWell at all (except to alleviate whiplash) because every dollar spent on Top Charities is better spent on opportunity cost.

      Unless some pretty fine-tuned assumptions turn out to be true.

      The second one seems to be Open Phil's current position, modulo some "worldview diversification" stuff that I claim doesn't imply what Open Phil thinks it implies.

      1. Romeo Stevens

        I don't see a good way to precommit to alleviating whiplash in 2. If your decision criteria are exposed, then other agents will optimize their behavior to pull your levers. This isn't necessarily bad; if you do a good job of aligning incentives, it means they do the sorts of research you want done. Outsize impact comes not just from pulling the rope sideways, but from pulling sideways hard when there seem to be stable equilibria on each side. I.e., you invest with a fixed cost and then the good thing keeps happening. This is very different from forming an organization optimized for the governance of continuous flows. This is a fairly Thiel- and Munger-esque point. OpenPhil's impact is likely to be dominated by a single action, or a very small set of actions. If this is the case, their capital should be deployed exploring the space of possible levers they can pull and how to pull them skillfully, so that they can pull hard at the appropriate juncture.

  3. Zvi Mowshowitz

    [This comment attempts to take the key elements of the model above as a given]

    Even if you are a CDT agent, doesn't this model imply that you should heavily consider your decisions' impact on what type of dynamic is ongoing, encouraging you to act more as if you are in a convergent domain (and to signal that to others)?

    Or to go further, using LDT (logical decision theory, including TDT/UDT) rather than CDT, asking to what extent other people's decisions and impressions of what dynamic they are in are related to yours, and using that to shift towards acting more as if you are in a convergent domain?

    Note that this post's model implicitly assumes that others are effectively LDT agents, since they make their actions correlated with their anticipation of the actions of others in their domain and so on, but seems to act as if the person thinking about what to do should be a CDT agent instead and that their actions will not impact that dynamic. We should, I would presume, try to be at least as smart as the LDT agents we see, rather than falling back on CDT, and at least strongly consider cooperating even if they currently seem to be in a defection dynamic, in order to change that dynamic to a convergent one (or to turn out to be wrong in our observation!).

    If we are someone as big as Open Phil (Is it really Phil and not Fill? Come on!) this seems especially true. Open Phil is sufficiently influential as an agent here that it seems especially wrong for them to be acting like CDT agents - other people in EA's models of OP's current and future actions are changing those EA people's behaviors, often quite a lot. It could even be said that OP is, in effect, two-boxing, and sees a divergent dynamic because others anticipate OP behaving in a divergent fashion.

    You commented above that "coordination theory" is being neglected, but that seems like a special case of decision theory being neglected. A lot of EA issues seem like they stem from people implicitly following CDT in places where CDT loses.

    1. Benquo Post author

      Doesn't this model imply that you should heavily consider your decisions' impact on what type of dynamic is ongoing, encouraging you to act more as if you are in a convergent domain (and to signal that to others)?

      That's what I'd meant to imply. This post was an attempt to explain the "prisoner's dilemma" dynamic I mentioned here; I also very briefly summarized it here. I think that in a 2-player game, the dynamic here is just the iterated prisoner's dilemma, where "cooperate" can get you the payoff of massive gains from logical trade, and "defect" leads to wasting huge amounts of time and attention on a pie-splitting exercise. This was nonobvious enough to others that it seemed worth writing up.

      You're correctly noting that this - and in general what's shaping up to be my "consequentialist microfoundations of morality" series - amounts to an argument for why a CDT agent ought to self-modify to an LDT agent, and why, based on CDT considerations, a human ought to behave like an LDT agent in many circumstances.

  4. Fluttershy

    For starters, I generally agree with your conclusion, defined as everything stated under "Prospective neglectedness and donor strategy". Here's a quote from earlier that I want to unpack:

    > When I consider contributing less to a public good in order to induce someone else to contribute more, I'm considering a sort of extortionary tactic. This gives others an incentive not to notice the opportunity to contribute to a public good, or not to be very good at it, as a precommitment strategy... This destroys important epistemic capital because it leads to everyone trying to understand as little as possible how their well-being depends on the maintenance of public goods, and invest as little as possible in producing resources that can be extracted by others.

    This seems like an excellent description of how some people act in relation to social resources, especially in the EA/LW Bay Area community. One common way for this to happen is for someone who is dominating a conversation to act like they're quite unaware of the harm they're doing to others by being dominant, and more generally to claim that e.g. they can't be expected to do better since they don't understand status games.

    My personal view is that "not understanding social things as a way of extorting value" is also a common way for males to extort value out of romantic/sexual relationships. I think I'd find fully explaining this exhausting, but to gesture in the right direction, I'd say that offenses of varying severity, such as general clumsiness and approaching someone despite unreceptive body language, repeatedly badgering someone for dates/sex, or ignoring no-contact orders, fall into this category. From this view, it is actually sensible to socially punish initiators for making these sorts of mistakes, which are costly to those being approached, even when the initiators have "plausible deniability", because it's very easy for people to lie to themselves about how much they understand socially. Cue part of the reason why some of us get annoyed at "entitled" people and "nice guys™" and their defenders.

    1. Benquo Post author

      I think we need better language for responding to this sort of thing as one would to a finger slip in the iterated prisoner's dilemma or related games. There should be penalties, but they should be proportionate – it's wrong either to rule out bad faith on priors, or to jump to that conclusion on the first offense. This definitely means some people not acting in bad faith will be punished anyway, so the punishment needs to be calibrated for that.

