An OpenAI board seat is surprisingly expensive

The Open Philanthropy Project recently bought a seat on the board of the billion-dollar nonprofit AI research organization OpenAI for $30 million. Some people have said that this was surprisingly cheap, because the price in dollars was such a low share of OpenAI's eventual endowment: 3%.

To the contrary, this seat on OpenAI's board is very expensive, not because the nominal price is high, but precisely because it is so low.

If OpenAI hasn’t extracted a meaningful-to-it amount of money, then it follows that it is getting something other than money out of the deal. The obvious thing it is getting is buy-in for OpenAI as an AI safety and capacity venture. In exchange for a board seat, the Open Philanthropy Project is aligning itself socially with OpenAI, by taking the position of a material supporter of the project. The important thing is mutual validation, and a nominal donation just large enough to neg the other AI safety organizations supported by the Open Philanthropy Project is simply a customary part of the ritual.

By my count, the grant is larger than all the Open Philanthropy Project's other AI safety grants combined.

(Cross-posted at LessWrong.)

20 thoughts on “An OpenAI board seat is surprisingly expensive”

  1. Aceso Under Glass

    [Full Disclosure: I'm friends with the strongest link between Open AI and OPP]
    I think you're focusing too much on social reality here, and assuming a zero-sum game without proving it. Here are some scenarios in which both orgs are great, each equally concordant with the evidence:
    * Open AI would have given OPP a seat for free because they value their input, but didn't want to explain to a bunch of other organizations why *they* couldn't have a seat. Pretending to charge admission makes it easier to tell them no.
    * Open AI and OPP are mostly aligned but wanted to fund a specific project they thought Open AI was neglecting, and there are good reasons to keep the nature of the project secret.
    * OPP really wanted to spend $30mil on AI and no other organization could absorb that amount.

    Is there any evidence Open AI needed legitimization by OPP? They're fully funded, have famous founders, and are getting all the talent they want.

    I especially take exception with the idea that supporting one org is implicitly devaluing others. That is a thing that can happen, but the attitude that it's Occam's Razor and needs no further justification is really toxic and retards progress. Note that the accusation doesn't have these negative externalities if you provide evidence, even if the evidence isn't convincing to all people. The point is that you treat it as an aberration that requires evidence, not a default.

    I know putting the burden of proof on the accuser can be inhibitory and that can be bad. In this case what I care about is not that you criticized Open AI or OPP, but that you implicitly endorsed the social reality without reference to object reality.

    1. Benquo Post author

      > Open AI and OPP are mostly aligned but wanted to fund a specific project they thought Open AI was neglecting, and there are good reasons to keep the nature of the project secret.

      Anyone can claim to have secret plans, and people seem astonishingly willing to attribute secret plans (often different people confidently asserting incompatible secret plans) to Open Phil. I tried to address some of that here, but I just don't have the time to go through that exercise again. I'm only considering the explanations Open Phil actually gives.

    2. Benquo Post author

      > Open AI would have given OPP a seat for free because they value their input, but didn't want to explain to a bunch of other organizations why *they* couldn't have a seat. Pretending to charge admission makes it easier to tell them no.

      It seems like most of this is probably true. I'm uncertain what "value their input" really means here but it does seem like Holden's going to invest a pretty substantial amount of his time, which seems like an improvement for OpenAI. But that's also really expensive from Holden's point of view! Overall this doesn't seem like an alternative explanation so much as a fleshing out of a component of mine.

    3. Benquo Post author

      > OPP really wanted to spend $30mil on AI and no other organization could absorb that amount.

      If I believed this explanation, I'd have to disbelieve this part of the grant writeup:

      > We expect the primary benefits of this grant to stem from our partnership with OpenAI, rather than simply from contributing funding toward OpenAI’s work. While we would also expect general support for OpenAI to be likely beneficial on its own, the case for this grant hinges on the benefits we anticipate from our partnership, particularly the opportunity to help play a role in OpenAI’s approach to safety and governance issues.

      If it turned out that OpenAI had a good $30 million AI safety program that was hard to fund due to internal politics, and that would have been much harder outside OpenAI, and this grant got it funded, I'd be very pleasantly surprised, except for the part where the grant writeup would have been a gross misrepresentation.

      1. Benquo Post author

        I think it's worth working out under what circumstances generating alternative stories for why someone might have done something is the right move. It seems important for not jumping to conclusions - generally people have reasons for doing what they're doing - but if the universal response for hard-to-understand behavior is to generate some story on which it makes sense, is coherent, and is cooperative, then it's hard to uphold common standards of behavior.

        It's striking to me that most alternative hypotheses I've seen so far - not just yours, Kelsey's as well, and other private conversations - assume that someone's substantively misleading the public about this deal, actively, by omission, or by implication. I think it's worth considering hypotheses involving deception, but it's also important to track what people are actually saying.

  2. Alyssa Vance

    @Aceso I'm pretty sure several Hacker News commenters said "oh this is a total slap in the face to MIRI, who they only gave 1/60th as much" or words to that effect. I unfortunately can't link them because they seemed to have been flagged or deleted.

  3. Alyssa Vance

    @Ben: I can't think of anyone, either friends or Internet commenters, who has said "oh OpenAI is working with OpenPhil, that means they must be really serious about safety", or anything in that general ballpark. I'm pretty sure that Ilya, Holden and Dario wouldn't base a major grant on people with such opinions in a world where those people don't exist.

    1. Benquo Post author

      Alyssa, I'm not so much worried about what people now, who have already been thinking about AI safety, will think. I'm worried about a year from now. If someone deciding where to put resources to help with AI safety sees OpenAI funded by Open Phil as part of Open Phil's AI safety focus, they'll interpret that as an endorsement. If in addition Open Phil has the status it has now as the default place for EAs to give money, and it's funding the other AI safety orgs like MIRI, and they also haven't spoken out clearly about OpenAI's problems, then they'll interpret that as a consensus.

  4. Kelsey Piper

    Seconding Elizabeth that I think it's inaccurate to treat everything as entirely social maneuvering in the way you're doing here, and adding more possibilities consonant with the evidence (with no private information):

    * Open AI has a billion in funding *pledged*, but that doesn't necessarily translate to anywhere near that amount in their bank account at present, and $30mil could easily be a more significant share of money concretely and short-term available.
    * Open AI has several priorities internally and a grant+ board seat is a compromise that allows people with different priorities in AI risk to all be on board with the partnership.
    * The money is understood to be a trial of the value of this collaboration; if it looks valuable in three years, OPP will continue funding at the $10mil/year level or possibly higher, making the expected value of the partnership greater than $30mil.

    1. Benquo Post author

      If this is actually a large amount of money relative to OpenAI's true funding, then my prior objection holds - substantially scaling up an organization with a mission of increasing AI risk doesn't seem like a good way to promote AI safety. If Open Phil wants to learn about funding AI safety research it can just try to fund its own AI safety research.

      1. Aceso Under Glass

        > If this is actually a large amount of money relative to OpenAI's true funding, then my prior objection holds - substantially scaling up an organization with a mission of increasing AI risk doesn't seem like a good way to promote AI safety

        You're saying this like it disproves Kelsey's point. If OPP is supporting an organization that increases catastrophic risk, it's a bad idea whether they got a good price on it or not.

        And I still haven't seen an answer to my and Kelsey's points that this post treats social reality as the only reality, and that doing so without justification is harmful even if you are correct.

        1. Benquo Post author

          My response definitely doesn't disprove Kelsey's point, which seems pretty plausible. I just want to make sure we're all tracking the relevant disjunction (either the donation materially scales up OpenAI or it doesn't).

          I've thought a bit about your social reality objection, and I think I understand what you mean now. I very much agree that social reality isn't the only thing. For instance, if OpenAI wanted Holden's advice, so they asked him to come over and talk to them, and Holden said OK and did, that would just be knowledge transfer and I wouldn't presume that this sort of influence has large hidden costs to Holden.

          But that's very different from becoming a donor and getting a board seat, which resembles much more a transactional thing where in exchange for some sort of buy-in one gets to do some of the steering. In particular, even a nominal donation suggests that a substantial part of the value-add for OpenAI is the visible support of the Open Philanthropy Project, not just Holden's advice. It seems like the Open Philanthropy Project's model is that OpenAI is undervaluing Holden's advice, so by doing the buy-in transaction in social reality, the Open Philanthropy Project can use its steering power to move objective reality in a better direction.

  5. Fluttershy

    I generally agree with Ben's analysis of the social impact & plausible intent of OPP's grant to OpenAI, but I'm strongly undecided on whether having a social impact of that nature is a positive or a negative thing in this particular case.

    Consider OPP's observation that its grant to MIRI came with the cost of OPP having to deal with communication difficulties. It's unclear what this actually means, but one interpretation is that OPP views socially aligning itself with MIRI as being costly because of the verbally aggressive behavior of Eliezer in both personal and professional contexts. (One notices that it would be hard for OPP to express that this was the case if it was, in fact, the case.) My best guess at this point is that OPP finds this sort of behavior harmful mainly via the mechanism that verbally aggressive figures who support AI safety research tend to drive capable researchers away from working usefully on the problem.

    If this is the case, the OpenAI grant starts to look like a better idea from OPP's perspective; even in the worlds where they think MIRI is doing much more valuable work than anyone else, they might make a medium-sized grant to MIRI for the direct impact value, and then larger grants to other ostensibly AI safety oriented research groups for the social value that would come from dis-endorsing MIRI (and consequently, Eliezer's social actions) in order to promote AI safety as a field that was actually worth taking seriously.

    Regardless of whether one of OPP's main motives is to condemn Eliezer's verbally aggressive social moves while affirming the importance of AI safety research, OPP's actions are themselves socially aggressive in relation to MIRI. However, I intuit that this is less of a problem than it sounds like it might be, since in some sense OPP is "defecting" with social aggression towards MIRI only after MIRI has, in OPP's view, already "defected" with social aggression towards something OPP cares about (the extent to which AI safety research is taken seriously).

    I need to run and didn't have time to proofread, but will be checking back in on this thread later.

    1. Aceso Under Glass

      A thing I think has been present in the rationalist/EA AI safety world: they were worried about this before it was cool. They paid a high social price for worrying about it. And now the concept is getting mainstream recognition, but instead of getting recognized as visionaries and showered with resources, they're still regarded as weirdos, and the resources are going to more socially acceptable people who dismissed the problem three years ago. That *sucks*. That sucks *a lot*. Being told you were losing the game because you played poorly, and then seeing the rules change as soon as you should be winning is incredibly unpleasant.

      That social pain is almost entirely divorced from whether this is the best action to prevent dangerous AI. It is entirely possible that the socially charming latecomers will be more effective, either because they are socially charming or because being charming is associated with other good traits.

      (This doesn't answer your belief that Open AI is bad for AI Risk. But if that's true then it stands on its own without picking apart the social reality).

      1. Benquo Post author

        I agree that these are logically distinct. I think they're highly correlated, though, because for something like AI risk, it makes a huge difference whether you're timing the market or trading on fundamentals.

        1. Michael Vassar

          I agree with Benquo, but I think that concept vocabulary isn't necessarily accessible. Aceso, could you explain the metaphor Benquo used?

      2. Fluttershy

        It sounds like there are a few things we're roughly discussing here; two that you mention are:

        1). Whether funding MIRI is a good idea
        2). How much social pain MIRI and its proponents are feeling.

        Ben mentions that those two are probably highly correlated. I'd actually not meant to bring up 2) at all, but rather to guess at OPP's evaluation of 1) as a function of 3):

        3). How much EY and others act/have acted verbally dominant, going forward and in the past.

        It's not clear to me whether OPP thinks 1) is strongly dependent on 3), or weakly or not at all so. It's also not clear to me how much 1) is *actually* dependent on 3), but my social intuitions say that 1) is dependent enough on 3)--or at least OPP's judgement of 1) is dependent enough on 3)--relative to how easy it is to change 3), that it's worth seriously considering whether changing 3) is in fact worthwhile.

  6. Pingback: Effective Altruism is self-recommending | Compass Rose
