Defense against discourse

So, some writer named Cathy O’Neil wrote about futurists’ opinions on AI risk. The piece focuses on futurists as social groups with different incentives, and doesn’t really engage with the content of their arguments. Instead, she points out considerations like this:

First up: the people who believe in the singularity and are not worried about it. […] These futurists are ready and willing to install hardware in their brains because, as they are mostly young or middle-age white men, they have never been oppressed.

She doesn’t engage with the content of their arguments about the future. I used to find this sort of thing inexplicable and annoying. Now I just find it sad but reasonable.

O’Neil is operating under the assumption that the denotative content of the futurists’ arguments is not relevant, except insofar as it affects the enactive content of their speech. In other words, their ideology is part of a process of coalition formation, and taking it seriously is for suckers.

AI and ad hominem

Scott Alexander of Slate Star Codex recently complained about O’Neil’s writing:

It purports to explain what we should think about the future, but never makes a real argument for it. It starts by suggesting there are two important axes on which futurists can differ: optimism vs. pessimism, and belief in a singularity. So you can end up with utopian singularitarians, dystopian singularitarians, utopian incrementalists, and dystopian incrementalists. We know the first three groups are wrong, because many of their members are “young or middle-age white men” who “have never been oppressed”. On the other hand, the last group contains “majority women, gay men, and people of color”. Therefore, the last group is right, there will be no singularity, and the future will be bad.

[…]

The author never even begins to give any argument about why the future will be good or bad, or why a singularity might or might not happen. I’m not sure she even realizes this is an option, or the sort of thing some people might think relevant.

Scott doesn’t have a solution to the problem, but he’s taking the right first step - trying to create common knowledge about the problem, and calling for others to do the same:

I wish ignoring this kind of thing was an option, but this is how our culture relates to things now. It seems important to mention that, to have it out in the open, so that people who turn out their noses at responding to this kind of thing don’t wake up one morning and find themselves boxed in. And if you’ve got to call out crappy non-reasoning sometime, then meh, this article seems as good an example as any.

Scott’s interpretation seems basically accurate, as far as it goes. It’s true that O’Neil doesn’t engage with the content of futurists’ arguments. It’s true that this is a problem.

The thing is, perhaps she’s right not to engage with the content of futurists’ arguments. After all, as Scott pointed out years ago (and I reiterated more recently), when the single most prominent AI risk organization announced its initial mission, that mission was one that basically 100% of credible arguments about AI risk imply is exactly the wrong thing. If you had assumed that the content of futurists’ arguments about AI risk would be a good guide to the actions taken as a result, you would quite often have been badly mistaken.

Of course, maybe you disbelieve the mission statement instead of the futurists’ arguments. Or maybe you believe both, but disbelieve the claim that OpenAI is working on AI-risk-relevant things. However you slice it, you have to dismiss some of the official communication as falsehood by someone who is in a position to know better.

So, why is it so hard to talk about this?

World of actors, world of scribes

The immediately prior Slate Star Codex post, Different Worlds, argued that if someone’s basic world view seems obviously wrong to you based on all of your personal experience, maybe their experience is really different. In another Slate Star Codex post, titled Might People on the Internet Sometimes Lie?, Scott described how difficult he finds it to consider the hypothesis that someone is lying, despite strong reason to believe that lying is common.

Let's combine these insights.

Scott lives in a world in which many people - the most interesting ones - are basically telling the truth. They care about the content of arguments, and are willing to make major life changes based on explicit reasoning. In short, he’s a member of the scribe caste. O’Neil lives in actor-world, in which words are primarily used as commands, or coalition-building narratives.

If Scott thinks that paying attention to the contents of arguments is a good epistemic strategy, and the writer he’s complaining about thinks that it’s a bad strategy, this suggests an opportunity for people like Scott to make inferences about what other people’s very different life experiences are like. (I worked through an example of this myself in my post about locker room talk.)

It now seems to me that the experience of the vast majority of people in our society is that when someone is making abstract arguments, they are more likely to be playing coalitional politics than trying to transmit information about the structure of the world.

Clever arguers

For this reason, I noted with interest an exchange in the comments on Jessica Taylor’s recent Agent Foundations post on autopoietic systems and AI alignment. Paul Christiano and Wei Dai considered the implications of clever arguers, who might be able to make superhumanly persuasive arguments for arbitrary points of view, such that a secure internet browser might refuse to display arguments from untrusted sources without proper screening.

Wei Dai writes:

I’m envisioning that in the future there will also be systems where you can input any conclusion that you want to argue (including moral conclusions) and the target audience, and the system will give you the most convincing arguments for it. At that point people won’t be able to participate in any online (or offline for that matter) discussions without risking their object-level values being hijacked.

Christiano responds:

It seems quite plausible that we’ll live to see a world where it’s considered dicey for your browser to uncritically display sentences written by an untrusted party.

What if most people already live in that world? A world in which taking arguments at face value is not a capacity-enhancing tool, but a security vulnerability? Without trusted filters, would they not dismiss highfalutin arguments out of hand, and focus on whether the person making the argument seems friendly or unfriendly, using hard-to-fake group-affiliation signals? This bears a substantial resemblance to the behavior Scott was complaining about. As he paraphrases:

We know the first three groups are wrong, because many of their members are “young or middle-age white men” who “have never been oppressed”. On the other hand, the last group contains “majority women, gay men, and people of color”. Therefore, the last group is right, there will be no singularity, and the future will be bad.

Translated properly, this simply means, “There are four possible beliefs to hold on this subject. The first three are held by parties we have reason to distrust, but the fourth is held by members of our coalition. Therefore, we should incorporate the ideology of the fourth group into our narrative.”

This is admirably disjunctive reasoning. It is also really, really sad. It is almost a fully general defense against discourse. It’s also not something I expect we can improve by browbeating people, or sneering at them for not understanding how arguments work. The sad fact is that people wouldn’t have these defenses up if it didn’t make sense to them to do so.

When I read Scott's complaints, I was persuaded that O'Neil was fundamentally confused. But when I clicked through to her piece, I was shocked at how good it was. (To be fair, Scott did a very good job lowering my expectations.) She explains her focus quite explicitly:

And although it can be fun to mock them for their silly sounding and overtly religious predictions, we should take futurists seriously. Because at the heart of the futurism movement lies money, influence, political power, and access to the algorithms that increasingly rule our private, political, and professional lives.

Google, IBM, Ford, and the Department of Defense all employ futurists. And I am myself a futurist. But I have noticed deep divisions and disagreements within the field, which has led me, below, to chart the four basic “types” of futurists. My hope is that by better understanding the motivations and backgrounds of the people involved—however unscientifically—we can better prepare ourselves for the upcoming political struggle over whose narrative of the future we should fight for: tech oligarchs that want to own flying cars and live forever, or gig economy workers that want to someday have affordable health care.

I agree with Scott that the content of futurists' arguments matters, and that it has to be okay to engage with that somewhere. But it also has to be okay to engage with the social context of futurists' arguments, and an article that specifically tells you it's about that seems like the most prosocial and scribe-friendly possible way to engage in that sort of discussion. If we're going to whine about that, then in effect we're just asking people to shut up and pretend that futurist narratives aren't being used as shibboleths to build coalitions. That's dishonest.

Most people in traditional scribe roles are not proper scribes, but a fancy sort of standard-bearer. If we respond to people displaying the appropriate amount of distrust by taking offense - if we insist that they spend time listening to our arguments simply because we’re scribes - then we’re collaborating with the deception. If we really are more trustworthy, we should be able to send costly signals to that effect. The right thing to do is to try to figure out whether we can credibly signal that we are actually trustworthy, by means of channels that have not yet been compromised.

And, of course, to actually become trustworthy. I’m still working on that one.

The walls have already been breached. The barbarians are sacking the city. Nobody likes your barbarian Halloween costume.


Related: Clueless World vs Loser World, Anatomy of a Bubble

20 thoughts on “Defense against discourse”

  1. Shauna

    Thank you for this post.

    Scott lives in a world in which many people - the most interesting ones - are basically telling the truth. They care about the content of arguments, and are willing to make major life changes based on explicit reasoning. In short, he’s a member of the scribe caste. O’Neil lives in actor-world, in which words are primarily used as commands, or coalition-building narratives.

    This seems a bit too general to me. I've been following O'Neil's blog for a while, and she strikes me as a lot like Scott: reflective, interested in the truth, willing to admit when she's wrong, etc. For this particular article, and surely some others, that doesn't come across. Perhaps instead of 'scribe caste' and 'actor world' we could speak of 'scribe mode' and 'actor mode'. Different situations and contexts push people from one to the other. When I talk about the negative unintended consequences of AI and technology, especially on already marginalized people, I don't always stay in scribe mode, not because I don't care about the truth, but because the truth is not my end goal. My end goal is decreasing the harm currently being caused by these existing systems.

    We're all scribes and actors both. The question for me is how to integrate the two approaches, rather than trying to stomp out one or the other.

    1. Benquo (Post author)

      I mostly agree with your criticism! I was trying to acknowledge the fact that O'Neil is obviously a real thinker, by pointing out that she's obviously modeling the space of possible beliefs well enough to come up with a proper disjunctive argument, but I think I was way too coy about that out of a misguided desire to be nice to Scott. That was wrong and bad. Oops! I may edit the piece to fix this. If so, let the record show that I endorse Shauna's criticism of the original version.

      I don't think "scribe mode vs actor mode" fully captures the thing, though. There's a relevant factual and prudential disagreement about what share of people in ostensibly Scribe roles are honestly performing the Scribe function. You can personally do the scribe thing perfectly well, and also believe that it's not worth engaging with the content of futurists' arguments. This is not quite my position, but it's a coherent one. As far as I can tell this is O'Neil's position as well.

      There are plenty of tells that O'Neil is doing the scribe thing, like the fact that when she bothers to characterize someone's opinion she does it reasonably well, and names enough sources that the reader can easily go and look things up to double-check.

      1. Shauna

        Does "scribe mode vs actor mode" capture the thing better once we allow that a person can be operating in both scribe mode and actor mode simultaneously?

        Upon reflection, I think people operate in both modes most of the time. All communicative acts are embodied in the world - they take place via a chosen medium and within a specific social context. Even when a person is wholly focused on reasoning correctly and stating the truth as they know it, their words still impact others in a way that may change future actions. At the other end of the spectrum, even communicative acts meant entirely to manipulate often contain some truthful content. And in the messy middle we have communicative acts which are meant to convey correct information but also to persuade people with that information.

        O'Neil's article is doing something in the messy middle, though it falls a good bit closer to the 'actor' side of the spectrum than her usual work. While I imagine the negative reaction is largely due to a sense of identification with the group being 'acted against', along with whatever misrepresentations the article makes, a big part of it may well be frustration that O'Neil doesn't signpost the mode she's working in. Scott writes, "[the article] purports to explain what we should think about the future" but I don't think the article purports that at all. It strikes me as the kind of misreading that someone who favors 'scribe mode' might do, whereas someone who favors 'actor mode' would more readily get that the article tells us what we should think about *futurists*. I don't think O'Neil was being intentionally deceptive about what mode she was using, but given the existence of people who *do* say they're in scribe mode when they're in actor mode, I can see why someone might feel uncharitable about it.

        I have some further thoughts about how people who are marginalized may be more likely to favor 'actor mode', given that they have less access to other avenues to effect change, but my lunch break is over so I'll have to elaborate some other time.

  2. Doug S.

    This seems like a matter of epistemic learned helplessness - there are probably people with crackpot ideas that could argue rings around me because I’m not actually an expert in the relevant field, but I don’t take their arguments seriously because none of the people I would consider an actual expert take the crackpot belief seriously. If you’re not qualified to evaluate the arguments, then all you can do is evaluate the argument makers.

    1. Benquo (Post author)

      I think the OpenAI example is a poor fit for the sort of epistemic learned helplessness you're talking about. The core arguments for worrying about AI safety are correct. And yet, much of their effect seems to have little to do with their content.

    2. Moira

      Mm, same.
      And then there's the space where in theory you have the capacity to evaluate the arguments, but the opportunity cost isn't worth it.

  3. Steven

    “And, of course, to actually become trustworthy. I’m still working on that one.”

    You and me both.

    Did you by any chance read my recent post motivated by Scott Alexander's review of MCTB? It contains some thoughts on trustworthiness.

  4. Zvi Mowshowitz

    Thanks to your praise of it I actually went and read the article by O'Neil. Perhaps you are colored by other work of hers I haven't read that is actually doing the scribe thing, because this was something else. I agree that she is at least honest about not engaging with the content of anyone's arguments, except where she mocks and dismisses the content of some of the arguments. So if her argument was, she's just going to go over a taxonomy of who believes what, I could live with that. Even if she was also saying, our kind of people are over here, so be over here, I'd be more annoyed, but I could live with that too these days I guess.

    This goes deeper than that. Even at the level of knowing who has what social agenda and what motivations they have, this is not an attempt at truth, nor is it remotely fair, and her links serve only to show she's done the research, which she then uses for further mocking. Her job here is explicitly to paint everyone she doesn't like in the worst possible light, attacking groups that her group says it's OK to attack on a deeply insulting, personal level any time you see them. This is just highly toxic stuff and I feel disgusted and dirty having read it.

    If she wanted to take the position that we can't trust arguments so let's look at people, I mean, scary and terrible and all that but OK. Fine. Granted ad argumento. This is still no good very bad stuff. These are the kinds of viciousness you expect from woke comedians attacking people they don't like, and they don't like those people because... they're white and male and nerdy? I mean I'd worry that's unfair but it's actually text, here.

    I can understand your argument for why people might want to evaluate positions based on tribal affiliation or motivations rather than arguments, and still want to think about that more, but I don't understand why you think this is a reasonable or fair version of such an analysis. It isn't.

    1. Benquo (Post author)

      Huh. I'm confused. I reread the piece again after reading your comment, and it still looks basically fine to me.

      To clarify, I don't think O'Neil is arguing for or explaining the underlying power dynamics here. I'm claiming that she's relying on them as a background assumption. Michele Reilly's Anatomy of a Bubble actually bothers to try to explain the thing. But I think that this sort of thing is at least tacitly obvious enough to enough people that O'Neil is entitled to make that kind of background assumption.

    2. Benquo (Post author)

      OK, I think I was unclear with "shocked at how good it was." I meant relative to Scott's description, not that it was way better than baseline. It's probably a little better than baseline for articles on Singularitarians.

      I've added a clarifying note.

      1. Zvi Mowshowitz

        I think I still disagree; this felt/read like a hit piece to me. She is clearly looking for the quotes/events that would look the worst out of context and quoting those, rather than trying to be at all representative. She is using offensive tropes to attack outgroup members. I remember an article that was written about us a few years ago that got people around here in a panic about reporters and handling the press, and that article was WAY more charitable and generous than this was, on every level. Where you see her as 'reasonable', I see her using the rhetorical trick of saying the conflicting thing first to defuse it - not pointing out that EA moved giant amounts of money to charities helping Approved Liberal Causes would make the attack less effective. When you have three paragraphs to describe EA and decide to focus on the suffering of plankton and on mocking out-of-touch-sounding cases for existential risk - a case she doesn't ignore, but rather states unfairly in order to mock - you're not trying to be fair. You're a political hit woman.

        Similarly, the majority of the words about rationality here are about the Basilisk (and she goes far enough to create a potential memetic hazard). Again, the pattern is to introduce the group (required to hit them), nominally acknowledge that thinking about or trying to stop bad things could be good (so she seems fair and defuses counterarguments), then spend 60% of her words on the most absurd-sounding thing that's ever happened.

        So if that's the baseline - outright hostility looking to appear nominally reasonable while actually trying to make everyone look as bad as possible even if they're focused on donating to charity and mostly third world anti-poverty charities at that - then that's a pretty freaking low baseline, to the extent that I'm not sure what would be below it. I suppose she didn't explicitly accuse anyone of X-ism or murder or anything, even though she did implicitly, so there's that?

        I also wonder where you are on the question of "attention is a resource, don't give it to things you think are bad." I thought Scott's decision to talk about her piece was a mistake, because one does not reward such behavior, or highlight such rhetoric, even to make the point that this is what rhetoric is like (which you seem to basically agree with, since you're saying this is better than baseline and all his object-level objections seem clearly valid). The reason I didn't write a full post about the question after you had done so was that I felt it would violate my only-discuss-things-and-sources-you-endorse-or-at-least-don't-hate policy, which I very much want to preserve.

        1. Vasily

          I read O'Neil's article after reading your first comment and my impression of it very much agrees with your second comment. Of course we can never be sure what her actual goal was, but it reads very much like a propaganda piece, so honest informing seems a very improbable intention.
          However, perhaps there's a way to make this compatible with some kind of honest intentions (I don't endorse this way of thinking, but I don't think I have a knock-down argument against it either). What if this is the most factual and honest one can be without seriously compromising communication efficiency and audience reach? What if a more balanced and less attention-grabbing article would not generate enough attention to the problem (of some negative social consequences of certain technological developments)? Actors are more efficient at persuasion than scribes, and even the most scriby of articles often involves a bit of acting for that extra bit of punch. What if the article represents the optimal balance of acting and scribing for this topic and target audience, according to O'Neil's beliefs? I can't really rule this out. I can't even rule out that she's actually right about it. Perhaps offending some rationalists, who make up less than 1% of the population and are already concerned about the problems associated with technological development anyway, is an acceptable price to pay for the social momentum that she is hoping to generate with this piece. Do we know any better way to communicate with the general public?

          1. Michael Vassar

            Or perhaps social momentum can only ever be a context for crime. Perhaps there is no such thing as 'best informing a public that lacks a commitment to truth'.

          2. Vasily

            I wouldn't go as far as to say that "social momentum can only ever be a context for crime". This depends on your definition of crime, but I would say that there are non-criminal uses for social momentum. On the other hand, "best informing a public that lack a commitment to truth" would indeed be somewhat of an oxymoron. It seems to me that the intention in this case was not to inform, but rather to persuade, which makes more sense.
            In any case, I have since read Scott Alexander's piece on defensiveness, and I've updated towards taking attacks on rationalists less lightly. Yes, there are not many of us and yes, we will understand, but the public perception of rationalists does matter and attacking minorities is just not a very cool thing to do.

  5. John Nerst

    This is the harsh truth. The question is how to deal with it. It may be natural, and rational in a political and game-theoretic sense, but so is defection in a one-off PD.

    The problem with articles like this one is not that they're irrational or anything, but that they're essentially setting hard-earned social capital on fire. Chopping down the norms of liberal democracy for firewood isn't a good idea.

  6. Michael Vassar

    I'm with Zvi on this point. The article isn't an attempt to convey information. Also, when someone has ontologized everything as being about power and influence by definition, it's dishonest to say "Ultimately this is all about power and influence."

    The fixation on wealth classifications is simply made up. Actually, so is the gender and minority stuff... Who in her list is wealthy and isn't an immigrant, when you look at the basic facts? Just Ray, I think, and he's definitely Jewish. The gay and genderqueer population in this space is how many times population baseline? Not to mention OCD, autistic spectrum, etc. I'm also noticing that Jews have some historical awareness of persecution.
    Basically, I don't see how anyone could be better informed having read the article than not having read it.

  7. antimule

    Please consider changing the site font. I would like to read it, and I think you have some good points, but I find it basically unreadable. Even zoomed in, it just looks crappy. Stuff that SSC uses should be good.

  8. FeepingCreature

    For my part, I was mad about the article because I parsed it as an honest attempt to categorize opinions up front, only to find it was as you described. If that qualification had been up front instead of buried towards the end, I might have a different opinion.

