If, when you try to improve the world, you think about people but not about communities, you will tend to favor unsustainable net outflows of resources from your community. I wrote about this in Why I am not a Quaker. Effective Altruist (EA) and Rationalist communities such as the one in the San Francisco Bay Area suffer from this problem. Occasionally individuals - more often than not women, more often than not uncompensated and publicly unacknowledged - do something constructive about this problem. I’m now aware of one such effort where the person involved (Sarah Spikes) is publicly willing to accept support: The Berkeley REACH. The fundraiser page is here.
My actual literal nightmares about civilizational collapse somehow manage to be insanely optimistic about human nature.
I dreamt that in response to the news of the Trumps’ probable successful intimidation or bribery of their New York prosecutors, the US devolved into a lawless hellscape, since the last shreds of pretense of “we’re punishing you because it’s what the law says” were gone. In my dream, I successively wished I’d transferred more of my assets to paper, then money, then gold, then firearms, as I realized how far things had gone.
If I’d been thinking sanely, the thing I should have wished I’d accumulated is the only real source of safety in a state of war: a bigger, better gang. But fundamentally, I should have known better than to imagine that things would collapse quickly.
What was I getting wrong? I was tacitly assuming that the majority of people were perfectly principled. Continue reading
This is a compact account of my current working hypothesis for what's wrong with our culture and what needs to be done. Continue reading
So, some writer named Cathy O’Neil wrote about futurists’ opinions about AI risk. The piece focused on futurists as social groups with different incentives, and didn’t really engage with the content of their arguments. Instead, she pointed out considerations like this:
First up: the people who believe in the singularity and are not worried about it. […] These futurists are ready and willing to install hardware in their brains because, as they are mostly young or middle-age white men, they have never been oppressed.
She doesn’t engage with the content of their arguments about the future. I used to find this sort of thing inexplicable and annoying. Now I just find it sad but reasonable. Continue reading
I am afraid of the anglerfish. Maybe this is why the comments on my blog tend to be so consistently good.
Recently, a friend was telling me about the marketing strategy for a project of theirs. They favored growth, in a way that I was worried would destroy value. I struggled to articulate my threat model, until I hit upon the metaphor of that old haunter of my dreamscape, the anglerfish. Continue reading
In the past year, I have noticed that the Society of Friends (also known as the Quakers) came to the right answer long before I or most people did, on a surprising number of things, in a surprising range of domains. And yet, I do not feel inclined to become one of them. Giving credit where credit is due is a basic part of good discourse, so I feel that I owe an explanation.
The virtues of the Society of Friends are the virtues of liberalism: they cultivate honest discourse and right action, by taking care not to engage in practices that destroy individual discernment. The failings of the Society of Friends are the failings of liberalism: they do not seem to have the organizational capacity to recognize predatory systems and construct alternatives.
Fundamentally, Quaker protocols seem like a good start, but more articulated structures are necessary, especially more closed systems of production. Continue reading
UPDATE: I'm currently staying in a cabin at a Quaker retreat center. This is basically the thing I needed. Thanks to everyone who reached out with suggestions or offers.
I’m currently coming to terms with just how much of human communication is marketing, like unto the Hobbesian war of all against all. I want to figure out a way for human beings to coordinate and create value when the dominant society is like that. That task is too big for me, so I need a team. I don’t know how to find the right people, but my best guess is that if I can clearly articulate what is needed, the right people have a good chance of recognizing my project as the one they want to be part of. But I can’t write about this in a reasonable way, because I can’t think about this in a reasonable way, because my intuitions are still all applying way too high a level of implicit trust, which means that I’m effectively deluged with spam and buying everything.
So, I want to get away, physically, to buy myself a little breathing room and get my head screwed on straight. A cabin, somewhere where I am not implicitly obligated to engage in more than incidental contact with other human beings, and can live in a reasonable amount of comfort for a while (e.g. includes basic temperature control and running water) is the essential thing. Other desiderata that are nice to have but not essential include:
- A high vantage point overlooking something, whether it be coastal cliffs, a mountainside, or something similar. One goes to a mountaintop to receive the law, so this feels aesthetically appropriate in a way that “a cabin in the woods” or something out in the more level desert does not.
- A land line telephone I can use.
- Drivable distance from the SF Bay Area or otherwise convenient for me to get to.
- Internet access, either on-site or within an hour of the site.
Let me know if you have something like this to offer either free or for money, know of someone who does, or have advice beyond checking AirBnB for how to find something like this.
My best guess is that I should try this out for two weeks and then figure out what to do longer-term.
Simple consequentialist reasoning often appears to imply that you should trick others for the greater good. Paul Christiano recently proposed a simple consequentialist justification for acting with integrity:
I aspire to make decisions in a pretty simple way. I think about the consequences of each possible action and decide how much I like them; then I select the action whose consequences I like best.
To make decisions with integrity, I make one change: when I imagine picking an action, I pretend that picking it causes everyone to know that I am the kind of person who picks that option.
If I’m considering breaking a promise to you, and I am tallying up the costs and benefits, I consider the additional cost of you having known that I would break the promise under these conditions. If I made a promise to you, it’s usually because I wanted you to believe that I would keep it. So you knowing that I wouldn’t keep the promise is usually a cost, often a very large one.
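Christiano's procedure amounts to a small change in how actions are scored. The sketch below is my own illustration, not code from his post; the action names, payoff numbers, and the `disclosure_cost` values are all invented for the example.

```python
# A minimal sketch of consequentialist choice with an "integrity" term:
# score each action by its direct consequences, minus the cost of everyone
# knowing you are the kind of person who picks that action under these
# conditions. All payoffs here are invented for illustration.

def choose_with_integrity(actions, direct_value, disclosure_cost):
    """Pick the action maximizing direct value minus the cost of the
    choice being common knowledge."""
    def total(action):
        return direct_value[action] - disclosure_cost[action]
    return max(actions, key=total)

# Toy example: breaking a promise has higher direct payoff, but a large
# cost once others know you would break promises in this situation.
actions = ["keep_promise", "break_promise"]
direct_value = {"keep_promise": 5, "break_promise": 8}
disclosure_cost = {"keep_promise": 0, "break_promise": 10}

print(choose_with_integrity(actions, direct_value, disclosure_cost))
# → keep_promise (5 - 0 = 5 beats 8 - 10 = -2)
```

The design point is that the disclosure cost is not a separate rule layered on top of consequentialism; it is just another consequence being tallied, which is what makes the proposal "simple."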
Overall this seems like it’s on the right track – I endorse something similar. But it only solves part of the problem. In particular, it explains interpersonal integrity such as keeping one's word, but not integrity of character. Continue reading
We were only pretending to engage with each other. But it wasn’t our fault. We had to be, because talking about bad faith is Not OK. Continue reading
I am surrounded by well-meaning people trying to take responsibility for the future of the universe. I think that this attitude – prominent among Effective Altruists – is causing great harm. I noticed this as part of a broader change in outlook, which I've been trying to describe on this blog in manageable pieces (and sometimes failing at the "manageable" part).
I'm going to try to contextualize this by outlining the structure of my overall argument.
Why I am worried
Effective Altruists often say they're motivated by utilitarianism. At its best, this leads to things like Katja Grace's excellent analysis of when to be a vegetarian. We need more of this kind of principled reasoning about tradeoffs.
At its worst, this leads to some people angsting over whether it's ethical to spend money on a cup of coffee when they might have saved a life, and to others using the greater good as license to say things that are not quite true, to pressure others socially into bearing inappropriate burdens, and to make ever-increasing claims on resources without a correspondingly strong verified track record of improving people's lives. I claim that these actions are not in fact morally correct, and that people keep endorsing these conclusions because they are using the wrong cognitive approximations to reason about morality.
Summary of the argument
- When people take responsibility for something, they try to control it. So, universal responsibility implies an attempt at universal control.
- Maximizing control has destructive effects:
  - An adversarial stance towards other agents.
  - Decision paralysis.
- These failures are not accidental, but baked into the structure of control-seeking. We need a practical moral philosophy to describe strategies that generalize better, and that benefit from the existence of other benevolent agents rather than treating them primarily as threats.