Nassim Nicholas Taleb recommends that, instead of holding the balanced portfolio prescribed by portfolio theory, we follow a "barbell" strategy: put most of our assets in a maximally safe, stable investment, and make small, sustainable bets with very high potential upside. If taken literally, this can't work, because no such safe asset class exists. Continue reading
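As a toy illustration of the barbell's logic (my own sketch, not Taleb's): splitting wealth 90/10 between a hypothetically safe asset and speculative bets bounds the worst case at roughly the safe portion, while leaving the upside open. The sketch assumes a perfectly safe asset exists, which is exactly the premise the post disputes.

```python
# Toy barbell-portfolio payoff (illustrative only).
# Assumes a hypothetical perfectly safe asset that always keeps its value,
# which is the assumption the post argues is false.

def barbell_outcome(wealth, safe_fraction, risky_multiplier):
    """Split wealth into a safe part, preserved as-is, and a risky part
    multiplied by whatever the speculative bets return (0 = total loss)."""
    safe = wealth * safe_fraction
    risky = wealth * (1 - safe_fraction)
    return safe + risky * risky_multiplier

# Worst case: every speculative bet goes to zero; 90% survives.
assert barbell_outcome(100, 0.9, 0.0) == 90.0
# Upside: the bets pay off 20x; total is 90 + 200.
assert barbell_outcome(100, 0.9, 20.0) == 290.0
```

The point of the shape is that losses are capped by construction, but only if the "safe" leg is actually safe.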
While tidying my room, I felt the onset of the usual cognitive fatigue. But this time, I didn't just want to bounce off the task - I was curious. When I inspected the fatigue, to see what it was made of, it felt similar to when I'm trying to thread a rhetorical needle - for instance, between striking too neutral a tone for anyone to understand the relevance of what I'm saying, and too bold a tone for my arguments to be taken literally. In short, I was shouldering a heavy burden of interpretive labor.
Why would tidying my room involve interpretive labor? Continue reading
I'm overdue to publish an update on the Oops Prize. It fell off my priority list because I received exactly one nomination. I followed up with the nominee and couldn't get enough clarification to confirm eligibility, but my sense was that while the nominee clearly changed their mind, it wasn't a particularly clear case of public error correction as specified in the prize criteria.
Since the Oops Prize remains unclaimed, I'm offering it again this year. To clarify, I don't think the prize amount is enough to incentivize overt error-correction on its own, but it might be enough to give people an incentive to bother informing me if such error correction is in fact happening.
If anyone at an EA Global conference this year publicly repudiates an old belief, along with the efforts they made and asked others to make on its basis, and explains what they're doing differently, then I'd like to celebrate this. Since talk is cheap, I'm offering $1,000 in prize money for the best example of such error correction: $900 to the person who most clearly reports changing their mind about something big they'd already invested their time, money, or credibility in, and asked others to invest in, and $100 to the first person to nominate them. Self-nomination is encouraged.
To qualify, an entry has to have the following attributes:
- It is explicitly error correction, not an account that spins things to look like a series of successes evolving over time, or "I used to think X, and now I think X'."
- The nominee successfully encouraged a public commitment of resources based on the original belief (e.g. funds raised or volunteer hours).
- There is a record of the error-correction statement. If it's not a recorded talk, an independent witness (neither the nominator nor the nominee) is enough evidence.
- It happened at EA Global, and either was part of a scheduled talk, or an independent witness (neither the nominator nor the nominee) believes that at least ten people were present.
Anyone who speaks at EA Global this year is eligible for the prize, including leaders of EA organizations such as CEA, EAG leadership, and GiveWell / Open Philanthropy Project staff. If no qualifying entries are submitted, then no prize will be awarded. I am the sole, unaccountable judge of this, but will get people to check my work if I don't think anyone's eligible or feel like I'm too close to the person I think should win.
You can send nominations to me by email at email@example.com. If the error-correction is already publicly available, or if the nominee gives me permission, I’ll announce the winner by the end of the year. If there is no public recording and the nominee isn’t OK with the error-correction being publicized in this way, then I reserve the right to award them only a partial prize or none at all.
If, when you try to improve the world, you think about people but not about communities, you will tend to favor unsustainable net outflows of resources from your community. I wrote about this in Why I am not a Quaker. Effective Altruist (EA) and Rationalist communities such as the one in the San Francisco Bay Area suffer from this problem. Occasionally individuals - more often than not women, more often than not uncompensated and publicly unacknowledged - do something constructive about this problem. I’m now aware of one such effort where the person involved (Sarah Spikes) is publicly willing to accept support: The Berkeley REACH. The fundraiser page is here.
My actual literal nightmares about civilizational collapse somehow manage to be insanely optimistic about human nature.
I dreamt that in response to the news of the Trumps’ probable successful intimidation or bribery of their New York prosecutors, the US devolved into a lawless hellscape, since the last shreds of pretense of “we’re punishing you because it’s what the law says” were gone. In my dream, I successively wished I’d transferred more of my assets to paper, then money, then gold, then firearms, as I realized how far things had gone.
If I’d been thinking sanely, the thing I should have wished I’d accumulated is the only real source of safety in a state of war: a bigger, better gang. But fundamentally, I should have known better than to imagine that things would collapse quickly.
What was I getting wrong? I was tacitly assuming that the majority of people were perfectly principled. Continue reading
This is a compact account of my current working hypothesis for what's wrong with our culture and what needs to be done. Continue reading
So, some writer named Cathy O'Neil wrote about futurists' opinions about AI risk. The piece focused on futurists as social groups with different incentives, and didn't really engage with the content of their arguments. Instead, she pointed out considerations like this:
First up: the people who believe in the singularity and are not worried about it. […] These futurists are ready and willing to install hardware in their brains because, as they are mostly young or middle-age white men, they have never been oppressed.
She doesn’t engage with the content of their arguments about the future. I used to find this sort of thing inexplicable and annoying. Now I just find it sad but reasonable. Continue reading
I am afraid of the anglerfish. Maybe this is why the comments on my blog tend to be so consistently good.
Recently, a friend was telling me about the marketing strategy for a project of theirs. They favored growth, in a way that I was worried would destroy value. I struggled to articulate my threat model, until I hit upon the metaphor of that old haunter of my dreamscape, the anglerfish. Continue reading
In the past year, I have noticed that the Society of Friends (also known as the Quakers) has come to the right answer long before I or most people did, on a surprising number of things, in a surprising range of domains. And yet, I do not feel inclined to become one of them. Giving credit where credit is due is a basic part of good discourse, so I feel that I owe an explanation.
The virtues of the Society of Friends are the virtues of liberalism: they cultivate honest discourse and right action, by taking care not to engage in practices that destroy individual discernment. The failings of the Society of Friends are the failings of liberalism: they do not seem to have the organizational capacity to recognize predatory systems and construct alternatives.
Fundamentally, Quaker protocols seem like a good start, but more articulated structures are necessary, especially more closed systems of production. Continue reading
UPDATE: I'm currently staying in a cabin at a Quaker retreat center. This is basically the thing I needed. Thanks to everyone who reached out with suggestions or offers.
I’m currently coming to terms with just how much of human communication is marketing, like unto the Hobbesian war of all against all. I want to figure out a way for human beings to coordinate and create value when the dominant society is like that. That task is too big for me, so I need a team. I don’t know how to find the right people, but my best guess is that if I can clearly articulate what is needed, the right people have a good chance of recognizing my project as the one they want to be part of. But I can’t write about this in a reasonable way, because I can’t think about this in a reasonable way, because my intuitions are still all applying way too high a level of implicit trust, which means that I’m effectively deluged with spam and buying everything.
So, I want to get away, physically, to buy myself a little breathing room and get my head screwed on straight. A cabin, somewhere where I am not implicitly obligated to engage in more than incidental contact with other human beings, and can live in a reasonable amount of comfort for a while (e.g. includes basic temperature control and running water) is the essential thing. Other desiderata that are nice to have but not essential include:
- A high vantage point overlooking something, whether it be coastal cliffs, a mountainside, or something similar. One goes to a mountaintop to receive the law, so this feels aesthetically appropriate in a way that “a cabin in the woods” or something out in the more level desert does not.
- A land line telephone I can use.
- Drivable distance from the SF Bay Area or otherwise convenient for me to get to.
- Internet access, either on-site or within an hour of the site.
Let me know if you have something like this to offer either free or for money, know of someone who does, or have advice beyond checking AirBnB for how to find something like this.
My best guess is that I should try this out for two weeks and then figure out what to do longer-term.