A lot of people in my social network have been trying to track news about the new coronavirus, COVID-19, which looks like a global pandemic that's going to kill a lot of people. I've found much of this news overwhelming and hard to figure out how to use, until I sat down with a few friends, over the phone, and worked out a simple analytic framework for thinking through some basic decisions. Continue reading
Stories such as Peter Singer's "drowning child" hypothetical frequently imply that there is a major funding gap for health interventions in poor countries, such that there is a moral imperative for people in rich countries to give a large portion of their income to charity. But there are simply not enough excess deaths for these claims to be plausible.
On Twitter, Freyja wrote:
Things capitalism is trash at:
- Valuing preferences of anything other than adults who earn money (i.e. future people, non-humans)
- Pricing non-standardisable goods (i.e. information)
- Playing nicely with non-quantifiable values + objectives (i.e. love, ritual)
Things capitalism is good at:
- Incentivising the production of novel goods and services
- Coordinating large groups of people to produce complex bundles of goods
- The obvious: making value fungible
Anyone know of work on -
a) integrating the former into existing economic systems, or
b) developing new systems to provide those things while including capitalism's existing benefits?
This intersected well enough with my current interests, and those of the people I've been discoursing with most closely, that I figured I'd try my hand at a quick explanation of what we're doing, which I've lightly edited into blog post form below. This is only a loose sketch. I think it outlines the argument reasonably precisely, but many readers may find that there are substantial inferential leaps. Questions in the comments are strongly encouraged.
Any serious attempt at (b) will first have to unwind the disinformation that claims that the thing we have now is capitalism, or remotely efficient.
The short version of the project: learning to talk honestly within a small group about how power works, both systemically and as it applies to us, without trying to hold onto information asymmetries. (There's pervasive temptation to withhold political information as part of a zero-sum privilege game, like Plato's philosopher-kings.) Continue reading
Summary: Political constraints cause supposedly objective technocratic deliberations to adopt frames that any reasonable third party would interpret as picking a side. I explore the case of North Korea in the context of nuclear disarmament rhetoric as an illustrative example of the general trend, and claim that people and institutions can make better choices and generate better options by modeling this dynamic explicitly. In particular, Effective Altruism and academic Utilitarianism can plausibly claim to be the British Empire's central decisionmaking mechanism, and as such they have more options than their current story can consider.
Asymmetric disarmament rhetoric
Ben: It feels increasingly sketchy to me to call tiny countries surrounded by hostile regimes "threatening" for developing nuclear capacity, when US official policy for decades has been to threaten the world with nuclear genocide.
Strong recommendation to read Daniel Ellsberg's The Doomsday Machine.
Georgia: Book review: The Doomsday Machine
So I get that the US' nuclear policy was and probably is a nightmare that's repeatedly skirted apocalypse. That doesn't make North Korea's program better.
Ben [feeling pretty sheepish, having just strongly recommended a book my friend just reviewed on her blog]: "Threatening" just seems like a really weird word for it. This isn't about whether things cause local harm in expectation - it's about the frame in which agents trying to organize to defend themselves are the aggressors, rather than the agent insisting on global domination. Continue reading
For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken away even that which he hath.
- The Gospel according to Matthew
r > g
-Thomas Piketty, Capital in the Twenty-First Century
From Jesus to Piketty, it is a commonplace that wealth is a positive feedback loop.
Under one model, differential ability to steward capital, plus compounding gains, implies that perfectly benevolent people with more money than most should keep it more often than a naive expected utility maximization would suggest. On the other hand, conquering empires also experience compounding gains; the ability to leverage force into more force implies that this is a harmful positive feedback loop. Continue reading
I'm overdue to publish an update on the Oops Prize. It fell off my priority list because I received exactly one nomination. I followed up with the nominee and couldn't get enough clarification to confirm eligibility, but my sense was that while the nominee clearly changed their mind, it wasn't a particularly clear case of public error correction as specified in the prize criteria.
Since the Oops Prize remains unclaimed, I'm offering it again this year. To clarify, I don't think the prize amount is enough to incentivize overt error-correction on its own, but it might be enough to give people an incentive to bother informing me if such error correction is in fact happening.
If anyone at an EA Global conference this year publicly repudiates an old belief, along with the efforts they made and asked others to make on its basis, and explains what they're doing differently, then I'd like to celebrate this. Since talk is cheap, I'm offering $1,000 in prize money for the best example of such error-correction: $900 to the person who most clearly reports changing their mind about something big they'd already invested their time, money, or credibility in and asked others to invest in, and $100 to the first person to nominate them. Self-nomination is encouraged.
To qualify, an entry has to have the following attributes:
- It is explicitly error correction, not an account that spins things to look like a series of successes evolving over time, or "I used to think X, and now I think X'."
- The nominee successfully encouraged a public commitment of resources based on the original belief (e.g. funds raised or volunteer hours).
- There is a record of the error-correction statement. If it's not a recorded talk, testimony from an independent witness (neither the nominator nor the nominee) is enough evidence.
- It happened at EA Global, and either was part of a scheduled talk, or an independent witness (neither the nominator nor the nominee) believes that at least ten people were present.
Anyone who speaks at EA Global this year is eligible for the prize, including leaders of EA organizations such as CEA, EAG leadership, and GiveWell / Open Philanthropy Project staff. If no qualifying entries are submitted, then no prize will be awarded. I am the sole, unaccountable judge of this, but I will ask others to check my work if I don't think anyone is eligible, or if I feel I'm too close to the person I think should win.
You can send nominations to me by email at firstname.lastname@example.org. If the error-correction is already publicly available, or if the nominee gives me permission, I’ll announce the winner by the end of the year. If there is no public recording and the nominee isn’t OK with the error-correction being publicized in this way, then I reserve the right to award them only a partial prize or none at all.
If, when you try to improve the world, you think about people but not about communities, you will tend to favor unsustainable net outflows of resources from your community. I wrote about this in Why I am not a Quaker. Effective Altruist (EA) and Rationalist communities such as the one in the San Francisco Bay Area suffer from this problem. Occasionally individuals - more often than not women, more often than not uncompensated and publicly unacknowledged - do something constructive about this problem. I’m now aware of one such effort where the person involved (Sarah Spikes) is publicly willing to accept support: The Berkeley REACH. The fundraiser page is here.
Here’s a common argument:
The problem with the poor is that they haven’t got enough money. There’s ample empirical evidence backing this up. Therefore, the obviously-correct poverty intervention is to simply give the poor cash. You might be able to do better than this, but it’s a solid baseline and you should often expect to find that interventions are worse than cash.
There are technical reasons to be skeptical of cash transfers - which is why it is so important that the cash transfer charity GiveDirectly is carefully researching what actually happens when they give people cash - but until fairly recently, these objections seemed to me like abstruse nitpicks about an intervention that was almost analytically certain to be strongly beneficial.
But they’re not just nitpicks. Cash isn’t actually the same thing as human well-being, and the assumption that it can be simply exchanged into pretty much anything else is not obviously true.
Of course, saying "X is possibly wrong" isn't very helpful unless we have a sense of how it's likely to be wrong, and under what circumstances. It's no good to treat cash transfers just the same as before but simply feel gloomier about them.
I’m going to try to communicate an underlying model that generates the appropriate kind of skepticism about interventions like cash transfers, in a way that’s intuitive and not narrowly technical. I’ll begin with a parable, and then talk about how it relates to real-world cases. Continue reading
This is a compact account of my current working hypothesis for what's wrong with our culture and what needs to be done. Continue reading