Tag Archives: autonomy

Minimum viable impact purchases

Several months ago I did some work on a trial basis for AI Impacts. It went well enough, but the process of agreeing in advance on what work needed to be done felt cumbersome. It's not uncommon that midway through a project, it turns out to make more sense to do something different from what you'd originally envisioned - and because I was doing this for someone else, I had to check in at each such point. This didn't just slow down the process; it also made the whole thing less motivating for me.

Later, I did my own research project. When natural pivot points came up, this didn't trigger a formal check-in - I just continued to do the thing that made the most sense. I think that I did better work this way, and steered more quickly towards the highest-value aspect of my research. Part of this is that, since I wasn't accountable to anyone else for the work, I could follow my own inner sense of what needed to be done.

I was talking with Katja about my work, and she mentioned that AI Impacts might potentially be interested in funding some of this work. I explained the motivation problem mentioned in the prior paragraphs, and wondered out loud whether AI Impacts might be interested in funding projects retrospectively, after I'd already completed them. Katja responded that in principle this sounded like a much better deal than funding projects prospectively, in large part because it would take less management effort on her part. This also felt like a much better deal to me than being funded prospectively, again because I wouldn't have to worry so much about checking in and fulfilling promises.

I've talked with friends about this consideration, and a few mentioned that people are sometimes hired as researchers with a fairly vague or flexible research mandate, or prefunded to do more work like their prior work, in the hope that they'll produce similarly valuable work in the future. But making promises like that, even if very abstract, also makes it difficult for me to proceed in a spirit of play, discovery, and curiosity, which is how I do some of my best work.

It also offends my sense of integrity to accept money for the promise to do one thing, or even one class of thing, when my real plan is to adopt a flexible stance - my best judgment might tell me to radically change course, and at this stage I fully intend to listen to it. For instance, I might decide that I should switch from research to writing and advocacy (what I'm doing now). I might even learn something that persuades me to make a bigger commitment to another course of action, starting or joining some organization with a better-defined role.

What doesn't offend my sense of integrity is to accept money explicitly for past work, with no promises about the future.

Then it clicked - this is the logic behind impact certificates.

Effective Altruism is not a no-brainer

Ozy writes that Effective Altruism avoids the typical failure modes of people in developed countries intervening in developing ones, because it is evidence-based, humble, and respectful of the autonomy of the recipients of the intervention. The basic reasoning is that Effective Altruists pay attention to empirical evidence, focus on what's shown to work, change what they're doing when it looks like it's not working, and respect the autonomy of the people for whose benefit they're intervening.

Effective Altruism is not actually safe from the failure modes alluded to:

    • Effective Altruism is not humble. Its narrative in practice relies on claims of outsized benefits in terms of hard-to-measure things like life outcomes, which makes humility quite difficult. Outsized benefits probably require going out on a limb and doing extraordinary things.
    • Effective Altruism is less evidence-based than EAs think. People talk about some EA charities as producing large improvements in life outcomes with certainty, but this is often not happening. And when the facts disagree with our hopes, we seem pretty good at ignoring the facts.
    • Effective Altruism is not about autonomy. Some EA charities are good at respecting the autonomy of beneficiaries, but this is nowhere near central to the movement, and many top charities are not about autonomy at all, and are much better fits for the stereotype of rich Westerners deciding that they know what's best for people in poor countries.
    • Standard failure modes are standard. We need a model of what causes them, and how we're different, in order to be sure we're avoiding them.


Lupin around again: responses to the Werewolf model

I’m very happy with the response to my post on Werewolf Levels. Some people told me they found the concept helpful in naming a thing they’d already felt. Other people proposed objections or refinements. In one case, someone was able to tell me they felt Werewolfy, which helped me give them the reassurance they needed to continue the interaction. This is a roundup of some of the responses.