This is the story of my life, through the lens of motivations, of actions I took to steer myself towards long-term outcomes, of the way the self that stretches out in causal links over long periods of time produced the self I have at this moment. This is only one of the many ways to tell the story of my life.
I started out as something of a Ravenclaw Primary - I needed to know the truth in order to know what was right:
> Ravenclaw Primaries are the system-lovers. They create, modify, and outright adopt systems that give them frames and guidance for interacting with the world. These systems vary wildly, and include but are certainly not limited to religious systems, political systems, family and community morals, and combinations thereof. They value truth. They want to find the correct, best way to look at the world and to interact with morality and the people around them.
I knew enough of the history of science to know that there had been times when things now thought obviously false were thought obviously true (e.g. astronomy prior to the Copernican revolution). I wanted to have at least a chance at noticing if the foundations of modern thought were similarly rotten in some way, so I decided to study the history of philosophy, science, and math. But before that (which I did in college), the Jewish faith I’d been brought up in was broken by some basic thinking about epistemology.
My father is a Reform Jewish rabbi. For a Reform Jew, and even for a Reform rabbi, he has an unusually deep connection with the ritual aspects of Judaism, to the extent that our religious life was probably closer to typical Conservative practice than to typical Reform practice.
My high need for intellectual integrity meant that I wasn’t able to participate in ritual without strong internal psychological pressure to really believe the justifying ideas - I needed them to be factually, literally true, the way physics is true, in a way that probably exceeded the level of belief he meant to inculcate. To do otherwise, and still practice the rituals, would be to violate the idolatry taboo - to worship a false god. (This is not a traditional Jewish attitude. To the contrary, Jewish doctrine typically says that it is good to follow any of the commandments, regardless of whether one follows any other, such as the requirement to believe in God.)
I found out about Ayn Rand in high school, and for the first time, I was reading someone who was making arguments for ideas with the expectation that I’d actually be persuaded, and as a result really for real believe something different than I did before. Who was arguing with received morality, with the moral intuitions people around me just seemed to tacitly assume. So of course, I bought into it wholeheartedly for a while.
After reading Introduction to Objectivist Epistemology, I decided to apply its standard for the validity of a concept to everything important, including God. I’d previously defined God down to something compatible with my observations of the world, and with the predictions of scientific materialism. This made God into a kind of invisible dragon: a belief I could make compatible with my observations, but which didn’t constrain them. So when I asked myself the key concept-validity question - “If I didn’t already have a concept of ‘God,’ would my observations of the world motivate me to create one?” - I found that I had no reason to believe. Thus, out of a moral commitment to the truth, I was required not to believe. So integrity compelled me to become an atheist.
College then broke my faith in Ayn Rand, first through Aristotle, whose careful moral thinking helped me notice that her moral thinking just wasn’t comprehensive or careful, and then through Nietzsche, who finally broke my faith that The Truth and the exercise of reason could tell me what to do.
So that’s how I became a Fallen Ravenclaw - since now I had no Truth to tell me what to do, I had to be responsible for my own actions.
Fortunately, I had another moral foundation to catch me when I fell. During my Objectivist period, I learned to model Slytherin Primary, caring about myself and those I had chosen to be in my circle. This included a very weakly held outer circle for all the people in the world - I feel like taking care of them somewhat because in some very weak sense they’re mine.
I discovered Overcoming Bias around this time (via Marginal Revolution), while the Sequences were being written. The Sequences and Ayn Rand both gave me the aesthetic that the best kind of human to be is the kind who does things on purpose, who doesn’t settle into a comfy track just because it’s available, but first works out what they want, and then figures out how to get it, regardless of habit or the expectations of others. I didn’t know how to be that kind of person yet. I didn’t even know how to try to become that sort of person yet. But I knew that it was how I wanted to be.
A couple of other key turns my life took were set up here:
- During my Objectivist phase, I had decided I’d ignore professional not-for-profit do-gooding as mostly a scam, and come back to it if and when I found someone being sufficiently careful to figure out what actually had a positive impact. Then I found out about GiveWell, which was clearly being sufficiently careful. So integrity compelled me to seriously consider do-gooding again.
- I learned about existential risk (X-risk) and AI risk from Overcoming Bias. Independent of any do-gooding impulse, it seemed terribly important to save the world, because I live here. But I didn’t trust myself to evaluate the arguments for action correctly. The strong rhetorical force behind Eliezer Yudkowsky’s arguments vaguely resembled the pleasant sensation of being swept away by Ayn Rand’s rhetorical persuasion, and I didn’t trust my sense of agreement where AI risk was concerned. So I was unwilling to do anything on that basis. I (correctly!) didn’t trust my own rationality enough at the time to act as though these things just were true, and I wouldn’t have known how anyway.
After college, I felt that not being able to work on empirical questions where we didn’t already know how history turned out was a big gap in my abilities. I vaguely perceived academia as a place where there wasn’t much market demand for the work, so I was suspicious of its value, and got a job working in risk analytics for Fannie Mae. The money was also nice.
I found that it drove me nuts not to understand the theory behind the statistics we were using at work, so I did a Master’s in Mathematics and Statistics at night, in order to learn how my tools worked.
During this time, the Singularity Institute (the precursor to MIRI and CFAR) ran its first-ever Rationality Boot Camp, which was too long for me to attend on my allotment of vacation time. But when I found out that there was going to be a minicamp lasting just over a week, I thought: this is a chance to become more of the sort of person I want to be, the sort of person who does things on purpose, who lives up to the rational-agent ideal. If I don’t do this now because it’s inconvenient, then I don’t expect it to get more convenient in the future, which means that I should expect to never do it.
So of course I had to apply. I did, and I went to the 2011 minicamp. Minicamp didn’t make much of a concrete immediate impact on me, except by persuading me to speed up grad school, taking two classes at a time instead of one, which probably accelerated me by a year.
I also started hosting the DC Less Wrong meetup around this time, in the hopes that it would become a group that wanted to actually practice getting better at rationality. This never really materialized, and eventually, after making a few serious efforts, I gave up on it.
I was giving a bit to MIRI and CFAR and GiveWell, maxing out my matched donations from Fannie Mae, but not otherwise much involved in EA or X-Risk related stuff.
I knew I wasn’t yet the sort of person I wanted to be, I still didn’t know how to fix it, and it was really aversive to even consider the idea of abandoning the track I was on, 2/3 of the way through my Master's. I also had little free time. (In hindsight, I probably should have changed a lot more about my life, but I didn’t know how to articulate the true source of my resistance to alternate plans, so explicit planning discussions didn’t help me very much.) So I decided, and declared repeatedly, that once I was done with my Master’s degree I’d think about what I wanted to do with my life.
Then, in June 2013, I finished my Master’s Degree. So integrity compelled me to think seriously about what I wanted to do with my life, and then act based on my decision.
At some point I’d read Talent is Overrated, by Geoff Colvin, which is a good account of the concept of deliberate practice, and which helped me get an intuition for why coaching might be helpful. Atul Gawande’s New Yorker article on coaching also helped build this intuition. Coincidentally, a friend of mine had started working as a life coach, and occasionally talked on Facebook about cool breakthroughs his clients had made. So I did a free consultation / sample session with him, and with two other coaches he referred me to, before ultimately deciding to work with my friend, Gideon (whom, by the way, I recently rehired).
(I liked, and still like, that he’s very attentive to inner work - understanding sources of resistance, exploring what you *really* want - not just narrowly project-focused. And flexible, eager to adapt when I need a different style than his default one. And willing to patiently make me sit with my discomfort at answering a question until I actually answer it, and to point out gently when I haven’t actually given a real answer.)
I also found out that CFAR was doing a workshop on the East Coast, “in New York” (which I falsely assumed meant the location would be convenient for me to get to, but it worked out) in November 2013, so I applied to that and was accepted.
In August 2013, I took a 3-week vacation with my partner, in the Pacific Northwest (Vancouver, Victoria, Portland, Seattle), and when I got back to work afterwards, I was changed. The people there didn’t seem to bullshit, and it made a profound impression on me.
By bullshit, I mean something analogous to the epistemic definition of bullshit proposed by Harry G. Frankfurt. Epistemic bullshit is when someone says something and isn’t telling the truth, but isn’t deliberately lying either - instead, they simply don’t care, and possibly don’t even know, whether it is true. Analogously, in the way one lives one’s life, there’s living out one’s values, there’s knowingly compromising one’s values for expediency or under pressure, and there’s doing what’s expected without even bothering to check whether it’s consonant with one’s own values.
A virtue of northeastern urban culture is that people want to make things work. There’s a kind of high-energy eagerness to please. People find themselves in a situation, and want to succeed and excel in it. But the downside of this virtue is a tendency to climb ladders unreflectively, without bothering to check whether the ladder gets you anywhere worth going. By contrast, in the northwest, people don’t prick up their ears and try to anticipate your requests so compulsively. There’s more of an attitude of, “I’m doing my thing, you’re welcome to join in, and if not, good luck to you.” This was true of the locals I met socially who were just living their lives, it was true of people working in service jobs, and it felt like it was true of people I saw walking down the street or sitting in coffee shops.
After imbibing some of the spirit of doing one’s own thing just because one wants to, I got back to work and saw how much bullshit there was. How much of the work I was doing was a long slog to patch holes in a poorly designed system rather than making something new that I believed in. I thought, “I understand why this has to be done, but I don’t understand why I have to be the one doing it. There’s some amount of money that would explain it to me, but it’s not what I’m being paid now.” So I was ready, in my gut, for a career shift and more broadly a life change.
After the CFAR workshop, I spent several months trying to figure out what to do, mostly thinking through what an enjoyable high-impact career would look like, probably related to X-risk, since the impact there seemed just so much bigger than anything else. But I ran into the same problem that I didn’t trust my ability to evaluate the relevant arguments or have good enough priors on how the world works.
So I thought about how to build those skills, until eventually, in a conversation with Critch, I realized that maybe I should look for a job that would train my judgment in the relevant area, rather than first improving my epistemics and then looking for a job.
During this time I started realizing that my preferences about what to do to the world aligned fairly well with Effective Altruism, and started identifying tentatively as an EA. I started an EA meetup in DC, mainly in the hope that it would lead to having more people with shared interests to have good conversations with, but it was way more popular than I’d hoped and started seeming like a thing that might be valuable for the world.
At the end of 2013, Anna engaged me in a conversation around ways I’d been disappointed with CFAR, and asked me if I’d be willing to be one of the matchers for CFAR’s annual fundraiser. I ended up agreeing, in exchange for a commitment from CFAR to make some serious efforts to build an epistemic rationality curriculum. A lot of the follow-up on this didn’t happen until 2015 (it now seems that CFAR’s exploring territory much closer to the thing I wanted out of it), but they did run an alumni workshop called Epistemic Rationality for Effective Altruists in April 2014, which I went to. There, I decided that the highest-value thing I could do over the next half-year was probably build up the DC EA meetup, since I was uniquely well-positioned to do so.
Shortly afterwards, I got a surprise expression of interest from the friend-of-a-friend founder of an analytics startup in DC, who was interested in maybe making me head of analytics there. That seemed good, but if I was going to invest time in considering a career change sooner than anticipated, it made sense to consider all the options.
So I decided to consider other things, and on an unrelated trip to the Bay, emailed GiveWell. I thought that working for them might satisfy my goal of improving my judgment around world-saving, and people in the DC EA meetup had also expressed interest in volunteering, so I reached out about that too. They said, we don’t do volunteers because the management overhead’s too high, but do you want to apply for a job here? I said sure, and did interviews while I was still in SF.
A few months later I chose GiveWell over the main alternative I was considering, starting an EA organization in DC. This seems like it was a good choice, since the group project I led to try to affect policy in an EA direction bloomed into EA Policy Analytics anyway, under the leadership of Matt Gentzel.
Around the same time I moved here, I started dating someone new. The new job and new relationship each demanded full performance from me, which had never happened before in my life. I was stretched to the edge of my abilities, had to grow just to keep up, and for the first time people could tell when I was doing my best work and when I wasn’t. I had a lot of bad habits from high school and my previous job that were kind of sustainable in environments that didn’t expect much from me, but totally unworkable in a context where I couldn’t make up for a week of slacking with a day of real work. (My previous romantic relationships weren’t pathological in this way - it can be good to feel safe and loved and enough, and I miss feeling that way - but it was a different good thing, and a new challenge, to be in a relationship that required rapid growth.)
So around the time I was additionally doing a lot of work around moving into a new group house with friends, I broke down. I felt like I was failing at everything, and I ran out of motivation. I thought - this can’t ever happen again. If I’m going to be able to protect the world, I need to be able to perform consistently at a high level and stay motivated. If I’m going to be able to keep my promises, I need to understand why my motivation died off, and learn how to predict and prevent this well in advance.
I troubleshot my problems, and decided that a big problem was that I hadn’t been aware of or cared about my level of motivation. This meant that I didn’t get yellow flags when I started to rely on willpower for things, so I didn’t try to fix motivational drag early, and allowed demotivation to acquire a lot of inertia. So after a bunch of emotionally exhausted dithering during which management at GiveWell was very supportive, and after taking a leave of absence from work, I decided to just plain leave GiveWell in order to work on this full-time.
About a month ago, I assessed my progress and felt that I was far enough along on that project that it would keep getting better more or less on autopilot, so I decided to turn my planning and focus to the next bottleneck: understanding the world well enough to figure out what to do.
I noticed that, in some sense, what I should have done when I didn’t understand the world well enough to assess arguments around AI risk was to just try to answer all the relevant questions for myself. But in addition to not yet having the research skills (working at GiveWell helped a lot with this), I was also just scared of doing something like that where I might fail, where I couldn’t count on achieving some measure of success by jumping through hoops set up by someone else, where I didn’t have the stamp of legitimacy of some institution, like a school or employer.
Now I’m not afraid of that, because I’m so desperate to have the answers, and it no longer feels fast enough to do something vaguely correlated but respectable. I’m trying to work out for myself what I think about AI timelines and takeoffs, to start, and then once I have an opinion about what world I live in, I’ll think about what interventions plausibly actually do something. It’s frustrating and scary and hard and a lot of time gets spent troubleshooting why it’s hard to do self-motivated research on a thing like this, but it feels correct.
I have no doubt that I’m working on the best thing for me to be working on right now.
A lot of it may end up duplicating others’ work, but an important part of this is that I want my own well-integrated model of things. Part of why I initially distrusted my epistemics around AI risk was that I found both sides of the FOOM debate persuasive, and didn’t have an underlying model that could contextualize them. I don’t want to just evaluate other people’s arguments one at a time; that just leads to whiplash and a vague sense of which side had the higher quantity of persuasiveness. I want to understand things well enough to be able to generate the arguments myself, and to see which underlying parameters you have to change in your worldview to make different takes on AI risk plausible.
This is an exercise in digging in my heels and saying, “No, I DON’T understand, I’ve read the Sequences and the FOOM debate and Superintelligence and I still don’t understand anything, so I’m going to stop and explore the areas of uncertainty and keep doing that until I’m satisfied, even if it takes me a year to figure out what to start doing about it.”
This is inspiring. I realized while reading it that I didn't know a lot about your past, and it's cool to get that context 🙂
Also, I remain really excited about your project to model the world. I'm trying to do this myself, and so far still finding it pretty hard to stay focused on the core details. I haven't yet reached this state:
> "Now I’m not afraid of that, because I’m so desperate to have the answers, and it no longer feels fast enough to do something vaguely correlated but respectable."
But, as you point out, self-motivated research like this is hard. Reading this is making me feel like it makes sense to do way more troubleshooting of my process, though, which I expect to help.
I'd love to discuss the FOOM debate with a group some time.