
Want to Summon Less Animal Suffering?

I've been thinking about Julia Galef's chart of how much of an animal's life a unit of food costs. Her summary of her approach:

If you’re bothered by the idea of killing animals for food, then going vegetarian might seem like an obvious response. But if you want your diet to kill as few animals as possible, then eschewing meat is actually quite an indirect, and sometimes even counterproductive, strategy. The question you should be asking yourself about any given food is not, “Is this food animal flesh?” The question you should be asking yourself is, “How many animal lives did this food cost?”

She ends up with this chart:

But as we know, from a hedonic utilitarian perspective, the moral cost of consuming animals is not measured in their deaths, but in the net suffering in their lives - and lives are made of days.

I am not a hedonic utilitarian, but I think that for big-picture issues, utilitarianism is an important heuristic for figuring out what the right answer is, since at least it handles addition and multiplication better than our unaided moral intuitions.

So I looked up how long each of these animals typically lives, to estimate how many days of each animal's life you are responsible for when you consume 1,000 calories of that animal. The Food and Agriculture Organization of the United Nations says that beef cattle are slaughtered at 36 months old (i.e. 1,096 days), and pork hogs at 6 months old (183 days). Wikipedia says that chickens raised for meat typically live about 6 weeks (42 days), and laying hens about 2 years (730 days), and that dairy cows live an average of four years (1,461 days) before dying or being culled and sold as beef.

Using those figures yields the following:

Days per 1000 Calories

Eggs now appear to be even worse than eating chicken flesh, since broiler chickens live very short lives compared to laying hens. Similarly, beef loses its advantage relative to pork, since pork comes from smaller but faster-maturing animals. Dairy still seems pretty okay, comparatively.
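To make the arithmetic concrete, here is a minimal Python sketch of the conversion. The lifespans are the figures cited above; the edible-calorie yields per animal are rough placeholder assumptions for illustration only, not Julia's actual numbers.

    # days of animal life per 1,000 calories = lifespan_days / (kcal_per_animal / 1000)
    # Lifespans are the figures cited above; the kcal yields per animal are
    # placeholder assumptions for illustration only.
    animals = {
        # name:            (lifespan in days, edible kcal per animal -- assumed)
        "broiler chicken": (42, 3_000),
        "laying hen":      (730, 20_000),       # lifetime egg output, assumed
        "pork hog":        (183, 400_000),
        "beef steer":      (1_096, 900_000),
        "dairy cow":       (1_461, 8_000_000),  # lifetime milk output, assumed
    }

    for name, (lifespan_days, kcal_per_animal) in animals.items():
        days_per_1000_kcal = lifespan_days / (kcal_per_animal / 1000)
        print(f"{name}: {days_per_1000_kcal:.2f} animal-days per 1,000 kcal")

With real yield figures plugged in, this reproduces the chart above; the point is only that the day counts scale linearly with lifespan and inversely with calories per animal.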

A few complicating factors need to be explored before this becomes a reliable guide, though.

First, not all animal suffering is of equal intensity. It may actually be net good for well-treated animals to exist. It is still not obvious that factory farmed animals would rather never have been born, or how to count their suffering.

Second, it is not obvious how to attribute responsibility for animals that are not themselves raised to feed humans. For example, Julia divided laying hens' yield in half to account for killed male chicks, but the chicks obviously live almost no time. If you double the yield back to what it would have been before this adjustment, eggs come out about the same as chicken. Similarly, if a dairy cow is killed for beef, it seems like this should lower milk drinkers' and cheese eaters' day counts, since beef eaters contribute to the viability of raising dairy cattle too.

Finally, elasticity may differ across animal products: because lowered demand leads to lower prices, which in turn increase demand, the industry might reduce production by less than one full chicken for each chicken you abstain from buying, and this rate may differ by animal.
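A minimal sketch of how that adjustment would work; the 0.7 cumulative-elasticity factor below is purely made up for illustration, not an empirical estimate.

    # If abstaining from one unit of a product only reduces production by some
    # fraction (the cumulative elasticity), scale the animal-days accordingly.
    # The 0.7 default is an illustrative made-up value, not an estimate.
    def effective_days(days_per_1000_kcal, cumulative_elasticity=0.7):
        return days_per_1000_kcal * cumulative_elasticity

    print(effective_days(14.0))  # e.g. 14 animal-days/1,000 kcal -> 9.8 effective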

What am I going to do? I like eggs a lot, and think they're cheap and pretty good for me, and I basically believe Katja Grace's argument that I shouldn't put in a lot of work to change my behavior on this, so I'm going to keep eating them, though I'll continue to preferentially buy free range eggs where available. I have a rationalization that laying hens don't suffer nearly as much as broilers, so I'll flinch a little bit each time I consider eating chicken. I was favoring eating beef over pork due to Julia's analysis, but I will stop doing that now that I know the two are pretty much equivalent in terms of animal-days.

[UPDATE: Brian Tomasik has an unsurprisingly somewhat more thoroughly worked out treatment of this line of thinking here.]

Cauliflower Bread, Twice Attempted

I like good bread. A lot. A loaf of crusty sourdough or baguette, with some nice butter (and if I'm feeling extra indulgent, radishes and salt) is one of the foods I most enjoy. But it reliably causes me to put on weight, which I'm trying not to do right now.

I asked my friends for a recipe for something like bread except that it doesn't cause me to eat a huge number of calories. (If I were underweight, I could fix it in a day. Just give me some top-quality baguettes and a few pounds of nice butter.) "Eat less bread" isn't an option because I am on the minimum-willpower diet, and stopping before I run out of bread and butter is a major willpower expenditure. More about my minimum-willpower diet in a future post.

My deeply appreciated correspondent Julia Galef provided a recipe which sounded promising, because it has more satiety-inducing fat and protein and fiber, and less other carbs:

My paleo-friendly breadstick recipe:

Blend, in a food processor:
1 cup plain quick oats
1/2 cup egg whites (I was aiming low-calorie, but you could try 2 eggs instead)
~1 cup steamed cauliflower florets
Onion powder, garlic powder, salt & pepper to taste
Enough water to make it just barely pourable
Pour into 9 x 13 baking dish that has been sprayed/brushed with oil. Bake at ~375 degrees for ~1.5 hours. The bottom should get browned and crunchy; the inside should be soft.

This is what I use for a pizza crust, but you could cut it into strips to make breadsticks. I suspect this will satisfy your "dippable breadlike" craving... but lemme know how it turns out! I invented this recipe and have never tried transferring it to another person, so it's possible there's some detail I'm neglecting.

So I preheated the oven to 375 degrees Fahrenheit. Meanwhile, I took a head of cauliflower:

[photo]

And steamed it in the microwave:

[photo]

I stuffed some florets into a 1-cup measure, using up about half the cauliflower:

[photo]

And put all the ingredients into the food processor:

[photo]

Then I blent it until smooth:

[photo]

The recipe said to add water until just barely pourable. Since I was able to pour it out (very slowly) without adding water, I added none, and poured it into a pan:

[photo]

I smoothed it out with a silicone spatula. Then, since I had more than half a head of cauliflower left, I made a second batch, using some baking powder instead of salt. This one came out smoother, as you may be able to see from this side by side comparison:

[photo]

After they had spent about an hour on the oven's top rack, I checked on them, and they looked like this:

[photo]

[photo]

The one made with baking powder turned out fine, but the other one was burnt. I cut both up into squares (putting aside the ones that were a little too crispy), and served them at that night's dinner party, with olive oil. They had nearly the consistency of the flatbread served at Cosi, and my guests said they liked them.

As a second experiment, which I did not document with photographs, I figured that since the problem was that the bread got crispy all the way through too soon, and since a single recipe uses only half a head of cauliflower, I would try doubling the recipe and cooking it under the same conditions. The bread turned out a little wet, but was otherwise well liked. I think the way to go is to use the original single recipe per pan, with the addition of baking powder, but to check it and pull it out sooner.

CFAR - Second Impression and a Bleg

TLDR: CFAR is awesome because they do things that work. They've promised to do more research into some of the "epistemic rationality" areas that I'd wished to see more progress on. They have matched donations through 31 January 2014; please consider giving if you can.

UPDATE: CFAR now has a post up on Less Wrong explaining what they are working on and why you should give. Here's the official version: http://lesswrong.com/lw/jej/why_cfar/

Second Thoughts on CFAR

You may have seen my first-impression review of the Center For Applied Rationality's November workshop in Ossining, NY. I've had more than a month to think it over, and on balance I'm pretty impressed.

For those of you who don't already know, CFAR (the Center For Applied Rationality) is an organization dedicated to developing training to help people overcome well-studied cognitive biases, and thus become more effective at accomplishing their goals. If you've never heard of CFAR before, you should check out their about page before continuing here.

The first thing you need to understand about CFAR is that they teach stuff that actually works, in a way that works. This is because they have a commitment to testing their beliefs, abandoning ideas that don't work out, and trying new things until they find something that works. As a workshop participant I benefited from that: it was clear that the classes were way better honed, more specific, and more action-oriented than they'd been in 2011.

At the time I expressed some disappointment that a lot of epistemic rationality stuff seemed to have been neglected, postponed, or abandoned. Even though some of those things seem objectively much harder than some of the personal effectiveness training CFAR seems to have focused on, they're potentially high-value in saving the world.

The Good News

After my post, Anna Salamon from CFAR reached out to see if we could figure out some specific things they should try again. I think this was a helpful conversation for both of us. Anna explained to me a few things that helped me understand what CFAR was doing:

1) Sometimes an "epistemic rationality" idea turns into a "personal effectiveness" technique when operationalized.

For example, consider the epistemic rationality idea of beliefs as anticipations, rather than just verbal propositions. The idea is that you should expect to observe something different in the world if a belief is true than if it's false. Sounds pretty obvious, right? But the "Internal Simulator," where you imagine how surprised you will be if your plan doesn't work out, is a non-obvious application of that idea.

2) Some of the rationality techniques I'd internalized from the Sequences at Less Wrong, that seemed obvious to me, are not obvious to a lot of people going to the workshops, so some of the epistemic rationality training going on was invisible to me.

For example, some attendees hadn't yet learned the Bayesian way of thinking about information - that you should have a subjective expectation based on the evidence, even when the evidence isn't conclusive yet, and there are mathematical rules governing how you should treat this partial evidence. So while I didn't get much out of the Bayes segment, that's because I've already learned the thing that class is supposed to teach.

3) CFAR already tried a bunch of stuff.

They did online randomized trials of some epistemic rationality techniques and published the results. They tried a bunch of ways to teach epistemic rationality stuff and found that it didn't work (which is what I'd guessed). They'd found ways to operationalize bits of epistemic rationality.

4) The program is not just the program.

Part of CFAR's mission is the actual rationality-instruction it does. But another part is taking people possibly interested in rationality, and introducing them to the broader community of people interested in existential risk mitigation or other effective altruism, and epistemic rationality. Even if CFAR doesn't know how to teach all these things yet, combining people who know each of these things will produce a community with the virtues the world needs.

In the course of the conversation, Anna asked me why I cared about this so much - what was my "Something to Protect"? This question helped me clarify what I really was worried about.

In my post on effective altruism, I mentioned that a likely extremely high-leverage way to help the world was to help people working on mitigating existential risk. The difficulty is that the magnitude of the risks, and the impact of the mitigation efforts, is really, really hard to assess. An existential risk is not something like malaria, where we can observe how often it occurs. By definition we haven't observed even one event that kills off all humans. So how can we assess the tens or hundreds of potential threats?

A while before, Anna had shared a web applet that let you provide your estimates for, e.g., the probability each year of a given event like global nuclear war or the development of friendly AI, and it would tell you the probability that humanity survived a certain number of years. I tried it out, and in the process, realized that:

Something Is Wrong With My Brain and I Don't Know How to Fix It

For one of these rates, I asked myself the probability in each year, and got back something like 2%.

But then I asked myself the probability in a decade, and got back something like 5%.

A century? 6%.

That can't be right. My intuitions seem obviously inconsistent. But how do I know which one to use, or how to calibrate them?
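As a sanity check on those numbers (my own sketch, not the applet's code): if the annual probability really were a constant 2%, and the years were independent, the decade and century figures would have to come out far higher than 5% and 6%.

    # Probability that an event with constant annual probability p happens
    # at least once within n years, assuming independence across years.
    def prob_within(p_per_year, years):
        return 1 - (1 - p_per_year) ** years

    p = 0.02
    print(prob_within(p, 1))    # 0.02   -> 2% within a year
    print(prob_within(p, 10))   # ~0.183 -> ~18% within a decade, not 5%
    print(prob_within(p, 100))  # ~0.867 -> ~87% within a century, not 6%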

Eliezer Yudkowsky started writing the Sequences to fix whatever was wrong with people's brains that was stopping them from noticing and doing something about existential risk. But a really big part of this is gaining the epistemic rationality skills necessary to follow highly abstract arguments, modeling events that we have not observed and cannot observe, without getting caught by shiny but false arguments.

I know my brain is inadequate to the task right now. I read Yudkowsky's arguments in the FOOM Debate and I am convinced. I read Robin Hanson's arguments and am convinced. I read Carl Shulman's arguments and am convinced. But they don't all agree! To save the world effectively - instead of throwing money in the direction of the person who has most recently made a convincing argument - we need to know how to judge these things.

In Which I Extract Valuable Concessions from CFAR in Exchange for Some Money

Then it turned out CFAR was looking for another match-pledger for their upcoming end/beginning of year matched donations fundraiser. Anna suggested that CFAR might be willing to agree to commit to certain epistemic rationality projects in exchange. I was skeptical at first - if CFAR didn't already think these were first-best uses of its money, why should I think I have better information? - but on balance I can't think of a less-bad outcome than what we actually got, because I do think these things are urgently needed, and I think that if CFAR isn't doing them now, it will only get harder to pivot from its current program of almost exclusively teaching instrumental rationality and personal effectiveness.

We hashed out what kinds of programs CFAR would be willing to do on the Epistemic Rationality front, and agreed that these things would get done if enough money is donated to activate my pledge:

  • Participate in Tetlock's Good Judgment Project to learn more about what rationality skills help make good predictions, or would help but are missing.
  • Do three more online randomized experiments to test more epistemic rationality techniques.
  • Do one in-person randomized trial of an epistemic rationality training technique.
  • Run three one-day workshops on epistemic rationality, with a mixture of old and new material, as alpha tests.
  • Bring at least one epistemic rationality technique up to the level where it goes into the full workshops.

And of course CFAR will continue with a lot of the impressive work it's already been doing.

Here are the topics that I asked them to focus on for new research:

Here are the major "epistemic rationality" areas where I'd love to see research:
  • Noticing Confusion (& doing something about it)
  • Noticing rationalization, and doing something to defuse it, e.g. setting up a line of retreat
  • Undistractability/Eye-on-the-ball/12th virtue/"Cut the Enemy"/"Intent to Win" (this kind of straddles epistemic and instrumental rationality AFAICT but distractions usually look like epistemic failures)
  • Being specific / sticking your neck out / being possibly wrong instead of safely vague / feeling an "itch" to get more specific when you're being vague
Here are some advanced areas that seem harder (because I have no idea how to do these things) but would also count:
  • Reasoning about / modeling totally new things. How to pick the right "reference classes."
  • Resolving scope-insensitivity (e.g. should I "shut up and multiply" or "shut up and divide"). Especially about probabilities *over time* (since there are obvious X-Risk applications).
  • How to assimilate book-learning / theoretical knowledge (can be broken down into how to identify credible sources, how to translate theoretical knowledge into procedural knowledge)

If you're anything like me, you think that these programs would be awesome. If so, please consider giving to CFAR, and helping me spend my money to buy this awesomeness.

The Bad News

For some reason, almost one month into their two-month fundraiser, CFAR has no post up on Less Wrong promoting it. As I was writing this post, CFAR had raised less than $10,000 compared to a total of $150,000 in matching funds pledged. (UPDATE: CFAR now has an excellent post up explaining their plan and the fundraiser is doing much better.)

CFAR Fundraiser Progress Bar

Huge oopses happen, even to very good, smart organizations, but it's relevant evidence about operational competence. Then again, I kind of have an idiosyncratic axe to grind with respect to CFAR and operational competence, as is obvious if you read my first-impression review. But it's still a bad sign, for an organization working on a problem this hard, to fail some basic tests like this. You should probably take that into account.

It's weak evidence, though.

CFAR Changed Me for the Better

The ultimate test of competence for an organization like CFAR is not operational issues like whether people can physically get to and from the workshops or whether anyone knows about the fundraiser. The test is, does CFAR make people who take its training better at life?

In my case there was more than one confounding factor (I'd started working with a life coach a few weeks before and read Scott Adams's new book a few weeks after - Less Wrong review here), but I have already benefited materially from my experience:

I had three separate insights related to how I think about my career that jointly let me actually start to plan and take action. In particular, I stopped letting the best be the enemy of the good, noticed that my goals can be of different kinds, and figured out which specific component of my uncertainty was the big scary one and took actual steps to start resolving it.

A couple of things in my life improved immediately as if by magic. I started working out every morning, for example, for the first time since college. I'm still not sure how that happened. I didn't consciously expend any willpower.

Several other recent improvements in my life of comparable size are partially attributable to CFAR as well. (The other main contributors are my excellent life coach, Scott Adams's book, and the cumulative effect of everything else I've done, seen, heard, and read.)

Several of the classes that seemed hard to use at the time became obviously useful in hindsight. For example, I started noticing things where a periodic "Strategic Review" would be helpful.

In addition, I learned how to be "greedy" about asking other people questions and asking for advice when I thought it would be helpful. This has been tremendously useful already.

I'll end the way I began, with a summary:

The problems humanity is facing in this century are unprecedented in both severity and difficulty. To meet these challenges, we need people who are rational enough to sanely evaluate the risks and possible solutions, effective enough to get something done, and good enough to take personal responsibility for making sure something happens. CFAR is trying to create a community of such people. Almost no one else is even trying.

CFAR is awesome because they do things that work. They've promised to do more research into some of the "epistemic rationality" areas that I'd wished to see more progress on. They have a fundraiser with matched donations through 31 January 2014; please consider giving if you can.