epistemic status: uncertain
Holden Karnofsky published a great post yesterday, called Some Key Ways in Which I’ve Changed My Mind Over the Last Several Years. Among other things, he wrote that he’s less skeptical of EA than he used to be. He noted, though, that he still has some “concerns and objections” to EA, and linked to this comment of his as an example.
I disagree with the objections he made in that comment.
He wrote:
I’m generally uncomfortable with (and disagree with) the “obligation” frame of EA. I’m particularly uncomfortable with messages along the lines of “The arts are a waste when there are people suffering,” “You should feel bad about (or feel the need to defend) every dollar you spend on yourself beyond necessities,” etc. I think messages along these lines make EA sound overly demanding/costly to affiliate with as well as intellectually misguided.
I don’t think this message is particularly widespread in EA. I hear people disagreeing with it more than I hear them agreeing with it (e.g., Julia Galef and Ozy, just this week). I don’t think any of the EAs I know believe “You should feel bad about (or feel the need to defend) every dollar you spend on yourself beyond necessities” (though I guess they might not admit that they think that, even if they do).
The 2015 EA survey described the median EA donation of $333 as “Certainly good, but not as impressive [as the mean donation]”. That doesn’t sound like a community that shames people for every dollar not spent on necessities.
I think there are a variety of messages associated with EA that communicate unwarranted confidence on a variety of dimensions, implying that we know more than we do about what the best causes are and to what extent EAs are “outperforming” the rest of the world in terms of accomplishing good. “Effective altruism could be the last social movement we ever need” and “Global poverty is a rounding error compared to other causes” are both examples of this; both messages have been prominently enough expressed to get in this article, and both messages are problematic in my view.
I don’t buy “both messages have been prominently enough expressed to get in this article”. Journalists specifically look out for problematic things to quote; you should implicitly prepend every quote you see with “After spending a while looking for things that make [whatever group] look bad, the worst I could find was …”.
(The “EA could be the last social movement we ever need” thing was pretty cringeworthy. But that’s by far the cringiest instance of that opinion I’ve ever heard expressed, either privately or publicly.)
I also have criticisms about EAs being overconfident and acting as if they know way more than they do about a wide variety of things, but my criticisms are very different from his. For example, I’m super unimpressed that so many EAs didn’t know that GiveWell thinks that deworming has a relatively low probability of very high impact. I’m also unimpressed by how many people are incredibly confident that animals aren’t morally relevant despite knowing very little about the topic.
And some EAs (especially rationalists) have this awful habit of proposing incredibly dumb solutions that imply they think everyone else in the world is incompetent and that it’s easy to just plow past political problems with shitty amateur subterfuge. E.g., yesterday I saw someone propose that we establish a lab and just unilaterally try to eradicate malarial mosquitoes, and I’ve seen people propose doing geoengineering without the consent of any government; both are terrible ideas.
But that’s pretty different from the complaints that Dylan Matthews was making and that Holden is endorsing.