Discussion about this post

Silas Abrahamsen:

Interesting post, but I don't think EA is committed to anything scary here. Maybe some EAs talk about expected values as these objective values we can find or whatever, but the project of EA surely doesn't hinge on anything like this!

We can just say something like: You care about certain things, or think certain things are good--e.g. people surviving and animals not being trapped in cages or whatever. And so when acting in the world you should generally try to do things that you think are more likely to achieve more of these things. So if, say, a charity historically saves 1 life per $1 million and another per $5,000, then the latter seems a better way of getting the things you want.
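A minimal sketch of that comparison, using the hypothetical cost-per-life figures from this comment (the charity labels and budget are made up purely for illustration):

```python
# Rough sketch of the comparison above, using the comment's hypothetical
# cost-per-life figures; the budget is an arbitrary illustration.

def lives_saved_per_dollar(cost_per_life: float) -> float:
    """Historical lives saved per dollar, treated as a rough estimate of future impact."""
    return 1.0 / cost_per_life

charity_a = lives_saved_per_dollar(1_000_000)  # ~0.000001 lives per dollar
charity_b = lives_saved_per_dollar(5_000)      # ~0.0002 lives per dollar

budget = 10_000  # arbitrary example budget
print(f"Charity A: {charity_a * budget:.3f} expected lives saved")
print(f"Charity B: {charity_b * budget:.3f} expected lives saved")
# Charity B comes out ~200x better per dollar, which is the comment's point.
```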

Now, you can make the idea here more precise with a decision theory like EV, and try to make some useful idealizations in order to be able to say some more informative things. But I don't think EA as such is committed to anything nearly that strong. At least if it is, then it seems that someone preferring oatmeal to poison for breakfast would be similarly committed, and then there's no point in singling out EA.
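For concreteness, one way the "more precise" expected-value version could look; the probabilities and values below are invented solely to illustrate the oatmeal/poison point, not drawn from the post:

```python
# Toy expected-value calculation: score each action by sum(probability * value)
# over its possible outcomes. All numbers here are invented for illustration.

def expected_value(outcomes: dict[str, tuple[float, float]]) -> float:
    """outcomes maps an outcome label to (probability, value)."""
    return sum(p * v for p, v in outcomes.values())

actions = {
    "oatmeal": {"ordinary breakfast": (0.99, 1.0), "mildly unpleasant": (0.01, -0.1)},
    "poison": {"somehow fine": (0.01, 1.0), "catastrophic": (0.99, -1000.0)},
}

best = max(actions, key=lambda name: expected_value(actions[name]))
print(best)  # "oatmeal" -- the same verdict everyday prudence gives
```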

Ontological Nightmare:

When’s the next one dropping?

