Each of the leading solutions to anthropics runs into wildly counterintuitive implications. It seems to me that the counterintuitiveness of these implications stems from the fact that I see myself as just a guy, and yet the major anthropic theories ask me to reason as if I am something other than just a guy.
The self-sampling assumption (SSA) asks me to reason as if I am drawn at random from some “reference class” of observers, e.g., all humans to ever exist. On that assumption, I should expect my birth rank to be about Total Number of Humans Who Will Ever Live / 2; so, having observed my actual birth rank, I should favor worlds in which not too many humans come after me, which means I should expect the world to end pretty soon. Why is that counterintuitive? Because I’m just a particular guy – Jesse Clifton, born in 1993 in Baton Rouge, Louisiana. I couldn’t have been any other guy, so why am I reasoning as though I could?
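To make that update concrete, here is a minimal toy calculation of the SSA-style Doomsday shift. All the specifics (two doom dates, equal priors, a birth rank of roughly 100 billion) are made-up illustrative numbers, not anything from the argument's usual presentations:

```python
# Toy Doomsday calculation under SSA (illustrative numbers only).
# Two hypotheses about the total number of humans who will ever live, equal priors.
hypotheses = {"doom_soon": 2e11, "doom_late": 2e14}
prior = {h: 0.5 for h in hypotheses}

my_rank = 1e11  # say, roughly the hundred-billionth human

# SSA: given a total of N humans, my birth rank is uniform on 1..N,
# so P(rank = my_rank | N) is 1/N (and 0 if my_rank > N).
likelihood = {h: (1.0 / N if my_rank <= N else 0.0) for h, N in hypotheses.items()}

evidence = sum(prior[h] * likelihood[h] for h in hypotheses)
posterior = {h: prior[h] * likelihood[h] / evidence for h in hypotheses}
print(posterior)  # ~0.999 for "doom_soon": SSA pushes hard towards early doom
```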
The self-indication assumption instead asks me to reason as if I am drawn at random from the set of all possible people. I.e., I should reason as though there is a well of souls existing beyond spacetime, and, having created a world with N observers, God plucks N souls at random from the well to inhabit them. This means that, upon observing that I exist, I should expect the world to contain a huge number of observers. More observers means more souls get picked, which means more chances for me to get picked.
Again, counterintuitive, because I’m just a guy. I’m not (probably?) a soul plucked at random from the soul realm.
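For concreteness, here is the same kind of toy calculation for the SIA-style update (again, my own illustrative numbers): two cosmologies with equal non-anthropic priors, where the bigger one contains a billion times more observers.

```python
# Toy Presumptuous Philosopher calculation under SIA (illustrative numbers only).
# Two cosmologies with equal non-anthropic priors but very different observer counts.
observers = {"small_world": 1e12, "big_world": 1e21}
prior = {w: 0.5 for w in observers}

# SIA: reweight each world in proportion to how many observers it contains
# (more observers = more souls plucked from the well = more chances for me to get picked).
weights = {w: prior[w] * n for w, n in observers.items()}
total = sum(weights.values())
posterior = {w: weights[w] / total for w in weights}
print(posterior)  # big_world gets ~0.999999999: near-certainty from the armchair
```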
Last, there’s compartmentalized conditionalization, which asks me to reason as though my evidence is “someone observes X”. It turns out that, in large worlds, this seems to rule out the possibility of learning anything. Because in a large world, someone will observe X even if only by freak happenstance. For example, suppose you see your friend appear to shuffle a deck of cards but then deal them out in perfect order. Sure seems like it was a trick. But in a large world with a vast number of shufflings, someone will observe a genuinely shuffled deck come out perfectly in order, by chance. So, with compartmentalized conditionalization, you couldn’t update towards your friend having played a trick.
But it’s not just that someone observed the deck come out in perfect order. I, a particular guy, observed it. Importantly, I, a guy who could have seen the deck come out in a different order had it been genuinely shuffled, observed it. (More on this idea of “could have seen” shortly.)
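Here is a toy rendering of that contrast. The numbers are mine; “1e70 fair shuffles” is just a stand-in for whatever makes the world count as large:

```python
# Toy contrast between "someone observes X" and "I observe X" for the deck case.
# (Illustrative numbers; 1e70 fair shuffles is a stand-in for a truly "large world".)
from math import exp, factorial

p_ordered = 1 / factorial(52)   # chance one fair shuffle comes out in perfect order, ~1.2e-68
n_shuffles = 1e70               # fair shuffles happening anywhere, ever, in a large world

# Compartmentalized conditionalization: the evidence is "someone observes a perfectly
# ordered deck". By a Poisson approximation, that happens somewhere with near-certainty
# even if nobody ever plays a trick...
p_someone_no_trick = 1 - exp(-p_ordered * n_shuffles)
p_someone_trick = 1.0
print(p_someone_no_trick, p_someone_trick)   # both ~1, so essentially no update towards "trick"

# ...whereas conditionalizing on "I observe *this* deal come out perfectly ordered" gives
# an astronomical likelihood ratio in favor of "trick".
p_me_no_trick = p_ordered
p_me_trick = 1.0
print(p_me_trick / p_me_no_trick)            # ~8e67
```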
(Oh and I guess there’s UDASSA, but, uh, yeah.)
What does it mean to be a guy? We need a bit of philosophical machinery to make this intuition into a workable theory. Really, what I mean is an agent, in the sense of List's notion of agential possibility. First, consider the state of an agent at a particular time, their “agential state”; for example, consider my agential state as I am about to witness a coin toss. My agential, as opposed to physical, state consists of high-level features of my mind, which might include my memories, expectations, visual experiences, dispositions, and so on. Crucially, this agential state is multiply realizable: There are many microphysical states of the world that are consistent with the same agential state. Indeed, there is a microphysical state consistent with my agential state in which the coin is determined to come up heads, and a different microphysical state in which the coin is determined to come up tails. This means that observing heads and observing tails are agentially possible for me. There is a possible instance of me who observes tails and a different possible instance who observes heads.
An agent, then, corresponds to a branching structure of agential states, related by psychological continuity. When I say I am just a guy, I mean that I am a particular human agent, born with particular psychological characteristics and so on. One version of me exists in the actual world, but other versions live in other worlds. This is what allows me to say I could have seen Trump lose the 2016 election, and yet I could not have found that I am a caveman, or a bug, or that I didn’t exist at all even if Jesse does.
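If it helps, here is one very crude way to render the branching-structure picture as a data structure. This is purely my own illustration; the specific fields and the way continuity is tracked are not part of the account:

```python
# A toy data-structure rendering of "an agent = a branching structure of agential states
# related by psychological continuity". The feature names are made up for illustration.
from dataclasses import dataclass, field

@dataclass
class AgentialState:
    """A multiply-realizable, high-level state of a mind at a time."""
    memories: tuple
    experience: str
    successors: list = field(default_factory=list)  # psychologically continuous continuations

    def continue_with(self, experience):
        nxt = AgentialState(self.memories + (self.experience,), experience)
        self.successors.append(nxt)
        return nxt

# Jesse about to witness the coin toss: both outcomes are agentially possible for him,
# i.e. both appear as branches from the same pre-toss state.
pre_toss = AgentialState(memories=("born in 1993 in Baton Rouge",),
                         experience="about to watch a coin toss")
pre_toss.continue_with("sees heads")
pre_toss.continue_with("sees tails")
print([s.experience for s in pre_toss.successors])  # ['sees heads', 'sees tails']
```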
(It bears pointing out that, if all that relates an agent’s agential states is psychological continuity, then agency inherits the vagueness of that condition. This means our claims about what is possible etc. for an agent would be vague, too.
An especially tricky issue is, When does the agent begin? When they first become phenomenally conscious, perhaps in their mother’s womb? When they are able to imagine different ways the future could go, or acquire some other such capacity? I don’t have an account, and I expect any account is going to involve vagueness. But hopefully the beliefs that we get from an anthropic theory based on this notion of agency won’t end up being very sensitive to how we precisify the boundaries of the agent.)
This brings us to Just-a-Guy Anthropics (JAGA). If I think I am the particular guy (agent) Jesse, it is natural to form beliefs by
1. specifying a prior supported on worlds in which some instance of Jesse exists;
2. having observed a sequence of observations X, conditionalizing that prior on the event “Jesse observes X”.
(Actually, this is complicated by the fact that I am uncertain about which agent I am in the first place. E.g., I’m not sure whether I’m human-Jesse or Boltzmann brain-Jesse. So I should do this conditionalization for each possible agent that I am, and then maintain uncertainty over agents who see my observations in some world.)
This gets the intuitive answers in the cases we saw at the outset. I (qua agent) am not more likely to observe being the Nth human in worlds where the world ends soon than in worlds where it doesn’t (no Doomsday). I’m not more likely to observe my existence in a large world, because the prior is already restricted to worlds in which Jesse exists, and in any such world I’m certain to observe my existence (no Presumptuous Philosopher). And, upon seeing that the cards are perfectly ordered, I reason: “Had my friend actually shuffled, I almost certainly would have seen them come out in some other order, so I update towards them having played a trick”.
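To make those three verdicts concrete, here is a minimal sketch of the recipe above as I read it. The helper function and all the numbers are my own toy formalization, not anything the view is committed to:

```python
# A minimal sketch of the JAGA recipe: put a prior over worlds in which some instance of
# Jesse exists, then condition on how likely Jesse is to make observation X in each world.
# All numbers below are made up for illustration.

def jaga_posterior(prior, p_jesse_observes_x):
    """prior: world -> probability, supported on worlds containing an instance of Jesse.
    p_jesse_observes_x: world -> P(Jesse observes X | world)."""
    joint = {w: prior[w] * p_jesse_observes_x[w] for w in prior}
    z = sum(joint.values())
    return {w: joint[w] / z for w in joint}

# Doomsday: Jesse observes being roughly the Nth human whether or not doom comes soon,
# so the likelihoods match and the prior is left untouched.
print(jaga_posterior({"doom_soon": 0.5, "doom_late": 0.5},
                     {"doom_soon": 1.0, "doom_late": 1.0}))

# Presumptuous Philosopher: Jesse observes his existence with certainty in every world in
# the prior's support, so "I exist" doesn't favor the big world.
print(jaga_posterior({"small_world": 0.5, "big_world": 0.5},
                     {"small_world": 1.0, "big_world": 1.0}))

# The deck: had the friend genuinely shuffled, Jesse would almost certainly have seen some
# other order, so a perfectly ordered deal is overwhelming evidence of a trick.
print(jaga_posterior({"trick": 0.5, "fair_shuffle": 0.5},
                     {"trick": 1.0, "fair_shuffle": 1.2e-68}))
```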
I don’t know whether JAGA gets the intuitive answers in all cases; I haven’t thought about it thoroughly yet. (For one thing, it’ll arguably say I should think I’m probably Boltzmann-Jesse, if that bothers you.) But it’s still cool that JAGA gets these three key cases right.
So, should we buy JAGA? In addition to the vagueness point above, a major worry is that it relies on a metaphysically non-fundamental notion of “agency”. I’m sympathetic to a response like: “That may be so, but decision theory and epistemology just are concepts that apply to agents in this sense. Agents, not collections of particles or whatever one takes to be more fundamental, are what receive evidence and make decisions. So this whole exercise only makes sense in the first place if we are OK with regarding ourselves as agents.” But a lot more work would be needed to properly defend JAGA, and I’m definitely not confident in the view. Still, I think it’s helpful for probing what may be driving our intuitions about anthropic puzzles and what the space of options for anthropic theories looks like.
I'm curious whether JAGA always gets the same conclusions as SSA with some particular reference class.
I think maybe it gets the same result as the strong self-sampling assumption with reference class = "continuations of agents that had the first macro-experience I had when I as an agent first began" or something like that. But I haven't properly checked.
Very compelling! Let me try to explain why.
I think the right way to think about "which reference class to take" (all possible observers, or all existing observers, or all observers who have experienced the same macro-observations and macro-states that Jesse has) is to frame it as "what information should I update on". That is, from which exact prior do I want to do expected value maximization (a more updated, or a less updated one).
You could not update on your existence at all, or just update on the fact that you exist, or also update on all your particular macro-observations and macro-states. Or anything in between, like updating on the fact that at least one observer in this world has observed your macro-observations, without also updating proportionally to how many observers actually saw the macro-observations. Or like updating on some observations but not others! (Even if that usually feels more arbitrary)
(As is always the case with questions of updating) From a common-sense perspective it has a lot of appeal to just update on everything we can! You know you're Jesse, and not a random non-existent alien observer grabbing the universe. So why purposefully ignore that information? There are, of course, some features that even your whole agential state doesn't let you disambiguate, like whether you're a Boltzmann brain. But to the extent you can, why shouldn't you update on your whole agential state, which will let you disambiguate as much as possible? The value of information intuition, the intuition that "I already know X is the case so why ignore it", runs strong. And I think this is why your case here is compelling.
And yet (as is always the case with questions of updating), we can always construct cases where being more updateless seems beneficial... from a more updateless prior, of course!
Some examples involve fixing your agential state (you are just Jesse), but being updateless about whether you exist. E.g., the "picking flowers" example, where taking an action that is suboptimal (conditional on you existing) makes you more likely to exist.
Other examples (that feel closer to what we usually call "anthropics", rather than "updateless decision problems") involve being updateless about different parts of your agential state. For example, if we forget for a second about everything we know about Jesse and his observations, and just update on the fact that a random existing human (out of all humans that exist) has observed "being one of the first humans", this updates us towards "there aren't many humans that exist". And if we instead update on the fact that Jesse-in-particular observes "being one of the first humans" (that is, we include the full agential state), this doesn't update us at all: Jesse was always, by definition of his agential state, going to be one of the first humans, so this tells us nothing new!
Ultimately, I claim, it is totally up for grabs whether we choose to maximize from a more updateless or a more updateful prior. Each prior will always just recommend itself. So we will need to query our philosophical intuitions and see what feels better.
I'm certainly first-order sympathetic to just fully updating to my current beliefs, as you sketch in this post, since that seems like the obvious epistemic perspective that I feel obviously excited about maximizing. But then, examples like picking flowers still have some weight, and thinking of myself as an instance of a more general class of algorithms (that is, not fully updating on my agential state) also evokes some nice feelings of universal resonance, joint optimization and caring for counterfactuals (as hippie as that sounds).