I'm curious whether JAGA always gets the same conclusions as SSA with some particular reference class.
I think maybe it gets the same result as the strong self-sampling assumption with reference class = "continuations of agents that had the first macro-experience I had when I, as an agent, first began" or something like that. But I haven't properly checked.
I believe that's right
Very compelling! Let me try to explain why.
I think the right way to think about "which reference class to take" (all possible observers, or all existing observers, or all observers who have experienced the same macro-observations and macro-states that Jesse has) is to frame it as "what information should I update on". That is, from which exact prior do I want to do expected value maximization (a more updated, or a less updated one).
You could not update on your existence at all, or just update on the fact that you exist, or also update on all your particular macro-observations and macro-states. Or anything in between, like updating on the fact that at least one observer in this world has observed your macro-observations, without also updating proportionally to how many observers actually saw the macro-observations. Or like updating on some observations but not others! (Even if that usually feels more arbitrary.)
As is always the case with questions of updating, from a common-sense perspective it has a lot of appeal to just update on everything we can! You know you're Jesse, and not a random non-existent alien observer surveying the universe. So why purposefully ignore that information? There are, of course, some features that even your whole agential state doesn't let you disambiguate, like whether you're a Boltzmann brain. But to the extent you can, why shouldn't you update on your whole agential state, which will let you disambiguate as much as possible? The value-of-information intuition, the intuition that "I already know X is the case, so why ignore it", runs strong. And I think this is why your case here is compelling.
And yet (as is always the case with questions of updating), we can always construct cases where being more updateless seems beneficial... from a more updateless prior, of course!
Some examples involve fixing your agential state (you are just Jesse), but being updateless about whether you exist. E.g., the "picking flowers" example, where taking an action that is suboptimal (conditional on you existing) makes you more likely to exist.
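I don't remember the exact payoffs of the picking-flowers case, so here is just a generic toy sketch of that structure (the action names and numbers are mine, purely illustrative): one action is better conditional on existing, the other makes you more likely to exist at all, and the two priors disagree about which to take.

```python
# Generic toy version of the structure mentioned above: an action that is
# suboptimal conditional on existing, but makes you more likely to exist.
# Action names and numbers are illustrative, not from the original example.
actions = {
    # action: (probability that you exist at all, utility you get if you exist)
    "optimal-if-existing": (0.1, 10.0),
    "existence-boosting":  (0.9,  5.0),
}

# Updateful prior: condition on the fact that you exist, then maximize utility.
updateful_value = {a: u for a, (p_exist, u) in actions.items()}

# Updateless prior: don't condition on existing; weight each action's utility
# by how likely that policy makes your existence in the first place.
updateless_value = {a: p_exist * u for a, (p_exist, u) in actions.items()}

print(max(updateful_value, key=updateful_value.get))    # optimal-if-existing
print(max(updateless_value, key=updateless_value.get))  # existence-boosting
```

From the updateful (conditioned-on-existing) prior, the high-utility action wins; from the updateless prior, the existence-boosting action wins, because its utility gets weighted by how likely it makes your existence.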
Other examples (that feel closer to what we usually call "anthropics", rather than "updateless decision problems") involve being updateless about different parts of your agential state. For example, if we forget for a second about everything we know about Jesse and his observations, and just update on the fact that a random existing human (out of all humans that exist) has observed "being one of the first humans", this updates us towards "there aren't many humans that exist". And if we instead update on the fact that Jesse-in-particular observes "being one of the first humans" (that is, we include the full agential state), this doesn't update us at all: Jesse was always, by definition of his agential state, going to be one of the first humans, so this tells us nothing new!
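To make the contrast concrete, here is a toy Bayesian sketch (the two worlds and the numbers are mine, not from the post): two equally likely worlds, a small one with 10^11 humans ever and a large one with 10^14. Updating on "a randomly sampled existing human observes being among the first 10^11" strongly favors the small world; updating on "Jesse, whose agential state already entails being an early human, observes this" leaves the 50/50 prior untouched.

```python
# Toy sketch of the two update rules discussed above (worlds and numbers are
# purely illustrative).
FIRST = 1e11  # "being among the first 1e11 humans"
worlds = {
    "small world": 1e11,  # total humans that will ever exist, if the world is small
    "large world": 1e14,  # total humans that will ever exist, if the world is large
}
prior = {w: 0.5 for w in worlds}

# Rule 1: update on "a random existing human observes being among the first FIRST".
# Likelihood = fraction of that world's humans who are among the first FIRST.
lik_random = {w: min(FIRST / n, 1.0) for w, n in worlds.items()}
z1 = sum(prior[w] * lik_random[w] for w in worlds)
posterior_random = {w: prior[w] * lik_random[w] / z1 for w in worlds}

# Rule 2: update on "Jesse, whose agential state already entails being an early
# human, observes being among the first FIRST". Likelihood = 1 in both worlds,
# so the posterior just equals the prior.
lik_jesse = {w: 1.0 for w in worlds}
z2 = sum(prior[w] * lik_jesse[w] for w in worlds)
posterior_jesse = {w: prior[w] * lik_jesse[w] / z2 for w in worlds}

print(posterior_random)  # ~{'small world': 0.999, 'large world': 0.001}
print(posterior_jesse)   # {'small world': 0.5, 'large world': 0.5}
```

The first rule is essentially the classic doomsday-argument update; the second treats the observation as carrying no news, because the full agential state already entailed it.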
Ultimately, I claim, it is totally up for grabs whether we choose to maximize expected value from a more updateless or a more updateful prior. Each prior will always just recommend itself. So we will need to query our philosophical intuitions and see what feels better.
I'm certainly first-order sympathetic to just fully updating to my current beliefs, as you sketch in this post, since that seems like the obvious epistemic perspective to feel excited about maximizing from. But then, examples like picking flowers still have some weight, and thinking of myself as an instance of a more general class of algorithms (that is, not fully updating on my agential state) also evokes some nice feelings of universal resonance, joint optimization, and caring for counterfactuals (as hippie as that sounds).