A Glitch in the Admissions Process

University of Oxford Physics Interview, 13th December 2050
“Ok Alice, here are two opaque boxes. The blue one definitely has a thousand pounds in it. We put either a million pounds in the red one, or nothing. You can choose to open either or both boxes and keep the money you find inside. We accurately predicted your decision earlier using our latest technology, integrating artificial intelligence with machine learning and quantum consciousness. If we predicted you would open both boxes, we left the red box empty. If we predicted you would only open the red one, we put a million pounds in it. So which box or boxes would you like to open?”

Alice is faced with Newcomb’s paradox. Devised by theoretical physicist William Newcomb in 1960, the problem was first analysed in print by Harvard philosopher Robert Nozick in his 1969 paper, ‘Newcomb’s Problem and Two Principles of Choice’.

“I should only open the red one. If I trust that your predictor is completely accurate, then I will certainly have a million pounds. Whereas if I open both, the red box will be empty, and I will certainly have only a thousand pounds.”

Alice reaches for the red box, then pauses.

“But you cannot change whether the red box has a million pounds or nothing inside, whatever I choose now. So I must get the most money by opening both. Leaving behind a definite thousand pounds in the blue box would be nonsensical.”

As Nozick commented in his paper, ‘To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly’.

Alice reaches for the blue box, then pauses.

“The only way your predictor could be completely accurate, and know which boxes I would decide to open, is by simulating my exact decision-making process. So this experience, which I take to be the real interview, is just as likely to be the simulation of the interview you say you ran earlier. Just in case I really am your simulated version, I should make the decision that will benefit my real self. To ensure the predictor put (or will put) a million pounds in the red box, I must open only the red box.”

“Yet do you still have free will, if we managed to predict your decision with certainty?”

“Yes, because my free will was exercised the first time you ran the simulation. That is when the decision was made, when the information describing my decision was created in the universe.”

Satisfied with her logic, Alice draws the red box towards her, then pauses.

***
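What follows is not part of the story, but a minimal sketch of the arithmetic behind Alice’s two competing arguments. It compares the expected payoffs of the two choices when the predictor is correct with some probability p; the payoff figures come from the interview, while p, the function names, and the break-even calculation are illustrative assumptions.

```python
# Expected payoffs in Newcomb's problem, assuming (hypothetically)
# that the predictor is correct with probability p.
BLUE = 1_000        # the blue box always contains a thousand pounds
RED = 1_000_000     # the red box contains a million pounds, if filled

def expected_one_box(p: float) -> float:
    # Predictor right (probability p): red box was filled -> win RED.
    # Predictor wrong (probability 1 - p): red box is empty -> win nothing.
    return p * RED

def expected_two_box(p: float) -> float:
    # Predictor right (probability p): red box is empty -> win only BLUE.
    # Predictor wrong (probability 1 - p): red box was filled -> win both.
    return p * BLUE + (1 - p) * (RED + BLUE)

for p in (0.5, 0.5005, 0.9, 1.0):
    print(f"p={p}: one-box {expected_one_box(p):,.0f}, "
          f"two-box {expected_two_box(p):,.0f}")
```

On these assumptions, opening only the red box overtakes opening both as soon as p exceeds 1,001,000 / 2,000,000 = 0.5005, which is why Alice’s first argument is so compelling when the predictor is ‘completely accurate’ (p = 1). Her second, causal argument does not dispute this arithmetic; it denies that conditioning on the prediction is the right way to compute the expectation at all, since the box contents are already fixed.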