
A Glitch in the Admissions Process

University of Oxford Physics Interview, 13th December 2050

“Ok Alice, here are two opaque boxes. The blue one definitely has a thousand pounds in it. We put either a million pounds in the red one, or nothing. You can choose to open either or both boxes and keep the money you find inside. We accurately predicted your decision earlier using our latest technology, integrating artificial intelligence with machine learning and quantum consciousness. If we predicted you would open both boxes, we left the red box empty. If we predicted you would only open the red one, we put a million pounds in it. So which box or boxes would you like to open?”

Alice is faced with Newcomb’s paradox. Devised by theoretical physicist William Newcomb in 1960, the problem was first analysed in Harvard philosopher Robert Nozick’s 1969 paper.

“I should only open the red one. If I trust your predictor is completely accurate, then I will certainly have a million pounds. Whereas if I open both, the red will be empty, and I will certainly have only a thousand pounds.”

Alice reaches for the red box, then pauses. “But you cannot change whether the red box has a million pounds or nothing inside, whatever I choose now. I must get the most money by opening both. Leaving behind a definite thousand pounds in the blue box is nonsensical.”

As Nozick commented in his paper, ‘To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly’.

Alice reaches for the blue box, then pauses.

“The only way your predictor could be completely accurate, and know which boxes I would decide to open, is by simulating my exact decision-making process. So my experience of this interview is just as likely to be the simulation you say you ran earlier as the real thing. Now, just in case I really am your simulated version, I should make the decision that will benefit my real self. To ensure the predictor put (or will put) a million pounds in the red box, I must only open the red box.”

“Yet do you still have free will, if we managed to predict your decision with certainty?”

“Yes, because my free will was exercised the first time you ran the simulation. That is when the decision was made, when the information describing my decision was created in the universe.”

Satisfied with her logic, Alice draws the red box towards her, then pauses.

Alice sits, paused, on the screen as the interviewers discuss her performance.

“She’s done well, I think she deserves an offer,” asserts Bob, beginning to fill his holographic spreadsheet. “Ah, simulated interviews are so much more convenient than real ones!”

“Not so fast,” Charlie interjects. “Ok, she concluded she has to make the decision as though she is a simulation. But a really bright candidate would go one step further: realise she is a simulation. After all, if the simulation already tells us what decision she will make in this situation, why would we go to the trouble of asking the question in real life?”

Bob chuckles. “So, you expect her simulation to predict how we would use a simulation of her to predict what she would do in the interview to conclude she is a simulation. Sounds suspiciously recursive to me!” He presses play.

Alice ponders the red box suspiciously.

“This seems like a very, very expensive admissions process. In fact, the logical conclusion of you setting me this interview question can only be- be- be- be- be…”

Alice is paused, glitching, on screen. “What’s happened?” asks Charlie, pressing the simulator controls in frustration. “We haven’t pressed anything, why has she frozen?” Bob clicks open the source code to see what the problem is and receives an error message stating that the simulator is stuck in a recursive loop.

“Oh Charlie,” grins Bob, “I think Alice concluded she is a simulation.”

The blue box definitely has a thousand pounds inside it. The red box has either a million pounds or nothing. Surely you must get more money if you take both? Newcomb’s Paradox adds an unsettling twist – if the predictor predicted you would take both boxes, the red box has no money. If the predictor predicted you would take only the red box, the red box has a million pounds. Then surely you should only open the red box and get the million pounds inside. A paradox!
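The arithmetic behind the tension can be spelled out. Here is a minimal sketch in Python (the accuracy parameter p and the function name are illustrative additions, not part of the original puzzle) comparing the expected winnings of a “one-boxer” and a “two-boxer” when the predictor is correct with probability p:

def expected_payoffs(p: float) -> tuple[float, float]:
    """Expected winnings in pounds, given predictor accuracy p (0 <= p <= 1)."""
    # One-boxer: with probability p the predictor foresaw this
    # and put a million pounds in the red box.
    one_box = p * 1_000_000
    # Two-boxer: always collects the blue box's thousand pounds; with
    # probability (1 - p) the predictor wrongly filled the red box too.
    two_box = 1_000 + (1 - p) * 1_000_000
    return one_box, two_box

for p in (0.5, 0.9, 1.0):
    one_box, two_box = expected_payoffs(p)
    print(f"p = {p}: one-box £{one_box:,.0f}, two-box £{two_box:,.0f}")

On this reckoning, one-boxing pays more on average whenever p exceeds about 0.5005, and with a perfect predictor the gap is a thousandfold. Yet the dominance argument from the story – the boxes are already filled, so two-boxing always adds a thousand pounds – is untouched by the calculation, which is exactly the paradox.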

To know your decision, the predictor must have run a very precise simulation of you, indistinguishable from your real self. When deciding, you cannot know whether you are the simulated or real version. If you are simulated, of course you should only open the red box, so that the predictor puts a million in the red box for your real self to find later. Since you cannot know whether you are the simulation or not, you should always only open the red box.
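The story’s glitch is the computational shadow of this argument: a predictor that works by simulation must, once the candidate starts reasoning about the prediction itself, simulate its own simulation. A toy sketch (hypothetical Python, not from the story’s simulator) of why Bob’s error message reads as it does:

def simulate_interview(depth: int = 0):
    """Toy model of the interview simulator.

    To predict Alice's answer the simulator runs her reasoning; once her
    reasoning includes the simulator's own prediction, it must recurse
    into a fresh copy of itself and never reaches an answer.
    """
    # Alice: "the predictor simulated me, so I consider its simulation of me..."
    return simulate_interview(depth + 1)

try:
    simulate_interview()
except RecursionError:
    # Python gives up after hitting its recursion limit, much like
    # the simulator stuck in a recursive loop.
    print("Error: simulator stuck in a recursive loop")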

This framing of Newcomb’s Paradox was inspired by Professor David Deutsch, Visiting Professor at Oxford’s Centre for Quantum Computation. Deutsch’s version is available on the website of “Constructor Theory”, a theory that reframes physical laws. Newcomb’s Paradox is a controversial component of a branch of mathematics called decision theory. Recent extensions of the paradox have encompassed quantum physics and machine consciousness.

Maria Violaris is a Physics undergraduate at Magdalen College.
