
WORLDBUILDING ON THE WRONG PLANET

By John Dale Beety

How would it feel to live on the wrong planet?

THAT sensation inspired the name of an online community for people on the autism spectrum. While I don’t participate at Wrong Planet, I know the feeling. I, too, am autistic. My mask of normalcy, a luxury and curse that keeps me independent, fits poorly; holding it up takes constant calculation. The way another’s “How are you?” means “Smile! Tell me, ‘Great!’ Ask me, ‘How are you?’” is just one substitution cipher in my cryptographic mind.

Unusually for someone autistic, or so the screeners say, I enjoy fiction. My favourite genres are science fiction and fantasy (SF), partly because their settings must make legible the social rules that go unexplained on Earth. For example, Ellen Kushner’s fantasy of manners, Swordspoint, helped me appreciate Jane Austen’s novels.

I’m also an SF writer, a Full Member of the Science Fiction and Fantasy Writers of America (SFWA), and a passionate worldbuilder. When I learned of the Future of Life Institute (FLI)’s Worldbuilding Contest, which focused on Artificial General Intelligence (AGI) in 2045, it immediately became a special interest. Working alone, I crafted a submission that, in hindsight, missed the contest’s point entirely.

To make my failure useful, here are three lessons I learned from worldbuilding on the wrong planet.

Lesson #1: Relentlessly Consider the Audience

On entering the contest’s Discord server, I browsed the “introductions” channel. I soon realized I could not compete with AGI researchers on technical matters, so I built my strategy around my own strengths: media, culture and special interests with a popular bent. The contest finalists, by contrast, were overwhelmingly technical.

Where did I err? I forgot that before my submission reached the public, it had to connect with the judges. I should’ve understood that judges for a contest focused on AGI generally would be technical rather than cultural experts. My answers to the Questions About Your World compared AI alignment to “Mr. Right” and AI control to “Mr. Right Now.” My approach wasn’t designed to appeal to AGI experts, and it didn’t.

Even if I passed the AGI-expert judges’ filter, my submission’s Media Piece strategy was flawed. Most finalists made accessible, enjoyable videos. I made conceptual art. Whoops.

Lesson #2: Let a Team Use Your Strengths

There was a simpler solution to my technical deficit: joining a team. Sixty per cent of the finalist submissions had multiple contributors. I could’ve teamed with an AGI expert and a filmmaker, written the Day in the Life stories, helped with cultural elements for the Timeline and Questions, and not put myself through a self-directed crash course in AGI.

Why didn’t I join a team? Fear, bluntly. I crave comfort and routine, the exact opposite of teaming up with strangers. I feared rejection as a storyteller despite my credentials and skills. Letting others down. Being let down. Most deeply, least rationally, getting kicked off a team of futurists for being “too weird.”

Instead, I retreated to my cultural vision...which I promptly shared with others on the contest’s Discord server in a series of wonderful conversations. At one point, a contest administrator even nudged me to team up with another contestant, a gesture I failed to appreciate in time.

Lesson #3: Question Your Assumptions

My autism often results in rigid thinking, a prime source of missed communication. But, more than anything, one unquestioned assumption doomed my submission from the start: the definition of “worldbuilding.”

In worldbuilding for games, my SF writing specialty, a common trap is overemphasizing a setting’s history. Showing the audience a beautiful palace, only to make them sit through a lecture about long-dead kings before they explore it, is half rookie mistake, half cardinal sin.

In the FLI Worldbuilding Contest, however, the Timeline and several Questions focused on AGI and humanity in the years leading up to 2045, not 2045 itself. While I found this focus unusual, I failed to question what “worldbuilding” meant to FLI and thus missed a crucial implication:

The solutions to problems posed by AGI mattered more than worldbuilding an aspirational 2045.

This emphasis was alluded to in the contest FAQs, though obscured by an excess of information. While teams with AGI researcher members naturally weighted AGI solutions heavily, I placed those solutions largely in the background of a culture-soaked 2045. As another contestant noted, if FLI had wanted Hamlet, I had submitted Rosencrantz and Guildenstern Are Dead.

I wasn’t alone in my mistaken assumptions. On Discord, other contestants, especially those with worldbuilding backgrounds, expressed puzzlement at the slate of finalists when measured against the “aspirational” and similar standards also listed in the contest rules. To its credit, FLI has engaged with constructive feedback. This was FLI’s first such contest and a learning experience. They’ll be better prepared for next time... as will I.
