[ EXECUTIVE NOTE ]
Welcome to the Spring issue of VFX Voice! We’re now in year four of publishing VFX Voice and we cannot thank you enough for your enthusiastic support as a part of our global community. Coming out of an exciting awards season, this issue shines a light on the 18th Annual VES Awards celebration and shares a profile of one of our honorees, acclaimed VFX supervisor Sheena Duggal, recipient of this year’s VES Award for Creative Excellence. Congratulations again to all of our outstanding nominees, winners and honorees! Our cover feature takes a look at real-time-generated CG humans and their growing impact on film and TV. In the exploding field of television visual effects, we go inside Snowpiercer, The Mandalorian and The Witcher. We delve into the VR/AR/MR world with ILM’s Vader Immortal: A Star Wars VR Series and Walt Disney Animation’s Myth: A Frozen Tale, and go behind the scenes of family fantasy film Dolittle and animated elf tale Onward. On the tech and tools beat, we talk real-time previs, ray tracing and open source software. And continuing our review of VES worldwide, we highlight the VES Georgia Section, our 14th and newest regional hub. It’s all in here! As the definitive authority on all things VFX, we’re continuing to bring you exclusive stories between issues that are only available online at VFXVoice.com. You can also get VFX Voice and VES updates by following us on Twitter at @VFXSociety. Thank you for making VFX Voice a must-read publication worldwide.
Mike Chambers, Chair, VES Board of Directors
Eric Roth, VES Executive Director
[ CONTENTS ]

FEATURES
8 COVER: REAL-TIME CG HUMANS. Synthetic characters are now more human and widespread.
14 TV: SNOWPIERCER. A trainload of creative effects drive cult film to the small screen.
20 VR/AR/MR: VADER IMMORTAL. ILM’s Darth Vader VR experience brings users into the story.
26 ANIMATION: ONWARD. Pixar adopts old-school, stylized approach to character design.
32 PREVIS: REAL-TIME PREVIS. New tools and techniques have made previs studios more nimble.
38 TV: THE MANDALORIAN. Changing live-action TV VFX with virtual production efficiencies.
44 PROFILE: SHEENA DUGGAL. Recipient of the 2020 VES Award for Creative Excellence.
50 THE 18TH ANNUAL VES AWARDS. Celebrating the best in visual effects.
58 VES AWARD WINNERS. Photo Gallery.
64 TECH & TOOLS: NEW VFX TOOLS. Four new tools for artists, and how pros use them in production.
68 TV: THE WITCHER. Building a fantasy world of creatures, battles and magical effects.
72 TECH & TOOLS: RAY TRACING. Maximizing the quality of surfaces and lighting in real-time.
80 FILM: DOLITTLE. Advanced VFX helps connect humans and talking CG animals.
84 VR/AR/MR: MYTH: A FROZEN TALE. Immersing users in the rich spirit world of Disney’s Frozen 2.
88 TECH & TOOLS: OPEN SOURCE SOFTWARE. Creating and maintaining free open source software for the industry.

DEPARTMENTS
2 EXECUTIVE NOTE
92 VES SECTION SPOTLIGHT: GEORGIA
94 VES NEWS
96 FINAL FRAME: REAL-TIME
ON THE COVER: Meet Elbor, a digital character created by Doug Roble, Head of Software R&D for Digital Domain, and the Digital Human Group. (Image courtesy of Digital Domain)
SPRING 2020 • VOL. 4, NO. 2
WINNER OF THREE FOLIO AWARDS FOR PUBLISHING EXCELLENCE.
VFXVOICE
Visit us online at vfxvoice.com
PUBLISHER
Jim McCullaugh
publisher@vfxvoice.com
EDITOR
Ed Ochs
editor@vfxvoice.com
CREATIVE
Alpanian Design Group
alan@alpanian.com
ADVERTISING
VFXVoiceAds@gmail.com
SUPERVISOR
Nancy Ward
CONTRIBUTING WRITERS
Ian Failes
Naomi Goldman
Trevor Hogg
Chris McGowan
ADVISORY COMMITTEE
David Bloom
Andrew Bly
Rob Bredow
Mike Chambers
Neil Corbould, VES
Irena Cronin
Paul Debevec, VES
Debbie Denise
Karen Dufilho
Paul Franklin
David Johnson
Jim Morris, VES
Dennis Muren, ASC, VES
Sam Nicholson, ASC
Lori H. Schwartz
Eric Roth

VISUAL EFFECTS SOCIETY
Eric Roth, Executive Director
VES BOARD OF DIRECTORS
OFFICERS
Mike Chambers, Chair
Lisa Cooke, 1st Vice Chair
Emma Clifton Perry, 2nd Vice Chair
Laurie Blavin, Treasurer
Rita Cahill, Secretary
DIRECTORS
Brooke Breton, Kathryn Brillhart, Colin Campbell
Bob Coleman, Dayne Cowan, Kim Davidson
Rose Duignan, Richard Edlund, VES, Bryan Grill
Dennis Hoffman, Pam Hogarth, Jeff Kleiser
Suresh Kondareddy, Kim Lavery, VES
Tim McGovern, Emma Clifton Perry
Scott Ross, Jim Rygiel, Tim Sassoon
Lisa Sepp-Wilson, Katie Stetson
David Tanaka, Richard Winn Taylor II, VES
Cat Thelia, Joe Weidenbach
ALTERNATES
Andrew Bly, Gavin Graham, Charlie Iturriaga
Andres Martinez, Dan Schrecker
Tom Atkin, Founder
Allen Battino, VES Logo Design
Visual Effects Society
5805 Sepulveda Blvd., Suite 620
Sherman Oaks, CA 91411
Phone: (818) 981-7861
visualeffectssociety.com
VES STAFF
Nancy Ward, Program & Development Director
Chris McKittrick, Director of Operations
Ben Schneider, Director of Membership Services
Jeff Casper, Manager of Media & Graphics
Colleen Kelly, Office Manager
Debbie McBeth, Global Coordinator
Jennifer Cabrera, Administrative Assistant
P.J. Schumacher, Controller
Naomi Goldman, Public Relations
Follow us on social media
VFX Voice is published quarterly by the Visual Effects Society.
Subscriptions: U.S. $50; Canada/Mexico $60; all other countries $70 a year. See vfxvoice.com
Advertising: Rate card upon request from publisher@vfxvoice.com or VFXVoiceAds@gmail.com
Comments: Write us at comments@vfxvoice.com
Postmaster: Send address change to VES, 5805 Sepulveda Blvd., Suite 620, Sherman Oaks, CA 91411.
Copyright © 2020 The Visual Effects Society. Printed in the U.S.A.
COVER
REAL-TIME CG HUMANS HIT THE BIG-TIME By IAN FAILES
Just as photoreal CG humans in film and television garner much attention, so too in recent times has the proliferation of real-time CG humans. These were once almost the sole domain of games, where they certainly still exist either in pre-rendered scenes or in fully interactive real-time rendered gameplay. But with game engines and performance capture tools being used more frequently in cinematic productions and in non-game experiences, real-time CG humans are finding their place elsewhere as well. Synthetic humans now tend to appear in shorts, trailers, AR and VR experiences, on-stage events, demonstrations and applications. VFX Voice checked in with a range of game-engine companies, studios and academic researchers for a wide overview of where things are with real-time CG humans.
TOP: Actor Andy Serkis was replicated using an Unreal Engine workflow. (Image courtesy of Epic Games)
BOTTOM: The central character in Unity’s The Heretic seen in the Unity user interface. (Image courtesy of Unity Technologies)
TOP: Epic Games’ Unreal Engine was used in the Siren project to craft a photoreal digital actor based on a live actor performance. (Image courtesy of Epic Games)

A WIDE RANGE OF PROJECTS IN UNREAL

Epic Games’ Unreal Engine lies behind many of the real-time digital humans in existence, in terms of projects emanating from Epic itself, through partners or by outside studios using Unreal. Last year, Epic also acquired 3Lateral, a leader in digital human tools. The result is an ecosystem casting a wide net over real-time humans, as Epic Games Lead Tech Animator Chris Evans describes: “From the Senua demo, a cut scene featuring both an actor and director in front of thousands, orchestrated in under five minutes, to ‘Meet Mike,’ a VR project where a digital host interviewed many industry pioneers in a shared virtual space, to Siren, a high-fidelity character that interacted live with an audience and a virtual version of actor/director Andy Serkis.” In working within the realm of real-time digital humans, Evans says that 3Lateral, in particular, has been concentrating heavily on the issue of bringing creations out of the so-called uncanny valley. “Our team at 3Lateral have dedicated themselves to solving this problem for a number of years, and are relentlessly driven to not only capture accurate human likenesses, but to do so with all the subtleties and nuances required to craft believable, real-time digital humans.” Epic’s aim is to enable real-time digital humans for interaction and reaction with users, something of course not possible with
pre-rendered characters. “This level of versatility opens many more uses for digital characters, whether it’s in healthcare, training, simulation, entertainment and beyond,” suggests Evans. “Looking forward, one area that we often discuss internally is the future of user-driven digital personas existing and interacting within shared digital environments – commonly dubbed as ‘the metaverse.’”

UNITY’S CINEMATIC HUMANS
Unity Technologies, the makers of the Unity game engine, have also jumped deep into digital humans, especially via their own short film projects, including The Heretic. To make the digital character for this short, Unity’s Demo Team took advantage of various off-the-shelf 3D and 4D scanning services, as well as rigging and motion-capture solutions. “Creating a digital human for a real-time project has already been done in several AAA games, and specifically a lot of innovative work has happened in the creation of game cinematics,” notes Unity producer Silvia Rasheva. “For The Heretic, we wanted to raise the quality above what we’ve seen so far in the area.”

The team relied upon Unity’s High Definition Render Pipeline (HDRP) for rendering. “While HDRP was still in preview when we started planning and actually building the character in Unity, it already provided a lot of what we needed in terms of core rendering features,” notes Unity Senior Software Engineer Lasse Jon Fuglsang Pedersen. “For instance, we knew that we needed a proven solution for subsurface scattering, and HDRP by then already had a real-time implementation of the Disney Burley model that we were able to use directly for subsurface scattering on skin, eyes and teeth, with different diffusion profiles.

“For our next step on this journey,” continues Pedersen, “we would like to increase the fidelity. For example, based on the quality of the 4D data that we are now getting from our vendors, we have observed that the resolution of our neutral is a bit too low to reproduce some of the smallest details in the actor’s performance. And in terms of data, Unity can handle more than we threw at it this time. So for our next project we will focus a bit on optimization, and we will be less conservative with our data budgets, and we will see where that takes us.”
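For readers curious what the ‘Disney Burley model’ Pedersen refers to actually computes, the sketch below evaluates the normalized Burley diffusion profile, an approximation of how much light re-emerges at a distance r from where it entered a material such as skin. The NumPy usage and the example scatter distances are illustrative assumptions, not Unity’s HDRP code.

```python
import numpy as np

def burley_profile(r, d):
    """Normalized Burley diffusion profile R(r): how much light re-emerges
    at radius r from the entry point, per unit area.

    r: radial distance from the point of incidence (same units as d)
    d: per-channel shaping parameter derived from the scattering distance
    The profile integrates to 1 over the plane (integral of R(r) * 2*pi*r dr).
    """
    r = np.maximum(r, 1e-6)  # avoid the 1/r singularity at r = 0
    return (np.exp(-r / d) + np.exp(-r / (3.0 * d))) / (8.0 * np.pi * d * r)

# Illustrative per-channel scatter distances: red light travels farther
# through skin than blue, which is what gives skin its soft, warm falloff.
radii = np.linspace(0.05, 5.0, 100)
red_falloff = burley_profile(radii, d=1.0)
blue_falloff = burley_profile(radii, d=0.3)
```

A diffusion profile asset, in this framing, is essentially a set of per-channel d values chosen for a given material such as skin, eyes or teeth.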
DIGITAL DOMAIN KNOWS HUMANS

VFX studio Digital Domain works in the area of both pre-rendered digital humans and real-time ones. Senior Director of Software R&D Doug Roble famously donned an Xsens suit and helmet-mounted camera for a TED Talk featuring himself and his real-time rendered alter ego DigiDoug. Since then, the studio has continued to extend its research in the area.

“Since TED,” says Roble, “we have been able to greatly improve the quality of the facial capture and performance by running massive hyperparameter searches where we automate the process of generating and testing thousands of different deep-learning capture models. Through this technique we can create new mathematical models that more accurately capture the subtle motions in a face and represent them as moving geometry.” Other aspects, such as having someone other than Roble drive
DigiDoug and making the character autonomous, are underway. The studio has also been able to take advantage of real-time ray-traced rendering in Unreal Engine using NVIDIA’s GPUs to render DigiDoug. They are looking beyond standard rendering techniques, too. “We’re currently developing a whole new way of rendering the character – a neural rendering technique – that takes the realism of DigiDoug to a whole new level,” adds Roble. “In this new deep-learning technique, we are teaching a renderer how to produce more realistic versions of Doug’s face without having to do all the hard work of painstakingly re-creating all the details.”

Digital Domain’s real-time digital human technology has been adapted for several projects. It was used to build a real-time Pikachu that was driven by Ryan Reynolds as part of the Detective Pikachu publicity run. “We are also using the same technologies to generate super high-fidelity digital human characters for the next generation of gaming platforms,” states Digital Domain’s Darren Hendler, Director of the Digital Human Group at the studio. “Not only are these using our newer capture systems, but they are also utilizing new real-time facial-rigging platforms that we have developed. Recently, we also created a real-time digital version of Martin Luther King Jr. for a high-profile historical project for TIME.”
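The ‘massive hyperparameter searches’ Roble describes boil down to automatically generating candidate training configurations, training each one, and keeping the model that tracks the face best. The deliberately simplified random-search loop below illustrates the idea; the parameter names, search space and stub objective are hypothetical stand-ins, not Digital Domain’s pipeline.

```python
import random

def train_and_evaluate(params):
    """Hypothetical stand-in: train one facial-capture model with these
    hyperparameters and return its validation error. A real pipeline would
    train a network on frame-aligned face-capture data here."""
    # Dummy objective so the sketch runs end to end.
    return abs(params["learning_rate"] - 3e-4) * 1e3 + params["num_layers"] * 0.01

search_space = {
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
    "num_layers":    [4, 6, 8, 12],
    "hidden_units":  [256, 512, 1024],
    "dropout":       [0.0, 0.1, 0.3],
}

best_error, best_params = float("inf"), None
for trial in range(200):  # a production search would run thousands of trials
    params = {name: random.choice(values) for name, values in search_space.items()}
    error = train_and_evaluate(params)
    if error < best_error:
        best_error, best_params = error, params

print(f"best validation error {best_error:.4f} with {best_params}")
```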
DIGITAL HUMANS ON STAGE

Another avenue for CG humans has been live broadcasts. For example, at China’s League of Legends Pro League (LPL) finals – an esport event showcasing Riot Games’ League of Legends – a band member of the fictional K-pop band K/DA was shown during a live broadcast both dancing and being interviewed in real-time. This band member, Akali, came about via Cubic Motion’s Persona system. “Using the latest in computer vision, Persona reads, records and translates an actor’s performance onto their digital counterpart in real-time,” describes Cubic Motion Product Manager Tony Lacey. “Designed from the ground up for live performance, Persona enables immediate character animation in game engines such as Unreal Engine 4.”

A game rig of Akali was retrofitted for the performance and enhanced to add a large number of Facial Action Coding System (FACS)-based expressions. For the interview portion of the broadcast, a motion-capture volume was built in a room just behind the stage. “Here,” says Lacey, “our partners at Animatrik installed and operated an OptiTrack body performance-capture system. The actress who played Akali, Jeon So-Yeon, was in the volume wearing a mocap suit and the Persona system. The actress was also wearing a microphone and had an audio feed from the stage, so she could participate in a live interview. Her body and facial performance data was all sent to the Pixotope, Future Group’s cross-modality rendering system for AR.”
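The retargeting step Lacey describes, solved facial measurements driving a character’s rig live, can be sketched roughly as a per-frame mapping from FACS action-unit intensities to blendshape weights. The action units, blendshape names and mapping weights below are invented for illustration; this is not Cubic Motion’s Persona solver.

```python
import numpy as np

# Hypothetical subset of FACS action units solved from the performer's face.
ACTION_UNITS = ["AU1_inner_brow_raise", "AU4_brow_lower", "AU12_lip_corner_pull", "AU26_jaw_drop"]
# Hypothetical blendshapes on the character's game rig.
BLENDSHAPES = ["browRaise_L", "browRaise_R", "browFurrow", "smile_L", "smile_R", "jawOpen"]

# Retargeting matrix: how strongly each action unit drives each blendshape.
# In production this mapping would be authored and calibrated per character.
RETARGET = np.array([
    [0.9, 0.9, 0.0, 0.0, 0.0, 0.0],  # inner brow raise -> both brow raises
    [0.0, 0.0, 1.0, 0.0, 0.0, 0.0],  # brow lower       -> brow furrow
    [0.0, 0.0, 0.0, 0.8, 0.8, 0.1],  # lip corner pull  -> smile corners, hint of jaw
    [0.0, 0.0, 0.0, 0.0, 0.0, 1.0],  # jaw drop         -> jaw open
])

def retarget_frame(au_intensities):
    """Map one frame of solved AU intensities (0..1) to clamped rig weights."""
    weights = np.clip(np.asarray(au_intensities) @ RETARGET, 0.0, 1.0)
    return dict(zip(BLENDSHAPES, weights))

# e.g. a slight smile with parted lips, evaluated every frame of the broadcast
print(retarget_frame([0.2, 0.0, 0.7, 0.3]))
```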
OPPOSITE TOP TO BOTTOM: 4D scan data for the Unity digital human. (Image courtesy of Unity Technologies) A final render of The Heretic character. (Image courtesy of Unity Technologies) Digital Domain’s Doug Roble as his real-time DigiDoug alter ego. (Image courtesy of Digital Domain) Doug Roble is captured in the Light Stage to acquire his likeness for DigiDoug. (Image courtesy of Digital Domain)
TOP: Elbor is a digital character created by Roble, who morphed into Elbor (Roble’s name backwards) during a real-time TED talk event in 2019. (Image courtesy of Digital Domain)
BOTTOM: K-pop band member Akali makes an appearance at the League of Legends Pro League finals, care of Cubic Motion’s Persona system. (Image courtesy of Cubic Motion)
DRIVING EMOTIONS WITH SPEECH
Real-time rendered humans offer, of course, the chance for real humans to interact with their digital counterparts. One research group has developed a framework for driving a digital human with
emotions via speech. This is known as the Matt AI project from Tencent Technology Company Limited’s NEXT Studios and AI Lab, which had also worked on Siren. Here, the digital Matt can ‘talk’ and answer questions, with the facial-animation nuances produced via speech. “The core of speech-driven facial animation with emotion is to learn a general and accurate function that maps the input speech to the controllers of our facial rig,” details Tencent’s Jingxiang Li. “We achieved this goal via deep learning and mainly focused on two key ingredients. Firstly, we constructed a large-scale high-quality multi-emotion training dataset, which contains the actor’s speech and rig controls corresponding to the actor’s performance frame-by-frame. Next, we trained a deep neural network based on the training dataset.”

Li believes the technology could be used as a tool for lip sync and to automatically generate facial animation for game development, especially for games that have a large amount of dialogue. “It also,” says Li, “made it possible to bring life to a voice assistant by creating an avatar and driving that avatar with speech.”
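A minimal PyTorch sketch of the kind of speech-to-rig mapping Li describes is shown below. The audio feature size, emotion conditioning, network shape and dummy training data here are assumptions for illustration only, not Tencent’s actual architecture.

```python
import torch
import torch.nn as nn

# Assumed sizes for illustration: a short window of audio features per frame,
# a small emotion embedding, and a few hundred facial rig controls.
AUDIO_FEATS, EMOTION_DIM, RIG_CONTROLS = 28 * 9, 8, 250

class SpeechToRig(nn.Module):
    """Maps a window of audio features plus an emotion code to rig controls."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(AUDIO_FEATS + EMOTION_DIM, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, RIG_CONTROLS),
        )

    def forward(self, audio_window, emotion):
        return self.net(torch.cat([audio_window, emotion], dim=-1))

model = SpeechToRig()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One illustrative training step on dummy frame-aligned data: in the dataset
# Li describes, each audio frame is paired with the captured rig values.
audio = torch.randn(32, AUDIO_FEATS)
emotion = torch.randn(32, EMOTION_DIM)
target_controls = torch.randn(32, RIG_CONTROLS)

loss = loss_fn(model(audio, emotion), target_controls)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```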
A DIGITAL HUMAN FROM JUST ONE PHOTOGRAPH

While 3D scans, performance capture and other methods can make hugely detailed digital humans, there are new ways for producing high-quality versions with far fewer inputs. For instance, Reallusion has released an AI-based head generator called Headshot as a plug-in for its Character Creator toolset. “All you need is to load one photo of a face and the Headshot AI will analyze the image, provide a sculpt morph with additional customization and generate a realistic digital double in minutes,” says Reallusion Vice President of Product Marketing John C. Martin. “The major shift is that most digital humans are demos and single-character research examples, but Reallusion has developed and released a product that is ready to start generating characters for films, games, immersive experiences and even virtual production.”

“Headshot Pro Mode allows game developers and virtual production teams to immediately funnel a cast of digital doubles into iClone, Unreal, Unity, Maya, ZBrush and more,” notes Martin. “The idea is to allow the digital humans to go anywhere they like and give creators a solution to rapidly develop, iterate and collaborate with real-time.”

Other applications exist in this space, too. Pinscreen, for example, provides an avatar-creation app based on deep-learning research relating to the generation of a realistic digital-face re-creation with facial features and hair from a single smartphone photo.
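Reallusion has not published Headshot’s internals, but the general single-image idea, fitting a template head’s morphs so its landmarks match landmarks detected in the photo, can be sketched as a small least-squares problem. Everything below (landmark counts, morph basis, the ‘detected’ points) is randomly generated dummy data for illustration.

```python
import numpy as np

# Generic single-image fitting idea: choose morph weights so the template
# head's projected landmarks line up with landmarks detected in the photo.
num_landmarks, num_morphs = 68, 40
rng = np.random.default_rng(0)

template = rng.normal(size=(num_landmarks, 2))                            # neutral head, projected to 2D
morph_basis = rng.normal(scale=0.1, size=(num_morphs, num_landmarks, 2))  # landmark offset per morph
photo_landmarks = template + 0.5 * morph_basis[3]                         # stand-in for a detector's output

# Solve least squares: photo ~= template + sum_i w_i * basis_i.
# A real fitter adds regularization (and camera/pose terms) to keep the face plausible.
A = morph_basis.reshape(num_morphs, -1).T        # (2 * num_landmarks, num_morphs)
b = (photo_landmarks - template).ravel()
weights, *_ = np.linalg.lstsq(A, b, rcond=None)
print("recovered dominant morph:", int(np.argmax(np.abs(weights))))  # should be 3
```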
TOP: Matt AI from Tencent’s NEXT Studios and AI Lab uses speech to animate emotions. (Image courtesy of Tencent Technology Company Limited)
MIDDLE: Using just one input image, Reallusion’s Headshot can generate a 3D avatar suitable for use in real-time projects. (Image courtesy of Reallusion)
BOTTOM: Headshot works within Reallusion’s Character Creator tool. (Image courtesy of Reallusion)
THE FUTURE OF REAL-TIME CG HUMANS

Ultimately, there is a wealth of real-time CG human projects out there. Some, of course, aim to render characters completely photorealistically, while others are going for a recognizable likeness. The advent of ‘deep fakes’ that can work in real-time is also now a part of these developments. The uses of real-time CG humans remain widespread, including all those mentioned above, as well as in other areas such as communicating over distances, clothes shopping, personalized gaming and generating synthetic social media identities.
TV
SNOWPIERCER: A SPEEDING TRAIN THROUGH MANKIND’S FROZEN FUTURE By TREVOR HOGG
All images courtesy of Turner Broadcasting System.
OPPOSITE TOP: A greyscale model layout of the Snowpiercer.
TOP: The lighting is composited into the shot.
BOTTOM: The final shot which reveals the entire nighttime environment.

A volatile class struggle that takes place onboard a massive train speeding through a post-apocalyptic frozen wasteland was originally conceived as a graphic novel called Le Transperceneige by Jacques Lob and Jean-Marc Rochette in 1982. South Korean filmmaker Bong Joon-ho (Parasite) co-wrote and directed Snowpiercer (2013), a cinematic adaptation that was his English-language debut, and it garnered such acclaim that the rights were optioned to develop a television series, produced by Joon-ho, which airs on TNT in the U.S. in May and streams globally on Netflix. Like the story, which is built upon conflict, the journey beyond the page has not been a peaceful one. Controversy erupted when The Weinstein Company removed 25 minutes of footage from the director’s cut of the film, only restoring it after critical acclaim, and the pilot episode was entirely re-shot, with showrunner Josh Friedman (Emerald City) being replaced by Graeme Manson (Orphan Black). “We have a completely different train and effects,” states Showrunner/Executive Producer Manson. “There are no sets from the original [film] production and the pieces we did reuse were heavily modified.”

“When I got the chance to pitch for the job, I was a huge fan of Bong’s Snowpiercer,” states Manson. “When I saw it, I actually thought it would make a great series, then forgot all about it. When I began looking into pitching for the gig, I got the graphic novels and discovered this whole other world that is more abstract and has great themes. There was a marrying between the movie and the graphic novels. For the visual style, I was inspired and pushing towards the visceral quality of Bong. It’s an action adventure and so are the graphic novels; that’s a key thing about it. It’s about class struggles as well as climate change, environment, class structure, migration, immigration and detention. All those things were great to form the base of a TV series that could keep going, and be about resistance and revolution.”

“Graeme likes to write big, even back then, which is always good and a challenge,” notes Snowpiercer Visual Effects Supervisor Geoff Scott. “There’s nothing worse than somebody who thinks, ‘Let’s play it off of expressions or have the magic happen behind the camera.’ We always try our best to make it so no one hopefully notices what we’ve done.” Selecting the visual effects vendors was a long process, with work being divided among Method Studios, FuseFX, Torpedo
Pictures and Zoic Studios, while The Sequence Group was responsible for the animation design. “Then we had an internal team of six artists who concentrated mainly on 2D-centric work,” states Visual Effects Producer Darren Bell (Reign). “We had a fairly decent amount of time to create the effects. Overall, we had about 1,200 shots for the season with 100 plus per episode.”

Designing the Snowpiercer train was the biggest challenge. “It was a several months-long process,” states Bell. “We collaborated with Alex Nice (Underwater), a concept artist I’ve worked with before. Alex, Geoff, Graeme, the producers, the network and I worked together to come up with a fantastic design.” A point of reference was the Art Deco trains from the 1920s, 1930s and 1940s. “The one in particular that we started with was The Mercury,” explains Scott. “Coincidentally, we recently found a train that looks like a scaled-down version of it called The Zephyr.”

The Snowpiercer is a central character. “Our train is two and a half stories tall and 1,001 cars long, so the amount of drag and cavitation that is continuously pulling up and pushing snow was huge,” adds Scott. “Our average speed was supposed to be 100 km an hour, but we changed it shot by shot. The train had to look faster than that because it’s so big. We did camera tests to see what looked good in different types of shots.”

Considering the action occurs within the confines of a train, it was originally thought there would not be many set extensions. “Early on in shooting we started adding more set extension work as doors open,” reveals Scott. “Each of the cars have a mini-airlock system, so if there’s ever a breach the revolt can be retained within one car. Watching people go through doors got crazy.” Bell agrees. “The show is all about the doors!” The number of windows also reflects class status, with none existing in the cars situated towards the back of the train. “Putting bluescreen out of everything doesn’t buy us that much. Instead of doing chroma keys we used a lot of white and black, if it was night. We did good old-fashioned roto and luminous keys.”

The amount of visual effects evolved along with the scripts. “When we got the feel for the world, then we could write to it, such as being able to follow an actor down a hallway and stop on a window and look outside,” remarks Manson. “Or start outside, push in on an establisher of the train, push into a window and find the character on the move. Those kinds of transitions have been fun.”
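The ‘luminous keys’ Scott mentions are luminance (luma) keys: mattes pulled from image brightness rather than from a blue or green screen color. A minimal NumPy sketch of the idea, with illustrative thresholds that would in practice be tuned per shot:

```python
import numpy as np

def luma_key(image, low=0.08, high=0.25):
    """Pull a matte from brightness: 0 below `low`, 1 above `high`,
    with a linear ramp in between. `image` is float RGB in [0, 1]."""
    luma = 0.2126 * image[..., 0] + 0.7152 * image[..., 1] + 0.0722 * image[..., 2]
    return np.clip((luma - low) / (high - low), 0.0, 1.0)

# Example: isolate the bright window area of a dark night interior so a CG
# exterior can be composited behind it.
plate = np.random.rand(1080, 1920, 3).astype(np.float32)  # stand-in for a scanned plate
matte = luma_key(plate)
```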
TOP: Previs was used to determine the positioning of the train and avalanche. MIDDLE: An effects pass of the avalanche. BOTTOM: The final shot with the snow simulation.
TOP: The biggest visual effects challenge was how individual vendors dealt with exterior shots. MIDDLE: 3,000 passengers are contained within a speeding 10-mile-long metal tube. BOTTOM: The train is constantly interacting with the environment, rolling through a cold, barren wasteland and destroyed cities, using exterior shots to establish location.
TOP: Previs of the cattle car. MIDDLE: The cattle car destroyed by the avalanche. BOTTOM: A tricky balance needed to be maintained in allowing the viewer to see what is going on within the deadly winter conditions.

The production team embraced the idea of telling a story within the confines of a train. “It’s about claustrophobia,” notes Manson. “Part of the tension of the piece is that there are 3,000 people stuck in a 10-mile-long steel tube and they’re all going crazy together. We love shots that have a floor, ceiling and converging walls, and pressed the directors to work with that. Thematically, we’re constantly interacting with our environment. It’s nasty out there, and the fact that it keeps us in the train is a story plus. You’ve got to lean into it. I also did shoots, so I know how to shoot a movie in a 14 by 14-foot location.”

Not everything takes place indoors. “When we do an exterior shot it usually feels like a breath of fresh air,” states Scott. “The world that we see out there is barren. We roll through destroyed cities. Initially, we were using exteriors as an establisher or at least a location demarcation. As Season 1 evolved, exteriors became more story dependent.” A particular Canadian province served as a visual reference. “Darren, Chuck [Desrosiers] and I are from Ontario, so we know what a winter wasteland looks like,” states Scott. “Graeme ran Orphan Black in Toronto for five years during the winter and has experienced minus 30.” An effort was made to be authentic while still being cinematic. “At minus 120 there should be nothing in the air. I don’t even know that, realistically, the sky would be blue as the water would have dropped out of the air. But we didn’t go that far.”

The biggest visual effects challenge was how individual vendors dealt with exterior shots. “Even in our earliest days of determining which vendors it was going to be, everyone has a different setup whether they use a proprietary software or an off-the-shelf solution when it comes to building environments,” observes Bell. “There’s not an easy way to share assets. Last year we had two vendors do exterior train shots, Method Studios and FuseFX. The assets did not get shared until later on in the season because the work became too great for one vendor to handle. FuseFX took on approximately 50 exterior shots in episode 10.”

Practical vibrations were incorporated into the sets courtesy of Special Effects Supervisor Chuck Desrosiers (The Predator). “It’s quite a daunting task when you have to translate movement with actors but also have them interact with the movement in a way that works for camera. Some of the cars were gimballed while others were on airbag risers so we could add movement to the set piece. Sets get changed but what remains the same is the
ability to move platforms. Not only did the movement need to be consistently repeatable, but be something that the director will ask for. They can say, ‘10% more for shake.’ That meant we needed to have a mechanical system in place and I had a really good crew. I’ve been on this show since the pilot, so I had a lot of time to prepare and make this work.” The class distinctions have an impact on the quality of the engineering. “The further up you go in the train the smoother the ride. The idea is that they have dampening systems in place that can control how much vibration is brought across, which leads to a calmer ride. It’s like the difference between a jalopy and a BMW.”

Various worlds exist within the different cars. “We worked hard to create distinct worlds within each class,” notes Manson. “The tail is a jail. Third class is a working class. Second class is a professional class. First class has it all. As you move up train everything changes, like the design and sound, as well as the condition of the people and the amount of plenty. When you get to these ‘Wow sets’ they’re cool environments where we have small constructed elements.” Season 1 was shot entirely on soundstages at Martini Film Studios in Langley, British Columbia. The art department did a great job of making the interior sets look weathered, while the exteriors and ‘Wow sets’ were created in visual effects, Manson adds. “We literally have our actors on bluescreen when they’re in the tunnel of the aquarium.”

Violence erupts with bloody consequences, combining practical and CG elements. “What is fascinating about the blood and gore is that it’s driven by the story,” remarks Desrosiers. “It does not feel like it’s overdone. It’s in the reality of being stuck in such a small space. It’s on a scene-by-shot basis. We determine if we have time for resets and then use blood pumps, squibs and sponges. If we have quick forceful action where spears have to go through somebody and don’t have the time to reset the set then we do it digitally. We found a way to marry both worlds.”

Creating smoke and fire was made more difficult within the soundstage environment. “We have cutting or welding torches,” states Scott. “Chuck has used a grinder against titanium to create sparks that are nonheated, and when needed the prop department has a torch that has a high-density LED in it and visual effects adds the weld. We would always go and shoot elements with Chuck.” Exterior sets were built on a soundstage. “There are going to be scenes where we see people outside in this wasteland,” reveals Desrosiers. “Treating those in a functional way when you’re
interacting with the actors and also making it reasonable for Geoff to make realistic was a challenge.” Bell is proud of the CG set extensions. “People are going to be blown away by the aquarium and a particularly awesome cattle car.”

Snowpiercer has been a major team effort. “I’m not a filmmaker showrunner,” admits Manson. “I run the show from the writers’ room. The writers’ room is always the biggest struggle. This is my second series with Geoff, and I give credit to him, Darren and Chuck as well as Producing Director James Hawes (Black Mirror) for holding it down and being able to translate the page. We take the scripts from the writers and go into these in-depth meetings to make it all one thing.”
TOP: A plate shot of Jennifer Connelly and Daveed Diggs. MIDDLE: Animation is added of the aquatic life that lives inside the aquarium car. BOTTOM: The final shot with all of the effects and lighting passes to digitally create what was referred to as a ‘Wow set.’
TV
TOP: Previs of the cattle car. MIDDLE: The cattle car destroyed by the avalanche. BOTTOM: A tricky balance needed to be maintained in allowing the viewer to see what is going on within the deadly winter conditions.
ability to move platforms. Not only did the movement need to be consistently repeatable, but be something that the director will ask for. They can say, ‘10% more for shake.’ That meant we needed to have a mechanical system in place and I had a really good crew. I’ve been on this show since the pilot, so I had a lot of time to prepare and make this work.” The class distinctions have an impact on the quality of the engineering. “The further up you go in the train the smoother the ride. The idea is that they have dampening systems in place that can control how much vibration is brought across which leads to a calmer ride. It’s like the difference between a jalopy and a BMW.” Various worlds exist within the different cars. “We worked hard to create distinct worlds within each class,” notes Manson. “The tail is a jail. Third class is a working class. Second class is a professional class. First class has it all. As you move up train everything changes, like the design and sound, as well as the condition of the people and the amount of plenty. When you get to these ‘Wow sets’ they’re cool environments where we have small constructed elements.” Season 1 was shot entirely on soundstages at Martini Film Studios in Langley, British Columbia. The art department did a great job of making the interior sets look weathered, while the exteriors and ‘Wow sets’ were created in visual effects, Manson adds. “We literally have our actors on bluescreen when they’re in the tunnel of the aquarium.” Violence erupts with bloody consequences that combines practical and CG elements. “What is fascinating about the blood and gore is that it’s driven by the story,” remarks Desrosiers. “It does not feel like it’s overdone. It’s in the reality of being stuck in such a small space. It’s on a scene-by-shot basis. We determine if we have time for resets and then use blood pumps, squids and sponges. If we have quick forceful action where spears have to go through somebody and don’t have the time to reset the set then we do it digitally. We found a way to marry both worlds.” Creating smoke and fire was made more difficult within the soundstage environment. “We have cutting or welding torches,” states Scott. “Chuck has used a grinder against titanium to create sparks that are nonheated, and when needed the prop department has a torch that has a high-density LED in it and visual effects adds the weld. We would always go and shoot elements with Chuck.” Exterior sets were built on a soundstage. “There are going to be scenes where we see people outside in this wasteland,” reveals Desrosiers. “Treating those in a functional way when you’re
“Even in our earliest days of determining which vendors it was going to be, everyone has a different setup whether they use a proprietary software or an off-the-shelf solution when it comes to building environments. There’s not an easy way to share assets. Last year we had two vendors do exterior train shots, Method Studios and FuseFX. The assets did not get shared until later on in the season because the work became too great for one vendor to handle.” —Darren Bell, Visual Effects Producer interacting with the actors and also making it reasonable for Geoff to make realistic was a challenge.” Bell is proud of the CG set extensions. “People are going to be blown away by the aquarium and a particularly awesome cattle car.” Snowpiercer has been a major team effort. “I’m not a filmmaker showrunner,” admits Manson. “I run the show from the writers’ room. The writers’ room is always the biggest struggle. This is my second series with Geoff, and I give credit to him, Darren and Chuck as well as Producing Director James Hawes (Black Mirror) for holding it down and being able to translate the page. We take the scripts from the writers and go into these in-depth meetings to make it all one thing.”
TOP: A plate shot of Jennifer Connelly and Daveed Diggs. MIDDLE: Animation is added of the aquatic life that lives inside the aquarium car. BOTTOM: The final shot with all of the effects and lighting passes to digitally create what was referred to as a ‘Wow set.’
“It’s quite a daunting task when you have to translate movement with actors but also have them interact with the movement in a way that works for camera. Some of the cars were gimballed while others were on airbag risers so we could add movement to the set piece. Sets get changed but what remains the same is the ability to move platforms. Not only did the movement need to be consistently repeatable, but it also had to be something the director could ask for. They can say, ‘10% more shake.’” —Chuck Desrosiers, Special Effects Supervisor
VR/AR/MR
RAMPING UP THE INTERACTIVITY FOR VADER IMMORTAL IN VR By IAN FAILES
“I wanted the environments to feel like you’d stepped into a movie. An effect of that was to create pretty complex and rich environments. People would be dropped into that environment, and it took them a while to get acclimatized. They’d spend a lot of time looking around. We found that one of the first challenges was making sure that they were actually paying attention when our character started talking to them!” —Ben Snow, Visual Effects Supervisor, ILM
All images courtesy of ILMxLAB. TOP: A scene from Episode 1 of Vader Immortal delivers the user right into the action.
When Lucasfilm launched its immersive studio ILMxLAB in 2015, it quickly found new ways to tell interactive stories using VR, AR, immersive and real-time rendering techniques. ILMxLAB has capitalized, of course, on the artistic and technical skills of those already at ILM and Lucasfilm, delivering, in particular, experiences that are based on existing film properties. One of those experiences is Vader Immortal: A Star Wars VR Series. Written by David S. Goyer and directed by veteran ILM Visual Effects Supervisor Ben Snow, Vader Immortal was released in three parts for the Oculus Quest and Rift VR headsets. Unlike many VR experiences where users are passive observers of the action, here they are also a player via the VR goggles and touch controllers. Interestingly, a Darth Vader VR project had initially been imagined at ILMxLAB as more of a passive experience. But after some very positive reactions to more interactive immersive VR experiences that the studio had worked on, including Star Wars: Secrets of the Empire and CARNE y ARENA, ILMxLAB changed tack. A small prototype of what eventually became the cell scene in Vader Immortal was developed in which the characters in the story engaged the user directly. “When we showed that prototype to David and our head of story and up the chain at Lucasfilm,” recounts Snow, “everyone got it and said, yes, it’s going to be so much better if you’re the main character in the story.”

MAKING AN IMMERSIVE AND ENGAGING STORY
With that new approach determined, Snow and his story team – which included Goyer, Mohen Leo and Colin Mackie – set out to tell a tale of the user as the lead protagonist, interacting with Vader on Mustafar and eventually confronting him in a lightsaber showdown. Along the way, the ILMxLAB team also needed to solve many of the challenges that come with VR experiences, ranging from how to lead the user in a 360-degree world, to how to move through various levels and locations. One of the very specific challenges came from the desire to realize the VR world of Vader Immortal with film-quality visual effects. “I wanted the environments to feel like you’d stepped into a movie,” observes Snow. “An effect of that was to create pretty complex and rich environments. People would be dropped into that environment, and it took them a while to get acclimatized. They’d spend a lot of time looking around. We found that one of the first challenges was making sure that they were actually paying attention when our character started talking to them!” The solution came in the form of designing sets and lighting that drew players towards a certain viewpoint, while also crafting moments where character interactions delivered key story points.
TOP: Episode 2 features a lightsaber dojo battle. MIDDLE: An interactive moment from Episode 2 involving a Darkghast creature living in the caverns of planet Mustafar. BOTTOM LEFT: Users take on a number of droids in Episode 2. BOTTOM RIGHT: One of the many characters users encounter in the VR experience: the Priestess.
Memorable Vader Immortal Moments

FX Lead Jeff Grebe identifies some of the major technical challenges of the VR experience.

Wielding a lightsaber: One of our artists created a really nice effect for the lightsaber blur streak that was actually very simplistic. It was just a single card. In a normal VFX pipeline, I’d say, ‘Well, we probably have to render this, we’re going to have to animate the image.’ But there’s a lot of other techniques available when you start to think of it from a game point of view, such as manipulating textures, manipulating their UVs, and adding animation through a material to create FX. Those turn out to be a lot more efficient than reading in a full animated sequence. So wherever possible we tried to go with those techniques and it turned out to look very realistic for the lightsaber.

Talking pins: There was one scene that had an animated wall of pins that, when you entered the room, would protrude outward towards you forming a face that would speak to you. It morphed into various shapes that would guide you through the next parts of your journey and the experience. Ben Snow really wanted this figure to loom way out of the wall right over you. It was so long in duration that we couldn’t just pre-bake the entire sim. So we had to come up with some tricks to interpolate between the different states. We came up with a procedural technique in the end which would be layered on top of the caches to add a little bit more life and complexity to it.
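The talking-pins trick lends itself to a simple illustration. Below is a minimal, hypothetical Python sketch of the idea Grebe describes: blend between a few cached pin states and layer a cheap procedural ripple on top. It is not ILMxLAB’s actual setup; all names and values are invented for illustration.

```python
import math

def blend_pin_heights(state_a, state_b, t, time_seconds, ripple_amp=0.02):
    """Interpolate per-pin extension between two cached states (one float per pin),
    then layer a small procedural sine ripple on top for extra life."""
    heights = []
    for i, (a, b) in enumerate(zip(state_a, state_b)):
        base = (1.0 - t) * a + t * b                                   # blend of the two caches
        ripple = ripple_amp * math.sin(0.7 * i + 3.0 * time_seconds)   # procedural layer
        heights.append(base + ripple)
    return heights

# Example: halfway between a "flat wall" cache and a "face" cache, 1.5 seconds into the shot
flat = [0.0] * 8
face = [0.0, 0.1, 0.4, 0.9, 0.9, 0.4, 0.1, 0.0]
print(blend_pin_heights(flat, face, t=0.5, time_seconds=1.5))
```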
“[A Darth Vader costume made for Rogue One] was scanned and photographed as part of the asset development. Then we said, ‘Okay, that’s our standard that we want the real-time version to match.’ Of course, it’s actually harder to make the real-time asset if you want to keep that level of quality because you essentially have to drop down the number of polygons that you’re using and you have to switch to a much more normal map-based approach.” —Ben Snow, Visual Effects Supervisor, ILM
TOP: Familiar Star Wars vehicles and weapons make up Vader Immortal.
The character interactions involved voice-over ADR sessions with actors such as Maya Rudolph (playing the droid ZOE3). There were also motion capture sessions that would, along with keyframe animation, help inform the final characters. This included Vader himself, who was built as a CG character essentially using ILM’s film asset development pipeline. That Vader build process began with referencing a costume that had been made for Rogue One. It was scanned and photographed as part of the asset development. “Then we said, ‘Okay, that’s our standard that we want the real-time version to match,’” says Snow. “Of course, it’s actually harder to make the real-time asset if you want to keep that level of quality because you essentially have to drop down the number of polygons that you’re using and you have to switch to a much more normal map-based approach.” Furthermore, Vader Immortal was developed specifically for the Oculus Quest, a standalone mobile VR headset, as well as the tethered PC-powered Oculus Rift, and so it had to work for both devices. The development team would find clever ways to make certain concessions depending on which headset it was being delivered for. “Basically we would just use different tricks to try to keep the
quality we were after,” advises Snow, something that he recalled had to be done when he first started in visual effects at ILM in the 1990s. “For the Quest version, for example, we broke out Darth Vader’s cape as a separate character. So there are two characters there. One is the cape and one is Vader because you couldn’t have the character with this skeletal complexity to support the cape as well as Vader.” RELYING ON REAL-TIME
Vader Immortal was crafted using Epic Games’ Unreal Engine. The game engine ensured that elements of the experience could be interactive. Indeed, one of the goals was for the user to feel as if they were driving the effects. “Right at the beginning of the first episode,” identifies VFX Lead Jeff Grebe, “you are activating the hyperspace drive, which is pretty awesome. There’s also a really fun moment in the final act where we have a whole army of droids that you have re-activated. We could have made this a movie moment where you watch a scene, but instead we pushed it a step further where you actually get to command the army and direct them how to move and how to fire on a battalion of stormtroopers. It’s a really fun treat to actually interact with the work that you’ve created.”
The droid army: When we were working on the first episode, we were struggling with animating eight characters on screen at the same time. Here we wanted to include 200 to 250 animated characters. It just wasn’t going to work in the normal pipeline. But we were able to take our crowd sims that we had done through our normal proprietary pipeline, bake their animations out using the technique of caching their position data, and then, after we’d modified Unreal a little, we were able to parse that data in and add in the variants that we needed to offset some of the cycles of our crowd animation to break it up so that it looked unique and different depending on your vantage point.
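As a rough illustration of that cycle-offsetting idea, here is a small, hypothetical Python sketch: every droid samples the same baked cycle, but with a deterministic per-agent frame offset so the crowd doesn’t march in lockstep. The cache format and numbers are stand-ins, not ILM’s actual data.

```python
def sample_cached_cycle(cache, frame, agent_id, max_offset=12):
    """Return the cached pose for one agent at the given shot frame.
    cache holds the baked walk cycle (one pose per frame); each agent gets a
    deterministic frame offset so playback is repeatable and varied."""
    cycle_length = len(cache)
    offset = (agent_id * 7919) % max_offset        # cheap, stable per-agent variation
    return cache[(frame + offset) % cycle_length]

# Example: 250 droids sharing one 24-frame baked cycle, sampled at shot frame 100
cache = ["pose_%02d" % i for i in range(24)]
poses = [sample_cached_cycle(cache, frame=100, agent_id=a) for a in range(250)]
print(poses[:6])
```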
TOP: ZOE3 is voiced by Maya Rudolph. BOTTOM: Episode 3 ends in a showdown with Vader himself.
Engineering that kind of interaction for big moments like controlling the hyperspace jump or the droid army, as well as intertwining complex visual effects, would prove one of the largest hurdles on the project. Vader Immortal had to run at 90 frames per second in stereo, and of course in real-time. “That puts some hefty limitations on what we can actually do live and dynamic,” says Grebe. “We made decisions when we were scoping things out about what we were going to pre-render and pre-cache, and what we were going to keep live.” In the first episode, users are given a puzzle box to operate. Here, several dynamic effects were hooked up to the box because it was imagined that the user might want to wave it around in the air. “We wanted to add to that reality by having trailing effects off of this interactive piece,” says Grebe. “Then there were other moments where things were a little bit more canned, where you would be watching more story elements, like a breach of your ship where the stormtroopers are boarding and they blow off the door. Those are elements where we can get away with pre-rendering explosions and mixing them in.” Making those effects – and generally making assets for Vader Immortal – followed a traditional pipeline up until the point of bringing them into the game engine. “We can’t play back a fluid solve, but we could turn a rendered sequence of, say, pyro frames into what we refer to as ‘flipbooks,’” explains Grebe. “These animated stages of an explosion would all be put together and assembled into individual cells of a large card or large texture. Then we could play that back as you would a movie sequence. The early stages would be developing effects that actually look realistic and then figuring out a way to actually fit them on this card. We have to figure out how long that animation is going to be, how many cells we could remove and where we could use interpolation.”
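The flipbook playback Grebe describes boils down to a little indexing arithmetic: pick a cell of the texture atlas from the elapsed time and return its UV window. The Python sketch below illustrates that logic with assumed grid dimensions and frame rate; the production material setup in Unreal is, of course, more involved.

```python
def flipbook_uv(time_seconds, fps=24.0, columns=8, rows=8, loop=False):
    """Return (u_offset, v_offset, u_scale, v_scale) of the atlas cell to display."""
    total_cells = columns * rows
    cell = int(time_seconds * fps)
    cell = cell % total_cells if loop else min(cell, total_cells - 1)
    col, row = cell % columns, cell // columns
    u_scale, v_scale = 1.0 / columns, 1.0 / rows
    return (col * u_scale, row * v_scale, u_scale, v_scale)

# Example: one second into a 64-cell explosion flipbook -> cell 24 of the atlas
print(flipbook_uv(1.0))
```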
TOP: Vader trains the user in Vader Immortal. MIDDLE: Darkghast concept art. BOTTOM: Concept art for the hangar battle sequence in Episode 3.
A NEW KIND OF ENTERTAINMENT

For both Grebe and Snow, getting the chance to work on Vader Immortal was a welcome opportunity to be part of a new era of storytelling at Lucasfilm. “I was actually speaking with another artist here who gave me a demonstration of some of the early tests for Vader Immortal where you get to meet Vader for the first time,” recalls Grebe. “I swear right then I was hooked, just because it was so believable. When I would tilt my head around and look from all aspects, I was really drawn into this world. For me, that’s the reason why I got into film in the first place, to have viewers get lost in a moment, an experience. I feel this is the next step in the evolution of that form of storytelling.” That’s exactly what Vader Immortal was designed to feel like: a new way to experience a story. “We were not trying to create a game,” points out Snow. “We were trying to create an experience. The truth is that the VR market up until this point has been mostly adopted by gamer-type things. I feel that Vader Immortal proves that you can do this more narrative-intensive approach and that people will like it. Already we are starting to see other experiences that have been influenced by it. I hope that continues, and I want to see our explorations in more heavily narrative content continue as well.”
ANIMATION
ONWARD: CREATING MYTHICAL CREATURES – WITH A SUBURBAN TWIST By IAN FAILES
The latest Pixar animated feature, Onward, directed by Dan Scanlon, portrays a fairytale world full of magic creatures – elves, trolls, mermaids, gnomes, sprites, goblins and dragons. But there’s a twist: those creatures live in what appears to be a normal suburban existence, one where magic is perhaps not as special as it used to be. Things change when two elf brothers – Ian and Barley Lightfoot – seem to bring their deceased father back to life, at least partially, and end up on a quest to bring him back completely. From an animation and effects point of view, Onward posed several challenges for Pixar, including designing and executing a large range of mythical creatures and characters and generating magical effects. The more stylized nature of the film also enabled the team to approach this work slightly differently, sometimes with old-school animation techniques.
“Our characters had such an art-directed look to them that even in simulation we had animators come in and do hand-drawn sketches on top of our shots. We would then go in and use sculpting tools on top instead of doing re-sims. It saved us a lot of time and we could just straighten off the shoulders, give a specific curve a look, and that would get the shot finalized instead of doing iterations of sims over and over.” —Max Rodriguez, Simulation Technical Director
A STYLIZED APPROACH
All images copyright © 2019 Disney/Pixar. TOP: Brothers Ian and Barley Lightfoot find a way to partially re-animate their deceased father in Onward. OPPOSITE TOP: Animating the father character involved some unique character simulation for just the legs, and later when his body is covered in other clothing. OPPOSITE BOTTOM: Pixar approached Onward with somewhat more of a stylized feel than some of its other films.
The nature of the film’s story was one reason that a more stylized and old-school approach worked, according to animator Michael Bidinger. “The fantasy world in Onward was so rich that it got a lot of people excited to suggest a lot of ideas early. It got a lot of people working rougher; that’s where the organic things just came up.” “So,” says Bidinger, “there were a number of animators who started first with 2D sketch-blocking. It helped them start the conversation or continue the conversation about how effects might look in relation to a character’s performance.” “That helped us figure out,” outlines Character Technical Director Seth Freeman, “what is this particular character like, what kind of physicality are we going to need to have on this other character? It let us know what kind of work we had to do.” This continued all the way through production, including in simulation.
Onward’s Big Challenges
Michael Bidinger, Animator: One of the biggest challenges facing animation was finding the right performance for the Dad character, who at one point is made up of some stuffed-together clothing. It was a balance between physicality and the respective materials that Dad is made of, while still creating a performance that felt somewhat alive, or that the other characters could mistake for alive. One of the biggest helps was the tools that we were getting from simulation.
Paul Kanyuk, Crowds Technical Supervisor: Crowd specificity was a challenge on this film. We have a very diverse set of crowd characters. You can’t animate a troll the same way you animate a gnome. Then there were, say, sprites riding a bicycle where each had its own separate pose. There’s less animation re-use than we’d normally be able to do for crowds. I miss Ratatouille sometimes, where every rat can use the same run cycle and you just color the fur differently and you’re okay! The challenge on Onward was figuring out what kind of technical tricks and shared workflows we could use to minimize the amount of shot-specific animation.
Max Rodriguez, Simulation Technical Director: Hair was a big challenge. Our Simulation Supervisor, Jacob Brooks, comes from the grooming department. He has a real eye for hair. The main characters, Ian and Barley, have very specific hairstyles that are derived from what direction they are looking. If they’re looking towards the right, it doesn’t look the same if they’re looking in the other direction. We really had to pose the hair in a certain way and hope that it wouldn’t be noticeable when they flipped directions.
Seth Freeman, Character Technical Director: For characters, we had a bit more of a compressed timeline and we had all these species that we had to develop – trolls, gnomes and dragons. It was about, how do we make those characters at the high level we want them to be? For some of that, we built some tools to transfer stuff back and forth. For example, the dragon Blazey’s legs are actually like a derivative of the goblins – we could steal three-legged fingers and put them on there. We also had some tools to automatically rig characters.
Cody Harrington, Effects Technical Director: My biggest challenge was taking an abstract concept like magic and developing it to fit into the world of Onward. That took a lot of iteration, a lot of storyboarding and a lot of working with the art department and story to work out the details of what it might look like, and then iterating on it and putting it in the scene, lighting it, and seeing if it worked. Plus, we had a time compression that took us later into the production. But it did force us to build tools to make it easier – this became the multi-shot pipeline that software engineer Michael Rice developed that let us develop the effect for one shot, work it back and forth, drop it into 20 shots, and then just do shot overrides for customization. [A minimal sketch of this shot-override idea follows the sidebar.]
Renee Tam, Lightspeed Lead: One of the biggest challenges on this show was that we tried to have the most story time for the story department to create a very compelling story. We kept trying to buy more and more time for development, and meanwhile for the technical teams it got crunchier and crunchier at the back end. When a lot of departments are working on top of each other, which we call ‘concurrent workflows,’ this created a bunch of technical issues because we didn’t always have the time to figure out what was wrong or what were the specific challenges with the shots. We had to roll with the current challenges and fix them really fast.
Members of Pixar’s Onward crew reflect on their toughest tasks during production.
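Harrington’s multi-shot pipeline can be pictured as a base effect definition plus thin per-shot overrides. The Python sketch below illustrates that pattern with invented parameter names and shot codes; Pixar’s actual tool, built by Michael Rice, is proprietary and far more capable.

```python
BASE_MAGIC_FX = {
    "emission_rate": 500,     # particles per second
    "swirl_strength": 0.8,
    "color": "teal",
    "lifespan": 1.5,          # seconds
}

SHOT_OVERRIDES = {
    "sq200_sh010": {},                                         # uses the base setup as-is
    "sq200_sh040": {"emission_rate": 900, "lifespan": 2.0},
    "sq310_sh120": {"color": "violet", "swirl_strength": 1.2},
}

def build_shot_fx(shot):
    """Merge the base effect with that shot's overrides (overrides win)."""
    fx = dict(BASE_MAGIC_FX)
    fx.update(SHOT_OVERRIDES.get(shot, {}))
    return fx

for shot in SHOT_OVERRIDES:
    print(shot, build_shot_fx(shot))
```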
“There were a number of animators who started first with 2D sketch-blocking. It helped them start the conversation or continue the conversation about how effects might look in relation to a character’s performance.” —Michael Bidinger, Animator

“Our characters had such an art-directed look to them that even in simulation we had animators come in and do hand-drawn sketches on top of our shots,” says Simulation Technical Director Max Rodriguez. “We would then go in and use sculpting tools on top instead of doing re-sims. It saved us a lot of time and we could just straighten off the shoulders, give a specific curve a look, and that would get the shot finalized instead of doing iterations of sims over and over.” Even effects simulations for magic, which were mostly produced in Houdini, started life as sketches. Here, 2D cel-like drawings of magical effects were transposed to 3D to deliver a more stylized result. “Our Effects Supervisor, Vincent Serritella, is a fine artist,” details Effects Technical Director Cody Harrington. “He’s an amazing painter. He would come in and work out things like, ‘how’s the motion supposed to happen? How is the flow supposed to go?’ And he would do a quick sketch for us and a draw-over across a series of frames. That hand-drawn approach really saved us a lot of time. Vincent is an incredible effects animator himself, and he’d also set up a base effects pipeline in Houdini to realize what he had done as a quick sketch, if we needed it.” “On this film in particular, compared to other Pixar films I’ve worked on,” continues Harrington, “we were sketching all the time in effects. Normally we’re just ‘hands on mouse.’ But here we were doing draw-overs all the time. When you look at those draw-overs over a plate, it has a special kind of dynamic. We would do that just so we could get the ‘spirit’ of those drawings into the 3D.”
Lighting, too, followed what Lightspeed Lead Renee Tam says was a more traditional approach. “Again, it was more art-directed. Sometimes in the tools we use, a lot of the lighting is set there for you. But our DP, Sharon Calahan, definitely lit scenes in a way that was more artistically driven.” TRYING NEW TECHNIQUES FOR A TRADITIONAL FEEL
Among the tools utilized on Onward was motion capture, not an approach traditionally associated with Pixar, but something the studio has been implementing more and more on recent feature films and shorts. “Coco had mocap for some of the skeletons in the backgrounds,” advises Crowds Technical Supervisor Paul Kanyuk. “On Incredibles 2, they were actually able to bring it a little closer to camera for some of the background humans. But, as much as we try, we understand the limitations of mocap.”
OPPOSITE TOP TO BOTTOM: The latest developments in RenderMan and the studio’s Universal Scene Description (USD) architecture were employed on Onward. Ian (voiced by Tom Holland) sits with his magically, partly-reincarnated father. The world of Onward includes mystical and fantastical creatures, providing Pixar with a range of animation challenges. The two teenagers at the center of the film live a suburban life, but soon take on a whole new adventure.
“Maybe it’s because we’re moving into the technology phase of the animation industry, which has been moving at such a fast pace, that now we’re rebelling a little bit and wanting to just step back. You don’t want to be a machine, you don’t want AI to do your job for you. You want to actually physically get in there and do something. I think right now is kind of like a Renaissance of the tech, where we fall back into being even more organic in what we do.” —Cody Harrington, Effects Technical Director

“What we ended up using it for in Onward was for the sprites as a way to get ideas on the screen,” Kanyuk continues. “The challenge was that the physicality was all wrong because they’re super-small characters that are super-strong.” There were some Onward characters where mocap definitely did not prove useful. “Forget about a troll!” says Kanyuk. “I have some really funny clips of trying to be a troll. Mocap is a way to get ideas out there, but unfortunately it really does have its limitations in feature film animation. Also, our animators are really good and have ways of getting that stylized feeling in there that you can’t do otherwise.” “Still,” suggests Kanyuk, “we’re looking into ways we can use it more effectively. There’s a joke at the end of Ratatouille – ‘No motion capture or any other performance shortcuts were used in the production of this film.’ Some people took that to be the outlook of the entire studio. However, we are open to all technologies and all techniques. We’re not so defensive of the way we work that we insist it has to be one way forever. That said, we have a house style and mocap does not give us that house style. So we’ve got to be very careful in how we’re able to use that.”
TOP TO BOTTOM: Visual development art by Lou Hamou-Lhadj, Ana Lacaze, Kiki Poh, Emron Grover and Matt Nolte. The brothers Ian and Barley (voiced by Tom Holland and Chris Pratt) conjure part of their father. A visual development piece for the film by Sharon Calahan. Barley Lightfoot (voiced by Chris Pratt) next to his beloved van, Guinevere. Director Dan Scanlon gives notes during an Onward art review.
ONWARD AND UPWARD

For Pixar’s crew, Onward naturally represented a chance to advance the studio’s artistic and technical side, especially in terms of story and design, rendering via RenderMan, taking advantage of their USD workflow, and through character and effects simulations. At the same time, however, the studio’s artists relished returning to more grounded roots in their approach to the film, as Harrington observes. “Maybe it’s because we’re moving into the technology phase of the animation industry, which has been moving at such a fast pace, that now we’re rebelling a little bit and wanting to just step back. You don’t want to be a machine, you don’t want AI to do your job for you. You want to actually physically get in there and do something. And so I think right now is kind of like a Renaissance of the tech, where we fall back into being even more organic in what we do.”
PREVIS
HOW PREVIS HAS GONE REAL-TIME By IAN FAILES
TOP LEFT: Halon’s virtual camera setup used for previs at the studio. (Image courtesy of Halon Entertainment) TOP RIGHT: A previs still from Halon’s work for Ford v Ferrari, where real-time techniques aided in generating the stylized animatics. (Image copyright © 2019 20th Century Fox) BOTTOM RIGHT: The idea behind Halon’s cel-shaded-like renders for the previs was to be able to cut easily with storyboards. (Image copyright © 2019 20th Century Fox) OPPOSITE TOP: Behind the scenes of the Drogon oner shot in Season 8 of Game of Thrones, where The Third Floor made use of virtual production techniques to plan the shot. (Image copyright © 2019 HBO) OPPOSITE MIDDLE: Nviz Head of Visualization Janek Lender showcases the studio’s virtual production system for previs. (Image courtesy of Nviz) OPPOSITE BOTTOM: A previs frame from Nviz’s Vcam setup. (Image courtesy of Nviz)
Previs studios have always had to work fast. They are regularly called upon to fashion animatics quickly and make numerous iterations to suit changing scripts and last-minute inclusions. Often this also involves delivery of technical schematics – techvis – for how a shot could be filmed. And then they tend to regularly get involved after the shoot to do postvis, where again it’s necessary to deliver quickly. While previs studios have had to be nimble, new real-time rendering tools have now enabled them to be even more so. Plus, using the same tools coupled with virtual production techniques allows previs studios to deliver even more in the shot planning and actual shooting process. For example, previs and virtual production on the Disney+ series The Mandalorian adopted real-time workflows in a whole new way. Here, a ‘volume’ approach to shooting in which actors were filmed against real-time rendered content on LED walls was undertaken. That content went through several stages of planning that involved traditional previs, a virtual art department and then readying for real-time rendering using Epic Games’ Unreal Engine on the LED walls. The studios involved in this work included ILM, The Third Floor, Halon Entertainment, Happy Mushroom and Mold3D Studio. The Mandalorian is just one of the standout projects where previs, virtual production and real-time rendering are making significant in-roads. Here’s how a bunch of previs studios, including those that worked on the show, have adapted to these new workflows.
HALON JUMPS INTO REAL-TIME

Like many previs studios, Halon Entertainment relied for many years almost solely on Maya and its native viewport rendering for previs. However, the studio did dabble in real-time via MotionBuilder for mocap, the use of ILM’s proprietary previs tool ZVIZ on the film Red Tails, and with Valve’s Source Filmmaker. When Epic Games’ Unreal Engine 4 was released, Halon jumped on board real-time in a much bigger way. “We saw the potential in the stunning real-time render quality, which our founder Dan Gregoire mandated be implemented across all visualization projects,” relays Halon Senior Visualization Supervisor Ryan McCoy. “War for the Planet of the Apes was our first previs effort with the game engine, and our tools and experience have grown considerably since. It has been our standard workflow for all of our visualization shows for a few years now.” Modeling and keyframe animation for previs still happens in Maya and is then exported to Unreal. Here, virtual rigs to light the scene can be manipulated, or VR editing tools used to place set dressing. “We also have virtual production capabilities where multiple actors can be motion-captured and shot through a virtual camera displaying the Unreal rendered scene,” adds McCoy. “This can work through an iPad, or a VR headset, allowing the director to run the actors through a scene and improvise different actions and camera moves, just as they would on a real set.” “Aside from the virtual production benefits, these real-time techniques also have some significant ramifications for our final-pixel projects,” continues McCoy. “Being able to re-light a scene on the fly with ray-traced calculations allows us to tweak the final image without waiting hours for renders. Look development for the shaders and FX simulations are also all done in real-time.” One of Halon’s real-time previs projects was Ford v Ferrari, where the studio produced car race animatics that had a stylized look achieved with cel-shaded-like renders – the aim was to cut easily with storyboards. “We utilized Unreal’s particle systems and post processes to make everything fit the desired style while still timing the visuals to be as realistic as possible,” outlines Halon Lead Visualization Artist Kristin Turnipseed. “That combined with a meticulously built Le Mans track and being faithful to the actual race and cornering speeds allowed us to put together a dynamic sequence that felt authentic, and allowed us to provide detailed techvis to production afterward.”

THE THIRD FLOOR GOES VIRTUAL

The Third Floor has also adopted real-time in a major way, for several reasons, according to CTO and co-founder Eric Carney. “First, the technology can render high-quality visuals in real-time and works really well with real-world input devices such as mocap and cameras. We think a hybrid Maya/Unreal approach offers a lot of great advantages to the work we do and are making it a big part of our workflow. It allows us to more easily provide tools like virtual camera (Vcam), VR scouting and AR on-set apps. It lets us iterate through versions of our shots even faster than before.” The Third Floor has recently leveraged real-time in visualization work on For All Mankind, as well as location-based attractions.
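Both studios describe the same basic handoff: author in Maya, push to Unreal. As a minimal sketch of the Maya side of that handoff, the snippet below exports a previs group as an FBX file using the stock fbxmaya plugin. The group name and output path are placeholders, and real previs pipelines wrap far more around this step (cameras, frame ranges, naming conventions), so treat it as an illustration rather than either studio’s actual tool.

```python
import maya.cmds as cmds

def export_previs_fbx(root_group="previs_GRP", out_path="/shows/demo/sq010_previs.fbx"):
    """Export everything under root_group as an FBX that Unreal can ingest."""
    cmds.loadPlugin("fbxmaya", quiet=True)                 # make sure the FBX exporter is loaded
    cmds.select(root_group, replace=True, hierarchy=True)  # grab the group and its children
    cmds.file(out_path,
              force=True,
              options="v=0;",
              type="FBX export",
              preserveReferences=True,
              exportSelected=True)
    return out_path

# export_previs_fbx()   # run inside a Maya session that contains a previs_GRP group
```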
“We also have virtual production capabilities where multiple actors can be motion-captured and shot through a virtual camera displaying the Unreal rendered scene. This can work through an iPad, or a VR headset, allowing the director to run the actors through a scene and improvise different actions and camera moves, just as they would on a real set.” —Ryan McCoy, Senior Visualization Supervisor, Halon
SPRING 2020 VFXVOICE.COM • 33
PREVIS
HOW PREVIS HAS GONE REAL-TIME By IAN FAILES
TOP LEFT: Halon’s virtual camera setup used for previs at the studio. (Image courtesy of Halon Entertainment) TOP RIGHT: A previs still from Halon’s work for Ford v Ferrari, where real-time techniques aided in generating the stylized animatics. (Image copyright © 2019 20th Century Fox) BOTTOM RIGHT: The idea behind Halon’s cel-shaded-like renders for the previs was to be able to cut easily with storyboards. (Image copyright © 2019 20th Century Fox) OPPOSTE TOP: Behind the scenes of the Drogon oner shot in Season 8 of Game of Thrones, where The Third Floor made use of virtual production techniques to plan the shot. (Image copyright © 2019 HBO) OPPOSITE MIDDLE: Nviz Head of Visualization Janek Lender showcases the studio’s virtual production system for previs. (Image courtesy of Nviz) OPPOSITE BOTTOM: A previs frame from Nviz’s Vcam setup. (Image courtesy of Nviz)
32 • VFXVOICE.COM SPRING 2020
Previs studios have always had to work fast. They are regularly called upon to fashion animatics quickly and make numerous iterations to suit changing scripts and last-minute inclusions. Often this also involves delivery of technical schematics – techvis – for how a shot could be filmed. And then they tend to regularly get involved after the shoot to do postvis, where again it’s necessary to deliver quickly. While previs studios have had to be nimble, new real-time rendering tools have now enabled them to be even more so. Plus, using the same tools coupled with virtual production techniques allows previs studios to deliver even more in the shot planning and actual shooting process. For example, previs and virtual production on the Disney+ series The Mandalorian adopted real-time workflows in a whole new way. Here, a ‘volume’ approach to shooting in which actors were filmed against real-time rendered content on LED walls was undertaken. That content went through several stages of planning that involved traditional previs, a virtual art department and then readying for real-time rendering using Epic Games’ Unreal Engine on the LED walls. The studios involved in this work included ILM, The Third Floor, Halon Entertainment, Happy Mushroom and Mold3D Studio. The Mandalorian is just one of the standout projects where previs, virtual production and real-time rendering are making significant in-roads. Here’s how a bunch of previs studios, including those that worked on the show, have adapted to these new workflows.
Supervisor Ryan McCoy. “War for the Planet of the Apes was our first previs effort with the game engine, and our tools and experience have grown considerably since. It has been our standard workflow for all of our visualization shows for a few years now.” Modeling and keyframe animation for previs still happens in Maya and is then exported to Unreal. Here, virtual rigs to light the scene can be manipulated, or VR editing tools used to place set dressing. “We also have virtual production capabilities where multiple actors can be motion-captured and shot through a virtual camera displaying the Unreal rendered scene,” adds McCoy. “This can work through an iPad, or a VR headset, allowing the director to run the actors through a scene and improvise different actions and camera moves, just as they would on a real set. “Aside from the virtual production benefits, these real-time techniques also have some significant ramifications for our finalpixel projects,” continues McCoy. “Being able to re-light a scene on the fly with ray-traced calculations allows us to tweak the final image without waiting hours for renders. Look development for the shaders and FX simulations are also all done in real-time.” One of Halon’s real-time previs projects was Ford v Ferrari, where the studio produced car race animatics that had a stylized look achieved with cel-shaded-like renders – the aim was to cut easily with storyboards. “We utilized Unreal’s particle systems and post processes to make everything fit the desired style while still timing the visuals to be as realistic as possible,” outlines Halon Lead Visualization Artist Kristin Turnipseed. “That combined with a meticulously built LeMans track and being faithful to the actual race and cornering speeds allowed us to put together a dynamic sequence that felt authentic, and allowed us to provide detailed techvis to production afterward.”
HALON JUMPS INTO REAL-TIME
THE THIRD FLOOR GOES VIRTUAL
Like many previs studios, Halon Entertainment relied for many years almost solely on using Maya and its native viewport rendering for previs. However, the studio did dabble in real-time via MotionBuilder for mocap, the use of ILM’s proprietary previs tool ZVIZ on the film Red Tails, and with Valve’s Source Filmmaker. When Epic Games’ Unreal Engine 4 was released, Halon jumped on board real-time in a much bigger way. “We saw the potential in the stunning real-time render quality, which our founder Dan Gregoire mandated be implemented across all visualization projects,” relays Halon Senior Visualization
The Third Floor has also adopted real-time in a major way, for several reasons, according to CTO and co-founder Eric Carney. “First, the technology can render high-quality visuals in real-time and works really well with real-world input devices such as mocap and cameras. We think a hybrid Maya/Unreal approach offers a lot of great advantages to the work we do and are making it a big part of our workflow. It allows us to more easily provide tools like virtual camera (Vcam), VR scouting and AR on-set apps. It lets us iterate through versions of our shots even faster than before.” The Third Floor has recently leveraged real-time in visualization
“We also have virtual production capabilities where multiple actors can be motion-captured and shot through a virtual camera displaying the Unreal rendered scene. This can work through an iPad, or a VR headset, allowing the director to run the actors through a scene and improvise different actions and camera moves, just as they would on a real set.” —Ryan McCoy, Senior Visualization Supervisor, Halon
“From the beginning, we could see the huge potential of using the engine for previs and postvis, and it only took us a few weeks to decide we needed to invest time creating visualization tools to be able to take full advantage of this method of working.” —Janek Lender, Head of Visualization, Nviz
The Third Floor has recently leveraged real-time in visualization work on For All Mankind, as well as location-based attractions. A significant project in which the company combined several real-time and virtual production approaches was the final season of Game of Thrones. Here, a simul-cam setup for a Drogon ship attack scene, and VR set scouting for King’s Landing, helped the show’s makers and visual effects teams plan and then complete complex shots that involved a mix of live action and CG. “For the Euron vs. Drogon oner,” says Carney, “we deployed a range of techniques led by Virtual Production/Motion Control Supervisor Kaya Jabar. We used the TechnoDolly as our camera crane and to provide simul-cam. We pre-programmed the camera move into the Technocrane based on the previs, which The Third Floor’s team headed by Visualization Supervisor Michelle Blok had completed. The TechnoDolly provided us with a live camera trans/rotation that we fed into MotionBuilder to generate the CG background. We also set up an LED eyeline so the actor knew where the dragon was at all times.”

The virtual set scouting approach for King’s Landing included imagining a large street scene. “The street set was going to be a massive, complicated build, and everyone wanted to make sure it would work for all the scenes that were going to be filmed on it,” states Carney. “Our artists were embedded in the art department and were able to convert their models into Unreal for VR scouting. The directors used the VR to pre-scout the virtual set and check for size and camera angles.” Then, the studio’s VR scouting setup was adopted by production to craft an animatic for the show’s Throne Room sequence, where Drogon burns the throne and retrieves Daenerys. The Third Floor artists animated blocking, while the DP used the camera viewfinder tool in the studio’s VR system to ‘film’ the shots. Says Carney: “We visualized Drogon’s animation across the scene in VR and then moved the shots into regular previs and then finally transferred Drogon’s performance to an on-set Ncam system, allowing the directors and crew to see the ‘dragon in the room.’ Casey Schatz, Head of Virtual Production at The Third Floor, used the real-time composite together with floor marks derived from techvis to provide accurate cues for the actor and eyeline pole operator.”
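The simul-cam loop Carney describes, a live camera translation and rotation streamed from the crane into whatever is rendering the CG background, can be pictured with a very small sketch like the one below. The packet layout, port and callback are invented for illustration; they are not the TechnoDolly’s actual protocol or MotionBuilder’s API.

import socket
import struct

# Invented packet layout: six little-endian floats, translation XYZ (meters)
# followed by rotation XYZ (degrees). Real tracking protocols differ.
POSE = struct.Struct("<6f")

def listen_for_camera_poses(apply_pose, host="0.0.0.0", port=9000):
    """Receive streamed camera poses and hand each one to the render side.
    apply_pose would set the CG camera's transform in the engine or DCC."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        data, _ = sock.recvfrom(1024)
        tx, ty, tz, rx, ry, rz = POSE.unpack_from(data)
        apply_pose((tx, ty, tz), (rx, ry, rz))

# Stand-in consumer that just prints each pose instead of driving a camera.
if __name__ == "__main__":
    listen_for_camera_poses(lambda t, r: print("camera", t, r))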
VISUALIZATION WITH NVIZ

Nviz (a rebrand of VFX studio Nvizible and previs studio Nvizage) began utilizing Unreal Engine 4 about three years ago, initially just as a rendering tool. Now, with projects in the works such as The Witches, Morbius and The King’s Man, the studio has become a major real-time user. “From the beginning,” advises Nviz Head of Visualization Janek Lender, “we could see the huge potential of using the engine for previs and postvis, and it only took us a few weeks to decide we needed to invest time creating visualization tools to be able to take full advantage of this method of working.

“Right now,” notes Lender, “we are merging our previs and Vcam setups so that we can offer the shot creation back to the filmmakers themselves. We also have tools that allow us to drive characters within the engine from our motion capture suits, and this way we can have complete control over the action within the scene. In essence, this
offering constitutes an extremely powerful visualization toolset, which can be used to virtually scout sets, to visit potential locations which could be used for backplates, to create shots by shooting the action live (using the camera body and lens sets chosen by the DP), and then to edit the shots together there and then to create the previsualized sequences of the film.

“Adopting a real-time technique has increased the speed of our workflow and offered far more richness in the final product, such as real-time lighting, real-time effects, faster shot creation, higher quality asset creation, improved sequence design, and many other improvements that we could not have achieved with our legacy workflow.”

“First, the technology can render high-quality visuals in real-time and works really well with real-world input devices such as mocap and cameras. We think a hybrid Maya/Unreal approach offers a lot of great advantages to the work we do and are making it a big part of our workflow. It allows us to more easily provide tools like virtual camera (Vcam), VR scouting and AR on-set apps. It lets us iterate through versions of our shots even faster than before.” —Eric Carney, CTO/Co-founder, The Third Floor

DNEG: A NEW PLAYER
During its history in visual effects production, DNEG has dived into previs and virtual production on many occasions. But now the studio has launched its dedicated DNEG Virtual Production unit, made up of specialists in VFX, VR, IT, R&D, live-action shooting and game engines to work in this space. Productions to which DNEG has delivered virtual production services include Denis Villeneuve’s Dune and Kenneth Branagh’s Death on the Nile. “Our current virtual production services offering includes immersive previsualization tools – virtual scouting in VR and a virtual camera solution – and on-set mixed reality tools for real-time VFX, including an iPad app,” outlines DNEG Head of Virtual Production Isaac Partouche. “Our immersive previs services enable filmmakers to scout, frame and block out their shots in VR using both the HTC VIVE Pro VR headset and our own virtual camera device.

“DNEG has been focused on developing a range of proprietary tools for its virtual production offering, creating a suite of tools developed specifically for filmmakers,” adds Partouche, who also notes that the studio’s real-time rendering toolset is based on Unreal Engine. “We have consulted with multiple directors and DPs during the development process, taking on board their feedback and ensuring that the tools are as intuitive as possible for our filmmaking clients.”
OPPOSITE TOP: VFX studio DNEG has launched its own virtual production and previs outfit. (Image courtesy of DNEG) OPPOSITE BOTTOM: DNEG is using a range of simul-cam, VR and real-time tools to aid in virtual production work. (Image courtesy of DNEG) TOP: A still from CNCPT’s gameplay work for Future Man, in which their original previs and layout became the final imagery. (Image copyright © 2017 Hulu) BOTTOM: CNCPT utilized Unreal Engine to craft the gameplay for Future Man. (Image copyright © 2017 Hulu)
TV
THE MANDALORIAN AND THE FUTURE OF FILMMAKING By IAN FAILES
All images copyright © Lucasfilm Ltd. TOP: Pedro Pascal as the Mandalorian and Misty Rosas as Kuiil (voiced by Nick Nolte) in The Mandalorian. OPPOSITE TOP: Director Deborah Chow on the Razorcrest set. OPPOSITE BOTTOM: Kuiil rides atop a CG blurrg.
Two extraordinary things happened when Disney+’s The Mandalorian began airing on the new streaming service. First, the world fell in love with the phenomenon that is Baby Yoda. And second, the world probably did not realize that when the show’s characters appeared on some alien landscape, audiences were in fact often watching actors performing against giant LED screens.

The production, from Lucasfilm and Jon Favreau’s Golem Creations, is the first-ever Star Wars live-action television series, and has perhaps already changed the game in terms of what might be possible in live-action TV. This is because of the virtual production approach to filming the show, in which those LED screens displayed real-time-rendered virtual sets that could be changed on the fly. The idea for such an approach was championed by Favreau, the show’s creator and the writer of several episodes, who already had extensive experience with virtual production filmmaking techniques and real-time rendering on projects such as The Jungle Book and The Lion King. The Mandalorian Visual Effects Supervisor Richard Bluff, who hails from Industrial Light & Magic, which oversaw the VFX effort and collaborated with several virtual production vendors, says that it was Favreau who recognized a new approach to shooting a sprawling live-action Star Wars series was needed, and that LED screens and a shooting methodology built around real-time-rendered environments could be the answer. Among many other benefits, the approach offered the opportunity to not have to rely solely on blue or greenscreens, but instead
have the actors and filmmakers see and interact directly with their environments and even be filmed for in-camera VFX shots. Not only that, since what was displayed on the LED walls was rendered in real-time, the imagery could move in unison with the camera (or performer) and adjust to the appropriate parallax during shooting.

Bluff notes that this ‘Volume’ filmmaking helped in many ways, one of which was achieving scope. “One of the biggest challenges with a live-action Star Wars TV show is the number of planets and other worlds we visit,” says the VFX Supervisor, who adds that the number of different exotic locations visited in the show would have necessitated the construction of multiple sets or required extensive – and expensive – location shoots around the world. Instead, most of The Mandalorian was captured on shooting stages in Los Angeles. “One of the reasons we wanted to shoot against the LED screens,” Bluff says, “is that we could store all those environments on a hard drive and then load them all up on the screen while giving the actors a minimal – relative to most productions – ‘physical’ set to interact with.

“Also,” continues Bluff, “if we had done it the usual way, what we’d end up with is, say, 180 degrees of 40-foot-tall bluescreens. The challenge with that, of course, is that the actors don’t know where they are. And then when you get into editorial, you can get lost pretty quickly. Actually, [Lucasfilm President] Kathy Kennedy was visiting us in the first few weeks of shooting, and I asked her what she felt was going to be the biggest difference for her personally by shooting with the LED walls. She said it was going to be the ability to follow the storyline without getting distracted by all the bluescreen and questioning where the scene was taking place.”
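The parallax behavior described above, wall imagery shifting with the tracked camera so the perspective stays correct, boils down to re-rendering the wall content every frame with an asymmetric (off-axis) frustum computed from the camera’s position relative to the screen. The sketch below is a generic version of that math for a flat screen, in the spirit of Kooima’s generalized perspective projection; the production’s wall was curved and driven by StageCraft, so treat this purely as an illustration, with made-up corner positions and clip distances.

import numpy as np

def off_axis_projection(eye, screen_ll, screen_lr, screen_ul, near=0.1, far=5000.0):
    """Projection matrix for a camera at 'eye' looking at a flat screen whose
    lower-left, lower-right and upper-left corners are given in world space."""
    vr = screen_lr - screen_ll                     # screen right axis
    vu = screen_ul - screen_ll                     # screen up axis
    vr /= np.linalg.norm(vr)
    vu /= np.linalg.norm(vu)
    vn = np.cross(vr, vu)                          # screen normal, toward the viewer
    vn /= np.linalg.norm(vn)

    va, vb, vc = screen_ll - eye, screen_lr - eye, screen_ul - eye
    d = -np.dot(va, vn)                            # eye-to-screen distance
    left = np.dot(vr, va) * near / d               # frustum extents at the near plane
    right = np.dot(vr, vb) * near / d
    bottom = np.dot(vu, va) * near / d
    top = np.dot(vu, vc) * near / d

    # Standard OpenGL-style frustum built from those extents. A full implementation
    # would also build the view matrix from (vr, vu, vn) and the eye position.
    return np.array([
        [2*near/(right-left), 0, (right+left)/(right-left), 0],
        [0, 2*near/(top-bottom), (top+bottom)/(top-bottom), 0],
        [0, 0, -(far+near)/(far-near), -2*far*near/(far-near)],
        [0, 0, -1, 0]])

# Re-evaluated each frame with the latest tracked camera position, this is what
# keeps the background's parallax locked to the real camera move.
proj = off_axis_projection(eye=np.array([0.0, 1.8, 4.0]),
                           screen_ll=np.array([-5.0, 0.0, 0.0]),
                           screen_lr=np.array([5.0, 0.0, 0.0]),
                           screen_ul=np.array([-5.0, 6.0, 0.0]))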
“One of the reasons we wanted to shoot against the LED screens is that we could store all those environments on a hard drive and then load them all up on the screen while giving the actors a minimal – relative to most productions – ‘physical’ set to interact with.” —Richard Bluff, Visual Effects Supervisor, ILM
VIRTUAL PRODUCTION TECH
“[Lucasfilm President] Kathy Kennedy was visiting us in the first few weeks of shooting, and I asked her what she felt was going to be the biggest difference for her personally by shooting with the LED walls. She said it was going to be the ability to follow the storyline without getting distracted by all the bluescreen and questioning where the scene was taking place.” —Richard Bluff, Visual Effects Supervisor, ILM
The Volume itself and how imagery was displayed on it came about via the joint collaboration of several companies. The wall was made up of a 20-foot-high by 75-foot-diameter semi-circular LED wall and LED ceiling supplied and integrated by Lux Machina. Profile Studios looked after camera tracking (the footage was acquired on an Arri Alexa LF). At the heart of the on-set production was ILM’s StageCraft toolset, which enabled the playback and manipulation of real-time rendered imagery via Epic Games’ Unreal Engine onto the LED walls, i.e. as virtual sets. Actors performed in an area on the ground of the Volume effectively enclosed by those walls, meaning the captured footage also had the benefit of interactive lighting on the actors. The area was dressed with sand, rocks or other materials matching the particular location that would be seen on the LED wall, with the seam between the ground and LED wall designed to line up invisibly (and change the necessary perspective when the camera moved). Getting the cameras synchronized to the LED wall was one of the biggest hurdles, identifies Bluff, as was ensuring there was a robust color pipeline in preparation and projection of the wall imagery. “Whenever anybody walked onto set and looked at the LEDs with their naked eye, the color was very different from what you would imagine it should be. That’s because the LUTs and the camera profile isn’t layered on top of the LED screen. It’s only once the camera points at the screen and you see it through the monitors that the color reverts back to what it needs to look like.”
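Bluff’s point about the naked-eye view is worth unpacking: the show LUT and camera profile live only in the camera-to-monitor path, so the only fair way to judge the wall is through that same transform. The toy sketch below stands in for the idea with a 3x3 matrix and a 1D curve; an actual show would run a managed color pipeline (OCIO/ACES-style), and none of these numbers come from the production.

import numpy as np

def preview_through_camera(wall_rgb, camera_matrix, lut_1d):
    """Rough stand-in for 'looking at the wall through the camera': apply a
    3x3 colorimetry matrix, then a per-channel 1D LUT, to the wall content."""
    flat = wall_rgb.reshape(-1, 3) @ camera_matrix.T
    idx = (np.clip(flat, 0.0, 1.0) * (len(lut_1d) - 1)).astype(int)
    return lut_1d[idx].reshape(wall_rgb.shape)

# Toy inputs: a do-nothing matrix and a gamma-like curve as the "LUT."
camera_matrix = np.eye(3)
show_lut = np.linspace(0.0, 1.0, 4096) ** (1.0 / 2.4)
wall_frame = np.random.rand(1080, 1920, 3).astype(np.float32)
monitor_view = preview_through_camera(wall_frame, camera_matrix, show_lut)

Looking at the wall with the naked eye corresponds to skipping this transform entirely, which is exactly why the panels looked ‘wrong’ to visitors on set.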
DESIGNING WORLDS

In some ways, what the Volume presented to The Mandalorian’s makers was an inversion of the visual effects process. A lot of design and planning work needed to be done upfront – rather than in post-production – to craft the virtual sets and ensure they could be properly utilized during filming. That process began with Lucasfilm Creative Director Doug Chiang, whose team designed characters, vehicles and locations. Production Designer Andrew L. Jones then oversaw design and previs for the show out of a Virtual Art Department (VAD). ILM was also part of the VAD, and it was here that designs became game-engine-ready assets. Oftentimes these assets were generated with the benefit of photogrammetry, and sometimes the lighting was also ‘pre-baked’ to ensure that the sets held up on the LED wall. The beauty of all the sets being virtual was that they could be scouted in virtual reality, something that Favreau, the show’s various directors and DPs could do just as if it was a real set location scout. In addition, they could ‘pre-light’ and adjust sets at this stage of production. And adjustments could continue to occur even during the shoot, thanks to Unreal Engine as well as a dedicated team on set positioned at what became known as the ‘Brain Bar.’

VIRTUAL SET HIGHLIGHTS
For Bluff, two particular virtual sets were highlights of The Mandalorian’s shoot. One was the moment the central character visits the very familiar Mos Eisley Cantina bar on Tatooine. Here, a partial Cantina set was meticulously lined up with LED wall content – that is, part of what you saw in a final Cantina bar shot would be real set and part of it was LED set – to produce in-camera-final footage, something that was carefully previs’d. “We had to scout virtually where the camera could and couldn’t be, and on what lens ahead of the shoot to ensure that our magic trick worked,” outlines Bluff. “Through that pre-planning and department collaboration we managed to shoot the Cantina sequence in-camera with zero fixes in post to the LED displayed content.” Another successful virtual environment was the vast ‘Roost’ hangar where at one point the Mandalorian lands his ship, the Razorcrest. “It was a virtual environment that was the size of a football field,” says Bluff. “Across three shooting days, we were
OPPOSITE TOP: The LED walls provided in-camera reflections for the Mandalorian’s helmet. OPPOSITE BOTTOM: One of the many vehicle effects crafted for the series. TOP: IG-11, voiced by Taika Waititi, was a CG creation sometimes played on set by a performer in a gray tracking suit and partial head-piece. MIDDLE: IG-11 and the Mandalorian ready for a shootout. BOTTOM: ‘Baby Yoda,’ a hit in the show, was brought to life by both puppetry and CG means.
“[Creating the vast hangar virtual environment where the Mandalorian landed his ship] was a huge undertaking where we had to work very closely with the art department to source all the physical props ahead of time to give visual effects enough lead-time to scan and photograph them to make as CG counterparts that were indistinguishable from the physical ones. There were also CG particle effects for sparks, smoke and digital doubles walking around in real-time on the LED wall that were cue-able by the director.” —Richard Bluff, Visual Effects Supervisor, ILM
going to be filming on one goal line, then the halfway line, and then the opposing goal line. So anything on day one that appeared on the screen that you saw that was on the ground had to be a physical object the next day once we got to the halfway line because the actors needed to interact with it. “This was a huge undertaking where we had to work very closely with the art department to source all the physical props ahead of time to give visual effects enough lead time to scan and photograph them to make as CG counterparts that were indistinguishable from the physical ones. There were also CG particle effects for sparks, smoke and digital doubles walking around in real-time on the LED wall that were cue-able by the director.”
“We had to scout virtually where the camera could and couldn’t be, and on what lens ahead of the shoot to ensure that our magic trick worked. And through that pre-planning and department collaboration we managed to shoot the Cantina sequence in-camera with zero fixes in post to the LED displayed content.” —Richard Bluff, Visual Effects Supervisor, ILM

TOP: The Mandalorian makes some repairs to his own armor. Many scenes could be accomplished in the ‘Volume,’ allowing for real-time-rendered backgrounds that could be moved and changed easily. BOTTOM: This sequence involved an attempted raid on a Jawa sand crawler.
BRINGING BABY YODA TO LIFE
The Mandalorian is, early on in the series, tasked with bringing an unknown target back alive – ‘Baby Yoda,’ who the bounty hunter ultimately decides to protect. Legacy Effects was tasked with building both an animatronic puppet and a ‘stuffie’ version of Baby Yoda. “The hope was that we could use the puppet as often as possible, but at the same time I felt that the heavy lifting would need to be done in CG,” notes Bluff. This turned out not to be the case, due in large part to the vision Jon Favreau had for the character. With Favreau’s daily oversight, the practical Baby Yoda was used in a much larger way than anyone in the VFX department had assumed. “Everybody at the same time realized that this little character was quite magical,” points out Bluff, “and that it was going to work way better than we ever expected, and that was all down to Jon’s vision and taste.”

Previs for shots involving Baby Yoda had imagined scenes of the character climbing out of seats or around the Razorcrest console (one reason why a CG solution had been envisaged). But, following
the desire for more puppetry usage, shots were re-designed to fit the domain of what an animatronic puppet was capable of. ILM did, however, build a matching Baby Yoda asset and worked on several CG character shots. These included when he was walking or picking up things such as a frog to eat. The studio also augmented some practical eye blinks and carried out puppeteer and rod removal where required. When animating their CG Baby Yoda, ILM – led by Animation Supervisor Hal Hickel – was required at times to match the limitations that were inherent in the physical puppet. “The animators had to understand exactly how the puppet moved,” explains Bluff. “We even went to the lengths of getting photography of the underlying armature that drove the baby, and measured the movement of the servos so that we knew how far we should be moving the eyes or head.” When audiences finally saw Baby Yoda, they were immediately charmed, recalls Bluff. “I don’t think any of us thought it was going to take off in the way it did. I have to say it was just the most wonderful lift while we were shooting Season 2. Every Friday when an episode from Season 1 dropped, we were here [working], and it was a huge shot in the arm, right at the end of what was always a very challenging week.”
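One way to honor those measured limits on the CG side is simply to clamp the animation curves to the armature’s range and servo speed before they reach the renderer. The snippet below is a generic illustration with invented numbers; it is not ILM’s rig, tooling or data.

import numpy as np

# Invented limits for a head channel, standing in for values measured off the
# physical armature.
MAX_ANGLE_DEG = 25.0           # how far the head may turn
MAX_SPEED_DEG_PER_SEC = 60.0   # how fast the servo can move it
FPS = 24.0

def clamp_to_servo_limits(rotation_curve_deg):
    """Limit a per-frame rotation curve to the puppet's measured range and speed."""
    curve = np.clip(np.asarray(rotation_curve_deg, dtype=float),
                    -MAX_ANGLE_DEG, MAX_ANGLE_DEG)
    max_step = MAX_SPEED_DEG_PER_SEC / FPS
    for i in range(1, len(curve)):
        step = np.clip(curve[i] - curve[i - 1], -max_step, max_step)
        curve[i] = curve[i - 1] + step
    return curve

print(clamp_to_servo_limits([0.0, 10.0, 40.0, 40.0, -40.0]))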
Miniatures, Motion Control and The Mandalorian

Just as Baby Yoda was imagined partly with what could be considered ‘old-school’ puppetry techniques, so too were some shots of the Razorcrest ship seen in The Mandalorian. Here, a group of ILM artists combined to design, 3D print and sculpt parts for a model of the ship. Then, a hand-built, milled motion-control rig – fashioned by none other than ILM Chief Creative Officer and Visual Effects Supervisor John Knoll – was used to shoot the model with a DSLR. The moco rig was also designed to provide camera moves that were consistent with ILM’s optical-era motion-control Star Wars spaceship shots. The Razorcrest would be filmed in multiple passes against bluescreen and then composited into final shots. During the season, audiences were none the wiser as to which Razorcrest shots were achieved this way and which were digital.
THIS IS THE WAY FORWARD
With Season 2 production underway, The Mandalorian team has continued to push forward with new advancements. These include ILM taking more control of the virtual production, LED wall development and real-time workflows (the studio is utilizing its own Helios game engine within StageCraft this time around, for example). “As we all began to further understand this technology and the needs moving forward,” says Bluff, “we’ve gone with a complete in-house solution for driving the LED screens that we feel will thrust the technology even further forward. What we want to do with this technology is effectively give the filmmakers and the creatives a choice of tools and a choice of capabilities.”
TOP: The fabrication and puppetry for Baby Yoda was handled by Legacy Effects. BOTTOM: Baby Yoda causes much mischief in the cockpit of the Razorcrest.
PROFILE
“It was 1985 and I thought that computers were the future and that high resolution would be the area to have expertise in. Being liberated from shooting each frame on a rostrum camera was also massively appealing. I love to embrace new ideas and technologies. It’s invigorating to innovate and think ahead into the future. This medium made me happy as I could literally go where no man has gone before.” —Sheena Duggal, Visual Effects Supervisor
SHEENA DUGGAL: FROM ROCK-STAR ALBUM COVERS TO HOLLYWOOD BLOCKBUSTERS By TREVOR HOGG
Images courtesy of Sheena Duggal. TOP LEFT: Sheena Duggal. TOP RIGHT: Duggal, left, and Producer Jacquie Barnbrook, right, with the Sex and the City 2 VFX plate crew in 2008. OPPOSITE TOP: Anger Management cast and crew. Left to right: Don McAlpine, Peter Segal, Jack Nicholson, Charles Newirth, Sheena Duggal, Barry Bernardi, Luis Guzmán, Daniel Kuehn, Michael Ewing, Adam Sandler, Nancy Karlin and Todd Garner in 2002. (Photo: Sidney Baldwin) OPPOSITE MIDDLE: With Computer Graphics Supervisor Jake Morrison and Visual Effects Set Producer Daniel Kuehn working on The Prize Winner of Defiance, Ohio in 2005. OPPOSITE BOTTOM: At Yankee Stadium with Executive VFX Producer Debbie Denise and First Assistant Director John Hockridge for Anger Management in 2002. (Photo courtesy of Rob Bredow)
In recognition of a visual effects career that began as a pioneering Flame artist on Super Mario Bros. (1993) and currently has her overseeing the visual effects for Venom 2, Sheena Duggal received the 2020 VES Award for Creative Excellence, which honors individuals who have consistently produced compelling and creative imagery in service to the story. The daughter of an electrical engineer and a homemaker was born in Manchester, England and grew up in Stourport-on-Severn in Hereford and Worcestershire. “I left home at 17 to study an art foundation course at the Victoria Institute School of Art in Worcester,” recalls Duggal. “I then went on to study for a Bachelor of Arts specializing in animation and title design. Bob Godfrey was our animation lecturer and he was a real hoot. Everything we animated was very Roobarb and Custard [an early 1970s BBC animated children’s show that used a hand-sketched style animated on twos]. We had the opportunity to visit Bob at his studio in London and were delighted to handle the Oscar that he won for his best animated short, Great, which made him the first British animator to win an Academy Award.” An internship at Lodge-Cheeseman Production enabled the aspiring artist and fan of American graphic designer and filmmaker Saul Bass (Vertigo) to work with Bernard Lodge. “Besides being famous for the 1963-1973 Doctor Who title sequences, Bernard designed the brief screen display of the complex landscape on the barren asteroid in Alien. This was generated by Systems Simulation Ltd., one of the very few computer graphics facilities outside of the U.S. creating CG for motion pictures. Lodge also directed the Esper sequence in Blade Runner (1982). It was a thrill for me of course. I never imagined that I would go on to VFX supervise two films with an artist I admire as much as Sir Ridley Scott (Matchstick Men, Body of Lies).” Upon graduating with a Bachelor of Arts Honors degree in Graphic Design specializing in animation, Duggal decided to forgo traditional animation and create high-resolution computer designs for musicians and photographers. “It was 1985 and I thought that computers were the future and that high resolution would be the area to have expertise in. Being liberated from shooting each frame on a rostrum camera was also massively appealing. I love to embrace new ideas and technologies. It’s invigorating to innovate and think ahead into the
future. This medium made me happy as I could literally go where no man has gone before.”

Duggal got the opportunity to conceptualize the album cover for Traveling Wilburys Vol. 1 with George Harrison. “I was working with Art Director David Costa of Wherefore Art. It was a magical time. I remember how collaborative and down-to-earth George was. We spent many hours sitting in front of the computer together and even went out to dinner. George talked about being a Beatle, but mostly about his time in India and how profoundly affected he was by his yogi, and his devotion to yoga and transcendental meditation.”

A chance to work on the first video game adaptation caused Duggal to shift her focus towards cinema. “My friend Philippe Panzini [he later won a sci-tech award for his work on Flame software] was working on Super Mario Bros. and showed my work to the producers. They hired me from London to come to Los Angeles as a digital matte painter. When I arrived, they discovered that I had studied animation, so I was animating and compositing shots in no time.” Around that time revolutionary 2D compositing software was being developed. “The early days with alpha versions of Flame were fantastic,” Duggal recalls. “We pushed the tools to the limit. The developer who had written the code, Gary Tregaskis, was with us on Super Mario Bros. writing tools as needed. Every day brought something new that we didn’t know we could do. We were devising new methodologies, coming up with ideas and creating images that were literally bleeding edge at the time.

“After Super Mario Bros. I went to work at Colossal Pictures where I animated and composited the Robocop theme park ride for Iwerks Entertainment and created the award-winning Coke Sun commercial,” remarks Duggal. “In 1994, I joined ILM to work with Bob Zemeckis (Back to the Future) on a Tales from the Crypt episode in which he was bringing Humphrey Bogart back to life. This is when I first met Ken Ralston (Forrest Gump) and Debbie Denise (Snow Falling on Cedars). I worked on some fun projects at ILM. Compositing the fight sequence on top of the train for the first Mission: Impossible was one of the bigger challenges, but considering this was done in 1995, it holds up really well today.”

A large group of ILM employees led by Ralston left to establish Sony Pictures Imageworks. “I took on the role of Co-Director of High-Speed Compositing along with Mark Holmes. This meant setting up a Flame department. After being at ILM it was a shock to try to build a functioning facility mostly from scratch. I was managing and training artists, compositing shots and supervising multiple shows during this time. We were also writing tools and support systems to track and manage data. It was certainly a
SPRING 2020 VFXVOICE.COM • 45
PROFILE
“It was 1985 and I thought that computers were the future and that high resolution would be the area to have expertise in. Being liberated from shooting each frame on a rostrum camera was also massively appealing. I love to embrace new ideas and technologies. It’s invigorating to innovate and think ahead into the future. This medium made me happy as I could literally go where no man has gone before.” —Sheena Duggal, Visual Effects Supervisor
SHEENA DUGGAL: FROM ROCK-STAR ALBUM COVERS TO HOLLYWOOD BLOCKBUSTERS By TREVOR HOGG
Images courtesy of Sheena Duggal. TOP LEFT: Sheena Duggal. TOP RIGHT: Duggal, left, and Producer Jacquie Barnbrook, right, with the Sex and the City 2 VFX plate crew in 2008. OPPOSITE TOP: Anger Management cast and crew. Left to right: Don McAlpine, Peter Segal, Jack Nicholson, Charles Newirth, Sheena Duggal, Barry Bernardi, Luis Guzmán, Daniel Kuehn, Michael Ewing, Adam Sandler, Nancy Karlin and Todd Garner in 2002. (Photo: Sidney Baldwin) OPPOSITE MIDDLE: With Computer Graphics Supervisor Jake Morrison and Visual Effects Set Producer Daniel Kuehn working on The Prize Winner of Defiance, Ohio in 2005. OPPOSITE BOTTOM: At Yankee Stadium with Executive VFX Producer Debbie Denise and First Assistant Director John Hockridge for Anger Management in 2002. (Photo courtesy of Rob Bredow)
44 • VFXVOICE.COM SPRING 2020
In recognition of a visual effects career that began as a pioneering Flame artist on Super Mario Bros. (1993) and currently has her overseeing the visual effects for Venom 2, Sheena Duggal received the 2020 VES Award for Creative Excellence, which honors individuals who have consistently produced compelling and creative imagery in service to the story. The daughter of an electrical engineer and a homemaker was born in Manchester, England and grew up in Stourport-on-Severn in Hereford and Worcestershire. “I left home at 17 to study an art foundation course at the Victoria Institute School of Art in Worcester,” recalls Duggal. “I then went on to study for a Bachelor of Arts specializing in animation and title design. Bob Godfrey was our animation lecturer and he was a real hoot. Everything we animated was very Roobarb and Custard [an early 1970s BBC animated children’s show that used a hand-sketched style animated on twos]. We had the opportunity to visit Bob at his studio in London and were delighted to handle the Oscar that he won for his best animated short, Great, which made him the first British animator to win an Academy Award.” An internship at Lodge-Cheeseman Production enabled the aspiring artist and fan of American graphic designer and filmmaker Saul Bass (Vertigo) to work with Bernard Lodge. “Besides being famous for the 1963-1973 Doctor Who title sequences, Bernard designed the brief screen display of the complex landscape on the barren asteroid in Alien. This was generated by Systems Simulation Ltd., one of the very few computer graphics facilities outside of the U.S. creating CG for motion pictures. Lodge also directed the Esper sequence in Blade Runner (1982). It was a thrill for me of course. I never imagined that I would go on to VFX supervise two films with an artist I admire as much as Sir Ridley Scott (Matchstick Men, Body of Lies).” Upon graduating with a Bachelor of Arts Honors degree in Graphic Design specializing in animation, Duggal decided to forgo traditional animation and create high-resolution computer designs for musicians and photographers. “It was 1985 and I thought that computers were the future and that high resolution would be the area to have expertise in. Being liberated from shooting each frame on a rostrum camera was also massively appealing. I love to embrace new ideas and technologies. It’s invigorating to innovate and think ahead into the
future. This medium made me happy as I could literally go where no man has gone before.” Duggal got the opportunity to conceptualize the album cover for Traveling Wilburys Vol. 1 with George Harrison. “I was working with Art Director David Costa of Wherefore Art. It was a magical time. I remember how collaborative and down-to-earth George was. We spent many hours sitting in front of the computer together and even went out to dinner. George talked about being a Beatle, but mostly about his time in India and how profoundly affected he was by his yogi, and his devotion to yoga and transcendental meditation.” A chance to work on the first video game adaptation caused Duggal to shift her focus towards cinema. “My friend Philippe Panzini [he later won a sci-tech award for his work on Flame software] was working on Super Mario Bros. and showed my work to the producers. They hired me from London to come to Los Angeles as a digital matte painter. When I arrived, they discovered that I had studied animation, so I was animating and compositing shots in no time.” Around that time a revolutionary 2D compositing software was being developed. “The early days with alpha versions of Flame were fantastic,” Duggal recalls. “We pushed the tools to the limit. The developer who had written the code, Gary Tregaskis, was with us on Super Mario Bros. writing tools as needed. Every day brought something new that we didn’t know we could do. We were devising new methodologies, coming up with ideas and creating images that were literally bleeding edge at the time. “After Super Mario Bros. I went to work at Colossal Pictures where I animated and composited the Robocop theme park ride for Iwerks Entertainment and created the award-winning Coke Sun commercial,” remarks Duggal. “In 1994, I joined ILM to work with Bob Zemeckis (Back to the Future) on a Tales of the Crypt episode in which he was bringing Humphrey Bogart back to life. This is when I first met Ken Ralston (Forrest Gump) and Debbie Denise (Snow Falling on Cedars). I worked on some fun projects at ILM. Compositing the fight sequence on top of the train for the first Mission: Impossible was one of the bigger challenges, but considering this was done in 1995, it holds up really well today.” A large group of ILM employees led by Ralston left to establish Sony Pictures Imageworks. “I took on the role of Co-Director of High-Speed Compositing along with Mark Holmes. This meant setting up a Flame department. After being at ILM it was a shock to try to build a functioning facility mostly from scratch. I was managing and training artists, compositing shots and supervising multiple shows during this time. We were also writing tools and support systems to track and manage data. It was certainly a
“[The lack of female visual effects supervisors] is definitely [the result of] a lack of opportunity and I’d say unconscious bias. ... It never really occurred to me that my gender should hold me back and I was always surprised when it did. I am a strong believer in diversity and inclusion, not just because I am a bi-racial woman but because I believe that greater diversity leads to freer thinking and greater creativity.” —Sheena Duggal, Visual Effects Supervisor
TOP: With Jack Nicholson during shooting of Anger Management in 2002. (Photo courtesy of Rob Bredow) MIDDLE: Recording an interview with Sir Ridley Scott at RSA Studios for a Directors Guild of America video presentation in 2009. BOTTOM: With Visual Effects Supervisor Scott Stokdyk on top of a 50-story building in Manhattan – with a spydercam rig in the background – while filming Spider-Man 3 in 2006. Crews were dropping cable cams down the front of the building for the crane destruction scene and Gwen Stacy’s (Bryce Dallas Howard) falling shot.
character-building experience.”

After the completion of Starship Troopers (1997) and Contact (1997), Duggal began actively searching for another project. “The Postman (1997) needed some 911 help, and this was the first show that I supervised and shared the co-supervisor credit with Stephen Rosenbaum (Kong: Skull Island). Ken Ralston took me under his wing and I supervised Patch Adams (1998) under his guidance. I met Executive Producer Charles Newirth (City of Angels) on this film and that began a great collaboration with him. Charles hired me on a lot of films when he was head of Revolution Studios with Joe Roth (Anger Management). I even worked on a film directed by Roth during this time. I had the pleasure of working with Charles again at Marvel a few years later. I am grateful to all the people who gave me an opportunity back then.”

Transitioning from working in a facility to the studio production side was not a big leap. “Certainly, all the experience that I gained as Creative Director of the Flame compositing department was helpful in managing teams and budgets,” notes Duggal. “I have been lucky to have partnered with some of the most excellent creative VFX producers and this made a world of difference. What I miss most about not being inside a facility is the access to technological expertise. It’s much harder to keep up with technology when you aren’t part of that larger team working towards a solution across multiple shows. I was used to being involved in aspects of R&D for most of my career. Working at Marvel was great because we had access to the clever folks at Disney Research in Zurich, and it was always fascinating to get a peek into the future technology they are working on. Happily, I am still involved in creative technology R&D projects and have been consulting with Codex, now X2X labs [in partnership with PIX groups], for many years.”

Artistic inspirations come from paintings, cinema and music. “As a teenager I was inspired by the playfulness and freedom of Dalí,” notes Duggal. “I love impressionism, which can completely capture the essence of something in a nonliteral way. I especially love artists like Georges de La Tour, the Baroque painter who painted the most amazing chiaroscuro scenes lit only by candlelight. When it comes to films, my art influences definitely cross over. Ray Harryhausen’s (Jason and the Argonauts) stop-motion animation was something magical to me. Almost every frame of Blade Runner is a work of art
in its own right. I am a big fan of Peter Greenaway (A Zed & Two Noughts) who is an artist influenced by Renaissance and Baroque paintings. I love the symmetry and symbolism in his films. You can see the golden ratio employed in his work. I have definitely been influenced by the graphic design of Saul Bass and by not just the music, but also the art of the album covers of the 1960s and 1970s. It’s fortuitous that I spent some time in my career designing album covers for major rock stars.”

Duggal directed the short Inspire: The Chicago Spire Art Film (2007), which made early use of the Arnold renderer. “The piece was littered with visuals and harmonics echoing Phi and was loved by [Spanish architect-engineer] Santiago Calatrava, who had designed his building around the principle of the Fibonacci sequence. Somehow, by looking at designs for his building I understood this, so when I pitched the idea for the short art piece to him on a cold night standing on the site in Chicago, he was delighted. When I showed him the previz, Santiago invited me into his house in New York to see a moving sculpture he had built that had been exhibited at the Met. There were no notes. Santiago just loved the previz and wanted to share the art he created so I could understand the ideas he had. I was invited to design meetings with him where he was talking to the architects and contractors of the building. I hope it gets built someday; it is a beautiful vision. I asked Santiago what inspired him and he said there is no such thing as inspiration. You focus on the work and the ideas and solutions will come.”

Duggal’s photography has been published and is an important part of her work. “For me, photography is painting with light and so much more. It acts as a means to bring something into sharp focus, and to create a visual library in my mind so that when reading a script I can see a concept of how it might look. It’s an exciting part of the process. Photography also serves as a communication tool with the cinematographer. The work that we do is built on top of the canvas they create for us to paint on. No cinematographer is going to give you the time of day if you can’t talk the same language as them and understand why a lens or composition choice might be preferable. I’m working with Bob Richardson (Once Upon a Time in Hollywood) right now and he’s a master. I delight in his use of back light, and Bob is the consummate collaborator with visual effects by giving us the most beautiful footage to
TOP LEFT: The Sahara Desert, Morocco, with Director Sir Ridley Scott, VFX Producer Jacquie Barnbrook and data wrangler Mesrob Tolkein for Body of Lies in 2007. TOP MIDDLE: With Director Catherine Hardwicke at the premiere of Miss Bala in 2019. TOP RIGHT: With Producer Avi Arad during post-production of Venom in 2018. MIDDLE: With Robin Williams during production of Patch Adams in 1998. BOTTOM LEFT: At the Academy Awards New Members party in 2012. BOTTOM RIGHT: Shooting in Zimbabwe for UK charity Pump Aid in 2010.
“The early days with alpha versions of Flame were fantastic. We pushed the tools to the limit. The developer who had written the code, Gary Tregaskis, was with us on Super Mario Bros. writing tools as needed. Every day brought something new that we didn’t know we could do. We were devising new methodologies, coming up with ideas and creating images that were literally bleeding edge at the time.” —Sheena Duggal, Visual Effects Supervisor
TOP: Crew in the Sahara desert on Body of Lies in 2007. MIDDLE: Holding a piece of a glacier right after a calving on 50 First Dates in 2003. BOTTOM: With 50 First Dates Director Peter Segal in Blackstone Bay, Alaska.
build on.”

Increasing the presence of female visual effects supervisors is an issue that needs to be addressed. “It’s definitely a lack of opportunity and I’d say unconscious bias,” states Duggal. “It didn’t start out that way and there were more women in the early days. Somehow as the industry became more commercialized, it seemed like the higher paying creative jobs became exclusively reserved for our male colleagues. Earlier in my career I was specifically told that the goal was to promote the male supervisors, and watched as guys were promoted up the ranks who had worked under my VFX supervision and were given opportunities on large VFX shows. It never really occurred to me that my gender should hold me back, and I was always surprised when it did.

“I am a strong believer in diversity and inclusion, not just because I am a bi-racial woman but because I believe that greater diversity leads to freer thinking and greater creativity,” remarks Duggal. “I worked within the Academy of Motion Picture Arts and Sciences as a member of the VFX branch executive committee and Chair of the branch A2020 committee in 2017-2018, to help meet the Academy A2020 goals to increase diversity and inclusion by 50% by the year 2020. When I began this work in 2015 there were five women in the VFX branch. We still have a long way to go, but we have increased these numbers significantly as well as increasing the number of people of color in our ranks. Women spend too much time being congenial, and it’s time for us to speak up about our achievements and the opportunities we’ve created for ourselves.”

The British visual effects supervisor has worked with several female filmmakers such as Nora Ephron (Michael), Gurinder Chadha (I Dream of Jeannie), Penny Marshall (Riding in Cars with Boys) and Catherine Hardwicke (Miss Bala). “My goal is to continue to work with female filmmakers as much as possible.”

Duggal has fond memories of the creative and technical challenges encountered in making Contact and Body of Lies (2008), The Hunger Games (2012) and Agent Carter (ABC, 2015 to 2016). “The beach sequence in Contact was the first time we created a virtual world and the actors were shot on a 360 bluescreen. That was a huge design challenge,” recalls Duggal. “The mirror shot has become legendary and yet was relatively simple in its conception, although the execution presented some compositing challenges. Body of Lies
was a wonderful project. I love working with Ridley Scott because he knows everyone’s job as well as they do. Therefore, he is very secure and wants your input and ideas. My best memory from this film is editing the car chase with Ridley and Pietro Scalia (Black Hawk Down) using visual effects to help weave together the visual storytelling.

“The Hunger Games was possibly the most challenging project I worked on,” continues Duggal. “Lionsgate had everything riding on the success of the film. The pressure was enormous. We had as many as 18 visual effects vendors and the work ranged from virtual environments, creatures, FX simulations and motion graphics, to weapons and complex compositing tasks. As a result of production limitations, schedule and budget, we only had four weeks of prep on the film, and so we were trying to prep and design while we were shooting out in the woods.”

The television series Agent Carter resulted in a VES Award nomination for Outstanding Supporting Visual Effects in a Photoreal Episode. “On the first season of Agent Carter we had hundreds of shots up in the air between episodes one and two, and a looming broadcast schedule airdate. I brought my old ILM friends along with me for the first season and we shared the learning curve together, nearly buckling under the weight of the challenge. I really enjoyed the experience of working on both seasons, mad as it was to execute.

“I have had some tremendous people who have shared their knowledge and who have given me opportunities,” notes Duggal. “It’s largely due to this forward-thinking group of people who believed in me that I was able to showcase my talent and achieve what I have. I spent a lot of time working on the fringes or out of the box, and doing so has presented interesting creative opportunities. There are many ways to channel creative thinking and problem-solving, and as a bi-racial woman I have a unique perspective that no one else at this level in VFX has.”

Receiving the VES Award for Creative Excellence is a proud accomplishment for Duggal. “I can think of a lot of people who deserve the VES Award for Creative Excellence, and it’s easy for me to think that they deserve it more. I’ve looked at my body of work and begun to think ‘it could be more impressive,’ but then I stop myself and think about the question of opportunity and how much of a struggle it has been to persist against the odds.”
TOP LEFT: At the Venom premiere in 2018 with VFX Producer Mark Soper. TOP RIGHT: In Tiananmen Square with Executive Producer Charles Newirth, Director of Photography Gabriel Beristain and Producer Brad Winderbaum on Iron Man 3 in 2012. MIDDLE RIGHT: With Co-producer Marty Cohen at North Fork Reservoir near Black Mountain, Asheville, North Carolina on The Hunger Games in 2011. BOTTOM RIGHT: While volunteering for UK water charity Pump Aid in the Mchinji District of Malawi, Africa in 2012, Duggal recalls: “We were at a Pump Aid well building. I had a GoPro and asked the lady to wear it on her head. They misunderstood and put the bricks they were carrying to the well on their heads and walked to the well with them. It was terrifying as some had babies on their backs.” (Photo by Sheena Duggal)
VES AWARDS
VISUAL EFFECTS ARTISTRY IN THE SPOTLIGHT
Captions list all members of each Award-winning team even if some members were not present or out of frame. For more Show photos and a complete list of nominees and winners of the 18th Annual VES Awards visit visualeffectssociety.com.
All photos by: Danny Moloshok and Phil McCarten
1. Eric Roth, Executive Director of the Visual Effects Society, welcomes the crowd. 2. Patton Oswalt hosts the 18th Annual VES Awards.
3. The VES Award for Outstanding Visual Effects in a Photoreal Feature went to The Lion King and the team of Robert Legato, ASC, Tom Peitzman, Adam Valdez and Andrew R. Jones.
The Visual Effects Society held the 18th Annual VES Awards, the prestigious yearly celebration that recognizes outstanding visual effects artistry and innovation in film, animation, television, commercials, video games and special venues, and the VFX supervisors, VFX producers and hands-on artists who bring this work to life. Comedian Patton Oswalt served as host for the ninth time before the more than 1,000 guests gathered at the Beverly Hilton Hotel, Los Angeles, on January 29, 2020 to celebrate VFX talent in 25 awards categories.

The Lion King was named the photoreal feature winner, garnering three awards in total. Missing Link was named the top animated film, winning two awards. The Mandalorian was named best photoreal episode and garnered two awards. Game of Thrones and Stranger Things 3 also won two awards each. Hennessy: The Seven Worlds topped the commercial field with two wins. Nominees in the 25 categories were selected by VES members via events hosted by 11 VES Sections, including Australia, the Bay Area, Germany, London, Los Angeles, Montreal, New York, New Zealand, Toronto, Vancouver and Washington.

The VES Lifetime Achievement Award was presented to Academy, DGA and Emmy Award-winning director-producer-screenwriter Martin Scorsese by VFX Supervisor Pablo Helman. Scorsese’s The Irishman won two awards. The VES Visionary Award was presented to acclaimed director-producer-screenwriter Roland Emmerich by actress Joey King. The VES Award for Creative Excellence was presented to acclaimed visual effects supervisor Sheena Duggal by actor-director Andy Serkis via video.

Award presenters included directors J.J. Abrams, Jon Favreau, Rian Johnson and Josh Cooley, and actors Storm Reid, Madeline Brewer, Janina Gavankar, Sophie Skelton and Maxwell Jenkins. Lisa Campbell, Autodesk’s Chief Marketing Officer and SVP, presented the Autodesk Student Award.

NOTE: The following two Award-winning categories did not have any representatives on hand for photos.

· The VES Award for Outstanding Animated Character in a Commercial went to Cyberpunk 2077 (Dex) and the team of Jonas Ekman, Jonas Skoog, Marek Madej and Grzegorz Chojnacki.
· The VES Award for Outstanding Visual Effects in a Real-Time Project went to Control and the team of Janne Pulkkinen, Elmeri Raitanen, Matti Hämäläinen and James Tottman.
4. The VES Award for Outstanding Supporting Visual Effects in a Photoreal Feature went to The Irishman and the team of Pablo Helman, Mitchell Ferm, Jill Brooks, Leandro Estebecorena and Jeff Brink. 5. The VES Award for Outstanding Visual Effects in an Animated Feature went to Missing Link and the team of Brad Schiff, Travis Knight, Steve Emerson and Benoit Dubuc.
6. The VES Award for Outstanding Visual Effects in a Photoreal Episode went to The Mandalorian (The Child) and the team of Richard Bluff, Abbigail Keller, Jason Porter, Hayden Jones and Roy K. Cancino. 7. The VES Award for Outstanding Supporting Visual Effects in a Photoreal Episode went to Chernobyl (1:23:45) and the team of Max Dennison, Lindsay McFarlane, Clare Cheetham, Paul Jones and Claudius Christian Rauch. 8. The VES Award for Outstanding Visual Effects in a Commercial went to Hennessy: The Seven Worlds and the team of Carsten Keller, Selçuk Ergen, Kiril Mirkov and William Laban. 9. The VES Award for Outstanding Visual Effects in a Special Venue Project went to Star Wars: Rise of the Resistance and the team of Jason Bayever, Patrick Kearney, Carol Norton and Bill George. 10. The VES Award for Outstanding Animated Character in a Photoreal Feature went to Alita: Battle Angel (Alita) and the team of Michael Cozens, Mark Haenga, Olivier Lesaint and Dejan Momcilovic.
11. The VES Award for Outstanding Animated Character in an Animated Feature went to Missing Link (Susan) and the team of Rachelle Lambden, Brenda Baumgarten, Morgan Hay and Benoit Dubuc. 12. The VES Award for Outstanding Animated Character in an Episode or Real-Time Project went to Stranger Things 3 (Tom/Bruce Monster) and the team of Joseph Dubé-Arsenault, Antoine Barthod, Frederick Gagnon and Xavier Lafarge.
15. The VES Award for Outstanding Created Environment in an Episode, Commercial, or Real-Time Project went to Game of Thrones (The Iron Throne; Red Keep Plaza) and the team of Carlos Patrick DeLeon, Alonso Bocanegra Martinez, Marcela Silva and Benjamin Ross. 16. The VES Award for Outstanding Virtual Cinematography in a CG Project went to The Lion King and the team of Robert Legato, ASC, Caleb Deschanel, ASC, Ben Grossmann and AJ Sciutto.
13. The VES Award for Outstanding Created Environment in a Photoreal Feature went to The Lion King (The Pridelands) and the team of Marco Rolandi, Luca Bonatti, Jules Bodenstein and Filippo Preti. 14. The VES Award for Outstanding Created Environment in an Animated Feature went to Toy Story 4 (Antiques Mall) and the team of Hosuk Chang, Andrew Finley, Alison Leaf and Philip Shoebottom.
17. The VES Award for Outstanding Model in a Photoreal or Animated Project went to The Mandalorian (The Sin; The Razorcrest) and the team of Doug Chiang, Jay Machado, John Goodson and Landis Fields IV. 18. The VES Award for Outstanding Effects Simulations in a Photoreal Feature went to Star Wars: The Rise of Skywalker and the team of Don Wong, Thibault Gauriau, Goncalo Cababca and François-Maxence Desplanques.
20. The VES Award for Outstanding Effects Simulations in an Episode, Commercial, or Real-Time Project went to Stranger Things 3 (Melting Tom/Bruce) and the team of Nathan Arbuckle, Christian Gaumond, James Dong and Aleksandr Starkov. 21. The VES Award for Outstanding Compositing in a Feature went to The Irishman and the team of Nelson Sepulveda, Vincent Papaix, Benjamin O’Brien and Christopher Doerhoff.
19. The VES Award for Outstanding Effects Simulations in an Animated Feature went to Frozen 2 and the team of Erin V. Ramos, Scott Townsend, Thomas Wickes and Rattanin Sirinaruemarn.
25. The VES Award for Outstanding Visual Effects in a Student Project went to The Beauty and the team of Marc Angele, Aleksandra Todorovic, Pascal Schelbli and Noel Winzen. 26. J.J. Abrams, center, is flanked by VES Executive Director Eric Roth, left, and Jeffrey A. Okun, VES. 27. Director Rian Johnson presented multiple awards. 28. Actor Maxwell Jenkins was on hand as a presenter.
29. VFX Supervisor Pablo Helman presented the VES Lifetime Achievement Award to director Martin Scorsese.
30. VES Lifetime Achievement Award recipient Martin Scorsese delivered a heartfelt video thanking the VES for his award.
31. Lisa Campbell, Autodesk’s Chief Marketing Officer and SVP, presented the Autodesk Student Award. 32. Roland Emmerich receives the VES Visionary Award. 33. Actress Joey King presented Roland Emmerich with the VES Visionary Award.
34. Director Jon Favreau, right, with VES Chair Mike Chambers and VES Executive Director Eric Roth.
22. The VES Award for Outstanding Compositing in an Episode went to Game of Thrones (The Long Night; Dragon Ground Battle) and the team of Mark Richardson, Darren Christie, Nathan Abbott and Owen Longstaff.
23. The VES Award for Outstanding Compositing in a Commercial went to Hennessy: The Seven Worlds and the team of Rod Norman, Guillaume Weiss, Alexander Kulikov and Alessandro Granella.
24. The VES Award for Outstanding Special (Practical) Effects in a Photoreal or Animated Project went to The Dark Crystal: Age of Resistance (She Knows All the Secrets) and the team of Sean Mathiesen, Jon Savage, Toby Froud and Phil Harvey.
35. More than 1,100 gathered to celebrate the 18th Annual VES Awards. 36. Jon Favreau was on hand as a presenter. 37. Actresses Joey King, Janina Gavankar, Storm Reid and Madeline Brewer were all on hand as presenters. 38. Director Josh Cooley presented various awards.
39. The VES Award for Creative Excellence was presented to acclaimed visual effects supervisor Sheena Duggal.
40. Actress Sophie Skelton was on hand as a presenter.
41. Andy Serkis presents the VES Award for Creative Excellence via video to Sheena Duggal. 42. Visionary Award recipient Roland Emmerich celebrating with Jeffrey A. Okun, VES and VES Chair Mike Chambers.
VES AWARD WINNERS
THE LION KING
The VES Award for Outstanding Visual Effects in a Photoreal Feature went to The Lion King, which won three VES Awards, including Outstanding Created Environment in a Photoreal Feature (The Pridelands) and Outstanding Virtual Cinematography in a CG Project. (Photos courtesy of Disney Enterprises Inc.)
MISSING LINK
Missing Link won the VES Awards for Outstanding Visual Effects in an Animated Feature and Outstanding Animated Character in an Animated Feature (Susan). (Photos courtesy of Laika Studios and Annapurna Pictures)
THE MANDALORIAN
The Mandalorian won the VES Awards for Outstanding Visual Effects in a Photoreal Episode (The Child) and Outstanding Model in a Photoreal or Animated Project (The Sin; The Razorcrest). (Photos courtesy of Lucasfilm and Walt Disney Studios)
TECH & TOOLS
NEW VFX TOOLS, AND HOW VFX PROS ARE USING THEM By IAN FAILES
Visual effects artists are regularly faced with the prospect of having to learn new tools. Of course, this takes some investment in time and money. But it can also reap rewards from having a new set of skills and ways of achieving VFX imagery at the artists’ disposal. VFX Voice has identified four tools now available for artists to check out, and spoken to professionals about how the different pieces of software are being used in production right now. With The Mill, we jump into Autodesk’s new Bifrost for Maya. With Weta Digital, we detail the studio’s work with the Nuke volumetric plug-in Eddy. Milk VFX then outlines the texturing tool Nexture from Cronobo. Finally, we hear from artists who have been using Houdini’s Crowd tools for a music video.
THE MILL JOURNEYS TO BIFROST
TOP: A still from The Mill’s VFX for a PlayStation Skyrim commercial, in which Bifrost was used to create atmospherics and snow. (Image courtesy of The Mill)
Several years ago, Autodesk acquired Exotic Matter, the company behind the fluid simulation software Naiad. Exotic Matter founder Marcus Nordenstam, now a senior product manager at Autodesk, has been at work implementing the successor to Naiad – Bifrost – in Maya, and it was announced as a new plug-in visual programming environment in mid-2019. Bifrost is aimed at bringing a procedural content-creation framework into Maya, letting users do advanced FX simulations. One of those users is FX Supervisor Todd Akita from The Mill. A number of commercials out of The Mill have utilized Bifrost for simulation work, and The Mill has become part of testing the limits of the tool.

“About two years ago,” says Akita, “Marcus arranged for a team that included Kosta Stamatelos and Duncan Brinsmead to join us on a PlayStation Skyrim commercial at The Mill, designing tools and workflows for snow. Their work included coupling the particle
system to the Aero volume solver, so that the dragon’s wings in the commercial could displace large volumes of air and push tons of snow particles around, and getting the data out of the system so we could render it. When the Autodesk team first showed up, Bifrost didn’t have OpenVDB input or output capabilities – nor did we have the ability to read or write particles, but by the time they left, we were able to do both.”

Akita mentions, too, that Bifrost has become useful in handling elements such as feather and garment simulations and foliage instancing for other commercials. He sees it as a tool that will comfortably sit within The Mill’s pipeline. “Bifrost’s close proximity to the animation and lighting disciplines makes it an easy pick for certain tasks like mesh deformers or instancing,” notes Akita. “On the FX side, we’ve had success using Bifrost both from inside of Maya, and also running it from inside of Houdini. I think on the Houdini side there’s interest in the Aero solver because it just has a different feel than what some of us are used to getting out of Pyro in Houdini. So you might see Houdini setups with both Bifrost and DOPnet branches, which is fine – choice is good!”

“I think ultimately we don’t want the software thing to become a ‘this or that’ proposition, but rather we want the best of all options within easy reach. We’ve also had promising results with Bifrost’s physically based combustion solver – I’m looking forward to exploring that a bit more.”
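For teams weighing a similar adoption, the practical first step is usually just confirming that the Bifrost plug-in is present in a Maya session before any graph work begins. Below is a minimal Maya Python sketch of that check; the plug-in name bifrostGraph and the node type bifrostGraphShape are assumptions based on recent Maya releases rather than details from the article, and the graphs themselves are still authored interactively in the Bifrost Graph Editor.

```python
# Minimal sketch (not from the article): verify Bifrost is available in a
# Maya session before relying on it for simulation work. The plug-in name
# "bifrostGraph" and node type "bifrostGraphShape" are assumptions for
# recent Maya releases.
import maya.cmds as cmds

def ensure_bifrost_loaded(plugin_name="bifrostGraph"):
    """Load the Bifrost plug-in if needed; return True when it is available."""
    try:
        cmds.loadPlugin(plugin_name, quiet=True)  # no-op if already loaded
    except RuntimeError:
        return False  # plug-in not installed on this workstation
    return cmds.pluginInfo(plugin_name, query=True, loaded=True)

if ensure_bifrost_loaded():
    # Hypothetical starting point: an empty graph container that artists
    # would then populate in the Bifrost Graph Editor.
    graph = cmds.createNode("bifrostGraphShape", name="snowSetupShape")
    print("Bifrost ready:", graph)
else:
    print("Bifrost plug-in not found; falling back to other FX workflows.")
```

Keeping a check like this in a pipeline bootstrap script lets the same scenes open cleanly on workstations with and without Bifrost installed.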
MILK GETS MINUTE DETAILS WITH NEXTURE
Normally, painting detailed CG textures on creatures and characters is an intensive task. One new tool aiming to help implement highly detailed texture maps in a faster time-frame is Cronobo VFX’s Nexture. It incorporates artificial neural networks, combined with a proprietary image synthesis algorithm, to transfer details from a reference pattern bank onto CG texture maps. Milk VFX used Nexture for some of their creatures on the television series Good Omens.

“For Good Omens, we created a creature, Satan, which was set to be seen as a wide shot silhouetted against smoke,” explains Milk VFX Head of Modeling Sam Lucas. “However, the storyboard for Satan’s sequence changed quite late in the post phase, with some new shots requiring close-ups on our asset. A few changes were therefore necessary, and adding more details for the close-up turned out to be essential.”

Lucas notes this could have been done in a sculpting tool such as ZBrush to paint micro-displacements. But, he says, “the process is long, the model needs to be super subdivided and we couldn’t have had such a detailed skin effect. Nexture was more appropriate for this and we were testing it at this time, so Satan was the perfect candidate!”

The process involves painting masks to drive different patterns of skin on the sculpt (i.e., normally there would be different skin pore patterns on the nose versus the cheeks). “Once that’s done,” explains Lucas, “you just have to give the masks, displacement and the reference of the pattern you want for each mask and Nexture will do its magic. All the micro displacement that Nexture
TOP: The Mill took on Bifrost for this Bud Light advertisement. (Image courtesy of The Mill) MIDDLE: The Maya interface showing Bifrost’s nodes setup. (Image courtesy of Autodesk) BOTTOM: Volume effects in Bifrost. (Image courtesy of Autodesk)
SPRING 2020 VFXVOICE.COM • 65
TECH & TOOLS
NEW VFX TOOLS, AND HOW VFX PROS ARE USING THEM By IAN FAILES
Visual effects artists are regularly faced with the prospect of having to learn new tools. Of course, this takes some investment in time and money. But it can also reap rewards from having a new set of skills and ways of achieving VFX imagery at the artists’ disposal. VFX Voice has identified four tools now available for artists to check out, and spoken to professionals about how the different pieces of software are being used in production right now. With The Mill, we jump into Autodesk’s new Bifrost for Maya. With Weta Digital, we detail the studio’s work with the Nuke volumetric plug-in Eddy. Milk VFX then outlines the texturing tool Nexture from Cronobo. Finally, we hear from artists who have been using Houdini’s Crowd tools for a music video. THE MILL JOURNEYS TO BIFROST
TOP: A still from The Mill’s VFX for a Playstation Skyrim commercial, in which Bifrost was used to create atmospherics and snow. (Image courtesy of The Mill)
64 • VFXVOICE.COM SPRING 2020
Several years ago, Autodesk acquired the company, Exotic Matter, which was behind the fluid simulation software Naiad. Exotic Matter founder Marcus Nordenstam, now a senior product manager at Autodesk, has been at work on implementing the successor to Naiad – Bifrost – into Maya, and it was announced as a new plug-in visual programming environment in mid-2019. Bifrost is aimed at bringing a procedural content-creation framework into Maya, letting users do advanced FX simulations. One of those users is FX Supervisor Todd Akita from The Mill. A number of commercials out of The Mill have utilized Bifrost for simulation work, and The Mill has become part of testing the limits of the tool. “About two years ago,” says Akita, “Marcus arranged for a team that included Kosta Stamatelos and Duncan Brinsmead to join us on a PlayStation Skyrim commercial at The Mill, designing tools and workflows for snow. Their work included coupling the particle
system to the Aero volume solver, so that the dragon’s wings in the commercial could displace large volumes of air and push tons of snow particles around, and getting the data out of the system so we could render it. When the Autodesk team first showed up, Bifrost didn’t have open-VDB input or output capabilities – nor did we have the ability to read or write particles, but by the time they left, we were able to do both.” Akita mentions, too, that Bifrost has become useful in handling elements such as feather and garment simulations and foliage instancing for other commercials. He sees it as a tool that will comfortably sit within The Mill’s pipeline. “Bifrost’s close proximity to the animation and lighting disciplines make it an easy pick for certain tasks like mesh deformers or instancing,” notes Akita. “On the FX side, we’ve had success using Bifrost both from inside of Maya, and also running it from inside of Houdini. I think on the Houdini side there’s interest in the Aero solver because it just has a different feel than what some of us are used to getting out of Pyro in Houdini. So you might see Houdini setups with both Bifrost and DOPnet branches, which is fine – choice is good!” “I think ultimately we don’t want the software thing to become a ‘this or that’ proposition, but rather we want the best of all options within easy reach. We’ve also had promising results with Bifrost’s physically based combustion solver – I’m looking forward to exploring that a bit more.” MILK GETS MINUTE DETAILS WITH NEXTURE
Normally, painting detailed CG textures on creatures and characters is an intensive task. One new tool aiming to help implement highly detailed texture maps in a faster time-frame is Cronobo VFX’s Nexture. It incorporates artificial neural networks, combined with a proprietary image synthesis algorithm, to transfer details from a reference pattern bank onto CG texture maps. Milk VFX used Nexture for some of their creatures on the television series Good Omens. “For Good Omens, we created a creature, Satan, which was set to be seen as a wide shot silhouetted against smoke,” explains Milk VFX Head of Modeling Sam Lucas. “However, the storyboard for Satan’s sequence changed quite late in the post phase, with some new shots requiring close-ups on our asset. A few changes were therefore necessary, and adding more details for the close-up turned out to be essential.” Lucas notes this could have been done in a sculpting tool such as ZBrush to paint micro-displacements. But, he says, “the process is long, the model needs to be super subdivided and we couldn’t have had such a detailed skin effect. Nexture was more appropriate for this and we were testing it at this time, so Satan was the perfect candidate!” The process involves painting masks to drive different patterns of skin on the sculpt (i.e., normally there would be different skin pore patterns on the nose versus the cheeks). “Once that’s done,” explains Lucas, “you just have to give the masks, displacement and the reference of the pattern you want for each mask and Nexture will do its magic. All the micro displacement that Nexture
TOP: The Mill took on Bifrost for this Bud Light advertisement. (Image courtesy of The Mill) MIDDLE: The Maya interface showing Bifrost’s nodes setup. (Image courtesy of Autodesk) BOTTOM: Volume effects in Bifrost. (Image courtesy of Autodesk)
generates follows the displacement map we give to the software. That’s why we need the sculpt to be done first. In lookdev they can use the new maps as an additional displacement or a bump.” Milk VFX plans to continue using Nexture for its CG character work, with the studio playing a role in providing feedback for additional features after having crafted visual effects shots with an early version of the tool. “Recently we couldn’t use the UDIM system, which made the process quite painful at times,” admits Lucas, “but this has changed now and the user interface is quite different as well. The rendering time has improved so the developer basically already changed everything that I was hoping for after Good Omens.”
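The mask-driven workflow Lucas describes can be pictured with a much simpler stand-in: blend tileable micro-detail patterns into the base displacement map, weighted by the painted region masks. The NumPy sketch below only illustrates that general idea of mask-weighted detail transfer; Nexture’s actual neural-network synthesis is proprietary and far more sophisticated, and the strength value here is an arbitrary placeholder.

```python
import numpy as np

def add_micro_detail(displacement, masks, patterns, strength=0.05):
    """Layer tileable micro-detail patterns onto a base displacement map,
    weighted by painted region masks (e.g. nose pores vs. cheek pores).

    displacement : (H, W) float array, the sculpted base displacement
    masks        : list of (H, W) float arrays in [0, 1], one per skin region
    patterns     : list of (h, w) float arrays, tileable detail patterns
    """
    h_img, w_img = displacement.shape
    result = displacement.copy()
    for mask, pattern in zip(masks, patterns):
        # Tile the pattern to cover the full map resolution, then crop.
        reps = (h_img // pattern.shape[0] + 1, w_img // pattern.shape[1] + 1)
        tiled = np.tile(pattern, reps)[:h_img, :w_img]
        # Center the pattern around zero so it adds detail, not a global offset.
        tiled = tiled - tiled.mean()
        result += strength * mask * tiled
    return result
```

In lookdev, the resulting map would then be applied as the additional displacement or bump Lucas mentions.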
WETA DIGITAL GOES VOLUMETRIC WITH EDDY
Elements such as CG smoke, fire and gaseous fluids tend to be the domain of 3D simulation tools. But VortechsFX Ltd’s Eddy, a plug-in for Nuke, has changed some VFX studios’ approaches to that kind of work by enabling volumetric simulations to happen at the compositing stage. Eddy uses the GPU to provide fast turnaround times on simulation elements, which stay in the compositing environment. It’s something Weta Digital has made significant use of for films and television shows, including Avengers: Infinity War, Mortal Engines, Avengers: Endgame, Alita: Battle Angel and Season 8 of Game of Thrones. On Mortal Engines, in particular, Weta Digital relied on Eddy to insert smoke plumes for the chimneys of the giant mobile city of London, and to create elements of dust, atmospherics, fire and explosions. Normally, these elements would be generated as simulations by the studio’s FX team and then composited into the shots. With Eddy, simulations can happen directly in Nuke. “In the past, we had done 2D kind of smoke elements in Nuke on ‘cards’ if there wasn’t much parallax,” notes Simone Riginelli, Compositing and Effects Domain Director at Weta Digital. “Meanwhile, volumetrics usually come from the effects and lighting teams. If you have a very complicated camera move, and you want to have all the parallax through the volume, you really need a proper 3D volume representation. That kind of scenario was almost impossible with 2D cards. “But now with Eddy,” continues Riginelli, “we can easily do it, and it works great, and it’s super fast. We got a lot of speed out of being within Nuke and with GPU acceleration. Eddy is also a multi-discipline tool because it’s volume simulation, it’s full, physically correct rendering, and it’s a compositing tool.” Weta Digital still relies on other tools to provide particularly complex FX simulations. Eddy has helped, however, both in look
TOP TO BOTTOM: The final shot of Satan by Milk VFX in Good Omens, which took advantage of micro texturing with Cronobo’s Nexture tool. (Image courtesy of Milk VFX copyright © 2019 Amazon Studios) Masks generated for the different parts of Satan’s face. (Image courtesy of Milk VFX copyright © 2019 Amazon Studios) Displacement map generated for Satan. (Image courtesy of Milk VFX copyright © 2019 Amazon Studios) The texture map showing the resulting Nexture textures. (Image courtesy of Milk VFX copyright © 2019 Amazon Studios)
development and in implementing many hundreds of final shots that require volumetrics. “For example,” says Riginelli, “there’s the Medusa machine in Mortal Engines, and there was a lot of dry ice on the set, and we used Eddy to extend that. I created some tools that were like gizmos, basically a set of nodes that exposed parameters to the compositor. The compositor would just need to place a simulation, let it run, and we got a full volumetric within Nuke without the need to go through the entire effects pipeline. It’s a great time-saver.”
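Riginelli’s gizmo-style setups – small node groups that expose a handful of parameters to the compositor – can be sketched in Nuke Python along the following lines. The internal ‘EddySim’ NoOp is only a placeholder, since Eddy’s actual node classes ship with the commercial plug-in and are not documented in the article; the point of the sketch is the packaging pattern, not Eddy itself.

```python
# Runs in Nuke's Script Editor. 'EddySim' is a placeholder NoOp standing in
# for whatever simulation node a real setup would contain.
import nuke

def build_volume_gizmo():
    grp = nuke.nodes.Group(name='DryIceExtension')
    grp.begin()
    sim = nuke.nodes.NoOp(name='EddySim')
    # User knobs a compositor would want to dial without opening the group.
    sim.addKnob(nuke.Double_Knob('density', 'Smoke Density'))
    sim.addKnob(nuke.Double_Knob('buoyancy', 'Buoyancy'))
    sim['density'].setValue(0.5)
    sim['buoyancy'].setValue(1.0)
    grp.end()
    # Expose the internal knobs on the group itself via link knobs.
    for name, label in (('density', 'Smoke Density'), ('buoyancy', 'Buoyancy')):
        link = nuke.Link_Knob(name + '_link', label)
        link.setLink('EddySim.' + name)
        grp.addKnob(link)
    return grp

gizmo = build_volume_gizmo()
```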
WORKING THE HOUDINI CROWDS
Generating crowds in SideFX Houdini is not necessarily a new ability inside the software, but lately many users have been combining the Engineering Emmy Award-winning Crowds toolset with other Houdini tools to generate increasingly interesting work. Director Saad Moosajee and technical director James Bartolozzi, working with Art Camp, used Houdini Crowds for a Thom Yorke music video that features hordes of silhouetted figures. One of their first challenges was working between Cinema 4D (which was used for look development and animation) and Houdini (where the crowd simulation took place). “Houdini is able to do several optimizations that make viewing and rendering crowds fast and easy,” notes Bartolozzi. “Unfortunately, we had to convert the packed crowd agents to geometry to bake them into Alembic caches.” A shot in the music video with a large number of crowd agents would result in converted geometry of 22 million polygons. “Loading a nearly 20 gigabyte file into Cinema 4D would take several minutes, if it even completed,” notes Bartolozzi. “And this was for a single frame of a five-minute video! Needless to say, something had to be done. I implemented two critical components to reduce the size by over 10 times: a dynamic level-of-detail system for the crowd agent geometry, and camera-based culling. The first involved using a polyreduce to procedurally simplify our crowd agent models. I created new agent layers with these simplified meshes, and then dynamically switched the layers post-simulation, based on the agent’s proximity to the camera. I then implemented both frustum and visibility culling to remove agents invisible to the camera.” The important development in this approach was that it was entirely procedural and allowed for camera animation to be done in Houdini with the full crowd. “Since the level-of-detail geometry downsampling was also procedural, we could dial it in per shot, or per agent asset,” details Bartolozzi. “It also worked with agent layers that included props and could downsample the prop geometry as well. For lighting purposes, we needed to prevent any popping that might result from the agent culling. The frustum pruning includes a padding parameter, and the hidden pruning utilized velocity-based time sampling to determine whether an agent would become visible within a frame tolerance.” In addition to using Houdini Crowds, the final shots in the music video also made use of Houdini Pyro for fire effects, and a unique pipeline that incorporated posting videos to a Slack channel for other team members to review.
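As a rough illustration of the distance-based LOD switching and padded frustum culling Bartolozzi describes, the Python SOP sketch below flags each crowd-agent point with an ‘lod’ level and a ‘cull’ attribute. The camera path, distance thresholds and padding angle are invented for the example rather than taken from the production setup, and a real implementation would also handle the vertical frustum and the velocity-based visibility test he mentions.

```python
# Intended for a Python SOP inside Houdini (the hou module only exists there).
import math
import hou

node = hou.pwd()
geo = node.geometry()
cam = hou.node('/obj/cam1')                      # hypothetical camera path
cam_xform = cam.worldTransformAtTime(hou.time())
to_cam = cam_xform.inverted()
focal = cam.parm('focal').eval()
aperture = cam.parm('aperture').eval()
half_fov = math.atan((aperture * 0.5) / focal)   # horizontal half-angle
padding = math.radians(5.0)                      # keep agents just off-frame alive

lod_attr = geo.addAttrib(hou.attribType.Point, 'lod', 0)
cull_attr = geo.addAttrib(hou.attribType.Point, 'cull', 0)
cam_pos = cam_xform.extractTranslates()

for pt in geo.points():                          # one point per crowd agent
    p_cam = pt.position() * to_cam               # camera space, -Z is forward
    dist = pt.position().distanceTo(cam_pos)
    # Distance-driven LOD: 0 = full res, 1 = reduced, 2 = heavily reduced.
    pt.setAttribValue(lod_attr, 0 if dist < 50.0 else (1 if dist < 200.0 else 2))
    # Padded frustum test in the horizontal plane only, for brevity.
    angle = math.atan2(abs(p_cam[0]), max(-p_cam[2], 1e-6))
    outside = p_cam[2] > 0.0 or angle > half_fov + padding
    pt.setAttribValue(cull_attr, 1 if outside else 0)
```

Downstream, a Delete or Blast SOP keyed to the ‘cull’ attribute and a layer switch keyed to ‘lod’ would complete the kind of setup described.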
TOP TO BOTTOM: A Weta Digital scene from Mortal Engines, in which Eddy was used for volumetric effects. (Image courtesy of Weta Digital copyright © 2018 Universal Pictures and MRC) Eddy, a plug-in to Nuke, made volumetric effects possible at the compositing stage. (Image courtesy of Weta Digital copyright © 2018 Universal Pictures and MRC) A final shot from the Thom Yorke music video that incorporated silhouetted Houdini Crowds. (Image courtesy of James Bartolozzi) Crowds and fire effects in the final video. (Image courtesy of James Bartolozzi)
TV
WORLD-BUILDING AND MONSTER-HUNTING WITH THE WITCHER By IAN FAILES
All images © copyright 2019 Netflix. TOP: The swamp arachnid creature known as the Kikimora, which Geralt battles right at the top of the series. OPPOSITE TOP TO BOTTOM: A clash between soldiers was filmed with minimal actors and horses. The final shot made use of crowd simulation software to fill the frame. CG geometry for Cintra Castle. The final shot of the city under siege.
Already a popular book and game series, The Witcher has now also made the move to a streaming series on Netflix, created by Lauren Schmidt Hissrich. With a story that centers on monster hunter Geralt of Rivia (Henry Cavill) making his way in a fantastical world, the show was always going to involve some level of visual effects work. That effort was overseen by Visual Effects Supervisor Julian Parry and Visual Effects Producer Gavin Round, with a host of visual effects studios delivering shots for the show. These included Cinesite, Framestore, One of Us, Nviz and Platige Image. “When I read the scripts, I realized I was not looking at a standard TV series,” recalls Parry. “I realized it had a whole worldbuilding sub-story. I did some further research and saw that it was actually a really big book series, and it dawned on me that we were going to have to do this justice.” The show was to be filmed in Hungary, the Canary Islands and Poland. From a visual effects point of view, live-action photography was a jumping-off point for environment extensions, battles, creatures, magical effects and a host of weapons, blood and gore. Says Parry, “Cinesite supplied a couple of our creatures and environments. Framestore worked on mainly environments and 2D effects. One of Us had a mixture of 3D creatures and environments. Nviz were supplying predominantly the eye coloration effects. Platige Image, who were closely tied into the show from the game series, also supplied us with 2D and 3D effects.”
CREATURE FEATURES
One of the first major VFX sequences audiences encounter in
The Witcher is a fight between Geralt and a swamp-based creature called the Kikimora. The scene plays out almost like hand-to-hand combat. “Our aquatic arachnid here lived in a bog swamp, so I knew we had to deal with water interaction and other occlusions,” outlines Parry. “We started out with previs to get an idea of what action was required, and from that we worked backwards to work with Production Designer Andrew Laws and Stunt Coordinator Franklin Henson to plan things out. “Alex Pejic, the Visual Effects Supervisor from Cinesite, came out to the shoot,” continues Parry. “We chose to have a couple of prosthetic pieces made. We felt we would benefit from the interaction. The eyelines and scale of the creature were all previs’d so Henry could see what this thing looked like and what space it would take up in the environment. Then it went to Cinesite for post.” “The shots that challenged us the most,” details Pejic, “were mainly to do with the movements of the creature in relationship to Henry. Also, the viscosity of the water, since the movement would drive the water simulations.” Cinesite was also responsible for visual effects related to the Striga, which Geralt battles. Here, both a practical prosthetic suit creation and a completely CG creature were utilized for final scenes. “They shot everything, but there were some safety and physical limitations with what could be done practically,” outlines Pejic. “We ended up with a hybrid version of the creature. Certain shots were kept as is, but we’d sometimes trim down or extend particular parts of the body. “We body tracked every single shot where we had to do those adjustments, and also built a fully 3D creature,” says Pejic. “Then, with a specific script that we created in Nuke, the walk was blended between the plates and CG elements to create this particular look. A number of shots also involved a fully CG creature where we wanted the animation to be a little ‘crazier.’” Cinesite’s creature work continued with the green and gold dragons seen later in the series. Gold, in particular, presented a number of design hurdles for the team, as did the various components making up the dragons, including alligator-like features and wings. “It was a major challenge to make it look gold, but not look ridiculous,” admits Pejic. “But I don’t think it had been done before – I don’t remember having seen a golden dragon in any other work.”
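The per-shot blend between the prosthetic plates and the CG Striga was handled with Cinesite’s own Nuke script, which is not described in detail. Purely as a generic illustration of that kind of hand-off, a Dissolve driven by per-frame keys could be set up along these lines; the file paths and keyframe values below are invented.

```python
# A generic plate/CG blend in Nuke - not Cinesite's actual tool.
import nuke

def build_plate_cg_blend(plate_path, cg_path, blend_keys):
    """blend_keys: {frame: mix} where mix 0.0 = plate, 1.0 = CG creature."""
    plate = nuke.nodes.Read(file=plate_path)
    cg = nuke.nodes.Read(file=cg_path)
    dissolve = nuke.nodes.Dissolve(inputs=[plate, cg])
    which = dissolve['which']
    which.setAnimated()
    for frame, mix in sorted(blend_keys.items()):
        which.setValueAt(mix, frame)
    return dissolve

# e.g. favor the prosthetic plate early in the shot, the CG walk later on.
blend = build_plate_cg_blend('plate.%04d.exr', 'striga_cg.%04d.exr',
                             {1001: 0.0, 1020: 1.0, 1060: 1.0, 1075: 0.3})
```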
BUILDING THE MAGIC
The Witcher covers vast tracts of land. Some locations were built practically, or took advantage of existing locations. For instance, the exterior of Cintra Castle in the show included a fort near the Slovakian border in Komárom, which was then extended with visual effects. “Cintra was one of the main builds we did,” attests Framestore Visual Effects Producer Christopher Gray. “There was a lot of research that went into medieval castles. We looked at 11th century Paris and the logic of the way a city and castle needed to feel together.” “Our original inspiration was Carcassonne in France, which is obviously a real place,” adds Framestore Visual Effects Supervisor
Pedro Sabrosa. “Our version just became a bigger version of that city, a more exaggerated version treading the line between it being fantasy and being real.” Establishers were part of this environment work, as were moments when the city is under siege. Fire and smoke simulations were required here. Many of the final shots during the siege ended up being split across multiple vendors. Among the many other environments Framestore delivered for the show was the sorceresses’ academy at Aretuza on a rocky outcrop situated on Thanedd Island in Temeria. Framestore took a live-action location and added brutalist-like CG buildings to the shots. “This was a tricky one because it was always envisaged as two structures that would sit on an island with water in the middle,” details Sabrosa. “It was purely by chance that they found this location in the Canary Islands when they were filming a whole other sequence.” Aretuza is, of course, the location for a bunch of magical effects. One of those is the portal created by the characters Yennefer and Istredd. This was produced by Platige. “It was one of the first magical effects that we started developing very early in the production,” notes Platige Visual Effects Producer Krzysztof Krok. “It was evolving and needed some adjustments when we got plates from set. We understood then how flexible the FX setups as well as our thinking about magic had to be.” Just as practical locations often informed the final environments, practical photography was also a major part of shooting the various battle sequences. Where vast numbers of soldiers could not be filmed, these shots were sometimes supplemented with digital crowds, including work by Cinesite. “On this show we decided to introduce a new crowd system, which was Atom Crowd from Toolchefs,” states Pejic. “We had a lot of groundwork to do first, although we had experimented with this a little in the past. It worked out really well for us, making sure we could accommodate all kinds of creatures, integrate with Houdini and other tools, for driving different secondary dynamics, hair simulations and so forth.” Platige was also a contributor to environments, including the Shan Keyan Tree in the enchanted forest. “We had to show it in shots during the day and in the character Ciri’s vision as well,” says Platige Visual Effects Supervisor Mateusz Tokarz. “It has very vibrant color correction in trailers and fans loved it. On the other hand, in some shots from this sequence we’ve replaced around 75% of the screen with CG renders, and it was demanding to match to the very stylized look that was used there.”
Rearing a Roach Hound
During the series, the sorceress Yennefer at one point attempts to evade a Mage and his Roach Hound. One of Us tackled the Roach Hound as a CG creation. “I remember asking at the time, what is the Roach Hound?” recalls The Witcher Visual Effects Supervisor Julian Parry. “Is it a hound that looks like a moth-bitten threadbare mangy dog, or was it literally a roach and a hound? The answer that came back was, ‘No, it’s a Roach Hound.’ We sent over some notes to One of Us and very simply said to the team, ‘We need a Roach Hound, something that looks like a cockroach but moves like a hound.’ “We did a bit of tinkering with the design,” continues Parry. “We embellished it with some markings and gave it a backstory with regards to the way that the Mages took these creatures and then turned them into these little Frankenstein’d things. Some of their limbs have been replaced with blades and some of their shells had been replaced with armor. So there’s a bit of a hybrid thing going on there with the poor souls.” One of Us also worked on the portals that Yennefer and the Mage make their way through during that sequence. “We wanted these portals to look very different from other shows,” notes Parry. “They are formed from the elements that surround their environments. There’s a portal made up of water from the pool, then sand when they’re in the desert, and so on with snow and dust and dirt.”
MANAGING A TV VFX SHOW
The VFX shot count on Season 1 of The Witcher approached 2,000 over the seven hours of content. This was post-produced in a six-month period, a relatively fast turnaround. “And the expectation isn’t any lower because it’s television,” notes Parry. Most of the vendors were based in London; Platige was the only remote vendor. “But in this day and age,” says Parry, “sharing the work like that and having different sorts of review tools available such as cineSync and Skype, that’s all pretty straightforward now. By dividing the work up, you get the opportunity to cherry-pick where work can go, you don’t over-stress one particular company, and you just manage a nice workflow with the multiple vendors.”
TOP TO BOTTOM: A plate shot in the Canary Islands for the sorceresses’ academy at Aretuza.
TOP TO BOTTOM: Actress Anya Chalotra as Yennefer mimes leaping through a portal.
The composited shot with the added structures.
The final shot where the portal appears as if made up of sand.
A plate showing the result of an attack from the Roach Hound. The rendered Roach Hound, realized as a Frankenstein’d creature with extra armor and appendages.
CG golden dragons featured in Episode 6 of the series. The dragon completed with fire effects simulation.
TECH & TOOLS
ILLUMINATING THE PATH AHEAD FOR REAL-TIME RAY TRACING By TREVOR HOGG
TOP: The Speed of Light is a real-time cinematic experience that makes use of NVIDIA Turing architecture, RTX technology, and the rendering advancements of Unreal Engine. (Image courtesy of Epic Games)
Ray tracing is critical to simulating the realistic physical properties of light as it moves through and interacts with virtual environments and objects, but it can become cumbersome at render time, especially with complex imagery. A huge technical and creative effort has been made by the likes of NVIDIA, Unity Technologies, Chaos Group, Epic Games and The Future Group to be able to trace those paths of light in real-time. Natalya Tatarchuk, Vice President of Global Graphics at Unity Technologies, has a Master’s in Computer Science from Harvard University, with a focus in Computer Graphics. “Having a graphics background is key to being able to understand the technology needs for Unity graphics, in order to deeply engage with our customers to understand how they want to create their entertainment, and what features and functionality they will need,” states Tatarchuk. Real-time ray tracing is not confined to video games, she adds. “Real-time ray tracing will benefit any creative vision where maximal quality of surfaces or lighting is needed. You certainly can improve the look of video games with the hybrid ray-tracing pipeline we offer in High Definition Render Pipeline (HDRP), which selects the highest-quality effect with the highest performance as needed; for example, rendering high-fidelity area lights and shadows with ray tracing, but using rasterization for regular opaque material passes. Incredible fidelity lighting and photorealistic surface response can be created in architectural or industrial scenarios with our High Definition Render Pipeline Real-Time Ray Tracing (HDRP RTRT). You can also render Pixar-quality stylized
movies, or photorealistic cinema, or broadcast TV with beautiful visuals, and the most accurate medical visualizations with real-time ray tracing.” Chaos Group colleagues Lon Grohs, Global Head of Creative, and Phillip Miller, Vice President of Project Management, not only both studied architecture, but also share the same alma mater, the University of Illinois at Urbana-Champaign. The duo is excited about the prospects for Project Lavina, which is a software program that allows for V-Ray scenes to be explored and altered within a fully ray-traced and real-time environment. “It’s definitely a tougher path to go down for sure and one filled with landmines, like the uncanny valley,” admits Grohs. “There’s a built-up demand for realism because we’ve become a more astute audience.” Another element needs to be taken into consideration, notes Miller. “There is also the issue during real-time that it is baked-in so much that things are inaccurate. That bothers people as well.” A streamlined workflow is important. “Two big things are at work here,” Miller adds. “There’s the resulting imagery and the path to get to it. Especially with designers, we’re finding it so much easier for people to stay in pure ray tracing because they can take their original data or design and move it over easily. If you’re working physically based, which most designers are, all you have to do is put lights where they’re supposed to be, use the correct materials, and you’ll get the real result. That’s much easier for people to relate to than think about, ‘What computer graphics do I need to get a certain look or performance?’” Juan Cañada, Ray Tracing Lead Engineer at Epic Games, studied Mechanical Engineering at Universidad Carlos III de Madrid. “My studies gave me a background in math and simulation of physical phenomena that has proven very valuable,” he says. “The real-time quest for ray tracing is practical even with constantly growing creative demands for imagery. The question is whether traditional digital content creation pipelines based on offline techniques are still a practical quest, considering the cost, iteration times and lack of flexibility. Real-time techniques are not the future, but the present.” Unreal Engine drives the real-time technology for Epic Games. “Unreal Engine’s ray-tracing technology has been built in a non-disruptive way, so users can quickly go from raster to ray tracing, and vice versa, without being forced to do things such as changing shader models, or spend time on project settings to make things look right,” Miller explains. “From the general point of view, the workflow would still be the same. However, ray tracing allows creators to light scenes intuitively and closer to how they would light things in the real world. Besides, ray tracing systems are typically easier to use. Ray tracing makes it possible to focus more on the artistic side of things and less on learning technical aspects of algorithms that are difficult to control.” Unity allows creators to set up custom ray-tracing LOD for materials so that the most effective computation can happen. “We simply use their regular materials when real-time reflections are enabled with real-time ray tracing, and fall back to screen-space reflections when it’s not available,” remarks Tatarchuk. “In Unity, real-time ray tracing is part of the HDRP, and we offer the same
TOP TO BOTTOM: By binning together directions in the same spatial location for the rays that are shot, Unity can optimize performance for the ray computations on GPU. (Image courtesy of Unity Technologies) Filmmaker Kevin Margo collaborated with 75 artists and performers to produce 210 fully CG shots for the 12-minute teaser Construct. (Image courtesy of Chaos Group) NVIDIA RTX-powered ray tracing combined with real-time facial animation enabled the movements and reactions of the AR character to match the presenters and dances onstage. (Image courtesy of NVIDIA)
TOP: Unity allows creators to set up custom ray-tracing LOD for materials so that the most effective computation can happen, such as for reflection effects. (Image courtesy of Unity Technologies) BOTTOM LEFT: Natalya Tatarchuk, Vice President of Global Graphics, Unity Technologies. (Image courtesy of Unity Technologies) BOTTOM RIGHT: Juan Cañada, Ray Tracing Lead Engineer, Epic Games
unified workflows as we do for everything in that pipeline.” A key component is the use of ray binning technology. “By binning together directions in the same spatial location for the rays that we shoot, we can optimize performance for the ray computations on GPU,” Tatarchuk says. “We do this by binning by direction of rays in octahedral directional space. This may be commonly found in offline renderers, but to adapt this method to the GPU architecture was a great achievement for optimization in real-time, where we took advantage of the modern GPU compute pipeline with atomics execution for fast performance.” A focus was placed on shooting only the rays that were needed. “Specifically, we employed variance reduction techniques in order to evaluate the lighting for ray directions more effectively. We reduced variance with analytic approximation and precomputation passes to improve performance. It is important to try to reduce the variance of whatever you are integrating during the ray-tracing computations. You can use importance sampling, consistent estimators or even biased estimators if you make sure to compare your result to a reference.”
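The ray binning Tatarchuk describes can be illustrated on the CPU with the standard octahedral mapping: fold unit directions onto a 2D square, then group rays that land in the same cell. The NumPy sketch below shows only that concept; the bin resolution is arbitrary, and Unity’s GPU implementation with atomics is of course far more involved.

```python
import numpy as np

def octahedral_encode(dirs):
    """Map unit direction vectors (N, 3) to 2D octahedral coordinates in [0, 1]^2."""
    d = dirs / np.abs(dirs).sum(axis=1, keepdims=True)   # project onto the octahedron
    x, y, z = d[:, 0], d[:, 1], d[:, 2]
    # Fold the lower hemisphere (z < 0) over onto the upper one.
    lower = z < 0.0
    x_f = np.where(lower, (1.0 - np.abs(y)) * np.sign(x), x)
    y_f = np.where(lower, (1.0 - np.abs(x)) * np.sign(y), y)
    return np.stack([x_f, y_f], axis=1) * 0.5 + 0.5

def bin_rays_by_direction(dirs, bins_per_axis=8):
    """Return a bin index per ray so rays with similar directions are grouped."""
    uv = octahedral_encode(dirs)
    cells = np.minimum((uv * bins_per_axis).astype(int), bins_per_axis - 1)
    return cells[:, 1] * bins_per_axis + cells[:, 0]

# Example: bin 10,000 random ray directions into an 8x8 octahedral grid.
rng = np.random.default_rng(0)
v = rng.normal(size=(10000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
bin_ids = bin_rays_by_direction(v)
```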
Project Lavina breaks away from the tradition of raster graphics. “That’s a different thing for people to wrap their heads around,” states Grohs, “because we’re so used to having these raster tradeoffs where you have certain geometry or shader limitations.” Project Lavina is designed to take advantage of new ray-tracing hardware. “The hardest thing in the world is to make something easy,” observes Miller. “Project Lavina is a central gathering spot for people to share their projects regardless of the tool that it was made in. Our goal is, if you can play a 3D game you should be able to use Project Lavina. We’re adding functionality with the measure being, ‘How easy can we make it?’ That’s the core tenet behind real-time.” The denoiser runs in DirectX. “The denoising is a love-hate relationship,” observes Miller. “It’s silky smooth when you’re moving, but as soon as you stop, then you want to see the full detail. One thing that we wanted to make sure was in there before we went to general beta was crossfading from denoising to the actual finished result. That’s in there now. People will be able to enjoy the best of both worlds without having to choose. It’s very clean.” “The first phase in our real-time ray-tracing implementation was focused on basic architecture and required functionality,” remarks Cañada. “This technology was released almost a year ago in beta in Unreal Engine 4.22. We have since shifted the focus to put more effort into both stability and performance. We just released UE 4.24, where many ray-tracing effects are two to three times faster than in previous versions. Of course, we are not done. The know-how among Epic’s engineers is staggering, and we have many ideas to try that surely will keep striving towards increased
TOP: The virtual production software program Pixotope, developed by The Future Group, makes use of a single-pass render pipeline, which enables ray-tracing processing while maintaining video playback rates. (Image courtesy of NVIDIA) BOTTOM LEFT: Lon Grohs, Global Head of Creative, Chaos Group BOTTOM RIGHT: Phillip Miller, Vice President of Project Management, Chaos Group
“Ray tracing allows creators to light scenes intuitively and closer to how they would light things in the real world. Besides, ray tracing systems are typically easier to use. Ray tracing makes it possible to focus more on the artistic side of things and less on learning technical aspects of algorithms that are difficult to control.” —Phillip Miller, Vice President of Project Management, Chaos Group
TOP: Construct was seen as the means to test the real-time ray-tracing and virtual production capabilities of Project Lavina. (Image courtesy of Chaos Group) OPPOSITE TOP: Goodbye Kansas used no custom plug-ins when putting together the short Troll, which demonstrates the cinematic lighting capabilities of the real-time ray-tracing features of Unreal Engine 4.22. (Image courtesy of Epic Games) OPPOSITE BOTTOM: Troll follows a princess on a journey through a misty forest full of fairies. (Image courtesy of Epic Games)
performance with each release.” Real-time ray tracing in Unreal Engine requires an NVIDIA graphics card compatible with DXR. “When other hardware systems implement further support for real-time ray tracing, we will work hard to make them compatible with Unreal Engine,” states Cañada. Compatibility with other software programs and plug-ins is part of the workflow. “Unreal Engine provides a plug-in framework, so this is already possible. Many of the latest internal developments done at Epic have been using this system. On top of that, the whole Unreal Engine code is available on GitHub, so anyone can modify it to make it compatible with third-party applications.” Case studies are an integral part of the research and development. “We were wondering how fast an artist could put together sets from their KitBash3D models and create extremely complex images, almost two billion polygons, without sparing or optimizing any details, and could we do it?” explains Grohs. “Evan Butler, an artist at Blizzard Entertainment, put together a 3ds Max scene. There are scenes that take place in a futuristic utopia, Neo-Tokyo, and a dock. They were KitBash’d together and exported out as a V-Ray scene file. We were in Lavina moving around in real-time just by drag-and-drop, which was awesome.” Miller adds, “The important thing here is that you cannot handle that level of
Another experiment was called Construct. “Kevin Margo, who at the time was with Blur Studios and is now at NVIDIA, always had this idea to take virtual production to the next stage,” states Grohs. “We used a whole cluster of GPU servers and V-Ray GPU and tried to get real-time motion capture. We did that and were able to put it inside MotionBuilder and came close to a 24 fps ray-traced view. It didn’t have the denoising and the speed that we have now.” Real-time ray tracing is a fundamental shift in computer graphics. “To get things into a game engine right now, there are quite a few steps and procedures that people have to use,” notes Grohs. “One is applying things like UV unwrapping and UV maps across all of the channels. Another is optimizing geometry and decimating it so that it fits within polygon count budgets. People can spend days if not weeks bringing things into whatever their favorite game engine is. Whereas a real-time ray tracer like Lavina can chew through huge scenes and you don’t need UV unwrapping or any of those other in-between steps. It’s literally a drag-and-drop process.” Tatarchuk sees game engines rising to the technological challenge.
During the world’s first real-time ray-traced live broadcast, an AR gaming character answered questions and danced in real-time. In order to achieve this, Riot Games utilized The Future Group’s mixed-reality virtual production software Pixotope, which makes use of NVIDIA RTX GPUs. “The raw character assets were provided by Riot Games,” states Marcus Brodersen, CTO at The Future Group. “A large aspect of our creative challenge is simply ensuring that the end result looks convincing, and seamlessly blending the real with the virtual. This includes enforcing correct use of PBR, final lookdev passes, as well as onsite HDRI captures and final lighting to ensure the virtual talent is rendered in a way consistent with the physical environment we put her in.

“For the League of Legends Pro League regional finals [in Shanghai in September 2019], the heavy use of fog machines made compositing in Akali far more difficult, as we needed to occlude her with matching virtual fog in order to blend her in seamlessly,” explains Brodersen. “Ray tracing, powered by the newest series of RTX cards from NVIDIA, allows the use of traditional rendering methods in real-time environments. However, emulating millions of photons of light is still a taxing endeavor and has severe performance implications. We had some difficult technical requirements we had to meet. We were rendering and compositing 60 full HD fps and could not miss a single frame as this would be visible as a significant pop in the broadcast.” A significant amount of work went into optimizing every aspect of the performance, from dialing ray lengths to combining as many light sources as possible, selectively trimming shadow attributes for each light, and fine-tuning post-processing effects. “One of the main reasons we are even able to do all of this is because Pixotope is a single-pass rendering solution, giving us the maximum performance of the Unreal Engine,” remarks Brodersen. “With little added overhead in ingesting and compositing our virtual graphics with the camera feed in a single renderer, we are able to push the boundary of what is possible in real-time VFX, resulting in this world-first event of its kind.” Synchronization and latency are technical challenges. “When we are integrating with time-based elements like singing and dancing, we have to make sure that we are in perfect sync with the rest of the production,” notes Brodersen. “This is achieved using a shared timecode. We also have to make sure that the delay between an action on stage to a finished rendered image is low enough that people can actively interact. This was especially important for the live interview segment where the virtual character was talking to an interviewer and the audience. Most importantly, we had to make it look like the character was actually in the venue. This is done by replicating the physical lighting that’s on the stage in the virtual scene, including using the onstage video feeds to light our virtual character. Now being able to use ray tracing as one of the tools makes that job both easier, faster and with a better result.”
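Brodersen’s constraints (60 frames per second with no drops, locked to the venue’s shared timecode) come down to a hard budget of roughly 16.7 milliseconds per frame. The sketch below shows that arithmetic in generic form; the structure, field names and numbers are assumptions for illustration, not Pixotope internals.

```cpp
#include <cstdio>

// A 60 fps broadcast leaves 1000 ms / 60 (about 16.67 ms) to render, composite
// and output each frame; anything over budget appears as a visible pop on air.
constexpr double kBroadcastFps  = 60.0;
constexpr double kFrameBudgetMs = 1000.0 / kBroadcastFps;

struct FrameTimings {
    double rayTracedRenderMs;  // virtual character, shadows, reflections
    double compositeMs;        // blending with the live camera feed
    double videoIoMs;          // ingest and output overhead
};

bool FitsBudget(const FrameTimings& t)
{
    return (t.rayTracedRenderMs + t.compositeMs + t.videoIoMs) <= kFrameBudgetMs;
}

// Shared timecode keeps the rendered character in step with music and dancers:
// if the render pipeline is N frames behind, delay the camera feed by N frames
// so the composite lines up frame-accurately.
long long CameraDelayFrames(double endToEndLatencyMs)
{
    return static_cast<long long>(endToEndLatencyMs / kFrameBudgetMs + 0.5);
}

int main()
{
    FrameTimings t{12.5, 2.0, 1.5};
    std::printf("budget %.2f ms, fits: %s, delay for 50 ms latency: %lld frames\n",
                kFrameBudgetMs, FitsBudget(t) ? "yes" : "no", CameraDelayFrames(50.0));
}
```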
TOP: Unity Technologies’ Natalya Tatarchuk believes that real-time ray tracing will benefit any creative vision where maximal quality of surfaces or lighting is needed. (Image courtesy of Unity Technologies)
“With the requirements of orchestration of dynamically load-balanced multi-threaded execution in real-time, taking advantage of all available hardware to nanoseconds, modern game engines perform a ridiculous amount of complex calculations. I believe game engines are the right vehicle for all ambitions in the real-time experiences and will continue to be, as they are innovating along the right axis for that domain.” “Our goal was to avoid forcing the user to follow different workflows for raster and ray tracing,” states Cañada. “Users should be able to adopt ray tracing quickly, without investing significant amounts of time learning new methodologies. We are proud of our accomplishments so far, but of course it does not mean we are done here. Maintaining usability while adding new features is a key goal for everyone involved. It is important that the system can accommodate rendering, denoising and ray tracing without crashing. Having great engineering, production and QA teams are the reasons all these components work seamlessly. The support from NVIDIA has also been amazing, with incredible graphics hardware, as well as solid DXR specs and drivers for emerging tech.” Machine learning is integral to the future of real-time ray tracing. “In a way,” Cañada says, “techniques that use reinforcement learning have been popular in graphics for a long time. Some of these techniques, such as DLSS, are already suitable for real-time graphics and others will be soon. Machine learning will become an important component of the real-time rendering engineer’s toolset.” The technology continues to evolve.
“Rendering a forest with hundreds of thousands of trees, with millions of leaves moved by the wind, is a major challenge that will require more research on the creation and refit of acceleration structures. Additionally, solving complex light transport phenomena, such as caustics or glossy inter-reflections, will require new and inventive techniques.” In visual effects, real-time ray tracing can be found in previs and shot blocking. “The biggest area in visual effects is virtual production where people want to converge the offline and real-time render pipelines from as early on as possible,” notes Grohs. “One of the benefits with Lavina is that you’d be able to get a real-time ray-tracer view of your production asset for the most part. The challenge there being that there are tools from a visual effects standpoint that they’re looking for in terms of live motion capture and some other things that we’ve got to do more homework on to get it there.” There is no limit to how the technology can be applied, according to the experts. “It’s fast enough to be used for the quickest iteration and previs, as an exploration tool, or any part of the virtual cinematography pipeline,” remarks Tatarchuk. “It is of high enough quality that any discerning final frames will be well received if rendered with it. Our job is to deliver the best technology, and then it’s up to the creator of the frame to drive the vision of how to apply it. It’s really been an inspiring journey to see what people all over the world have been able to accomplish with this technology already, and it’s just at the beginning of what we can do!”
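The wind-blown forest Cañada describes is the classic stress case for those acceleration structures: the geometry deforms every frame, so the renderer must choose between cheaply refitting the existing structure and rebuilding it from scratch (DXR exposes this choice through its ALLOW_UPDATE/PERFORM_UPDATE build flags). The heuristic below is only an illustrative sketch of that trade-off; the threshold and names are assumptions, not any particular engine’s logic.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static float Distance(const Vec3& a, const Vec3& b)
{
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

enum class UpdateMode { Refit, Rebuild };

// Refitting is far cheaper than rebuilding, but bounding-volume quality decays
// as vertices drift from the positions the structure was built around. A common
// heuristic: refit for small, wind-like deformation, and schedule a rebuild once
// the largest displacement exceeds a fraction of the object's bounding radius.
UpdateMode ChooseUpdateMode(const std::vector<Vec3>& builtPositions,
                            const std::vector<Vec3>& currentPositions,
                            float boundingRadius,
                            float rebuildFraction = 0.1f)
{
    float maxDisplacement = 0.0f;
    for (std::size_t i = 0; i < builtPositions.size(); ++i)
        maxDisplacement = std::fmax(maxDisplacement,
                                    Distance(builtPositions[i], currentPositions[i]));

    return (maxDisplacement > rebuildFraction * boundingRadius) ? UpdateMode::Rebuild
                                                                : UpdateMode::Refit;
}
```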
FILM
BALANCING PHOTOREALISM AND ANIMAL PERSONALITY IN DOLITTLE By CHRIS MCGOWAN
All images copyright © 2020 Universal Studios and Perfect Universe Investment Inc. TOP: Gorilla Chee-Chee (voiced by Rami Malek) and Dr. John Dolittle (Robert Downey Jr.). Director Stephen Gaghan wanted to make Dolittle accessible for modern audiences using advanced VFX and SFX.
Dolittle, the latest cinematic adaptation of Hugh Lofting’s children’s books about an English doctor and veterinarian who can talk to animals, launches at an auspicious time – when visual effects have reached new heights in creating photorealistic CG fauna. “Technology-wise we’re obviously more advanced. In the last three or four years, there’s been a jump in technology that we use to create animals, especially the fur,” says Nicolas Aithadi, MPC’s Visual Effects Supervisor on Dolittle. Once an impressive photoreal look is achieved, he explains, “That’s when you start thinking of other things, not just making an animal – it’s about its character.” All the many creatures in Dolittle are CG, with the exception of some horses, according to Aithadi. By contrast, the original Doctor Dolittle (1967) featured hundreds of live animals (around 1,200), including giraffes, and practical effects like a giant snail. Directed by Richard Fleischer, the movie was produced in 70mm Todd-AO with a lavish budget for the time of $17 million. There is also a history of high achievement attached to the franchise. L.B. Abbott, a multiple Academy Award winner, won an Oscar for Special Visual Effects on Doctor Dolittle. Art Cruickshank (Tron, Planet of the Apes), who had won an Oscar for Special Visual Effects for Fantastic Voyage a year earlier, was credited on the film with Special Photographic Effects.
Subsequently, taking a different tack, a 1998 remake of Doctor Dolittle starred Eddie Murphy, and another Dolittle film with Murphy followed, along with three more sequels without him. The 2020 Dolittle is directed by Stephen Gaghan and stars Robert Downey Jr. as the doctor, and Antonio Banderas and Michael Sheen in live-action roles. The voice cast includes Emma Thompson, Rami Malek, John Cena, Kumail Nanjiani, Tom Holland, Selena Gomez, Ralph Fiennes and Octavia Spencer. “We had something like 20 speaking characters,” says Aithadi. MPC led the visual effects charge and brought its considerable expertise with CG animals in The Jungle Book and The Lion King to bear on Dolittle. MPC started its CG animal work by visiting Amazing Animals, a company in Oxfordshire, England, that supplies wild animals for films, TV and other media. 4DMax handled the capture. “We spent a couple of days there,” says Aithadi. A “whole bunch of animals” were present, including an ostrich, a squirrel, a lioness, a lion and a tiger. “We had access to the animals, and we scanned, photographed and filmed them to use as reference. They were our best references.” Two trainers were inside a cage, and photogrammetry gear was ready to handle the scans. “It was scary to watch when we got the lioness, who wasn’t the most peaceful one. It was really scary to see two guys in there with her.” The animals set the pace for the scans. “It was the animal’s time. We had to wait for them to take the scan. You can’t rush a lion.”
TOP: There is rarely a scene in Dolittle in which CG animals and live humans aren’t interacting. Clockwise from bottom left: Ostrich Plimpton (Kumail Nanjiani), monkeys Elliot and Elsie, parrot Polynesia (Emma Thompson), polar bear Yoshi (John Cena), Tommy Stubbins (Harry Collett) and sugar glider Mini (Nick A. Fisher). BOTTOM: From left: Ostrich Plimpton, duck Dab-Dab (Octavia Spencer), Dr. Dolittle, Tommy Stubbins, polar bear Yoshi, gorilla Chee-Chee and parrot Polynesia.
“We spent a couple of days [at Amazing Animals in England with a] whole bunch of animals. We had access to them and they were our best references. It was scary to watch, especially when we got the lioness, who wasn’t the most peaceful one. It was really scary to see two guys in there with her. It was the animal’s time. We had to wait for them to take the scan. You can’t rush a lion.” —Nicolas Aithadi, Visual Effects Supervisor, MPC

TOP LEFT: It was crucial for VFX to ensure that the CG creatures were photoreal and felt like actual animals. From left: Ostrich Plimpton, parrot Polynesia, gorilla Chee-Chee, Tommy Stubbins, squirrel Kevin (Craig Robinson), polar bear Yoshi and duck Dab-Dab. TOP RIGHT: Dr. Dolittle and parrot Polynesia. A stuffy was staged and choreographed by the director and a puppeteer to move into the proper positions for the dialogue interactions between the characters. The stuffy was replaced with the CG parrot when the scene was edited. BOTTOM: VFX worked with stuffed versions of the animal characters and were responsible for all live puppeteering. From left: Duck Dab-Dab, polar bear Yoshi, parrot Polynesia, Dr. Dolittle, ostrich Plimpton, Tommy Stubbins and gorilla Chee-Chee.
For animals that couldn’t be scanned, Aithadi notes, “We used a lot of online references, National Geographic, everything we could find. We tried several models [asking ourselves], ‘how do they look, how do they render? How do they move? How do they look with wet feathers? In the water? Tired?’ Everything was based on a real-life reference that we captured ourselves or got from reference.” Yoshi (voiced by John Cena), an upbeat polar bear who wasn’t scanned, proved to be especially challenging and rewarding to create. “The technical side of doing Yoshi was one thing. We spent months [on him], building a perfectly realistic polar bear, matching all the references we could find. It’s very involved in terms of modeling the asset, texturing and all that,” Aithadi recalls. Beyond that, “the character Yoshi is really interesting. We had these discussions at the beginning of the project of how to make Yoshi not just a bear, but a bear you remember. We attached stories to all the animals – the stories of why they came to meet with Dolittle and how they end up living with him. All these animals have these pasts.” Dab-Dab, an enthusiastic duck (voiced by Octavia Spencer), was another favorite character for Aithadi. “The duck is kind of funny. She’s cute and has some sort of fatness to her. I always found myself having a soft spot for her. If I had to choose a [favorite] character, I’d go for the duck. “We always start with the real, and we go to the filmmakers and the real becomes art-directed to fit the theme, the environment and a world. That’s the beauty of the work. It’s interesting – you do animals in 10 different movies and you get 10 different types of animals. That’s the fun. “On Dolittle,” Aithadi observes, “although we were trying to get as real as possible, we didn’t want to go too real. [If the effects are too real] you get into this uncanny valley kind of thing. In the Dolittle world, animals have quite a bit of personality,” he explains. “We didn’t want an ostrich to just be an ostrich. They are all specific characters with personalities. We wanted them to display those character traits and for that you had to anthropomorphize a little bit. So you step away from that 100% photorealistic and get into that weird and interesting world between the animals and the humans. It is a balance between photorealism and anthropomorphizing. That’s where the focus was put.
In production and post-production, a big theme was getting you interested in watching that animal, its characteristics and interesting features.” The bulk of the CG animal work was done at MPC, which has proprietary software called Furtility. Aithadi notes that fur is something MPC is “kind of good at,” having used Furtility on The Jungle Book and The Lion King. He adds that they used it “to do fur and at the same time to create feathers in Dolittle. The feathers are made by applying fur on quills. It’s a very involved process.” Polly the macaw, Plimpton the cynical ostrich and Dab-Dab the duck were extremely complicated in terms of feathers. Aithadi feels that MPC had reached a level where fur was not that complicated anymore, and feathers had become the real challenge. Meanwhile, according to Special Effects Supervisor Dominic Tuohy, the SFX department was busy handling “any interaction between actors and CGI animals, motion bases for the ship Curlew, the deck and interior cabin, motion bases for Betsy the Giraffe, Plimpton the Ostrich rigs – a riding rig on location to carry both actors and an interior rig to lift and move actors in a small space – and flame effects from the dragon, a bespoke tree rig with a net gag for [the character] Stubbins to be lifted up in the rain, and a Dolittle feeder machine inside his house.” Tuohy comments that the toughest challenge was “making the interaction look non-mechanical and fluid.” According to Aithadi, War for the Planet of the Apes and The Lion King established “kind of a benchmark in terms of animal creation.” He observes that the CG animal approach of those two movies differs from that of Dolittle. “The worlds in which the animals evolved is a very different one and is the big difference between the movies. War for the Planet of the Apes is very gritty and realistic while The Lion King is very beautiful, with a National Geographic feel to it,” he says. “We are more in a fantasy world and the characters have to match the world they live in. The animals in each movie would not work in the other movies.” Aithadi is proud of the CG animals in Dolittle. “It was a hard job. There was so much animation and so many complex effects we had to deal with. So, at the end of the day, we were happy and proud of what we’d achieved. It wasn’t an easy task. I feel like they are interesting characters that you want to watch. If there’s a kid not enjoying it, I’d be worried for him,” he jokes. “It’s a fun adventure and the animals are very enticing. They’re very cool characters.”
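As a toy illustration of the “fur on quills” idea Aithadi describes (and nothing like Furtility’s actual groom pipeline), one could scatter short, hair-like barbs along a quill curve and taper their length toward the feather tip:

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// One barb, stored as a straight root-to-tip segment for brevity.
struct Strand { Vec3 root; Vec3 tip; };

// Walk along a quill's centerline and emit one barb per side at each step,
// shrinking toward the tip. A production groom would add curvature, clumping,
// noise and per-feather variation on top of something like this.
std::vector<Strand> GrowBarbs(const std::vector<Vec3>& quillPoints,
                              const Vec3& sideDirection,
                              float maxBarbLength)
{
    std::vector<Strand> barbs;
    const std::size_t n = quillPoints.size();
    const float sides[2] = { 1.0f, -1.0f };

    for (std::size_t i = 0; i < n; ++i) {
        const float taper = 1.0f - static_cast<float>(i) / static_cast<float>(n);
        const float len = maxBarbLength * taper;
        for (float side : sides) {
            const Vec3 root = quillPoints[i];
            const Vec3 tip = { root.x + side * sideDirection.x * len,
                               root.y + side * sideDirection.y * len,
                               root.z + side * sideDirection.z * len };
            barbs.push_back({ root, tip });
        }
    }
    return barbs;
}
```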
TOP LEFT: It proved a challenge to make the CG animals interact seamlessly with sets and human actors. From left: Polar bear Yoshi, Tommy Stubbins and gorilla Chee-Chee. TOP RIGHT: Puppeteers served as green-suited human surrogates for Dolittle’s virtual characters. MIDDLE: The film’s CG animals had to interact with appropriate eyelines and real-world physics. From left: Lord Thomas Badgley (Jim Broadbent), Dr. Blair Mudfly (Michael Sheen), Tommy Stubbins, Dr. Dolittle, dog Jip (back to camera, Tom Holland) and Lady Rose (Carmel Laniado). BOTTOM: Dr. Dolittle, duck Dab-Dab and Tommy Stubbins.
VR/AR/MR
IMMERSION IN THE VR WORLD OF MYTH: A FROZEN TALE By CHRIS MCGOWAN
All images copyright © 2019 Disney. TOP: Visual development by Brittney Lee. Myth: A Frozen Tale focuses on the duality and balance of the four elemental spirits of Frozen 2: earth, fire, water and air. OPPOSITE TOP TO BOTTOM: The environments team, led by Michael Anderson, developed simple textures to enliven the vegetation while reducing visual noise, allowing for characters such as the Water Nøkk to stand out. Visual development by Brittney Lee – the monoliths of Enchanted Forest at night. The monoliths of Enchanted Forest in the daylight.
Walt Disney Animation Studios’ virtual reality short Myth: A Frozen Tale explores a world inspired by the feature film Frozen 2 and may point the way to an intriguing new art form: VR experiences that immerse viewers in settings and stories associated with their favorite films. The approximately eight-minute-long Myth is the second VR short directed by Jeff Gipson, who won critical praise and the 2019 Lumiere Award from the Advanced Imaging Society for directing Cycles, the first VR film from Disney. While Cycles was a poignant look at a couple’s life lived in a beloved house, Myth immerses the viewer in the myths of Frozen 2. In Arendelle [the kingdom in Frozen], a mother reads a bedtime story to her children and the audience is transported to an enchanted forest where the elemental spirits come to life and the myth of their past and future is revealed. “They’re very different films,” Gipson comments. “Myth is more complex with many more effects than Cycles. We really leaned into having the music and the animation tie together, almost like in Fantasia.” Nicholas Russell is the producer, Jose Luis Gomez Diaz (Cycles) is the VR Technology Supervisor and Joseph Trapanese (Tron: Legacy) composed the original score. Evan Rachel Wood (Westworld, Mildred Pierce), who voiced the character of Queen Iduna in Frozen 2, is the film’s narrator, and her participation is an indication of Disney’s serious commitment to the project. Brittney Lee, a key visual development artist on the Frozen movies, is the Production Designer of Myth, and drew inspiration for it from the works of legends such as Disney Art Director Eyvind Earle (Sleeping Beauty) and visual development artist Mary Blair (Alice in Wonderland, Cinderella). She was intrigued by Gipson’s vision of creating a stylized world with the visual influences of pop-up books, graphic silhouettes and stage elements from vaudeville and music hall productions. “Something we tried to push on in Myth was this sense of 2D animation,” says Lee. “I wanted to have hand-drawn animation because Fantasia was one of the main inspirations for this film. So we have a lot of our effects artists on the project. [Effects artist] Dan Lund came in and did beautiful 2D hand-drawn effects.” Gipson adds that the 2D animated work “adds a sense of heritage from Disney animation.”
“One of the things in the film that I really gravitated towards was the elemental spirits. I just loved these characters. … What’s so great is that with the water Nøkk you feel its scale, with the salamander you feel how tiny it is on a rock, and with the Earth Giants you see them massively in scale. That’s the beauty of VR. You’re able to feel things that you’re not able to feel on a flat screen.” —Jeff Gipson, Director

Myth started when Frozen and Frozen 2 writer and co-director Jennifer Lee approached Gipson about exploring the world of Frozen for his next VR project. “When Jenn approached me to create something, I felt a mixture of emotions,” Gipson recalls. “I was nervous, I was excited. I was intrigued. What is cool is that Jenn gave me complete creative control on the film that I wanted to make. I was empowered but I was also nervous. Frozen is one of our most iconic films at the studio, and those worlds and those characters are so special. I wanted to do something that would do that film justice and feel like it belonged to that world.” Gipson’s concept for the VR film came about a year ago when he was watching some early screenings of Frozen 2. “One of the things in the film that I really gravitated towards was the elemental spirits,” he comments. “I just loved these characters.” These include the Nøkk, mythical water spirits that take the form of stallions with the power of the ocean; Gale, a wind spirit that is playful and curious and can also rage with a tornado’s force; the Earth Giants, massive creatures that are the spirits of the earth; and Bruni, the fire spirit in the form of a tiny, fast-moving fire salamander who can wreak havoc in a forest in seconds. At the same time, he was thinking about his own family’s tradition of telling bedtime stories.
“Something we tried to push on in Myth was this sense of 2D animation. I wanted to have hand-drawn animation because Fantasia was one of the main inspirations for this film. And so we have a lot of our effects artists on the project. [Effects artist] Dan Lund came in and did beautiful 2D hand-drawn effects.” —Brittney Lee, Production Designer
TOP: Visual development artwork of the fire salamander by Production Designer Brittney Lee. MIDDLE: Lee’s call-out sheet for the fire elemental spirit, a salamander. BOTTOM: Myth: A Frozen Tale film poster
“We’re all told bedtime stories and we can create our own versions of the stories that we’re told. And that’s kind of what VR does. It takes you into a world, it transports you. So I thought a bedtime story would be a real cool way of framing the elemental spirits.” Gipson imagined living in Arendelle and being told the film’s bedtime story, which includes the elemental spirits. He continues, “What’s so great is that with the water Nøkk you feel its scale, with the salamander you feel how tiny it is on a rock, and with the Earth Giants you see them massively in scale. That’s the beauty of VR,” he says. “You’re able to feel things that you’re not able to feel on a flat screen.” That sense of presence creates a certain “wow factor” with well-done VR. “That’s what makes Myth so special,” Gipson attests. “Early on when we pitched the first version of this, one of the things Chris Buck (co-director of the Frozen movies) challenged us on was, ‘what does Elsa feel like when she’s with the water Nøkk, with the salamander, under the Earth Giants, with the wind? How can the audience have that same feeling, that experience of presence with these characters?’ “In Myth, the Nøkk comes up so close to you,” Gipson continues. “You feel the presence of this horse, the same with the Earth Giants towering above you or the wind swooping around you.” Myth also has a brief portion that is interactive, which adds further presence. At one point, if you come close to the salamander, it scurries away. In Myth, as in Cycles, guiding the audience’s eye is a challenge. “That’s one of the biggest challenges in VR currently, and there are a lot of different solutions. We did that in Cycles through motion and movement and color and light. And we do something similar in Myth, where we’re always trying to guide you through movement of the characters, or through the sound, or some of the lighting techniques. We had the Gomez Effect in Cycles, and we re-implemented that – it is the technique that if you’re not looking at the area where the action is taking place, it darkens and [desaturates]. So, everything that is in full saturation and beautiful is where you’re supposed to be looking.” Myth was different from Cycles in assigning each character its own score, palette and language. “Each character has its own piece of music, much like in Peter and the Wolf. What’s great is our composer, Joe Trapanese, came on very early in the process and our animators actually had pieces of the score to animate to, which is rare.
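The “Gomez Effect” Gipson mentions can be approximated with a simple gaze test: compare the headset’s view direction with the direction to the point of interest and fade brightness and saturation outside a cone. The sketch below shows that math under those assumptions; it is an illustration, not Disney’s actual implementation.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

static Vec3 Normalize(const Vec3& v)
{
    const float len = std::sqrt(Dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// 1.0 when the viewer looks straight at the point of interest, falling to 0.0
// outside a cone; everything else is darkened and desaturated by (1 - weight).
float AttentionWeight(const Vec3& headPosition, const Vec3& gazeDirection,
                      const Vec3& focusPoint,
                      float innerConeCos = 0.9f,   // fully lit inside ~25 degrees
                      float outerConeCos = 0.6f)   // fully dimmed beyond ~53 degrees
{
    const Vec3 toFocus = Normalize({ focusPoint.x - headPosition.x,
                                     focusPoint.y - headPosition.y,
                                     focusPoint.z - headPosition.z });
    const float c = Dot(Normalize(gazeDirection), toFocus);
    const float t = (c - outerConeCos) / (innerConeCos - outerConeCos);
    return std::clamp(t, 0.0f, 1.0f);
}

// Usage idea: finalColor = lerp(desaturate(darken(color)), color, AttentionWeight(...));
```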
“[Making Myth with Unreal Engine] was a gamble in general. We had never made anything with Unreal at the studio, but our technology team – Jose Luis Gomez Diaz [VR Technology Supervisor], Mike Anderson [VR Environment Lead] and Ed Robbins [VR Character Lead] – felt strongly about jumping in and trying to use Unreal. They enjoyed making the film and learning this kind of process.” —Jeff Gipson, Director

At the studio, the animators had this cool challenge of how to match the movements to the music. “There’s also a lot of art direction choices that we did in Myth,” he adds, “where each character has its own color palette as well, and the world transforms and changes. There’s a changing of seasons in this world that is constantly evolving.” In terms of software, Gipson made Cycles with Unity and Myth with Unreal Engine. “This was a gamble in general,” he notes. “We had never made anything with Unreal at the studio, but our technology team – Jose Luis Gomez Diaz, Mike Anderson [VR Environment Lead] and Ed Robbins [VR Character Lead] – felt strongly about jumping in and trying to use Unreal. They enjoyed making the film and learning this kind of process.” Gipson’s team also used Swoop, an in-house tool written at Disney. “It’s a kind of tool where basically you’re drawing paths in VR. We’re putting on the goggles and we’re on the set, and our animators are able to grab the VR controller and orchestrate a movement, drawing a path for Gale and for the salamander’s fire trail around this.” The team continued to use Quill, a VR painting program, for storyboarding, as well as VisDev. Danny Peixe, who was also on Cycles, used it for storyboarding in VR. “We used the same techniques for Myth except much more elaborate. We did this early on just to get the flow, the sense of proximity to our characters. How the story would play out around us.” Gipson enlisted the help of Skywalker Sound on Myth. “Skywalker is an amazing group of people,” he says. “We worked with sound mixers there and they have their own teams specifically for VR. In Myth, there are 200 different sound elements, which is pretty incredible for a VR film, and each one is spatial. The score changes as you look around, so if you’re not looking in the right space it sounds almost as if you ducked out of the concert.” The climax of this film is a kind of “visual poem where the music and the animation are all married together.” Myth: A Frozen Tale will screen at festivals and events this year.
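Gipson’s description of Swoop (drawing a motion path with the VR controller for Gale or the salamander’s fire trail to follow) suggests a simple pattern: record controller positions while the artist holds a trigger, then resample them through a smooth spline. The sketch below is entirely hypothetical (Swoop is an unpublished in-house tool) and just shows that pattern with a Catmull-Rom curve.

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Raw controller samples recorded while the artist draws the path.
struct PathRecorder {
    std::vector<Vec3> samples;
    void AddSample(const Vec3& controllerPosition) { samples.push_back(controllerPosition); }
};

// Standard uniform Catmull-Rom segment running from p1 to p2, guided by p0 and p3.
static Vec3 CatmullRom(const Vec3& p0, const Vec3& p1, const Vec3& p2, const Vec3& p3, float t)
{
    const float t2 = t * t, t3 = t2 * t;
    auto blend = [&](float a, float b, float c, float d) {
        return 0.5f * ((2.0f * b) + (-a + c) * t +
                       (2.0f * a - 5.0f * b + 4.0f * c - d) * t2 +
                       (-a + 3.0f * b - 3.0f * c + d) * t3);
    };
    return { blend(p0.x, p1.x, p2.x, p3.x),
             blend(p0.y, p1.y, p2.y, p3.y),
             blend(p0.z, p1.z, p2.z, p3.z) };
}

// Evaluate the smoothed path at u in [0, 1] so an animated element (a wind
// gust, a fire trail) can follow it at whatever speed the animator chooses.
Vec3 EvaluatePath(const std::vector<Vec3>& pts, float u)
{
    const std::size_t n = pts.size();
    if (n < 4) return pts.empty() ? Vec3{ 0.0f, 0.0f, 0.0f } : pts.front();

    const float f = u * static_cast<float>(n - 3);
    std::size_t i = static_cast<std::size_t>(f);
    if (i > n - 4) i = n - 4;
    return CatmullRom(pts[i], pts[i + 1], pts[i + 2], pts[i + 3], f - static_cast<float>(i));
}
```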
TOP: Ian Coony, VR Effects Lead (left) and Brittney Lee share their thoughts at an Effects Review in the ‘Myth FX VR Lab.’ BOTTOM LEFT TO RIGHT: Jeff Gipson, Director; Brittney Lee, Production Designer; Jose Luis Gomez Diaz, VR Technology Supervisor; Mike Anderson, VR Environment Lead; Ed Robbins, VR Character Lead; and Danny Peixe, Artist/Illustrator.
TECH & TOOLS
THE ACADEMY SOFTWARE FOUNDATION AND THE ADVANTAGES OF OPEN SOURCE SOFTWARE By CHRIS MCGOWAN
“Now that there is a clear path to inclusion for engineering work, we are seeing a large uptick in new code being written and older bugs fixed. It speaks volumes about the visibility that the Academy Software Foundation has brought to this important work, and the engineering talent that is looking to contribute to the future of filmmaking software.” —Rob Bredow, Sr. Vice President, Executive Creative Director and Head of ILM, and Chair of the ASWF Governing Board
88 • VFXVOICE.COM SPRING 2020
As visual effects have proliferated in the movie and TV industries, the appeal and importance of open source software (OSS) have steadily grown. As a result, the Academy of Motion Picture Arts & Sciences (AMPAS) teamed with the Linux Foundation in August 2018 to form the Academy Software Foundation (ASWF) to create and maintain free open source software for the industry and to build a healthy ecosystem for the engineers and artists who want to contribute to it. The VES is an Associate Member of the ASWF.

The origins of the ASWF trace back to a two-year survey the AMPAS Science and Technology Council conducted into the use of OSS across the motion picture industry, which found that almost 84% of the industry uses open source software, especially for animation and visual effects. AMPAS undertook the survey because it had identified OSS as a key strategic area for the motion picture industry. “Open source software is used in the creation of every film, and every frame of film is stored digitally in open source software formats – this software is critical infrastructure for both making and preserving movies,” says Rob Bredow, Sr. Vice President, Executive Creative Director and Head of ILM, and Chair of the ASWF Governing Board.

The initial investigation included an industry-wide survey, a series of one-on-one interviews with key stakeholders, and three Academy Open Source Summits held at the Academy headquarters, according to Andy Maltz, Managing Director, Science and Technology Council, AMPAS, and ASWF Board Member.

Comments Bredow, “They identified the key common challenges they were seeing with open source software. The first was making it easier for engineers to contribute to OSS with a modern software build environment hosted for free in the cloud. The second was supporting users of open source software by helping to reduce the existing version conflicts between various open source software packages. And the third was providing a common legal framework to support open source software.

“The mission of the Academy Software Foundation,” Bredow elaborates, “is to increase the quality and quantity of contributions to the content creation industry’s open source software base; to provide a neutral forum to coordinate cross-project efforts; to provide a common build and test infrastructure; and to provide individuals and organizations a clear path to participation in advancing our open source ecosystem.”

The ASWF has achieved solid early acceptance, with AWS (Amazon Web Services), Animal Logic, Autodesk, Blue Sky Studios, Cisco, DNEG, DreamWorks, Unreal Engine, Google Cloud, Intel, Microsoft, Movie Labs, Netflix, NVIDIA, Sony Pictures, Walt Disney Studios, Weta Digital, Foundry, Red Hat, Rodeo Visual Effects Company and Warner Bros. already on board.
“The ASWF work is an excellent vehicle for healthy open source collaboration and adoption. They are supportive of other industry efforts by the VES and others because all of those groups are collaborating toward a common goal to standardize the areas of our industry that do not give anyone a competitive advantage.” —Steve Shapiro, Marvel Studios Director of Software and Production Technology

WHERE ARE WE NOW?
Says Guy Martin, Director of Open Source Strategy at Autodesk and Board Member of the Academy Software Foundation: “OSS is the lifeblood of VFX departments in the context of how it enables them to use best-of-breed tools – and even write their own in-house applications – while seamlessly moving their effects and shots among these tools. Vendors like Autodesk have been asked why we support open source in the visual effects space. The answer is simple – our customers demand it. They use our tools like Maya and 3ds Max, but also a variety of other tools, and combine them in their tool pipelines to make great content. Any tool vendor or technology partner who is not participating in open source will be behind the innovation curve. There is healthy competition among many of the vendors and technology providers in this space, but there is also universal acceptance that open source is the ‘lingua franca’ that allows the industry to successfully share and collaborate.”

“We are no longer in the infancy of open source software and open standards within the movie, TV and VFX industries,” comments Steve Shapiro, Marvel Studios Director of Software and Production Technology. “Within VFX and post-production, we have been moving from a model of proprietary solutions toward collaborative standards. We are past the infancy of that migration, though there is still a long way to go before it is a mature practice.”

OPEN SOURCE ADVANTAGES
By having an open standard, everybody can contribute their development resources towards that, rather than having many different proprietary formats for common data and workflows, according to Jordan Thistlewood, Group Product Manager for Katana, Mari and Flix at Foundry. “It’s the idea that everyone is invested in a common technology that covers all the basics. This philosophy extends through many types of file formats throughout the industry, so everyone contributes to one massive technological pool,” Thistlewood comments in the essay “Open Standards Will Change the VFX Industry As We Know It” on Foundry.com.

“By sharing the code with a large number of engineers facing diverse challenges and allowing those engineers to improve the software and share it back to the community, we get better quality software. In addition, when you think of long-term preservation of data, sharing the algorithms for reading the data openly helps ensure the data is readable long into the future,” explains Bredow.
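Bredow’s point about openly shared reading code is concrete with a format such as OpenEXR, whose reference implementation and bindings are public (and which, as noted later in this article, is among the projects the foundation has adopted). The snippet below is a small sketch using the long-standing OpenEXR/Imath Python bindings; the file name is a placeholder and binding details can vary between versions.

```python
import OpenEXR
import Imath
import numpy as np

# Open a (hypothetical) rendered frame and inspect its header metadata.
exr = OpenEXR.InputFile("beauty_v001.exr")
header = exr.header()
dw = header["dataWindow"]
width = dw.max.x - dw.min.x + 1
height = dw.max.y - dw.min.y + 1
print("resolution:", width, "x", height, "channels:", sorted(header["channels"].keys()))

# Decode the RGB channels into float32 arrays for further processing.
pixel_type = Imath.PixelType(Imath.PixelType.FLOAT)
rgb = [
    np.frombuffer(exr.channel(name, pixel_type), dtype=np.float32).reshape(height, width)
    for name in ("R", "G", "B")
]
```

Because the format and the reading code are both open, the same few lines work in a vendor tool, an in-house pipeline script or an archive-restoration job decades later.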
OPPOSITE TOP LEFT: Rob Bredow, Sr. Vice President, Executive Creative Director and Head of ILM, and Chair of the ASWF Governing Board. OPPOSITE TOP RIGHT: Guy Martin, Director of Open Source Strategy at Autodesk and Board Member of the Academy Software Foundation. OPPOSITE BOTTOM: Steve Shapiro, Marvel Studios Director of Software and Production Technology. TOP LEFT: Jim Jeffers, Sr. Principal Engineer and Sr. Director of Advanced Rendering and Visualization at Intel Corp., and ASWF Board Member. TOP RIGHT: Andy Maltz, Managing Director, Science and Technology Council, AMPAS, and ASWF Board Member. BOTTOM LEFT: Daniel Heckenberg, R&D Supervisor - Graphics, Animal Logic, and Chair of the ASWF Technical Advisory Council. BOTTOM RIGHT: Jordan Thistlewood, Group Product Manager, Katana, Mari, Flix, Foundry.
SPRING 2020 VFXVOICE.COM • 89
TECH & TOOLS
“Long-term preservation of digitally-created motion pictures is a real challenge. How does one access a digital movie 100 years from now given the pace of technological innovation – and obsolescence? Standardized open source software APIs and file formats are key solutions to this challenge.” —Andy Maltz, Managing Director, Science and Technology Council, AMPAS, and ASWF Board Member.
Maltz adds, “Long-term preservation of digitally-created motion pictures is a real challenge. How does one access a digital movie 100 years from now given the pace of technological innovation – and obsolescence? Standardized open source software APIs and file formats are key solutions to this challenge.”

“The most successful examples of open standards and software will improve the quality of life for large productions or post houses. VFX houses have to collaborate on shared shots and shared assets much more than ever before, which would be significantly harder without some standards,” says Shapiro.

Says Jim Jeffers, Sr. Principal Engineer and Sr. Director of Advanced Rendering and Visualization at Intel Corp., and ASWF Board Member, “Either for speed of development of compelling and complex assets or to take advantage of the latest state-of-the-art techniques in content development, it’s important to have interoperability and interchange become as seamless as possible. Through collaborative open source projects, the industry as a whole benefits from this kind of ‘standardization’ that still is changeable and tunable, which is a natural state for OSS.”

IDENTIFYING CHALLENGES
“Either for speed of development of compelling and complex assets or to take advantage of the latest state-of-the-art techniques in content development, it’s important to have interoperability and interchange become as seamless as possible. Through collaborative open source projects, the industry as a whole benefits from this kind of ‘standardization’ that still is changeable and tunable, which is a natural state for OSS.” —Jim Jeffers, Sr. Principal Engineer and Sr. Director of Advanced Rendering and Visualization at Intel Corp., and ASWF Board Member
90 • VFXVOICE.COM SPRING 2020
The AMPAS survey identified various challenges to the use of OSS, including siloed development, managing multiple versions of OSS libraries (“versionitis”), and varying governance and licensing models that need to be addressed in order to ensure a healthy open source community.

Shapiro observes, “Sharing technologies across studios has a variety of challenges, but open source standards do make it easier to collaborate on industry solutions to difficult problems. However, it is not a silver bullet because the standard is just the start. The adoption and evolution of those standards is crucial to their success. If a standard is not leveraged in the right places and improved to adapt to changes within the industry and technology, it will be antiquated before it is widely used.”

“Visual effects is still an innovative industry with a lot of shows breaking new ground each year. The really successful and widely adopted work we have seen in open source software has really been to provide useful core libraries for the basics – reading images, reading geometry, handling looks for shots – and that is key to get right. In terms of sharing something very complicated like a full rigged and groomed character, that’s not yet an area that we are seeing standardized due to the amount of innovation we are still seeing in the space,” says Bredow.

Sustainability, effective collaboration and clarity of purpose are challenges common to OSS projects, according to Daniel Heckenberg, R&D Supervisor - Graphics, Animal Logic, and Chair of the ASWF Technical Advisory Council (TAC). “The ASWF supports sustainability and collaboration by providing established and successful governance and process models for its projects. Key to these outcomes is the reduction of risks and barriers to entry for contributions and involvement through standardization of licensing and engineering resource commitments from members.”

“Any tool vendor or technology partner who is not participating in open source will be behind the innovation curve. There is healthy competition among many of the vendors and technology providers in this space, but there is also universal acceptance that open source is the ‘lingua franca’ that allows the industry to successfully share and collaborate.” —Guy Martin, Director of Open Source Strategy at Autodesk and Board Member of the Academy Software Foundation

LEGAL AND LICENSING
“The biggest complaints we heard from the community were around the wide variety of licensing requirements and some of the corporate requirements around adopting and contributing back to open source projects. Even free software has a cost if you need a team of lawyers to pick through obscure licenses in order to confirm you can leverage the tool for your next film. One of the ways the Academy Software Foundation can help is ensuring that our projects have common, permissive licenses that are easy for companies to review once and accept for their engineers and artists,” says Bredow.

ACCOMPLISHMENTS TO DATE
Adds Bredow, “One of the great things we have seen in our first year at the Academy Software Foundation is the tremendous increase in code contributions from a much larger number of contributors. In the past, many of these projects only had one or two main contributors, usually from one sponsoring company. Now that there is a clear path to inclusion for engineering work, we are seeing a large uptick in new code being written and older bugs fixed. It speaks volumes about the visibility that the Academy Software Foundation has brought to this important work, and the engineering talent that is looking to contribute to the future of filmmaking software.

“The first projects that the Academy Software Foundation has adopted are OpenColorIO, OpenVDB, OpenEXR, OpenCue and OpenTimelineIO,” says Bredow. “It’s important to note that the foundation is not a standards body and doesn’t act in that role. However, the nature of open source software often leads to de-facto industry standards, since many companies are already contributing to and using the software. So, when our software, like OpenEXR, gets wide adoption and becomes a de-facto industry standard, that’s great news too.”
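Several of those first projects ship Python bindings or are written in Python, which keeps experimentation cheap. As a small, hypothetical illustration (not drawn from the article), OpenTimelineIO can load an editorial cut and walk its clips in a few lines; the file name below is a placeholder.

```python
import opentimelineio as otio

# Load an editorial timeline exported from an NLE (the .otio name is hypothetical).
timeline = otio.adapters.read_from_file("ep101_cut.otio")

# Walk every clip in every track and report its name and source range.
for clip in timeline.each_clip():
    print(clip.name, clip.source_range)
```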
OTHER OPEN SOURCE ECOSYSTEMS

“I think the ASWF work is an excellent vehicle for healthy open source collaboration and adoption,” notes Shapiro. “They are supportive of other industry efforts by the VES and others because all of those groups are collaborating toward a common goal to standardize the areas of our industry that do not give anyone a competitive advantage.”
The VFX Reference Platform, set up by the Visual Effects Society (VES), also works towards maintaining standards in this space. Initially focused on Linux only, the VFX Reference Platform is a set of tools and library versions to be used as a common target platform for building software for the VFX industry.

Comments Jeffers, “There is a pretty significant open source ecosystem that is related to but not directly managed by the ASWF. Intel is part of the ASWF to support the overall beneficial use of open source software, both the directly managed codes and the large ecosystem of OSS that is not yet an ASWF fit right now or that is standing strong as is. Liberally licensed codes like Intel’s Embree ray-tracing kernel library and our recently launched Open Image Denoise are becoming important tools for animation and VFX workflows. The Pixar-led Universal Scene Description (USD) OSS project for content and asset interchange is enabling multiple studios to collaborate and share the development and expertise of many to make a high-quality film or TV show. A robust mix of open source and advanced commercial software tools is now the norm for making modern films with digital assets.”
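USD’s role as an interchange backbone, as Jeffers describes it, is easiest to see in code. The sketch below uses the public pxr Python API to author a tiny asset that any USD-aware tool could open; the file and prim names are illustrative assumptions, and a production pipeline would add far richer schemas, layers, variants and references on top.

```python
from pxr import Usd, UsdGeom

# Author a minimal USD stage that another studio's tools could open as-is.
stage = Usd.Stage.CreateNew("hero_prop.usda")
root = UsdGeom.Xform.Define(stage, "/HeroProp")
sphere = UsdGeom.Sphere.Define(stage, "/HeroProp/Body")
sphere.GetRadiusAttr().Set(2.5)

# Mark the default prim so downstream references resolve cleanly, then save.
stage.SetDefaultPrim(root.GetPrim())
stage.GetRootLayer().Save()
```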
INDUSTRY ACCEPTANCE

“The Academy Software Foundation is really a broad industry effort. It’s always free to contribute to and use the software that is hosted by the foundation. And we are supported by members who provide engineering support and funding to ensure a successful foundation. In the first year, our membership base has almost doubled, and five open source projects have applied and been accepted into the foundation. We are seeing growing support for the foundation both at technical and management levels,” says Bredow.

More and more people are moving away from closed standards to open standards, says Thistlewood. “There’s now a proven track record of these projects, so there’s now faith in what they will become.”

“We have gone past the tipping point,” concludes Shapiro, “so it is no longer a question if open source software and standards are worthwhile. The question now is what fits within that model and how do we prioritize those efforts to bring them into the community.”
SPRING 2020 VFXVOICE.COM • 91
[ VES SECTION SPOTLIGHT: GEORGIA ]
Enjoying an Exciting Period of Rapid Growth By NAOMI GOLDMAN
TOP: Georgia Section members celebrate their festive inaugural event at Turner Studios’ E League. BOTTOM: Rob Wright, Chair, Georgia Section Board of Managers
92 • VFXVOICE.COM SPRING 2020
The Georgia Section, established in 2019, is the 14th and newest Section to join the VES. With almost 80 members, the Georgia Section is enjoying an exciting period of growth, mirroring the explosion of the local visual effects industry.

“We have been so fortunate to have the support and insights from the VES community to help realize the birth of our Section,” says Rob Wright, Chair, Georgia Section Board of Managers. “Many years ago, I understood that for our community to get screenings and other event opportunities, we needed to formalize a Section. While I and some others were relentless in getting people excited about the prospect and recruiting them in our effort, it took some additional help from the outside to help us navigate the process and cross the finish line to get our first 50 members. Mei Ming Casino, who was transplanted from L.A. to Atlanta, was a fantastic partner and guide before passing the baton to me. And Tim McGovern is a marvelous asset, and was excellent in helping us get established.”

The Georgia Section is coordinating efforts in two cities – a larger group in Atlanta and a smaller group in Savannah, which has a number of VFX professionals in the animation and gaming sectors and some former practitioners now serving as professors at the Savannah College of Art and Design (SCAD). The Section is benefiting from longstanding VES members now working in Georgia, such as David “DJ” Johnson, formerly from the L.A. Section. “We inherited a great guy with DJ coming to Savannah. He has been exceptionally supportive in molding the Savannah group and making them feel included and a part of our strong camaraderie.”

While the Section is operating in two cities, they are working to bridge the two groups. The Section hosts monthly “VES Hangouts,” informal social networking get-togethers that also serve as member recruiting events. At one of their recent Hangouts, held simultaneously in Atlanta and Savannah, they set up a video portal so that the two groups could see each other and interact, fostering unity and community. “Once people show up to one of our fun Hangouts, nine times out of 10, they are hooked on the VES and come on board!” adds Wright.

The Section burst onto the scene with an inaugural celebration sponsored by the Turner Broadcasting Studios’ E League, and it was deemed a huge success. The event included a VR demo on one of the Turner soundstages, decorated to herald the new Section. In addition to the monthly Hangouts, the Section has a full calendar of screenings held in Atlanta. Getting screenings in Savannah is a goal for 2020. Next on the agenda is to develop educational and career development programs, potentially in conjunction with SCAD.

“As we are setting goals and planning events, we continue to reach out to our sister Sections to brainstorm and look for ways to collaborate,” says Wright. “Because the industry is young here, we have reached out to New York to glean some of their lessons learned and look at ways we might participate in their events digitally and help cross-promote each other’s efforts. The possibilities to collaborate are really exciting.”

“We are focused on making VFX more appealing to the next generation of artists, so outreach to students is a priority as we move forward. It’s vitally important for young people, especially young women, to see VFX as a possibility, a real career option. We can help do just that.” —Rob Wright, Chair, Georgia Section Board of Managers

The Section membership includes a wide breadth of talent, with VFX professionals working in TV, film, animation and gaming. That said, the Board of Managers has made diversity a primary goal in the coming year. “We are very mindful of expanding our membership to include more women and people of color,” adds Wright. “And we are focused on making VFX more appealing to the next generation of artists, so outreach to students is a priority as we move forward. It’s vitally important for young people, especially young women, to see VFX as a possibility, a real career option. We can help do just that.”

Many companies have established offices in Georgia, including Method Studios, Crafty Apes, BOT VFX, Spin VFX, Stargate, Undertone, Pixel Rodeo and Turner Broadcasting. In this rich environment, VFX practitioners have the opportunity to do a lot of extraordinary work because the local studios are also part of larger shops. “We are really riding a wave with a rapidly growing VFX community,” says Wright. “I think at some point, we will start to see not having enough talent here to fulfill the work. One VFX company told us that they are using our Section as a draw to recruit talent to the company, and that’s so rewarding to know we are making a difference and providing that kind of value. And we invest a lot in educational institutions like SCAD and the Georgia Film Academy in bringing forth the next class of VFX pros. All in all, it’s a very dynamic environment with a bright future.”

Wright also acknowledges the Board of Managers, who are “engaged, dedicated and such a strong and collaborative group. Together, we are very focused on building community and making our Section a great success.

“Georgia is very welcoming, the people are approachable and it’s very family-friendly – attributes we aim to reflect in our Section. We are so appreciative of the VES community for helping us get our footing. As we grow and learn, we see it as our job to support burgeoning groups and pay it forward. That’s the beauty of being a part of the VES family.”
TOP AND BOTTOM: Georgia Section members and guests enjoying exclusive screenings in Atlanta.
SPRING 2020
VFXVOICE.COM • 93
[ VES NEWS ]
VES Board of Directors Officers 2020; Mike Chambers Re-Elected Board Chair By NAOMI GOLDMAN
The 2020 VES Board of Directors officers, who comprise the VES Board Executive Committee, were elected at the January 2020 Board meeting. The officers include Mike Chambers, a globally renowned freelance visual effects producer, beginning his sixth term as Chair.

The 2020 Officers of the VES Board of Directors are:
Chair: Mike Chambers
1st Vice Chair: Lisa Cooke
2nd Vice Chair: Emma Clifton Perry
Treasurer: Laurie Blavin
Secretary: Rita Cahill

“Our Society is fortunate to have strong, impassioned leadership represented on our Executive Committee,” says Eric Roth, VES Executive Director. “This dedicated group of professionals gives their considerable expertise and volunteer time to our organization. I’m excited about their collective vision and plans to advance our global membership even further.”

Mike Chambers is an award-winning Visual Effects Producer and independent VFX consultant, specializing in large-scale feature film productions. He is currently in post-production on Tenet, his fourth collaboration with esteemed writer-producer-director Christopher Nolan. Chambers has been a member for over 20 years, serving multiple terms on the Board of Directors.

Lisa Cooke has spent more than 20 years as an animation/VFX producer, creative consultant, screenwriter and actor. Firmly believing that it is vital for science to effectively communicate its truths to a broader audience, she started Green Ray Media. She served six years on the Bay Area Section Board of Managers, currently serves as Chair of the Archive Committee, and was 2nd Vice Chair of the Board of Directors in 2019.

Emma Clifton Perry is a Senior Compositor with more than 15 years of experience working across feature films, longform/TV series, commercials and advertising at VFX facilities worldwide. Based in Wellington, New Zealand, she offers remote compositing and VFX consulting services. Perry has served several terms as Chair, Co-Chair and Secretary/Treasurer on the VES New Zealand Section Board of Managers and is in her fourth year on the VES Board of Directors.
94 • VFXVOICE.COM SPRING 2020
Laurie Blavin’s 20-year career spans the culture, creativity and technology of both Hollywood and Silicon Valley. She counsels professionals and companies (from start-ups to international businesses) on strategy, brand value stewardship and full-cycle staffing and recruiting. Having served on the Bay Area Section Board of Managers, Blavin is now in her first term on the VES Board of Directors.

Rita Cahill is an international business and marketing/PR consultant for a number of U.S., Chinese, Canadian, U.K. and EU companies for visual effects and animation projects. She is also a partner in MakeBelieve Entertainment, a film development company, and serves as Executive Producer on a number of international projects. This is Cahill’s sixth term as Secretary. Previously, Cahill served as Chair or Co-chair of the VES Summit for eight years.
“Our Society is fortunate to have strong, impassioned leadership represented on our Executive Committee. This dedicated group of professionals gives their considerable expertise and volunteer time to our organization. I’m excited about their collective vision and plans to advance our global membership even further.” —Eric Roth, VES Executive Director
[ FINAL FRAME ]
From Old School to Real-Time
The above still shows Jimmy Stewart, Jean Arthur, director Frank Capra and cinematographer Joseph Walker on a crowded set during the filming of 1939’s Mr. Smith Goes to Washington. Anyone from the early years of filmmaking walking onto the set of a superhero movie or other elaborate production today, such as The Lion King or Star Wars: The Rise of Skywalker, might go into a state of shock at the dramatic changes. Instead of large, bulky mechanical cameras that needed to be loaded frequently with nitrate film (and no video village for feedback), large painted backdrops, small crews, hand-built film sets and giant hot lights, they would enter a world of multiple small, lightweight digital and virtual cameras; greenscreens; bluescreens; virtual sets; mechanical props; tracking markers; pyrotechnics; model and miniature work; actors in mocap suits; hundreds of on-set specialists; customized iPad Pros; and newer real-time engine rendering technology.

And post-production? That also would be mind-blowing for the time traveler from earlier days of filmmaking. Instead of painstakingly cutting film strips, they would experience small computers with enormous processing capacity, CGI, 3D animation techniques, and armies of visual artists and tools involved in the pre- and post-production process. Real-time production is set to even further alter the dynamics of filmmaking in the 2020s and beyond.
(Photo courtesy of Columbia Pictures and The Academy of Motion Picture Arts and Sciences)
96 • VFXVOICE.COM SPRING 2020