VIRTUAL PRODUCTION – 5
VIRTUAL VS PHYSICAL: THE STRENGTHS, LIMITATIONS AND CROSSOVER OF THE CAMERA
In the fifth part of this series, Matthew Collu explores how in-camera visual effects can be harnessed powerfully in a virtual production environment
Throughout the last 100-plus years of filmmaking, storytellers both big and small have always needed to flex their creative intuition and inventive problem-solving to ensure the fantastic can be properly captured through the lens of the camera. As we slowly moved away from stage play-like scene compositions towards a much more dynamic framing standard, the notion of ‘playing to camera’ became commonplace. We discovered that what truly matters is what can be seen through the lens at any given moment.
The idea that if it looks good through the camera, nothing else should matter was an incredibly simple yet art form-defining revelation, and one that opened the floodgates for visual effects practices to truly carve out a place in the industry. Incorporating the limits of the camera’s frame has made for limitless creative solutions and discoveries, and no innovative visual effects workflow better showcases this than virtual production and in-camera visual effects.
For a while now, most visual effects have been built and implemented solely in the post-production pipeline, for reasons of both budget and time efficiency. Aside from a few purists, the industry has migrated away from achieving most visual effects directly through the lens. Beyond that, stories and visual demands have become more complicated than ever, while turnaround times have tightened and content demands have grown.
However, with the advent of virtual production and in-camera VFX, there has been a resurgence in having everything achieved during the production phase, rather than simply fixed in post. With extensive experience in both the production and research sides of this ever-swirling hurricane of innovative filmmaking, I’d like to expound on what capturing it all through camera truly entails, and how the virtual and physical worlds harmonise in virtual production – how they match, how they speak, and ultimately where it all comes together.
To prevent this from becoming a textbook rather than an article, I’ll discuss the most crucial and common examples of working in and understanding virtual production, starting with how the LED wall and your camera need to match (and trust me, the choice matters).
Believe it or not, a camera and a screen function in relatively similar ways. One captures visual information at a specific speed and frequency, while the other presents visual information in the same way. When those speeds differ, you get that ever-so-lovely screen flicker that turns a TV image into a broken, glitched-out mess. This issue can very easily present itself on set in an LED volume (which is, after all, basically a massive television), but with the correct measures it is totally avoidable and non-intrusive.
How? Match them. Prior to production, the camera and LED wall can be jammed and genlocked to ensure that what is being captured and what is being presented happen at the same time. The refresh rate of the LED wall and the frame rate of the camera can still be matched without jamming, though there is a higher propensity to gradually fall out of sync. With all the other mishaps and reaffirmations of Murphy’s law during film production, this isn’t an issue you want to add to the list.
This is why your choice of camera (and ultimately a keen understanding of virtual production) matters, since not all cameras can receive a genlock signal. Being diligent and meticulous regarding frame rates and frequencies is part and parcel of maximising your operation within virtual production.
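To make that frame rate arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The helper and the numbers are my own illustration rather than any vendor’s tool – on a real stage this is verified with genlock hardware and test footage – but the rule of thumb it encodes is the same: the wall’s refresh rate should be a whole-number multiple of the camera’s frame rate, and the shutter’s open time should span a whole number of refresh cycles.

```python
from fractions import Fraction

def is_flicker_safe(camera_fps: float, shutter_angle_deg: float,
                    wall_refresh_hz: float) -> bool:
    """Rough rule of thumb: the wall's refresh rate should be an
    integer multiple of the camera's frame rate, and the shutter's
    open time should span a whole number of refresh cycles."""
    # The refresh rate must divide evenly by the frame rate.
    multiple = Fraction(wall_refresh_hz) / Fraction(camera_fps)
    if multiple.denominator != 1:
        return False
    # Exposure time for this shutter angle, in seconds.
    exposure = (shutter_angle_deg / 360.0) / camera_fps
    cycles = exposure * wall_refresh_hz  # refresh cycles per exposure
    return abs(cycles - round(cycles)) < 1e-6

# 24 fps with a 180-degree shutter against a 3,840 Hz wall: safe.
print(is_flicker_safe(24, 180.0, 3840))      # True
# 23.976 fps against the same wall: the rates slowly drift apart.
print(is_flicker_safe(23.976, 180.0, 3840))  # False
```

Run the check for every camera and wall-processor setting you plan to shoot with; it is far cheaper to catch a mismatch in prep than to discover rolling banding in dailies.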
Both camera and screen can now stare longingly at each other without any hiccups or fatigue – but that only goes so far, right? Part of developing a deeper connection is communication, and that rings true with cameras and LED walls. Getting them to match is a necessary first step, but beyond that, what is being controlled in the real world must be properly reflected in the virtual world – and that comes down to how they both speak to each other. iPads and web controllers aside, having the camera communicate with your real-time engine is a crucial, criminally under-appreciated element that takes your shoot from seemingly real to seamlessly real.
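What does that conversation actually look like? On a real stage it is usually handled by established camera-tracking protocols (FreeD is a common one) or engine plugins such as Unreal Engine’s Live Link. Purely as an illustration, a stripped-down, hypothetical version of one ‘sentence’ of that dialogue might look like this in Python – every field name and the endpoint here are my own invention, not any real system’s format:

```python
import json
import socket
import time
from dataclasses import dataclass, asdict

@dataclass
class CameraSample:
    """One frame of the conversation: where the camera is,
    where it points, and what the lens is doing."""
    timecode: float
    position_m: tuple    # x, y, z from the stage origin, in metres
    rotation_deg: tuple  # pan, tilt, roll in degrees
    focus_m: float       # focus distance, metres
    focal_mm: float      # zoom position, millimetres
    iris_t: float        # T-stop

def send_sample(sample: CameraSample, host: str = "127.0.0.1",
                port: int = 9000) -> None:
    """Fire one UDP datagram at the render engine (a hypothetical
    endpoint; real stages use protocols such as FreeD or Live Link)."""
    payload = json.dumps(asdict(sample)).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))

send_sample(CameraSample(
    timecode=time.time(),
    position_m=(1.2, 0.0, 1.6),
    rotation_deg=(12.0, -3.5, 0.0),
    focus_m=4.5,
    focal_mm=35.0,
    iris_t=2.8,
))
```

The point is less the format than the cadence: the engine needs a fresh sample every frame, timed against the same clock as the camera, or the virtual background will lag behind the physical move.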
Regardless of how far you extend the virtual world out towards the horizon, the wall itself is still a flat surface fixed at a single location in space. Without proper communication, your depth of field is flat, static and strangely uniform. With it, the virtual world reacts as if it truly were extending miles out from the studio.
This is achieved through lens encoding, which is exactly what it sounds like. A small device attached to your camera rig encodes the metrics of the lens – focus, iris and zoom – and sends that signal to your real-time engine, where it is bound to the engine’s virtual representation of your real-world camera. For every twist and turn of your focus, the same optical effect that would normally occur in a real-world background now occurs virtually, turning a flat background into a fully dynamic virtual extension that feels less like a moving image and more like a completely unified, integrated environment. Vendors like ARRI and LOLED have made this encoding process streamlined and accessible, for even easier integration into the virtual production pipeline.
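Under the hood, an encoder only reports raw counts from a ring gear; somewhere in the chain those counts have to be mapped through a per-lens calibration table before the engine can use them. Here is a toy version of that mapping in Python – the table values are invented for illustration, not taken from any real lens file:

```python
import bisect

# Hypothetical calibration table for one lens: raw encoder counts
# recorded at known focus marks, paired with distances in metres.
# (1000.0 stands in for the infinity mark.)
FOCUS_MAP = [(0, 0.45), (2100, 1.0), (4800, 2.0),
             (7900, 5.0), (11500, 20.0), (14000, 1000.0)]

def encoder_to_focus_m(raw: int) -> float:
    """Linearly interpolate between calibrated marks - the essence
    of what a lens-encoding kit does before the value is bound to
    the engine's virtual camera."""
    counts = [c for c, _ in FOCUS_MAP]
    i = bisect.bisect_right(counts, raw)
    if i == 0:                    # below the close-focus mark
        return FOCUS_MAP[0][1]
    if i == len(FOCUS_MAP):       # past the infinity mark
        return FOCUS_MAP[-1][1]
    (c0, d0), (c1, d1) = FOCUS_MAP[i - 1], FOCUS_MAP[i]
    t = (raw - c0) / (c1 - c0)
    return d0 + t * (d1 - d0)

print(encoder_to_focus_m(3450))  # 1.5 - halfway between the 1 m and 2 m marks
```

In production that table comes from a lens calibration pass on the day; the engine then drives its virtual camera’s focus distance from the interpolated value, which is what lets the wall defocus in step with the physical lens.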
Finally, the place where it all comes together: the frame. Trust me, I understand that seems like a very simple and obvious statement, but rarely is anything simple in the wonderful world of content creation.
At the end of the day, what comes through the frame is what is seen by those watching. To our eyes on set, colours may look odd, the frame rate may seem choppier than what we’re used to seeing on a screen, and the scale and perspective of the set may seem askew. However, just like using a fishing line to suspend something in the air, or having a crew member frantically wave a flag in front of a light, it’s all in the admirable pursuit of achieving the desired illusion through the lens. Truly, whatever the camera sees is ultimately what makes or breaks the illusion of the world.
Bolstered by creative solutions, innovative techniques and keen eyes for subtlety, virtual production’s in-camera workflow further incentivises creating at the speed of imagination. It just takes some finessing.
Matthew Collu, Studio Coordinator, The Other End
Matthew Collu is Studio Coordinator at The Other End, Canada. He is a visual effects artist and cinematographer with experience working in virtual production pipelines and helping production teams leverage the power of Unreal Engine and real-time software applications.