Tech Reviews
World Creator 3
World Creator, developed by BiteTheBytes, came from modest beginnings about a decade ago, first evolving into a plugin for Unity and shortly after into a standalone product sitting on top of the Unity engine. This year the third incarnation of World Creator was released, untethering itself from Unity entirely and growing into its own GPU-accelerated, real-time landscape generator, with numerous features to help you design landscapes to your liking.
The power of World Creator is not just in the real-time rendering, because what is the point of rendering something that doesn’t look good? No, the power is in its procedural tools, which let you quickly sculpt and design your world. You can craft your own landscape using filters like ridges, craters, terraces and rocky surfaces; choose from template patterns for things like canyons or volcanoes; or pull in real-world locations through MapTiler. You can also mix all of those together, and if that weren’t enough, you can top it off with erosion and sediment systems, all in real time. And that’s just the surface.
What is a terrain without color and shading? World Creator utilizes Substance 3D shaders in the .sbsar format, so you can dig in and modify the shaders within World Creator itself. Or you can bring in a number of maps like albedo, normal, ambient occlusion, displacement and roughness. Then you can mix these with gradients or rule-based filters driven by the features of the terrain: slope, height, cavity, angles, edges, noise … just to name a few.
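To give a sense of what one of those rules might look like in practice, here is a minimal sketch in Python and NumPy of a slope- and height-driven mask used to blend two albedo colors. This is a generic illustration with made-up thresholds, not World Creator’s actual interface or internals.

```python
# Generic sketch (not World Creator's internals) of a rule-based mask
# driven by terrain features such as slope and height.
import numpy as np

def slope_height_mask(heightfield, cell_size=1.0,
                      max_slope_deg=30.0, min_height=0.4):
    """Return a 0-1 mask favoring flat terrain above a height threshold."""
    # Slope angle from the gradient of the heightfield.
    dy, dx = np.gradient(heightfield, cell_size)
    slope_deg = np.degrees(np.arctan(np.hypot(dx, dy)))

    # Rule 1: fade the mask out as slope approaches the limit.
    slope_mask = np.clip(1.0 - slope_deg / max_slope_deg, 0.0, 1.0)

    # Rule 2: only apply above a minimum height (e.g. a snow line).
    height_mask = (heightfield >= min_height).astype(float)

    return slope_mask * height_mask

# Example: blend a "snow" albedo over rock using the mask.
h = np.random.rand(256, 256)                    # stand-in heightfield
mask = slope_height_mask(h)[..., None]
rock = np.array([0.35, 0.33, 0.30])
snow = np.array([0.95, 0.95, 0.97])
albedo = rock * (1 - mask) + snow * mask        # per-pixel RGB blend
```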
And now with your terrain looking pretty, you can render it using all the things: ray-tracing, global illumination, clouds, atmospheric scattering, fog, God rays, water. Literally, all the things. In real time.
You can also export your terrain to any number of formats to be used in any 3D package you would like: Unity (naturally), Unreal Engine 4 (support for UE5 is in the works), Maya, 3ds Max, Clarisse and Houdini.
There are a few things missing in this build — but they have been announced for future builds: unlimited terrain, custom object imports (like trees and foliage), camera animation, procedural rivers, thermal erosion, a terrain wizard to build up terrains quickly and easily, and a bunch more filters.
I did do a review of World Creator 2 a few years ago, but please understand that this is a pretty substantial rebuild. The third release has a higher price tag than the previous one; the price for a hobbyist is $349 to buy, and $689 for pros. You could also rent annually for half that per year. If you need environments, the investment shouldn’t even be a thing.
Website: world-creator.com Annual Price: $169 (individual); $349 (small co.); $1,289 (large co.)
RealityScan
For a long time, RealityCapture has been my go-to software for photogrammetry, even before the company was acquired by Epic Games. So when I received notification of an iOS app from the same team that would allow for scanning from your device, I immediately jumped on the beta. And it was a good thing I responded quickly: the 10,000 open slots were filled within a few hours of the email being sent.
Even in its Beta stage, RealityScan is robust and effective. The interface is clean and straightforward and the steps are clear. You begin by holding down the red “take picture” button. Then you move around the object you are scanning, taking picture after picture to get enough coverage for the photogrammetry scan. This is the normal process for photogrammetry, but what’s cool about RealityScan is that it utilizes the AR in your device to leave your snapshots floating in space, as seen through the iPhone. You get to see exactly how much coverage you have around the object, and, in turn, how much more you need to cover.
In the process, the app begins to show you voxels representing the volume of your object. From there, you can adjust the size of your scanning volume to tell the app what to cull and what to calculate.
After the app aligns the images, the data is sent to the cloud where the model is calculated utilizing the same algorithms that power the full version of RealityCapture. When the model is complete, it is sent to your Sketchfab account for you to accept and publish.
The results are really clean, especially if you provide enough pictures. And the interactive nature of the process is actually pretty fun. It’s definitely set up for scanning objects rather than, say, environments. I’m a bit surprised that it doesn’t utilize the LiDAR sensor in iOS devices to help in the calculations. RealityCapture definitely can use multiple sources of data for its model solves.
That said, I am turned off by the fact that your models appear to be trapped in the Epic ecosystem — which is great if you plan to keep everything in Unreal Engine, but not so much if you are using other 3D software. Sure, you could import your Sketchfab model into Unreal using the plugin, and then export it as an FBX. But it certainly would be more convenient if you could download directly from Sketchfab. I suppose I could get over that in time.
I look forward to the release version of RealityScan. I would have waited to review it until then, but I was so excited about it that I thought the photogrammetry community ought to be looking forward to it as well.
Website: capturingreality.com
by Todd Sheridan Perry
Foundry’s Flix 6.4
A couple of years ago, I reviewed Foundry’s Flix 6.2, the collaborative animatic software that connects the animatic source software (most likely Photoshop or Storyboard Pro) to the editing software (Avid or Premiere) and acts as a hub for creative discussions between the director and the artists. The 6.4 version of the product was recently released, and it offers some new features to speed things up and make the workflow more efficient and flexible.
For me, the coolest features are about the interconnectivity between the different software in the workflow: the glue that holds things together. In 6.4, that glue is stronger between Storyboard Pro, Flix and Avid, with camera keyframes stored inside the AAF files.
Basically, camera moves from Storyboard Pro get transferred over to Flix and then migrate to Avid through the AAF file. With this new version, however, the editor can make modifications to that camera, which can then be sent back to Flix for review, and then again back to Storyboard Pro so that the boards can be updated if necessary.
Flix 6.4 has also updated support for Photoshop 2022, which augments the already-powerful connection between the two programs through a menu of plugins for pushing and pulling artwork back and forth.
In addition, Flix 6.4 has added more aspect ratios for different types of displays and media, from a square (i.e. Instagram) format of 1.0 up to 2.2 for widescreen work. It may sound like a minor, not-too-sexy feature, but it’s certainly important to those who need it.
The remaining features are, in actuality, the more robust ones, but they focus more on the efficiency of Flix and its workflow. Transferring data back and forth between the software can be time-consuming and messy, but Foundry has seen speed improvements with a revamped Transfer Utility that imports batches of art panels up to 23% faster than the last release. The UI has also had a makeover so you can visually track the progress of the import. Transfers now happen in the background, so you can move on to other tasks in the project, or even go to a different project, without disrupting the transfer.
Importing into Flix from Storyboard Pro is now cleaner thanks to support for .SBPZ packaged projects, which contain the .SBOARD files you previously imported. Upon import, you can select which board you want, but on the server those files live inside the package. This makes for not only cleaner drives, but also fewer errors during migration or archiving.
These are definitely some worthwhile features to consider upgrading for, or implementing if your next project is storyboard-heavy, whether it’s a fully animated work or a VFX-heavy short, TV show or movie.
Website: foundry.com/products/flix Price: On Enquiry
HP’s Z8 G4 Workstation
This review may seem a little late given that HP pushed out its Z8 G4 workstation back in 2019. However, not only does a pandemic change our perception of time, but I’m reviewing this product in comparison to my own Z820 workstation which has served me since 2013. So, three years isn’t “old” at all. Let’s take a look at this system.
The cabinet is pretty low-profile at 17”x22”x8.5”, with a sleek, unassuming exterior. The front panel sports a couple of USB 3 ports, two USB-C ports, an SD card slot and a mini-headphone jack for good measure. The back has six more USB 3 ports, four network ports (!) and a couple more mini jacks for audio in and out. Also, the power supply is easily swappable without even cracking open the system, which is super handy if your power needs change. Power supplies are available in 1125W, 1450W and 1700W flavors.
But the outside isn’t the important thing, right? I always love HP’s design of its workstation interiors. Everything is toolless, clean and efficient. And while it all fits tightly together, the combination of phase change cooling for the CPUs and the fans and venting keep everything running pretty cool, avoiding throttling. This means it’s generally quiet — although I’ve certainly pushed it far enough for the fans to kick into high gear. Finally, when you pull out the modular covers, you reveal the guts.
HP fitted me up with dual Xeon Gold 6246R processors @ 3.4GHz, along with 128GB of RAM, which is pretty moderate considering that you can push it up to 3TB (with the right corresponding dual CPUs). I also have a 1TB system drive and an additional 4TB SATA drive, a mere fraction of the 56TB of potential storage you can fit inside.
Now, this would be quite enough for most of my visual effects needs (although I would probably take the RAM to at least 256GB for Houdini work). However, it doesn’t end there. The addition of the NVIDIA RTX A6000 with 48GB of VRAM takes the already powerful workstation and ratchets it up to ludicrous speed, especially in GPU-accelerated situations like Unreal, Premiere Pro, Nuke, Redshift, V-Ray; really, whatever you might be using.
On a show I am currently supervising, I fired up the Z8 to previz a bunch of complicated scenes in Unreal, animating eight MetaHumans with hair, using ray tracing and GI. I’m not sure I could have pushed the scenes that far without the power under the hood of this monster. HP has been putting a lot of emphasis on its powerful line of ZBook mobile workstations, and I do look forward to HP’s next iteration of the desktop.
Website: hp.com Price: Starts at $3,907
Blackmagic Design’s DaVinci Resolve 18
It doesn’t look like the post-COVID post-production world will be returning to a not-from-home type of scenario, and that’s something the folks at Blackmagic might agree with me on. I say this because Resolve 18 (Beta 5 as of this writing) sports tools for cloud collaboration between artists. Project libraries are hosted on a DaVinci Resolve Project Server, and the whole team of artists, including the editor, colorist, VFX artists and audio mixers, can log in and access the same project. All of this comes with specialized tools to remap file paths and to stream your viewer display to a remote computer or reference grading monitor. This is just one of the numerous features in Resolve 18, and probably the one that changes the paradigm the most.
I’m really excited about the new proxy workflow. In earlier versions, generating proxies would create files for the master clips, but those files made little sense if you looked at them on the hard drive: Resolve could read them, but they weren’t meaningful in human terms. In my case, I was looking at migrating a Premiere Pro edit to Resolve and wanted to repurpose the proxies that had already been generated in Premiere. The new workflow is much more in line with that: easy-to-read external proxy files that can be migrated, or even generated by other programs, and later linked in Resolve.
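As a rough idea of what that relinking can look like when scripted, here is a minimal sketch using Resolve’s Python scripting API. It assumes the LinkProxyMedia() call exposed in recent Resolve releases behaves as documented, and the proxy folder and naming convention are purely hypothetical.

```python
# Minimal sketch: relink externally generated proxies to master clips in the
# current Resolve project. Assumes Resolve's scripting API with LinkProxyMedia();
# PROXY_DIR and the "_Proxy" suffix are hypothetical stand-ins.
import os
import DaVinciResolveScript as dvr  # module shipped with DaVinci Resolve

PROXY_DIR = "/Volumes/Media/Proxies"  # wherever Premiere wrote its proxies

resolve = dvr.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()
media_pool = project.GetMediaPool()

# Walk the clips in the root bin (a full script would recurse into subfolders).
for clip in media_pool.GetRootFolder().GetClipList():
    name, _ext = os.path.splitext(clip.GetClipProperty("File Name"))
    proxy_path = os.path.join(PROXY_DIR, name + "_Proxy.mov")
    if os.path.exists(proxy_path):
        clip.LinkProxyMedia(proxy_path)  # attach the external proxy to the clip
```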
There are a number of tools driven by the DaVinci Neural Engine, Blackmagic’s machine learning system (which is supported on the Apple M1 Macs). The object mask tool recognizes and isolates selected objects in the scene, including people, and can then track those objects through the shot. It doesn’t work perfectly in all situations, but it can definitely be used in the majority of cases. You also have an AI-based scaling algorithm. A complement to the people-recognizing mask is a face-recognizing tool to help in beauty work. Also, removing dead pixels, removing objects and patching frames are augmented by the Neural Engine, including automatic corrective grading to help blend the patches.
Additionally, the color module now has a Mocha-like track warping tool to track and replace non-rigid surfaces like shirts or skin.
I’ve only touched on the new tools. And I haven’t even mentioned features in the Edit, Fusion or Fairlight modules. There are enhanced subtitles, accelerated transitions, GPU-accelerated paint and denoising tools, binaural rendering, audio placement in 3D space, mixing up to 2,000 audio tracks in real time, OFlow speed changes. There are just so many features!
I have to add that Resolve 18 runs blazingly fast with 8K footage on the HP Z8 G4 with an NVIDIA A6000 — mentioned in the previous review.
A lot of these advanced features are only in the Studio version — not that the free version of Resolve isn’t incredibly powerful. But if you want it all, you may have to pony up the ridiculously low cost of $295. I mean — really! You might spend more on a single plugin for de-noising!