Media Storage
Because there is more than just disks and memories for storage
The constant changes in the possibilities of content creation, as well as the new media needed to properly manage content throughout the various stages of the creation process, are key aspects of our audiovisual environment.
We are now used to handling virtually all of our information in digital media: from a simple text or voice message to the large volumes of information that quality images require, especially when it comes to audiovisual content.
From the standpoint of a generic user, it is usual to disregard everything that lies behind such storage. But from a professional point of view, it is part of our responsibility to know as much as possible about everything related to our tools so as to make, as always, the most of them.
When discussing “media storage” or more specifically in our sector “audiovisual content storage”, we do not intend to delve into brands or models, but into the qualities and features of said storage, highlighting those that are critical for our creation process, from camera capture to broadcasting and long-term archiving.
It is not an exact figure, but at the beginning of this 21st century it was already estimated that, in little more than a decade of digital photography, more photos had been lost to technical or human failures of every kind than had been created in over a century since the invention of chemical photography. This is something that cannot happen to us.
But it is not simply about not losing information, it is about being able to record and retrieve it in the most efficient and reliable way at each and every one of the different stages and situations of the creation process. So, let’s review the most important aspects of each of those stages, hoping that no one will be surprised by the large differences in the needs for handling the same content throughout the whole process.
Let’s start by clearly setting our scenario by establishing the large sections, because we must not limit ourselves exclusively to the media, but also consider “what”, “when” and “where” for our contents. Naturally, other questions such as “depends”, “what for”, etc. will arise later on, and this will lead us to make the right decisions.
Dealing with “what” is very simple: what are we going to save? Given our audiovisual creation environment, we will focus only on the aspects related to our AV contents, that is, the file (or coherent set of files) that saves the image, the sound and the associated metadata that allow us to correctly manage our recordings.
As for “when”, we will consider four major stages in the creation process: registration, while recording is taking place; handling, which normally involves all processes related to ingestion, editing and postproduction; broadcast, when content is made available to our viewers; and archiving, when it is saved for later use. Because each particular stage involves very different needs.
“Where” is precisely the medium that will store our contents, which will not in all cases be a physical medium.
“Depends” will almost invariably be the right answer to all the “what for’s” that we will be facing. These “what for’s” will be the goals, the purposes for which the recording and/or archiving and/or broadcast is made. Different “what for’s” of the same content will determine very different storage needs.
We will begin by explaining the two main features of storage systems, since these will be what will most frequently determine the ideal media in each particular circumstance.
The first and most significant feature is capacity, measured in Giga-, Tera- or Petabytes, and it is the easiest one to grasp: it is the size of the warehouse and determines the amount of information it will be possible to save, the respective factors being of the order of 10⁹, 10¹² and 10¹⁵. It is important to remember that when we refer to an order of 10³ in computer terms (1 kilobyte) we are really talking about 1,024 bytes, that 10⁶ (1 Megabyte) in the same terms = 1,024² = 1,048,576 bytes, and so on.
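The decimal-versus-binary distinction above explains why a drive labelled “1 TB” reports less than a terabyte in the operating system. A minimal Python sketch (the function names and example values are mine, for illustration only):

```python
# Decimal (SI) vs binary capacity prefixes, as described in the text:
# marketing uses powers of 10, computers count in powers of 2.

def decimal_bytes(value, prefix):
    """Bytes for SI prefixes: k = 10^3, M = 10^6, G = 10^9, T = 10^12, P = 10^15."""
    return value * 10 ** {"k": 3, "M": 6, "G": 9, "T": 12, "P": 15}[prefix]

def binary_bytes(value, prefix):
    """Bytes for binary prefixes: Ki = 2^10, Mi = 2^20, Gi = 2^30, Ti = 2^40."""
    return value * 2 ** {"Ki": 10, "Mi": 20, "Gi": 30, "Ti": 40}[prefix]

print(decimal_bytes(1, "k"))   # 1000
print(binary_bytes(1, "Ki"))   # 1024
print(binary_bytes(1, "Mi"))   # 1048576
# A drive sold as 1 TB (10^12 bytes) shows up as roughly 0.909 binary terabytes:
print(round(decimal_bytes(1, "T") / binary_bytes(1, "Ti"), 3))  # 0.909
```

The roughly 9% gap between the two conventions grows with each prefix step, which is worth remembering when sizing an archive.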
Next comes speed. But let’s slow down here, because there is more than one speed or transfer rate. Usually we talk about figures of the order of MB/s (Mega meaning 10⁶). In addition, we must take into account that writing speed is usually lower than reading speed, and that these speeds are not the same for a small block of data as for a continuous flow (which is precisely our need!). We must also consider access speed, the time it takes to find the data sought. And the most delicate task: making sure whether we are handling data in MB/s (megabytes) or Mb/s (megabits): 1 MB/s = 8 Mb/s!
In our case, the speed that really interests us will always be the sustained speed, whether for reading or writing. But be careful not to confuse bit rate (transfer rate) with frame rate (the fps of our recordings), although a higher frame rate will always require higher bit rates and more storage space.
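The megabit/megabyte trap is simple to dodge once you hard-code the factor of 8. A tiny sketch (function name and the 400 Mb/s figure are my own illustration, not from any specific camera):

```python
BITS_PER_BYTE = 8

def mb_per_s(megabits_per_second):
    """Convert Mb/s (megabits, lowercase b) to MB/s (megabytes, uppercase B)."""
    return megabits_per_second / BITS_PER_BYTE

# A hypothetical codec rated at 400 Mb/s needs a card that sustains 50 MB/s writes:
print(mb_per_s(400))  # 50.0
```

Reading a card’s rated speed as megabytes when the codec is specified in megabits (or vice versa) is exactly the eightfold error the text warns about.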
Having explained all this beforehand, it is now possible to start exploring possibilities.
The capture stage seems relatively simple, since the internal media is always conditioned by the camera. While today many cameras already use memory cards in standardized formats such as SD, XQD or the new CFexpress (Types A and B), there is still in service a good amount of equipment that only supports proprietary media such as Panasonic P2, Sony SxS and even optical discs. Not to forget that it is common to use external recorders such as those from Atomos or Blackmagic, among others.
The necessary storage capacity will depend on the resolution, frame rate and codec chosen, but there is no need for a calculator: manufacturers usually offer quite reliable figures for the recording times possible in each combination, as well as the minimum writing speed necessary to ensure correct recording. In fact, it is often the case that cameras themselves will not allow you to select formats that are not supported by the inserted card.
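Even though manufacturers publish these tables, the underlying arithmetic is worth knowing. A hedged sketch of the calculation (the headroom factor, card size and codec rate are illustrative assumptions of mine, not manufacturer figures):

```python
def recording_minutes(card_gb, codec_mbps, headroom=0.9):
    """Approximate recording time in minutes on a card of card_gb gigabytes
    (decimal GB) for a codec running at codec_mbps megabits per second.
    'headroom' leaves a safety margin, since real cards rarely deliver their
    full rated capacity for sustained video writes."""
    usable_bits = card_gb * 1e9 * BITS_PER_BYTE * headroom
    seconds = usable_bits / (codec_mbps * 1e6)
    return seconds / 60

BITS_PER_BYTE = 8

# A hypothetical 128 GB card with a 100 Mb/s codec:
print(round(recording_minutes(128, 100)))  # ~154 minutes
```

Doubling the codec bit rate halves the recording time, which is why switching from HD to a 4K or RAW format on the same card is so punishing.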
That is where external recorders come into play, which in turn also use cards and/or disks that must meet the capacity and speed requirements of our shooting needs. But pay close attention here, because under the same physical card shape, especially in SD media, it is possible to find a huge variety of speeds, a factor usually directly related to the enormous range of prices for the same capacity. The C (Class), U (UHS) and V (Video Speed) ratings give an idea of performance.
Although we have used the word disks, at this stage mechanical hard disks are no longer normally used, but rather SSDs, that is, solid-state drives. They are actually large memory cards that to all intents and purposes behave like a hard disk, but with the advantages of being much faster, having no moving parts and consuming much less energy. And they too exist in a very wide range of speeds.
The impact of each of the factors that affect the volume of information generated (normally referred to as “the weight” of a file) is very different: quite proportional and predictable if we consider resolution (HD vs UHD), frame rate (24/25/29.97 vs 48/50/59.94) and chroma subsampling (4:2:0 vs 4:4:4), but enormous among very diverse codecs (AVCHD vs H.265 vs RAW).
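The “proportional and predictable” part of that claim is easy to verify for uncompressed video, where the bit rate is just pixels per frame times bits per sample times frames per second. A minimal sketch (the resolutions, frame rates and 10-bit depth are my own worked example):

```python
# Rough uncompressed video bit rate: samples/pixel x bits/sample x pixels x fps.
# Chroma subsampling changes the average number of samples per pixel:
# 4:4:4 -> 3.0, 4:2:2 -> 2.0, 4:2:0 -> 1.5.

SAMPLES_PER_PIXEL = {"4:4:4": 3.0, "4:2:2": 2.0, "4:2:0": 1.5}

def uncompressed_mbps(width, height, fps, bit_depth, subsampling):
    bits_per_frame = width * height * SAMPLES_PER_PIXEL[subsampling] * bit_depth
    return bits_per_frame * fps / 1e6  # megabits per second

# HD 1920x1080, 25 fps, 10-bit 4:2:0 vs UHD 3840x2160, 50 fps, 10-bit 4:4:4:
print(round(uncompressed_mbps(1920, 1080, 25, 10, "4:2:0")))   # 778
print(round(uncompressed_mbps(3840, 2160, 50, 10, "4:4:4")))   # 12442
```

Resolution, frame rate and subsampling each scale the figure linearly; it is the codec’s compression ratio that introduces the order-of-magnitude differences the text describes.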
Let us look at the extremes to illustrate the breadth of ranges that can be handled nowadays: from about 16 GB per hour for HD 4:2:0 in compressed formats suitable for editing (AVCHD type), to more than 18 GB per minute for 4K RAW 4:4:4:4 (ARRI Alexa) with the maximum possible information for the most delicate and demanding tasks.
And as if the possibilities were not wide enough, here come the first “depends”. Because in addition to the original recording, depending on the criticality of the work we will need a backup as soon as possible; depending on the immediacy required, the content may have to be available somewhere else almost simultaneously; and depending on the type of project, we will have to approach its treatment in very different ways.
Which leads us to stage two, the so-called handling stage, encompassing everything related to ingestion, editing, compositing, color grading, audio mixing, mastering, etc., where needs begin to multiply and diversify.
We start from the simplest end, where the same person films and edits, connecting the original media to the computer. With the necessary caution to start by transferring the content to a first medium such as the computer’s disk, and then making another copy on an external disk or cloud service before proceeding to edit. Because we already know what can happen when we rush into editing directly on the original media from the camera or recorder…
In addition to the computer in general, the disk in particular must meet the requirements necessary for all the filmed material to flow with the necessary smoothness at its original quality.
As the project grows, either in volume or complexity, it will be necessary to have a NAS storage system capable of responding to all our needs. And these will no longer just be those of playing a single AV content: it may be necessary to move several streams of different AV content simultaneously. Again, both capacity and speed of the relevant medium will be the key factors to take into account, with different manufacturers offering different solutions in terms of speed, performance, scalability, etc.
As our project continues to grow or becomes more sophisticated, the shared storage system will need to be sized proportionately, maintaining sufficient capacity and speed to serve a growing team of people working simultaneously on the same project or in the same organization, both ingesting new content and using existing content. At this point we must add the concurrency factor, that is, how many simultaneous accesses we will need to support.
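Sizing for concurrency is, at its simplest, adding up the sustained streams every client pulls at once. A hedged sketch (the workload figures are invented for illustration, not a sizing recommendation):

```python
def required_throughput_MBps(workload):
    """Aggregate sustained throughput (MB/s) a shared storage system must
    deliver, given a list of (concurrent_clients, stream_mbps) pairs.
    All figures are illustrative, not vendor specifications."""
    total_mbps = sum(clients * mbps for clients, mbps in workload)
    return total_mbps / 8  # megabits -> megabytes

# e.g. 4 editors each pulling two 200 Mb/s streams, plus one 400 Mb/s ingest:
workload = [(4, 2 * 200), (1, 400)]
print(required_throughput_MBps(workload))  # 250.0 MB/s sustained
```

Real systems also need headroom for seeks and mixed read/write traffic, so a sum like this is a floor, not a target.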
Regardless of the size of the team, and depending on the criticality of the materials, it may be advisable to adopt a RAID configuration for our shared media, which allows us to increase capacity, improve access times and, above all, protect against and recover from potential data losses. The various possible RAID configurations provide these different functionalities by combining different techniques.
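The trade-off between capacity and protection across RAID levels can be summarized in a few lines. A minimal sketch of the standard usable-capacity formulas (disk counts and sizes are my own example; real arrays reserve additional space for metadata and spares):

```python
def raid_usable_tb(n_disks, size_tb, level):
    """Usable capacity (TB) for common RAID levels with n identical disks."""
    if level == "RAID0":   # striping: full capacity and speed, no redundancy
        return n_disks * size_tb
    if level == "RAID1":   # mirroring: half the capacity, survives disk loss
        return n_disks * size_tb / 2
    if level == "RAID5":   # one disk's worth of distributed parity
        return (n_disks - 1) * size_tb
    if level == "RAID6":   # two disks' worth of parity, survives two failures
        return (n_disks - 2) * size_tb
    raise ValueError(f"unknown level: {level}")

# Eight hypothetical 10 TB disks under each scheme:
for level in ("RAID0", "RAID1", "RAID5", "RAID6"):
    print(level, raid_usable_tb(8, 10, level))  # 80, 40, 70, 60
```

This is why the choice of level is a business decision as much as a technical one: the protection of RAID6 costs two disks of capacity on every shelf.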
It must also be considered at this stage, although it always “depends” on the project, that not all processes will be linked or sequential. Stages such as color grading, soundtrack work and many others may have their own needs for access to the same materials in storage.
There will be a turning point in growth, different for each organization, where it becomes more interesting to have media storage, or part of it, managed by a cloud service. In this case, the decision will not be driven simply by capacity or speed. Factors such as the geographical dispersion of teams, security, accessibility, confidentiality and profitability, among others, will be the real keys to the final decision.
In short, the handling stage is where we will normally find the highest simultaneous demands on capacity and speed, in addition to the read/write concurrency needed to properly manage our AV contents.
This stage ends when we have already composed the final piece, which almost always becomes a single file ready to be enjoyed. And this brings us to our third stage: broadcast.
Nowadays we must take into account the very different broadcast possibilities: from a sequential broadcast grid in traditional broadcast channels, to on-demand content. In any case, by this stage the volume of information to be handled has been considerably reduced. We no longer have dozens of materials to compose for which we need the highest possible quality to handle them in the best conditions.
The codecs available enable significant compression without visible loss of quality, since the material will not undergo subsequent modifications, so the files are much lighter in weight and easy to handle. Even so, sufficient storage volume is needed to keep all of our contents, and with the right access speed to serve them on demand if we offer such a service.
In this case, we must bear in mind that we may be required to send the same file at different times to multiple destinations and do so through networks over which we have hardly any control, so it will be important to size up the file to ensure that transfer is carried out as smoothly as possible.
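Sizing the file against the delivery network comes down to the same bits-over-bandwidth arithmetic. A hedged sketch (the efficiency factor, file size and link speed are illustrative assumptions of mine):

```python
def transfer_minutes(file_gb, link_mbps, efficiency=0.7):
    """Rough time in minutes to move a file of file_gb gigabytes over a link
    rated at link_mbps megabits per second. 'efficiency' is a guess at real
    throughput, since we rarely control (or saturate) the networks in between."""
    bits = file_gb * 1e9 * 8
    return bits / (link_mbps * 1e6 * efficiency) / 60

# A hypothetical 5 GB broadcast master over a nominal 100 Mb/s path:
print(round(transfer_minutes(5, 100), 1))  # ~9.5 minutes
```

If the same master must reach many destinations on a deadline, this estimate per destination quickly argues for either a lighter mezzanine file or better-provisioned links.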
In short, at this stage it will be necessary to take special care of access and read capabilities, while capacity will only be conditioned by the amount of content available in our catalog, or simply by the grid to broadcast if we are only considering linear broadcasting. And just by using a bit of common sense, it will be a different platform from the one used in the handling stage. At present, NaaS (Network as a Service) formats prevail in this case, as they have sufficient capacity to be the most profitable investment for a good number of content broadcasters. Although profitability will not always be the decisive factor, and in other circumstances or for certain organizations, internal management of these contents will be preferred.
Reaching the next stage, long-term storage, the requirements are reversed. It will be capacity rather than speed that is the decisive factor, since a few seconds spent finding a content item will no longer be critical, but having enough space will. And the criterion of a new platform, separate from the previous ones, still holds for this storage. Making an analogy, this would be the library in the basement.
But all of this leads to yet another “depends”. Because, what do we want to store on this occasion? All the raw footage at maximum quality for future re-editing? Only the final piece? Even if it is only the final piece, we must preserve our master file at the highest possible quality, and not merely a version that is good enough for broadcasting. Because we know from experience that the capabilities of all systems increase exponentially over time.
Conclusion
There is no single solution that will cater to all our needs; instead, “scalability”, “flexibility” and “adaptability” are the most important notions to take into account when designing and managing our storage system. Going back to the beginning: “What for?” may be the key question to always keep in mind at each of the stages.
Technology inside Kelvin Hall
BBC Studioworks’ latest step to create a complete TV production ecosystem in the UK
BBC Studioworks is one of the UK’s leading providers of television studio infrastructure and services. As part of its ambitions to create a nationwide television production ecosystem, it opened the Kelvin Hall studio in Glasgow in mid-2022.
This facility, built on the pillars of sustainability and the best production technology available, has been the talk of the industry for the past six months. Has it been developed on IP infrastructure? Can it accommodate virtual production techniques? What picture standard does it work on? We asked David Simms, Communications Manager, and Stuart Guinea, Studio Manager, of BBC Studioworks, all these questions. Here are all the details.
What are the functions of BBC Studioworks?
David Simms: BBC Studioworks is a commercial subsidiary of the BBC providing studios and post production services to all the major UK TV broadcasters and production companies including the BBC, ITV, Sky, Channel 4, Channel 5, Netflix, Banijay and Hat Trick Productions.
Located across sites in London (Television Centre in White City, BBC Elstree Centre, Elstree Studios) and in Glasgow (Kelvin Hall), our facilities are home to some of the UK’s most watched and loved television shows.
Some of our credits for ITV are “Good Morning Britain”, “Lorraine”, “This Morning”, “Loose Women”, “The Jonathan Ross Show”, “Saturday Night Takeaway” and “The Chase”. For ITV2 we can highlight “Love Island Aftersun” and “CelebAbility”. For Channel 4 we produce “Sunday Brunch” and “The Lateish Show with Mo Gilligan”. For Sky we make “A League of Their Own” and “The Russell Howard Hour”. “The Crown” and “Crazy Delicious” are some of the contents we produce for Netflix. And for the BBC we create “The Graham Norton Show”, “Pointless”, “Strictly Come Dancing” and “EastEnders”.
Kelvin Hall is one of your latest projects, what are the reasons for its construction and what objectives did you intend to achieve with it?
David Simms: Kelvin Hall is a 10,500 sq.ft. studio. It opened on September 30, 2022 and its construction was co-funded by the Scottish government and Glasgow City Council. Between them, £11.9 million was invested. The Scottish city council owns the building.
The studio has been designed to host all kinds of productions. In its development, sustainability was one of the main pillars on which we based ourselves. To meet these obligations, we brought new life to an existing building that was already in disuse.
Other facts that make it a sustainable project are that it uses 100% renewable energy and has been designed with LED lights.
Kelvin Hall responds to our ambition to open more studios throughout the UK. We did our research across major UK cities, and the studio at Kelvin Hall comes in direct response to a growing demand from producers and industry bodies to make more TV shows in Scotland, as well as a niche in the Scottish market for more Multi Camera Television (MCTV) studios to meet the demand.
Our objectives with Kelvin Hall were to create a boost to Scotland’s capacity to produce multi-genre TV productions and as such help fuel the demand for Scotland’s creative workforce and economy.
After six months, have expectations been met?
David Simms: Six months since opening the studio at Kelvin Hall we have facilitated over 40 episodes of television —on two shows: BBC Two’s “Bridge of Lies”, produced by STV Studios; and BBC One’s “Frankie Boyle’s New World Order”, produced by Zeppotron— created job opportunities for local production, technical staff and freelancers (six permanent staff and an average of 100 people working on a studio record day).
We have also helped to support the next generation of talent to join the TV industry - we launched a Multi Camera TV Conversion Programme in collaboration with the NFTS and Screen Scotland that trained 12 trainees who are now being offered paid work placements at Kelvin Hall.
We understand the importance and necessity of inspiring, nurturing and developing the next generation of production talent, so collaborating with NFTS Scotland and Creative Scotland to offer entry points for young Scottish people to the industry was a priority for us when opening Kelvin Hall.
With regard to supporting the nurturing future local talent, we aim to forge an industry collaboration, supported by Screen Scotland, which routinely invests and delivers initiatives that create a sustainable talent pipeline to meet the growing demand for skills and people.
The creative sector in Glasgow flourishes, built on a reputation of having the best talent that can be sourced locally. We want to increasingly deliver on this ambition over the next five years.
Kelvin Hall is said to be integrated with state-of-the-art technology and has been built to be consistent with the other BBC studios. How did you achieve this?
Stuart Guinea: The BBC Studioworks project team that managed the construction and installation of Kelvin Hall consisted of a group of our in-house experts who had previously project managed the remodelling and technical fit-out of our facilities at the Television Centre in London between 2013 and 2017, as well as our move to the two Elstree sites in 2013.
One of the key decisions made in the design of the Kelvin Hall studio was the commitment to install the infrastructure for LED lighting. This not only reduces the carbon footprint of the studio operation, but it also puts us at the forefront of the change to LED lighting in multicamera TV Studios.
Unlike our facilities in London where our project teams had to work to specific space limitations in existing buildings, at Kelvin Hall we had the fortune of building a brand-new studio facility from scratch in a cavernous space. We took advantage of this to maximise our spaces, ensuring the control rooms and galleries were amply sized – these technical spaces are now the largest across our entire footprint, allowing room for possible tech expansion in the future.
The biggest challenge for the whole project was timescale. There are always small delays in every building project and although we had planned for these, we were also working towards the deadline of our first confirmed client production series at the end of September, so the pressure was on to deliver on-time.
It is also important how the connections between equipment and devices within Kelvin Hall and even between other studios are developed. Designing them over IP is the big challenge and promises many capabilities. What is the situation in Kelvin Hall with respect to this technology? And in other studios developed by BBC Studioworks?
David Simms: After extensive research, we decided to implement a 3G (non-IP) solution for Kelvin Hall.
This was the more cost-effective route for our business model and is already known to our clients, as it is the tried and tested solution that we have in our London facilities.
The core infrastructure at Kelvin Hall is all 4K ready, so we can accommodate a 4K production in the future.
What are Kelvin Hall’s virtual production capabilities? LED or Green Screen, which has been your bet?
Stuart Guinea: Our studio at Kelvin Hall is a large flexible space (10,500 sq. ft.): we have integrated hoists, data and power distributed throughout our overhead grid, and the studio floor is paintable so it can be matched to any greenscreen system. We can therefore accommodate both LED and green screen for VR.
Focusing on the technology, which manufacturers have you relied on to develop the Kelvin Hall installations?
Cameras, visions mixers and monitors
• Six Sony HDC-3200 studio cameras. The 3200 is Sony’s latest model, which has a native 4K image sensor and can easily be upgraded to UHD operation.
• Sony XVS7000 vision switcher, LMD and A9 OLED monitors for control room monitoring.
Hardwired and radio communications systems
• 32 Bolero radio beltpacks with the distributed Artist fibre-based intercom platform, plus VoIP codecs for external comms.
• A SAM Sirius routing platform solution to support the most challenging applications in a live production environment and to ensure easy adoption of future technology innovations.
Audio
• Studer Vista X large-scale audio processing solution that provides pristine sound for broadcast.
• Calrec Type R grams mixing desk. A super-sized grams desk providing ample space for the operation of the Type R desk and associated devices, such as Spot On instant replay machines.
• A Reaper multi track recording server.
Lighting
• ETC Ion XE20 lighting desk and an ETC DMXLan lighting network.
• 108 lighting bars with a mix of 16A and 32A outlets (if tungsten is required).
• 48 Desisti F10 Vari-White Fresnels.
• 24 ETC Source 4 Series 3 Lustr X8.
Another major focus has been on sustainability awareness. In practical terms, how can broadcast technology reduce its impact on the environment?
David Simms: We were fortunate enough to find an area within Kelvin Hall that was already part of a wider redevelopment initiative at the site - the studio repurposes a previously derelict section of a historically important building. And as there is no gas plant, all electricity is sourced from renewable sources.
The studio has been designed without dimmers to encourage LED and low energy lighting technology. The reduced heat generated by the low energy lighting has enabled the use of air-source heat pump technology for heating and cooling, and the ventilation plant has class-leading efficiency using heat recovery systems.
For the past three years we have proudly held the Carbon Trust Standard for Zero Waste to Landfill accreditation at our Television Centre facility, and we are currently working on achieving this for Kelvin Hall too. The Zero Waste to Landfill goal was achieved through a combination of reducing, reusing and recycling initiatives to reduce environmental impact. Specific waste contractors have been procured to ensure diversion from landfill and an onsite waste disposal system, incorporating eight different streams, has been implemented to encourage responsible waste disposal and reduce cross contamination.
We are also a founding collaborator of BAFTA’S / ALBERT’s Studio Sustainability Standard, a scheme to help studios measure and reduce the environmental impact of their facilities.
Handmade Animation
Second Home Studios
“What VR offers creators and consumers still seems to be in the Commodore 64 phase of its evolution. I think it still has much more to come.”
With this statement, Chris Randall, head of the Second Home animation studio based in Birmingham, positions us in front of a really promising future. His hands, and those of his team, have developed a multitude of projects ranging from bringing stories to life through handmade models captured with stop-motion techniques, to the creation of virtual worlds consumable through immersive technologies. Here’s what an animation craftsman has to say about the cutting-edge technologies that will inevitably revolutionize his profession.
What is the origin and history of Second Home Studios?
Second Home Studios came about as a happy accident. I’ve always loved visual effects and animation. I cut my teeth as a film Clapper Loader whilst at University working with the BBC VFX Unit (when it existed) on shows like “Red Dwarf” and helping to blow up Starbug in my Uni holidays. Great fun. I learned a lot and had a brilliant mentor in the form of DoP Peter Tyler.
I moved back to Birmingham, worked in theatre for a bit, then started working in Central Television’s Broadcast Design Department as a Rostrum Cameraman. I’ve always been a keen modelmaker, so I started teaching myself stop-motion in my lunch hours. Soon after, CiTV commandeered me to make promos for their children’s content. My first proper body of work for broadcast won World Gold at Promax and I started to take it more seriously.
Around this time I was asked by an old colleague in theatre to create a film to cover a scene change for The Wizard of Oz. It was a two minute tornado sequence and encompassed much of what I’d previously learned on both sides of the camera while working as a Clapper Loader.
The following year more projection work was on offer, so I left CiTV and set up our Digbeth [Birmingham’s neighbourhood] studio where we’ve been ever since. Now we’re a team of ten, soon to be twenty five.
What are the characteristics that make Second Home Studios different from other animation production houses?
I think it’s our versatility. Some houses operate around a distinct house style, whereas we like to work in different media and try new things. We’re a pretty small operation but we’ve managed to punch above our weight a few times over the years. I think there’s a tenacity within the team that is slightly contagious. It’s certainly helped us get through troubling times. When one of us gets deeply into a problem, it kind of draws everyone else’s interest. It’s this culture of curiosity that spawned our motto: ‘Be Curious, Tell Stories, Make a Mark.’ Even if all that curiosity ends up revealing is a shortcoming in our skillset, at least then we know how to improve. Plus there’s always a narrative in everything we design, draw or animate, no matter how short it might be. And it’s always good to have our work leave a positive scratch in the raging torrent of online content which gets produced on a daily basis.
Regarding your projects, which have been the most technically challenging and on which technology have you relied to solve them?
Every project is challenging for different reasons and with varying levels of complexity. Some projects rely on rigorously technical pipelines, others fall to the skill of the animator.
One of our signature stop-motion projects remains the Pilsner Urquell ‘Legends’ film, crafted entirely from paper and comprising dozens of motion control passes on what is essentially a single camera move.
We also do mixed-media pretty well and were asked to collaborate with Glassworks on a Christmas production for Penny a few years back. This involved shooting miniature stop-motion sets for CG character integration. We used our Manta rig for this and had to match move to the pre-vis for certain shots. Image integration of this sort is always helped by observing good studio discipline for covering things like decent HDRIs, shadow passes, and clean matte plates.
Our rebrand for German edu-channel Da Vinci Learning was a mixed media extravaganza which combined studio effects, stop-motion, CGI and a lot of 2D work as well. This was a great collaboration which won us a British Animation Award for Best Motion Design in 2020.
More recently we produced animated route maps for the Birmingham 2022 Commonwealth Games. Our concept was to convert the city into a tabletop model, a nod to our preference and history for creating in miniature. This involved the lengthy processing of incredibly dense mapping data into geometry which could then be animated. Render wrangling the immense poly count became a significant challenge here, but the results turned out really well thanks to an efficient pipeline through Houdini into Nuke.
We’ve not long wrapped on a large stop-motion project with Nexus Studios and Oscar nominated director, Siqi Song, made easier in post through the use of Polycam for photogrammetising puppet heads for face animation. Another great team effort on both sides which launches later this year.
Our current stop-motion shoot is an animated documentary featuring all hand-knitted puppets. This presents a challenge of a more tactile nature where we have to work within the properties and limitations of the physical material. The main technological saviour here is humble aluminium wire.
What parts of the production chain of an animated project do you develop in-house? Do you have the technical infrastructure and human resources to cover the creative processes, design, 3D creation, final compositing, conform, etc.?
In short, yes. We always tell clients we can be involved as much or as little as is needed. We are often handed fully developed boards or animatics and asked to facilitate their production. But we also get commissioned to build projects from the ground up, sometimes from a one or two sentence brief.
I’ll be honest and say I think we prefer the latter because I think that’s where we’re used to best effect. I’m glad to say I work with some brilliant creative minds, who can invent, research, pitch, create, produce, analyse, rework and fix at all stages of the pipeline in whatever medium the animation takes.
At heart, I suppose we’re generalists and Second Home was founded as the first studio offering stop-motion animation in our city. But we’ve since expanded our toolset because there’s too much fun to be had with all the other techniques available in the animation universe. Of course we draw upon an extended family of freelancers and partner organisations of similar size for support when needed to achieve such versatility.
What internal technological infrastructure do you have to develop your specialties?
We’re brand agnostic. We started as a Mac design studio, I think out of familiarity and habit from the old CiTV days, but now we operate those alongside self-specified PCs.
We run a simple NAS drive hub as the core data repository, which is soon to be replaced with an 80TB Linux driven server.
Software-wise we’re mainly Adobe driven but push everything through Nuke for final image composition.
Grading often happens in DaVinci Resolve.
For stop-motion capture we use Dragonframe which is an incredible piece of software; faultlessly robust and always being innovated by Dzed. It’s a pleasure to use, especially with its motion control and DMX integration for lighting.
We use RawTherapee to convert frames from the studio ready for comp, and where possible we’ll use this stage as a pre-grade filter to get the look of something as close as possible to how the Director and DoP feel it should be.
The 2D stuff happens in After Effects, Animate, Toon Boom or whichever feels most appropriate for the task in hand.
3D work happens in Maya, Blender or Cinema 4D for the artistic modelling and animation. The more technical product demo work normally lands with 3DS Max for accuracy and polish.
What is the last technological renovation or expansion process you have carried out?
What is the next one you are planning?
The last innovation project was probably our Manta motion control rig, which we designed and built ourselves. We needed something with a bigger reach than most other rigs on the market for stop-motion use. It’s a seven-axis rig and gives a pretty wide reach over most miniatures. Plus the live preview, courtesy of Dragonframe’s DMC-16 interface, makes it good enough to shoot live passes on smallish life-size sets. We’ve also done some tinkering with LED strips, making our own budget lighting solutions for lighting cycloramas and model sets, all with DMX control.
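As an aside, the DMX control described here works by broadcasting a “universe” of up to 512 eight-bit channel levels to which each fixture listens at its own base address. A minimal sketch of building such a frame for an RGB LED strip might look like the following; the channel layout (red, green, blue at consecutive addresses) is an illustrative assumption, and a real setup would push the frame out through an RS-485 interface such as a USB-DMX adapter:

```python
# Minimal sketch of DMX512-style channel control for an RGB LED strip.
# The channel map (base address + 0/1/2 = R/G/B) is an assumption for
# illustration; a real fixture's address map comes from its manual.

DMX_UNIVERSE_SIZE = 512

def make_universe():
    """A DMX universe: one start-code byte followed by 512 channel slots."""
    return bytearray(DMX_UNIVERSE_SIZE + 1)  # index 0 is the start code (0x00)

def set_rgb(universe, base_address, r, g, b):
    """Write an RGB triple at a fixture's base address (channels are 1-indexed)."""
    for offset, value in enumerate((r, g, b)):
        if not 0 <= value <= 255:
            raise ValueError("DMX channel values must be 0-255")
        universe[base_address + offset] = value

universe = make_universe()
set_rgb(universe, 1, 255, 128, 0)  # warm orange on a strip addressed at channel 1
```

Tools like Dragonframe then take care of sequencing these levels frame by frame, which is what makes programmable lighting passes practical on a stop-motion set.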
Our next process will build on this use of LEDs when we come to gearing up for our next production.
The technology around the creation of animation content has remained largely unchanged for some time, but it is now in a period of transition thanks to technological advances. What would you say are the main trends that will develop in the future of animation?
I’d like to say that stop-motion will always find a way to trump the digital methods. We’re crafters at heart and will always look to that as our first creative option if we can, but only if it’s right for the project. My suspicion is that soon there will be a ‘make look like stop-mo’ button in most desktop packages and you’ll be able to dial in exactly how much needlefelt or plasticine fakery you want, and the quirks and nuances of the human hand will be approximated by algorithms. It’s already most of the way there, and AI will likely close the gap. I hope that even if it does, there will still be an appreciation of the effort that traditionally has gone into those projects, which means stop-motion commissions will continue to be given to human hands to realise the aesthetic, as opposed to a simulation of it. I guess it will probably come down to cost, as it always has.
That said, we love the possibilities that more powerful digital tools can open up – but for very different reasons than our craft based approach. Mocap’s entry barriers are crashing down making the integration of authentic human performance into digital characters so much easier to do, which hopefully means that the expression of the human body will always have a part to play.
How is the technology around video games interacting with the world of animation? Are you already using it? What use cases do you predict are most suitable for this technology?
I’m in awe of how gaming tech can influence immersive storytelling. We’ve dabbled in games, and supported some development work of projects using our craft experience. The learning curve for us is too steep, but we’re happy to partner with other organisations in this sphere to try out new things.
How do you envision the use of virtual reality techniques both in consumption and in the creation of animation projects? Do you have any experience in this field? What are its possibilities at both levels?
We’re currently working on our second VR project which is funded by the BFI and Storyfutures, called “Beachcomber”. It’s an experimental exercise in suggestive storytelling where, as emotional beings, we can attach significance to tangible (or rather intangible) objects by engaging with our memories and imagination. I’m interested in how VR can be used to change our relationship to the real world and the people within it, and that’s what Beachcomber is about.
Again, we’re partnering with a VR production studio (Holosphere) where we can focus on the creative steering of the project and they can home in on the technical details. It’s our third project with them and a good relationship.
Experientially, it’s hard not to get drawn into so many new things on offer within this space, even if I resent the monopoly that Meta are trying to cultivate across mobile. And in terms of its haptic ability, it feels like we haven’t even scratched the surface yet. So for all its lightsabers and landscapes, VR for me still feels like it’s in the Commodore 64 phase of its evolution. So much more potential to come I think. Lots more to lose yourself in, both as creator and consumer.
Another big trend is the introduction of artificial intelligence and machine learning technologies into the M&E industries. Do you anticipate any use of these technologies in animation?
I think it’s inevitable. We’re already using Midjourney AI for rapid concepting. Its speed and reaction to prompts are absolutely staggering. However, this AI can’t read minds (yet!), so at some point you still have to wade through the reams of images you’ve generated, chop them up, overtrace and actually design the thing you want. But as an inspiration tool, it’s quite an addictive thing to use.
And while it’s easy to be blasé about this technology, the dangers are pretty hard to ignore. Deepfake feels like it’s only a command line or two away from becoming something quite nasty if wielded with malicious intent. Animation for me has always been a visual effect, to be enjoyed in the creating and the watching. But when you introduce machinery which can interpolate so much better and faster than a human hand, and not just approximate human behaviour but replicate it accurately, one hopes that it will only ever be used to entertain and educate.
What will be the evolution of Second Home Studios in the years to come?
We’re in the process of expanding into a new studio unit to facilitate the production of our first pre-school stop-motion animated series. This will be the first show of its kind to be made in the city. It’s a big step for us, and not in the most ideal of circumstances, all things considered. But we’re setting it up with a view that we’ll have enough space to accommodate more interesting projects and invite more ambitious creative collaborations in all the fantastic media that animation affords.