3D Modelling Article


3D Modelling: Theory and Applications
A study of 3D applications and their uses

The use of physical models for entertainment in the film and television industry includes the likes of Gerry Anderson's Thunderbirds and Aardman's Wallace and Gromit, both considered great successes with huge fan bases. Towards the end of the 20th century, however, technology progressed to the point where artists could create the first digital 3D models. These models are now a part of everyday life, appearing in TV commercials, children's programmes, film, and of course the games industry. What unites all these different mediums is the process by which the 3D models are created, refined, animated and rendered. No matter the model's final destination, it will go through the same process as many of those before it. For the games industry, the use of 3D models is total: everything from environments and characters to gameplay trailers is created using

3D models, in either high or low resolution form. To create all the models needed for big-budget productions, an army of 3D artists and animators is required, following an established working method known as a production pipeline. A pipeline will typically consist of a few modelling programs and working methods that provide the most efficient means of working. Broadly speaking, 3D modelling can be split into two


categories: ‘history’ modelling and ‘direct’ modelling. As Gordon (2009) writes, traditional ‘history’ modelling relies on storing instructions inputted by the user, much like writing code, in order for the program to generate the 3D model.

“pure 3D direct modelers provide highly interactive, flexible tools” (Gordon, 2009)

Examples of ‘history’ modelling programs include SolidWorks and CATIA V5. This is a particularly long-winded way of creating 3D models compared with modern ‘direct’ modelling programs such as Autodesk 3DS Max. Artists using ‘history’-based programs have to plan projects extensively in advance, as changes to early instructions can result in ‘regeneration

failures’ that are difficult to trace and repair. ‘Direct’ modelling, on the other hand, is described by Gordon (2009) as predating ‘history’ modelling, but it has been reinvigorated by advances in technology and computing power. ‘Direct’ modelling, also known as ‘solid modelling’, means the artist creates and saves the model itself, without relying on coded instructions as ‘history’ modelling does. Typically, this means the artist can see and manipulate the model in 3D space as they work, offering a huge advantage over ‘history’ and other instruction-based modellers. As the industry has grown, so has the number of programs available to 3D artists. Some of the most popular programs in the industry are Autodesk 3DS Max, Autodesk Maya, Autodesk Mudbox, Pixologic ZBrush, and Maxon Cinema 4D.
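The distinction can be made concrete in a few lines of code. Below is a minimal, hypothetical sketch, not taken from any of the programs named above: the ‘history’ modeller stores an editable list of instructions and regenerates the mesh by replaying them, while the ‘direct’ modeller simply edits vertex data in place.

```python
# A toy 'history' modeller: the model is a list of instructions,
# and the mesh is rebuilt by replaying them in order.
def create_box(_, w, h):
    # a 2D 'box' outline, kept deliberately tiny for the sketch
    return [(0, 0), (w, 0), (w, h), (0, h)]

def move_vertex(mesh, index, offset):
    x, y = mesh[index]
    dx, dy = offset
    mesh = list(mesh)
    mesh[index] = (x + dx, y + dy)
    return mesh

OPS = {"create_box": create_box, "move_vertex": move_vertex}

history = [
    ("create_box", {"w": 2, "h": 1}),
    ("move_vertex", {"index": 2, "offset": (0.5, 0.2)}),
]

def regenerate(instructions):
    """Replay the stored steps. Editing an early step means replaying
    everything after it; a stale reference (e.g. a vertex index that
    no longer exists) causes a 'regeneration failure'."""
    mesh = None
    for op, params in instructions:
        mesh = OPS[op](mesh, **params)
    return mesh

# Direct modelling, by contrast, edits the data itself with no replay:
mesh = regenerate(history)
mesh[1] = (2.5, 0.0)   # drag a vertex; the change is immediate
```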

Figure 1. Next-Gen Weapon Creation – Part 1: The High Poly Model (2011) (Bolton, 2011)

An example of a ‘direct’-method 3D model: a combat knife made in 3DS Max. The wireframe mesh is displayed, enabling the artist to see how the topology flows throughout the model.

Autodesk can boast that 3DS Max, Mudbox and Maya are integrated: artists can send projects from one program to another with a single click, allowing for more efficient workflow pipelines. Many features are shared between the programs, with Maya benefiting from sculpting tools from Mudbox and polygon tools from 3DS Max. While Autodesk can be considered the mainstream 3D modelling software provider, there are plenty of other successful and recognised programs within the games industry, like Pixologic's ZBrush and Maxon's Cinema 4D.

“Programs such as Maya and 3D Studio Max will often be used to model all of the game’s environments” (Edwards, 2006)

Some of these programs tend to focus on a particular aspect of digital modelling, and as a result feature more powerful and innovative tools than the Autodesk equivalents. Since acquiring Mudbox in 2007 and Softimage in 2008, as mentioned by Kerlow (2009, pp.46-47), Autodesk has become a giant in recent years, continuously acquiring new companies and software at a rapid pace. One of those acquisitions, in 2004 (Kerlow, 2009, p.45), included the .fbx file format, an important format that is now widely used in the games industry.

3DS Max

3DS Max, previously known as 3D Studio and developed by the Yost Group in 1988, is a popular 3D modelling and animation program owned by Autodesk. According to TutorialBoneYard.com (2015), it is used mostly by the film and games industries, creating 3D models for both CGI and game development, and mainly uses polygon modelling. As 'jason' (2014) writes, it features many powerful modelling functions, like XGen, created in collaboration with WDAS to provide more efficient, creative, and intuitive tools for 3D creation; this has been used to great effect for creating fur, hair, and even trees. 3DS Max also utilises a powerful inverse kinematics (IK) animation system. This enables the animator to quickly create complex animations by setting the end result of the animation and letting the IK system generate a movement, based on pivot points and joint structure, that can be edited later. (Autodesk, 2014)

Mudbox

Mudbox is perhaps first and foremost a 3D sculpting tool, boasting greater features and power than 3DS Max and Maya when it comes to retopology and sculpting in general. As De la Flor (2013) explains, retopology is, at its most basic, the creation of a new 3D model from an original, existing 3D model. Summarising, De la Flor (2013) concludes that Autodesk Mudbox is a powerful sculpting and retopology tool, and being so closely linked with 3DS Max and Maya enables a great workflow. It also includes support for Ptex, a unique and powerful painting system developed at Walt Disney Animation Studios that does not use mainstream UV texture maps. As Jarret (2014) mentions, since ZBrush is widely regarded as having established 3D sculpting as a viable modelling method, there has been an increase in alternative sculpting programs, such as Blender and Modo; Autodesk Mudbox is powerful, but does not dominate the market.


Maya

Maya is perhaps one of the most well-known and widely used 3D modelling programs in the games industry at the moment. As Amin (2015) notes, Maya 2016 includes significant upgrades to existing systems, including Bifrost and XGen. Notably, Maya 2016 also includes features taken from Mudbox, increasing its versatility in sculpting over previous versions. Maya also includes advanced animation systems, offering more than 3DS Max for animators looking for top-quality tools. However, the modelling tools in Maya, while extensive, still require artists to use dedicated sculpting programs such as ZBrush, or the full suite of features in Mudbox, to complete projects. Lastly, Maya contains many obscure features that can cause information overload for artists new to the program. However, Autodesk does offer Maya LT 2016, a stripped-down version with core functionality at a more affordable price for independent developers and artists.

ZBrush

ZBrush, created by Pixologic, is mainly a 3D sculpting program, and has been credited with establishing digital sculpting as a viable modelling method, according to Jarret (2014). It features clean UI layouts, powerful tools, and compatibility with KeyShot 5. While known for its powerful and intuitive sculpting tools, Pixologic has also started to introduce features like ZModeler, a unique and powerful tool for hard-surface modelling. With minimal steps, artists can create high-quality, detailed 3D models far faster than in 3DS Max or Maya. Schwulst (2014) notes that both ZBrush and Mudbox achieve more or less the same end results, despite the unique features of each. While ZBrush is a lot cheaper than Mudbox, it doesn't benefit from integration with other programs in the way Mudbox does with 3DS Max and Maya.

Figure 2. First Zbrush WIP (2013) (JenPenPen, 2013)

Cinema 4D

Cinema 4D, created by Maxon, can be considered a jack-of-all-trades 3D modelling program, one that can be compared against the likes of 3DS Max and Maya. While it doesn't enjoy the popularity of the big Autodesk programs in the games industry, it can certainly throw its weight around. Cinema 4D sports tools and features covering all aspects of production, from animation and 3D modelling to texturing and rendering. In particular, the BodyPaint 3D feature houses tools to eliminate texture seams and enables artists to paint directly onto the model, much like Mudbox. Certainly an able platform, Cinema 4D is a viable alternative for those unable to afford the prices Autodesk asks for its industry-standard programs. However, Cinema 4D comes in several packages, a fully featured version costs almost as much as Maya, and it simply isn't considered a staple program in the games industry.

Cinema 4D R15 review (2015) (Pichot, 2015)

Houdini

Houdini, created by Side Effects, is a 3D modelling program centred on creating VFX and particles. Currently at version 15, Houdini offers artists a friendly, easy-to-use interface, with powerful VFX and particle tools coupled with polygon modelling tools. There are two main packages on offer: Houdini, for 3D modelling and animation, and Houdini FX, which includes a full suite of powerful VFX and particle system tools. Scenes created in Houdini FX can be imported and rendered in Houdini. While the full version of Houdini FX contains powerful VFX and particle tools, its price is significantly higher than the Autodesk modelling programs. However, Side Effects does offer an Indie version with a limited commercial licence, and Houdini Apprentice, a free, restricted-feature version for non-commercial use. (Side Effects, 2015)

Figure 3. Side Effects release Houdini 13 (2013) (sphereadmin, 2013)



When 3D modelling and sculpting, there are several different file types to choose from. As Kerlow (2009, p.105) writes, there are generally two types of file format: native and portable. Native file formats are unique to the program being used; for example, 3DS Max uses .3ds, Maya uses .mb and .ma, and Cinema 4D uses .c4d. Portable file formats, as Kerlow (2009, p.105) describes them, are formats that can be accessed by a range of other programs, not just the one the artist is using. For example, Maya users can save files in .fbx or .obj format and reopen them later in another program such as Mudbox or Unity. These files save a great deal of data, from textures to the object's location in 3D space.

All objects, whether real or virtual, can be broken down into simple shapes, or geometric primitives. According to Kerlow (2009, pp.118-119), these primitives have a fixed structure, and each 3D program will include a varied selection of primitives for artists to work with. Common geometric primitives include cubes, spheres, cylinders, cones, toruses, polyhedra, and 2D polygons; illustrated examples can be seen in Figure 4.
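As a concrete illustration, exporting to a portable format can be scripted. The sketch below uses Maya's Python commands and only runs inside Maya; the object name and output path are hypothetical, and plugin and option names vary between Maya versions.

```python
# Minimal sketch: exporting selected geometry from Maya to the portable
# .obj format via Maya's Python API. Assumes the objExport plugin is
# available; names and paths here are placeholders.
import maya.cmds as cmds

cmds.loadPlugin("objExport", quiet=True)   # enable .obj export if needed

cmds.select("pCube1")                      # hypothetical scene object
cmds.file(
    "C:/assets/crate.obj",                 # hypothetical output path
    exportSelected=True,
    type="OBJexport",
    force=True,
)
```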

3D objects are constructed out of meshes, which are made from a combination of five distinct features: vertices, edges, faces, polygons, and surfaces. These features are all interactive: by manipulating their positions, the shape of the polygon mesh can be changed. Firstly, a polygon mesh will contain many vertices. These are the points in 3D space at which two or more edges meet (Kerlow, 2009, p.98). For example, a simple cube will have eight vertices, one at each corner, making up its structure. By moving the vertices, the cube can be deformed, extended, shortened, and so forth.
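The relationship between these components can be checked with a short script. This sketch builds a unit cube from raw data and counts its parts; as expected, a cube has eight vertices, twelve edges, and six four-sided faces.

```python
# A cube described by its components: 8 corner vertices and
# 6 quad faces listed as indices into the vertex list.
vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),   # back face corners
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),   # front face corners
]
faces = [
    (0, 1, 2, 3), (4, 5, 6, 7),   # back, front
    (0, 1, 5, 4), (2, 3, 7, 6),   # bottom, top
    (1, 2, 6, 5), (0, 3, 7, 4),   # right, left
]

# Edges are derived: each face contributes its boundary lines, and a
# boundary shared between two faces is counted only once.
edges = set()
for face in faces:
    for i in range(len(face)):
        a, b = face[i], face[(i + 1) % len(face)]
        edges.add((min(a, b), max(a, b)))

print(len(vertices), len(edges), len(faces))   # 8 12 6
```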

Figure 4. The Art of 3D Computer Animation and Effects (2009c) (Kerlow, 2009)

Edges are the direct lines that connect two or more vertices together (Kerlow, 2009, p.98); these can be seen as the lines in the shapes in Figure 4. Moving an edge will also move the attached vertices with it, which can be a more efficient way of shaping an object than moving each vertex individually. Edge extrusion is a method of modelling that uses the edges of a polygon to create more complex shapes by ‘extruding’ new edges from the one selected; this also creates new vertices, faces and elements.

Polygon faces, or planar surfaces, are surfaces defined by the bounding lines made from connected vertices and edges (Kerlow, 2009, p.98). Like edges, these can be selected, moved, and manipulated to form new shapes, affecting the positions of attached vertices and edges. The more polygon faces an object has, the more resources and time it takes to render, particularly in game engines, where a low polygon count is mandatory.

Surfaces are the connected faces that share the same smoothing group. It is possible to have many smoothing groups assigned to faces in order to correctly render the smoothed object; this can help resolve problems rendering the normals of a polygon mesh.

Additionally, meshes can be viewed and rendered as wireframes. Wireframes are the structural skeleton of a 3D model, without any surfaces or textures displayed; in effect, an X-ray of the model. The only elements that can be seen in a wireframe are edges and the connecting vertices.

Lines and curves are also used in 3D modelling, offering a different method of creating 3D meshes. They may look similar, yet there are different types of lines and curves that all differ from one another. As Kerlow (2009, p.116) describes them, lines are used to define the shape and surface characteristics of a 3D model, and all models contain this important element. Lines can be both straight and curved: straight lines somewhat obviously connect two points over the shortest possible distance, while curved lines can include subtle curves and impart an elegance in design. Figure 5 shows some of the most commonly used lines and curves. As Kerlow (2009, p.117) explains, lines and curves contain control points, knots and weights, hulls, and tangent points; by manipulating any of these, the line or curve can be changed. Moving the control arms seen in Figure 5 can drastically affect the shape of the curve.

Figure 5. The Art of 3D Computer Animation and Effects (2009a) (Kerlow, 2009)

Another method of modelling with similar properties to lines and curves is NURBS surfaces, or Non-Uniform Rational B-Splines. These are formed from NURBS curves, with the hull containing control points, isoparms, surface points, and knots and weights for a greater degree of control when shaping the surface. Kerlow (2009, p.140) observes that NURBS surfaces do not pass through the control points; instead the artist has to use the weights and knots in the mesh to shape the surface. Examples of meshes created with NURBS curves can be seen in Figure 6.

Figure 6. The Art of 3D Computer Animation and Effects (2009e) (Kerlow, 2009)

Navigating 3D space can be tricky; however, there is a universal way in which artists and developers can move objects around. Kerlow (2009, p.95) refers to this as the Cartesian coordinate system, also known as the rectangular coordinate system. Devised by the French mathematician René Descartes, it uses three axes, labelled X for width, Y for height, and Z for depth, to represent the dimensions of 3D space. The point at which all the axes meet is known as the origin point, or world origin; an illustrated example can be seen in Figure 7. The system is used in 3D applications to move objects around, and positions can take many values, or units. For example, if an artist were to type a value of 5 into the Y-axis translation box, the object would move up by 5 units; a value of -5 would instead move it down.
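Moving an object by typing translation values is just vector addition applied to every vertex. A minimal sketch:

```python
# Translating an object in Cartesian space: add the same (x, y, z)
# offset to every vertex. A Y value of 5 moves the object up 5 units;
# -5 would move it down instead.
def translate(vertices, dx, dy, dz):
    return [(x + dx, y + dy, z + dz) for (x, y, z) in vertices]

pyramid = [(0, 0, 0), (1, 0, 0), (1, 0, 1), (0, 0, 1), (0.5, 1, 0.5)]
print(translate(pyramid, 0, 5, 0))   # every Y coordinate increases by 5
```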



Figure 7. The Art of 3D Computer Animation and Effects (2009g) (Kerlow, 2009)

Artists also have to consider the resolution of models when creating game assets. A modern method of working is to create a High Resolution (or High Poly) model, then use it as a guide when creating the Low Resolution (or Low Poly) model that will actually be used in the game.

“Decisions will have to be made as to whether to reduce polygon counts on objects” (Edwards, 2006)

High and low poly resolutions are exactly that: a typical high poly model can number millions of polygons, far too many for use in a game engine. Low poly models are instead optimised from the high poly versions to give the artist and game engine the best balance of performance and quality. Figure 8 shows the difference between two resolutions of the same model.

There are many techniques when it comes to 3D modelling; some of the most common and popular include extrusion, box, and lathe modelling. Kerlow (2009, p.121) explains extrusion modelling, also known as lofting, as the process of creating 3D shapes from a 2D outline, such as a planar surface or edge, by extruding, or extending, it along one particular axis. This allows the creation of more complicated geometry as the artist repeatedly extrudes faces and edges, as sketched below. According to Jonaitis (2015), box modelling is a technique where the artist uses primitives like cubes and spheres to ‘rough out’ the basic shape of the object, then applies extrusion and other techniques repeatedly to create the final object.
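A face extrusion can be written in a few lines: copy the face's vertices, offset the copies, and stitch new side faces between the old and new boundaries. A simplified sketch, extruding a single quad along one axis rather than along its true normal:

```python
# Simplified extrusion: duplicate a quad's vertices, push the copies
# along one axis, and build side walls between old and new boundaries.
def extrude_quad(vertices, quad, axis=1, distance=1.0):
    new_indices = []
    for i in quad:                     # duplicate and offset each corner
        v = list(vertices[i])
        v[axis] += distance
        vertices.append(tuple(v))
        new_indices.append(len(vertices) - 1)
    cap = tuple(new_indices)           # the new end face
    sides = []
    for i in range(4):                 # one new quad per original edge
        a, b = quad[i], quad[(i + 1) % 4]
        na, nb = new_indices[i], new_indices[(i + 1) % 4]
        sides.append((a, b, nb, na))
    return cap, sides

verts = [(0, 0, 0), (1, 0, 0), (1, 0, 1), (0, 0, 1)]
cap, sides = extrude_quad(verts, (0, 1, 2, 3), axis=1, distance=2.0)
print(cap, sides)   # four new vertices, an end cap, and four side faces
```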


Figure 8. The Art of 3D Computer Animation and Effects (2009h) (Kerlow, 2009)

Figure 9. The Art of 3D Computer Animation and Effects (2009d) (Kerlow, 2009)

Box modelling is considered one of the quickest and easiest methods to learn, yet harder to master when creating low poly models. Lathe modelling, or sweeping, is, according to Kerlow (2009, p.121), the creation of a 3D mesh by ‘sweeping’ a 2D outline, such as a line or curve, along a predefined path; an example can be seen in Figure 9. The complexity of the 3D object depends on the original 2D outline and the complexity of the path, and the results achieved by experienced modellers are impressive, to say the least.
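Lathe modelling maps naturally to code: rotate a 2D profile around an axis in fixed angular steps and connect consecutive copies. A minimal surface-of-revolution sketch:

```python
import math

# Lathe/sweep sketch: revolve a 2D profile (radius, height pairs)
# around the Y axis in fixed angular steps, producing rings of points
# that can later be stitched into faces.
def lathe(profile, steps=16):
    rings = []
    for s in range(steps):
        angle = 2 * math.pi * s / steps
        ring = [(r * math.cos(angle), y, r * math.sin(angle))
                for (r, y) in profile]
        rings.append(ring)
    return rings   # faces are built between neighbouring rings

# A crude goblet outline as (radius, height) pairs
goblet_profile = [(0.5, 0.0), (0.2, 0.3), (0.2, 0.8), (0.6, 1.2)]
print(len(lathe(goblet_profile)))   # 16 rings of revolved profile points
```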

Figure 10. The Art of 3D Computer Animation and Effects (2009f) (Kerlow, 2009)

Before any work on a production can begin, there must be an established pattern of working, known as a pipeline. This covers how the production should progress through modelling and sculpting, texturing and shading, and lighting; an example of a typical pipeline for a digital production can be seen in Figure 10. Kerlow (2009, p.77) notes that there are three basic stages in a pipeline: pre-production, production, and post-production. For the games industry, however, there are some additional and alternative stages. As Edwards (2006) explains, there will always be a concept phase, in which the initial ideas for the game, narrative, and gameplay are drawn up. This could include inspiration from various sources, like TV and film, or even other video games. Following closely is the pre-production phase, in which

concept art is drawn and the structure of the game is laid out on paper and formalised into a Game Design Document, so that the whole team knows what is going on within the production. The production phase starts once all the pre-production work has been reviewed and authorised. Artists now start to create the assets for the game in a variety of programs, according to the workflow deemed best for the production. For modelling and sculpting, an artist may create the base mesh in Maya and import it to sculpt finer, high poly details in a program like

Mudbox. This process may happen several times before the final model is settled upon. After modelling, the object is ready for texturing and shading. Again, this process of using multiple programs can, and often does, continue: textures are either created in programs like Photoshop and imported directly into Maya, or painted directly onto the model in programs like Mudbox and edited in Photoshop afterwards. Shaders are used to further refine the look of the model, and can combine many special effects and textures in a single shader. Textures are applied to shaders, and these in turn are applied to the object.

Most modelling programs and game engines use UV coordinates to determine the position of applied textures. Kerlow (2009, p.265) explains that map positions on a 3D model are determined by UV coordinates, in which U and V represent the horizontal and vertical positions respectively, using values from 0 to 1. Once the maps are created, they are ‘projected’ onto the model using a variety of methods. Depending on the shape of the model, maps could be projected using cubical, spherical, cylindrical, or wrapping projections; these methods warp the map and position it according to the assigned UV layout. It is very important that UV maps are laid out smartly and



overlapping seams are hidden from sight, in order for a model to be properly textured. As Kerlow (2009, p.258) explains, textures used in real-time rendering, as in game engines, need to be efficient and small enough that they do not impact the performance of the game. The typical size for a game texture, according to Kerlow (2009, p.258), is between 256 x 256 and 512 x 512 pixels, although video games have now advanced far enough to begin rendering 2K (2048 x 2048) or even 4K (4096 x 4096) textures.
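Because U and V both run from 0 to 1, the same UV layout works at any texture resolution; converting a UV coordinate to a pixel is a single multiplication. A small sketch:

```python
# UV coordinates are resolution-independent: the same (u, v) pair maps
# to the matching pixel in a 256x256 or a 4096x4096 texture.
def uv_to_pixel(u, v, width, height):
    # clamp to the 0-1 range, then scale to the texture's dimensions
    u = min(max(u, 0.0), 1.0)
    v = min(max(v, 0.0), 1.0)
    return int(u * (width - 1)), int(v * (height - 1))

print(uv_to_pixel(0.5, 0.25, 256, 256))     # (127, 63)
print(uv_to_pixel(0.5, 0.25, 4096, 4096))   # (2047, 1023)
```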

There are also different shader types. As Kerlow (2009, pp.252-253) explains, some examples include diffuse, specular, and smooth shaders; these deal with the way light behaves once it hits the surface of the model. Diffuse shaders (Maya uses the Lambert shading model) assign a constant shading value to the object, giving a fast and predictable result; however, Kerlow (2009, p.252) notes that this is not particularly effective on more complex geometry. Specular shaders, according to Kerlow (2009, p.252), apply a reflective surface to the model, resulting in a shiny or glossy appearance; Maya uses the Blinn shading model. A specular shader calculates the light hitting all points of the surface polygon according to the angle of the model, and while more accurate than diffuse or smooth shaders, it is more resource intensive. Smooth shaders are explained by Kerlow (2009, p.253) as using a continuous shading value blended across the surface of the object by averaging the vertex normals across the polygon faces. The result is a ‘smoothed’ model that uses ambient and diffuse values but lacks the reflectivity of a specular shader; Kerlow (2009, p.253) notes that it is an efficient shader, yet lacks realistic and consistent results.
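The difference between diffuse and specular shading comes down to two small formulas. Below is a sketch of the textbook Lambert and Blinn-Phong terms these shader families are built on, not any one program's exact implementation:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

# Lambert diffuse: brightness depends only on the angle between the
# surface normal and the light direction -- cheap and predictable.
def lambert(normal, light_dir):
    return max(dot(normalize(normal), normalize(light_dir)), 0.0)

# Blinn specular: a glossy highlight based on the 'half vector' between
# the light and view directions; higher exponents give a tighter shine.
def blinn(normal, light_dir, view_dir, shininess=32):
    half = normalize(tuple(l + v for l, v in
                           zip(normalize(light_dir), normalize(view_dir))))
    return max(dot(normalize(normal), half), 0.0) ** shininess

n, l, v = (0, 1, 0), (0.5, 1, 0), (0, 1, 0.5)
print(lambert(n, l), blinn(n, l, v))
```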

Figure 11.

Maya 2016 utilises a node-based shader system, hosted in ‘Hypershade’. This allows for a plug-and-play system of assigning different ‘nodes’ to one master shader; the more detailed the required shader, the more nodes are needed. Figure 11 shows the structure of an Ocean Shader and a Blinn Shader in Hypershade. Each contains nodes linked to different properties of the final shader: both have a link for the Surface Shader, and the Ocean Shader requires an additional link for a Displacement Shader.
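The same node wiring can be done in script. A minimal sketch using Maya's Python commands (runs only inside Maya; the object name is hypothetical, the node and attribute names are the stock ones):

```python
# Minimal sketch: building a Blinn shader network like the one in
# Figure 11 from script, using Maya's Python commands (Maya only).
import maya.cmds as cmds

blinn = cmds.shadingNode("blinn", asShader=True, name="myBlinn")
sg = cmds.sets(renderable=True, noSurfaceShading=True,
               empty=True, name="myBlinnSG")

# The 'link' Hypershade draws: the shader's output colour feeds the
# shading group's surface shader slot.
cmds.connectAttr(blinn + ".outColor", sg + ".surfaceShader")

# Assign the shading group to an object (hypothetical scene object).
cmds.sets("pSphere1", edit=True, forceElement=sg)
```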


Colour Map, Normal Map, Occlusion Map

There are different texture maps that can be used to enrich the detail on a model, some of the most common being the colour map, the normal map, and the ambient occlusion map. As Kerlow (2009, pp.275-277) explains, colour maps dictate the colour of pixels after light has reached them, and can be created digitally or even hand painted. Occlusion maps calculate the absence of light reaching pixels and generate shadows accordingly (Kerlow, 2009, p.253). Hajioannou (2013) writes that normal maps use RGB values that translate to XYZ values to ‘fake’ light hitting the object, giving an object much higher apparent detail without using up polygons.
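The RGB-to-XYZ translation Hajioannou describes is a simple remapping: each 0-255 colour channel is rescaled to the -1 to 1 range of a direction vector. A sketch:

```python
# Decoding one normal-map texel: RGB channels (0-255) become the
# X, Y, Z components of a surface direction in the -1..1 range.
def decode_normal(r, g, b):
    return tuple(channel / 255.0 * 2.0 - 1.0 for channel in (r, g, b))

# The typical pale-blue colour of a flat area in a normal map
# decodes to a vector pointing straight out of the surface.
print(decode_normal(128, 128, 255))   # roughly (0.0, 0.0, 1.0)
```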

The next step, after modelling and texturing the assets and the environment, is setting up the appropriate lighting. Lighting in 3D, as Kerlow (2009, p.221) explains, can be used to set mood and reveal the digital world to the player, and, most importantly, it contributes to the overall performance of the scene or game. Kerlow (2009, pp.227-230) describes several common types of light source used by game engines and modelling programs: point lights, spotlights, infinite lights, area lights, ambient lights, and linear lights. Each has different attributes that vary the effect of the light source, and some can even be directed by translation, rotation, and scale.

Point lights, as explained by Kerlow (2009, p.227), also known as omni-directional light sources, cast light in all directions. Often these are used to replicate the effect of lightbulbs, candles, and other light-emitting objects; they are simple to use, can be placed anywhere, and can have additional effects added to them. Spotlights act much the same way digitally as in the real world, according to Kerlow (2009, p.228): they can be aimed, contain falloff properties, and emit light in a cone-shaped area. By changing a light source's attributes an artist can create soft light and shadows, and vice versa. An infinite light source, or directional light, emits light much in the way of stars and suns. Kerlow (2009, pp.228-229) explains that they can be placed anywhere and constantly emit light that doesn't decay over distance, in contrast to point lights, which have a limited range. By

inputting the longitude and latitude of the light source, an infinite light can be used to simulate the sun in a scene. Area lights, as Kerlow (2009, p.229) describes them, can take the form of a large light source, or several lights grouped together. These light sources can be scaled bigger or smaller, and are usually rectangular in shape; they are most effective when used to light small areas, for example light coming through a window in a scene. Kerlow (2009, pp.229-230) explains that ambient light sources emit light that is distributed evenly throughout a scene; they can be placed anywhere, yet still light the scene from all angles. Usually only one ambient light source is used in a scene, and this will determine the shading and global illumination used.



Linear lights, as Kerlow (2009, p.230) describes them, are similar to fluorescent light tubes in that they have length but little width. These lights can be scaled up or down, but can become demanding on resources if overused. As Kerlow (2009, pp.231-232) explains, every light source shares the same basic features, such as position, intensity, colour, shadows, decay, and falloff. Position affects some lights, like spotlights, while adjusting the intensity of the light source determines the strength of the light being emitted.
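Decay is usually modelled as attenuation with distance. A sketch of the common falloff curves (no decay, linear, and the physically based inverse-square law):

```python
# Common light decay curves: the apparent brightness of a point light
# at a given distance, for a given emitted intensity.
def attenuate(intensity, distance, decay="quadratic"):
    if decay == "none":         # infinite/directional-style: no falloff
        return intensity
    if decay == "linear":       # gentle, artist-friendly falloff
        return intensity / max(distance, 1.0)
    return intensity / max(distance, 1.0) ** 2   # inverse-square law

for d in (1, 2, 4):
    print(d, attenuate(100, d))   # 100, 25, 6.25 -- light fades fast
```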

Colour can also be adjusted, which is helpful for the realistic lighting of certain scenes, such as the interior of an alien ship. Light sources are also responsible for creating shadows, and various settings can be altered to adjust how the shadows fall upon objects, or to turn them off altogether if needed. Decay and falloff can be adjusted in some light sources, such as point lights and spotlights, to create different effects; these settings control how the light falls upon an object, allowing an artist to create soft, weak lights, for example.

In order for the computer to process all the graphical information being thrown its way, APIs (application programming interfaces) have to be used. APIs, according to Proffitt (2013), essentially dictate how programs talk to one another, in the simplest terms. The Direct3D API, for example, is part of DirectX (itself an API), and drives how 3D applications handle the graphical data coming in from programs and applications. The latest version, DirectX 12, claims to vastly improve hardware performance in graphical power and efficiency. According to Rouse (2014), Direct3D allows game developers to cater for any graphics hardware, for example Nvidia's and AMD's own, without excluding one or the other; Microsoft also adds more advanced features as new versions are released, increasing efficiency and performance. The main competitor to Direct3D is OpenGL. OpenGL, as Rouse (2011) explains, is similar in purpose to Direct3D, in that it also allows many graphical commands, such as special effects and anti-aliasing, to be executed, but it does so regardless of operating system, unlike Direct3D, which is limited to Microsoft systems. However, Rouse (2014) goes on to say that Direct3D remains the main API on Windows, having arrived first, despite OpenGL occasionally offering better performance.

“Having different models for different distances allows you to draw more on screen at once” (Silverman, 2013)

When it comes to improving efficiency within a game, one method is to reduce the amount of detail on objects that can't be seen clearly, such as far-away objects in the distance. It's wasteful to render an object the player cannot see at full detail, so games introduce Levels of Detail, or LODs, for each model. Masters (2014) explains Level of Detail as a technique used by game developers that involves lowering the detail, and the poly count, of an object the further away it is from the player. This frees up much-needed resources for rendering more relevant objects closer to the camera. Masters (2014) goes on to note that this technique has become common again with the advent of mobile gaming, where tight restrictions are in place, just like the first generations of 3D consoles. An object may also have to rely on more detailed textures to hide the reduced polygon count, so that the decrease in the model's geometry isn't as pronounced.

Figure 12. ‘What's the Difference? A Comparison of Modeling for Games and Modeling for Movies’ (2014) (Masters, 2014)

An object will have several levels of detail, as seen in Figure 12, which increase as the player gets closer to the model. These are known as Level of Detail Groups, or LOD Groups. Such groups can be created in Maya so an artist can see how the model will look before importing it into a game engine. They work by instantly swapping the model assigned to one distance band for a higher or lower resolution version, depending on how the group is set up. Modern game engines, such as Unity, can even automate this process, and can be configured by developers to fine-tune the performance of the game. Although this means an object may have three or even four different iterations, the overall performance of the game in real time is not dragged down by needless rendering of detail. (Unity, 2015)

Maxwell Render

Maxwell Render, from Next Limit, is a very powerful rendering plugin that can produce some of the best physically based image renders in the industry. Its powerful lighting methods can be seen in ‘Alma’ in Figure 13. Features of Maxwell Render include:

- Maxwell Studio, a dedicated lighting and rendering program that can import scenes from other programs
- Real-world camera simulation providing intuitive control, including f-stop and diaphragm settings
- MultiLight, which allows artists to change light intensity after a scene has been rendered
- Ease of use: artists can add a light, adjust the camera, and render (Maestri, 2010)

Figure 13. ‘Alma’ (2009) (Blaas, 2009)

When a video game is running, further shaders are used to render the scene and objects correctly; this is achieved through the use of vertex and pixel shaders. Vertex shaders, as explained by Christian (2012), allow the manipulation of an object's vertex data, including position and colour, using mathematical operations and functions. These changes do not affect the data of the model, only the way it is rendered. Pixel shaders come after vertex shaders in the rendering pipeline, and although they sound similar, they are different. As Christian (2012) explains, pixel shaders calculate what each pixel should show up close, increasing the detail on screen beyond what a vertex shader can provide; in essence, pixel shaders try to replicate the real-world texture and feel of the model being rendered. One feature of pixel shaders is the use of bump and normal maps, which replicate high poly light and shadow detail on low poly geometry. Smart use of both these shaders can result in some visually stunning models and renders.
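The split Christian describes can be imitated on the CPU: a ‘vertex stage’ repositions points without touching the stored model, then a ‘pixel stage’ decides a colour for each covered pixel. A toy sketch only; real vertex and pixel shaders are GPU programs written in languages like HLSL or GLSL:

```python
# Toy CPU-side imitation of the two shader stages, to show the split.
model = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]

def vertex_stage(vertices, offset=(0.0, 0.5, 0.0)):
    # Transforms positions for rendering; the stored model is untouched.
    ox, oy, oz = offset
    return [(x + ox, y + oy, z + oz) for (x, y, z) in vertices]

def pixel_stage(u, v):
    # Decides the final colour of one pixel, e.g. by sampling a texture;
    # here, a simple procedural gradient stands in for that.
    return (int(u * 255), int(v * 255), 128)

transformed = vertex_stage(model)
print(model[0], transformed[0])   # original data unchanged
print(pixel_stage(0.5, 0.5))      # one shaded pixel
```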

When rendering a scene in a 3D program, such as with Mental Ray in Maya 2016, there are several methods an artist can choose from: rasterization, ray tracing, and radiosity. OnlineMediaTutor (2015) explains rasterization as a real-time rendering technique used in games that manages to balance performance with good visual quality. Essentially, rasterization looks at all the triangles that currently make up the scene and calculates what can be seen from the current view; it then analyses other factors, such as shadows and light sources, and adds details like light and colour to the triangles that will be displayed. This is a popular and effective method for real-time rendering, but it loses out to the more advanced ray tracing in offline renders.

Ray tracing, as described by OnlineMediaTutor (2015), is a more advanced and resource-intensive method of rendering. It works by creating ‘rays’ for each pixel in the display area, following the path of light from every light source to the objects and finally to the camera. Used effectively, this method is very adept at creating photorealistic images and reflections; however, it is very resource demanding and isn't currently suitable for real-time rendering in video games.

Radiosity, explained by OnlineMediaTutor (2015), works by using global lighting to track how light diffuses and spreads in a scene, for example how light bounces around to illuminate a room and how natural shadows are formed as a result. This creates a natural shadow gradient and can be used to create light maps. Light maps can be baked directly into an object's textures to permanently keep the radiosity data, adding shadow detail in much the same way as ambient occlusion maps. This frees up resources later, especially for real-time rendering, where, combined with rasterization, detailed scenes can be made without massive resource demands.

NVidia Mental Ray

Mental Ray is now a standalone renderer, available for use in 3D programs such as 3DS Max, Maya, and Softimage. It has been used in all areas of the creative media industry, on projects such as ‘Hulk’, seen in Figure 14, and provides powerful light rendering for game creation. Mental Ray features powerful tools such as:

- Human hair shaders for fast and efficient rendering
- MILA layering shaders for advanced shader control
- Iray 2014 for GPU-based photorealistic rendering, utilising NVIDIA CUDA cores
- OpenEXR 2.0 for multi-layer render saves and future ‘deep’ data storage
- Global Illumination Importance Sampling, which evaluates the importance of lights in a scene for efficient rendering (Nvidia, 2015)

Figure 14. ‘The Hulk’ (2003), (ILM, 2003)

Guerilla Render

Guerilla Render is a light-based renderer that includes its own lighting tool and dedicated renderer. It has been used to good effect in productions such as Resident Evil: Retribution and Judge Dredd 3D, as seen in Figure 15. Guerilla Render's features include:

- An optimised GUI for fast rendering, including cached pre-computations for re-rendering
- An unbiased path tracer for quick preview renders and an updated workflow
- Powerful and original ray tracing methods for fast, optimised pipelines
- Physically plausible shaders to create realistic curves and surfaces
- Fast and accurate motion blur rendered using precise controls
- A large selection of light and environment shaders (Guerilla Render, 2016)

Figure 15. ‘Judge Dredd 3D’ (2012) (Prime Focus World, 2012)
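The core of a ray tracer is an intersection test repeated for every pixel's ray. A minimal sketch of the classic ray-sphere test, solving the quadratic for where a ray meets a sphere:

```python
import math

# The heart of a ray tracer: does a ray starting at `origin`, heading
# along `direction`, hit a sphere? Solve |origin + t*direction - center|^2 = r^2.
def hit_sphere(origin, direction, center, radius):
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    discriminant = b * b - 4 * a * c
    if discriminant < 0:
        return None                                   # ray misses the sphere
    return (-b - math.sqrt(discriminant)) / (2 * a)   # nearest hit distance

# One 'ray' fired from a camera at the origin, straight down -Z:
print(hit_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))   # 4.0
```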



When rendering professional images, it is also important to consider the impact of sampling and filtering. Sampling, as described by Szabolcs (2015), works by firing rays into the scene for every sample required; where these rays hit an object, information is requested about the shading at that point. Once the information is gathered, the renderer can calculate the colour of that particular pixel. Sampling can be an accurate but lengthy process, as each ray has to be inspected, and depending on the settings and the complexity of the shaders used, the number of rays to be inspected can rise dramatically.

Adaptive sampling allows multiple passes of sampling within controlled parameters, and can help eliminate aliasing and speed up rendering times. It does this by using a minimum and maximum number of samples, plus a contrast threshold, to selectively concentrate on areas that require additional sampling (Szabolcs, 2015). For example, if after the minimum number of passes the renderer finds that the difference between two sets of values in a particular area is greater than the contrast threshold (which governs what the difference should be), it will keep sampling that area to obtain more accurate results, until it reaches the designated maximum number of samples. In this way rendering can be sped up without having to sample all parts of an image constantly. It should be noted that each sample produces four rays, so increasing the number of samples multiplies the number of rays by four; high sample settings greatly increase rendering time and can be quite wasteful of time and resources. (Szabolcs, 2015)

V-Ray

V-Ray, created by Chaos Group, is a physics-based lighting, shading, and rendering plugin for a host of 3D modelling programs, such as 3DS Max and Maya. It has found success in video games such as Warhammer 40,000: Space Marine, as seen in Figure 16. V-Ray has many features, including:

- Faster performance in ray tracing and rendering
- Compatibility with open source tools and formats, including OpenEXR 2.0 and OpenColorIO
- New shaders such as Metaballs for rendering ray traced isosurfaces based on particle emissions
- The ability to create any material with physically correct properties
- A built-in frame buffer for quicker render output
- Colour correction and management tools (V-Ray, 2015)

Figure 16. ‘Space Marine’ (2011) (Plastic Wax, 2010)

Filtering is the method by which the sampling rays are added up and averaged. Different filters have different effects, such as blurrier or sharper renders, and it is important to choose the right

filter for the task at hand; for example, there is no need to use a lot of samples for detail in a blurry image. There are several filters available: Box, Triangle, Gauss, Mitchell, and Lanczos (Szabolcs, 2015). The Box filter takes a basic mathematical average of the samples collected; this often results in blurry images with no real detail. The Triangle filter is sharper than Box filtering, but slower as a result; samples further from the centre have less impact on the pixel (Szabolcs, 2015). Gauss filtering results in a soft image, with medium quality and rendering speed. Mitchell is the most

detailed filter, but the slowest of all; images rendered with Mitchell filtering will have no blurring of pixels at all. As a general rule of thumb, the more complex and slower the filter, the more samples are needed for a proper render (for example, Box might require only one sample, while Mitchell could need up to four). Lanczos filtering is slower still, but results in a finer, sharper image. The higher the number of samples, the softer the image; however, sample values under 1 will create artifacts in the render. (Szabolcs, 2015)

Another feature that can be used in both offline and real-time rendering is Anti-

Aliasing (AA). As Kerlow (2009, p.282) explains, anti-aliasing is usually a combination of two techniques, oversampling and interpolation: the AA function looks at nearby pixel colours, creates an average, and then uses that average to find the best colour for the pixel. As Goodnight (2011) notes, a by-product of AA is that lines become blurred, reducing the ‘jaggies’, which is desirable in most video games. Gordon (2014) notes that there are several methods of AA today, such as Multisample Anti-Aliasing (MSAA), which applies only to polygon edges, reducing the resource cost but unable to fix pixelated textures, and Temporal Anti-Aliasing (TXAA), which is newer and uses a combination of techniques to smooth out edges, but is only available on modern graphics cards, still has issues with blurriness, and uses more processing power to render.
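Oversampling and averaging can be shown in miniature: sample a hard edge several times within one pixel and blend the results into the final colour. A sketch:

```python
# Miniature anti-aliasing: a pixel that straddles a hard black/white
# edge gets the average of several sub-pixel samples instead of a
# single all-or-nothing value -- softening the 'jaggies'.
def scene(x):
    return 1.0 if x >= 0.55 else 0.0   # hard vertical edge at x = 0.55

def shade_pixel(px, width=10, samples=4):
    total = 0.0
    for s in range(samples):
        # spread sample points evenly across the pixel's footprint
        x = (px + (s + 0.5) / samples) / width
        total += scene(x)
    return total / samples

print([shade_pixel(px) for px in range(10)])
# pixels left of the edge are 0.0, right of it 1.0, and the pixel
# containing the edge lands on an in-between grey (0.5)
```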

Global Illumination, according to Kerlow (2009, p.181), is a powerful rendering technique that can create physically accurate images, as it calculates the indirect illumination of assets, which can include diffuse, specular and glossy ‘inter-reflection’ between different surfaces; it also accounts for light transmitted from other sources. While radiosity is one form of global illumination, there is another technique, called photon map rendering: small packets of energy known as photons are emitted into the target scene, mimicking the way light travels through space. The values in the photon map change as the photons are reflected, absorbed, and so forth, and are used by global illumination to render light in the scene more accurately. (Kerlow, 2009, p.183)

Furryball

Furryball is a GPU renderer from AAA Studios. It is unusual in being a very fast GPU-based path tracing plugin, advertised as up to 15x faster than CPU-based renderers like Mental Ray. Figure 17 shows Furryball being used in a game concept render. Furryball's features include:

- Extremely quick GPU rendering, resulting in fewer rendering units being needed
- Compatibility with Maya, 3DS Max and Cinema 4D
- Unlimited output resolution, textures, and lights
- Physically based global illumination using ray tracing
- Unbiased and biased final-frame rendering with the path tracer
- Simple controls that make it easy for artists to use (Furryball, 2015)

Figure 17. Frogster Interactive (2015) (Frogster Interactive, 2015)

Rendering such heavy visual effects and cinematics requires far more computing and graphical power than a single computer, however well upgraded, can provide. It is therefore common practice for studios to create render farms. As described

by OnlineMediaTutor (2015), render farms consist of multiple networked computers, often built for the singular purpose of rendering.

“Autodesk Backburner is a background rendering network system” (Autodesk, 2015)

This works by programs such as Autodesk's Backburner sending parts of each frame to every available networked computer for rendering; the parts are sent back once complete, and the home computer stitches the scene

back together. In this way it is much faster and more efficient than a single computer rendering everything, and, more importantly, it saves the studio a great deal of time and money, as it could take years for a single computer to render a project by itself.

For scenes and projects that require significant amounts of visual effects, such as The Lord of the Rings and Ghost Rider 2, plugins like RealFlow and Miarmy are essential tools; each has its own strengths when it comes to large-scale effects work. RealFlow, made by Next Limit Technologies, is a powerful fluid simulation plugin that can be used ‘out of the box’ and is compatible with all leading 3D modelling software. Its strengths are concentrated around accurately simulating how fluids move and interact, and it can even be used to create ocean surfaces. It uses a node-based system called HyFLIP that allows the creation of small to medium fluid systems quickly and easily. (Next Limit Technologies, 2014a) RealFlow also features Caronte Dynamics, a powerful soft and rigid body dynamics engine. Artists can set and change rigid body values for objects to enable interaction with RealFlow's sophisticated particle and fluid system, creating highly realistic scenes, and can even deform soft body objects. (Next Limit Technologies, 2014b) RealFlow has also been used in numerous game cinematic trailers, including BioWare's Mass Effect 3, for which The Mill used RealFlow to create the particle and fluid effects seen in the trailer in Figure 19.

Figure 19. ‘Mass Effect 3 “Take Earth Back” Uncut and Extended Cinematic Trailer HD’ 1:28 (2012) (NerDoneGaming’s Channel, 2012)

Miarmy, created by Basefount, is a large-scale crowd simulation, rendering, and animation plugin for Autodesk Maya. It uses a Human Fuzzy Logic network to create and control large crowd simulations, and doesn't require the use of nodes or programming. Miarmy supports all renderers, and also features its own animation engine supporting bone structures and skinning methods, along with Maya visual effects like particles and fluids. (Basefount, 2015a) Miarmy has been used on projects such as XuanYuan Sword, specifically for the cinematics, in which Miarmy Crowd was used to simulate horse riders with dust particle effects and dynamics. This can be seen in Figure 20.

Figure 20. XuanYuan Sword (2015) (Basefount, 2015b)

Arnold

Arnold is a ray tracing plugin that uses the Monte Carlo method, developed by Solid Angle in collaboration with Sony Pictures Imageworks. The extensive gallery of work created with Arnold includes the popular ‘Game of Thrones’, seen in Figure 18. Features of Arnold include:

- Availability as a plugin or as a standalone renderer
- A fast and efficient physically based ray traced renderer
- Scalable multithreading for efficient CPU use
- Ray traced curve primitives for fast, low-memory rendering of fur and hair
- Modular geometry creation using procedural nodes
- An accurate motion blur system (Solid Angle, 2015)

Figure 18. ‘Game of Thrones’ (2011) (RodeoFX, 2015)
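The scatter-and-stitch approach described above is easy to model with a local process pool standing in for the networked machines. A toy sketch of the idea only, not of Backburner's actual protocol:

```python
from multiprocessing import Pool

# Toy model of a render farm: frames are farmed out to a pool of
# workers (stand-ins for networked machines), rendered independently,
# then collected and stitched back into order by the 'home' machine.
def render_frame(frame_number):
    # stand-in for an expensive render; returns (frame, fake image data)
    return frame_number, f"image_{frame_number:04d}"

if __name__ == "__main__":
    with Pool(processes=4) as farm:
        results = farm.map(render_frame, range(24))   # one second of film
    sequence = [image for _, image in sorted(results)]
    print(sequence[:3])   # frames arrive complete and in order
```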

“There are some scenarios where it will be impossible for the player to see certain parts or sides of a model” (Silverman, 2013)

In the games industry, there are restrictions on how detailed models and environments can be, due to the nature of game engines. As everything is rendered in real time, speed and frame rate are among the most important aspects of the game. As a result, restrictions and limits are placed on a number of areas of 3D modelling: polygon counts, file sizes, and rendering time. Rendering time depends on many factors, with polygon count and file size being the main contributors; rendering a low poly, small asset will be easier than a large, high poly model, for example. Films and TV use high poly models that run to millions of polygons; it is beyond even the most powerful game engine to render models of that complexity in real time. Silverman (2013) advises that artists adopt the habit of removing any polygons and triangles that will not be seen or needed, to reduce rendering time.

Polygon count restrictions are used to maximise efficiency and free up resources that can be used elsewhere. As Kerlow (2009, p.132) explains, optimising models for a video game is a balance between speed and visual compromise. Using appropriately sized texture maps and baked textures all helps immensely towards giving the player the illusion of highly detailed models while maintaining low poly counts for fast and smooth gameplay. As Silverman (2013) explains, game engines use triangles when rendering 3D models; models with poor topology will take more time to render as the game engine finds the best method of dividing the model into triangles.

“when modeling for games the most important thing to consider is the polycount of your model, and keeping all of your polygons in Quads or Triangles.” (Silverman, 2013)
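The quad-to-triangle relationship is simple arithmetic that artists use when budgeting: engines triangulate on import, so each quad becomes two triangles. A sketch, with a hypothetical per-asset budget:

```python
# Estimating the in-engine triangle cost of a model: game engines
# triangulate on import, so each quad counts as two triangles.
def triangle_count(quads, tris=0):
    return quads * 2 + tris

budget = 15000                       # hypothetical per-character budget
for name, quads in [("hero", 6800), ("prop_crate", 220)]:
    cost = triangle_count(quads)
    print(name, cost, "OK" if cost <= budget else "over budget")
```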

The file size of an asset is linked to the complexity of the file itself, for instance a low poly model compared with a high poly one. However, a model with a low poly count can still have a large file size if it uses complex shaders and texture maps; a 512 x 512 texture map will have a much smaller file size than a 4K texture, for example. Using too many large files can dramatically increase rendering time as each map is processed. Artists and developers must therefore strike a compromise between the quality of texture maps and shaders and the rendering time.

Conclusion

To conclude, it is important for aspiring 3D artists to understand the principles and theory of 3D modelling before they begin to model. By understanding how 3D models are created and behave, the many tools available, and the application and usefulness of workflows and pipelines, artists can confidently create efficient models for either the film or the games industry while meeting the tight deadlines of production. The games industry is rapidly evolving, and its tools must evolve to keep up with the latest demands, trends, and techniques. Artists must keep up to date with these trends and techniques to take full advantage of what the vast world of 3D modelling has to offer.

Author: James Wills

