Manual Thea Render

Page 1

physically-based unbiased & biased render engine

USER MANUAL. Find further information at http://www.thearender.com

© Solid Iris Technologies 2010


Thea Render Manual Thea Studio

ABOUT THEA RENDER

Thea Render is a physically-based, high-quality global illumination renderer. It is unique in being able to render with state-of-the-art techniques in both biased photorealistic and unbiased modes. Thea Render comes with its own standalone application (Studio) for intuitive setup and advanced staging operations.

SYSTEM REQUIREMENTS

OpenGL support is required for the Studio application. Memory: 256MB RAM (1GB or higher recommended for rendering complex scenes). Resolution: 1024x768 or higher, 32-bit color; a 3-button wheel mouse is recommended for the Studio application.

DISCLAIMER OF WARRANTY

Thea Render is provided "as-is" and without warranty of any kind, express, implied or otherwise, including without limitation any warranty of merchantability or fitness for a particular purpose. In no event shall the author of this software be held liable for data loss, damages, loss of profits or any other kind of loss while using or misusing this software. The software must not be modified; you may not decompile or disassemble it. Any kind of reverse engineering of the software is prohibited.

THEA RENDER - VERTIGO EDITION

Copyright (C) 2008-2010 - Solid Iris Technologies

Page 2

find further information at http://www.thearender.com and our Forum at http://www.thearender.com/forum/


Table of Contents

INSTALLING THE APPLICATION.......................................... 4
THEA INTERFACE................................................................. 8
MATERIAL EDITOR................................................................ 16
ENVIRONMENT..................................................................... 20
RENDER SETTINGS PANEL................................................. 22
DARKROOM........................................................................... 25
THEA RENDER MATERIALS................................................. 28
THEA RENDER SETTINGS................................................... 42
GLOBAL ILLUMINATION TUTORIAL..................................... 56
NETWORK RENDERING....................................................... 63

Page 3


INSTALLING THE APPLICATION

Before running Thea Render for the first time, you have to install it. Installation is necessary so that the executable can locate the needed resources even if you run the application from another folder, and so that all files are transferred to the proper place.

Windows

You will need administrator privileges to install on the Windows operating system. The installer will guide you through the installation steps. On a 64-bit operating system, you can install either the 32-bit or 64-bit version of Thea Render.

MacOSX

Thea Render runs on MacOSX 10.4 or later (the current application is 32-bit only). The installer comes in the form of a dmg image that the user needs to mount. By exposing the contents of this image, you can see the application bundle - simply drag and drop the icon into your Applications folder.

Linux

On Linux, Thea Render currently comes in a .tar.gz package that needs to be decompressed into the running folder. In the folder, you will then see two shell scripts (install.sh and install-x64.sh) - run the one that corresponds to the variant of your operating system.

On all operating systems, the first time you run Thea Render, you will be asked for your locale (language). Please note that while you may select your own language, you will see no change in the interface since translation files have not yet been created.

LICENSING AND ACTIVATION

If you are currently just evaluating the software, you can skip this section.

When you purchase a Thea Render license, you will receive a serial number that corresponds to your license. The serial number needs to be entered along with some basic information to activate your license. Automatic activation needs an internet connection (should you not have access, you can still request the activation code manually). Note that this needs to be done only once after installing the software.

Page 4


The first step to activate your license is to open Thea Studio, go to Help and then click on License Form; this will bring up the license input form, as shown in figure A1. Here you can immediately see all data associated with your license.

Figure A1

Please go ahead and type your full name and the e-mail address that will be used to send you any activation codes. Both of them need to be written in Latin characters, and you should make sure that the e-mail address is operational and correctly typed. Please fill in the serial number that you received as well. Do not yet click any of the buttons OK, Cancel or Request Activation Code.

After filling in the Full Name, E-mail Address and Serial Number fields, you can switch to the Plugins panel by clicking on the corresponding tab. The view will change to that of figure A2. Here you can enter the serial numbers you have received for the plugins. You can also leave this for a later time (or add serial numbers whenever you purchase more plugins). Do not yet click the OK or Cancel buttons.

Figure A2

Figure A3

You can switch back to the Main panel and click on Request Activation Code (figure A3). A message box will then pop up, asking if you want to test your e-mail address first. This tests both whether your internet connection works and whether your e-mail address is functional, so it is recommended to click Yes. Thea Render will then contact the web server to send you a test mail.

Should you press No, or once your test mail arrives, you can proceed to the activation code request. Thea Render will again contact the web server, which sends an activation code to your e-mail address. When it arrives, you can enter it in the Activation Code field to activate your license. Simply press OK on the license input form and restart the application for the activation to take effect.

Page 5


Please note that the first time you activate your license, your data is used to register the software. It is very important that this data (name and e-mail) is correct, as your registered name and e-mail address will be used whenever you need to re-install and reactivate Thea Render.

Figure A4

Please also note that it is not necessary to wait for the activation code to arrive in your inbox. You can still press OK on the license input form, quit the application, and enter the activation code at a later time.

Some further operations on your license are possible from the Misc panel (figure A4). A typical scenario here is exporting your license data for your archive. The license can then be re-imported whenever you need to re-install Thea Render (or upgrade to a newer version). Please note that re-activation may be needed even if you import a valid license, in case you are installing on a different or upgraded system.

Clearing the license should be done only in rare cases, such as when changing a client license to a full one or changing the serial number of the application or plugins. It is important to understand that you cannot change your registered data (name and e-mail) by clearing the license, entering new data and re-activating. You will need to contact us for this purpose.

If you have any questions about licensing or any problems with activating your license, please contact us at <license@thearender.com>.

Page 6


physically-based unbiased & biased render engine

Image by Patrick Nieborg / Model by Tim Ellis

Page 7


THEA INTERFACE

The first time you actually run the application, you will see the workspace (Viewport/Darkroom/Console) in the center, the scene view and properties panel on the left, the settings panel (Material Lab/Environment/Render) on the right, and a browser at the bottom (figure 1). All of these windows may be closed by clicking the X button located in the top-right corner. The only exception is the workspace window, where in the same position you will find the maximize/restore button.

Figure 1

At first glance, the interface looks somewhat empty. Nevertheless, a big effort has been put towards a clean and simple design. The interface consists of non-modal panels (with the exception of color selection and few-options dialogs) for improved workflow and efficiency.

Page 8

Looking at the menu bar, you can find the most fundamental options, like opening and saving a file, initiating a render, managing the windows and customization. The main "action" and most menu commands have been placed inside the panels. In the next paragraphs, we are going to briefly describe all the panels found in Thea.

SCENE VIEW

The scene view (figure 2) lists all the objects in your scene, that is, the scene models, point lights and cameras. Additionally, in this hierarchical list, we can also find the material and model-proxy lists. Whenever an item is selected, it is also selected in the viewer and, in the case of a model or material, its preview can be seen in the material lab as well. You can quickly rename an item by clicking on it a second time (i.e. after it has been selected).

Next to the names of the scene items, there are some property flags that are enabled based on item status. For example, in the case of models, one can easily see whether a model is hidden or animated. For materials, one can easily check whether a geometric modifier is applied to the material or whether the material corresponds to an area emitter.

Right clicking in the scene view brings up a popup menu with basic commands, mostly for removing the selection and cleaning up the scene from unused objects. Through this popup menu, one can also assign a material to a model, and currently this is the only way to assign an interface (an interface is a secondary material describing the "back side" of the object). Some basic modeling commands are also present here for the model (smoothing, vertex welding and normal flipping) and the scene (moving/scaling the whole scene and swapping the Y-Z axes).

Figure 2. scene view

Page 9


VIEWPORT

The realtime viewer in the center of the application (figure 3) is where all the action is when you want to perform staging tasks, like moving an object, adding a light, etc. At the top you can see the main bar, where you can select the action type (that is, the left mouse click), access object commands (grouping/hiding/deleting/etc.), add lights and cameras, and show other configuration panels in the viewer (transform/animation/interactive render). At the bottom, there are more options related to the view mode. For example, from here you can change the camera view, switch to a side view, and change the drawing mode. By default, the objects are drawn in wireframe, but we can change this to hidden-line, which makes things clearer.

Figure 3. Viewport with 4 windows (press "0" key to switch between 1 window and 4 windows)

A Action Toolbar
B Full screen mode
C Camera Frame
D Cursor Frame
E Viewer Toolbar
F Global Frame (can be used to rotate the Environment or HDRI background)
G x, y and z Frame

Page 10


To move around the viewer, you can press the middle/right mouse buttons and move the mouse. This way you can rotate around your selection (if any) or pan the view. With the mouse wheel, you can zoom in/out. Note that you can change the navigation to Maya style by pressing 'm' (or by changing the controls file from the top menu Customize > Theme and restarting the application). From there on, in select mode, the left mouse button serves to select an item or drag and drop a gizmo (gizmos are manipulation tools that appear when you select an item). If you have made a selection, you can hold down the Control key and left click on another object to toggle objects in your selection. Note that the realtime viewer is based on OpenGL, so if you run into any drawing problems, make sure that you have updated the drivers of your graphics card.

ACTION TOOLBAR

A Object selection & Viewport navigation
B Undo & Redo
C Group & Ungroup: by holding the Control key + left mouse click, you can select more than one object; clicking on the first icon (Group) groups the selected objects (a Group icon will appear in the Scene Editor), and the second icon (Ungroup) ungroups them again
D Duplicate object
E Object Transform: Move, Rotate, Scale
F Delete Object
G Show all objects & Hide selected object(s) (when an object is hidden, it will not get rendered)
H Object visibility/render Layers (Control + left mouse click adds selected objects to a Layer)
I Insert lights and cameras into the scene
J Preference settings for Viewport element visibility (x, y, z Frame, Compass, Grid)
K Tools (Transform, Animation, Interactive Render)
L Hide Toolbar: clicking on this icon hides the entire Toolbar; a small button will appear in the top-left corner, and clicking on it restores the Toolbar

Page 11


VIEWER TOOLBAR

A View indicator: shows the current view when not in camera view; it also contains the view menu, where you can change the view point
B Camera tools: the first icon gets highlighted when you are in camera view and also serves to cycle through all the cameras in the scene. The second icon serves to "mount" the camera you are looking through; this way, when you zoom, rotate or pan, you are actually repositioning your camera, a great option for readjusting the camera position (it only works when you are in camera view). The last icon, "go to camera view", switches to a camera view by selecting any camera available in the scene.
C GL Render mode display and selector
D Toggle between perspective and parallel view
E The first icon fits the selected object in the viewer frame and the second icon centres the selected object in the viewer frame
F Hide Toolbar: clicking on this icon hides the entire Toolbar; a small button will appear in the bottom-left corner, and clicking on it restores the Toolbar

TRANSFORM TOOL

The Transform Tool (figure 4) serves for precisely placing, scaling or rotating an object in the scene and can be accessed through the Action Toolbar under the Tool icon.

The first icon allows changing the units used for the operation: meters, centimeters, millimeters, feet or inches for translation, and radians or degrees for rotation.

Page 12

The next three icons serve for aligning between two selected objects.

Figure 4.


Clicking on the last icon will bring up the Bitmap Coordinates panel (figure 5), which allows you to adjust the bitmap texture scale, offset and rotation.

The projection type can also be changed by clicking on any of the projection icons. Note that when using the Bitmap transform panel, all bitmaps used on the object/material get the same transformation applied.

Figure 5.

If only one bitmap should change its coordinates, while the rest of the bitmaps used by the material keep their original coordinates, then the bitmap coordinates transformation should be made inside the Material Editor/Texture Editor. To be able to see the texture adjustments in realtime, one should switch the viewport to solid rendering, which can be adjusted through the viewer toolbar in the viewport. As only one bitmap can be displayed at a time, you need to select the bitmap you want to see in the Material Editor by left clicking on the "Texture" button of the bitmap you want to display in the viewport. Right clicking on the "Texture" button will open the Texture Editor.

ANIMATION TOOL

A Frame window
B Clickable area for frame selection
C Frame navigation
D Key Frame navigation
E Close Tool
F Scrollbar
G Extend/reduce Frame view
H Set Key Frame / delete Key Frame / delete all Key Frames

Page 13


Page 14


physically-based unbiased & biased render engine

Image by Alex Murat

Page 15


MATERIAL EDITOR

On the right side of the application, various settings panels exist. These are the material laboratory, which is essentially the material editor, the environment options panel and the render options panel.

MATERIAL LAB

The material lab is the "codename" for the material editor window of Thea Render. This is the first thing you see on the right of the application, as it is the first active settings page. There are basically two panels in the material lab: one showing the preview with some generic options, and another below with the layer schematics and all the options to tweak your materials. Under the Layer panel, the properties of the currently selected feature are displayed (figure 6). Both the preview and layer areas support drag and drop. This is very useful since - in conjunction with the browsers - we can easily store and assign materials. Whenever a model is selected, its corresponding material is transferred to the material lab for editing. From there on, we can easily apply a material from the browsers by drag & drop, or open a material using the corresponding icon (on the top left bar).

Figure 6. The material lab

A File/Undo Operations
B Layer Schema (Clickable Area)
C Geometric Modifiers
D Area Light/Inner Medium
E Layer Operations
F Description
G Room Selection/Rendering
H Switch Schematics List View
I Add-Layer Options
J Material Properties Panel

Page 16


TEXTURE EDITOR

The texture editor is where complex textures can be created by using the atomic entities of color, bitmap and procedural. The texture editor is essentially a hierarchy of textures that are added (in a normalized manner), multiplied or synthesized.

Figure 7. The texture lab

Looking at the texture editor (figure 7), we can see the preview panel at the top and the texture layers in the middle. The layers of textures are arranged in a 3x3 grid. The 3x3 grid can be "enhanced in resolution" by editing a texture at a sublevel; this can be done by selecting a texture and clicking on the Level2 button.

Implicitly, the operation applied between the layer rows is normalized addition, and the relative weights can be adjusted by the slide bars found under the rows. The operation applied to textures inside the columns can be either multiplication or synthesis (the operation is changed by simply clicking on the symbol between the texture placeholders). Synthesis is a special operation where the result of each texture in the row is used to define new coordinates for the next one; in many cases, some unique and interesting patterns can be achieved this way.
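The three combination modes can be sketched in a few lines of Python. This is an illustrative sketch only, not Thea's internal code; the function names and the exact normalization are our assumptions, and textures are modeled as plain callables for simplicity.

```python
def normalized_add(rows, weights):
    """Blend row results by normalized addition: the slider weights
    are rescaled to sum to 1 before mixing (our assumption of how
    'normalized' addition behaves)."""
    total = float(sum(weights))
    return sum((w / total) * r for w, r in zip(weights, rows))

def multiply(tex_a, tex_b, uv):
    """Multiplication: evaluate both textures at the same coordinates
    and multiply the results."""
    return tex_a(uv) * tex_b(uv)

def synthesize(tex_a, tex_b, uv):
    """Synthesis: the first texture's result is reused as the lookup
    coordinates of the second, warping its pattern."""
    return tex_b(tex_a(uv))
```

For example, with `half = lambda uv: uv * 0.5` and `plus1 = lambda uv: uv + 1.0`, multiplication at `uv = 2.0` yields `1.0 * 3.0 = 3.0`, while synthesis yields `plus1(1.0) = 2.0`, showing how the same two textures produce different patterns under each operation.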

On the right side of the layers panel, we find all the buttons that can import a texture in the editor. Thus, we can open a bitmap, create a color, or browse the bitmaps and procedurals already available in the scene. Note that the available bitmaps and procedurals can also be browsed

A File/Clear Operations
B 3x3 Texture chip grid
C Texture Options
D Delete Texture
E Texture Preview Options
F Global Texture Coordinates
G Texture Selectors
H Texture Level 2
I Properties

Page 17


using the Textures panel (checked from Window menu) and dragged and dropped inside the texture editor for efficiency.

On the left side of the layers panel, there are the buttons related to texture options. The very first options button holds the properties specific to the selected texture, while the coordinates and tone mapping properties are more generic and apply to all bitmaps and procedurals. Besides the "local coordinates" options, there is also the global coordinates button; there, any offset or scale affects the whole texture - this way, it is easy to re-adjust a complex texture to different scene metrics.

COLOR EDITOR

Thea Render enjoys a powerful color laboratory where the user can define colors visually in many different color spaces (figure 8). In the tristimulus panel, the color can be parameterized with three different sets of parameters: Red-Green-Blue (RGB), Hue-Saturation-Value (HSV) or the standard CIE XYZ. The first thing one notices to the left of these parameters is the tristimulus color picker, a 2d pattern with Hue-Saturation axes and a column representing the Value. These patterns include all colors that can be parameterized in HSV space (and consequently in RGB).
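The HSV-to-RGB correspondence the picker relies on is a standard conversion; as an illustration (independent of Thea itself), Python's standard `colorsys` module performs the same mapping:

```python
import colorsys

# Pure red: hue 0, full saturation, full value
r, g, b = colorsys.hsv_to_rgb(0.0, 1.0, 1.0)
print(r, g, b)  # 1.0 0.0 0.0

# Zero saturation collapses any hue to gray at the given value
gray = colorsys.hsv_to_rgb(0.33, 0.0, 0.5)
print(gray)  # (0.5, 0.5, 0.5)
```

This is why the picker's 2d Hue-Saturation pattern plus the Value column covers every RGB color: the conversion is a bijection over the unit cube.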

Figure 8. The color lab

On the left of the Thea color lab, we find the color preview, where we can see the plain color that is currently selected, along with a series of colors that show how the selection is perceived under different lighting conditions. You might find it interesting that some colors have a totally different appearance when, for example, used for an object near an incandescent light bulb. These differences may actually be quite important based on the lighting in your scene (nevertheless, in most cases, an unexpected color shift in the render can be compensated by white balance post-processing).

Page 18


Besides the tristimulus setup, there are also the spectrum and blackbody selection panels. With the spectrum, we can directly change the distribution in spectral space (between 380 and 780 nm). The resolution of the edited spectrum can also be increased or decreased by clicking on the corresponding dots at the top-right of the Spectrum panel. In the blackbody selection panel, we can select a color based on its correlated blackbody temperature. A color selected this way differs from the actual Planckian distribution by a scale factor (this is needed to ensure that the distribution does not exceed 1 and thus can also be used inside BSDF models).

Note that whenever a selection is made, the corresponding panel title becomes highlighted. This gives a quick hint of the current working space and prevents any mistakes, since there is no 1-1 correspondence between these spaces.

On the top-right side of the color lab, there is the current palette of colors. We can click anytime on a palette color to select it and see its properties in the rest of the panels. By clicking on an empty cell, on the other hand, we can store the current color in that placeholder (note that a palette is automatically saved whenever switching to another or closing the lab). We can also browse the colors by name by clicking on the letter A (lower left of the palette); this will change the view to color pages. At the top, there are controls to undo our changes, select a different palette, and delete or create a new one.
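The blackbody rescaling described above (a Planckian distribution divided by its own maximum so it never exceeds 1) can be sketched as follows. This is an illustrative sketch using Planck's law over the visible band, not Thea's exact implementation:

```python
import math

# Physical constants: Planck (J*s), speed of light (m/s), Boltzmann (J/K)
H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def blackbody_spd(temperature_k, wavelengths_nm):
    """Planckian spectral distribution, rescaled so its maximum over
    the given band is exactly 1 (mirroring the scale factor the manual
    mentions, so the result can act as a reflectance-like curve)."""
    vals = []
    for wl in wavelengths_nm:
        lam = wl * 1e-9  # nm -> m
        vals.append((2.0 * H * C**2 / lam**5)
                    / math.expm1(H * C / (lam * KB * temperature_k)))
    peak = max(vals)
    return [v / peak for v in vals]

# The visible band used by the spectrum panel: 380-780 nm
wl = [380.0 + 5.0 * i for i in range(81)]
warm = blackbody_spd(3000.0, wl)  # incandescent-like: red-heavy
```

A 3000K curve rises toward the red end of the band, while a very hot emitter (say 20000K) is weighted toward blue, which is exactly the color shift the preview swatches illustrate.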

Page 19


ENVIRONMENT

The environment settings describe environmental lighting in addition to any point and area light sources the user may define in the scene (figure 9). The global medium settings can also be found here.

PHYSICAL SKY

The physical sky settings contain all the parameters that set how the sky looks. The options are physically-based parameters of the sky model used inside Thea Render. Note that whenever the user enables the physical sky, two more actions transparently take place: first, image-based lighting is disabled and, second, the sun is generated (this can be seen in your scene view under point lights). From there on, with the location and time, the user can affect both sky and sun color and power.

The global medium is the medium surrounding the scene and the one used whenever there is a surface-light interaction. The index of refraction found here is the index of refraction of the global space (more accurately, the space where the camera is). The rest of the parameters are the same as in the Medium properties (see Material Editor section).

IMAGE-BASED LIGHTING (IBL)

Page 20

Image-based lighting is a convenient way to add illumination to your scene, coming from captured photos of the surrounding environment. Since a photo of a real scene can be used, the lighting is highly convincing and enhances the realism of your renders. In most cases, the images used for this kind of lighting need to be of high dynamic range in order to provide enough lighting for a scene. In the IBL panel, one can use an image for illuminating the scene; nevertheless, one can also

Figure 9. Environment settings

set up different images for background, reflections and refractions. This makes it possible to use a different source for lighting than for reflections/background, which in most cases need more detail in the image. This is actually a usual render optimization, where the illumination source is a relatively low-detail texture so that the image quickly "converges", while background and reflections use a detailed map for visually enhanced results.


physically-based unbiased & biased render engine

Image by Duncan Howdin

Page 21


RENDER SETTINGS PANEL

In the last settings panel, one can find the render panel, where the options related to rendering itself can be tweaked. As you might expect, the settings related to the unbiased core are minimal and only related to things peripheral to rendering - on the other hand, tweaking biased settings may have a great impact on the render output and time. In all cases, to separate things easily, the biased settings are written in blue - these parameters are not used in the unbiased core.

In the first panel, the General panel (figure 10), we can change the render engine used, along with other generic properties - in most cases, these properties apply to all engines. In Thea Render, there is one biased engine (BSD) and two unbiased engines (TR1 and TR2). The unbiased engines are already finely tuned, but the user can pick one based on the scene in order to speed up rendering. Unbiased engine TR1 should be preferred for exterior renders and for interiors where direct lighting is dominant in the scene. Unbiased engine TR2 should be preferred when difficult indirect lighting is dominant in the render or heavy caustics are present (such as "sun-pool caustics").

Page 22

The time limit found in the main panel can only be used with the unbiased engines to limit the render time. The biased engine operates in a bucket-render fashion rather than progressively over the whole image, so a time limit is not of practical use there. Depth blur is always present in the renders (assuming that the camera does not use the ideal pinhole model). Motion blur and volumetric scattering can be turned on and off (note that volumetric scattering for the biased engine is still pending, although subsurface scattering works).

Figure 10. General Render Panel

The Relight option turns on the use of multiple image buffers, one for each light group, and is only available for the unbiased engines. The Relight option is a powerful means of producing multiple renders or altering the lighting as a post-process without losing the unbiased characteristics of your renders. This option should be turned on in order to have the relight panel working in the Darkroom after the render is finished. Please note that invoking Relight will need additional memory resources; the amount of extra memory directly depends on the light sources and the resolution of your render.


In the distribution panel, network rendering and multi-threading can be set up. Network rendering improvements are still pending (scene packaging, sending the scene on request, client as a service).

In the animation panel, the basic parameters of an animation render can be set up, such as the total number of frames and the frame rate. The user may select to render the current frame, all frames or selected frames based on a given list. The motion of objects with respect to time can be described using the viewer interface. Note that currently only object motion is supported (point light and camera motion is missing).

Figure 11. Biased Render panels

BIASED RENDER SETTINGS

In contrast to the unbiased settings, where there are essentially no parameters to tune, the biased core comes with a long list of parameters that can have a great impact on the final image and render time. The usual approach is to use low quality settings during scene tweaking and lighting experimentation, and then change to high quality for production renders. The two panels related to biased core parameterization can be found under the settings panel (figure 11); they are called "Biased RT" (RT stands for ray tracing) and "Biased GI" (GI stands for global illumination).

The ray tracing panel contains the basic parameters for tracing reflecting and refracting objects and evaluating direct lighting. Of particular importance is the "Perceptual Based" parameter, which changes the direct lighting estimator to one that is guaranteed to deliver noise-free estimation of scenes that are arbitrarily large in terms of light count. Thus, this option is advised whenever the number of lights grows to the point that manually tweaking them requires a lot of time. Nevertheless, for scenes with few emitters, leaving this option unchecked should be preferred, since the rendering is faster.

The global illumination approach used in the Thea Render biased core is either photon mapping combined with final gathering, or light cache (by disabling photon mapping and increasing the gather diffuse depth). All the global illumination elements, and especially photon mapping and subsurface scattering, require significant amounts of memory. Subsurface scattering in particular is advised to be used sparingly and for small meshes, since the engine needs to create dense point clouds for each one of them.

Page 23


physically-based unbiased & biased render engine

Image by Peter Stoppel

Page 24


DARKROOM

Clicking on the Darkroom tab of the Workspace panel (the tabs are located at the bottom), you will see the actual space where you can manipulate your rendered image and apply any post-processing.

Above the darkroom post-processing controls, you can see the render status bar (figure 12). From this bar, you can quickly start/stop a render and save/refresh an image, but also check on the render progress. Above it, there is the space where the rendered image appears.

Figure 12. render status bar

As soon as you press the start render button, the render process begins and you can see the progress in the status bar. Currently, the image is automatically refreshed every 30 seconds, but there is also the refresh button to refresh it at any moment. If the rendered image does not fit within the viewing area, you can use the mouse wheel to zoom in/out and the left mouse button to click and drag around the image.

DISPLAY

Under the status bar, we can see the display and relight options. The display panel contains a set of exposure and filter controls for image tone mapping adjustments and enhancements (figure 13). The first controls in the left column (ISO, Shutter Speed, f-number) are the camera values controlling exposure. Note that these controls are not synchronized with the camera settings when starting a render. The last controls in the first column (Gamma, Brightness) control the final monitor adjustment.

Figure 13. Display
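The way the three camera values interact follows standard photographic exposure: brightness scales linearly with ISO and exposure time, and inversely with the square of the f-number (which measures aperture area). A minimal sketch of this relationship (illustrative only; Thea's exact internal mapping is not documented here, and the reference normalization to ISO 100 is our assumption):

```python
def exposure_scale(iso, shutter_time_s, f_number):
    """Relative exposure multiplier for camera-style tone mapping.
    Doubling ISO or exposure time doubles brightness; halving the
    f-number (opening the aperture by two stops) quadruples it."""
    return (iso / 100.0) * shutter_time_s / (f_number ** 2)
```

For instance, going from ISO 100 to ISO 200 at fixed shutter and aperture doubles the multiplier, and opening from f/8 to f/4 multiplies it by four, matching the stop arithmetic photographers use.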

A Save Image
B Time Passed
C Refresh
D Render Phase
E Network
F Threads
G Start Render
H Pause Render
I Phase Progress
J Stop Render

Page 25


The Burn control found in the right column is also related to exposure, since it can be used to effectively compress a high dynamic range (HDR) image into a low dynamic range (LDR) image, presentable on screens and other limited range devices. Setting burn to 100% means that there is no compression (i.e. it behaves the same as disabling the control).
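The kind of highlight roll-off such a control performs can be sketched with a simple Reinhard-style curve. This is purely illustrative and is not Thea's actual Burn formula; the only behavior taken from the text above is that 100% means no compression:

```python
def apply_burn(value, burn):
    """Illustrative highlight compression in the spirit of the Burn
    control. burn = 1.0 (100%) leaves the pixel untouched; lower
    values roll HDR highlights off toward the displayable range."""
    if burn >= 1.0:
        return value          # 100%: no compression, as described above
    k = 1.0 - burn            # compression strength grows as burn drops
    return value / (1.0 + k * value)
```

With this curve, an HDR pixel of value 10.0 at 50% burn is compressed to 10/6, while at 20% burn it is squeezed further to 10/9, so brighter highlights are pulled progressively harder into the display range.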

The filters found in the right column can be used essentially for image enhancement. The Sharpness filter is the most crucial, controlling the filtering during downsampling of the image; it is advised to keep it enabled at the default 50% value, which is balanced between blurring and sharpening. A value near 100% will produce a highly sharpened image, while a value near 0% will produce a blurred image. Note that this control is only used in renders where supersampling is enabled (by default, biased renders do not use supersampling and therefore changing the Sharpness will not have any effect on them).

Bloom and Vignetting control the corresponding enhancements. Please note that filtering the image may take some time, so it is advised not to use time consuming filters (most notably bloom) during rendering.

The Balance filter can be used to change the overall color balance of a render. Balancing an image using color temperature makes an exact compensation for any color shift of white objects when lit by a blackbody emitter of the same temperature. In a typical scene, where there are multiple secondary bounces from colored objects, the optimal white balance temperature may be slightly different from the correlated temperature of your emitters - some experimentation may be needed for optimality.

RELIGHT

The relight panel can be used for relighting a scene by changing emitter power and color. It also makes use of key-frame based animation controls in order to produce a dynamic lighting animation from a single render (figure 14).

Figure 14. Relighting a scene with four lights – thumbnails are on

Page 26

Currently, the relight process can only be used with an unbiased render. The Relight option in the render settings panel must be enabled prior to starting the render in order for the post-processing controls to be available. During initialization, multiple buffers – one for each light group – are allocated, giving the capability of blending their results and producing an animation sequence. This means that Relight is more demanding in terms of memory. Since each of these light buffers must converge, overall convergence in this case will be slower than rendering into a single buffer (where a bright emitter may quickly hide the noise of a dim one). For these reasons, it is advised to disable Relight when only a single render is needed. Nevertheless, when used to produce an animation or a lighting study, the benefits are high and definitely amortize the extra render time.
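Because each light group is stored in its own buffer, relighting amounts to a weighted sum of the buffers per pixel. The sketch below illustrates this blending for a single pixel; the group names, buffer values and function are hypothetical, not Thea's API:

```python
# Hypothetical per-light-group buffers for one pixel (linear RGB):
buffers = {
    "ceiling": (0.20, 0.18, 0.15),
    "lamp":    (0.05, 0.03, 0.01),
}

def relight(buffers, power, tint):
    """Blend per-light-group pixel buffers into one final pixel.

    power maps group name -> scalar multiplier (1.0 = as rendered),
    tint maps group name -> RGB color multiplier.
    """
    out = [0.0, 0.0, 0.0]
    for name, rgb in buffers.items():
        p, t = power[name], tint[name]
        for c in range(3):
            out[c] += p * t[c] * rgb[c]
    return tuple(out)

# Dim the lamp to half power and warm up its color, without re-rendering:
pixel = relight(
    buffers,
    power={"ceiling": 1.0, "lamp": 0.5},
    tint={"ceiling": (1, 1, 1), "lamp": (1.0, 0.8, 0.5)},
)
```

Animating the power and tint values per key-frame is what turns a single converged render into a lighting animation.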



Image by Jon Westwood

Page 27


THEA RENDER MATERIALS

INTRODUCTION

The two key elements to achieving high realism in computer graphics are geometric complexity and visually accurate materials. In the Thea Render framework, the material system consists of the supported scattering models, emission models, participating media and certain geometric modifiers. While the material system is a broad term, whenever we talk about materials we will be referring to the scattering models. Material modeling is the most crucial and perhaps the most difficult part of the material system. The accuracy of this modeling is directly connected to the visual accuracy of the renders, and creating realistic materials is often considered the holy grail of any renderer. This is why material modeling was, and still is, a very active field of research. What makes it particularly difficult are the problems that arise when we try to describe complex real-world materials with models of a few parameters. These mathematical models are called bi-directional scattering distribution functions (bsdf).

A lot of renderers make use of the terms “physically-based” or “physically-plausible” materials. These terms refer to the particular choice of the material models (bsdf). A material is said to be “physically-plausible” when it does not violate any physical law and, in particular, two important principles: energy conservation and reciprocity. Energy conservation refers to the fact that no real object can reflect more light than it receives. Reciprocity refers to the fact that if we exchange the incoming and viewing directions at the object, the reflection remains the same (the actual generalized reciprocity law involves the index of refraction, but this is beyond the scope of this manual).

We can easily come up with various formulas that satisfy these conditions; nevertheless, their visual accuracy may be poor. In the computer graphics literature, the physically-plausible models are the empirical or phenomenological models (such as the energy-conserving Phong model). Physically-based models are a more restrictive category where – besides satisfying these important conditions – the model is derived from a physical theory (related to the microscopic surface structure). Thus, in most cases, a physically-based bsdf gives much more convincing results.

Apart from material reflection models, there are also emission models. Although emission from objects is quite complex, the usual approach to describing emission from a surface is a perfect diffuse model. In this case, the emitted radiance is the same for all directions above the surface, and the emitted intensity is modulated by the cosine of the angle between the viewing direction and the surface normal (the light power is also attenuated with distance, following the inverse square law).

Page 28
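The perfect diffuse (Lambertian) emission model above can be sketched numerically. The helper below computes the irradiance a receiver gets from a small diffuse patch; the function and its parameter names are illustrative, not part of Thea:

```python
def irradiance_from_patch(radiance, area, cos_emit, cos_recv, distance):
    """Irradiance at a receiver from a small diffuse (Lambertian) patch.

    radiance -- constant outgoing radiance of the emitter (W/m^2/sr)
    area     -- emitter patch area in m^2
    cos_emit -- cosine between emitter normal and direction to the receiver
    cos_recv -- cosine between receiver normal and direction to the emitter
    distance -- emitter-receiver distance in meters
    """
    return (radiance * area * max(cos_emit, 0.0) * max(cos_recv, 0.0)
            / distance ** 2)

# Doubling the distance quarters the received irradiance (inverse square law):
e1 = irradiance_from_patch(100.0, 0.01, 1.0, 1.0, 1.0)
e2 = irradiance_from_patch(100.0, 0.01, 1.0, 1.0, 2.0)
```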


MATERIAL MODELS

There are four material models and one special model used as a coating for layered materials. The basic principles of materials in Thea Render are described below.

1. Physically-based

The key material models are based on advanced theoretical physical models, which guarantees high visual realism.

2. Tight and uniform material models

All materials share a uniform logic, as they are all built with both reflectance and transmittance. This guarantees that energy conservation is high and correctly balanced between reflectance and transmittance.

3. Layer system of two directions

The basic materials can be thought of as building blocks of more complex materials, making use of the innovative layer system. The materials may not only be mixed together (horizontal direction), but also modulated by a special coating material (vertical direction). The modulation results in highly energy-conserving materials which simulate real-world materials – usually involving paints and varnishes – that are in general too difficult to produce from single material models.

BASIC

The basic material is a model that consists of a diffuse, a translucent and a Fresnel-based specular component. It is a highly energy-conserving material, designed to be used mostly for matte and plastic materials. The translucency is modeled as simple back-diffuse scattering and – along with the absorption – it can be used to create translucent materials that are much faster to render than employing a subsurface scattering model. It may also be used as a single-sheet back-scattering material, for example when rendering curtains.

Remarks

In biased mode, diffuse and translucent components are resolved relatively fast due to a render caching mechanism. So, it is recommended to make use of these components in biased mode in order to efficiently add global illumination elements to your renders. These materials will render faster than specular components of very high roughness.

Page 29


Although Thea Render supports accurate SSS materials, rendering them may take quite some time. Instead, using a translucent component with absorption – while not simulating subsurface scattering – often gives visually pleasing results. Thus, it is recommended to use basic translucency whenever possible.

The basic material is useful for defining matte and plastic materials, but it may also be used for metals and translucent materials. Metals in most cases have a non-zero extinction coefficient, which corresponds to a high Fresnel coefficient under any viewing angle. In the typical direct room of the Material Lab, translucency cannot be previewed because it is only resolved by global illumination techniques.

GLOSSY

The glossy material simulates reflection and refraction that can be perfect (roughness 0), very rough (roughness 100), or somewhere in-between. This bsdf makes use of the Fresnel equations to balance reflection and refraction, controlled by the index of refraction (and extinction coefficient). An index of refraction near 1 will make the material less reflective and more refractive (with the marginal case of exactly 1 corresponding to a perfectly transparent material). As the index of refraction gets higher, reflection becomes stronger and stronger, until it nearly matches the user color for very high values. What is very important to understand is that the Fresnel coefficients are based on both the index of refraction and the incident angle. Thus, even with a very small index of refraction, the bsdf will be quite reflective at grazing angles. This is a typical phenomenon observed in the real world; for example, you can see it by observing pool water (if you look straight down you can see the bottom, but looking at the pool from far away you see reflections of the environment). The glossy material makes use of the complex index of refraction (that is, it includes the extinction coefficient), which is of particular importance for describing metals. There, the extinction coefficient takes non-zero values, resulting in high absorption of refracted light and amplification of the reflection.

Remarks

The glossy material is useful for defining conductors (metals) and dielectrics. Since it supports the use of measured data (.ior/.nk files), it can achieve even higher accuracy.
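The angle dependence described above can be illustrated with a small sketch. It uses the common normal-incidence formula for a complex index of refraction together with Schlick's approximation for other angles; Thea itself may use the exact Fresnel equations instead, so treat this as an illustration only:

```python
def fresnel_reflectance(n: float, k: float, cos_theta: float) -> float:
    """Approximate unpolarized Fresnel reflectance for IOR n + ik.

    Normal-incidence reflectance from the complex index of refraction,
    extended to other angles with Schlick's approximation.
    """
    r0 = ((n - 1.0) ** 2 + k ** 2) / ((n + 1.0) ** 2 + k ** 2)
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

# Water-like dielectric (n = 1.33, k = 0): weak reflection head-on,
# strong reflection at grazing angles -- the pool effect from the text.
head_on = fresnel_reflectance(1.33, 0.0, 1.0)
grazing = fresnel_reflectance(1.33, 0.0, 0.05)
# Gold-like conductor (non-zero k): strongly reflective even head-on.
metal = fresnel_reflectance(0.47, 2.83, 1.0)
```

Note how the non-zero extinction coefficient alone turns a weakly reflective dielectric into a mirror-like conductor.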

Page 30

Transmitted roughness is particularly effective when you want to simulate dielectrics with crisp reflections and blurry refractions, and it is superior in terms of memory, speed and energy conservation to using glossy/glossy or coating/glossy combinations to achieve the same effect. Still, use a separate transmitted roughness only when needed.


Dispersion is enabled either by checking and using the Abbe number or by checking and using measured data of dielectrics. Please note that dispersion increases rendering time considerably, so whenever dispersion can be neglected, do not use these parameters.

SSS

The bi-directional subsurface scattering distribution function (bssdf) is a generalization of the bsdf where the entry and exit points may differ (while for a bsdf these points coincide). Thus, the evaluation of the bssdf becomes far more difficult, as it involves the interaction of surface reflectance/transmittance along with scattering through participating media. The Thea Render SSS model is based on the glossy material, with subsurface scattering also supported. Besides the surface reflectance entries, there are also parameters describing the absorption and scattering inside the object.

Remarks

In order for the SSS material to be evaluated correctly, the object should be closed (i.e. have no holes).

High-albedo participating media (i.e. when the scatter density is much higher than the absorption density) are particularly difficult to render. A usual technique to accelerate their rendering with minimum loss of accuracy is to turn an asymmetric medium into an isotropic one with a simultaneous decrease of its scattering density. Assuming that the asymmetry of the medium is g>0, one has just to set the asymmetry to the isotropic value of 0 and decrease the scatter density to (1-g) times its old value. The new medium will have a lower albedo and will render faster with a very small sacrifice in accuracy.
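The substitution above (known in the literature as a similarity reduction) is a one-line calculation. The helper name is illustrative; you would apply the returned values manually in the SSS material parameters:

```python
def reduce_to_isotropic(scatter_density: float, asymmetry: float):
    """Similarity reduction described above: replace a forward-scattering
    medium (asymmetry g > 0) by an isotropic one with a reduced scatter
    density. Returns (new_scatter_density, new_asymmetry)."""
    g = asymmetry
    return scatter_density * (1.0 - g), 0.0

# A dense, strongly forward-scattering medium (g = 0.9) renders much
# faster as an isotropic medium with one tenth of the scatter density:
density, g = reduce_to_isotropic(1000.0, 0.9)
```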

GLASS

The glass model describes thin glass materials that show perfect (mirror) reflection and transparency. It is a very accurate model and great for assigning to thin surfaces, such as windows and thin transparent plastics.

Remarks

Although one could also use a glossy material with transmittance enabled and the index of refraction set to 1, it is recommended to use the glass model whenever you want to achieve transparency. Another way to achieve theoretic transparency is to actually model a surface, such as a

Page 31


window, as a thin double interface where refraction takes place at both sides. Using the glass model though is optimal in terms of visual accuracy and, additionally, it can be traced during shadow evaluation (something that cannot be done with the double interface model, which will create shadows).

The glass model does not require the object to be closed, as it does not define an interior/exterior volume. The index of refraction is used as if the model were a double interface, in order to compute the overall transmittance due to double refraction.

COATING

The coating model is a special reflection model that has only a specular component. It can be effectively used to simulate varnishes and paints on a layered material. Several coatings may be used one after the other, simulating multiple varnishes and paints. The coating always reflects some light but – when used in a layered material – it leaves the rest of the light's energy to reach the layers underneath.

Remarks

The extinction coefficient is used both for modifying the reflectance (according to the Fresnel equations) and for defining the absorption density for the layers underneath. It is used in conjunction with the thickness map of a layered material in order to calculate absorption on a microscopic level.

LAYERED MATERIAL

Material models can be added one after the other, creating a layered material. The first four types are added “horizontally”, i.e. they are mixed together. The last one, the coating material, is added vertically, essentially simulating one or more layers of varnish over a surface. With the materials mixed together, there is the possibility to assign special mixing weights, in order to change the default uniform mixing. Besides that, the coating layers can also have a thickness, which is taken into account – along with the coating extinction coefficient – in order to compute absorption within the layer.

Remarks

Page 32

The layered material is an effective way to define complex materials, but if a material can be made directly out of the building blocks, then prefer doing it that way. For example, a plastic material could be defined with a basic/coating combination, using the diffuse component of the former and the specular reflectance of the latter. Although visually there will not be any difference,


it is optimal to define it directly with a basic material, since there you can set both the diffuse and the specular reflectance.

Very realistic materials can be achieved with only one or two specular components. Thus, having many coatings will usually make things harder to tweak and require more time to render. The same rule applies to mixing multiple substrates. In most cases, your materials will be made out of the simple building blocks and only a few complex ones will be made as layered materials.

LIST OF PARAMETERS

Here we present an alphabetical list of the parameters used by the above materials.

Diffuse

Category: Scatter Materials: Basic

This is the texture for the diffuse component. If no texture is specified then no diffusion is rendered.

Translucent

Category: Scatter Materials: Basic

This is the texture for the translucent component. If no texture is specified then no translucency is rendered.

Abbe Number

Category: Scatter Materials: Glossy

The Abbe number can also be used to describe the variation of the index of refraction with respect to wavelength. The lower the value, the more dispersive the material will be (with the exceptional value of 0, which turns off dispersion). The Abbe number is a usual quantity for describing dispersion – particularly in the jewelry industry – and there are a lot of tabulated data.

Absorption

Category: Scatter Materials: Basic, Glossy, SSS

With this parameter you can change the absorption density and color. The higher the density the more absorptive the material will be. The absorption density can take any positive or zero value and it is in 1/m units.

Anisotropy

Category: Struct Materials: Basic, Glossy, SSS, Coating

Anisotropy is an important visual cue of many surfaces, particularly metals. Due to a certain orientation of surface elements, the reflection in one direction appears much more extended than in the perpendicular one. In Thea Render, the anisotropy parameter controls this difference, where a value of 0 means no anisotropy and 100% means full anisotropy (the material is a perfect

Page 33


reflector/refractor in one direction and completely rough in the other).

Asymmetry

Category: Scatter Materials: SSS

This parameter controls the asymmetry coefficient of the subsurface scattering medium – assumed to follow the Henyey-Greenstein phase function. It takes values between –1 and +1, with –1 corresponding to perfectly back-scattering media, +1 to perfectly front-scattering media and 0 to isotropic media.

Bump

Category: Struct Materials: Basic, Glossy, SSS, Coating

One of the very first geometry modifiers is bump mapping, which is available inside all materials. Bump is ideal for enhancing renders with the illusion of more complex geometry and, since it only involves local perturbations of normal vectors, it is relatively fast to compute. Each material layer can have its own bump map.

Emitter Accuracy

Category: Emitter

Page 34

This parameter is used only by the biased engine and only when an area emitter has been enabled for the material. In that case, this accuracy setting is used in conjunction with the emitter minimum and maximum rays to define the effort of the direct light estimator for this particular area light. The engine makes an adaptive evaluation, starting with the minimum rays and continuing until either the accuracy threshold or the maximum rays have been reached.
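How such an adaptive estimator can interact with a min/max ray budget and an accuracy threshold is sketched below. This is a generic illustration, not Thea's actual estimator; the stopping criterion (relative standard error of the mean) is an assumption:

```python
import random
import statistics

def estimate_direct_light(sample_light, min_rays, max_rays, accuracy):
    """Adaptive direct-light estimation sketch (not Thea's actual code).

    Shoots min_rays first, then keeps adding rays until the relative
    standard error of the mean drops below `accuracy`, or max_rays
    is reached.
    """
    samples = [sample_light() for _ in range(min_rays)]
    while len(samples) < max_rays:
        mean = statistics.fmean(samples)
        stderr = statistics.stdev(samples) / len(samples) ** 0.5
        if mean > 0 and stderr / mean < accuracy:
            break  # estimate is accurate enough; stop early
        samples.append(sample_light())
    return statistics.fmean(samples)

# Hypothetical noisy area light: partially occluded, so each shadow ray
# returns full light 80% of the time and nothing otherwise.
random.seed(0)
value = estimate_direct_light(
    lambda: 1.0 if random.random() < 0.8 else 0.0,
    min_rays=4, max_rays=256, accuracy=0.05,
)
```

Smooth, fully visible lights converge after min_rays; penumbra regions keep sampling up to max_rays.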

Emitter Min/Max Rays

Category: Emitter

These parameters are used only by the biased engine and only when an area emitter has been enabled for the material. In that case, the rays parameters are used in conjunction with the emitter accuracy to define the effort of the direct light estimator for this particular area light.

Extinction Coefficient

Category: Scatter Materials: Basic, Glossy, SSS, Coating

This is the extinction coefficient (imaginary part of the index of refraction) of a specular component. Both the index of refraction and the extinction coefficient are used to compute the Fresnel coefficient which, in turn, modifies reflectance and transmittance. The extinction coefficient can take any positive or zero value.

Index of Refraction

Category: Scatter Materials: Basic, Glossy, SSS, Coating

This is the (real) index of refraction of a specular component. Both the index of refraction and the extinction coefficient are used to compute the Fresnel coefficient which, in turn, modifies reflectance and transmittance. The index of refraction can take any positive value.

IOR File

Category: Scatter Materials: Glossy

Besides the scalar values of the refraction index, the user may also select a coefficient file that contains measured values of the index of refraction (n) and extinction coefficient (k) over the spectrum. Using these measured data, we can have an accurate reproduction of the corresponding materials. These files have either the .ior or the .nk file extension.

Min/Max Blurred Subdivs

Category: General

By default, the minimum and maximum blurry reflection and refraction subdivisions are set up by the render settings, which define the subdivisions globally for all materials. Here, the parameters can be used to change the subdivisions on a per-material basis. Thus, whenever we want improved accuracy for a specific material, we can tune it using these parameters rather than increasing the accuracy settings globally for the whole scene.

Normal Mapping

Category: Struct Materials: Basic, Glossy, SSS, Coating

Normal mapping is a variation of bump mapping where, instead of using a height field producing perturbations of the existing surface normals (as in bump mapping), all three channels of an RGB texture are used to directly define the normal vectors. The red and green channel values (0–255) correspond to the x and y axes, taking values from –1 to 1, while the blue channel corresponds to the z axis, taking values from 0 to 1. Normal mapping has similar render-time performance to bump mapping, but it needs more storage (bump mapping may also be used with grayscale textures).

Perceptual Level

Category: General

The perceptual level is used by the biased engine in order to accelerate direct light evaluation for the specific material. By default, it is set to 100% (full accuracy), but for materials that make use of highly detailed textures, this parameter can be decreased to accelerate rendering. The idea behind this parameter is that many textures can mask noise due to their high frequencies, so errors in the direct and indirect light evaluation are not easily perceived by the human eye. In these cases, the user can set this parameter to a low value (even as low as 5-10%) to speed up rendering for the particular material.

Reflectance

Category: Scatter Materials: Basic, Glossy, SSS, Coating

Reflectance is the texture for the specular component under the normal viewing condition (the viewer is right on top of the surface). By defining this reflectance, the reflectance at the grazing angle (90 degrees) is also implicitly defined. Thus, the specular reflectance is actually calculated as a blend between the user Reflectance and Reflectance 90 (default being white), depending on the viewing angle. It is recommended to use near-white colors for the reflectance and modify the specular strength by changing the index of refraction and extinction coefficient. This way, more realistic materials can be delivered and layered materials achieve an even higher overall reflectance albedo.

Roughness

Category: Struct Materials: Basic, Glossy, SSS, Coating

The roughness parameter is related to the blurriness of specular reflectance and transmittance. A value of 0 corresponds to perfect (mirror) reflection, while positive values

Page 35


give blurry reflections and highlights. As the roughness increases, highlights become bigger and the reflections more blurry and less bright. When the roughness reaches very high values (near 100%), the specular component shows a diffuse-like appearance.

Roughness Tr.

Category: Struct Materials: Glossy, SSS

The transmitted roughness parameter can be used to set the roughness of the transmitted component separately. This is essential when we want to describe certain materials, particularly some dielectrics where the reflection is quite crisp while the refraction is blurry. This parameter works just like the Roughness parameter and should be checked whenever a different roughness for the transmitted component is desired.

Scattering

Category: Scatter Materials: SSS

With this parameter you can change the scatter density and color. The higher the scattering, the more time it will take to render the material. The scatter density can take any positive or zero value and it is in 1/m units. Note that the scatter color is used for defining both the in-scattering and the out-scattering light interaction.

Sigma

Category: Struct Materials: Basic

Page 36

Some materials in nature exhibit diffuse reflection that is remarkably uniform, which the Lambertian model cannot reproduce very well. For example, looking at the full moon we may see that it is uniformly bright (while a Lambertian model is darker at grazing angles). The Sigma parameter changes the object appearance from perfect Lambertian (value 0) to more uniformly bright, and it is used in conjunction with the Diffuse texture.

Thickness

Category: Layer

This is the thickness of a coating layer and it is used – along with the coating extinction coefficient – to calculate the absorption when light penetrates a coating to reach the next layer (be it another coating or a substrate layer). The unit for the thickness is micrometers (that is, one thousandth of a millimeter).

Trace Reflections

Category: Scatter Materials: Basic, Glossy, SSS, Coating

This parameter is only used in biased mode; with it you can choose whether you want to trace the specular component or not. This is a global illumination element and it adds rendering time in biased mode.

Trace Refractions

Category: Scatter Materials: Glossy, SSS

This parameter is only used in biased mode; with it you can choose whether you want to trace the transmitted component or not. This is a global illumination element and it adds rendering time in biased mode.


Two-Sided

Category: General

With this option you can tell the renderer not to take into account any intersections with the back surface of the assigned objects. Thus, only the front surface will be rendered. This option is also useful for defining sun/sky portals.

Weight

Category: Layer

The weight defines a mask for coatings and substrates. For coatings, it modifies the percentage of light that is allowed to reach the layers underneath. For substrates, all weights are normalized according to their sum, so that each substrate reflectance is modified by a relative percentage (without any weights, the substrates are normalized to reflect with equal weights).

CLIP/ALPHA MAPPING

Sometimes it is faster to define the surface boundaries by using a texture map than by modeling them. This is quite usual for planar surfaces, where a grayscale texture can serve as a mask for the points that are opaque or transparent. Clip mapping defines such a mask, which tells the renderer whether a surface point is either fully opaque or fully transparent. On the contrary, alpha mapping (which is enabled by the Soft parameter) can also describe a surface point as partially opaque, achieving some nice see-through effects.

A clip-mapped object is considered to be hollow, so any medium described won't have any effect. If you want to define a volumetric material without any surface reflectance, then you will have to use alpha mapping (i.e. check the Soft parameter). Please note also that alpha mapping is somewhat slower to evaluate, so if the object is either fully opaque or fully transparent, then it is better to use clip mapping. Only one clip/alpha map can be assigned to an object.

Texture

The texture parameter defines the variation of the height field that will be used for clip/alpha mapping.

Threshold

This parameter is used when the Soft parameter is switched off (i.e. in hard clip mapping mode) and it is checked against the texture. Texels that have brightness less than the threshold will be considered transparent, while texels that have greater brightness will be considered opaque.

Soft

When this parameter is checked, the clip mapping essentially turns into alpha mapping. This means that the texels will not be checked against the Threshold; instead, their brightness will correspond to a certain varying level of transparency. The brighter a texel is, the more opaque it will be, with a black texel corresponding to full transparency and a white texel corresponding to full opaqueness.
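The two modes can be summarized in one small function. This is an illustration of the behavior described above, not Thea's implementation; brightness and threshold are taken as values in [0, 1]:

```python
def clip_opacity(brightness: float, threshold: float, soft: bool) -> float:
    """Opacity of a surface point from its clip/alpha texel brightness.

    Hard clip mapping (soft=False): fully opaque or fully transparent,
    decided against the threshold. Alpha mapping (soft=True): brightness
    itself is the opacity (white = opaque, black = transparent).
    """
    if soft:
        return brightness  # alpha mapping: partial opacity allowed
    return 1.0 if brightness >= threshold else 0.0  # hard clip mapping

# A mid-gray texel: opaque under a 0.4 threshold, half-transparent in soft mode.
hard = clip_opacity(0.5, threshold=0.4, soft=False)
soft = clip_opacity(0.5, threshold=0.4, soft=True)
```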

Page 37


DISPLACEMENT MAPPING

While bump mapping can be used for enhancing an object with small details and just involves fast modifications to the shading calculations, displacement mapping can be used to actually perturb the surface itself. Thus, displacement is an accurate geometric refinement mechanism, but it is also relatively expensive to evaluate. Displacement mapping in Thea Render is subdivision-based – the user needs to define the number of subdivisions applied to each mesh triangle. Besides that, there can be further “micro-bump” pseudo-displacements that act similarly to a global bump mapping. The latter is an effective mechanism to add extra detail without the cost of additional displacement subdivision.

Texture

The texture parameter defines the variation of the height field that will be used for displacement mapping.

Height

This parameter sets the scaling of the texture height field used for displacement. The parameter is given in centimeter units.

Subdivision

This is the parameter that defines the resolution of the displacement. Low values will produce a coarse displacement grid, while high values will produce a more refined displaced mesh. The number of displaced triangles produced per original triangle is 4 raised to the subdivision parameter (1:4, 2:16, 3:64, 4:256 and so on), so high subdivision values will produce an enormously high number of triangles. It is advised to start with low values and progressively increase the subdivision until the mesh resolution is satisfying.

Center

This parameter acts as a global offset of the displacement, setting the center with respect to the texel values. It is a unitless parameter that can take any negative or positive value. The actual offset applied can be found by multiplying the height and center parameters (it will actually be -Height*Center in centimeters). When the value is 0, the offset coming from the texture variation will always be positive and, when the value is 1, the offset will always be negative. Middle values will create both positive and negative variation, with the 0.5 value centering the texture height field around the middle gray tone.

Micro-Bump

This is an extra parameter that can add a pseudo-refinement without actually subdividing and displacing the mesh. It acts like bump mapping on the displaced triangles, simulating the normals produced by a further displacement of the given subdivision. Since the micro-bump does not displace the surface but only changes the shading normals, it is much faster to evaluate.

Page 38

Tight Bounds

One crucial parameter controlling displacement is Tight Bounds. With this parameter, the texture evaluations are cached at the beginning of rendering, also finding relatively tight bounds for the displaced surface. The disadvantage is that this initialization may sometimes take long, but it also means that intersections will be calculated faster during the render process.
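The quoted subdivision ratios (1:4, 2:16, 3:64, 4:256) follow from each level splitting every triangle into four. A quick calculation shows how fast the triangle count explodes, which is why starting with low subdivision values is advised:

```python
def displaced_triangles(base_triangles: int, subdivision: int) -> int:
    """Triangles produced by subdivision-based displacement: each level
    splits every triangle into four, so the count grows as 4**subdivision."""
    return base_triangles * 4 ** subdivision

# A modest 10,000-triangle mesh already explodes at higher settings:
counts = [displaced_triangles(10_000, s) for s in range(1, 6)]
```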


EMISSION

Thea Render supports both area and point light emitters. The area emitters are applied to a surface (they are sometimes also called mesh emitters when the surface is a mesh). Typically, the area emitter has a diffuse-like emission model, i.e. it uniformly distributes light along all directions in the hemisphere above the surface. Nevertheless, more complex emission models can be defined by making use of an IES file.

Color

Besides the emission model, we can also set a texture for an area emitter, thus having a color variation over the surface.

IES File

An IES file can be defined that describes light emission according to measured goniometric data.

Power

The power of the light (in units described below).

Efficacy

The efficacy of the light in lumens/Watt units. The maximum efficacy is 683 lm/W, which corresponds to lights with no energy loss, i.e. all their power is converted to visible light. This is achieved only for a particular central wavelength where the human eye is most sensitive; in practice, common lights have an efficacy between 2 and 50 lm/W.

Unit

The unit in which the light power is described. It can be total power (Watts), exitance (Watts/m2), radiance (Watts/m2/sr) or the corresponding photometric units (Lumens, Lumens/m2 and Candelas/m2).

MEDIUM

True volumetric scattering is supported and, in fact, Thea Render is one of the few commercial ray tracers that can solve the light transport problem including participating media. There are a lot of possibilities, since media can be both homogeneous and heterogeneous, with many supported phase functions.

Absorption Color

Defines the transmittance color – this is the color observed after a distance of 1 meter (assuming unit density and no scattering). When the distance is less than 1 meter, the color shifts towards white and, when the distance gets bigger, the transmittance shifts towards black. The color change along the distance is strongly nonlinear and

thus it is recommended to avoid highly saturated colors.

Scatter Color

Defines the scattering color – this is the color that bounced particles (in the medium) have. The sum of the absorption and scatter colors (multiplied by their corresponding densities) defines the

Page 39


extinction coefficient of the medium, which is used to calculate the total absorption at a distance. The scatter color may be applied numerous times for particles that bounce inside the medium (especially for highly scattering media) and thus it is also recommended here to avoid highly saturated colors.

Absorption Density

This entry defines the density of absorption in 1/m units. The higher this density, the higher the absorption. This option gives easy control over the magnitude of absorption and it is possible to set a procedural texture in order to define spatially varying absorption (a heterogeneous medium).

Scatter Density

This entry defines the density of scattering in 1/m units. The higher this density, the higher the scattering. This option gives easy control over the magnitude of scattering and it is possible to set a procedural texture in order to define spatially varying scattering (a heterogeneous medium).
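The extinction coefficient mentioned above (the sum of the absorption and scatter densities) governs how much light passes unscattered through the medium, following the Beer-Lambert law. A minimal sketch for a homogeneous medium:

```python
import math

def transmittance(absorption_density, scatter_density, distance):
    """Fraction of light passing unscattered through a homogeneous
    medium (Beer-Lambert law). Densities are in 1/m, distance in m.
    The extinction coefficient is the sum of the two densities."""
    extinction = absorption_density + scatter_density
    return math.exp(-extinction * distance)

# With unit absorption density, no scattering and 1 m of travel,
# about 1/e (roughly 37%) of the light survives:
t = transmittance(1.0, 0.0, 1.0)
```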


Coefficient File

Absorption and Scatter colors can also be described by numerical data. The file that includes this data has a similar format to the nk/ior files.

Phase Function

A phase function defines the variation of outgoing radiance over the sphere of directions; it is the medium analog of a bidirectional scattering distribution function (which is used for surfaces). The most commonly used phase functions are the isotropic and Henyey-Greenstein ones.

Asymmetry

This parameter defines the asymmetry parameter of the Henyey-Greenstein phase function. It is unitless and takes values from –1 (totally back scattering) to 1 (totally front scattering). Obviously, the extreme values of –1 and 1 do not actually scatter light outside the particle direction and are not of practical use. A value of 0 gives balanced scattering between back and forward directions and is the same as using an isotropic phase function.
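For reference, the Henyey-Greenstein phase function itself is a simple closed-form expression; the sketch below (an illustration, not Thea code) shows how the asymmetry parameter g shapes it:

```python
import math

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function, normalized over the sphere.

    cos_theta is the cosine of the angle between incoming and outgoing
    directions; g in (-1, 1) is the asymmetry parameter.
    """
    denom = 1.0 + g * g - 2.0 * g * cos_theta
    return (1.0 - g * g) / (4.0 * math.pi * denom ** 1.5)

isotropic = 1.0 / (4.0 * math.pi)
# g = 0 reduces to the isotropic phase function, constant over all directions:
print(math.isclose(henyey_greenstein(0.7, 0.0), isotropic))            # True
# A positive g concentrates scattering in the forward direction:
print(henyey_greenstein(1.0, 0.8) > henyey_greenstein(-1.0, 0.8))      # True
```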


Image by Patrick Nieborg


THEA RENDER SETTINGS

INTRODUCTION

The render settings correspond to all those parameters that influence the output of the render engine. They include both biased and unbiased parameters, input/output parameters, etc., which need to be set before the render process begins. Due to the nature of biased and unbiased render engines, there are many more biased parameters than unbiased ones. In fact, there are no user parameters at all for the unbiased engines that parameterize the render algorithm itself, in contrast to the biased engine, where there are plenty of parameters to tune the global illumination techniques.

In Thea Render, render settings can easily be saved and retrieved from the main menu. These settings form quick-access presets that make render testing much easier and can also be applied from the render dialog (overriding custom settings defined in the scene).

LIST OF PARAMETERS

Here we present the parameters in the sequence in which they appear in the user interface.

GENERAL PANEL

Main: Engine Core

Values: Photoreal (BSD), Unbiased (TR1), Unbiased (TR2).

Select the engine core that will be used for rendering. Photoreal (BSD) stands for the biased engine of Thea, while the others are two differently fine-tuned unbiased engines. The biased engine should be preferred when it is important to keep render times low, for example, when rendering an animation sequence. The unbiased engines should be preferred when one seeks maximum accuracy with the least possible effort in tuning the render engine (actually none). The TR1 engine core should be preferred over TR2 for exteriors and general situations where direct lighting is dominant in the scene. TR2, on the other hand, should be preferred for difficult indirect lighting situations, such as indirect caustics (including "sun-pool caustics"). Please note that the name "biased" may be somewhat misleading here, since all engines produce photorealistic renders.


Main: Supersampling

Values: Auto, None, Normal, High

This corresponds to the supersampling used for the image output, i.e. an internal resolution multiplier for antialiasing enhancement. None corresponds to no supersampling, Normal to 2x2 and High to 3x3. Auto corresponds to no supersampling for the biased engine and 2x2 for the unbiased engines. Setting supersampling to a higher level will generally improve antialiasing of the output but will increase memory demands for storing the image (4 times at the Normal level and 9 times at the High level). The time needed to render the scene will also be increased for the biased engine. For the unbiased engines, however, the extra time needed to render the higher resolution image is usually amortized by the reduced noise visible in the visualized (downsampled) image. For unbiased rendering, it is usually suggested to change supersampling to None for high resolution output and to High when there is persisting noise.

Main: Time Limit

Value: in minutes

This parameter is used to terminate the unbiased render process (it is only used by the unbiased render engines). It is given in minutes, and 0 is a special value corresponding to no time limit at all.

Main: Motion Blur

Values: on/off

This parameter enables or disables motion blur for all render engines. If enabled, motion blur will show up for all visible animated objects. The actual blur amount depends on the camera shutter speed and the animation properties of the objects.

Main: Volumetric Scattering

Values: on/off

Volumetric scattering corresponds to rendering participating media. If disabled, volumetric (Medium) and sub-surface (SSS) scattering won't be rendered by the unbiased engines. For the biased engine, general volumetric scattering is currently not supported, but sub-surface scattering has its own separate check in the biased settings.

Main: Relight

Values: on/off

This parameter corresponds to rendering light groups in separate image buffers for relighting postprocess. It is currently supported only by the unbiased engines. Due to the allocation of an image buffer per light, the number of lights has a direct impact on memory demands, and rendering high resolution images with a lot of lights may require many gigabytes of RAM. Please note that clearing all image buffers separately takes more time than clearing one merged buffer, although this is easily amortized by reusing the render output in a relight animation.
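As a rough back-of-the-envelope check before enabling Relight, the per-light buffer cost can be estimated as below. The 16 bytes per pixel (RGBA float) and the buffer layout are illustrative guesses, not Thea Render's documented internals:

```python
def relight_memory_mb(width, height, num_lights, supersample=1, bytes_per_pixel=16):
    """Rough memory estimate (in MB) for per-light relight buffers.

    Assumes one high-dynamic-range buffer per light group, at the
    supersampled internal resolution. The 16-byte RGBA-float pixel is an
    assumption for illustration only.
    """
    pixels = (width * supersample) * (height * supersample)
    return pixels * bytes_per_pixel * num_lights / (1024 * 1024)

# A 3000x2000 render with 2x2 supersampling and 10 light groups:
print(round(relight_memory_mb(3000, 2000, 10, supersample=2)))  # 3662
```

Even under these conservative assumptions, the estimate shows how quickly a high-resolution relight render with many lights climbs into the gigabyte range.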



Distribution: Threads

Values: Max, 1...

This entry sets the number of render worker threads that will be used during rendering (not all application process threads). The special value 0, same as Max, corresponds to the number of logical cores on your machine. Exceeding this value (shown explicitly as the last value in the drop-down list) will have no benefit and may actually hurt performance.

Distribution: Priority

Value: Low, Normal

This parameter corresponds to the priority assigned to the render threads by the operating system. Setting it to Normal will make rendering faster, but this is not recommended when you plan to use the machine in parallel, or when other somewhat demanding processes are running.

Distribution: Network

Values: None, Server

This parameter sets the render engine to run on a single workstation (None) or to act as a Server in network rendering. Setting the render engine in Client mode is not possible from the Studio application (the application will automatically enter Client mode if a node license is active).

Distribution: Server Port

Values: 5000...65000

This value corresponds to the server port used by the application during network rendering. It is recommended to leave it at the suggested value of 6200.

Channels: Normal

Value: on/off

Enabling this parameter will render an additional image storing the normals of the first-hit objects in the scene. Black is assigned to vector (-1,-1,-1), while white is assigned to vector (1,1,1). Due to the normalization of surface normals, these values are never reached; instead we get intermediate colors.

Channels: Depth

Value: on/off


Enabling this parameter will render an additional image storing the depth (that is, the distance along the camera z-axis) of the first-hit objects in the scene. The depth values will be mapped afterwards to a grayscale image according to the Min/Max Z values in the display panel. Although setting Min/Max Z can be done completely as a post-process, it is suggested to have some good initial values in order to avoid aliasing issues (when no supersampling is present).

Channels: Alpha

Value: on/off

Enabling this parameter will create an opacity grayscale image with respect to the background. Black corresponds to no opacity (background is fully visible), while gray corresponds to partial opacity.

Channels: Object Id

Value: on/off

Enable this channel to get an image with distinct colors for each scene object.

Channels: Material Id

Value: on/off

Enable this channel to get an image with distinct colors for each scene material.

Channels: Direct

Value: on/off

Enable this channel to get an image of the direct lighting component (biased engine only).

Channels: GI

Value: on/off

Enable this channel to get an image of the global illumination component (biased engine only). This is the component computed by the photon mapping and final gathering modules.

Channels: SSS

Value: on/off

Enable this channel to get an image of the sub-surface scattering component (biased engine only).

Channels: Reflection

Value: on/off

Enable this channel to get an image of the reflection component (biased engine only). This corresponds to perfect reflection only (glass reflection and glossy/coating zero roughness reflection).


Channels: Refraction

Value: on/off

Enable this channel to get an image of the refraction component (biased engine only). This corresponds to perfect refraction only (glossy/coating zero roughness refraction).

Channels: Transparent

Value: on/off

Enable this channel to get an image of the transparent component (biased engine only). This corresponds to glass and alpha mapping transparency.

BIASED RAY TRACING (RT)

Ray Tracing: Tracing Depth

Value: 0...

This is the main parameter influencing the tracing depth for the biased engine. Increasing this parameter may be needed in certain cases where there are a lot of mirrors or dielectrics in the scene, but it has a direct impact on render time.

Ray Tracing: Glossy Depth

Value: 0...

This is a separate value to control the tracing depth for blurred reflections/refractions. Keeping this value low will save evaluations on rough materials that in many cases contribute little to the overall image.

Ray Tracing: Trace Reflections

Value: on/off

This parameter enables tracing of perfect reflections (glass reflection and glossy/coating zero roughness reflection).

Ray Tracing: Trace Refractions

Value: on/off

This parameter enables tracing of perfect refractions (glossy/coating zero roughness refraction).


Ray Tracing: Trace Transparencies

Value: on/off

This parameter enables tracing of glass and alpha mapping transparencies.

Ray Tracing: Trace Dispersion

Value: on/off

This parameter enables dispersion for the biased engine. Dispersion will raise render times considerably for objects that exhibit this property, so it is preferable to keep this option disabled for quick test renders.

Antialiasing: Max Contrast

Value: 0...

This parameter controls the antialiasing process triggered between neighbouring pixels based on their contrast. A very low value will trigger the process more often, resulting in higher render times but improved quality.

Antialiasing: Min AA Subdivs

Value: 0...7

This parameter sets the minimum subdivisions for the antialiasing process, i.e. it corresponds to the minimum samples per pixel (in a power-of-two relation). Increasing this value increases render times directly, but it may be necessary in order to capture small details and thin lines (an alternative may also be to increase supersampling).

Antialiasing: Max AA Subdivs

Value: 0...7

This parameter sets the maximum subdivisions for the antialiasing process, i.e. it corresponds to the maximum samples per pixel (in a power-of-two relation). These samples will be taken when the contrast between neighbouring pixels exceeds the Max Contrast value.

Direct Lighting: Enable

Value: on/off

This parameter enables the direct lighting component of the biased engine.

Direct Lighting: Perceptual Based

Value: on/off

This parameter switches direct lighting computation from user-based to perceptual-based. The user-based evaluation, which is the default one, makes a per-light evaluation based on the user-set min and max emitter rays (defined on the area and point light parameter panels). This is quite fast when the number of lights is relatively small, and it is easily controllable. When the number of lights increases, it is preferable to enable the perceptual-based evaluation, which automatically makes a balanced evaluation of all lights depending on their relative significance, keeping the evaluation noise at a minimum level and render times short.

Direct Lighting: Max Direct Error

Value: percentage

This parameter controls the evaluation error of the direct lighting component. In most cases, an error of 2% will produce high quality renders.

Blurred Reflections: Enable

Value: on/off

If enabled, blurred reflections and refractions will be traced in the scene. The max tracing depth is set by the ray tracing Glossy Depth parameter.

Blurred Reflections: Min Blurred Subdivs

Value: 0...7

This parameter controls the minimum number of samples (in a power-of-two relation) taken on a blurred reflection/refraction.

Blurred Reflections: Max Blurred Subdivs

Value: 0...7

This parameter controls the maximum number of samples (in a power-of-two relation) taken on a blurred reflection/refraction.

Subsurface Scattering: Enable

Value: on/off

This parameter enables sub-surface scattering for the biased engine. The evaluation is based on a fast approximation of the SSS component, but note that the process may be quite demanding in terms of memory. Memory demands also depend on the area of the surface that is assigned an SSS material.

Subsurface Scattering: Max Samples

Value: 0...


This parameter controls the max samples allocated for sub-surface scattering evaluation and interpolation (biased engine only). This value should be high enough to reduce any low frequency noise (blotches) in the renders, while still limiting the samples allocated in order to avoid excessive memory needs for SSS evaluation.

BIASED GLOBAL ILLUMINATION (GI)

Please note the existence of open, save and lock cache commands for the global and caustic photon maps, as well as for the irradiance cache. This gives you the possibility to compute these maps once and reuse them for subsequent renders. The lock and save commands should be issued after the initial computation of the photon map or irradiance cache.

Photon Mapping: Enable

Value: on/off

With this parameter you can enable photon mapping, a well-known global illumination solution. The photon map solution is usually coarse; thus, in most cases, it should be used in conjunction with final gathering. This map is usually referred to as the global photon map (as opposed to the caustic photon map).

Photon Mapping: Leak Compensation

Value: on/off

Photon maps are prone to light and shadow leak artifacts. Leak compensation suppresses these leaks (but it also raises render times somewhat).

Photon Mapping: Photons Captured

Value: 1...

Enter here the number of photons in the global map (captured from all emitters in the scene). A high value creates a more detailed map, making the solution more accurate but also more memory intensive. Values of 1-10 million photons are usual, while setting much higher values will require a lot of memory.

Photon Mapping: Global Photons

Value: 1...

Enter here the number of photons used during irradiance calculation. This number corresponds to the number of nearest photons found around a position and used to estimate irradiance. A high value will blur the solution; a low value will lead to high frequency noise. Since, in most cases, the photon map is used as a secondary estimator (in conjunction with final gathering), a blurred solution is preferred (but not excessively blurred, because that will easily lead to leaks).


Photon Mapping: Global Radius

Value: 0...

Enter here the search radius for the nearest photon lookup that will be used for irradiance estimation (in meters). This value is mostly used to accelerate the search process (the smaller the value, the quicker the results), but it should also be conservative enough so that the required number of global photons can be reached.

Caustics: Enable

Value: on/off

This parameter enables or disables caustic photon maps. Caustic photon maps are formed by light particles hitting diffuse surfaces after bouncing on a shiny reflector/refractor. These maps are directly visualized and should be populated by a lot of photons to yield an accurate visual result. Note that caustics can also be formed by final gathering; caustics estimated by photon mapping are usually more accurate when the light source is small (or a point light) - on the other hand, final gathering can estimate caustics from large area lights or the sky much better.

Caustics: Caustic Sharpening

Value: on/off

This is a simple parameter to turn on a sharpening filter on the caustics, leading to more pronounced caustic pattern edges.

Caustics: Caustic Captured

Value: 1...

This parameter corresponds to the caustic photons captured in the map by all emitters. Usually, caustics need to be quite detailed and this value should be as high as possible. To avoid excessive memory demands, the user should also control the lights emitting caustic photons and the surfaces capturing caustics, keeping them as few as possible.

Caustics: Caustic Photons

Value: 1...

This is the number of caustic photons used for irradiance estimation. The higher the value, the blurrier the results. Note that increasing this value will also have an impact on render times.

Caustics: Caustic Radius

Value: 0...


This is the search radius for the nearest photon lookup procedure (in meters). A small value will produce quicker and less blurry results.


Final Gathering: Enable

Value: on/off

Final gathering is another global illumination solution that can be used in conjunction with photon mapping or alone. Using it along with photon mapping is preferred whenever light can relatively easily find its way into the scene; the diffuse depth in that case may be very small (even just 1, leading to the fastest results). In cases where lighting is easier to evaluate based on rays starting from the viewer, for example in sun-sky interiors, using final gathering alone should be preferred; in such a case, the diffuse depth should be increased (to a value of 3 or more).

Final Gathering: Samples

Value: 1...

These are the rays used for final gathering estimation. The higher the value, the more accurate the results, but it will take longer to trace them. Values typically range from 100 to 1000, but it may be necessary to increase them whenever there is a big lighting variation.

Final Gathering: Adaptive Steps

Value: 0...

After the first final gathering samples are taken, it is possible to continue with more samples in an adaptive way. A value of 0 here means no further samples are taken. If a positive value is used, extra samples will be taken in directions where there is high contrast in the previous final gathering samples.

Final Gathering: Secondary Gather

Value: on/off

If enabled, an extra bounce in diffuse depth will be made for nearby sample hits (i.e. corners), in order to remove leaks. This also increases render times (the actual increase depends on the distance threshold).

Final Gathering: Distance Threshold

Value: 0...

This is the maximum distance of the final gathering samples that will trigger an extra bounce (secondary gathering). This is useful when final gathering is used with photon mapping, in order to remove the blurriness and leaks of the photon map solution near corners. This value should be as small as possible so as not to add much to render times, yet high enough to mask the leaks.

Final Gathering: Diffuse Depth

Value: 1...

This is the maximum number of diffuse bounces of the final gathering random walk.



Final Gathering: Tracing Depth

Value: 0...

This is the maximum number of perfect reflection/refraction/transparency bounces of the final gathering random walk.

Final Gathering: Glossy Depth

Value: 0...

This is the maximum number of blurred reflection/refraction bounces of the final gathering random walk.

Final Gathering: Glossy Evaluation

Value: on/off

This parameter enables evaluation of blurred reflections/refractions during the final gathering walk. In many cases, this evaluation may lead to low frequency noise (blotches) without adding much to the solution.

Final Gathering: Caustic Evaluation

Value: on/off

This parameter enables evaluation of caustic paths, i.e. hits of area lights from perfect reflections/refractions. If area light caustics are wanted (and caustic photon maps are not used), this parameter should be enabled. Please keep in mind, though, that the caustic-related parameters of emitters/models are not taken into account in this evaluation.

Irradiance Cache: Enable

Value: on/off

Irradiance caching is a technique that is almost always used together with final gathering, in order to accelerate rendering. Irradiance samples computed by final gathering are reused based on a world-space interpolation scheme.

Irradiance Cache: Accuracy

Value: percentage

This parameter corresponds to the wanted accuracy of the irradiance cache, or in other words, the maximum interpolation error. Increasing this value will increase the density of the irradiance cache and, as a consequence, will improve the interpolated value. A value between 75-95% should be used in most cases.

Irradiance Cache: Min Distance

Value: percentage


Irradiance cache density in world space is usually higher on curved or occluded objects (for example, near corners). This parameter influences the density of irradiance samples when visualized on the image plane. Setting a minimum distance (as a percentage of the image width) will force reusing samples in nearby corners, without overpopulating the irradiance cache in these places. Increasing this value too much, though, will lead to light leaks, since distant samples will be taken into account in potentially occluded areas.

Irradiance Cache: Max Distance

Value: percentage

Irradiance cache density in world space is usually higher on curved or occluded objects (for example, near corners). It may be the case that in very flat areas, samples are very sparse. This parameter influences the density of irradiance samples when visualized on the image plane. Setting a maximum distance (as a percentage of the image width) will force generating samples more often in flat areas, avoiding undersampling these places.

Irradiance Cache: Prepass

Value: None, 1/1, 1/2, 1/4, 1/8

One problem with the irradiance cache is that, since it interpolates samples generated on-the-fly, visible discontinuity artifacts may appear, because these samples can be used for interpolation only after they have been generated. To avoid these artifacts, a usual workaround is to make an irradiance pass over the scene before making the actual render. Thus, the irradiance samples will already be there when interpolation takes place. A value of None in the prepass will skip the irradiance pass; this may be handy when making test renders. Prepass values correspond to the resolution of the irradiance pass (1/1 corresponds to every pixel, 1/2 to every two pixels, and so on). Values of 1/1 or 1/2 are recommended for final renders.

Irradiance Cache: Force Interpolation

Value: on/off

By enabling this parameter, no more final gathering samples will be taken during the normal render pass; instead, the samples already in the irradiance cache will always be used (even if the interpolation error is higher than the wanted accuracy). This parameter should never be enabled with prepass set to None, since that would leave no irradiance samples at all for interpolation (instead, prepass values of 1/1 or 1/2 are recommended when this parameter is enabled). If disabled, samples may be added to the irradiance cache even during the normal render pass, which may slightly increase render times.

Irradiance Cache: Visualize

Value: on/off

When enabled, the locations of final gathering samples (the samples that will populate the irradiance cache) are visualized during the irradiance pass. They are presented in red, and the user may get a quick idea of the irradiance cache density in order to fine tune its control parameters.



ANIMATION

Animation: End Frame

Value: 1...

This is the last frame that will be used when rendering an animation sequence.

Animation: Frame Rate

Value: 1...

This is the rate of frames per second for the animation sequence. It is used to compute the total duration of the whole animation sequence (taking the number of frames into account).

Animation: Current Frame

Value: 1...End Frame

This is the current frame that the user may need to render.

Animation: Render Frames

Value: Current, All, Selected

You can select from the drop-down list the frames that need to be rendered. Animation: Selected Frames

Value: selection string (example: 1,2,10-20)

Enter here the exact frames to be rendered. This parameter is used when Render Frames is set to the Selected value.
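A selection string of this form expands to an explicit frame list. The sketch below shows one plausible way such a string could be interpreted; the exact parsing rules Thea Render applies are an assumption:

```python
def parse_frames(selection):
    """Expand a frame selection string like "1,2,10-20" into a sorted list.

    Comma-separated entries are either single frame numbers or inclusive
    ranges written as start-end. (Illustrative sketch, not Thea's parser.)
    """
    frames = set()
    for part in selection.split(","):
        part = part.strip()
        if "-" in part:
            start, end = part.split("-")
            frames.update(range(int(start), int(end) + 1))
        elif part:
            frames.add(int(part))
    return sorted(frames)

print(parse_frames("1,2,10-12"))  # [1, 2, 10, 11, 12]
```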



Image by blackice


GLOBAL ILLUMINATION TUTORIAL

INTRODUCTION

One of the most demanding features that ray tracers are equipped with is the simulation of global illumination. The term "global illumination" refers to lighting that reaches a point not directly from the light sources, but indirectly after at least one bounce on another surface. The former is called either direct lighting or local illumination, and it usually contributes most of the lighting that a point receives. Nevertheless, global illumination adds those visual cues that make the render look less artificial and more realistic.

Which are the main visual components of global illumination? First of all, there is smooth fill lighting; since this comes out of a lighting simulation, one need not employ fill lights when global illumination is enabled. Other very important components are color bleeding and caustics; these make the objects an integral part of the scene instead of looking detached from it. Glossy reflections and subsurface scattering are also part of global illumination, but here we will focus on the smooth indirect lighting part.

In the Thea Render biased framework, global illumination refers to the lighting bouncing off one or more diffuse/translucent surfaces, computed using the techniques of photon mapping and final gathering. In such a case, there are many options that can help accelerate the computation, depending on the scene and the user's goals. In the unbiased framework, global illumination is resolved by computing all kinds of lighting transfer between surfaces and there are no user options.

Let us list here the main items we have to consider when we deal with biased global illumination:

1. Computing global illumination is time consuming.

2. There are quite a few options, and experience is needed to find the best balance between quality and speed.

3. The best options for one scene are not necessarily best for another, since different kinds of lighting transfer may be dominant.



SHOOTING AND GATHERING

We mentioned photon mapping and final gathering before. These have traditionally been the two most popular techniques for resolving global illumination, and in fact they are dual in the way they operate: they complement each other and can be used at the same time. These techniques are based on the concepts of shooting and gathering. Shooting refers to tracing photon particles starting from the light sources (figure 15). In this case, photons are emitted into the scene from the light sources and projected on the image plane. This is preferred when the photons can find their way to the viewer more easily, for example, when a light source is near the viewer or nearly covered.

Figure 15. Shooting photons from light sources.

Gathering, on the other hand, refers to tracing importance particles starting from the viewer (figure 16). Here, rays are traced from the camera and the direct lighting is calculated at the positions where the ray bounces. This is great when the light sources are distant while the viewer and the bounce points are near each other, such as a room illuminated by the sun.

Figure 16. Gathering light by tracing rays from the camera.



Combining the shooting and gathering approaches results in two-pass rendering techniques. Such techniques are usually superior, since they combine the advantages of both approaches. In the Thea Render biased framework, this is equivalent to enabling photon mapping and final gathering at the same time.

PHOTON MAPPING

When invented, photon mapping was one of the few methods to deal with some difficult lighting situations, most notably caustics (figure 17). Since then, it has become an indispensable tool, with the majority of biased renderers supporting it. It is an approximate method for computing the irradiance at any point in the scene by averaging photons deposited during a shooting pass. While in theory the accuracy can be improved by emitting and tracing more photons, in practice accuracy is limited by the number of photons that can be stored in memory. This means that photon mapping is a particularly memory-hungry technique.

Figure 17. A refracted caustic produced by a rectangular area light.

In this framework, a photon map is essentially a storage for all traced photons. There are typically two kinds of them: a global one, representing the slowly-varying interreflection between rough surfaces, and another one representing caustics. The former is usually a blurry irradiance solution that is used in conjunction with a final gathering pass in order to overcome the limitations present in that photon map. The latter is visualized directly and thus needs to be highly detailed - this usually means that we need to capture a lot of caustic photons.

So, what does the global photon map in Thea Render look like? As said above, since this is used as a secondary solution along with a final gathering pass, it need not be detailed. Still, it needs to be relatively smooth, capturing the light transitions in the scene. In the Thea Room scene, 20 million photons needed to be shot and 100 photons used for irradiance estimation in order to arrive at the render shown in figure 18. This render is the direct visualization of the global photon map. As we can see, the map has a blocky appearance; this is expected and not an issue.

Figure 18. Visualizing the global photon map of 20 million photons.
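The blurry, blocky look of the direct visualization comes from the way irradiance is estimated in a photon map: the powers of the k nearest photons around a shading point are summed and divided by the area of the disc that contains them. A minimal sketch of this standard density estimate (an illustration of the textbook formula, not Thea's implementation):

```python
import math

def estimate_irradiance(photons, point, k):
    """k-nearest-photon density estimate, the standard photon mapping formula.

    photons: list of ((x, y, z), power) tuples deposited near `point`.
    Returns the sum of the k nearest photon powers divided by pi * r^2,
    where r is the distance to the k-th nearest photon.
    """
    def dist(entry):
        return math.dist(entry[0], point)
    nearest = sorted(photons, key=dist)[:k]
    r = dist(nearest[-1])
    total_power = sum(power for _, power in nearest)
    return total_power / (math.pi * r * r)
```

This is why raising Global Photons (k) blurs the solution (the disc grows to contain more photons) while shooting more photons sharpens it (the same k is found within a smaller disc).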


One of the artifacts that come along with the standard photon mapping technique is the presence of leaks: light and shadow leaks. Due to averaging, photons that are spatially near, yet belong to regions of different illumination, can contribute to the estimation, yielding light leaks (figure 19). Shadow leaks, on the other hand, take place in corners where photons are not symmetrically placed and are essentially "missing" from one or more sides. Generally speaking, light leaks look worse than shadow leaks, because they imply modeling problems (while shadow leaks could be attributed to corner dirtiness). While one may expect that final gathering could mask these artifacts, it just attenuates their effect without completely removing them. Thus, special care needs to be taken. The possible ways to attack this problem are the following:

1. Avoid very thin or one-sided objects as walls. Instead, model walls with thickness, which will prevent light from leaking from one side to the other.

2. Enable leak compensation in the photon shooting settings. The photon map irradiance will take longer to compute but the leaks will be suppressed.

3. Shoot more photons and/or limit the photons used for estimation. This will essentially reduce the area of photons contributing to the estimation.

4. Enable secondary gathering in the final gathering settings. This will avoid the photon map evaluation near corners, switching instead to tracing.

It is best to utilize most - if not all - of the above actions in order to remove light and shadow leaks. Item 1 should receive special attention, since it has to be done during scene modeling.

Figure 19. On the left, light leaks are evident in the relatively low resolution photon map (2 million photons captured). With final gathering on the right, the light leaks are diminished but still present.



The power of photon mapping becomes more evident in scenes where lights are near rather than distant. This is typically the case with interior renders, where most lighting comes from light sources present in that area. For example, the render in figure 20 was rendered quite fast with the use of photon mapping and final gathering and without any leak problems. The rule here is that a shooting algorithm, like photon mapping, gives best results when the dominant light sources are near the scene with respect to the viewer.

Figure 20. Good and fast results when the light source is inside the scene.

FINAL GATHERING AND IRRADIANCE CACHING

As said in the beginning, besides photon mapping, global illumination can also be resolved with the use of final gathering alone. In Thea Render, final gathering is a special form of gathering that takes place on rough surfaces (those that have a diffuse or translucent component).

Standard ray tracing is itself a kind of gathering technique, since rays start from the viewer and propagate inside the environment; what makes final gathering powerful is the ability to cache calculations made at one position and reuse them for nearby locations. This way, the effort spent computing the irradiance at a particular point is amortised by reusing the result. The structure used for caching these values is simply called the "Irradiance Cache".

With final gathering one does not encounter light and shadow leaks, as in photon mapping, but the reuse of a cached value can still lead to some problems. One particular issue is that the result of final gathering can be reused at nearby locations only if those locations are queried afterwards; locations queried before do not take that result into account, which can lead to a blotchy, discontinuous appearance. The cure for this problem is to compute the final gathering values in a prepass, so that all of them are available during irradiance interpolation.

Using the irradiance cache accelerates rendering enormously, since we do not have to compute final gathering values for all pixels. Moreover, the cache can be reused during animation renders, making the rendering of subsequent frames even faster.
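The reuse idea can be sketched with a minimal, Ward-style irradiance cache (an illustrative Python sketch only; Thea's internal structure and error metric are not documented here, so the class, its weighting formula and the `alpha` tolerance are assumptions based on the classic technique):

```python
import math

class IrradianceCache:
    """Minimal Ward-style irradiance cache (illustrative sketch)."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha      # error tolerance: smaller = stricter reuse
        self.records = []       # (position, normal, irradiance, radius)

    def add(self, pos, normal, irradiance, radius):
        # Store a freshly computed final gathering result for reuse.
        self.records.append((pos, normal, irradiance, radius))

    def lookup(self, pos, normal):
        """Return an interpolated irradiance, or None when no cached
        record is close enough -- meaning a fresh (expensive) final
        gathering computation is required at this point."""
        w_sum, e_sum = 0.0, 0.0
        for p, n, e, r in self.records:
            d = math.dist(pos, p)
            # Penalize distance and deviation of the surface normal.
            cos_dev = max(0.0, 1.0 - sum(a * b for a, b in zip(normal, n)))
            denom = d / r + math.sqrt(cos_dev)
            if denom < self.alpha:          # record is usable here
                w = 1.0 / max(denom, 1e-6)
                w_sum += w
                e_sum += w * e
        return e_sum / w_sum if w_sum > 0.0 else None
```

Computing all the records in a prepass, as described above, guarantees that every `lookup` during the final render already sees its neighbours, which is what removes the blotchy appearance.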


Final gathering is particularly effective when the viewer is close to the scene while the dominant light sources are not. That is, for example, the case of the room presented previously, which is illuminated by sun and sky. In this case, the result of using final gathering can be computed quite fast, giving very good quality (figure 21). On the other hand, when there are many light sources in the scene, final gathering may take particularly long to complete.

Figure 21. Good/fast render using final gathering and irradiance caching.




Image by Pentti Lahdenperä



NETWORK RENDERING

INTRODUCTION

Network rendering is a way to remarkably improve render times by utilizing more machines to contribute to one or more renders. In our framework, there are two types of network rendering:

1. Co-operating on the same frame. All machines contribute to the current frame by sending and merging their image buffers.

2. Co-operating on different frames. In this case, each machine fully processes a frame that is part of an animation sequence.

In rendering, as in most network processing applications, the server-client scheme is used to distribute the workload to several machines. In this scheme, one machine takes the role of splitting and distributing the work (the server) to other machines, which receive these jobs, complete them and send back the results (the clients).
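The second co-operation scheme above (different frames of an animation) can be pictured as a simple work queue. The following Python sketch uses threads in place of real networked machines purely for illustration; `render_frame` is a hypothetical placeholder, and Thea's actual job protocol is not shown here:

```python
from queue import Queue, Empty
from threading import Thread

def render_frame(frame):
    """Stand-in for a real render job (hypothetical placeholder)."""
    return f"frame_{frame}.png"

def serve_animation(frames, num_clients):
    """Sketch of scheme 2: the server splits an animation into
    per-frame jobs; each client pulls a job, renders the frame and
    returns the result. Real network rendering uses sockets between
    machines rather than threads, of course."""
    jobs, results = Queue(), Queue()
    for f in frames:
        jobs.put(f)

    def client_loop():
        while True:
            try:
                frame = jobs.get_nowait()     # ask the server for a job
            except Empty:
                return                        # no more jobs: stand-by
            results.put(render_frame(frame))  # send the result back

    workers = [Thread(target=client_loop) for _ in range(num_clients)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return sorted(results.queue)
```

The key property is that each frame is handed out exactly once, so adding clients scales the animation render almost linearly, as long as the server can keep them fed.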

In Thea Render, there can be only one machine acting as the server, while there can be many client machines. The server machine can also act as a client, i.e. receive and complete a render job itself. Because of this, and the additional effort needed to communicate with the client machines, the server machine is usually chosen to have high performance characteristics.

NETWORK RENDERING SETUP

Let us see here how to set up the application in order to run in network rendering mode. As said in the previous paragraph, there can be many clients but only one server; thus, we need to set up the application on more than one machine. Only a machine using the full application software license can run as a server. The machines that will be used as clients should have their corresponding client software licenses. Note that if you have more than one full license, you can still choose one machine to become the server, with the rest running in client mode (this is done by running Thea Render from the command line, using the following syntax without the quotes: "thea -client").

The first thing generally done for network processing applications is for the client machines to look up and find the server machine. In order to facilitate this process, Thea Render comes with a search mechanism that will show all server machines. Nevertheless, this will only work if all machines are members of the same local area network and connected to the same router. Let us see how the server can be located in such a case.


First of all, we need to run Thea Render on the server machine (in either studio or darkroom mode). Then, from the Help menu, we select the Server Beacon option. This will pop up a small window indicating that the machine is broadcasting its identity (figure 1). Nothing else needs to be done on the server machine, so we keep the server broadcasting its information and move on to the client machines.

Figure 1. Employing the Server Beacon help tool
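Conceptually, a beacon like this is a periodic broadcast datagram that clients listen for. The following Python sketch shows the general idea only; Thea's actual beacon protocol, message format and port are not documented here, so the `BEACON_PORT` value and the `b"THEA_SERVER"` message are invented for the example:

```python
import socket

BEACON_PORT = 6200   # assumed for illustration; not Thea's documented beacon port

def broadcast_beacon(message=b"THEA_SERVER", port=BEACON_PORT,
                     addr="255.255.255.255"):
    """Send one 'I am a server' datagram (a real beacon repeats this
    periodically)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(message, (addr, port))
    s.close()

def listen_for_servers(timeout=1.0, port=BEACON_PORT):
    """What the 'sonar icon' does conceptually: collect the addresses
    of machines broadcasting a beacon on the local network."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", port))
    s.settimeout(timeout)
    found = []
    try:
        while True:
            data, (addr, _) = s.recvfrom(1024)
            if data == b"THEA_SERVER":
                found.append(addr)
    except socket.timeout:
        pass
    finally:
        s.close()
    return found
```

Because such broadcasts do not cross routers, this also explains why the automatic search only works when all machines sit on the same local network segment, as noted above.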

Running Thea Render on the client machines will automatically show the client user interface (figure 2). This is essentially the darkroom interface, along with a special client menu bar on the left (note that the actual interface may vary, since the darkroom controls operate in a passive mode).

Figure 2. Running the client user interface for the first time


Looking at the client menu, the first thing we would like to do is configure the client. Clicking on that button pops up the client configuration menu (figure 3). The server port is a number that allows the simultaneous communication of several applications between machines; as long as two applications do not use the same port number, there is no risk of data conflict. It is recommended to use the default value (6200) or, in the (rare) case of a conflict with another application, to try a higher value (but this will also require setting up the server beacon on that port). Overriding the threads is necessary in order to utilize a certain number of threads on the client machine, not necessarily the same as the settings that come with a scene. The default value (0) corresponds to the maximum number of efficient threads for the machine.

Figure 3. Client configuration menu

The upload period is the time period (in minutes) used to upload data to the server. When starting out, a low value may be used to verify that everything is working properly, but once you get accustomed to network rendering it is recommended to use a higher value (like 10-20 minutes) in order to minimize network traffic. Note that the upload period is only used by unbiased co-operative rendering; in any other case, the resulting data is uploaded as soon as the render job is finished. We haven't discussed the Server Address entry yet; this is where the IPv4 (internet protocol version 4, for example 192.168.0.1) address should be entered. This address can be found in the network settings of the server machine but, as we said before, we can locate the server more easily. We already have the server machine broadcasting information at this point, so we click on the "sonar icon" found at the end of the Server Address entry. This will bring up the server search list dialog (figure 4), where the application checks the local area network for server beacons (if a firewall is installed, you may be asked to allow network access for the application).

Figure 4. Searching for server beacons



Once the server is found, it is displayed in the list; we can click on the corresponding line and then OK to select it (figure 5). It is important to note that automatic detection can take place only if the machines are connected to the same router. Also, an installed firewall may block the application's connection without even informing the user. In such a case, the user should explicitly check the firewall, in case it forbids access4.

Figure 5. Selecting a server from the search list

Once the server has been selected, we return to the client configuration dialog, where the IP address has now been set up, and we can accept the configuration by clicking the Accept button. Now we can see that the Start button (in the client menu bar) is also enabled. The application will remember our client configuration settings, so we can also exit the client application at this point.

COOPERATIVE RENDERING OF SAME FRAME

As soon as all our clients have been set up, we can perform our first network render. The simplest network rendering mode is that of the same frame. This is typical for unbiased rendering, where many machines work together, contributing more and more passes to the same image.

There is no particular sequence for starting a network render, in the sense that the server and clients can be started asynchronously and in any order. The easiest way, though, is to have the clients started and waiting for render jobs sent by the server. Moreover, if you press the Start button on the client interface, the application will automatically start the next time the client runs. This is quite useful, since the whole procedure can be automated so that the application is launched once the operating system starts up.

Now, once the client machines have been started, they essentially enter a stand-by mode, where they periodically check the server for any render jobs. As soon as the server starts running, the scene will automatically be sent to the clients to be rendered. The easiest way to make a machine act as a server is by overriding the network render settings when the Start Render button has been pressed (figure 6).

Figure 6. Overriding network settings to act as a server
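When all machines contribute passes to the same frame, the server has to merge the uploaded image buffers. A minimal sketch of such a merge is shown below (illustrative Python only, not Thea's actual merge code; it assumes each client uploads its pass count together with a buffer whose pixels are already averaged over that client's passes):

```python
def merge_buffers(client_buffers):
    """Merge unbiased client contributions into one image.

    `client_buffers` is a list of (pass_count, pixel_buffer) pairs,
    each pixel buffer being a flat list of per-pixel values averaged
    over that client's passes. The merged image is then simply the
    pass-weighted average of the buffers.
    """
    total_passes = sum(passes for passes, _ in client_buffers)
    if total_passes == 0:
        return []
    width = len(client_buffers[0][1])
    merged = [0.0] * width
    for passes, buf in client_buffers:
        # A client that completed more passes gets proportionally
        # more weight in the final image.
        for i, value in enumerate(buf):
            merged[i] += value * (passes / total_passes)
    return merged
```

This weighting is why a slow client cannot degrade the final image: its buffer simply carries fewer passes and therefore less weight, while every uploaded pass still reduces noise.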


4 For example, on Windows Vista, users should configure Bit Defender appropriately.


During rendering, you can check the client status and progress in the network panel located inside the darkroom interface (figure 7). This is purely for information and diagnostics of all the machines working on the render.

Figure 7. The network panel info; a server and two clients in action.

STOPPING A NETWORK RENDER

With unbiased rendering, you can stop the render process whenever the image looks good enough (other than stopping at the maximum time or passes). When network rendering, one can stop the whole render farm by simply clicking the Stop button on the server machine. This sends a termination signal to all clients, which in turn make their final render commit. At this point, several clients will try to connect to the server to transfer data, and this process may take some time to complete. After that, the clients return to stand-by mode, ready to receive new render jobs.

It is also possible to stop a client separately from the others, although at the moment this can be done only directly on the client machine. In this case, it is recommended to press the Stop Client button in the menu bar, which will stop rendering (and make the final commit to the server) while also avoiding setting the client into stand-by mode again (thus, new render jobs won't be handled).






Image by Patrick Nieborg


