Making Games – Special Edition GDC 2014


THE NEW VOICE OF GAME DEVELOPMENT – DESIGN, BUSINESS, ART, TECHNOLOGY

THE TECHNOLOGY OF UNREAL ENGINE 4 – EPIC GAMES EXPLAINS THE TECH USED FOR THE INFILTRATOR DEMO

PARADOX INTERACTIVE – COMMUNITY MANAGEMENT FOR HARDCORE AND CASUAL GAMERS

BADLAND – HOW TO CREATE A HIT BY TRUSTING NOTHING BUT YOUR GUT FEELING


Making Games Magazine: 10,000 readers – Central Europe’s most relevant magazine for game developers
makinggames.de: 8,000 unique visitors – Germany’s biggest website about game development
facebook.com/MakingGames: 4,700 fans – Europe’s largest game developer community on Facebook
Making Games Mail: email database with more than 9,000 B2B contacts
Making Games Professionals: lead database with more than 650 fully qualified games professionals
Making Games Talents: Germany’s most successful recruiting event for the games industry
Key Players: the world’s biggest games industry compendium with more than 80 company portraits

TALK TO US!

projects@makinggames.de


Editorial

8 YEARS IN THE MAKING

Eight years ago, a couple of German gaming journalists went bonkers. While visiting the GDC in San Francisco, they figured: hey, let’s do a magazine specifically for game developers. Writing stuff about games was all fun and, well, games, but taking a closer look at how those games were actually made seemed so much more intriguing.

From a business perspective, it was a crackpot idea. Quite simply, there was no market for the kind of magazine we really wanted to make. We knew that we wouldn’t sell a lot of copies. We also knew that we wouldn’t have the budget to employ a proper editorial staff. So we built a rough (and boy, was it ever rough) concept in our spare time and eventually showed it to a couple of game developers, asking for feedback and whether or not they would be interested in contributing a story or two. They did ... and Making Games was born. A magazine by professionals, for professionals.

We don’t like buzzword bingo any more than you do, and we’re not interested in marketing bullshit. We care about making games, from every angle. For instance, we’re rather proud of Crytek’s extensive and exclusive case study about building »Ryse: Son of Rome« for the Xbox One. We also invite you to have a look at all the other stories featured in this special GDC issue, showcasing the whole editorial range of Making Games.

Heiko Klinge, Editor-in-Chief, Making Games

Custom-made for tablets

So we’re not newcomers. We’ve been doing this for eight years now, publishing hundreds of articles by hundreds of industry professionals. We’re Central Europe’s biggest game development magazine, with 10,000 readers per issue. But now we’re going international, and it’s literally a dream come true. For years we’ve been getting requests for an English version of Making Games. Thanks to the rise of tablets, we are finally able to do just that – and do it in style, too. The Making Games app isn’t an old-fashioned PDF document; it’s custom-made for tablets, allowing for easy reading and navigation (please refer to the »Making Games @ Tablet« section for further details). With this special GDC issue we basically just try to give you an impression of our editorial quality. If you like it, please give our tablet version a shot as well and spread the word. If you don’t like it, then please give us your feedback via email (editor@makinggames.de) or facebook.com/MakingGames. We can’t pull this one off without you!

»We care about making games, from every angle.«

The team of Making Games

The rise of tablets finally allows us to become an international publication.



Shameless Self-Promotion

MAKING GAMES @ TABLET

Thank you for reading this special GDC issue of Making Games. It’s designed to show off the editorial quality of our magazine and subtly make you subscribe to our interactive tablet app. Would you kindly?

Here’s the deal
- The next issue of Making Games, featuring a cover story about open development, will be a free trial issue. Give it a shot!
- Starting with Making Games 03/14, a single issue for tablets is $6.99.
- A half-year subscription is $16.99; a year-round subscription is $31.99.

According to rumors, Rupert Murdoch spent about 30 million Euros on his tablet newspaper »The Daily«. It launched in February 2011 and went down in flames in December 2012. To be perfectly honest: we didn’t spend 30 million on the tablet issue of Making Games. We didn’t even spend one million. But what we have spent is a lot of blood, sweat and tears. And it’s quite a risk we’re undertaking here. We’ve got about 10,000 readers, making us one of the leading business magazines for game developers, but that number isn’t exactly awe-inspiring for launching a successful tablet issue.

So why are we doing this anyway? Number one: tablets have replaced dogs as man’s best friend, at least when it comes to game developers. And we’ve been asked again and again why Making Games wasn’t available on tablets, so there’s that, too. Number two: we realized that our content works amazingly well on high-resolution touch screens. Number three: because we’ve been there and done that. Back in 2005, people kept telling us that there was no market for yet another game developer magazine. Turns out there was. We’re still alive and kicking.

It’s not just a PDF file

Quite a lot of tablet magazines are rehashed PDFs. Sure, it’s serviceable, but it’s about as fancy as Cher after a night of binge drinking. And quite frankly, it’s rather annoying due to all the zooming. So we decided to custom-build the content for Making Games. It’s prettier, it’s more elegant and, most importantly, it’s a perfect fit for the capabilities of modern tablet devices. On most tablets, you won’t have to zoom at all. But if you particularly want to take a closer look at a certain graph, you most certainly can.

It’s interactive!

Browsing through our magazine is a breeze. Our app even remembers where you stopped reading for every single article.


We’re convinced that a digital magazine needs interactive elements. Simply tap the »+« symbol next to the picture of an interviewee to promptly access their biography. But we’re not just going for convenience, we’re also trying to do something that quite simply isn’t possible in print. Take, for instance, our obituary on the Xbox version of The Witcher 2. Sure, you can admire a tree’s details in full screen mode and retina resolution. But with a single tap, you can also switch back and forth between the Xbox and PC versions of the same tree, instantly comparing one to the other. Or how about »A day at Irrational Games«? You can pretty much tour the studio in HD quality from the comfort of your living room couch.

It’s connected

By the way, our tablet issue is a lot more fun when you’re online. If one of our case studies piqued your curiosity about a mobile game, we’ll gladly take you to the matching page in the app store. If you think one of our authors is worth following on Twitter, all you have to do is tap the link below their picture and you’re right on their Twitter page. Fancy one of the software solutions we featured in our tools section? Tap the link and visit their website. You don’t even have to close our app to do any of this (with the exception of visiting the app store): simply tap »done« and you’re right back where you left Making Games. It’s pretty much a given that all of our sources are properly linked in, too.

Oh, and don’t worry about annoying animations, whistling sounds or bells going off. We don’t deal in that particular sort of »interactivity«. We prefer a layout that’s clear, well-structured and, most importantly, easy to read. There’s also a pretty neat side effect to this: our app doesn’t eat storage space for breakfast.

With a single tap of a button, you can switch between the PC and Xbox iterations of a tree from The Witcher 2. Thanks to full screen and retina resolution you’ll be able to make out even the smallest of details.

Here’s to the future

We’re pretty proud of the way Making Games has evolved from a print magazine to an interactive tablet app. So we’d like to invite you to come along for the ride. Check out our free trial issue, and if you happen to like it, please spread the word and take a second to rate the app. But what we’re really looking forward to is hearing from you directly. You can either contact us via e-mail at editor@makinggames.de or leave a comment on www.facebook.com/MakingGames. This project will only succeed if you have as much fun with the tablet issue as we had creating it. And one last thing: if you’re already reading this on a tablet device, you simply have to tap the respective link to give us a piece of your mind. Pretty cool, right? Heiko Klinge

All of our sources are linked in, so you can access an author’s Twitter feed by tapping the link below their picture. You don’t have to close the app.



CONTENTS

GDC 2014

Editorial
Making Games @ Tablet – The digital version of our magazine
A day at ... Irrational Games
Also available from Making Games
Imprint

COVER STORY: The Technology of Ryse
Best Practice – Developer Expectations and the Reality (Chris Brunning)
Rendering Case Study – The Transition to Physically Based Shading (Nicolas Schulz)
Character Design Case Study – The Making of a Hero (Peter Gornstein, Mike Kelleher and Christopher Evans)

Programming Best Practice – Creative Mobile Porting (Matthias Hellmund)
Business Case Study – Giana Sisters: After the Kickstarter (Adrian Goersch, Emily R. Steiner, Nikolas Kolm, Patrick Harnack and David Sallmann)
Technical Postmortem – Porting The Witcher 2 to Xbox 360 (Marcin Gollert, Krzysztof Krzyscin and Piotr Tomsinski)
Level Design Case Study – Building the Scope of Planetside 2 (Devin Lafontaine and Corey Navage)
Interview: Guild Wars 2 – Player Engagement through Storytelling with Bobby Stein
Community Best Practice – Managing a Hardcore Community (Björn Blomberg)
Graphics Workshop – Scottish Landscape in the CryEngine 3 (Martin Teichmann)
Game Design Postmortem – Badland: »We were just having fun.« (Johannes Vuorinnen)
Engine Case Study – Creating the Infiltrator Demo (Alan Willard)
Development Case Study – Building a Cross Platform Pipeline (Domenique Boutin)
Business Best Practice – The Big PR FAQ (Gunnar Lott)



RYSE

DEVELOPER EXPECTATIONS AND THE REALITY

Technical Director Chris Brunning elaborates on the major hurdles that the team had to overcome during the development of Ryse: Son of Rome for the Xbox One, including the complex Instant On feature, SmartGlass optimization and Kinect functionality.

Chris Brunning is Technical Director for Ryse at Crytek.

Chris Brunning is Technical Director on the Ryse project, having joined the game team in early 2012. Chris has been in the games industry for over 25 years in a variety of studios; his experience spans programming, production, technology development and management.

One of the first things that the vast majority of developers, and publishers, seem to assume is that the new generation will remove all previous limitations: we can have as many polygons on screen as we want, we can use huge textures everywhere, and lots of them, we can physically simulate everything, we can have thousands of simultaneous sounds playing, we can have finer, detailed control of hundreds of AI – because we have »next gen graphics«, »next gen sound«, »next gen processing«. The reality is a little different ...

How it plays out

As with every new generation of console, hardware developers have been able to increase graphical quality and fidelity; the number of active sound channels commonly increases, and so does the raw processing power. These are all »increases«, not a sudden jump from hard limits to infinity, and we don’t always know exactly what they mean right from the start. Expectations need to be carefully managed from day one, as best we are able, with assumptions and guesstimates. These increases, even when the hardware itself is capable of them, often come with costs, particularly in terms of code time to allow/utilise them, art, design and sound creator costs to generate them, and raw back-end processing power to convert them to something more readily usable. The »next gen« is NOT a no-limits panacea, rather an intellectual challenge to show that limits just mean try harder.


The »Fun« Bits

Yes, there is a lot of fun involved in developing on a new piece of hardware with a new operating system and a new IP, but here I am talking tongue in cheek. I just want to mention the areas that can be challenging, not only for Ryse, but for every launch or near-launch title.

Not Final Hardware

In this console cycle Microsoft put a lot of effort into making the very early development kits appear very close in performance to the final target machine. They did a good job, and right from the start of developing Ryse the kits did have a very similar performance. However, there is always a certain amount of guesswork and trust that what you do will be pretty much the same at launch.

Not Final OS

This one is a little trickier. The Microsoft platform team have been in a tougher position than developers, as they have been working much closer to »the metal« and have had to deal with hardware that was actually very different from the final hardware, even though to developers it appeared very similar. What this means is that the libraries they supply are all very new, prone to bugs, and don’t always perform (or get used) in the same way in the next release of the OS. Things change underneath your feet. There needs to be a degree of trust in the promises of software support and deadlines, and mitigation plans if some of it goes wrong.


The action game Ryse is set in ancient Rome and is one of the exclusive titles for the new Xbox One. To show off all the next gen features of the console as well as of CryENGINE 3, the developers had to adapt to the new technology and adjust their expectations in order to innovate. At the bottom of the screen you can see a short video clip being saved directly out of the game as the player earns an achievement. This clip can be shared, for example on a tablet using the SmartGlass functionality.

Fortunately for us on the Ryse team, not a lot has gone irretrievably wrong, and the Microsoft team in Redmond have also provided a great deal of support.

The Real Fun Bits

There are quite a few platform-related features coming online with the new console, some completely new, some advances on technology that was at least partially developed for the Xbox 360. Being a second party development studio, we are strongly encouraged to use any and all that make sense for our game. These areas have been both a big design challenge and a technical challenge, one we have relished and pushed on.

Instant On

Next gen games are probably going to be very big, with lots of meshes, lots of textures and lots of videos – we can have around 50 GByte on a disk – so copying from a Blu-ray or the internet is likely to take a long time. As gamers, we’re not always the most patient of people, so Microsoft has come up with a way of getting us playing as soon as possible. When a disk is inserted, or the game starts to stream from the internet, the system must know how much of the data needs to be present on the console HDD before the game can start. Not all data is used in every level of the game: some is used in every level, some, like the UI, needs to be persistent, and some is used in only two or three levels. The problem facing the development team is how best to package the data for all possible circumstances. So what possibilities are there for the sequence of loading?

1. The disk goes in the machine and the game is played start to finish. This would be the default layout on disk (or in the internet package).
2. The disk goes in the machine; the user goes immediately to multiplayer.
3. The disk goes in the machine; the user plays the first level, quits and removes the disk, then goes to a friend’s house wanting to carry on where they left off. The friend doesn’t own the game, so the player takes the disk along.

What Microsoft has given us is the ability to change the order in which files are copied to the internal HDD. This helps a lot, but it certainly isn’t all that has to be done. There is little advantage in duplicating data to avoid a layer switch; in fact it has the disadvantage of a longer install time and a bigger HDD footprint. So the problem has been how best to group our data so that any level, or multiplayer, can be copied to the HDD and played as soon as possible. In the end, as data is copied in blocks of files rather than individual files, there is no perfect mathematical solution, just a closest approach. Considerable time and effort has been spent in first analysing the problem for a suitable layout algorithm, collecting the data to construct a layout and adjusting our standard build system to spit out this idealised disk layout. The sketch below gives the flavour of such a grouping heuristic.
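Crytek does not detail their actual layout algorithm; the following Python sketch (all names and data invented for illustration) shows the general idea – bucket files by which levels need them, then order the buckets so that globally shared data and the first level land on the HDD as early as possible:

```python
# Hypothetical sketch of a disk-layout grouping heuristic, not Crytek's
# actual build-system code. Files are bucketed by their "usage signature"
# (the set of levels that need them), then the buckets are ordered so the
# first level becomes playable as early as possible.
from collections import defaultdict

def plan_layout(file_usage, level_order):
    """file_usage: {filename: set of levels that use it}
    level_order: levels in default play order, e.g. ["level1", ...]"""
    buckets = defaultdict(list)
    for name, levels in file_usage.items():
        buckets[frozenset(levels)].append(name)

    def priority(signature):
        # Buckets needed by the earliest level come first; ties break
        # toward buckets shared by more levels (more content per byte).
        earliest = min(level_order.index(l) for l in signature)
        return (earliest, -len(signature))

    layout = []
    for sig in sorted(buckets, key=priority):
        layout.extend(sorted(buckets[sig]))
    return layout

files = {
    "ui.pak": {"level1", "level2", "multiplayer"},
    "rome_textures.pak": {"level1", "level2"},
    "boat.pak": {"level1"},
    "mp_maps.pak": {"multiplayer"},
}
print(plan_layout(files, ["level1", "level2", "multiplayer"]))
```

Run on the toy data, the persistent UI package comes first, followed by data shared between the campaign levels, then level-specific and multiplayer-only blocks – the same ordering trade-off described above.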



Kinect

Everyone will have Kinect right from the outset with Xbox One, so lots of people will be open to trying it out for the first time. Our job was to find uses for it that add to the game experience, taking advantage of what Kinect can do so that players realize its value and potential. We have found that the key is not to impose Kinect usage, but to add an advantage by using it. Certain areas of the game have been enhanced to give the player a gameplay advantage when using Kinect; certain actions can be performed more efficiently with the system while the player continues to do something else. This feels like a good use of the technology for us.

SmartGlass

Almost everyone has a SmartGlass-capable device, and I think people are really curious to see how well this technology can be integrated with games on Xbox One. We have devoted a lot of thought to how using one of these devices with Ryse can improve the user experience. What if you could sit on the bus or train and see how far your friends have progressed, or check out the latest videos and tutorials on the section you are finding difficult, all from the device in your pocket? We have looked into, and implemented, all of these areas and more.

Multiple Operating Systems

Not only multiple operating systems, but multiple environments that can run code simultaneously. This gives us the opportunity to run two communicating applications as part of the game itself, in this case the main game and the UI. It has allowed us to develop a UI that very closely resembles the SmartGlass companion application. Whether you are on your tablet, your smartphone or on the box itself, your user experience and options will be very nearly the same.

CryENGINE Features

CryENGINE is constantly evolving as new techniques are researched and new hardware opportunities arise. Many features are always in the pipeline, just waiting for specific applications or the time to develop them. Every new project drives new features, or extends existing ones. Given the timeframe of the Ryse production, we had to choose a set of features that we could exploit and get developed. On the following pages we will take a close look at two of those features that we chose as a major focus: physically based shaders and character tech.

Final Words

Developing on a brand new platform with a brand new operating system was never going to be a walk in the park, but that hasn’t stopped Crytek and Microsoft shooting for the stars. As with every other Crytek project, this has pushed innovation in the CryENGINE itself and is pushing the new hardware and its features, if not to the limit (Ryse is a launch title), then a good long way towards it. Chris Brunning

Hardcore gamers are still uncertain about the SmartGlass functionality of the new Xbox One. For Ryse the developers aim for full support of this new technology; in the demo at E3 2013 Crytek already showed different menus on a tablet, including a game progress timeline and several social features that extend the game experience.



RYSE

THE TRANSITION TO PHYSICALLY BASED SHADING

While the benefits of physically based shading are very obvious, the actual realization of this concept demands the full commitment of the whole team as well as several programming tricks. Senior Rendering Engineer Nicolas Schulz explains the implementation on the basis of several scenes in CryENGINE 3.

This current generation has seen some remarkable advances in rendering quality: from believable skin with plausible subsurface scattering approximations to various forms of anamorphic lens flares, the feature list of the leading rendering engines is quite impressive. Post processing in particular has received a great amount of attention this generation, and while it surely helps to achieve a more cinematic look, it is no secret that its extensive use also serves to disguise something that most games of this generation have in common: the quality of materials, and of how light interacts with them, is still far from the standards that are common in offline rendering and CG movies.

The common state of materials in current video games is either a strongly diffuse look with very high contrast to break uniformity, or a look that features overly glossy surfaces with very noisy and sparkling highlights. While many games certainly do look beautiful as a whole, there is still a considerable quality gap between an in-engine screenshot and footage that was rendered offline.

With Ryse: Son of Rome being a launch title and as such inherently a tech showcase for the next generation, we decided early on to tackle this area and put a stronger focus on the authenticity and readability of materials, hoping that this would bring us closer to the aesthetics of CG movies. Instead of adding a lot of fancy rendering features, we opted to spend most of our efforts on the consistency of the shading and lighting models, trying to unify them under a more physically based framework. Knowing that the next generation consoles would improve mostly on memory size and GPU compute power, allowing considerably higher quality texture maps and more complex math to be executed in shaders, this move felt like a great fit for the upcoming hardware.

Nicolas Schulz is a Senior Rendering Engineer at Crytek.

Nicolas is a Senior Rendering Engineer and has been at Crytek for more than 5 years now. He worked on Crysis 2, where his main focus was on the cross-platform Stereo-3D implementation and feature development for the DX11 upgrade. Nicolas joined Ryse early in its development cycle where he has been actively driving the transition to a more physically based rendering paradigm. Nicolas started writing his own graphics engines long before attending University and has since then not lost any of his passion for rendering technology.

Why Physically Based

In contrast to traditional observation-based shading, physically based shading models simulate the light-material interaction using well understood laws and principles from physics, like the Fresnel equations and the basic law of energy conservation (Figure 1). Replicating more closely how light behaves in the real world leads to more natural and believable results and ensures that materials look plausible regardless of the current lighting conditions, ultimately resulting in a lot greater consistency. A further benefit of applying physically based models is that fewer parameters are required to achieve certain effects, leading to a more streamlined and approachable content creation workflow.

Figure 1: Physically based materials as they appear in Ryse.

A good basic example of how being more physically based can simplify material setup is the normalization of specular highlights. On a smooth reflective surface like a chrome sphere, the highlight coming from the reflection of a light source is very sharp and clear. Once the material gets rougher, the highlight starts to lose its original shape and becomes wider and at the same time less bright (Figure 2). To achieve this effect, a traditional model might use two parameters for controlling the highlight: one for the size and one for the intensity. However, due to the principle of energy conservation, the two parameters are actually coupled. On a rough surface, the incoming light is distributed over a larger area and hence has to be less intense at any single point. Knowing that, the exact intensity can be computed by integrating the energy over the area of the highlight. While this involves some non-trivial (although also not overly complex) math, the result is often a simple factor that can efficiently be applied to the BRDF (bidirectional reflectance distribution function) in the shader. In the end, a single roughness parameter is enough to correctly control both the size and the brightness of the highlight at the same time.

Figure 2: Three plastic spheres with varying roughness. Note how the size and intensity of the highlight changes.
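The article does not spell out the factor itself. As a hedged illustration, a normalization commonly cited in the real-time rendering literature for a Blinn-Phong specular lobe with exponent $n$ (derived from roughness) is:

$$ f_{\text{spec}} \;\approx\; \frac{n + 8}{8\pi}\,(\mathbf{N}\cdot\mathbf{H})^{\,n} $$

where $\mathbf{N}$ is the surface normal and $\mathbf{H}$ the half vector between light and view directions. The $(n+8)/(8\pi)$ prefactor is exactly the kind of simple energy-conserving factor described above: as $n$ grows (smoother surface), the lobe narrows and the peak automatically brightens, so one roughness-driven parameter controls both size and intensity.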

Art Pipeline Changes

A physically based shading model needs to be fed with meaningful data. As the lighting and shading computations involve physical terms, the input data needs to respect certain real-world limitations to deliver plausible results. This is quite different from traditional asset workflows, where less strict rules applied to authoring material maps. At first it took some time for our artists to get familiar with the new workflow, but once everyone grasped the concept, it simplified the asset creation process by taking away a lot of guesswork and leading more quickly to consistent results.

To make better sense of the input attributes, it is helpful to have a basic understanding of how light can be thought of to interact with objects. When light hits a surface, some of it gets reflected directly off the surface, while the rest gets refracted and enters underneath the surface. The amount of light which is reflected versus refracted is determined by the Index of Refraction (IOR), which is a common physical number. Of the light that enters underneath the surface, some gets absorbed while the rest gets scattered around and exits in a random direction. This is at least true for common opaque materials; some other substances like skin or liquids require a more complex light interaction model to look convincing. The light which is reflected off the surface is usually referred to as specular, while the refracted and scattered light is referred to as diffuse.

Common material attributes in a physical shading system are the diffuse color, specular color and roughness, and those are also the most common material attributes in Ryse (Figure 3). The amount of light that gets absorbed during scattering is specified using a diffuse or albedo map. Albedo maps have to be clear of any lighting information or ambient occlusion and are ideally created from photos that were calibrated using a reference color chart. The Index of Refraction of a material gets converted to an RGB value for more convenient handling and is stored as specular color in our system. Depending on the material type, artists pick the appropriate specular color from a table of common values. For artists, using that reference table is one of the biggest changes from the traditional workflow, where the specular map acted more as a mask for highlights and was often just painted arbitrarily to look good. To help them with the transition, we added a debug view which validates whether the specular color is within a physically plausible range, avoiding the most common setup mistakes (Figure 4).

Figure 3: A simple test scene with visualization of common material attributes. In clockwise order: shaded scene, diffuse color, material smoothness (inverse of roughness) and specular color.

Figure 4: Overlay for validation of specular color. Blue means the value is below the physically plausible range, orange indicates it is too high.
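To make the IOR-to-specular-color conversion concrete: reflectance at normal incidence follows directly from the Fresnel equations, while the validation thresholds below are illustrative stand-ins for whatever range the Ryse debug view actually uses.

```python
import math

def f0_from_ior(ior):
    """Reflectance at normal incidence for a dielectric, from the
    Fresnel equations: F0 = ((n - 1) / (n + 1))^2."""
    return ((ior - 1.0) / (ior + 1.0)) ** 2

def validate_specular(f0_rgb, lo=0.02, hi=0.08):
    """Rough plausibility check mirroring the idea of the debug
    overlay. Most dielectrics reflect roughly 2-5% at normal
    incidence; the exact thresholds here are illustrative only."""
    if max(f0_rgb) < lo:
        return "blue"    # implausibly dark specular
    if min(f0_rgb) > hi: # metals are a separate, much brighter class
        return "orange"
    return "ok"

water = f0_from_ior(1.33)  # ~0.02, a classic reference value
print(round(water, 3), validate_specular((water,) * 3))
```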


Figure 5: The same environment probe applied with parallax correction (left) and without (right). Note how the reflections match a lot better in the left image.

The most interesting attribute is the surface roughness, as technically it influences several aspects of the shading. The practical effect it has is close to what one would intuitively expect from the term »roughness«: sharp reflections on smooth surfaces like a mirror or polished marble, and more blurry, less distinguishable reflections on rougher surfaces. It is worth mentioning that for Ryse we decided to always couple roughness and per-pixel normals, with the roughness value usually being stored in the alpha channel of the normal map. Normals and roughness are conceptually closely related: while the normals define surface variation on a macro scale, the roughness defines the same on a micro scale. This insight can be used to improve results when shading with mipmapped normal maps and to reduce specular aliasing.
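The article does not name the exact technique, but one widely used approach along these lines is Toksvig’s normal map filtering: when a mip level averages divergent normals, the averaged vector shortens, and that shortfall can be converted into extra roughness (here a reduced specular exponent). A minimal sketch:

```python
import numpy as np

def toksvig_power(mip_normals, spec_power):
    """mip_normals: (N, 3) unit normals that one mip texel averages
    over. Returns a reduced (rougher) specular exponent, following
    the Toksvig factor ft = |Na| / (|Na| + s * (1 - |Na|))."""
    n_avg = mip_normals.mean(axis=0)
    length = np.linalg.norm(n_avg)   # 1.0 = flat area, < 1.0 = bumpy
    ft = length / (length + spec_power * (1.0 - length))
    return ft * spec_power

flat = np.tile([0.0, 0.0, 1.0], (4, 1))
bumpy = np.array([[0.3, 0.0, 0.954], [-0.3, 0.0, 0.954],
                  [0.0, 0.3, 0.954], [0.0, -0.3, 0.954]])
print(toksvig_power(flat, 64.0))   # stays 64: no variation, no change
print(toksvig_power(bumpy, 64.0))  # ~16: variation widens the highlight
```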

Figure 6: Glossy realtime local reflections, helping to make objects appear more grounded on a reflective surface.


Lighting System Overview

When it comes to lighting, we are very careful to keep the integrity of materials, making sure that the system preserves the exact ratio between diffuse and specular as defined for the asset. To ensure that, we have completely dropped some traditional lighting approaches like a constant or hemispherical ambient term, which has only a diffuse but no specular contribution and would therefore flatten the material. For direct lighting, the majority of our objects are shaded in a deferred way using a unified BRDF. Exceptions are more complex materials like hair or eyes, which are forward rendered with a separate, specialized BRDF. All shading computations involve a Fresnel term, which yields increased reflection at grazing angles, and we incorporate a screen space generated directional occlusion term everywhere. While the material roughness value is conventionally used for the specular term, we also incorporate it into the diffuse term to get some view dependent retro-reflection that improves the quality of rough materials like stone.

Indirect lighting is captured using light probes that are manually placed by artists. The probes are essentially cubemaps that are preconvolved for different roughness values. To overcome the limitation that cubemaps are only valid for a single point but are applied to a larger area, the probes support geometry proxies with which reflections can be parallax-corrected (Figure 5). In addition, lighting artists have the option to further refine indirect lighting intensities locally with something we call ambient lights, which are useful to mimic local bounce and occlusion effects. To get local reflections that can’t be represented within the probes, a raytracing pass is performed in screen space, with a subsequent convolution step to support glossy reflections. Where available, the local reflections overwrite the probes, providing reflections for dynamic objects and giving convincing reflection occlusion where the probes are not accurate enough (Figure 6).

Since a typical Ryse scene can have plenty of light sources and probes, performance is something we have to be highly concerned about. To reduce the bandwidth pressure that reading the G-Buffer attributes creates, all lighting computations, including the application of the indirect light contribution, are performed on screen tiles within a single compute shader. This helps to considerably cut the cost of probes and lights on our target hardware (Figure 7).

Figure 7: The inside of the palace is among the heaviest lighting setups, with up to 250 light sources on screen. The bottom image shows the light coverage for the tiles processed by the compute shader.
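The compute shader itself is not shown in the article; the core idea of tiled shading can be sketched as follows, with Python standing in for GPU code and a simple screen-space circle test in place of the real per-tile frustum and depth test:

```python
# Illustrative sketch of tiled light culling: the screen is cut into
# tiles, each tile gathers only the lights whose bounds touch it, and
# each pixel is then shaded once against its tile's short light list.
TILE = 16  # pixels per tile edge, a typical compute-shader group size

def bin_lights(lights, width, height):
    """lights: list of (cx, cy, radius) bounds in screen space."""
    tiles_x = (width + TILE - 1) // TILE
    tiles_y = (height + TILE - 1) // TILE
    bins = [[] for _ in range(tiles_x * tiles_y)]
    for idx, (cx, cy, r) in enumerate(lights):
        x0 = max(int((cx - r) // TILE), 0)
        x1 = min(int((cx + r) // TILE), tiles_x - 1)
        y0 = max(int((cy - r) // TILE), 0)
        y1 = min(int((cy + r) // TILE), tiles_y - 1)
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                bins[ty * tiles_x + tx].append(idx)
    return bins

# 250 lights on screen, but each 16x16 tile only pays for the few
# lights that actually overlap it - the G-Buffer is read once per tile.
bins = bin_lights([(100, 80, 40), (500, 300, 120)], 1920, 1080)
print(sum(len(b) for b in bins), "tile/light pairs")
```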

Conclusion

One of the things to be aware of is that physically based shading can only work well if everyone involved is fully committed to it. This includes a lot of people from various departments – rendering tech, character art, environment art and lighting. Making the transition means, first of all, giving up several proven production guidelines, and it can naturally require a lot of persuading to get everyone on the same page. We are very happy, though, to have made this transition for Ryse and with the outcomes we are getting from it. For the next project we will definitely keep building on our system, and with all the experience we have gained during the production of Ryse, we are very confident that we will be able to raise the quality bar even further. Nicolas Schulz




RYSE

THE MAKING OF A HERO

Global Cinematic Director Peter Gornstein, Cinematic Producer Mike Kelleher and Art Technical Director Christopher Evans follow the process of creating a realistic character for Ryse: Son of Rome, including performance capturing and three-dimensional face scans.

Peter Gornstein is Global Cinematic Director at Crytek.

As Crytek’s Global Cinematic Director, Peter is the owner of the creative quality of all story and cinematics company-wide. After studying at the American Film Institute, he began working at Sony Pictures Imageworks, where he worked on titles spanning from Starship Troopers to Stuart Little. Later, as Lead Concept Artist for IO Interactive, he worked on several AAA videogames, including Freedom Fighters, Kane & Lynch and the Hitman series. His work as a film and television director has seen him win multiple awards, including the Canal+ Prix at the Clermont-Ferrand Film Festival and a nomination from the Cannes Film Festival.

Mike Kelleher is Cinematic Producer at Crytek.

Mike is responsible for running the cinematic production on Ryse. After receiving his BA in Media Arts and Animation from the Art Institute of Pittsburgh, he started his career as a production animator and shot designer. He then moved on to team lead at Hydrogen Whiskey before leaving to manage and coordinate all outsourced cinematics at THQ. His most notable games are Darksiders 1 & 2, Saints Row 3, Space Marine, WWE SmackDown vs. Raw and Dawn of War 2.


Storytelling is an art form as old as mankind, but in the relatively recent history of videogames, it’s an area that has provoked much debate. From questioning how important narrative is to the gameplay experience, to insisting that games won’t truly be »art« until they better integrate stories, players and critics are increasingly vociferous in defense of their particular points of view.

At a glance, it’s easy to see games have become more complex in how they handle storytelling since the days of blasting asteroids in arcades. Of course, we’re still typically out to save the world from impending catastrophe or recover a lost love, but the level of detail depicting the process makes the journey from A to Z a much more cinematic and emotional experience than early developers would have dared to dream. The most obvious catalyst for that shift is simply the evolution of technology. With the ability to create worlds that look and feel more like the real thing, many developers have naturally sought to immerse users by closing the gap between what they experience when they press play and what they see in movies and the world around them.

Barriers of Believability

Crytek has long been known as a frontrunner in this race towards reality. And while we do strive to squeeze every ounce of power from the technology at our disposal, we’ve always been aware that beauty is only skin deep. For us, making a game look great is pointless if the visuals aren’t creating a deeper emotional connection between the player and what they’re playing.

With Ryse, we’ve once again invested a great deal of energy into making a world with as few barriers to believability as possible. It’s our hope that the high fidelity of everything in the game will help clear the way for players to feel closer to the characters, environments and story they’re interacting with. In terms of the technology used to drive this process, Ryse represents another step forward for Crytek. Put simply, you’ll see more tech at work in a single eye of Ryse’s hero, Marius, than you would in the entire move-set of your average current-gen console character. We’ve also embraced the potential for more cinematic experiences by looking to the processes used in blockbuster movies.

Filling the roles

Our casting for Ryse was done through an agency called »Side«. They have a lot of experience, and we’d enjoyed a successful relationship with them in the past. At the outset, we provided them with detailed briefs of every character we were looking to cast. Those briefs included information on things like how we wanted the character to sound, body type, and their in-game persona – whether they were stoic, weak and so on. Side then worked on filling those roles and sent us bios and videos of potential actors. We’d then choose a few actors to audition in person over in London. Finally, we’d head back to our studio in Frankfurt and share video footage of those auditions with a wider group before making final decisions.

Once we’d made our final casting, the process of capturing the actors’ likeness for Ryse began. We did full body and face scans, taking 175 different facial expressions for the head scans to allow for incredibly detailed animation. It’s just one more way of attempting to push the medium forward and add depth for players. To this end, we also teamed up with The Imaginarium – the London-based studio fronted by Andy Serkis (who played Gollum in The Lord of the Rings). Along with Jonathan Cavendish, Andy set up the studio to really push the art of performance capture for both films and games. Ryse became the first game they’d worked on. By the time we set out to work with them, all the actors had been cast. The key members of the team would meet the night before filming and run through in detail what we were looking to shoot the following day, which actors would be involved and how to most efficiently schedule the running order.

Working with the actors on the performance capture stage at Imaginarium.

The Shooting Workflow

The schedule for the filming wasn’t necessarily connected to the flow of the story, but rather optimized around the actors, so that people weren’t having long waits in the studio between scenes and could remain in character. In the morning, actors would arrive at 8am and start getting suited up, and we’d begin filming at 10am. The atmosphere was very much that of a film set, and everything was very slick. A typical shoot involved around 50 people, from a director to facial cam specialists and runners. We’d usually do around two to three takes of each scene, and we’d spend an average of 10 to 12 hours in the studio each day. We were at the studio for 18 days doing the p-cap side of things, and we filmed 49 scenes – 42 of which are in the game. At the end of each week we’d sit down and review the footage along with the guys from Imaginarium to select the final cuts.

When it came time to integrate footage into the game, one major advantage was the fact that Imaginarium don’t farm their motion editing out – they do it all internally. This made our lives much easier after filming because what we were working with was so smooth. Imaginarium would take the raw data from filming and bring it into Blade and then MotionBuilder. At Crytek, we’d then take the MotionBuilder data into Maya before finally integrating it into CryENGINE.

Christopher Evans is the Art Technical Director at Crytek.

Christopher is responsible for art and animation technology and pipeline. Chris was Lead Technical Artist on Crysis, where he was responsible for »bridging the gap« between R&D and the game team, rigging, firefighting, and pushing art technology. Chris’ passion has always been for characters and rigging, a rabbit hole he followed all the way to Industrial Light & Magic, where he was a Creature Technical Director on multiple films, including James Cameron’s Avatar. Rejoining Crytek as Art Technical Director of the Frankfurt studio, he focused on Crysis 2, while spearheading »CINEBOX«, a small internal team focused on re-purposing the CryENGINE for film previs and virtual production. He is currently taking a holiday from guns and aliens to work on Crytek’s upcoming character-focused title Ryse.

The Bigger Picture

Working with a team of Imaginarium’s pedigree has really helped to refine the way acting is integrated in Ryse. One encouraging thing we’re already experiencing with Ryse is that it’s being singled out as a game that really »looks next-gen«. For us, that shows that the time and effort we’re putting in is shining through. What we’re really looking forward to, though, is when people start playing the game and realize it’s more than just a pretty face; that all the high-end technology is really only there to help place you in Marius’ shoes as convincingly as possible.

From that perspective, the idea of the game being cinematic goes beyond just using some technology shared with the film industry. The whole process of settling on Ryse as a game we wanted to make started by singling out Rome as fertile ground for storytelling. From there we focused on coming up with the specific narrative – always keeping it front and center during development. With that framework in place, everything else in the game can flow from the story.

At the same time, part of the reason that storytelling and games have always had a slightly strained relationship is that games are a nonlinear medium – but it is the freedom to explore in their own unique ways that attracts players to games in the first place. We’ve always understood that at Crytek, and our games all offer evidence that we want the player to be able to make their own mark on the game world. Ryse continues that tradition, and we’ve worked very hard to make sure we strike that sweet spot between storytelling, visuals and engrossing gameplay. By starting with the story and building from there, we’ve tried to make Ryse a cinematic experience that will really transport players to the battlefield and inspire them to fight as if their destiny depended on it. Peter Gornstein, Mike Kelleher

Preparation for the shoot means getting marked and suited up.

John Hopkins, the actor impersonating Marius, is ready for battle.

Character Tech

From day one, a pillar of Ryse was to emphasize story and emotion; this was to be our greatest challenge. It manifested itself in what we called »six feet to six inches«, meaning that we would focus on owning that distance: in a cast of 14 story characters, they all needed to be extremely high fidelity and hold up to incredible scrutiny. Crysis, Crytek’s flagship game franchise, puts you in the first-person boots of a masked protagonist who fights mostly masked enemies; the player never sees his face. Ryse posed a real challenge for the technical art and animation team: for the first time, we would have an unmasked protagonist and a third-person game camera where the player could see him at all times. The actors’ performance had to come across on the faces, at extremely high fidelity, and across the entire span of the game. We not only wanted to do this, but also to do it in the most efficient and streamlined way possible.

Innovate and Outsource?

The plan was to have two people on staff responsible for rigging and, for the first time, to build a pipeline to outsource rigging tasks to a third party vendor. We had never outsourced rigging at Crytek. We are a technology company, often »building the plane while flying it«: we not only needed to build a pipeline that allowed us to send rigging work out, we needed to break new ground and seriously increase fidelity at the same time. The two goals are usually at odds; this kind of thing does not happen.

Lady Luck

For months I contacted different outsourcing vendors that offered rigging services, and it was abysmal. I couldn’t find anyone I felt comfortable working with; everyone was trying to pass off shoddy work while doing the bare minimum and promising the world. Around the same time, I was desperately looking for a senior rigger and coming up empty. I spent nights and weekends on forums, we hired headhunters, we conducted an incredible number of interviews. No one seemed up to the challenge.

It was then that I stumbled upon the reel of Riham Toulan, a 24-year-old character rigger from Egypt. Her interview went well; she was extremely talented and had an infectious enthusiasm that made her a force to be reckoned with. I felt this would make up for any lack of experience. Ryse would be not only Riham’s first game, but her first professional project. The other technical artists also believed Riham could pull it off, but now I needed to convince the Ryse management.


A final check of the facial markers.

Battling on the performance capture stage.

It was asking a lot for them to trust me that a junior would be able to own much of the rigging on Ryse. There was a lot of red tape that needed cutting, and in the end it was clear that if things didn’t work out, this was on me.

On the same site where I found Riham’s reel, I stumbled upon the work of Vladimir Mastilovic, a facial rigger who had left his lead position at facial giant Imagemetrics to set up a studio of his own, 3Lateral, in his home country of Serbia. I contacted Vlad and we hit it off immediately. Vlad and his team are extremely competent and really passionate about characters and rigging. In choosing to work with 3Lateral we would have a partnership where both parties were invested in making Ryse a groundbreaking title. Vlad had been working on a feature-film, scan-based pipeline for quite some time, and he had not only people on staff with rigging expertise, but a team of technical sculptors for scan alignment and processing. He understood that we had just switched to Maya and were rewriting the character pipeline from scratch; he came out to Frankfurt, met the team, and we discussed our plans. It was now 10 months from the ship date, and we were looking to do something more ambitious than many games do in many years of development. Vlad saw this as a challenge but also an amazing opportunity: Ryse would be the largest project 3Lateral had ever embarked on, but also the perfect testbed for finding out what facial fidelity could be achieved on next gen hardware.

Challenge Accepted

Unlike the other departments, whose most experienced guys continued on making Crysis 3, I pulled all but one of the technical artists from the Crysis team and put them on Ryse. We knew we had our work cut out for us; it wouldn’t be possible without some muscle. But this reorganization was much more difficult than it sounds, because Crysis 3 marked the company’s continuation of a series that had been our focus and identity for so long. My decision to pull talented guys off the Crysis IP was challenged by many, and in the end required final sign-off from the company’s founders.

The virtual camera in action on stage.

Over the next months, the tech art team worked out an ambitious plan that required us to completely rewrite the character and animation pipeline. For quite some time we had wanted to switch to Maya for characters and animations, but never had the chance. Harald Zlattinger, our Senior Technical Animator, wanted to use Ryse as a chance to rebuild our animation pipeline, but changing the core software meant wiping the slate clean: nothing from 3DS Max made it over; we replaced everything. We had wanted to switch to Maya for many years, but the timing was never right. On Crysis 2, the animation department organized what can only be described as a »mutiny«, they were so fed up with animating in 3D Studio Max. Maya was easier to use from both an animation and a technical standpoint, and it is much easier to write tools and software for. It is also the industry standard for character rigging, something we would be doing a lot of on Ryse.

We focused on building a modular system where all parts of a character were generated with Python code (the sketch below gives the general idea). The system encapsulated each character at every level, from things like arms and legs down to individual rigging pieces. This metadata or »markup« allows us to use one set of tools for all characters and to better manage large data sets. Formerly being a 3DS Max studio, we christened this system »CryPed«, a wordplay on 3DS Max’s often cursed »Biped« character framework.

At the same time the Frankfurt team was focusing on rebooting the pipeline and the rigging framework, Vlad and the team at 3Lateral were working in parallel on faces. The goal was to have one rig for both cutscenes and the game: when you play Ryse, you are in fact »playing the cutscene«. We weren’t really sure how powerful the Xbox One would be, but I felt safe that if we shot for the moon and failed, we would at least have made it out of the atmosphere.
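Crytek’s actual CryPed code is not public; the following is a hypothetical Python sketch of the kind of modular, metadata-driven rig generation described above. All class and function names are invented for illustration:

```python
# Hypothetical sketch (not Crytek's CryPed): each body part is a
# component that knows how to build its own joints and controls, and a
# character is just a list of components plus generated "markup"
# metadata, so one toolset can drive every character.
class RigComponent:
    def __init__(self, name, side="C"):
        self.name, self.side = name, side

    def build(self):
        # In production this would issue Maya commands (maya.cmds)
        # to create joints, controllers and constraints.
        print(f"building {self.side}_{self.name}")

class Limb(RigComponent):
    def __init__(self, name, side, joints):
        super().__init__(name, side)
        self.joints = joints  # e.g. ["shoulder", "elbow", "wrist"]

def build_character(components):
    markup = {}  # metadata describing what was built, and by what
    for comp in components:
        comp.build()
        markup[f"{comp.side}_{comp.name}"] = comp.__class__.__name__
    return markup

marius = [
    Limb("arm", "L", ["shoulder", "elbow", "wrist"]),
    Limb("arm", "R", ["shoulder", "elbow", "wrist"]),
    RigComponent("spine"),
]
print(build_character(marius))
```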

The motion capture work was done by Imaginarium, a company founded by Andy Serkis, who is most famous for his role as Gollum in the Lord of the Rings movies.

What is Rigging? Rigging means putting the bones and muscles into the characters that define the movement. You are basically building an extremely high fidelity puppet that will receive motion capture data. Once it receives that data, animators are able to significantly edit the rig using »controllers«.




Rachel McDowall sits patiently as she is coached through the five-hour FACS scan session.

Next-Next-Gen? Breakdown of character fidelity on Xbox One (in-game):
Polygons: 150,000
Skeleton joints: >500
Facial shapes: 230
Simulated: wrinkles, muscles, fat
Python code to generate rig: 8k lines
Maya rig network: 40,000 nodes

We would leverage 3Lateral’s expertise and manpower to create feature-film quality facial rigs for all Ryse characters, something unthinkable on previous generation hardware. 3Lateral was invaluable; they were completely immersed at every step of the way. It’s very rare to have a vendor suffering through so much R&D with you in pre-production. Many people doubted we would be able to pull it off, and even more doubted the Xbox could handle it, but in January 2013, as we were wrapping up our first facial prototype, it was pretty clear that we had reached our goal. We had a Marius talking and emoting in-game that looked phenomenal – a rig built to the fidelity I was used to seeing in film, but running at 30 frames per second on prototype gaming hardware.

Capturing an Actor’s Performance

The performances you see in the game are the final product of a very lengthy pipeline that starts with each actor’s face getting three-dimensionally scanned. This is not your normal scan session, but a grueling, five-hour session where the actor is asked to contort their face in every possible way. Using the work done by Paul Ekman in 1978 on the Facial Action Coding System, or FACS, we scan the actor going through a routine of over 100 facial »poses«, which we capture in high definition. This data allows the team to break the face down into all the muscular components that make up an actor’s expression. It is then used to replicate the actor’s face in any pose, with extreme precision (a minimal sketch of how such shapes combine follows at the end of this section).

We made the choice not to pursue something like L.A. Noire’s tech, where you play back a captured video and mesh of a face; it was important that our faces fit into the world our artists create. Using the same artistic process to create Marius’ face as his armor makes the character feel grounded and consistent in the world we are creating. We have some of the best character artists in the industry and saw no reason to slap photo textures on and call it a day. That’s not to say Ryse isn’t photorealistic, but it’s more »hyper-realistic«: it’s the world, but on steroids.

Here, John Hopkins rallies his troops on the motion capture stage, as Marius Titus mirrors his performance in the game.

On the motion capture stage, the actor gets special face makeup and their face is captured with a head mounted camera, or HMC. The HMC video is synced to the body motion capture animation and the audio. The facial video drives the facial rigs, while the body motion capture drives the body rig. The goal is to preserve as much of the actor’s performance as possible. Peter directs the actors on stage and judges their performances, and when he selects a take to be the final take used in the game, he expects that the emotion he saw on the actor’s face comes across in the game. Much of our process is focused on remaining true to the actor’s original performance.

This dedication to retaining the subtle quirks of an actor’s performance led us to partnering with Imaginarium, Andy Serkis’ motion capture studio. We met them early on in the project when the studio was just being founded, and the discussions eventually led to Crytek being the first project shot on their stages in Ealing, London. Andy had amassed a team of industry performance capture veterans from films like Kong, Avatar, Tintin, and Planet of the Apes.
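Returning to the FACS-derived shapes mentioned above: a facial rig of this style ultimately poses the face as a weighted combination of per-shape vertex offsets on top of the neutral scan. A minimal sketch with toy data (not Crytek’s pipeline):

```python
import numpy as np

def pose_face(neutral, shape_deltas, weights):
    """neutral: (V, 3) vertices; shape_deltas: {name: (V, 3) offsets
    from neutral}; weights: {name: 0..1 activation per shape}. The
    facial solver's job is, in essence, to animate these weights."""
    result = neutral.copy()
    for name, w in weights.items():
        result += w * shape_deltas[name]
    return result

neutral = np.zeros((4, 3))                     # toy 4-vertex "face"
deltas = {"brow_raise": np.ones((4, 3)) * 0.1} # one FACS-style shape
print(pose_face(neutral, deltas, {"brow_raise": 0.5}))
```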


These are »neutral« scans of John Hopkins, the actor who plays Marius Titus. All actors were captured in high detail.

Crytek fabricated virtual camera hardware that allowed the director to »film« in the game engine.

The guys were deeply invested in bringing the emotion captured on the stage into a high fidelity next generation title. Once again, we had lucked out in finding a partner, not just a vendor. Like Vlad and his guys at 3Lateral, the team at Imaginarium saw Ryse as a chance to be part of something special: a new IP on powerful hardware whose main focus was character and emotion.

On stage at Imaginarium, Peter directed as many as 12 actors simultaneously, recording face, body, and audio. Crytek created a virtual camera that allowed Peter to look into the game engine and see the characters in realtime. He could change lenses, aperture and focus, which were all recorded along with the actors’ performances. Once Peter has selected the takes he likes, and long after the actors have gone home, he can load them up in CryENGINE and »shoot cameras« on the data. He enters the stage alone and can look through the camera and see the actors and the scene, allowing him to refine the camera motion or try new things. This was used for many of the execution cameras: though Peter was not at the gameplay shoot, he could load up the executions and record camera motions on them. This gives Ryse a gritty, cinematic feel that you don’t get in many other games.

Once shooting is wrapped and the data begins to come back to Crytek, we send the headcam video to Cubic Motion, who have a custom computer algorithm that reads the video, identifies the markers in the face makeup and generates initial animation on the rig. Their team of facial animators then takes this and polishes the animation based on direct comparison to the original head mounted video and feedback from Crytek. European studios have always had a hard time finding facial animators. While some studios have 20+ facial animators on staff, we have two – across multiple projects. Working with Cubic Motion gives us access not only to their facial solver that generates animation from headcam video, but also to more than 20 facial animators. When the facial animation comes back, our cinematics team integrates it into the cutscenes, where it is lit and polished. In the end we hope to replicate the actor’s performance Peter directed on stage as closely as possible.

Drinking From the Fire Hose

Ryse is a new team, developing a new IP, in a new genre, on new hardware, with an overly aggressive schedule and an immovable deadline. Our goal was to support the story by immersing the player in an emotional gameplay experience of the highest fidelity. To achieve that, we rewrote our pipeline to create much more complex assets with fewer resources, and created new tools to deal with the mountains of data associated with performance capture. Right now is the most difficult part of the project – we’re pushing the baby out – and it’s a time when it is often difficult to take a fresh look and evaluate all you have done. We were able to gain this perspective at E3 in June, where Ryse was hailed by many as the best looking game at the show. It makes all the hard work feel justified – now we just need to finish. Christopher Evans



Case Study Making Games – GDC 2014

BUILDING THE SCOPE OF

PLANETSIDE 2

Each of the four maps in Planetside 2 measures 8x8 kilometers, so optimizing gameplay and performance is an incredibly difficult task. Here's how the SOE environment team pulled it off and created one of the biggest PvP open worlds to date. Devin Lafontaine is a Senior Artist at Sony Online Entertainment.

As a Senior Artist for Planetside 2, Devin is largely responsible for the terrain, natural environments and lighting. He has worked in the video game industry for 16 years, mostly focusing on online games.

Corey Navage is a Lead Level Designer at Sony Online Entertainment.

As the Lead Level Designer for Planetside 2, Corey Navage manages the Level Design team creating all the continents and facilities for the game. While Planetside 2 is the first MMO he has tackled, Corey has 15 years of experience in game design on PC and console, leading teams on first-person shooters, flying and vehicle combat games.


As we were building Planetside 2, we created a handful of theories to guide our design decisions. In most cases, these theories served us well and led to a fun, successful game and experience for players. There were also a few instances where these gameplay theories broke down, or where we implemented them too stringently. Simultaneously, we were creating a game at an unprecedented scale. We wanted literally thousands of players playing at the same time in the same zone, and we wanted the game to run on medium-strength machines so we could appeal to as many players as possible. Additionally, we were unwilling to sacrifice the visual and gameplay quality of a modern-day shooter to achieve our goals. Writing this now, it seems crazy that we even attempted it, but in the end we pulled it off! Since launch, we've also found a handful of ways to make the game run even better. In addition to being a top-quality FPS, Planetside 2 is also an MMO. We update the game frequently, averaging one patch every two weeks, and we keep discovering ways to

improve and implement changes. In several instances, we have been able to address our gameplay concerns and get better performance at the same time.

Other Players are the Resistance
Planetside 2 is multiplayer only, so all players also form the resistance for everyone else. We kept this aspect in mind when creating our bases. While we eventually got it right, at first we took it too far.
Gameplay
Initially we designed open layouts at our bases. Capture points were spread very far apart with little cover between them, and often major roads would run right through the heart of a base. We gave the defenders almost no assistance in terms of walls or shields or other common defensive structures (Figure 1). We expected defenders to be at these bases in force when the attackers showed up. Defenders would place mines along the roads, every turret would be manned, and there would be dozens of Heavy Assault infantrymen firing rockets out of every window. What we saw instead was total vehicle domination at base fights. No one played passive defense in the live game (manning a turret is boring when there is no one to shoot at). When the attackers arrived at a base they found it empty. Capture points and spawn rooms were quickly secured. As the defending infantry finally


began to respond, they spawned into a hopeless situation, with their spawn room surrounded by tanks and the capture process halfway over. This poor experience made players less likely to defend in the future and created a downward spiral. Large armies began to actively avoid each other; players were circling around the map and capturing empty bases. To solve this problem, we implemented a handful of rules at all of our bases. To reduce spawn camping we isolated the spawn rooms and added a second spawn spot. We also moved the vehicle pads closer to the spawn points to give defenders easier access to their vehicles. To combat vehicle dominance, we made sure that the capture points at each base are inaccessible to vehicles, using walls, cliffs and other barriers to keep the tanks out. For the attackers we added defendable spots to park Sunderer vehicles, providing more reliable spawns. Finally, we added plenty of cover around the base entrances to allow infantry to push out and vehicles to pull in close with decent cover (Figure 2). These changes helped create clear infantry-only, mixed-forces, and vehicle-heavy combat areas. Defenders coming into the bases are no longer surrounded by hordes of vehicles, allowing them to push out and participate. Mixed-forces combat regions around the entrances to the bases are where infantry and vehicles both participate equally, while the regions between bases are dominated by vehicle play (Figure 3). We now have many more players willing to spawn into defense situations because they know they have a fighting chance to beat back the attackers and re-secure the base.
Performance
Our early wide-open designs used mostly man-made objects as cover and decoration. We did this to create a clear aesthetic difference between the bases and the wilderness areas between bases. This also led to a bloated unique object count. Since our engine uses instancing, unique objects are much more costly than repeated uses

Figure 1 Original base layouts allowed vehicles unfettered access to the entire base. Spawn points (magenta) were not well sheltered, and were easily camped by attackers who had already overtaken all the control points (blue). Areas throughout the base had no cover or protection, resulting in killing fields for infantry (green).

Figure 2 Newer layouts created infantry exclusive areas by blocking access to vehicles (red). Spawn points (magenta) are sheltered, and not as easily overrun. Extra cover now allows mixed engagement areas of infantry and tanks (yellow).

Figure 3 This picture shows vehicle deaths at Saerro Listening Post on the continent Esamir. The data is from the full week before and the full week after the Esamir map update. We created areas within the base with no vehicle activity, seen in the two small voids in the after shot. We also see a dramatic increase in vehicle activity in the region between bases, shown in the large window.




Figure 4 The original base layout (top) compared to the new base layout (bottom). Gameplay sensitive buildings (green) are surrounded by many more objects already used nearby in the environment.

of an already placed object. All the rocks and trees surrounding the base as well as all the crates and fences within the base had to be loaded and rendered, which negatively impacted the game's performance. By walling up our bases, the need for a clear aesthetic difference is reduced. We now use environmental objects like rocks and trees as cover and decoration inside the bases, reducing our overall unique object count. With fewer unique objects we can have more individual objects and better performance (Figure 4). The walls themselves also help with performance. They provide excellent occlusion from either side, reducing the number of objects and players the client machines have to render and track. These two changes led to better performance and more visually interesting bases (Figure 5). These approaches also extended to the wilderness areas outside of bases. In our new continent, Hossin, we were able to address our object usage right from the start and leverage all the lessons we had learned from the previous three continents and their optimizations. Our first two continents, Indar and Amerish, had object libraries of 72 and 52 meshes respectively. Players could conceivably see and render all of those assets, and efforts to mitigate what they can or cannot see are only marginally effective given the open nature of Planetside 2. Knowing this, we carefully examined our library for Hossin and pruned it down to only 30 unique meshes. Rocks and plants that added only subtle variations to the environment were thrown out. Entire plant species and varieties were abandoned to keep the quantity as low as possible. Our artists were then able to place objects with a little more freedom, without having to keep such careful track of which objects are uniquely used in a given area. This let the artists and designers focus more on making the game look and play even better (see Figure 6 and Figure 7).
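As a rough illustration of why repeats are cheap in an instancing engine while every unique mesh adds cost, here is a sketch of the batching step with hypothetical types; the actual renderer is of course more involved.

```cpp
#include <unordered_map>
#include <vector>

struct Transform { float m[16]; };
struct Mesh;   // opaque engine type (hypothetical)

struct Placement { const Mesh* mesh; Transform xform; };

// Group visible placements by mesh: each unique mesh becomes one
// instanced draw call, so ten copies of the same rock add only a
// transform to an existing batch, while every new unique mesh adds
// a whole extra draw call (and its own memory).
std::unordered_map<const Mesh*, std::vector<Transform>>
BuildInstanceBatches(const std::vector<Placement>& visible)
{
    std::unordered_map<const Mesh*, std::vector<Transform>> batches;
    for (const Placement& p : visible)
        batches[p.mesh].push_back(p.xform);
    return batches;   // renderer issues one instanced draw per entry
}
```

The walls mentioned above then reduce how many of these batches survive occlusion culling in the first place.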

Creating Geographical Variety
Planetside 2 offers fierce infantry FPS battles, intense vehicle gameplay, and mixed-forces battles in a persistent world. This makes Planetside 2 a unique experience, but with our initial map layouts, we didn't provide enough geographical variety to allow these three gameplay styles to fully emerge.

Figure 5 Seen from the player's perspective, this new base is much prettier and offers many more gameplay opportunities for both attacking and defending players. All of this with better performance makes everyone happy.



Figure 6 There are 72 unique environment objects on Indar, many of which look similar – compared to only 30 unique Hossin environment objects, each serving a specific visual purpose.

Figure 7 Fewer unique assets freed us to place significantly more of the ones we had left, which helped us achieve the dense vegetation in Hossin. The entire continent, for example, has only three leafy plant assets (one visible in the foreground).

Gameplay
To cover the huge 8x8 kilometer map, we spread the bases out in a roughly uniform pattern. Each base was 600-800 meters away from its neighbors. We thought that since the distance was so long, players would travel between bases in vehicles, and we would have nice vehicle battles in the wilds and infantry/mixed-forces battles at the bases (Figure 8). When the game went live, we saw very quickly that players would often travel between bases on foot. Large infantry armies would move across mostly open fields. They died by the millions to aircraft and snipers. This further encouraged the herd mentality, as huge pockets of players would travel together for their own individual safety. Friendly tanks would join these infantrymen, creating massive armies that would steamroll over everything in their path. Enemy players learned to avoid these huge masses, so we soon had giant armies trying to avoid each other rather than engage in the epic fights that make Planetside 2 unique.

Figure 8 Original map layouts favored evenly spaced bases, with large wilderness areas separating them. Notably, we envisioned large skirmish areas around the large facilities, so we cleared space for them.



We noticed that this was less of a problem in one particular area on Indar – the Highlands, where bases were a little closer to the 600m end of the distance spectrum (Figure 9). Of course, we investigated. It turned out that by having clusters of bases (300-400 meters apart) separated by much larger gaps (1000-1200m), we got the more defined gameplay types we wanted. Within the clusters, players are able to move from base to base more successfully, reducing the need to bunch up, and once it's time to travel across the large gaps, players are more likely to get vehicles for those extremely long trips. We get better infantry and mixed-forces battles within the clumps, and better vehicle combat between the clumps.

Figure 9 Infantry deaths in the Indar Highlands area. We see very few infantry deaths between the bases.

Performance
When players gather in huge bunches, everyone's frame rate suffers. Somewhat counterintuitively, moving the bases closer together helped to spread players out. We also saw a reduced number of vehicles within the base clumps. This also helped performance, as vehicles have a bigger impact on frame rates than infantrymen.

Fights Cover Every Square Inch
Planetside 2 uses 8x8 kilometer maps. They are huge. We wanted players to enjoy all of that space, not just isolated pockets of play in a sea of emptiness, and we wanted all of it to be in play all of the time.
Gameplay
To achieve our »fights cover every square inch« goal, we implemented the »Influence« system, which adjusted the capture time of the base being attacked according to the Influence the attackers had over it. Influence was defined by how many nearby bases the attackers owned. Attackers could travel deep into enemy territory to capture a base, at the cost of a longer capture time. The defenders had to stay vigilant and guard everything, rather than focus on a front line. We were sure that this would spread players out across the whole map, keeping every base in play at all times.

Figure 10 New Esamir map layout using the Lattice system favors clusters of bases, helping infantry players move freely between them before joining up with convoys moving on to the next front.



It turns out that we had created map layouts that encouraged players to avoid each other, and then implemented the Influence system that allowed them to do so. The Influence system turned out to be way too permissive. There was no way to predict where attacks might come next. Players simply could not defend effectively, so everyone went on offense. Not only did this further contribute to the large armies massing together, it also allowed these large armies to avoid each other. There would always be another base relatively nearby that armies could move to if the enemy army showed up, so players adopted the mentality: »Why fight when there is land to take that's resistance-free?« We solved this by removing the Influence system completely. We replaced it with an idea pulled from the original Planetside – the »Lattice« system (Figure 10). The Lattice only allows players to attack a base they have a lattice connection to. This creates a clear front line and provides predictability for defenders. Once we added the Lattice, we finally started seeing the defensive behavior we wanted all along. The Lattice also made it tough for armies to change targets, since bases have Lattice connections to only a few other bases. These changes led to medium-sized armies clashing into each other frequently. Finally, we're seeing the ideal Planetside 2 experience.
Performance
Since there is only one target to move towards, the Lattice system tends to bunch up attackers (whereas before we used to see some players break off towards different targets). However, defenders can now accurately predict where attackers will go, making defense a viable option again. Many defenders choose to fall back and prepare the next target along the attacker's path with turrets, mines, and other defenses. Defenders are pulling themselves out of the current fight to help prepare for the next one, thus reducing the total number of players in a given area. The Lattice system allows us to get the best gameplay experience much more frequently, and we've yet to see any performance reduction from its implementation.
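The Lattice rule itself is simple to state in code. A minimal sketch with hypothetical types – a base is attackable only via a lattice link from territory the attacking faction already owns:

```cpp
#include <vector>

using BaseId = int;
enum class Faction { TR, NC, VS };

struct Base {
    Faction owner;
    std::vector<BaseId> latticeLinks;   // only a few connections per base
};

// Under the Lattice system, attacks are only legal along a link from
// friendly territory, which is what creates a predictable front line.
bool CanAttack(const std::vector<Base>& bases, BaseId target, Faction attacker)
{
    for (BaseId neighbor : bases[target].latticeLinks)
        if (bases[neighbor].owner == attacker)
            return true;
    return false;   // no owned neighbor: no Influence-era deep strikes
}
```

The Influence system, by contrast, effectively let this check pass almost everywhere, which is exactly what allowed armies to dodge each other.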

Planetside 2 Mobile Uplink
Players who want to chat with their teammates in the game even when they are not logged in themselves can use the Mobile Uplink app for Android and iOS. Planetside 2 offers a persistent world, so the live dynamic updates of territory and facility control changes allow for a deeper experience. Additionally, players can check the leaderboards and all weapon, vehicle and class statistics. You can check out the app at
www.makinggames.de/PS2uplink-itunes
www.makinggames.de/PS2uplink-android

Conclusions
Level design is always a delicate balancing act of gameplay, performance and beauty. It's difficult to achieve the proper balance even in the best situations. Planetside 2's massive scale and player-directed actions made it even more difficult to predict the best ways to achieve this balance. Fortunately, Planetside 2 is a live game. We get to make changes based on real data and player feedback. We now have prettier bases that perform better and play better; clear attack lanes that produce the huge battles unique to Planetside 2; and well-defined areas for infantry, mixed-forces, and vehicle gameplay. We will continue to make revisions that not only improve upon the unique fun available in Planetside 2, but also increase the framerate for everyone playing. Best of all, we have a growing population of diverse and dedicated players who continue to help define the future of Planetside 2. Devin Lafontaine, Corey Navage



Best Practice Making Games – GDC 2014

CREATIVE MOBILE PORTING
What's behind the black art of bringing games to multiple groups of mobile gamers? Matthias Hellmund explains learnings and best practices acquired over a decade of projects. Matthias Hellmund is Head of Mobile Development at exozet.

Matthias joined the Berlin-based exozet back in 2003, building up their mobile teams for games and applications across a broad range of mobile platforms. Titles include popular board game adaptations such as »Catan« and »Carcassonne«. With his twelve+ years of experience in mobile development, Matthias also advises exozet’s agency clients in the fields of mobile technologies and creative mobile porting. His background is in Media and Computer Science with stopovers in Furtwangen (Germany), Tampere (Finland) and San Francisco. @exozet_games

»Hello developer, so when is your game available for ›PickYourPlatform‹?« That is a question you hear quite a lot – at least if users love your game and want to play it in their favorite environment. In this article I would like to illustrate important aspects to consider when approaching a »creative« porting project. We are not only talking about iOS vs. Android but rather about a combination of:
• OS – operating system, API versions
• Ecosystem – distribution channel, channel-specific features (billing, notifications)
• Screen – resolution, size, density
• Networks – IP transport from offline via 3G to WIFI, social networks
• Locale – language, culture


Common codebase
One way to tackle this challenge is to use a common codebase. Unity is the obvious candidate that comes to a developer's mind, but other environments where large portions of the source code can be shared between platforms work as well. For instance, C++ can be seen as a good common denominator, which is why our mobile versions of »Settlers of Catan« for iOS and Android share approximately 90% of the C++ code between those platforms. If you fancy C# without Unity, Xamarin is your choice, whereas Apportable is the best option when you want to stay in Objective-C. For ActionScript, Adobe Air provides a deployment path.
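As a hedged illustration of what such code sharing can look like in practice (not exozet's actual code): the game core stays platform-agnostic C++ behind a small interface, and each platform supplies a thin implementation.

```cpp
// Shared core: no platform headers, compiled for iOS and Android alike.
class IPlatform {
public:
    virtual ~IPlatform() = default;
    virtual double Now() const = 0;                    // monotonic seconds
    virtual void   ShowBillingDialog(const char* sku) = 0;
};

class Game {
public:
    explicit Game(IPlatform& platform) : platform_(platform) {}
    void Update() { lastTick_ = platform_.Now(); /* shared game logic */ }
private:
    IPlatform& platform_;
    double lastTick_ = 0.0;
};

// Per-platform glue (the remaining ~10%): one such file per target.
// On iOS this might wrap CACurrentMediaTime and StoreKit; on Android,
// JNI calls into the platform's billing library.
```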

Code and feature porting
Depending on the type of source project, different techniques can be applied. Let's take a look back at the world of Java ME games: Many large studios divided their production process into two phases. One team created the »reference builds«, usually just a handful of completed versions of the same game for selected screen resolutions, touch/non-touch and the like. Another team had the mostly painful job of creating hundreds and thousands of different »portings« originating from those reference builds, usually with much by-hand tweaking, eventually resulting in lots of disposable builds. In creative porting projects for our clients (for example from an existing Flash-based game to iOS/Android or from iOS to Android) there's often the requirement to achieve a maintainable porting codebase, which is structurally close to the master codebase. Especially with Free2Play titles that receive continuous feature iterations on the »source« codebase, it's critical to empower the porting team so that new features can also be released on the ported versions with little or – even better – no delay.


A special porting of »Catan« for Amazon Kindle e-ink devices, derived from the Java ME feature phone codebase with fully reworked pixel-art assets. Below the game board, a book-like representation of the move history provides a view consistent with the Kindle environment.




Haribo App utilizing a Unity3D view with native views on top (implemented in Android UI and iOS Cocoa Touch).

Source code porting
Source porting is another way, which we pursue in various projects. While it requires considerable effort to essentially translate the full source code from one programming language into another (for example from Objective-C to C++, or from ActionScript 3.0 to C++), the native target programming environment gives the project team maximum direct control and optimization possibilities. In order to ease the translation process itself, we utilize a collection of in-house tools consisting of language parsers, reference call modifications (custom libraries emulating language features such as ARC, lambda expressions etc.), syntax translation/pre-formatting and also a set of custom merge tools. By using this semi-automatic tool chain, code changes within the »source« codebase can be reflected in the translated code in a relatively short timeframe while preserving any additional optimizations and previously implemented changes in the »ported« source code. At the same time, the porting engineers can keep track of the actual changes in both versions, the »source« code as well as the »ported« code. Having both codebases structurally similar facilitates rapid merges of new features, tweaks and balancing. In some cases, complete frameworks or libraries can be used in platform-specific variants in order to optimize performance on each platform while sharing the same interfaces. For example, during the project »Forge of Empires« by InnoGames, the popular ActionScript 3.0 framework Robotlegs was ported to a custom C++ variant allowing for swift code merges between the ActionScript web code and the C++ mobile code. To be clear, controlling the »binary« outcome of a porting achieved through Unity, Xamarin or Apportable is absolutely possible as well. However, tweaking needs to be done in the original programming language (C#, Objective-C), and if those tweaks reach their limits you need to work directly with the tool providers, modifying the way a binary is generated or replacing parts with custom C++/Objective-C/Java code for the target environment.

UI challenges
Besides the technical porting of the program flow, we put much thought into the way we adapt the overall UI experience from one platform to another. Typical UI areas include:
• Navigation flow – including Android hardware buttons, split views vs. serial views
• Mouse vs. touch UI – specifying active click areas, gestures, »mouse overs«
• Resolution groups – catering for Full HD smartphones down to the original iPhone resolution and various tablet flavors
• Texture management – smart grouping of related textures into few atlases to minimize the number of texture switches during draw calls; dynamically load/unload/generate texture atlases (see the sketch after this list)
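As an illustration of the texture-management point, here is a minimal sketch of an atlas lookup table with hypothetical names; sorting draws by atlas id is what actually minimizes the texture switches.

```cpp
#include <string>
#include <unordered_map>

struct UVRect { float u0, v0, u1, v1; };   // normalized atlas coordinates

struct AtlasEntry {
    int    atlasId;   // which atlas texture the sprite lives in
    UVRect uv;        // where inside that atlas
};

class TextureAtlasMap {
public:
    void Register(const std::string& sprite, AtlasEntry e) { entries_[sprite] = e; }

    // Sorting draw calls by atlasId keeps texture switches to a minimum:
    // everything in the same atlas renders back to back.
    const AtlasEntry* Find(const std::string& sprite) const {
        auto it = entries_.find(sprite);
        return it == entries_.end() ? nullptr : &it->second;
    }
private:
    std::unordered_map<std::string, AtlasEntry> entries_;
};
```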

About exozet
exozet is Germany's leading independent game development studio in the family and brand entertainment sector. Since 2004, the Berlin-based company has specialized in the development, publishing and marketing of multi-platform mobile and online games for an international audience. Among exozet's best-known games are titles like Catan, Carcassonne, Playmobil and HABA apps for kids.

With hundreds of different Android tablet models on the market, there is a huge bandwidth ranging from 480x800px low-cost 7“ models

Wooga’s Diamond Dash for Android comes in 3 well-balanced grid configurations, depending on the physical screen size.





In the Adobe Air-based game »Luke The Liftboy« the number of different characters per level depends on the device memory.

up to 24“ tablet heavyweights, not to forget the continuously emerging category of »Phablets« situated somewhere between 5 and 6.9 inches in size.

Smartphone vs. tablet
One basic rule that can provide a solid guideline is to differentiate between a smartphone experience and a tablet experience. The decision where exactly to put this threshold should primarily depend on the physical screen size and the »tap-action factor« of your game. For an action-heavy title such as »Diamond Dash« by Wooga, every square millimeter of tap space counts, especially on smaller phones. Diamond Dash on 7“ tablets such as the Nexus 7 is a pixel-accurately scaled flavor of the game, to be played in portrait mode, while 10“ Android tablet players receive a tablet experience to be played in landscape mode with more decorative elements on the additional screen real estate. A fine balance of three different grid sizes provides optimized user experiences on a comprehensive range of screens (phones: 7x8 portrait, phablets/7“ tablets: 9x8 portrait, 10“ tablets: 10x9 landscape). With less action-driven titles like »Catan«, the screen real estate on devices such as the Nexus 7 is sufficient for enabling the tablet UI known from iOS »Catan HD«. This includes floating windows which re-use large portions of the corresponding iPhone assets such as buttons. With Catan and Catan HD for iOS, we distinguish between different screen real estates: phone320, phone640 (retina), tablet768 and tablet1536 (retina) asset groups.

»Creating your master 2D assets in high definition (and in 16:9 wide aspect ratio) is a solid starting point.«

Aspect ratio
Another important factor when it comes to considered UI differentiation is the screen's aspect ratio. The table below shows some popular resolution groups, all ranging between 4:3 and 16:9. Once you take additional parameters into account, such as system bar height or portrait/landscape formats, things get even more »interesting«. The challenge really is to make »good use« of the additional space instead of simply pillar boxing your game (which should only be used as a last resort). So creating your master 2D assets in high definition (4K is around the corner!) and in 16:9 wide aspect ratio is a solid starting point, as long as the main gameplay works great on 4:3 iPads, too. It looks like »overscan« and »safe areas« known from film and console gaming are experiencing a renaissance on mobile platforms, just with a lot more variants. Automation is very important in that regard: Automated testing platforms like Testdroid can also provide regular sets of screenshots to verify layout behavior on a set of devices. Smart asset pipelines enable rapid content tweaks.
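One way to implement the »safe area« idea is to author for one design aspect and compute the actual viewport per device, cropping into overscan rather than pillar boxing where possible. A sketch under those assumptions:

```cpp
struct Viewport { float x, y, w, h; };

// Fit a fixed design aspect (e.g. 16:9 master assets) into an arbitrary
// screen. Wider screens get side margins for decorative art; narrower
// (4:3) screens crop into the overscan region instead of pillar boxing,
// so critical UI must stay inside the 4:3-safe area.
Viewport FitDesignToScreen(float screenW, float screenH, float designAspect)
{
    const float screenAspect = screenW / screenH;
    Viewport v;
    if (screenAspect >= designAspect) {
        // Screen wider than design: full height, centered horizontally.
        v.h = screenH;
        v.w = screenH * designAspect;
        v.x = (screenW - v.w) * 0.5f;
        v.y = 0.0f;
    } else {
        // Screen narrower: full width; the negative y offset crops the
        // top and bottom (overscan) evenly.
        v.w = screenW;
        v.h = screenW / designAspect;
        v.x = 0.0f;
        v.y = (screenH - v.h) * 0.5f;
    }
    return v;
}
```

With a helper like this, the automated screenshot runs mentioned above quickly reveal layouts where critical UI drifts out of the safe area.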

File size and memory management
Although this might not sound particularly creative, the hard and soft limitations imposed by both target devices and distribution channels are really challenging in regard to implementation and game design. There can be a million (mega) reasons why you need to keep a game below an arbitrary file size. So-called over-the-air download limits, meaning the maximum file size per app that can be downloaded without WIFI access, are direct limits enforced by the default iOS and Android app stores. There has never been a real technical reason behind these arbitrary limits; they are more of a concession towards mobile network operators in order to limit 3G network overload.

iOS device       | iPhone 4   | iPhone 5    | iPad Mini  | iPad Mini 2 | iPad 1     | iPad 3
Resolution (px)  | 960 x 640  | 1136 x 640  | 1024 x 768 | 2048 x 1536 | 1024 x 768 | 2048 x 1536
Aspect ratio     | 1.50 (3:2) | 1.78 (16:9) | 1.33 (4:3) | 1.33 (4:3)  | 1.33 (4:3) | 1.33 (4:3)
Screen size      | 3.5“       | 4.0“        | 7.9“       | 7.9“        | 9.7“       | 9.7“
Screen density   | 326 ppi    | 326 ppi     | 163 ppi    | 326 ppi     | 132 ppi    | 264 ppi

Android device   | Nexus S    | Galaxy Nexus | Nexus 4    | Nexus 5     | Nexus 7 (2012) | Nexus 7 (2013)
Resolution (px)  | 800 x 480  | 1280 x 720   | 1280 x 768 | 1920 x 1080 | 1280 x 800     | 1920 x 1200
Aspect ratio     | 1.67 (5:3) | 1.78 (16:9)  | 1.67 (5:3) | 1.78 (16:9) | 1.60 (16:10)   | 1.60 (16:10)
Screen size      | 4.0“       | 4.65“        | 4.7“       | 4.96“       | 7.0“           | 7.0“
Screen density   | 233 ppi    | 316 ppi      | 318 ppi    | 444 ppi     | 216 ppi        | 323 ppi

Apple iOS reference resolutions (excluding the original iPhone) and a subset of Google's Nexus-branded Android devices.



In any case, you should aim to keep your game below a 50 MB initial download in order to be downloadable on the go, which can be a tough challenge. Besides limited data plans, users with only little on-device storage might also consider particularly large apps the first ones to delete in order to free some disk space. On top of that, some pretty popular Android devices unfortunately come with a disk partitioning that reduces the effective limit of the downloadable app size even more, depending on (no joke!) the amount of previously accessed Facebook photos – they unfortunately happen to be stored in the same temp space. There are ways around those limits, including selected asset downloads from a dedicated content server, maintaining multiple builds (each targeting a specific device/CPU/screen group), utilizing Android APK expansion files or using iOS 7's nice background downloading capabilities. Some titles even offer their users the option to download »HD« graphical assets to enhance their game experience, essentially replacing heavily compressed texture assets with more detailed versions upon the user's request.
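The »HD assets on request« idea mentioned above can be as simple as a path-resolution fallback. A sketch with hypothetical directory names:

```cpp
#include <filesystem>
#include <string>

namespace fs = std::filesystem;

// Resolve a texture path: prefer the optional downloaded HD pack,
// fall back to the heavily compressed asset bundled with the app.
// "hd/" is a hypothetical directory filled by an in-app downloader.
fs::path ResolveTexture(const fs::path& contentRoot, const std::string& name)
{
    const fs::path hd = contentRoot / "hd" / name;
    if (fs::exists(hd))
        return hd;                        // user opted into the HD pack
    return contentRoot / "base" / name;   // within the initial download limit
}
```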


»Some popular Android devices come with a disk partitioning that reduces the downloadable app size limit.«

Distribution channel and culture
Reaching new audiences for your game means thinking outside the box, with your usual boxes being Facebook, iTunes and Google Play. A web-based Facebook game can leverage a logged-in FB user literally from the first mouse click. However, when deploying a mobile Free2Play title which deeply integrates with Facebook, alternative entries into the game need to be available as well. This includes offline gameplay and playing without registering an account for a significant time. Technically, in those cases a local user profile is created and, depending on the backend capabilities, it can be merged with a server-based profile once the user logs in. Having this kind of »Direct Play« functionality is crucial to keep the barrier of entry for new mobile users as low as possible. Channels such as Samsung Apps or the Amazon App Store provide an additional distribution opportunity. On Amazon Kindle Fire tablets the latter is in fact the only store available. A full integration could include GameCenter-like achievements via Amazon GameCircle, device synchronization via Whispersync, push notifications via Amazon Device Messaging and utilizing the Amazon In-App Purchasing API. If there are physical products available around your game (such as fluffy toys, collectible figures, physical board games, etc.), you might want to integrate seamless in-app shopping via the Mobile Associates API. When targeting Asian core markets such as China, Korea or Japan, more significant changes in your game can be required in order to resonate with your audience. While some extensively used particle and glow effects feel over-the-top in most western markets, a vibrant user interface paired with characters designed for the local market's preferences might be just right. In any case, working with experienced partners is key. When comparing the top download charts, it also becomes obvious that Facebook is huge but not the best choice for a successful title in Asia. The social networks you might want to connect to include WeChat (China), LINE (Japan) and Kakao Talk (Korea); the latter is said to be installed on over 90 percent of all smartphones in South Korea. Integrating a different social network is never a drop-in replacement for the Facebook SDK. Instead, each network comes with proprietary features, slightly different concepts and established reward mechanisms which should be considered when making a thoughtful implementation.
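The »Direct Play« profile merge described above might look like the following sketch; the conflict policy here (take the per-key maximum) is our assumption, and real backends usually need smarter reconciliation.

```cpp
#include <algorithm>
#include <map>
#include <string>

struct Profile {
    std::map<std::string, long long> progress;   // e.g. "level", "coins"
};

// Merge an offline guest profile into the server-side profile once the
// user logs in (to Facebook, Kakao Talk, etc.). Taking the per-key
// maximum is one simple policy so no offline progress is ever lost.
Profile MergeOnLogin(const Profile& local, const Profile& server)
{
    Profile merged = server;
    for (const auto& [key, value] : local.progress) {
        auto it = merged.progress.find(key);
        if (it == merged.progress.end())
            merged.progress[key] = value;       // key only existed offline
        else
            it->second = std::max(it->second, value);
    }
    return merged;
}
```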

Conclusion
Bringing a game experience to players on multiple channels and devices has become crucial for many developers, ranging from AAA studios to small indie devs. Creative Mobile Porting is an approach to find individual, smart solutions for each project. Matthias Hellmund

Splash screen of Catan for Java ME and BREW phones featuring Manga-style characters for the Japanese market.

»When targeting Asian core markets, more significant changes in your game can be required.«

In-game view of »Jelly Splash« by Wooga (note the subtle differences between global and Korean variants). The Korean version integrates with Kakao Talk instead of Facebook and also includes an array of additional visual and sound effects.



Case Study Making Games – GDC 2014

GIANA SISTERS: TWISTED DREAMS

AFTER THE KICKSTARTER

Achieving your funding goal is awesome. But what actually happens when your game is finished? The guys from Black Forest Games look back at a very different kind of crunch time. Adrian Goersch is one of the two Managing Directors of Black Forest Games.

Adrian is in charge of marketing for Giana Sisters: Twisted Dreams. He was strongly involved in the Kickstarter project and personally interacted with the Project Giana backers.

David While the circumstances of our project weren't unprecedented, it wasn't a typical Kickstarter, but rather a »Kickfinisher«. Instead of collecting funds to launch »Project Giana«, we were looking to complete it. As a result, we had a lot of material that we could share with our backers, proving that we were indeed professionals and that our project wasn't just a pipe dream. Nevertheless, kickstarting a nearly completed project also has its downsides. As any developer knows,

the end of a project tends to be the busiest, and we had contracted work coming up, leaving precious little time to plan and run the campaign, let alone complete the game. In comparison to other game projects on the Kickstarter platform, our positioning was risky. While we had no illusions about even coming close to the success of a Double Fine Adventure or Wasteland 2, in Kickstarter terms, the funding goal we required was still relatively high, especially for a newly founded studio with no star developers or a pedigree of hit games to speak of.

Emily handled the social media channel during the Kickstarter and now is the go-to person for all matters Kickstarter.

Emily R. Steiner is an Associate Producer at Black Forest Games.


Giana Sisters: Twisted Dreams is a charming old-school platformer which was nearly finished when we started our Kickstarter campaign.


It's Not Finished When It's Over…
David … but you will be. After a successful Kickstarter campaign, the first thing your backers are prone to hear from you usually is nothing at all. Kickstarter time is crunch time – a crunch time where you will likely be blindly stumbling outside of your comfort zone, armed with what little knowledge you have to stave off the constant threat of impending doom. Unlike what runaway successes like Tim Schafer's campaign may lead one to believe, most successful Kickstarter projects barely manage to hit their goal on the very last day. On a more personal note, I burned out on that infamous last day, just before we hit our goal. I have since retired from Kickstarter duties at my own request and passed the torch to Emily Steiner, who supported us during the Kickstarter and officially joined Black Forest Games as Associate Producer a few weeks after the completion of our Kickstarter project. I wasn't the only one affected by the Kickstarter crunch. Even with a dedicated Kickstarter team, the workload spilled over to the rest of the studio, creating extra work for anyone with a shred of time left. This is where the downside of our atypical »Kickfinisher« situation struck: the success of our campaign didn't ring in a preproduction phase, but more crunching to finish the game on time. As we didn't have a publisher for Giana Sisters: Twisted Dreams, we also had to handle distribution, which – as you may guess – resulted in more work, all while keeping in touch with our backers and organizing the Kickstarter rewards. While our Kickstarter team had officially dissolved, its members and the Black Foresters who volunteered to help out during the project were still very much involved with Kickstarter work.

Keeping our Promises: The Rewards
Patrick While we were lucky with distributing the game over Steam and gog.com (more details below), we also needed to provide our backers with the versions we promised them. A reliable and accessible solution for distributing the game and the other digital rewards was required; we decided to write a small web application that used Amazon Web Services to provide our backers with the backer-exclusive versions of the game, as well as the soundtrack, art book and other digital rewards. This solution had the advantage of close to zero initial cost, high scalability and a pay-as-you-go cost model.
Emily Coordinating the physical rewards was a little messy for us. Originally, we were planning on setting aside time and specific people on the team to focus solely on ordering the rewards and getting them ready for shipment, but in the chaos of crunch time our many-hat-wearing team members were juggling a little too much. We had to order each piece of our reward packages in huge numbers from different manufacturing companies, and take time to provide the designs required for each item (what do the pins look like, CD covers, poster art, mouse pad sizes/styles etc.). Manufacturers have specific time windows set aside for ordering, and on top of their estimated ordering time they also have other customers, so we had to assume that our packages would arrive later than originally anticipated. Most of our designs were completed in the planning process long before our release date, but as it drew closer, gathering the information needed to create certain designs became more difficult as everyone got busier. Having people from the design team and marketing team also trying to focus on the shipping of rewards was difficult, especially since during crunch time, and right after the release, a strong marketing focus was critical, not to mention bug fixes and keeping up with patch updates. In all this commotion, shipping the rewards unfortunately had to wait on the back burner. Once we had released and sent out the first few patches, the team was able to relax a bit, in the sense that we could now take the number of people we needed to form a proper group with an orderly process to get the rewards ordered, prepared, and shipped. We weren't so much worried about the shipping taking place later than we planned, but rather relieved that we could officially and clearly communicate a date to our backers. As always, they were patient and understanding. We expect our backers to receive their well-deserved rewards in time for Christmas.
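Patrick's web application is not described in more detail, but the core of any such backer-download service is handing out links that are tied to one backer and expire. The sketch below shows a generic HMAC-based token scheme for illustration only – this is not Amazon's actual request-signing algorithm – and assumes OpenSSL is available.

```cpp
#include <openssl/hmac.h>
#include <cstdio>
#include <string>

// Build a time-limited download URL: the token ties the backer id and
// expiry together, so links can't be altered or shared indefinitely.
std::string MakeDownloadUrl(const std::string& backerId,
                            long expiresUnixTime,
                            const std::string& secret)
{
    const std::string payload = backerId + ":" + std::to_string(expiresUnixTime);

    unsigned char mac[EVP_MAX_MD_SIZE];
    unsigned int macLen = 0;
    HMAC(EVP_sha256(),
         secret.data(), static_cast<int>(secret.size()),
         reinterpret_cast<const unsigned char*>(payload.data()),
         payload.size(), mac, &macLen);

    char hex[2 * EVP_MAX_MD_SIZE + 1] = {};
    for (unsigned int i = 0; i < macLen; ++i)
        std::snprintf(hex + 2 * i, 3, "%02x", mac[i]);

    // Hypothetical endpoint; the server re-computes and compares the token.
    return "https://rewards.example.com/game?backer=" + backerId +
           "&expires=" + std::to_string(expiresUnixTime) +
           "&token=" + hex;
}
```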

»In comparison to other game projects on Kickstarter, our positioning was risky.«


Nikolas Kolm is a Game Designer at Black Forest Games.

Nikolas volunteered his free time during the Kickstarter and was a huge help on several fronts, ranging from managing the BFG/Giana forums to handling backer rewards.

Patrick Harnack is a System Administrator at Black Forest Games.

Patrick keeps things running smoothly and was key to delivering the game to Project Giana backers on time.

David Sallmann is a Senior Game Designer at Black Forest games.

David was the head of the Kickstarter campaign and has shelved his newly-acquired proficiency at running around while on fire.

We were not only able to provide demo videos but also schematics illustrating the game mechanics, which gave our backers a little insight into the production of a video game.

Avoiding the reward pitfalls
Nikolas There is a lot of work associated with creating physical rewards for backers.



We knew this in the beginning, but still underestimated the complications involved. To start off, we had to come up with an estimate of how many items we would need of any given physical reward. Since we had no clue how much publicity we would generate, that was a very difficult thing to do. Add to that the fact that usually, when you create any kind of physical reward, the more you order, the better your deal will be. Only at the end of the Kickstarter could we really dig into the research, because only then did we know how many items we had to get. Now the main focus lay on finding the best value for money. But when your orders only number in the hundreds, this can be quite tricky. You want to put out the best possible products, but at those numbers, the best possible products start to be really expensive. It then was a matter of researching the best manufacturers in the shortest time possible and getting the rewards finalized. This was another pitfall we hadn't anticipated: since we had not done this

kind of marketing before, we had to adjust a lot of our artwork/designs to correspond to exacting specifications provided by the manufacturers.

Marketing: From Kickstarter to Greenlight
Adrian For marketing, we concentrated on press and community management. Spending the few thousand euros we could afford on banners and other advertising didn't seem very effective with all the big publishers spending millions of euros on their equally big title releases. Self-publishing requires taking care of several distribution channels yourself. For us, the initial step was contacting Valve and Good Old Games, as PC was our lead platform and done first. Originally we assumed that we would »just« have to finish development, put the game on Steam and sell it. Then the Steam Greenlight process was introduced. We learned that you have to run a second marketing campaign for a slightly different target group. We motivated our backers to vote for Giana on Greenlight and redirected all our communication channels to Steam instead of Kickstarter. We even changed the banner for Giana to better meet the Steam audience. The good news: after being greenlit, it took us only days to release the game on Steam. There are several digital distribution platforms to choose from; we will tackle Origin and Amazon next. The paperwork for the various contracts and the effort to provide different builds and marketing materials like trailers and screenshots should not be underestimated. Important for your finance guy: prepare the documents required to avoid the withholding tax. In the case of Steam, this includes having a U.S. tax number, the EIN. Fortunately it's easy to obtain by phone. Giana is planned for XBLA and PSN as well. Business-wise this is a bit like playing the Game of Thrones. Each platform holder would like to get your stuff exclusively, at least for a while. As

»Coordinating the physical rewards was a little messy for us.«

We had to manage hundreds of keys for backers and journalists.

It's one thing to describe the rewards, but a picture showing them is much more appealing.



soon as you have already released on Steam, things get more difficult. If you have the chance for a simultaneous release, go for it.

PR: Juggling with press and social media
Nikolas Visibility is key to successful sales. This was something we were aware of during the Kickstarter, but we had precious little time to obtain it. Unsurprisingly, we experienced the consequences of a lack of visibility firsthand. After release, our game generated very positive reviews, but the big sites did not put up any, which perpetuated the lack of visibility, particularly in the US. There was very little we could do about that with our limited time and resources. As frustrating as it is, when your game launches and is a comparatively small-scale production with no »story-worthy« content (e.g. sex/violence/controversial topics, to name the easiest ways to generate buzz), you won't get exposure. Even with the Greenlight process and the game having a very respectable Metascore of 81 (which put us in the top 20 recent releases), we were not big enough to warrant press on a large scale. Additionally, our schedule was so tight that we could not send out review copies of the game until a few days before release, which meant that even if larger sites did decide to write about us, the reviews would not hit until several weeks after release.
Emily Social media is a huge part of marketing. In addition to our dedicated PR team contacting the press and organizing interviews, we had to have an active community on Facebook, Twitter and our forums. Our fans and backers would not only receive updates on the project along with their other daily news, but also get to know us as a team and not as a faceless company. The fans and the backers are what made it possible for us to bring Giana Sisters: Twisted Dreams to the surface, so interacting with the community and getting to know these people was very important to us. This included creating a place where people knew they could promptly receive the answers they were looking for and find help with difficult passages in the game. Not only is it enjoyable to interact with them, but we want people, old fans and new fans, to know that we as developers really appreciate their dedication to our team, just as much as they appreciate the product we'd created for them.
Nikolas Perhaps one of the biggest advantages of doing a Kickstarter today is social media. Social media is the Alpha and Omega of advertising when you don't really have a marketing budget.

Two of our Facebook artworks: We communicated via timeline cover updates to a) announce important milestones like the Steam Greenlight process and b) thank the community.
If you play your cards right, you can generate a lot of buzz without spending anything but your time. Keeping your social media feeds updated regularly is paramount to staying on the radar of as many people as possible. Even then, we did not get the visibility we would have liked, but without social media it would have been a lot more difficult to keep the interest high. As we will see below, the importance of social media did not abate after the Kickstarter campaign.

»If you have the chance for a simultaneous release, go for it.«

Backers and Community
Nikolas One thing we did gauge correctly is the importance of communicating with your backers. It is important to strike a good quantitative balance so that your backers don't feel ignored but also don't get annoyed and unsubscribe from your Kickstarter feed. There are two things that are increasingly important after the Kickstarter is over: don't suddenly go silent, and don't ignore any group in your community. As mentioned above, it is expected and accepted that developers will shut down communication for a short time after a Kickstarter campaign. However, you cannot rest too long. Backers want to be kept up to date about your progress; they want to feel that the money they »invested« is going to good use. To keep everyone in the loop, we utilized the following methods:

»Social media is the Alpha and Omega of advertising.«

The Kickstarter Site
Kickstarter itself offers you the easiest way to provide followers of the project with regular updates. It provides a direct way to interact with your backers and is a crucial tool for gathering the information you need for shipping and manufacturing.
Forums
Early on, we implemented forums for all people interested in the game. This proved to be a

One thing we did to engage the community was to offer badges and banners that backers could use as avatars in their forums.



hotbed for communication between developers and the community, both backers and non-backers. Our efforts to be transparent in everything we did were very positively received. Our developers post on the boards, answer questions and generally socialize with our community. Forums do create additional workload, though. In addition to needing a variable number of people to moderate them (we were lucky that our community was very easy to moderate), the forums will drain time from the developers. They are a great way to gather suggestions, bug reports and the like, but are also an easy way to lose time. However, in the long run, we believe that the active communication did wonders to create a positive image for the studio as a whole.
Support
We introduced a number of support-specific e-mail addresses which users could write to in order to get answers and help. The problem with this was the wide spread of communication channels. While offering many avenues of contact to your community makes for smoother communication, it also leads to a large number of different sources that all must be monitored. In retrospect, it would have been smarter to streamline the different avenues to have all questions and complaints coming in at one central point; no matter how often we asked and pleaded with the community to go through one specific e-mail address for questions, we would still get dozens of requests through all the other channels we had made available throughout the campaign, which resulted in coordination headaches.
Livestreams
Originally, this was intended to be only an end-of-campaign event, showing the team as we counted down to the finish. Due to its popularity, we decided to continue making livestreams as a form of weekly update, keeping our backers informed in an approachable way. Throughout the time after the campaign, these liveshows served multiple purposes:

At first the livestream was intended as an »end-of-campaign« show, but based on its popularity we decided to make it a permanent offering.


clarifying updates, shedding light on why certain decisions were made, answering questions more clearly than could be done per e-mail and staying as close as possible to the community. Connecting to the backers and the community, and not appearing as a faceless company but as actual people who talked about the project with passion and drive, contributed to us retaining a very positive image. However, as in the other cases, this resulted in an additional workload. A liveshow takes time to prepare and execute, and costs around fifteen man-hours in total for an audience of up to 200. Is it worth it? In our opinion, yes. While our community is already happy and a liveshow is basically just the »icing on the cake«, in the long run we are building a reputation. We believe that the extra effort we put into the community work has generated a very passionate and friendly community and a good starting point for future projects.

A Word in Hindsight
David All in all, I believe we did well, managing crucial research and preparation in a limited amount of time, which allowed us to position ourselves properly and avoid most pitfalls. The short preparation and execution was born out of necessity; should we ever kickstart another project, we will dedicate a lot more time to preparation, with room to breathe, marketing professionals on the team, a strong focus on generating buzz months beforehand and an interesting story for the press. Giana Sisters: Twisted Dreams is some of the hardest-earned money we've ever made, easily doubling the workload we would have had with the support of a publisher. But it was also one of the most rewarding projects we've ever had, not only for the creative freedom and sense of ownership it provided us with, but also for the strong involvement with the community it gave us. This experience is what many of us originally signed up for, what we naively believed game development to be as rookies, and what we still hope and strive for as veterans. Adrian Goersch, Emily R. Steiner, Nikolas Kolm, Patrick Harnack, David Sallmann


Technical Post Mortem Making Games – GDC 2014

PORTING THE WITCHER 2 TO XBOX 360
Some of the Senior Programmers and the Lead Technical Artist from CD Projekt RED explain in great detail how they managed to port The Witcher 2 to Xbox 360, including changes to the animation system, memory optimizations and the adaptation of lighting.

After finishing development of The Witcher 2 for PC, we had exciting and frightening work ahead of us for the Xbox 360 adaptation. Frightening, because only a small fraction of the team had console development experience. And exciting, because the team was hungry for a new set of technical challenges after the PC version wrapped.

Geometry Optimization
We had two common problems with our in-game geometry. First, most of the meshes had high-density topology, up to 60k triangles(!). Second, we had a lot of unique objects rendered per frame – up to 7,000 – so our render queue processing was taking way too long. In the PC version most of our base meshes already had at least one level of detail (LOD); for the Xbox we added even more – up to three per mesh. The basic idea behind LODs is that instead of having

just one graphical representation for your geometry, you have a few, so that you can swap and render them in relation to the distance to the camera, or by any other criterion (like screen size etc.). Thanks to that, you render high-poly models only when they are up close to the camera. All the LOD swap distances were set up by artists, so they had more control over popping. We also gained some performance after the introduction of a »default auto-hide distance« for all renderable components. We found out that artists wanted to build their own LODs (again, they had more control and the quality was better), so we didn't introduce any automatic solution for mesh degradation. To avoid sudden pops and make the whole transition smoother, our LOD swaps happen over 0.3 seconds – with a simple dissolve shader applied during that time. Remember the prologue prison cell with Geralt saying »This tower was ridiculous ...« (the huge siege tower attacking La Valette castle)?
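As a sketch of the mechanism just described (hypothetical names, artist-authored distances): pick the LOD by camera distance, and during a swap render both LODs for 0.3 seconds while feeding a 0..1 factor to the dissolve shader.

```cpp
#include <cstddef>
#include <vector>

struct LodSet {
    std::vector<float> swapDistances;   // artist-authored, ascending
};

// Pick the LOD index for a given camera distance.
std::size_t SelectLod(const LodSet& lods, float distance)
{
    std::size_t lod = 0;
    while (lod < lods.swapDistances.size() && distance > lods.swapDistances[lod])
        ++lod;
    return lod;   // 0 = full detail, last = lowest / auto-hide
}

// During a swap, both LODs are rendered for kTransitionTime seconds and
// the dissolve shader receives a 0..1 factor instead of popping instantly.
const float kTransitionTime = 0.3f;

float DissolveFactor(float timeSinceSwap)
{
    const float t = timeSinceSwap / kTransitionTime;
    return t < 1.0f ? t : 1.0f;
}
```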

Marcin Gollent is a Senior Engine Programmer at CD Projekt RED.

Over three years in CD Projekt RED, Marcin developed and maintained both the gameplay and engine systems like the inventory system, NPC jobs or particle systems on the Witcher 2.­For the Xbox 360 version Marcin mostly worked on memory consumption issues. Marcin is currently a member of the engine team, focusing on the features that will define future RedKit capabilities.

Balázs started in the industry as a junior at Eidos Hungary on the Battlestations: Pacific project where he quickly proved himself to be valuable member of the team. Balázs then went to Digital Reality where he became a senior while building a data driven multithreaded render engine that supported SkyDrift and Bang Bang Racing. He started working in CD Projekt RED after the Witcher 2 PC release, as an Xbox 360 specialist and also took part in the Witcher 1 Mac development. Now Balázs works as part of the newly formed engine team where he defines the future of the render engine that was used to make one of the best looking games on the Xbox 360.

The Witcher 2 was first released on PC and then ported on Xbox 360 as an Enhanced Edition. Before it could be released on April 17th 2012 several optimizations and adjustments had to be made in every major system.

Balázs Török is a Senior Engine Programmer at CD Projekt RED.




Krzysztof Krzyscin is Lead Technical Artist at CD Projekt RED.

Krzysztof started his development adventure with small indie games, worked also on the first Witcher and on a few unannounced Star Wars universe games. He has worked at CD Projekt RED since 2009, started as Environmental Artist, and right now he is in charge of all technical matters covering all projects. Krzysztof specializes in new tools, R&D, optimizations and simulations.

Piotr Tomsinski is a Senior Programmer at CD Projekt RED.

The rumor is that he already knew that some of the biggest PC meshes had to be completely redone to fit our triangle budget. From the artists' point of view, dealing with the long render queue was mostly done by reducing the number of different in-game components used. Components are the basic blocks that we use to build up our game entities. For example, a character is constructed at runtime from a few randomly selected components (like shoes, hats, or gloves) and some predefined components (like chest armor and weapon). Thanks to that we have a great variety of characters – and, as I mentioned before, a long render queue. Another great character optimization was our »background character« system. What it did was basically remove all the unnecessary sub-systems from the NPCs' templates, leaving only (merged) skinned meshes playing some looped animation. That worked well for all background fighting scenes that would otherwise have taken too much memory and CPU juice. The navigation mesh was used to drive both the player's and all NPCs' movement. On the PC, on our biggest location (the battlefield in Act 2), it could take up to 80 MBytes of data. For the Xbox 360 we introduced an offline tool that could filter and optimize the navigation mesh within a given threshold. In the end we also had to create an option for exporting/importing this mesh to our 3D package so the artists could

Comparison between geometry details on the ballista model (PC and Xbox 360).

The Elven ruins in the PC version and the Xbox 360 version.


Streaming and Textures

We barely used data streaming in the PC version, mostly because almost every location fit into memory. The Xbox version had to fit into 512 Mbytes, so we had to divide the game content to avoid OOM (out of memory) crashes. We streamed not only textures and meshes but also animations, terrain tiles and foliage tiles. Balancing memory on Xbox 360 was tricky, because our game had a huge amount of high-resolution detail. Our first step was to assign strict budgets to every system. Then we ran a series of tests and balanced those budgets for every streaming zone. For a few places in the game, we decided to change real-time cutscenes to pre-rendered ones, and used the extra time while the video played to load the actual data into memory. The next thing we looked into was texture sizes, mostly because downsizing their resolution (again based on artist-specified values) was both extremely effective and extremely easy to implement. We developed a special tool that runs through given streaming zones and then lists all used resources (sorted, with thumbnails, the output formatted as clickable HTML), so artists could easily check what takes most of their budgets.
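The core of such a per-zone report fits in a few lines, as sketched below. The names are hypothetical, and the real tool also rendered thumbnails and emitted clickable HTML; this only shows the gather-sort-print skeleton.

```cpp
// Sketch of the per-zone resource report: gather everything a streaming
// zone references, sort by size, print the worst offenders first.
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

struct ResourceInfo {
    std::string name;
    std::size_t bytes;
};

void ReportZone(const std::string& zone, std::vector<ResourceInfo> used) {
    std::sort(used.begin(), used.end(),
              [](const ResourceInfo& a, const ResourceInfo& b) {
                  return a.bytes > b.bytes;  // biggest first
              });
    std::printf("Streaming zone: %s\n", zone.c_str());
    for (const ResourceInfo& r : used)
        std::printf("%10zu KB  %s\n", r.bytes / 1024, r.name.c_str());
}
```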

Also, technical artists used PIX to debug specific oddities – for example invisible dragons sitting at the location's origin (0,0,0), or NPCs spawning and permanently staying underground. Another part of the texture workflow was switching from DXT5 to DXT1 compression in as many places as possible. That worked especially well for static GUI textures that didn't require an alpha channel – all the more so because we were redoing our GUI for a »gamepad friendly« version anyway, and we optimized a lot of subtle GUI effects, like glows or grunge maps, along the way. To save the most memory and performance, the biggest shaders had to be redone. For shader optimization we generally introduced a separate, simpler version of the shaders that artists could apply to all background and unimportant objects. We also combined rarely used shaders with similar properties into one – once again, the level statistics tools came in very handy here.

New Foliage System

The original foliage system on the PC was built in the simplest way possible: every painted tree, bush or patch of grass was stored as an entity of the world, and we rendered them one by one. In the very early stage of the Xbox project we realized that this wouldn't work, and we also knew we had to find a better way of storing the data, because it was scattered around memory and used far more space than optimal. We had to change the whole system without making the quality worse – but we simply couldn't render thousands of individual bushes and trees for the forests around Flotsam. The solution was to build a quadtree for each type of foliage and store the necessary information for each instance: position, rotation, size and color. This kind of system is not only optimal for storage; the quadtree is also very effective for culling. And since the instances are ordered the way the quadtree nodes are built, data locality is the best possible. After culling, it would even be possible to render all instances of a certain foliage type in one draw call, using instancing and the data stored in the quadtree. That would have had its own benefits, but after many tests we decided against it, since on the Xbox 360 the performance was actually worse in some cases. Instead, we iterated through the stored data on the CPU. That might sound very similar to the PC version, but in fact it is very different: there is almost no traffic between the CPU and the GPU beyond setting a few shader constants, so both the CPU and the GPU are used very efficiently. The only compromise we had to make was the way the LODs are calculated. In the PC system the calculation happens per instance, and all the LOD distances were tuned for that solution. In the new system, to make the rendering really fast, we had to decide on one LOD per quadtree node; the resulting distances were sometimes too far out, so the artists had to tweak them.
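A simplified picture of that per-type storage and culling is sketched below. The structure and names are assumptions made for the sketch, not the actual engine code; the key properties are contiguous instance storage in node order and whole-subtree rejection during culling.

```cpp
// One quadtree per foliage type; instances stored contiguously in node
// order so that culling and iteration stay cache friendly.
#include <cstdint>
#include <memory>
#include <utility>
#include <vector>

struct FoliageInstance {
    float    x, y, z;    // position
    float    rotation;   // around the up axis
    float    scale;
    uint32_t color;      // packed tint
};

struct QuadNode {
    float minX, minZ, maxX, maxZ;                   // node bounds
    uint32_t firstInstance = 0, instanceCount = 0;  // range into instance array
    std::unique_ptr<QuadNode> child[4];             // all four set, or none (leaf)
};

// Collect the instance ranges of all visible leaves; 'visible' stands in
// for the frustum test the engine would run against the node bounds.
template <typename FrustumTest>
void CullFoliage(const QuadNode& n, const FrustumTest& visible,
                 std::vector<std::pair<uint32_t, uint32_t>>& out) {
    if (!visible(n.minX, n.minZ, n.maxX, n.maxZ))
        return;                            // whole subtree rejected at once
    if (!n.child[0]) {                     // leaf: emit its instance range
        if (n.instanceCount)
            out.emplace_back(n.firstInstance, n.instanceCount);
        return;
    }
    for (const auto& c : n.child)
        CullFoliage(*c, visible, out);
}
```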

View of the city streaming area (PC and Xbox 360).

Comparison of the forest in the PC version and the Xbox 360 version.


Tree LOD geometry comparison (first LOD on the left, second on the right).

The debug visualization of the foliage quadtree.

Shadows/Lighting

Shadow rendering is still one of the biggest problems in graphics, and with the PC being so much more powerful nowadays, the challenge is even harder when adapting a game to the Xbox 360. At the very beginning of the project we had to make a hard decision and decrease the resolution of our shadow maps as well as the sample count. With this change we gained not only rendering performance but also the memory consumed by the render targets. Afterwards there was a huge demand in the company to improve the shadow quality, but we knew we had to do it without increasing the size of the buffers – and increasing only the sample count couldn't solve our problems. So we started looking into other ways of filtering the shadows. We tried different solutions; some form of variance shadow mapping seemed like a good idea, as it would let us decrease our shadow sampling even further. After doing many tests in the game, however, we realized that in some cases we had too much shadow complexity and the light bleeding was unacceptable. So in the end we went back to optimizing our original PCF solution: we tried to find a better fitting algorithm for the shadow buffers and slightly increased the number of samples we took from the shadow map. With these two changes we achieved a solution that was quite efficient and allowed us to have visually pleasing directional shadows on the Xbox 360.

Another part of the lighting was the point lights. We had a deferred renderer set up, so we could light the scene with many point lights. The main problem was that the lighting process took way too much time when there were many point lights in the scene. We had to apply stenciling tricks: we marked pixels in the stencil buffer if they were close enough to a point light to be lit by it, and calculated the lighting only on those pixels. Sometimes we also needed shadows for those point lights to make the scene look realistic; in those cases we used exponential shadow mapping.
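For reference, the core of exponential shadow mapping can be written down in a few lines. This is the textbook formulation, shown only to make the idea concrete – not necessarily the exact variant used in the game.

```cpp
// Textbook exponential shadow mapping in miniature.
#include <cmath>

// At shadow-map render time, per texel: store exp(c * depth).
float EncodeEsm(float occluderDepth, float c /* sharpness, e.g. 80 */) {
    return std::exp(c * occluderDepth);
}

// At lighting time, per shaded pixel. Because the stored value can be
// hardware filtered before this test, the shadows come out soft and cheap.
float EsmShadowFactor(float filteredMapValue, float receiverDepth, float c) {
    // exp(c*d_occluder) * exp(-c*d_receiver): ~1 when lit, -> 0 in shadow.
    float f = filteredMapValue * std::exp(-c * receiverDepth);
    return f > 1.0f ? 1.0f : f;  // clamp values >1 (occluder behind receiver)
}
```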

Garbage Collector Optimization

One of the most problematic systems in the engine was the garbage collector. Not many engines use garbage collection unless it is part of the language or the framework the engine is built on (some are written in C#, and there is a way to use garbage collection in Objective-C, which underlies many iOS-capable engines). Optimizing garbage collection is a really hard job, and when it's not part of the language you have to be really careful, because any change can lead to mysterious crashes and other problems in the game. We started this optimization process quite late in development, which at least gave us some chance to avoid other major changes happening to the system at the same time. The performance of the garbage collection was really bad at the beginning; sometimes the whole process took more than five frames. The first idea was to multithread everything, but we quickly realized that parallelizing the deletion of objects would cause lots of different problems. So we reduced the scope of our change: we traversed the object tree in parallel but kept everything else single threaded. This solution already gave us a huge gain in many cases, but in a few it just didn't help. At first it was quite hard to understand why, but after extensive profiling we realized that the object hierarchy was so unbalanced that in the worst cases all the threads were waiting for one thread to finish its part of the tree. With better use of synchronization objects and by doing the traversal in multiple passes, we could balance the distributed work between the threads so that the process usually took around one frame. Even after this optimization we had long garbage collection times in some rare cases, and we found that in one very special case object deletion caused a lot of string comparisons. We replaced those with hashes, so comparison time was no longer a problem. The last optimization on the system was related to cache misses, but we will discuss that in the next section.
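A heavily reduced sketch of that split – parallel marking, single-threaded deletion – is shown below. The threading plumbing is stand-in code, not the engine's job system, and the balancing across multiple passes is only hinted at in the comments.

```cpp
// Mark reachable objects from several threads; keep deletion single threaded.
#include <atomic>
#include <cstddef>
#include <thread>
#include <vector>

struct GcObject {
    std::atomic<bool>      marked{false};
    std::vector<GcObject*> children;
};

void Mark(GcObject* o) {
    if (o->marked.exchange(true))
        return;                      // someone already visited this object
    for (GcObject* c : o->children)
        Mark(c);
}

void ParallelMark(const std::vector<GcObject*>& roots, unsigned threadCount) {
    std::atomic<std::size_t> next{0};
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < threadCount; ++t)
        workers.emplace_back([&] {   // each worker grabs root subtrees
            for (std::size_t i = next++; i < roots.size(); i = next++)
                Mark(roots[i]);
        });
    for (auto& w : workers) w.join();
    // Sweep/deletion follows on a single thread. With a badly unbalanced
    // hierarchy one subtree can dominate -- hence the multi-pass balancing
    // described in the article.
}
```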


Low level CPU Optimizations

Near the end of the project we started looking into more low-level optimizations, knowing that in some cases they would give us great benefits. So we started profiling and gathering information about the biggest problems and possible solutions. One of the biggest problems turned out to be the way we allocated memory: the hardware's low-level memory management works on pages, and the size of these pages has a huge impact on memory access. After changing the page size, memory access became much faster and we could look for other problems. The next one we found was the number of L2 cache misses. When memory access is almost random the caches can't help, but there is a way to tell the hardware what will be needed in the near future. This prefetching was very useful wherever an object or scene hierarchy was traversed, because those objects are usually not next to each other in memory. After applying prefetching in the most crucial places, like the garbage collection process, we had much better cache usage and gained 10 to 20 percent on these heavy methods. The last low-level optimization we did was the vectorization of some functions in the renderer. With SIMD processing we could perform one operation on several pieces of data at a time. This was very helpful in many cases, but sometimes it was much harder to come up with an efficient vectorized algorithm, and it needed someone who really understood the consequences – a badly vectorized algorithm can actually be slower.
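To illustrate the prefetching idea on a pointer-chasing traversal: on the Xbox 360's PowerPC cores this maps to the dcbt instruction; the GCC/Clang builtin used below is just a portable stand-in for the sketch.

```cpp
// Software prefetching while walking a linked structure.
struct Node { Node* next; int payload; };

int SumList(Node* head) {
    int sum = 0;
    for (Node* n = head; n; n = n->next) {
        if (n->next)
            __builtin_prefetch(n->next);  // request the next node early,
                                          // hiding part of the cache miss
        sum += n->payload;
    }
    return sum;
}
```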

Reducing overdraw on tree branches (PC and Xbox 360).

Overdraw comparison (PC and Xbox 360).

Threading with Jobs and Immediate Jobs

One of the main challenges of the port was distributing the work between the cores so that we could fully utilize multithreading. This was especially hard because all the middleware components were using multithreading as well, and sometimes we didn't have total control over their thread creation. There were many difficulties in designing a system for this, and in the end we came up with an unconventional solution. Some of the work originally done on the main engine thread was divided into jobs and distributed between two other threads. This worked perfectly for everything that could be expressed as an asynchronous call: the main thread just created the job and everything was handled from there; when the job was finished, the main thread checked this and used the results. We also offloaded most of the loading into such jobs, which was a great help for disk access. But sometimes asynchronicity wasn't possible and we still wanted to use our other threads, so we had to create something we called »Immediate Jobs«. When there was something we wanted to distribute to multiple threads but needed to happen immediately, we created jobs and put the threads into a state where they processed only those jobs. In the general case this is problematic, and some care must be taken with synchronization objects to keep the threads from waiting on each other. But with careful usage this can be a cheap and fast way to optimize functions that take too long to process on only one thread.
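A stand-in sketch of the »Immediate Jobs« pattern follows: split a task into chunks, let every thread (including the caller) drain only those chunks, and return once all of them are done. In the real engine the workers would be persistent threads switched into this drain-only mode; spawning threads here merely keeps the sketch self-contained.

```cpp
// »Immediate Jobs« in miniature: block until all chunks are processed.
#include <atomic>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

void RunImmediateJobs(const std::vector<std::function<void()>>& jobs,
                      unsigned workerCount) {
    std::atomic<std::size_t> next{0};
    auto drain = [&] {
        for (std::size_t i = next++; i < jobs.size(); i = next++)
            jobs[i]();                 // process one chunk
    };
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < workerCount; ++t)
        workers.emplace_back(drain);
    drain();                           // the calling thread helps too
    for (auto& w : workers) w.join();  // »immediate«: block until finished
}
```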


The debug visualization of two threads in the job manager system.

Particles

When simulating particles, it is very important to minimize per-particle CPU stalls caused by memory fetches. In the PC version of the game we had already addressed that problem to a certain degree: the simulation was stripped of virtual calls, and we visited each particle only once per simulation step, greatly decreasing memory access penalties like cache misses and load-hit-stores. When we reviewed the particle system with Xbox-oriented optimizations in mind, we knew that our simulation routine already fit the PowerPC architecture quite well. Memory access penalties are many times more painful on that RISC architecture than on x86, and we had them covered for the simulation – but not for the whole particle rendering pipeline. The way to solve that was to contain the whole pipeline in the render thread. At that time our particle rendering worked in a straightforward way, common among many game engines. First, a set of particles was iterated and updated on the engine side (the main engine thread). A few engine thread routines later, when a frame update was being composed, the particles were iterated again, and a buffer of intermediate particle representations was generated. Just as we had various particle types, we also had various intermediate particle representations, whose purpose was to encode a quad of vertices in an optimal way. The resulting buffer was passed to the render thread, which created a vertex buffer and filled it by decoding the intermediate representations. In fact, the whole transition was unnecessary, as not having particle data available on the engine thread is usually an artificial problem that can be tackled in various ways. Long story short, containing the whole pipeline in the render thread was a double win. Firstly, we no longer had to store the intermediate particle representations in memory. Secondly, it freed the CPU cycles wasted on the thread transition scheme and allowed us to truly process each particle only once overall, not just in the simulation step: we could fill the DirectX vertex buffer one line of code below the update, and add the particle to the bounding box right after that.


»But what about different particle types?«, one might ask. »If you do all that in one code path, you get terribly branched code, and that's bad for performance!« That's a good point, as one can't simulate a rich particle system without supporting various particle types and processing schemes. To address that problem, we made use of templates and compile-time branching: the particle simulation code was highly templated, resolving its branches at compilation time to create dedicated, compact and unbranched update assembly for each type. The industry-standard approach of grouping render objects into batches limited the cost of paging that code through the CPU. After getting rid of the unnecessary costs in the particle rendering pipeline, we looked at how many particle emitters we ran in various zones of the game. We noticed that in some areas the cost reached 8+ ms, even though we only saw a few particles here and there. It became obvious that the criteria for pausing and resuming particle simulation were too loose. We had lots of wide-range, fluffy emitters that we didn't want to get rid of, but they were costly even when not contributing to the frame. We made a risky attempt to halt any culled or occluded emitter completely, not wasting a single CPU cycle on its update. The risky part was that this way we checked visibility against outdated bounding boxes, as those were dynamically computed in each simulation step. If an emitter wasn't updated for a few seconds, we couldn't know whether its bounding box would have extended into the frustum during those seconds (especially if the emitter spread particles in all directions). The solution could have been patched in a couple of ways, for example by estimating bounding box growth or by simulating one culled/occluded emitter per frame. Luckily we didn't even have to do that, and we were surprised by the results – no missing effects were reported. That may not be the case for a differently paced game, but for The Witcher 2 it was just fine. With both improvements described here, we were able to limit the particle processing cost to 2 ms even in the heaviest zones of the game.
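The compile-time branching trick might look roughly like the sketch below. The feature flags are illustrative inventions, not the actual RED Engine particle features; the point is that each instantiation compiles to a compact loop with no per-particle runtime branching.

```cpp
// Template parameters select processing features at compile time, so each
// particle type gets its own dedicated, unbranched update loop.
#include <cstddef>

struct Particle { float x, y, z, vx, vy, vz, life; };

template <bool WithGravity, bool WithDrag>
void UpdateParticles(Particle* p, std::size_t count, float dt) {
    for (std::size_t i = 0; i < count; ++i) {
        if (WithGravity)                    // constant-folded away when false
            p[i].vy -= 9.81f * dt;
        if (WithDrag) {
            p[i].vx *= 0.99f; p[i].vy *= 0.99f; p[i].vz *= 0.99f;
        }
        p[i].x += p[i].vx * dt;
        p[i].y += p[i].vy * dt;
        p[i].z += p[i].vz * dt;
        p[i].life -= dt;
    }
}

// Each emitter type instantiates exactly the combination it needs, e.g.:
// UpdateParticles<true, false>(particles, count, dt);
```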

Animation Streaming

Animations were an extremely important part of The Witcher 2. Many systems depended on their proper functioning: the community system, the NPC work system, the combat system, exploration and dialogues. The transition from PC to Xbox 360 was a great challenge for the animation system, and memory consumption was one of the major problems. In the PC version, all animations were kept in memory, and as their number grew beyond 7,000 they took up more than 160 Mbytes. When the memory budgets for the Xbox version were shared out, the animations got 15 Mbytes; we therefore had to reduce memory consumption by more than a factor of ten.


The solution we chose was animation streaming. The main issue with streaming, however, is the latency between requesting data and being able to use it. Since the gameplay for the PC version was already completed, we couldn't ask designers to support situations where some animations were still streaming on the Xbox and couldn't be used – for example, a monster that can't run because it is waiting for its run animation, or can't attack because no attack animation is loaded. Cutscenes were another issue: we couldn't wait 30 seconds for a finisher to load after the last strike was dealt. These situations were solved in two ways. Each cutscene was divided into small pieces. Having loaded the first small piece of a cutscene, we started to play it, streaming the remaining parts in the background; loaded parts were dynamically connected to the previous ones. The first part was usually small, so the wait for a cutscene was very short, with no or only minor pauses. We couldn't load cutscenes in advance, as we rarely knew which particular cutscene would be needed next – usually only after a dialogue option had been selected or the finisher button pressed – and we didn't have enough memory to preload several possible options. For gameplay, we could not allow even a short wait for an animation, so we solved this problem differently. We split our engine animations into three parts: a buffer of poses (the raw animation), events, and movement (root motion / motion extraction). The raw animation was streamed, while events and root motion were always loaded and took about 1.5 Mbytes. Thus a character could always send events or move even when its raw animation was not available, which ensured that the gameplay remained unchanged. But as we could not have NPCs walking around in T-poses, each animation also had a specially compressed pose created from selected frames of the animation. All these poses were always loaded and occupied roughly 1 Mbyte. Thanks to this we could display a compressed pose while the raw animation was streaming, and after the buffer was loaded we could smoothly blend from the compressed pose to the raw animation. Raw animations were stored in a 12 Mbytes ring buffer, so newly requested animations could overwrite older ones. Streaming time was usually so short that we needed the compressed poses only during the blend transition from the previous animation to the new one; when the blend transition finished, the new animation was usually already streamed in. The system worked well for most situations, but there was a problem with fights. Sometimes when a fight started, we needed a lot of animations simultaneously – each type of enemy or monster needed its run and hit animations right from the beginning. We wrote special script functions that allowed designers to inform the animation system that a specific set of animations would be required, so it could start streaming them in one batch. Moreover, proper placement of the animations on disk proved very helpful for streaming.
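Condensed into code, the data split might look like the sketch below. All structures are illustrative guesses rather than the engine's real layout; the essential property is that only the raw pose buffer is ever absent.

```cpp
// Events and root motion always resident; raw poses streamed through a ring
// buffer; a small always-loaded compressed pose covers the streaming gap.
#include <cstdint>

struct Animation {
    // Always loaded (~1.5 Mbytes in total across all animations):
    const void* events;
    const void* rootMotion;
    // Always loaded, built from a few key frames (~1 Mbyte in total):
    const void* compressedPose;
    // Streamed on demand into the shared 12 Mbytes ring buffer:
    const void* rawPoses = nullptr;

    bool IsStreamedIn() const { return rawPoses != nullptr; }
};

enum class PoseSource : uint8_t { Raw, Compressed };

// Decide what to sample this frame for a character mid-transition.
PoseSource ChoosePose(const Animation& anim) {
    if (anim.IsStreamedIn())
        return PoseSource::Raw;         // normal path once streaming is done
    return PoseSource::Compressed;      // placeholder pose, blended out later
    // Gameplay stays unchanged either way: events and root motion never
    // depend on the streamed raw pose data.
}
```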

Final thoughts

While porting the game to Xbox 360, our team focused not only on preserving the original quality as much as possible, but also on enhancing the gameplay experience and delivering additional (free!) content. This was quite a challenging task – and we've learned a lot in the process. Thanks to Charlie Skillbeck, Ivan Nevraev, David Cook and the whole Microsoft ATG for all the help they provided. If you haven't had a chance to check out The Witcher 2: Assassins of Kings on Xbox 360 – you should!
Marcin Gollent, Balázs Török, Krzysztof Krzyscin, Piotr Tomsinski

The simplification of particle effects, shown in the PC and Xbox 360 versions.


Interview Making Games – GDC 2014

PLAYER ENGAGEMENT THROUGH STORYTELLING Telling a good story in a game is hard. Telling a good story in a multiplayer online game is even harder. Bobby Stein, Lead Writer for Guild Wars 2, explains how they divided the story into several parts and still managed to keep the vision of the story and the world the same for every team member involved.

Bobby Stein is Lead Writer at ArenaNet.

Bobby holds a degree in Film and Visual Arts. He joined the games industry in 2003, writing editorial content and guides for Microsoft and Nintendo. In 2005 he joined ArenaNet as a writer on the Guild Wars Factions strategy guide; soon after, he began assisting content designers with writing and editing quests and dialogues. Since 2007 he has been Lead Writer at ArenaNet, managing a team of narrative designers and writers.

Making Games Guild Wars 2 is no doubt a massive game with an enormous amount of narrative content. How much manpower and what kind of team structure is needed to handle such a task?
Bobby Stein I am responsible for three smaller teams. The narrative design team has two members, Angel McCoy and Scott McGough. They are responsible for the high-level story structure, the main characters and keeping track of lore and continuity. On the second team are five writers who focus on any kind of in-game text. When we're developing big drops of content, story-wise and character-wise, it will come from the writers and narrative designers. They'll write a first draft of dialogue to give us a guideline for anything that has a gameplay focus, whether we're trying to let the players know what's going on or we want to give them instructions. The writers will look at those drafts and say, »Okay, we need to say this, but we need to make sure that it sounds like it's coming from this character.« The third team consists of our three editors. Once all the text and voice-over scripts are done, they go through the scripts and try to identify things like plot holes or character inconsistencies. From there, they'll prep the scripts so that we can send them off to our recording partners and later on to localization.
Making Games So more than ten people are solely working on the story?
Bobby Stein As far as writing and editing are concerned, there are eleven of us in total. Our design team consists of more than 50 people at this point. So we were able to split off into multiple teams and focus on different things after we shipped the game. The big priority at the moment is our »living world« initiative. We want to do a little bit more with some of the characters from the core game as well as introduce new characters and tell some stories that take weeks and months to play out.


For example, the first two months of our »Flame & Frost« story were teaser content. We introduced some events in the game and had characters around that were in situations where they were being attacked, or you could talk to them and they would tell you a little bit about what was going on. However, we were careful to hold back some of the details. We gave out some information to get people thinking about what was coming while we worked on the next batches of content. We want to keep the game world alive and vibrant to give the players a reason to keep coming back.
Making Games How important is storytelling for user retention in Guild Wars 2?
Bobby Stein I think it's especially important because we don't have a subscription model. So there are quite a lot of casual players who may take a while to get through all the content in the game. If players want to take a break, they can come back later. And we want to make sure that they feel like there is something new for them to do when they return.
Making Games How big was the writing team during the initial development of Guild Wars 2?
Bobby Stein The writing team was the same size. However, we had an additional team for lore and continuity design. Writing, revision and editing were handled by my team. So after we shipped the game, we consolidated those teams to get a bit more uniformity.
Making Games Thinking back to the beginning of the development of Guild Wars 2, how did you decide to do a very story-heavy MMO?
Bobby Stein I believe a lot of that came from the original Lead Designers of the project. Back when we started, both James Phinney, our Lead Designer at the beginning, and later on Eric Flannum wanted to explore storytelling a bit more.


Sometimes not disclosing details works best: »The first two months of our ›Flame & Frost‹ story were teaser content. We gave out some information to get people thinking about while we worked on the next batches of content.«

In the previous games we were able to do it through quests that would be bookended by missions, and you would see cinematics at certain points that told you the story. The player was an active participant, but it really was more about the world at large and some of these characters. As time went on, we started identifying the technical hurdles we had in telling a story and started pushing ourselves a little bit harder. At the very beginning Eric, James, Bree and Jeff, who were the lore and continuity designers on the core game, got together and talked about »How wide do we want to make it? Do we want the story to change depending on character creation and selection?« In the very beginning of the game you choose from a list of questions to set up your biography. Based on those choices you'll get different content from level one to level 30, and from there you will make choices that will also affect your story – which gives you more of a single-player RPG feel.
Making Games Were you aware of what kind of monster you would create?
Bobby Stein I think in the very beginning we were probably a little naïve in thinking that it wasn't going to be as much work as it ended up being. As time went on, we started realizing that we had created this monstrous spider web of content. You'll make the choice to save a person, let a town fall or something similar. If you follow the personal story, the game leads to the same conclusion, you just make different choices along the way. However, the original scope was far too ambitious for us to pull off. Once we started breaking it down into its component parts, we realized that just the implementation angle was going to take much more time than we had. I think it's a common theme in game development: you start blue sky, and after a while you come down to earth a little bit. Here's what we want to do, here's what we actually can do with our resources.
Making Games What were the biggest challenges you had to overcome?
Bobby Stein Storytelling in games is pretty complex. If you're writing a book you can describe something, the reader interprets it and imagines what's going on. You can choose whether you want to be very specific or very vague. Either way, it has the desired result.

Guild Wars 2 became famous for its huge monsters. However, monsters can also come in entirely different shapes: »As time went on, we started realizing that we had created this monstrous spider web of content.«

Games, I think, try very hard to emulate the storytelling mechanics of movies or television. That can work if you have a big budget for animation, cinematics and voice-over.
Making Games And does it work for an MMO?
Bobby Stein It's a lot harder for an MMO. When we finished recording we had done close to 90,000 lines of voice-over. We've surpassed that since launch; I think we probably have 120,000 by now. Granted, it's not the largest project that's ever been done, but it certainly was by far the largest thing we had ever done at that point. Adding voice made it more complex. It started showing us problems that we had never thought about. Once we started realizing how this content impacts other teams, we ended up seeing that we could script a scene out in English, but it doesn't work anymore once we translate it into German or French or Spanish. We also had to solve some engineering problems. When you're talking about storytelling in a multiplayer game, you can't expect every player to be in the same place at the same time to see the same thing. So we started exploring the concept of ambient storytelling. We would seed the world with a lot of different stories that come through things like dynamic events, their characters, or even just ambient people who are walking around and are not tied to any gameplay. But they are there to show you a bit about the characters who live in that particular area. If I was playing a Human and I went to an Asura city, it should not only look different but it should also sound different. We want to tell you a bit about their lore and history, but we also want to show you a bit about their personalities and the things these people care about. Simply throwing lore at the players only works for so long.

»We want to keep the game world alive and vibrant to give the players a reason to keep coming back.«




Guild Wars 2 uses ambient storytelling among other things to tell players about the races: »If I was playing a human and I went to an Asura city, it should not only look different but it should also sound different.«

»With the ›Flame & Frost‹ story, we started going back a little bit to the Guild Wars 1 style cinematics where the camera is in the game world. It’s a simple concept of which we’re testing the waters right now.«

What you really want is to have them make a connection based on what makes these characters feel alive.

»When you’re talking about storytelling in a multiplayer game you can’t expect every player to be in the same place at the same time to see the same thing.«

Making Games You have a very complex lore and story. With a huge MMO project, it can easily happen that a small quest story doesn't fit into the lore of the world. How did you make sure that the vision of the story and the world stayed the same for every team member?
Bobby Stein That was really challenging because the team was so big. There were fifty-some people working on it within design. From a pure writing and lore angle, we split everything up. The team of our Game Designer Ree Soesbee focused on the personal story. Jeff Grubb's team focused on the dungeons and on making those stories cohesive while at the same time tying them to the overall lore. My team primarily focused on the dynamic events and ambience. Early on, we talked about the things we wanted to portray zone by zone. Where in the personal story would players be if they were in this location? For my team, I would say: »You guys write a thousand lines of ambient voice-over for this area. These are the things we want the people to talk about. Just make sure that it doesn't violate age-rating rules and try to keep it in-world so that it doesn't feel like a pop culture reference.«
Making Games But you didn't have a single vision-keeper who would look at every single piece of content and go through it?
Bobby Stein That would have been nice. Unfortunately, the project and the teams were so big that it was more of a collaborative thing. So we each took sections that we were in charge of, and what you're seeing in the game we shipped is what we ended up with.
Making Games Did you have story meetings to get the teams together and keep them informed on the way the world evolved?
Bobby Stein In the production phase, Ree and Jeff would host monthly or bi-monthly story presentations. Eric Flannum would come in and give his presentation,


then the lore folks would come in and say: »These are the playable races, these are the characters and these are the stories that we want to tell.«
Making Games MMOs tend to be seen as a very mechanical type of game. Was it a conscious decision to really aim for storytelling to somehow differentiate your game from the market?
Bobby Stein We take the gameplay very seriously, of course. We have a team of people whose sole focus it is to make sure that the game is fun to play and that when people are logging in they want to go around, kill a bunch of things, get cool loot and have a good time. However, we wanted to give the game a bit more depth in terms of story and character development for the people who aren't just going in to click buttons but who are actually there to role-play or to get a little bit more of a story.
Making Games Did you see that people are enjoying the story and reading all that stuff, or do they just click it away?
Bobby Stein Part of the way we gauge people's reactions is everything from what journalists are saying about the game to fan feedback on the forums. We also work closely with the community team and our analytics team. What we found is that there are certain things people like and that we did pretty well, and there are certainly things we need to improve upon. If we have the opportunity to fix some of the things that people have made good recommendations for, we'll look at it and fix as much as we can. If we can't work on that, we'll make sure that we try something different that hopefully resonates better in the new content we're making. The prime example of that is the cinematics we used in our personal story. The vanguard pieces looked like a moving painting, and everybody seemed to love them. A bit more divisive were the ones we call »cinematic conversations«. They pull you out of the game world and it's two characters on screen, talking. If you have more than two characters involved in a conversation, they're swapping in and out and we can't really show any action in it. So a lot of the time the characters would imply that they're doing something through what they're saying, but you wouldn't see it. It was a jarring experience, no matter what we did to improve things like gesturing, lip synching or animation. With the »Flame & Frost« story, we started going back a little bit to the Guild Wars 1 style cinematics where the camera is in the game world. It's a simple concept, and we're testing the waters now to see if we want to go more in that direction.
Making Games Another thing that's special about Guild Wars 2 are the dynamic events. Quests are happening on the fly, whether you're there or not. What does that mean for the writing process?
Bobby Stein We broke the story of Guild Wars into three parts. Your personal story was how you start off as a hero and make a name for yourself. The dungeons were the story of our iconic characters, who were having a conflict among themselves and who were also concerned with the bigger threat in the game. The dynamic events are the story of the world. We were able to have players participate in anything that was hinted at in the personal story or that was core to the themes of Guild Wars, by showing the world as an active, constantly changing environment. The thing about dynamic events that is hard from a storytelling standpoint is that they have to be endlessly repeatable. So you can't show a lot of character progression within the constraints of one event, or of several events that may be chained together to form a larger story. But we can have some fun with the situation and add some variety as the event is pushed in different directions, due either to player interaction or to how the monsters are winning or losing a fight. We can change things to make it feel more alive and to make sure that the characters in there aren't just set pieces. Wherever possible we try to involve them in content outside of the dynamic events.
Making Games So the dynamic events are mainly stories that don't affect the big picture but are fun to follow?
Bobby Stein Exactly. When you're playing the game and you're exploring, you're able to piece together your own stories and adventures. You'll stumble upon these things and you might participate, you might help the characters succeed at an event, or you might back off and let it fail so that you can affect how the world is changing. And even though it's the story of the world, it can feel like a tailored story because you're in the moment.

»I think that some of the best and most interesting stories are the ones that are more personal in nature – a good example is Telltale's ›The Walking Dead‹.«

Making Games You put in rewards for taking part in the story. Do you think that this was a good way to make sure that people actually get the story – to lure them with gameplay rewards?
Bobby Stein People who play games purely for storytelling are a segment of the market. I'm not sure how big a segment that is, but it's clear that people generally appreciate a well-told story or interesting characters. We certainly try to make the story unobtrusive, especially in dynamic events, so that you're not being pulled out of the game unless you want to be. The people who appreciate it can dig deeper or just pay attention. People who are just playing for the rewards – that's obviously fine as well.

»I think we need to force ourselves and stretch out a little bit, to look more closely at the characters.«

Making Games But weren't a lot of those rewards connected to story things?
Bobby Stein In the personal story, definitely. We wanted to make sure that the people who were taking their time to play through the personal story were getting rewards that were worth their time and effort.
Making Games What are things in terms of storytelling that you would personally like to see in games and that haven't happened yet?
Bobby Stein I'd like to see more stories that speak to the player on a personal level and aren't about the grand themes of »Here's the bad guy, you go kill him, you need to save the world!« I think that some of the best and most interesting stories are the ones that are more personal in nature. We're seeing some of that now in certain games, for example in Telltale's »The Walking Dead«. At the end of that five-episode arc you are not going to save the world. You are concentrating on the people you are with, the decisions you make and ultimately the safety of a little girl. I think we need to force ourselves to look more closely at the characters. They should feel like characters who have other concerns, motivations or problems that they need to get through, and who don't solely exist for this one end goal. A lot of the most interesting stuff will come out of how they react to certain situations. I'd definitely like to see a little bit more humanity brought into games than just the simple »good versus evil«.
Heiko Klinge, Patricia Geiger



Community Best Practice Making Games – GDC 2014

MANAGING A HARDCORE COMMUNITY How does Paradox engage a middle-aged history scholar and a teenage shoot 'em up fanboy in a meaningful discussion? They give them free pizza, a shiny avatar, and a discussion board. Björn Blomberg on the tricky parts of community management for hardcore and casual gamers. Björn Blomberg is Community and Technical Support Manager at Paradox Interactive.

Previously employed by Blizzard, now taunting his co-workers with special editions of games they must purchase like peasants! Björn is a diplomat, well known internally for his ability to calm the angry masses of the forums.

In the beginning, there was no Internet, and all was good. Then a nerd said, »Let there be global inter-linked hypertext documents«, and behold, there were online communities and Internet trolls! It was the natural evolution of the BBS, the dominant species of its time. Those of us old enough to remember the exclusivity of the elite Bulletin Board Systems sometimes look back in awe. Such a wonderful time of innocence it was. Or, at least, that's what we want to believe. Who was the first Internet troll? Where did they come from? What was their motivation? In this harsh reality of the year of our lord 1999, Paradox Development Studio set out to start an online community for its upcoming game »Europa Universalis«. The game quickly gained a following of enthusiasts wanting to rewrite history, which also led to some heated debate about history's many injustices and occupations, sometimes invoking bad blood between people of different nationalities.

Games with a strong history focus like Europa Universalis appeal to the hardcore strategy fan, many of them eager to engage in forum discussions, join mod teams and take part in community events.


In 2002, Paradox Development Studio followed up with »Hearts of Iron« – a game set during World War II – which was also a hot topic for some people, especially since it attracted a large group of players intent on winning World War II for Germany, a notion that is not well received by most. To maintain order in the chaos that would erupt when thousands of users tried to push their personal view of history, both as it was and as they think it should have gone down, a very strict forum rule policy was put in place. Thanks to the close relationship the development team had with its player base, a reliable staff of moderators was quickly recruited to uphold the law and maintain mature discussion. The result has been a self-sufficient organism run by and for the community. The devotion and loyalty of our fans over the years is the cornerstone of everything the company has achieved so far.

Devotion in the Private Forums

In the dark corners of the Paradox forums, closed off from public access, lingers the mythical domain called the »Private Forums«. This is where a selected group of devoted fans get to test the latest patches and expansions as well as provide historical research for upcoming projects. Many are those that have tried to gain access to this sacred part of the forums, but only a select few have been chosen. Visiting these parts is a haven for any history nerd or strategy buff – from detailed accounts of Venetian noble families made by self-proclaimed descendants to in-depth analyses of probabilities, all are scrutinized in detail and reworked until near perfection is achieved. The scope of it all, of course, makes it a bit too ambitious to claim total perfection. Paradox quickly realized that the potential scope of what could be created within the Clausewitz Engine (named after the Prussian general and military theorist Carl von Clausewitz) was so great that it could never be fully utilized by the Paradox Development Studio team alone, and therefore the decision was made to leave it wide open for user modification.


The result is a plethora of mods that enable any player to conduct historical experiments for thousands and thousands of hours. Some of these mods have even turned into their own stand-alone games, such as »Darkest Hour« and »For the Glory«. Not only does the extensive library of mods provide many hours of additional gameplay, it also allows anyone with basic programming skills and objections to certain aspects of the game design to realize their own vision of how things should be.

The official Paradox forum is the first and most important contact point for the community as well as for the devs. A private forum serves as a communication platform for beta testers and history enthusiasts.

Rewarding Loyal Players

Paradox realized early that DRM is usually a bigger problem for those who paid for the game than for the pirates, whose cracked version has removed it completely. So instead of wasting time and money on ever more sophisticated DRM, trying to stay ahead of the hackers, Paradox quickly turned to rewarding loyal fans – and rewarding is usually a nicer task than the drudgery of punishing people. The ever-popular forum icons were a child of this realization, as was providing people with registered copies of the game free access to modding tools, information and support. Many times have we heard from new fans that they began their historical odyssey as lowly pirates, but after a while decided to pay the relatively low price of a legal copy to be able to fully enter the Paradox family and partake in the activities going on in the subforums, where the registered owners roam. No matter your history, if you redeemed yourself, you are more than welcome to join the community.

Fan Gatherings & the Era of Social Media

With the beginning of the second millennium came the social media revolution, with its microblogs and instant access to millions of people faster than you can say »Twitter«. When Paradox decided to take on the publishing business, the fan base grew exceptionally. When »Europa Universalis« was released in 2001, an email was sent out to the 2,500-strong community active on the forums to thank them for their loyalty and support. About ten years later, not long after the release of »Europa Universalis III«, the Paradox forums held their 200k celebration in honor of member 200,000 joining the ranks. About a year prior to the successful release of »Europa Universalis IV«, the 500k celebration took place. As of this writing, the total count is over 600,000 and still growing. Every year Paradox holds at least one »Fan Gathering«, which anyone¹ can attend. The location of this gathering may vary, but so far it has been held in Stockholm and in Sydney, Australia. These events offer play sessions for our latest games, often together with the developers. Mingling, beer, pizza, popcorn, you name it! Pictures of these popular events can usually be found on Paradox's various social media channels and, of course, the forums.

¹ If the event features an open bar, you need to be legally allowed to drink alcohol to attend.

Into Infinity and Beyond!

With the new times and the exponential growth of new social media outlets, so too




must Paradox expand its focus. Keeping a close eye on Twitter, Facebook and other channels ensures the company's ability to capture the thoughts of the emerging generation, to whom a traditional discussion board may not seem at all appealing. There are many who prefer the more streamlined channeling of information introduced with micro and photo blogs. Since Paradox consists of many individuals who are already active on these channels, it's logical to give them power over official Paradox accounts and have them keep track. Since the forums still provide a better platform to explain things, you can use the limited space provided by microblogs simply to help newcomers find the right spot in the rather intimidating (information-wise) forums, as well as provide some fluff for casual amusement. The hope is to get people to join the more in-depth discussions on the forums, where the main feedback used to improve the gaming experience is gathered. It's important to have a staff of qualified moderators on the forums to help keep the discussions mannered and streamlined. If you are found to be out of line, you will receive a slap on the wrist and instructions on how to better yourself. Paradox always wants the bad eggs to turn into productive members of the community rather than excluding them. But should you prove unwilling or unable to turn your behavior around, you will find yourself looking for new pastures pretty quickly. Paradox currently enjoys the help of over 200 volunteers spending their free time keeping order, purely out of love for Paradox's games and its community. Paradox is truly blessed to have all these people devoting so much time and energy to this.

Case study of an AI gone rogue

Paradox hosts annual Fan Gatherings where the community meets with the employees. Dressing up is an appreciated activity, and of course the fans can try out the latest games, all of which helps to strengthen the relationship between the company and their players.


The Community Manager is alerted by one of the voluntary moderators that there is a problem with a game. The main concern among the player base is that the AI is not acting in an optimal way; the players feel they were promised a patch for this problem, but too much time has passed without further acknowledgement of it. The Community Manager contacts a developer to ask for their view of the situation. The developers feel they have already made clear that this is not something that will be addressed unless someone can pinpoint a specific bug that can then be reproduced. The general behavior is a design decision that can't now be undone without a major change to the entire game, and there are no resources for such a massive change, which would amount to rewriting the game's core. The development team feels dismayed by the general tone in the forums. They are particularly concerned about one user who keeps proposing programming solutions that are not viable. They decide to keep a low profile, however, as they don't know how to respond properly, and what they do want to respond with is not appropriate. After quickly going through the threads in the forums dedicated to this topic, the Community Manager finds that it is not a one-sided discussion. There are posters admitting that although the behavior is not optimal, the issues are not game breaking, and that making changes may not be viable in the current version of the game – but lessons can be learned for a new iteration in the series. A quick analysis via SQL queries against the forum's database reveals that one user in particular stands out with criticism; in fact, this one person alone has made four times as many posts as the second person on the list. Such torrents can blow a problem out of proportion, making it look more severe than it is, causing irrational behavior from the developers or lowering their morale. This can be a significant problem, especially with new developers who may not yet have developed the thick skin needed when putting their hard work out to be scrutinized by people who may not be qualified to judge these things in the first place. An ill-thought-out post committed by a developer in the heat of the moment can become a real problem, even where the initial issue really wasn't that critical. It also turned out that the initial »thorn in the side« of the developers was in fact one of the biggest advocates of the game and its developers. A misdirected defensive act by the developers could have been devastating, turning an »ally« to the opposite side. Luckily the developers followed the sagely advice of the Community Manager to »not say anything at all when not being able to say something nice«. This is where the importance of the Community Manager comes into play: a person with insight into the production, without being emotionally invested, who can act as a buffer and mediator between the players and the developers. Now we know whom to target with a statement that can hopefully calm things down. Even if not all players are happy with the outcome, they will hopefully feel they are not ignored, and as long as their questions are addressed maturely and objectively, things can go back to normal.

Casual vs. Hardcore

What is the difference between a classical Paradox Development Studio fan and a more casual gamer, such as a follower of »Magicka«? Now, I may have offended many Magicka players by saying they are not hardcore, but this was not my intention. Magicka is the best-selling title in the Paradox portfolio and therefore has the largest following, if you count all the people who have played the game as a following; it's a game you can jump into and play for 20 minutes and have fun, whereas a PDS title often requires a larger time investment before it gives anything back. On these grounds, I classify Magicka as a casual game and titles directly from the Paradox Development Studio as hardcore. Defining the difference between these two types of players is difficult, but a pretty typical Magicka player would be someone who enjoys a couple of hours of blowing their friends up while failing miserably at the actual task given by the game. This is the charm of Magicka. There is, however, one likeness between Magicka and internally developed games such as »Europa Universalis« and »Crusader Kings«: in both types of games it's actually just as much fun to »fail«, if not more, than to »win« – although it's somewhat easier for a Magicka player to share their fun (i.e. fails) than it is for someone playing »Europa Universalis«. In Magicka, you can pretty much go, »Hey, look at this! I was supposed to cast this AOE attack on that mob over there but accidentally set me and all my friends on fire, lol!« A fail in »Crusader Kings II« is more along the lines of, »Hey, I was in the middle of planning my major campaign against Burgundy to secure the throne of France but, unfortunately, my current ruler accidentally died while hunting boars, causing my entire kingdom to be split up between his three sons. And now the entire land has erupted in internal strife over supreme rule, lol!« That first exclamation is a whole lot easier to convey through modern social media with a micro or photo-blog format, and may appeal to gamers in general who might not even have tried Magicka themselves, whereas the latter is more of a niche thing, appealing only to people who are already into the game. Explaining exactly what ramifications this whole »Crusader Kings« succession ordeal had could not even be properly conveyed through this text; I bet you didn't find it very amusing just by reading it, but it is certainly entertaining to someone who is familiar with the game. We are looking into ways of telling these stories from the more hardcore world to the people outside of it, trying to find a way of showing these moments and the amazing times that can be had in our grand strategy titles. For now, a traditional forum format is best suited for the task. If you look at »Europa Universalis IV«, you will find a great storytelling tool that compresses the world history you create into a ledger-like format; this is a great way of showing what the game is actually about in a way that will speak to most people (some prior interest in history may still be required to enjoy it). One of the biggest challenges with this tool, though, is devising a way to make it more appealing and to fit it better into the modern, snapshot-based lines of communication that are dominant in the gaming community today. I do believe that the future consists more of streamlined boards, similar to Facebook, than the old traditional forums, though the old format should always be available to those who wish to engage in in-depth discussions about the forming of the Liechtenstein Empire or other topics of the like.

In contrast to hardcore games like Europa Universalis, the action game Magicka appeals to people looking for a more casual play style. The two community bases also have different habits of sharing their experiences and interacting with fellow players, which is why Paradox has to approach them differently – for example by making customized social media tools available.

So what have we learned?

Rewarding the players we want is better than punishing the ones we don't want. It's better for a bad egg to turn good than to have it tossed away – just tell it how. The loudest, and sometimes the most annoying, people are usually the most devoted; the silent dissenters are your biggest concern, not the whiners. Don't let your emotions influence your interactions with the fans; don't feed the trolls. If you can't reply courteously, don't reply at all. Most importantly: your fans are willing to do a lot for you if you only give them the tools. Paradox may not have that many people on the company payroll, but creating close relationships with core members of the community creates a trickle-down effect: fans become so closely tied to the organization that they act more or less the same as an »official« representative would, extending our reach exponentially and leaving fewer people feeling excluded and unheard. Developers are not suited to responding directly to criticism from the fans, as it is human nature to take it personally when someone attacks work you have poured hundreds of hours into. Community management is not an exact science; you must be empathic, reactive, and always leave your door unlocked (metaphorically). You may not always be able to reach everyone, but everyone must be able to reach you.
Björn Blomberg



Making-of

Figure 1 A video fly-through of this scene is available on YouTube: www.makinggames.de/landscape-fly-through

Making Games – GDC 2014

SCOTTISH LANDSCAPE IN CRYENGINE 3 In just three months' time Environment Artist Martin Teichmann created an impressive Scottish landscape for the Polycount challenge, using the Free CryENGINE SDK. You can follow the design process in his detailed Making-of, from the concept phase to the final composition.

Martin Teichmann is an Environment Artist at Rocksteady Studios.

In 2008 Martin finished his studies in Computer Science in Erfurt, Germany. During that time he was already working as a 3D Artist on the action role-playing game Venetica by Deck13. In 2010 he was hired by Crytek to work on Crysis 2 and Crysis 3. Since 2013 he has lived in London, working as an Environment Artist at Rocksteady Studios, the developers behind Batman: Arkham Asylum and Arkham City. www.martinteichmann.com mail@martinteichmann.com


The main idea for this scene comes from a photo of a lonely, desolate Scottish landscape I saw a few years ago. For quite a while I had this Scottish scene in my mind, along with the idea of recreating it as a real-time game environment. When I saw that the Polycount community was starting »The Escape« challenge, I knew I wanted to build this scene. The Polycount challenge was to create either an escape scene, a character or both. There were no restrictions on the type of setting and no technical limitations such as poly count or texture resolution; it only had to run in real time in a game engine. The challenge was set to run for three months, which sounded like the perfect amount of time for me to finish my plans for the environment. I used the Free CryENGINE SDK, as it is a powerful tool, especially for rendering large landscapes and vegetation in high quality (Figure 1).

The Beginning
I started my work by looking for reference images. This was important for me to get a better idea of what I wanted to create in detail. It also helped me to figure out which elements I needed in my scene to achieve the most interesting and detailed result in the given time. At the end of my research I had around 400 images in my reference folder covering all different kinds of elements and ideas. As I searched for reference material, my general ideas became much more defined. Relatively quickly I decided to add the road to the horizon as one of the key elements in the scene. This would show »The Escape« in my work, as it was central to the Polycount challenge. At this point I wanted the landscape itself as a whole to be the focus for the viewer. Viewers should look at the image and be guided to the horizon, the destination of the escape. This would match the challenge rules and make the image interesting and more meaningful.


Figure 2 The final node graph I created in World Machine. The brown node in the middle is the »Erosion« node, which generates the realistic, weathered look of the terrain.

I didn't want to tell a story that was too obvious to the viewer; they should wonder for themselves what had happened here. I decided to add several »evacuation« props to the scene and abandoned luggage on the street to make it clear that people had passed through this area to escape a disaster. What this disaster was, or where all the people had gone, was something I wanted the viewer to think about. With all those ideas in mind I finally started working on the first and most important part: the terrain.

Terrain creation in »World Machine«
It is a complex and time-consuming task to paint a realistic-looking terrain that fulfills all the requirements I had in mind. Areas for streets or buildings need to be planned for and the terrain adjusted to them later. However, painting a terrain by hand can easily lead to unrealistic shapes or proportions in the environment. I decided to create my scene the other way around. My idea was to first create a realistic landscape and pick an interesting spot to use for the final scene afterwards. This meant that the road had to follow the existing landscape shapes instead of the terrain being built around the road. With this approach I gave up some freedom and control, but on the other hand I found the results to be far more realistic and believable. A landscape consists of a large number of different organic shapes. The different materials present, such as hard and soft rock, respond differently to weather and erosion, and together they form the unique look of the terrain. Water flows down, forms valleys and erodes the lighter parts such as dirt and soil. To achieve this realistic, weathered look, I picked the terrain generator »World Machine 2.0« by Stephen Schmitt (www.world-machine.com).

Figure 3 Base terrain rendered in World Machine.
This tool contains very powerful erosion tools to mimic the behavior of weathered terrain. World Machine is a node-based program, which makes it easy and efficient to play around with all kinds of different-looking terrains. The »Erosion« node adds the weathering effect on top of a terrain. First I generated a terrain roughly fitting my ideas and then added the erosion effect to give the terrain its realistic look (Figure 2). I also used the terrain texture from World Machine, which was very useful as a base color when I applied textures and materials later in the CryENGINE (Figure 3). After the terrain was imported into CryENGINE I had a nice source of inspiration and started looking for interesting spots on the terrain (Figure 4). I jumped into game mode several times and walked through the landscape to get the best feeling for how it would work from the viewer's perspective and which road areas were best to start working with (the Free CryENGINE SDK also provides several fully functional vehicles to drive through the environment, which is great fun!).
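World Machine's erosion algorithms are proprietary, but the principle behind such a weathering pass can be shown in a few lines. The following is a minimal thermal-erosion sketch in C++, not World Machine's actual implementation; the talus threshold, transport factor and iteration count are all invented for illustration. Material slides from a cell to its lower neighbours wherever the slope is too steep, which rounds peaks and fills crevices over many iterations:

#include <vector>

// Minimal thermal erosion: material moves from a cell to a lower
// neighbour whenever the height difference exceeds a talus threshold.
// Updates are applied in place (Gauss-Seidel style), which is fine for
// a sketch like this.
void thermalErode(std::vector<float>& h, int w, int rows,
                  float talus = 0.01f, int iterations = 50)
{
    for (int it = 0; it < iterations; ++it) {
        for (int y = 1; y < rows - 1; ++y) {
            for (int x = 1; x < w - 1; ++x) {
                float& c = h[y * w + x];
                const int n[4] = { (y-1)*w+x, (y+1)*w+x, y*w+x-1, y*w+x+1 };
                for (int k = 0; k < 4; ++k) {
                    float diff = c - h[n[k]];
                    if (diff > talus) {            // too steep: move material
                        float moved = 0.5f * (diff - talus);
                        c       -= moved;
                        h[n[k]] += moved;
                    }
                }
            }
        }
    }
}

Hydraulic erosion, which carves the flow channels visible in Figure 2, works on the same grid but additionally simulates water and sediment transport.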




Figure 4 World Machine terrain imported into CryENGINE. I used a 16-bit heightmap as a »pgm« and the terrain color texture as an 8-bit »bmp« file.
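As a side note on the formats named in the caption: a 16-bit »pgm« is a very simple container, which makes it easy to move heightmaps between tools with a small utility of your own. Here is a hedged C++ sketch of writing one; it assumes a row-major array of 16-bit heights and follows the binary P5 convention of storing the most significant byte first:

#include <cstdio>
#include <cstdint>
#include <vector>

// Writes a 16-bit binary PGM ("P5"). With a maxval above 255, PGM
// stores two bytes per sample, most significant byte first.
bool writePgm16(const char* path, const std::vector<uint16_t>& height,
                int w, int h)
{
    FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    std::fprintf(f, "P5\n%d %d\n65535\n", w, h);
    for (uint16_t v : height) {
        uint8_t bytes[2] = { uint8_t(v >> 8), uint8_t(v & 0xFF) };
        std::fwrite(bytes, 1, 2, f);
    }
    std::fclose(f);
    return true;
}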

Terrain textures
After I imported the terrain into CryENGINE, some close-up details for the different materials on the landscape, like rocks, mud or grass, were needed to embellish the scene. At first the terrain was covered with a gray »checker« texture. The terrain material textures work more or less like detail textures: they get blended together with the color of the terrain and should contain as little color information as possible. The detail texture fades out in the distance to improve performance, so that only the terrain color remains visible far away. To avoid a bad transition between the close areas that use the detail textures and the areas far in the background, the textures shouldn't contain too much color or light information. I desaturated and reworked my diffuse textures to get a relatively even, well-tiling pattern for the terrain (Figure 5).
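The distance behavior described above can be summed up in one small function. This is an illustrative CPU-side sketch rather than CryENGINE's shader code; the fade distance and the multiply-blend around a neutral gray are assumptions:

#include <algorithm>

struct Color { float r, g, b; };

// A (mostly grayscale) detail map modulates the low-frequency terrain
// color up close and fades out with camera distance, leaving only the
// terrain color in the background.
Color shadeTerrain(Color terrainColor, float detailLuma,
                   float camDistance, float fadeEnd = 60.0f)
{
    float fade = std::clamp(1.0f - camDistance / fadeEnd, 0.0f, 1.0f);
    // Multiply-blend around 0.5 so a neutral detail map changes nothing.
    float m = 1.0f + (2.0f * detailLuma - 1.0f) * fade;
    return { terrainColor.r * m, terrainColor.g * m, terrainColor.b * m };
}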


In order to add interesting detail and structure while also saving texture memory and time, I used the same textures with different material setups in the CryENGINE. For the grass texture I created one material with strong normal map details and another without. To get more variation and fewer repetitive patterns on the terrain, I switched between those two materials while painting the grass onto the terrain. I made three different materials for the rocks: one for large-scale distant rocks, one mid-distance material for average-sized rocks and finally a much smaller-scaled version for close-up details. For the distant rock material I also used non-uniform scaling to get more direction into the material and onto the terrain. Using these three rock materials in a clever way, it is possible to achieve organic rock surfaces with just one single texture (Figure 6).

Another important part of my scene is the road. Before I started working on smaller elements like plants, rocks and buildings, I needed a base road, as it was fundamental for the look and composition of my scene. First I picked an area on the terrain where I found the details of the landscape came out very nicely. Then I adjusted the terrain there to carve a valley for the road. For this I used the terrain tools of the CryENGINE to raise and lower parts until I was satisfied with the rough shapes. Looking at some of my reference photos, I noticed that these roads do not usually take the shortest route but the easiest path. They historically developed from small shepherd pathways into actual streets while keeping their old, natural shape, which is what makes them fit so well into the landscape. The CryENGINE provides a handy road tool that I used to create the street. This tool generates a »spline« path with just a few clicks directly on the terrain surface (Figure 7). I created a path following the given »natural« shapes of the generated landscape as much as possible to mimic those typical organic streets.

Figure 5 Terrain texture diffuse maps, normal maps and specular maps. Left to right: grass, dirt, small stones. Note the strong contrast in the specular maps for nicely detailed highlights and a slightly wet look of the materials.



Figure 6 Terrain painted using varying scales of rock materials and normal-mapped material instances for the grass.

I had to gently raise or lower several parts of the terrain again to get a nice path for the road and to help the terrain come out even a bit more naturally. When I was happy with the path for the street, I used the »Align Height Map« function, which adjusts the terrain to the generated road path and creates a very nice road fitted onto the terrain.

Vegetation
The next step was to fill the landscape with vegetation for a nice level of detail. I first worked on the grass patches, the most important vegetation element, as I wanted to use them everywhere. To achieve the best quality I created several high-poly grass models and rendered the diffuse map, normal map and alpha map to a texture for use on a simple plane object in the engine. The big benefit of using high-poly vegetation renders instead of a simple photo texture is the much cleaner alpha map and the rich, accurate normal map (Figure 8).

Figure 7 The road tool in CryENGINE generates the street as a simple spline. I used the given topology of the terrain to fit the road onto it.

To speed up placement of the vegetation, I created various grass patches containing several planes and grass textures. For more control over the finer details of the vegetation, I also exported a few differently shaped single grass blades (Figure 9).

Figure 8 Rendered grass texture using high poly meshes I created in 3ds Max.




Figure 9 Several grass patches using the grass textures I rendered from high poly meshes.

Props
The scene also needed some traffic signs and buildings to put more man-made elements into the landscape. I think those elements make the feeling of a lonely, abandoned scene even stronger: although no humans are shown in the images, people were clearly living here or passing through. What could have happened to them? Aside from dropping in generic assets like differently shaped rocks and stones, I created a number of props to support the human storytelling in the scene. I added suitcases, shoes and medical boxes, items you wouldn't usually expect to find lying around on a street. I also made some barrier assets like the barbed wire and the tank trap (Figure 10). For most of these objects I created a high-poly mesh and baked the details into the low-poly version I used in the CryENGINE. For some items, such as the shoes, I just created a low-poly mesh as I was short on time; since these were further away from the camera, the lack of detail didn't matter as much. The little bird is my favorite detail in this scene (Figure 11).

Figure 10 Props I created to populate the scene. The medical box adds a bit of color to the scene.


Even though it's a very small element, only visible at second or third glance, it immediately adds vitality to the scene. It tells you that the people have gone, but that nature has reclaimed what humanity abandoned.

Composition
After combining the parts of my scene, I focused on the composition of the whole landscape. The road and its destination on the horizon already dictated the composition, but there was still a lot of room for improvement. At this point it was very helpful to get feedback from a number of sources: first from the Polycount community, where my work was reviewed in the forums, and of course also from friends I showed my work to. It is very important to get an outside opinion, as other people will look at what you have created with fresh eyes. It's very easy to lose perspective on your own work once you've been working on it for a while. A good tip I picked up is to take a screenshot of the scene and flip it in Photoshop to get a new view of the image. A rough overpaint is also a very efficient way to quickly try out different elements and moods instead of changing a lot of parameters in the engine again and again (Figure 12).

Figure 11 The little bird was fun to model. It also adds some life and color to the scene.


I got a number of very helpful points out of this, which I then started to implement in the scene. Once again I reminded myself that the main focus of the image was the road and its destination at the horizon. To reach this goal I used several elements to frame the image better: trees and telegraph poles frame the shot while also adding silhouette detail. To lead the viewer's eye to the horizon I expanded the »V« shape of the valley. For my second shot I put the little bird on top of a pole in the silhouette to mark this specific element as a more important part of the scene. I felt that the small details on the road should not attract the viewer's attention too strongly, while still being visible at second glance to hint that something had happened here. To achieve this I kept the colors of these items close to that of the road itself. The exception is the red first-aid box, which I wanted to stand out and add some »artificial« color to the scene, just to increase the man-made aspects of this area.

Lighting
A typical Scottish landscape scene would usually be misty or at least cloudy. I wanted to get close to this look, but I also wanted to add a warm light to the scene. I felt it should look a bit wet, as if after rain, when the first warm light beams break through the clouds. This would heighten the contrast of the story I was telling: there should be signs of »hope« visible in this depopulated and overcast scene. I added a relatively strong, bright fog to achieve a misty look. This also helped a lot to separate the different layers of hills in the background of my scene and to give it additional depth.

Figure 12 Overpaint to improve the scene’s composition. Top: original screenshot, bottom: overpaint in Photoshop.

The main light in the scene was obviously the sun. I set the sun color to a desaturated but bright yellow to give the scene a slightly warm tint. A strong yellow light in the focal point of the image, the horizon, helped to emphasize this section of the image and also gave me some nice rim lighting to pick out and separate the different elements and make them more visible.

Figure 13 Left without, right with color grading enabled. More contrast and red tones added to the image.




Figure 14 Early state of the scene. The castle was cut later because it took away too much attention from the road and landscape, and those were the elements I wanted to bring into focus.
To achieve the final layer of quality I used color grading to adjust the image inside the CryENGINE. I added red tones and contrast in order to match the reference photos I had used more closely (Figure 13).

Summary
As you can see, the final image is actually not very close to the first rough blockout I created (Figure 14). In the beginning I had planned to include a lot more objects, such as a stone wall, cars and a castle on the left of the background, and during my research I kept adding more and more of the ideas I had in mind. However, I found that the more the idea and the scene matured in the planning and development process, the fewer unique elements were left in the scene. First of all, I only had approximately three months for the scene itself, and second, I found that having too many elements meant they started to compete with each other for attention on the screen.

Figure 15 The final shot of the Scottish landscape scene.


So I stepped back a bit and removed one element after another until I found the right balance for the scene. The creation of »The Escape« took a lot of hard work and effort to complete in the time allowed. On the other hand, it pushed me to really work hard on the landscape, to finalize it and to reach the goals I had in mind. I found that having a deadline helped me to keep things as simple as possible. The amount I learned while working on this scene was huge. Even though mine was not one of the winning entries, I really enjoyed the competition and the hard work, and in the end I am very happy with the final result (Figure 15). Now I am looking forward to the next challenge! If you are interested, you can find all the final entries of the Polycount challenge at: www.makinggames.de/polycount-finals Finally, I have to say thank you to Dominikus Reiter, Martin Elsaesser and Ronan Mahon, who helped me a lot with their feedback during my work on this scene and the making of this article. Martin Teichmann




Game Design Post Mortem Making Games – GDC 2014

BADLAND

»WE WERE JUST HAVING FUN« The two-man studio Frogmind suddenly became famous for their highly acclaimed iOS game BADLAND, which was developed without any design documents. The Finns just followed their own gut feeling. Here they share the entire thought process with us. Johannes Vuorinen is Programmer, Designer & Co-founder at Frogmind.

Johannes is an experienced game programmer and a gamer. He is the co-founder of the 1-year-old two-man indie studio Frogmind, the developer behind the premium iOS indie hit BADLAND, where he was lead programmer and co-creator of the game. Before co-founding Frogmind, he was the lead programmer of the Trials Evolution editor at RedLynx, a Ubisoft studio. @JohannesVuorine

When we founded Frogmind, we decided that we wanted to do as much as we could by ourselves. We wanted to be 100 per cent indie and develop the game of our dreams without making any compromises. We did not think about monetization at all; we wanted to keep our focus fully on creating the best possible game experience for players. We chose iOS as our primary platform because of the many indie success stories there. We had also both played a lot of iOS games, so we were familiar with the platform.

Gameplay
Everything started with finding the perfect gameplay, one that fits touch devices perfectly.

We both strongly value gameplay and precise controls, so the game design had to deliver both. We had always enjoyed the one-touch auto-scrolling mechanic familiar from games like Helicopter and Jetpack Joyride. There's just something so addictive about this simple mechanic of surviving and evading danger combined with perfect controls; it's all about skill and reflexes. However, we thought the genre had much more potential, and we decided to try to unleash it. So we quickly created a simple prototype with the familiar one-touch auto-scrolling gameplay and then added two things to it: heavily physical gameplay with various physical objects, and a twist on the rules whereby the character only dies when it leaves the screen, not when it hits something.
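The rule set of that prototype fits into a handful of lines. The following C++ sketch is a reconstruction from the description above, not Frogmind's code, and every constant is invented: holding the single touch applies lift, the camera scrolls at constant speed, and the character only dies when it leaves the screen:

// One-touch auto-scroller core loop, all values illustrative.
struct Character {
    float x = 0, y = 0;     // world position
    float vy = 0;           // vertical velocity
    bool  alive = true;
};

void tick(Character& c, bool touchHeld, float camX, float dt)
{
    const float gravity = -25.0f, lift = 55.0f, scroll = 8.0f;
    const float screenHalfW = 10.0f, screenH = 12.0f;

    c.vy += (gravity + (touchHeld ? lift : 0.0f)) * dt;
    c.y  += c.vy * dt;
    c.x  += scroll * dt;    // auto-scroll; collisions would push x back

    // The BADLAND twist: death only when pushed off-screen.
    if (c.x < camX - screenHalfW || c.y < 0.0f || c.y > screenH)
        c.alive = false;
}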

Trying to survive the level with as many clones as possible can sometimes be really hard.



The atmosphere with high quality visuals and audio plays a significant role in BADLAND.

These additions made the gameplay feel fresh, much more fun and more versatile, and the player still controlled everything with a single touch. We realized that this could really have potential. We were amazed at how versatile the gameplay could be with only a single touch, and decided not to stop there: let's add power-ups that alter the gameplay! Some of them make the character big and heavy; others make it small and light. These were followed by power-ups that make the character fly faster or slower, and ones that make it roll forwards or backwards. With the added rolling element we realized that the character had to be circle-shaped to really get everything out of it. Suddenly the gameplay changed and felt like a racing game whenever a roll power-up was collected, and everything was still controlled with a single touch. Then we added probably the most groundbreaking power-up: the one that clones the character. This added a ton to the already proven fun of the gameplay. Suddenly you controlled not just one character but a group, and it was just so much fun to try to save as many of them as possible, or maybe sacrifice a clone for the greater good. Quickly, this concept of saving clones became one of the main concepts in the game: it is about surviving with as many characters as possible for as long as possible. With all these added elements we realized that we had changed the gameplay to feel really fresh and unique while keeping the familiarity of the genre and the precise one-touch controls.

So the gameplay idea evolved by just playing around with the prototype and adding various elements to it. There were no design documents. We were just having fun.
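The clone power-up itself is conceptually just as small. Again a hypothetical C++ sketch rather than the shipped code: picking up the power-up duplicates every surviving character with a small offset, and the same single touch keeps driving the whole group:

#include <vector>

struct Clone { float x, y, vy; };

// Duplicate every surviving clone; offsets are illustrative.
void applyClonePowerUp(std::vector<Clone>& clones, int copiesPerClone)
{
    std::vector<Clone> spawned;
    for (const Clone& c : clones)
        for (int i = 1; i <= copiesPerClone; ++i) {
            Clone d = c;
            d.y += 0.4f * i;          // fan the copies out vertically
            spawned.push_back(d);
        }
    // Every clone, old and new, now answers to the same single touch.
    clones.insert(clones.end(), spawned.begin(), spawned.end());
}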

Atmosphere
Now that we knew we had great gameplay, we started thinking about how the game should look and sound. We did not want to go the typical mobile game route of ultra-colorful worlds and funny characters with funny-sounding names. We wanted the game to feel cool, unique and real. It should feel as non-game-like as possible and as much like an immersive experience as possible. We chose nature as the main reference and started adding things to make it feel more mysterious. As we added objects and animals, we also began to tell the story. What is this place? What has happened here? What is happening right now? We realized that we could tell the story using only visuals and sounds, without interrupting the experience with cutscenes or text.

What is BADLAND? BADLAND is an award-winning action adventure platformer developed fully independently by Frogmind, a two-man, 1-year-old indie game studio formed by Johannes Vuorinen and Juhana Myllys. BADLAND is the studio's first game and development started in early spring 2012. Since the game's launch on April 4th 2013 in the iOS App Store, BADLAND has been downloaded more than 7 million times. The game has been a success for Frogmind and we are very happy to continue developing it further.




The four sections of BADLAND: Dawn, Noon, Dusk and Night.

Figure 1 The simple but effective and satisfying progress screen is shown between the single-player levels.


The story is there, but it is not forced onto the player. The player can choose to follow it by looking at the graphics and listening to the world, or can just concentrate on playing. When we had finished the first concepts of the world, we started to think about how to fit the gameplay into it without making any sacrifices. We quickly realized that using black silhouettes as the gameplay layer made a perfect distinction between the beautiful visuals and the gameplay, and also added a really nice contrast. With this decision it was 100 per cent obvious to the player which graphics were in the gameplay layer, which he could interact with, and which were in the background. Following the »feel as non-game-like as possible« principle, we chose to add only a minimal amount of menu elements to the game (only a very small pause icon) and not to include any kind of tutorial. We wanted the player to just experiment and explore how to interact with the character and what the different kinds of obstacles and power-ups were. When you first load up the game, the gameplay starts immediately without any kind of explanation and the player begins to interact with the world. For the atmosphere, the audio was just as important as the visuals. We wanted it to be at the same quality level, sounding like a real place. Following this principle we did not include music, because there isn't any music in the place the character exists in. However, we added a lot of sound effects to make the place feel really alive and real: lots of sounds from nature, followed by the sounds of machines. As a technical side note, the sound files of BADLAND take up the same package size as the graphics. So there are a lot of different sound effects.

Game progress
After having fun gameplay combined with a great-looking and great-sounding atmosphere, we started thinking about how the game should progress. We had designed the basic story so that as the player progresses, the place starts to look less like a beautiful forest and more like some kind of machine-invaded, destroyed place. A forest looks most beautiful at dawn, when the sun has just risen, so we decided that the game begins at dawn. To get everything out of the machines, with their lights and other bad things, the game should progress until night. So we decided to include four sections in the game: Dawn, followed by Noon, followed by Dusk, followed by Night. The four sections together form a complete day, so we can logically continue the game by introducing the next day, starting again from Dawn, followed by Noon and so on. At first, in the spirit of »feel as non-game-like as possible«, we had the idea that the game should progress seamlessly without separate levels. However, gameplay-wise this did not feel right. We wanted the player to experience the progress and his success in the game more clearly. This is why we decided to divide every section of the game (Dawn/Noon/Dusk/Night) into ten separate levels. However, we did not want to show the typical pop-up result screen with three stars at the end that everyone knows from mobile games. We wanted the game to feel as immersive as possible even after finishing a level. Thus we created the circle of dots you see in Figure 1, which light up one by one as the player progresses through the levels. This very simple progress screen combined with a special sound effect felt really satisfying. We created a sense of progress without ruining the immersion.
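For the curious, the circle-of-dots screen boils down to laying out one dot per level and lighting up the completed ones. A minimal C++ sketch, with all layout values assumed:

#include <cmath>
#include <vector>

struct Dot { float x, y; bool lit; };

// One dot per level of the current section, laid out on a circle,
// lit up to the number of levels completed.
std::vector<Dot> buildProgressRing(int levels, int completed,
                                   float cx, float cy, float radius)
{
    std::vector<Dot> dots;
    for (int i = 0; i < levels; ++i) {
        float a = 2.0f * 3.14159265f * i / levels - 1.5707963f; // start at top
        dots.push_back({ cx + radius * std::cos(a),
                         cy + radius * std::sin(a),
                         i < completed });
    }
    return dots;
}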


Multiplayer
With all these different aspects designed, we started creating the actual content and levels. One day, a couple of months into development, we suddenly got the idea of an on-device local multiplayer. After all, we were only using a single touch, so we had plenty of touches and screen space left available for additional inputs. We quickly designed a multiplayer prototype for up to four players playing on the same device, and it instantly proved to be extremely fun. We immediately decided to include multiplayer for up to four players as a separate game mode.

Replay value
The multiplayer mode brought lots of replay value to the game. It is always fun to play a few rounds of multiplayer with friends. However, we did not want to stop there, so we included three missions in every single-player level.

These missions (Figure 2) are handcrafted for each level; for example, you have to save a certain number of clones, collect all power-ups, or explode all mines in a level. It was clear that these missions added a lot of replay value to the single player, but we did not want to introduce them in the first playthrough of a level. Again, this would have ruined the immersion. That's why we decided to make the missions available only after the player had completed a level.

Menus
The menus were the absolute last thing we designed and implemented. From the beginning we knew that they had to be very simple: something similar to Half-Life 2, where the game is always shown in the background. With that principle in mind, the last few months of development were used to build the game's menus. We wanted the first screen shown to the player, the main menu, to be really clean, setting the atmosphere for the game.

The multiplayer is the true test of survival with up to four players trying to reach the end.

Figure 2 All single player levels include three missions that become available after the first playthrough.




The main menu is clean and simple with the game running in the background.

That is why we put a lot of the information into the other menus, such as the statistics in the options menu, and the leaderboards with links to Facebook, Twitter and so on in a separate social menu. However, we wanted the player's progress to be clear, so we made that information clearly visible in the main menu, giving the player the opportunity to immediately see how far he has progressed in the game. All the other menus were designed with the same basic principle: to be as clear, clean and minimal as possible.

Marketing
As the game slowly developed further, we started thinking about marketing and how we could get people to actually play our awesome new game. We totally believed in it and that it would stand out from the crowd with its unique visual style. We thought this could be something interesting that people would talk about with their friends. However, it was clear from the beginning that we could not just develop the game in silence, launch it and hope for the best. It would have been an interesting experiment, but it just felt too risky.


After all, we were just two developers (plus a very talented part-time audio guy, Joonas Turner, and a few testers). There could be numerous things that we were just not able to see by ourselves that we could improve or change before launch. So we decided to talk about our upcoming game many months before releasing it on the App Store. Back in July 2012 we opened the game's website (www.badlandgame.com) and started writing about the game's development in a blog. About two to three times a month we wrote about the game's progress and showed new concepts and screenshots. Screenshots are nice, but they don't really demonstrate the gameplay, which is the most important part of any game. In October 2012 we had reached the point in development at which we decided to create the first gameplay video (Figure 3). We wanted the video to be filmed with a camera so that the viewer would clearly see how the game was played with a single touch. We carefully crafted a nice-looking level which demonstrated various aspects of the game and filmed one of us playing it. We posted the video on YouTube and sent the link not only to various iOS game sites like TouchArcade, PocketGamer and 148Apps but also to more general gaming sites like Destructoid and Eurogamer. We can say now that this was probably the most exciting day of the whole development. We showed the gameplay to the whole world for the first time. This was a true test of whether the game was interesting enough not just for us, but for the rest of the world. And it was! Many of the big gaming sites wrote a story about the video and this new upcoming game the very same day it was posted to YouTube.


And when the big sites wrote about it, the smaller sites followed. Moreover, we got an email from Apple saying that they were interested in the game. Suddenly we had lots of press contacts and even a contact at Apple, something so many developers value highly. It was a great day and gave us a big boost to work even harder to ship the game. As the launch day slowly approached, we continued writing blog posts and showing more screenshots and videos of new features. Every now and then a gaming site wrote about the things we posted. This way the game was kept alive all the time, and there was almost always someone talking about it somewhere.

Launch
In late February 2013 we decided that the game was ready for the final test round in QA, and we started thinking about our launch strategy. At first we talked about a late March launch, but we noticed that GDC San Francisco was happening at the same time and a lot of the media and game industry would be attending the conference and writing about GDC announcements. So we postponed the launch by a week, the new date being April 4th, and I flew to GDC to talk about the game and speak with as many people as possible. At the same time we prepared the launch trailer and all the other marketing materials. We decided on the final price point two days before launch. It was clear to us that this was not a typical 99-cent game but a higher-quality, premium title, and we wanted the price point to reflect that quality. The game is a universal app: with a single purchase you get the game for both iPhone/iPod and iPad. For iPhone games the typical price point for premium titles was 2.99 dollars, while for iPad games it was 4.99 dollars. We wondered which one we should choose; in the end we couldn't decide and, at 3.99 dollars, picked a price in the middle. Finally the launch day came and all the emails were sent out to various gaming sites. The game was live globally in the iOS App Store for anyone to download. We sat nervously in our office refreshing the iTunes page for Apple's Featured page content update, and also refreshing Google for the first reviews. BADLAND became Apple Editors' Choice in both Europe and the US! That meant we got the best possible ad spot in the App Store. It was a day of celebration, but only for a short time, as our plan was to keep developing the game.

Figure 3 The first gameplay video was posted to YouTube in October 2012. The video received about 40,000 views in just a couple of days.

Post-launch
We designed the game so that we could easily continue creating more levels and content. After the launch, we immediately started to develop the Day 2 section of BADLAND. We basically continued working the same way we did before. And we still do.

Our strategy is to keep bringing new single-player and multiplayer levels to the game in updates, which also introduce new features. We want to continue surprising the player in the levels the same way we have so far. Every new update is actually like a new game launch, only on a slightly smaller scale: we make a trailer video, take new screenshots and email the gaming sites about the update the same way we did the first time. We have so many amazing things coming in the updates, and in the world of BADLAND in general, that we cannot wait to show everything to the public. If you want to stay up to date, you can visit www.badlandgame.com for the latest development updates or follow us on Twitter via @badlandgame. Johannes Vuorinen

BADLAND was Apple Editors’ Choice worldwide in the App Store right after launch.



Art Case Study Making Games – GDC 2014

UNREAL ENGINE 4

CREATING THE INFILTRATOR DEMO Quick and beautiful – Epic Games delves into the new Unreal Engine and the techniques used to efficiently build its Infiltrator showcase.

Alan Willard is Senior Technical Artist and Level Designer at Epic Games.

With more than fifteen years at Epic Games under his belt, senior technical artist and level designer Alan Willard is a long-time contributor to the blockbuster Unreal and Gears of War franchises. With an intimate grasp of the award-winning Unreal Engine technology, Alan travels the world giving demonstrations of the toolset and training developers how to leverage Epic’s latest technology.

Making great tools has always been a top priority at Epic Games. With a robust, polished toolset you can build bigger games with smaller teams, empower artists and designers to create abundant content with little to no coding required, and free up programmers to craft new gameplay mechanics and systems. It's also important to be able to rapidly prototype and iterate on new ideas without breaking the creative flow. In addition, players expect amazing-looking games on their favorite platforms, from PC to console and mobile, so using a toolset that is built to scale and deploy games across the spectrum of modern hardware is vital for success. As Unreal Engine 4 enters maturity, we're using quite a few new tools and techniques for full production on our upcoming game »Fortnite« as well as for preproduction for high-end PC and upcoming consoles. Unreal Engine 4 powered the first public real-time demonstration on PlayStation 4 hardware at Sony's launch event, and at GDC we revealed a new demonstration called »Infiltrator« that I'll talk more about in this article. So here's what's new in Unreal Engine 4:

Blueprints
Blueprints are Unreal Engine 4's new solution for visual scripting (Figure 1). Epic's teams are using them in almost every part of the development pipeline, unleashing our creativity in ways never seen before. Besides using Blueprints for level scripting, we can make fully playable, bite-sized games and have them up and running in no time! One example is a hovercraft game built by principal artist Shane Caudle in his spare time, with zero programming help. Shane used Blueprints to create all of the mechanics, including controls, a simple AI and the HUD.


In the hovercraft game, a Blueprint drives the behavior of each game object, the primary one being the ship itself. The hovership's Blueprint contains a user-created curve that defines how the hovership bobs up and down when the player triggers a blast. When playing the game, all of this visual scripting code animates and responds in real time, displaying the flow of Blueprint activity as the game is played. Everything is visible as it is engaged, from the event for the button press all the way through to the scripting and behaviors being executed. Blueprints are also used to create content procedurally. For example, we've demonstrated how to insert and adjust a ring of pillars placed by a single Blueprint. The visual scripting lets you define how many sections form the circle and how wide the circle is, plus you can adjust object rotations uniformly, all at once. You can also tell the script to remove one of the objects from the ring, and choose which one will be removed. In addition, you can space the pillars apart equidistantly without any manual measurements. This method of creating and placing objects allows for rapid prototyping and the insertion of complex systems, not just simple objects, and it also paves the way for future features by extending the Blueprint. The Blueprint is split into a couple of different sections. The first, called the Construction Script, defines how variables are used; in this case it calculates how many segments are in the radial array and creates a mesh in each location. The nodes can be laid out in a very user-friendly way, and it's easy to follow what is happening.


There is also an Event Graph that enables you to define behavior that will be executed during gameplay, so if something needs to happen when the player interacts with the Blueprint, you can easily define it here and extend the behavior later if needed. Blueprints are also really handy thanks to their »create once, deploy everywhere« nature. Once you create a Blueprint object, you can place it in as many levels as desired. If you need to modify the Blueprint's properties, you can adjust them, and those changes and any modified behavior will propagate throughout the entire game.
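For readers who think in code, here is what that pillar-ring Construction Script computes, expressed as plain C++. The function and field names are ours, not Unreal's API; the point is the logic a designer wires up with nodes: positions around a circle, a uniform extra rotation, and a set of indices to skip. In the editor, this logic reruns whenever a property changes, so the ring updates live:

#include <cmath>
#include <set>
#include <vector>

struct Transform { float x, y, yawDeg; };

// N positions around a circle of a given radius, each facing outward,
// with an extra uniform rotation and optional removed indices.
std::vector<Transform> buildRing(int segments, float radius,
                                 float extraYawDeg,
                                 const std::set<int>& removed)
{
    std::vector<Transform> out;
    for (int i = 0; i < segments; ++i) {
        if (removed.count(i)) continue;              // skipped pillar
        float a = 2.0f * 3.14159265f * i / segments;
        out.push_back({ radius * std::cos(a), radius * std::sin(a),
                        a * 57.29578f + extraYawDeg }); // radians to degrees
    }
    return out;
}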

What else is new?
We've also developed a dynamic Content Browser with an editing mode for customizing thumbnails. When editing is activated within the browser, it's possible to view the asset in real time, move the camera around and even change the thumbnail's object preview using materials. For assets that don't have an easy preview object, the editor can capture the current view of the world and use the new screenshot as a substitute. Persona is a new suite of animation tools built to meet the growing complexity of high-quality animation. Persona uses Blueprints to define the interaction and blending of animations in response to gameplay. A State Graph can be used to customize animation blending, which enables you to change states easily, from walking to crouching or to any other state a project needs. All of this is visualized within the graph, showing the flow from one node to another based on the values interactively defined in the tool. Like the core Blueprint system, this is extensible as the design of a project evolves: any number of new states, animations, blending rules or other behaviors can be added and refined as needed.

In addition to overhauling the way we handle assets and build animations, we've evolved our rendering and art pipeline to support physically-based lighting and shading. IES profiles for lights accurately capture the intensity and falloff of real-world lights, using the same standards applied in film CGI toolsets. Lights can use IES profiles that define how much energy they emit and how it is distributed within the light, for much more realistic illumination. Traditionally, we would tweak the falloff and color, and maybe apply a texture to refine the result. Now all we have to do is apply an IES profile to immediately achieve realistic photometric illumination in our levels.

Because the lighting and shading are physically based, every surface transmits light and reflects its environment based on physically accurate materials. Examples include actual samples from our office: things such as aluminum, wall paint and carpet. We also progressively blend in additional levels of detail based on camera distance, and these values can be blended together freely. All of these values can be modified dynamically at any time, which lets artists tune and tweak them directly in any scene, make directed changes at runtime, and trigger changes based on gameplay or any other event.

In addition, all the tiny bits and pieces in a particle system can now emit light, with control over color, lifetime, size and a number of other properties. We've also implemented extremely fast GPU collision for particles. Collision is dynamic, so as particle systems or individual sprites move, or objects move through them, collisions behave realistically.
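The photometric part is worth a small worked example. An IES profile essentially supplies luminous intensity in candela per direction; illuminance at a receiver then follows the inverse-square law, E = I / d². The C++ sketch below uses a toy stand-in for the profile lookup, since real IES parsing is beyond the scope of this article:

#include <cmath>

// Toy stand-in for an IES profile lookup: full intensity straight down,
// fading to zero at 90 degrees. A real profile is measured data.
float sampleIesCandela(float angleRad)
{
    return 600.0f * std::fmax(std::cos(angleRad), 0.0f);
}

// Illuminance in lux at a given angle and distance: E = I / d^2.
// This is the falloff an IES-driven light gets for free.
float illuminanceAt(float angleRad, float distanceMeters)
{
    return sampleIesCandela(angleRad) / (distanceMeters * distanceMeters);
}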

Under the Hood of Infiltrator
Now that this subset of the tools and features we've developed in Unreal Engine 4 has been introduced, let's look at the specific techniques Epic used to create the »Infiltrator« demonstration.

Figure 1: The new Blueprint feature allows developers to create content like animations procedurally.

Watch the Infiltrator demo To really understand what we talk about in this article, we have included time stamps so you can jump right to the scene being discussed. You will find the video of the Infiltrator demo at www.makinggames.de/infiltrator.




Figure 2: The main character in the Infiltrator demo is a blend of fabric, metal, rubber, armor, flesh, and hair. The layers are melded together in the final shader with different layer blends to define which one is visible on any given portion of the model.

Figure 3: The corridor, including the bot vehicle at the top, has 1,000 lights onscreen at once, which was necessary for our artists to achieve their creative vision.

Figure 4: Most of the electrical and ricochet sparks, water drips and blowing smoke you see in the Infiltrator demo are high-density GPU particles.

duced, let’s look at the specific techniques Epic used to create the »Infiltrator« demonstration. Infiltrator was created internally by a team averaging 14 artists working over the course of three months. There were a number of new additions to the engine during that time that caused us to develop new workflows and techniques. Let’s use our main character as an example.

New material pipeline
Looking at the character shader, code structure and textures that define his appearance, there are significant differences from how we've created characters in the past. For starters, instead of using a small number of high-resolution textures for his diffuse, specular and normal properties, we implemented a layered method to define the different types of material that make up his look. This character is a blend of fabric, metal, rubber, armor, flesh and hair (Figure 2, 2:30). Each layer is built uniquely and then melded together in the final shader with different layer blends that use masks to define which layer is visible on any given portion of the character. Some of these layers are fairly straightforward, such as the cleaner chrome used for items like buckles, whereas other parts, like his flesh, are much more complex. These techniques are also used in many of the surface materials seen in the Infiltrator environment.

68

Another example of this is the bot vehicle at the beginning of the demo (see Figure 3, top left). The base metal is defined in one layer, while a paint layer is opaquely blended over the metal. This blending is more complex than just laying two textures on top of one another, because it also blends material properties such as how metallic, specular or noisy the layers are. Additionally, there is a decal layer that adds touches of detail independent of the other layers, allowing for material changes that do not affect the finer details of the vehicle.
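Conceptually, each layer blend works like the sketch below: a mask interpolates whole parameter sets, not just colors. The struct fields and the lerp-style blend are illustrative assumptions, not the engine's material node internals:

// A layer carries a full set of physically-based properties, and a
// per-pixel mask decides how much of each layer wins.
struct Layer {
    float r, g, b;      // base color
    float metallic;
    float roughness;
};

Layer blendLayers(const Layer& base, const Layer& paint, float mask)
{
    auto mix = [mask](float a, float b) { return a + (b - a) * mask; };
    return { mix(base.r, paint.r), mix(base.g, paint.g), mix(base.b, paint.b),
             mix(base.metallic, paint.metallic),
             mix(base.roughness, paint.roughness) };
}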

Next-gen lighting
Infiltrator's lighting is accomplished through a variety of techniques. For our artists to achieve their creative vision we had to support many lights, with 1,000 onscreen at once in some places, such as the corridor in the opening scene (Figure 3, 0:13). According to senior graphics programmer Daniel Wright, we developed distance field shadowmaps, a method for applying the shadowing of these lights efficiently. Shadows typically have a large cost, so this method is the primary reason we are able to support so many lights. We use tiled deferred shading with DirectX 11 compute shaders to apply the lights to the screen in Infiltrator. This splits the screen into tiles and culls lights for each tile before applying them. It also makes our light application so efficient that we can afford many more dynamic lights attached to particles and use them for numerous effects, such as sparks.
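The culling step can be sketched on the CPU for clarity; the real pass runs in a DirectX 11 compute shader and tests actual light volumes rather than the screen-space circles assumed here:

#include <algorithm>
#include <vector>

struct Light { float x, y, radiusPx; };      // screen-space bounds
struct Tile  { std::vector<int> lightIds; };

// Split the screen into fixed-size tiles and keep, per tile, only the
// lights whose bounds touch it. Shading then loops over that short
// per-tile list instead of over all lights.
std::vector<Tile> cullLights(const std::vector<Light>& lights,
                             int width, int height, int tileSize = 16)
{
    int tx = (width + tileSize - 1) / tileSize;
    int ty = (height + tileSize - 1) / tileSize;
    std::vector<Tile> tiles(tx * ty);
    for (int i = 0; i < (int)lights.size(); ++i) {
        const Light& l = lights[i];
        int x0 = std::max(int((l.x - l.radiusPx) / tileSize), 0);
        int x1 = std::min(int((l.x + l.radiusPx) / tileSize), tx - 1);
        int y0 = std::max(int((l.y - l.radiusPx) / tileSize), 0);
        int y1 = std::min(int((l.y + l.radiusPx) / tileSize), ty - 1);
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x)
                tiles[y * tx + x].lightIds.push_back(i);
    }
    return tiles;
}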


Figure 5: Light emission from CPU particles is also implemented as a very low-cost, easily authored system.
Particle lights are used in every muzzle flash and impact effect. Particle lighting allows for precise timing of lighting with particle effects within the environment. This new technique improves scene integration, and we are able to edit light settings in one location and have them propagate across the entire project, tuning lighting all at once and updating tons of instances. This is a massive time saver. For Infiltrator we even attached particle lights to soldiers' helmets to quickly add lighting detail which could be globally modified from a single particle effect. To elaborate on what we mean by scene integration: characters meld with the environment better than ever before because particles can now contribute light to the world. For example, if a sprite has a uniform range of time it is allowed to live, i.e. one sprite lives half a second while another lives a full second, then the light emitted with each sprite will exactly match the sprite's visible lifetime. This gives artists an efficient and precise method of controlling light within the environment. It also saved many man-hours of work in Infiltrator, because artists didn't need to update individual lights to time with particles, which exhibit random behavior that can't easily be predicted. There are hundreds of bullet impacts, tracers, muzzle flashes, exploding sparks, electrical bursts and ambient effects in the level. Thanks to particle lights we can modify the instances of each of these effects from a single asset, which makes art direction and iteration a breeze. It's also useful to note that volumetrics and particle effects in Infiltrator are lit using a special method in which a volume texture follows the camera around, and all the visible lights have their influence injected into it. Volumetrics then look up into the volume texture to get the lighting at their position in space.

High-resolution, high-count particles
Particle systems received a significant quality overhaul for Unreal Engine 4. CPU particles enable us to produce very complex individual particles, while GPU systems allow for much higher particle densities, with simulations processed on the graphics card.

Figure 6: The soft and fuzzy dreadlocks of the main character are achieved through a new feature called temporal anti-aliasing. It jitters the camera’s results every frame for a cinematic feel.

In Infiltrator, most of the electrical and ricochet sparks, water drips and blowing smoke are high-density GPU particles (Figure 4, 1:31). Artists can author them using the same tools used to create CPU particles, with additional modules available. For example, GPU particle collision relies on evaluating the depth of the scene and comparing it to the current position of the particle. This is exceptionally fast, so we are able to perform collisions on a large number of particles per frame at almost no cost. We also implemented light emission for CPU particles, again as a very low-cost, easily authored system (Figure 5, 3:08). Here the artist chooses the light module for the particle system, which then creates a light per particle that inherits the color and size of the particle. These can be used in any scene.
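The depth-buffer test reduces to a few lines per particle. This C++ sketch mirrors the idea on the CPU with stubbed-out projection and depth reads; the bounce response and time step are invented:

struct Vec3 { float x, y, z; };

// Stand-ins for the real GPU resources: a flat far wall and a trivial
// "projection". The real pass reads the actual depth buffer per particle.
float sceneDepthAtPixel(int, int) { return 100.0f; }
bool projectToScreen(Vec3 p, int& px, int& py, float& viewDepth)
{
    px = int(p.x); py = int(p.y); viewDepth = p.z;
    return true;
}

// Screen-space particle collision: compare the particle's view depth
// with the scene depth at its pixel and bounce when it would pass
// behind the visible surface.
void collideParticle(Vec3& pos, Vec3& vel, float bounce = 0.4f)
{
    int px, py; float viewDepth;
    if (!projectToScreen(pos, px, py, viewDepth)) return; // off-screen: skip
    if (viewDepth > sceneDepthAtPixel(px, py)) {
        vel.z = -vel.z * bounce;     // crude response along the view axis
        pos.z += vel.z * 0.016f;     // push back out (one 60 Hz step)
    }
}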

Temporal anti-aliasing and motion blur
The hair on the character is a mass of dreadlocks, and it needed to look softly fuzzy during the cinematic (Figure 6, 0:57). To achieve that look, we relied on a new feature called temporal anti-aliasing, which was being implemented as we were creating the demo. Temporal AA jitters the camera's results every frame and collects the data into a buffer that is blended over time, along with edge detection results, to figure out which pixels should be smoothed into their neighbors. For the character's hair, this produces noisy masks extending outwards from the body of each dreadlock, which the temporal AA blurs into a much softer, more realistic result. Temporal AA isn't restricted to the character; it is in use throughout the entire cinematic and is one of the major features that enabled us to achieve the desired cinematic feel.
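The jitter half of that technique is compact enough to show. A common way to generate the sub-pixel offsets is a low-discrepancy Halton sequence; whether Unreal Engine 4 uses exactly this sequence is not stated in the article, so treat this C++ sketch as illustrative:

// Radical-inverse Halton sequence value for the given index and base.
float halton(int index, int base)
{
    float f = 1.0f, r = 0.0f;
    while (index > 0) {
        f /= base;
        r += f * (index % base);
        index /= base;
    }
    return r;
}

// Sub-pixel jitter in normalized device coordinates for this frame;
// the offset is added to the projection matrix, and the jittered frames
// are blended into a history buffer over time.
void taaJitter(int frame, int width, int height, float& jx, float& jy)
{
    int i = (frame % 8) + 1;                       // cycle of 8 samples
    jx = (halton(i, 2) - 0.5f) * 2.0f / width;
    jy = (halton(i, 3) - 0.5f) * 2.0f / height;
}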

Camera tricks and refraction tips
The cloak shield is created using an Unreal Engine 4 tool called »Scene Capture«, which is a camera that writes to a texture. For Infiltrator, senior technical artist Jordan Walker placed the camera facing away from the character's back to capture the hallway behind him, and the resulting texture was projected onto an arced sheet in front of him (Figure 7, 0:51). In addition to creating the captured texture, Jordan layered in color tints along with refractive elements whose attributes were view-dependent. He then created multiple refractions of the scene at different intensities for each color channel to enhance the refractive elements of the effect and give it a more digital appearance. This creates a blurry refraction that also has some color fringing.

Figure 7: When the guard approaches, the corridor seems completely ordinary. When the camera turns, however, a texture becomes visible, displaying a skewed perspective. This is achieved with the Scene Capture tool, which turns a specific camera view into a texture.




New moves with Persona
Building the motion for our characters is what prompted the development of the new Persona animation pipeline, which uses node-based Blueprints for creating states and rulesets for blending between those states. These are used to blend numerous animations together simultaneously based on input from any source, game or user. Like the core Blueprint system, Persona is extensible: any number of new states, animations, blending rules or other behaviors can be added and refined, allowing designers, animators and engineers to expand the system to accommodate new needs as the design of the project evolves.

Better animations with improved FBX support

Figure 8: The rolling fireball explosion is a three-dimensional animated volume generated in 3ds Max using the FumeFX plug-in.

Thanks to Unreal Engine 4's updated FBX integration, senior FX artist François Antoine led the effort of exporting all shots from the Infiltrator cinematic featuring the falling bot vehicles, including environments, skeletal mesh animation and camera motion, to an FBX file. Once imported into Maya, we converted the bot's existing skeleton to a jointed ragdoll object using the NVIDIA PhysX DCC plug-in, adding breakable constraints for parts that separate on impact. We then generated low-resolution meshes for the entire robot to use as collision bodies. The final high-resolution mesh is parented directly to the joint hierarchy. Using the actual cameras exported from the Unreal Engine 4 cinematic, we simulated to camera while tweaking damping, joint stiffness parameters and constraint strength, and adding fields and forces to art-direct the ragdoll performance. These simulations were then imported as FBX animation tracks back into the engine and assigned as separate animation sequences to the worker bot's skeletal mesh. The animations were then triggered at the right time using our Matinee cinematics system.

3D volume texture explosion!
To create a rolling, detailed explosion unlike any Epic had built previously, Jordan and François worked together to make full use of the combined power of Unreal Engine 4's Blueprints, material editor and material functions (Figure 8, 2:00). The fireball explosion in Infiltrator is a three-dimensional animated volume which can be viewed from all sides. It was initially generated in 3ds Max using the FumeFX plug-in. We then rendered 10 cross-sections of the temperature component of the simulation for each frame.


Each of these animated cross-sections was made into an animated texture sheet and imported into the engine. Using Blueprint and material functions, the fluid simulation was reconstructed into a volume texture by interpolating between the 10 animated cross-sections. In order to display the volume texture in the viewport, it was assigned to a 3D grid of GPU particles. Finally, we used a physically-based Blackbody Radiation material expression to determine the fire and smoke's color from the simulation's fluid temperatures.
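Reduced to code, the reconstruction is an interpolated lookup plus a temperature-to-color mapping. The C++ sketch below invents the data access and uses a crude color ramp in place of the engine's physically-based Blackbody Radiation expression:

#include <algorithm>

struct Color { float r, g, b; };

float sampleSlice(int slice, float u, float v); // reads one animated sheet

// Rebuild a volume sample by interpolating between the cross-sections
// along the third axis (w in [0, 1]).
float sampleVolumeTemperature(float u, float v, float w, int slices = 10)
{
    float s  = w * (slices - 1);
    int   s0 = std::min(int(s), slices - 2);
    float t  = s - s0;
    return sampleSlice(s0, u, v) * (1.0f - t) + sampleSlice(s0 + 1, u, v) * t;
}

// Crude fire ramp standing in for a real blackbody mapping.
Color blackbodyish(float kelvin)
{
    float t = std::clamp((kelvin - 1000.0f) / 5000.0f, 0.0f, 1.0f);
    return { 1.0f, 0.2f + 0.8f * t, t * t };   // red to orange to white-ish
}

// Stand-in for the animated texture sheet read.
float sampleSlice(int, float, float) { return 3000.0f; }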

Destructive environment dos and don'ts
François advises artists to fail fast and early. It's important to implement a placeholder simulation as promptly as possible so the cinematic and animation departments can roughly take into account what is happening. Resist the temptation to re-simulate until the rest of the shot is nearly final. Spend a good amount of time up front cutting up geometry and generating realistic fracture patterns, and smooth the normals of the internal faces. Motion blur is your friend and will help give direction to simulations. Where there is destruction, there is dust and there are small particulates, so make sure you add these as early as possible to see how much rigid body simulation you actually need. Save memory by filling out a sparse rigid body simulation with a particle system. Make use of instancing by assigning multiple animation sequences to the same fracturing mesh.

A single drop of water
The single coalescing drop of water in Infiltrator (Figure 9, 1:09) is a combination of multiple meshes and our new lit translucent materials, all controlled with Matinee cinematics. To show the initial drip forming on top of the pipe and rolling down the metal, senior FX artist Tim Elek sculpted a droplet mesh which matched the pipe's profile, and then offset the pivot to the center of the pipe. The scale of the droplet was then animated in Matinee to create the appearance of its formation. To create the illusion of the fluid accumulating and rolling down the surface, we used Matinee to rotate the mesh around the pipe along the seam, and timed this with a panning material on a second mesh underneath the droplet, which also matched the contour of the pipe. The droplet that forms and drips off the bottom of the pipe is actually a third mesh, which uses a series of hand-animated morph target blends to create a convincing contour. This technique allows for very detailed animation on each vertex of the mesh. When the droplet falls away from the coalescing shape, the morph target animation quickly snaps the liquid back to reveal another droplet mesh. The newly falling droplet's position and shape are animated in Matinee to convey gravity's effect on the liquid as it falls onto the shield.
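At its core, morph-target animation is per-vertex interpolation between a base mesh and sculpted targets, which is what gives the droplet its hand-keyed contour. A minimal single-target C++ sketch (real meshes blend several targets, and the weight itself is animated):

#include <vector>

struct V3 { float x, y, z; };

// Blend every vertex from the base mesh towards a sculpted target by an
// animated weight in [0, 1].
void applyMorph(std::vector<V3>& out, const std::vector<V3>& base,
                const std::vector<V3>& target, float weight)
{
    out.resize(base.size());
    for (size_t i = 0; i < base.size(); ++i)
        out[i] = { base[i].x + (target[i].x - base[i].x) * weight,
                   base[i].y + (target[i].y - base[i].y) * weight,
                   base[i].z + (target[i].z - base[i].z) * weight };
}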


Figure 10: This city shot may look like a matte painting, but it is actually built in 3D. The sky, for example, is a large dome over the space with a high-resolution sunset blended in.

The materials for the water are unique to each mesh, which gives us control over blending and refraction based on lighting and the camera angle for each shot. We rely on the speed of the morph target animation to really sell the droplet breaking off due to gravity’s influence, and make use of the Material Instance Constant system and Matinee controls for real-time tweaking, feedback and animation of the material parameters. This system also enables art direction to adjust lighting and color quickly, shot by shot. Refraction is controlled with a Fresnel falloff to the edge of the mesh, which pushes the refraction away from the center of the form to emphasize the contour of the droplet edge. Unreal Engine 4’s new deferred decals with a similar refraction material are used in conjunction with the droplet materials to emphasize the amount of moisture buildup on the pipe. Lastly, referencing slow motion photography also helped to ensure the profile and timing of the droplet had an authentic feel.

From machine to skyline One of the focal points of Infiltrator is a massive vertical structure for the bots, which we nicknamed »The Screw«. The alarm lights on the machinery are actually built using Blueprints that describe the machine’s different states. When the guard initiates the shutdown, the lights begin to pulse bright red as the structure progressively switches to emergency mode. The impressive reflections in Infiltrator show off the improved rendering and lighting in Unreal Engine 4. We use a combination of screen-space reflections and reflection environments to tie the entire world together. We find that placed reflection capture sources achieve the desired resolution and quality for the majority of the world, while the screen-space reflections produce the dynamic reflections needed to maintain the character’s grounding in the environment.

Screen-space reflections also make it possible for all of the moving objects in Infiltrator to reflect their surroundings correctly. Lastly, though it looks like a matte painting, the final city shots from Infiltrator were actually built in 3D, using a number of techniques to achieve such complexity (Figure 10, 2:44). For instance, the sky is a large dome over the space, with a special high-resolution sunset blended in to enhance the overall apparent resolution of the sky. Many of the cars and other small lights are particle systems emitting light and being guided along paths. We used different cloud layers to increase the visual depth of the scene, and many of those clouds are depth-fading sheets, which check their surface’s position in space against other nearby surfaces and control their opacity to eliminate surface-clipping artifacts. The rest of the environment in those shots is made up of 3D objects placed in layers towards the distance.
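The depth-fade trick mentioned above boils down to a small opacity function, sketched here in C#. This is our illustration, with assumed parameter names; in Unreal Engine 4 the equivalent logic is a material expression rather than game code:

static float DepthFadeOpacity(float sceneDepth, float pixelDepth, float fadeDistance)
{
    // 0 where the sheet touches the geometry behind it,
    // 1 once it is at least fadeDistance in front of it.
    float t = (sceneDepth - pixelDepth) / fadeDistance;
    return t < 0f ? 0f : (t > 1f ? 1f : t);
}

Fading a translucent sheet out as it approaches opaque geometry hides the hard intersection line that would otherwise give the trick away.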

Takeaways The Infiltrator demo challenged our team on multiple fronts. We set out to demonstrate the utility of a next-generation engine while working within significant time and manpower constraints, and so we continually implemented new tools and features while refining techniques. Being able to quickly iterate on content and polish our work was critical to finishing the demo in time for GDC. The challenges presented by the upcoming generational change in hardware drive home the fact that tools and workflow are just as important as great design, stunning graphics and an immersive experience. Without the former, the latter may not happen. It has been rewarding to set a new bar for ourselves, and what we were able to create with Infiltrator makes us truly excited for the next generation of games. Our focus is to amplify the productivity of our team and other developers, making it possible for us all to achieve our creative visions for future games. Alan Willard

Figure 9: The drop of water is a combination of multiple meshes and the new lit translucent materials, controlled with Matinee cinematics.



Development Case Study Making Games – GDC 2014

BUILDING A CROSS PLATFORM PIPELINE FOR KINGS OF THE REALM

For their upcoming cross-platform MMO, the team at Irish developer Digit Game Studios had to streamline their asset pipeline and adjust Unity to their specific needs. CTO Dominique Boutin tells us what actions he and his programmer colleagues took. Dominique Boutin is Chief Technology Officer at Digit Game Studios.

Born in France and raised in Hamburg, Dominique Boutin is a 15-year technology veteran, currently working as Chief Technology Officer at Digit Game Studios. Dominique was formerly Director of Technology & Development at Bigpoint GmbH in Hamburg. Lured by Digit’s team spirit and bold attitude, Dominique made the move to Dublin to work with the team on the technology behind their seamlessly cross-platform ambitions. @Dom3D @digitgaming


Have you ever attempted to walk with your eyes closed for even just a few seconds? If you have, you may have noticed a tendency to walk in a curve rather than straight ahead. Now imagine you are driving on a German highway at 220 km/h (approximately 136 mph). (For non-German readers: Germany still has areas without speed limits!) If you closed your eyes in this scenario, for even just a few seconds, the chances are very high that your trip would end in a literally deadly way. Moving at high speed requires one to be able to look ahead and see that it is safe to do so. Your eyes, combined with your other senses, provide a continuous feedback stream to help you identify where you are – and where you are heading. The same goes for managing a videogame production team or business. However, projects are not born with built-in sensory systems, and with the most exciting ones to work on, you often can’t be sure what lies ahead. In order to be agile and move fast while also optimising speed, quality and costs, one has to establish tight feedback loops so that the current status and current direction become as transparent as possible. This trend applies across industries – and is empowered by technology. Another consistent trend is the need to democratize technology, that is, to make technology more accessible, enabling and empowering more people to use it to enhance their productivity. In game development this is what your content pipeline must achieve: enable designers to create content that goes straight

into your product; allow software developers to focus on building systems. One has to be aware, though, that this may create concerns for those who have been in full control so far and who worry about performance and stability. A good content pipeline should provide means to give feedback as early as possible and give alerts if something is broken or went over budget. In addition to enhancing your tool chain with feedback mechanics, you can also, for example, put up TVs with dashboards that provide indicators and graphs covering memory consumption, loading times and frames per second measured for preselected scenarios.

Upfront Investments vs. Time-to-Market If you are in the games-as-a-service business, you are working to get a core game experience out the door as soon as possible with the understanding that you will continue to add further depth as you learn from your customers. A similar approach should be used for pipeline development: once pre-production defines the core experience, create a first version of your content pipeline – and then continue to iterate and improve it.

Starting Simple Writing standalone tools or complex scene editors is quite a big commitment. Using Unity and various extensions from the Unity Asset Store will provide you with a powerful base set. Be aware, though, that you will most probably have to extend and customize the way you use Unity.


Kings of the Realm is a multiplatform strategy MMO currently in closed beta. The game is built in Unity for all devices, which is why an efficient asset pipeline and engine customization are so important. The screenshot shows an alpha version; all elements are work in progress.

Unity’s goal is to democratize game development. It started out by targeting indie developers and very small teams. It comes with some concepts out of the box, which can give you the impression that this is the way you have to use it. In my experience this is also the basis for common complaints from teams of various sizes, be it 6, 15, 30 or more people. But there is quite a lot of flexibility under the hood, and sometimes the possibility to bypass built-in workflows. I will give a small example later on. Depending on the project size, you can also start much simpler, without editors, and still get a good iterative process that enables artists and designers to contribute on their own. You may have heard about »asset conditioning«, »asset post-processing«, »Resource Compiler« and the like. It usually means taking the output of digital content creation tools like Maya and Photoshop and transforming it into authoring- or runtime-specific formats. You can take it one step further and integrate assets the same way, including the assignment of behaviour, attributes and runtime materials/shaders. To do so, use naming conventions for files, directories and the content itself. Add prefixes or postfixes to anything that can have a name in your DCC tool, for example: materials, textures, 3D scene nodes, layers etc. Next we need some kind of meta program that triggers the various processing routines. This could be your build tool or a custom solution that is able to call your routines and pass parameters in order to produce different output, for example for debug builds. The routines should be written in languages that provide a rich set of out-of-the-box libraries, such as Python or C#, or use the core of your scriptable engine as a command-line tool. Your routines then parse and process files in specific directories and sub-directories.
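As a minimal sketch of such a meta program, consider the following C# console tool. The prefixes, directory layout and routines here are our own assumptions for illustration, not Digit’s actual conventions:

using System;
using System.IO;

static class AssetConditioner
{
    static void Main(string[] args)
    {
        string contentRoot = args.Length > 0 ? args[0] : "Content";
        bool debugBuild = Array.IndexOf(args, "--debug") >= 0;

        foreach (string file in Directory.EnumerateFiles(
                     contentRoot, "*.*", SearchOption.AllDirectories))
        {
            string name = Path.GetFileName(file);

            // Route each file to a routine based on its naming convention.
            if (name.StartsWith("tex_"))
                ProcessTexture(file, debugBuild);
            else if (name.StartsWith("mat_"))
                ProcessMaterial(file, debugBuild);
            else
                // Report unknown content early instead of failing later.
                Console.WriteLine("WARNING: no rule for " + name);
        }
    }

    static void ProcessTexture(string path, bool debug) { /* compress, pack, ... */ }
    static void ProcessMaterial(string path, bool debug) { /* assign shaders, ... */ }
}

A convention-driven asset pass: the naming prefix decides which processing routine runs, and a command-line flag switches the output for debug builds.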

There are various ways you can organize that, such as having different directories for cars, houses, characters etc. I often see projects where assets are organized in directories by type, so they either contain meshes, textures, materials or shaders etc. In such cases, adding or removing content means dealing with multiple directories each time, and sometimes it’s unclear what can be safely removed. I therefore don’t recommend that. Adding a new piece of vertical content like a character should be done by adding a file or a directory that contains everything related. Removing or refreshing that element then becomes much easier. Big projects may have a desire for incremental builds in order to get fast integration cycles, but building smaller projects should not exceed a few minutes. If it takes longer, look at using SSDs and distributing tasks across machines first.

The typical content update cycle in game development using DCC software like Maya and Photoshop.



Development Case Study Making Games – GDC 2014

Folders are used in Photoshop for giving hints to the import pipeline.

Moreover, it’s important that your designers can execute the content integration process on their own and run the project locally in an easy way.

Kings of the Realm At Digit we are working on a »seamlessly cross-platform«, mid-core fantasy strategy MMO. It’s a bold and innovative quality project: while everybody focuses mostly on mobile and/or tablets, we clearly see that we all use multiple devices of various form factors every day. For example, on the road it’s mostly the smartphone, during work hours it’s a desktop computer or a laptop, and during relaxing moments on the couch, it’s a tablet. We want to create engaging quality games that are available to core gamers at any time, regardless of what device is close to them. Kings of the Realm (KotR) will go into open beta later this year and will be genuinely social, due not only to the fact that it’s seamlessly cross-platform but also owing to some innovative game mechanics. In terms of how we improved our pipeline, here are just a few examples:

static Texture2D LoadPngTexture(string path)
{
    // Read the raw PNG bytes; width and height live in the IHDR chunk,
    // stored big-endian at offsets 16-19 and 20-23.
    byte[] buffer = File.ReadAllBytes(path);

    byte[] bytesWidth = { buffer[16], buffer[17], buffer[18], buffer[19] };
    if (BitConverter.IsLittleEndian)
        Array.Reverse(bytesWidth);
    int width = BitConverter.ToInt32(bytesWidth, 0);

    byte[] bytesHeight = { buffer[20], buffer[21], buffer[22], buffer[23] };
    if (BitConverter.IsLittleEndian)
        Array.Reverse(bytesHeight);
    int height = BitConverter.ToInt32(bytesHeight, 0);

    // Let Unity decode the PNG data into the texture.
    Texture2D texture = new Texture2D(width, height, TextureFormat.ARGB32, false);
    texture.LoadImage(buffer);
    texture.wrapMode = TextureWrapMode.Clamp;
    return texture;
}

Reading a PNG file and converting it into a Unity texture using C#.


Automating the Asset Export Although Kings of the Realm is a 2D game, we are building a lot of assets in 3D using Maya. The renderings are brought into Photoshop for composition and additional treatment. To automate the export to single images including meta-data, such as positional information, we used the built-in JavaScript interface. Unfortunately this is incredibly slow: exporting 350 layers took over an hour on an iMac. It takes the same amount of time for scripts that don’t do any exports but just rename and reorder layers. It is said that using actions is much faster, but that API is less pleasant and you may not be able to access specific data easily. As the Photoshop file specifications are public, we were able to go a different route. Unfortunately the specs and the format are a bit messy, but luckily there are partial reader implementations in Python, C#, Java, C, C++ and Objective-C, and I think I also came across one in CoffeeScript. Using an enhanced C# implementation as a console tool reduced our export time to less than 15 seconds. A huge difference! When a process takes an hour, one thinks twice before doing an export. If it takes less than a minute, one just does it. This is a good example of how fast tech impacts behaviour. We added support for reading »smart objects« from PSD files, which allows our artists to use them inside Photoshop for populating the screen with a set of animated sprites. A smart object is basically like an embedded Photoshop file that you can reuse in various layers. As folders/groups are actually stored in the PSD file as special layers that act as markers, we also added some code that allows us to deal with them more easily.
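To give an idea of what parsing the format involves, here is a hedged sketch of reading just the PSD header, based on Adobe’s public file format specification. The helper names are ours, and a real reader obviously has to handle far more than this:

using System;
using System.IO;

static class PsdHeader
{
    // All header fields in a PSD file are stored big-endian.
    public static void Read(string path, out int width, out int height)
    {
        using (var reader = new BinaryReader(File.OpenRead(path)))
        {
            if (new string(reader.ReadChars(4)) != "8BPS")
                throw new InvalidDataException("Not a PSD file");
            reader.ReadBytes(2);            // version (1 for PSD)
            reader.ReadBytes(6);            // reserved, must be zero
            reader.ReadBytes(2);            // number of channels
            height = ReadInt32BE(reader);
            width  = ReadInt32BE(reader);
        }
    }

    static int ReadInt32BE(BinaryReader r)
    {
        byte[] b = r.ReadBytes(4);
        return (b[0] << 24) | (b[1] << 16) | (b[2] << 8) | b[3];
    }
}

Reading the image dimensions from the PSD header; layers, smart objects and folder markers follow later in the file.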

Bypassing the Unity Pipeline From there we went on and integrated the PSD reader into our Unity Editor toolset. Inside Unity, the artist can load a PSD file from a custom editor and choose which layers to import for updating. Images stored in the Unity asset folder usually get flagged as »non-readable«, which means Unity loads them directly to the GPU without keeping a shadow copy in RAM for further access. Either you use a script to hook into the asset processor to override that property, or, as we did, you bypass the Unity pipeline for your purposes. We store all sprites separately as PNG files before baking them into texture atlases. Reading PNG files using C# is straightforward; there is no need to use Unity for loading them if it complicates the pipeline for you.
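For the first option, the hook can be as small as this sketch using Unity’s AssetPostprocessor API (the class name is our own):

using UnityEditor;

class ReadableTexturePostprocessor : AssetPostprocessor
{
    void OnPreprocessTexture()
    {
        // Keep a CPU-side copy of the pixels so scripts can read them later.
        TextureImporter importer = (TextureImporter)assetImporter;
        importer.isReadable = true;
    }
}

Overriding the »non-readable« flag during import; in our case, bypassing the pipeline entirely turned out to be the simpler route.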

Automated Processing In order to reduce the amount of transparent pixels drawn for each sprite, we use convex meshes to display them. We generate them automatically by extracting the silhouette using a simple scanline technique to create a point cloud. We then transform that cloud into a simplified convex hull before triangulating it.
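The hull step itself can be done with Andrew’s monotone chain algorithm, sketched below. This is our illustration under assumed names, not Digit’s actual code, and the further simplification of the hull is omitted:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Numerics;

static class HullBuilder
{
    static float Cross(Vector2 o, Vector2 a, Vector2 b)
    {
        return (a.X - o.X) * (b.Y - o.Y) - (a.Y - o.Y) * (b.X - o.X);
    }

    public static List<Vector2> ConvexHull(List<Vector2> points)
    {
        if (points.Count < 3)
            return new List<Vector2>(points);

        var pts = points.OrderBy(p => p.X).ThenBy(p => p.Y).ToList();
        var hull = new List<Vector2>();

        // Two passes: lower hull, then upper hull over the reversed points.
        for (int pass = 0; pass < 2; pass++)
        {
            int start = hull.Count;
            foreach (var p in pts)
            {
                // Pop points that would make a clockwise turn.
                while (hull.Count >= start + 2 &&
                       Cross(hull[hull.Count - 2], hull[hull.Count - 1], p) <= 0)
                    hull.RemoveAt(hull.Count - 1);
                hull.Add(p);
            }
            hull.RemoveAt(hull.Count - 1); // endpoint repeats as the next pass’ start
            pts.Reverse();
        }
        return hull;
    }
}

Extracting the convex hull from the silhouette point cloud; the resulting ring of vertices can then be triangulated as a simple fan.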


Our Atlas Manager and the Sprite Inspector Editor in Unity. On the right it is showing a static single-frame sprite.

In Unity we use a lot of ScriptableObjects and custom Inspector UIs to expose properties and parameters for automated processing, previewing and selective reprocessing. ScriptableObjects are basically custom data structures, managed by Unity, that reside as assets in the project folder. They are really easy to set up, as you can see in the code snippet on the bottom right.

Atlas packing The Unity Editor UI is written using Unity itself, with its built-in immediate-mode UI system. This means it’s also straightforward to display sprites and meshes in any custom editor window for preview or editing purposes. You can spot a few examples in our screenshots. The decision which sprite image goes into which texture atlas is made by our artists. A custom atlas editor lists all images based on filter criteria that we extract from the PSD file, allowing them to find sprites easily. In order to use the inspector view in combination with the atlas manager, we simply select the desired scripted sprite asset in the project database to show its properties inside the inspector view. To help with the atlas packing we implemented various tools, including the ability to manually reposition a sprite through simple click-and-drag. We save the position of each sprite per atlas, allowing us to bring updated PSD layers into the game faster. We are also experimenting with 2D physics: running the shapes through a gravity simulation within the atlas boundaries can increase the density and free some more space for new sprites.

In the end the artist can bake all atlases and configurations with a few clicks and experience the updated game right away inside of Unity. In order to build for multiple platforms, we use Jenkins in combination with a Mac mini in server configuration, which can be bought with two SSDs and plenty of RAM. I highly recommend getting a fast machine like this; in our case it reduced our build time for all platforms from over an hour to less than 10 minutes. As we are developing online games, being able to run the servers locally is a big advantage, so being able to set up a local installation in an easy and fast way is crucial. The best way is to use virtual machines. We use Vagrant and Chef to do so. Vagrant sits on top of VirtualBox and makes it easy to add scripted automation. As we use Chef to automate our online infrastructure, we can reuse some »recipes« for setting up the server environment inside of VirtualBox through Vagrant (which also supports Puppet, in case you are more familiar with that). Through easy access to our client builds via Jenkins and easy setup of local servers via Vagrant, even artists and game designers have access to local sandboxes for testing, balancing and experimenting. This way they can edit and upload server-side configurations without involving a software engineer and without the need for custom editors, as maintaining those while the game is changing a lot could be very time-consuming. Kings of the Realm entered closed beta last week, and we at Digit Game Studios look forward to making it available to you all in the coming months. Dominique Boutin

[System.Serializable]
public class SpriteAsset : ScriptableObject
{
    // ...
}

[CustomEditor(typeof(SpriteAsset))]
public class Inspector_Sprite : Editor
{
    // ...
}

Setting up a custom asset type including a custom inspector view.



PR Best Practice Making Games – GDC 2014

THE DEFINITIVE

PR-FAQ PART 1

Former editor and PR professional Gunnar Lott responds to questions about game public relations that you may have asked yourself occasionally. Gunnar Lott is CEO at Visibility Communications.

Gunnar has been a journalist for many years; he even founded the magazine you’re currently reading. Since 2011 he has been a PR professional, currently working primarily for his own agency, Visibility Communications, located in Berlin. www.visi.bi

In the following lines I will try to answer the questions that I have encountered most in my games industry career. They were brought to me by friends, colleagues and even personal enemies. You probably won’t find the answer to the question that you crave most. That’s just how these FAQs seem to go. In this case, please consult the editors of this magazine and demand another episode of this article.

What the hell is PR? That you could’ve just checked on Wikipedia, but since you kindly asked, I’ll tell you anyway. Public relations is the communication of organizations, corporations or even single persons (music or movie stars, for example) with broad public audiences. I chose the plural »audiences« because there isn’t one single audience. Instead we try to deliver customized information to predetermined recipients. Example: Developer Y just made a huge deal with Publisher X to create game Z as a new part of a popular IP. This news is interesting for many people; however, they are interested in it for different reasons – you might address fans of the series, or fans of the genre in general. When addressing the first audience, you focus on communicating the further improvement of the series’ traditional strengths.

This is PR: Gameforge got lots of attention by offering Klingon lessons on YouTube, advertising their browser game »Star Trek: Infinite Space«.


When addressing the latter, it might be a better idea to communicate key features and the general quality of the upcoming game. You can also address potential investors. After all, Developer Y is making big bucks right now. This might lead to the need for additional personnel, which makes members of the games industry an additional audience. You might want to address other publishers out there as well: here is a sensational studio doing awesome work; we might be open for other interesting projects with new partners. Last but not least, it is important to address your own company. PR also has an effect on company members: it strengthens the corporate identity, and good PR makes people proud. When the basics are done well and all texts and messages are spot on, working with social media and the community becomes easier, since you already have assets you can use as a backup.

Where is the difference between PR and Marketing? PR is, in a way, the sister of the marketing department. It might also be just another arrow in the quiver of the marketing director, all depending on the company. Both are trying to establish a message which tells the consumer out there just how great their product or company is. The difference is: PR mainly addresses the media and doesn’t pay for articles. Marketing, on the other hand, is a sales job. You buy advertising space, time or installs in order to acquire more customers and users. There are companies out there which do not use classic marketing tools at all, and are proud that their products are advertised and sold simply by word of mouth. Others think that PR work is esoteric balderdash and only trust the measurable effects of performance marketing. Both ways have their rightful place. Good PR work is something long-term; it helps build a strong corporate image and raise brand awareness, similar to traditional marketing. Performance marketing, on the other hand, focuses on short-term effects which have a measurable impact on sales numbers. In theory, PR work improves the conversion rate of performance marketing measures, but that is hard to prove.


PR stunt: Zynga sent a horde of actors running through LA in order to advertise »Zombie Swipeout«.

Okay. And what does a PR person actually do? What are the tools? The traditional press release remains one of the most important tools, even today. Just as the name suggests, it is merely a tool to release relevant information to the media and therefore the public. Such a release might follow strict rules or might be individually created. In the end the only thing that matters is this: it needs to be easily grasped. Journalists prefer concise press releases; therefore all relevant information should ideally be conveyed in one line: »Who? What? When? Where? How? Why?« This information is usually sent out to multiple recipients, basically to all journalists that might be interested. Sometimes, however, it is better not to send out a press release, but rather get in contact with only a few editors who specialize in the topic at hand. But basically PR can be anything. It might be an event where a dozen actors dressed as werewolves rage through the streets of Hamburg in order to create attention for a new werewolf-themed shooter (of course journalists are tipped off about such events in advance). PR might also be a simple interview, or a meeting in a Starbucks where you talk about »trade secrets« with a journalist; in turn you get a public mention in their magazine or on their website. PR stands for a lot of things: sending out review samples, organizing raffles, visiting journalists, having a beer with them at the bar (»networking«) and so on.

How do you calculate how much PR is worth? Simple: You can’t. Not really, at least. There are two logical ways. PR people like to calculate with so-called media equivalencies: a full-page ad in Game Informer costs about 195,455 dollars; therefore a full page of coverage about my product is also worth 195,455 dollars, since I would have had to spend the same amount of money for the same effect. This method is far from scientific, but it is interesting from a PR standpoint, since it makes for nice-looking numbers and makes it easy to claim a theoretical ROI. The other option is to completely discard marketing and solely focus on PR. Everything coming in can therefore be traced back to PR work. PR people present so-called clippings as their references. In essence those are lists with records of attained media coverage. This always looks nice as well, but it is also of questionable value. If you want to know what you achieved, you have to predefine targets and compare the predicted and the actual results.

Traditional PR tool: A press conference (picture: EA) bundles media focus in one place.



PR Best Practice Making Games – GDC 2014

What does PR cost? Not counting fancy press trips to the company headquarters in the Bahamas, the cost of PR mainly consists of HR. The employees need to eat, sleep and rest in order to be able to work hard on press releases, event planning, networking and such. Therefore the usual currency agencies deal in is work hours, which are then sold to the contractor at various rates. The German PR Association (DPRG) determined that the heads of PR agencies charge between 120 and 200 Euros per work hour; a regular PR advisor will cost between 90 and 130 Euros. For some events there are fixed rates as well: a nationwide press conference, for example, will cost about 4,000 Euros, according to the DPRG. Many agencies despise billing by the hour, so they just use wholesale packages: one month, one game, one country: 2,000 Euros flat. Another favorite is the monthly retainer, which the PR agency gets paid each month, plus bonuses for extra events and such. Which business model is chosen largely depends on the involved business partners.


Do I need to do PR? Not doing PR means being invisible. And by the way, you are already doing PR: you are communicating with your website, the answers you give at a GDC talk, your Twitter postings, your job offers, your marketing banners. Everything is fair game when it comes to creating a strong corporate image or a renowned brand. Even if you don’t fancy traditional PR work, it is a good tool to create a stronger self-image; it also helps your employees to identify with the company. And even if you work with a publisher who is in charge of the product PR, it is always a good idea to keep the company name in the game by doing some corporate PR. Flaregames from Karlsruhe, for example, proclaims itself an »asshole-free zone«: they claim to hire only good people. A bold statement with a message that is easy to understand – journalists love this. But that’s not all: it triggers sympathy in the market, with potential employees and with the public in general.

We need an 85+ Metascore. Can you guys handle that? No. But of course we do know which media outlets contribute to Metacritic or the quality index of Pocket Gamer. When in a pinch, it might be a good idea to cater to these outlets in order to get the most out of the product. A lousy game, however, can’t be saved by good PR alone. PR is no flavor enhancer; it simply makes more people come to the restaurant – if the food is bad, then well ... tough luck.

Let me ask this way: Can I buy ratings? Of course. Especially in the emerging mobile sector there are a lot of blogs out there that will write good reviews in exchange for cash. I have even provided some of them with texts of my own, since they did not have the time to write one themselves. Larger media outlets, however, can’t simply be bought. But you can make a trip to the editor’s office and tell them how much time and heart went into the production of the game. You can invite them to dinner, for drinks and so on. You might even invite them on a nice press trip to a homely location. Even seasoned journalists might then feel inclined to be more forgiving, seeing how much effort the PR people put into the product. It is also no secret that some editors are more forgiving when reviewing products of regular advertising customers than they are with companies who always beg for good grades but never invest in the media.

A typical PR chain: Polygon (picture) picked up a release from Joystiq, and German media took the release from Polygon, even though all of these websites had received the release themselves and could have easily based their coverage on that.

Okay, but my game is great. Isn’t that enough?


The other way round is also true: a bad game can’t be saved by good PR.


A good game combined with good PR, however, can achieve maximum profit. The most important source for journalists is other journalists: if your game is picked up by one big magazine, it might inspire others.

How do I choose a PR agency? That is a trust issue. If you are not a PR specialist yourself, it is hard to assess the quality of a PR agency. As a first step you should therefore check whether all the soft factors are in place: Do these people get me, my product and the industry I’m in? Do they have experience in my field? Are they recommended by others? Do I get along with them? Is their price model transparent? The logical next step is a pitch – name a budget and a task and ask several agencies to come back to you with their ideas. This is usually free and helps a lot when assessing the different PR agencies.

I manage a three-person garage company. How should I do PR? The basics can be done from home. Create an image, a market position. Ask other developers if they know any well-meaning journalists and write them a friendly mail in which you present your company and your game. If your game is good and you present yourself in an authentic way, this can help a lot. Services like Promoterapp help you with tracking your efforts and results, and Gamespress.com helps with getting materials out to the public. A company blog is also a nice idea: if you post regularly about current topics, maybe even presenting real numbers, it can become a powerful tool. For international PR it is feasible to use inexpensive services which specialize in games, such as gamepromoter.com or verysmallmonsters.com (disclaimer: the latter is a service of, yeah well, me).

And how do I get into GameStar or Game Informer? With a topic that interests the readers of these magazines. The big websites and magazines are mainstream-oriented and get a lot of their stories from the big PR companies and larger publishers.

Good pictures and videos are important: Musterbrand’s staging of the Assassin’s Creed IV clothing was vital for the viral media spread.

It can be difficult for smaller developers to get in. It is often a good idea to get some coverage in a smaller medium first, hoping the larger ones will hear about it. Gunnar Lott

Interviews with opinion-making media outlets are a good chance to present company values to the public (Klaas Kersting at »Gründerszene«).

Thanks to ...

Andreas Suika, Carsten Orthbandt, Martin Ganteföhr, Andreas Herbert and Carolin Stephan, who helped us ask the right questions.



Home Story Making Games – GDC 2014

A Day At …

IRRATIONAL GAMES

The Big Daddy not only protects the Little Sister, but also our lobby.

The meeting room has a calm, zen-like aura.

The whole team meets for a kickoff meeting.

Jeff Seamster brought this cake fit for a king to the office. It shan’t survive long. What do you think, Mr. Audio Lead Scott Haraldsen? ... »Om Nom Nom«

On the right, a glance into the »Andrew Ryan Meeting Room«; on the left, a standup meeting. Which is best? You decide.

More pits. And books. And a shoebox. And more.


The game has gone gold. That is, of course, celebrated with champagne.

Ken Levine and Producer Don Roy are enjoying a glass. Or two ...


We were allowed access to the holy chambers of the BioShock developers to find out what Andrew Ryan’s personal meeting room looks like, why two men with a camera are filming snow, and just how good Ken Levine looks with two glasses.

Our huge, always spotlessly clean (yeah, right) kitchen.

Keith Shetler and Dylan Schmidt are shooting a scene for »A Modern Day Icarus« from BioShock Infinite.

»Animate you must!«

Highly focused and yet so graceful: our Producer James Edwards.

Our »Animation Pits« have skylights for indirect lighting – cozy!

Can we fit every employee on one photo? Challenge accepted!



Imprint Making Games – GDC 2014

ALSO AVAILABLE FROM MAKING GAMES

Making Games 04/13: The Next Generation
Featuring Epic Games, Bungie, Havok, Aeria Games, King Art & Jade Raymond

Making Games 05/13: The Making of Ryse
Featuring Crytek, Digit Games, Frogmind, Studio Fizbin & HandyGames

Making Games 06/13: Open World
Featuring CD Projekt RED, Rabcat, Sony Online, Paradox Interactive & inkle

Contact us

Advertising Sales: Nicole Klinge, nklinge@idg-consultant.de
Address Changes and Subscriptions: shop@makinggames.de
Article Suggestions and Comments: editor@makinggames.de

IMPRINT

Editor in chief: Heiko Klinge
Editors: Yassin Chakhchoukh, Jochen Gebauer, Patricia Geiger
Art Director: Sigrun Rüb


Layout: Manfred Aumaier, Alexander Wagner, Eva Zechmeister, Anita Blockinger
Editing: Marion Schneider
Translation: Tom Loske

Contributors: Björn Blomberg, Dominique Boutin, Adrian Goersch, Marcin Gollent, Patrick Harnack, Matthias Hellmund, Nikolas Kolm, Devin Lafontaine, Gunnar Lott, Corey Navage, David Sallmann, Emily R. Steiner, Martin Teichmann, Piotr Tomsinski, Balázs Török, Johannes Vuorinen, Alan Willard

PUBLISHER

IDG Entertainment Media GmbH
Lyonel-Feininger-Str. 26
80807 Munich
Germany
Phone: +49 89 / 360 86 0
Fax: +49 89 / 360 86 118
www.idg.de

Registration Court: Municipal Court Munich HRB 116 413

We thank our interview partner Bobby Stein

Authorized Representative: York von Heimburg, CEO

VAT Identification Number: DE 186 676 450

Executive Director for Advertising: Ralf Sattelberger, Director Sales (Address see Publisher) +49 89 / 360 86 730



The NEW voice of game development

It’s high-quality Every piece of content is written by professionals from the industry and specifically optimized for tablet reading.

It’s connected All sources and external references are linked in. You can check them out without leaving the app.

Free Trial Issue

It’s comfortable You can buy available issues straight from your couch. Billing is done through your iTunes or Google Play account.

www.makinggames.de/appstore
www.makinggames.de/googleplay


WE ARE HIRING ACROSS ALL DISCIPLINES

WWW.YAGER.DE/CAREER

