
The Journal of Internationalisation and Localisation Volume II 2012

Lessius Antwerpen/University of Leuven
Subfaculty of Language and Communication
Sint-Andriesstraat 2, 2000 Antwerpen, Belgium

Brigham Young University
Department of Linguistics and English Language
Provo, UT 84602, USA



© 2012 by Lessius Antwerpen/University of Leuven
Subfaculty of Language and Communication
Sint-Andriesstraat 2, 2000 Antwerpen, Belgium

Brigham Young University
Department of Linguistics and English Language
Provo, UT 84602, USA

Editor
Hendrik J. Kockaert (Lessius Antwerpen/University of Leuven, Subfaculty of Language and Communication)


Editorial Board

Dr. Esperanza Alarcón Navío, Facultad de Traducción e Interpretación, Universidad de Granada, Spain
Prof. Dr. Bassey E. Antia, Linguistics Department, University of the Western Cape, South Africa
Patricia Egan, Technical Communication, University of California at Berkeley Extension, USA
Jorge Estevez, Documentation Service Officer at FAO, Rome, Italy
Dr. Miguel A. Jiménez-Crespo, Department of Spanish and Portuguese, Rutgers, The State University of New Jersey, USA
Barbara Inge Karsch, Terminology Researcher, Microsoft Corporation, Redmond, USA
Rolf Klischewski, Games Localisation Consultant, Germany
Dr. Hendrik J. Kockaert, Lessius Antwerpen, KU Leuven Subfaculty of Language and Communication, Belgium
István Lengyel, COO, Kilgray Translation Technologies, Hungary
Dr. Arle Lommel, Senior Researcher at the Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI), Standards Coordinator at GALA (Globalization and Localization Association)
Julia Makoushina, Co-Owner and Operations Director, Palex Languages and Software, Tomsk, Russia
Prof. Dr. Alan K. Melby, Linguistics and English Language, Brigham Young University, Provo, USA, Member of the Board of Directors at the American Translators Association
Dr. Sharon O'Brien, School of Applied Language and Intercultural Studies, Dublin City University, Republic of Ireland, Lecturer/Researcher in translation technology, controlled language & MT
Peter Reynolds, CEO at TM-Global & Executive Director at Kilgray Translation Technologies, Poland
Florian Sachse, Managing Director, PASS Engineering GmbH, SDL International, Bonn, Germany
Prof. Dr. Klaus-Dirk Schmitz, Cologne University of Applied Sciences, Cologne, Germany
Prof. Dr. Uta M. Seewald-Heeg, Anhalt University, Köthen, Germany
Prof. Dr. Nitish Singh, Boeing Institute of International Business, John Cook School of Business, Saint Louis University, St. Louis, Missouri, USA
Prof. Dr. Frieda Steurs, Lessius Antwerpen, KU Leuven Subfaculty of Language and Communication, Belgium
Thomas Vackier, Localisation Specialist & QA Tools Engineer, Yamagata Europe, Gent, Belgium
Prof. Dr. Sue-Ellen Wright, Modern and Classical Language Studies, Kent State University, Kent, USA
Dr. Jost Zetzsche, Principal at International Writers' Group


The Journal of Internationalisation and Localisation welcomes contributions on the following themes.

The main themes of JIAL include, but are not confined to:

Interoperability
Translation quality assurance
Development and compatibility of translation and localisation software with today's internationalisation and localisation requirements
Web internationalisation
Translation marketing
Cultural adaptation
Development, implementation and certification of standards
Education, research and best practices in the areas of internationalisation and localisation
Interchange between course programmes and industry
Authoring
Software internationalisation
Localisation project management
Games localisation
Multimedia translation
The Open Source paradigm
Terminology management issues of internationalisation and localisation

JIAL is a peer-reviewed journal published once a year and on the occasion of dedicated congresses. For ordering print issues, please refer to the website of Lessius Antwerpen, KU Leuven Subfaculty of Language and Communication.


Preface

The Journal of Internationalisation and Localisation [JIAL] aims at establishing a worldwide discussion forum for both professionals and academics in the area of internationalisation and localisation. The scope of the journal is as broad as possible in order to target all the players in the internationalisation and localisation profession. The specific aim of the journal is to leverage the full range of information, from academic research results to the shop floor of today's language industries, and, conversely, to leverage business experiences in order to inform academic research.

The journal offers the following opportunities and benefits to its expected audience:

Publicity and branding opportunities for GILT-oriented training programs;
Industry feedback enabling programs to remain abreast of industry trends;
Opportunities to disseminate critical information at both the academic and the industry levels;
A venue where educators, graduates, and industry can come together to foster burgeoning links between academia and industry.

JIAL aims not only to follow standard academic publication practices, but also to adopt an editorial policy that targets academic research results efficiently towards industry. Consequently, each new issue will appear in two publication modes: a PDF file, accessible online, and a print version on demand. The Editorial Board is committed to meeting the needs of academic content presentation, while the more detailed papers will be complemented by abstracts and key conclusions aimed at directly informing all partners involved in the internationalisation and localisation industry.

Hendrik J. Kockaert Editor in Chief


The Journal of Internationalisation and Localisation Volume II [2012]

ISSN (online) 2032-6912
ISSN (print) 2032-6904

Edited by Hendrik J. Kockaert


Table of Contents

Carmen Mangiron
The Localisation of Japanese Video Games: Striking the Right Balance ..................................................... 1

Alan Melby, Arle Lommel, Nathan Rasmussen & Jason Housley
The Language Interoperability Portfolio (Linport) Project: Towards an Open, Nonproprietary Format for Packaging Translation Materials ............................................................................................. 21

Ian R. O’Keeffe
Soundtrack Localisation: Culturally Adaptive Music Content for Computer Games ................................ 36

Dimitra Anastasiou
XLIFF Mapping to RDF ............................................................................................................................. 66



The Localisation of Japanese Video Games: Striking the Right Balance¹

Carmen Mangiron

Universitat Autònoma de Barcelona, Spain carme.mangiron@uab.cat

Abstract

Over the course of the last three decades the entertainment software industry has become a multibillion-dollar industry and a worldwide phenomenon. The United States and Japan have traditionally been the main players in this industry, which owes part of its global success to internationalisation and the associated localisation processes. Due to the cultural distance between Japan and Western countries, Japanese games often undergo extensive cultural adaptation in order to market them successfully in those territories. This paper analyses the localisation of Japanese console games. After presenting a brief overview of the history of the localisation of Japanese games, it describes the main internationalisation strategies adopted by Japanese developers and publishers. It then explores the main localisation strategies applied to Japanese games, i.e. domesticating or exoticising, examines the cultural adaptation processes to which some Japanese games have been subject, and considers how critics and players reacted to the localised versions. Finally, it concludes with a reflection on the extent to which Japanese games should be culturally adapted for their international release in order to strike the right balance between domesticating and exoticising strategies, taking into account factors such as the genre of the game, the gaming preferences of the target players, and the intended audience.

Keywords: video games, cultural adaptation, localisation strategies

1. This research is supported by the R + D Spanish Ministry of Science and Innovation project FFI2009-08027 and the Catalan Government funds 2009SGR700.



Introduction

The interactive entertainment software industry, popularly known as the video game industry, is a multibillion-dollar industry that generated an estimated consumer spend on game content of US$15.6 billion in 2010, including new and used boxed games, game rentals, subscriptions, game downloads, social network games, and mobile game applications (NPD, 2011). Video games have become an integral element of global pop culture and a preferred leisure activity for many, with over 95 million adult gamers in Europe (GameVision, 2010) and 68% of North American households playing games regularly (ESA, 2009). The USA and Japan have traditionally been the main players in this industry, which owes a significant part of its success to GILT (globalisation, internationalisation, localisation and translation) processes that have allowed gamers around the world to play and enjoy the same games as though they had been originally developed specifically for them. Although most video games are developed in English or Japanese, they are marketed and sold around the world and generate up to 50% of their revenue from international sales (Chandler, 2006).

This paper focuses on the localisation of Japanese console games. After describing the main features of game localisation and presenting a brief history of the localisation of Japanese games, the paper describes the internationalisation strategies currently adopted by Japanese developers and publishers. It presents a number of examples of the type of cultural adaptation and rewriting that Japanese games typically undergo and analyses how some of the localised games were received in the target territories. The creative nature of this type of localisation is highlighted and the paper concludes with a reflection on the different approaches to localisation of Japanese games, domesticating or exoticising, and the need to strike the right balance taking several factors into account, such as the game genre, the preferences of the target players and the intended audience.

Main features of game localisation

Video games are technically complex interactive, multimedia and multimodal products designed to entertain, unlike utility software applications, which are designed to assist in specific tasks. Games are considered cultural artefacts due to their cinema-quality graphics and their universal narrative themes (Jenkins, 2006), which brings them closer in nature to other audiovisual products, such as movies. Video games are also designed to demand a high level of user interactivity. When playing, players interact as agents with the game, adopting a participatory role that goes beyond that of the mere spectator of a movie, which is a passive experience in terms of tangible user input. Game designers seek to foster an affective link between the player and the game, in order to make game play more engaging and to facilitate the player’s immersion in the game world.


Video games are made up of different assets, namely: a) in-game or onscreen text, such as the user interface (UI); b) audio assets; c) cinematic assets; d) art assets, and e) the manual and packaging (Chandler, 2005). Therefore they contain different text types, such as menus, help and system messages, narrative and descriptive passages, a script for dubbing and/or subtitling, and printed instruction manuals. Some games, such as flight simulators, also contain specialised terminology as well as technical instructions and tutorials.

Game localisation consists of adapting a game technically, linguistically and culturally in order to sell it successfully in other territories, and it involves complex technical, linguistic, cultural, legal and marketing processes. From a commercial perspective, if the business case for investing in localisation is clearly established, developers and publishers localise their games into different languages in order to reach the broadest possible audience, thereby maximising their return on investment. From a translation studies perspective, game localisation is a functional type of translation, the objective of which is to provide the target players with a game play experience similar to that of the players of the original. Therefore, the goal of game localisation is not simply to translate text but to translate experience (Di Marco, 2007; O’Hagan, 2007; Ashcraft, 2010). Ideally, players should enjoy a game as if it were an original designed for them. The obvious difficulty in measuring the success or otherwise of this activity highlights the need for comparative intercultural reception studies in this area². In game localisation, translators focus on the user, known as the player or gamer, terms that highlight the ludic nature of games, and on facilitating the immersion in and enjoyment of the game. Localisation can encompass the modification of any element of the original product to adapt it to the target market. In the case of video games, any asset —in-game text, art assets, audio and cinematic assets, game mechanics³, etc.— can be adapted, and translators are generally granted carte blanche to modify or recreate any element of the original that they deem necessary, or indeed to include new references to the target culture in their translations. This creative freedom has been referred to as transcreation (Mangiron & O’Hagan, 2006), highlighting the creative element that is part of the translator’s remit.
With regard to language, the use of correct and idiomatic language is crucial in facilitating players’ engagement with a game. A game containing spelling and grammatical mistakes gives a poor impression and can negatively affect immersion in the game. It can also send the message that the developers and publisher did not care enough about the target players to invest in a good quality localisation. This can in turn have a negative impact on sales of the localised version, given the

2. Currently there exists only one study of the reception of a localised Japanese game, carried out by O’Hagan (2009b), who is currently working on a project comparing the reception of Japanese games and their localised versions.
3. The term game mechanics describes the system of rules that governs game play in a video game.


rapid flow of ‘word-of-mouth’ information between players facilitated by the Internet and the abundant fan forum sites.

Brief History of the Localisation of Japanese Games

Japan has played an important role in the game industry since its very beginnings, both as a producer of hardware —by companies such as SEGA⁴ and Nintendo from the late 1970s and Sony from the mid 1990s— and of video game software. There are several renowned Japanese developers and publishers, such as Capcom, Konami, Namco Bandai Games⁵, Square-Enix⁶, Taito Corporation, and Tecmo Koei⁷. However, only a fraction of Japanese games are localised into other languages, while the rest are released exclusively on the domestic market⁸ (Game Investor Consulting, 2007). Only games that are expected to be well received abroad are localised. In particular, most Japanese games belonging to genres that are not popular outside Japan, such as dating simulation games, visual novels and hentai (adult content) games, are not released outside of Japan because the sales expectations abroad would not compensate for the investment required to localise them.

The first Japanese video games were arcade games designed for coin-operated machines. They were originally released in Japan and, if they became popular, they were subsequently licensed and exported to the United States, as happened with the shooter game Periscope⁹ (1968) and the renowned Space Invaders (1978). Those games contained small amounts of text, such as “START”, “PLAY”, “SCORE”, “NEW GAME”, etc., which was usually written in English in the original and therefore did not require translation.

4. SEGA withdrew from the console manufacturing arena in 2001 to focus on the development and publishing of game software, which continues to date.
5. Namco Bandai Games is the result of the 2005 merger of game developer and publisher Namco with toy and game producer and distributor Bandai.
6. Square-Enix was formed in 2003 when two of the biggest Japanese RPG developers and publishers, Square and Enix, merged.
7. Tecmo Koei was formed in 2009 after Japanese game developers and publishers Tecmo and Koei merged.
8. The non-localisation of some Japanese titles led to the emergence of fan translation of Japanese games in the late 1990s, a practice that is common today and grants access to Japanese games to fans outside Japan who cannot read Japanese. It is beyond the scope of this paper to analyse the phenomenon of game fan translation; for more information, see, for example, Díaz Cintas & Muñoz Sánchez (2006) or Muñoz Sánchez (2009).
9. When games are cited for the first time, the year of original release is indicated in the body of the text. In the Games section at the end of this paper, the name of the developer, followed by the name or names of the publishers, if different, is also stated. The symbol “~” after the year of release indicates that the series is still ongoing.



Figure 1. Screenshot of Space Invaders (© Taito) (Source: http://www.vgmuseum.com/arcade.htm)

The arcade game Pac-Man (1980) can be considered the first Japanese video game to be localised into English. Its original title was Puck-Man, a derivation of the onomatopoeia パクパク (paku-paku), which in turn is derived from the verb たべる (taberu, “to eat”), with the meaning of “to munch”. However, the United States publisher, Midway, decided to change the name from Puck-Man to Pac-Man because they feared vandalism and the possibility that the P could be altered into an F on the arcade machines (Kohler, 2005; O’Hagan and Mangiron, forthcoming). The United States title was then adopted for the international release of the game and for subsequent releases in Japan. Another important change to Pac-Man at a textual level was the adaptation of the names of the ghosts who chase the main character, which appeared in Romanised Japanese in the original version. The names were adapted for the United States version in order to make them more meaningful and appealing (Kohler, 2005; O’Hagan and Mangiron, forthcoming). The original ghosts were Oikake, Machibuse, Kimagure, and Otoboke, names that reflect their personality and that could roughly be translated as “Chaser”, “Ambusher”, “Moody”, and “Silly”. These were rendered in the United States version as “Shadow”, “Speedy”, “Bashful” and “Pokey”. In addition, the original ghosts also had the nicknames Akabei (“Red”), Pinky, Aosuke (“Blue”) and Guzuta (“Slow”), which became “Blinky”, “Pinky”, “Inky” and “Clyde”. This is the first instance in which the localised version of a Japanese game led to a change in the original one, a phenomenon that is not uncommon when localising Japanese games, but that is rarely observed in other types of translation, where the relation of equivalence between source and target text is unidirectional.

The trend to modify Japanese video games for the North American market in order to ensure their success continued in the 1980s and 1990s. In the arcade game Donkey Kong (1981), the main character, originally called ‘Jump-Man’, became ‘Mario’ in the United States version, named after the landlord of the Nintendo of America (NOA) offices at that time. During the localisation process NOA felt that the original name was not “catchy enough in English” (Kohler, 2005: 46). As happened with Pac-Man, the change for the United States version led to a change in subsequent



Japanese games featuring the same character, Mario, who went on to become one of the best-known and most iconic video game characters of all time.

As far as home console games are concerned, the early games had in-game text in English and a manual in Japanese, which was then translated into English. As console cartridges increased their data storage capacity in the mid 1980s, Japanese games started to be written in Japanese and localised in-house into English for the North American market. Only games that were successful in Japan were released abroad, and localisation was generally an afterthought, which meant that once games were localised they had to be reprogrammed to include the English text (Kohler, 2005). At this time the quality of the localisation was not the primary focus; it was often done by non-native speakers, and as a result the localised versions contained many grammatical and stylistic mistakes. For example, the game Ghosts’n Goblins (1985) contains several typographical errors, grammatical mistakes and unidiomatic sentences, such as “This room is an illusion and is a trap devisut [sic] by Satan. Go ahead dauntlessly! Make rapid progres [sic]!” and “Congraturation [sic] This story is happy end. Thank you”. Other examples can be found in games from that period, such as Zero Wing (1989). One particular sentence, “All your base are belong to us”, has generated a cult following amongst fans and is now proudly displayed on T-shirts, mugs, mouse pads, etc.

A number of games from this early period have been re-released in recent years for the Nintendo DS platform, and their English translations were corrected and updated, as nowadays Japanese publishers stress the importance of high quality localisation. Interestingly, however, when Final Fantasy IV (1991) was re-released in 2008, publisher Square-Enix decided to keep the sentence “You spoony bard!” as a translation for the Japanese “貴様!” (kisama), a rude and disrespectful way to say “you”. This translation had originally been criticised because it lacked idiomaticity, but it then became well-known mainly due to its comic effect.

The launch of the PlayStation in 1994, which used CD-ROM as its storage medium, allowed an increase in the volume of text and the quality of graphics in games. English continued to be the main language into which Japanese games were localised until the late 1990s and the beginning of the next decade, when some of the major Japanese developers and publishers started to localise their games into French, Italian, German, and Spanish (FIGS), although English was often used as a pivot language. Final Fantasy VII (1997) was the first game of this best-selling role-playing game (RPG) series to be localised into European languages using the English translation as a pivot. However, the localisation into the various languages was harshly criticised due to its poor quality. As an example, in the Spanish version, the term Party, which referred to the player’s team, was translated as “Fiesta” (celebration) and the message “Change party members” as “Cambiar miembros de la fiesta”. There were also



terminological inconsistencies related to object names and character names, overlapping text on screen, and truncations. The following screenshot illustrates some of these problems, such as the inconsistent spelling of one of the characters’ names, Sephiroth vs. Sefirot; the truncation of “Lista de habilid”, which should be “Lista de habilidades”, and the poor screen layout on the top right-hand side, where text is hidden by some icons.

Figure 2. Screenshot of the Spanish version of Final Fantasy VII (© Square) (Source: http://www.xkstation.com/vamosajugar/ff7/ff7.php)

As a result, developer and publisher Square switched to the in-house localisation model in order to more closely manage the localisation of the following instalments of the FF saga, translating directly from Japanese when possible (O’Hagan & Mangiron, 2004).

The release of the PlayStation 2 in 2000 had a considerable impact on localisation, as for the first time the game dialogue from the script could be recorded by human actors, bringing games closer to movies and introducing audiovisual translation practices, such as dubbing and subtitling, to the field of game localisation.

Nowadays, Japanese developers and publishers are fully aware of the importance of the quality of their localisations for selling their games successfully in Western markets. They do not simply want their games translated into English; they want them “written in English, free of stiff translation constraints” (Ashcraft, 2010, p. 13). In particular, Square-Enix’s approach to localisation has been widely praised by game critics and scholars for producing some of the best Japanese-to-English localisations (Consalvo, 2006; Parish, 2007), a remarkable achievement considering the poor quality of its first localisations.

A number of Japanese publishers, such as Square-Enix, Nintendo and Capcom, combine the in-house and out-sourcing models, while others outsource all of the translation related tasks to trusted third



party localisation vendors, such as Sony Computer Entertainment, and subsequently perform the quality assurance and testing process themselves (Wood, 2009). The in-house localisation model consists of gathering a project team of translators (both in-house translators and freelancers when required) on the premises of the developer or publisher, where they are managed by the localisation coordinator and have access to the original game, even if it is still under development. Translators are allowed to familiarise themselves with the game prior to its translation and are able to check it for contextual information as required. At Capcom, in addition to the localisers, an editor reviews the finished translation and polishes it in order to make it sound more natural in English (Gay, 2007).

The outsourcing model consists of handing over the translation, the quality assurance or the full localisation process to a specialised vendor. Japanese publishers usually provide localisation vendors with detailed game-related information —walkthroughs, cheats, information about the plot and characters, screenshots of the game, and glossaries— and sometimes even a playable copy of the game, or some clips of it if it is still under development (Ashcraft, 2010). Non-Japanese publishers, on the other hand, often do not provide localisers with significant background and contextual information. Some of them send spreadsheets containing text strings without detailed context information, a practice known in the game localisation industry as blind localisation (Bernal Mérino, 2008; Dietz, 2006, 2007), which can be the cause of numerous translation errors.
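The contrast between a context-free hand-off and a context-aware one can be sketched as follows. This is a minimal illustration only: the string identifiers, translator notes and glossary entries below are hypothetical inventions, not taken from any real localisation kit; only the Party/“Fiesta” mistranslation itself comes from the Final Fantasy VII case discussed above.

```python
# Hypothetical sketch of why "blind localisation" invites errors: the same
# English string can need different translations depending on context, so a
# bare string table is ambiguous for the translator.

# A "blind" hand-off: bare source strings, no context.
blind_strings = ["Party", "Save", "Back"]

# A context-aware hand-off keys each string by where it appears and adds a
# translator note, letting the translator pick the right sense.
contextual_strings = {
    "menu.team.label": {"source": "Party", "note": "the player's team of characters"},
    "menu.file.save":  {"source": "Save",  "note": "write game progress to disk"},
    "dialogue.rescue": {"source": "Save",  "note": "rescue a character"},
}

# A sense-based Spanish glossary (illustrative). Faced only with the bare
# string "Party" from blind_strings, a translator may plausibly choose
# "Fiesta" (celebration) instead of "Grupo" (team), as in Final Fantasy VII.
glossary_es = {
    "the player's team of characters": "Grupo",
    "write game progress to disk": "Guardar",
    "rescue a character": "Salvar",
}

def translate(entry):
    """Return the Spanish rendering selected via the translator note."""
    return glossary_es[entry["note"]]

for key, entry in contextual_strings.items():
    print(f"{key}: {entry['source']} -> {translate(entry)}")
```

The point of the sketch is simply that the two occurrences of “Save” resolve to different translations only because the hand-off carries a note per string; the blind list cannot distinguish them.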

Internationalisation Strategies for Japanese Video Games

Japanese games have traditionally been very popular in North America and Europe, and their success can be partly linked to the globalisation and internationalisation strategies adopted by Japanese gaming companies. Broadly speaking, four main internationalisation strategies seem to be applied to the design of Japanese games:

1) Designing games with a setting outside Japan

Japanese developers often set their games in the USA, as in the Dead Rising (2006~) and Resident Evil (1996~) series. These games usually do not contain references to Japanese culture because they are designed with an international audience in mind.

2) Designing games that exploit the popular image of Japanese culture in the West

This can be considered a form of self-exoticism, by means of which Japanese companies develop games that emphasise their “otherness” to appeal to foreign audiences. Game series such as Onimusha



(2001-2006), Tenchu (1998~) and Ninja Gaiden (1988~) rely on the Japanese archetypes of the ninja and the samurai as a key selling point. The game Okami (2006), which narrates how the Japanese goddess Amaterasu saved the land in the form of a white wolf, is another example of a game marked by its Japaneseness that was successful in North America and Europe. This indicates that games marked by their Japaneseness can also appeal to an international audience if the game play is engaging and immersive and their cultural content is accessible to the target players.

3) Designing culturally neutral or “odourless” games

According to media and cultural studies expert Koichi Iwabuchi (2002), the success of contemporary Japanese cultural products, such as manga —Japanese comics—, anime and video games, stems from the fact that they have been purposefully designed in a mukokuseki fashion —literally “without nationality, stateless”— i.e. culturally neutral or “odourless”, thereby concealing or toning down the traces of Japanese culture in order to make the products more appealing to a Western audience. In the case of video games, this can be achieved by setting them in imaginary worlds, as is the case for most RPGs of the Dragon Quest (1986~), Final Fantasy (1987~) and The Legend of Zelda (1986~) series. However, even Japanese games designed to be culturally neutral may require the adaptation of some content, as they are likely to reflect to some extent the cultural values and habits of their creators. One of the cultural aspects that often requires attention when localising a Japanese game is the body language of the characters. In Final Fantasy XI (Square, 2002) there is a cinematic scene in which an Elvaan prince sneezes while his men are gossiping about him. This is based on the Japanese folk belief that you sneeze when someone is talking about you behind your back, and as such this visual reference has a comic function in the original. However, the reference would not be understood in the localised versions, so the United States localisers tried to make it understandable and funny to their audience by establishing that a characteristic of the Elvaan people is that they sneeze when somebody is gossiping about them (Edge Online, 2006).

4) Designing games that deliberately mix elements of Japanese and United States culture

The global game industry is characterised by the way in which it intermixes Japanese and United States culture in its games “to a degree unseen in other media industries” (Consalvo, 2006, p. 120). As an example, Consalvo mentions the case of the Japanese RPG Final Fantasy X (2001), where Tidus, the male protagonist, looks like a Western surfer, while Yuna, his female counterpart, wears a kimono and behaves in a very Japanese way. This hybridisation is in fact also palpable in a number of games developed outside Japan which include anime-style character design, such as the WiiWare game Zombie Panic in Wonderland, developed by the Spanish company Akaoni Studio, which became a hit in Japan. The United States developed game Oni (2001) also uses anime-style art and themes and is set in



Japan, despite the fact that its game play is similar to that of a typical United States third-person shooter. Another interesting example of the hybridisation of games can be found in the international version of the Final Fantasy games released by Square-Enix. This version is released only in Japan and is largely based on the North American version, although it contains a number of additions, such as bonus mini-games or new cinematic scenes. The international version is localised from English back into Japanese and contains the United States audio voiceover, subtitled into Japanese. It is very popular in Japan, as it allows Japanese players to experience the game as non-Japanese players do, to see the main differences between the two versions, and to see what view of their culture is portrayed in the localised game. From a translation studies perspective, this is a rather unique occurrence that enables a dynamic cultural exchange between source and target texts and cultures, one richer than the unidirectional exchange traditionally assumed.

Cultural Adaptation and Transcreation of Japanese Video Games

This section focuses on the process of cultural adaptation, including transcreation, to which certain Japanese games have been subject. Cultural adaptation is a crucial stage of any localisation process, particularly if cultural issues have not been considered during the internationalisation process, and it can take place at both macro and micro level (O’Hagan & Mangiron, 2009).

Macro level adaptation

Modifications at macro level are often related to marketing strategies and can affect any aspect of game design, such as the game mechanics, the level of difficulty, the visuals, the story line, the script, and the title. For example, in the North American version of Chocobo Racing (1999), after assessing the feedback from a United States focus group, the developers reduced the level of difficulty for target players by strategically placing guard rails on the race course to prevent falls (Edge Online, 2006).

Marketing concerns also led to the application of censorship to Japanese games in the early days of the video game localisation industry. During the mid-1980s and early 1990s, Nintendo of America (NOA) was particularly renowned for its censorship practices. Most Japanese games released in North America underwent a thorough check for religious references, nudity, presence of alcohol or tobacco, violence, and bad language (Nintendo’s Censorship, n.d.). Some examples of the modifications made by NOA include covering a nude statue in Super Castlevania 4 (1991), removing red crosses from hospital signs in the game Earthbound (1994) and changing the name of a Russian character in Punch Out! (1984) from Vodka Drunkensky to Soda Popinsky (Nintendo’s Censorship, n.d.). This over-paternalistic attitude was due to the fact that Nintendo’s target audience in the United States at the time was children. With the establishment of the Entertainment Software Rating Board (ESRB) in the USA in 1994, game companies could, for the first time, target their games to different age groups, which eventually led to the relaxation and near disappearance of NOA’s censoring practices (Nintendo’s Censorship, n.d.).

Adapting the game’s box art is another marketing tool often used as part of the internationalisation process. As there are many games on shop shelves, an eye-catching cover can make a difference and act as a selling point in its own right, particularly for casual gamers.

Macro level adaptation is also used for games originally designed for the Japanese market which were not internationalised and were subsequently localised because they became best-sellers in Japan. This was the case of the simulation game Animal Crossing (2001), in which players assume the role of a new kid in town. The game was originally full of Japanese cultural references, such as holidays, clothing, and furniture depicting the Japanese way of life. Its cultural content was comprehensively adapted to United States culture; the game visuals were redesigned to portray the American lifestyle, with items such as a barbecue, a stage coach, a wagon wheel and a cow skull with horns (Nutt, 2008). The localised North American version was so successful that it was retranslated back into Japanese and marketed in Japan, with all the United States content, as Animal Crossing Plus (2003), where it was also a hit. This is another example of the flexibility and dynamic nature of current game localisation practices for Japanese games.

Another game that was remade for its North American release is the 2006 version of the dating simulation game Tokimeki Memorial (1994). The game became very popular in Japan and also achieved some popularity overseas despite not having been localised (O’Hagan, 2007). For this reason, Konami decided to release the game in North America, making it the first Japanese dating simulation game addressed to a mass audience in that territory. The dating theme was preserved, but all the game visuals were redesigned and the story was rewritten to adapt it to United States high school life. The title was also changed to Brooktown High: Senior Year (2007).



Carmen Mangiron

Figure 3. Screenshots of Tokimeki Memorial (left) and Brooktown High (right) (© Konami) (Sources: http://robert1986.blog82.fc2.com/blog-entry-655.html and http://uk.psp.ign.com/articles/791/791371p1.html)

Despite the extensive redesign, the game was considered mediocre by game review sites such as GameSpot and IGN. Gamers and reviewers criticised the character design and stated that they would have preferred a straightforward translation of the original game to a complete remake (see, for example, the comments by Bayley, 2007, and Nargrakhan, 2010). It should be highlighted that while a remake is usually not considered a translation in the proper sense, it does fit the localisation paradigm, as localisation envisages the modification of any aspect of the original product that needs to be changed. However, the internationalisation and localisation approaches adopted in this instance were not successful, as the United States version did not provide players with a game play experience similar to that of the original: they found it boring and aesthetically unappealing. Judging from gamers’ comments, a more exoticising strategy highlighting the Japaneseness of the game and targeted at fans of Japanese culture would have been more effective, considering that the dating simulation genre is not particularly popular in the United States. Schäler (2006, p. 44) refers to this type of exoticising localisation as “reversed localisation”, which he defines as intentionally keeping or even introducing the exotic, strange or unfamiliar in the localised versions of a product in order to make it more interesting and appealing for the target audience.

Although macro level adaptation is ultimately the decision of the developer or the publisher, if localisers come across a cultural issue that needs to be addressed at a macro level, such as a character’s design or body language, the issue should be reported to the localisation coordinator, who will liaise with the publisher, stating the issue and suggesting possible solutions. It is then the publisher’s decision to make the necessary changes.

Micro level adaptation

Micro level adaptation is performed at a textual level by translators during the translation phase. Different territories have different cultural values and expectations, which are influenced by their history, ethnicity, political system, habits, traditions, as well as religious and moral values. When localising between distant cultures, such as Japan and the United States, the cultural gap between the original and target audiences is significant and, consequently, humour, cultural references, and intertextual allusions often need to be modified. When the culture-specific elements present in a Japanese game are confusing, obscure, offensive, or simply not as funny as intended for target players, it is advisable to neutralise, adapt or omit them, taking into account their function in the game and aiming to achieve a similar function in the target version with an appropriate target culture reference.

For example, in the game Chocobo Racing, the Japanese folktale characters Momotaro and Kiji, who would be very familiar to Japanese players, were replaced by Hansel and Gretel in the North American version in order to bring the game closer to United States players by using a similar intertextual reference in the target culture (Parish, 2007). The Phoenix Wright series (2001-2007), which describes the start of a young lawyer’s career and his first legal battles, contains many references to Japanese pop culture that were adapted to United States pop culture. The localisation team included new jokes and United States cultural references, which were appreciated by critics and target players alike, who praised the localisation for “not simply translating the text, but adding surprisingly biting, tongue-in-cheek jokes, and unexpected pop culture references” (Yoon, 2007). In this case, a domesticating strategy was effective and appealed to target players, even though the game belongs to the visual novel type of adventure game, a genre that is not particularly popular in the United States. This may indicate that the quality of the localisation is also a key factor in the success of a game belonging to a genre traditionally not popular in the target territories, although more research would be needed to confirm this hypothesis.

Another issue often requiring attention when translating Japanese games is the use of humour based on sexual innuendo. Compared to some Western cultures, Japanese culture has a relatively unselfconscious attitude towards references to sex, homosexuality and transgenderism, which are often used to add a humorous touch in manga, anime, and video games. However, this type of humour is often not deemed acceptable for young audiences in North America and Europe. As a result, such references have to be adapted or removed to avoid the risk of obtaining a rating for an older age group in the target territories, which would imply a reduction in the potential market size. As an example, in Final Fantasy XII (2006) there is a secondary character, a thief member of the Seeq race, who is a transgender character in the Japanese original but was changed into a female character in the North American and European versions.

Transcreation

As already mentioned, when localising Japanese games, localisers are usually granted creative freedom, which they tend to exercise beyond the boundaries of ‘pure’ translation, entering the realm of transcreation by including new target culture references and humour in the localised versions (Mangiron & O’Hagan, 2006). Examples of transcreation can be found in the early attempts at video game translation. In Super Mario Bros. 3 (1988), at the conclusion of the game, when Mario rescues the Princess, she thanks him with the message “ありがとう! やっと きのこの せかいに へいわが もどりました。おしまいっ!” (Arigato! Yatto kinoko no sekai ni heiwa ga modorimashita. Oshimai!, “Thank you! Peace has finally been restored to the Mushroom world. The end!”). This was translated as “Thank you! But our Princess is in another castle! ...Just kidding. Ha ha ha! Bye bye.” The translators’ joke plays on the message Mario receives whenever he clears a level and has to continue his search for the Princess (Kohler, 2005; O’Hagan & Mangiron, forthcoming). The United States localisers decided to translate this final message creatively, departing from the original and injecting some humour into the translation to make the game more enjoyable for their target audience. The trend to transcreate and increase the humour content in localised Japanese games is also present in the Final Fantasy series (see Mangiron & O’Hagan, 2006; Mangiron, 2010), and it seems to be aligned with the higher occurrence of humour in United States films and informal conversations, as outlined in a study by the Japanese scholar Takekuro (2006).

Another transcreation technique often used by United States localisers translating from Japanese is the introduction of accents and idiolects absent from the original version for characterisation purposes (Mangiron & O’Hagan, 2006). For example, in Final Fantasy X (2001), one of the main characters, Wakka, who does not speak with any particular accent in the Japanese original, was characterised as a Hawaiian in the United States version, as this characterisation suited his looks and the fact that he lives on an island. When intervening like this, translators are not simply translating the source text; they are creatively adding new target culture elements to it in order to make it more appealing for their audience, an unusual phenomenon in other types of translation.

Nonetheless, transcreation is not always well received by target players. In the Spanish version of the simulation RPG Little King Story (2009), localisers decided to use colloquial expressions popularised by the Spanish comedian Chiquito de la Calzada in the mid-1990s. They also introduced references to the political and economic situation in Spain, applying a transcreating and domesticating approach to their translation. The general reaction of Spanish gamers was negative: after repeated exposure to the game they found the humorous expressions tiresome, and they felt it was strange to hear them in a game set in an imaginary world that is aesthetically very Japanese.10 As with Brooktown High, an excessive degree of domestication had a negative impact on Spanish players, as it interrupted their willing suspension of disbelief, thereby negatively affecting their game play experience. For this reason, when localising Japanese games, it is important to strike a balance between domesticating and exoticising approaches, considering factors such as the game genre, the gaming culture, the ratings system and the cultural values of the target players.

Conclusion

Japan has traditionally been one of the key players in the video game industry, and its global success can be attributed to a great extent to the GILT practices adopted by Japanese developers and publishers. To date, barring a number of exceptions, the predominant internationalisation strategy for Japanese games has consisted of designing culturally odourless products or products that carefully intermix Japanese elements —mainly the aesthetics— with United States elements. In spite of this, many Japanese games still contain some traces of Japanese culture, which are often adapted, neutralised or removed during the localisation process by applying a domesticating translation strategy. Modifications can be made both at macro level (game design: visuals, story line, game mechanics) and at micro level (script) in order to make Japanese games more appealing for international audiences.

From a translation studies perspective, the brief of games localisation is to create a localised version that provides a similar game play experience to target players, engages them and facilitates their immersion in the game. To this end, localisers of Japanese games are granted the freedom to modify, rewrite and even add new content and target culture references in the localised versions, becoming co-authors and unleashing their creativity.

However, an extreme domestication approach is not always the best strategy, as has been shown in the cases of the United States version of Tokimeki Memorial and the Spanish version of Little King Story, whose poor reception suggests that players would have preferred a more exoticising approach, that is, a reverse localisation that preserved some of the Japaneseness of the original. For this reason, when localising a Japanese game it is advisable to strive for a balance between a domesticating approach and an exoticising approach, considering the features and genre of the game, as well as the audience to which it is addressed. An excessively domesticating approach may not be successful if it does not appeal to the target audience and breaks their suspension of disbelief. While it is advisable to adapt those cultural elements that can negatively affect the comprehension and enjoyment of a game by target players, preserving elements of Japanese culture that give colour to the localised version may also appeal to target players and can contribute to the success of a game in the target territories, as in the case of games such as Okami. In Schäler’s (2006, p. 45) words: “If employed wisely, keeping the strange and sometimes challenging — rather than trying to hide it — presents an opportunity to consumers of digital content to learn more about the origin of this content, about the cultural and linguistic context it was created in.”

10 See, for example, the comments at http://ellaberintodegalious.blogspot.com/2009/10/little-kings-story.html; http://akihabarablues.com/2009/08/30/%C2%BFquien-ha-aprobado-la-traduccion-de-little-kings-story/ and http://www.elpixelilustre.com/2010/01/analisis-little-kings-story.html.

Translation studies and the game industry as a whole would undoubtedly benefit from further research into game localisation, a highly creative, customisable and collaborative type of translation in which localisers are often part-authors, especially when translating from Japanese. Future comparative intercultural reception studies will give developers and publishers useful information about how their games are received in target territories and about how that reception could be improved. From the game companies’ perspective, this could potentially lead to larger sales in target territories, and from the players’ perspective it could provide a more immersive and satisfying gaming experience. The game is not over yet.



Localisation of Japanese Video Games

References

2010 total consumer spend on all games content in the United States estimated between $15.4 to $15.6 billion. (2011). Retrieved from NPD: http://www.npd.com/press/releases/press_110113.html
Ashcraft, B. (2010). The Surprising Ways Japanese Games Are Changed For Westerners. Kotaku. Retrieved from http://www.kotaku.com.au/2010/11/the-surprising-ways-japanese-games-are-changed-for-americans/
Bayley, S. (2007). Brooktown High’s girls become hotter? Joystiq. Retrieved from http://www.joystiq.com/2007/02/17/brooktown-highs-girls-become-hotter/
Bernal-Merino, M. Á. (2008). Creativity in the translation of video games. Quaderns de Filologia. Estudis literaris, XIII, 57-70.
Chandler, H. (2005). The Game Localization Handbook. Massachusetts: Charles River Media.
Chandler, H. (2006). Taking Video Games Global: An Interview with Heather Chandler. Retrieved from http://bytelevel.com/global/game_globalization.html
Consalvo, M. (2006). Console video games and global corporations: Creating a hybrid culture. New Media and Society, 8(1), 117-137.
Cronin, M. (2003). Translation and Globalization. London/New York: Routledge.
Di Marco, F. (2007). Cultural Localization: Orientation and Disorientation in Japanese Video Games. Revista Tradumàtica, 5. Retrieved from http://www.fti.uab.es/tradumatica/revista/num5/articles/06/06art.htm
Díaz Cintas, J., & Muñoz Sánchez, P. (2006). Fansubs: Audiovisual Translation in an Amateur Environment. Jostrans: The Journal of Specialised Translation, 6. Retrieved from http://www.jostrans.org/issue06/art_diaz_munoz.php
Dietz, F. (2006). Issues in localizing computer games. In K. Dunne (Ed.), Perspectives on Localization (pp. 121-134). Amsterdam/Philadelphia: John Benjamins.
Dietz, F. (2007). How Difficult Can That Be? The Work of Computer and Video Game Localisation. Revista Tradumàtica, 5. Retrieved from http://www.fti.uab.es/tradumatica/revista/num5/articles/04/04art.htm
Gay, B. (2007). Brandon Gay of Capcom Japan, Translation Team. Retrieved from http://blogs.capcomusa.com/blogs/scarlett.php/2007/02/05/interview_brandon_gay_of_capcom_japan_tr
Iwabuchi, K. (2002). Recentering Globalization: Popular Culture and Japanese Transnationalism. Durham, NC: Duke University Press.
Jenkins, H. (2006). Convergence Culture: Where Old and New Media Collide. New York: NYU Press.
Kelts, R. (2007). Japanamerica: How Japanese Pop Culture has Invaded the United States. New York: Palgrave Macmillan.
Kohler, C. (2005). Power-Up: How Japanese Video Games Gave the World an Extra Life. Indianapolis: Brady Games.
Mangiron, C. (2010). The Importance of Not Being Earnest: Translating Humour in Video Games. In D. Chiaro (Ed.), Translation, Humour and the Media. London/New York: Continuum.
Mangiron, C., & O’Hagan, M. (2006). Game localisation: Unleashing imagination with ‘restricted’ translation. The Journal of Specialised Translation, 6, 10-21. Retrieved from http://www.jostrans.org/issue06/art_ohagan.pdf
Muñoz Sánchez, P. (2009). Video Game Localisation for Fans by Fans: The Case of Romhacking. The Journal of Internationalisation and Localisation, 1, 168-185. Retrieved from http://www.lessius.eu/jial
Nargrakhan. (2010). Letter campaign for TM4 in English. GameFAQs. Retrieved from http://www.gamefaqs.com/boards/974887-tokimeki-memorial-4/53037663
Nintendo’s Censorship. (n.d.). Retrieved from http://www.filibustercartoons.com/Nintendo.php
O’Hagan, M. (2007). Video games as a new domain for translation research: From translating text to translating experience. Revista Tradumàtica, 5. Retrieved from http://www.fti.uab.es/tradumatica/revista/num5/articles/09/09.pdf
O’Hagan, M. (2009a). Evolution of User-generated Translation: Fansubs, Translation Hacking and Crowdsourcing. The Journal of Internationalisation and Localisation, 1, 94-121.
O’Hagan, M. (2009b). Towards a cross-cultural game design: An explorative study in understanding the player experience of a localised Japanese video game. The Journal of Specialised Translation, 11, 211-233. Retrieved from http://www.lessius.eu/jial/documents/JIAL_2009_1_2009_APA.pdf
O’Hagan, M., & Mangiron, C. (2004). Games Localization: When Arigato Gets Lost in Translation. New Zealand Game Developers Conference Proceedings (pp. 57-62). Otago: University of Otago.
O’Hagan, M., & Mangiron, C. (2009). Turning 花鳥風月 into a Painkiller: Extreme Cultural Adaptation or “Fragrant” Approach? Taipei: LISA Forum Asia.
O’Hagan, M., & Mangiron, C. (forthcoming). Game Localization: Translating for the Global Digital Entertainment Industry.
Parish, J. (2007). GDC 2007: The Square-Enix Approach to Localization: How Final Fantasy went from spoony to sublime. 1UP.com. Retrieved from http://www.1up.com/do/newsStory?cId=3157937
Playing for Keeps: Challenges to Sustaining a World Class UK Games Sector. (2007). Games Investor Consulting. Retrieved from http://www.gamesinvestor.com/downloads/Playing%20for%20Keeps%20Games%20territory%20profiles.pdf
Pym, A. (2009). Exploring Translation Theories. New York: Routledge.
Q&A - Square Enix’s Richard Honeywood. (2006). Edge Online. Retrieved from http://www.edgeonline.co.uk/archives/2006/02/qa_square_enixs_1.php
Schäler, R. (2006, October/November). The appeal of the exotic: Localization in reverse. MultiLingual, 83, 42-45.
Takekuro, M. (2006). Conversational Jokes in Japanese and English. In J. M. Davies (Ed.), Understanding Humor in Japan (pp. 85-98). Detroit: Wayne State University Press.
The Entertainment Software Association. (2009). Industry Facts. Retrieved from http://www.theesa.com/facts/index.asp
Video gamers in Europe 2010. (2010). GameVision Europe. Retrieved from http://www.isfe-eu.org/index.php?PHPSESSID=2n2jcg2haun83jug21kdpb6kk3&oidit=T001:662b16536388a7260921599321365911
Wood, V. (2009). Interview with Miguel Bernal. The Journal of Specialised Translation, 11. Retrieved from http://www.jostrans.org/issue11/int_sony_ent.php
Yoon, A. (2007). Phoenix Wright: Ace Attorney - Justice For All. Anime News Network. Retrieved from http://www.animenewsnetwork.com/review/game/nintendo-ds/phoenix-wright-ace-attorney-justice-for-all



Games

Animal Crossing (Nintendo, 2001)
Animal Crossing Plus (Nintendo, 2003)
Brain Training series (Nintendo, 2005~)
Brooktown High: Senior Year (Backbone Entertainment-Konami, 2007)
Chocobo Racing (Square, 1999)
Dead Rising (Capcom, 2006)
Donkey Kong (Nintendo, 1981)
Dragon Quest (Armor Project-Enix/Square-Enix, 1986~)
Earthbound (Ape & HAL Laboratory, 1994)
Final Fantasy IV (Square, 1991)
Final Fantasy VII (Square, 1997)
Final Fantasy VIII (Square, 1999)
Final Fantasy X (Square, 2001)
Final Fantasy XI (Square, 2002)
Final Fantasy XII (Square-Enix, 2006)
Ghosts’n Goblins (Capcom, 1985)
Little King Story (Cing & Town Factory-Marvelous Entertainment, 2009)
Ninja Gaiden (Tecmo, 1988~)
No More Heroes (Grasshopper Manufacture-Rising Star, 2008)
Oni (Bungie-Gathering of Developers (PC); Rockstar Toronto-Rockstar Games (PS2), 2001)
Onimusha (Capcom, 2001-2006)
Pac-Man (Namco, 1980)
Periscope (SEGA, 1968)
Phoenix Wright (Capcom, 2001~2007)
Professor Layton (Level 5-Nintendo, 2007~)
Project Zero (Tecmo, 2001)
Punch Out! (Nintendo, 1984)
Resident Evil series (Capcom, 1996~)
Space Invaders (Taito, 1978)
Super Castlevania 4 (Konami, 1991)
Super Mario Bros. 3 (Nintendo, 1988)
Tenchu (Acquire, 1998~)
The Sims (Maxis-Electronic Arts, 2000~)
Tokimeki Memorial (Konami, 1994/2006)
Zero Wing (Toaplan-Taito, 1989)
Zombie Panic in Wonderland (Akaoni Studio, 2010)



The Language Interoperability Portfolio (Linport) Project: Towards an Open, Nonproprietary Format for Packaging Translation Materials

Alan Melby, Arle Lommel, Nathan Rasmussen, and Jason Housley 4064 JFSB Brigham Young University, Provo, UT 84602, USA

akmtrg@byu.edu, arle.lommel@gmail.com, volodymyr.velyky@gmail.com, housleyjk@gmail.com

Abstract ISO standards for intermodal shipping containers have dramatically improved efficiency within the shipping industry worldwide. The translation/localization industry needs an analogous standard for translation projects and tasks. There are a variety of proprietary translation formats that allow materials relevant to a translation project (the source text, various resources such as translation memory files, etc.) to be put into one or more packages and sent to a translator. The translator can then use the same format to return the requested information, such as the translation. The objective of the Linport Project is to define an open, nonproprietary format for describing translation projects and creating translation packages, plus transmission and remote-access mechanisms needed to support implementation of the format. Linport stands for Language Interoperability Portfolio, where a portfolio is the description of a translation/localization project. An important feature of the Linport Project is structured translation specifications compatible with the system of parameters in recently published ISO/TS 11669.

Keywords: translation, localization, interoperability, standards, specifications, tools




Introduction

This article proposes the development of a standard translation container format for translation projects and tasks within a project, as well as an associated remote-access protocol (e.g., a RESTful web service) and programming-language-specific APIs for accessing specifications and payloads (contents) within containers. The need for such a container format was discussed at the LISA Open Standards Summit on March 1, 2011, just as the Localization Industry Standards Association (LISA) ceased operations. Two of the authors (Melby and Lommel) were asked by the participants to draft a proposal for this format, and the Container Project was born. Shortly after the Container Project was established, we learned of a related effort, called the Multilingual Electronic Dossier, being developed under the auspices of the European Commission’s Directorate-General for Translation (DGT). Because of the similarities between the two projects and the desire to avoid fragmentation of efforts, the two projects were merged in July 2011, and development of a joint project, the Language Interoperability Portfolio (Linport) Project, began. A portfolio is a description of an entire translation project, which can be broken down into a set of specific tasks. Linport was initially established as a joint effort of the DGT, the Brigham Young University Translation Research Group (TRG) and the Globalization and Localization Association (GALA). Linport has since expanded to include supporters and participants from many companies and organizations (see the Linport site, http://www.linport.org, for a current list). When this article was submitted (October 2011), discussions with the Interoperability Now! initiative were just beginning, and the results of these discussions will be reported in a later article.
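Since the container format had not yet been defined when this article was written, the following Python sketch is purely illustrative of what a programming-language-specific API for accessing a portfolio might look like. The class name, method names, and the specs/ and payload/ directory layout are all hypothetical and are not part of any Linport specification; the only assumption made is that a portfolio is packaged as an ordinary ZIP archive.

```python
import zipfile

class Portfolio:
    """Hypothetical reader for a portfolio stored as a ZIP archive."""

    def __init__(self, path):
        self._zip = zipfile.ZipFile(path)

    def specifications(self):
        # Assumes translation specifications are stored under specs/.
        return [n for n in self._zip.namelist() if n.startswith("specs/")]

    def payloads(self):
        # Assumes translatable materials are stored under payload/.
        return [n for n in self._zip.namelist() if n.startswith("payload/")]

    def read(self, name):
        # Returns the raw bytes of one member file.
        return self._zip.read(name)
```

A tool built on such an API could open any portfolio, list its specifications and payloads, and hand each payload to the appropriate processor, without caring how the portfolio reached it (e-mail, FTP, or a web service).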

Why a Container?

One of the consistent problems faced by the translation and localization industry is that material to be translated is transmitted in many different fashions, often with incomplete or inadequate instructions for how the project is to be completed. In some ways this is analogous to a problem faced by the shipping industry prior to the 1950s, when goods were shipped in a variety of containers, ranging in size from small shipping trunks to large containers of various dimensions (Yewell, 2011). Using multiple kinds of containers to transport goods (“payloads”) meant that teams of workers were needed to manually move cargo in small batches from one place to another whenever the method of shipping changed, e.g., when material was moved from a ship to a train. This need to load crates manually made transportation of goods very expensive in real terms, both because of the labor involved and because of the very real risks of damage to or loss of the payload, misdirection of individual containers, and general human error.



The development of ISO standards for the construction and marking of intermodal11 shipping containers (ISO 688, 790, 1161, and 1897) has helped to alleviate these problems, because standard containers can now be used to transport a variety of payloads by road, rail, or water, regardless of what those payloads are. For example, a container of lamps is processed and shipped in the same way as a container full of power saws. Because the dimensions of the containers are standardized and they share common attachment points for securing them to vehicles or to each other, any vehicle designed to use the relevant ISO standards can carry ISO containers without manual unloading and reloading of their contents: the containers themselves are moved as a unit between vehicles. As a result, real freight costs have declined considerably and volumes have increased dramatically.

As in the days before ISO standards for shipping containers, there is currently great inconsistency in the organization and transmission of information about translation projects. TSPs (translation service providers) spend a significant amount of manual effort manipulating files, clarifying instructions, and verifying that files are moved from place to place correctly (using a variety of methods, including e-mail, physical media, FTP, and web portals). Much effort is involved in ensuring that materials are translated according to clients’ expectations. Smith Yewell, CEO of Welocalize, explained the impact of these issues as follows:

The lack of interoperability and standard metadata cost [Welocalize] approximately $3.5 million in 2010. The cost was derived mainly from non-billable administrative tasks associated with incompatible systems, missing and non-standard information, and additional engineering and project management time related to a lack of standard data-exchange. Extrapolated across the industry, this figure shows that […] standardization could deliver immediate financial benefit [to the industry]. (S. Yewell, unpublished presentation, Boston, March 1, 2011)

As an example of the sorts of issues that contribute to these costs, a freelance translator may be contracted via e-mail to translate marketing survey responses stored in an online repository. Instructions for accessing the repository are sent separately from any reference materials, which are given in later e-mails. To interact with quality control personnel, the translator is asked to correspond through an instant messaging service. This approach means that important information pertaining to a project is split between three locations: e-mail, instant messages, and the online repository. This type of system often leads to confusion concerning the details, management, and evaluation of the project, especially if multiple parties are involved on either the requester or provider side of the business relationship.

11. i.e., designed for different shipping modalities, such as ship, rail, or truck.


Alan Melby, Arle Lommel, Nathan Rasmussen, and Jason Housley

Containers vs. Payloads

In this discussion, it is useful to consider two layers to be addressed in standardizing content translation/localization, parallel to those discussed above: the container itself and the payload. The container layer, which this article focuses on, refers to the overall “wrapper” or architecture in which relevant translation and localization data are transmitted. For example, a simple ZIP archive can contain any sort of digital content, and any tool that supports the ZIP archive format can open it, regardless of the particular contents, although support for the ZIP format obviously does not mean that the tool will be able to process the contents of a ZIP archive. Thus, the ZIP format is a type of container, in the sense used here.

The second layer, the payload, refers to the contents (such as spreadsheet, plain text, graphics, or media files) within the container. Standardization of both layers is important, but almost all localization standards developed over the last fifteen years have focused on payload standardization, leaving the container layer undefined.

If we look at the payload layer, a partial solution to the problems described above can be found in the use of a standard format for the payload, such as XLIFF, which allows localizable material to be extracted and transmitted in a standardized manner. When XLIFF is used consistently, it simplifies the process of obtaining localizable material and ensuring the material’s completeness, but it does not ensure that files are sent and received properly, nor that instructions for the translation process are correctly transmitted. It also does not address the needs of all translation tasks (e.g., desktop publishing tasks or graphics localization that go beyond the translation of strings). So while XLIFF presents a tremendous benefit for users, it does not eliminate many of the manual issues associated with the translation process. A container format and an XLIFF profile are needed.

A portfolio is thus a container for an entire project, along with its associated payload. Ultimately, full interoperability will require standardization of both the container and payload layers. Standardization of just one layer would deliver benefits, but would leave significant barriers to full interoperability in place. However, even if payloads are not yet fully standardized, a container can still offer considerable benefit. For example, if a requester has a number of files in Adobe Photoshop format and lacks the tools or skills to convert them to a standard payload format such as a well-defined XLIFF file, the provider would still benefit from receiving those files in a standard container format rather than as many separate e-mail attachments, or having to download them separately from an FTP site. When non-standardized payloads are delivered in a standard container, a significant portion of processing can still be automated.
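As an illustration of this point, the following Python sketch packs non-standardized payload files into a ZIP-based container together with a small manifest. The file layout, manifest fields, and names used here are invented for illustration only; they are not part of any Linport specification.

```python
import io
import json
import zipfile

def build_portfolio(files, manifest):
    """Pack payload files and a manifest into an in-memory ZIP container."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("manifest.json", json.dumps(manifest))
        for name, data in files.items():
            zf.writestr("payload/" + name, data)
    return buf.getvalue()

def list_payload(container_bytes):
    """A receiving tool can enumerate payload files without manual handling."""
    with zipfile.ZipFile(io.BytesIO(container_bytes)) as zf:
        manifest = json.loads(zf.read("manifest.json"))
        names = [n for n in zf.namelist() if n.startswith("payload/")]
    return manifest, names

# Non-standard payloads (here, a Photoshop file) still travel in a standard wrapper.
container = build_portfolio(
    {"brochure.psd": b"...binary...", "notes.txt": b"Use EU date formats."},
    {"portfolio_id": "example-001", "source_language": "en", "target_language": "de"},
)
manifest, names = list_payload(container)
```

Because the wrapper is predictable, the receiving side can locate the manifest and route each payload file automatically, even when the payload formats themselves remain proprietary.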



Linport Project

A Critical Business Issue for the Translation Industry

By some estimates, including those first reported by Reinhard Schäler of the University of Limerick at ASLIB in 2002 (Schäler, 2002) and those anecdotally confirmed at the first LISA Open Standards Summit in Boston, Massachusetts (2011), the costs of typical translation projects break down approximately as follows: 50% TSP overhead (project management, facilities, file handling, etc.); 30% payments to translators; 20% TSP profit. Furthermore, a survey conducted by Welocalize of its translators in 2011 (S. Yewell, personal communication, March 1, 2011) found that individual translators are in a similar situation: about one third to one half of their time is spent on non-translation tasks. As shown in Figure 1, these results indicate that only about 20% of the cost of a translation project goes to translation itself:

Figure 1. Costs of Translation (Schäler, 2002; S. Yewell, personal communication, March 1, 2011).

If these results are typical, about 60% of translation costs are spent on non-translation tasks that support translation, i.e., TSP and translator overhead. While this overhead cannot be eliminated entirely, automation and the adoption of standard formats can help reduce it, in some cases by up to 75% (A. Zydron, personal communication). Achieving this goal requires adherence to standard formats from company to company, and any variance from these formats could neutralize the potential gains. The business goal of the Linport project is to reduce the large portion of effort and cost currently spent on this overhead while simultaneously allowing for more flexibility in workflows. By addressing the root causes of these expenses, Linport aims to eliminate manual tasks, thereby increasing the productivity of TSPs and translators and the value of their efforts.

Not only would improving file handling and reducing the manual overhead discussed here help reduce costs, it would also enable TSPs to handle projects on a much larger scale than their current capability allows. The reduction in transaction costs would increase efficiency and relieve the manual bottlenecks in project management that make current models unscalable. In the words of Paula Shannon, Vice President at Lionbridge, a leading translation provider, we have reached the point where translation projects are “beyond human scale” (personal communication). Just as the shipping industry transformed itself to the point where one cargo ship today can carry as much as an entire fleet of cargo ships in 1900, so too the translation industry needs to transform to meet future needs and increase its efficiency.

An Open Nonproprietary Translation Container Format

Because current manual, process- and personnel-driven methods cannot scale to meet increasing demands for translation volume while simultaneously meeting requirements for quality and speed, the translation industry requires a container format that can contribute to full interoperability in the supply chain. Ideally, it would also offer the potential to eliminate manual processes that do not add value to translation.

The need to package all of the materials necessary for a translation project (the source text and any terminology, translation memory, or other reference files, etc.) in one place is so clear that most translation environment tools already provide their own proprietary formats for bundling translation materials, usually ZIP archives containing tool-specific resources. These formats are generally not compatible with one another, meaning that they lock users into tool-specific ecosystems. This lock-in, whether intentional or not, has a number of business consequences:

1) Although tool-specific formats allow TSPs to package materials for convenience, other individuals in the chain, from authoring to publishing, may not be able to process a particular proprietary format. As a result, the need for manual handling of files remains unaddressed outside the realm of what an individual tool can handle.

2) TSPs and their clients are limited in their selection of translators to those who happen to use a particular tool. If, for instance, a TSP uses tools from one company but the best translator for the job uses tools from another tool developer, he or she may not be able to accept the job, leaving the TSP to find a less-qualified translator or to use manual processes.

3) TSPs and freelance translators often have to maintain multiple expensive tool sets and maintain the skills needed to use them, taking time away from translation itself.

4) TSPs are not able to mix tools within the localization process, and tool developers are forced to develop “solutions” that cover the entire translation process rather than focusing on their particular strengths.

While a translation container format will not eliminate all manual processing and management steps or completely eliminate lock-in, it will greatly reduce them by providing a standard way for translation tools to interact with the resources in the portfolio. If the portfolio’s container structure is flexible but well defined, the tools associated with it will know how to interpret the contents, meaning that manual intervention would be required only when strictly necessary. (For example, the portfolio might contain instructions on how to obtain materials at a secure facility, an inherently manual task, but these instructions would persist in the actual portfolio, eliminating the need to pass separate e-mails or messages.)

The translation container should encourage, but not require, the use of other standard exchange formats for the payload, such as TMX, TBX, XLIFF, and SRX. Specific profiles (more constrained subsets of the overall container format), however, might mandate the use of specific formats in order to achieve greater interoperability. However, the overall format must be flexible enough to accommodate a variety of user scenarios, ranging from an individual looking for a quote to translate a batch of media files (a relatively unconstrained instance) to more complex (and constrained) cases in which tools would expect to see content only in specified formats, as discussed previously.

The use of the translation container format applies regardless of the tools to be used. The Linport container should provide a standard mechanism for the transmission of translatable content together with other resources needed to facilitate the translation and localization process. The intention is that a complete and valid portfolio (an instance of a container and its associated payloads) should contain or reference all of the materials and project data needed to fully process a transaction, thus minimizing the need for manual intervention or negotiation between the TSP and the client after the start of the transaction. We believe that a translation container would be a scalable format suitable for translation requests, ranging from just a few words to hundreds of thousands of words, using the same general structure.




Portfolio Structure

A portfolio should have a well-defined structure. The exact structure is a subject of ongoing discussion within the Linport committee. At a minimum, however, the Linport portfolio structure will include the following components:

1. Portfolio metadata
   a. Profile (specification of the type of Linport object)
   b. Portfolio ID
   c. Contact details (to obtain clarification from the sender)
2. Source content to be translated/localized
3. Target content
4. Translation Project Specifications (based on ISO/TS 11669)
5. Reference materials
   a. Human-oriented (style guides, notes, relevant background material, etc., intended for human use)
   b. Machine-oriented (a terminology database, translation memory, etc., intended for machine processing and use)
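To make the component list concrete, here is a minimal Python sketch that checks a portfolio (modeled simply as a dictionary) for the five top-level components. The key names are illustrative assumptions for this sketch, not identifiers defined by Linport.

```python
# A minimal completeness check for the five top-level portfolio components.
# The key names below are illustrative, not defined by any Linport specification.
REQUIRED_COMPONENTS = [
    "metadata",             # profile, portfolio ID, contact details
    "source_content",       # material to be translated/localized
    "target_content",       # translations (may start empty)
    "specifications",       # translation project specifications (ISO/TS 11669)
    "reference_materials",  # human- and machine-oriented reference files
]

def missing_components(portfolio):
    """Return the names of required components absent from a portfolio."""
    return [c for c in REQUIRED_COMPONENTS if c not in portfolio]

draft = {
    "metadata": {"profile": "bilingual", "portfolio_id": "p-42"},
    "source_content": ["survey.docx"],
}
missing = missing_components(draft)
# missing == ['target_content', 'specifications', 'reference_materials']
```

A tool building portfolios could run such a check before transmission, so that an incomplete package never reaches the provider in the first place.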

While most of these items are relatively self-explanatory, standardized translation project specifications are a key component of the Linport format that will be unfamiliar to most users. The following section describes them in detail.

Standardized Translation Project Specifications

The Linport format is intended to include full project specifications (an important part of the project metadata) describing the various aspects of the agreement between the requester and the TSP. While metadata about projects has largely been ignored or excluded from tool-specific container formats, it is our contention that this lack of clear and detailed specifications is directly responsible for many project failures and quality breakdowns. Anecdotal evidence collected at the LISA Open Standards Summit in March 2011 showed that many TSPs have experienced difficulties because relevant assumptions about projects were not conveyed to all parties, often because the volume of relevant information exceeded the ability of translators or project managers to keep track of it.

For example, if a media translation project is received by a TSP, but the client does not specify that the translation is in the medical domain, it may be assigned to general media translators who lack specialist knowledge, leading to poor quality in the translation and possibly the need to retranslate content.

One common complaint among companies procuring translation services is that the lack of clarity about what services are being procured can lead to higher costs or lower quality than expected. For example, two companies may advertise “localization” services, but prices are not comparable if one provider includes full third-party review and desktop-publishing (DTP) services in its price while the other includes only translator self-checking and will charge extra for review and DTP services. The situation is exacerbated for large companies that obtain translation services through procurement departments, where the staff responsible for contracting services may be unaware of translation requirements. Even when expectations are conveyed verbally or in e-mail, it may happen that not all parties are made aware of them, leading to errors. As a result, ensuring accurate and satisfactory business exchanges is difficult unless sufficient project metadata is included with a project in an accessible form.

Because of the need for consistent project specifications, a structured translation specifications (STS) object is at the heart of the proposed portfolio structure. An STS is a set of project-relevant metadata that explains how the transaction is to be carried out, as well as the client’s expectations and requirements for the end translation product. If an STS is used, many of the causes of conflict and redundant or unnecessary work will be eliminated from the translation process. It will make it easier to identify the reasons for any breakdown in the work. In addition, the STS can assist with translation procurement by identifying in advance the variables that are likely to affect project costs and by requiring clients to be clear about what they expect from service providers. If an STS is part of the bidding process and bids conform to the proposed STS guidelines, prices will be transparent and consistent.

The STS is not an ad-hoc, unstructured set of specifications for describing a translation project. It is instead based on a list of parameters that correspond to existing translation quality standards (ASTM F2575-06; ISO/TS 11669). The goal of the STS is to accurately describe the translation project at hand, and the authors postulate that the same parameters can be used to provide an adequate description of nearly every translation project. Table 1 presents a list of the parameters addressed in the STS. Although in theory the STS can be constructed by hand, it is expected that the software used to create portfolios will request this information and that some of the specifications will be automatically determined. For example, a tool may perform a word count to obtain the volume of text to be translated.



When dealing with specification sets, a distinction is made between parameter and specification. A parameter is a variable aspect of translation projects that can be paraphrased as a question, while a specification is the descriptive value of a particular parameter, i.e., it can be considered the answer to the question posed by the corresponding parameter. For example, a specification for the parameter file format (paraphrased as “What format are the source files in?”) might be “a Microsoft Office Word 2007 Document (.docx).” The specifications for one translation project might be very different from those of another project (e.g., translating a patent versus translating subtitles for a movie), but the parameters remain constant. Parameters form a framework for creating structured specifications. Without this framework, the names, descriptions, and order of the specifications for a translation project might vary widely.
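The parameter/specification distinction can be sketched in a few lines of Python: the parameters (questions) are fixed, while each project supplies its own specifications (answers). Only the file-format question and the .docx answer come from the text; the other question wordings and sample answers are illustrative.

```python
# Parameters are constant questions; specifications are project-specific answers.
PARAMETERS = {
    "file_format": "What format are the source files in?",
    "volume": "How much material is to be translated?",
    "subject_field": "What subject field does the source content belong to?",
}

# Two very different projects answer the same questions differently.
patent_specs = {
    "file_format": "a Microsoft Office Word 2007 Document (.docx)",
    "volume": "12,000 words",
    "subject_field": "mechanical engineering",
}
subtitle_specs = {
    "file_format": "SubRip subtitles (.srt)",
    "volume": "900 subtitle cues",
    "subject_field": "entertainment",
}

# The specifications differ, but the parameters remain constant.
assert patent_specs.keys() == subtitle_specs.keys() == PARAMETERS.keys()
```

Holding the parameter set constant is what makes specifications from different projects, requesters, and providers directly comparable.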

The structured set of 21 translation parameters listed in Table 1 can be broken down into four major groups: Linguistic, Production Tasks, Environment, and Relationships. Linguistic parameters detail information about the source content (its document type, language, intended audience, and purpose) as well as target language–specific requirements. Source and Target parameters are subgroups of linguistic parameters. Production tasks describe the tasks to be performed as part of the production phase. Environment parameters detail the tools and references that a translator will use. Whereas the first three major groups address the translation project itself, the parameters in Relationships focus on the interaction between the requester and the TSP and allow room for additional clarifications. Many of these parameters are included in national and regional translation quality standards, such as the European standard EN 15038 and Canada's CGSB 131-10, although they may not all occur together or under the same names as those listed here (ASTM F2575-06; ISO/TS 11669).



A. Linguistic {1–13}

Source content information (not dependent on target language):
{1} source characteristics: a) source language; b) text type; c) audience; d) purpose
{2} specialized language: a) subject field; b) terminology
{3} volume
{4} complexity
{5} origin

Target content information:
{6} target language information: a) target language; b) target terminology
{7} audience
{8} purpose
{9} content correspondence
{10} register
{11} file format
{12} style: a) style guide; b) style relevance
{13} layout

B. Production tasks {14–15}

{14} typical production tasks: a) preparation; b) initial translation; c) in-process quality assurance: 1) self-checking; 2) revision; 3) review; 4) final formatting; 5) final reading
{15} additional tasks

C. Environment {16–18}

{16} technology
{17} reference materials
{18} workplace requirements

D. Relationships {19–21}

{19} permissions: a) ownership; b) recognition; c) restrictions
{20} submissions: a) qualifications; b) deliverables; c) delivery; d) deadline
{21} expectations: a) compensation; b) communication

Table 1. Parameters of a Structured Translation Specification Set (STSS12).

12. More information on these parameters is available from http://www.ttt.org/specs.

These specifications form a framework that defines and guides a translation project and allows the translation product, process, or translation project in its entirety to be evaluated. The first five parameters {1–5} are useful in developing initial project specifications and are highly relevant to preproduction activities. An appropriate translator cannot be selected without knowing the text type and subject field(s) of the source material. Estimating the cost of a project obviously requires knowledge of the volume and complexity of the source document. For example, the effort required to translate text in a graphic (e.g., images, diagrams, or even Flash presentations) depends on whether the graphic is available without text or with editable text. Likewise, the number of potential fuzzy or exact matches within a translation memory changes the practical volume of a text. Such source text obstacles may dramatically affect the degree of difficulty of a translation task.

In this framework, a quality translation project is one that conforms to all of the agreed-upon specifications. Parameters {6–13} relate specifically to the target content in isolation. However, all of the parameters are relevant to the task of translation. Conformance to some specifications cannot be determined solely by examining the target text. For example, the complexity/difficulty of a translation task is directly impacted by the first five parameters and the apparent quality of a translation may be heavily impacted by properties of the source text. As another example, an otherwise good translation that is inappropriately divulged to a third party or that is delivered late would not be considered part of a job well done. Project specifications are thus relevant during all phases of a translation project, both in achieving and assessing quality.

Within the STS, each parameter could be labeled with one of three statuses, depending on where the portfolio is in the business process: incomplete, proposed, or settled. An incomplete parameter indicates that a specification has yet to be determined, or that the client has no strong opinion about that particular parameter for this project. Proposed specifications indicate soft requirements, or that the requester is willing to negotiate these details, whereas settled specifications are ready to go into production and indicate the hard and fast details of the project. By the time the requester and the TSP sign a contract, every parameter needs to have a status of settled for the portfolio to be valid even if the specification is simply “at translator’s discretion.”
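A sketch of this status lifecycle, assuming a simple in-memory representation: the three status names come from the text, while the data layout is an assumption made for illustration.

```python
# Each parameter's specification carries one of three statuses.
VALID_STATUSES = {"incomplete", "proposed", "settled"}

def is_contract_ready(specs):
    """A portfolio is valid for contract signing only when every
    specification is settled (even if the settled value is simply
    'at translator's discretion')."""
    for param, (status, _value) in specs.items():
        if status not in VALID_STATUSES:
            raise ValueError(f"unknown status for {param!r}: {status!r}")
        if status != "settled":
            return False
    return True

specs = {
    "register": ("proposed", "formal"),
    "deadline": ("settled", "2012-06-01"),
    "style": ("settled", "at translator's discretion"),
}
ready_before = is_contract_ready(specs)   # False: "register" is only proposed
specs["register"] = ("settled", "formal")
ready_after = is_contract_ready(specs)    # True: every parameter is settled
```

Such a check gives tools a mechanical way to block a portfolio from entering production while any specification is still under negotiation.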

The STS file itself can be represented in a simple XML format. Including the STS in one place within Linport portfolios will simplify access to the specifications and improve communication and quality within projects, since all parties will have access to the agreed-upon specifications at all times. When used consistently, the STS would eliminate many causes of error, confusion, and delay.
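As a sketch of such a representation, an STS could round-trip through XML using Python's standard library. The element and attribute names below are invented for illustration; they are not taken from any published Linport or ISO/TS 11669 schema.

```python
import xml.etree.ElementTree as ET

def sts_to_xml(specs):
    """Serialize {parameter: (status, value)} to a simple XML string."""
    root = ET.Element("sts")
    for param, (status, value) in specs.items():
        el = ET.SubElement(root, "specification", parameter=param, status=status)
        el.text = value
    return ET.tostring(root, encoding="unicode")

def sts_from_xml(xml_text):
    """Rebuild the specification dictionary from the XML form."""
    root = ET.fromstring(xml_text)
    return {el.get("parameter"): (el.get("status"), el.text)
            for el in root.findall("specification")}

specs = {
    "file_format": ("settled", "a Microsoft Office Word 2007 Document (.docx)"),
    "volume": ("settled", "12,000 words"),
}
xml_text = sts_to_xml(specs)
assert sts_from_xml(xml_text) == specs  # all parties recover the same specifications
```

Because the serialization is lossless, every tool in the chain reads exactly the same parameters and statuses that the requester and TSP agreed on.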

Relationship with Other Formats

Under the rubric Interoperability Now (Interoperability Now Project, 2011), several vendors have been developing a translation package format of their own, known as TIPP (TMS Interoperability Protocol Package). As of October 2011, there was an informal agreement that the next version of the TIPP format will allow for the inclusion of an STSS and that the Interoperability Now TIPP format and a simple bilingual profile of Linport will merge (see www.linport.org for updates, including meeting minutes). The authors believe that this collaboration is necessary to prevent the development of competing formats that accomplish essentially the same goals and to prevent fragmentation of the translation and localization industry.

Deliverables

The Linport project currently plans to provide the following deliverables:

1. The overall Linport architecture, including representation of entire projects and specific tasks.
2. A number of Linport profiles to support various activities along the Authoring/Translation/Publication (ATP) chain. One of these profiles, for bilingual translation projects, is currently under development.
3. A RESTful remote access protocol to allow translation tools to access Linport portfolios stored on the Internet.
4. A set of open-source, programming-language-specific API libraries for accessing Linport portfolios from within stand-alone applications.
5. An open-source reference implementation tool set for Linport.

These deliverables will allow implementers to utilize Linport at a low development cost and help them understand how to use Linport portfolios in their tools.

The Brigham Young University TRG is currently developing a Web application that will function as a proof of concept for the translation portfolio format. A visitor to the site will be able to create a free user account to save personal STS models, use an online specifications builder to create an STS, and upload files to include in a translation portfolio. The Web application will then build a translation portfolio based on the STS and uploaded files, which the user can then download from the site. The website is not a repository for files, but users will be able to save their own STS models and optionally include them in a public library. The use of STS models will facilitate the generation of specifications, because a user may have a series of similar translation projects that need only minor changes in their specifications.

The TRG is also developing a Web application that allows a user to upload a Linport portfolio and view aspects of the file. This application will serve as the basis for developing the RESTful access listed above.

Conclusion

The translation industry needs an open, nonproprietary format for packaging all of the materials necessary to complete a translation project. The Linport container format will provide that support in direct compliance with industry standards (ASTM F2575-06; ISO/TS 11669). The Linport format offers clear benefits for both requesters and providers of translation services. An individual requesting a translation can create a portfolio without knowing beforehand who will perform the translation; instead, the initial specifications can guide the selection of the appropriate translator. Because all portfolios have the same structure and use the same parameters for the STSS, a translator will know exactly where to look for instructions rather than needing to search through e-mails and other correspondence. Translation tools can also use the portfolio to automatically load the required translation memories and terminology resources without the need for the translator to search for them. The portfolio provides more than just materials for producing a translation; it provides a structure that promotes quality throughout the entire process, from authoring to publishing. The reliance on structured specifications allows the portfolio to include payloads and go beyond the functionality of a file format such as XLIFF.

Future work in developing a standard container format for translation tasks includes further consensus-building between requesters and providers as to what the portfolio needs to accomplish, and then initiating a project within an industry standards body such as OASIS13 or ETSI14 for standardization. Tool developers will then be able to create import/export functions to read and create portfolios. Just as ISO intermodal shipping containers have helped to standardize the way goods are transported from point A to point B, the Linport format will help to alleviate the need to manually organize and modify translation materials.

13. OASIS website: http://www.oasis-open.org
14. European Telecommunications Standards Institute (ETSI) website: http://www.etsi.org. The particular body within ETSI is the Localization Industry Standards (LIS) Industry Specification Group (ISG).



Sample portfolios will be made available at http://www.linport.org for inspection and comment. In addition, the proof of concept Web applications for constructing and viewing translation portfolios will be linked via that site. We invite interested readers to visit the Linport site to join the Linport mailing list and comment on the Linport Wiki. We also ask them to send their comments and suggestions to info@linport.org.

References

(2006). ASTM F2575-06 Standard Guide for Quality Assurance in Translation. West Conshohocken, PA: American National Standards Institute. Retrieved from http://www.astm.org/Standards/F2575.htm

(2008). CAN/CGSB 131.10-2008. Services de traduction [Translation services]. Gatineau, Québec: Canadian General Standards Board.

(2012). ISO/TS 11669. Translation projects -- General guidance. Geneva: International Organization for Standardization. Retrieved from www.ttt.org/specs

Interoperability Now! Project. (2011). Retrieved from http://interoperability-now.org/tiki/tiki-index.php?page=The+Package

Schäler, R. (2002). Can Translation Companies Survive the Current Economic Climate? Proceedings of the 24th International Conference on Translating and the Computer. London: ASLIB.

Yewell, S. (2011). Unpublished presentation. LISA Open Standards Summit. Boston, Massachusetts.



Soundtrack Localisation: Culturally Adaptive Music Content for Computer Games15

Ian R. O’Keeffe

Localisation Research Centre, Centre for Next Generation Localisation, University of Limerick, Co. Limerick, Ireland. ian.okeeffe@ul.ie

Abstract

This paper focuses on the localisation and adaptation of one particular aspect of the computer game: the soundtrack. Sometimes bespoke, sometimes selected from commercial releases, the soundtrack plays a background, supporting role in creating atmosphere and sustaining the emotive state of the game space. But how often is the target market considered when selecting appropriate musical content? Is it possible to use this almost subliminal channel into the game player’s consciousness to increase his awareness of what is happening around him, and to give him a feel for his character’s emotional and physical wellbeing? This paper presents a novel approach for transforming the soundtrack via a system the author originally created for the purpose of capturing and recreating emotive content in music.

Keywords: Soundtrack, Computer Games, Emotive Musicology

15. This research is supported by the Science Foundation Ireland (Grant 07/CE/I1142) as part of the Centre for Next Generation Localisation (www.cngl.ie) in the Localisation Research Centre in the University of Limerick.




Introduction

Computer games are, in many ways, the ultimate multimedia experience. For the purpose of this paper I will consider computer games to be “…any forms of computer-based entertainment software, either textual or image-based, using any electronic platform such as personal computers or consoles and involving one or multiple players in a physical or networked environment” (Frasca, 2001). Games exist across many genres; one available list comprises Action and Adventure, Driving and Racing, First-Person Shooter (FPS), Platform and Puzzle, Role Playing Games (RPG), Strategy and Simulation, Sports, and Beat-’em-ups (Berens & Howard, 2001). The visuals of a computer game are nonlinear, reacting to the interactions of the player, as are the sound effects and any haptic feedback generated by the control devices (joysticks, button controllers, motion-sensitive wireless gesture-sensing devices, and so on). The sonic landscape is generally in stereo at a minimum, and often in full surround, to create the illusion of being within the space depicted on-screen; it is made up of many layers, such as incidental sound effects, dialogue, status warning sounds, and the soundtrack. “Sound is more immersive than graphics. While graphics will draw you in to a scene, the sound going on in the background will create a reality in the player's mind that can never be done with graphics alone.” (Howland, 1998). Computer games have more in common with cinematography in this respect than most other computer applications, as sound is regarded as such a central aspect of the user experience. Indeed, in most other computer use scenarios sound is not perceived as being that important and may even be seen as an issue. Consider the laptop user on a crowded train who navigates to a website featuring loud music and has to quickly mute the speakers for fear of offending fellow travellers.
In contrast, an enthusiastic gamer will have a good set of headphones, or even a dedicated home surround-sound speaker setup, to help enhance the gaming experience. This is because the principle of the suspension of disbelief, a concept first introduced by the poet Samuel Taylor Coleridge, is considered to be “one of the fundamental tools in creating a successful game design” (Crosignani, Ballista, & Minazzi, 2008), and the more immersive the experience, the more likely the game player is to become enmeshed in the narrative, the characters, and the virtual setting. This is why it is so important to avoid shattering this spell, whether through factual inaccuracies, inappropriate dialogue, unrealistic game-play, or incorrect soundtrack elements, as once the spell is lost it is very difficult to regain the focus of the player. To further discuss the requirements for creating a computer game, I shall now investigate in more detail the game elements that make it up.



Ian R. O’Keeffe

Design

From a design perspective one of the first elements to be considered is the overall narrative, along with subordinate elements such as game-play features, character information and so on. When it comes to the actual physical interaction with the game there are three basic groups: graphics; controls and haptic feedback; and sound. More basically: things you can see, things you can feel, and things you can hear. Graphics can be further broken down into game-area graphics, status displays and menus. Game-area graphics present the virtual world the player inhabits while playing the game. A recent development is the increasing availability of 3D graphics, necessitating the use of 3D glasses and special screen hardware. The status displays present information to the player such as energy left, location on a map, weapons available, time, game level and so on. These can often be customised in terms of positioning, content and visibility. Menus are normally used for navigating selection choices, something not normally done during intensive interactive game-play. Controls and haptic feedback make up the physical interface between the player and the game: the game control interface. Examples include the keypad used for controlling characters, specialist input devices like car steering wheels, pedals and gear levers, wireless gesture-capture devices (Nintendo Wii) and, more recently, human gestures captured by image analysis via a camera mounted on the games console (Microsoft Kinect). Haptic feedback from these devices can be in the form of vibrations, or force feedback in the form of resistance or ‘kick-back’ such as would be experienced through a steering wheel in a car. The final group of elements, sound, is output via the headphones or the speaker system. It consists of a number of layers: effects, dialogue, and the soundtrack. The effects typically relate to the player's actions and to events that occur within the game play.
Dialogue is a special case relating to actual speech made by characters within the game. Both of these layers are positioned within the game space, both logically and spatially. In other words, the nature of the acoustic space in which the player finds himself will generally affect these sounds in terms of echo or reverberation, and stereo or surround positioning. The soundtrack is the final layer, and does not share this close-coupling with the game space. It exists to set the mood and enhance the game experience. A soundtrack is generally regarded as effective if it manages to remain ‘unnoticed’ by the player/audience, remaining at a subliminal level, and reinforces the on-screen action rather than distracts from it. However, like many other aspects of the computer game, it still requires modification and fine tuning to be acceptable to different locales and cultures.

Localisation

Computer games, like all computer software and related digital content, face localisation requirements if they are to be commercially viable in the global marketplace. In some cases these requirements can



be met simply by translating any text visible to the player, such as menus, signage within the game itself, or the status display, but more complex games will normally require a higher degree of modification, such as re-recording any dialogue in the required target languages. There is also the issue of cultural modification, one example being catering for cultural gaming conventions such as the preference in Asian games for characters with more child-like characteristics (anime or manga) versus the more adult characters of Western games (Lara Croft) (Trainor, 2003). Another related cultural issue is the degree of freedom in depicting sexuality or violence. Europe typically shows much more concern about violence in games than about nudity, with the opposite being true of the US market. Germany, for example, has very strict laws relating to the depiction of violence in computer games, in particular where blood and gore are concerned (Chandler, 2005; Dietz, 2006). The soundtrack itself may also face issues, either with regard to broadcasting rights for a particular piece of commercial music in a geographical area, or from a cultural perspective where the style of music may not suit the target audience. Another area of cultural concern, and the primary focus of this paper, relates to the emotional reaction of a player to the soundtrack when it is intended to communicate emotive or expressive content relating to the game play itself. As music cognition and emotional response is a complex area in its own right, it is necessary to first present an overview of the process of creating a soundtrack for a computer game.

Soundtrack creation

The soundtrack’s task is to help create atmosphere and enhance the gaming experience, either by supporting the visual scenes presented or by giving feedback on the performance of the player. The musical content for a game soundtrack can be selected from current commercial tracks, but in many cases it is composed specifically for the game being created, as this allows more flexibility in dealing with specific requirements, such as the non-linear nature of game play compared to the controlled environment of a film. It is on this area of soundtrack composition that I will place my focus here. Bespoke compositions also have the useful side-effect of avoiding expensive royalty payments where global sales are concerned.

I will assume the main musical themes have already been composed at a conceptual level, to suit the marketing and style requirements of the particular genre of game, and that the focus is now on tailoring the soundtrack to fit in with the game play itself. It must be stressed, however, that while some aspects of the soundtrack can be composed in advance, many situations require it to be altered or amended in a non-linear manner. This makes such music well suited to a procedural composition approach. It should be noted that the focus here is not on the interactive elements (Collins, 2009): sound-effect events triggered directly by the player through their actions via their input device, such as the swishing of a sword or the sound of footsteps. Instead, it is the adaptive audio that is under investigation. These



adaptive audio events are triggered by a game’s sound engine based on certain in-game parameters, such as general locations, time of day, ‘camera’ angle, or player properties like health or skills, and as such have a much more indirect link to the actions of the player. They use a core of musical content to generate the required duration of soundtrack in real time, fitting the specific requirements of each player as they progress through the game. Another contrast with the linear, fixed movie soundtrack is the amount of music that needs to be composed, due to the amount of time it takes to complete most modern games; online multi-player games, of course, may never actually end at all. One good example of a dynamic composition engine is the iMUSE (Interactive Music) system developed and used by LucasArts (Land & McConnell, 1994) in such titles as Totally Games / LucasArts’ “X-Wing” series. The abstract from the patent describes the technology as follows: “A computer entertainment system is disclosed for dynamically composing a music sound tract (sic) in response to dynamic and unpredictable actions and events initiated by a directing system in a way that is aesthetically appropriate and natural”. An example is then given to differentiate the technology from existing approaches, where a fight scene features three music sequences: fight music (looped), victory music and defeat music: “The fight music, rather than playing along unresponsively, can be made to change mood of the game in response to the specific events of the fight. For example, certain instrument parts can signify doing well (for example, a trumpet fanfare when a punch has been successfully landed), while others can signify doing poorly”. It is also possible to transpose the music in pitch as the fight reaches its climax, and to incorporate transitional phrases to avoid an abrupt jump from the fight loop to the victory music.
Some more examples of the soundtrack reacting to game stimuli: in ‘Super Mario Bros.’ (Nintendo) the actual tempo of the music increases as time runs out for the player; another common practice is to gradually reduce the intensity of the music over time, or perhaps stop it altogether, to avoid irritation through repetition if a player has been stuck in a particular location for a long time.
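The adaptive mechanisms described above can be sketched in a few lines. The function below is purely illustrative (the parameter names, thresholds and layer labels are invented, not taken from iMUSE or any shipping engine): a sound engine polls game state and derives playback parameters, speeding the music up when time runs low and layering in instrument parts as events unfold.

```python
# Hypothetical adaptive-audio parameter mapping; all names and thresholds
# are invented for illustration, not taken from iMUSE or any real engine.

def soundtrack_params(time_left, time_total, in_combat, doing_well):
    """Derive soundtrack playback parameters from current game state."""
    params = {"tempo_scale": 1.0, "layers": ["base_theme"]}
    # 'Super Mario Bros.' style: speed the music up as time runs out.
    if time_left / time_total < 0.25:
        params["tempo_scale"] = 1.5
    # iMUSE-style layering: bring instrument parts in and out per event.
    if in_combat:
        params["layers"].append("fight_loop")
        if doing_well:
            params["layers"].append("trumpet_fanfare")
    return params

print(soundtrack_params(time_left=30, time_total=400,
                        in_combat=True, doing_well=True))
```

An engine would re-evaluate such a mapping at every musical boundary (bar or phrase) rather than on every frame, so that transitions remain musically natural.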

So the soundtrack is a key aspect of any computer game, and can react to game trigger events so that it is modified in real time to convey more context information. It is also able to adapt to fill indeterminate time slots, and to reflect aspects of the player’s status and the general surroundings of the game location. But it could do much more, given the power music has to influence our emotions and our ability to track multiple information streams in parallel via our hearing, as evidenced by the “cocktail party effect” (Arons, 1992). The fact that the soundtrack data is accessible from a computational standpoint makes it a prime candidate for emotive adaptation. What will be presented here is an approach for capturing and re-using emotional templates for music, thus providing the computer game industry with a mechanism for altering soundtrack content to project any mood or emotion required by the game developers, or indeed by the players themselves. A search of some online gaming forums identified some interesting threads; one example, from www.neogaf.com, asked members to list video game music that made them happy.



While the names of the games and songs listed by members were of genuine interest (Brawl - Yoshi's Story Ending Theme, Tekken 2 - Michelle's Theme, Marvel Super Heroes vs Street Fighter - Sakura's Theme, Final Fantasy V - Dragon Flight theme etc.), it is forum interactions such as that quoted below that really highlight the need for care in soundtrack selection.

Forum member #1: “Also this one makes me happy everytime i hear it Final Fantasy VIII - Balamb Garden” Forum member #2: “That song always makes me sad and nostalgic. -sniffle-”

From a localisation perspective, this raises the question of the universality of emotion in music. Do we need to localise or culturally adapt the soundtrack to suit each locale, culture or demographic?

In terms of positioning, this research is best viewed as sitting at the intersection of musicology, music technology and psychology. More specifically, it is placed within a sub-discipline of musicology, that of music cognition, which is concerned with the study of music as information from the viewpoint of cognitive science and shares the interdisciplinary nature of fields such as cognitive linguistics. Music technology is the result of applying computers and other forms of technology to the creation and adaptation of music. It should be noted that the initial focus of the research presented here was on the capture of emotional content in music for one particular culture, listeners to Western art music, but the data gathered in this particular application shows considerable promise for the expansion of this technical approach into other areas of music analysis and modification.

Proposal

The proposal is to put a mechanism in place that facilitates the modification of the existing soundtrack to include status information from the game, such as strength and threat level, as well as information relating to the player’s emotive state, so that the player becomes aware, almost subliminally, of what is happening around him and has a feel for his character’s emotional and physical wellbeing. In essence, I envisage a situation where the soundtrack accompanying a computer game responds actively to various environmental variables and therefore enhances the game experience. This layer of modification would be placed on top of any existing music engine, and would act upon the original soundtrack. For example, if the player is acting particularly aggressively then anger could be introduced to the soundtrack. Low energy levels could be modelled by capturing templates from users asked to create modified music that in their view represents tiredness or fatigue. Completeness of each level of a game could also be modelled, perhaps by increased tempo or intensity (by altering



the stresses applied to individual notes, shortening the sounding times and so on). There is no reason to view the process as one-directional either, with the player’s actions only influencing the soundtrack. The reverse could also be argued to be true, as research has shown that music has the ability to induce mood in a listener (Pignatiello, Camp, & Rasar, 1986). If there is an inherent threat to the player which may not be visible, then subtle hints of menace, danger or fear could be introduced to the music, the amount of influence depending on proximity or perceived threat level, and the player would then feel the effect of this emotive content himself by listening to the music, increasing the immersive nature of the game experience. Moving elements of status information from the screen to the soundtrack would also have the benefit of freeing up screen real estate, thus also helping to enhance the suspension of disbelief by removing text from the player's view.

Some suggestions for possible mappings from status information to musically mapped emotions could be: high energy levels - strong/happy; finished a level - happy; low energy - sad; running away from an enemy - fearful; aggressive game play - angry; low on ammunition - worried; and so on. It is also plausible to consider mapping emotions from the facial expressions of the hero/player character, if expressed graphically, as long as there is access to such status via some tag or variable within the game’s code base.
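As a sketch, such mappings could live in a simple lookup table consulted by the soundtrack layer; the status flag names here are invented for illustration.

```python
# Illustrative status-to-emotion lookup; the flag names are invented,
# and the emotion labels follow the suggestions in the text above.
STATUS_EMOTIONS = {
    "high_energy": "Happy",
    "level_complete": "Happy",
    "low_energy": "Sad",
    "fleeing_enemy": "Fearful",
    "aggressive_play": "Angry",
    "low_ammunition": "Worried",
}

def active_emotions(status_flags):
    """Emotions implied by the currently active status flags (deduplicated)."""
    return sorted({STATUS_EMOTIONS[s] for s in status_flags if s in STATUS_EMOTIONS})

print(active_emotions(["fleeing_enemy", "low_ammunition"]))  # ['Fearful', 'Worried']
```

A real implementation would also need a policy for combining several simultaneous emotions, a point returned to later in the paper.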

Localisation Impact

From a localisation perspective the question that must be asked is: how much thought goes into the cultural suitability of such musical content? The emotive template needs to be representative of the player’s locale so that it can be correctly recognised and processed, as it is quite possible that different cultures derive differing meanings and understanding from music due to their cultural conditioning; the acceptance that the structures of Western art music can be viewed as universal is a dangerous assumption to make (O’Keeffe, 2009). This implies a requirement to ensure the correct set of emotional templates is selected depending on the player’s demographic. As an example, music deemed happy in some cultures may be perceived as melancholy in others. It is therefore necessary to review the existing research in music cognition, particularly any research that focuses on cross-cultural differences in comprehension and understanding, prior to describing possible mechanisms for the emotional modification of music, because if there is no evidence of such cultural diversity then the concept of ‘music localisation’, for this is effectively what we would be considering, becomes superfluous. The majority of the work in this area tends to focus on the psychological aspects of musical understanding, and the physiological results, rather than on how the music itself may be adapted to suit differing cultures or locales. The research also generally takes the form of passive studies of test subjects and their reactions to pre-prepared musical data, rather than an active approach where the test



subjects are able to change the music themselves. Looking firstly at psychological approaches, Gregory and Varney (1996) conducted a study of the affective response of subjects to music from different cultures and found that listeners “brought up in the Indian cultural tradition have difficulty in appreciating the emotional connotations of western music”. Walker (1996) states that “understanding the music of another culture requires assimilation of the influences affecting musical behaviour as much as of the resultant musical products”, suggesting that cultural conditioning plays a part in how a listener understands music. Different cultures and ethnic backgrounds also show preferences for different types of music in music therapy, as demonstrated in a study of the music selected by medical patients to help with post-operative pain relief (Good, Picot, Salem, Chin, Picot, & Lane, 2000). Moving on to neuroscience, a study (Morrison, Demorest, Aylward, Cramer, & Maravilla, 2003) of human brain activity captured using functional magnetic resonance imaging (fMRI) showed that there were activation differences between Western (familiar to the test group) and Chinese (unfamiliar) music based on training. Trained listeners showed extra activation “in the right and left midfrontal regions for Western music and Chinese music, respectively”. It would therefore seem that we react differently to unfamiliar musical styles or traditions whether we want to or not.

In line with this review of the literature, the author has set up an online survey (www.localisation.ie/resources/MusicLOCweb/) to answer the question: do different cultures hear music differently, and is the emotional content in music universal, or dependent on locale? The experiment requires the test subject to listen to a simple piece of music, and then to modify its mood using basic controls for speed, rhythm, dynamics (how loud it is played), note length (short or long) and scale (major or minor). The aim is to end up with four versions of the piece that the test subject judges to sound Happy, Sad, Angry or Fearful. The survey is still active, as data gathered to date has been mostly from US/European locales, so a more diverse dataset needs to be collected before any true trends can be identified. The data does demonstrate broad agreement on emotive templates by region, however, which is encouraging. What is reassuring in the research discussed here is that there does seem to be evidence that music is not the universal language it is often thought to be, and that there is a place for musical localisation within game soundtracks, particularly when combined with emotive profiling.

When the scope of game play is expanded to include Massively Multiplayer Online Games (MMOGs), the opportunities to create adaptive soundtracks expand in parallel, allowing the mirroring of player status, such as health or mood, in the soundtrack each player broadcasts to other gamers as well as in the music they hear themselves. This could be achieved by giving each player their own distinctive soundtrack carrying subliminal information about their mental state, something along the lines of the leitmotif used by Wagner, and more recently by John Williams in his Star Wars soundtracks, where one character thinks of another character or of an emotion and the soundtrack hints at this. This would allow each player not only to gain information about their own status from their soundtrack but also to



pick up on the presence, and emotional make-up, of other game players in their proximity. Of course, this also increases the possibility of confusion if the cultural background of each player is not taken into consideration when presenting them with emotive musical information. It becomes apparent that some form of adaptation of the soundtrack would be necessary to ensure that the perception of each individual player matches that planned by the games developer. Individual templates, like personal profiles, could be held by each player and used to present them with their own culture-centric, mood-corrected version of the overall game soundtrack. Consider the analogy of having all textual signage presented in your own language as you look at it, even though another player standing ‘beside you’ in the virtual space may well see the same text in a different language. Of course, such conceptual discussion requires a physical mechanism for modifying a music stream if it is to be moved from the realm of conjecture to reality, and the next section presents just such a framework.

Mechanism

The objective of the research was to create a system that would allow any listener to directly create emotion in music. This was in response to a review of existing methods for capturing emotive data, which were found to vary from asking test subjects to listen to live performances and report on them (Downey, 1897; Gilman, 1891), through judging pieces of music composed to express the required emotions (Thompson, & Robitaille, 1992; Rigg, 1937) or phrases selected from existing classical compositions (Gundlach, 1935), to asking professional musicians to play pieces expressing various emotions and studying the reactions of test subjects to these pieces (Gabrielsson, & Juslin, 1996). What was interesting was that very few of these experiments required the test subjects to create the emotional content themselves, instead asking them to play the role of passive listener; where musical input was required, the user normally had to be a skilled musician. The approach presented here strives to overcome these shortcomings by creating a series of low-level musical operations from scratch, bypassing the analysis stage, and building a system that allows the user to assemble emotive modification presets or ‘templates’ from these operations, thus synthesising the emotion. These ‘templates’ of the user’s actions were then captured for analysis and potential re-use. The advantages of this include: no prerequisite of musical ability, either as a performer or a theorist; no dilution of emotive content through syntactical loss in a process of description, or through loss of immediacy caused by reflection after listening (Meyer, 2001); and direct results. Allowing users to construct the conversion ‘templates’ themselves also avoids the issue of direct influence and bias on the part of the researcher.



Soundtrack Localisation

Transformations

The music transformations were as follows:

Tempo - The speed of the performance.
Pitch - This transformation performs transposition.
Rhythm - The simplest rhythm for a piece is to have each note equal in length, creating a regular pulse. Ways to alter this pulse include staggering it, making it act against the natural rhythm of the piece in the form of syncopation, or having a random variance from note to note, creating unexpected results.
Timbre - Which instrument to use to perform the piece.
Harmony - Reinforce the melody with an accompanying voice positioned at a harmonic interval above the primary melody.
Accompaniment - Separate from the harmonic accompaniment mentioned above, this option gives the user the ability to add an accompanying bass line.
Dynamics - How loud the piece is played.
Drum Rhythm - Does the addition of a drum rhythm affect the emotive content of a piece, and if so, do different rhythms have different effects?
Attack - Individual stressing of beats in a bar: the inhuman precision and evenness of a machine, the carefully stressed beats of a skilled musician, or simple random stressing.
Articulation - From staccato, for very short stabbing notes with significant gaps prior to the next note, through to legato, where each note begins to link up with the next note in the sequence.
Scale - Major, Minor, Pentatonic, Blues & Gypsy scales.
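To make such operations concrete, here is a minimal sketch of four of the transformations above, acting on a symbolic melody in which each note is a (MIDI pitch, onset in beats, duration in beats, velocity) tuple. The representation is invented for illustration and is far simpler than the actual system.

```python
# Illustrative low-level transformations on a symbolic melody.
# Each note: (midi_pitch, onset_beats, duration_beats, velocity).

def transpose(notes, semitones):
    """Pitch: shift every note by a number of semitones."""
    return [(p + semitones, t, d, v) for p, t, d, v in notes]

def scale_tempo(notes, factor):
    """Tempo: factor < 1.0 compresses onsets and durations (faster)."""
    return [(p, t * factor, d * factor, v) for p, t, d, v in notes]

def articulate(notes, ratio):
    """Articulation: shorten sounding time only (0.5 ~ staccato, 1.0 ~ legato)."""
    return [(p, t, d * ratio, v) for p, t, d, v in notes]

def dynamics(notes, gain):
    """Dynamics: scale velocity, clamped to the MIDI range 0-127."""
    return [(p, t, d, max(0, min(127, round(v * gain)))) for p, t, d, v in notes]

melody = [(60, 0.0, 1.0, 80), (64, 1.0, 1.0, 80), (67, 2.0, 1.0, 80)]
# e.g. a 'sadder' rendering: slower and quieter.
sad = dynamics(scale_tempo(melody, 1.5), 0.7)
```

Note that articulation scales sounding time only, while tempo compresses onsets as well; that difference is what distinguishes a staccato performance from a merely faster one.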

Emotions

The list of basic emotions for the study was compiled after analysing the proposals of several psychologists: Clynes (1980), Izard (1991), Plutchik (1980; 2001), Scherer (1995), Schopenhauer (Gale, 1888), and Shand (1914). Of particular interest was Plutchik’s classification system, with eight sectors representing eight primary emotion dimensions arranged as four pairs of opposites. Plutchik’s model was selected over those of Shand, Izard and Clynes because this pairing of opposing emotions allows comparisons to be made between emotions as well as analysing each emotion separately. The list used was:

Joy, Sadness, Anger, Fear, Acceptance, Disgust, Surprise, Anticipation and No Emotion (the control emotion).




Notation

The system described in this paper uses the MIDI file format for the storage and manipulation of the musical content. It was selected because of its wide availability, portability and small storage footprint, and because it stores music in a symbolic way. It does have some limitations, and perhaps other notation methods would need to be considered in the future, but MIDI is certainly a good place to start given its wide acceptance worldwide and its use within the computer gaming industry itself.
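The symbolic nature of MIDI is what makes the transformations cheap: pitch is stored as an integer note number (middle C = 60, A4 = 69), so transposition is integer arithmetic, and audio frequency only enters at synthesis time. A small illustrative conversion:

```python
def midi_to_hz(note):
    """Equal-temperament frequency of a MIDI note number (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

print(round(midi_to_hz(69)))          # 440 (A4)
print(round(midi_to_hz(60), 2))       # 261.63 (middle C)
print(round(midi_to_hz(60 + 12), 2))  # an octave up doubles the frequency
```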

Studies & Results

The Main Study (see O’Keeffe (2009) for a more in-depth explanation) consisted of nine participants. A Cooperative Evaluation (Monk, Wright, Haber, & Davenport, 1993) approach was taken, in which each user was encouraged to see himself as a collaborator in the evaluation and to actively criticise the system rather than simply suffer it, allowing the evaluator (myself) to clarify points of confusion and so maximise the effectiveness of the approach. Cooperative Evaluation is a variant of Think Aloud (Jorgenssen, 1989), in which the user performs a number of tasks and is asked to ‘think aloud’, explaining what they are doing at each stage and why. The user’s actions were recorded via paper-based notes and via the command tracking script system built into the main system for each of the tasks. A task list was prepared and given to each of the nine participants prior to the start of the study, and consisted of a walkthrough of the system followed by the creation of each of the nine required emotions. To isolate any significant findings, the Chi Square Goodness of Fit test was applied to the data gathered, given its nominal/categorical nature (Figure 1). An alpha value of 0.05 was selected for the test of significance, although the sample size may be considered a little on the small side for robust findings. Even so, it gives a good indication of the trends in the data gathered. The results of this study show that the proposal of collecting emotive data from test subjects using low-level musical transformations definitely has merit, and some interesting trends are apparent (Tables 1 through 11).

Figure 1. Chi Square Goodness of Fit formula.
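The statistic in Figure 1 is straightforward to compute: for each response category, square the difference between the observed and expected counts and divide by the expected count. The toy counts below are invented for illustration (e.g. scale choices for one emotion under a uniform expectation), not the study's actual data.

```python
def chi_square(observed, expected):
    """Chi Square Goodness of Fit: sum of (O - E)^2 / E over the categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

observed = [7, 1, 1]   # e.g. 9 subjects choosing Major / Minor / Other
expected = [3, 3, 3]   # uniform expectation over the 3 categories
stat = chi_square(observed, expected)
print(round(stat, 2))  # 8.0, exceeding the 0.05 critical value of 5.99 for 2 df
```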




Tempo is definitely a decisive factor, particularly in Joy, Sadness, Anger and Surprise. Some more examples of emotional opposites in the data, as suggested by Plutchik in his arrangement of the primary emotions as pairs of opposites, can be seen in Pitch between Joy and Sadness, and Anger and Fear; in Rhythm between Joy and Sadness, Acceptance and Disgust, and Surprise and Anticipation; in Dynamics for all emotional pairs; and also in many aspects of Attack Length and Articulation. Scale shows an almost bipolar split between Joy and Sadness, and also shows a lot of variance across the other emotions. Moving the focus away from each transformation and onto individual emotional templates, when they are compared in the combined chart (Figure 2) the variance between them is apparent. This would suggest that each emotion is mapping to its own set of preferences, and also suggests that there is validity in attempting to extract emotional content from music in this manner.

Figure 2. A combined chart of individual emotional templates.

The templates produced by the main study presented an excellent opportunity for verifying the data gathered by the system, simply by running the experiment in reverse. To this end, a follow-up study was created in which a short piece was altered emotionally by the system using averaged emotional templates gathered from the main study, and test subjects were asked to categorise the resulting pieces. The study involved listening to the nine pieces, each representing a different emotion, and categorising them by emotion. The results were split into two groups: those who were already familiar with the system, and those who had never seen it and were judging the pieces purely on emotion. The results are displayed here in the form of confusion matrices (Tables 12 & 13).
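A confusion matrix of this kind is simply a tally of (intended, perceived) pairs, with intended emotions as rows and perceived emotions as columns; the judgements below are invented for illustration.

```python
from collections import Counter

def confusion_matrix(judgements, labels):
    """Rows = intended emotion, columns = perceived emotion."""
    counts = Counter(judgements)
    return [[counts[(intended, perceived)] for perceived in labels]
            for intended in labels]

labels = ["Joy", "Sadness", "Anger"]
judgements = [("Joy", "Joy"), ("Joy", "Joy"), ("Sadness", "Sadness"),
              ("Sadness", "Joy"), ("Anger", "Anger")]
# A perfect study would leave only the diagonal populated.
for label, row in zip(labels, confusion_matrix(judgements, labels)):
    print(label, row)
```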



The results (Chi Square analysis with an alpha value of 0.05, as before) showed a significant outcome for the data collected for each emotion, although not all emotions were correctly identified. What is also interesting is how closely the data matches between the two groups, suggesting that there is indeed a recognisable set of emotions in music, whether one is familiar with the system and its processes or just a listener to an emotionally altered piece.

Discussion

The results gathered by this research show much promise for an interactive approach to music analysis and alteration. The benefits are observable: clear, empirical data captured from precise on-screen user decisions; avoidance of confusion through bias, introspection and descriptive issues; ease of use across a broad spectrum of musical ability; and easy capture of emotive templates for re-use. The follow-up study also demonstrated that listeners were able to correctly identify the intended emotion or mood in pieces of music that had been altered by the templates gathered by the system.

To summarise the capabilities the approach provides: The ability to capture a user’s preferences in the form of a template in response to a request to induce an arbitrary mood, emotion or categorisation in any piece of music;

The ability to store this template for re-use on other musical material as required;

The ability to generalise the data gathered across a number of users for any specific template descriptor: for example, sadness;

This then implies the ability to segment any generalised findings into any required demographic, such as locale, culture, game genre, gender and so on.

Now that the concept and practice of capturing emotive templates for music has been demonstrated, the next phase is to ask how this system could be leveraged in a computer gaming environment, as proposed earlier. First of all, let us consider how the system currently operates and compare this with the requirements of a computer game soundtrack generation engine. In the system presented here the target of the emotive modification process is a static MIDI file, and the transformations are applied to the entire piece in one pass. To be able to react in real time, the processing would need to be retargeted to alter the music as it is being produced by the music engine, perhaps as part of the workflow for that process. Any transformation functions that rely on knowing how far into the piece they are would probably have to be either removed or substantially remodelled. It would also be



possible to implement modification functions that act on one bar of music at a time and then alter the music at the start of the next bar in the soundtrack. As the emotive information is not as time-critical as a sound effect for, say, a sword swish, this approach could be quite effective. Another issue is the use of MIDI itself, but the transformations are described logically in terms of musical alterations and so could be applied to any symbolic method of representing music.
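The bar-at-a-time idea can be sketched as a generator that wraps the music engine's output: the game state is re-read at every bar boundary, so an emotive change takes effect at the start of the next bar rather than mid-phrase. Everything here (the mood names, the gain-only 'template') is a deliberately simplified stand-in for the full transformation set.

```python
def emotive_stream(bars, get_game_state, templates):
    """Wrap a bar-by-bar music stream: re-read the game state at every bar
    boundary and apply that bar's template (here just a velocity gain)."""
    for bar in bars:
        gain = templates[get_game_state()["mood"]]
        yield [min(127, round(v * gain)) for v in bar]

# Invented demo: the mood flips to 'Fear' before the second bar is emitted.
moods = iter(["Calm", "Fear"])
bars = [[80, 80], [80, 80]]              # each bar: note velocities
templates = {"Calm": 1.0, "Fear": 1.3}
out = list(emotive_stream(bars, lambda: {"mood": next(moods)}, templates))
print(out)  # [[80, 80], [104, 104]]
```

Because the stream is lazy, the modification layer never needs to know how long the piece will run, which suits the indeterminate durations of game play.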

Moving on from the technical considerations of system integration, part of the preparation for emotive soundtrack adaptation in the creation of a new game would involve establishing what profiles are required in terms of moods or emotions. A set of categories for the templates required for any given game scenario would need to be proposed, such as basic emotions (Happy, Sad, Angry, Fearful and so on) or game-play variables (fatigue, adrenaline, amount of game level complete, weapons gathered and so on). There would also be a requirement for sub-categories within each culture or locale, as the templates for given emotions could quite possibly differ across cultures, and this must be taken into account in preparation for producing localised versions of games. These categories could always be expanded later as required. Decisions would need to be made as to what game data should be tracked, and how it should be mapped onto the soundtrack: for example, progression through a stage or level in a game, performance of the player, energy levels, or behaviour analysis. At a higher level, the interaction of differing basic emotions or signals could have interesting results, such as the combination of anger and disgust leading to more complex emotions such as contempt. This could also create highly individual and personal soundtrack content if the modification process were made up of a product of a series of individual templates, each weighted according to influence, strength, distance and so on.
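The weighted-combination idea can be sketched as a linear blend of parameter templates, with the weights standing in for influence, strength or distance; the templates and their values are invented for illustration, and a template here is reduced to a flat dict of transformation parameters.

```python
def blend_templates(weighted):
    """Blend emotive templates linearly.

    weighted: list of (template, weight) pairs; all templates are assumed
    to share the same parameter keys. Weights are normalised to sum to 1."""
    total = sum(w for _, w in weighted)
    keys = weighted[0][0].keys()
    return {k: sum(t[k] * w for t, w in weighted) / total for k in keys}

# Invented example templates (tempo and gain multipliers only):
anger = {"tempo": 1.3, "gain": 1.2}
disgust = {"tempo": 0.9, "gain": 0.8}
# A dominant source of anger plus a weaker note of disgust -> 'contempt'.
contempt = blend_templates([(anger, 0.75), (disgust, 0.25)])
print({k: round(v, 3) for k, v in contempt.items()})  # {'tempo': 1.2, 'gain': 1.1}
```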

Possibly the biggest area of work would relate to the creation of the templates themselves in all the required categories, sub-categories and cultures. The game authors would be responsible for producing the definitive version for each culture/locale, but there would also be the possibility of user-created profiles, where individual players could save their own favourite modification profiles. A further development along this path could see players being able to upload their profiles, or share them with fellow gamers. This could be regarded as a variant of the crowdsourcing model, where users would be able to create their own cultural templates for the soundtrack of the game, possibly with the option of uploading these templates for use by the global community.

The term 'crowdsourcing' was first coined by Jeff Howe (2006) and later defined as "the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call" (Howe, 2009). Howe also refers to it as "the future of corporate R&D", citing the example of InnoCentive, the "research world's version of iStockPhoto" (Howe, 2006). In contrast, what is of more interest here is crowdsourcing motivated simply by a personal desire to make a contribution, as demonstrated by contributors to the translation of Facebook. Their reward is recognition from their peers, and perhaps personal satisfaction; no money changes hands.

What is proposed here is the incorporation of a simplified version of the music modification application embedded in the game itself that would allow any player to tweak the musical transformations that are connected to the game variables, and thus create templates for their own use. As an example, a player may see a latent threat in the game-play as being embodied by a minor key shift and a faster tempo. A further development would be allowing the sharing of these user profiles. The most extreme reading of this trend would be the creation of a website for a game to allow the management and collation of these profiles, thus opening up the possibility of facilitating the ‘localisation’ of the game soundtrack to match a player’s own particular locale or culture, particularly if they feel the existing templates for emotions, moods or categories do not accurately reflect their preferences. For example, the default representation of anger in a wargame soundtrack may be completely at odds with the idea of musical anger in some cultures. In fact, it would be hoped that this form of ‘national pride’ would provide the motivation for contribution, as it has done for some minority languages on Facebook. Quality enforcement, and the avoidance of online ‘vandalism’, could be realised by including a peer voting system similar to that implemented by Threadless, the web-based t-shirt company (Brabham, 2008). This would allow visitors to the site to vote on the accuracy and appropriateness of existing templates. The data gathered could be of great value to the game authors, giving them a window into the requirements of the gaming community and helping with their localisation requirements for creating global versions of their software.
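A Threadless-style peer-voting filter of the kind suggested above can be sketched very simply. The thresholds below are invented for illustration; the paper does not specify a scoring scheme.

```python
# Hypothetical sketch: a shared soundtrack template is accepted into the
# community pool only once enough votes are in and approval is high,
# filtering out low-quality or vandalised submissions.

def approved(votes_up: int, votes_down: int,
             min_votes: int = 10, min_ratio: float = 0.7) -> bool:
    """Return True if a template passes the community quality bar."""
    total = votes_up + votes_down
    if total < min_votes:
        return False  # not enough community feedback yet
    return votes_up / total >= min_ratio
```

Requiring a minimum number of votes before a verdict prevents a template from being accepted (or vandalised into rejection) on the strength of one or two early voters.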

Conclusion

This research demonstrates the plausibility of creating a system to capture emotion-specific templates in music by involving the participant directly in the process of creating those templates. While there is still a lot of work to be done from a propagation standpoint, this work presents a new research possibility in the fields of cognitive musicology, computer gaming and cultural adaptation: the possibility of performing emotive modification on music by culture or locale for the adaptation of game soundtracks. The system constructed is a good foundation for the further development of such functionality.




References

Arons, B. (1992). A review of the cocktail party effect. Journal of the American Voice I/O Society, 12, 35-50.
Berens, K., & Howard, G. (2001). The Rough Guide to Videogaming 2002. London/New York: Rough Guides.
Brabham, D. (2008). Crowdsourcing as a Model for Problem Solving: An Introduction and Cases. Convergence, 14(1), 75-90.
Chandler, H. (2005). The Game Localization Handbook. Massachusetts: Charles River Media.
Clynes, M. (1980). The communication of emotion: theory of sentics. In R. Plutchik, & H. Kellerman (Eds.), Theories of Emotion (Vol. 1, pp. 171-216). New York: Academic Press.
Collins, K. (2009). An Introduction to Procedural Music in Video Games. Contemporary Music Review, 28(1), 5-15.
Crosignani, S., Ballista, A., & Minazzi, F. (2008, October/November). Preserving the spell in games localization. MultiLingual, 38-41.
Dietz, F. (2006). Issues in localizing computer games. In K. Dunne (Ed.), Perspectives in Localization (pp. 121-134). Amsterdam/Philadelphia: John Benjamins Publishing Company.
Downey, J. E. (1897). A musical experiment. American Journal of Psychology, 9, 63-69.
Frasca, G. (2001). Rethinking agency and immersion: video games as a means of consciousness-raising. SIGGRAPH 2001. Retrieved from http://siggraph.org/artdesign/gallery/S01/essays.html
Gabrielson, A. J. (1996). Emotional expression in music performance: Between the performer's intention and the listener's experience. Psychology of Music, 24, 68-91.
Gale, H. (1888). Schopenhauer's Metaphysics of Music. New Englander and Yale Review, 48(CCXVIII), 362-368.
Gilman, B. I. (1891). Report on an experimental test of musical expressiveness. American Journal of Psychology, 4, 558-576.
Good, M., Picot, B., Salem, S., Chin, C., & Picot, S. (2006). Cultural Differences in Music Chosen for Pain Relief. Journal of Holistic Nursing, 18(3), 245-260.
Gregory, A. H., & Varney, N. (1996). Cross-Cultural comparisons in the affective response to music. Psychology of Music, 24, 47-52.
Gundlach, R. H. (1935). Factors determining the characterization of musical phrases. American Journal of Psychology, 47, 624-644.
Howe, J. (2006). The Rise of Crowdsourcing. Wired Magazine, 14(6). Retrieved from http://www.wired.com/wired/archive/14.06/crowds_pr.html
Howe, J. (2009). Crowdsourcing: A Definition. Retrieved from http://crowdsourcing.typepad.com



Howland, G. (1998). Game Design: The essence of computer games. Retrieved from http://www.cpphome.com/tutorials/
Izard, C. E. (1991). The psychology of emotions. New York: Plenum.
Jorgenssen, A. H. (1989). Using the think-aloud method in systems development. In G. Salvendy, & M. Smith (Eds.), Designing and Using Human-Computer Interfaces and Knowledge-Based Systems. Amsterdam: Elsevier Science.
Land, M. Z., & McConnell, P. N. (1994). US Patent No. 5,315,057.
Meyer, L. B. (2001). Music and Emotion: Distinctions and Uncertainties. In P. N. Juslin, & J. A. Sloboda (Eds.), Music and emotion: Theory and research (pp. 341-360). New York: Oxford University Press.
Monk, A., Wright, P., Haber, J., & Davenport, L. (1993). Improving Your Human-Computer Interface: A Practical Technique. Prentice Hall International.
Morrison, S. J., Demorest, S. M., Aylward, E. H., Cramer, S. C., & Maravilla, K. R. (2003). fMRI investigation of cross-cultural music comprehension. NeuroImage, 20, 378-384.
O'Keeffe, I. R. (2009). Music Localisation: Active Music Content for Web Pages. Localisation Focus - The International Journal of Localisation, 8(1), 67-81.
Pignatiello, M., Camp, C. J., & Rasar, L. A. (1986). Musical mood induction: An alternative to the Velten technique. Journal of Abnormal Psychology, 95(3), 295-297.
Plutchik, R. (1980). A general psychoevolutionary theory of emotion. In R. Plutchik, & H. Kellerman (Eds.), Theories of Emotion (pp. 3-33). New York: Academic Press.
Plutchik, R. (2001). The Nature of Emotions. American Scientist, 89, 344-350.
Rigg, M. G. (1937). Musical expression: An investigation of the theories of Erich Sorantin. Journal of Experimental Psychology, 21, 442-455.
Scherer, K. R. (1995). Expression of Emotion in Voice and Music. Journal of Voice, 9(3), 235-248.
Shand, A. F. (1914). The Foundations of Character: Being a Study of the Emotions and Sentiments. Macmillan.
Thompson, W. F., & Robitaille, B. (1992). Can composers express emotions through music? Empirical Studies of the Arts, 10, 79-89.
Trainor, H. (2003). Games localization: production and testing. Multilingual Computing & Technology, 14(5), 17-20.
Walker, R. (1996). Open peer commentary: Can we understand the music of another culture? Psychology of Music, 24, 103-130.



Table 1. Totals for Tempo Modifications by Emotion and Calculated Chi Square Values

| Measure | Joy | Sadness | Anger | Fear | Acceptance | Disgust | Surprise | Anticipation | No Emotion (Control) |
|---|---|---|---|---|---|---|---|---|---|
| Tempo Change: Accelerating | 0 | 0 | 4 | 2 | 0 | 1 | 2 | 2 | 0 |
| Tempo Change: Decelerating | 0 | 4 | 0 | 1 | 0 | 1 | 0 | 0 | 0 |
| Tempo Change: Constant | 9 | 5 | 5 | 6 | 9 | 7 | 7 | 7 | 9 |
| X² (Tempo Change) | 18.00 | 4.67 | 4.67 | 4.67 | 18.00 | 8.00 | 8.67 | 8.67 | 18.00 |
| Tempo: Faster | 8 | 0 | 7 | 4 | 2 | 3 | 8 | 6 | 0 |
| Tempo: Slower | 0 | 9 | 1 | 3 | 6 | 4 | 1 | 2 | 7 |
| Tempo: No Change | 1 | 0 | 1 | 2 | 1 | 2 | 0 | 1 | 2 |
| X² (Tempo) | 12.67 | 18.00 | 8.00 | 0.67 | 4.67 | 0.67 | 12.67 | 4.67 | 8.67 |

Note. X² .05 (2) crit = 5.99
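The reported chi-square values can be recomputed directly from the observed counts. Each emotion column in Table 1 holds nine observations over three categories, so the expected count under a uniform null hypothesis is 3 per cell (df = 2, critical value 5.99):

```python
# Goodness-of-fit chi-square against a uniform expected distribution,
# reproducing the values reported for Table 1 (tempo change).

def chi_square(observed):
    expected = sum(observed) / len(observed)
    return sum((o - expected) ** 2 / expected for o in observed)

joy = [0, 0, 9]      # accelerating, decelerating, constant
disgust = [1, 1, 7]

print(round(chi_square(joy), 2))      # 18.0, as reported for Joy
print(round(chi_square(disgust), 2))  # 8.0, as reported for Disgust
```

Both values exceed the critical value of 5.99, indicating that the participants' tempo-change choices for these emotions were not uniformly distributed.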



Table 2. Totals for Pitch Modifications by Emotion and Calculated Chi Square Values

| Pitch | Joy | Sadness | Anger | Fear | Acceptance | Disgust | Surprise | Anticipation | No Emotion (Control) |
|---|---|---|---|---|---|---|---|---|---|
| Up | 6 | 0 | 0 | 5 | 3 | 1 | 8 | 6 | 1 |
| Down | 2 | 9 | 9 | 3 | 3 | 8 | 1 | 1 | 3 |
| No Change | 1 | 0 | 0 | 1 | 3 | 0 | 0 | 2 | 5 |
| X² | 4.67 | 18.00 | 18.00 | 2.67 | 0.00 | 12.67 | 12.67 | 4.67 | 2.67 |

Note. X² .05 (2) crit = 5.99



Table 3. Totals for Rhythm Modifications by Emotion and Calculated Chi Square Values

| Rhythm | Joy | Sadness | Anger | Fear | Acceptance | Disgust | Surprise | Anticipation | No Emotion (Control) |
|---|---|---|---|---|---|---|---|---|---|
| Regular | 2 | 5 | 2 | 3 | 4 | 0 | 0 | 5 | 8 |
| Triplet | 3 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 |
| Dotted | 3 | 1 | 0 | 0 | 3 | 1 | 0 | 0 | 0 |
| Double-dotted | 1 | 0 | 4 | 3 | 0 | 1 | 4 | 1 | 0 |
| Syncopated | 0 | 1 | 1 | 2 | 1 | 1 | 3 | 1 | 0 |
| Irregular | 0 | 2 | 2 | 1 | 0 | 6 | 2 | 1 | 1 |
| X² | 6.37 | 11.73 | 7.71 | 6.37 | 9.05 | 17.09 | 10.39 | 10.39 | 34.52 |

Note. X² .05 (5) crit = 11.07



Table 4. Totals for Timbre Modifications by Emotion and Calculated Chi Square Values

| Timbre | Joy | Sadness | Anger | Fear | Acceptance | Disgust | Surprise | Anticipation | No Emotion (Control) |
|---|---|---|---|---|---|---|---|---|---|
| Piano | 2 | 0 | 3 | 0 | 3 | 2 | 4 | 4 | 8 |
| Celeste | 5 | 0 | 0 | 0 | 2 | 0 | 3 | 2 | 0 |
| Organ | 0 | 1 | 0 | 2 | 0 | 1 | 0 | 0 | 0 |
| Guitar | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
| Violin | 0 | 5 | 0 | 2 | 0 | 0 | 0 | 1 | 0 |
| Choir | 0 | 0 | 1 | 5 | 0 | 0 | 1 | 0 | 0 |
| Trumpet | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
| Sax | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 |
| Oboe | 0 | 1 | 1 | 0 | 1 | 2 | 0 | 0 | 0 |
| Flute | 2 | 1 | 0 | 0 | 1 | 1 | 1 | 2 | 0 |
| Synth | 0 | 0 | 2 | 0 | 0 | 3 | 0 | 0 | 1 |
| X² | 31.34 | 26.45 | 14.23 | 31.34 | 11.78 | 14.23 | 24.01 | 21.56 | 70.46 |

Note. X² .05 (10) crit = 19.68



Table 5. Totals for Harmony Modifications by Emotion and Calculated Chi Square Values

| Harmony | Joy | Sadness | Anger | Fear | Acceptance | Disgust | Surprise | Anticipation | No Emotion (Control) |
|---|---|---|---|---|---|---|---|---|---|
| Fifth | 3 | 4 | 5 | 5 | 2 | 7 | 3 | 3 | 0 |
| None | 6 | 5 | 4 | 4 | 7 | 2 | 6 | 6 | 9 |
| X² | 1.00 | 0.11 | 0.11 | 0.11 | 2.78 | 2.78 | 1.00 | 1.00 | 9.00 |

Note. X² .05 (1) crit = 3.84



Table 6. Totals for Accompaniment Modifications by Emotion and Calculated Chi Square Values

| Accompaniment | Joy | Sadness | Anger | Fear | Acceptance | Disgust | Surprise | Anticipation | No Emotion (Control) |
|---|---|---|---|---|---|---|---|---|---|
| Triads | 0 | 0 | 4 | 2 | 0 | 1 | 2 | 2 | 0 |
| Arp. Triads | 0 | 4 | 0 | 1 | 0 | 1 | 0 | 0 | 0 |
| Tonic Bass | 0 | 4 | 0 | 1 | 0 | 1 | 0 | 0 | 0 |
| None | 9 | 5 | 5 | 6 | 9 | 7 | 7 | 7 | 9 |
| X² | 18.00 | 4.67 | 4.67 | 4.67 | 18.00 | 8.00 | 8.67 | 8.67 | 18.00 |

Note. X² .05 (3) crit = 7.81



Table 7. Totals for Dynamics Modifications by Emotion and Calculated Chi Square Values

| Dynamics | Joy | Sadness | Anger | Fear | Acceptance | Disgust | Surprise | Anticipation | No Emotion (Control) |
|---|---|---|---|---|---|---|---|---|---|
| Very Quiet | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| Quiet | 2 | 4 | 0 | 3 | 0 | 0 | 0 | 0 | 1 |
| Moderate | 2 | 0 | 0 | 1 | 8 | 2 | 3 | 7 | 7 |
| Loud | 4 | 0 | 0 | 2 | 1 | 1 | 2 | 0 | 0 |
| Very Loud | 1 | 0 | 6 | 0 | 0 | 5 | 3 | 0 | 0 |
| Getting Louder | 0 | 0 | 3 | 3 | 0 | 1 | 1 | 2 | 0 |
| Getting Quieter | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| X² | 10.49 | 13.62 | 26.12 | 8.93 | 41.74 | 15.18 | 8.93 | 32.37 | 30.80 |

Note. X² .05 (6) crit = 12.59



Table 8. Totals for Drum Rhythm Modifications by Emotion and Calculated Chi Square Values

| Drum Rhythm | Joy | Sadness | Anger | Fear | Acceptance | Disgust | Surprise | Anticipation | No Emotion (Control) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 0 | 5 | 2 | 1 | 1 | 5 | 2 | 2 | 0 |
| 2 | 1 | 0 | 4 | 6 | 2 | 1 | 2 | 3 | 0 |
| 3 | 5 | 0 | 1 | 0 | 2 | 2 | 1 | 1 | 0 |
| 4 | 2 | 0 | 1 | 0 | 2 | 1 | 0 | 1 | 0 |
| None | 1 | 4 | 1 | 2 | 2 | 0 | 4 | 2 | 9 |
| X² | 8.22 | 13.78 | 3.78 | 13.78 | 0.44 | 8.22 | 4.89 | 1.56 | 36.00 |

Note. X² .05 (4) crit = 9.49



Table 9. Totals for Attack Length Modifications by Emotion and Calculated Chi Square Values

| Attack Length | Joy | Sadness | Anger | Fear | Acceptance | Disgust | Surprise | Anticipation | No Emotion (Control) |
|---|---|---|---|---|---|---|---|---|---|
| Short | 5 | 0 | 3 | 2 | 2 | 3 | 7 | 2 | 2 |
| Medium | 1 | 0 | 4 | 6 | 4 | 4 | 1 | 5 | 5 |
| Long | 3 | 9 | 2 | 1 | 3 | 2 | 1 | 2 | 2 |
| X² | 2.67 | 18.00 | 0.67 | 4.67 | 0.67 | 0.67 | 8.00 | 2.00 | 2.00 |

Note. X² .05 (2) crit = 5.99



Table 10. Totals for Articulation Modifications by Emotion and Calculated Chi Square Values

| Articulation | Joy | Sadness | Anger | Fear | Acceptance | Disgust | Surprise | Anticipation | No Emotion (Control) |
|---|---|---|---|---|---|---|---|---|---|
| Even | 3 | 7 | 1 | 4 | 2 | 4 | 4 | 7 | 8 |
| Beat-stressed | 5 | 0 | 6 | 2 | 6 | 0 | 1 | 0 | 1 |
| Random | 1 | 2 | 2 | 3 | 1 | 5 | 4 | 2 | 0 |
| X² | 2.67 | 8.67 | 4.67 | 0.67 | 4.67 | 4.67 | 2.00 | 8.67 | 12.67 |

Note. X² .05 (2) crit = 5.99



Table 11. Totals for Scale Modifications by Emotion and Calculated Chi Square Values

| Scale | Joy | Sadness | Anger | Fear | Acceptance | Disgust | Surprise | Anticipation | No Emotion (Control) |
|---|---|---|---|---|---|---|---|---|---|
| Major | 8 | 0 | 2 | 1 | 4 | 0 | 6 | 7 | 6 |
| Minor | 0 | 9 | 3 | 5 | 0 | 2 | 0 | 1 | 0 |
| Pentatonic | 1 | 0 | 1 | 0 | 5 | 2 | 2 | 0 | 1 |
| Blues | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 2 |
| Gypsy | 0 | 0 | 2 | 2 | 0 | 4 | 1 | 0 | 0 |
| X² | 27.11 | 36.00 | 1.56 | 8.22 | 13.78 | 4.89 | 13.78 | 19.33 | 13.78 |

Note. X² .05 (4) crit = 9.49



Table 12. Confusion Matrix of Data for those Familiar with the System

| Actual \ Predicted | Joy | Sadness | Anger | Fear | Acceptance | Disgust | Surprise | Anticipation | No Emotion (Control) | % Correct |
|---|---|---|---|---|---|---|---|---|---|---|
| Joy | 17 | 0 | 0 | 0 | 5 | 0 | 8 | 4 | 0 | 0.50 |
| Sadness | 0 | 22 | 2 | 2 | 2 | 2 | 0 | 3 | 0 | 0.67 |
| Anger | 0 | 0 | 18 | 4 | 0 | 7 | 2 | 1 | 0 | 0.56 |
| Fear | 0 | 0 | 0 | 16 | 0 | 3 | 2 | 11 | 0 | 0.50 |
| Acceptance | 9 | 0 | 0 | 0 | 7 | 0 | 3 | 6 | 3 | 0.25 |
| Disgust | 0 | 0 | 5 | 8 | 0 | 17 | 1 | 0 | 0 | 0.55 |
| Surprise | 12 | 0 | 0 | 0 | 3 | 0 | 11 | 2 | 0 | 0.39 |
| Anticipation | 14 | 0 | 0 | 0 | 14 | 0 | 3 | 0 | 0 | 0.00 |
| No Emotion (Control) | 2 | 0 | 0 | 0 | 3 | 0 | 0 | 3 | 18 | 0.69 |
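The '% Correct' column of the confusion matrices can be recomputed by normalising each row by its total and taking the diagonal entry:

```python
# Recompute the '% Correct' column of Table 12: the diagonal entry of each
# (actual emotion) row, divided by the row total.

def percent_correct(row, diagonal_index):
    """Proportion of trials on this row that were predicted correctly."""
    return round(row[diagonal_index] / sum(row), 2)

joy_row = [17, 0, 0, 0, 5, 0, 8, 4, 0]      # actual = Joy
sadness_row = [0, 22, 2, 2, 2, 2, 0, 3, 0]  # actual = Sadness

print(percent_correct(joy_row, 0))      # 0.5
print(percent_correct(sadness_row, 1))  # 0.67
```

The off-diagonal mass shows where confusions cluster, e.g. Joy being mistaken for Surprise or Acceptance.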



Table 13. Confusion Matrix of Data for the Independent Test Subjects

| Actual \ Predicted | Joy | Sadness | Anger | Fear | Acceptance | Disgust | Surprise | Anticipation | No Emotion (Control) | % Correct |
|---|---|---|---|---|---|---|---|---|---|---|
| Joy | 17 | 0 | 0 | 0 | 3 | 0 | 6 | 0 | 3 | 0.59 |
| Sadness | 0 | 27 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 |
| Anger | 0 | 0 | 14 | 8 | 0 | 3 | 0 | 6 | 0 | 0.45 |
| Fear | 0 | 0 | 6 | 12 | 0 | 3 | 0 | 5 | 3 | 0.41 |
| Acceptance | 3 | 0 | 0 | 0 | 15 | 0 | 3 | 3 | 3 | 0.56 |
| Disgust | 0 | 0 | 0 | 9 | 0 | 17 | 0 | 3 | 0 | 0.59 |
| Surprise | 3 | 0 | 0 | 0 | 0 | 0 | 17 | 6 | 3 | 0.59 |
| Anticipation | 12 | 0 | 0 | 0 | 6 | 0 | 2 | 3 | 6 | 0.10 |
| No Emotion (Control) | 0 | 0 | 0 | 0 | 3 | 0 | 0 | 6 | 18 | 0.67 |



XLIFF Mapping to RDF

Dimitra Anastasiou

SFB/TR8 Spatial Cognition, Computer Science/Languages and Literary Studies, University of Bremen, Germany

anastasiou@uni-bremen.de

Abstract

This paper discusses the lack of interoperability between file formats, standards, and applications. We suggest a mapping from the 'XML Localisation Interchange File Format' (XLIFF) into the 'Resource Description Framework' (RDF) in order to enhance interoperability between a metadata standard and a metadata model. Three use cases are provided (a minimal one, a modular one, and one with alternative translations), each with a source (XLIFF) file, an output (RDF) file, and an 'Extensible Stylesheet Language Transformations' (XSLT) file. We explain in detail how the XLIFF file elements and attributes can be matched by the XSLT. Believing in this symbiotic relationship as a more effective way of presenting multilingual content on the Web, we developed a conversion tool that translates XLIFF into RDF in order to automate the process. Our contribution is to translate XLIFF into RDF in order to facilitate ontology localisation, i.e. to localise monolingual ontologies and populate Semantic Web approaches with localisation-related metadata.

Keywords: conversion, interoperability, localisation, RDF, standards, XLIFF




Introduction

Nowadays interoperability between file formats, standards, applications, and tools is not only vital, but necessary for integration purposes. We adopt a general definition of interoperability, namely that 'language resources interact or work together' (Witt, Heid, Sasaki & Sérasset, 2009: 5). A lack of interoperability can mean, for example, that file formats cannot be converted into other formats at all, or that converted files are corrupt. A lack of converters and non-compliance with standards can, inter alia, lead to interoperability failure. In the context of the Semantic Web, the linking of data and metadata with semantic information plays an important role in digital content transfer, management, and localisation. Gerber, Barnard and Van der Merwe (2006) state that the terms 'semantics', 'metadata', 'ontologies' and 'Semantic Web' are used inconsistently. The authors explain in detail each layer of the 'Semantic Web Cake' (Berners-Lee, Hendler & Lassila, 2001). The Semantic Web has been described in rather different ways: as a utopian vision, a web of data, or merely a natural paradigm shift in our daily use of the Web.16 There are two uses of this technology in the Semantic Web: i) documenting agreements on the structure and format of knowledge (ontology) and ii) sharing information in a structured format (linked data).

Ontologies are formal knowledge representations of a set of concepts within a domain and the relationships between those concepts; ontologies are used by people, databases, and applications in order to share common domain information. The 'Resource Description Framework' (RDF) by the W3C is a language for representing information about web resources, and ontologies are often represented in RDF. According to the taxonomy of language resources of Witt et al. (2009), ontologies belong to the item-based static resources. Our goal is to enhance interoperability by combining the 'XML Localisation Interchange File Format' (XLIFF) with the 'Resource Description Framework' (RDF) through an XLIFF to RDF (XLIFF2RDF) mapping. XLIFF, which is under the auspices of the Organization for the Advancement of Structured Information Standards (OASIS), is a single interchange file format designed by a group of software providers, localisation service providers, and tools providers. XLIFF can be used to exchange data between companies, such as software publishers and localisation vendors, or between localisation tools. It is an open standard that, in addition to the localisation data it contains, can carry rich metadata, such as the status of strings, the status of the localisation process, software version information, and so on.

16 http://semanticweb.org/wiki/Main_Page, 10/08/10



The motivation of this work, the XLIFF mapping to RDF, is that an ontology of XLIFF concepts can be used to annotate resource descriptions, i.e. become an integrated part of the Semantic Web. Our contribution is to map XLIFF elements and attributes to RDF. As XLIFF is an OASIS localisation standard and RDF a family of W3C specifications, there is a trade-off between using an often 'restrictive' standard and custom specifications. We attempt to strike a golden mean, maintaining a symbiotic relationship between XLIFF and RDF, so that more applications can read and process this format and thus become XLIFF/RDF compliant. Hence interoperability will be enhanced between a metadata standard and a metadata model.
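The paper performs the transformation with XSLT; purely as an illustration of the mapping idea, the sketch below parses a minimal XLIFF 1.2 file with the Python standard library and emits RDF triples in Turtle syntax. The xlf: namespace URI and property names are invented for this sketch and are not those of the paper's mapping.

```python
# Illustrative XLIFF-to-RDF sketch: each trans-unit becomes an RDF subject
# with language-tagged source and target literals.
import xml.etree.ElementTree as ET

XLIFF = """<xliff version="1.2" xmlns="urn:oasis:names:tc:xliff:document:1.2">
  <file source-language="en" target-language="de" datatype="plaintext" original="demo.txt">
    <body>
      <trans-unit id="1">
        <source>Hello world</source>
        <target>Hallo Welt</target>
      </trans-unit>
    </body>
  </file>
</xliff>"""

NS = {"x": "urn:oasis:names:tc:xliff:document:1.2"}

def xliff_to_turtle(xliff_text: str) -> str:
    root = ET.fromstring(xliff_text)
    file_el = root.find("x:file", NS)
    src_lang = file_el.get("source-language")
    tgt_lang = file_el.get("target-language")
    # The prefix URI below is a placeholder, not an official vocabulary.
    lines = ["@prefix xlf: <http://example.org/xliff#> ."]
    for tu in file_el.iterfind(".//x:trans-unit", NS):
        subject = f"<http://example.org/unit/{tu.get('id')}>"
        source = tu.findtext("x:source", namespaces=NS)
        target = tu.findtext("x:target", namespaces=NS)
        lines.append(f'{subject} xlf:source "{source}"@{src_lang} ;')
        lines.append(f'    xlf:target "{target}"@{tgt_lang} .')
    return "\n".join(lines)
```

The language tags on the literals ("…"@en, "…"@de) carry over exactly the source-language/target-language metadata of the XLIFF file, which is what makes the RDF output usable for ontology localisation.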

The article is laid out as follows: in section 2 we present some related work on ontology creation, mapping and localisation, and on multilingual ontologies. Ontology localisation (Suarez-Figueroa & Gómez-Pérez, 2008) is important for our research, as localisation is related to XLIFF, and ontologies are often tied to RDF. Section 3 discusses how multilingualism in ontologies is supported by standards. XLIFF and RDF are described in sections 4 and 5 respectively, also in relation to metadata. Section 6 is concerned with the actual mapping from XLIFF to RDF (XLIFF2RDF); we present three use cases and provide for each the source, the output, and the 'Extensible Stylesheet Language Transformations' (XSLT) file. We conclude this paper with future prospects (section 7) and a summary/conclusion (section 8).




Ontology Creation, Mapping, and Localisation

In the context of computer and information sciences, ontology is defined by Gruber (2009) as follows:

An ontology defines a set of representational primitives with which to model a domain of knowledge or discourse. The representational primitives are typically classes (or sets), attributes (or properties), and relationships (or relations among class members).

Ontologies can be related to linguistics and linguistic resources by providing building blocks for linguistic description and analysis (and their hierarchy). Farrar and Langendoen (2003) took the first step toward the creation of a linguistic community of practice by working on the General Ontology for Linguistic Description (GOLD). They organised linguistically related concepts into four major domains: expressions, grammar, data constructs, and metaconcepts. GOLD was intended to capture the knowledge of a well-trained linguist, and can be viewed as an attempt to codify the general knowledge of the field.

A relatively recent model that has been proposed to associate linguistic data with ontologies is the 'Linguistic Information Repository' (LIR) (Montiel-Ponsoda, Aguado, Gomez-Perez & Peters, 2008), specially designed to account for cultural and linguistic differences among languages. LIR is a linguistic proprietary model to be published and used with domain ontologies; it covers a subset of lexical and terminological description elements that account for the linguistic realisation of a domain ontology in different natural languages.17

Although there is a plethora of ontologies (not only for linguistics), these are in most cases monolingual, and mainly in English. In the OntoSelect Ontology Library (Buitelaar, Eigner & Declerck, 2004), the distribution of human languages used in the definition of labels for classes and properties was 64% for English, followed by French and English (19%) and German and English (13%).

The contribution of multilingual ontologies to computational linguistics has been highlighted by Espinoza, Montiel Ponsoda, & Gómez-Pérez (2009: 33):

17 http://mayor2.dia.fi.upm.es/index.php/en/technologies/63-lir, 08/06/10



Multilingual ontologies are important to enable computational linguistics approaches, such as machine translation (MT), multilingual information retrieval (IR), question and answering, knowledge management, etc.

As far as ontology creation is concerned, manually building and aligning bi- or multilingual ontologies is a very expensive and time-consuming task. Carpuat (2002) created a bilingual ontology by syntactic alignment. She used a language-independent, corpus-based method that borrows from techniques used in information retrieval and machine translation (MT) to create a bilingual ontology by aligning WordNet with an existing Chinese ontology called HowNet. Jung, Håkansson and Hartung (2009) described a use case of aligning Korean and Swedish ontologies; in order to reuse alignments between multilingual ontologies, the alignments were stored in a centralised alignment repository and were freely available and sharable in a distributed ontology environment. As far as ontology mapping is concerned, Fu, Brennan and O'Sullivan (2009) examined a generic approach that involves MT tools and monolingual ontology matching techniques in cross-lingual ontology mapping scenarios. The authors carried out two experiments: the first examined the impact of MT tools in the process of ontology rendition, specifically the quality of machine-translated resource labels. The second experiment investigated the impact of MT tools in cross-lingual ontology mapping (CLOM) by evaluating the quality of matching results generated using the generic approach. The results show that if MT tools are to be used in CLOM, the quality of translated ontologies needs to be improved in order for monolingual matching tools to generate high-quality matching results. In their proposed framework SOCOM (semantic-oriented cross-lingual mapping), the semantics defined in one ontology can indicate the context in which a label to be translated is used. Thanks to the position of the node linked to this label, the labels of its surrounding nodes can be retrieved and studied. A framework for multilingual ontology mapping (where ontologies have been translated through a lexical database or a dictionary) can be found in Trojahn, Quaresma and Vieira (2008).

The terminological difference between ontology matching and ontology mapping should be noted here. According to O'Sullivan, Wade & Lewis (2007), matching is the identification of candidate matches between ontologies, whereas ontology mapping is the establishment of the actual correspondence between ontology resources based on candidate matches. More information about ontology matching in general can be found in Euzenat and Shvaiko (2007) and in the Ontology Alignment Evaluation Initiative (OAEI).



Now we turn our attention to ontology localisation. According to the NeOn project (Peters, Espinoza, Montiel-Ponsoda & Sini, 2006), there are two limitations on applying multilingual ontologies to the Internet: i) lack of expertise, and ii) usage of English as a pivot language. In order to have multilingual ontologies, monolingual ontologies have to be localised. The definition of ontology localisation provided by Suarez-Figueroa and Gomez-Perez (2008: 10) is the following:

[Ontology Localisation] is the adaptation of an ontology to a particular language and culture.

The natural language in which ontology labels are written is an arbitrary decision; multilingualism in the Semantic Web means, among other things, multilingual ontological systems, multilingual semantic tools, and multilingual search engines. Multilingual ontologies would enjoy greater adoption worldwide, and Semantic Web resources higher recognition. Apart from multilingual support, multicultural idiosyncrasies such as spelling variations, dialectal variants, etc. can be encapsulated in ontologies. Based on these points, we make the previous definition of ontology localisation more explicit:

Ontology Localisation is the adaptation of an ontology and its concepts to a locale, i.e. a unique combination of language and culture. This includes i) the translation of ontology labels into a natural language other than the original and ii) the adaptation of ontology labels to cultural characteristics, including spelling variations.
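A minimal sketch of what this extended definition implies for data modelling follows. The concept URI, locale codes, and labels are invented examples; the fallback logic is one plausible design, not a prescribed one.

```python
# Hypothetical sketch: ontology labels keyed by locale (language + culture),
# capturing spelling variations such as en-GB vs en-US.

ontology_labels = {
    "http://example.org/onto#Localisation": {
        "en-GB": "localisation",
        "en-US": "localization",
        "de-DE": "Lokalisierung",
        "fr-FR": "localisation",
    }
}

def label_for(concept_uri, locale, labels=ontology_labels):
    """Return the label for a locale, falling back to the bare language."""
    entry = labels.get(concept_uri, {})
    if locale in entry:
        return entry[locale]
    lang = locale.split("-")[0]
    for loc, label in entry.items():
        if loc.split("-")[0] == lang:
            return label  # first label sharing the language
    return None
```

Keying labels by full locale rather than by language alone is exactly what distinguishes ontology localisation, in the sense above, from plain label translation.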

Espinoza et al. (2009) examined various problems in the process of ontology localisation which fall under the categories of:

Localisation problems (existence of an exact equivalent/several context-dependent equivalents/conceptualisation mismatch);

Management problems in the maintenance (ontology term is added/disappears/renamed);

Multilinguality representation problems (inclusion of multilingual information in the ontology/creation of one conceptualisation per culture and language involved/association of external multilingual information to the ontology).

Subsequently, they propose the following guidelines for ontology localisation:



Select the most appropriate linguistic assets (these assets should have consensus, broad coverage, and high precision);

Select ontology label(s) to be localised (taking into account the context);

Obtain ontology label translation(s) (through cross-language term extraction, word sense discovery, or word sense disambiguation);

Evaluate label translation(s) (semantic fidelity and stylistic evaluation);

Ontology update.

More information about ontology localisation can be found in Cimiano, Montiel-Ponsoda, Buitelaar, Espinoza & Gómez-Pérez. (2010).

Espinoza et al. (2009) described an Ontology Localization Activity (OLA) and a methodology for guiding the localisation of ontologies. In OLA, a LabelTranslator (see Espinoza, Gómez-Pérez & Mena, 2008) takes as input an ontology whose labels are expressed in a source natural language and obtains the most probable translation of each ontology label into a target natural language. LabelTranslator has been designed to support ontology localisation by automating its main tasks, with the aim of reducing human intervention. The aforementioned LIR's main purpose is to associate multilingual information with ontologies, with the aim of contributing to OLA.

Hartmann, Palma, Sure, Haase & Suarez-Figueroa (2005) created an ontology metadata standard called 'Ontology Metadata Vocabulary' (OMV). OMV is a common set of terms and definitions describing ontologies, similar to a standard. They carried out a use case combining the decentralised 'Oyster' and the centralised 'Onthology',18 which are both applications for identifying, reusing, and providing ontology metadata. The OMV Core has, among others, a 'multilinguality extension'. Montiel-Ponsoda, Aguado, Gomez-Perez & Peters (2008) created LexOMV, an OMV extension to capture multilinguality. LexOMV informs people searching for multilingual ontologies of the quantity of linguistic and terminological data associated with the ontology.

18 Oyster is a P2P (Peer-to-Peer) system and Onthology a metadata portal. Onthology stands for an "anthology of ontologies".



More recently, a lexicon model for ontologies called lemon has been developed by the Monnet project. This model is intended to be a standard for sharing lexical information on the Semantic Web. Lemon is described in detail in the lemon cookbook19 (see also McCrae, Spohr & Cimiano, 2011). Montiel-Ponsoda, Gracia, Aguado-de-Cea and Gómez-Pérez (2011) proposed a new module to represent translation relations between lexicons in different natural languages associated with the same ontology or belonging to different ontologies; this enables the representation of different types of translation relations as well as translation metadata.

Moreover, Cardeñosa, Gallardo, Iraola and De la Villa (2008) used the Universal Networking Language (UNL) as an intermediate step between the process of acquiring knowledge from textual sources and translating it into one of the state-of-the-art knowledge representation formalisms for building multilingual ontologies.

In the next subsection we will see how multilingualism is encapsulated in other knowledge representation languages (see McGuiness, Fikes, Hendler & Stein, 2002), such as the Web Ontology Language (OWL) and the Simple Knowledge Organization System (SKOS).

Standard Support of (Multilingual) Ontologies

OWL (McGuiness & Harmelen, 2003) is a semantic markup language for publishing and sharing ontologies on the World Wide Web. OWL was developed as a vocabulary extension of RDF and is derived from the DAML+OIL Web Ontology Language.20 In 2008, OWL 2 was published; OWL 2 ontologies provide classes, properties, individuals, and data values, and are stored as Semantic Web documents. OWL 2 ontologies can be used along with information written in RDF, and OWL 2 ontologies themselves are primarily exchanged as RDF documents.21 One of the OWL properties is owl:sameAs.22 This property is usually used for linking individuals with the same 'identity', for example two different names of the same person. Also in OWL Full, an OWL sublanguage, the owl:sameAs construct can be used to indicate that two concepts have the same intensional meaning. However, translations of concepts can also be included in this property:

19 The lemon cookbook (n.d.). Retrieved March 15, 2012 from http://lexinfo.net/lemon-cookbook.pdf

20 OWL Web Ontology Language, W3C Recommendation 10 February 2004. Retrieved August 10, 2010 from http://www.w3.org/TR/owl-ref/
21 OWL 2 Web Ontology Language, W3C Recommendation 27 October 2009. Retrieved August 10, 2010 from http://www.w3.org/TR/owl2-overview/
22 OWL Web Ontology Language, W3C Recommendation 10 February 2004, § 5.2.1 owl:sameAs. Retrieved June 08, 2010 from http://www.w3.org/TR/owl-ref/#sameAs-def.



Dimitra Anastasiou

<rdf:Description rdf:about="#book">
  <owl:sameAs rdf:resource="#Buch"/>
</rdf:Description>

Example 1. OWL multilingual support

It should be mentioned here that Wim Peters re-engineered the cores of XLIFF, TMX (Translation Memory eXchange), and MLIF (Multilingual Information Framework)23 into OWL ontologies24. These ontologies are added as plug-ins to the GATE25 (General Architecture for Text Engineering) system. We now take a look at the ‘Upper Mapping and Binding Exchange Layer’ (UMBEL) project, as it is related to ontologies and multilingual information. UMBEL is a lightweight ontology for relating external ontologies and their classes to UMBEL subject concepts. UMBEL subject concepts are conceptually related to one another using the Simple Knowledge Organization System (SKOS) and the OWL Full ontologies. One of the UMBEL subject concepts is the so-called semset. In the semset concept, the translation of a concept is provided. In skos:altLabel a synonym of a word can be provided, as shown in the following example (Figure 1), which is a sample of the UMBEL Ontology Instantiation document.

23 TMX is a LISA (Localisation Industry Standards Association) standard for exchanging Translation Memory (TM) information. MLIF is an ISO DC standard which covers morphological description, syntactical annotation, and terminological description by providing a list of data categories, which are much easier to update and extend.
24 http://gate.ac.uk/ns/ontologies/LingNet/, Retrieved July 15, 2010.
25 http://gate.ac.uk/, Retrieved July 15, 2010.



RDF in localisation


Figure 1. Excerpt from the semset example26 (http://umbel.org/images/080605_umbel_techdoc_fig3.png)

The SKOS element skos:altLabel is not used exclusively for synonyms, but also for morphological (singular/plural) or spelling (US/UK) variants, as seen below in Example 2.

semset-en:Project
    a umbel:Semset ;
    skos:prefLabel """project"""@en ;
    skos:altLabel """projects"""@en ;
    skos:altLabel """undertakings"""@en ;
    skos:altLabel """undertaking"""@en ;
    skos:altLabel """enterprises"""@en ;
    skos:altLabel """enterprise"""@en ;
    skos:altLabel """programs"""@en ;
    skos:altLabel """program"""@en ;
    skos:altLabel """programme"""@en ;
    dcterms:language <http://www.lingvoj.org/lingvo/en> .

Example 2. SKOS multi-lingual and -cultural support

As we have seen in this section, there is localisation support for ontologies from various standards. In the next section we introduce XLIFF and its bi- and multilingual support.

26 http://umbel.org/specifications/annexes/annex-c, Retrieved March 13, 2012.




XLIFF

XLIFF is the XML Localisation Interchange File Format, used as an intermediate file format for translating digital content, including software. The standard is under the auspices of OASIS and was first released in 2002; its current version is 1.2, and the XLIFF Technical Committee is currently working on the specifications of XLIFF 2.0.

XLIFF can be used to exchange data between companies, such as software publishers and localisation vendors, or between localisation tools, such as Translation Memory (TM) and Machine Translation (MT) systems; it is thus partner- and tool-independent. XLIFF can also carry a lot of metadata (see subsection 4.1). Examples of XLIFF in the strict and transitional ‘flavours’ can be found on the XLIFF OASIS webpage27. Here we isolate only the translation unit (trans-unit) element – part of an XLIFF file – which has two paired elements, source and target:

<trans-unit id="#1">
  <source xml:lang="en-US">book</source>
  <target xml:lang="de-DE">Buch</target>
</trans-unit>

Example 3. XLIFF bilingual support
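Programmatically, a trans-unit like the one above can be read with stock XML tooling. The following is an illustrative sketch using Python's standard library; the file content and the function name are invented for the example and are not part of any XLIFF tool:

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal XLIFF fragment, modelled on Example 3.
XLIFF = """<xliff version="1.2" xmlns="urn:oasis:names:tc:xliff:document:1.2">
  <file original="demo.txt" source-language="en-US" target-language="de-DE" datatype="plaintext">
    <body>
      <trans-unit id="#1">
        <source>book</source>
        <target>Buch</target>
      </trans-unit>
    </body>
  </file>
</xliff>"""

NS = {"x": "urn:oasis:names:tc:xliff:document:1.2"}

def read_trans_units(xliff_text):
    """Return (id, source, target) tuples for every trans-unit."""
    root = ET.fromstring(xliff_text)
    units = []
    for tu in root.iterfind(".//x:trans-unit", NS):
        units.append((tu.get("id"),
                      tu.findtext("x:source", namespaces=NS),
                      tu.findtext("x:target", namespaces=NS)))
    return units

print(read_trans_units(XLIFF))  # [('#1', 'book', 'Buch')]
```

Note that the namespace mapping is required: XLIFF 1.2 elements live in the urn:oasis:names:tc:xliff:document:1.2 namespace, so a bare `.//trans-unit` search would find nothing.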

XLIFF is a bilingual file format, which means that each translation unit includes one source and one target sub-element. However, the alternative translation (alt-trans) element offers multilingual support, i.e. there can be more than one alternative target translation for one source segment (which can be different from the source in the trans-unit). An example follows in Example 4:

<trans-unit id="#1">
  <source xml:lang="en-US">book</source>
  <target xml:lang="de-DE">Buch</target>
  <alt-trans>
    <source xml:lang="en-US">book chapter</source>
    <target xml:lang="de-DE">Buchkapitel</target>
    <target xml:lang="fr-FR">chapitre de livre</target>
    <target xml:lang="es-ES">capítulo del libro</target>
  </alt-trans>
</trans-unit>

27 http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=xliff, Retrieved 28 March 2011.

Example 4. XLIFF multilingual support

In the next subsection we describe some metadata available in the XLIFF standard and refer briefly to metadata modularisation and minimalism.
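The alt-trans structure of Example 4 can also be consumed programmatically, collecting the alternative translations per language. A sketch using Python's standard library; the fragment and function name are hypothetical, modelled on the example:

```python
import xml.etree.ElementTree as ET

NS = {"x": "urn:oasis:names:tc:xliff:document:1.2"}
# xml:lang is in the reserved XML namespace, so ElementTree stores it
# under this fully qualified key.
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

# Hypothetical trans-unit with alternative translations (cf. Example 4).
TU = """<trans-unit id="#1" xmlns="urn:oasis:names:tc:xliff:document:1.2">
  <source xml:lang="en-US">book</source>
  <target xml:lang="de-DE">Buch</target>
  <alt-trans>
    <source xml:lang="en-US">book chapter</source>
    <target xml:lang="de-DE">Buchkapitel</target>
    <target xml:lang="fr-FR">chapitre de livre</target>
    <target xml:lang="es-ES">capítulo del libro</target>
  </alt-trans>
</trans-unit>"""

def alt_translations(tu_text):
    """Map target language -> alternative translation from <alt-trans>."""
    tu = ET.fromstring(tu_text)
    alts = {}
    for target in tu.iterfind("x:alt-trans/x:target", NS):
        alts[target.get(XML_LANG)] = target.text
    return alts

print(alt_translations(TU))
```

The returned dictionary maps 'de-DE', 'fr-FR', and 'es-ES' to their respective alternative translations, which is exactly the multilingual information the alt-trans element adds on top of the bilingual trans-unit.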

XLIFF and Metadata

The purpose of metadata, in general, is to connect, archive, and search data more effectively. In the field of localisation, Anastasiou and Morado Vázquez (2010) defined metadata as follows:

Localisation Metadata connects the data present at different stages of the localisation process, from digital content creation, annotation, and maintenance, to content generation, and process management. The usage of open, rich, and flexible metadata is an effective step towards the aggregation and sharing of data between localisation sub-processes.

XLIFF is a standard that carries a lot of metadata that makes the data explicit. Sometimes, not only in XLIFF but in other standards too, it is difficult to distinguish between actual data and metadata. In the next paragraphs we describe some XLIFF metadata, including some ‘tricky’ metadata (not easily distinguishable from data), and make some recommendations for future metadata. The XLIFF <header> contains metadata about the file and the localisation process. It contains the <skl>, <phase-group>, <glossary>, <reference>, <count-group>, <tool>, <prop-group>, and <note> elements. The metadata hierarchy of the <header> follows below:

header
    skl
        internal-file/external-file
    phase-group
        phase-name
        process-name
        company-name
        tool
        tool-id
        date
        job-id
        contact-name
        contact-email
        contact-phone
    glossary
        internal-file/external-file
    reference
        internal-file/external-file
    count-group
        count
            count-type
            phase-name
            unit
    tool
        tool-id
        tool-name
        tool-version
        tool-company
    prop-group
        prop
            prop-type
            xml:lang
    note
        xml:lang
        from
        priority
        annotates

Figure 2. Header metadata hierarchy

The content in italics represents the attributes; the attributes in bold are the required ones, while the remaining ones are optional. This shows that metadata is not necessarily optional just because it often plays an additional, descriptive role: much metadata must be included for the file to be valid and successfully processed by applications. As we see from Figure 2, each child element may contain further sub-metadata attributes. For example, the <phase> element contains metadata about the tasks performed in a particular process. The required phase-name attribute identifies the phase for reference within the <file> element, while process-name identifies the kind of process the phase corresponds to, e.g. ‘proofreading’. The description of all the above elements is outside the scope of this paper; more information can be found in the XLIFF 1.2 specifications28.

Very often it is difficult to distinguish metadata from actual ‘real’ data. This kind of metadata (where the question ‘is that data or metadata?’ cannot easily be answered) includes the required metadata elements/attributes. One example is the translation status of strings. The status/state of a target string can take one of the following values: final, needs-l10n, needs-review-adaptation, needs-review-l10n, needs-review-translation, needs-translation, new, signed-off, translated. When a string is untranslated, it is crucial to know whether it still needs translation or should not be translated at all (e.g. because it is a proper name), in which case it is marked signed-off. This metadata avoids translating information that should not be translated.
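These state values can drive simple workflow queries, for example listing the segments that still need translator attention. A hedged sketch with Python's standard library; the body fragment and the set of 'done' states chosen here are illustrative assumptions, not prescribed by the specification:

```python
import xml.etree.ElementTree as ET

NS = {"x": "urn:oasis:names:tc:xliff:document:1.2"}

# Hypothetical file body mixing finished and still-open segments.
BODY = """<body xmlns="urn:oasis:names:tc:xliff:document:1.2">
  <trans-unit id="#1">
    <source>book</source><target state="translated">Buch</target>
  </trans-unit>
  <trans-unit id="#2">
    <source>OASIS</source><target state="signed-off">OASIS</target>
  </trans-unit>
  <trans-unit id="#3">
    <source>chapter</source><target state="needs-translation"/>
  </trans-unit>
</body>"""

# Assumption for this example: these states require no further work.
DONE = {"final", "signed-off", "translated"}

def open_units(body_text):
    """Ids of trans-units whose target state still requires work."""
    body = ET.fromstring(body_text)
    return [tu.get("id")
            for tu in body.iterfind("x:trans-unit", NS)
            if tu.find("x:target", NS).get("state") not in DONE]

print(open_units(BODY))  # ['#3']
```

Unit #2 is excluded exactly because its signed-off state marks it as deliberately untranslated, which is the distinction the paragraph above describes.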

Many discussions, including a panel at the 1st International XLIFF Symposium, have dealt with modularisation and minimalism in XLIFF. Two approaches have been discussed: a top-down (macro-layer) and a bottom-up (micro-layer) approach. The former, the modularised approach, looks at more domains and data categories, while the latter, the minimal approach, checks which XLIFF constructs are the most frequently used29. Let us look at two examples of modularisation: contact-name, contact-email, and contact-phone could be replaced by one single attribute, contact-details; name, e-mail, and phone could still be included as sub-attributes. The same holds for the translation state: instead of needs-review-adaptation, needs-review-l10n, and needs-review-translation, a single needs-review could suffice, with adaptation, l10n, and translation as optional sub-attributes. There is a trade-off in modularisation between complexity in authoring (as more levels/categories are inserted) and simplicity in visualisation and understanding.

28 XLIFF Version 1.2, OASIS Standard, 1 February 2008. Retrieved March 22, 2011 from http://docs.oasis-open.org/xliff/v1.2/os/xliff-core.html.
29 1st XLIFF International Symposium: Panel Minimal and Modular XLIFF. Retrieved March 22, 2011 from http://www.localisation.ie/xliff/resources/presentations/2010-09-28_panel-minimal-and-modular-xliff.pdf.



Various stakeholders in the localisation field would give different recommendations regarding what is currently missing in XLIFF, depending on their needs, purposes, and preferences. For project management in a translation and localisation process, for example, start and due dates are very important. A standard metadata category could thus be date, with sub-categories start date and due date. Furthermore, in many cases it is useful to mark the original language of the authored content. Often the source language (SL) is not the original language of the string, and when the translator does not know how to translate a segment into the target language (TL), it is useful to look back not only at the SL but also at the original language (in the case of pivot/interlingua translation), as the previous translator might have mistranslated something.

It should be noted that XLIFF is extensible, allowing non-standard user-defined elements or attributes. With this customisation, users have the freedom to add their own metadata according to their needs and purposes.
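This extensibility mechanism can be illustrated with stock XML tooling: any attribute in a non-XLIFF namespace can ride along with the standard data. The namespace URI and the reviewed-by attribute below are invented for the example:

```python
import xml.etree.ElementTree as ET

# Invented user namespace for custom metadata (an XLIFF extension point).
USER_NS = "http://example.org/my-metadata"
ET.register_namespace("my", USER_NS)

# Build a trans-unit carrying standard XLIFF data ...
tu = ET.Element("trans-unit", {"id": "#1"})
ET.SubElement(tu, "source").text = "book"
ET.SubElement(tu, "target").text = "Buch"

# ... plus a non-standard, user-defined attribute in its own namespace.
tu.set("{%s}reviewed-by" % USER_NS, "D. Anastasiou")

serialized = ET.tostring(tu, encoding="unicode")
print(serialized)
```

A conformant XLIFF processor that does not understand the my: namespace can simply ignore the attribute, which is what makes this kind of customisation safe to exchange.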

Resource Description Framework and Metadata

RDF, OWL, SKOS and others are so-called knowledge representation languages which, in the context of the Semantic Web, advance the Artificial Intelligence (AI) areas of knowledge representation and reasoning. The purpose of RDF, specifically, is to declare machine-processable metadata (Gerber et al., 2006: 4).

Focusing on RDF, as both OWL and SKOS use RDFS: RDF identifies things using Web identifiers (URIs) and describes resources with properties and property values. Brief explanations of Resource, Property, and Property value follow:

A Resource is anything that can have a URI, such as “www.d-anastasiou.com”;
A Property is a Resource that has a name, such as “author”;
A Property value is the value of a Property, such as “Dimitra Anastasiou”.

The combination of a Resource, Property, and Property value forms a statement, known as the subject, predicate, and object, the so-called triple (see Decker, Melnik, van Harmelen, Fensel, Klein, Broekstra, Erdmann & Horrocks, 2000): Resource-subject, Property-predicate, and Property value-object. It should be pointed out that a Resource can have more than one Property value, and can also have other Resources as Property values (see Figure 3).
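The triple structure can be made concrete with a tiny sketch that models statements as (subject, predicate, object) tuples. This is an illustration of the data model only, not an RDF serialisation; the isPartOf statement and the example.org URI are invented for the example:

```python
# Statements as (subject, predicate, object) triples.
triples = [
    ("http://www.d-anastasiou.com", "author", "Dimitra Anastasiou"),
    ("http://www.d-anastasiou.com", "language", "en"),
    # A Property value may itself be a resource (hypothetical URI):
    ("http://www.d-anastasiou.com", "isPartOf", "http://example.org/site"),
]

def values(subject, predicate, graph):
    """All objects for a subject/predicate pair; a resource can have
    more than one property value, hence a list."""
    return [o for s, p, o in graph if s == subject and p == predicate]

print(values("http://www.d-anastasiou.com", "author", triples))
# ['Dimitra Anastasiou']
```

Real RDF libraries store exactly this kind of triple set, with URIs and literals as typed node objects rather than plain strings.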



According to the RDF Primer, “RDF is particularly intended for representing metadata about Web resources, such as the title, author, and modification date of a Web page, copyright and licensing information about a Web document, or the availability schedule for some shared resource”.

Hunter and Lagoze (2001) presented how RDF and XML schemas can work together to enable flexible, dynamic mapping between complex metadata descriptions which mix elements from multiple domains, i.e., application profiles. One RDF application is the Dublin Core Metadata Initiative. The Dublin Core is a set of “elements” (properties) for describing documents and recording metadata. These elements facilitate the description and automated indexing of document-like networked objects. The Dublin Core metadata set is intended to be suitable for use by resource discovery tools on the Internet, such as web crawlers. Dublin Core currently has 15 elements: title, creator, subject, description, publisher, contributor, date, type, format, identifier, source, language, relation, coverage, and rights. Although these elements provide important information, they do not serve translation purposes. Nilsson, Powell, Johnston and Naeve (2008) provided recommendations for expressing Dublin Core metadata using RDF. Our goal is not to describe web resources using metadata (as Dublin Core does), but to translate them; the encoding we propose is XLIFF, because it has metadata related to the translation and localisation process. Instead of describing a magazine article with Dublin Core (Example 5), we translate it with XLIFF encoding in RDF schema (RDFS) (Example 6):



<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:dcterms="http://purl.org/dc/terms/">
  <rdf:Description rdf:about="http://www.dlib.org/dlib/may98/miller/05miller.html">
    <dc:title>An Introduction to the Resource Description Framework</dc:title>
    <dc:creator>Eric J. Miller</dc:creator>
    <dc:description>The Resource Description Framework (RDF) is an infrastructure that enables the encoding, exchange and reuse of structured metadata.</dc:description>
    <dc:publisher>Corporation for National Research Initiatives</dc:publisher>
    <dc:subject>
      <rdf:Bag>
        <rdf:li>machine-readable catalog record formats</rdf:li>
        <rdf:li>applications of computer file organization and access methods</rdf:li>
      </rdf:Bag>
    </dc:subject>
    <dc:rights>Copyright © 1998 Eric Miller</dc:rights>
    <dc:type>Electronic Document</dc:type>
    <dc:format>text/html</dc:format>
    <dc:language>en</dc:language>
    <dcterms:isPartOf rdf:resource="http://www.dlib.org/dlib/may98/05contents.html"/>
  </rdf:Description>
</rdf:RDF>

Example 5. Dublin Core to describe a magazine article

<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:xliff="urn:oasis:names:tc:xliff:document:1.2">
  <xliff:file rdf:about="Translation of 'An Introduction to the Resource Description Framework.txt'"
              xliff:source-language="en-us"
              xliff:target-language="de-de"
              xliff:datatype="plaintext">
    <xliff:body>
      <xliff:trans-unit xliff:id="#1">
        <xliff:source>The Resource Description Framework (RDF) is an infrastructure that enables the encoding, exchange and reuse of structured metadata.</xliff:source>
        <xliff:target>Das Resource Description Framework (RDF) ist eine Infrastruktur, das Kodierung, Austausch und Wiederverwendung von strukturierten Metadaten ermöglicht.</xliff:target>
      </xliff:trans-unit>
    </xliff:body>
  </xliff:file>
</rdf:RDF>

Example 6. XLIFF in RDFS to translate a magazine article




XLIFF2RDF Mapping

The mapping of elements and attributes of XLIFF to RDF is important, as we want to find the golden mean between a relatively strict standard (XLIFF) and general specifications (RDF). With the mapping, we aim at combining the generality of RDF with the control of XLIFF (see discussion in Witt et al., 2009: 11). The relationship between localisation and Semantic Web standards, in general, can be found in Anastasiou (2011a). From an XLIFF2RDF mapping, XLIFF will gain from the popularity of RDF, and RDF from the localisation support of XLIFF. Ontology localisation will become more effective, because XLIFF metadata, such as metadata for authoring, managing, and structuring content, facilitates translation and localisation processes. There is value in increased localisation of ontologies and other Semantic Web resources, and our proposed encoding helps in localising ontologies written in RDF (or OWL and other knowledge representation languages), as XLIFF is a standard used for localisation purposes.

In the following subsections (6.1, 6.2, and 6.3) we describe three use cases/experiments:

A minimal XLIFF file with one translation unit;
A modular XLIFF file with alternative translations;
An XLIFF file enriched with header metadata.

In each of these three use cases, we provide the XLIFF source file, the desired RDF output, and the XSLT we created semi-manually30. The XSLT was tested with the Saxonica31 XSLT processor to check whether the transformation was performed successfully. We also designed a graphical tool based on Saxonica which converts XLIFF to RDF (see more in section 7).

First use case

Our first XLIFF source file is a minimal one, as it contains only one translation unit: the English sentence The book is good is translated into German as Das Buch ist gut. The file element has some required attributes: the original file, source and target language, and datatype. The diagram below depicts the structure of the XLIFF file:

30 There are tools which make an automatic mapping/transformation, such as Stylus Studio (http://www.stylusstudio.com/); we used Stylus to create some parts of the XSLTs.
31 Saxon Client Edition 1.0, Retrieved March 28, 2011 from http://www.saxonica.com/.




Diagram 1. Minimal example: XLIFF concepts in RDF graph (first case)

To connect the XLIFF concepts with the RDF terminology: the circles are the resources, the labels on the arrows are the properties, and the contents of the rectangles are the property values. idX is just a placeholder for a resource representing the body. The XLIFF source file is in the left column of Table 2 (and likewise in Tables 3 and 4). In the right column is the RDF output, and under the two columns is the XSLT which transforms the XLIFF file into RDF.




Source – XLIFF

<?xml version="1.0" encoding="UTF-8" ?>
<xliff version="1.2"
       xmlns="urn:oasis:names:tc:xliff:document:1.2"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="urn:oasis:names:tc:xliff:document:1.2 xliff-core-1.2-transitional.xsd">
  <file original="minimal_XLIFF.html" source-language="en-us" target-language="de-de" datatype="html">
    <body>
      <trans-unit id="#1">
        <source>The book is good</source>
        <target>Das Buch ist gut</target>
      </trans-unit>
    </body>
  </file>
</xliff>

Target – RDF

<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:xliff="urn:oasis:names:tc:xliff:document:1.2">
  <xliff:file rdf:about="minimal_XLIFF.html"
              xliff:source-language="en-us"
              xliff:target-language="de-de"
              xliff:datatype="html">
    <xliff:body>
      <xliff:trans-unit xliff:id="#1">
        <xliff:source>The book is good</xliff:source>
        <xliff:target>Das Buch ist gut</xliff:target>
      </xliff:trans-unit>
    </xliff:body>
  </xliff:file>
</rdf:RDF>

XSLT

<?xml version='1.0' ?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:a="urn:oasis:names:tc:xliff:document:1.2"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:xliff="http://docs.oasis-open.org/xliff/xliff-core/xliff-core.html#">
  <xsl:template match="/">
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
      <xliff:file>
        <xsl:attribute name="rdf:about">
          <xsl:value-of select="a:xliff/a:file/@original"/>
        </xsl:attribute>
        <xsl:attribute name="source-language">
          <xsl:value-of select="a:xliff/a:file/@source-language"/>
        </xsl:attribute>
        <xsl:attribute name="target-language">
          <xsl:value-of select="a:xliff/a:file/@target-language"/>
        </xsl:attribute>
        <xsl:attribute name="datatype">
          <xsl:value-of select="a:xliff/a:file/@datatype"/>
        </xsl:attribute>
        <xliff:body>
          <xsl:for-each select="a:xliff/a:file/a:body/a:trans-unit">
            <xliff:trans-unit>
              <xsl:attribute name="id">
                <xsl:value-of select="@id"/>
              </xsl:attribute>
              <xliff:source>
                <xsl:value-of select="a:source"/>
              </xliff:source>
              <xliff:target>
                <xsl:value-of select="a:target"/>
              </xliff:target>
            </xliff:trans-unit>
          </xsl:for-each>
        </xliff:body>
      </xliff:file>
    </rdf:RDF>
  </xsl:template>
</xsl:stylesheet>

Table 2. XLIFF2RDF of a minimal example

XSLT is a language to transform XML into other languages, such as (X)HTML. By authoring in XML and rendering with XSLT, the tasks of content authors and designers are separated. XSLT allows for styling and changing the visual design without rewriting the content. This XSLT (Table 2) can successfully convert minimal XLIFF files to RDF. We highlight some important characteristics of the XSLT: in line 2 we include the namespaces of XSLT, XLIFF, and RDF, where a: is a shortcut for the XLIFF schema location. In lines 5-20 we see how the XLIFF file elements and attributes can be matched by the XSLT: <xsl:attribute name="X">. The values of XLIFF elements are selected through <xsl:value-of select="a:Y">, where Y is the path of the element. We also have <xsl:for-each> because there can be more than one translation unit (not shown in this example).
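The same element/attribute mapping can be mimicked outside an XSLT processor. The sketch below rebuilds the RDF output of the minimal use case with Python's standard library; it is an illustration of the mapping logic, not a replacement for the stylesheet, and the function name is invented for the example:

```python
import xml.etree.ElementTree as ET

XLIFF_NS = "urn:oasis:names:tc:xliff:document:1.2"
RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
NS = {"a": XLIFF_NS}

def xliff_to_rdf(xliff_text):
    """Mimic the first use case's XLIFF2RDF mapping: the file element
    becomes xliff:file with rdf:about, and trans-units are copied over."""
    src = ET.fromstring(xliff_text)
    ET.register_namespace("rdf", RDF_NS)
    ET.register_namespace("xliff", XLIFF_NS)
    rdf = ET.Element("{%s}RDF" % RDF_NS)
    f = src.find("a:file", NS)
    out_file = ET.SubElement(rdf, "{%s}file" % XLIFF_NS, {
        "{%s}about" % RDF_NS: f.get("original"),
        "{%s}source-language" % XLIFF_NS: f.get("source-language"),
        "{%s}target-language" % XLIFF_NS: f.get("target-language"),
        "{%s}datatype" % XLIFF_NS: f.get("datatype"),
    })
    body = ET.SubElement(out_file, "{%s}body" % XLIFF_NS)
    for tu in f.iterfind("a:body/a:trans-unit", NS):
        out_tu = ET.SubElement(body, "{%s}trans-unit" % XLIFF_NS,
                               {"{%s}id" % XLIFF_NS: tu.get("id")})
        ET.SubElement(out_tu, "{%s}source" % XLIFF_NS).text = \
            tu.findtext("a:source", namespaces=NS)
        ET.SubElement(out_tu, "{%s}target" % XLIFF_NS).text = \
            tu.findtext("a:target", namespaces=NS)
    return ET.tostring(rdf, encoding="unicode")
```

Feeding the minimal source file of Table 2 through this function yields an RDF document equivalent to the table's right-hand column, modulo whitespace.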

Second use case

In the second use case we have a more modular XLIFF file than the first one, as it contains alternative translations; book chapter is translated into German, French, and Spanish as Buchkapitel, chapitre de livre, and capítulo del libro respectively, as we saw in Example 4 in section 4. The graph and the table follow below:

Figure 3. Alternative translations: XLIFF concepts in RDF graph (second case)




Source – XLIFF

<?xml version="1.0" encoding="UTF-8" ?>
<xliff version="1.2"
       xmlns="urn:oasis:names:tc:xliff:document:1.2"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="urn:oasis:names:tc:xliff:document:1.2 xliff-core-1.2-transitional.xsd">
  <file original="alternatives.html" source-language="en-us" target-language="de-de" datatype="html">
    <body>
      <trans-unit id="#1">
        <source>book</source>
        <target>Buch</target>
        <alt-trans>
          <source xml:lang="en-US">book chapter</source>
          <target xml:lang="de-DE">Buchkapitel</target>
          <target xml:lang="fr-FR">chapitre de livre</target>
          <target xml:lang="es-ES">capítulo del libro</target>
        </alt-trans>
      </trans-unit>
    </body>
  </file>
</xliff>


Output – RDF

<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:xliff="urn:oasis:names:tc:xliff:document:1.2"
         xmlns:xml="http://www.w3.org/XML/1998/namespace">
  <xliff:file rdf:about="http://minimal_XLIFF"
              xliff:source-language="en-us"
              xliff:target-language="de-de"
              xliff:datatype="plaintext">
    <xliff:body>
      <xliff:trans-unit xliff:id="#1">
        <xliff:source>book</xliff:source>
        <xliff:target>Buch</xliff:target>
        <xliff:alt-trans>
          <xliff:source>
            <xliff:p xml:lang="en-US">book chapter</xliff:p>
          </xliff:source>
        </xliff:alt-trans>
        <xliff:alt-trans>
          <xliff:target>
            <xliff:p xml:lang="de-DE">Buchkapitel</xliff:p>
          </xliff:target>
        </xliff:alt-trans>
        <xliff:alt-trans>
          <xliff:target>
            <xliff:p xml:lang="fr-FR">chapitre de livre</xliff:p>
          </xliff:target>
        </xliff:alt-trans>
        <xliff:alt-trans>
          <xliff:target>
            <xliff:p xml:lang="es-ES">capítulo del libro</xliff:p>
          </xliff:target>
        </xliff:alt-trans>
      </xliff:trans-unit>
    </xliff:body>
  </xliff:file>
</rdf:RDF>



XSLT

<?xml version="1.0" ?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:a="urn:oasis:names:tc:xliff:document:1.2"
    xmlns:xliff="http://docs.oasis-open.org/xliff/xliff-core/xliff-core.html#"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <xsl:template match="/">
    <rdf:RDF xmlns:a="urn:oasis:names:tc:xliff:document:1.2"
             xmlns:xliff="http://docs.oasis-open.org/xliff/xliff-core/xliff-core.html#"
             xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
      <xsl:for-each select="a:xliff/a:file">
        <xliff:file>
          <xsl:attribute name="rdf:about">
            <xsl:value-of select="a:xliff/a:file/@original"/>
          </xsl:attribute>
          <xsl:attribute name="source-language">
            <xsl:value-of select="@source-language"/>
          </xsl:attribute>
          <xsl:attribute name="target-language">
            <xsl:value-of select="@target-language"/>
          </xsl:attribute>
          <xsl:attribute name="datatype">
            <xsl:value-of select="@datatype"/>
          </xsl:attribute>
          <xliff:body>
            <xsl:for-each select="a:body/a:trans-unit">
              <xliff:trans-unit>
                <xsl:attribute name="id">
                  <xsl:value-of select="@id"/>
                </xsl:attribute>
                <xliff:source>
                  <xsl:value-of select="a:source"/>
                </xliff:source>
                <xliff:target>
                  <xsl:value-of select="a:target"/>
                </xliff:target>
                <xliff:alt-trans>
                  <xliff:source>
                    <xsl:value-of select="a:alt-trans/a:source"/>
                  </xliff:source>
                  <xsl:for-each select="a:alt-trans/a:target">
                    <xliff:target>
                      <xliff:p>
                        <xsl:attribute name="xml:lang">
                          <xsl:value-of select="@xml:lang"/>
                        </xsl:attribute>
                        <xsl:value-of select="."/>
                      </xliff:p>
                    </xliff:target>
                  </xsl:for-each>
                </xliff:alt-trans>
              </xliff:trans-unit>
            </xsl:for-each>
          </xliff:body>
        </xliff:file>
      </xsl:for-each>
    </rdf:RDF>
  </xsl:template>
</xsl:stylesheet>

Table 3. XLIFF2RDF of a translation unit with alternative translations



Here we should mention that in the RDF output we have more than one alt-trans element. This is not allowed in XLIFF; however, it is necessary here in order for the RDF to be valid32. The problem we faced was that target could not have xml:lang as a child, as target is already a child of trans-unit. Thus we had to create another encoding, i.e. <xliff:p>.

Third use case

In the third example we have a lot of metadata included in the header of an XLIFF file. This includes a phase-group, which can in turn include one or more phase elements. Here we have one such element containing all the attributes it can contain: process and phase name, and contact details (name, e-mail). This meta-information consists of child elements of the <header> element. We also added another translation unit here.

Figure 4. Example with metadata: XLIFF concepts in RDF graph (third case)

Source – XLIFF

<?xml version="1.0" encoding="UTF-8"?>
<xliff version="1.2"
       xmlns="urn:oasis:names:tc:xliff:document:1.2"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="urn:oasis:names:tc:xliff:document:1.2 xliff-core-1.2-transitional.xsd">
  <file original="book_with_metadata.txt" source-language="en-us" target-language="de-de" datatype="plaintext" tool="TM-ABC">
    <header>
      <phase-group>
        <phase phase-name="review"
               process-name="Terminology Management"
               contact-name="Dimitra Anastasiou"
               contact-email="anastasiou@uni-bremen.de"/>
      </phase-group>
    </header>
    <body>
      <trans-unit id="#1">
        <source>book</source>
        <target>Buch</target>
      </trans-unit>
    </body>
  </file>
</xliff>

Output – RDF

<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns="urn:oasis:names:tc:xliff:document:1.2"
         xmlns:xliff="http://www.w3.org/2000/10/swap/xliff#"
         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <xliff:file rdf:about="book_with_metadata.txt"
              xliff:source-language="en-us"
              xliff:target-language="de-de"
              xliff:datatype="plaintext"
              xliff:tool="TM-ABC">
    <xliff:header>
      <xliff:phase-group>
        <xliff:phase xliff:phase-name="review"
                     xliff:process-name="Terminology Management"
                     xliff:contact-name="Dimitra Anastasiou"
                     xliff:contact-email="anastasiou@uni-bremen.de"/>
      </xliff:phase-group>
    </xliff:header>
    <xliff:body>
      <xliff:trans-unit xliff:id="#1">
        <xliff:source>book</xliff:source>
        <xliff:target>Buch</xliff:target>
      </xliff:trans-unit>
    </xliff:body>
  </xliff:file>
</rdf:RDF>

32 We have to create more properties, i.e. more alt-trans, as multiple children of a property element are not allowed.

XSLT

<?xml version='1.0' ?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:a="urn:oasis:names:tc:xliff:document:1.2"
    xmlns:xliff="http://docs.oasis-open.org/xliff/xliff-core/xliff-core.html#"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <xsl:template match="/">
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
      <xliff:file>
        <xsl:attribute name="rdf:about">
          <xsl:value-of select="a:xliff/a:file/@original"/>
        </xsl:attribute>
        <xsl:attribute name="target-language">
          <xsl:value-of select="a:xliff/a:file/@target-language"/>
        </xsl:attribute>
        <xsl:attribute name="source-language">
          <xsl:value-of select="a:xliff/a:file/@source-language"/>
        </xsl:attribute>
        <xsl:attribute name="datatype">
          <xsl:value-of select="a:xliff/a:file/@datatype"/>
        </xsl:attribute>
        <xliff:header>
          <xliff:phase-group>
            <xliff:phase>
              <xsl:attribute name="phase-name">
                <xsl:value-of select="a:xliff/a:file/a:header/a:phase-group/a:phase/@phase-name"/>
              </xsl:attribute>
              <xsl:attribute name="process-name">
                <xsl:value-of select="a:xliff/a:file/a:header/a:phase-group/a:phase/@process-name"/>
              </xsl:attribute>
              <xsl:attribute name="contact-name">
                <xsl:value-of select="a:xliff/a:file/a:header/a:phase-group/a:phase/@contact-name"/>
              </xsl:attribute>
              <xsl:attribute name="contact-email">
                <xsl:value-of select="a:xliff/a:file/a:header/a:phase-group/a:phase/@contact-email"/>
              </xsl:attribute>
            </xliff:phase>
          </xliff:phase-group>
        </xliff:header>
        <xliff:body>
          <xsl:for-each select="a:xliff/a:file/a:body/a:trans-unit">
            <xliff:trans-unit>
              <xsl:attribute name="id">
                <xsl:value-of select="@id"/>
              </xsl:attribute>
              <xliff:source>
                <xsl:value-of select="a:source"/>
              </xliff:source>
              <xliff:target>
                <xsl:value-of select="a:target"/>
              </xliff:target>
            </xliff:trans-unit>
          </xsl:for-each>
        </xliff:body>
      </xliff:file>
    </rdf:RDF>
  </xsl:template>
</xsl:stylesheet>

Table 4. XLIFF2RDF of a modular example with metadata
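As a complement to the stylesheet, the phase metadata carried in the header of this use case can also be read back directly with stock XML tooling. A sketch with Python's standard library; the fragment mirrors the third use case's source file, and the function name is invented for the example:

```python
import xml.etree.ElementTree as ET

NS = {"x": "urn:oasis:names:tc:xliff:document:1.2"}

# Header fragment mirroring the third use case's source file.
HEADER = """<header xmlns="urn:oasis:names:tc:xliff:document:1.2">
  <phase-group>
    <phase phase-name="review" process-name="Terminology Management"
           contact-name="Dimitra Anastasiou"
           contact-email="anastasiou@uni-bremen.de"/>
  </phase-group>
</header>"""

def phases(header_text):
    """Process metadata per phase, keyed by the required phase-name."""
    header = ET.fromstring(header_text)
    return {p.get("phase-name"): {k: v for k, v in p.attrib.items()
                                  if k != "phase-name"}
            for p in header.iterfind("x:phase-group/x:phase", NS)}

print(phases(HEADER)["review"]["process-name"])  # Terminology Management
```

Keying on phase-name works because that attribute is required precisely so that phases can be referenced from elsewhere in the file.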

Future Prospects

As far as future prospects are concerned, we plan to transform the XML schema (XSD) of XLIFF into RDFS; in this case there will be a uniform way of transforming XLIFF files into RDF. The XLIFF schema can be strict or transitional. Having these schemas mapped to RDF would mean that all well-formed XLIFF files could be mapped to RDF. As mentioned in section 6, we have designed a converter from XLIFF to RDF, which is currently hosted on the Google Code33 website. The development of a conversion tool from XLIFF into RDF automates and thus accelerates the process. Currently the user can input one or more XLIFF files to the tool, convert them to RDF, and preview them in a web browser. Other users can freely get a local copy of the tool or create their own clone; replication of the tool is thus allowed. The conversion tool fulfils the basic requirement that XLIFF files be represented in RDF. Not only minimal XLIFF examples with one TU, but also examples with more TUs, file processing metadata, alternative translations, etc. can be successfully converted. The design of this converter is discussed in detail in Anastasiou (2011b). The purpose of the converter is the easy mapping of XLIFF to RDF, and it can be used as a plug-in in localisation and Semantic Web tools. Today, there is a lack of interoperability between data based on standards and between standards themselves. Conversion between the XLIFF and RDF standards plays a small part within the wider scope of interoperability, which includes, among others, supporting relevant standards and conforming to specifications.

This paper presented how a new file with an ontology (in RDF, OWL) could be translated using XLIFF translation units and metadata concepts. In order to implement the XLIFF concepts in the original RDF file, some ontology-specific concepts for class names and object properties should be defined in XLIFF, because at the moment XLIFF handles only translation units and not such ontology concepts. Both RDF2XLIFF and XLIFF2RDF face the challenge of converting many user-defined elements. With the RDF2XLIFF conversion, ontology labels will be more easily localised, and also structured based on localisation metadata. We plan to find more real-life XLIFF and RDF examples that are representative of business practices. At the moment, the RDF encoding of XLIFF is very close to XLIFF; thus we will consider changing some parts in order to make it more readable and understandable by Semantic Web applications.

33 xliff-rdf. Retrieved March 15, 2012 from http://code.google.com/p/xliff-rdf/.

Furthermore, we plan to extend the conversion API to other standards. Interoperability between other translation, localisation, and internationalisation standards is also among our future prospects. In terms of quality assurance, existing validation tools will be integrated into our tool. For example, the validity of XLIFF files can be checked with the open-source XLIFFChecker34 tool developed by Maxprograms, and RDF files can be validated with the W3C RDF validator35. Last but not least, XLIFF can be used as a standard to help represent translations on the Semantic Web in the lemon lexicon model for ontologies. Further work will target implementing this as a plug-in.
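Before handing files to full schema validators such as XLIFFChecker or the W3C RDF validator, a pipeline can cheaply reject inputs that are not even well-formed XML. This sketch shows such a pre-check with the standard library; it is only a well-formedness gate, not a substitute for schema validation.

```python
# Lightweight well-formedness pre-check for XLIFF (or any XML) input.
# Files that fail this check would also fail full schema validation,
# so they can be rejected before invoking an external validator.
import xml.etree.ElementTree as ET

def is_well_formed(xml_string):
    """Return True if the string parses as well-formed XML."""
    try:
        ET.fromstring(xml_string)
        return True
    except ET.ParseError:
        return False

print(is_well_formed("<xliff version='1.2'></xliff>"))  # True
print(is_well_formed("<xliff version='1.2'>"))          # False (unclosed tag)
```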

Summary and Conclusion

Our contribution is the translation of XLIFF into RDF in order to facilitate ontology localisation, i.e. localising monolingual ontologies (see section 2). After presenting related research on multilingual ontologies and their support by existing standards, we gave a brief overview of XLIFF and RDF with respect to metadata. We then presented the XLIFF2RDF mapping and three use cases, with XLIFF as the source, RDF as the output, and XSLT as the transformation ‘engine’.

Considering that both XLIFF and RDF are open standard models, we see a symbiotic relationship between them that offers a more effective way of presenting multilingual content on the Web.

Our contribution to Semantic Web technology (both ontologies and linked data) is the enhancement of interoperability by combining the ‘XML Localisation Interchange File Format’ (XLIFF) with the ‘Resource Description Framework’ (RDF). As language resources can be expressed in XLIFF and/or RDF, interoperability is needed so that resources in different syntaxes can interact with each other. The interoperability between XLIFF and RDF is attained through an XLIFF2RDF element and attribute mapping. The mapping will help applications transfer data and metadata more easily and efficiently.
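Such an element and attribute mapping is, at its core, a lookup table from XLIFF names to RDF terms. The sketch below illustrates the idea with a small table; the specific predicate names are assumptions for demonstration and do not reproduce the paper's exact mapping.

```python
# Illustrative XLIFF-to-RDF element/attribute mapping table.
# The "xliff:" terms are hypothetical, not the paper's actual vocabulary.
XLIFF2RDF_MAP = {
    "trans-unit": "xliff:TransUnit",  # XLIFF element -> RDF class
    "source":     "xliff:source",     # element -> property
    "target":     "xliff:target",
    "@id":        "xliff:id",         # attribute (prefixed "@") -> property
    "@approved":  "xliff:approved",
}

def map_name(xliff_name):
    """Look up the RDF term for an XLIFF element or attribute name,
    falling back to a same-named term for unmapped user-defined names."""
    return XLIFF2RDF_MAP.get(xliff_name, "xliff:" + xliff_name.lstrip("@"))

print(map_name("source"))  # xliff:source
print(map_name("@id"))     # xliff:id
```

The fallback branch reflects the challenge noted above: both conversion directions must cope with many user-defined elements that no fixed table can enumerate in advance.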

In a Semantic Web context, the natural language in which ontology labels are provided is an arbitrary decision, and thus many researchers see the need for multilingual ontologies. Challenges such as cross-lingual mapping and translation follow from the existence of multilingual ontologies. Hence, doors are opened for localisation to contribute to the ontologies of the Semantic Web. In the future, localisation tools should be able to localise ontologies effectively, and Semantic Web approaches should be populated with localisation-related metadata.

34 XLIFFChecker. Retrieved March 15, 2012, from http://www.maxprograms.com/products/xliffchecker.html
35 W3C RDF Validator. Retrieved March 15, 2012, from http://www.w3.org/RDF/Validator/

Acknowledgement

We gratefully acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) through the Collaborative Research Center SFB/TR 8 Spatial Cognition-Subproject I3-SharC.

We thank Asanka Wasala for his help with the XLIFF2RDF mapping and Stylus Studio.



Dimitra Anastasiou

References

(n.d.). Retrieved from GOLD Community: http://linguistics-ontology.org

(n.d.). Retrieved from Linguistic Information Repository: http://mayor2.dia.fi.upm.es/index.php/en/technologies/63-lir

(n.d.). Retrieved from Monnet project: http://www.monnetproject.eu/Monnet/Monnet/English?init=true

Anastasiou, D. (2011a, March/April). The Impact of Localisation on Semantic Web Standards. European Journal of ePractice, 12, 42-52.

Anastasiou, D. (2011b). XSLT Conversion between XLIFF and RDF. Proceedings of the 2nd Multilingual Semantic Web Workshop, 10th International Semantic Web Conference. Bonn.

Anastasiou, D., & Morado Vázquez, L. (2010). Localisation Standards and Metadata. Proceedings of the 4th Metadata and Semantics Research Conference. Communications in Computer and Information Science, 108 (pp. 255-276). Alcalá de Henares.

Berners-Lee, T., Hendler, J., & Lassila, O. (2001). The Semantic Web. The Scientific American Magazine, 284(5), 34-43.

Buitelaar, P., Eigner, T., & Declerck, T. (2004). OntoSelect: A Dynamic Ontology Library with Support for Ontology Selection. Proceedings of the Demo Session at the 3rd International Semantic Web Conference. Hiroshima.

Cardeñosa, J., Gallardo, C., Iraola, L., & De la Villa, M. (2008). A New Knowledge Representation Model to Support Multilingual Ontologies. A Case Study. Proceedings of the 2008 Conference on Semantic Web and Web Services (pp. 313-319). Las Vegas.

Carpuat, M. (2002). Creating a Bilingual Ontology: A Corpus-Based Approach for Aligning WordNet and HowNet. Proceedings of the 1st Global WordNet Conference (pp. 284-292). Mysore.

Cimiano, P., Montiel-Ponsoda, E., Buitelaar, P., Espinoza, M., & Gómez-Pérez, A. (2010). A note on ontology localization. Journal of Applied Ontology, 5(2), 127-137.

Decker, S., Melnik, S., Van Harmelen, F., Fensel, D., Klein, M., Broekstra, J., ... Horrocks, I. (2000). The Semantic Web: The roles of XML and RDF. IEEE Internet Computing, 4(5), 63-73.

Espinoza, M., Gómez-Pérez, A., & Mena, E. (2008). Enriching an ontology with multilingual information. Proceedings of the 5th European Semantic Web Conference (pp. 333-347). Tenerife.

Espinoza, M., Montiel Ponsoda, E., & Gómez-Pérez, A. (2009). Ontology Localization. Proceedings of the 5th International Conference on Knowledge Capture (pp. 33-40). California.

Euzenat, J., & Shvaiko, P. (2007). Ontology matching. Heidelberg, Germany: Springer.

Farrar, S., & Langendoen, D. (2003). A linguistic ontology for the Semantic Web. GLOT International, 7(3), 97-100.

Fu, B., Brennan, R., & O'Sullivan, D. (2009). Cross-Lingual Ontology Mapping – An Investigation of the Impact of Machine Translation. Proceedings of the 4th Asian Conference on the Semantic Web (pp. 1-15). Shanghai.

Gerber, A., Barnard, A., & Van der Merwe, A. (2006). A Semantic Web Status Model. Integrated Design & Process Technology, Special Issue: IDPT 2006, 473-482.

Gruber, T. (2009). Ontology. In L. Liu & M. Tamer Özsu (Eds.), Encyclopedia of Database Systems (pp. 1963-1965). Springer.

Hartmann, J., Palma, R., Sure, Y., Haase, P., & Suarez-Figueroa, M. (2005). OMV – Ontology Metadata Vocabulary. Proceedings of the International Workshop on Ontology Patterns for the Semantic Web, 4th International Semantic Web Conference. Galway.

Jung, J., Håkansson, A., & Hartung, R. (2009). Indirect Alignment between Multilingual Ontologies: A Case Study of Korean and Swedish Ontologies. Agent and Multi-Agent Systems: Technologies and Applications (pp. 233-241).

McCrae, J. S. (2011). Linking lexical resources and ontologies on the semantic web with lemon. Proceedings of the 8th Extended Semantic Web Conference on the Semantic Web: Research and Applications (pp. 245-259). Crete.

McGuinness, D., & Van Harmelen, F. (2003). OWL Web Ontology Language Review. W3C.

Montiel-Ponsoda, E., Aguado, G., Gómez-Pérez, A., & Peters, W. (2008). Modelling multilinguality in ontologies. Proceedings of the 22nd International Conference on Computational Linguistics. Manchester.

Montiel-Ponsoda, E., Gracia, J., Aguado-de-Cea, G., & Gómez-Pérez, A. (2011). Representing Translations on the Semantic Web. Proceedings of the 2nd Multilingual Semantic Web Workshop, 10th International Semantic Web Conference. Bonn.

Nilsson, M., Powell, A., Johnston, P., & Naeve, A. (2008). Expressing Dublin Core metadata using the Resource Description Framework (RDF). Retrieved from http://dublincore.org/documents/dc-rdf/

O'Sullivan, D., Wade, V., & Lewis, D. (2007). Understanding as we roam. IEEE Internet Computing, 11(2), 26-33.

Ontology Alignment Evaluation Initiative. (n.d.). Retrieved from http://oaei.ontologymatching.org/

Pazienza, M., & Stellato, O. (2005). The Protégé OntoLing Plugin – Linguistic Enrichment of Ontologies. Poster and Demo Proceedings of the 4th International Semantic Web Conference. Galway.

Peters, W., Espinoza, M., Montiel-Ponsoda, E., & Sini, M. (2006). D2.4.3 Multilingual and Localization Support for Ontologies (v3). Retrieved from http://www.neon-project.org/webcontent/images/Publications/neon_2009_d243.pdf

Raul, P., Hartmann, J., Gómez-Pérez, A., Sure, Y., Haase, P., & Suarez Figueroa, M. (2006). Towards an Ontology Metadata Standard. Poster in Proceedings of the 3rd European Semantic Web Conference. Budva.

RDF Primer. (n.d.). Retrieved from http://www.w3.org/TR/rdf-primer/

SKOS (Simple Knowledge Organization System). (n.d.). Retrieved from http://www.w3.org/2004/02/skos/

Suarez-Figueroa, M., & Gómez-Pérez, A. (2008). First attempt towards a standard glossary of ontology engineering terminology. Proceedings of the 8th International Conference on Terminology and Knowledge Engineering (pp. 1-15). Copenhagen.

Trojahn, C., Quaresma, P., & Vieira, R. (2008). A framework for multilingual ontology mapping. Proceedings of the 6th Language Resources and Evaluation Conference (pp. 1034-1037). Marrakech.

Witt, A., Heid, U., Sasaki, F., & Sérasset, G. (2009). Multilingual language resources and interoperability. Language Resources & Evaluation Journal, 43, 1-14.


