THE SECTOR OF TRANSLATORS FOR PERFORMING ARTS NEEDS TO BE BETTER STRUCTURED
This sector – a fairly recent addition to the translation family, but one that is growing rapidly – can learn from its colleagues in the literary and audiovisual fields. A dialogue between translators and theatre companies and venues should take place to better structure the sector and to develop a shared set of ‘basic rules’ for the performing arts, applicable to all EU countries and to international players/platforms operating within the EU. Such a framework for quality and working conditions would substantially improve the theatre translation ecosystem.
The practice of performing plays in the original language, accompanied by surtitles, aims to facilitate the circulation of works and to attract a wider audience, while preserving the profound singularity and cultural authenticity of the original.
01.6 | Machine and relay translation – practices affecting quality of translation
EVEN THOUGH MACHINE TRANSLATION (MT) IS STILL UNSUITABLE FOR TRANSLATING LITERATURE, PROGRESS IN TECHNOLOGY SHOULD BE MONITORED CLOSELY
MT is the process of substituting words in one language for those in another using computer software. It is a field of computational linguistics that has been developing since the mid-20th century and has advanced rapidly in the last few years, with thousands of research articles published on the subject. There are different approaches to MT, each reflecting a different understanding of language itself; the most popular current approach is neural MT, which is the one used by Google Translate and DeepL. The idea behind neural MT is not to translate word by word, but to use predictive computation to generate a new text in a different language. It is based not on dictionaries or grammatical rules, but on statistical analysis and the use of semantic maps, drawing on corpora – that is, collections of written and spoken material from which the software can extract results.
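The contrast between word-by-word substitution and corpus-based generation can be made concrete with a toy sketch. The dictionary and example sentence below are invented purely for illustration; this is not a real MT system.

```python
# Toy illustration: why naive word-for-word substitution fails as translation.
# The bilingual "dictionary" below is invented for demonstration purposes.

WORD_DICT = {
    "the": "le", "white": "blanc", "house": "maison", "is": "est", "big": "grande",
}

def word_for_word(sentence: str) -> str:
    """Substitute each word independently, ignoring grammar and word order."""
    return " ".join(WORD_DICT.get(w, w) for w in sentence.lower().split())

print(word_for_word("The white house is big"))
# -> "le blanc maison est grande"
# A fluent French rendering would be "la maison blanche est grande":
# gender agreement ("la", "blanche") and adjective placement are lost.
# Neural MT avoids this by generating a new sentence from corpus statistics
# rather than mapping words one by one.
```

This is exactly the failure mode that pushed the field away from dictionary-based substitution and towards statistical and, later, neural approaches.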
This technology might seem promising, but MT is still unsuitable for translating literature (58) . Firstly, MT makes many mistakes. Some are typical, others are more unpredictable, so the result is not publishable and human post-editing is needed to achieve an acceptable level. In the case of essays and academic papers, the results are getting better, but the generated texts still need human intervention.
Observatoire de la traduction automatique (59)
In December 2018, the French association ATLAS formed a group that aims to monitor the evolution of online MT tools and the performance of such software with literary texts. Over 2 years, it monitored the translation of a corpus of 40 major texts of European culture (including authors such as Shakespeare, Kafka, Lobo Antunes and Tokarczuk) and compared the results with human translation.
During the period under consideration, no significant improvement in the outputs of the software was detected, although this can be explained, in part, by a research protocol that, by posing too great a challenge to the algorithms, placed them in a situation of artificial failure. MT – which has made great progress with neural translation – can indeed manage simple texts or small segments of language, and would thus satisfy readers engaged in a consumerist use of literature. However, in its current state of development, MT is unsuitable for the translation of ‘high-end’ literature, as it cannot meet the high standards of professionalism needed to satisfy the public: the texts are not of a quality acceptable to publishers without human post-editing. They also do not meet the quality required to receive public subsidies for publication.
The observatory is now focusing its efforts on the new uses of post-editing and on methods of collaboration between human and machine.
Another drawback is simply the objective of the software, which has typically been developed not to translate long and complex texts, but to enhance the experience of so-called ‘natural language processing’ for virtual assistants such as Siri or Alexa, or for social networking sites such as Facebook. The aim of the software is to keep users engaged, not to deliver good texts, and the development of the programs is profit driven.
Translation software also carries a risk of cultural and linguistic framing. Most MT software ‘learns’ from a vast range of material, including online posts, which can lead to racial and gender bias. Furthermore, MT can lead to homogenisation because interactions are pre-scripted, and it can generate biased translations because of skewed semantic maps. There is also a hidden problem: MT relies on English as a pivot language. As a consequence, the software works much better for some language pairs than for others, particularly pairs involving more widely used languages.
A problem with MT also arises around the issue of copyright. If computer software generates a translated text and the role of the translator is reduced to post-editing, establishing authorship of the translation can be a tricky issue. Who is the author of the translation? Could the companies that created the software claim authorship of the text? Is it legitimate for a post-editor who has not produced a text, but only improved it, to claim authorship of that text? The mere use of MT to create a text could be considered an infringement of intellectual property rights in some countries.
MT should be monitored closely. It is a promising field experiencing rapid development. It will have an impact on literary translation, just as it is having an impact in other fields of translation.
Machine translation in the audiovisual sector
As has already been mentioned, MT is still far too immature to be used effectively in the book sector, as it has proven limited for long texts, and both editors and translators view this new technology with scepticism. Unfortunately, those working in other fields of translation are less cautious. In the case of audiovisual translation, MT is rapidly becoming a problem, as audiovisual companies produce automated translations and look for workers who can edit the result for lower fees than a translator would charge.
AVTE, the European federation of national associations and organisations of audiovisual translators, has recently published a manifesto (60) that highlights the risks of MT to translators’ work environment. The manifesto states that translators are still essential to ensure the quality of a translated text, yet software developers fail to take them into account when developing translation technology, when they could instead be developing computer-assisted tools of greater practical benefit to both translators and the overall quality of output. It also disputes the idea that MT is efficient, pointing out that post-editing a bad translation ‘can take longer than translating the text from scratch’. The manifesto also warns of increasingly frequent malpractice, such as passing off an MT-produced translation to clients as a human translation without informing them, knowing full well that content creators would be opposed to MT.
The manifesto ends with a very sensible idea: the concept of augmented translators: ‘By using MT to empower translators and improve their working conditions, we can secure a sustainable future for the field of AVT [audiovisual translation] and continue to bridge linguistic divides across different countries and cultures’. This approach supports translators rather than using MT to undercut them, and it keeps the ‘handmade’ quality of translation while making the best use of technology.
RELAY TRANSLATION SHOULD NOT BE ENCOURAGED AS A LONG-TERM SOLUTION TO PROMOTE TITLES FROM LESSER-USED LANGUAGES IN EUROPE
In the third volume of the Handbook of Translation Studies, Martin Ringmar defines relay translation as ‘a chain of (at least) three texts, ending with a translation made from another translation: original > intermediate text > end text’ (61). This practice of translating a text not from the original source but from another translation is also referred to as ‘indirect translation’, and has been around for centuries: without going too far back in time, French translations had a mediating role in Europe in the 17th and 18th centuries (and in some areas, such as Spain, this role persisted even into the 19th century). In western Europe, English has replaced French as a mediating language, while German plays an important role as an intermediate language in central and eastern Europe, and Swedish has assumed the same role in Scandinavia.
But why would anyone rely on a text that is not the real thing? Some researchers have pointed out that particular languages are chosen because of their social prestige, and also because of the difficulty of procuring the original text. But the most plausible reason has been and still is the lack of translators with enough knowledge or expertise of the source language of the text.
Using an intermediate text to create a translation might appear to be a practical solution, as finding translators of more widely known languages is an easier task and fees might be lower than for translators of lesser-known languages. But, of course, it has major disadvantages. The end text might differ significantly from the original as a result of the influence of factors such as grammatical structure or lexical choice in the intermediate text. There is also a high risk of variation due to ideological factors, as the translated text might perpetuate intermediate versions that have suffered censorship or various degrees of manipulation. Another problem is homogenisation: when an original text is translated into a hegemonic language (e.g. English), the chances are that some elements are adapted to the language and culture that receives the text. In all translations, exotic elements get slightly watered down, or simply adapted, in the process of translation. In doing so, the hegemonic language imprints its own culture onto the text, and these elements are carried into the relay translation and attributed to the original language and culture.
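The chain ‘original > intermediate text > end text’ can be sketched as a composition of two lossy mappings. The mini-dictionaries below are invented for illustration: a hypothetical source language distinguishes two kinds of snow, the pivot language does not, and the end text inherits that loss.

```python
# Toy illustration (invented data): how a pivot language can collapse
# distinctions that the end text then inherits.

# The hypothetical source language has two distinct words for "snow";
# the intermediate (pivot) language has only one word for both.
SOURCE_TO_PIVOT = {"aput": "snow", "qanik": "snow"}  # distinction lost here
PIVOT_TO_TARGET = {"snow": "neige"}                  # loss is carried forward

def relay(word: str) -> str:
    """original > intermediate text > end text"""
    return PIVOT_TO_TARGET[SOURCE_TO_PIVOT[word]]

# Two distinct source words arrive identical in the end text:
assert relay("aput") == relay("qanik") == "neige"
```

The reader of the relay translation cannot recover a nuance that a direct translation from the source language might have preserved, and, as noted above, whatever cultural adaptation the pivot version introduces is silently attributed to the original.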
Furthermore, relay translation is regarded with a great deal of scepticism by serious translators and editors. One could even argue that this practice is unethical for both ends of the book chain: the author is not being properly translated, and the reader is not receiving a translation that is close enough to the original.
It should be added, though, that there is also such a thing as an ‘authorised’ translation, approved by an author of the original text who has sufficient command of the target language and confirms that the translation can be used either as a relay translation or for the purposes of facilitating the foreign sales of the work. In some cases, relay translation carried out to the highest standards – not censored or culturally adapted, and checked against both the source and target languages – is still the only viable option for the translation of literature in lesser-known languages. However, this solution should be temporary and should not be widely used in the long term, as it could discourage the emergence of new translators with less common combinations of languages. Working in pairs, using the language skills of one and the literary skills of another, is a better option than relay translation in cases where such pairs can be created. Investing in the training of a translator is a healthier long-term solution.