Context

In the last two decades we have witnessed a technological turn in translation and interpreting studies, with natural language processing (NLP) and deep learning playing an increasingly prominent role. A growing number of NLP applications already support the work of translators and interpreters. In addition, recent advances in deep learning (and the latest models it has produced) have powered the further development and success of high-performing Neural Machine Translation (NMT) systems.

Jiménez Crespo (2021) reflects on the reality and discernibility of a disciplinary turn in translation, both as a profession and as a field of research. After reviewing the concepts of “turn” and “technological turn” as defined by well-known translation scholars such as Sin-Wai (2004), Cronin (2010) and O’Hagan (2013), among others, Jiménez Crespo describes this phenomenon as “a process by which translation theories begin to incorporate the increasingly evident impact of technology, developing theoretical tools and frameworks for translation studies and related disciplines”. Human-computer interaction is nowadays common practice in professional translation (O’Brien 2012): corpus analysis, terminology management, computer-assisted translation (CAT) tools, machine translation (MT) and translation project management software are at the core of the profession.

The technology revolution has transformed the translation profession, and nowadays most professional translators employ tools such as translation memory (TM) systems in their daily work. The latest advances in NMT have made it not only an integral part of most state-of-the-art TM tools but also a standard component of the translation workflow of many companies, organisations and freelance translators.

Although translation has benefited more from technological advances, interpreting has also experienced a technological turn. Fantinuoli (2018) points out that technology has been present in professional interpreting since the first simultaneous interpreting systems in the 1920s. Technology-mediated interpreting has also been popular in dialogue settings, and telephone interpreting dates back to the 1950s (Braun, 2015; Cabrera Méndez, 2016). However, only in recent years has soft technology permeated interpreting practice and research. Computer-assisted translation, MT and NLP tools have been adapted for use by interpreters. One of the most important related projects is VIP (Corpas Pastor, 2021), a platform that integrates several computer-assisted interpreting (CAI) and NLP tools (terminology management, speech-to-text, note-taking, summarisation).

Shlesinger (1998) noted the benefits of using corpus-based methodologies in interpreting studies several decades ago, particularly to obtain information about lexical, grammatical or discursive patterns. Authors such as Van Besien (1990), Takagi et al. (2002) and Ryu et al. (2003) pioneered corpus-based studies on simultaneous conference interpreting, focusing on interpreting techniques, time span and contrastive linguistic features, respectively.

More recently, corpus-based studies have reached dialogue interpreting. For instance, the ComInDat Pilot Corpus (Angermeyer, Meyer and Schmidt, 2012) comprises two subcorpora of interpreter-mediated medical interviews and court trials. More recent are the TIPp corpus, which also contains interpreter-mediated court trials, and INTELPRAGMA / PRAGMACOR, made up of telephone interpreter-mediated interactions. Most corpora of dialogue interpreting have been processed and analysed with the EXMARaLDA software.

The increasing interest in NLP, MT and the automation of processes has led to multidisciplinary projects that develop models for automated oral communication. Machine interpreting has already been developed and continues to improve, with a focus on speed and accuracy (Müller et al., 2016). Whether domain-specific (commercial, military, humanitarian…) or general-purpose (Skype Translator), machine interpreting still has a long way to go before it becomes more human-like (Waibel et al., 2017; Braun, 2020).

Many of these recent developments involve the use of NLP tools and resources to support the work of translators and interpreters. This workshop will discuss the growing importance of NLP in different translation and interpreting scenarios.

The workshop invites submissions on topics including but not limited to:

    • NLP and MT for under-resourced languages;
    • Translation Memory systems;
    • NLP and MT for translation memory systems;
    • NLP for CAT and CAI tools;
    • Integration of NLP tools in remote interpreting platforms;
    • NLP for dialogue interpreting;
    • Development of NLP based applications for communication in public service settings (healthcare, education, law, emergency services);
    • Corpus-based studies applied to translation and interpreting;
    • Machine translation and machine interpreting;
    • Resources for translation and machine translation;
    • Resources for interpreting and interpreting technology application;
    • Quality estimation of human and machine translation;
    • Post-editing strategies and tools;
    • Automatic post-editing of MT;
    • NLP and MT for subtitling;
    • Technology acceptance by interpreters and translators;
    • Machine Translation and translation tools for literary texts;
    • Evaluation of machine translation and translation and interpreting tools in general;
    • The impact of the technological turn in translation and interpreting;
    • Cognitive effort and eye-tracking experiments in translation and interpreting;
    • Development of models for research and practice of translation and interpreting;
    • Multidisciplinary cooperation in NLP applied to translation and interpreting.

nlp4tia@uah.es