One of MeMAD’s objectives has been to produce fully automated subtitles, created from source-language speech with automatic speech recognition, automatic segmentation and machine translation. We have previously conducted user tests in which professional translators at the Finnish public broadcaster Yle post-edited machine-translated subtitles; the purpose of those tests was to see how automated subtitles can be used as part of a professional subtitling process. To get a fuller picture of the usability and usefulness of machine-translated subtitles, we also want to hear from end users: viewers who might watch videos subtitled using the MeMAD tools. How well can viewers follow a programme with fully automated interlingual subtitles?
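As an illustration only, the sketch below shows what such a speech-to-subtitle pipeline could look like when assembled from publicly available off-the-shelf components rather than the actual MeMAD toolchain: a Hugging Face speech-recognition pipeline, a deliberately naive length-based line splitter standing in for proper subtitle segmentation, and an OPUS-MT translation model. The model names, the 42-character line limit and the line-by-line translation are assumptions chosen for the example.

```python
# Minimal sketch of an ASR -> segmentation -> MT subtitle pipeline.
# NOTE: an illustrative stand-in, not the MeMAD system itself; the models
# are public off-the-shelf choices and the segmenter is a naive length rule.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
mt = pipeline("translation", model="Helsinki-NLP/opus-mt-fi-en")  # Finnish -> English


def segment(text: str, max_chars: int = 42) -> list[str]:
    """Split recognised speech into subtitle-length lines by packing words."""
    lines, current = [], ""
    for word in text.split():
        if current and len(current) + 1 + len(word) > max_chars:
            lines.append(current)
            current = word
        else:
            current = f"{current} {word}".strip()
    if current:
        lines.append(current)
    return lines


def automatic_subtitles(audio_path: str) -> list[str]:
    """Transcribe Finnish speech, segment it, and machine-translate each line."""
    transcript = asr(audio_path)["text"]
    return [mt(line)[0]["translation_text"] for line in segment(transcript)]


# Example: automatic_subtitles("clip.wav") -> English subtitle lines for a Finnish clip.
```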
To answer that question, we have been showing viewers short clips of Yle’s current affairs programmes with fully automated subtitles and asking them for feedback. We are trying to find out how understandable the clips are, how easy they are to follow, and what kinds of videos might be suitable for automatic subtitling. The subtitles in the clips have not been post-edited at all. We decided to use fully automated subtitles because, in certain contexts, they could be the most plausible solution: it is impossible to subtitle everything into every language, and translations may be needed quickly, so fully automated subtitling could be useful where no human translators are available to do the job or where human translation or post-editing would not be fast enough.
Focus group discussions: quality, understandability and reliability
The first step in our viewer testing was to conduct two focus group discussions in June 2020 with viewers who might plausibly watch Yle’s current affairs programming in translation. In the first group, six Finnish-speaking viewers watched a Swedish-language clip subtitled in Finnish; in the second, seven English-speaking viewers living in Finland watched a Finnish-language clip subtitled in English. After watching the clip, each group discussed their views and reactions.
The subtitles used in the experiment were easy to distinguish from human-made subtitles, as they contained errors and inaccuracies that would not occur in professional subtitles. The focus group participants were therefore immediately aware that they were watching machine-translated subtitles, and their initial reactions were somewhat sceptical. They criticised mistranslations, clumsy language, time lags between the spoken dialogue and the appearance of the subtitles, and unclear segmentation. These issues made the subtitles more difficult to follow than professional subtitles and distracted viewers’ attention from the rest of the programme. However, it was also clear that the clips were reasonably understandable with these subtitles. In other words, the subtitles served their primary purpose, even if quality issues made them more laborious to follow.
One challenge in using automated subtitles is their reliability. The focus group participants were well aware that machine-translated subtitles may contain inaccuracies, and they pointed out that these inaccuracies narrow the range of possible uses for automated subtitles. For example, it would be difficult to trust them as a means of learning the foreign language if they do not always reflect the foreign-language content accurately. The participants suggested that the reliability of automatic subtitles could be improved through human post-editing. Reliability could also be supported by transparency, such as clearly labelling subtitles as machine-translated, so that viewers know what they are encountering.
Differences between language pairs and use contexts
Although the two focus groups discussed the quality and reliability of the subtitles in similar ways, there were also differences. While Finnish speakers living in Finland are well served by professional subtitles on Yle’s channels and elsewhere, English speakers living in Finland have a difficult time accessing Finnish audiovisual content, because it is rarely subtitled into English. Therefore, the English-speaking focus group could see benefits in using automated subtitles with local Finnish content, while the Finnish speakers came up with fewer uses for them.
The Finnish speakers did mention that automated subtitles might be useful for watching content on marginal topics or subcultures that is not available in Finnish translation or in other widely understood languages such as English. For them, automated subtitles could be a way of broadening their media options to include less mainstream topics and languages. For the English speakers, on the other hand, the need to understand local programming was a more urgent and immediate reason to use automated subtitles. Consequently, the English speakers’ discussion indicated that offering automated subtitling could boost Yle’s reputation for inclusiveness and widening access, whereas for the Finnish speakers, automated subtitles in the local media would be a step down from the professional subtitles they are used to.
What’s next
Both language groups hoped for improvements to the machine-translated subtitles before they are introduced into general use, and both proposed ways in which human involvement, such as post-editing, could make the subtitles more useful. Still, the potential of automated subtitling as a way of facilitating access to audiovisual content and inclusion in society was evident in the discussions. We are carrying out further evaluations with an online questionnaire and a second round of focus groups to gain a more detailed understanding of viewers’ opinions and expectations. The findings of these viewer studies will help with the further development and possible implementation of automated subtitling systems. With some further improvements and a careful choice of use contexts, fully automated interlingual subtitling could have a realistic place in the media alongside professional human translation.