Mastering Media Localization: Spotlight on Subtitling
Taking your creative concept from one medium to another, aka media adaptation, can seem a daunting task. Add to the mix a need for media localization, and it can all start to feel a little … foreign. But creating multilingual versions of your media assets need not be a puzzling process.
Whether you’re producing closed captions, recording a voiceover (VO), or timecoding a transcription, there are a few things to keep in mind. And this time round, let’s talk subtitling:
Subtitling is a rather generic term used to describe many different audio-visual techniques. So, before diving into your dialogue, it’s worth taking a moment to consider what you’re really after. Do you simply require a written transcription of your audio content? Do you need the transcription to be timecoded? Would you prefer subtitle file exports, or do you need the subtitles to be burnt-in? Having a clear understanding of the target usage for your subtitled content will help guide you in making these decisions. And providing your LSP with details of the desired outcome for your project will ensure they are best placed to advise you on which subtitling approach to take.
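To make the "timecoded vs. burnt-in" distinction concrete: a subtitle file export pairs each caption with in and out timecodes, which your video platform or editing software then renders over the picture. The fragment below shows the general shape of one widely used format, SubRip (.srt); the cue text and timings are invented purely for illustration:

```
1
00:00:01,000 --> 00:00:04,500
Welcome to our product tour.

2
00:00:04,800 --> 00:00:09,200
Every feature you see here
is fully customizable.
```

Burnt-in (or "hardcoded") subtitles, by contrast, are rendered permanently into the video frames themselves, so viewers cannot switch them off or change the language.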
Read Your Speed
Now that you’ve identified which subtitling service best fits your project, there are a few rules and restrictions to be mindful of. Firstly, subtitles should not exceed 40 characters per line (including spaces) or 20 characters per second. For Asian languages, this equates to 20 characters per line (again, including spaces) or 6 characters per second. These character-count guidelines are in place for a very good reason: to help moderate reading speeds. After all, what use is a subtitle if the viewer doesn’t have time to read it? As such, industry guidelines state that, for an average viewer, a full two-line subtitle should be on screen for at least six seconds. So, if your source video is very dialogue-heavy, bear in mind that some adjustments may need to be made to the translated captions, depending on the language, to ensure adherence to industry standards.
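The arithmetic behind these limits is simple enough to sketch in a few lines of code. The helper below is a hypothetical illustration (not part of any real subtitling tool), using the Latin-script limits quoted above: 40 characters per line including spaces, and 20 characters per second. It flags any cue that breaks either rule:

```python
# Sketch of a reading-speed check for one subtitle cue, using the
# Latin-script limits quoted in the text (40 chars/line incl. spaces,
# 20 chars/second). The function name and sample cue are illustrative.

def check_cue(lines, duration_seconds,
              max_chars_per_line=40, max_chars_per_second=20):
    """Return a list of rule violations for a single subtitle cue."""
    problems = []
    # Rule 1: no line may exceed the per-line character limit.
    for i, line in enumerate(lines, start=1):
        if len(line) > max_chars_per_line:
            problems.append(
                f"line {i}: {len(line)} chars (max {max_chars_per_line})")
    # Rule 2: total characters divided by display time gives the
    # reading speed in characters per second (cps).
    total_chars = sum(len(line) for line in lines)
    cps = total_chars / duration_seconds
    if cps > max_chars_per_second:
        problems.append(
            f"reading speed {cps:.1f} cps (max {max_chars_per_second})")
    return problems

# A two-line cue shown for only 1.5 seconds: each line fits, but the
# reading speed is far too high for a viewer to keep up.
print(check_cue(["This subtitle has quite a lot of text",
                 "and it disappears almost immediately."], 1.5))
# → ['reading speed 49.3 cps (max 20)']
```

This is also why dialogue-heavy source videos may need trimming in translation: languages that run longer than the source (German or French versus English, for instance) can push an otherwise compliant cue over the speed limit.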
Behind the Screens
Another thing to mention is on-screen text (OST). By default, OST will not be included in your subtitles unless it is significant or specifically requested, so if you do wish to include this copy for your foreign audience, let your LSP know in advance. It’s also worth noting that when OST and dialogue appear at the same time in your video, the audio will be prioritized. And if translated, OST should be formatted in block capitals to distinguish it from the central audio. Subtitles should always aim to complement and enhance the viewer’s audio-visual experience, not distract from it.
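To illustrate the block-capitals convention: a translated OST cue (say, a sign or a title card) is typically set apart from spoken dialogue like this (the text and timings below are invented for illustration):

```
8
00:00:42,000 --> 00:00:45,000
SALE ENDS FRIDAY
```

Dialogue in the surrounding cues keeps its normal sentence case, so the viewer can tell at a glance which words were spoken and which were read off the screen.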
Subtitles for the deaf and hard of hearing (SDH), often used interchangeably with closed captions (CC), describe significant non-dialogue sound alongside the main audio, such as sighs, door creaks, or song lyrics. They also include character identification. These extra descriptors can be woven into the subtitled content to create a more inclusive experience for those who are D/deaf or hard of hearing. Closed captions come with their own unique set of rules around reading speeds, formatting, and punctuation, so speak to your LSP if you’d like to produce an alternative version of your subtitles that lets your whole audience connect with the message.
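For example, an SDH cue might carry a sound descriptor and a speaker label alongside the dialogue. Conventions vary between style guides, but bracketed descriptors and capitalized speaker names, as sketched below with invented text and timings, are one common approach:

```
15
00:01:12,400 --> 00:01:16,000
[door creaks]
MARIA: Is anyone there?
```

These labels matter most when the speaker is off-screen or the sound carries narrative weight that a hearing viewer would pick up from the audio alone.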
Did you know, Mother Tongue offers a wide range of subtitling services, from on-screen text and caption localization through to closed captions and burnt-in videos? What’s more, our delivery is adapted to your needs, from ready-to-publish videos to subtitle file exports to use in your own video editing software. You can customize the style, size, font, colour and positioning of the copy to fully execute your vision.