Advice on sharing accessible content during the Coronavirus pandemic
By Molly Watt | 23/03/2020
Studies show that 55% of the UK public access the news online. In the current climate, public information, video explainers on social media, animated GIFs and infographics are themselves going viral, and people who normally rely on face-to-face contact must stay at home and keep their distance from others. So I'm sharing some tips and guidance on how to ensure that your content is fully accessible to all. It has always been important for everyone to have equal access to world news, and it is even more so in times of worry: not only is it a moral obligation, but not being fully informed can pose a real health risk.
The importance of video captions
It's not just the Deaf and hard-of-hearing community that benefits from subtitled content, although they are the starting point: the Deaf community comprises over 350 million people worldwide, 11.5 million of whom live in the UK.
Captions also improve comprehension for a variety of audiences. For example, people who have
- learning disabilities
- attention deficits
- situational access needs, such as being on a busy train and unable to play sound
- English as a second language (although this may be more appropriate for subtitles, as described later on this page)
In today's worrying climate, people are turning online more than ever for resources they can rely upon. As many as 85% of video views on Facebook occur with the sound off.
Organisational and brand perspectives provide other clear reasons for captioning content. Besides the moral obligation, it improves overall user experience and can increase search engine optimisation (SEO), as Google favours informative content.
The US and the EU have laws and accessibility directives that mean a company or public body that does not create accessible content may be discriminating and could face legal action. But captions don't need to be viewed as a legally required add-on: when done right, they can be part of a creative, responsible strategy that draws more people in.
Transcripts vs captions
Transcripts and captions are different things, and both are important. A transcript is simply a text document; unlike captions, it has no timing information attached. Captions are divided into text sections that are synchronised with the video and audio, and unless they are "live captions" they are often produced from transcripts. Both should be verbatim: transcripts and captions alike should portray speech and sound effects, as well as identify different speakers.
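To make the distinction concrete, here is a minimal sketch (the speaker, timings and sounds are made up for illustration). A transcript is plain text, while a caption file such as WebVTT, the format used for captions on the web, attaches a start and end time to each cue:

```
A transcript is just text:

    MOLLY: Welcome, everyone. [door closes] Let's get started.

The same clip as WebVTT captions, each cue synchronised to the video:

    WEBVTT

    00:00:01.000 --> 00:00:03.500
    MOLLY: Welcome, everyone.

    00:00:03.500 --> 00:00:05.000
    [door closes]

    00:00:05.000 --> 00:00:07.500
    MOLLY: Let's get started.
```

Same content in both cases; only the captions carry the timing that lets a player show each line at the right moment.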
Why transcription matters
Transcripts offer a great alternative for users who would benefit from having audio and video content transcribed. Some assistive technologies used by disabled people cannot access video and/or audio content; for those users, transcripts are invaluable.
Radio shows and podcasts should not be overlooked. Transcripts make these services accessible — they offer the same benefits as captions do, such as improving comprehension, increasing user interaction and, again, helping with SEO. As an example, read how transcription improved This American Life’s SEO by over 6%.
Transcripts will not meet accessibility standards alone; however, they can contribute greatly by being
- the first step for captioning
- the best solution to making radio broadcasts and/or podcasts accessible
- easier than captions to translate into other languages
- helpful for those with English as a second language, as well as people living with various cognitive and visual disabilities
- more compatible with assistive technologies in a text document format
You will still need to do captioning.
Tips on how to do captioning
In YouTube’s Studio, you can add a transcription file or manually amend/edit your own captions.
There is an option to use “automatic captioning,” created by YouTube’s in-built speech recognition technology. Note that these captions are not always accurate, and it's important to review and correct them before posting a video.
- Go to your Video Manager by clicking your account in the top right-hand corner > Creator Studio > Video Manager > Videos.
- Next to the video you want to add captions or subtitles to, click the drop-down menu next to the Edit button.
- Select Subtitles and CC.
- If automatic captions are available, you'll see Language (Automatic) in the 'Published' section to the right of the video.
- Review automatic captions and use the instructions to edit or remove any parts that haven't been properly transcribed.
Interestingly, YouTube now has in-built tools that encourage users to have their content translated. To do this, you can add translated video titles and descriptions; in YouTube’s words: “We'll show the title and description of the video in the right language, to the right viewers.”
Closed vs open captions
YouTube captions are “closed,” meaning they appear only when the viewer turns them on. If you are sharing informative content on social media platforms, embedding captions on the video at all times (open captions) is more inclusive. Plus, according to Facebook research, open captions increase watch time by 12%.
Usability applies to captions too
Several key design features can make captions easier to follow and can increase their value to your viewers. Especially important are timing, width and position.
Captions simply must be synchronised with what's happening on screen; if they're not, the video might almost as well not have them. They should be fairly narrow (perhaps 60-65 characters wide), and they should appear at the bottom of the video unless that would obscure something equally important. And of course they shouldn't jump around from place to place.
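As a rough illustration of the width guideline, here is a small Python sketch (the 64-character limit is my own choice within the 60-65 range above, not a broadcast standard) that wraps caption text into readable lines:

```python
import textwrap

def wrap_caption(text, max_width=64):
    """Split caption text into lines no wider than max_width characters.

    Wrapping happens at word boundaries, so no word is ever broken
    across lines.
    """
    return textwrap.wrap(text, width=max_width)

lines = wrap_caption(
    "Captions should be synchronised with the video and narrow enough "
    "to read at a glance, usually around 60 to 65 characters per line."
)
for line in lines:
    print(line)
```

Most captioning tools do this for you, but it is worth checking the output: an over-long line forces the viewer's eye to travel too far and back.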
And of course the legibility of caption text is important too. But you will already have considered that, right? That isn't unique to captions but applies to all text in an interface.
Digital marketer and writer Meryl Evans gives some great advice about these aspects. She ends with a great suggestion: hashtag a captioned video with #Captioned so that Deaf and hard-of-hearing people can find it more easily.
A word about subtitles
Although subtitles look much like captions, they serve a different purpose. Captions substitute for the audio, for people who cannot hear it, and subtitles provide text for just the language content. Subtitles are intended for people who can hear but have difficulty understanding the words — usually because they don't speak the language, although there can be other reasons as well. Subtitles need only cover spoken words or other key written content, as their audience can hear off-camera sounds that captions must describe, such as car engines starting up, doors closing, babies crying, dogs barking and (in some cases) music playing.
Whilst open captioning is more inclusive than closed captioning, subtitles are most inclusive when they are visible initially but the viewer can switch them off. People who speak the language in the video, especially if they are learning it, often want to view content in that language without having subtitles interfere with their experience. So you should always enable viewers to switch subtitles off.
Subtitles are especially relevant during the Coronavirus lockdown, as language learning technologies are seeing a surge in their user base. "During the third week in March," reports Forbes Magazine, "downloads of Duolingo surged more than ever before, increasing 42% worldwide, compared to the weekly average of the previous 30 days." It's a safe bet that many of these language learners will want to watch videos in a language they are learning and will appreciate having subtitles available.
And a final word of advice: place subtitles so that they do not obscure the captions. Some viewers — a Deaf person learning a language, for example — use both.
Apps for live captioning
“With live captions, Teams can detect what’s said in a meeting and present real-time captions for anyone who wants them.”
Microsoft Teams is full of inclusive design features that can really help users, but most people probably don't think of using them.
Firstly, Teams offers live captions during meetings. Under “More options,” you can “Turn on live captions” to improve access and comprehension for anyone who needs or wants captions. As you would expect with live captioning, there can be a slight delay and the occasional error, but the transcriptions themselves are pretty accurate. Note, however, that if the speaker has an accent or is speaking in a busy environment, the captioning will be less precise. If you're working from home, as many of us are right now, try to minimise background noise during meetings. You can also blur your background to help people who lip-read focus on your speech.
The second feature within Teams is the ability to translate content. This is more important than ever, and will encourage and enable collaboration across the globe. Currently, live translation is only available in some languages, but as the number of languages increases, the accuracy of translation services can only get better.
Thirdly, Teams gives us the option to zoom in on content during presentations.
Dark mode is also available in Teams. This can help users access content by reducing brightness, which, if too high, can contribute to headaches and eye strain. Other Teams features, such as voice messages, shortcut keys, focus time and speech-to-text voicemail, have the potential to improve a user’s access to information. Recorded meetings can be played back with captions, using Microsoft Stream.
Microsoft have also embedded the fantastic Immersive Reader (IR) feature in Teams (users can access IR on Word Online, OneNote, Outlook, PowerPoint and Office Lens), which can improve the readability of Teams’ messages. The Immersive Reader offers various settings such as grammar and reading-level tools that can help the user understand the content better.
Microsoft PowerPoint options
“Present with real-time, automatic captions or subtitles in PowerPoint.”
PowerPoint on Office 365 can transcribe live as you present. On-screen captions appear in the language you are speaking, or can be translated into another language. Additionally, you can change the position, colour and size of the captions to make them accessible to a wider audience.
“Clipomatic is a smart video editor that turns everything you say into live captions. All you have to do is hit the magical record button, speak clearly and your words will appear as stylish captions right on your recording. Enhance the video with one of the artistic filters and wow your friends!”
Whether you are a famous vlogger, an MP or a member of the public wanting to share an opinion, Clipomatic’s user-friendly app is useful for short videos recorded on mobile devices. This speech recognition app will pick up on your live voice and transcribe what you are saying.
If you do not speak clearly, there is background noise, or more than one person is in the shot, it can get tricky. You can, however, edit and correct the captions. The fonts and designs are a little restrictive: you cannot change the size of the text or add a background to create better contrast, which is crucial for many people to access the text. It is ideally suited to videos of a couple of minutes that you share on social media. You can download Clipomatic, a paid-for app, from the App Store.
Using artificial intelligence for producing transcriptions
“The app did a reasonable job of filtering out background chatter and, as the workshop progressed, the AI was quick to adapt previously transcribed text as it learnt more about the evolving conversation around our table.”
AI-based tools are available to aid in producing transcriptions. Our colleague Chris Bush watched Otter.ai transcribe a workshop discussion in real time for a person with a hearing impairment, and was impressed by the AI's ability to adapt and improve its transcription as the discussion progressed. He says that AI-supported transcribing could save time and cost in creating transcriptions for online media, by "minimising transcription time and editing".
We see particular relevance in today's situation, with the number of podcasts increasing as more and more content moves online. AI transcription tools could be invaluable for podcasters who want to create accessible content but have a limited budget.
Captioning apps for pre-recorded content
“Creators use MixCaptions as the final step to complete editing their videos. Simply open MixCaptions, choose your video, generate captions, and the app will automatically do the rest. You can also easily edit captions to your preference.”
Unlike Clipomatic, this tool does not create live captions. When you add a video (up to 10 minutes) into the app, it uses speech recognition to create the captions. This mode of captioning is probably easier for longer clips featuring more than one person.
MixCaptions lets you edit the transcription and add details, such as when the speaker changes. On a positive note, you can change the size and colour of the text, as well as add a background to increase contrast*.
*Contrast is important with video and text content, particularly where light and colours are present.
After you have used a certain number of characters in transcribing, MixCaptions offers in-app purchases that enable you to continue editing captions. Again, you can download MixCaptions from the App Store.
"The Amara Editor is an award-winning caption and subtitle editor that’s free to use! It’s fun and easy to learn and encourages collaboration. Whether you’re an independent video creator, someone helping a friend access a video, or a grandchild translating a family clip for Grandma – the Amara Editor is the simplest way to make video accessible."
As with MixCaptions, you need video content to begin with. But this time, you can use a URL (YouTube, Vimeo, MP4, WebM, OGG and MP3) to proceed with manual captions.
Amara is a desktop subtitling application that offers the ability to create your own captions, or purchase professional subtitling services from the Amara team.
Once signed up, you find a video URL to which to add subtitles. Using the Amara Editor’s keyboard controls (play/pause, skip, insert line break), you quickly get familiar with the workflow and can begin typing up manual captions. After you have finished, you can edit to make sure the captions work properly with the video and audio content.
Making video accessible to blind and visually impaired people
Most people think the only way to make video accessible to people who are blind or visually impaired is audio description, and they're not wrong: audio description matters. But when creating video content, there are also steps you can take to make sure it is accessible:
- Audibly describe relevant visual information
- Insert regular pauses (this allows any third-party transcription apps or services to catch up)
- Have people speak one at a time, and identify the person who is speaking
- If there's audience participation, use descriptive language — e.g: If you've eaten lunch, put your hand up. Molly has her hand up
What is alt text?
A screen reader reads an image's alt text description aloud to users who are visually impaired, announcing that it is an image as it does so. Alt text tells the user what is in the image; it can be fairly basic, but should be useful enough to help them understand the content. If an image fails to load, the alt text is shown in its place. If you're sharing images on Twitter, there is an Alt option you can use to add descriptive text. And if you use hashtags in your tweets, write them in camel case so that the screen reader can identify the separate words, for example #BeMoreInclusive.
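Camel-casing a hashtag is easy to automate. A minimal Python sketch (the function name is my own):

```python
def camel_case_hashtag(phrase):
    """Turn a phrase into a camel-case hashtag, e.g. 'be more inclusive'
    becomes '#BeMoreInclusive'.

    Capitalising each word lets screen readers announce the separate
    words rather than one unbroken string of letters.
    """
    return "#" + "".join(word.capitalize() for word in phrase.split())

print(camel_case_hashtag("be more inclusive"))  # → #BeMoreInclusive
print(camel_case_hashtag("captioned"))          # → #Captioned
```

The same habit applies to the #Captioned hashtag suggested earlier on this page.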
What are image descriptions?
When writing image descriptions, here are some tips on what to describe from the Perkins School for the Blind:
- Placement of objects in the image
- Image style (painting, graph)
- Names of people
- Clothes (if they are an important detail)
- Placement of text
- Emotions, such as smiling
Likewise, there are some things that should be left out of image descriptions. You don't need to describe
- Colours — no need to explain what red looks like
- Obvious details such as someone having two eyes, a nose, and a mouth
- Details that are not the focus of the picture
- Multiple punctuation marks
And avoid using highly poetic language or giving detailed descriptions.
Automatic alt text
Automatic alt text sounds great and can be a useful feature; however, as with speech recognition the quality varies. Make sure you check that the alt text accurately describes the content of images. If you get the chance, try to test your content with screen reader technology to get a feel for how it sounds when read out.
Writing alt text
Always add meaningful descriptive language for alt text. Here’s an example of what to avoid and what to do:
What to avoid: a black dog
What’s better: A black Labrador wearing a guide-dog harness, lying on the floor of a bus, looking up at the camera with big brown eyes.
By doing this you are not preventing any viewers from comprehending the content. They get to enjoy it too.
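On a web page, this description goes in the image's alt attribute. A minimal sketch (the file name is hypothetical):

```html
<img src="labrador-on-bus.jpg"
     alt="A black Labrador wearing a guide-dog harness, lying on the
          floor of a bus, looking up at the camera with big brown eyes.">
```

A screen reader encountering this image will announce the alt text in place of the picture.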
Writing in plain English
Plain English benefits not only the hard-of-hearing and Deaf community but also people with low literacy skills, those with cognitive impairments and the more than half of the world’s population who are bilingual. It also helps someone get to grips with a new topic they want to learn more about. The UK's average reading age is 9, which is why papers such as The Sun and The Guardian write at reading ages of 8 and 14 respectively: they want to attract as many readers as they can and to ensure that the language they use is easy to understand.
Writing in plain English means that your readers can
- Find what they need
- Understand what they find
- Use what they find to meet their needs
Plain English uses punctuation, word length and sentence/paragraph structure as appropriate for the audience. It uses storytelling techniques such as active voice and personal pronouns to engage readers in the content.
The team at Content Design London offer some excellent guidance and training on writing in plain English.
Designing with older adults in mind
Although it is always worthwhile to design for the widest audience that might want to use your content, current circumstances make this especially important, and you may be inadvertently overlooking the needs of older adults. The NHS has classified people over 70 as having "an increased risk" of becoming seriously ill if they become infected with the Coronavirus, and has urged them to stay at home and avoid close contact with people outside their household. Although UK residents in this age group use the Internet rather less than younger people do, the age gap has narrowed in recent years and is continuing to shrink; and the "stay at home" order means that many of them will find themselves using information technology more than they ever have before. We have a responsibility, and a welcome opportunity, to consider their needs when designing both content and the technology that delivers it. This does not mean stereotyping or patronising them with apps that pair them with younger people who could help them out; it means paying special attention to the kinds of impairments that tend to come with age, and designing to accommodate those impairments.
The Nielsen Norman Group cites three main design issues that cause problems for older adults using digital products:
- small font sizes and small targets
- inflexible and unforgiving interfaces
- exclusion from online content
Even content that is written for older adults, they found, "often treats seniors as a niche interest group rather than a diverse and growing demographic." Not very inclusive, is it?
Ginny Redish and Dana Chisnell conducted a usability review of 50 websites from the perspective of older adults, and they produced a set of heuristics to guide us in designing for older adults. Their heuristics cover four aspects of design:
- interaction design
- information architecture
- visual design
- information design
The full set of heuristics, along with questions to guide their use, appears on page 50 of the usability review report.
Age UK has a report on older users and Internet use.
Inclusive content for everyone
Accessibility has never been more important than in this difficult time, but we should not forget that one billion people worldwide live with some form of disability, 14.1 million of them in the UK. And while we've considered those registered with a disability, we have not taken into account those who are not yet registered, or who do not identify as disabled. Furthermore, let’s not forget our ageing population, which has grown globally over the last couple of decades. All of this requires us to think about digital inclusion in new ways: we need to consider digital skills, confidence, affordability of equipment and Internet access, alongside specific assistive technology needs and the impairments that occur more commonly in older people.
I hope these tips will help you to create content that is more inclusive and that you start to reach more people with your messaging during this difficult time. With the rapid and unexpected increase in home-working, please think about your disabled colleagues, who may not be able to adapt to new ways of working unless you put some of these accessibility measures in place for meetings and remote presentations.
We are being bombarded with information, and although organisations such as the NRCPD, ASLI and the Royal Association for the Deaf lobbied for a sign-language interpreter at the Prime Minister's public briefings, sadly accessibility is often still an afterthought. Please do what you can to share information publicly, on social media and across your teams, in an inclusive way.
Accessibility should not be considered as an afterthought, or as "help", but as reasonable built-in adjustments for all.