www.apress.com

2/4/20

How Artificial Intelligence is Helping Accessibility

by Ashley Firth

Technologies with the potential to make the web more inclusive are constantly being created. Many of these developments can be considered ‘assistive technologies’. These are tools – from machines to pieces of software – that help people with a wide range of impairments (and resulting access needs) overcome barriers in their lives.

Of course, not all technology has to be designed to solve one specific problem, nor does it have to help everyone in the same way. Think about automatic doors, for example: they might help people with motor impairments, or those speaking sign language who do not want to stop their conversations to open the door, but they are also equally useful for anybody who has their hands full. Here, the same technology helps different people for different reasons. This is an example of universal design at work. This concept aims to ensure that everything we make can be “accessed, understood and used to the greatest extent possible by all people regardless of their age, size, ability, or disability.”

With this in mind, let’s take a look at a few of the many recent advancements in artificial intelligence, and how they have made the world better for those with access needs.
 

Providing information about images

One of the most common issues with accessibility is the lack of alternative text for images, which means people who are blind or have sight loss could be missing important information. There have been a host of success stories recently, with large companies using new technology to address this problem. Google's Cloud Vision API uses neural networks not only to classify images, but also to extract text embedded in them. This is achieved through Optical Character Recognition (OCR) technology, which can ‘read’ the text and surface it alongside the image, ensuring that no valuable information is trapped in an image.
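As a rough illustration of the kind of pipeline this enables, here is a minimal sketch using the Cloud Vision Python client to pull embedded text out of an image. The file name and the way the result is printed are assumptions for illustration, and the client details should be checked against Google's current documentation.

from google.cloud import vision  # pip install google-cloud-vision

# Minimal OCR sketch: send an image to Cloud Vision and print any text it finds.
# Assumes credentials are already configured (e.g. GOOGLE_APPLICATION_CREDENTIALS).
client = vision.ImageAnnotatorClient()

with open("poster.jpg", "rb") as image_file:  # hypothetical image file
    image = vision.Image(content=image_file.read())

response = client.text_detection(image=image)
annotations = response.text_annotations
if annotations:
    # The first annotation holds the full block of detected text.
    print(annotations[0].description)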

In a slightly different use case, Facebook has been working for the past few years on automatically adding alt text to images that are uploaded to its platforms. Every day, people share more than 2 billion photos across Facebook, Instagram, Messenger, and WhatsApp, so the company set about creating a neural network that could understand what is going on in an image and make that information available to screen readers. At the time of writing it can detect “objects, scenes, actions, places of interest, and whether an image/video contains objectionable content.” Right now, every alt text entry starts with “Image may contain...” while Facebook works to perfect the system's ability to analyse an image. This is a brilliant piece of work from the world's most used social network that will help anyone using a screen reader.
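Facebook's model isn't public, but the same idea can be sketched with the Cloud Vision label detection used above: collect the labels the model is reasonably confident about and fall back to a cautious “Image may contain...” phrase. The confidence threshold, file name, and wording here are assumptions.

from google.cloud import vision

def suggest_alt_text(path, min_score=0.80):
    # Return a cautious, machine-generated alt text suggestion for an image.
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as image_file:
        image = vision.Image(content=image_file.read())

    labels = client.label_detection(image=image).label_annotations
    # Keep only labels the model is reasonably confident about, mirroring the
    # cautious "Image may contain..." wording described above.
    confident = [label.description.lower() for label in labels if label.score >= min_score]
    if not confident:
        return "Image"
    return "Image may contain: " + ", ".join(confident)

print(suggest_alt_text("holiday-photo.jpg"))  # e.g. "Image may contain: beach, sky, people"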
 

Providing automatic video captioning

YouTube has been developing speech-recognition technology, using machine-learning algorithms to automatically generate captions for its videos. They have stated that “the quality of the captions may vary” at this point but, as with any machine-learning system, the more it is used the more accurate it becomes. Importantly, any generated captions can easily be edited by the person who uploaded the video should they contain incorrectly transcribed speech. This also improves the system's accuracy for future captions, as it helps the AI to understand where it went wrong. This technology holds the potential to provide nearly immediate accessibility for one of the most popular websites, and mediums, on the internet – helping those who are deaf, have hearing loss, or face a language barrier to engage with content freely and, eventually, without having to wait for captions to be added or edited manually.
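YouTube's pipeline isn't public, but the basic idea (turn speech into text, then into caption cues that a human can correct) can be sketched with the open source speech_recognition package. The file names and the single ten-second cue are assumptions for illustration.

import speech_recognition as sr  # pip install SpeechRecognition

# Not YouTube's system: a tiny sketch of machine-generated captions that a
# human uploader could then review and correct.
recognizer = sr.Recognizer()
with sr.AudioFile("clip.wav") as source:  # hypothetical audio extracted from a video
    audio = recognizer.record(source)

text = recognizer.recognize_google(audio)  # free web speech API; accuracy varies

# Write a single WebVTT cue covering an assumed ten-second clip.
with open("clip.vtt", "w") as vtt:
    vtt.write("WEBVTT\n\n00:00:00.000 --> 00:00:10.000\n" + text + "\n")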

Google’s DeepMind division has also been using AI to generate closed captions based on lip reading. In a 2016 joint study with the University of Oxford, DeepMind’s algorithm watched over 5,000 hours of television and analysed 17,500 unique words. It then went head-to-head with a professional lip reader over 200 randomly selected video clips, and won convincingly – transcribing 46.8% of words without error, compared to the lip reader’s 12.4%.
 

Providing human-level language translation

In April 2018, Microsoft made an announcement about its free Translator app – which translates spoken audio into other languages and into text for captions – showing for the first time “a Machine Translation system that could perform as well as human translators (in a specific scenario – Chinese-English news translation)”. This was a major breakthrough and, in the year since, they’ve managed to make huge strides in the system’s ability to provide accurate translations for other languages. It now comes as a mobile app, on all major platforms, that can provide real-time translation even when the device is offline. This is really useful for people who have to regularly interact with content that isn’t in their first language, and those who are deaf.
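For developers, the same translation capability is exposed as a cloud service. Here is a rough sketch of calling the Translator Text REST API (v3) from Python; the endpoint, headers, and response shape reflect the public documentation at the time of writing and should be verified, and the key and region are placeholders.

import requests

# Sketch of a call to Microsoft's Translator Text REST API (v3).
endpoint = "https://api.cognitive.microsofttranslator.com/translate"
params = {"api-version": "3.0", "to": "en"}
headers = {
    "Ocp-Apim-Subscription-Key": "<your-key>",        # placeholder credential
    "Ocp-Apim-Subscription-Region": "<your-region>",  # placeholder region
    "Content-Type": "application/json",
}
body = [{"Text": "Bonjour tout le monde"}]

response = requests.post(endpoint, params=params, headers=headers, json=body)
for item in response.json():
    for translation in item["translations"]:
        print(translation["text"])  # e.g. "Hello everyone"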
 

Providing information about a user’s surroundings

One of my personal favourite uses of artificial intelligence is Microsoft’s Seeing AI application, which has “changed the lives of the blind and low vision community”. Here, a user can use their device’s camera as a form of sight in a range of situations, and the app will then interpret what it can see using artificial intelligence and inform the user audibly. At the time of writing it’s available in 35 countries, and can do things like read out short pieces of written text, identify currency, describe products by reading their barcodes, understand documents, and even describe people around you and their emotions.
 

Conclusion

As you can see, there is a wealth of progress being made in the emerging AI sector. Some of the world’s largest companies are simultaneously investing significant time and resources to develop solutions that help many people, including those with disabilities and access needs. The resulting developments make for an exciting time in accessibility – with the potential to radically alter (and improve) how those with a wide array of access needs interact with technology, and indeed the wider world around them.



About the Author

Ashley Firth is Head of Front-end Development and Accessibility at award-winning energy supplier Octopus Energy. Accessibility has been an obsession of his since he started this role, and he has worked closely with customers to understand their needs and use new technology to try to make their online experience as inclusive as possible. Ashley and Octopus Energy have won numerous customer and digital experience awards for their products, and their approach to web accessibility has been described as “best in class” by the Royal National Institute of Blind People (RNIB). Ashley was shortlisted for the 2018 Young Energy Professional of the Year award for customer service, spoke at the Festival of Marketing on the importance of web accessibility, and was part of Econsultancy’s first ever Neurodiversity report. He is a published writer on accessibility for Web Designer Magazine and acts as a consultant to other companies, helping them improve their approach to accessibility. Before Octopus Energy, Ashley ran the front-end team at digital and CRM agency Tangent, helping to build sites for clients such as Walkers, Carlsberg, SAP, and the Labour Party, and before that, at experiential start-up Fishrod Interactive, helping to make installations for WWE, Sky, and Budweiser. You can find him on Twitter and Instagram @MrFirthy.

This article was contributed by Ashley Firth, author of Practical Web Inclusion and Accessibility.