Facebook yesterday announced a new service designed to assist the millions of visually impaired people who use one of its many products on a daily basis. Called automatic alternative text, the system, according to the social media giant, uses a neural network to automatically generate and append descriptions to photos uploaded to Facebook (an Instagram and WhatsApp rollout is likely in the future). These approximate descriptions can then be read aloud by screen readers, giving visually impaired users a sense of what a photo contains. The development also, of course, allows Facebook to tag photos automatically beyond any text a user might assign — thus permitting more finely tuned advertising.
"Before today, people using screen readers would only hear the name of the person who shared the photo, followed by the term 'photo' when they came upon an image in News Feed," reads Facebook's announcement. "Now we can offer a richer description of what’s in a photo thanks to automatic alt text. For instance, someone could now hear, 'Image may contain three people, smiling, outdoors.'”
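Facebook hasn't published implementation details, but the quoted output suggests a simple pattern: a vision model emits candidate concepts with confidence scores, and only the confident ones are stitched into a sentence for the screen reader. The sketch below is a hypothetical illustration of that composition step — the concept names, threshold, and function are assumptions, not Facebook's actual code.

```python
def build_alt_text(concepts, threshold=0.8):
    """Compose an alt-text string from (concept, confidence) pairs.

    Only concepts at or above the confidence threshold are included,
    mirroring the hedged phrasing 'Image may contain ...' that Facebook
    uses to signal the description is a machine guess.
    """
    kept = [name for name, confidence in concepts if confidence >= threshold]
    if not kept:
        return "Image"  # fall back to the bare label when nothing is confident
    return "Image may contain: " + ", ".join(kept)


# Hypothetical model output for the example photo in Facebook's announcement.
detected = [
    ("3 people", 0.99),
    ("smiling", 0.91),
    ("outdoor", 0.86),
    ("bicycle", 0.42),  # below threshold, so it is dropped
]

print(build_alt_text(detected))
# → Image may contain: 3 people, smiling, outdoor
```

Reporting only high-confidence tags is a natural design choice here: a wrong description read aloud is arguably worse for a blind user than a sparse one.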
Wired spoke with one of the engineers behind the development, Matt King, who acknowledged that the rough capabilities of the automatic alt text system leave a lot to be desired, but argued that something is absolutely better than nothing.
“My dream is that it would also tell me that it includes Christoph with his bike,” King, speaking of a photo showing his friend biking through Europe, told the publication. “But from my perspective as a blind user, going from essentially zero percent satisfaction from a photo to somewhere in the neighborhood of half is a huge jump.”
The service will first be available to iOS screen readers set to English, but Facebook expects to expand it to other platforms and languages soon.