Facebook’s brief image outage earlier this week exposed to the general public just how bad accessibility really is on today’s visual-first social Web. While most major social media platforms offer users some way to provide descriptive ALT text for their images and videos, few users take the time to write accessible textual descriptions of their imagery. Instead, most ALT text on social platforms is automatically generated by deep learning algorithms that produce a comma-delimited set of metadata tags naming the major objects or activities depicted in an image. The quality of these labels today is ludicrously bad, yet even the US Government, which has long enforced strict accessibility standards for government Web content, does not require that social content be made accessible. While governments and the technology community are investing heavily in combating AI bias, they care little about accessibility bias. Will those with different abilities simply be left behind by the future Web?
For a brief moment this week, Facebook and Instagram users saw empty boxes where their images should have appeared, alongside horrifically bad descriptive ALT text showing how Facebook’s algorithms saw each image.
Sadly, this is the world experienced by those with different visual abilities every day.
Those who access the Web through text-only screen readers are entirely dependent on the textual descriptions of images provided through their ALT tags.
Unfortunately, few Facebook and Instagram users can be bothered to provide such descriptions for their images. Although both sites allow users to type a textual description of each image to be read aloud by screen readers, very few users do. Even policymakers who have staked their entire careers on accessibility and bridging divides are too busy chasing viral fame to make their own social media streams accessible to the constituents who rely on screen readers. Indeed, the US Government does not actually require them to do so.
Instead, the majority of ALT text on social media today is automatically generated by deep learning algorithms that produce a comma-delimited string of metadata tags describing common objects and activities depicted in the image. Only entities the model has previously been trained to recognize can appear in these tags.
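The tag-joining step described above can be sketched in a few lines. This is a hypothetical illustration, not Facebook’s actual code: the function name, label strings, confidence values, and threshold are all invented for the example.

```python
# Hypothetical sketch of how a platform might turn image-classifier output
# into auto-generated ALT text. Labels, confidences, and the 0.5 threshold
# are invented; a real system's pipeline is far more complex.

def generate_alt_text(predictions, threshold=0.5):
    """Join confident labels into a comma-delimited ALT string.

    predictions: list of (label, confidence) pairs from an image model.
    Only labels the model was trained on can ever appear here.
    """
    labels = [label for label, conf in predictions if conf >= threshold]
    if not labels:
        return "Image"  # nothing recognized above the threshold
    return "Image may contain: " + ", ".join(labels)

# Invented classifier output for a hypothetical photo
preds = [("2 people", 0.93), ("outdoor", 0.81), ("sky", 0.64), ("dog", 0.31)]
print(generate_alt_text(preds))
# -> Image may contain: 2 people, outdoor, sky
```

Note that anything the model never learned, or scored below the cutoff, simply vanishes from the description, which is exactly why screen reader users receive such an impoverished picture of the image.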
The accuracy is comedically bad. Unlike the state-of-the-art image recognizers used in the commercial world, the models being deployed by social media sites at the moment appear to be optimized for speed rather than expressiveness and accuracy.
To the overwhelming majority of all Web users, however, this error rate is entirely invisible. The average social media user never sees the Web’s ALT text, instead basking in the rich, vibrant world of modern high-resolution Web imagery.
The brief image outage earlier this week led to considerable media coverage as journalists and pundits saw for the first time just how bad these ALT tags really are.
Yet sadly, most of this coverage erred towards lampooning the results, joking about particularly bad tags and noting how thankful the authors were that they did not have to rely on these ALT tags themselves.
Unfortunately, for those relying on screen readers, these tags are how they see the Web’s imagery.
For them the abysmal quality of today’s tags is not a joke. It is a profound limitation to their ability to use our increasingly visual social platforms.
Putting this all together, for all of society’s focus on AI bias, there has been precious little attention paid to accessibility bias. Unlike the effects of algorithmic biases, which are felt by everyone, accessibility bias is invisible to the average Web user, and the easiest fix, requiring users to write descriptions of the images they post, would create interface friction that few users appear willing to accept.
Instead, as the Web becomes increasingly visual, it is increasingly leaving behind an entire portion of society, walling them off from the digital world.
Even the US Government, which has long been a strong advocate for digital accessibility, no longer appears to view accessibility as important in the social media era, having waived its once sacrosanct rules that historically required official government publications to be accessible to those with differing physical abilities. As lawmakers increasingly turn to inaccessible social media platforms to make policy announcements and connect with constituents, those with differing abilities are being increasingly cut off from the democratic process.
In the absence of governmental leadership, it is unclear what might turn the tide on accessibility.
One possibility is that companies’ endless need for hand-annotated imagery might lead them to push their users to provide ALT text for their images in order to train their algorithms.
In the end, as the Web is becoming more visual, it is also becoming more discriminatory.