AI feels like it’s everywhere – on your phone, powering search engines, and even filtering what you see on your social media feed. So, it’s no surprise that it’s creeping into web accessibility too.
It’s easy to be sceptical and brush it off as something that doesn’t belong here, especially when AI in other areas has been hit or miss, but AI does have some potential when it comes to web accessibility.
The question isn’t whether AI can be useful, but how and when it should be used.
The key thing to remember is that AI should be a tool, not a replacement for actual accessibility knowledge or human expertise. Let’s break down what AI can do well, where it falls short, and why it can be a stepping stone, rather than the final destination.
Opportunities: Speed and Scale
AI is ridiculously fast at processing data and spotting patterns, much faster than any human could manage. This makes it useful for large-scale accessibility checks, where it can scan an entire website in seconds and highlight potential issues. For example, AI-powered accessibility checkers can crawl through a site and instantly flag missing or incorrect alt text, detect failing colour contrast, and check whether the tab order actually makes sense for keyboard navigation. These are all things that would take a human tester significantly longer to go through manually.
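To see just how mechanical this kind of check is, here is a minimal sketch of a missing-alt-text scan using only Python's standard library. The `MissingAltChecker` class name and the sample markup are my own for illustration; real tools do far more, but the core pattern-matching looks like this:

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects the src of every <img> tag that has no alt attribute at all."""

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attributes = dict(attrs)
            if "alt" not in attributes:
                self.missing.append(attributes.get("src", "(no src)"))

page = """
<img src="logo.png" alt="Company logo">
<img src="hero.jpg">
"""

checker = MissingAltChecker()
checker.feed(page)
print(checker.missing)  # the hero image has no alt attribute, so it gets flagged
```

Note what this can and can't do: it spots an *absent* alt attribute in milliseconds, but it has no opinion on whether "Company logo" is actually a useful description. That judgement is where the human comes in.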
AI can also be used to process massive amounts of content at once. If a company has thousands of images without alt text, an AI tool can generate descriptions for all of them instantly. If a website has hundreds of pages with inconsistent heading structures, AI can flag where improvements are needed.
Basically, if it’s a clear-cut Web Content Accessibility Guidelines (WCAG) criterion failure that follows a known pattern, AI has probably got it nailed.
Adaptable
One of AI’s biggest strengths is that it can be used for a variety of accessibility tasks, beyond just detecting issues. It’s not just a checker – it can actually attempt to fix things (with varying success).
Many AI tools can generate alt text or captions on the fly, restructure content to improve heading hierarchy, or even attempt to fix contrast issues by adjusting colours dynamically.
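Contrast is a good example of a fix that tools can genuinely automate, because the pass/fail line is a formula rather than a judgement call. Here is a sketch of the relative luminance and contrast-ratio calculation defined in WCAG 2.x; the formula is from the spec, but the example colour values are my own:

```python
def _linearise(channel):
    # Convert an 8-bit sRGB channel (0-255) to its linear value,
    # per the WCAG 2.x relative luminance definition
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (_linearise(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(colour_a, colour_b):
    lighter, darker = sorted(
        (relative_luminance(colour_a), relative_luminance(colour_b)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible contrast, 21:1
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
# WCAG AA requires at least 4.5:1 for normal-size text; #767676 on white just passes
print(contrast_ratio((118, 118, 118), (255, 255, 255)) >= 4.5)  # True
```

Because the maths is this precise, a tool can nudge a failing colour until the ratio clears 4.5:1. What it can't tell you is whether the adjusted colour still fits your brand, or whether the element should have been visible at all.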
Take AI-generated captions as an example. For people who are deaf, hard of hearing, or just watching videos in the library, captions are essential for accessing video content. Manually transcribing every video is time-consuming, so AI-generated captions are often used as a first step. Platforms like YouTube and Zoom have automatic captioning features that, while not perfect, provide an instant baseline level of accessibility.
Cost-Effective
For smaller teams that don’t have an accessibility expert on hand, AI can help make accessibility more approachable for non-specialists. Hiring a full-time accessibility consultant isn’t always feasible for startups, small businesses, or independent creators, and AI can provide an affordable first step.
For example, many small businesses rely on AI-driven accessibility plugins that claim to “make your website accessible in one click.” While these tools come with their own set of issues (which we’ll get into), they at least provide a starting point for businesses that wouldn’t otherwise have accessibility on their radar.
But, and I can’t emphasise this enough, AI is only the first step. It’s a bit like using spellcheck instead of hiring an editor. AI might catch the basics, but it won’t refine your content the way a human can. The same applies to accessibility – AI can flag problems, but it takes human expertise to truly fix them.
Challenges
Context and Nuance
Remember, AI can’t read the room. It doesn’t understand context the way humans do.
The classic example is alt text. AI might generate a description like “Image of a dog,” which is technically correct, but it misses the point of the image. If the dog is a guide dog assisting someone, that’s a crucial detail that AI wouldn’t necessarily pick up on.
AI struggles even more with complex or abstract content. A chart comparing financial trends? AI might say, “Image of a chart,” which is not helpful. A meme with layered sarcasm? AI probably won’t get the joke.
And let’s not even start on AI trying to describe art. Many AI-generated descriptions of paintings, photographs, or other visual media completely miss the artistic intent behind the piece.
As an example of this, for Global Accessibility Awareness Day 2024, I hosted a quiz at Spindogs on writing alt text. One of the questions used The Fruit Basket, painted by Giuseppe Arcimboldo. Most people at Spindogs described the painting accurately, noting the nuance that the fruit is arranged to look like a face. Here is one of their descriptions:
“A painting of an arrangement of fruit and vegetables that is depicted in such a way as to also appear like an abstract depiction of a human face.”
AI’s description of this was:
“A still life painting depicting a basket filled with various ripe fruits, including pears, peaches, grapes, and other produce.”
Totally missed the point.
Alt text is just a single example where a human touch will always be needed. This applies across all AI tools, at the very least, to guide, review, and correct AI’s output.
False Sense of Security
AI tools can make it seem like accessibility is ‘handled.’ Spoiler: it’s not.
Many businesses assume that because they’ve installed an AI-powered accessibility plugin, they’re fully compliant with accessibility laws. The reality is that most of these tools don’t fix accessibility problems – they put a temporary patch on them.
For example, some tools try to “fix” keyboard navigation issues by forcing focus onto certain elements. But this can actually break the experience for screen reader users, making it harder to navigate rather than easier.
True accessibility isn’t just about technical compliance – it’s about usability. AI might say a website passes WCAG checks, but only human testing can confirm whether real people can actually use it.
Ethical Concerns
Personally, I don’t trust AI to make structural changes to a webpage on the fly. Why? Because we don’t always know where it’s getting its information from.
AI relies on massive datasets to generate its responses, and many of those datasets have questionable biases, outdated information, or poor accessibility practices baked in. That means AI can sometimes reinforce bad habits instead of improving things.
And then there’s the environmental impact. Ethics and morality are core to my work, so it won’t be a shock that, alongside accessibility, sustainability is also hugely important to me. AI data centres are expanding rapidly, and they come with a significant environmental cost:
🗑️ They generate electronic waste.
💧 They consume vast amounts of water, already scarce in many places.
💎 They depend on rare minerals, often mined in unethical and unsustainable ways.
⚡ They use huge amounts of electricity, contributing to greenhouse gas emissions and worsening the climate crisis.
It’s something we have to consider when we talk about AI as a solution.
The Bottom Line
AI can help with web accessibility by flagging issues and, in some cases, attempting to fix them. But it’s not a replacement for actual human effort or expertise.
In my opinion, the best approach is a hybrid model – using AI to assist and speed up accessibility work, but always having humans involved to provide context, test usability, and ensure quality.
That being said, AI is evolving fast. The tools we have today may lack nuance and always need a human touch, but in a few years they could be drastically different or, dare I say, better! If AI becomes more sophisticated at understanding context and real-world accessibility needs, we may need to rethink how we use it. The key is to stay flexible, embracing change and improvements where they genuinely help while staying critical of where AI still falls short.
For now, use AI as a starting point, not the whole solution. A tool, not a fix.