Neshise Insights / accessibility

What accessible AI actually requires

Accessibility in AI is not a coat of paint applied at the end of a project. It is a set of design choices made early, often, and in conversation with the people most affected.

By Neshise

There is a quiet pattern in our industry: the word “accessible” is added to a product page once a feature ships, and then forgotten. The work of making something usable to a wider range of people happens — if it happens at all — in the final sprint, after the difficult architectural decisions have already been made.

This essay is a small argument that accessible AI requires the opposite stance. Access is not a layer; it is a frame.

Three things that have to be true

For an AI system to be accessible in any meaningful sense, three conditions have to hold.

It must be reachable. A model that requires a high-bandwidth connection, a recent device, and a credit card on file is not reachable by most of the people who could benefit from it. Reachability is partly a research problem (smaller models, better quantization), partly an interface problem (the input modalities you support), and partly a policy problem (pricing, regional availability, rate limits).

It must be understandable. When something goes wrong — a refusal, a hallucination, a subtle bias — the person on the other side of the screen has to be able to make sense of it. Understandability is what model cards, documentation, and honest UX copy are for. It is also why we should resist anthropomorphic interfaces that obscure how the system actually works.

It must be answerable to the people it affects. This is the part that is most often missing. Accessible AI is not built for a population — it is built with one. Disability-led design teams, plain-language reviewers, and users from outside the dominant language and culture of the building team will surface problems that no internal evaluation can.

What this means in practice

In our own small practice, this has translated to a few habits:

  • We read model cards carefully before recommending a tool, and we look for what is not there.
  • We caption and transcribe every video we publish, and we treat captions as content, not an afterthought.
  • We write at a deliberately slower cadence, because revision is what makes language plain.

None of these are dramatic. Together, they are the work.

A starting place, not a finish line

Accessibility is not a checkbox. There is no version of an AI system that is accessible to all people in all contexts. But there is a version that listens better, ships more carefully, and stays in a longer conversation with the people it serves.

That version is the one we are trying to build toward.