Introduction
Artificial intelligence (AI) systems can be defined as “machine-based systems that can make predictions, recommendations, or decisions influencing real or virtual environments” (U.S. Department of Justice Civil Rights Division, 2024). There is increasing attention to AI’s potential to mitigate barriers for people with disabilities. Examples include autonomous vehicles (AVs) for non-drivers, generative AI to assist with communication or cognitive tasks, and AI systems used for image description. However, these systems, along with other mainstream applications of AI, may also present new barriers for people with disabilities. For example, the datasets used to train and develop AI systems have been shown to embody biases that disadvantage marginalized groups on the basis of race and gender (Kamikubo et al., 2022; Lewicki et al., 2023; Shelby et al., 2023). These biases may also disadvantage people with disabilities who are impacted by automated decisions (Disability Rights Education and Defense Fund [DREDF], 2022; Glasgo et al., 2024; Tyson, 2024).
Governmental regulation is one way to ensure that audits and transparency efforts are applied consistently. Several efforts are underway to develop regulations at both the state and federal levels to incentivize AI research and development as well as to prevent outcomes that perpetuate discrimination. During the first Trump administration, the federal government issued its first executive order on AI, intended to promote American research into and use of the technology (Exec. Order No. 13859, 2019). The Biden administration issued an executive order, with subsequent directives and guidance from the Office of Management and Budget, focused on how the government uses AI and addressing some of its rights- and safety-impacting uses (Exec. Order No. 14110, 2023). Several states have considered or passed legislation that gives state agencies or attorneys general the power to assess and audit AI models and tools for bias. Regulatory proposals range from simply clarifying that existing discrimination prohibitions apply to the use of AI to crafting specific auditing regimes that require both deployers and developers of the technology to assess whether AI tools are fair to people with disabilities and other protected classes.
The following sections include a brief review of academic and policy literature on the benefits and risks of AI for people with disabilities. This paper then presents consensus findings from a panel of experts drawn from private industry, policy analysis, academia, and government. The findings encompass consensus opinions on the current and future state of AI accessibility and fairness for people with disabilities, especially people who are blind or have low vision. The paper concludes with a series of principles derived from the literature and the study findings.
Benefits of AI for People with Disabilities
By using automation in place of human capabilities to perform tasks, AI systems hold promise to make activities and environments more accessible for people with disabilities. In the transportation domain, there is much anticipation of the ongoing development of AVs, which are currently on the road in several US cities (Hampshire, 2024; Ray, 2023). Fully autonomous vehicles (sometimes referred to as Level 5 AVs; SAE International, 2021) could support independent transportation for individuals who cannot drive because of visual, physical, or other disabilities (Hampshire, 2024). AI also holds great potential to personalize education to diverse learner needs (Morrison et al., 2021, 2023) and to expand accessibility and accommodation options for workers with disabilities (PEAT, 2023a).
A growing body of literature explores the promise of AI-enabled assistive technologies, that is, AI systems explicitly created to “increase, maintain, or improve the functional capabilities of people with disabilities” (Assistive Technology Act, 2004). Prominent examples include AI applications built for image description, image generation, object recognition, captioning, and communication and cognitive supports (Bennett et al., 2021; Bianchi et al., 2023; Gamage et al., 2023; Theodorou et al., 2021). Other innovative applications of assistive AI include systems that aid blind and low-vision people in locating lost items (Morrison et al., 2023) or accessing visual information on clothes shopping websites (Stangl et al., 2018).
Risks AI Poses for People with Disabilities
As AI evolves, researchers and advocates have begun to raise concerns about risks AI systems may pose for marginalized groups, including people with disabilities. Regarding AVs, some advocates have identified potential safety risks for pedestrians with disabilities. For example, an AV might fail to recognize, and properly avoid colliding with, a pedestrian who uses a mobility aid such as a wheelchair, walker, or guide dog (Moura, 2022). In one experiment, a simulated AV consistently collided with a person propelling herself backward in a wheelchair (Treviranus, 2018). If designed and trained well, AVs could improve safety relative to human drivers, but some argue that AVs should be held to a safety standard higher than the one set by human drivers (Ray, 2023).
More broadly, AI systems are typically trained on data from people with “average” characteristics. Since disability and the diverse range of conditions that cause disability are statistically unusual characteristics, AI systems used in healthcare or benefits decision making may make inaccurate diagnostic judgments or disproportionately flag disabled people for denials of services (Brown et al., 2020; DREDF, 2022; Edwards & Machledt, 2023; NAIAC, n.d.). Similarly, AI systems used to screen job applicants may disproportionately reject disabled applicants based upon resume characteristics or behaviors during automated screening (Glasgo et al., 2024; PEAT, 2023a, 2023b; Wiessner, 2024). Furthermore, AI systems used to monitor students or employees may flag the atypical work behaviors of disabled students or employees (such as not looking at a camera, not clicking a mouse, or taking frequent movement breaks) as warranting closer surveillance or disciplinary action (Center for Democracy & Technology [CDT], 2022; Tyson, 2024; Woelfel et al., 2023). In addition to placing disabled students and employees at risk for discriminatory discipline, these systems may also threaten their privacy by potentially “outing” their disabilities (CDT, 2022). Another study found that current large language model (LLM) chatbots mirror predominant cultural stereotypes about disability, such as telling stories featuring disabled people as inspirational or passive (Gadiraju et al., 2023).
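To make the statistical mechanism behind these concerns concrete, the following minimal sketch (in Python, using scikit-learn) trains a classifier on synthetic data in which one group is heavily underrepresented and exhibits a different feature pattern from the majority. All data, group sizes, and numbers here are invented purely for illustration; they are not drawn from any of the studies cited above.

```python
# Illustrative sketch only: synthetic data showing how a model trained
# mostly on "typical" cases can err far more often on an atypical group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

def make_group(n, feature_shift):
    """Sample n people whose features cluster around a group-specific pattern."""
    X = rng.normal(loc=feature_shift, scale=1.0, size=(n, 2))
    # The true outcome follows the same rule for both groups, but relative to
    # each group's own baseline (feature 0 above what is typical for the group).
    y = (X[:, 0] > feature_shift).astype(int)
    return X, y

# Training data: the majority group vastly outnumbers the minority group.
X_maj, y_maj = make_group(1900, feature_shift=0.0)
X_min, y_min = make_group(100, feature_shift=3.0)
model = LogisticRegression().fit(
    np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min])
)

# Evaluate each group separately on fresh samples from its own distribution.
for name, shift in [("majority", 0.0), ("minority", 3.0)]:
    X_test, y_test = make_group(1000, feature_shift=shift)
    print(f"{name} accuracy: {model.score(X_test, y_test):.2f}")
```

In runs of this sketch, accuracy is high for the majority group and falls to roughly chance for the minority group: the model learns the decision boundary that fits the dominant pattern in its training data and never sees enough minority examples to learn the second pattern. This is the dynamic, at far greater scale and stakes, that the auditing regimes discussed above are intended to detect.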
A final set of concerns revolves around the current limitations of assistive AI systems and the biases they may perpetuate. In one study, prompting an image-generation model to produce an image of a “thug” tended to yield images of dark-skinned people (Bianchi et al., 2023). Bennett et al. (2021) recommended caution in using and interpreting AI-generated image descriptions of people, whose identities the AI may misrepresent. Finally, assistive AI research may not adequately involve feedback from potential users, resulting in products that do not align with users’ actual needs. Demonstrating this issue, Gamage et al. (2023) reviewed 646 studies of assistive AI for blind and low-vision users and found that only 38 involved blind and low-vision participants in the design, ideation, or requirements-gathering stages of the research. This resulted in AI development priorities that diverged from the stated priorities of potential users: a majority of these studies focused on assistive AI for object handling or personal mobility, whereas a group of blind and low-vision participants reported that they most desired AI assistance with text recognition and obstacle detection (Gamage et al., 2023). There is thus a need to involve people with disabilities both in the datasets used to train AI systems and in the design and development of such systems.