In recent years, artificial intelligence (AI) systems have grown in capability and versatility. As the capabilities of AI and automated systems expand, there is much excitement about the potential for autonomous vehicles, AI-enabled tools at school and in the workplace, and other innovations that could increase human efficiency. Many of these innovations have the potential to expand access and inclusion for people with disabilities, particularly the myriad AI-based assistive applications being developed specifically to support users with disabilities. Alongside these benefits, however, AI industry members, advocates, and scholars have identified risks these AI systems could pose for people with disabilities, including bias and discrimination, a lack of equitable access, and privacy concerns.
Researchers at the American Foundation for the Blind (AFB) conducted a Delphi study to synthesize expert opinions about the current and future impacts of AI on people with disabilities. A total of 32 experts across industry, policy analysis, academia, and government roles participated. They provided anonymous feedback via individual interviews and then participated in two rounds of questionnaires to build consensus.
Key Findings
The experts reached consensus on several points related to AI’s current and future impacts on people with disabilities, as well as on recommendations for maximizing benefits and minimizing risks. These included:
- Benefits of AI: On-device text recognition apps will be especially beneficial to blind and low-vision users, who may perceive them as more private than relying on a human reader. In the transportation domain, AI will improve wayfinding support and sidewalk accessibility for pedestrians with disabilities.
- Accessibility concerns: Mainstream AI systems coming to classrooms will not be fully accessible to students with disabilities, and software used to teach people how to use AI will also have significant accessibility limitations. Additionally, image-generating tools are not currently usable by blind users, because there is not yet a way for a blind user to verify the accuracy of a generated image.
- Bias and discrimination concerns: “Automation bias,” the belief that machines make fairer decisions than humans, is itself a bias that may lead to over-trust of AI systems. AI may show biases against people with “non-average” characteristics, including people with disabilities. For example, AI may deny healthcare to people with disabilities who have unusual or complex care needs.
- Need for human oversight: Humans should review decisions made by an AI system, especially in the context of hiring or education. Employers should also notify job applicants when an AI system is being used for screening.
- Need for disability community involvement: People with disabilities should be involved in all stages of AI development and deployment.
- Need for regulations: AI regulations should be proactive, informed by the disability community, and specifically protect the rights and privacy of people with disabilities.
- Using AI to expand inclusion: It is not enough to avoid harm to people with disabilities; AI should also be utilized to actively expand access and inclusion.
The experts also voiced differing opinions on some meaningful issues, such as the following:
- Autonomous vehicles: Some experts felt that autonomous vehicles will soon provide unparalleled transportation access to nondrivers with disabilities, while others cautioned that financial challenges, technological limitations, and safety concerns will likely limit their benefit.
- AI as a benefit to workers with disabilities: Some experts believed that AI will boost productivity and workplace inclusion for workers with disabilities. Others felt that AI will not overcome physical and attitudinal barriers in the workplace.
Principles for Change
Based on these findings, AFB developed a series of principles to guide AI developers, deployers, users, and policymakers in ensuring that AI minimizes harm and expands inclusion for people with disabilities, including people who are blind or have low vision. In summary, the principles include the following:
- When AI has a significant impact on people’s civil rights, health, safety, freedom, or opportunity, both deployers and developers have an obligation to ensure that the AI models in use are not discriminatory, whether intentionally or unintentionally.
- AI systems should be designed and audited to ensure that they do not amplify harmful stigmas about people with disabilities.
- Producers of AI training datasets should evaluate whether their datasets represent a sufficiently diverse range of people with disabilities, including diverse disability types and people with intersecting identities, and modify their datasets accordingly.
- AI developers should actively recruit people with disabilities into their workplaces, and AI workplaces should be fully accessible. This includes ensuring that AI programming and training tools are accessible and that there are accessible avenues for people with disabilities to learn AI skills.
- Developers of software that incorporates AI models should provide transparent information about the extent to which the software has been tested for representativeness, bias, and accessibility for people with disabilities.
- AI software should fully conform with international accessibility standards, such as the Web Content Accessibility Guidelines 2.2, Level A and AA.
- The use of AI in software and decision-making tools should be clearly disclosed to people impacted. Particularly in cases where an AI tool could screen out people with disabilities, such as in hiring, impacted people should understand how to request reasonable accommodations or how to appeal decisions to a human reviewer.
- AI should not entirely replace human educators when delivering instruction or developing educational plans for students with disabilities.
- Students and employees with disabilities should be able to use AI as assistive technology. School administrators and employers should consult with the disability community in developing policies for the use of AI as a reasonable accommodation.
- When using AI with sensitive or private information, users should be able to choose where the information is stored and who can access it.