[Photo: a blind person using a white cane; the image focuses on the person's shoes and the tip of the cane.]

The literature review and research presented in this white paper respond to the societal transformations promised by artificial intelligence. The experts in this study agreed that AI must focus on expanding access and inclusion while avoiding harm to people with disabilities. To that end, the American Foundation for the Blind proposes the following non-exhaustive principles to guide developers, deployers, users, and policymakers in crafting beneficial AI for people who are blind, have low vision, or have other disabilities:

  • AI has the potential to increase access to assistive technology by automating services, like captioning, image description, and wayfinding, and integrating those technologies into mainstream devices and software that are widely available to the general public.
  • Investments in AI research and development should maximize the capabilities of the human professionals who provide opportunities and services to people with disabilities, such as in educational and transportation settings, rather than replacing those professionals altogether.
  • Some uses of AI demand greater scrutiny. When AI has a significant impact on people’s civil rights, health, safety, freedom, or opportunity, both deployers and developers have a greater obligation to ensure that the AI models in use are not discriminatory either by intent or happenstance.
  • AI systems should be designed and audited to ensure that they do not amplify harmful stigmas about people with disabilities, and their outcomes should be measured for specific negative effects on people with disabilities.
  • The data used to train AI models should be sufficiently diverse to represent people with a range of disability types and with other characteristics, including gender, age, race, and income.
  • Producers of AI training datasets should evaluate whether their datasets sufficiently represent the diversity of people with disabilities or incorporate stigmas, modify the datasets as needed, and provide transparent documentation that helps data users and researchers understand the limitations of the dataset.
  • AI chatbots, especially those used in customer service settings, should be provided with training data and resources that enable them to answer questions relevant to people with disabilities and their unique accessibility needs.
  • Investments in AI research and development, including grants from government agencies, should incentivize and prioritize research into AI that is representative of and produces fair outcomes for people with disabilities.
  • Additional resources should be invested in validation and auditing practices that ensure that people with disabilities are accurately and sufficiently represented by AI models and that decisions produced or influenced by algorithms are fair and appropriately attuned to the experiences of people with disabilities; a minimal sketch of one such audit follows this list.
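To make the auditing practices above concrete, the sketch below checks an AI screening tool's outcomes for adverse impact by comparing selection rates between applicants with and without disabilities. It is a minimal illustration, not a recommended standard: the field names, the synthetic audit log, and the use of the four-fifths rule threshold are all assumptions introduced here, not part of this paper's recommendations.

```python
# Minimal disparate-impact audit sketch. Field names and the four-fifths
# threshold are illustrative assumptions, not AFB recommendations.
from dataclasses import dataclass

@dataclass
class Outcome:
    has_disability: bool  # self-reported, collected with consent
    selected: bool        # whether the screening tool advanced this applicant

def selection_rate(outcomes, disability):
    """Fraction of applicants in one group whom the tool selected."""
    group = [o for o in outcomes if o.has_disability == disability]
    return sum(o.selected for o in group) / len(group) if group else float("nan")

def disparate_impact_ratio(outcomes):
    """Ratio of the disabled group's selection rate to the nondisabled
    group's rate. Values below ~0.8 (the 'four-fifths rule' used in U.S.
    employment-discrimination analysis) warrant human review."""
    return selection_rate(outcomes, True) / selection_rate(outcomes, False)

if __name__ == "__main__":
    # Hypothetical audit log: 50 applicants with disabilities, 500 without.
    log = (
        [Outcome(True, True)] * 12 + [Outcome(True, False)] * 38 +
        [Outcome(False, True)] * 200 + [Outcome(False, False)] * 300
    )
    ratio = disparate_impact_ratio(log)
    print(f"Disparate impact ratio: {ratio:.2f}")  # 0.24 / 0.40 -> 0.60
    if ratio < 0.8:
        print("Flag for human review: possible adverse impact on people with disabilities.")
```

A ratio below the threshold does not by itself prove discrimination, and because disability is highly heterogeneous, an aggregate ratio can mask harms to specific groups; metrics like this should trigger human review rather than substitute for it.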
  • People with disabilities should have equal access to STEM education and careers. Improving the accessibility of K-12 and higher education STEM curricula, as well as career training programs, should be a priority to create pathways for students with disabilities to enter careers creating and using AI technologies.
  • AI developers should actively recruit people with disabilities and ensure that workplaces are accessible.
  • Development tools used to program and train AI models should be accessible to and usable by people with disabilities.
  • Training courses designed to prepare existing workers to develop AI skills should be fully accessible to people with disabilities, including by incorporating accessible interfaces, captions, audio description, plain language, and alternative formats of graphical information where appropriate.
  • AI literacy and skills training should prepare people with and without disabilities to understand automation bias, as well as potential sources of disability bias and how to correct for them. Deployers of AI should provide employees with ample agency, training, and time to question the results of AI decision-making tools and to identify whether they present bias against people with disabilities.
  • Developers of software that incorporates AI models should provide transparent information about the extent to which the software has been tested for representativeness, bias, and accessibility for people with disabilities.
  • Developers of AI models should consider creating technical manuals that guide users and deployers of those models in understanding the limitations of the model and how to correct for biases that may be discovered after the fact, such as by incorporating human oversight into decision-making processes supported by AI.
  • Training should be made available to help users craft prompts for generative AI that produce results that are accurate and representative of people with disabilities.
  • Software that incorporates AI may discriminate against people with disabilities if those users cannot operate all aspects of the software interface, including with assistive technology such as screen readers. AI software should be designed to fully conform with international accessibility standards, such as the Web Content Accessibility Guidelines (WCAG) 2.2, Levels A and AA.
  • The use of AI in software and decision-making tools should be clearly disclosed to users. Particularly in cases where the use of AI could screen out people with disabilities, users should understand how to request reasonable accommodations or how to appeal decisions to a human reviewer.
  • Deployers of AI in employment and educational screening should carefully consider whether the AI model may explicitly or implicitly discriminate against people with disabilities, for example by discarding applications with employment gaps or judging candidate videos for certain eye movements or speech patterns. To the greatest extent possible, human reviewers should have access to all applications and should confirm that the screening tool appropriately recommended qualified candidates, including those with disabilities.
  • In general, educational technology, regardless of whether it incorporates AI, should be fully accessible to and usable by students with disabilities. The U.S. Department of Justice has issued regulations for public schools and universities requiring educational websites and mobile applications to be accessible to both students and parents with disabilities.
  • When used in educational technology, AI agents and chatbots should be designed to provide information in accessible formats and to produce pedagogical outputs and means of instruction that are appropriate for students with disabilities. Trained educators of students with disabilities, including teachers of students who are blind or have low vision, as well as students themselves, should be consulted in validating the appropriateness of these educational tools.
  • AI may support teachers in reducing planning, documentation, and paperwork burdens, but it should not entirely replace human educators when delivering instruction or developing educational plans for students with disabilities.
  • Students and employees with disabilities should have access to AI as an assistive technology that supports their educational and employment opportunities. School administrators and employers should consult with people with disabilities in developing appropriate policies and procedures for the use of AI in educational and employment settings, including as a reasonable accommodation.
  • AI used in software to surveil employee and student performance and productivity should not disproportionately affect people with disabilities. Employers and educational institutions should carefully evaluate whether such tools may discriminate, such as by flagging employees who need breaks for personal needs or to care for a service animal or by categorizing certain involuntary eye movements as cheating or inattention.
  • Collaboration between the assistive technology industry, AI developers, and the disability community could result in more accurate and neutral representation of individuals in image descriptions, balancing privacy, concerns about bias, and descriptive accuracy.
  • To the extent that AI powers assistive technology for people with disabilities, it may be given access to more personal information than a nondisabled user would provide. Access to assistive technology uses of AI should not be contingent upon people with disabilities relinquishing their privacy or data security, especially in situations where people without disabilities do not have to exchange privacy for access.
  • To enable people with disabilities to use AI in sensitive situations, such as reading mail, users should be able to choose where their data is stored and to what extent it is shared with an AI developer. When data must be uploaded to the cloud for greater processing power, users should be able to control whether that data is retained and accessible to the companies that process it; a sketch of such user-controlled choices follows this list.
  • User agreements should offer users the choice of whether to allow companies to access and use the images uploaded to assistive technology tools.
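The privacy choices described in the last two principles can be made concrete in software design. The sketch below, in which every class and function name is a hypothetical stand-in rather than any real product's API, shows one way an assistive image-description tool could treat cloud processing, data retention, and training use as three separate, default-deny consents.

```python
"""Sketch of user-controlled data handling for an AI-powered assistive tool.

All names here are hypothetical illustrations of the principle that cloud
processing, retention, and training use should each be separate, opt-in
choices; they are not drawn from any real product.
"""
from dataclasses import dataclass

@dataclass
class PrivacyPolicy:
    allow_cloud_processing: bool = False  # defaults deny: users opt in, never out
    allow_retention: bool = False         # may the provider store the upload?
    allow_training_use: bool = False      # may the upload train future models?

def run_local_model(image: bytes) -> str:
    """Stand-in for an on-device captioner (lower capability, higher privacy)."""
    return "on-device description (nothing left the user's hardware)"

def run_cloud_model(image: bytes, retain: bool, train: bool) -> str:
    """Stand-in for a cloud captioning call that honors retention/training flags."""
    return f"cloud description (processed remotely with retain={retain}, train={train})"

def describe_image(image: bytes, policy: PrivacyPolicy) -> str:
    # Sensitive uses such as reading mail stay on-device unless the user
    # has explicitly consented to cloud processing.
    if not policy.allow_cloud_processing:
        return run_local_model(image)
    return run_cloud_model(image, policy.allow_retention, policy.allow_training_use)

if __name__ == "__main__":
    photo = b"...image bytes..."
    print(describe_image(photo, PrivacyPolicy()))  # all defaults deny
    print(describe_image(photo, PrivacyPolicy(allow_cloud_processing=True)))
```

The design choice worth noting is that each consent is independent: agreeing to cloud processing for better descriptions does not silently grant the provider the right to retain the image or train on it.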