AI Uses in Transportation and Autonomous Vehicles

The experts generally agreed that, in the next 5 years, AI could improve access for disabled pedestrians by benefiting both sidewalk accessibility and wayfinding. The experts were divided, though, when anticipating the potential widespread use of AVs. While some experts were optimistic that AVs will support transportation access for nondrivers with disabilities, others expressed skepticism about how widely available AVs will become. Concerns revolved around the limitations of AV technology, potential expense and limited sustainable funding, and potential regulations limiting their use. Some experts also voiced concerns about the safety risks AVs could pose for disabled pedestrians, although they emphasized that much is still unknown about the likelihood of AV-pedestrian collisions or whether AVs are more likely to collide with disabled than with nondisabled pedestrians. When asked whether robust public transportation was a better investment than AVs, some experts said that investing in public transit and investing in AVs need not be mutually exclusive. However, one expert was skeptical that AV investments would be as effective long-term as investing in public transit. Finally, the experts disagreed on whether AI will improve data collection in the transportation space to promote data-driven decision making. Concerns were expressed about the lack of disability representation in data, as well as the restrictions private companies place on access to transportation data.

AI Uses in Education

The expert panel was united on three broad points related to AI and education. First, they agreed that AI should not replace human educators when delivering instruction or when writing administrative documents such as Individualized Education Programs (IEPs). Second, experts were skeptical that students with disabilities will be able to fully participate in the use of new AI systems brought into classrooms. They predicted that many new AI systems brought into classrooms will be inaccessible, creating new barriers to classroom inclusion and exacerbating existing ones. Finally, the experts concurred that training methods for developing AI literacy are likely to present access barriers for users with disabilities, due to inaccessible components such as drag-and-drop interfaces. To mitigate risks related to inaccessible interfaces, the experts further agreed that the National Institute of Standards and Technology (NIST) risk management framework ought to be applied to AI systems in education before adoption. The experts had differing opinions on how effectively AI will be used in the future to deliver individualized instruction to students with disabilities.

AI Uses in Employment

Experts shared concerns about the disproportionate negative impacts of algorithmic job candidate screening systems on candidates with disabilities. To mitigate harm, they unanimously recommended that the use of AI in applicant screening be disclosed to all job applicants to promote transparency, and that a human review AI-generated screening decisions. The experts further agreed that new job categories, such as AI supervision, may not be fully accessible to candidates with disabilities. Experts were more uncertain, though, about the benefits of AI for current employees with disabilities. While some experts acknowledged that AI embedded in workplace technologies could help people keep their jobs after becoming disabled, others emphasized that this depends on the specific system and its design. The experts also disagreed on whether AI will result in people with disabilities being better integrated into their workplaces. Skeptics emphasized that AI cannot mitigate barriers in the physical workplace or negative attitudes humans hold toward workers with disabilities. Overall, the expert panel advised caution and vigilance toward the potential for AI to worsen bias and discrimination in the workplace.

AI Uses in Healthcare and Benefits Decisions

In the healthcare domain, the experts voiced strong concerns about the potential for AI to flag people with disabilities for denials of care. In particular, it was expressed that since people with disabilities often need both a greater quantity of care and more unusual care, algorithms built to reduce or economize healthcare utilization could disproportionately threaten access to care for patients with disabilities. Experts emphasized that algorithms are trained on the “average” cases, so people with disabilities may stand out negatively to an algorithmic system because they do not fit an “average” or “typical” health profile. This risk might be amplified for people with multiple minority identities, including disability, because those people’s specific combinations of characteristics are statistically less likely to be represented in training data. Additionally, the expert panel slightly agreed with a concern about the harms of using AI in prenatal genetic testing, specifically that it could contribute to eugenic actions preventing the birth of children with disabilities.

The experts also considered whether AI will reduce wait times and expedite approvals for people waiting to receive disability benefits. Some experts believed this should happen, but others worried that in practice, AI will instead be used to expedite benefits denials. There was also disagreement on whether AI will be used to ration scarce medical resources in ways that disadvantage people with disabilities. Some experts felt this will be a significant risk, but others cautioned that we do not yet have clear evidence of this possibility.

AI Uses in Assistive Technology

The experts reached agreement on two issues related to the use of AI as blindness-related assistive technology. First, they agreed that AI can sound confident even when making mistakes in image descriptions. Such false confidence, they contended, makes it difficult for users who are blind or have low vision to judge the accuracy of visual descriptions, and thus to know whether they can trust AI-generated descriptions at all. Second, they agreed that although blind and low-vision users can employ AI to generate images for them, these text-to-image generation tools cannot describe the images they generate, so blind and low-vision users cannot verify the accuracy of AI-generated images nonvisually.

Rich debate arose over how much AI should be able to describe images of people. Some experts voiced the view that AI should be designed to provide unrestricted descriptions of all images, including those of people. Others felt that image descriptions of people should be limited, particularly the ability to recognize people by name from their images, to protect the privacy of the people being described or to avoid the harm of describing people in biased ways. Specifically, one expert advocated for descriptions of people to be considered separately from descriptions in general, as it is easier for human biases to affect descriptions of people than of animals, plants, objects, or landscapes.

AI and Privacy Concerns

The expert panel reached consensus on three points related to AI and privacy. First, blind and low-vision people may perceive text-reading AI as more private than a human reader, if the images of text are stored locally and not in the cloud. This may make on-device AI reading apps especially attractive to blind and low-vision users. Second, balancing privacy standards with accessibility is critically important when considering the impacts of AI on people with disabilities. Finally, the panel agreed that there should be strong AI privacy laws at the federal level that are directly informed by people with disabilities.

AI and Bias Concerns

Several opinions arose from the expert panel related to algorithmic biases across domains. The term “automation bias” was suggested to describe the common assumption that decisions coming from an algorithm (as opposed to a human decision maker) are unbiased. This assumption, described as being a bias of its own, can lead people to over-trust the accuracy and impartiality of algorithmic decisions. The panel also agreed that AI can oversimplify the disability experience, missing the variability within the disability community and differences between impairment groups. Further, experts believed that when an AI system makes biased decisions, businesses deploying those systems should be held accountable.

While the experts generally believed that AI developers ought to be held accountable for biased decisions made by the AI systems they build, several experts noted that, currently, AI developers are not adequately held accountable for those biases due to a lack of enforcement. Two experts also questioned whether AI developers should be held accountable for the biased decisions of AI, as biases are entrenched in the culture and the data used to train the systems.

Expert Recommendations for Solutions

The experts agreed on the following solutions involving AI: AI auditing must account for biases related to disability, not only those related to race and gender, in algorithmic decision making; the regulatory process needs to proactively involve people with disabilities; and people with disabilities need to be involved in the tech industry to ensure the accessibility of new systems. The experts felt that, currently, there is insufficient involvement of people with disabilities in AI research and development, leading to a gap between the development of technology and the lived experiences of its users.

Regarding AI regulation, experts highlighted the cumulative harm from a large number of issues each considered too small to need regulation, in effect a death by a thousand cuts. They further concurred that reactive regulation, where rules are implemented only after something bad has happened with an AI system, is common in the AI space. The experts contended that this is a problem and that AI regulation should be implemented as soon as possible, with the specific protection of people with disabilities from algorithmic biases as one of its goals.

Experts had some differing opinions on whether regulation will hinder AI adoption. Generally, experts felt that regulation and innovation are not mutually exclusive and that regulation can coexist with responsible innovation and adoption of AI. While the experts want to see AI used to improve accessibility in everyday life, they also want to ensure AI's risks do not outweigh its rewards. The experts further asserted that AI is ultimately under the control of humans, who retain the power to decide what impact it has on the experience of living with a disability.

Ultimately, the experts reached consensus that AI should not merely refrain from harming people living with disabilities but should actively strive to expand access and inclusion. They also emphasized the uniqueness of each experience of living with a disability; there are as many experiences of disability as there are people living with disabilities. They stressed how important it is to keep the individuality of AI users in mind and noted the nuance involved in dismantling accessibility barriers. They charged humans with the work of ensuring responsible AI adoption because, ultimately, humans develop AI and can choose its impact on all people – with and without disabilities.

Opinions of Industry Expert Subsample

Since prior studies have not captured the views of AI industry representatives, a separate analysis was conducted with the questionnaire responses from the 13 experts who reported working in private industry. Response patterns were similar overall, with the industry subsample reaching consensus on all of the opinions described earlier that were adopted by the full panel. For example, the industry experts agreed that AI regulation should come soon and should specifically protect people with disabilities; that the tech industry needs more diversity and disability representation in its workforce; and that a human should review decisions made by AI used in employment screening. The industry experts also agreed on some opinions that were more controversial in the full sample. Compared with the full sample, the industry experts were more optimistic that AI will help people remain employed after acquiring disabilities and that AI will help employees with disabilities become more integrated in workplaces. The industry experts also agreed that AI will introduce sweeping changes to the way society operates, including transformations in workplace productivity within the next 1–2 years.
