These statements were expressed by at least one member of the expert panel, but the panel as a whole did not agree on each statement. This can occur when some panel members strongly agree with a statement while others disagree, or when some members agree while others hold more neutral opinions. Because people are generally more willing to express agreement than disagreement, statements that at least some experts viewed as false were especially likely to generate varied opinions. Other possible reasons for non-consensus include differing interpretations of a statement, differing predictions about where the future is headed, or a statement resting on a premise that some experts are not yet willing to accept.

Non-Consensus Items

Q4 AI developers are accountable for the bias when AI makes biased decisions.

Q8 AI advancements increase disability stigma; e.g., artificial vision creates pressure for all bodies to conform.

Q9 We are currently experiencing a great deal of hype marketing around AI that overpromises to the layperson what AI is doing.

Q11 AI will introduce sweeping changes to how society operates, more drastic than the introduction of the mobile phone.

Q12 Large Language Model (LLM) technology will plateau in 1-3 years.

Q15 AI applications in education are likely to become punitive, involving monitoring software, high-stakes testing, and tools that flag AI use.

Q17 Students with disabilities are accused of cheating at an unfair rate because generative AI is built into relevant assistive technologies.

Q18 AI will effectively tune curricula to the needs of individuals with disabilities.

Q20 Relying on AI for IEPs could be harmful to disabled students due to its lack of interactivity and collaboration.

Q21 In 5 years, AI will have student data from curricula and be able to write better IEPs for students than professionals can.

Q22 Many AI tools coming to classrooms will be inaccessible.

Q25 The rollout of faulty AI systems for determining benefits eligibility is a life-or-death issue for disabled people.

Q26 AI should be used to make benefits applications easier to fill out and expedite approvals.

Q28 Healthcare AI is rapidly being adopted with insufficient supporting data.

Q29 AI tools to ration scarce medical resources are likely to discriminate against people with disabilities.

Q34 AI embedded in mainstream systems can help people with new disabilities remain employed.

Q35 AI will result in people with disabilities being more employable and integrated in workplaces.

Q36 Bossware / employee surveillance software violates reasonable accommodations.

Q37 Bossware / employee surveillance software disrupts accessibility.

Q38 Bossware / employee surveillance software interferes with privacy about disability status, disclosing to employers what accommodations an employee utilizes.

Q39 AI use in workplace accommodation determinations is problematic for disabled workers because it interferes with the employee/employer collaborative process.

Q40 In the next 1-2 years, AI will significantly transform productivity, creating new opportunities and necessitating retraining.

Q41 In the hiring context, AI detects disabled behavior as “abnormal” in a negative way.

Q47a I believe there will be many level 4 or 5 autonomous vehicles on the road within 10 years (vehicles that can drive without a driver in certain settings / vehicles that never need a driver).

Q47b More level 4 or 5 autonomous vehicles on the road will provide accessible transportation without needing to schedule rides or have a driver’s license.

Q48 Society is not yet ready for mass adoption of autonomous vehicles.

Q49 The rise of autonomous vehicles (AV) will disproportionately threaten the safety of disabled pedestrians.

Q50 Driver’s licenses should not be a requirement for owning or operating a fully autonomous vehicle (level 4 or 5).

Q53 Robust public transit systems would be a better investment than further autonomous vehicle development.

Q54 AI will improve data collection in the transportation space so that future decisions can be data driven.

Q57 AI automated notetakers and AI proxies create conflict between their use as an accommodation and privacy and consent concerns.

Q58 Assistive AI is under-regulated because it falls under assumptions of beneficence.

Q59 Camera bans in schools and workplaces will unfairly limit access to image description AI.

Q62 AI should only describe people who have consented to be described.

Q63 People should be able to get unrestricted descriptions of images, up to the level AI can technically provide.

Q64 Inaccessible AI features are being layered on top of mainstream tools creating new barriers.

Q66 AI-generated documents will improve accessibility and reduce dependence on third-party software.

Q69 Advertisers or tech companies should not be allowed to detect disability in the data they routinely collect from their users in order to tailor advertising to the interests of that disability community.

Q71 If AI can recognize people by name, that is a threat to privacy.

Q77 Companies with a lot of money are less interested in accessibility than smaller companies with less money.

Q80 Disabled people may need to be exempted entirely from some algorithmic systems (e.g., eye tracking).

Q81 The language of the ADA does not cover AI (AI sits in a loophole).

Q86 Regulation will be a main hindrance to AI adoption.

Q87 AI is already appropriately regulated: if you discriminate using AI, it is still discrimination and you are still accountable.

Q88 AI development is outpacing our control over it.