Items shown here reached consensus in the second or third round of consultation and are presented in rank order of strength of agreement. For statements that did not reach consensus in the second round, participants were shown the group mean before responding in the third round, so that they could consider moving closer to the group consensus or justify their disagreement. Justifications were used in the dissent analysis for all topics.
Consensus Items
Q43 A human in the loop is necessary for candidate screening.
Q79 The tech industry needs more diversity among its own employees to spot and guard against the many types of bias that AI can generate.
Q7 AI auditing must account for anti-disabled biases in addition to racial and gender biases.
Q3 AI needs to focus on expanding access and inclusion; it is not enough merely to avoid harm to people with disabilities (PWD).
Q19 AI should be a partner, not a replacement, in writing Individualized Education Plans (IEPs) for students with disabilities.
Q75 There should be strong privacy laws at the federal level that are informed by the disabled community.
Q6 Automation bias (the belief that if it comes from the algorithm it must be true and unbiased) leads to overtrust of AI for tasks at which it is not particularly accurate.
Q72 Balancing privacy standards with accessibility needs is critical in AI development for PWD.
Q76 Regulation needs to ensure individuals with disabilities are proactively considered in AI development.
Q42 The use of AI in resume screening or hiring decisions needs to be disclosed to all applicants.
Q16 AI should not replace interactions with human educators.
Q78 PWD should be involved at every stage of creating, procuring, and deploying algorithmic decision-making systems.
Q10 Skills like curiosity, empathy, and critical thinking will remain the most relevant after AI adoption.
Q65 Training methods for AI literacy, such as drag-and-drop interfaces, pose barriers for people with disabilities.
Q84 Reactionary regulation, where actions are taken only after something bad happens, is common in AI.
Q74b There is a gap between technology development and user experience as it relates to disability needs.
Q30 For patients who have a "non-average" characteristic, AI could fail in ways that are difficult to detect.
Q74a There is a lack of involvement of PWD in research.
Q5 Businesses that deploy AI solutions are accountable for bias when the AI makes biased decisions.
Q85 Regulation of AI should come sooner rather than later and specifically protect individuals with disabilities.
Q82 The National Institute of Standards and Technology (NIST) risk management framework, which evaluates privacy, security, and related risks, should be applied to educational AI software before adoption.
Q52 AI can be used to track sidewalk accessibility to create better routes.
Q73 Blind users may perceive AI for reading as more private than a human reader, provided images are not stored in the cloud.
Q70 AI tools oversimplify disability and miss the variability within the disabled community.
Q61 Text-to-image tools are currently inaccessible, as blind users are unable to review generated images non-visually.
Q27 Many PWD need atypical healthcare, which AI is likely to flag for denial.
Q51 AI will revolutionize wayfinding accessibility for blind people in the next 5 years.
Q60 AI models often sound confident when making mistakes, making it almost impossible for blind users to tell whether an image description is wrong.
Q83 There is cumulative harm from things viewed as too minor to need regulation: death by a million small cuts.
Q22 Many AI tools coming to classrooms will be inaccessible.
Q44 New job categories, like AI supervision, could be inaccessible to PWD.
Q31 AI-driven prenatal screening tools will lead to eugenic practices.