The proliferation of Generative Artificial Intelligence (GenAI) tools has brought about a critical shift in how people approach information retrieval and content creation across diverse contexts. Yet we have a limited understanding of how blind people use and make sense of GenAI systems. To bridge this gap, we report findings from interviews with 19 blind individuals who incorporate mainstream GenAI chatbots such as ChatGPT, Microsoft Copilot, Google Gemini, and Claude, as well as the GenAI-powered image description tool Be My AI (a feature of the Be My Eyes app), into their everyday practices. We also detail the ways in which blind individuals form mental models of GenAI tools. Finally, we highlight how blind users weigh concerns about ableist biases and other harms perpetuated by GenAI tools against the benefits they receive from these tools. We discuss key considerations for rethinking access and information verification in GenAI tools, unpacking erroneous mental models among blind users, and reconciling the harms and benefits of GenAI from an accessibility perspective.
Presenter
Maitraye Das
Assistant Professor of Computer Science and Art + Design, Northeastern University
Maitraye Das is an Assistant Professor in Computer Science and Art + Design at Northeastern University, where she directs the Technology, Equity, and Accessibility Lab. Her research in Human-Computer Interaction (HCI) focuses on making collaboration, content creation, and learning more accessible and equitable for people with disabilities. Maitraye's prior and ongoing work investigates how accessibility is created and negotiated in the contexts of collaborative writing, creative making, ideation, and remote work in ability-diverse teams involving blind, low-vision, and neurodivergent professionals. Recently, she has been exploring how blind and low-vision individuals use and understand generative AI tools, what information they need in alt text for AI-generated images, and how we might enhance AI literacy among blind and low-vision people.