Generative AI, including large language models (LLMs) and text-to-image (T2I) models, is rapidly transforming the tools we use every day. This session will discuss making AI-generated media more accessible and representative, drawing on two studies:
- Image Descriptions for Blind and Low-Vision Users: Explore the evolving needs of blind and low-vision consumers with recommendations for improving AI-powered experiences.
- Disability Representation in AI-Generated Images: Learn about common tropes identified by focus groups of participants with different disabilities, guidelines for respectful representation, and areas for future research.
Presenter
Dr. Cynthia Bennett
Senior Research Scientist, Google
Dr. Cynthia (Cindy) Bennett is a senior research scientist in Google's Responsible AI organization. Her research focuses on making technology-mediated experiences, such as those leveraging generative AI (e.g., ChatGPT, Gemini, Midjourney), accessible to and representative of people with disabilities while mitigating harmful applications. Previously, Bennett was a researcher at Apple and a postdoctoral research fellow at Carnegie Mellon University, after receiving her Ph.D. in Human Centered Design and Engineering from the University of Washington. Her research has been recognized with awards from top scientific publication venues and funding agencies in her field. She is also a disabled woman scholar committed to raising the participation of people with disabilities in the tech industry.