AFB Talent Lab now accepting applications!

The American Foundation for the Blind is proud to announce that applications for the AFB Talent Lab apprenticeship and internship programs are open now through March 31!

The demand for inclusive digital products is on the rise. However, tech designers, engineers, and project managers simply aren’t being trained in accessibility skills. The AFB Talent Lab aims to meet the accessibility needs of the tech industry – and millions of people living with disabilities – through a unique combination of hands-on training and mentorship, created and developed by our own digital inclusion experts.

These paid experiences include:

  • Foundational coursework in digital inclusion delivered through interactive modules.
  • Mentorship and job shadowing with experienced digital inclusion professionals.
  • Authentic, hands-on client projects and project testing.
  • Direct client interaction for accessibility reporting and remediation.
  • Certification in project management (apprenticeship only).

Both sets of participants will begin in Summer 2022. The apprenticeship is open to any assistive technology user interested in pursuing a career as a project manager specializing in accessibility, and the internship is open to any currently enrolled student majoring in computer science or design who is interested in learning about accessibility and digital inclusion. To be eligible, participants must be US citizens or permanent residents.

We welcome you to learn more about the program and to submit an application by visiting www.afb.org/talentlab or contacting us at inclusivefuture@afb.org.

NYU team releases open-source dataset from Woven Planet to help visually impaired pedestrians navigate cities

A new dataset released by a New York University Tandon School of Engineering research team and Woven Planet Holdings, Inc., a Toyota subsidiary, promises to help visually impaired pedestrians and autonomous vehicles (AVs) alike better navigate complex urban settings. 

Woven Planet partnered with NYU Tandon’s Visualization, Imaging and Data Analytics Research Center (VIDA) to compile a dataset of more than 200,000 outdoor images over the course of a year. The dataset is being used to test a range of visual place recognition (VPR) technologies that can improve the accuracy of personal and automotive navigation applications and promote independence for a variety of users. 
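For readers curious about what visual place recognition involves, the sketch below illustrates the core idea in toy form: each reference image of a known place is reduced to a descriptor vector, and a query image is matched to the most similar reference. The color-histogram descriptor, the example street corners, and the randomly generated stand-in images are illustrative assumptions only; they are not drawn from the NYU-VPR dataset, and real VPR systems use learned descriptors that are far more robust.

import numpy as np

def describe(image, bins=8):
    # Reduce an H x W x 3 uint8 image to a normalized color-histogram vector.
    hists = [np.histogram(image[..., channel], bins=bins, range=(0, 255))[0]
             for channel in range(3)]
    vec = np.concatenate(hists).astype(float)
    return vec / (np.linalg.norm(vec) + 1e-9)

def best_match(query_image, reference_images):
    # Return the known place whose descriptor is most similar to the query.
    q = describe(query_image)
    scores = {place: float(describe(img) @ q)
              for place, img in reference_images.items()}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in "images" of three hypothetical places, each with a different
    # overall brightness; real references would be street-level photographs.
    reference_images = {
        "6th Ave & W 4th St": rng.integers(0, 120, (64, 64, 3), dtype=np.uint8),
        "Broadway & Canal St": rng.integers(80, 200, (64, 64, 3), dtype=np.uint8),
        "Jay St & Myrtle Ave": rng.integers(160, 256, (64, 64, 3), dtype=np.uint8),
    }
    # A query that is a slightly brightened view of the first place.
    query = np.clip(
        reference_images["6th Ave & W 4th St"].astype(int) + 10, 0, 255
    ).astype(np.uint8)
    print("Predicted place:", best_match(query, reference_images))

In practice the descriptors come from learned models and the reference set is far larger, but the retrieval step follows the same principle.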

Developed by a team from the Automation and Intelligence for Civil Engineering (AI4CE) lab, led by Chen Feng, assistant professor of civil and urban engineering, mechanical and aerospace engineering, and computer science and engineering, the dataset includes side-view images of sidewalks and storefronts in addition to forward-facing imagery, allowing researchers to test more applications than traditional single-perspective sources permit. For example, side views support navigation for people with impaired vision, who navigate in 360 degrees across busy city sidewalks. The data could also help improve delivery robots, which must move forward and backward as well as side to side to reach homes and businesses.

“This is the first work to systematically analyze some of the biggest challenges of visual place recognition,” said Dr. Feng. “We believe we are the first to make such data available free for education and research purposes, which is critical to diagnose and solve pressing problems with visual place recognition. Vast datasets like this one from Woven Planet can provide critical variety and diversity to inform data-driven systems and speed machine learning at scale.” 

Researchers at NYU led by John-Ross Rizzo, professor of biomedical engineering and of mechanical and aerospace engineering at NYU Tandon and Vice Chair of Innovation for Rehabilitation Medicine at the NYU Grossman School of Medicine, are already using this dataset to develop technologies that will help visually impaired individuals better navigate complex urban environments.

“As a visually impaired person myself, I’ve long been frustrated that our population hasn’t seen more innovation in the navigation space; sure, solutions exist, but apply them in our urban canyons and accuracy, precision and reliability are all compromised,” said Dr. Rizzo. “Image-based wearable navigation assistance is set to make significant breakthroughs for everyone from the blind to the cognitively impaired to the elderly, helping with safe navigation in congested, complicated and often dangerous outdoor environments and also in unfamiliar indoor environments. Ultimately, this project has the potential to redefine accessibility, helping millions of people expand their horizons and better interact with the world.”

This project, sponsored by the C2SMART Center (Connected Cities for Smart Mobility Toward Accessible and Resilient Transportation), a USDOT Tier 1 University Transportation Center led by NYU Tandon, uses images originally provided by CARMERA Inc., an automotive mapping company and former participant in the NYU Tandon Future Labs that was acquired by Woven Planet in 2021.

“NYU has long been one of our core academic partners, in no small part because of our shared commitment to delivering social impact through mobility,” said Ro Gupta, senior director at Woven Planet and head of the company’s Automated Mapping Platform (AMP) North America team. “It’s gratifying to see our data, which is core to our commercial mapping products, being used to help researchers around the world develop tools that will ultimately make mobility more accessible and equitable for all.” 

In addition to providing data from multiple viewpoints, this dataset offers several other distinctive features: 

  • Captures long-term changes in the same urban area over the course of a year, so researchers can improve VPR under varied conditions such as snow cover and heavy foliage.
  • Anonymizes images to protect the privacy of pedestrians and cars. The anonymized images also provide VPR algorithms with static, environment-only information.

Besides Feng and Rizzo, the NYU-VPR team includes Claudio Silva, Institute Professor in the Department of Computer Science and Engineering, who — with VIDA, which he directs — collaborated with CARMERA on the initial image-analysis research. 

Full details of the project are outlined in the paper, “NYU-VPR: Long-Term Visual Place Recognition Benchmark with View Direction and Data Anonymization Influences,” which was published in the proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) and can be viewed here.

This article is made possible in part by generous funding from the James H. and Alice Teubert Charitable Trust, Huntington, West Virginia.
