Bill Holton
Haptic feedback on smartphones and other handhelds can dramatically enhance the usefulness of navigational apps and aids for the vision impaired. Your phone may buzz, for example, when you reach a designated corner, and allow you to determine in which direction your next waypoint or ultimate destination lies. Innovations never cease, however, and below I will describe two new ways haptic feedback may soon be coming to a mobility device near you.
Smart Paint
Princeton, NJ-based Intelligent Material Solutions, Inc. designs and manufactures advanced materials and sensors. One of its newest products is a tunable “Smart Paint.” Here’s how it works:
Various rare earth elements such as yttrium and ytterbium are combined in different proportions, depending on how the smart pigment will be tuned. The mixture is then heated in a high-temperature furnace until it forms crystals that arrange themselves in predictable shapes and sizes, from 3 nm to 50,000 nm. The crystals are inert and can be added to standard paint along with other, traditional pigments.
Pigments usually reflect color: green paint reflects light in the green spectrum, blue reflects blue. “Depending on which elements we combine in which proportions, our pigments can be tuned to reflect near-infrared light in an additional frequency,” says IMS Chief Technology Officer Josh Collins. “Smart Paint used for highway lane markers might appear yellow, but shine a near-infrared light at it and it will reflect not only the frequency of the light but a second, tunable frequency that can be easily detected with sensors.”
Here, allow me to offer a somewhat less-than-elegant analogy. Consider a web page where the images and graphs have been made accessible with alt text. A sighted user experiences that web page one way, but with a special sensor (in this case, a screen reader) additional information is made available. Now, imagine a bucket of yellow highway lane paint with its own alt-tag.
“Near-infrared light has much greater penetration, so a sensor mounted on a vehicle’s underside could detect lane markers even in heavy fog or other foul weather,” says Collins. “We were in discussions with the Department of Transportation about using Smart Paint in this very fashion when one of the government reps observed that Smart Paint might also be a useful navigational aid for the vision impaired.”
The company conducted some preliminary R&D, then approached the Ohio State School for the Blind and the Tampa Lighthouse for the Blind for testing. “We used Smart Paint to mark crosswalks and produced canes with mechanisms to emit and detect the near-infrared light and offer haptic feedback as the cane touches the lane marker,” says Collins.
“It’s too easy to veer off course crossing a busy intersection,” notes Jonathan Fink, director of Portland State University’s Digital City Testbed Center. Fink and his associates also conducted successful tests using several Smart-Painted crosswalks, bus stops, and even travel paths to various campus locales. “The detectors worked even in pouring rain, though performance was better when we marked the routes with plastic tape embedded with the smart pigments. We also quickly learned that the detectors need to be attachable instead of built into canes; people want to use equipment they’re already comfortable with.”
The sensors weigh approximately 40 grams (less than 2 ounces) and consist of a haptic engine that attaches near the cane handle and a ground-facing sensor, mounted a few inches above the tip, that connects via wire or Bluetooth. “Basically, it’s a few LEDs and a haptic engine,” says Collins, who predicts his company will be able to produce the units in quantity for about $50 each.
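To make that concrete, here is a minimal sketch of the kind of detection loop such a unit might run. It is an illustration only: the tuned wavelength band, the threshold, and the read_spectrum and buzz hooks are all hypothetical stand-ins, since IMS has not published its firmware.

    import numpy as np

    # Hypothetical tuned band for the pigment's secondary reflection (nanometers);
    # real values would depend on how IMS tunes a given pigment batch.
    TUNED_BAND_NM = (950.0, 980.0)
    THRESHOLD = 0.3  # assumed minimum in-band reflectance to count as a marker

    def marker_detected(wavelengths_nm, reflectance):
        """True if the reflected spectrum shows the pigment's extra
        near-infrared peak inside the tuned band."""
        in_band = (wavelengths_nm >= TUNED_BAND_NM[0]) & (wavelengths_nm <= TUNED_BAND_NM[1])
        return bool(in_band.any()) and float(reflectance[in_band].max()) >= THRESHOLD

    def cane_loop(read_spectrum, buzz):
        """Poll the ground-facing sensor; fire the handle's haptic engine
        whenever the cane tip crosses a Smart Paint marker."""
        while True:
            wavelengths_nm, reflectance = read_spectrum()  # arrays from the sensor
            if marker_detected(np.asarray(wavelengths_nm), np.asarray(reflectance)):
                buzz()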
And though the technology is still in its infancy, Collins predicts a wealth of future possibilities. Already the company has partnered with a Southern California university to produce a smartphone app that will combine sensor data with GPS information and Open Map overlays. “We are also hoping to use strips of smart plastic tape to create bar codes to mark a point of interest, a particular address, or the names of stores in a shopping mall.”
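That bar-code scheme is easy to picture in code. Purely as an illustration (none of this comes from IMS), here is one way a cane sweep across a row of tape strips might decode into a point-of-interest number, reading wide strips as 1s and narrow strips as 0s:

    def decode_strips(strip_widths_ms, threshold_ms=50.0):
        """Convert the time the sensor spent over each strip into bits:
        wide strips are 1, narrow strips are 0. Widths vary with sweep
        speed, so a real decoder would normalize against a known start
        pattern; this sketch assumes a steady sweep."""
        bits = [1 if width > threshold_ms else 0 for width in strip_widths_ms]
        return int("".join(str(b) for b in bits), 2)

    # Wide, narrow, narrow, wide, wide -> binary 10011 -> point of interest 19
    poi_id = decode_strips([80.0, 30.0, 25.0, 90.0, 85.0])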
Collins even suggests the possibility of handheld infrared scanners (the latest iPhone?) that could use near-infrared light to scan otherwise invisible QR codes on street signs and business doorways.
I, for one, can’t wait for an airport with “alt-tagged” gate directions, food kiosk menus, and entire bathroom walls invisibly labeled “Sinks” and “Stalls.”
The Haptic Sleeve
Nearly a decade ago the US military began testing “haptic belts” to aid soldiers in nighttime maneuvers. Soldiers’ hands are usually too busy holding rifles and other equipment to use standard GPS devices, and the light of the devices’ displays could potentially reveal their location to the enemy. The belts were connected to a GPS device and worn strapped around the soldiers’ torsos. A ring of haptic motors would vibrate in one of eight compass directions to guide the soldiers and keep them on track, even in pitch dark. I was unable to learn whether these belts were ever deployed by the US military, but the concepts were familiar to engineers Manuel Zahn and Armaghan Ahmad at the Center for Digital Technology and Management, Technical University of Munich.
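The belt’s guidance principle is simple enough to state in a few lines. A minimal sketch, under the assumption of eight evenly spaced motors with motor 0 at the wearer’s front: the motor closest to the bearing of the next waypoint vibrates.

    def motor_for_bearing(bearing_deg):
        """Map a waypoint bearing relative to the wearer's heading
        (0 = straight ahead, increasing clockwise) to one of 8 motors
        spaced 45 degrees apart around the torso."""
        return round(bearing_deg / 45.0) % 8

    motor_for_bearing(0)    # 0: front motor, keep walking straight
    motor_for_bearing(90)   # 2: right-side motor, turn right
    motor_for_bearing(350)  # 0: nearly dead ahead, front motor again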
“We were working with 3D cameras, trying to find useful applications for the technology,” says Zahn. “It occurred to us that instead of using haptics to point in a single direction, we could use 3D imaging to create a sort of haptic map of what’s ahead, and that the blind might be able to use these maps to help with orientation and navigation.” A pair of glasses was fitted out with dual infrared cameras that use an invisible infrared flash to illuminate what’s ahead. The reflections from the two slightly separated vantages are then combined into a stereoscopic 3D rendering, much like the Viewmaster slides of yore, that includes not only the image but also a map of distance points. The combined 3D rendering is sent to a tiny computer, where it is translated into a haptic map array.
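The depth step is standard stereo vision: nearer surfaces shift more between the two views, and that shift (disparity) converts to distance. Here is a minimal sketch using OpenCV’s block matcher; the focal length and camera spacing below are placeholder numbers, not the team’s actual calibration.

    import cv2
    import numpy as np

    FOCAL_PX = 700.0    # focal length in pixels (placeholder)
    BASELINE_M = 0.06   # spacing between the two cameras in meters (placeholder)

    def depth_map(left_gray, right_gray):
        """Estimate per-pixel depth in meters from a rectified stereo pair."""
        matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        # StereoBM returns disparity in fixed point, scaled by 16
        disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
        disparity[disparity <= 0] = np.nan        # unmatched pixels carry no depth
        return FOCAL_PX * BASELINE_M / disparity  # depth = focal * baseline / disparity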
Instead of using the torso to transmit the haptic data, Zahn and Ahmad designed a flexible sleeve that fits over three quarters of the user’s forearm. “The skin there is quite sensitive,” says Zahn. The built-in five-by-five array of haptic pads vibrates in various patterns and intensities, creating a vibrating map of what the camera sees. “I was startled the first time I tried on the goggles and sleeve together,” Zahn recalls. “I turned my head, and as I panned my office wall I could literally feel the absence as the camera moved across the open doorway.”
A narrow hallway generates strong vibrations on each side of the sleeve, and the intensity of these vibrations decreases toward the center of the array. If the user walks toward an obstacle, the vibration intensity of the respective motors gradually increases. “And since the arm is only used to display the information of the camera, the user can walk and perform tasks requiring two hands, or use a cane and still have a free hand for opening doors, using stair rails, and the like,” says Zahn.
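The mapping from depth to vibration can be pictured the same way. A sketch of the idea (mine, not the team’s published code): pool the depth map into a five-by-five grid, find the nearest point in each cell, and drive that cell’s motor harder the closer the point is.

    import numpy as np

    def haptic_array(depth, max_range_m=4.0):
        """Reduce a depth map (meters) to a 5x5 grid of vibration
        intensities in [0, 1]. Each cell reports its nearest obstacle:
        closer means stronger, and anything beyond max_range_m, or with
        no valid reading, stays silent."""
        h, w = depth.shape
        grid = np.zeros((5, 5))
        for i in range(5):
            for j in range(5):
                cell = depth[i * h // 5:(i + 1) * h // 5,
                             j * w // 5:(j + 1) * w // 5]
                nearest = np.inf if np.isnan(cell).all() else np.nanmin(cell)
                grid[i, j] = max(0.0, 1.0 - nearest / max_range_m)
        return grid  # an open doorway shows up as a patch of silent cells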
Because the system relies on infrared light, it can be used in complete darkness. Indeed, during testing, users adapted quickly to the device and grew more proficient with practice, easily navigating a pitch-dark maze route around and between tables, chairs, and other obstacles. “With minimal practice with the equipment, their navigation speed increased significantly,” notes Zahn.
Zahn and Ahmad published their research on the arXiv.org e-Print archive.
As with all too much other research, the team has moved on to other projects and has no plans for further development. However, states Zahn, “The computer algorithms are fairly simple, and the parts list is in the paper for anyone to use and develop.” He even offers a few ideas for refinements. “At some point a second sleeve could be added. This would likely steepen the learning curve, but it would also supply a lot more 3D information about the nearby environment. And since the cameras also provide full imagery, it should also be possible to link the results with an image recognition database to more accurately identify what’s around.”
As regular readers of AccessWorld already know, one company is already doing just that. Supersense from Mediate uses the iPhone’s LiDAR and image recognition to announce chairs, tables, people standing in line, and other barriers to travel. Perhaps the developers can give this technology a helping hand… or rather, a helping arm.
This article is made possible in part by generous funding from the James H. and Alice Teubert Charitable Trust, Huntington, West Virginia.