Full Issue: AccessWorld September 2018

Letters to the Editor

Dear AccessWorld Editor,

This message is in reference to Aaron Preece's August 2018 article, Microsoft Soundscape Adds a New Dimension to Accessible Wayfinding.

My city is well mapped, but real-time intersection notifications occur at least a quarter-mile past the actual intersection when traveling in a vehicle or using public transportation. Therefore, the app is useless for determining bus stop locations while at road speeds.

Second, while the stereo (3D) positioning of speech is novel, it can make speech harder to understand when traveling, especially with bone-conduction headphones. There are other navigation applications that are much better and provide just as much information, including the ability to use a beacon to find a destination.

Steve T.

Dear AccessWorld Editor,

This message is in reference to Deborah Kendrick's February 2017 article, When Hearing Loss Causes More Vision Loss.

Yes. As usual, Deborah nailed it. I too have hearing loss, and I use two hearing aids. Deborah is saying much that is true for a growing number of us. And there are listservs full of interesting people who discuss hearing loss combined with vision loss. The discussions can be very interesting and helpful, just as this article is.

Mike Cole

Dear AccessWorld Editor,

This message is in reference to Bill Holton's August 2018 article, Letters, We Get Letters: Receiving Digital Scans of Your Mail Envelopes Using Informed Delivery.

Thanks for sharing this resource. Unfortunately, I did not find the scanned images of my mail to be accessible using the JAWS OCR feature on my Windows PC as suggested. But my wife enjoys using it, and it has already come in handy for monitoring mail while away from home.

Dennis Walsh

Dear AccessWorld Editor,

This message is in reference to Deborah Kendrick's August 2018 article, Outstanding Summer Showcases for Technology: ACB and NFB Annual Conventions.

What an awesome event. Hopefully, these technologies are going to make a positive difference for the blind community. Thanks for allowing everyone not present at conventions to be aware of what's happening in the world of technology.

Owais

AccessWorld News

Right-Hear Announces US Launch of Orientation and Navigation System for the Blind and Visually Impaired

Right-Hear, a developer of advanced solutions for the blind and visually impaired, recently announced the US launch of its system, making public buildings more accessible to people with visual impairments.

Right-Hear addresses the accessibility, orientation, and navigation needs of blind and visually impaired people (as well as orientation-challenged individuals), providing them with real-time voice cues through their own smartphones. Information is provided about a user's precise indoor location, and the app can also provide directions and describe surroundings. The voice notifications are based on information received from Bluetooth beacons (sensors) located in the building. Building staff install the sensors by simply attaching them to the wall with a sticker; the sensors' locations are determined in consultation with Right-Hear.

The system includes an administrator dashboard that allows the building staff to program and control the whole system optimally, according to the needs of blind and visually impaired visitors and employees.

The Right-Hear system is suitable for any type of public building, including complex buildings (interconnected buildings) such as malls, corporations, universities, municipalities, airports, museums, supermarkets, hotels, and restaurants.

The smartphone app alerts users whenever they are near a Right-Hear-enabled building and picks up the signals from the beacons automatically. The user is also notified when a new Right-Hear-enabled accessible zone joins the Right-Hear network. The Right-Hear application on the user's smartphone can also instantly translate the text and read it aloud in the user's preferred language.

"Using Right-Hear, blind and visually impaired people have access and orientation in public buildings, giving them the ability to be independent," said Idan Meir, Right-Hear's CEO. "On the other hand, public venues and facilities will benefit from increased numbers of visitors, including blind and visually impaired people, that until now had difficulty visiting these places."

The Right-Hear system has already been deployed in hundreds of buildings and complexes, mainly in Israel.

Right-Hear is currently represented in the US by Deon Bradley in Florida (704-604-8244), and the system is distributed by Woodlake Technologies in Chicago (312-733-9800). The solution can also be ordered by e-mail.

About Right-Hear

Founded in 2015, Right-Hear develops orientation and navigation solutions for the blind and visually impaired. Its first system serves users for indoor navigation and orientation, and comprises Bluetooth beacons (sensors), a smartphone application, and an administrator's dashboard. To date, the Right-Hear system has been installed in over 600 venues, mainly in Israel, including malls, corporations, universities, municipalities, airports, museums, supermarkets, hotels, and restaurants. The Right-Hear smartphone app is available for free download on both Android and iOS, and can immediately connect with Right-Hear Bluetooth beacons. For more information, visit the Right-Hear website.

Freedom Scientific Updates Fusion and ZoomText; Access to Remote Desktop and Citrix Now Available Using Fusion

Freedom Scientific recently announced the August updates for Fusion 2018 and ZoomText 2018. For this release the spotlight is on the Fusion 2018 update, which introduces support for Citrix and Windows Remote Desktop Connection. Available as a Fusion license add-on, the new Remote Desktop Support allows users to access remote computers and applications with full magnification, screen reading, and braille support. Rounding out the Fusion August update are tracking and highlighting improvements and general performance enhancements.

The August update for ZoomText 2018 delivers "under the hood" improvements to performance and stability, interoperability with JAWS, and remote desktop support when running ZoomText from a Fusion product installation.

These free updates can be downloaded and installed on top of previous Fusion 2018 and ZoomText 2018 installations. Users can also rely on the auto-update feature, which will alert them to the update on the next restart of ZoomText or Fusion.

Older Versions of Sendero Map Data Will Be Removed From Servers

Sendero currently stores all versions of its GPS software and their associated maps on its servers. After many years, the company has arrived at the point where the server can no longer support this massive data storage, so it will be deleting all older versions of software and maps from the server. On October 1, the only software and maps available for download from the Sendero server will be the current version, Sendero GPS version 22, with 2018 maps. If you already have version 22, you do not have to do anything.

For all prior versions, please be sure to download a complete copy of your map and software files and store them locally on your computer by September 30, 2018. They will no longer be available from GoSendero or any Sendero download website after that date.

If you have any questions about this transition, please e-mail Sendero.

The IAAP 2019 Call for Webinars is Now Open

IAAP invites you to submit your topics for IAAP's 2019 Professional Development Webinar Series. The deadline for submission is Sunday, September 30, 2018.

Sessions should focus on accessibility topics as they relate to software, websites, mobile applications, hardware, assistive technology implementation, proprietary applications, content or documents, law, policy, standards or regulations, and best practices.

Presentations focusing on the topics below will receive preference during the selection of webinar proposals. However, these topics do not guarantee your selection:

  • React Accessibility
  • Angular Accessibility
  • Managing Up: making the case for accessibility to upper management
  • Building the Business Case for Accessibility
  • EPUB/EPUB Readers/Ace
  • Introductory PDF remediation
  • Artificial Intelligence for the Web
  • Topics outside the US/North America, particularly Asia

Target Audience

Topics in this series may be of interest to anyone whose job requires accessibility awareness or competence, including but not limited to accessibility professionals, policy makers, developers, designers, testers, and others involved in the creation or implementation of accessible technology, content, services, programs, and policies.

Webinars may be targeted to those new to accessibility or to those with more experience and who are looking to advance their skill level.

Webinar Format

Webinars are 90-minute sessions that include a PowerPoint or HTML presentation and a Q&A between the attendees and speakers. They can also include desktop sharing of programs or resources, videos, and audience polling.

IAAP's webinar program provides an opportunity for you to showcase your expertise and help advance the knowledge of accessibility professionals. Plus, you will receive compensation for your efforts. If you're interested in submitting a topic for the 2019 webinar series, please fill out the IAAP Call for Webinars application form by Sunday, September 30, 2018. All submissions will be reviewed by the Individual Professional Development committee and you will be notified by mid-November regarding the status of your submission.

If you prefer a paper application or have any questions regarding the submission process, please contact Kevin Hower.

Wegmans First Grocery Chain to Offer Aira Service Free to Customers with Visual Impairments

Aira recently announced that Wegmans, a grocery chain located primarily in the northeast United States, will be providing access to the Aira service free of charge to all blind and visually impaired customers. A customer would first need to download the Aira app from the iOS App Store or the Google Play Store. After registering as an Aira Guest in the app, the customer would have access to Aira while in the Wegmans store. Though free Aira service has been implemented in locations such as the Houston Airport, this is the first instance in which an entire store chain has provided Aira at all of its locations. Suman Kanuganti, the founder and CEO of Aira, had this to say: "I'm excited to have Wegmans as our first partner in the Aira Supermarket Network. Their commitment to ensuring all customers have the same great Wegmans shopping experience is inspiring, and I'm proud to call them a partner."

Linda Lovejoy, who is the community relations manager for Wegmans, said, "At Wegmans, we are committed to providing incredible customer service to all our shoppers. Our partnership with Aira helps us deliver on this commitment, giving our blind and low-vision customers access to this innovative service and the ability to navigate our stores easily and efficiently."

Aira provides assistance to people with visual impairments through both artificial intelligence and trained human professionals, called Agents, who provide guidance to the user, called an Explorer, through either a smartphone camera or a pair of camera-equipped smart glasses. To learn more about the Aira service, visit Aira on the web. AFB has also published three articles regarding the Aira service. The first two provided an introduction to the service; Part 1 can be found here and Part 2 here. The third article was a review of the Aira Horizon smart glasses and can be found here.

Wegmans is a grocery store chain in the northeast United States, with stores in New York, New Jersey, Pennsylvania, and Virginia. To learn more, you can visit Wegmans on the web.

Microsoft Soundscape Updated to Version 1.8, Custom Markers Feature Introduced in Latest Version

Microsoft's Soundscape app for the iOS platform has recently been updated to version 1.8. This version brings several fixes and improvements, but the most significant is the inclusion of the Marker feature, which allows users to create their own custom landmarks. It is possible to mark existing points of interest, addresses, and your current location. The previous "Reference Points" menu option is now "Manage Markers," which will allow you to place a marker. You can also place a marker on your current location from the new "Mark Current Location" button on the home screen, though this option is available in the Manage Markers menu as well.

The first time you place a marker, you will be taken through an interactive tutorial that will allow you to place a marker and set a beacon on that marker. If you choose not to take the tutorial, it is available in the "Help and Tutorials" menu along with help files regarding markers.

When you place a marker, Soundscape will attempt to help you name it. For example, if you place a marker on your current location, Soundscape will create a name based on your nearest landmarks or addresses; the default name may be something such as "100 feet from Bob's Café" or "nearby 111 Main Street". You can always clear this information and change it to whatever you want; this is even true if you are marking a point of interest.

Once a marker is set, it acts just like any other point of interest. This means that it will be called out as you travel and announced when you request information from Soundscape, and you will have the ability to place a beacon on the location. There is also a "Nearby Markers" button at the bottom of the Home screen, which will call out nearby markers only. Note that after updating the app, I found that markers were not being announced during standard automatic callouts; disabling all callouts and re-enabling them from the menu fixed this issue.

This update also allows users to operate Soundscape using the media controls on a set of headphones; a list of commands is available in the help menu. In addition to other general bug fixes and improvements, Soundscape has been updated so that it provides better location information when you are traveling in a vehicle.

If you are new to the Soundscape app, you can learn more from the Soundscape page at Microsoft, find it on the App Store, and read the full review in the August 2018 issue of AccessWorld.

The Accessible Home: Housekeeping from Your Smartphone

Here at AccessWorld we receive a steady stream of reader feedback asking us to describe and review more mainstream home gadgets and appliances that are already, or can be made, accessible to the blind. Most of these devices are app-enabled, relying on a screen reader accessible mobile app to make them work (an example is the Instant Pot, which I reviewed in the August 2016 issue of AccessWorld).

iRobot, makers of the Roomba robotic vacuum cleaner, have offered app-enabled units for a few years, but I already have a Roomba, it works fine, and I have always been a bit leery of replacing working items with newer models just to get access to features I'm not sure are accessible. But recently, during Amazon Prime Day when the company offered a Roomba Model 671 that was Wi-Fi and Alexa enabled for $100 off, I figured I'd take the plunge.

Let me say right here at the beginning that I was extremely pleased with my purchase. The Roomba 671 is nearly completely accessible using the iPhone app. (I did not test the Android app, nor Google Assistant.)

This is actually my third Roomba. The first I bought with a lifetime guarantee from Hammacher Schlemmer and returned a year after purchase when it stopped working. The second, purchased several years later, has been a faithful servant for more than five years. However, I have always had trouble with device accessibility, and these issues fall into three categories.

  • The inability to accessibly set the time or a daily cleaning schedule.
  • The inability to locate the device when it gets stuck or runs out of power.
  • The inability to accessibly detect whether the device is properly positioned on the charger when I need to place it there manually.

The first of these was an annoyance. Sure, I could get sighted help to set the clock and then set a scheduled time for Roomba to run. However, these settings only lasted until the first time the power ran out (see the next paragraph), at which point the clock would reset to 12:00 and what was a scheduled time of 3 PM became 7 PM. My only choices were to let it run while I tried to read or watch TV, or to chase Roomba down and stop it until bedtime or the following morning, and every morning after that, until the power ran out again and I played another game of Roomba roulette.

The second issue was a real problem. Nearly every time it ran, Roomba would get wedged against the edge of a couch, or snagged on a too-fluffy bathroom rug, and stop dead in its tracks. Or it would fail to find its way back to its power dock. The unit uses an infrared signal to find its way, and if it stops in a different room without a line of sight to the dock, it stays put. In any case, I spent a lot of time wandering the house searching all the old familiar places. More often than not I was unable to find it, and it might be days before I stumbled over it (out of power, of course; see the paragraph above).

When Roomba returns to the base station and positions itself to recharge, it plays a trill. However, on those many occasions when I had to place the unit on the charger manually, there was no way to detect whether it was in the proper position. Colored signal lights indicate proper positioning, so if you can't see those lights, you don't have any feedback. I often would dock the unit improperly and wind up a day later with the unit still sounding the "Please charge Roomba" message when I pressed the Power button.

Introducing Roomba 671

If you've never gotten your hands on a Roomba, it's a large, thick plastic disk, approximately 13 inches in diameter. The edge, approximately four inches thick, holds the bumper guards and the removable refuse bin. On the bottom are two independently turning wheels, a pivot caster, and a spinning side brush, sort of like plastic whiskers that brush dirt from baseboards and such and move it into position to be sucked up by Roomba's brushes and vacuum.

The unit's brush cage is also visible and accessible from the bottom. All Roombas use a pair of roller-style brushes. The first unit I purchased included two plastic bristle brushes. The unit I currently own replaced the brushes with rubber rollers. The Model 671 comes with one of each. Replacements can be purchased.

The roller brushes are covered by an easily removable plastic brush guard. They can be lifted right out of the unit for cleaning and to remove large debris, such as lengths of string, that can clog the machine. I do like this ability to easily access the brushes. The upright vacuum I currently own and barely use these days requires removing several tiny screws and a high-tension rubber belt to access and clean the brushes.

The top of the unit includes a sort of thick button (a cliff sensor) that stops Roomba from falling down stairs, and a pressure-touch control panel, which sports a fairly large Power button at the very center. Previous models I have owned include a row of buttons: Spot, Home, Clock, and Schedule. The Model 671 moves these controls above and below the Power button, with Spot directly above the Power button, between it and the protruding cliff sensor, and Home just below.

The setup also includes a power charger, which sits on the floor and into which Roomba is supposed to self-dock at the end of a session. As mentioned above, Roomba does not always accomplish this. And if you need to manually dock Roomba, it can be difficult to determine whether it's properly positioned without seeing the flashing signal lights, at least in non-app-enabled models.

Other units I have purchased also came with a pair of virtual walls, tiny towers that you can place in a doorway or such to prevent Roomba from entering or leaving a room or other area. These did not come with the model 671, though they can be purchased from the company's Accessories Store.

Replacement filters and edge brushes also were not provided. Nor was the remote control that comes with some models. Note: iRobot also sells an optional Roomba Halo that stops Roomba from getting too close and displacing dog dishes.

Connecting Roomba 671

There is an Android app available from Google Play, but I did not test it for this review. Instead I removed the battery's protective tab and charged Roomba overnight to condition the battery, as the instructions advise, then downloaded the iRobot app from the iOS App Store and began the setup. You must create an iRobot account, using your email and a password. Next, you will choose Set up a new robot and select Roomba. The other choice is the Braava Jet, a device that wet/dry mops tile and linoleum floors.

After giving your new device a name, you must choose the Wi-Fi network you will use to connect Roomba. Like many other Wi-Fi devices I have seen lately, Roomba will not use a 5-gigahertz network. Most modern routers broadcast a combined 2.4/5-gigahertz band, and to pair Roomba you will need to divide the signal into two separate bands. After you have paired Roomba you can recombine them. Your ISP can help you do this. Alternatively, you can call iRobot tech support and they will walk you through the process. After trying out a completely inaccessible Wi-Fi slow cooker, I never bothered to recombine the dual band, so I already had a 2.4-gigahertz network to pair with. However, I did have another issue.

Once you select your network and enter the password, the app instructs you to press and hold the Spot and Home buttons for two seconds. These are the buttons located above and below the Power button, assuming you have Roomba in its charging cradle.

After releasing the buttons you will receive a tone informing you that Roomba has located your signal. You must then enter your network password a second time, and here is where I had trouble.

Using VoiceOver on my iPhone 6, I could not get Roomba to accept my password. I even had sighted help type it in with VoiceOver running, and it failed again. However, with VoiceOver turned off, the very same password worked fine. I have experienced this situation on other devices. Perhaps it is a VoiceOver bug?

In any case, once this single stumbling block was passed, the app became 99-percent accessible. The one exception is a battery gauge that appears only as an image.

Starting Roomba was as simple as selecting its name and then double tapping "Clean." I could also set a daily schedule by selecting "on" for the days I wish Roomba to run, then use the slider to choose the times I wish it to start. If you enable notifications, your iPhone will alert you when Roomba has started, when it has finished, and most importantly, when there is an issue that has caused Roomba to stop.

OK, so Roomba has stopped. So where is it? Simply press the "More" button, and you will find a "Locate" control. Activating this control causes Roomba to announce its location with a pair of beeps, issued twice. You can repeat this, carrying your smartphone, until you find Roomba. If the unit is docked, the app will instead announce, "(Your device name) is recharging on its home base."

There is no "Spot" option on the app. This must be activated manually, but this is fine, since if you plan to spot clean you are probably already "on the spot."

You can end a cleaning job using the app, but Roomba may not be able to locate the dock if you stop the unit when it's in a different room or area of the house without line of sight to the dock. If you must manually dock Roomba, simply press the "Locate" button, which will tell you if you have successfully docked the unit.

Roomba and Alexa

If you have an Amazon Echo device you can use your smartphone app or the Web to install the iRobot skill. After entering your account and password you can then use commands such as:

"Alexa, tell (ask) Roomba to clean (stop, go home)." You can also ask, "Alexa, when will my room be clean (set a schedule, or locate Roomba)?" This last will cause Alexa to announce, "Roomba (your device name) is signaling its location. Now go find it." Unfortunately, when done this way, Roomba does not offer a constant signal. It only blurps twice, so it can be difficult to find if it's not in the immediate area. Happily, I own an Amazon Tap, which I can carry around with me while I play hide and seek with Roomba. You can also use your smartphone app as described above.

Further Information

The Roomba 600 Series Quick Start Card and Owners Guide can be obtained in accessible PDF format from the company's web store.

Roombas can also be found at Amazon and many other retailers.

There is a wide variety of models, including various 600, 800, and 900 series machines. The higher-numbered models usually come with more accessories and up to five times more powerful suction. Some higher-numbered models also come with a "recharge and resume" feature. I was told by an iRobot representative that any model that shows as "app-enabled" will also work with Alexa and Google Assistant.

This article is made possible in part by generous funding from the James H. and Alice Teubert Charitable Trust, Huntington, West Virginia.

Comment on this article.

A Review of The Secret Sauce of Savvy Search, Jonathan Mosen's Google Audio Course

If 25 people were asked the question, "What do you do on the Internet?" the answers would vary widely. Some check sports scores, others read the latest news headlines, still others spend time on social media, and others mostly email friends and family. One thing that would probably show up on everyone's list, however, would be searching for information.

I can remember a time when researching a topic on the Net yielded few results, with only a couple of paragraphs of information devoted to the topic of interest. Today, a person can spend hours reading about everything from a medical condition to what to expect in an upcoming season of a favorite TV show.

Depending on the topic of interest, it is possible to find specialized search engines that roam the Web looking for information pertaining to a given subject. With all the ways there are to search the Internet for knowledge, one site stands out for its sheer popularity: Google.

But what can you really do with Google? Is it just a place you go to type a quick search term and get whisked off to another part of the Web to view the results of your query?

When I learned Jonathan Mosen had released a two-hour audio course called The Secret Sauce of Savvy Search that would help unlock the potential of Google, I was curious. After all, I knew Google was more than a simple webpage containing an edit box and a search button, but I had barely explored any of the site's other features. I decided it was time to get the course to see what I had been missing, and what I could learn.

Purchasing The Secret Sauce of Savvy Search Course

Available from the Mosen Consulting Store, The Secret Sauce of Savvy Search comes in two packages. You can download one large MP3 file containing the entire course, or a series of short MP3 files that divide the course into sections for easy reference. I chose to download the large MP3 file, though I could have downloaded both packages for the single price of $35 if I had preferred. I found the process of purchasing and downloading the course to be hassle-free, and I was enjoying the course in no time after I made my purchase.

Beyond Google Search

The course begins by visiting the main Google page. Using JAWS 2018 and Google's own Chrome browser, Mosen takes us on a tour of Google's site and demonstrates some features many may not have explored. I have always known it was possible to set up Google Alerts, but I never actually investigated what I could do with them. As it turns out, there are quite a few possibilities. Should I choose to, I could have Google alert me whenever my name is mentioned on the Web, and I could set up specific criteria, such as when my name is mentioned in conjunction with AccessWorld.

For as long as I've used Google, I've been content with the first 10 results that show up when I do a search for any topic. Until Mosen pointed it out, it hadn't really occurred to me that 10 search results is a really low number considering the power of today's computers and the speed of the Internet. It makes just as much sense to increase that number to 100.

I'd also overlooked Google's advanced search options. It's possible to be quite specific regarding what words you want to search for, as well as which words you wish to omit from your search. Mosen gives some examples, and demonstrates how a list of search results can go from almost impossible to sift through to a list of relevant results that are exactly what you are looking for.

Once your browser is set up to use Google as its default search engine, all you need to do is type your search term into the address bar. Searching from the address bar of your browser can be quite powerful, and Mosen gives a lot of examples of various searches that can be performed and provides practical examples of why and how you would go about conducting those searches.

Most of us are familiar with using words such as "AND" and "OR" when doing Google searches, but I didn't realize that those two words need to be in all caps before Google will use them to change the parameters of a search. Another common technique is to include certain words in quotes so that Google knows they should be grouped together. A search for braille watch on the Web will yield results for everything having to do with braille and watches. Searching for "braille watch" in quotes will narrow the scope of your search to results having to do with watches designed specifically for blind people.

It is possible to set the scope of your Google search to a certain website, such as CNN.

Did you ever consider using Google as a calculator? I certainly didn't until Mosen pointed out how easy it is.
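To illustrate a few of these techniques, here are some sample queries of my own; they are not drawn from the course itself:

  • Searching for "braille watch" (with the quotation marks) returns only pages containing that exact phrase.
  • Searching for braille watch site:cnn.com restricts the results to pages on the CNN website.
  • Typing 38*12 or 15% of 89.99 into the search box returns the calculated answer directly, with no calculator app required.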

Do you need to look up the status of a flight? No problem. Google has you covered.

The Bottom Line

I say it every time I write a review of one of Mosen's courses, but his engaging style and thorough knowledge of the subject in question keep me coming back for more of his work. The Secret Sauce of Savvy Search is no exception.

When I began listening to The Secret Sauce of Savvy Search in preparation for my product review for AccessWorld, I hoped to learn how to use Google in a more powerful way, to get search results that truly were relevant to what I was looking for. Mosen's course met my expectations, and I am certain that I will refer back to it many times in the future to gain a better understanding of key concepts of particular importance to me.

One mistake I made was to download only the large MP3 version of the course. Since the download link for the course expires after a certain time, go ahead and grab both versions. I plan to go back and review certain parts of the course, and having the separate MP3 files would have made things easier. I'm not sure why this particular course is not available in DAISY format; perhaps that option will be offered in the future.

Some might ask why a quick reference card couldn't be inexpensively produced to show the proper syntax for Google searches, such as typing "define: intransigent" (without the quotes) to get a definition of that word. While that would certainly be possible, Mosen's course provides practical examples of how to use the various search capabilities of Google, as well as suggesting ways that searches can be combined to provide even more relevant results.

For $35, The Secret Sauce of Savvy Search by Jonathan Mosen is an audio course that is well worth the price. This would make a great gift idea for the student or recent graduate in your life. What a cool way to get the edge on fellow classmates or coworkers. There is no way to retain all the information packed into this two-hour reference after just one listen, so you will find yourself coming back for more nuggets of insight from Mosen.

Be sure to visit Jonathan Mosen's website to learn more about what he has to offer.

Product Information

Product: The Secret Sauce of Savvy Search audio course
Author: Jonathan Mosen
Price: $35

This article is made possible in part by generous funding from the James H. and Alice Teubert Charitable Trust, Huntington, West Virginia.

Comment on this article.

Envision AI and Seeing AI: Two Multi-Purpose Recognition Apps

Not all that long ago, separate apps were needed for text recognition, currency identification, bar code reading, and color identification. Seeing AI from Microsoft and Envision AI from Envision Technologies each offer a multi-purpose app that claims to perform many of these functions, and more. Both apps are available in many countries throughout the world.

For this article, an iPhone X was used to evaluate both apps; each item was scanned three times to ensure consistency.

Microsoft Seeing AI

Once the Seeing AI app is installed, a notification will ask you to allow Seeing AI to use your phone's camera. A brief tutorial appears on the screen after permission is granted.

Seeing AI's main screen has four options: Menu, Quick Help, Pause Announcements, and Channel. The best way to navigate the home screen is to flick left or right with one finger.

The Menu Option

The first choice in the Menu is Browse Photos. Selecting this option will cause Seeing AI to ask if it can access your photos. If access is allowed, your photos will be displayed.

The next Menu choice is Face Recognition. In order for Seeing AI to recognize a face, the person must first be photographed and their name put into the database. The Face Recognition menu option explains how to do this.

The third Menu choice is Help. This section is easy to navigate by headings, and each feature is clearly described.

The fourth choice under the Menu option is Feedback. Selecting this option opens an e-mail message for you to send feedback about the app to the developers.

The fifth choice in the Menu is Settings, which provides opportunities for customization, including setting the voice and speech rate, choosing which currency to recognize, setting up 3D Touch shortcuts, and choosing whether the camera automatically uses a flash when necessary.

The final Menu choice is About. This section gives the current version number along with fine print such as the terms of use and the privacy policy.

The Home Screen

The Quick Help feature is context dependent, which means the options will change based on which feature is being used. For example, if Short Text is chosen, the information in Quick Help will describe how to use this feature and give an option to play a video on how to use it.

The "Pause Announcements" button is a toggle. When the button is active, VoiceOver will still speak.

The final item on the Home screen is Channel. This may sound confusing, but Channel simply indicates which feature is currently active; think of the channels as you would different TV channels. Channels include Short Text, Document, Product, and Currency. Options such as Color and Scene have the word Preview next to them; these features are available but still in development. Swipe up or down to get to a specific channel, or, if your device supports 3D Touch, you can pick up to four channels to access with that feature.

Envision Technologies Envision AI

This app offers a free 14-day trial. After that, there is no cost for up to 10 actions per month. Subscription plans start at $4.99 per month.

The first time the app is used, there is a welcome screen and a "Log In" button. Activating this button brings you to a new page where you have the option to log in with Google, Facebook, or your e-mail address. Once you're signed in, Envision will ask for several permissions to access location, camera, and photos. A welcome email will also be sent.

Screen Layout

There are three tabs at the bottom of the main screen: Text, General, and Help. By default, when the app is first launched, the Text tab is selected. The easiest way to navigate the Home screen is with a one-finger flick.

With the Text tab selected, there are four options: Magnifier, Start Reading Instantly, Read Handwritten Text, and Read Document.

If the General tab is selected, the options are Describe Scene, Detect Colors, Scan Bar Code, and Teach Envision.

When the Help tab is selected, a screen loads with a help section and below it is the Settings section. The Help section has three options: Read Tutorials, Give Feedback, and Request a Call. When Read Tutorials is activated, the next page will require you to choose a language. This must be done every time. You can navigate the different tutorials using the headings option in the VoiceOver Rotor. The tutorials are clearly worded.

The first option in the Settings section is a toggle labeled "Offline text recognition, faster recognition option for languages based on Latin scripts." By default, this feature is off. The Speech option is next. Here is where speech parameters can be adjusted. Within the speech settings is a button labeled Automatic Language Detection. If you are only using one language, keep this button off. The next setting is for Color Detection. There are two options, Standard 30 and Descriptive 930. The second option is on by default.

Below the color settings is information about your account. There are options to Share with Friends and Write a Review on the App Store. The final option is About Envision.

Evaluation

I used the same lighting conditions for all tasks.

Short Text

In Seeing AI, hold the phone over the item and once text is found, the app will read it. The app will continue to look for text, which can be useful if looking for a room number or address.

With Envision AI, select the "Start Reading Instantly" button and hold the phone over the item. Envision will beep when it locates text. When finished, tap the "Stop Reading Instantly" button.

I had both apps read my computer screen, the label on a bottle of wine, and two store coupons. Both apps did well. Envision AI had slightly faster text recognition and did significantly better than Seeing AI with the wine bottle.

Documents

The easiest way to scan a document page with Seeing AI is to put the phone on the document with the camera in the middle. Gradually lift the phone up and the app will say which edges are not visible. Slowly lift the phone up until you hear "hold steady." This means that the document is going to be photographed in about two seconds. After the document is processed, it can be read using VoiceOver navigation or by activating the "Play" button at the bottom left of the screen. Additional options at the bottom of the screen are Stop, Increase Font Size, Decrease Font Size, and Share.

The set-up procedure is the same for Envision AI. Put the camera in the middle of the page and lift the phone up gradually. When the page is in focus, the app will say, "hold steady." It says this at the same time the shutter clicks. Once the document is processed, it can be read with VoiceOver commands or by activating the Play option at the bottom left of the screen. You'll find an Export Text option next to the Play option.

Envision AI lets you scan multiple pages in a document and then process them together. Double-tap and hold the "Read Document" button, and the app will ask if you want to read multiple pages. After the last page is photographed, activate the "Done" button to start processing.

Three different documents were used to test this feature: a letter, a financial report, and a printout of the USDA's food safety information. Seeing AI was more accurate with each of the documents, though Envision AI's results were still very good and the ability to scan multiple pages at once is very useful.

Reading Handwritten Text

Both apps indicate this feature is for reading short notes like the ones on sticky notes. They operate the same way except Seeing AI requires you to photograph the item manually while Envision AI takes the photo automatically.

Seeing AI read enough of the samples to let me get the basic idea of what was written. It even read my awful handwriting; however, Envision AI was not able to recognize handwriting on any of the test documents.

Currency Identification

At this time, only Seeing AI has a currency identifier. It read currency very quickly.

Product Identification

When using a bar code scanner, make sure the item to be scanned and the camera are positioned correctly. This is critical.

Seeing AI has a Product channel that is used for reading bar codes. Position the phone's camera so it faces the product and slowly move either the phone or the object. The app will play faster beeps as you get closer to the bar code. Once a bar code is identified, Seeing AI will speak the name of the item. If there is additional information about the item, there will be a More Info option on the screen, so reviewing the entire screen can be helpful. The same options that are on the screen when the Document channel is active appear on the More Info screen.

Envision AI's bar code scanner is in the General tab. Double tap the "Scan Bar Code" button. Follow the same instructions as above for scanning. You will hear beeps as you get closer to the bar code. Once it is identified, the app will speak the product's name. If more information is available, a More Information option will appear near the top of the screen.

A variety of test items were scanned, including cans, frozen foods, and product boxes. Both scanners worked equally well in these tests.

Color Identification

When attempting to identify colors, it is important to have good lighting. Seeing AI has a Color Preview channel that allows you to use this feature while it is in development. Hold the phone above or in front of the item to be described and the app will start speaking.

In Envision AI, position the camera above or in front of the item you want to scan. Activate the "Detect Colors" button from within the General tab. The app will start speaking.

I used six items for color identification, including a red shirt, a pair of jeans, and a light pink shirt. The apps were tested in three different lighting conditions: two indoors and one in sunlight. Unfortunately, neither app did particularly well. Each got one color correct part of the time, but not consistently.

Describe a Scene

These apps can take a photo of your surroundings and attempt to describe what is there. In Seeing AI, go to the Scene Preview channel. Once you are ready, activate the "Take Photo" button. The app will describe the surroundings. At the top of the screen is a "Close" button. Below the description are options to save and share the photo.

In Envision AI, first make sure your camera is pointed in exactly the right direction, then select the "Describe Scene" button from the General tab. Once the button is activated, the app will automatically take a photo and speak a description. There is an option to save the photo or, if you want to take another photo, you can reselect the "Describe Scene" button.

I photographed both indoors and outdoors. Both apps correctly indicated I had photographed my dog, a tree, a laptop, and my living room. However, Seeing AI correctly said I photographed my house while Envision AI kept saying, "looks like clear skies to me" and then gave me the temperature.

Face Recognition

In order for either app to recognize someone's face, the person must be photographed and added to the app's database.

In Seeing AI, go to the Menu at the top of the page and select Face Recognition. Select the "Add" button. By default, the front-facing camera is selected so the person you want to photograph can take selfies. You can switch to the back-facing camera if you prefer.

Once the photos are taken, the app can then recognize the person and describe what they are doing. Even if you do not put the person into the database, the app will give a brief description such as age, gender, and hair color.

In Envision AI, activate the "Teach Envision" button. There are options to photograph a person or an object. After taking the photos, activate the "Done" button, and an edit box will open where you can enter the person's or object's name. Later, when that person or object appears in a scene, the app will speak the name.

These apps worked equally well during these tasks, but neither worked perfectly.

Conclusion

Both apps offer excellent features. Fortunately, both Microsoft and Envision Technologies are dedicated to improving their products. Neither app is perfect, but they are very helpful, and both have excellent onboard help. Envision AI also has a Request a Call feature, which I used. I figured out the solution while I waited but still took the return call, which confirmed my solution. The call came after about an hour, which was impressive since it was Saturday afternoon. Also be aware that Envision AI is now available on Android through Google's Play Store.

Since Seeing AI is free and Envision AI offers a free trial, I recommend trying both to find out which app works best for you. Both are powerful tools to increase independence in the home, at school, at work, or on the go. With either of these apps on your phone, you will not need to ask someone else to read you a menu or tell you the price, and you will never have to trust your bartender to give you the correct change!

Product Information

Product: Seeing AI
Developer: Microsoft
Price: Free

Product: Envision AI
Developer: Envision Technologies B.V.
Price: 14-day free trial with 10 free captures per month afterward. Subscriptions: 1 month, $4.99; 6 months, $24.99; annual, $33.99.

This article is made possible in part by generous funding from the James H. and Alice Teubert Charitable Trust, Huntington, West Virginia.

Comment on this article.

Brailliant BI-14: HumanWare's Latest Step Toward Smart Braille Displays

When I first learned of HumanWare's Brailliant BI-14, I wondered if it might be the perfect next level in braille displays that I and many AccessWorld readers have been hoping to see. Promising to be small, lightweight, and compatible with mobile devices and computers, Brailliant BI-14 would also offer a few simple and highly valued applications. It took more than a year to get my hands on it, but the following article is my assessment of how close HumanWare has come to meeting that "next level" expectation.

Out of the Box

The Brailliant BI-14 bears that signature appeal of so many HumanWare products, the appeal of being esthetically pleasing to touch. With a footprint quite similar to that of comparable products in the refreshable braille arena, the Brailliant BI-14 is slim, sleek, and housed in a smooth leather case. As its name makes clear, it features 14 braille cells and 14 accompanying touch cursors. With the unit properly positioned on a table before you, you will find an eight-key Perkins-style keyboard behind those cells and a Space Bar in front of them, closest to you. Between the keys for dot-1 and dot-4 is a joystick that can be moved toward you, away from you, left, or right, as well as pressed downward to execute various commands. On the front edge of the display are the four thumb keys unique to HumanWare's braille products, which facilitate moving forward or back when reviewing text.

On the left side of the unit are a micro USB slot (used for both charging and connecting) and the Power button, marked with a small tactile dot. On the back edge, near the left, is a slide switch with two positions: one for applications, the other for when the unit is being operated as a terminal.

The unit ships with an adjustable strap already connected to it and is protected by a sleek leather case. While the closure for the case is magnetic, hook-and-loop strips have been rather cleverly used to affix the Brailliant to the inside of the case. Note that these strips are completely hidden when the Brailliant is in position, so that they can't snag other items such as clothing.

Also in the box you will find a CD containing somewhat sparse documentation, a braille sheet explaining how to sync the Notes application, a micro USB cable, and an AC adapter.

Getting Started

While the Brailliant BI-14 is braille only, with no speech, it does have useful sound and vibration cues. When powering on or off, for example, the unit vibrates and emits a short beep. It also conveniently displays the word "starting" in braille, soon followed by the correct time if powering up in Applications mode, or by the word "hello" if powering up in Terminal mode.

As most readers familiar with braille displays will probably recognize, the Brailliant's Applications mode is for when you'd like to use it as a stand-alone device, and Terminal mode is for when you are using it as an adjacent terminal or screen for another device such as a smartphone, tablet, or computer.

The Applications mode is what set the Brailliant BI-14 somewhat apart from the outset, so that is where I first focused my attention. When powered on in Applications mode, as mentioned above, it first displays the correct time. By moving the joystick to the right, you will find the following menu items: Notes, Battery, Stopwatch, Connections, Settings, and About. You can select any of these options either by pressing downward on the joystick or tapping the dot-8 or Enter key. The functions provided by most of these options are as you would expect: reporting battery status, activating a stopwatch, and listing available connections.

Again, since it was the Notes feature that drew my attention to this product in the first place, and since the only braille material included in the unit that came to me was information on syncing notes, that is where I began my exploration.

Brailliant Sync, available for free in the Apple App Store, must be downloaded in order to sync notes across devices. After downloading Brailliant Sync, you add your Gmail or other accounts to it and, voila, any notes you write on the Brailliant that you wish to sync are automatically available in your phone's Notes app as well as in your email account. When you first open Notes on the Brailliant, the first choice is Local. As the label suggests, notes written here reside on the Brailliant alone. Navigating the list by moving the joystick to the right, you will find the other accounts you have added. Move the joystick toward you to access the individual notes in a specific location, and then move the joystick left or right to navigate the list of notes.

It took some experimentation to discover the quickest ways to navigate through these options, but in the end, it is very intuitive. You can save a note or exit without saving, and you can save with or without syncing. A word about the Brailliant Sync app: once it was installed and I had added my accounts to it, I never needed to touch it again. At this time, the app is available only in the Apple App Store, although there are plans to bring it to Google Play later.

Terminal Mode

Some customers, of course, are only interested in a braille display of this size for its role as a terminal, to read and navigate the screen of a smartphone, tablet, or laptop. The Brailliant can connect via Bluetooth or USB.

I first paired it with my iPhone 7, and was genuinely impressed. No code to enter and no revisiting the braille device selection. The two were connected in just a few seconds, perhaps the easiest such pairing I have experienced to date.

Connecting to my Windows 10 JAWS 2018 computer took a bit more patience, but once the connection was established, it worked beautifully. As with most braille displays, you can type directly from the Brailliant if desired, as well as enter commands from its keyboard and joystick. If the Brailliant has been successfully paired with two devices, such as your computer via USB and your phone via Bluetooth, activity from both devices flashes on the display. If, for example, you are writing a document in Microsoft Word on your laptop and a text message, phone call, or news notification arrives on your phone, that new information will appear on the Brailliant. While having such information appear on the braille display spontaneously can often be quite useful, it is also sometimes annoying. Switching from one connected device back to the other is supposed to be done by accessing the Connections item on the menu. I found this method to be inconsistent at best. Locking the phone, however, achieved the desired result of switching the display's focus back to the computer.

Some Pros and Cons

Navigating screens large and small with a variety of keystrokes (the Perkins keys as chords, the thumb keys, or the joystick) results in a seamless and pleasant access experience. The ease of taking the display in and out of its case, and the fact that its lanyard is attached to the display rather than the case, represent simple but very consumer-friendly design decisions. The ability to sync notes across all mobile devices as desired is definitely a big step in the right direction. Notes can only be reviewed once they are saved, moving forward and back by character, word, or line using key combinations that will be familiar to anyone who has used other braille displays. At this time, not even simple editing commands (cut, copy, paste) are available, although I was told such functionality will be incorporated in future upgrades. You can delete characters and delete entire notes, but there is, at this time, no ability to manipulate text in other ways.

Brailliant BI-14 is completely usable by both blind and deaf-blind individuals, since all functions offer vibration feedback. (Vibration can also be turned off for those who do not find it helpful.)

For those with multiple devices, you can connect one USB device and up to five Bluetooth devices simultaneously. You can customize the computer braille language, the literary braille language, and the degree of sound and vibration feedback, and you can use the unit in a one-handed mode. Battery life is impressive: 15 to 20 hours on a full charge, and, of course, while the unit is connected to a laptop or other USB device, it is constantly being charged.

The Bottom Line

While some simple text editing capabilities would make the Notes feature even more attractive, the Notes app as it stands is a fabulous addition to a small display that is a top-notch terminal. At $995, this is a tool many braille users with laptops, smartphones, or tablets will want to add to their technology toolkits.

Product Information

Product: Brailliant BI-14
Price: $995
Manufacturer: HumanWare
Phone: 800-722-3393

This article is made possible in part by generous funding from the James H. and Alice Teubert Charitable Trust, Huntington, West Virginia.

Comment on this article.

Vision Tech: It's Complicated

Nearly five years ago in my very first Vision Tech article, Four Emerging Vision-Enhancing Technologies, I reported on the Implantable Miniature Telescope from VisionCare, Inc. The device is perfectly described by its name: a tiny telescope, implanted beneath the cornea, that focuses vision so individuals with AMD can make greater use of their remaining peripheral vision. As part of that article, I spoke with retired engineer Dan Dunbar, who received his device in November of 2011. Dan is an avid model train enthusiast, but by the early 2000s he had worked his way up the size scale from N-gauge through HO-gauge all the way to O-gauge model trains. Eventually he could no longer see his trains at all; he could only hear them as they circled the far side of his eight-by-thirty-foot O-gauge layout. Happily, the last time I spoke with Dan he was not only able to enjoy his trains again, he had returned to skiing. He was also able to return to using a computer with magnification. Here's where things get interesting.

During our conversation Dan described an unusual occurrence he experienced when he first returned to using a computer:

Something I didn't understand when it started out is that a person's sight—when you look at something and say, "oh, that's a pretty flower," or "I recognize that individual," or whatever it is you're looking at, when you actually see it in your mind, when you have this image, it's a memory. It's not necessarily what comes from the eye. It's managed vision, is the best way I can describe it.

I've worked on Mac computers since they started making them. We're on QuickBooks, and when my bookkeeper moved I started using QuickBooks Online and it worked great. And now, every now and then, QuickBooks has an update. One time before I got on, it told me it wanted to update, and I said OK. … Pretty soon … it said it was OK now to open the program. I clicked Open and I had a blank, white screen. There was no object on that screen except the cursor. I kept moving the cursor around, and I could see the cursor, so I knew the display was working, but there was nothing else. I opened a different report, again, nothing.

What's going on here? They screwed up, I figured. So I called tech support, and I reached a young lady who said, "No, we don't have any problems like that. Let me take your computer and make sure things are OK." So I gave her control, and she came back to me and said, "It's working just fine." I told her, "I don't see anything. I have a blank screen with a cursor."

She told me, "Move the cursor to the upper left corner and tell me what you see." I did this. Nothing. "Now move it to the upper right corner and tell me what you see." Again, nothing. To the middle. Nothing.

She told me, "The upper left should have a magnifying glass symbol, the middle, a big plus sign, at the right looks like a gear." And the minute she said this, the entire display popped into view. And she hadn't done a thing. The updated layout was different. It didn't match my memory, so I couldn't see it.

The Circuitry of Vision

Vision involves a lot more than a functioning eyeball, optic nerve, and visual cortex. "Vision is a cognitive construction of what the world is around you," says Uday Patel, Senior Research Scientist at Second Sight Medical Products, makers of the Argus II artificial retina and, more recently, the Orion brain implant. "There is a lot of memory and filling in of what you perceive."

It's not just the blind who don't always see what's right there in front of them. Have you ever heard of the "I Love Paris" optical illusion?

It's a famous illustration that most people misread. Ask someone to read the text, and odds are they will recite, "I love Paris in the springtime."

Actually, the text reads, "I love Paris in the the springtime," with two "the"s. We aren't expecting to see the second "the," so often we don't see it, just as we are likely to miss the gorilla making his way through this game of basketball.

Out of necessity, Dan had a strong memory of the QuickBooks screen layout. Only after his brain accepted that the screen might look different did the new layout appear. Of course, this effect was likely enhanced by his magnified, telescopic vision.

Along with memory, signals from functioning eyes must also be processed. This processing takes place in various parts of the brain, and, according to Patel, "there is a lot of evidence that the circuitry for the visual cortex actually depends on early vision to wire itself correctly."

Some of this wiring is just now becoming known. Recently, for example, the news was filled with stories of Scottish native Milena Canning, who was left blind 18 years ago after a respiratory infection, a series of strokes, and an 8-month coma.

About six months after emerging from her coma Canning reported seeing a flash of reflection glinting off a metallic gift bag, "like fireworks." Soon she was able to follow a moving arm, and name the colors of large moving objects, though stationary objects remained invisible to her.

"She can see rainwater running down a window but cannot see through it," Glasgow ophthalmologist Gordon Dutton wrote in a 2003 paper. "When her daughter is walking away from her, she can see her daughter's pony tail moving from side to side but cannot see her daughter. She can see the movement of the water going down the plug hole, but she cannot see her child in the bath."

Dutton referred Canning to Western University's Brain and Mind Institute in London, Ontario, where cognitive neuroscientist Jody Culham and a team of researchers performed a series of tests, including a full functional MRI.

"Canning is missing a piece of brain tissue about the size of an apple at the back of her brain—almost the entirety of her occipital lobes," Culham and her team members discovered. "However, there is an area, just at the edge of the part that's damaged—about a teaspoon on each side—that survived intact. This area, known as the MT for Middle Temporal, activates whenever someone sees movement.

"In Milena's case, we think the 'super-highway' for the visual system reached a dead end. But rather than shutting down her whole visual system, she developed 'back roads' that could bypass the damaged pathways to bring some vision—especially motion—to other parts of the brain."

The condition is known as Riddoch syndrome, or the Riddoch phenomenon, and also as statokinetic dissociation: a person who is otherwise blind can see moving objects.

The opposite condition also exists. Called akinetopsia, it occurs when the MT is damaged, leaving patients to view the entire world as a series of still images.

"Sort of like what a person with normal vision might see in a room where a strobe light has been switched on," says Culham.

People with akinetopsia have trouble knowing when a drinking glass is about to overflow, and are probably the last to be picked for a game of beach volleyball or a motorcycle race, though Culham did relate that she once knew of a man with akinetopsia who rode a motorcycle regularly.

Facial Recognition

Another aspect of vision that most experts agree is learned early in life is the ability to recognize faces. Some people grow up lacking this ability. It's known as face blindness, or developmental prosopagnosia (DP), and among its sufferers was the famed British neurologist Oliver Sacks, author of The Man Who Mistook His Wife for a Hat and other groundbreaking works describing unusual brain anomalies.

In a 2010 New Yorker article entitled "Face-Blind," he wrote:

At the age of seventy-seven, despite a lifetime of trying to compensate, I have no less trouble with faces and places. I am particularly thrown if I see people out of context, even if I have been with them five minutes before. This happened one morning just after an appointment with my psychiatrist. (I had been seeing him twice weekly for several years at this point.) A few minutes after I left his office, I encountered a soberly dressed man who greeted me in the lobby of the building. I was puzzled as to why this stranger seemed to know me, until the doorman greeted him by name—it was, of course, my analyst. (This failure to recognize him came up as a topic in our next session; I think that he did not entirely believe me when I maintained that it had a neurological basis rather than a psychiatric one.)

Reports of prosopagnosia date back centuries, although it wasn't until 1947 that the condition was officially recognized. Until recently it was thought there were only a few hundred face-blind individuals in the world, but recent studies at Harvard and University College London have discovered that up to two percent of the population may have some degree of face-blindness.

In his New Yorker article, Sacks also described how he had to ride his bike along the same route every day because otherwise he would get lost.

This, too, is not unusual, as at least one recent study has demonstrated that the inability to recognize faces is linked to broader visual recognition problems.

"Our research indicates that neural abnormalities in many people with DP are more widespread than previous studies have suggested," says psychology and brain sciences professor Bradley Duchaine, the principal investigator of the Social Perception Lab at Dartmouth College. The study he refers to included 22 people with DP and 25 control subjects. Each were shown videos that included faces, objects, and scenes. Those with DP tended to respond less strongly than the control subjects in brain areas associated with facial recognition, but also less strongly in areas associated with scene recognition.

The takeaway here is that while DP was previously thought to be caused by a lack of experience with faces, it is more likely the result of a neurobiological cause affecting a broad region of the cortex.

Mind Blindness

One of the most unusual "vision" problems I encountered while researching this article has nothing at all to do with actual eyesight. Known as mind blindness, or congenital aphantasia, it is the inability to form any sort of mental image. According to recent estimates, between two and five percent of the population may experience this condition.

In a recent interview with Australia's news channel nin.com.au, 22-year-old recent graduate Maddie Burrows described her experience of the condition: "When I try to really visualize someone's face, really I'm just thinking about how there is nothing there. But I still have so much information about what their face looks like, I just can't bring the image together. When I think about whether it's a round or narrow face and if they have brown or blonde hair, all of that detail I still know, I just can't put it into an image."

In her interview, Burrows stated that her problems aren't limited to picturing faces; she has trouble visualizing anything at all. She can't picture her home, her bedroom, or even simple shapes like a circle or a square.

Later in the interview Burrows stated that her mind blindness makes it difficult for her to do math in her head. "Another negative is that I'm not the best at navigation and direction. If I am walking to a place that I've been to a million times before I don't need to think about it. But if it's a place I only vaguely know I need to remember street names and building names."

Mental calculations and navigation. Hmmm.

The Future of Vision Tech

With all the groundbreaking research and development on artificial retinas, brain implants, stem cell patches, and even the possibility of whole-eye transplant, it's important for us to remember vision involves a lot more than sending and receiving pixels. "There is no little man inside your head watching TV," says Culham.

What will happen when someone blind since birth undergoes a successful whole-eye transplant or stem cell retinal replacement at the age of 30? Will he or she ever be able to distinguish letters of the alphabet without touching them, and without a great deal of training? What about the same person if he or she had gone blind at 12 years of age? The average length of time a person has been blind before receiving an Argus II retinal implant is 20 years. Will someone who has been blind for 50 years do better than someone blind for only 2 years? Would it be possible to perform brain scans to determine likely success rates before a blind individual undergoes complex surgery? How much do mental calculations and orientation depend on creating a workable mental image, and are there ways to improve a blind person's ability to do this? Are there better ways to stimulate the brain with the tiny electrodes of an ocular bypass system such as the Argus?

Happily, we are living in times when answers to these and other similar questions may come from science rather than science fiction.

This article is made possible in part by generous funding from the James H. and Alice Teubert Charitable Trust, Huntington, West Virginia.

Comment on this article.

Adjusting Focus

Lee Huffman

Dear AccessWorld Readers,

As I'm sure you have all noticed, the days are now growing noticeably shorter. Students have returned to school, and it's now a logical time to begin thinking about work and careers. October is National Disability Employment Awareness Month, so next month AccessWorld will take a closer look at employment resources for people with vision loss as well as revisit tried-and-true job search strategies. Of course, we will also look at technology to support and enhance your career and work life.

As I said in last month's Editor's Page, we will be developing additional AccessWorld content in the areas of Employment, Technologies on the Horizon, and Technology Case Studies at major technology companies. We will begin this process in October with an article by Deborah Kendrick that details an employment initiative at BOSMA Enterprises. Employment is an area of significant emphasis at AFB, and AccessWorld will now begin supporting this work at a higher level.

Our mission at AFB, and at AccessWorld, is to expand possibilities for people with vision loss. That is why the entire AccessWorld staff and I investigate, try out, and report on the many aspects of technology we cover. Technology is the "game changer." It is the single most significant tool people with visual impairments have to obtain and maintain independence in every area of our lives, from education, to work, to transportation, to personal finances, to maintaining our homes, to supporting our families, and to reaching our highest level of accomplishment.

AccessWorld publishes technology information for you to use to the best of your ability and in your best interest. We expect and encourage our readers to be information seekers and problem solvers. If there is one thread that runs through every issue of AccessWorld, it is that we are working through these technology challenges together. Don't think for one minute that AccessWorld authors never become frustrated, overwhelmed, and even disappointed by technology. Believe me, it happens to all of us. After cooling off and trying again, though, the "I got it!" moment happens and the challenge becomes worth the effort. It happens for us, and it will happen for you. I encourage you and challenge you to stay with it.

Not every article in AccessWorld will pique your interest or provide the most relevant information for your specific circumstances, so send me your suggestions or questions. You may know of information or resources that we do not. Sharing what you know in a Letter to the Editor may provide another reader with information they need. That is how it works. By sharing information, tools, and tips with one another, we strengthen and empower our entire community, and that is the ultimate goal of AccessWorld.

The AccessWorld team hopes you will read each article in this and every issue to gain as much access information as possible. Please remember to like and share on social media the articles you find most helpful and informative, or send links via email to a specific friend, relative, student, or colleague. Because technology is always advancing, we encourage you to stay diligent and proactive in seeking out new access strategies that better suit your situation. Tune in next month as we continue this journey!

Sincerely,
Lee Huffman
AccessWorld Editor-in-Chief
American Foundation for the Blind