Full Issue: AccessWorld July 2013

An Overview of New Features and Bug Fixes in Victor Reader Stream Version 4.1.6 from HumanWare

HumanWare released the Victor Reader Stream New Generation Version 4.1.6 on May 21, 2013. The update is available in the following ways: you can download the update file from the HumanWare Victor Reader Stream New Generation support website, you can use the HumanWare Companion software (the update service is located under the Help menu), or you can update wirelessly after the device is connected to a wireless network. The update to 4.1.6 includes several bug fixes and improvements, as well as the highly anticipated ability to play Audible audio books.

Bug Fixes and Improvements

The update includes many bug fixes and improvements. You can now raise the volume of your headphones to 20. If you create a temporary playlist on the Music Bookshelf and turn the device off while it is active, you will be returned to the temporary playlist when you turn your device on again. Using the HumanWare Companion software, you can now connect the Victor Reader Stream to static IP networks and HTTP proxy servers. The dialog for configuring this is located under the Tools menu of the HumanWare Companion software.

The Victor Reader Stream has also seen several improvements to audio playback and recording capability. The Victor Reader Stream previously had an issue where certain audio tracks would distort if the bass setting was raised to the maximum level in the Music Bookshelf. This issue has been corrected. Recordings made with the device now have far less low-frequency whine in the background, and they sound much clearer.

In addition to these improvements, the Victor Reader Stream supports two more pieces of peripheral hardware. One of these is the Swifty USB adaptive remote switch. The other is the Victor Reader CD Accessory. With the addition of the CD Accessory, the Stream can now play commercial audio CDs.

Audible Support

The greatest improvement made to the Victor Reader Stream software is its ability to play Audible audio books. Audible produces commercial audio books in digital format. Most books have some markup for sections, such as chapter headings. The Victor Reader Stream can play the Audible Format 4 and Enhanced Format books.

Adding and Playing Audible Books on the Victor Reader Stream

Use the Audible website to purchase Audible books, and the Audible Manager to download and add the books to the Victor Reader Stream. The Audible Manager software can be downloaded from the Audible Software webpage. When you install the software, it will prompt you to choose your device. The Victor Reader Stream is labeled "HumanWare Devices" in the device list. Once a book has been downloaded, you can add it to the Victor Reader Stream by opening the Applications menu and selecting "Send to Victor Reader Stream." You must have the Victor Reader Stream turned on and connected to the computer using the included USB cable. You can also locate the Audible title and copy it to the Audible folder of your Stream SD card directly using Windows Explorer.

Once you have added your book to the Victor Reader Stream, you can find it under the Audible Bookshelf. The book can either be navigated by the heading element or by a time interval. You can select the desired element (heading or time jump) using the 2 and 8 keys. You can then move by the selected element using the 4 and 6 keys. There is a minor issue when skipping by heading. You must wait until the audio begins to play to move to the next heading using the 6 key. Pressing the 6 key in quick succession will not advance by multiple headings.

The Bottom Line

HumanWare has released a solid update to the Stream. Several bugs were fixed successfully, and Audible support is very well implemented with only a minor issue. If you would like to know more about the Victor Reader Stream, please read the April 2013 AccessWorld review of the Victor Reader Stream New Generation.

Product Information

Product: Victor Reader Stream version 4.1.6
Price: Free update
Available From: HumanWare
Phone Number: 800-722-3393

Comment on this article.

Working with Text and VoiceOver on a Mac

As with any screen reader, VoiceOver has its own set of commands for accomplishing specific tasks, such as editing text, reading documents, and finding specific words. This article will discuss how to accomplish these tasks on a Mac. There are some similarities between Mac and Windows commands, which can make the transition from PC to Mac a bit easier. I am using a MacBook Air running Mac OS X Mountain Lion.

VoiceOver Introduction

VoiceOver, the Apple screen reader, comes pre-installed on all iOS devices and Mac computers. This differs from most PC screen readers, which are third-party applications. (Microsoft has Narrator, but it lacks many of the desired features.) VoiceOver can be toggled on and off with Command + F5. (The Command keys are immediately to the right and left of the space bar.)

All VoiceOver commands use the Control and Option keys. The Option key is immediately to the left of the Command key on the left side of the space bar, and the Control key is one key further to the left. (These VoiceOver keys will be referred to from here on as the VO keys.) There are also Mac commands that do not involve VoiceOver, such as Command + S for saving a file. There is often more than one way to accomplish a specific task on the Mac. For example, typing VO + D will bring you to the Dock, as will a two-finger double tap near the bottom of the trackpad when the Trackpad Commander is enabled.

Reading Text

VoiceOver provides many ways to read text. Reading can be done with keyboard keys or with the trackpad, or a combination of both methods.

Reading Using Keyboard Commands

To read an entire document, type VO + A. VoiceOver will continue reading until the end of the document is reached or the Control key is pressed. Pressing the Control key also stops speech in Window-Eyes and JAWS.

To read the current line, type VO + L. The current sentence is read by typing VO + S, the current paragraph by typing VO + P, and the current character by typing VO + C. Typing VO + C again will say the character phonetically, such as "Sierra" for S. To read the current word, type VO + W. The first time you press the command, VoiceOver will say the word. Pressing VO + W a second time will have VoiceOver spell the word, and a third press will have VoiceOver spell it phonetically. MacBooks do not have Home and End keys. Instead, typing Command + Left Arrow brings you to the beginning of the current line, and Command + Right Arrow brings you to the end of the current line. Typing Command + Up Arrow goes to the top of a document, and Command + Down Arrow goes to the bottom. To read line by line, use the Down Arrow to move to the next line and the Up Arrow to read the previous line. To read letter by letter, use the Right and Left Arrow keys (the same as with Window-Eyes and JAWS). When reading by line or character, do not add the VO keys.

Using the Trackpad and VoiceOver Gestures

These gestures are the same as on an iPhone, iPod touch, or iPad. To read a document starting from the VoiceOver cursor, flick down with two fingers. To choose the unit by which VoiceOver will read, do a two-finger twist on the trackpad to the left or right until you hear the desired unit. Then, flick down with one finger. Another method of choosing VoiceOver's reading unit on the trackpad is to hold down the Command key and tap one finger on the trackpad. Repeat this gesture until you hear the unit you want.

Selecting Text

There are many ways to select text, some of which are described below. For additional methods, read chapter four in the VoiceOver Getting Started Guide. To access the guide, type VO + H; the guide is the last option in the list. To hear the selected text, either type VO + F6 or tap three times if using the Trackpad Commander. If no text is selected, VoiceOver will say, "No selection available."

Selecting Text Using Standard Mac OS X Keys

In order to use these key combinations, the keyboard cursor and VoiceOver cursor must be set to move together (this is the default setting). If changes have been made to this setting, adjustments can be made in the VoiceOver Utility by typing VO + F8.

To select text by character to the right of the insertion point, type Shift + Right Arrow. Type Shift + Left Arrow to select text to the left. To select by word, type Shift + Option + Right Arrow to select by word to the right and Shift + Option + Left Arrow to select by word to the left. To select text from the keyboard cursor to the end of the line, type Shift + Command + Right Arrow and Shift + Command + Left Arrow to select to the beginning of the line. To select text from the keyboard cursor to the end of the text, press Shift + Command + Down Arrow, and to select from the cursor position to the beginning of the text, press Shift + Command + Up Arrow. Typing Shift + Down Arrow will select text by line. To select the entire document type Command + A. This command is similar to the Window-Eyes and JAWS commands, which substitute the Control key for the Mac's Command key.

If you accidentally select text, type one of the arrow keys alone, but be aware that this can change the cursor position. If text is accidentally deselected, try the Undo command (Command + Z). To copy selected text, type Command + C. To cut the text and move it to a different location, type Command + X. To paste the cut text, type Command + V. If you substitute the Control key for the Command key when copying, cutting, and pasting, the keystrokes will work for JAWS and Window-Eyes.

Selecting Text with VoiceOver Keys

Sometimes the above methods will not work in an e-mail, but text can still be selected using the VoiceOver Keys. When you're at the beginning of the text you want to select, press VO + Return. Then use the VO keys and arrow keys to read the text. At the end of the text to be selected, press VO + Return again, which highlights the text.

Selecting Text with Quick Nav

Launch Quick Nav by pressing the Left Arrow and Right Arrow at the same time. To choose the unit of movement, press the Up Arrow together with either the Right Arrow or Left Arrow. Repeat this until you find the desired unit, such as headings or words. When you come to the point where you want to begin selecting text, press Shift + Down Arrow, and VoiceOver will indicate that the text has been selected. By typing the same keystroke again, you will move to and select the next unit of text. To unselect text, type Shift + Up Arrow.

Finding Text

The VoiceOver command VO + F will bring up an edit box where you can type in a search query. Once the text is entered, hit the Return key. If no match is found, VoiceOver will make a sound. If there is a match, VoiceOver will go to the result and speak the search query. Use VO + L to read the line the text is on or use the arrow keys. To go to the next instance, type VO + G, or type VO + Shift + G to go to the previous instance. It's also possible to use the regular OS X Search command (Command + F), but you won't get the same auditory feedback.

Conclusion

One of the nice features about the Mac is that there are usually several ways to accomplish a task. Practice with the above commands, and see which technique works best for you. Sometimes it may take a combination of techniques to accomplish an action. Look for similarities between Mac and Windows commands to make your transition a little easier.

Comment on this article.

A Review of the TapTapSee, CamFind, and Talking Goggles Object Identification Apps for the iPhone

In the April 2012 AccessWorld review of the VizWiz, Digit-Eyes, and LookTel Recognizer object identification apps for iOS, the reviewer concluded that each of these apps would make a valuable addition to your iPhone's app library. In the intervening months, several new offerings have hit the App Store that can help identify everything from a soup can to the return address on a package you receive in the mail. In this article I'll take a look at three: TapTapSee, CamFind, and Talking Goggles. All three of these apps compare pictures you snap using your rear-facing camera against extensive image databases and return the best match. TapTapSee and CamFind also use human staff to view the images and provide accurate matches. The third app, Talking Goggles, adds a unique twist: optical character recognition (OCR) that can decode small excerpts or labels of text printed on boxes, jars, magazine covers, and more.

An iPhone 5 was used for this review, but these apps should also work on other iOS devices that have a camera.

TapTapSee

TapTapSee from Net Ideas, LLC is a free iOS app boasting an extremely simple interface. Indeed, there are only three buttons, all clearly labeled. At the extreme upper left of the screen is the "Repeat" button, and at the extreme upper right is the "About" button. Tap anywhere else on the screen to locate the button most essential to this app: "Take Picture."

Aim your iPhone's camera at the item you'd like to recognize and double tap the screen. The shutter click sounds, and VoiceOver announces, "Picture One taken." Your photo is automatically uploaded to the company's servers, and when a match is found, VoiceOver speaks the item name.

The item name is not displayed on the screen, but you can press the "Repeat" button to have it re-voiced. The "Repeat" and "About" buttons at the upper left and right of the screen are very small and can be difficult to find. A better way to access them is a one-finger swipe to the right to reach the "Repeat" button and a second swipe to reach the "About" button.

You don't have to wait until you get the results of one search before you take a second or third picture. In fact, you can take five pictures before the app recycles to the "Picture One taken" notification. This feature is especially handy when you have a number of items you wish to recognize or if you suspect your first attempt may not have been in focus or at the right distance. Consider snapping several and waiting for the results to come back one by one. Since the "Take Picture" button is so easy to find and press, it's a simple matter to hold your phone between your thumb and middle finger and tap with your index finger, leaving you a free hand to arrange each item for its moment in the spotlight.

TapTapSee announces the number of each picture as it is uploaded and the number of each result as it is returned and voiced. Unfortunately, you can only repeat the most recent result, so if TapTapSee has announced the name of item three, you cannot return to the information on item two. Also, since these results are not displayed on the screen, there is no way to review the item name letter by letter. Braille display and Zoom users will need to turn on speech to use the app since it only voices results.

Along with the app's version number, Terms of Use, and Privacy Policy, the About menu includes a setting labeled "Enable Auto-Focus Sound." If you enable this setting, TapTapSee will sound a double beep when your item is in focus. There is no flash setting, so the app defaults to auto-flash mode. I found the auto-focus sound feature extremely useful and not only for photo accuracy. If you do not close TapTapSee after you use it, the app will continue to auto-focus on one item after another until your phone is locked. If you're like me and run your iPhone with auto-lock set to "Never," the alert beep is handy as it will remind you the app is still running in the foreground.

Results

TapTapSee results are swift and accurate, and on those occasions when the results are delayed, the app voices a message saying, "This will take a few seconds," so you aren't left wondering if the photo had, in fact, uploaded. Most cans and boxes from my pantry were identified on the first try, but if I had to turn the can a bit to get a better picture, it was an easy matter to double tap the screen again without having to search for the "Take Picture" icon or find a "Back" button.

The auto-focus button is a real help in learning to position the camera, and I quickly found myself using it to submit several photos in rapid succession from different angles. As mentioned, TapTapSee uses a combination of a proprietary image database and humans to perform recognition, so the same item often returned slightly different results. Three successive pictures of my keyboard returned these results: "Apple keyboard, wireless," "Apple keyboard" and "computer keyboard." Two successive images of a DVD reported "WKRP in Cincinnati" and "WKRP in Cincinnati: The Complete First Season DVD."

The app had little trouble identifying US currency. It did a fine job identifying most DVDs and CDs in my small disk library, and it also recognized most of the books I tried even if I photographed the wrong side. However, as was the case with the DVDs, the results varied in their completeness.

TapTapSee truly shines when it comes to non-branded, non-labeled items. From the street it reported that I live in a "white stucco house." A second photo revealed that the verbena bush near my front door is covered in red and orange flowers. Submitting a pair of photographs with the camera aimed at myself gave the results "man in blue shirt" and "man with gray beard." I hung a red t-shirt on my home office door, and TapTapSee nailed it with "red shirt hanging on white door." The app frequently included color and background information, such as "bacon in a black pan" or "orange cat sleeping on black couch."

Conclusions

Considering its simplicity and ease of use, TapTapSee is a must-have app for any individual with vision loss who uses VoiceOver with their iDevice. The price is right (free), and it has already become my go-to app for quickly identifying objects at home or out and about. If you are a Zoom or braille display user or just want a more full-featured recognizer app, read on.

Update

Shortly before publication, the developers of TapTapSee released an update that includes some exciting new features. There are two new buttons across the top of the Home screen: "Library" and "Share." Double tapping the "Library" button calls up your photo stream. Select any picture and double tap, and the image is sent to TapTapSee for identification. This is a great way to help clear the clutter of all those pictures of blank walls and wayward thumbs from your stream. Now that you've found the picture of your cat playing the piano, return to the Home screen and double tap the "Share" button to e-mail it to a friend or post to one of your social networks. TapTapSee sends out the most recent identified image, so you can now check to make sure you actually captured that circus clown in the frame before you share it on Facebook or Twitter.

CamFind

The CamFind app, also from Net Ideas, LLC and also free, builds upon the developer's experience with TapTapSee and incorporates a great deal of useful feedback received from the visually impaired community.

CamFind is geared toward people who want to learn more about an item, and perhaps shop for it online, without having to type in a search. The CamFind interface differs significantly from the simplicity of TapTapSee. The opening screen still has only three buttons: "History," "Take Picture," and "Options," but tapping anywhere on the screen does not call up the "Take Picture" button as it does in TapTapSee. Instead, double tap the bottom center just above the "Home" button.

Also unlike TapTapSee, CamFind displays the item name on the Results page along with a thumbnail image of your picture. There are four additional buttons on the results page: "Shopping," "Related Image Search," "Map Search Results," and "Loading Movies." Snap a movie poster, and the "Loading Movies" button will take you to a site where you can watch a trailer for the film. The WKRP DVD image took me to a web store where I could order that DVD and several other related titles, including Night Court and Murphy Brown. The "Map Search Results" button is supposed to show you where you can purchase the item locally, but mostly I got links to local hotels and restaurants. The "Related Image Search" button does just that, and the "Shopping" button leads you to online merchants who sell the item plus other resources, such as the manufacturer's homepage, Wikipedia articles, and other sites of possible interest.

Unlike with TapTapSee, you can't take successive pictures without first returning to the main screen. Unfortunately, the two-finger scrub gesture does not work in this app. Instead, you have to locate and double tap the "Back" button at the upper left of the screen.

The main screen then lists all of the items you have scanned to date. You can return to the Results menu by tapping any item on the list, which can be handy if you identified an item while you were out and want to wait until you return home to shop or learn more. The History menu has a "Clear" button when your list starts to get unwieldy, and an "Add Item" button that allows you to search by text or open and identify a picture from your photo library.

Results

Since CamFind and TapTapSee use the same image databases and human staff, the results tend to be similar. According to the developers, TapTapSee has been optimized to give results that individuals who are visually impaired would prefer, and I experienced at least two instances where I believed this to be true. The "red shirt hanging on white door" message that TapTapSee reported came back simply as "red shirt" using CamFind. In general, TapTapSee is far more likely to include background information or to mention more than one item, such as "loaf of bread on a cutting board." Another time I got usefully different results was when I snapped the same car with both apps. CamFind reported "red Toyota Camry," whereas TapTapSee reported "car with open door," which was potentially the information I wanted to know.

CamFind also offers a voice search feature and language support. You'll find both in the Options menu. The app also uses auto-flash and auto-focus, but currently, there is no beep alert to let you know the object is in focus. Happily, I was informed by one of the developers that this feature will be added to an upcoming release.

Conclusions

This app is an excellent companion to TapTapSee, especially if you want to save your results for later or do additional research on the items you photograph. Braille and Zoom users may also prefer this app to TapTapSee, since it displays results on screen. Since CamFind is also free, there's no reason not to have both apps in your app library.

Talking Goggles

The Talking Goggles iOS app from Sparkling Apps ($0.99) uses the Google Goggles image database, the same one used by the popular Android app of the same name. The app uses its own voice to announce the results. It also features a "Video Camera" mode you can use to identify one item after another without having to pause to take individual photos. Finally (and most interestingly), whenever the app detects text, it uses OCR to try to recognize and announce it.

When you start Talking Goggles, you are presented with four buttons. At the upper left of the screen is the "Flash" button, which toggles the device's flash off and on. (This app does not use auto-flash.) At the upper right, there is a "Flag" button. Select this option to change the app's language settings. The screen's bottom left contains the "Gallery" button. Double tap this button to bring forth icons for "Camera Roll," "Photo Stream," and "Goggles Library." Open any of these, select an image, and double tap. Talking Goggles will open the photo and try to identify it.

Move to the bottom right, and you will find a button that toggles the app between "Video Camera" and "Still Camera" modes. In the lower middle is one last button. When you are in "Video Camera" mode, this button toggles between "Record" and "Stop." When you are in "Still Camera" mode, this button has the somewhat baffling label "Camera Copy14."

Double tapping the "Camera Copy14" button does not take a picture. You have to double tap a second time because the first double tap offers you the opportunity to focus the camera the proper distance from the object you wish to scan. Unfortunately, there is no audible feedback to assist.

The "Video Camera" mode causes Talking Goggles to attempt to identify objects it finds in the video stream. After you point the camera toward an object you wish to identify, Talking Goggles will continue to attempt to identify one item after another on the fly until you press the "Stop" button, the "Still Camera" button, or press your device's "Home" button to exit the app.

After voicing a "Still Camera" result, Talking Goggles displays the name of the last item on the screen. Double tapping the name calls up a web search for the item. There is no list of previously recognized items. You can access a history of your photos from the Goggles Library via the "Gallery" button, but unlike with CamFind, this app does not list the names of the items. Instead, it only lists the time and date stamps, so users who are visually impaired will find locating a previously recognized object frustrating at best.

Talking Goggles places app options in the iOS Settings menu. There you can choose whether or not to save your captured images and if you want the app to self-voice the results or not.

The app offers three scan modes: "Fast," "Balanced," and "Best." I tested all three and identification was in fact strongest using the "Best" setting.

Results

Talking Goggles performed well for most catalog items, including CDs, DVDs, and books. It also recognized pantry items well, but many times the results contained only a few words instead of the full title. I was also pleasantly surprised when I aimed the app at one of my antique poker dog prints. CamFind had recognized this as "a picture of seven dogs playing poker." Talking Goggles went a step further and included the print's title, "A Friend in Need." It also recognized my granddaughter's Mickey Mouse shirt and, in both cases, called up a web search for those characters.

The app did an adequate but not stellar job identifying US currency. I often had to take multiple photos or pull the video camera away and then go back for a second or third pass. Also, Talking Goggles often provided more information than necessary, such as "'In God We Trust.' Twenty dollars."

Unfortunately, my red shirt hanging on the door, a pair of reading glasses, and many other everyday items resulted in a fairly high percentage of "No Close Images Found" errors. Unlike for item matches, this message does not self-voice, and sometimes it takes several minutes to appear on the screen. In the meantime I was left wondering if the picture had, indeed, uploaded properly.

Often, instead of properly recognizing an object, Talking Goggles would report back a seemingly random string of words or characters. At first glance this seems like a weakness in the app, but in fact, it is one of its marquee features. Talking Goggles has been programmed to search for text and, when it finds some, use OCR to recognize it. Sometimes this works against the app, like when a screwdriver I scanned came back "swash up." (Perhaps the app was trying to identify a brand name, or the screwdriver's handle ridges appeared to the app as some sort of text.) Other times, this can be a true help to the visually impaired.

Whereas TapTapSee and CamFind described a bubble-wrapped package as an orange cell phone, I was able to use Talking Goggles to find the words "refill" and "Febreze" on the package, which was all the information I needed. The OCR is by no means complete, but with practice I was able to catch a few words here and a sentence there, such as magazine titles and product names on the empty boxes stacking up in my closet.

Unfortunately, there are also times when the OCR works against the app. Instead of using an image match to identify a can of Chef Boyardee Ravioli, the app went straight to OCR and spoke small blurbs from the nutrition label. Several quarter turns of the can and about 20 seconds of trying to hold the camera perfectly still finally revealed the product name. TapTapSee nailed it on its first try. I also tried using Talking Goggles to identify a bottle of my wife's nail polish. The "Still Camera" mode gave me several "No Close Images Found" errors in a row. I moved on to the "Video Camera" mode, but after several minutes trying, the only text I could get the app to announce was "no chip" and "one coat." Again, TapTapSee identified the object correctly on its first try as a bottle of Revlon Clear nail polish.

One very useful task Talking Goggles did help me perform was sorting through a pile of mail. Often the brief blurbs of text or labels were all I needed to discern a return address or read "Limited Time Offer!" or "Current Resident," which allowed me to quickly sort out and toss much of the junk mail.

Conclusions

In "Still Camera" mode, Talking Goggles is considerably slower and less accurate than both TapTapSee and CamFind. It also does not perform well in poorly lit areas, even with the flash turned on. The "Video Camera" mode provides some useful text information via the OCR feature, but the learning curve is rather steep. Without audible feedback it's difficult to sense exactly how far away to hold various items. Also, it takes several seconds for Talking Goggles to identify and recognize text, and I often found myself struggling to hold my iPhone perfectly still for any length of time. It is also impossible to tell if the app has found text and needs more time to recognize and announce it or if there is no text in the focus area and the camera needs to be moved one way or another.

Despite these shortcomings, I definitely think Talking Goggles is well worth the $0.99 price even if all I use it for is to help presort my mail.

The Bottom Line

If you haven't already read the April 2012 AccessWorld article What is This?: A Review of the VizWiz, Digit-Eyes, and LookTel Recognizer Apps for the iPhone, it's definitely worth a look. The free VizWiz app described in the piece is a definite must-have even if you obtain all three of the apps mentioned in this follow-up.

Here's why:

Before you submit a photo to the VizWiz recognition engine and web workers, you are given the opportunity to record a question. Often your question is "What is this?" but just as often there will be a particular aspect (color, expiration date, oven temperature, etc.) which is the information you really want to know. VizWiz's question and answer format is ideally suited to these sorts of tasks. Even so, in my opinion, VoiceOver users should definitely consider adding both TapTapSee and CamFind to their app libraries for quick and easy identification of most products. Braille and Zoom users will probably be satisfied with just CamFind.

If you already use a dedicated OCR app with your iOS device, Talking Goggles will not add a great deal of functionality that is not already available on the previous two apps. Identifying works of art and characters on t-shirts is an amusing novelty, but except for sorting mail, I don't see myself firing up this app more than once or twice a week. At $0.99, I would nonetheless recommend the purchase of this app if it's in your budget. I believe the future potential for this and other similar apps is enormous, and I eagerly await Talking Goggles updates and the accessibility advances they might make possible. For now, as users who are visually impaired, it's in our best interests to offer the developers our encouragement and financial support.

Comment on this article.

Android's Big Step Forward: A Review of the Screen Enhancement Features of Android 4.2.2

At an ever-increasing rate, tasks that have traditionally been carried out using PCs and Macs are being taken over by smartphones and tablets. The Android market is inching ever closer to the one-million-app mark, indicative of the potential influence Android devices have on people's everyday lives. According to a recent press release by International Data Corporation (IDC), tablet sales will outpace laptop PC sales before the end of this year, and predictions are that tablets will outsell all PCs by 2015. Read the IDC press release for more information.

These trends continue to have an enormous impact on the way that we access information and carry out tasks. When taking into account the reduced screen size of smartphones and tablets over traditional PCs and Macs, the proliferation of these devices on the market, and a rapidly aging population with visual impairments, the accessibility features of these devices for people who are blind or visually impaired become all the more important.

Photo of logo representing the Android operating system

Caption: Photo of logo representing the Android OS

For several years now, the level of accessibility within Apple's mobile and tablet operating system (iOS) has been recognized by the blind and visually impaired community as arguably the most accessible mainstream operating system the world has ever seen. In comparison, it's safe to say that the accessibility features of Google's Android OS have been considered less than stellar. The improvements that Google has made to their latest Android release, specifically for people with low vision, are now challenging that belief.

In this review, we will put the accessibility features for people with low vision using Android 4.2.2 to the test using the Nexus 4 smartphone and the Nexus 7 tablet, evaluating the operating system in the following categories:

  • Magnification
  • Adjustable text size
  • Color schemes
  • Combining speech and magnification
  • Open source vs. closed source

Magnification

An easily overlooked feature added to Android 4.2.2 is an option called Magnification Gestures. This is a deeply integrated magnification function found within the Accessibility menu that allows you to zoom in and pan any part of the viewable screen, regardless of which app or setting you're in. The Magnification Gestures feature has two options available: "On" and "Off." When "On" is selected, a screen appears with specific instructions on controlling the level of magnification using a handful of gestures. Turning the Magnification Gestures on does not mean that the viewable screen remains magnified at all times. It simply gives you the option of accessing this feature on the fly any time it's needed. Enabling screen magnification after the Magnification Gestures feature has been turned on is achieved by triple tapping the screen with a single finger.

Increasing and decreasing the viewable screen size is achieved by using the reverse pinch and pinch gestures (spreading the thumb and index finger apart or pinching them together). These are the same familiar gestures that are used on most tablets and mobile devices for resizing photos, thereby reducing the learning curve when using them with Android's Magnification Gestures feature. Panning the screen from side to side and up and down is accomplished by sliding two fingers over the screen in the desired direction. To disable magnification, you can triple tap the screen again with a single finger. Triple tapping and holding one finger on the screen will result in temporarily magnifying the display with the focus of attention at the location of your finger. Releasing your finger off the screen will immediately snap the viewable area back to its regular size.

Although Apple's iOS 7, which is expected to be released later this fall, may surprise us and tell a different story, iOS currently offers nothing comparable to Android's single-finger triple-tap-hold-and-release feature. This feature is very helpful if you want to quickly magnify the screen at the exact location of your fingertip and then return to the original size. It's not much different from using a hand-held magnifier to quickly carry out spot-reading.

The range of magnification provided with Android is impressive. On the Nexus 4 smartphone, five-point font can be enlarged to 32-point. Of course, the viewable font size itself is dependent on the screen size of the device you are using.

Adjustable Text Size

Android provides four font sizes within its operating system located within the Display menu: "Small," "Normal," "Large," and "Huge." Under Accessibility, the only option related to text size is called "Large Text." This option is either checked or unchecked. When checked, the font size actually changes to the "Huge" setting listed under Display. This could result in some confusion if you happen to be a novice user since font size modifications are listed in two locations and are labeled differently.

Using the Nexus 4 smartphone, the text within an e-mail ranges from six- to nine-point font size. There are times when combining both large font sizes and magnification can be counterproductive since some of the viewable text can appear disproportionately larger than the other items and images displayed. Relying on the screen magnification gestures, font size adjustments, or a combination of both depends largely on personal preference and functional vision. That being said, providing a wide spectrum of font sizes within the Android OS will certainly expand the functionality and options of Android devices for a broader audience.

Screenshot of Android Accessibility settings

Caption: Screenshot of Android Accessibility settings

Color Schemes

The option to invert colors on a device can be extremely beneficial if you happen to be glare sensitive or if you prefer a less intense display in low-lit or dark environments while maintaining a high level of contrast. Depending on your specific visual needs, inverting colors may also reduce visual fatigue, especially for prolonged reading. The benefits of inverting colors (i.e., white text on a black background) are so valuable to people with low vision that it has long been a standard feature among screen magnification programs and electronic magnifiers (CCTVs).

By default, the stock version of Android displays its menu options and settings using white text on a black background. Android 4.2.2 does not provide any options for inverting the existing color scheme. Some manufacturers, such as Samsung, ship their own Android skins that add this feature. For instance, the Galaxy S3 and Galaxy S4 running Android 4.1.1 or later have a "Negative Colors" option within the Accessibility menu that allows for display inversion, including web pages. This is an example of how specific settings and features can vary from one Android device to another even when the same version of Android is being used.

There are some downsides to color inversion. Enabling this setting can actually decrease the level of contrast, depending on the specific items or color schemes that are used. Color inversion can also sometimes render graphics-based items, such as icons and photos, difficult to identify. Having the option to enable and disable color inversion as needed would provide the best of both worlds.

Combining Speech and Magnification

It's beyond the scope of this article to burrow deeply into the screen reading capabilities of Android (or lack thereof). A voluminous amount of material has been written on this topic alone. It would be remiss, however, to not touch on the capabilities of Android devices when it comes to their effectiveness with running their native screen magnification and speech output applications simultaneously.

When taking into consideration the diminutive screen size of smartphones and tablets, it isn't surprising that a greater number of people could benefit from at least some speech output. Android's screen reader, called TalkBack, is able to run simultaneously with the built-in magnification features. A thin yellow or orange border, depending on the foreground and background colors, outlines the specific section of the screen being read. This outlining feature can be helpful, since tracking is often a challenge if you have lower acuities and/or field loss. The outline provides much less contrast, however, against light-colored backgrounds.

Although Android 4.2.2 is able to read the text with magnification when the viewable area is static, it is unable to simultaneously track text while TalkBack is reading it.

Enabling and Disabling Speech on the Fly

A feature that becomes all the more important for people who have visual impairments and are using small screen displays is the ability to enable and disable speech output on the fly, specifically when reading lengthy texts, such as e-mails and documents.

Android 4.2.2 allows you to temporarily suspend TalkBack by opening the Global Context menu with the "down, then right" gesture, dragging a single finger to the top-left corner of the screen, and releasing your finger when TalkBack announces, "Pause feedback." You can then swipe left until the focus of attention is on the "OK" button and double tap it. Alternatively, you can double tap the "OK" button directly if you're able to locate it visually. Either way, suspending TalkBack is a multi-step process. TalkBack can be resumed by pressing the "Power" button to put the device to sleep and then re-awakening it.

Open Source vs. Closed Source

Android is an open source operating system, which means that Android can run on mobile devices and tablets from a number of manufacturers. This also means that manufacturers are able to add their own skin to the OS, customizing the Android OS to their own preferences and specifications.

Open source can be a double-edged sword. Samsung's "Negative Colors" option is an example of how open source allows manufacturers to add beneficial features for people with visual impairments. Sometimes open source can have the undesirable outcome of developers adding their own modifications and features that aren't always accessible. In addition, open source means a lack of standardization from one device and manufacturer to another: not all Android devices will have the same capabilities or require the same user skills.

Because developers need to retool Android to run on devices with a wide range of hardware specifications, it's important to be aware that not all Android-based devices are capable of running the latest version of Android even if the device has just hit the market. This means that you could purchase a new Android device today, and you may not have the option to update to the latest version of Android that includes the powerful Magnification Gestures feature mentioned earlier in this article. Before purchasing a mobile device or tablet running Android, it's important to be aware of the current version of Android running on the device and its ability to upgrade to a newer version of Android.

Apple's iOS is closed source. The source code is tightly controlled by Apple and is designed solely for Apple's iOS devices. Providing a transferable experience with operating systems from one device to another is beneficial for anyone, but it's especially useful if you are blind or visually impaired.

Android 4.2.2: What Would Make It Better?

  1. Include the color inversion option on the Google stock version of Android. It would be difficult to find an electronic magnifier or screen enhancement program worth its salt that does not include a color inversion option. Considering the proliferation of mobile devices and tablets in today's market and the importance they play in our everyday lives, including this option with all Android-based devices should be considered a standard feature. Taking it one step further, providing a series of application-specific options for color inversion (i.e., color inversion with the web browser and e-mail, but not within the OS itself) would further the level of accessibility for low vision users.

  2. Providing a gesture-based option to enable and disable color inversion on the fly would increase the level of efficiency for people with low vision. An example of when this feature could be useful is on web pages that provide poor contrast when applying color inversion.

  3. As frequently happens, creating accessibility for one target audience transfers over to benefits for another. Improving tracking features by effectively highlighting and displaying the words being spoken via TalkBack, particularly for longer texts, would not only provide dual sensory access for people who have low vision but would also increase the level of accessibility for people with learning disabilities, such as dyslexia.

  4. At this point it should be no surprise to developers that the rapid rate required for some of the tapping gestures on smartphones and tablets is challenging for some people. For decades now, both Microsoft and Apple have incorporated options to modify the double-click rate of the left and right mouse buttons within their computer operating systems. Including tap rate options (slow, standard, fast) within the Accessibility settings would provide a more user-friendly experience for people who lack fine motor skills or have slower reflexes.

  5. Enabling and disabling Android's TalkBack on the fly with a single gesture would provide much more flexibility and ease of use than the multiple-step process currently in place. Using TalkBack as needed is of significant importance since the smaller screen sizes of mobile devices and tablets will more frequently warrant the need for speech output.

  6. Identifying the available font size options is bound to be a little confusing for some people because the font settings options are listed in two locations and with different labels. For instance, selecting the "Large" text option in Accessibility provides the same font size as the "Huge" font size setting within the Display menu. Providing consistency with the location and labeling of font sizes would eliminate this confusion.

  7. The labeling of the font sizes within Android is somewhat misleading. The range in size from "Small" to "Huge" only changes from six- to nine-point font sizes using the Nexus 4. Providing an upper range of at least 20-point would increase the level of accessibility for a larger number of people with visual impairments.

The Bottom Line

It can be tempting to take a dogmatic stance when it comes to choosing an operating system. I would have been guilty of this myself if it weren't for the trajectory that Google is on for incorporating a high level of accessibility within the most recent version of Android. The waters of distinction and supremacy are more muddied than they once were. Granted, Apple's highly controlled system generates a long list of reliable and predictable software and hardware solutions. However, Google continues to take accessibility very seriously as demonstrated in its latest release, and its Magnification Gestures include usability and features that iOS does not currently have.

As an iOS die-hard of several years, I have found that the Android Magnification Gestures feature challenges my long-standing notion that iOS is the uncontested leader in mobile accessibility. Google is making a concerted effort to promote its stock version of Android to other manufacturers, which hopefully will translate into greater uniformity and accessibility in the future.

In general, Android-based devices are less expensive than iOS devices. Choosing your mobile operating system will boil down to your specific needs and your budget. As we await the anticipated release of iOS 7 this fall and the next version of Android, let's hope that these titans of technology continue to gesture toward greater accessibility.

Product Information

Product: Android 4.2.2
Manufacturer: Google Inc.
1600 Amphitheatre Parkway
Mountain View, CA 94043

Comment on this article.

AFB AccessNote Notetaker App Updated!

The American Foundation for the Blind (AFB) is excited to announce the release of an update to AccessNote, the notetaker app for your iPhone, iPad, or iPod touch. We are thrilled by how well AccessNote has been received and by all the wonderful suggestions for improvements. We have been working with our friends at FloCo Apps LLC to implement several of those improvements this spring, and AccessNote version 1.1 is now live in the Apple App Store.

Description of AccessNote

You can read more about AccessNote in our initial release announcement. Before going into the improvements made in version 1.1, here is some general information about AccessNote:

AccessNote is a powerful and efficient notetaker that takes advantage of the tremendous built-in accessibility of your Apple devices. For greater typing speed and accuracy, AccessNote is designed to be used with wireless QWERTY and refreshable braille keyboards. Several powerful customized keyboard commands for both QWERTY and braille keyboards are included that increase speed and efficiency. AccessNote is, of course, designed to be used with VoiceOver, the screen reader that comes with your iPhone, iPad, or iPod touch.

AccessNote is priced at $19.99 and can be purchased from the App Store. AccessNote has many of the features found in traditional notetakers and accessible PDAs. The app creates notes in the TXT file format, and can also import TXT files from e-mail or Dropbox accounts. AccessNote's clean and simple interface uses standard design techniques, so the layout will be familiar to Apple product users.

Screen shot of the AccessNote Home screen

Caption: The AccessNote Home screen

Improvements in AccessNote 1.1

We have made several minor and major improvements to AccessNote with the 1.1 release. Some are bug fixes, while others are additional features. Many of our AccessNote users have asked for new functionality, and we are happy to develop new features to improve the usefulness of AccessNote.

The Action Menu

The biggest change you'll see with AccessNote 1.1 is the addition of an "Action" button, which appears when you have a note open. This button brings up a menu of actions you can perform on the note. You can also bring up the Action menu when you are on the All Notes screen by highlighting a note and performing a double-tap-and-hold gesture. The Action menu items include:

  • Find in Note: Brings up the tool for searching for text in your note.
  • Find Previous: Brings up the tool for searching the text before your current cursor position.
  • E-mail as Text: Creates a new Mail message with the text of your note in the body of the e-mail message.
  • E-mail as Attachment: Creates a new Mail message with the note attached to the message.
  • Print: Brings up a dialog to print the text of your note. You need an AirPrint-capable printer to use this feature.
  • Rename: Allows you to rename your note.
  • Delete: Allows you to delete a note.
  • Cancel: Cancels the Action menu and returns you to your note.

The E-mail and Print actions listed above are new features, while the various Find and Rename actions are items that could previously be performed only with a wireless QWERTY or braille keyboard. Deleting notes could previously be done only on the device itself with a double-tap-and-hold gesture; now deleting is possible either on the device or with a keyboard.

Getting Rid of Blank Files

A major bug in 1.0 was that notes imported from e-mail messages or from Dropbox sometimes appeared as blank notes in AccessNote. We believe we have solved this issue with the update. We tested AccessNote 1.1 with over 500 imported files, and none of them appeared as blank notes.

Other Bug Fixes and Improvements

We have optimized AccessNote to work with the latest updates to the iOS operating system, and we also improved its general stability. The on-screen QWERTY keyboard used to randomly appear even when a wireless keyboard was connected, and we have worked to kill that bug. We also removed some VoiceOver stuttering and verbosity when performing the various Find commands. We had neglected to properly tag the Previous and Next buttons in our short tutorial found in the Help screen. That has also been corrected with AccessNote 1.1.

Instability and Navigation Delays with Large Files

Several of our AccessNote users have told us about instability when reading and editing large notes. Several of you have told us that you are importing TXT versions of books you have scanned or converted from the BRF format. We have done extensive testing and have confirmed that when a note's size approaches 100 kilobytes, navigation gets clunky, and the Find commands can take a bit too long. Although that is still true for AccessNote 1.1, we are working to develop a solution for large files. We initially designed AccessNote to be a tool for taking notes in class and in meetings, so we did not think of importing large book files to read with AccessNote. We thank our AccessNote users for opening our minds to that idea, and we will do our best to improve the app's performance with large files.

We Want Your Feedback

We are obviously very excited about AccessNote, and we hope you all go out and buy it as soon as you can. We want to take advantage of the power of iOS devices to allow students and professionals to use the same mainstream device that their sighted peers are using. We, of course, will be anxiously looking for more feedback from those who use AccessNote, and we are also interested in more of your suggestions for improvements and new features as we develop future updates to AccessNote. We are already busy trying to kill the large-file bug, and we are working on being able to import and edit more file formats.

Comment on this article.

Apps, Phones, Tablets, Computers, Bookreaders, and Strategies for Teaching Technology: AccessWorld has it Covered for Back-to-School!

Lee Huffman

Yes, that's right. I said, back-to-school. It's almost here again. I know the students out there don't want to hear these words, but it's time to think about the start of a new school year.

New classes, new instructors, class projects, oral presentations, tests, new people, and maybe even a new school or moving away to college bring about uncertainty and new challenges. Uncertainty is not necessarily a bad thing. This time of year can be exciting, too, especially if you plan ahead and prepare in advance.

Pursuing a good education can be difficult under the best of circumstances, and doing so as a person with vision loss can increase the challenge.

For the students in our readership, you must take personal responsibility for your education. Ultimately, you must be your own advocate. Prepare in advance, speak to instructors, and tell those you'll be working with exactly what types of accommodations will best meet your needs. Your education will have a tremendous impact on every aspect of the rest of your life, so it's crucial that you do everything you can to get the most out of your studies.

Good planning prevents poor performance. It's never too early to begin planning for the next school term whether you're in elementary school or graduate school. Acquiring and learning to use the mainstream and access technology that best suits your situation, registering as early as possible for classes, obtaining reading lists, and searching out alternative formats should be done as soon as you can. Waiting until the last minute is a recipe for disaster.

Just as we have done for the past two years in the July issue, the AccessWorld team once again focuses on providing valuable information and resources for students, parents, teachers, and professionals in the vision loss field to help make educational pursuits less stressful and more enjoyable. We are excited to bring you the information in this issue, and we sincerely hope you will find it useful.

If you are purchasing a new Mac for the school year, Janet Ingber's article, Working with Text and VoiceOver on a Mac, will be a great asset for all those essays you will surely be writing. Bill Holton's article, A Review of the TapTapSee, CamFind, and Talking Goggles Object Identification Apps for the iPhone, will help you choose an object identification app that suits your needs.

Maybe you will be taking a new iPad back to school with you. If so, Deborah Kendrick's review of iOS Success: Making the iPad Accessible: A Guide for Teachers and Parents (National Braille Press) might be of interest.

Are you thinking about a new Android smartphone or tablet as you plan for hitting the books, but wonder how these devices fare with low-vision accessibility? In Android's Big Step Forward: A Review of the Screen Enhancement Features of Android 4.2.2, John Rempel tests accessibility features using the Nexus 4 smartphone and the Nexus 7 tablet. Additionally, Aaron Preece updates AccessWorld readers on the new changes to HumanWare's Victor Reader Stream.

Rounding out the Back-to-School issue, Larry L. Lewis, Jr., presents one instructor's way of teaching technology using Skype in his article, Walking in Two Worlds: One Educator's Quest to Change the Technology Landscape for Today's Visually Impaired Students.

We on the AccessWorld team wish you good luck and good planning as you head back to school!

Sincerely,
Lee Huffman, AccessWorld Editor-in-Chief
American Foundation for the Blind

Comments and Questions

Dear AccessWorld Editor,

My comment is in regard to Bill Holton's article in the May 2013 issue, Voiceye: A Breakthrough in Document Access.

Indeed, a bottom-up consumer approach will help get us the power of this evolving technology. Clearly in tight economic times, corporations and governments are less inclined to make the initial investment, so the start has to be grassroots-based. It's a case of evolutionary social change, and I sincerely hope that the US vendor will listen and move to make a user-friendly software app available. I do believe that approach will give us something long-term.

Sincerely,

Leo Bissonnette

Dear AccessWorld Editor,

My question comes in response to reading the May 2013 article, TextGrabber + Translator from ABBYY and the StandScan Pro: A Review of Two Products, by Janet Ingber.

I would like to know how TextGrabber + Translator compares to Prizmo in speed and accuracy. Any information you can provide will be appreciated.

Thanks,

Ron Armale

Response from AccessWorld author, Janet Ingber:

Hello Ron,

Thanks for your question.

Prizmo's recent update has made the app a viable option. It does have one good feature that TextGrabber doesn't, which is the ability to create documents with multiple pages. I still like TextGrabber better. In my opinion it gives a better scan and has a simple interface. Both of the apps are good, however. There is a Prizmo app for the Mac, but it is not completely accessible. I hope this information is helpful.

AccessWorld News

Take the AFB Described TV Survey

Please take just a few minutes to participate in the AFB Described TV Survey, and let us know about your experiences accessing and enjoying television programming with video description. (Description is oral narration of on-screen visual elements and actions, spoken during natural pauses in program dialogue.)

In the survey, you'll be invited to tell us what your favorite described programs are and which currently undescribed programs you would most like to have described. By taking this survey, you will help AFB and our field as we work to better understand how well the major broadcast and cable networks are complying with the law and how satisfied viewers with vision loss are with their program offerings.

Your answers will be completely anonymous. You may choose, however, to provide your zip code when prompted. The law requires that video description be provided in the top 25 television markets, and all broadcast stations and cable companies must pass description through to customers unless some exception applies. Providing your zip code will help us better track how well broadcast stations and cable companies in specific TV markets around the country are doing to comply with the law.

We will be collecting survey responses until July 15, 2013.

Take the AFB Described TV Survey.

Odin Mobile Announces the First Mobile Service Dedicated to the Blind and People with Low Vision

Odin Mobile, a mobile virtual network operator (MVNO) on the T-Mobile network, recently announced the country's first mobile service designed for the sole purpose of improving wireless accessibility for the visually impaired.

Beginning in July, Odin Mobile will offer comprehensive cell phone service for the visually impaired, including accessible handsets, rate plans for most budgets, and a unique customer service experience designed to address the needs of its customers. This unique experience will include sending user guides to each of its customers via e-mail in Word and HTML formats, and providing customer support representatives who are expert in the accessibility features of its phones.

Odin Mobile will offer a range of accessible handsets, including the RAY, an innovative mobile device developed by Project RAY Ltd, which features a unique user interface built from the ground up for eyes-free operation. This device offers users a range of capabilities, such as calling and SMS, contact list services, calendar, GPS, advanced Web remote assistance, voice recorder, emergency services, and more. In addition, the RAY provides access to audio books, newspapers, and magazines with one user interface across all services and applications for ease of use.

Odin Mobile will also offer mobile phones for those who are visually impaired and simply want to make calls and send text messages. These mobile phones, manufactured by Emporia, will be easy to use and have accessibility features, including buttons and functions that speak as well as a high-contrast display.

Odin Mobile will donate two percent of its voice and text revenue to organizations dedicated to serving the needs of the visually impaired.

Nussbaum and Lewis, Former National Library Service for the Blind and Physically Handicapped (NLS) Librarians, Receive ASCLA Awards

Ruth J. Nussbaum, retired reference librarian, NLS, Library of Congress, and Jill Lewis, retired director of the Maryland State Library for the Blind and Physically Handicapped (MDLBPH), an NLS network regional library, are recipients of two Association of Specialized and Cooperative Library Agencies (ASCLA) awards.

ASCLA, a division of the American Library Association (ALA), selected Nussbaum for the 2013 Cathleen Bourdon Service Award for exceptional service and sustained leadership and Lewis for the 2013 Francis Joseph Campbell Award, which recognizes a person or institution that has made an outstanding contribution to the advancement of library service for the blind and physically handicapped.

Nussbaum worked as a reference librarian at NLS from 1987 to 2012. An ASCLA member since 1990, she has been chair of the Librarians Serving Special Populations Section of ASCLA, a member of the Century Scholarship committee, a representative to the ASCLA Board of Directors, chair of the Francis Joseph Campbell Award Committee, a member of the ASCLA Awards Committee, and representative to the ASCLA board. Nussbaum also served as an ALA councilor-at-large from 2004 to 2007 and has long been involved in the American Indian Library Association. She has made significant contributions to professional documents and guidelines, including accessibility policies for both ALA and ASCLA, fact sheets, bibliographies, and other publications addressing library services for people with disabilities.

Lewis served as the director of the MDLBPH from 2003 to 2012. Under her leadership, the library developed partnerships that provided a community center for library users with print disabilities. The center includes adaptive technology, cultural programs, and an interactive children's reading center. She previously worked as a reference librarian for NLS, where she conducted a study of educational reading services for individuals with print disabilities and prepared publications for the Reference Section. In 2012, Lewis was awarded the Distinguished Service Award from the National Federation of the Blind of Maryland and was presented with a Governor's Citation for Outstanding Service. She has been active within ALA and ASCLA since the 1990s and serves on the board of the Montgomery County Public Library in Maryland.

Both women were presented with their awards during the ALA 2013 Conference at the ASCLA/COSLA reception on Saturday, June 29, at the Hyatt Regency McCormick Place in Chicago.

Perkins School for the Blind, Helen Keller National Center, and FableVision will Lead the iCanConnect Campaign

Many thousands of Americans who have combined loss of hearing and vision may soon connect with family, friends, and community thanks to the National Deaf-Blind Equipment Distribution Program. Mandated by the 21st Century Communications and Video Accessibility Act (CVAA), the Federal Communications Commission (FCC) established this new program to provide support for the local distribution of a wide array of accessible communications technology.

The FCC is also funding a national outreach campaign to educate the public about this new program. The iCanConnect campaign will be conducted jointly by Perkins School for the Blind in Watertown, MA, the Helen Keller National Center in New York City, NY, and FableVision of Boston, MA. iCanConnect will seek to ensure that everyone knows about the free communications technology and training that is now available to low-income individuals with combined hearing and vision loss. From screen enlargement software and video phones to off-the-shelf products that are accessible or adaptable, this technology can vastly improve quality of life for this population.

iCanConnect seeks to increase awareness about the availability of communications technology for this underserved population, so people who are deaf-blind and have limited income can remain safe and healthy, hold jobs, manage their households, and contribute to the economy and the community.

Information about the new equipment distribution program is available online at the iCanConnect website or by phone at 800-825-4595. Additional information is available through the online FCC Encyclopedia.

"With the right technology, people with disabilities can link to information and ideas, be productive, and move ahead," said Steven Rothstein, President of Perkins. "Perkins' most famous student, Helen Keller, exemplified the potential of a person who is deaf-blind. We are proud to have a role in this transformational program."

The CVAA, championed in Washington, D.C. by Congressman Edward J. Markey of Massachusetts and Senator Mark Pryor of Arkansas, acknowledges that advances in technology can revolutionize lives. Nearly one million people in the United States have some combination of vision and hearing loss. People with combined loss of vision and hearing as defined by the Helen Keller National Center Act whose income does not exceed 400 percent of the Federal Poverty Guidelines are eligible to participate in the new program.

"The mission of the Helen Keller National Center is to enable each person who is deaf-blind to live and work in his or her community of choice," explains Executive Director Joe McNulty, adding, "This critical technology access program accelerates those efforts but only if people know about the resources. iCanConnect is poised to get the word out, coast to coast."

"FableVision's mission is to help ALL learners reach their full potential," said Paul Reynolds, CEO of FableVision Studios. "With this program we advance that mission, helping spread the word about equal access to tools that offer those with hearing and vision loss the transformational power of technology." Reynolds adds, "Now everyone is invited to the technology promise powering the human network."

Walking in Two Worlds: Dr. Denise Robinson's Quest to Change the Technology Landscape for Today's Visually Impaired Students

The Problem: A Disconnect Between Expectations and Reality

A monumental obstacle for today's job seeker with a vision impairment is the disconnect between meeting the mainstream educator's expectations (or performing at a level that makes him or her a valuable asset to an employer) and receiving the technology training required to do so. It's both unrealistic and unfair to expect the next generation of technology users who are visually impaired to create visually formatted PowerPoint slides, embed multimedia elements into a file, or create and edit a color-coded bar graph at a level comparable to their sighted peers without excellent, iterative training from qualified instructors.

The Current Solution

At this stage, no standard has been developed that today's access technology trainer must meet in order to provide access technology evaluations and subsequent training for people who are visually impaired. While various organizations have created programs to "train the trainer" in how to most effectively provide these technology services, at the end of the day, a service provider, sighted or visually impaired, who has a laptop with a screen reader installed may fill out the necessary paperwork and advertise himself or herself as a service provider in most states throughout the US, regardless of his or her ability to actually instruct consumers. The hourly rate that service providers charge also varies from state to state.

Consumers who are visually impaired may receive technology training at an agency, in their homes, or, in some instances, online. It's not the intent of this article to revamp how technology services are deployed throughout the US, but we as an industry must advocate for the high quality training received by sighted technology users to be adapted and made available to current and future generations of access technology users. One such individual has made it her crusade to make this happen!

A Change Agent

"It was never my intention to be a teacher of the visually impaired," states Dr. Denise Robinson, President of TechVision LLC, a company she founded almost three years ago. "I was attending Whitman College in Washington State and studying to become an English teacher when my world was turned completely upside down."

In her last semester of college, diabetic retinopathy caused Robinson's retinas to hemorrhage, resulting in her losing virtually all of her visual acuities in both eyes. This life-altering event forced her to depart college abruptly, just shy of graduation. After getting over the initial shock of losing her vision, she moved to Michigan where she had a friend in the medical field, a retinologist who became one of the pillars in her new support system. "I didn't know the first thing about how to do the world blind, but once I pushed through the feelings of self-pity and other emotions that accompany such a loss, I set out to learn how."

Robinson enrolled in the Vision Education program at Eastern Michigan University where she met the mentor who would spark her interest in providing education and technology services for people who are visually impaired.

"I owe much of what I've learned and who I am today to Ted Lennox, a blind faculty member who became my mentor. My exposure to access technology began with the Apple II computers in the mid-80s. We were fixated on pushing this and all technologies that came our way to their absolute limits: testing them, taxing them, getting all that we could out of them. He taught me to believe that technology could never defeat us but, rather, empower us."

Robinson used this time to not only fuel her passion for this newfound technology but also to continue to hone her ability to teach. She completed her internship under Lennox and received her Bachelor of Arts as a Teacher of the Vision Impaired while completing her degree in English. Then, she completed the graduate program at Western Michigan University, focusing on early childhood education while already working her first job in the industry as a teacher of the visually impaired.

During this period of approximately ten years, two things began to happen. The technology began to advance beyond the scope of the Apple II and its synthesizer, and Robinson's vision began to slowly return! "I had gone from a fully sighted college student on the verge of getting my degree to someone who had to adapt to interacting with the world audibly and tactilely. As a result, I so get the necessity for learning how to read and write braille, and I understand from the perspective of someone who cannot see print what a powerful tool technology can be for the person who is visually impaired and learns how to effectively use it. I not only understand this truth, but I lived it."

Robinson began her career as an itinerant vision teacher and, amazingly, regained enough vision that she could drive during the daytime. Over the following ten years, her vision would improve to 20/15, making it possible for her to drive at night again. Her time as an itinerant teacher for multiple school districts provided her credibility in the trenches where she learned how to assess students' needs and communicate them to well-meaning administrators who were at a loss as to how to best serve their students who were visually impaired. "Administrators generally want to do the right thing for their students and want to believe that there are better ways for their students to learn and excel. I made it my mission to show them how," states Robinson.

As her caseload began to exponentially expand and the number of miles she would drive per month began to grow, Robinson believed that there had to be a better way for her to provide these services. "I spent more time in the car than I did teaching students and began to see a profound need for service delivery in very rural, scattered school districts. I knew there had to be a better way!"

While working full-time and driving an average of 3,000 miles per month, Robinson took on the task of earning her PhD in Instructional Online Design from Capella University, a degree she hoped would help her realize her dream of providing top-notch distance learning to more students. Robinson began to learn how to effectively deliver content using the technology that she had grown to love.

She voraciously learned concepts, such as website accessibility, along with methodologies designed to convey ideas, skill sets, and practical know-how to the very audience she desired to serve more effectively. Her ability to interact with technology from the perspective of a user who is blind, coupled with her ability to visually inspect a computer application or an assignment, means she is able to strategize with her students who are visually impaired on how to use a given technological solution to its fullest to conquer any challenge they are facing.

In 2006 she received her PhD and launched TechVision, the vehicle for accomplishing her mission. She began by taking on only one student who is visually impaired. "I wanted to really beat up this concept of distance learning with one student, so I could fine-tune all the hiccups and iron out any wrinkles."

During this time, she created a plan whereby she could provide real-time audio lessons to her students via remote access simply by using a computer, Skype, and remote access to the students' PCs. Once the word was out about TechVision, news began to travel fast, and just over two years ago, Robinson quit her full-time job to devote her entire energy to the company.

"Teachers, administrators, and parents are wildly happy with the idea of using something free, such as Skype, to interact with their students, and despite the realities of varying bandwidth speeds and PC specifications, instruction has gone remarkably well." Along with the top-notch instruction that TechVision students receive, the TechVision website is a rich resource full of lesson plans and news about the latest technologies and technology trends.

What You Can Expect

"Once the student connects with us on Skype (yes, it's the student's responsibility to call his/her TechVision instructor), we immediately ask them, 'What do you need to do today?' It's up to us to be able to respond quickly and effectively to meet our students' needs during the 50–60 minutes that we have access to them," Robinson explains.

Dr. Robinson's business has taken off over the past two years. She's very selective about bringing on extra assistance to help with her expanding caseload of students, for it's important to her that the instructors who work with her share both her passion and her commitment to educational excellence before she introduces them to her students.

"There [are] a lot of people out there who fancy themselves to be technology experts, but when I begin throwing real-life scenarios at them, they're not able to respond in a timeframe nor at a level of skill that is acceptable for the level of service that I want to provide to my students. I'm committed to TechVision students getting the most comprehensive instruction and practical application using access technology, and I'll only expand as quickly as my resources will allow me to do so."

Dr. Robinson provides virtual instruction to online students all over the United States. These students are predominantly K-12, but she also provides instruction to adults. Students are expected to come prepared for their next lesson, beginning with the completion of their assignments.

"I'm here to get my students ready for real life!" she says. "If they're not prepared for me, how will they be prepared for their teacher? If they're not prepared for their teachers, how will they be equipped for college or the workplace?"

Dr. Robinson sees the mission of TechVision growing at the same rapid rate at which technology shifts and changes. She's spending far more time teaching and far less time driving, and school districts can rest assured that their money is being used solely to fund their students' learning. Her students are the recipients of her passion. It's through her own journey of walking in both worlds, that of the visually impaired and that of the fully sighted, that she can offer such a unique blend of technology training to an industry thirsty for an alternative way to deliver these services.

To be sure, Dr. Robinson is only human and needs time to decompress and recharge her batteries. Her personal interests include gardening, woodworking, rehabbing houses, working with her hands, and extensive hiking. Dr. Robinson's commitment to excellence shines through in every lesson she delivers, and that commitment, together with her knack for problem solving, makes her a one-of-a-kind pioneer for students who must rely on technology to level the playing field in the classroom and the workplace.

Comment on this article.

iOS Success: Making the iPad Accessible: A Guide for Teachers and Parents, by Larry L. Lewis (National Braille Press)

While not quite as prevalent as the Apple iPhone, that sleek wonder tablet called the iPad is arguably a close second in popularity among iOS devices. Professionals and soccer moms use them for everything from watching films, tracking reports, and doing rapid online research, to handling e-mail, and playing games. The iPad is also attracting plenty of attention in educational settings. In some school districts around the country, an iPad is assigned to every student in elementary and secondary classrooms, and the trend is rapidly gaining momentum.

What does that trend mean for kids who are blind or visually impaired? Thanks to the built-in accessibility features of the iPhone and iPad, and the option of using these devices in conjunction with refreshable braille displays or Bluetooth QWERTY keyboards, students with vision loss have an equal shot at participation.

However, using the iPad with access tools involves an initially steep learning curve, not just for the kids themselves but also for the teachers and parents who need to guide them through the iPad adventure.

The latest offering in the growing treasure trove of handbooks for iOS users from National Braille Press (NBP) comes at the perfect time for those who need it. Just in time for back-to-school preparation, NBP has released iOS Success: Making the iPad Accessible: A Guide for Teachers and Parents, written by Larry L. Lewis.

A Tour of the Book

Making the iPad Accessible is clear and concise in its organization. While the structure of its chapters makes an easy business of locating the explanation of a particular task (say, setting up e-mail or making a note), the book is also short enough to be read from start to finish in a few hours. With book and iPad in hand, a teacher or parent has a self-guided tour through the process of taking the iPad out of the box, getting acquainted with its physical structure, powering it up, and turning on VoiceOver (the iOS screen reader) to get started.

The author guides the reader through making changes to key settings, rendering the iPad a friendlier environment to the VoiceOver or Zoom (the iOS screen magnifier) user, and provides exposure to most of the iPad's basic apps. The iPad offers two approaches to accessibility for students who are blind or low vision. VoiceOver reads everything on the screen and requires a variety of specific gestures or input commands to use. Zoom magnifies the screen for low vision students. While both are thoroughly discussed in the book, more attention is given to VoiceOver since the Zoom experience more closely replicates that of a typical user who is sighted.

For a parent or teacher who is sighted, using VoiceOver can be daunting at first, and this book eases the reader into the VoiceOver environment with encouragement and clarity. Readers will learn the basic VoiceOver gestures so that, whether they are blind or sighted, they can simulate the iPad experience that a student will have.

After establishing a basic comfort level with VoiceOver and its gestures, the book provides guidance through the use of real-life tasks, such as:

  • Searching and navigating the Internet
  • Setting up and using e-mail
  • Locating, downloading, and navigating books
  • Sending and receiving iMessages
  • Using word processing apps
  • Importing, creating, and sharing documents

For many of these tasks, simple exercises are provided that will enable both teacher and student to see the power of the iPad in action.

The book provides information for using the basic apps built into every iPad as well as for searching the App Store for additional apps that will be of particular benefit to students who are blind or visually impaired. Particular attention is paid to document types that will and won't work for a student who is blind as well as the most efficient methods for sharing documents between student and teacher. After all, it's one thing to tell a teacher or parent that you can, indeed, provide a student who is blind with a worksheet via the iPad and ask him or her to complete and return it using the same device, but a much better thing is to demonstrate how that is accomplished. For many busy parents and teachers, discerning precisely how to accomplish such tasks can be overwhelming. This book provides readers with the needed information and delivers it in clear enough language that even newcomers to iOS devices will be able to follow.

Using External Accessories

The author stresses the value of using either a wireless QWERTY keyboard or a refreshable braille display with the iPad to increase speed and efficiency in school. Use of a refreshable braille display, in particular, is strongly recommended for students who are totally blind. Step-by-step instructions for pairing keyboards and braille displays are given, as are keystroke command lists for interacting with VoiceOver or Zoom. Using an Apple device with braille is not without its quirks, and those, too, are explained.

Speaking of Quirks

While the language in this book is, for the most part, clear and straightforward, it was sometimes difficult to follow. The use of pronouns is one matter that seemed unsettled. The authorial perspective of the book usually, but not always, takes first-person plural ("we") even though only one author is listed. In addition, the audience is sometimes addressed as "you" and other times as "we."

There were a few unfortunate copyediting oversights that made reading confusing at times. In a discussion of word processing, for instance, the book reads, "The first good are better for...", and there are inconsistencies with capitalization (e.g., "VoiceOver" versus "Voiceover").

One convention that might be particularly annoying to braille readers, but may go unnoticed by those reading the book in print or through listening, is the manner in which braille keyboard commands are presented. Keyboard commands from braille displays are typically a combination of the numbered keys, 1 through 6. The command to go to the Home screen on the iPad, for instance, is executed by pressing the Home button on the iPad itself or the space bar plus the letter H from the braille display. (The braille letter H is formed by pressing the keys for dots 1, 2, and 5 in combination.) The way this is generally indicated in educators' manuals or user guides for braille devices is to connect the names of keys with hyphens. That is, in the case of directing the reader to press the keys for H, an instruction will read "Press 1-2-5." In this book, the word "plus" is used instead of those hyphens. Thus, the same instruction reads, "Press 1 plus 2 plus 5." While this is admittedly a somewhat tedious point, I found it annoying as a braille reader. Instead of a few spaces, a keyboard command might take up nearly an entire line written in this way, and since keyboard commands are frequently and generously provided, the issue comes up a lot.
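For readers unfamiliar with the dot-numbering convention, a tiny lookup table makes it concrete. The sketch below is a generic illustration of standard braille dot numbering (dots 1-2-3 run down the left column of the cell, 4-5-6 down the right) and is not drawn from the book; only a few letters are shown.

```python
# Each braille letter is a set of raised dots numbered 1-6.
# Illustrative subset only; not a complete braille table.
BRAILLE_DOTS = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 2, 5}): "h",  # the "press 1-2-5" example from the text
}

def letter_for_dots(dots):
    """Return the letter formed by the given dot numbers, or '?' if unknown."""
    return BRAILLE_DOTS.get(frozenset(dots), "?")
```

Here `letter_for_dots([1, 2, 5])` returns "h", the letter in the Home-screen command example above.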

Who Should Read This Book?

Although the title indicates that this is a guide for parents and teachers, it's also well worth reading for any person who is blind or visually impaired and interested in using the iPad. Instructions often include specific VoiceOver gestures, such as when to double tap or swipe right, as well as visual indicators, such as the appearance of a particular icon or its location on the screen. Sometimes the location of a button or tab is given (upper right-hand corner of the screen, for example, or across the bottom), but even when this information is not included, the reader is given enough information to render the instruction completely usable.

Since the guide is available in so many formats, including large print, braille, or downloadable online or through CD in DAISY, MP3, eBraille, or Microsoft Word formats, anyone interested in learning or teaching the use of the iPad with blindness accessibility will find this book well worth its price.

One final note on approaching this book: My suggestion to any reader with even slight hesitation about using the iPad in the classroom is to begin by reading chapter 8. Though the chapters in this book are extremely well-organized, this final chapter, entitled "Let's Hear from the Experts," contains letters from two students, a teacher, and a parent of blind students. These letters, particularly the one written by the parent, are so alive and compelling, offering vivid images of the iPad in daily use, that reading this chapter first will inspire you to go immediately to the beginning of the book and begin double-tapping your way to success!

Ordering Information

Title: iOS Success: Making the iPad Accessible: A Guide for Teachers and Parents
Author: Larry L. Lewis
Format: Available in large print, braille, on CD as MP3, DAISY, Word, or eBraille, or downloadable in all of the same formats
Cost: Large print, $28; all other versions, $20.
To order, visit the National Braille Press website or call 800-548-7323

Comment on this article.

An Evaluation of the RAY G300, an Android-based Smartphone Designed for the Blind and Visually Impaired

Project-RAY Ltd. has developed, in partnership with Qualcomm, the RAY device, a smartphone interface for the visually impaired. The device consists of an Android smartphone with custom software that simplifies the interface so that it's easier for someone with a visual impairment to use than a standard smartphone like a Nexus Android device or an iPhone. The device is currently for sale at the Project-RAY website, and it will be available in July from Odin Mobile, a cell phone carrier designed specifically for the blind and visually impaired.

For this evaluation I will describe the $500 model from the Project-RAY website. This device is a Huawei G300 running Android 2.3.6 (Gingerbread). I will describe the phone, interface, and the device's capabilities as well as enhancements that are scheduled for the US release of the RAY device.

Physical Description of the Huawei G300

The face of the Huawei G300 is taken up entirely by the four-inch touchscreen. On the left edge of the device, close to the top, are two raised volume buttons. On the top edge is a standard 3.5 mm headphone jack on the left and a raised power button on the right. The right edge contains nothing while the bottom edge contains the Micro USB charger port. The back of the device can be removed by prying it from one of the corner edges, but this process requires the user to exert a significant amount of force. Under the back cover, you'll find the removable lithium-ion battery at the bottom half of the interior. There is a horizontal standard SIM card slot in the top left portion of the interior. To the right of the SIM card slot is a vertical MicroSD card slot. Overall, the device's buttons are tactile, and the simple design makes it easy to learn the device's surface. The interior of the phone is textured and easily distinguishable as well.

User Guide and Interactive Tutorial

You can download the RAY user guide from the RAY support webpage in DOC format. In WordPad, the file displays extra characters between pages and chapters. The document remains accessible, however, as the content isn't obscured by this glitch. The document contains diagrams that are labeled, but not described. Unfortunately the labels don't provide any useful information about the contents of the diagrams. The official guide seems to be outdated, as many apps are not described.

Even though the user guide is not very useful at this time, the RAY device contains a helpful interactive tutorial called Wizard, which describes each system control and allows you to practice the gestures and control screens. There are goals established for each practice, and the device will alert you to your statistics for each practice so that you can be sure that you are operating the interface correctly. The apps are very simple, and after completing the tutorial, it's not difficult to understand the apps even without written instructions.

The RAY Interface

The RAY device uses three main types of controls: menus, lists, and management screens. In addition to these three types of controls, the interface contains a special keyboard and dialer for text and number input. All apps (except for those that use the camera) use only these types of controls. Camera apps have a special Camera screen for their interface. The controls are designed to be easily understood and used by the visually impaired so that an individual who is blind can easily learn the operation of the phone and not be confused by differing app layouts.

Menus

Menus are used as the control for the Home screens as well as for certain apps, such as the Telephone and Messaging apps. The screen is organized like a telephone keypad, though the positions are not static. Rather, the screen orients to where you place your finger: wherever you place your finger becomes the 5 position. The apps are arranged in a square around the 5 position, so there are eight icons on each screen. For example, the Telephone menu is located at the 2 position, and the Previous Screen icon is situated at the 1 position. Though you might place your finger at the traditional 6 position, the device would identify that spot as the 5 position. In that situation, you could not use the apps in the 3, 6, and 9 positions, because the device will have placed them off of the side of the touchscreen. Because of this, it's best to place your finger in the center of the device so that all of the app positions will be accessible on the screen.

Once you have highlighted the app you want, simply lift your finger to open it. If you touch the screen, but don't want to make a choice, simply drag your finger off of the edge of the device, and your action will be canceled. There is a sound to alert you when you have placed your finger on the screen and a sound to identify when you have canceled an action. The device also speaks "Cancel" when you are close enough to the edge of the device to cancel an action. The device vibrates and announces the selection when you gesture to an app. Once an app has been activated, the device vibrates and announces what has been opened.
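The relative layout described above can be sketched in a few lines. The snippet below is an illustration of the idea rather than RAY's actual code, and the cell spacing is an assumed value:

```python
CELL = 100  # assumed spacing, in pixels, between adjacent keypad positions

def keypad_position(touch_x, touch_y, lift_x, lift_y, cell=CELL):
    """Map a finger movement to a telephone-keypad position (1-9).

    (touch_x, touch_y) is where the finger first lands; that point
    becomes position 5, and the other eight positions are arranged
    in a square around it. (lift_x, lift_y) is where the finger is
    lifted to make the selection.
    """
    # Quantize the offset into one of three columns and three rows,
    # clamping so a large drag still lands on the outer ring.
    col = max(-1, min(1, round((lift_x - touch_x) / cell)))
    row = max(-1, min(1, round((lift_y - touch_y) / cell)))
    # Keypad rows read 1-2-3 / 4-5-6 / 7-8-9, top to bottom.
    return (row + 1) * 3 + (col + 1) + 1
```

Lifting the finger without moving it returns 5, matching the behavior the article describes: the initial touch point always becomes the center of the keypad.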

Lists

Apps use lists to present many choices, such as in the Settings and Information screens. When a list appears, the device will announce the number of items in the list. The previous screen item is not taken into account in the device's calculations of how many items are in the list. To move within the list, you place your finger on the screen and the device will announce the item currently selected. You can make the list scroll upwards and downwards by slightly moving your finger upwards and downwards on the screen. An ascending tone sequence indicates that the list will scroll upwards, and a descending tone sequence indicates that the list will scroll downwards. Once you gesture to start the list scrolling, the scrolling will continue until you lift your finger to make a selection. You can tap with another finger to cut off the speaking of an item and advance to the next. You cannot do this quickly to move by multiple items as the list must stop for a moment on every item. A single tone will play to alert you when you reach the first item in the list.
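The scrolling behavior lends itself to a similar sketch. This is an assumption-laden illustration (the drag threshold is invented), not the device's implementation:

```python
THRESHOLD = 20  # assumed pixels of vertical drag needed to start scrolling

def scroll_direction(start_y, current_y, threshold=THRESHOLD):
    """Return -1 (scroll up), +1 (scroll down), or 0 (no scrolling yet)."""
    delta = current_y - start_y
    if delta <= -threshold:
        return -1  # ascending tone: the list will scroll upwards
    if delta >= threshold:
        return 1   # descending tone: the list will scroll downwards
    return 0       # finger still resting: current item stays selected
```

Once a nonzero direction is returned, the device keeps scrolling in that direction until the finger is lifted to select the announced item.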

Management Screens

Management screens are screens where options are toggled, and actions, such as playing music, are performed. You place your finger on the screen and cycle through the available options by moving your finger slightly to the left or right. Though functionally these screens are simply horizontal lists, you can't cut off the announcement of the current item by tapping with another finger, and you won't hear a tone to tell you which direction you are moving in. It takes much longer to advance from one item to another in these management screens.

Keyboards and the Dialer

When entering numbers and text, you use the most complicated types of controls, the keyboard and dialer. These screens have a similar layout to one another. The dialer is arranged like a standard telephone keypad. Because the device recognizes where you place your finger as the 5 position and arranges the screen accordingly, it's necessary to place your finger in the center of the screen so that you can access all of the numbers. The numbers and options available on the keyboard are read aloud when they are selected, and you lift your finger to activate the number or option you want to use. Options such as "Delete" and "Read" are located on numbers instead of having separate buttons. For example, the "Finish Typing" option is located in the 6 position. To activate it, you gesture right to the 6, and the device will read "6" and, then, "Finish Typing." The options cycle so that, if you miss the option you wanted to use, you can wait until it returns, then lift your finger to activate it.

The keyboard is very similar except for the number positions, which contain letters in addition to their respective numbers and options. For example, the letters A, B, and C are located in the 2 position and D, E, and F are located in the 3 position. This is the style of typing originally used for text input on cell phones without a QWERTY keyboard.
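This classic multi-tap scheme is easy to illustrate. The snippet below is a generic sketch of multi-tap decoding, not RAY's code, using the standard letter-to-key layout the article mentions:

```python
# Standard phone keypad letter groups (keys 2-9).
MULTITAP = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

def decode_multitap(presses):
    """Decode a sequence of (key, tap_count) pairs into text.

    Tapping "2" once yields "a", twice "b", three times "c";
    extra taps simply cycle back through the key's letters.
    """
    out = []
    for key, taps in presses:
        letters = MULTITAP[key]
        out.append(letters[(taps - 1) % len(letters)])
    return "".join(out)
```

For example, `decode_multitap([("4", 2), ("3", 2), ("9", 3)])` spells "hey".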

The Online Interface

The RAY device is connected to an online interface for manipulating various aspects of the phone. This interface can be accessed by activating the "Log In" link on the Project-RAY website. You are able to add music and books to the device from the online interface as well as add contacts and calendar items. You can also cause your device to emit a loud sound for easy location and see its location on a map, but unfortunately, the map is inaccessible. The online interface is very easy to navigate, and it uses accessible links and edit boxes as controls that pose no issues for screen readers. When you choose an option from the list of links, the options for that link appear below the main list of options. A screen reader user would need to be sure to arrow below the options to learn whether the selected options have appeared. Currently, there are no headings to differentiate between the main list of options and the secondary options that appear when an option link is activated. Having a heading below the first list of main options would be useful, so screen reader users could immediately jump to the options they need.

The Apps

Even though the RAY interface is not as robust as a standard smartphone's, it includes many of the apps found on modern smartphones. These include both productivity tools, such as a voice recorder and calendar, and media apps, such as a music player and audio book reader. These apps are all very simple, but they provide decent access to commonly used smartphone services.

Communications Apps

The RAY device supports most of the basic functions of a standard cell phone. You are able to make calls through either a Contacts list or through a manual dialer. The keyboard remains on the screen just in case you need to enter numbers during a call. Pressing the Power button will end an in-progress call. You are able to check your log of missed and incoming calls by using the Telephone menu. You can also specify contacts as favorites and add several numbers to a list of emergency contacts.

You can add contacts through the online interface; they will then appear on your RAY device, and you can manage them from the device itself. For each contact, you can listen to the contact information, add a note, mark the contact as a favorite, or delete it, as well as call the contact or send a text message. You can also send messages and check your unread messages from the Messages menu.

Media Apps

There are several apps for playing media on the RAY device. One is the Library app. This app allows you to download and play audiobooks and audio magazines from the Israeli Central Library for the Blind (Project-RAY is based in Israel). When you choose books from the online interface, they will be downloaded to your device. The app's main screen is a standard menu with eight options. You can select the book you wish to listen to from a list, and you can play and navigate through the book or magazine using a management screen. You can also delete books that you are finished listening to through the Library app.

Another app for listening to media is the Internet Radio app. It currently offers several preset stations, which are displayed in a list, with a management screen for listening to a specific station. At this time, you can't remove stations or add new ones.

The Music app displays a list of various artists and albums. Selecting one of these will place you in a management screen where you can play the current song or navigate to another, either by using the "Previous" option or by selecting a song from a track list. You must use the online interface to add music to the device. The online interface will ask for the artist and album before allowing you to select the tracks you wish with the standard Windows upload dialogue box. Even though tracks appear on the MicroSD card, you are unable to add files directly to the MicroSD card or delete music.

The last media app is the Recordings app, which is located in the Calendar menu, where "Recordings" and "Add Recording" are separate icons. After you finish a recording, the device will prompt you for a title. Once you have typed the title and activated the "Finish Typing" option, you will be taken to a management screen where you can play or delete the recording as well as listen to information about it, such as the title and the time the file was recorded. The "Recordings" icon launches a list of recordings to choose from; activating one takes you to the same playback management screen.

Time Management Apps

There are several apps that help you manage your time. One of these is the Clock app. Activating the "Clock" icon from a Home screen will announce the time and date. There is a separate Alarm Clock app, located in the Calendar menu, which uses lists to select the hour and minute of the alarm. After selecting these options, you are returned to the Calendar menu. The alarm plays the sound of a mechanical alarm clock. If the device is locked when the alarm goes off, unlocking it will silence the alarm; if the screen is active, touching it silences the alarm. When the alarm rings, the current control disappears and will not return until you silence the alarm.

There is also an app for creating appointments, located in the Calendar menu on the Home screen. The Add Appointment app will take you step by step through a series of lists to set the date, time, and duration as well as alerts for the appointment. When this process is finished, you are taken to the appointment management screen, where you can listen to the information, add a note to the appointment, update the appointment information, or delete the appointment. When the appointment time arrives and you have set an alert, the device plays a very short sequence of piano notes to notify you.

Miscellaneous Apps

There are many other apps that do not fall into the categories discussed above. There is a simple GPS app that will tell you your current location. There is also a Bank Notes app that can recognize American dollars, euros, and shekels. This app displays the camera view and automatically announces the denomination of a bill placed in front of the camera. The Colors app is similar, but you must touch the screen to have the currently focused color recognized. The camera view takes up the entire screen, and you must bring up the Power menu by holding down the Power button to return to the Home menu.

Advancements for the United States Model from Odin Mobile

I was able to talk with the developer of RAY, who described some of the advancements that will be available on the version of the RAY device sold in the US by Odin Mobile. The device that houses the RAY interface will be running Android 4.0 (Ice Cream Sandwich), which will allow RAY users to run native Android apps with standard Android accessibility features. The US device will also have the ability to download more sources of media, from Audible audiobooks to podcasts. The device is also expected to support publications from NFB-NEWSLINE. It will also have integrated voice dictation for text entry fields as well as for searching within lists. This is an extremely useful feature, as the major limitation of the current RAY device is that text entry is far too slow for most people to use without frustration. Adding dictation will make it much easier for users to enter long text messages and to quickly find a certain item in a very long list.

The Bottom Line

The RAY device is useful for those who feel uncomfortable operating standard smartphones. The simple interface and the uniformity of the controls make it easy for a new user to begin using all the apps on the device. The menu concept may take time for some users to understand, but after they do, operating the device should be no issue. The device is rather limited for more advanced smartphone users, however. It's rather slow because of the control scheme, and there isn't much in the way of customization or advanced functions. The Odin Mobile version of the phone would be much more useful, as a user could begin with the RAY interface when first learning how to use the smartphone and eventually switch to the standard Android interface. The US device will sell for $300 from Odin Mobile, which is similar to the price of other Android smartphones. Overall, the RAY device would be useful for people who feel they would not be able to learn the functions of a standard smartphone, but the decreased functionality and slow interface will keep most tech-savvy users away.

Product Information

Product: RAY
Price: $500
Available From: Project-RAY
E-mail: support@project-ray.com