Full Issue: AccessWorld June 2013

An In-Depth Look at the ScripTalk Station from En-Vision America

In the past several decades we have witnessed a tremendous leap forward in both the number and efficacy of prescription medications. The population is also aging, and when you combine the increased availability of new wonder drugs with more and more people who need them, in one sense that can be a prescription for real trouble.

The more medications we take, the more opportunities we have to get confused and make potentially life-threatening mistakes.

According to the AFB Access to Drug Labels Survey Report, the print impaired community is particularly at risk for at-home medication errors, such as swallowing the wrong pill, missing a refill date, or ingesting expired medications. Prescription labels contain vital information about our medications, including how much to take and when to take them, and yet they are among the most inaccessible of documents.

Many individuals with visual impairments create and use their own braille labels, but if they bring home more than one prescription from the pharmacy, sighted help is required to create the labels. Since space is at a premium on those small medicine bottles, the information is usually abridged and incomplete. Nearly 90 percent of the visually impaired population does not use braille regularly, so those individuals must develop other strategies to distinguish their medications from one another. Some use rubber bands or other markers to help tell the bottles apart (one rubber band means blood pressure medicine and a stick-on raised dot means stomach medication). Others might store one prescription bottle on a lower medicine cabinet shelf and another on the top shelf.

But what about those who are taking six, seven, even eight or more medications a day? How do you keep them straight in your head? Particularly if you are elderly and your memory isn't as snappy as it once was, it can be very difficult to remember how much of which medication to take, and when.

If you don't believe this is a serious accessibility issue, just try to imagine a sighted individual telling his or her pharmacist, "No thank you. I don't need labels on my prescription bottles. I will remember the instructions precisely, and I'll be able to figure out which medicine is which by feeling the size and shape of the pills."

Happily, technology has provided at least one solution to this serious problem. The ScripTalk Station from En-Vision America voices prescription label information at the press of a button. In this article we'll take an in-depth look at this useful device, and we'll also tell you how you can join En-Vision America's Pharmacy Freedom program and get a ScripTalk Station on permanent loan to read specially tagged prescriptions labeled by a participating pharmacy.

ScripTalk Station: What It Is and How It Works

ScripTalk Station is an accessible prescription reading device that allows print impaired individuals to manage their own medications without guesswork or sighted assistance. Special "talking labels," each containing a radio-frequency identification (RFID) chip smaller than a grain of rice, are encoded with the prescription data by a participating pharmacist and affixed to the prescription bottle or package. The ScripTalk Station scans the label and then uses voice synthesis to announce the medication name, dosage, refill date, and other essential information.

What's in the Box

The ScripTalk Station package includes the ScripTalk Station itself along with a 5-volt AC/DC power adapter and two AA batteries. There is also a mini-USB cable for connecting the unit to a PC for use with the optional downloadable software, which allows you to review prescription information on your computer using speech, screen magnification, or a braille display. Additionally, the package includes a sample pill bottle, so you can practice using the unit's controls before you receive your first RFID-tagged prescription. The ScripTalk Station documentation is available in braille, large print, and audio CD, and a fully accessible PDF copy of the manual can be downloaded from the company's website, where you will also find an audio demonstration of the device in action. The documentation is clear and concise and covers the device's operation in an easy-to-follow, step-by-step manner.

Physical Description

The ScripTalk Station is a half-moon-shaped device that measures 6.5 inches by 4.75 inches by 1 inch at its widest points, and it weighs 8 ounces. The housing is made of sturdy plastic, and five rubber feet on the bottom provide a solid non-skid grip. The device is designed to be used lying flat with the curved edge toward you, but there are also two notched screw holes on the back to accommodate wall mounting.

With the back edge of the device facing forward, from left to right you will find a mini USB jack, a 3.5 millimeter headphone jack, and the unit's power adapter jack. The battery compartment is located on the bottom surface. A spring-release clip made it easy to find and open the compartment, and the batteries were equally easy to install.

The device's curved front edge contains a single control: a thumbwheel that turns the unit on and off and controls the volume. On the top surface just above the thumbwheel is a grouping of three buttons. The largest, in the center, is an oval-shaped "Read" button with a tactile dot that makes it easy to locate. The smaller triangular button on the right is the "Previous" button, and the similarly sized triangular button on the left is the "Next" button. Also on the top of the device is a horseshoe-shaped semicircle of tactile dots that surrounds the speaker grill and provides an easy-to-locate space to position a prescription bottle for scanning.

The ScripTalk Label

The included sample medicine bottle is a typical 1 inch by 2.5 inch plastic cylinder with a push-and-twist type safety cap. Along with the standard prescription label, there is a much smaller blank label affixed to the bottom of the bottle. A tiny bump no larger than a single braille dot covers the RFID chip, which has been encoded by a participating pharmacist to hold all of the label data that is printed on the prescription bottle.

Operation

After installing the included batteries or connecting the power adapter, turn the thumbwheel to the left, and the unit switches on with a palpable click. ScripTalk responds with three initialization beeps, a brief pause, and then a longer beep. This is followed by the voice announcement, "ScripTalk Station ready," followed by two more beeps. The unit is now ready to scan and voice an RFID tagged prescription bottle.

At this or any other point, you can choose to listen to the documentation by pressing and holding the "Read" button for three seconds. Unfortunately, the unit does not save your place or offer any section navigation or bookmarks, so every time you consult the onboard documentation, you must start again from the beginning.

To scan a prescription label, position the tagged container on top of the unit inside the semicircle of tactile dots. Press the oval "Read" button. ScripTalk beeps to indicate a scan is in progress, and almost instantly the unit begins speaking the label information.

You can also choose to press the "Read" button before positioning the labeled bottle. ScripTalk beeps steadily until it detects a talking label, and if no label is found after 15 seconds, it responds with a "labeled prescription not found" error message.

The information ScripTalk voiced after scanning the sample prescription bottle included the following field names and data: patient name, the medication name and strength, dosage instructions, the prescription date, the number of refills remaining, the prescriber's name, the phone number to use to call in a refill, the prescription number, warnings and additional instructions, quantity, and the medication's expiration date.
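As an illustration, the label data ScripTalk reads could be modeled as a simple record containing the fields listed above. The field names, sample values, and ordering below are hypothetical; the actual RFID encoding En-Vision America uses is proprietary and not documented here.

```python
from dataclasses import dataclass

# Hypothetical sketch of the data a ScripTalk talking label stores.
# Field names mirror the headings the unit voices; the real encoding
# is proprietary.
@dataclass
class PrescriptionLabel:
    patient_name: str
    medication: str          # name and strength, e.g. "Amoxicillin 500 mg"
    dosage_instructions: str
    prescription_date: str
    refills_remaining: int
    prescriber: str
    refill_phone: str
    rx_number: str
    warnings: str
    quantity: int
    expiration_date: str

    def spoken_fields(self):
        """Return (heading, value) pairs in the order ScripTalk reads them."""
        return [
            ("Patient", self.patient_name),
            ("Medication", self.medication),
            ("Dosage", self.dosage_instructions),
            ("Date", self.prescription_date),
            ("Refills", str(self.refills_remaining)),
            ("Prescriber", self.prescriber),
            ("Refill phone", self.refill_phone),
            ("Rx number", self.rx_number),
            ("Warnings", self.warnings),
            ("Quantity", str(self.quantity)),
            ("Expires", self.expiration_date),
        ]

# Sample record with invented values, matching the included demo bottle.
sample = PrescriptionLabel(
    patient_name="Jane Doe",
    medication="Amoxicillin 500 mg",
    dosage_instructions="Take one capsule three times daily",
    prescription_date="2013-05-01",
    refills_remaining=2,
    prescriber="Dr. Smith",
    refill_phone="800-555-0100",
    rx_number="1234567",
    warnings="Finish all medication",
    quantity=30,
    expiration_date="2014-05-01",
)
```

Reading the fields as an ordered list of heading/value pairs is what makes the unit's "Previous" and "Next" navigation possible.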

The information is voiced from beginning to end, but the reading can be interrupted at any time by pressing the "Read" button a second time. After that you can move through the list item by item using the "Previous" and "Next" buttons located on either side of the "Read" button.

RFID chip transmissions only travel a few inches; indeed, placing the bottle upside down prevented ScripTalk from reading the tag. For my evaluation the company sent along a few extra sample prescription bottles, and when I placed one beside the unit and a second on top, ScripTalk read only the proper bottle. I also tried setting two prescription bottles on top of the unit. ScripTalk continued to beep until I removed one of the bottles, then scanned and voiced the information from the remaining bottle properly.

Voice Controls

ScripTalk uses the ScanSoft Heather voice. The company also produces a Spanish version of ScripTalk, which was not tested for this review. The Spanish unit does not perform any translation; rather, it is programmed to speak a prescription label printed in Spanish using the ScanSoft Paulina voice.

ScripTalk is programmed with five voice speeds. To change among them, press and hold either the "Previous" or "Next" button for three seconds. The different speeds have no numbers or names, such as fastest or slowest. Instead, ScripTalk repeats these instructions: "Voice speed adjustment using the increasing or decreasing speed." Press the "Previous" or "Next" button repeatedly until you reach the desired setting, then press the "Read" button to save your changes.

One step below the slowest voice setting is the unit's spell mode. Selecting this option causes ScripTalk to continue to voice the various heading names (Name, Medication, etc.) word by word, but the field data itself is voiced slowly, letter by letter. The sample bottle is tagged to contain the popular antibiotic amoxicillin, and each letter was voiced clearly. At the highest volume levels, ScripTalk's built-in voice began to grow a bit scratchy, but it was still quite understandable.

It would be handy to be able to change the Spell Mode and Voice Speed settings on the fly, but when I tried pressing the "Read," "Previous," or "Next" buttons after confirming a speed or spell change, ScripTalk announced, "No prescription information is available. Please scan medication," and I was forced to repeat the scan before I could hear the data letter by letter or at a different voice speed.

Privacy

ScripTalk only retains medication data for 30 seconds after you finish your review, so it's easy to prevent others from coming along behind you and obtaining personal information. There is also a headphone jack, so you can listen to the information privately. I tried this feature with my Apple EarPods and was disappointed to discover the information only played through one ear because the headphones are stereo, and the ScripTalk sound jack is mono.

Shutting Down

If ScripTalk is left on battery power for more than five minutes without being used, an audio reminder alarm will sound and repeat every 1.5 minutes. The alert sounds through the unit's speaker even if you have headphones attached. It also plays at full volume no matter how the volume level is set, which is a useful feature: during my testing I neglected several times to power down the unit and only realized this when I heard the alert from a different room.

If you take medications just before bedtime and tend to be a bit forgetful, you may want to use the power adapter, so you don't have to get back out of bed if you neglect to turn it off. If you take your medications just before you leave for work, you may also be at risk of forgetting to turn the device off and running down your batteries. A more elegant solution the company might consider for an updated version would be a programmable control circuit that could power the unit down automatically, much like the Victor Stream turns itself off after a period of inactivity or when the sleep timer runs out.
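The auto-power-off behavior suggested above could be sketched as a simple decision rule. The thresholds and action names below are invented for illustration; only the five-minute alert interval comes from the review, and real device firmware would of course look nothing like Python.

```python
# Hypothetical sketch of inactivity handling for a device like ScripTalk.
# IDLE_LIMIT_SECONDS reflects the five-minute alert described in the review;
# AUTO_OFF_SECONDS is an assumed threshold for the suggested auto shutdown.
IDLE_LIMIT_SECONDS = 5 * 60
AUTO_OFF_SECONDS = 15 * 60

def next_action(seconds_idle, on_battery):
    """Decide what the unit should do after a period of inactivity."""
    if not on_battery:
        return "stay_on"      # on AC power, battery drain is not a concern
    if seconds_idle >= AUTO_OFF_SECONDS:
        return "power_off"    # proposed behavior: shut down automatically
    if seconds_idle >= IDLE_LIMIT_SECONDS:
        return "sound_alert"  # current behavior: audible reminder
    return "stay_on"
```

The point of the sketch is simply that the reminder alarm and an automatic shutdown are not mutually exclusive: the unit could alert first and power itself down later, as the Victor Stream does.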

The ScripTalk Station Carrying Case

My review unit also included the optional ($19.99) logoed ScripTalk carrying case. This black fabric bag is approximately 8 inches by 10 inches by 4 inches with a carrying handle and detachable shoulder strap. A zippered outer compartment is designed to hold the ScripTalk unit. Inside the zippered lunchbox-style insulated bag, there is also a mesh inner pocket for a freezer pack, and there's enough room for plenty of medications and other personal care items.

The ScripTalk Software

Recently, the company introduced the ability to connect the ScripTalk Station to a Windows PC via the included mini-USB cable. The software is available upon request, and it works on PCs running Windows versions from XP Service Pack 3 through Windows 8. (A Mac version is currently in development, along with apps for smartphones equipped with near field communication capabilities.)

The ScripTalk User software uses a standard Windows installer, so it is easy to get up and running. Connect ScripTalk to your PC via the supplied USB cable, turn ScripTalk on, and you are ready to run the application software.

At startup you are presented with a Settings menu with three option controls. The first is the "Port Settings" field with a default button that, when pressed, automatically makes the connection between ScripTalk and the software. The second setting is a checkbox you can use to decide if the ScripTalk User software should start when Windows starts or if you would prefer to start the software manually. The third control, a combo box, determines how long the prescription information will remain on your computer display before the built-in privacy controls remove it. The choices are 15, 30, 45, or 60 seconds. You can also choose the "No Time Out" option, in which case prescription data is displayed until you close your browser tab or window.

The ScripTalk User software uses your default browser to display the prescription label information on a standard webpage created on your local system. Your information is not shared or transmitted over the Internet. I tested the software using a Windows 7 64-bit Dell PC running Window-Eyes version 8.2 and both Internet Explorer version 10 and Firefox version 20. Happily, the webpages are created using basic HTML, so no matter what screen reader, screen magnifier, or braille display you use, if you can read a standard webpage, you should have no trouble reviewing prescription information.
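Because the page is plain local HTML, the rendering step is conceptually simple. The sketch below shows one way label fields could be turned into a minimal accessible page; the markup, field names, and function are hypothetical, as the actual output of the ScripTalk User software is not documented here.

```python
import html

# Hypothetical sketch: render (heading, value) label pairs as a basic
# local HTML page, the kind any screen reader or braille display can handle.
def label_to_html(fields):
    """Render label fields as a minimal definition-list HTML page."""
    rows = "\n".join(
        f"<dt>{html.escape(name)}</dt><dd>{html.escape(value)}</dd>"
        for name, value in fields
    )
    return (
        '<!DOCTYPE html>\n<html lang="en">\n<head>'
        "<title>Prescription Information</title></head>\n"
        "<body>\n<h1>Prescription Information</h1>\n"
        f"<dl>\n{rows}\n</dl>\n</body>\n</html>"
    )

page = label_to_html([
    ("Medication", "Amoxicillin 500 mg"),
    ("Dosage", "Take one capsule three times daily"),
])
```

Escaping the field text and sticking to basic elements is what keeps a page like this readable by any screen reader, magnifier, or braille display, and nothing ever needs to leave the local machine.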

With the ScripTalk User software running, scan a prescription bottle as described previously. ScripTalk will voice the information as before, but after three or four seconds, your browser will pop up and display the exact same information. I found it slightly annoying that, even when connected to the PC, the ScripTalk Station continued to voice the information; the unit and my screen reader began speaking the same text at different starting points, causing a bit of auditory confusion. However, I was able to silence the ScripTalk with a second press of the "Read" button.

The ScripTalk User software is a must-have for deaf-blind individuals and others who wish to access their prescription data via a braille display or screen magnification. However, even if you are perfectly satisfied having your prescription label voiced by the ScripTalk Station, there is still a good reason to install and run the software.

The prescription data webpage created by the ScripTalk User software includes a hyperlink to the medication's Patient Information Monograph. This fully-accessible text version of the same booklet or insert pharmacists include with most medications is chock full of additional information about the medication, how it works, how to take it, and what side effects may result.

Receiving a ScripTalk Station on Permanent Loan

For several years the Veterans Administration has been providing its sight impaired clients with free ScripTalk Stations, and it recently broadened the program to include soldiers who return from combat with traumatic brain injuries that impair their ability to comprehend printed materials.

More recently, En-Vision America itself has begun providing units free of charge on long-term loan as part of its Pharmacy Freedom Program. To qualify, all you need to do is arrange to have your prescriptions filled by a participating pharmacy. A complete list of participating pharmacies, searchable by state or zip code, can be found on the company's website. I tried my own small town zip code, and the nearest brick-and-mortar pharmacy was a Sam's Club nearly 25 miles away. However, the list also included five mail order services, including CVS Caremark, Kohl's Pharmacy & Homecare, and Wal-Mart's mail order prescription service.

The Bottom Line

The ScripTalk Station does one job, and it does it well. Any quibbles I have with the design and feature set are minor and do not affect the device's usability.

The ScripTalk Station would be a valuable resource for many visually impaired and deaf-blind individuals. The free long-term loan broadens the device's appeal significantly, but not everyone can benefit from it. My own health coverage, for instance, will soon involve a requirement that I use Express Scripts for my prescriptions. Currently, they do not participate in the Pharmacy Freedom program, but hopefully, they and many other pharmacies will join soon and make universal prescription label access a true reality.

Product Information

Product: ScripTalk Station
Price: Free
Available from: En-Vision America, Inc
Phone: 1-800-890-1180


TextGrabber + Translator from ABBYY and the StandScan Pro: A Review of Two Products

TextGrabber + Translator from ABBYY

A relatively new iOS scanning optical character recognition (OCR) app, TextGrabber + Translator, is available in the iTunes store. This mainstream app from ABBYY works very well with VoiceOver. It costs $9.99 and is compatible with the iPhone 3GS, 4, 4S, and 5 and also works with the iPad 3, 4, and Mini (in compatibility mode). For this article, an iPhone 5 was used for scanning.

According to the app description, TextGrabber + Translator turns your mobile device into a "multifunctional mobile scanner" with the ability to read text from a variety of print sources using your phone's camera. With this app you can also translate the text into a variety of languages as well as edit it, share it via e-mail and SMS, or post it directly to your Facebook, Twitter, or Evernote accounts. TextGrabber + Translator won the SUPERSTAR Award in the "Text Input" category at the 2012 Mobile Star Awards.

Using TextGrabber + Translator

The first time TextGrabber is launched, it prompts the user to choose to either listen to Quick Tips or go straight to the app. The tips are very brief and are worth checking out. The "Done" button is in the upper right corner.

When the app loads, VoiceOver says, "TextGrabber. Viewfinder. Image: double tap to focus." Starting at the top left corner is a row of five labeled buttons. Flicking to the right, they are "Enabled Recognition Languages" (which by default is set to English), "Crop Photo," "Flashlight," "Settings," and "User Manual." The User Manual is the same information spoken the first time the app is opened. On the bottom of the screen are three additional buttons: "History," "Camera," and "Album." All buttons say a brief description of their function. In addition, all buttons can be located by swiping either right or left.

Settings

The first settings option is "Enabled Recognition Languages." Whichever languages have been chosen will be listed. Double tapping the name of the language will bring up the Enabled Recognition Languages dialogue. The next heading is Font Size for which the default selection is "Normal." Tapping the "Normal" button will bring up two additional choices, "Medium" and "Large." The next option is to choose a search engine, which is set to Google by default. The other items in the menu involve moving the scanned image.

Scanning a Document

This app does not come with any instructions regarding where to position the device's camera. The user manual does say that there should be good lighting when taking the picture, but if you are in doubt, activate the "Flashlight" button by double tapping it. The flashlight will remain on until it is manually turned off. Touching anywhere on the main screen will prompt VoiceOver to say, "Viewfinder. Image. Double tap to focus." Double tap anywhere on the screen, and either the camera will take a photo, or VoiceOver will indicate that the camera wasn't able to autofocus and to try again. For a standard letter-sized document, I held the iPhone 5's camera about 10 inches above the paper with the lens in the middle of the page. TextGrabber scanned the page, and although there were a few errors, the overwhelming majority of the scan was correct. TextGrabber works if the page is upside down, and it recognizes columns. Unfortunately, the app cannot store multiple pages as one document.

Using the Scanned Document

Once the photo is processed, VoiceOver will say, "Recognize text." The beginning of the text can be found near the top of the page and can be read with VoiceOver gestures. Double tapping anywhere on the page will bring up the keyboard for editing. This edit function works any time the page is on the screen. When you are finished, activate the "Done" button in the upper right corner.

When the "Camera" button at the top left of the page is touched, VoiceOver says, "'Camera' button: tap twice to take a picture." However, tapping twice instead brings the user to the main screen, where the Viewfinder prompt will speak. Double tap on the main screen to take the picture. At the top right of the page is the "Settings" button. On the bottom of the page are the "History" and "Translate" buttons, and the "Menu" button is at the bottom right. However, when flicking from left to right starting at the top left, the "Menu" button's name is spoken after the "Settings" button rather than after the "Translate" button. Once a page is scanned, activating the "Menu" button brings up a list of options, including e-mail, Facebook, and SMS. Activating any of the choices brings up the required dialogue for sending the page.

Translation

An excellent feature that TextGrabber offers is the ability to translate a document into 64 languages. To do this, activate the "Translate" button at the bottom of the page with the scanned document. Two buttons will appear on the screen. The first has the name of the recognized language, which is set to English by default. The second button is for selecting the translation language. Choices for both buttons are made via picker items. Simply swipe up or down to select a language. For example, if the document is in English and needs to be translated into Spanish, Spanish would be selected through the picker item of the second button. Once the selection is made, activate the "OK" button, which is between the two language selection buttons. The next time a document needs to be translated, the Spanish button will already be selected. A new language can be selected by activating the button and using the picker to make a new choice. If Spanish is the correct language, double tap the "Spanish" button and activate the "OK" button. The translated document will appear on the lower half of the screen. The "Menu" button on the lower right corner of the screen gives document options, including e-mail and Facebook. To go back to the original document, activate the "Back" button at the upper left corner. To scan a new page, activate the "Camera" button at the top right of the page. A test translation into Spanish worked very well.

History

All scanned pages are stored in the History section. To bring up the list of scanned pages, activate the "History" button on the bottom left of the screen. Each scanned page will have the date and the first line of the document. The most recent scan will be at the top of the list. To bring up the full scan, double tap anywhere on the listing. Once the scanned page is on the screen, it can be translated or sent. Activate the "History" button at the bottom left of the screen to get back to the list of pages. To delete an item from History, activate the "Edit" button in the upper right corner. Swipe left or right until the name of the page is spoken and the "Delete" option is presented. Double tap on the page and a confirmation dialogue will appear. Double tap and the page will be deleted.

StandScan

The StandScan is essentially a collapsible box with a hole on top of it. When a photo is taken with a document placed on the bottom of the box and the phone's camera placed over the hole with the lens pointing down, the result should be a high quality scan. There is also a StandScan Pro, which includes LED lights. This product can be useful for people who do not have light perception. Like TextGrabber + Translator, the StandScan and StandScan Pro are mainstream products.

The StandScan and StandScan Pro are not necessarily meant to take the place of a scanner. Instead, the developers indicate that they are good for travel and easily fit into a computer bag. According to their website, the StandScan works with any smartphone that has a back-facing camera as well as with the iPod touch. A camera with at least a 3-megapixel resolution is required. At the time of this writing, StandScan does not work with the iPad, but the developers are working on that. StandScan costs $19.95 while StandScan Pro costs $29.95. As of April 2013, there is a special promotion for people who are visually impaired. At checkout, enter the promotion code "VIPhone" and receive a 10-percent discount and a free battery holder.

Assembling StandScan Pro

When the unit arrives, it feels like a flat folder with a cable protruding from one end. There are instructions included on how to assemble the StandScan, but they contain diagrams. Lay the StandScan on the floor or table with the small flap facing up. Open the flap and fold out the sections starting with the left and then the right. (To disassemble the unit, fold the right side first.) On the first and third sections, there are tactile lines in the shape of triangles (four in all). Fold the flaps along each line to create four elongated triangles. The fourth section has a center hole and a row of LED lights. Fold along the tactile lines in that section. This is the top of the unit. To put it together, raise the flap to vertical and bring up each side. The box is held together with magnets. Next, bring the section with the hole up and place it on top. There should be one open side. The paper is placed on the inside floor of the box. The power cable is at the back left. Plug the appropriate end of the cable into the box and the other end into the wall.

The StandScan uses one 9-volt battery. A tiny screw holds the battery pack closed, and it can be opened with a small Phillips screwdriver. It was somewhat difficult to get the battery to connect with the terminals in the battery pack. The battery pack attaches to the box in the same manner as the AC adapter.

TextGrabber with and without StandScan

The distance from the floor of StandScan to the camera lens is approximately 11 inches. When scanning without the StandScan, I held the iPhone at the same height. Several different types of documents were scanned with TextGrabber, first with the StandScan Pro and then without. When the document was scanned without the StandScan, another source was used to shine light down on the page.

Using both scanning techniques, when only a few lines of typed print were scanned, the results were equally good. On several typed pages from a contract, StandScan was slightly better, but the scans with TextGrabber alone were still very good. The scans of the instructions that came with StandScan were somewhat better with StandScan, but TextGrabber alone did a good job. A journal page with columns delivered similar results with both scanning techniques. Scanning with both methods was a bit difficult with a hardcover book because the pages don't lie flat. Even with StandScan, TextGrabber gave error messages.

Conclusion

The ABBYY TextGrabber + Translator iPhone app is very easy to use, works extremely well, and is inexpensive.

StandScan Pro provides an excellent solution for people who don't have light perception or who want the convenience of having a guide for taking scans.

Try ABBYY TextGrabber + Translator and then decide if you want StandScan or StandScan Pro. It's nice to find mainstream products that can be useful for people who have visual impairments.


Using VoiceOver with the Accessible Amazon iOS Kindle App

A free and accessible iOS version of the Kindle app from Amazon was released on May 1, 2013. This app gives people who are blind or visually impaired another option for purchasing and listening to books and periodicals. Previous versions of this app were not accessible with VoiceOver. This article will discuss installing the app as well as purchasing and reading content.

Here is part of the Kindle description from the iTunes Store:

The Kindle app is optimized for the iPad, iPhone, and iPod touch, giving users the ability to read Kindle books, newspapers, magazines, textbooks and PDFs on a beautiful, easy-to-use interface. You'll have access to over 1,000,000 books in the Kindle Store plus hundreds of newspapers and magazines. Amazon Whispersync automatically syncs your last page read, bookmarks, notes, and highlights across devices (including Kindle), so you can pick up your book where you left off on another device.

You can read the entire description at the iTunes Store.

Installation and Set-Up

In the iTunes App Store, type "Kindle" into the search box, and the first result is the new Kindle app. The first time the app is launched you must enter the e-mail address and password for your Amazon account, so you will need to have an account before beginning the registration process. Prior to entering the information, there will be a dimmed "Register this Kindle" button and an unlabeled button. Once the e-mail address and password are entered, the "Register this Kindle" button becomes active, and the unlabeled button disappears. When the new screen loads (and assuming you haven't previously purchased anything from Amazon's Kindle store), VoiceOver will say, "You have no items in the cloud."

On the bottom of the screen are four buttons: "Switch to Grid View," "Cloud," "Device," and "Settings." For easier reading, leave the "Switch to Grid View" button in its default setting; if the button is activated, the material will be presented in grid format instead of as a list, and the button will change to "List View." If the "Cloud" button is activated, all Kindle items will be displayed, including those that may be present on another device. If the "Device" button is activated, only the items on that specific device will be displayed.

The Settings Menu

In the Settings menu, the "Back" button is in the upper left, and the "Done" button is in the upper right. The first heading is Library Sort Order. By default it is set to "Recent," but the Library can also be sorted by author or title. Selection is done via radio buttons.

Sync

The next item in the Settings menu is the "Sync" button. Any content purchased on a device should automatically sync to any other supported device. If content isn't synced, activate the button. To test this feature, I tried to sync my content on the iPhone Kindle Library with my MacBook Air and PC running Windows XP and Window-Eyes 8. The Mac app was completely inaccessible. Sighted assistance was needed to register the device, and accessibility didn't improve after that. I got a bit further with the PC version. My books appeared on the screen, but when I opened the books, I couldn't read them.

Registration

The name of the account holder will be listed, and a Kindle e-mail address will also be shown. In my case it took the first part of my e-mail address and added "@kindle.com." Use this address to send documents directly to the Kindle. Next is a button labeled "Contact Us," which brings up a form for entering text. Following that is a button labeled "Page Turn Animation," which is turned off by default. Next is a "Social Networks" link. When activated, it explains how to share your information and post parts of books to your social networking Wall or Newsfeed. After that come links for Facebook and Twitter. The final Settings option is a button labeled "Other." When this button is activated, it brings up a list of options, including "About" and "Terms of Use."

Library

The app's home screen displays content. To change which material is displayed, use the button in the upper left. By default it's set to "All Items," but other options are "Books" and "Newsstand." If there are items in any of the categories, they will be listed. The button to go back to the main Libraries page is located in the top left corner, but instead of saying "Back," it gives the name of the section currently being displayed. For example, if the Books section is on the screen, the button will say "Books." The "Done" button, which is used to exit the screen, is in the upper right corner.

Purchasing and Downloading Content

It is not possible to purchase content directly from the iOS Kindle app. Your device must be connected to a wireless or data network in order to transfer content from the Kindle store to your device.

For the iPad, go to Amazon's iPad Kindle Store, and for the iPhone or iPod touch, visit Amazon's Kindle Mobile Store. For this demonstration, I am using an iPhone 5.

Searching for Content

Once you are on the website, if a "Sign In" form appears, sign in with the same account information used when registering the app.

On the homepage, there are many links, including "Books," "Newspapers," "Magazines," and "Free Popular Classics." In addition, there is a search form consisting of an edit box and a "Search" button. The page is easy to navigate, and search results are clearly displayed.

If, for example, you activate the "Magazines" link, a new page will load that presents many categories, including Arts & Entertainment (163 listings), Lifestyle & Culture (257 listings), and Science (12 listings). If you activate the "Science" link, you will see a listing of the titles in that category. Select a title, and the next page will give you details. Though there may be some gibberish on the resulting page, VoiceOver reads all of the text clearly. The text will tell you information like the delivery schedule and subscription and single-issue rates. You will find buttons for available actions, such as to subscribe or download the current issue. Once you select a purchase option, you'll be asked where the content should be sent. I have only one device, so my iPhone was the only option. Underneath is information about the magazine, and below this is some gibberish and some reviews.

I did a search for "The Legend of Sleepy Hollow," and when the Search page loaded, my results were clearly displayed, and I activated the result that I wanted. When the new page loaded, there was some gibberish, but it wasn't difficult to find the necessary information, such as title, author, and price. Next is the button to buy the item with one click and the pop-up button again showing my iPhone. Below that is a link to hear a sample, and after the link is a brief description of the book followed by a few reviews.

Downloading

I activated the button to buy the book, and the next page said, "Thank you for your purchase: The Legend of Sleepy Hollow, by Washington Irving. We are sending your item, and it will automatically appear in your Home screen when the download is complete." I assumed the item would appear in my iPhone's Home screen, but it was on the Kindle app's Home screen. Along with the name and author, VoiceOver also said that the book was downloaded. A confirmation e-mail was sent to my inbox.

Reading Kindle Purchases

To open a book or any other Kindle purchase, double tap on it. Once the book is opened, VoiceOver will say, "Double tap for menu. Swipe down with two fingers to read continuously. Tap and hold to select text."

The Menu

Double tapping anywhere on the screen while the book is being read will bring up a menu with choices, including "Bookmark," "Return to Book," "View Options," and "Go To." The "View Options" button allows for changes in font size, color, and brightness. The "Go To" button lists locations, such as "Go to the beginning of the book," "Go to the cover of the book," "Location," and "Highlights." To go to a specific page, activate the "Go To" button and then double tap "Location." A phone-style keyboard will appear for entering the page number. Once the number is entered, activate the "OK" button, and the book will be on the new page. The next time the book is opened, it will open to the last page read. The final option in the menu is a picker item, which allows you to change locations by swiping up or down with one finger. The picker moves in 10 percent increments. It will not announce page numbers, but it will say percentage and Kindle screen numbers. To exit the "Book" menu, activate the "Home" button in the upper left corner.

Navigating Text

While reading a book, do a three-finger swipe to the right to go back to the previous page. To go to the next page, do a three-finger swipe left. If VoiceOver is reading continuously, an indicator will sound whenever there's a page change. To stop VoiceOver when reading continuously, do a two-finger tap anywhere on the screen. Through the rotor, it's possible to read by characters, words, or lines, but this only works on one screen at a time and not with continuous reading.

There is a search option in the Book menu, which allows the reader to search for specific text. Results are clearly displayed. Double tapping a result brings the user to the page where the text appears.

Bookmarks can be easily set by bringing up the menu at the point in the book where you want to set the bookmark. Then, activate the "Bookmark" button in the upper right corner.

Text can be selected and then highlighted. To do this, double tap and hold on the text you want to select. This gesture can be a bit tricky because it's similar to bringing up the menu. When the text is selected, the app will make a sound, and VoiceOver will speak about highlights. To adjust the selection, swipe until you hear either "left most selection edge" or "right most selection edge," and then double tap and hold to adjust how much text is selected.

Near the top of the page are buttons to choose a highlight color. If you don't have any color vision, it doesn't matter which color is chosen. Highlighting colors include pink, yellow, and blue.

If a word is selected, a dictionary definition will appear at the bottom of the screen along with options to get more information from Google, Wikipedia, and other dictionaries.

It is possible to write notes in the book. After the text is selected, activate the "Create a Note" button. This will bring up an edit box and keyboard. After typing the note, activate the "Save" button, which is located above the "I" and "O" of the QWERTY keyboard; flicking around the screen will not locate it.

If there are any notes, highlighted text, or bookmarks, they will be listed in the "Go To" section of the menu. Double tap on the entry you want, and that page will be displayed.

As part of this evaluation, I also purchased the latest issue of Rolling Stone. There were some additional menu options for reading the magazine, including a "Table of Contents" button and options to go to the previous article and next article. When the next article is activated, the title of the new article will be just above the button. In the menu page are radio buttons for "Text View" and "Image View." Make sure the "Text View" is checked.

I found that the Kindle app didn't always do well when reading the magazine with the rotor. The magazine menu doesn't include an option for setting bookmarks or writing notes, and there is no "Go To" button. The dictionary function did work.

To delete material, find the item in the Library and swipe up or down with one finger. VoiceOver will say "Delete." Double tap the selection and follow the prompts to delete the item.

Conclusion

The Kindle app provides another way for people who are blind or visually impaired to access print material. It's relatively easy to use from the start. VoiceOver had no difficulty reading the content, and the ability to perform searches, look up words, set bookmarks, highlight text, and write notes within the book makes this app an excellent choice. To help you navigate this app and for further detail, a free guidebook, "Kindle for iOS Accessibility Gestures: Quick Reference Guide," is available.

Comment on this article.

CTIA 2013 Accessibility Outreach Initiative

Lee Huffman

Dear AccessWorld readers,

On May 21, I went to Las Vegas to attend CTIA 2013, the Wireless Association conference. I participated in the 2013 CTIA roundtable, "Wireless Accessibility: Building Bridges, Defining Needs, and Interpreting Policy." This roundtable was hosted by Matthew Gerst, CTIA Director, State Regulatory and External Affairs, and moderated by Mary Brooner, President of MB Consulting LLC. Other roundtable participants were: Anita Aaron, Executive Director of the World Institute on Disability, Brenda Battat, Executive Director at Hearing Loss Association of America, Christian Vogler, Technology Access Program at Gallaudet University, and John Morris, Georgia Institute of Technology, Wireless RERC.

As part of the CTIA Accessibility Outreach Initiative, member company representatives were invited to join leading accessibility experts and stakeholders in a roundtable discussion about the ways individuals with disabilities increasingly use wireless services and devices. The discussion included questions and answers about how people with varying disabilities use wireless devices and access information. Questions about the expectations of people with disabilities were also addressed, as were the expected results of pending legislation.

In addition to the moderated discussion, attendees received an FCC regulatory overview and update from Krista Witanowski, CTIA Assistant Vice President for Regulatory Affairs, and heard closing remarks from Jamie Hastings, CTIA Vice President for External and State Affairs.

It was quite an experience participating in the accessibility roundtable, attending CTIA 2013, and learning more about what the leaders in the wireless industry are thinking and where they are looking to move the future of wireless technology. Attending this conference, listening to the keynote speakers, touring the exhibit hall, and interacting with attendees puts into perspective just how fast wireless technology is moving and how it will affect us all. Without a doubt this movement will bring challenges for, among other things, personal identity, security of information, and, certainly, accessibility.

I am happy CTIA has recognized accessibility as a valid consideration in the development and implementation of wireless technology, and I encourage the organization to collaborate closely with disability advocates and accessibility experts to ensure all people can benefit from these truly unbelievable technologies on the horizon.

Keep reading AccessWorld as the team works to keep you up to date on advances in, and access to, all things technology.

Sincerely,
Lee Huffman, AccessWorld Editor-in-Chief
American Foundation for the Blind

Comments, Questions, and Answers

Dear AccessWorld Editor,

Thank you for profiling Dr. Josh Miele and his work in Deborah Kendrick's article, Part I: A Profile of Principal Investigator Joshua A. Miele, in the May 2013 issue of AccessWorld. I had the pleasure of working with Dr. Miele at Berkeley Systems and learned a tremendous amount from him during the five years we were there together. Many of the insights I gained from Dr. Miele during that time related to things such as how to develop efficient and productive interfaces for people who are blind, the importance of exposing the underlying concepts of the (graphical) interface to users who cannot see it, and the importance of non-speech audio and the use of distinctly different voices to enhance a spoken user interface.

I continue to follow Dr. Miele's research with great interest. I am certain that some of his new insights will significantly propel the accessibility field forward.

Regards,

Peter Korn

Dear AccessWorld Editor,

My letter is in response to the May 2013 article, Earl: An Evaluation of the Newspaper-Reading App from Angle LLC, by Jacob Roberts.

Unfortunately, my experience has not been as positive as the author's. I find that you have to have a good WiFi connection to use Earl at all, and as one might expect, it does not work well where there is a lot of background noise. You really need to be in a room on your own whilst reading as anyone sitting near soon gets fed up [with] you repeating the same commands, particularly if Earl has difficulty in understanding what command you are saying.

I would much prefer more flexibility to use gesture commands so that the app can be used in situations where other people are present, but unfortunately, I found the gestures unresponsive when moving forward or back with the three-finger gesture. Often, Earl would come back and inform me that there were no more articles when I had only just started reading an article list.

I do like the way that [I] can quickly get to the information that I am wanting to read without the clutter that you get from reading normal webpages, and this, to me, is a definite positive and one which many other people who are blind or partially sighted would like.

If there were more gestures you could use without having to speak all the time and if these were more accurate and responsive, then I would use it on a daily basis. However, I think I will only continue to subscribe in order to demonstrate the app to other people who visit our drop-in center and use it personally for reading news on an occasional basis.

I am currently using an iPhone 4S, and one may find that the functionality may be better with the iPhone 5.

Regards,

David Quarmby

Dear AccessWorld Editor,

The icanconnect.org program/FCC National Deaf-Blind Communication Act will open a new window to so many through the use of much needed technology for years to come. Having worked at Perkins for five years, I witnessed firsthand the incredible work the deaf-blind program provides and how technology levels the playing field.

"Deaf-blind" does not reflect how many folks could truly be eligible for services and equipment. I explain to folks that you do not need to be totally deaf and totally blind to qualify. If you have mild vision loss and profound hearing loss, you may qualify. The flip side is that you may have a good deal of vision loss but mild hearing loss, and again, you may qualify. Sometimes the terms "deaf" and "blind" can be scary, but remember, these are only general terms for vision and hearing impairments.

Many folks living with low vision may qualify for the latest in video magnification technology, such as the DaVinci and Merlin Elite from Enhanced Vision, Inc. These two approved products not only provide HD magnification but also incorporate OCR scanning [and] reading in multiple languages, as well as connectivity with iOS devices, such as an iPad or iPhone, enhancing distance communication tools such as Skype, texting, the Internet, and other mainstream apps used for everyday communication.

To learn more, visit the icanconnect site or call 800-825-4595.

Adam S.

Dear AccessWorld Editor,

I just wanted to point out that the journalist of the article "A First Look at the Accessibility of the Google Chrome Operating System" neglected to mention what sort of security features this new computer happens to use.

Cloud-based computing is all the rage, but anything that is all the rage is bound to create a buzz in hacker circles, whether to access protected information, steal identities, pilfer credit card accounts, or create viruses.

Is this something this new computer addresses, and if so, how?

Just thought I would point this out as I'm sure many others are wondering the same thing.

Thanks,

Victor Gouveia

Response from AccessWorld author J.J. Meddaugh

Hello Victor,

That's a great question and also a very important one as security should not be taken lightly, especially when it pertains to sensitive information. As you pointed out, Google Chrome stores much of its information in what's known as the cloud, a fancy way of saying your data is being stored online somewhere. While it's important to take precautions with your data, these are the same precautions that one would take with any computer that connects to the Internet.

Whether you use a web-based mail solution like Gmail or Yahoo! Mail, store files online using a sharing service such as Dropbox, or contact someone using a website form, it's important to do your best to guard your personal information and only share it with those you trust. Ideally, you would not want to store passwords, credit card numbers, or banking information on the cloud, whether this is Google Drive, Dropbox, or another service. One could also use password protection and file encryption to weed out potential intruders. Remember, though, that most of these methods are merely deterrents, and a determined thief will do whatever it takes to steal your data. The idea is to make your data more difficult to steal, so they'll move on to another target.
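For technically inclined readers, the password-protection advice above can be made concrete. The sketch below is a minimal, hypothetical example (it is not a feature of Chrome or of any service mentioned here); it uses Python's standard-library PBKDF2 routine to derive an encryption key from a password, the usual first step in password-protecting a file:

```python
import hashlib
import os

def derive_key(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    """Derive a 32-byte key from a password using PBKDF2-HMAC-SHA256.

    A random salt means the same password yields a different key for
    each user or file, which frustrates precomputed-table attacks.
    """
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)

salt = os.urandom(16)  # a fresh random salt per user or file
key = derive_key("correct horse battery staple", salt)
print(len(key))  # 32 bytes, suitable for use with a symmetric cipher
```

The derived key, rather than the raw password, would then feed a symmetric cipher; the high iteration count deliberately slows down brute-force guessing, which is the deterrent effect Meddaugh describes.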

As for Chrome specifically, you are prompted for your e-mail address and password whenever you turn on your machine or wake it from sleep mode. You can also save files locally on your machine instead of on Google Drive, a good plan for sensitive data.

Thanks for your question and for reading AccessWorld.

J.J. Meddaugh

Dear AccessWorld Editor,

Over the past 11 years, I've held a wonderful job as a Windows tech at a large community college. Though I was hired for my knowledge of blindness technology, I've had the privilege of serving a variety of students with learning differences, brain injuries, and physical limitations. Due to budget cuts, my position will be eliminated at the end of June.

Thanks to much downsizing in the past, I've developed a great resume. I've trained court reporters, supported both mainstream and blindness products, and also supervised call center personnel and written product manuals. Despite the economic downturn, I'm finding that I'm getting interviews, and, were I sighted, I'd probably already have a new job lined up.

My problem is that, today, less seems to be known about the accessibility of software I'll need to use in any potential job. I can surf around for partial answers but find no expert anywhere who can answer questions about particular packages. For example, in yesterday's interview, I was asked if I could access a customer relationship manager called RightNow, a bug tracking database called Jira, the SugarSync cloud-based synchronization manager, and the Windows application package that controls the ShoreTel VoIP phone system. I was also told that 45 percent of the job involved using a service called LogMeIn to remotely troubleshoot and repair customers' systems. I suspect (but do not know for sure) that LogMeIn is inaccessible.

When I contact agencies about this problem, I'm sent to other agencies, but there is no comprehensive clearinghouse with answers. Instead, I'm told I need to re-apply for rehab, and once I'm a DOR client and after I get a job offer, they can hire a contractor to perform a technology evaluation and possibly write some scripts to make my work applications accessible.

This is a terrible model in today's fast-paced high-tech environment. Employers are looking for a self-starter who is nimble and flexible. Dragging a rehab counselor to every job, even figuratively, sends the wrong impression. This old model worked for lifelong employment, and it works for job developers, access technology providers, and the agencies whose job is to help you craft a winning resume and hone your presentation skills. However, I don't want employers to start seeing me as part of a "special" class who needs the protection of extra laws and an army of consultants. With the next economic downturn, I'll need to be ready to look for work yet again, and I wish we had a newer model for this modern world.

Deborah Armstrong

Dear AccessWorld Editor,

I am really enjoying this month's issue of AccessWorld. I wish to comment on the social networking series by Larry Lewis. I have been using mostly Twitter for a few years now, but I've started using Facebook a bit more since they recently hired a team focusing on accessibility. I have personal accounts on both of them, and I help run the Twitter and Facebook accounts for my volunteer job.

Mr. Lewis is doing an excellent job explaining everything involved with using social networks. For Twitter, I find The Qube and Easy Chirp the best, but I haven't used The Qube lately because it is installed on my laptop, which is currently on the fritz. Admittedly, I could install The Qube on my desktop PC, too, but that is also having some issues. Anyway, I recently discovered Twishort and I'm finding it to work very well too. I still use m.facebook.com a lot, but I've started using the main Facebook site more. There is also a service called The Friend Mail for using Facebook, which I've tried and found to be very accessible. It is still in beta testing. I'm excited to read the third and final installment in this series.

Best regards,

Jake Joehl

AccessWorld News

National Library Service Audio-magazines Now Available in Digital Format

"Our audio magazines are now available on digital cartridge," announced Karen Keninger, director of the National Library Service (NLS) for the Blind and Physically Handicapped, Library of Congress. "Cartridges mark a new reading experience for our subscribers: They'll have access to more magazines, higher quality sound, and more fine-grained navigation tools. We're also asking them to participate in the new recycling program."

The transition of audio-magazines from cassette to cartridge completes the digital conversion of the NLS talking-book program, begun in 2009. Cartridges offer superior sound quality and more in-depth navigation. They can hold multiple magazines or books and are delivered to patrons faster than cassettes. NLS has devised a circulating magazine system that will be cost effective and responsive for patrons who subscribe to magazines. As part of this system, subscribers will return each cartridge as soon as they've finished reading the magazines. Recycling cartridges will keep costs down and allow NLS to continue and potentially expand its magazine program.

By June 30, 2013, all subscribers to the NLS audio-magazine program will have been moved from cassettes to the cartridges. Patrons should return cartridges based on their subscriptions: weekly magazine readers must return their cartridges every week while monthly and bimonthly magazine readers must return their cartridges every month.

The NLS talking-book and braille program is a free library service available to US residents and American citizens living abroad whose low vision, blindness, or physical disability makes reading a regular printed page difficult. Through its national network of regional libraries, NLS mails books and magazines in audio and in braille, as well as digital audio players, directly to enrollees at no cost. Music instructional materials are available in large print, braille, and recorded formats. Select materials are also available online for download. Further information on eligibility requirements and enrollment procedures for the program is available on the NLS website or by calling 888-657-7323.

Salt Lake Community College Disability Resource Center Receives Outstanding Achievement Award from National Federation of the Blind (NFB)

The Salt Lake Community College (SLCC) Disability Resource Center was recently awarded the President's Award Recognizing Outstanding Achievements in Accessibility by the National Federation of the Blind (NFB) of Utah.

"It is a real honor for the College to be recognized by the NFB," said Candida Darling, SLCC Disability Resource Center director. "The College as a whole—not just the Disability Resource Center—has made really remarkable strides toward universal accessibility, and we are very excited for SLCC to receive recognition for the good work that is done here to that end."

Darling cited JAWS, ZoomText, and recently purchased software that assesses website accessibility as resources that have been particularly effective at making the College more universally accessible.

"Many people from across the College have worked to improve access in our instruction, in our facilities, and in our online environments," Darling said.

About the College: Salt Lake Community College is an accredited, student-focused, urban college meeting the diverse needs of the Salt Lake community. Home to more than 62,000 students each year, the College is the largest supplier of workforce development programs in the State of Utah. The College is the sole provider of applied technology courses in the Salt Lake area, with 13 sites, an e-campus, and nearly 1,000 continuing education sites located throughout the Salt Lake valley.

Series: The Work of the Smith-Kettlewell Institute
Part II: The Video Description Research and Development Center

Although description (sometimes called audio description, video description, or a handful of other terms) didn't become formalized until the 1970s and 1980s, Josh Miele, a research scientist and principal investigator for the Smith-Kettlewell Institute in San Francisco, surmises that it was probably in effect a few thousand years ago. If a guy who was blind was in the crowd at one of the Greek tragedies, he proposes, that guy was probably poking his buddy in the ribs, demanding to know what in the world was going on down there.

Video description acquired national acclaim with the work of WGBH Boston when professional writers, describers, voice talent, and engineers began producing formalized description for PBS television programs and, later, Hollywood movies. The purpose of video description is to fill in the visual gaps for viewers who are blind or visually impaired. The more complex television and movies become and, thus, the less likely it is that an individual can speculate what the visual components of the program might be, the more essential video description becomes if viewers who are blind are to have equal access to the same content as their sighted colleagues and friends.

Josh Miele says that his latest project, crowd-sourced description for all types of video content, may well be his most notable contribution to date.

The Prevalence of All Things Video

For the last decade, Miele, who holds a PhD in psychoacoustics, has been conducting projects at Smith-Kettlewell that employ existing technologies to solve problems he and others who are blind have encountered.

Miele says that looking at ways to make video description more readily available was a natural route for him to take, since video is currently prevalent and becoming ubiquitous. There is video in education, employment, and entertainment. While it has long played a role in popular art and culture, video today has made its way into the classroom, web training, standardized testing, and, of course, myriad social media applications that range from newsworthy to just plain fun.

In other words, the rate at which video is produced is dramatically outpacing the rate at which description for the blind consumer can be produced. The solution, as Josh Miele sees it, is to tap into that centuries-old tradition of using amateur describers, essentially crowd-sourcing description or, to put it another way, poking sighted people in their virtual ribs.

With grant funding from the US Department of Education's Office of Special Education Programs (OSEP), the Video Description Research and Development Center (VDRDC) was formed two years ago. Josh Miele serves as its director.

Recognizing that an excellent outcome is made more probable by bringing as many interested parties to the table as possible, Miele also formed the Description Leadership Network (DLN). Now boasting 10 organizational members with 70 attendees at its most recent in-depth meeting, the DLN counts among its constituents most of the major organizations with an interest in blindness and/or video description.

What if, the researchers asked, that pool of amateur describers everywhere (the friends and family members of all those people poking their neighbors in the ribs to find out what's happening) could be harnessed to describe video that could be accessed when it was actually needed?

YouDescribe and the Descriptive Video Exchange

To date, the most visible outcome of Smith-Kettlewell and the VDRDC has been the development of the Descriptive Video Exchange (DVX), initially funded by the National Eye Institute, and YouDescribe. Miele reasoned that the constantly growing body of online video content on YouTube is the most visible and perhaps largest aggregation of video content currently in need of description. If you want to know anything from how to make a square knot to how to build a coffee table to how to concoct the perfect margarita, chances are that you can find a demonstration on YouTube. However, directions such as "wrap the string in this direction" or "steady the wood as you see me doing here" don't convey much usable information to the knowledge seeker who can't see the computer screen.

The idea with YouDescribe is that an accompanying description for any YouTube video could be recorded by anyone and added to the Descriptive Video Exchange. A person who is blind could click on any YouTube video to find out if it has an accompanying description in the DVX. If it has two or three descriptions, such as one crafted by an architect and another by a fashion designer, the user could select the description of greatest personal appeal. If there is no description yet available, the consumer could send a link of the video and a link for YouDescribe to a friend or colleague (an amateur describer) who can describe what is being shown. That newly created description can then be added to the DVX database for access by future users.
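The crowd-sourcing model just described, in which anyone may contribute a description and a listener picks among the contributions for a given video, can be sketched as a simple lookup structure. All names below are hypothetical; this is a toy model, not the actual DVX implementation:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Description:
    author: str     # e.g., "an architect" or "a fashion designer"
    audio_url: str  # where the recorded narration lives

class DescriptionExchange:
    """Toy model of a crowd-sourced description store keyed by video ID."""

    def __init__(self):
        self._store = defaultdict(list)

    def add(self, video_id: str, desc: Description) -> None:
        # Anyone may contribute a description for any video
        self._store[video_id].append(desc)

    def lookup(self, video_id: str) -> list:
        # A listener checks whether descriptions exist, then
        # chooses the one of greatest personal appeal
        return self._store[video_id]

dvx = DescriptionExchange()
dvx.add("abc123", Description("architect", "https://example.com/a.mp3"))
dvx.add("abc123", Description("fashion designer", "https://example.com/b.mp3"))
print(len(dvx.lookup("abc123")))  # 2
```

The key design point is that descriptions live apart from the video itself, keyed only by its identifier, so contributions can accumulate without ever modifying the original content.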

Description Present and Description Yet to Come

What Josh Miele and Smith-Kettlewell have accomplished thus far is to build the application program interface (API) to make YouTube work with the video description component. YouDescribe has some advantages over traditional video description methodologies. For example, rather than working description into available pauses as describers of live theater and film have done, YouDescribe allows for what Miele is calling "extended description." If, in other words, the existing video has only a three second pause but adequately describing a particular action or concept will require 13 seconds, the YouTube video can be paused to allow for that extended description. Miele concedes that this extended description concept, while appealing to the individual consumer who is blind, would probably not work well when watching a video for entertainment with sighted family members or friends. For educational contexts, however, the notion of taking the time needed to describe an integral visual concept on the screen without sacrificing the original audio content has obvious appeal.
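The extended-description idea above reduces to a simple rule: pause the source video whenever the recorded narration outlasts the natural pause. A small sketch, with hypothetical names, illustrates the logic using the article's 3-second-pause, 13-second-description example:

```python
from dataclasses import dataclass

@dataclass
class DescriptionClip:
    start: float          # seconds into the video where narration begins
    narration_len: float  # length of the recorded description, in seconds
    pause_len: float      # length of the natural pause in the source audio

    def must_pause_video(self) -> bool:
        # Extended description: playback pauses when the narration
        # cannot fit inside the available gap
        return self.narration_len > self.pause_len

# A 3-second pause but 13 seconds of description forces a pause
clip = DescriptionClip(start=42.0, narration_len=13.0, pause_len=3.0)
print(clip.must_pause_video())  # True
```

A player built on this rule would resume the video only after the narration finishes, which is why the technique preserves the original audio at the cost of a longer total running time.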

For now, a few dozen samples of YouDescribe working in conjunction with YouTube comprise the DVX. The tools aren't quite ready for genuine crowd-sourcing yet, but when they are, individuals should be able to click on a given YouTube video, record a description with YouDescribe, and have it added to the DVX database. People might even be able to create wish lists, suggesting content for those amateur describers open to the challenge of new description frontiers.

Miele is quick to point out, however, that the DVX is not only for amateurs. "It has great potential as a platform for distribution of professional description as well," he says. "DVX offers the possibility of pay-for-view description that could be the basis of a whole new revenue model for existing description professionals as well as for a growing cottage industry of part-time, at-home describers."

Smith-Kettlewell and the VDRDC are in the business of research and development, not the business of manufacturing and distributing products. Their role is to build the tools to solve a problem in the hope that others will pick up the knowledge and run with it.

At this point, YouDescribe works with one player, YouTube. Eventually, Miele envisions a time when the same approach to crowd-sourced description could work with other players as well. Imagine, for instance, popping a movie into your DVD player and having that player check the Internet for accompanying video descriptions available in real time. Imagine downloading a favorite TV show via Netflix, Roku, or Apple TV and simultaneously downloading the description recorded by a describer somewhere.

Josh Miele and his team at Smith-Kettlewell are developing the tools that could transform such fantasies into possibilities.

Comment on this article.

Series: Social Networking for the Blind or Visually Impaired
Part III: Social Networking on Portable Devices

Now that we have a greater understanding of what social networks are and how they are accessed by desktop computer users who are visually impaired, it's time to get on with the business of exploring a platform on which social networks truly shine. When social networks came into existence several years ago, iOS and Android handsets and tablets as we know them today were virtually non-existent. While smartphones were prevalent, these devices lacked the hardware horsepower as well as the robust operating systems to support the expanding functionality of these virtual communities. What a difference a few years make.

Today's handhelds dwarf their predecessors: smaller hardware now rivals the speed and efficiency of most desktop computers. Both the iOS and Android operating systems provide constantly evolving software platforms for developers to create portable tools for interacting with cloud-based social networks.

This article focuses on using iOS devices, but that is not to suggest that the three major social networks discussed in this series of articles can't be made accessible to Android users. Apple has simply had a few extra years to cultivate accessibility on its iOS devices, which makes them excellent vehicles for providing universal access to the cloud. This article assumes that the reader is well-versed in accessing these portable touchscreen devices using their preferred operating system along with its corresponding screenreading and screen enlargement utilities. Previous issues of AccessWorld are rich with content relating to accessing both iOS and Android devices.

Cost and Battery Life

Connecting to the cloud via a portable device can have a profound effect on your monthly phone bill if you exceed the download limits associated with your cell phone plan. Make sure to purchase a service with adequate monthly downloading/data privileges if you plan to use your phone to regularly interact with social networks.

Keep in mind that wireless and cellular connectivity can have an impact on your device's battery. Manufacturers make a variety of solutions, such as secondary batteries and alternative charging devices that are lifesavers for hardcore portable social networkers.

Components of Portable Social Networking

There are four primary components to social networking on portable devices that require a bit of explanation: connectivity, social networking apps, location services, and integrated multi-media.

Connectivity: Wireless or Cellular

First and foremost, it's important to understand the differences between connection methods on a handheld device. Portable handheld devices provide their users with two ways to connect to the cloud: traditional wireless networks and cellular networks. Each method has its advantages. Connecting to a wireless network is fast and secure, but the device must be in range of a wireless network in order to access it. Connectivity to a cellular network is more readily available, but cellular is not quite as fast as wireless. Connectivity speeds of cellular services have vastly improved over the past few years, and this market-driven trend is expected to continue.

Social Networking Apps

Once you've connected to a gateway to the cloud, you'll need to install a social networking app. Accessing social networks on a desktop or laptop computer usually relies on a web browser. Developers have created apps designed to maximize the hardware of portable devices, making interacting with social networks faster and easier. The three social networks described in the previous article have corresponding apps for smartphones and tablets. Search for and install these apps via your device's app store. These apps maintain all of the ingredients of your social networks discussed in the prior article but present this information quite differently than a desktop web browser. We'll discuss these differences later on in this article.

Location Services

A third component of social networking is location services. This feature can be enabled or disabled within your device's settings. When you enable location services for a social network, that network can automatically ascertain your current location. Information about your surroundings can be gleaned and presented to you while you're on the go. You can "check in" to various locations within your immediate vicinity, and the network can notify those in your network of your whereabouts. You may receive a notification from your social network that a friend is close by, if that friend has allowed this information to be public. Your social network may also suggest pages of interest and base advertisements on your location. While this may feel like a slight invasion of your privacy, enabling location services gives you access to more functionality.

Multimedia

A final component to social networks is the integrated multimedia capabilities possessed by these portable devices. Most handsets and tablets have built-in cameras and audio recording capabilities. This means that, if you so desire, you may take a photo or a video and share the results with a social network. Desktop computers rely on receiving these media files from peripherals, such as a camera or media storage device like a memory card. Having this multimedia functionality built directly into your portable device alleviates a step in the sharing process and provides a much more instantaneous experience.

Push Notifications

Once you have installed the app or apps for the social networks you'd like to use on your portable devices, you will need to log in using your user name and password. Once you've logged in, you'll be asked by the social network to allow push notifications to your device. A push notification will notify you of activity within a social network even when you do not have the app open. An alert will appear on the screen and you'll have a short amount of time to double tap the notification and review or respond to it. Push notifications may include but are not limited to the following types of information when used with LinkedIn, Facebook, or Twitter:

  • Someone wishing to connect with you.
  • Someone mentioning you in a tweet or post.
  • A response to your post, tweet, or status update.
  • A direct e-mail message to you through the social network.
  • Someone sharing your update or tweet with their network.

Push notifications can be disabled, and notifications for each social network can be individually disabled and re-enabled. Keep in mind that there are often sounds associated with these notifications, so if you have this option enabled on your cellular phone, be mindful of these noises if you're in an important meeting and have a rather active social network! Lastly, a number will be displayed adjacent to the social network's icon on your screen, which your screenreader will announce when it speaks the name of the social network before you open it. This number is the count of notifications you have not yet reviewed since your last visit to this online community.

Checking In

A fun little feature that Facebook offers is the ability to check in to a given physical location. This is a particularly flexible, engaging feature when using your mobile device. Simply select the "Check In" option from the main screen of your Facebook app, and Facebook gives you the option to search for locations within your immediate vicinity. If your current location (your house, favorite bench at the park, or most used bus shelter, etc.) isn't listed, you may label it. When you select your desired location, you are presented with an edit field to make a statement about or write a comment concerning your activity at this location. You may also tag other Facebook friends, which associates them with your post. When you check in, everyone on your Friends list is able to see where you are or have been. If you check in at a location near another Facebook friend who has also checked in nearby, that individual might receive a push notification that you are close by. You're not going to net any jobs or win any scholarships using this feature of social networking, but it's a neat way to have some fun and allow others to virtually follow your adventures!

Integration with Other Mobile Apps

Social networking apps are able to integrate with other apps on your portable device, particularly your contacts and calendar apps. Facebook can be set to integrate with your existing Contacts list should you wish to e-mail or call any of your Facebook friends using your phone. In addition, when you create or join a Facebook event on your desktop computer or mobile device, you may allow these events to be placed on your portable device's calendar, yet another way to keep organized.

Free Updates

Free updates can be a double-edged sword! Developers are constantly making adjustments and improvements to their apps, and updates are free and fairly straightforward to install. I recently installed a LinkedIn update for the iPad that, for some reason, disabled many of the common VoiceOver gestures and their corresponding braille display equivalents. Before installing a new update, it's always advisable to check user groups and forums for the experiences of those fearless souls who install updates immediately, with little regard for their own social networking well-being, so that the rest of us might benefit from their findings. Once access bugs are reported to the manufacturer, screenreading fixes are generally on the horizon in subsequent updates. More often than not, updates do not break screenreading and screen enlargement functionality.

Portable Social Network Layout

Social networking apps adopt a layout that takes advantage of a series of buttons that refresh and change the layout of the screen. For most activities, this layout favors the visually impaired social networker because the touchscreen on a tablet or phone is not cluttered with all the busy headings, frames, and landmarks present within a standard web page. Let's take a panoramic overview of the layouts of each of these three social networks from the perspective of the iPhone.

LinkedIn

After you've logged into LinkedIn and decided whether you wish to enable push notifications, you are presented with a series of buttons. The first button is the "Menu Drawer" button. When this button is activated, you may quickly navigate to your messages, notifications, LinkedIn settings, updates from other LinkedIn users to whom you are connected, people you may know, and jobs that are suited for your LinkedIn Profile. If you decide not to activate the "Menu Drawer" button, you may read and comment on stories that LinkedIn suggests to you based on your interests and industry expertise.

Facebook

After logging into Facebook, you are presented with your friends' status updates. The app also presents a series of buttons, the first being the "Main Menu" button, which allows you to review new friend requests, your groups, your pages, and nearby places. You may also select "Apps" to enter a screen where you can install the Facebook Messenger app, which offers a very accessible means of exchanging real-time messages with other Facebook friends on mobile devices. From this main screen, you may also view pending friend requests, messages from friends, and notifications, as well as send new messages to Facebook friends, update your status, and "check in" to a nearby location. If you've activated any of these options and wish to return to the previous screen, simply activate the "Back" button.

Twitter

The Twitter app has four tabs along the bottom of the screen. Regardless of which screen is activated, you will always have access to the "Search" button and the "New Tweet" button. The first screen, displayed by default, is your home screen, showing recent tweets from everyone you are following. The Connect tab shows who is following you, who you are following, and who has re-tweeted your tweets. The Discover tab suggests individuals or organizations on Twitter you may wish to follow. Finally, the Me tab allows you to review, edit, or delete your tweets as well as change your Twitter Settings.

Protecting Your Privacy

While drafting this article, a reader wrote a message to AccessWorld asking that we address the issue of privacy for social networking. At the risk of stating the obvious, social networking is a very public affair. If you are a private person, keep in mind that anything you post or share is out there somewhere, floating within the cloud. Having said that, each of these three social networks has privacy settings that enable you to customize who sees your posts or tweets. You are also able to block unwanted requests from acquaintances who wish to connect with or follow you. You can set whether or not you wish to be tagged in posts or check-ins, as well as whether individuals may view your date of birth, text message you, or call you based on information present in your profile. By default, once you have logged into these apps, you stay logged in until you either log out, switch accounts, or delete the account or app from your device. It's an exciting world, but it's not without serious risks should you lose your device.

Conclusions about Social Networking

To be sure, social networking for the vision impaired is not without its hiccups, but the benefits definitely outweigh the headaches. When it comes to browsing, reviewing, and sharing information with others, using desktop computers in conjunction with social networks truly promotes the free exchange of information between social networkers who are sighted and visually impaired, which in turn lends itself to greater equal opportunity within the classroom, the workplace, and throughout our communities. Document management tasks, such as sharing a PDF or posting a resume created in Microsoft Word, remain best handled on a desktop computer with Microsoft Office and/or Adobe software.

With the advancements made on the mobile front with regard to touchscreen access, this equality has been ported to devices that keep us in touch and connected in real time. It's much easier to record a video using your device's integrated camera, upload it to YouTube, and share it with the contacts in your preferred social network. Taking a picture and tweeting it or sharing it with a Facebook contact is far more intuitive on a mobile phone than the tedious process of pulling a photo off of a media device on your desktop computer and posting it to a specific location using your web browser.

When the community of the visually impaired can fully embrace these changes and use this technology to our benefit, we will realize what a powerful, inclusive tool social networking truly is. As you incorporate the use of social networks into your technology journey, you'll begin to know when to use which tools when faced with specific situations and circumstances. I invite those of you who are not yet a part of this evolving process to engage with the ever-growing sphere of social media.

Comment on this article.

Voiceye: A Breakthrough in Document Access

Voice and braille access to print materials has, without a doubt, come a long way since the dawn of the personal computer. Enhanced document access has led to increased self-reliance for members of the community with print impairments and has opened up vast new realms of education and employment opportunities. Though this progress is heartening, there are still problems to be addressed. Consider the handout distributed in the middle of an important sales meeting or that graphic-laden textbook not yet recorded by Learning Ally.

From the first text files read with robotic-voiced speech synthesizers to today's hover cams with built-in OCR that recognize text nearly as quickly as you can turn the pages, document access continues to improve, compounding one technological advancement upon the last. In this article we'll take a look at a new embedded code document access solution from Korea called Voiceye (pronounced "voice-eye"). Voiceye builds on Universal Product Code (UPC) and QR code technology, so to understand how it works we'll first need to delve into a bit of high-tech history.

Universal Product Codes: How They Work

For several decades, Universal Product Codes (UPCs) have helped speed our way through checkout lines as clerks use wands to scan a strip of vertical black bars with white spaces printed on product packages. Scanners decode the strip into a unique 12-digit number, which is then matched to a product name, price, and other useful information displayed on the register and printed on your receipt.
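That 12-digit number carries its own error check: the final digit is computed from the first eleven, which is how a scanner knows it got a clean read. A minimal sketch of the standard UPC-A check-digit calculation (the function name is illustrative):

```python
def upc_check_digit(first_eleven: str) -> int:
    """Compute the UPC-A check digit: digits in odd positions
    (1st, 3rd, ...) are weighted 3, digits in even positions are
    weighted 1, and the check digit brings the weighted sum up
    to the next multiple of 10."""
    if len(first_eleven) != 11 or not first_eleven.isdigit():
        raise ValueError("expected exactly 11 digits")
    total = sum(int(d) * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(first_eleven))
    return (10 - total % 10) % 10

# A well-known example: the prefix 03600029145 yields check
# digit 2, completing the full UPC 036000291452.
print(upc_check_digit("03600029145"))  # 2
```

A misread bar anywhere in the strip almost always breaks this sum, so the register can reject the scan rather than ring up the wrong item.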

UPC Readers

About a decade ago, the visually impaired community enthusiastically greeted the introduction of several UPC code readers to use at home or on the job to access the same information with speech. We reviewed two of them in the September 2005 issue of AccessWorld: the i.d. mate II from En-Vision America and the SCANACAN from Ferguson Enterprises.

The i.d. mate II is a portable unit containing a hand scanner connected to a processing unit that can identify millions of groceries, cleaning products, and other items. SCANACAN performs similar product identification, but the device needs to be connected to a PC via a USB cable so it's not as convenient to use for shopping or for searching your pantry for that can of chicken soup you know you bought last Saturday. SCANACAN is still available, but the i.d. mate II has been eclipsed by the more powerful and compact i.d. mate Quest, which includes a currency identifier along with a built-in camera and Skype capabilities in case you want to "phone a friend" for assistance.

Users of smartphones and tablets now also have a number of bar code reader apps from which to choose. Many of the most popular apps, including Google Goggles for Android and Red Laser from eBay for both iOS and Android, are free. Another, Digit-Eyes for iPhone, iPad, and iPod, was designed from the ground up to assist the print impaired.

Dedicated solutions like the i.d. mate that use red lasers for their scanning are far more effective at locating and identifying bar codes than apps that use a device's built-in camera. With dedicated readers, often all that is needed is to press a button and point the hand scanner toward the product in question. Mobile apps rely on ambient light augmented by the phone's camera flash. More often than not, you will have to hunt for the location of the code, which means investing some time learning where bar codes are normally printed on boxes, jars, cans, and other packaging. Even if you do know that most soup cans display their bar codes near the label seam, it's not always a cinch to locate it and get an accurate scan. Scanning with an app also requires a fairly steady hand to hold your phone or tablet long enough to locate the UPC code and initiate the scan.

The good news is that, since there are so many free options, you can test your scanning skills with just a small investment of time. Even if you do decide to purchase a dedicated scanning device, you'll still want to familiarize yourself with mobile scanning since most currently available dedicated scanners do not read the newer QR codes.

Raising the Bar with Quicker QR Codes

These days Quick Response (QR) codes seem to be popping up on everything from magazine ads to the front door of your favorite corner restaurant. QR codes have been around since 1994, when a division of Toyota created them to help track vehicles as they made their way through the manufacturing process. Toyota has publicly declared it will never enforce its patents, so QR codes are free for anyone to use. Indeed, with the spread of smartphones, their use has grown exponentially.

Like UPC codes, QR codes can be printed at a small size and in a discreet location on a product package, but they can also be displayed at a large enough size to be scanned from yards away, as on a billboard. They can also hold a lot more data than UPCs. UPCs are limited to twelve digits that have to be looked up in a database to be of any use. QR codes carry all of their data inside the code itself and can store up to 7,089 characters.

Standard UPC barcodes store information in a single vertical strip read from left to right: one dimension. QR codes add a second dimension, storing numbers and letters inside a matrix of tiny squares with patterns running both horizontally and vertically.

Laser scanners can't read QR codes because their beam is too narrow to capture the code "at a glance," which is how QR codes need to be scanned. This is also what makes the high-resolution camera in your smartphone ideally suited to read them, and with so much more space, QR codes can be formatted to hold all manner of useful information without needing to perform a database lookup. Scanning the QR code printed at the bottom of a magazine ad, for example, might invoke your browser and direct you to the company's website for more information. Scan that QR code on the restaurant door, and you may be able to browse the menu before you even step inside. Print a QR code on your business card, and the people you hand it to can add your contact info to their address book automatically, send you an e-mail or text message, or give you a call with a single tap of their phone's touchscreen.
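The business-card case works because QR readers recognize a handful of structured payload conventions; one of the oldest is the MECARD format, a short semicolon-delimited string. A minimal sketch of building one (the helper name is mine; the field tags and backslash escaping follow the published MECARD convention):

```python
def mecard(name: str, phone: str = "", email: str = "") -> str:
    """Build a MECARD contact payload suitable for encoding in a
    QR code. Reserved characters in field values are
    backslash-escaped, and the record ends with a double
    semicolon."""
    def esc(value: str) -> str:
        for ch in ('\\', ';', ':', ','):
            value = value.replace(ch, '\\' + ch)
        return value

    fields = [f"N:{esc(name)}"]
    if phone:
        fields.append(f"TEL:{esc(phone)}")
    if email:
        fields.append(f"EMAIL:{esc(email)}")
    return "MECARD:" + ";".join(fields) + ";;"

print(mecard("John Doe", phone="5551234", email="jdoe@example.com"))
# MECARD:N:John Doe;TEL:5551234;EMAIL:jdoe@example.com;;
```

Feed a string like this to any QR generator and a phone that scans the result will offer to add the contact, because the payload itself, not a database lookup, carries the data.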

In order to read QR codes, you need an app for your smartphone or tablet. For iOS users the QR Reader for iPhone works very well with VoiceOver. For Android Talkback users, Google Goggles is a popular choice. Both are free.

Another useful feature of QR codes, especially for the visually impaired, is the ability to generate and print the codes yourself. There are several websites where you can do this for free. Consider starting out at the Digit-Eyes website: create a free account, then create as many QR codes as you like, also for free. When you're done, you'll be prompted to download a PDF file formatted to print the codes on sheets of your favorite Avery labels. Voila: free, instant peel-and-stick labels you can scan with your phone to identify pantry items, CDs, the contents of file folders, and anything else you'd care to tag.

Of course, for some, even 7,089 characters isn't enough.

Voiceye: QR Codes with a Kick

Voiceye codes, which were introduced to the US in January 2013 by their exclusive US distributor, ViewPlus Technologies, could be described as QR codes on steroids. Like QR codes, Voiceye codes contain all of their information inside the code itself, and they can be scanned and decoded using a special smartphone app. However, instead of 7,089 characters, advanced algorithms enable Voiceye codes to hold up to a quarter of a million characters in a printed matrix roughly the same size as a standard QR code.

Also like QR codes, Voiceye codes can be created with special formats that prompt you to add a contact to your address book, send an e-mail to the address in the code, or open a URL in your default browser. Unlike QR codes, Voiceye codes can contain large blocks of text, even entire documents. For example, a Voiceye code on a printed map could offer up a link to Google Maps along with detailed turn-by-turn travel instructions from any number of starting points not printed on the original document.

Voiceye codes are fairly new, and at least here in the US, their use is still quite limited. However, imagine the possibilities:

  • Textbooks with Voiceye codes printed on every other page could contain not only the text of the book but also descriptions of the accompanying photos, charts, and graphs.
  • Voiceye codes at museum exhibits could describe in great detail the objects on display even deep within the building where there is no cell data coverage.
  • Utility statements with Voiceye codes would require nothing more than a quick tap on your smartphone to take you directly to your account page where you can check your usage and pay your bill online.
  • Agendas and other handouts you receive when you arrive at a meeting could be reviewed instantly and accurately on your smartphone.

The Voiceye code reader app is free for both iOS and Android. Generating codes, unfortunately, is not. (More about that later.)

The iOS and Android apps display voice-access-ready text in 10 zoom levels and five contrast levels. The standard location to print a Voiceye code is the top right of each or every other page, but a quick swipe won't do the job. Your camera needs to be positioned directly above the code at the proper distance from the page before you will receive the confirmation sound that alerts you that the code has been properly scanned. It can take considerable practice to become proficient. Happily, for those of us without a steady hand, ViewPlus Technologies sells a $10 smartphone stand that positions the camera the proper distance from the page, and when you're done, it folds into a pocket-size rectangle approximately one inch by three inches by four inches.

Voiceye Limitations

Voiceye codes are potentially powerful tools to enhance voice and braille access to printed materials, but along with that power come several limitations.

First, Voiceye codes are so information dense that they need to be printed on a laser printer with a resolution of at least 600 DPI. Second, at least for now, there is only one way to create a Voiceye code, and that is by using the company's proprietary software: a standalone application or a plugin for Microsoft Word or Adobe InDesign.

I tested the Word plugin and found it to be accessible and fairly intuitive. The company offers a free 30-day trial, but after that the cost is $500 for the Microsoft Word plugin, $600 for the Adobe InDesign plugin, and $1,000 for the standalone application.

In South Korea, Voiceye codes are used by schools for the blind as well as certain universities, publishers, and large corporations such as LG. The Korean government also prints Voiceye codes on official documents, including utility and tax bills. ViewPlus Technologies is hoping to replicate that success here in the United States. To seed their efforts, they have begun offering the software free to book and magazine publishers, but will that be enough?

Print impaired individuals are not likely to download the Voiceye mobile reader until there is something to scan. Meanwhile, government agencies, corporations, and even publishers who receive the coding software free will be more than a little reluctant to invest the time and effort to deploy a technology so few people are currently using.

One possible way around this conundrum would be to allow individuals to create and print their own Voiceye codes free for personal use. This could jumpstart the code's usage, and if Voiceye becomes popular enough, widespread adoption by government agencies and corporations might follow. When I broached this possibility with a company representative, he informed me that they are indeed considering offering a free web app individuals can use to create and print their own Voiceye codes. Plans are not yet firm, however, so stay tuned. We'll be sure to keep you updated on any future developments.

Comment on this article.