Dear AccessWorld Editor,

This message is in response to Bill Holton's February article, BARD Express: NLS Talking Books and Magazines When and Where You Want Them. I have BARD Mobile on my iPhone and I love it! Greatest app ever!

Sincerely,
Lina Smith

Dear AccessWorld Editor,

This letter is in response to Paul Schroeder's February article, Consumer Electronics Show 2017 Highlights.

As a totally blind person now living independently, I am interested in getting beyond the plethora of "sleek" but inaccessible touchscreen-controlled appliances, large and small! While tactile dots offer a "hack" solution (assuming one can remember what each dot means across multiple units), well-executed apps with "read it to me" and "tell me what to do" capabilities can help by conquering complexity while increasing confidence.

Best regards,
Kevin McBride

Dear AccessWorld Editor,

This letter is in response to Paul Schroeder's February article, Consumer Electronics Show 2017 Highlights.

I appreciated the rundown of new technologies at CES. I do wish, though, that you had been able to put your hands on a BLITab. I saw a YouTube video about it, but I was curious whether anybody had actually gotten their hands on one and run it through its paces to see if it really can do what they say it will do.

I have not gotten through the whole AccessWorld issue yet, so maybe you did and I just don't know about it. Either way, I think it's really neat and would love to have somebody take a look at it and see how well it actually works.

Thanks,
Nancy Irwin

Dear AccessWorld Editor,

As web-based and Windows software grows more intuitive for the sighted, ease of use is gradually declining for blind users. Back in the 1980s, when Control-K-K marked the end of a block in WordStar and Alt-F3 revealed codes in WordPerfect, every computer user was expected to memorize keystrokes. But in the 1990s, with the proliferation of menus, users selected functions visually and memorized the corresponding shortcut hotkeys only as needed. Software became easier for ordinary users. In the early days of Windows, this was true for blind and sighted alike.

Today, sighted users click, mastering keystrokes only when they wish to speed up familiar tasks. But we blind users are still stuck with the complexity of memorizing a bewildering plethora of random hotkeys. In Word, for example, Alt-O used to call up the Format menu. Now it does nothing on its own, whereas the sequence Alt-O, F still invokes the font menu, because the old keystrokes have been retained with no intervening menus to guide the user who doesn't remember the sequence.

In Google Docs, Alt-Shift-O and several presses of the Down Arrow navigate you to your chosen format. When I have to use multiple applications, each one forcing me to remember whether that same letter O goes with Alt and Shift, with Control and Shift, or with yet another combination, I feel annoyed. Screen readers combine the Windows key with Control, Alt, and Shift to add even more keystrokes. Even switching between WordPerfect and WordStar was less complicated.

A savvy user needs to remember which keystrokes are specific to the application, the operating system, the browser, and/or the screen reader. In Outlook, for example, pressing Alt-2 reads the date a message was sent, but only if that message is open. This adds the further complexity of modes.

Back in the 1980s, software often had a modal interface, where you had to enter a particular mode to perform a specific function. I used packages with query and edit modes; screen readers had a review mode. Today, for the sighted, the concept of modes has largely disappeared. People don't need to enter a formula mode to key in an equation. They need not be in a shopping mode to add items to their cart.

For blind users, modes have only proliferated. JAWS layered keystrokes are one example, along with NVDA's browse and focus modes, the JAWS forms mode, and the fact that you have to know where your focus is located to know which keystrokes will work in a particular situation.

But screen readers do not always reliably report the focus, even in well-scripted applications like Microsoft Office. There are times, as I arrow through a document, when an entire section appears to have been skipped until I refresh the off-screen model. Recently, when an add-on for Outlook on which a JAWS script depended got corrupted, I couldn't track the focus in email at all. There are many applications, and iTunes comes to mind here, where one must repeatedly press Tab while listening to random bits of text to try to figure out where one's focus is truly located. Ever since Windows 7, one cannot reliably read everything beneath the mouse cursor, because Microsoft has made it harder for screen reading software to exercise unrestricted control over mouse movement. So my mouse cursor often reports an unreliable jumble of information, which makes it difficult to teleport my focus to a particular element and know where it really is.

Web apps, with their multiplicity of clickable elements, can't even be said to have a focus throughout many interactions. Bolting keystrokes onto mouse-centric applications can make them accessible, but never intuitive or easy. Alt-Control-G, Q, Ctrl-Shift-R, H might take you to a needed icon where sighted users simply click the thing, but unless you use the application daily, it's hard to remember that sequence. And pressing Tab 35 times to reach a required field is inefficient, even if technically accessible.

Standards like Control-C for copy and Control-X for cut persist, but with web apps there is no standard look and feel, let alone standard keystrokes for reaching elements. Anyone who has to log in to a number of sites on the job, where one is on the clock, grows frustrated trying to remember how to navigate each one. Sighted people also experience frustration interacting with multiple databases and interfaces, but for them there is always something to left- or right-click on; there is always a way to explore rather than memorize by rote.

It saddens me that I can ask Alexa to order an Uber with a simple verbal command or locate a Starbucks on a talking map, yet I'm less effective in an office environment than I was when keystrokes were for everybody. The sheer mental effort needed to keep track of all the modes, keystrokes, and interface changes will limit the number of blind people who can compete in future occupations. Who will employ the worker who is less efficient, and who will believe in our capabilities if usability continues to erode? We blind folks are taking big steps backwards when it comes to user-friendly interfaces, and my fear is that nobody is taking notice.

Sincerely,
Deborah Armstrong
