If you're old enough, you may have used a typewriter in your life. If you're ancient, like me, you may even have learned to type using a manual Royal or Smith Corona. If so, you may still be banging away on your computer keyboard, pounding those keys a lot harder than necessary—so hard that the constant beating has taken its toll and you've developed a repetitive stress injury, otherwise known as an RSI.

Using a screen reader can also contribute. "Everything we do on the computer is keyboard based, and some of those three- and four-key screen reader commands can involve some genuine finger gymnastics," says Lucia Greco, an Assistive Technology Specialist at the University of California, Berkeley.

Even an iPhone or other touch screen device can be harder on the hands of people with visual impairments than on those of sighted users. "To access a button a sighted person needs merely to give it a single tap, but for a VoiceOver or TalkBack user, tapping that same button can require tapping the screen to begin, then performing a number of swipe gestures to navigate to the button, and then double tapping," notes Greco.

Greco herself was diagnosed with a serious RSI several years ago. So was her friend, Pranav Lal, a Cyber Security Specialist in New Delhi, India.

"We've both begun using dictation as much as possible," says Greco. Their phones come with built-in dictation. Their Windows PCs also offer dictation via the Ease of Access Windows Speech Recognition feature, but there are issues. "[It doesn't have a] built-in feedback mechanism to hear what you've just dictated using your screen reader," says Lal. "The corrections dialogue boxes are also inaccessible out of the box."

Both Greco and Lal began using dictation with Dragon NaturallySpeaking paired with J-Say, a set of JAWS scripts produced by Hartgen Consulting that provides screen reader feedback and other accessibility enhancements for Dragon.

But the package is expensive: approximately $700, plus the ongoing cost of software maintenance agreements. Greco's employer covered the cost, but Lal wound up paying for the software himself.

They were lucky. "There are so many people who could truly benefit from dictation on their PCs but who can't afford to make it accessible," says Greco.

Lal has been using the free NVDA screen reader "ever since they began supporting track changes." Greco also says she finds herself relying on the open source screen reader more and more instead of JAWS.

Both have become vocal advocates for the open source software model, which is the model used by the NVDA screen reader. "There are so many people around the world who can't afford a thousand dollars for a screen reader. Even those of us who can should consider using and donating to the project to help blind people around the world to participate on a level playing field."

The two decided to work toward an accessible bridge between dictation software and the NVDA screen reader. Lal started with an open source NaturallySpeaking Python scripting environment, but that solution proved extremely cumbersome, especially after Python was updated to a new version. That's when Matt Campbell joined the team and collaborated with Lal to write a separate add-on in the C programming language.
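
For readers curious what that earlier scripting approach looked like, the article doesn't pin down the exact toolkit, but open source NaturallySpeaking scripting generally means writing grammars that map spoken phrases to keystrokes. The sketch below is purely illustrative, using the Dragonfly Python library as a stand-in; the phrases and key bindings are hypothetical and are not DictationBridge's.

```python
# Illustrative sketch only: a Dragonfly-style grammar mapping spoken phrases
# to keystrokes, the general shape of the scripting approach described above.
from dragonfly import Grammar, MappingRule, Key

class ScreenReaderCommands(MappingRule):
    # Hypothetical phrases bound to NVDA desktop-layout keystrokes.
    mapping = {
        "say all": Key("insert:down, down, insert:up"),  # NVDA+down arrow
        "previous line": Key("up"),
    }

grammar = Grammar("screen reader commands")  # grammar exposed to the recognizer
grammar.add_rule(ScreenReaderCommands())
grammar.load()
```

Maintaining grammars like this for every command, and keeping them working across Python upgrades, is the kind of overhead that made the approach cumbersome.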

About a year ago, the team released their first public beta of DictationBridge for NVDA. Users began submitting bug reports and feature requests.

"The most frequent request we received was for us to make DictationBridge compatible with other screen readers, which, at the time, included JAWS and Window-Eyes," says Greco.

That was going to take funding. "We had to bring more developers on board, respect their professionalism, and at least offer them a token payment," Greco says.

The group raised $20,000 in an Indiegogo campaign, and continued their work. By then Window-Eyes had been absorbed by VFO, so they focused their attention on improving NVDA performance and making DictationBridge work with JAWS.

"The going was slow," recalls Greco. "The developers were part time—they had their regular jobs—and if Pranav, in India, say, had a question for Matt or another developer here in the US on Tuesday, he had to consider the time difference, along with the fact the US developer might not log onto the GitHub system until the weekend.

Recently, DictationBridge Version 1.0 was released, with different versions configured for NVDA and JAWS. They've also taken advantage of some significant improvements in Windows 10 dictation, so that either screen reader can work with either dictation platform.

I tried DictationBridge with all four possible configurations. Here are some initial impressions.

Getting Started with DictationBridge

There are separate downloadable versions of DictationBridge for use with NVDA and JAWS. Each offers two choices for speech recognition: Windows 10 comes with Windows Speech Recognition (WSR) built in, so it's free, while Dragon NaturallySpeaking from Nuance Communications retails for about $250. Versions 14 and 15 of Dragon have been tested; version 15 does not require voice training.

DictationBridge is a 1.0 release, and as such, it's no surprise it can be, shall we say, a bit less than user friendly to install. Novice users will likely need help from an experienced computer user, and even experienced users would be wise to review the online guide before attempting the installation. (Note: A downloadable version of the documentation is offered, but at the time of this writing the link gave a "can't reach this page" error.)

The first step toward accessible text and command dictation is setting up your dictation engine of choice. Needless to say, you will need a working microphone for this, and the higher the quality the better. For Dragon, follow the installation process just as you would for any other software install. For Windows 10 WSR, press the Windows key and type "recognition." This will bring up the desktop app, Windows Speech Recognition. Open the app and enable dictation. Do not train the app with your voice at this point beyond the initial setup sentence, however; I will explain why shortly.

This is where things get a bit difficult for both screen readers. As an example, here are the additional steps you must take to get DictationBridge running with NVDA:

Before you can use DictationBridge for NVDA with WSR to execute screen reader commands, you first need to download and install Windows Speech Recognition Macros. The software is supposed to create a similarly named folder in your Documents folder. This folder was never created for me, even after I had uninstalled and reinstalled the software three times. I eventually went ahead and created the folder myself.

Now, locate and run the WSR Macros Utility, a step not mentioned in the documentation. This utility links a set of macros to the WSR engine, enabling screen reader commands via dictation and other features.
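
For context, a WSR macro is a small XML file that pairs a phrase to listen for with an action to perform, typically a keystroke sequence sent to the focused application. The snippet below is a hypothetical illustration of the general format, not one of DictationBridge's actual macros; the phrase and keystroke are invented for the example.

```xml
<speechMacros>
  <command>
    <!-- Hypothetical macro: saying the phrase below sends Alt+F4 to close
         the active window. DictationBridge's macros presumably trigger
         screen reader commands in a similar fashion. -->
    <listenFor>close this window</listenFor>
    <sendKeys>%{F4}</sendKeys>
  </command>
</speechMacros>
```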

Next, open the NVDA menu and select Tools. Select "Install commands for DictationBridge," then "Install commands for Windows Speech Recognition."

Now, locate and open the Windows Speech Recognition Macros Utility in your system tray. Tab once to find the new NVDA macros, then Tab again to access and confirm the "sign macro" option; WSR only runs signed macros, a safeguard against malware and viruses.

Restart NVDA. Now you are ready to dictate.

The three other installations (WSR with JAWS, Dragon with NVDA, and Dragon with JAWS) are equally complex. The documentation leaves much to be desired, but I feel certain this will change in time.

Speech to Text and Back Again!

With everything up and running, saying "Start listening" caused WSR to do just that, while Dragon responded to "Wake up." "Stop listening" seemed to work for both.

If you are using WSR, this is an excellent time to return and do some voice training. Again, start Windows Speech Recognition and access the "Improve speech recognition" option. DictationBridge does an excellent job of voicing the practice sentences, and if you need to hear one again, simply press the grave accent (`) key and your screen reader will repeat it. Here, I cannot help but feel that Microsoft is lagging far behind other dictation engines. Both Apple and Google offer much higher quality dictation without training, although you do need a data connection for the full experience.

DictationBridge doesn't just echo your dictation. It also enables Windows commands, such as "Open Notepad," and screen reader commands (at the time of this writing for NVDA only), such as "Say all," and "Previous line." I am told JAWS commands will be coming soon.
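
To give a sense of how a screen reader add-on can provide this kind of spoken feedback, here is a minimal sketch of an NVDA global plugin. It is not DictationBridge's actual code; the gesture and message are hypothetical, and it simply shows the hook an add-on can use to announce text through NVDA's own speech.

```python
# Minimal NVDA global plugin sketch (not DictationBridge's actual code).
# It demonstrates how an add-on can speak feedback through NVDA itself,
# the same kind of hook that lets dictated text be echoed back to the user.
import globalPluginHandler
import ui

class GlobalPlugin(globalPluginHandler.GlobalPlugin):

    def script_echoSample(self, gesture):
        # Announce a string through NVDA's speech and braille output.
        # A dictation bridge would pass along the text the recognizer
        # just inserted instead of this fixed message.
        ui.message("This is what you just dictated.")

    # Hypothetical keyboard gesture bound to the script above.
    __gestures = {
        "kb:NVDA+shift+e": "echoSample",
    }
```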

DictationBridge also makes the correction boxes accessible so you can make word and spelling corrections. For me, this worked much better in NVDA using Dragon than with any of the other options. Here I need to remind you that this is a 1.0 release, and that in my opinion it would more appropriately have been designated a 0.75 beta release. The app can be rather buggy, in my experience, and that's not necessarily the fault of the developers.

Greco doesn't excuse DictationBridge's shortfalls, but she does explain them. "We did a lot of outreach to get users to test the beta and join in on the discussion on our beta list," she says. "Unfortunately, we didn't get the involvement we were hoping for. In fact, just prior to the 1.0 release there were only two JAWS users participating, and when they stopped sending in bug reports there wasn't much we could do to test the app with different hardware and software configurations."

That said, the developers are currently hard at work resolving issues. Many of the bugs I experienced may be fixed by the time you read this, so I won't describe them in great detail. Most involved extra spoken characters, such as the word "delete" repeated four times at the start of too many sentences, though these did not affect the text itself. It was also difficult to correct a sentence when multiple words were grouped into shorter or longer strings, such as when "Let's dictate some text" was recognized as "Let's start a syntax."

Community Involvement

Raising $20,000 to fund an open source project is no mean feat. Apparently, however, getting people who might benefit from the project to spend a little time helping out is.

In my opinion, the DictationBridge project is of great potential value to thousands of people with visual impairments around the world. Do you currently have an RSI, or feel the beginnings of one coming on? Especially if you worry you might one day have trouble keying in those obscure screen reader commands, you should probably find a way to get involved. You can start by subscribing to the DictationBridge news list or following the project on Twitter @dictationbridge.


Author: Bill Holton
Article Topic: Product Reviews and Guides