Full Issue: AccessWorld March 2017

A Busy Time at AccessWorld

Lee Huffman

Dear AccessWorld readers,

All I can say is WOW!

It has been one busy, action-packed time at AccessWorld and all of AFB! It seems we just wrapped up with the Consumer Electronics Show (CES) and the Assistive Technology Industry Association Conference (ATIA), and then turned full force into the California State University at Northridge Conference (CSUN) and the AFB Leadership Conference (AFBLC).

CSUN was held in San Diego, California, February 27–March 3, and AFBLC was held in our nation's capital, Washington, DC, March 2–March 5. You read that correctly: the CSUN conference dates conflicted with the dates of AFB's Leadership Conference. Admittedly, AFB was concerned about the conflicting conference schedules, but I am thrilled to report that AFBLC 2017, despite competing with CSUN for attendees, broke our record for attendance, which was set last year. Please mark your calendar now for next year's AFB Leadership Conference, which will be held April 5–7, 2018, in Oakland, California.

Speaking of conference coverage, AccessWorld will have conference wrap-up articles from Jamie Pauls, Shelly Brisbin, and Janet Ingber for CSUN and AFBLC in our April issue. In the meantime, to help keep AccessWorld readers up to date with the goings-on at CSUN, AFB was once again proud to sponsor the Blind Bargains podcast coverage of CSUN 2017. The AccessWorld team encourages you to log on to the Blind Bargains Audio Content page, which features great interviews, presentations, and updates on the latest in technology news from the conference.

Sincerely,
Lee Huffman
AccessWorld Editor-in-Chief
American Foundation for the Blind

Letters to the Editor

Dear AccessWorld Editor,

This message is in response to Bill Holton's February article, BARD Express: NLS Talking Books and Magazines When and Where You Want Them. I have BARD Mobile on my iPhone and I love it! Greatest app ever!

Sincerely,
Lina Smith

Dear AccessWorld Editor,

This message is in response to Paul Schroeder's February article, Consumer Electronics Show 2017 Highlights.

As a totally blind person now living independently, I am interested in getting beyond the plethora of "sleek" but inaccessible touchscreen-controlled appliances, large and small! While tactile dots offer a "hack" solution (assuming one can remember what each means across multiple units), having well-executed apps with "read it to me" and "tell me what to do" capabilities can help by conquering complexity while increasing confidence.

Best regards,
Kevin McBride

Dear AccessWorld Editor,

This message is in response to Paul Schroeder's February article, Consumer Electronics Show 2017 Highlights.

I appreciated the rundown of new technologies at CES. I do wish, though, that you had been able to put your hands on a BLITab. I saw a YouTube video about it, but I guess I was just curious to see if anybody had actually put their hands on it and gotten to run it through its paces to see if it really could do what they said it would do.

I have not gotten through the whole AccessWorld issue yet, so maybe you did, and I just don't know about it. Either way, I think it's really neat and would love to have somebody take a look at it and see how well it actually works.

Thanks,
Nancy Irwin

Dear AccessWorld Editor,

As web-based and Windows software grows more intuitive for the sighted, ease of use is gradually declining for the blind user. Back in the 1980s, when Control-K-K marked the end of a block in WordStar and Alt-F3 revealed codes in WordPerfect, every computer user was expected to memorize keystrokes. But in the 1990s, with the proliferation of menus, users selected functions and memorized the corresponding shortcut hotkeys as needed. Software became easier for ordinary users. In the early days of Windows, this was true for blind and sighted alike.

Today, sighted users click, mastering keystrokes only when they wish to speed familiar tasks. But we blind users are still stuck with the complexity of memorizing a bewildering plethora of random hotkeys. In Word, for example, Alt-O used to call up the Format menu. Now it does nothing, whereas the keystroke Alt-O, F still invokes the Font menu, because old keystrokes have been retained with no intervening menus to guide the user who doesn't remember the sequence.

In Google Docs, Alt-Shift-O and several presses of the Down Arrow navigate you to your chosen format. When I have to use multiple applications and remember the difference between the letter O with Alt and Shift, that same letter O with Control and Shift, and a further use of O with Alt and Shift, I feel annoyed. Screen readers combine the Windows key with Control, Alt, and Shift to add even more keystrokes. Even switching between WordPerfect and WordStar was less complicated.

A savvy user needs to remember which keystrokes are specific to the application, the operating system, the browser, and/or the screen reader. In Outlook, for example, pressing Alt-2 reads the date a message was sent, but only if that message is open. This adds the further complexity of modes.

Back in the 1980s, software often had a modal interface, where you had to enter a particular mode to perform a specific function. I used packages with query and edit modes; screen readers had a review mode. Today, for the sighted, the concept of modes has largely disappeared. People don't need to enter a formula mode to key in an equation. They need not be in a shopping mode to add items to their cart.

For blind users, the proliferation of modes has increased. The JAWS layered keystrokes are one example, as are the browse and focus modes of NVDA, the forms mode of JAWS, and the idea that you have to know where your focus is located to know which keystrokes will work in a particular situation.

But screen readers do not always reliably report the focus, even in well-scripted applications like Microsoft Office. There are times, as I arrow through a document, when an entire section appears to have been skipped until I refresh the off-screen model. Recently, when an add-on for Outlook on which a JAWS script depended got corrupted, I couldn't track the focus in email at all. There are many applications, and iTunes comes to mind here, where one must repeatedly press Tab while listening to random bits of text to attempt to figure out where one's focus is truly located. Ever since Windows 7, one cannot reliably read everything beneath a mouse cursor, because Microsoft has made it harder for screen reading software to provide unrestricted control over mouse movement. So my mouse cursor often reports an unreliable jumble of information. This makes it difficult to teleport my focus to a particular element in order to know where it really is.

Web apps, with their multiplicity of clickable elements, can't even be said to have a focus throughout many interactions. Bolting keystrokes onto mouse-centric applications can make them accessible but never intuitive or easy. Alt-Control-G, Q, Ctrl-Shift-R, H might take you to a needed icon, whereas sighted users simply click the thing, but unless you use the application daily, it's hard to remember that sequence. And pressing Tab 35 times to get to a required field is inefficient even if technically accessible.

Standards like Control-C for copy and Control-X for cut continue, but with web apps, there is no standard look and feel, let alone standard keystrokes to access elements. Anyone who has to log in to a number of sites in a job situation, where one is on the clock, grows frustrated trying to remember how to navigate each one. Sighted people also experience frustration interacting with multiple databases and interfaces, but for them, there's always stuff to left- and right-click on; there is always a way to explore rather than memorizing by rote.

It saddens me that I can ask Alexa to order an Uber with a simple verbal command or locate Starbucks on a talking map, but I'm less effective in an office environment than I was when keystrokes were for everybody. The sheer mental power needed to keep track of all the modes, keystrokes, and interface changes will limit the number of blind people in future occupations. Who will employ the worker who is less efficient, and who will believe in our capabilities if usability continues to gradually erode? We blind folks are taking big steps backwards when it comes to user-friendly interfaces, and my fear is that nobody is taking notice.

Sincerely,
Deborah Armstrong

AccessWorld News

The American Foundation for the Blind Now Accepting Applications for its 2017 Scholarship Program

The American Foundation for the Blind (AFB) administers four post-secondary education scholarships for up to eight deserving students who are legally blind. The available scholarships for 2017 are detailed below.

The Rudolph Dillman Memorial Scholarship: Four scholarships of $2,500

Requirements:

  • Full-time undergraduate or graduate student
  • Studying rehabilitation or education of persons who are blind and/or visually impaired

The Paul and Ellen Ruckes Scholarship: Two scholarships of $2,000

Requirements:

  • Full-time undergraduate or graduate student
  • Studying engineering or computer, physical, or life sciences

The R.L. Gillette, Gladys C. Anderson, and Karen D. Carsel Memorial Scholarship: One scholarship of $3,500

Requirements:

  • Female
  • Undergraduate student
  • Studying music

The Delta Gamma Florence Margaret Harvey Memorial Scholarship: One scholarship of $1,000

Requirements:

  • Undergraduate or graduate student
  • Studying rehabilitation or education of persons who are blind and/or visually impaired

Visit the AFB scholarships website for further information and to fill out the application.

Please direct questions and comments to: American Foundation for the Blind Information Center, 800-232-5463, info@afb.net

New Business Idea You're Trying to Get Off the Ground? Enter the New Venture Competition!

The Hadley Institute for the Blind and Visually Impaired's Forsythe Center for Employment and Entrepreneurship (FCE) is launching its second annual New Venture Business Competition, with cash awards totaling $30,000.

Did you know Hadley offers courses that can help build a business? It's not too late to take a module or two. Through its FCE program, you can learn such practical business building skills as how to write a marketing plan or do market research.

Anyone who has taken an FCE module is eligible to submit their business plan for a chance to win. Last year we awarded over $27,000 to three businesses that are continuing to grow. We look forward to once again helping entrepreneurs turn great ideas into great business ventures.

The deadline for entry is 11:59 pm CST on March 15, 2017. Awards will be given out on June 8, 2017.

Participation and submission rules can be found at hadley.edu/nvc. To enroll in the FCE, please visit Hadley.edu/FCE or call Student Services at 800-526-9909.

Holman Prize for Blind Ambition to Award $25,000

What would you do with $25,000? This year the LightHouse for the Blind and Visually Impaired is asking blind people worldwide that very question. It's not just an exercise, but a new set of awards designed to fund the international projects of legally blind individuals: The Holman Prize for Blind Ambition.

"The Holman Prize is not meant to save the world or congratulate someone for leaving the house," says LightHouse CEO Bryan Bashin, "This prize will spark unanticipated accomplishments. You will see blind people doing things that surprise and perhaps even confuse you."

The Holman Prize website, now accepting applications from blind individuals around the world, already reflects a diversity of faces and ideas: blind podcasters from Brooklyn, blind skateboarders from the Midwest, and blind technologists, educators and scholars passionate about taking on their own projects anywhere from Canada to Kyrgyzstan. In March, the LightHouse's hand-picked Holman Committee will select a group of semifinalists to go through a formal application process, ultimately resulting in a few large awards to be given this summer. Applicants must be 18 or older and able to produce proof of legal blindness upon request.

For the LightHouse, putting "blind ambition" on display across social media is as much a goal as the projects themselves. Contestants are encouraged to promote their videos widely to their friends, family and followers, with the promise that the most popular pitch on social media will be guaranteed consideration in the Holman Prize's final round.

Feel the Buzz of the BuzzClip

Wearable technology, such as the Apple Watch, is currently a big trend in consumer electronics. The BuzzClip is a promising new entry in this market. The device is worn on a shirt, jacket, blouse, or any other outer garment, and vibrates when the user gets close to an object. It is not meant to take the place of a white cane or a guide dog, but is intended to be used in conjunction with either method of travel.

The BuzzClip was developed by iMerciv Inc., and started out as an Indiegogo project in 2014. The company's founders, Bin Liu and Arjun Mali, have a personal interest in visual impairments. Liu's father has inoperable glaucoma and Mali's parents have supported a blind school and orphanage in India for many years.

What's in the Box

The BuzzClip comes in a small box along with a User Guide, a micro USB charging cable, an AC adaptor, and a cord lanyard.

The BuzzClip

The BuzzClip has three main parts, and its total combined size is only 2.5 inches tall by 1.3 inches wide by 1.75 inches deep. The device is black and weighs slightly more than 2 ounces.

A small but sturdy U-shaped clip connects the rounded sensor on one side of the U with the main body and the vibrational arm on the other side.

In addition to holding the BuzzClip itself, the U-shaped clip attaches the unit to an article of clothing.

A part of the rounded vibration sensor is cylindrical in shape and protrudes slightly; it points in the direction the sensor is aimed.

On the other arm of the U, the roughly rectangular main body houses the battery, a circuit board, and the charging port. The port can be located by running a finger along the side of the Clip. If you are holding the Clip with the sensor facing away from you, the charging port is on the left side near the top.

Attached to the main body is the vibrational arm, which must be in contact with your body. On the arm, a small rectangular piece sticks out; it can be pulled open as needed to get the vibrational arm closer to your body.

The User Guide

The User Guide is included in print and is also available from the website.

It can be read directly on the site or downloaded as either a PDF or MS Word document. The Web version was easy to navigate in Safari.

The included guide is printed on glossy paper; braille instructions are not available.

USB Cable and AC Adaptor

The USB cable has a standard USB Type-A connector on one end and a micro-USB connector on the other. The micro connector goes into the BuzzClip. The other end can go into a computer or into the AC adaptor. If you have a new Touch Bar Mac laptop, you will need a USB Type-A to USB Type-C adaptor to plug the cable into your computer.

Cord Lanyard

The manual does not include a description of exactly how to use the very convenient cord lanyard. Its purpose is to keep the BuzzClip from falling or being dropped. The lanyard goes around your neck and is attached to a small button. A small metal ring encircles the button. The metal ring is attached to a string loop that goes through the clip of the BuzzClip. Pressing the button releases the ring and string, and therefore releases the BuzzClip from the lanyard. To re-secure the BuzzClip, snap the ring back into its holder. This convenience means it is not necessary to put the lanyard through the spring clip or take off the lanyard each time you want to put on or take off the BuzzClip.

Getting Help

In addition to the user manual, help is available by calling iMerciv at 647-919-6565. Support is also available at info@imerciv.com.

Operating the BuzzClip

When holding the BuzzClip with the sensor facing outward, on the right side of the sensor is a groove with one marking. Just below the marking is the On/Off button. Press and hold the button. The first set of vibrations indicates the battery level; the patterns are clearly described in the manual. The second set of vibrations indicates the BuzzClip is on.

On the left side of the unit is a groove with two marks. Just below the marks is the distance button. By default, the distance-to-object parameter is set for two meters or 6.56 feet. Every time the BuzzClip is turned on, it starts at two meters. Pushing and holding the button causes the unit to vibrate once and changes the distance setting to one meter or 3.28 feet. Repeating the procedure will change it back to two meters. iMerciv recommends using the two-meter setting for outdoors and the one-meter setting for indoors.

Turn off the BuzzClip by holding in the On/Off button. A long buzz indicates the unit is off. Charge the BuzzClip by either plugging it into a computer's USB port or by using the wall adaptor. When fully charged, battery life should be about 10 hours.

Wearing the BuzzClip

To attach the BuzzClip to your clothing, gently pull apart the arms of the U-shaped spring clip. The arms can move about 1/4 inch. With the sensor pointing forward, insert part of your outer layer of clothing into the space and make sure it goes all the way in to keep it secure. Double check that the sensor is pointed straight ahead and nothing is obstructing it.

The BuzzClip is water resistant but not waterproof, so take precautions. Cold weather is not an issue, since the unit can tolerate temperatures as low as minus 30 degrees Celsius or minus 22 degrees Fahrenheit.

The remainder of the BuzzClip goes under or between your clothing. Only the sensor should be visible. Use the extendable rectangular piece if necessary to make the vibrations easy to feel.

When the BuzzClip detects an object, it not only vibrates but also makes a sound, though the sound is not very loud. Volume is not adjustable.

Range

According to iMerciv, if the BuzzClip is worn on your upper body or chest, it will detect objects between your waist and the top of your head. As the user approaches an object, frequency and intensity of the vibrations increase. The three strongest vibrations are delivered at about 50 centimeters or 19.69 inches from an obstacle.

The BuzzClip has a "sleep mode" for when you are standing a fixed distance from an obstacle or person, as would happen when waiting in line. Sleep mode engages after five to seven seconds of inactivity. The unit comes out of sleep mode when you, an object, or a person in front of you moves more than ten centimeters or four inches.

Walking Outside with the BuzzClip

For my evaluation, I took my guide dog and had the BuzzClip attached to my coat on my upper body. Getting the BuzzClip onto a heavy winter coat took some work, but I did get it fastened correctly.

The BuzzClip did a very good job with bushes sticking out over the sidewalk, including branches that were at head level. It indicated a car blocking the sidewalk and a trash can. It had no difficulty reacting to a lamppost or a street sign.

Indoors

Indoors, with the range set at one meter, the BuzzClip detected a chair in the middle of the room, an open cabinet, a half-opened door, and a horizontal bar positioned at forehead level. It was significantly easier to affix the BuzzClip to a shirt than to the heavy down jacket.

When I stopped to have conversations with people, BuzzClip stopped vibrating within a few seconds. When waiting in line, if the person in front of me stood still, BuzzClip stopped vibrating. If the person turned their head or reached for something, the unit sounded.

The Bottom Line

BuzzClip was extremely good at detecting objects at both one-meter and two-meter ranges. The user manual is well written. Battery life is very good.

The BuzzClip's price is not listed anywhere on iMerciv's home page, which is a bit inconvenient. There is a "How to Order" link that brings you to a PayPal button, but the price is not shown until you log into your PayPal account, and the BuzzClip's cost and shipping are not broken down. The PayPal page says shipping is $35; since the total cost is $284, the BuzzClip itself costs $249. I have visited hundreds of retail websites and have never encountered a situation like this. A customer should not have to log into PayPal to find out how much something costs.

It is critical that the BuzzClip faces straight forward. Otherwise, it can miss objects in your path. It was more difficult to get it on a winter jacket or heavy sweater. Perhaps the spring clip could open a little wider.

This is a very good product if you are looking for more object feedback as you travel. Its small size, light weight, and ease of use make it very convenient.

Product Information

Product: BuzzClip
Available from: iMerciv Incorporated, 647-919-6565, info@imerciv.com
Cost: $249 plus $35 shipping

CAPTCHA Be Gone from Accessible Apps Removes Another Barrier to Accessibility

Technology continues to improve the way people who are blind interact with their devices and make use of the Internet. In any major screen reader release, improvements to the way the product works on the Web are front and center in all "What's New" documentation. Furthermore, modern software applications make use of, and behave like, webpages to such an extent that it can often be difficult to know where a desktop application ends and the Internet begins.

There is one thing, however, that can stop a blind person in his or her tracks more abruptly than just about anything else when it comes to working on the Web, and that is the presence of a graphical image that must be identified before one can proceed any further with a task. The use of these images is known as "Completely Automated Public Turing test to tell Computers and Humans Apart," or CAPTCHA. According to a recent Mental Floss article, CAPTCHA was "developed in the early 2000s by engineers at Carnegie Mellon University" who wanted to find a way to "filter out the overwhelming armies of spambots pretending to be people."

The idea behind CAPTCHA was to present text that was garbled or distorted in such a way that a computer couldn't read it, but a human could. To pass the test and be allowed to proceed, the human accurately types the words and numbers she sees into an edit box. Over the years, a variety of approaches have been taken to the CAPTCHA process. Today, human users of the Internet are regularly required to identify not only text and numbers CAPTCHAs, but images of animals and everyday objects as well.

Unfortunately, screen readers are unable to make sense of these images. In an effort to walk the line between keeping out unwanted intruders and allowing blind people to have access to the same content as their sighted counterparts, some sites offer audio CAPTCHAs. The idea is to present garbled words and numbers, or to bury the words and numbers a blind person needs to hear among a crowd of other sounds. Sometimes it's nearly impossible to distinguish the words and numbers that need to be entered into the text area of a CAPTCHA test from the clutter of other sounds. If a blind person also has hearing difficulties, the problem is only compounded.

Early Solutions to the CAPTCHA Conundrum

In 2009, The Blind Access Journal reported on a new service called Solona. To use Solona, you install software on your computer, take a screen shot of the CAPTCHA on a webpage, locate the image of the screen shot on your computer, submit that image via the installed software, wait for a human to translate the CAPTCHA, and then paste the resulting text into the answer field. If the process seems tedious to the reader, actually going through the above-mentioned steps to solve a CAPTCHA was equally tedious—but nobody complained. The tireless enthusiasm of Solona's developer, and the dedication of the people who solved the CAPTCHA requests submitted to the service, did not go unnoticed by the blind community.

As wonderful as Solona was, there were some security risks involved. Users were encouraged to make certain that no sensitive information had been typed into a website's forms before a CAPTCHA was submitted, because any volunteer would see whatever was captured by the screen shot. Many in the blind community were more than willing to take the extra precautions needed to have their CAPTCHA images translated for them.

Eventually, the Solona service was ended very abruptly under circumstances that are unclear to this day, and nothing took its place for a while. For those people who eventually switched from Microsoft's Internet Explorer browser to the Mozilla Firefox browser, the Webvisum extension became a viable means of solving a CAPTCHA, but the extension only worked in that browser.

Building a Better Way to Kill CAPTCHA

Christopher Toth is a guy who isn't afraid to tackle a challenge. He first became known to the blind community as the creator of a free and phenomenally popular Twitter client known as Qwitter. Eventually, the free client was replaced by a paid program colorfully named Chicken Nugget. Besides a Twitter client, Toth's company, Accessible Apps, has released Hope, an accessible interface for the popular Pandora music service, as well as QRead, an ebook reader that facilitates the reading of a number of electronic text formats. There are other software offerings currently available from Accessible Apps, with more likely on the way. Being a blind computer programmer gives Toth an intimate understanding of the needs of the blind community when it comes to creating innovative technology.

In February of last year, Toth went on the popular assistive technology podcast The Blind Bargains Qast (BBQ) to talk about his latest project, then still under development. Called CAPTCHA Be Gone, this new service would combine the power of computing technology with human intervention to make the solving of CAPTCHA images for blind people as quick and efficient as possible.

How CAPTCHA Be Gone Works

The first thing anyone wishing to use CAPTCHA Be Gone must do is visit the website and sign up for a subscription to the service. Currently, there are two plans to choose from: $3.00 per month or $33.00 per year. These are introductory prices, but it is not clear how long the offer will last. In the podcast interview mentioned above, Toth states that, because paid humans are partly involved in the process of solving CAPTCHAs, it would not be economically sustainable to allow users of the service to pay for each CAPTCHA-solving incident individually.

CAPTCHA Be Gone works with all major Windows screen readers, and plans are under way to support mobile browsers and the Mac in the future.

Once you have picked a plan and signed up for the service, you need to download an extension for all of the browsers you intend to use—Internet Explorer, Google Chrome, or Firefox. You can download any or all of these extensions at any time. Toth hopes to make the service work with mobile browsers as well as Safari sometime in the future.

I found the instructions for downloading and installing the extensions to be very straightforward. I installed extensions for Internet Explorer and Firefox, although I only tried the service using Firefox. I used JAWS when requesting to have a CAPTCHA solved.

For my test, I signed into my Audible.com account using Firefox. I needed to reset my password for some reason. After that process was completed, I was presented with a CAPTCHA when signing in with Firefox for the first time. When I came to the CAPTCHA answer field, I pressed CTRL+Shift+S, and received an audible prompt that the CAPTCHA was being solved. In a few seconds, I heard the letters and numbers read back to me, and the information was copied to the Windows clipboard. I went to the edit box where I needed to fill in the requested information, and pasted the result there. With that, I was happily on my way. As an alternative to pressing CTRL+Shift+S, it is possible to right-click anywhere on a webpage and search for a CAPTCHA if you aren't sure exactly where it is. The CAPTCHA Be Gone service is smart enough to distinguish a CAPTCHA from other information on a webpage, so you don't have to be concerned about a human seeing personal information that you may have already entered such as your date of birth, or phone number.

The Bottom Line

CAPTCHA Be Gone is an easy-to-use service that does exactly what it promises—it allows someone with a visual impairment to successfully enter the information requested by a CAPTCHA with a minimal amount of inconvenience. Ironically, one of the most difficult things for me when it came to testing the service was the inability to actually find a CAPTCHA when I wanted one. The CAPTCHA barrier presents itself to a blind person at the most unexpected and inconvenient times, and that is part of the beauty of this subscription-based service. I personally have no problem paying a fee for the comfort of knowing that I can get past this barrier when I need to.

When I was resetting my password and trying to log on to the Audible website, I had a bit of trouble getting things to work. I don't know whether the problem was with my password or whether the CAPTCHA in question was not being solved properly. I found myself hoping that I wasn't driving a human somewhere crazy with my requests for CAPTCHA solving. There is no way to tell whether the computer is generating the answer or whether human intervention is needed. In the four or five times I needed to have a CAPTCHA solved during my test, only once did I not get a response. After trying again a few seconds later, a result was returned and my attempt to enter the result of the solved CAPTCHA was ultimately successful.

For me, at least, I will probably need to pay for several months of use and request that several CAPTCHAs be solved before I can truly determine the effectiveness of this service. This is money I am willing to spend. For anyone who does not wish to inconvenience the sighted people in their lives, or for anyone who may not have a sighted person around frequently, I believe that CAPTCHA Be Gone is a service worth supporting.

Product Information

Product: CAPTCHA Be Gone (currently compatible with Internet Explorer, Google Chrome, and Firefox)
Cost: $3.00 per month; $33 per year (introductory rates)

There's No Place like Google Home: A Review of Google's Voice Assistant

The idea of interacting with a voice assistant is not new. Many of our readers may remember Tellme, a telephone voice portal that offered up weather forecasts, sports scores, and movie listings among other services through an interactive voice response system that was revolutionary for its time. Fast-forward to 2017 and voice assistants are now present in a variety of gadgets including Siri on the iPhone and Alexa on the Amazon Echo.

Google is one of the latest companies to enter this growing market with Google Home, a voice assistant that is designed to blend in with your décor and help you with many facets of your life. We put the assistant through its paces and also put it to the test against the Amazon Echo.

Physical Appearance

Google Home is a white cylindrical device about 6 inches tall that is designed to sit on a desk or nightstand. Weighing just over a pound, it's about 4 inches in diameter and includes a single monaural speaker near the base. Speaking of the base, the included bottom can be swapped out for a variety of replacements in various colors, a definite indication that Google is serious about the device blending in with any room of your house.

The top face of the device slants downward towards the user and includes some basic controls. Tapping the center of the control face will play and pause music. Raising and lowering the volume is accomplished by moving your finger clockwise or counter-clockwise around the outer edge of the control face. Although the top is basically a flat surface, these controls are easy to manage with a bit of practice, and audible cues are provided. On the back is a single button that will turn the microphone on and off when pressed, or reset the unit if held down. The power plug is connected on the bottom edge of the speaker and is proprietary.

Initial Setup

The setup of Home is accomplished through the Google Home app for iOS and Android devices. The app works with virtually any iPhone, iPad, or Android device that has been released in the past four years. While a phone or tablet is not required for operation of the assistant, it is the only way to perform the initial setup.

After plugging in the unit, setup basically involves connecting Home to your wireless network using the app. You can optionally enter additional information, such as your address, which can then be used by the various services discussed below. The apps are accessible and simple to use with VoiceOver or TalkBack, and online help is available if needed.

Basic Usage

Google Home is an always-on assistant designed to listen for commands from you or others in your home. The two far-field microphones are designed to detect sound from several feet away, and often Home understood me when I yelled from the next room.

Once activated by the phrase "OK Google" or "Hey Google," Home then listens for a command. By default, a ring of colored lights will display on the control face when it recognizes one of the activation phrases. You can have an audio tone played upon activation by turning on an option under Accessibility Settings in the Google Home app. In any case, you can say your entire command without waiting for the device to light up or play the tone, and it will respond. In most cases, Home responds to your query or command by voice within a second or two. You can also activate Home by holding a finger on the center of the control face of the unit for a couple of seconds, in which case you won't need to say the "OK Google" phrase.

Like most devices in this category, Google Home is always listening and makes recordings of your voice. While Google's privacy policy gives specific details on how these recordings are used, some users may wish to turn off the microphone using the button on the back. The Home will respond with a voice confirmation when this button is pressed.

The assistant is designed to perform a variety of tasks including providing information, setting timers, and playing music. For instance, you could ask, "What was the score of the Detroit Lions Game?" or "Who starred in 'Jurassic Park'?" The Home will provide answers from its own bank of knowledge or from popular websites such as Wikipedia or Allrecipes. It can also provide weather forecasts, movie listings for local theaters, stock quotes, and business listings.

To give you an idea of the power of Google Home, here are a few sample commands:

  • Set a timer for 5 minutes.
  • Set an alarm for 7 o'clock.
  • How many sticks of butter are in a cup?
  • Where is the nearest pizza place?
  • Who is the quarterback for the Broncos?
  • Roll 5 dice
  • What sound does a cow make?
  • Play Mad Libs

Let's Play Some Music

Google Home can serve as the DJ for your next party. It connects with Pandora, Spotify, Google Play Music and YouTube for audio as well as TuneIn for radio stations and podcasts. Some services, such as Spotify, require a paid premium account to work with Home. Google Home also provides daily news updates from a variety of sources including CBS Radio news, NPR, the BBC, and ESPN.

You can control music and audio using your voice with commands such as "Next," "Volume 7," or "Skip ahead 30 seconds." To identify a track, ask, "What is this song?" In our tests, voice control of music generally works well, though you may need to raise your voice if Home is playing music at a loud volume.

Google Home also supports a feature called Chromecast Audio. This lets you send music and audio files from your phone to Google Home. Or put another way, you can use your phone to select music and then select the Cast button to send the audio to the Google Home speaker. This can be done from anywhere on the same Wi-Fi network, meaning you can control the speaker from anywhere in your home. This also gives a possible advantage for speech users, as you can control the music for a get-together without having screen reader audio going through the speaker system.

If you have a Chromecast device or a Smart TV that supports Chromecast, you can control some video apps, including Netflix, from Home. For instance, "Watch 'House of Cards'" will play the popular drama on your nearby television. You can also tell Google Home to play audio on another Chromecast device in your house.

Google Home and the Rest of Your Home

Google Home harnesses the power of a variety of smart home devices, allowing you to control lights, outlets, thermostats, and other household items using your voice. The list of supported devices is small but growing and includes Nest thermostats and Philips Hue lights. New partners are being added often. The Home also works with a third-party service called IFTTT, which supports dozens of additional smart devices and brands.

Additional Services

The Home has recently launched support for what Google calls "Actions." These are voice programs written by other companies that are available from the device. Currently, one can order an Uber or a pizza from Domino's, play the audio game SongPop (like "Name That Tune"), or use Busuu to learn Spanish. The list of available Actions is small but is expected to grow rather quickly in the coming months.

Google Home vs. Amazon Echo

The question you may have is how Google Home compares to the Amazon Echo. While there are notable differences, each serves as an excellent companion for music or everyday tasks.

Both devices include volume controls and a button to manually activate and deactivate the microphones. Home also lets you touch the center of the control face on the device to play and pause music. Echo's seven microphones best Home's two, though the difference in practice is minimal. Home's forward-facing speaker offers more bass while Echo's 360-degree speaker offers more treble.

As for functionality and computing power, the differences are harder to explain. Amazon released the first-generation Echo at the end of 2014, so it has a two-year lead on features and services. That being said, Home, backed by Google's nearly two decades of search knowledge, has launched with a vengeance. Home excels at answering a wider variety of questions, such as "How do you tie a tie?" (the answer is instructions from ties.com). Conversely, Amazon offers far more integrations with smart appliances, meaning that chances are high it can communicate with your smart thermostat, lights, or oven. The blog Android Police has posted a 50-question comparison between the two assistants on YouTube, which may help to make things a bit clearer. You may also be interested in this article from AFB's VisionAware website, which goes into more detail about the Echo.

Conclusion

The era of modern smart assistants is just beginning, with several new options being released as this article was going to publication. Surely Microsoft, Apple, and Samsung will want their piece of this emerging market and release Echo and Home competitors—the competition can only serve to make all of these products more powerful and versatile. In conclusion, while my smartphone can do many of the same tasks that Google Home does, it's nice to have a machine that I can yell at wherever I am in my house, regardless of where my phone happens to be. The hands-free operation lets me set timers, read recipes, or call an Uber with little effort, making the device an integral part of my life. Plus, these devices are always being updated with new features and capabilities, making their future value even higher. If you enjoy talking to Siri on your iPhone or are looking for an easy way to accomplish everyday tasks, these devices are well worth a look.

Product Information

Product: Google Home assistant
Price: $129

A Review of the Audio Tutorial for the Google Suite of Products by Mystic Access

Without a doubt, one of the biggest buzzwords in business over the past several years has been "collaboration." Whether among employees working in the same or satellite offices, telecommuters, or employees who work a state or a continent away, more work can get done when people can collaborate without having to send partial draft reports, spreadsheets, and other work materials back and forth using old-fashioned e-mail. Reports can be generated and edited by multiple employees at once, with a central file whose changes everyone can track simultaneously. Spreadsheets can be created and data entered by both the orders and fulfillment departments at once. Teachers can share lecture materials with students via presentation slides, and students can share and review papers and other assignments with classmates.

The most popular collaboration suite was also the first: Google Apps, which features Google Drive, Google Docs, Google Sheets, Google Slides, and more.

Article writing can be a solitary occupation. I work from home, and do not do a lot of collaborative projects. Over the years I have dipped a toe into using Google Docs and Google Sheets, but I found them confusing and fairly cumbersome to use with a screen reader. They simply were not worth the effort it would take me to learn to use them effectively, especially since I would have few occasions to use them and practice my skills. However, when I learned that Mystic Access had produced "Audio Tutorial for Google Suite of Products," I figured this would be an excellent opportunity to experience and evaluate the accessibility improvements in Google Apps first-hand.

Mystic Access Tutorials Past, Present, and Future

Mystic Access is a notable newcomer in the production of audio tutorials covering the accessible operation of a wide variety of devices and applications. I wrote about their Amazon Echo tutorial in the May 2016 issue of AccessWorld, and more recently, in the January 2017 issue, my colleague Shelly Brisbin offered up A Review of the Mystic Access Apple Watch Tutorial. Other popular tutorials include the Mystic Access Humanware Victor Reader Stream New Generation Audio Tutorial and their BrailleNote Touch Tutorial. Future offerings include audio tutorials for Apple TV and Fire TV/Fire Stick. Mystic Access tutorials are generally priced between $6 and $60. The Mystic Access hands-on approach works well in audio format. Their step-by-step lessons are both thorough and easy to follow along with.

"Audio Tutorial for Google Suite of Products" is available from MysticAccess for $39. It arrives in both a collection of MP3 files with a playlist document, and as a DAISY 2.2 book. I loaded the latter onto my Victor Reader Stream, which made it easy to follow along and practice as I went.

Contents of the Tutorial for the Google Apps Suite

The tutorial begins with a discussion of the various screen readers that will be used. The narrator, Chris Grabowski, uses different voices for the various devices, which include Windows running NVDA, JAWS, and ChromeVox; a Mac running ChromeVox; and both a VoiceOver-enabled Apple iOS device and an Android device running TalkBack. He notes that Internet Explorer is the recommended browser to use with JAWS, Firefox works best with NVDA, and ChromeVox is best with the Chrome browser. The different voices make it much easier to follow along as he shifts screen readers throughout the lessons. Additionally, Grabowski discusses the need to bypass certain screen reader hotkeys, and he shows how to do this, either with a screen reader bypass hotkey or by changing the screen reader command itself.

We now move on to an in-depth discussion of the Chrome browser, using NVDA, JAWS, and ChromeVox. For each of these screen readers, Grabowski offers separate lessons on accessing the menus, browsing the Web, and exploring the Chrome extensions page. During the Chrome section, we are introduced to the ChromeVox screen reader, which is available as a Chrome plugin, and, once installed, can be toggled off and on via the ALT+CTRL+Z hotkey.

Areas not covered in this section include using Chrome on an iOS or Android device, and using Chrome with a Mac. Note: Most of the Google apps are fairly inaccessible using the VoiceOver screen reader. The ChromeVox screen reader must be used, and consequently Grabowski holds his discussion of Mac use of Google apps until the end of the tutorial.

Now we move on to arguably the most popular of the Google suite apps: Google Drive. Perhaps you already use Dropbox or OneDrive, in which case you are already familiar with the concept of a cloud drive. You can save, store, and open files from any device logged into your Google Drive account, whether a computer or a mobile device. But here's where the collaboration really starts. After creating or saving a file to Google Drive, you can choose to share that file with one person, a team, or the general public. You can also set limits on the users: read only, comment only, or read and edit.

In the tutorial, Grabowski demonstrates all of these functions while accessing Google Drive from within a browser window. Here's where you will need to switch off the JAWS or NVDA browse mode; ChromeVox has no browse mode. Google Drive uses many two-character combinations to access and open files (too many, in my opinion). However, they can easily be found by pressing Shift + / (slash) in Windows, CTRL + / (slash) for Chrome, or Command + / (slash) on the Mac using ChromeVox.

Another Google Drive feature mentioned here is the Google Drive Download app, which enables you to add your Google Drive to Windows Explorer. Once added, you can perform file management as you normally would on your PC, copying, deleting, and opening files as usual. There are also desktop apps that will open your default browser and take you automatically to one of the Google apps covered in the next section. Grabowski did not include these in his tutorial because his goal is to teach users to use the suite of Google apps whether they are on their own computer or a shared PC.

The next three major sections cover Google Docs, Google Sheets, and Google Slides. These are similar in function to MS Office Word, Excel, and PowerPoint, respectively, and, on the Mac, Pages, Numbers, and Keynote, respectively.

As mentioned, I had dabbled a bit with these apps, but always found them too confusing. However, after following along with Grabowski's lessons and properly matching my browser to my screen reader, the pieces began to fall into place.

Google apps allow more than one user to work on a file simultaneously. Grabowski demonstrates with his Mystic Access partner, Kim Loftis, how the accessibility features in the Google apps will notify you when another person has begun editing along with you, and where exactly their cursor is located.

Grabowski does not demonstrate how to accessibly track changes other users have made to your files. Nor does he demonstrate any of the apps' more advanced features, such as adding an image to a Doc or getting help with a Sheets formula. His coverage of the mobile apps is somewhat basic. For example, he does not demonstrate how to detect when another user has joined you in editing a document.

From what I learned from this tutorial, Google has come a long way in providing accessibility to their apps. Indeed, each of the apps includes an Accessibility tab on the menu bar, with options such as Speak Selection Formatting, Next List, and Open Comments Thread—each of which includes keyboard shortcuts.

There are still issues. As Grabowski demonstrates, the apps' spell check tool is a bit wonky. It does not read out the misspelled word using either IE or Firefox. It does work well with ChromeVox, however, so even if you don't plan to use the Google screen reader on a regular basis, you should probably install ChromeVox so it will be there to help you overcome the occasional accessibility glitch.

Areas for Improvement

Grabowski limits his discussion of Google Hangouts to texting and making a call using an iPad. Does it work the same on a PC, Mac, or Android device? Hopefully, after completing this tutorial, you will have learned enough to explore these platforms on your own.

Braille support is not explicitly mentioned. As I understand it, there used to be some significant issues with Google apps displaying menus on a braille display, but not the working text. Grabowski tells me these are no longer issues, but I think some direct mention of braille support should have been included in the tutorial.

The tutorial also does not cover Google Classroom. Though not a part of this tutorial's mission statement, I mention it here as many students and teachers may be expecting its inclusion, and might otherwise be disappointed. Perhaps Mystic Access is working on a tutorial for Google Classroom; I believe there would be a great market for such a resource among school district and special education teachers.

Many students use Chromebooks to do their schoolwork. Chromebooks are mentioned in the tutorial, but since this tutorial was published Google has replaced their initial ChromeVox Chromebook screen reader with ChromeVox Next. Many of the commands have been simplified, and I do hope a Chromebook tutorial is on the Mystic Access to-do list.

Recommendations

At $39, the Audio Tutorial for Google Suite of Products is value priced. I have seen similar tutorials running from $75 to $150.

The step-by-step audio guidance works well. The tutorial does not strive to be comprehensive. Indeed, that would take a book several hundred pages long. What this tutorial aims to do, and in my opinion it succeeds admirably, is to offer a beginner's guide to these apps, giving you sufficient basic knowledge to venture forth on your own into the more advanced features.

I went through this tutorial because I was curious about how far Google had come with their accessibility initiatives. If you are a student, this tutorial is critical. If you are employed and your employer uses collaborative features, you will also want this tutorial. Even if your employer isn't currently taking advantage of a collaborative workplace, it's likely just a matter of time before they do.

Those who are currently seeking employment, or who plan to do so in the future, will also benefit from using this tutorial to become familiar with Google apps. I suspect soon these skills will be as mandatory as using a desktop computer, and the sooner you develop these skills the better.

Product Information

Product: Audio Tutorial for Google Suite of Products from Mystic Access
Price: $39
Phone: 716-543-3323

A Review of Manamon, an Audio-Based Role-Playing Game by VGStorm

The Windows operating system has a long history of accessible games, stretching back to the time of MS-DOS with accessible text-based adventure games. A recent addition to the Windows audio game landscape is the role-playing game Manamon by a company called VGStorm. Manamon is heavily inspired by the Pokémon series of video games as well as other role-playing games such as the Final Fantasy franchise or the Dragon Quest series. The game operates on computers running Windows 7 or later and requires 1 gigabyte of RAM, 300 megabytes of disk space, and a computer with at least a 1.2 gigahertz dual-core processor or a 2.0 gigahertz single-core processor. The game retails for $40 but provides a lengthy demo that includes several hours of gameplay. Audio games are not generally given an age rating, but I would consider this game suitable for ages 12 and up due to frequent but mild swearing as well as dark themes and violence in the game's cutscenes. Now, let's dive into the world of Manamon!

Game Premise and Overview

Manamon takes place in the nation of Tangeria, which is home to many different magical creatures that are all classified under the umbrella term of Manamon. Some people trap Manamon using specially designed nets (Mananets), and train them to compete in gladiatorial combat. These trainers are known as Manamon tamers. Tamers can take part in the Stadium Challenge, which requires that they first travel throughout Tangeria and challenge various stadiums in different cities. Whenever a challenger successfully overcomes a stadium, they are presented with a stadium key; these are required to unlock the Master's Stadium, where a challenger can attempt to defeat the current Manamon Champion. You take on the role of a prospective Manamon tamer, who, with another classmate, will forsake his education, against his father's advice, and attempt to become the champion.

Because it's an audio game, Manamon uses only sound to communicate game information; there are no graphics. In addition, textual information in the game can be presented either using Microsoft's SAPI 5 speech engine or using a screen reader. The game developer recommends using SAPI speech; however, I have found that using the NVDA screen reader provides a very good experience. I tested the game using JAWS and Window-Eyes as well. Because JAWS intercepts the Arrow keys, I was not able to use JAWS to play the game, though it seemed that game text could be read. When I attempted to launch the game with Window-Eyes enabled, the game would not load at all. I tested the above screen readers on a device running Windows 10; you may have different results on a machine with a different configuration.

Documentation

Manamon is a fairly complex game with many hotkeys, so it is beneficial to read the documentation before starting the game. Manuals for the game can be found in the game's folder in Program Files and in its Start menu folder. The game has a manual in HTML format, with a linked table of contents and headings for each section. The documentation is fairly complete in that it describes nearly all of the aspects of the game. If an item is not described in the manual, an in-game description will be provided. For example, in one of the towns that you can visit there is an arcade where you can play various minigames for rewards. Instead of having each game described in the manual, there is a More Information option in the menu for each game where you will find instructions.

In addition to the manual, players have access to a Type Effectiveness manual, which describes the interplay among the various Manamon types, described later in this article. This document describes the types for which each type has high effectiveness, low effectiveness, and type immunity. This is an invaluable document, as there are 20 different Manamon types, all with their own distinctive weaknesses and strengths. In addition to the provided information, I would have liked to see each entry provide the same information about other types against the given type. As it is, if I want to, for example, see which types are highly effective against the Sound type, I have to search through the document for mentions of it.

Starting the Game and Main Menu

When you first launch the game, you will be presented with the VGStorm logo, which you can skip by pressing Enter. You will be notified when the game begins to load. While the game loads, you will hear an elaborate introduction sequence, and while this introduction is playing, you will be notified when the game has finished loading. From here, you can either continue to listen to the intro or press Enter to continue. In the main menu, you will be presented with these options:

  • Load Game (loads a saved game; only available if you have saved a previous game)
  • New Game (starts a new game)
  • Options (allows you to adjust various game options)
  • Learn Game Sounds (allows you to listen to some of the sounds used in the game)
  • Exit (closes the game)

The Load Game option will list the amount of time you have spent playing as well as when you began the current saved file. Manamon only allows you to have one saved file at a time. You can find the saved game data at the following file path: C:\users\(user-account-name)\AppData\Roaming\VGStorm.com\Manamon. The data file is called "Data.dat." With access to this file, you can make copies of it for backup purposes, or move it to another location so that you can create a new saved file without deleting the previous one.
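
If you are comfortable running a small script, that backup step can be automated. The following Python sketch simply copies Data.dat from the location above into a time-stamped backup folder. The backup folder name is my own choice for illustration, and the sketch is not part of the game or provided by VGStorm.

    import shutil
    from datetime import datetime
    from pathlib import Path

    # Default Manamon save location described above (adjust if your user profile differs).
    save_file = Path.home() / "AppData" / "Roaming" / "VGStorm.com" / "Manamon" / "Data.dat"

    # Hypothetical backup folder; any writable location will do.
    backup_dir = Path.home() / "Documents" / "ManamonBackups"
    backup_dir.mkdir(parents=True, exist_ok=True)

    # Copy the save to a time-stamped file so earlier backups are never overwritten.
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    shutil.copy2(save_file, backup_dir / f"Data-{stamp}.dat")
    print("Backed up save file to", backup_dir)

Running a script like this before starting a new game preserves your old progress; restoring is just a matter of copying a backup file back to the original folder and renaming it Data.dat.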

Navigating the World

Manamon uses a top-down perspective for navigation, meaning that your perspective as a player is from above your character. You can move north, east, south, and west. As you move, you hear your character's footsteps, which change depending on the terrain. You also hear the various objects, people, and structures around you. Manamon uses continuous sounds to indicate objects. For example, the bookshelf in your character's bedroom sounds similar to someone closing a paperback book. This sound repeats in a continuous loop so that you can always tell where the shelf is when you are near it. If you are to the left of the shelf, the sound plays from your right speaker or headphone; the closer you move to it, the closer to the center of the stereo field the sound plays. Likewise, if you are to the right of an object, the sound plays from the left speaker, moving toward the center of the stereo field as you approach. If you are directly east or west of an object, or anywhere to its south, the sound plays at normal pitch; if you are to the north of an object, the sound is slightly lower in pitch. The closer you are to an object, the louder its sound. Most objects, such as people, doors, and treasure chests, are one step in size; others, such as beds and gates, are several steps wide. When you move east or west past an object that is more than one step wide, the sound plays continuously in the center of the stereo field to indicate that you are still in front of it.
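To make that behavior concrete, here is a small, purely illustrative sketch of how such audio cues might be computed from the player's and an object's positions. It is not the developer's code, and the numbers are assumptions chosen only to match the description above.

    # Illustrative sketch (not VGStorm's code): derive pan, volume, and pitch cues
    # for an object relative to the player, following the behavior described above.
    def object_cue(player_x, player_y, obj_x, obj_y, max_range=10.0):
        dx = obj_x - player_x              # positive: object is to the east (your right)
        dy = obj_y - player_y              # positive: object is to the north of you
        distance = (dx ** 2 + dy ** 2) ** 0.5
        if distance > max_range:
            return None                    # too far away to hear

        pan = max(-1.0, min(1.0, dx / max_range))   # -1 = left speaker, +1 = right;
                                                    # drifts toward center as you approach
        volume = 1.0 - (distance / max_range)       # louder the closer you are
        pitch = 0.9 if dy < 0 else 1.0              # slightly lower when you are north of the object
        return pan, volume, pitch

    print(object_cue(0, 0, 3, 0))   # object due east: panned right, normal pitch
    print(object_cue(0, 5, 0, 0))   # you are north of the object: centered, slightly lower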

Manamon includes a system that indicates your character's orientation to and distance from walls. A tone is played as you approach a wall, getting louder the closer you come to it. Four tones of different pitches are used to identify north, east, south, and west walls; these sounds resemble a cross between a single tone and a rushing wind. There is also the option to use tones that are clearer and more digital. When the wall ends, the sound cuts off abruptly to indicate that there is now empty space in that direction. North and south wall tones will play in the center of your stereo field, while walls to the east or west will play from the right or left speaker respectively. Similar to objects, when you approach an east or west wall, the tone will begin playing from the far side of the stereo field and move closer to the center the closer you come to the wall.

In addition to distinct footstep sounds for terrain such as stone, tile, wood, gravel, and grass, each area has its own theme music and ambience. For example, when you are in a prairie, you can hear the wind in the grass and various small birds singing. Sometimes you will encounter hazards in an area. Some are stationary, while others move about or are only present at certain times. If you touch one of these hazards, the first Manamon in your party takes damage.

The Game Menu

The game menu allows you access to many features of the game and offers quick access to the Options screen found in the main menu. You can open and close this menu by either pressing X or Escape. The game menu contains the following options:

  • Party (offers information about the Manamon that you are currently carrying and allows you to interact with them in various ways)
  • Inventory (allows you to view and use the items that you are carrying)
  • Manapedia (displays the Manamon you have seen and captured; offers you information about where various Manamon can be found in the wild; allows you to filter and sort your seen and captured Manamon to assist in completing the Manapedia)
  • Save Game (saves your progress)
  • Exit Game (closes the game)
  • Unknown Gifts (unlocks at a certain point in your journey; allows you to collect gifts from the VGStorm website)
  • Options (opens the game's Options screen; this screen is also accessible from the game's main menu)
  • Cancel (closes the menu)

Manamon and Combat

A total of 158 Manamon can be collected, divided among 20 different types. Any given Manamon can have between one and four types. Types include classical elements such as Earth and Air, but also more nebulous concepts such as Holy or Shadow. Each Manamon has access to a total of five techniques that can be used in combat.

Techniques generally belong to a single Manamon type, but in some cases a technique has more than one. For example, the Holy Water technique is both Water and Holy type. This means that if a creature takes high damage from Water techniques but low damage from Holy techniques, it will take only normal damage from the Holy Water technique.
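One way to picture this rule is as a product of per-type multipliers, where a high-effectiveness bonus and a low-effectiveness penalty cancel out. The multiplier values below are assumptions for illustration, not the game's actual numbers.

    # Illustrative sketch: model a dual-type technique's overall effectiveness as the
    # product of per-type multipliers. The specific values are assumed, not official.
    HIGH, NORMAL, LOW, IMMUNE = 2.0, 1.0, 0.5, 0.0

    def technique_multiplier(technique_types, defender_chart):
        """defender_chart maps an attacking type to the defender's multiplier against it."""
        total = 1.0
        for attack_type in technique_types:
            total *= defender_chart.get(attack_type, NORMAL)
        return total

    # A defender that takes high damage from Water but low damage from Holy:
    defender = {"Water": HIGH, "Holy": LOW}
    print(technique_multiplier(["Water", "Holy"], defender))  # 1.0, i.e., normal damage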

When you travel outside of cities and some manmade structures, you will randomly encounter wild Manamon that you can defeat for experience and gold, or capture to add to your collection. As you travel, you will also encounter other Manamon tamers who will challenge you to combat. Combat is turn-based, meaning that you and your opponent take turns choosing actions, in your case from a menu. In combat, your Manamon appears on the right side of the stereo field while your opponent's appears on the left. Manamon have a distinctive call when summoned for combat, when struck by a technique, and when defeated. When a damaging technique is used, a sound plays to indicate whether the attack was of normal, high, or low effectiveness. This is useful because you may not know the typing of a Manamon you have not seen before, so it provides some guidance on which techniques to use against a new opponent. Every technique also has a distinctive sound effect that plays when it is used. These effects can be turned off in the Options menu for faster gameplay, which is particularly useful if you would like to train quickly against wild Manamon. In most cases you will be using one Manamon against another, but some battles involve anywhere from two to six Manamon on each side.

As your Manamon gain experience from combat, they rise in level. Leveling raises their statistics modestly and also gives them opportunities to learn new techniques at certain levels. When a Manamon gains a level, it also accumulates Training Points that the player can spend on its statistics. Applying points generally provides greater long-term benefit as the Manamon continues to level, but points can also raise a statistic immediately. Many Manamon can transform into stronger creatures as well; most do so upon reaching a specific level, but other actions can also trigger a transformation.

Each Manamon has a physical description. When you obtain a Manamon, information about the creature, such as folktales about it or details on its habitat, is added to the Manapedia; the creature's height and weight are also given. The game includes hotkeys for reading a Manamon's name, hearing a human recording of the name, and having the name spelled letter by letter.

Compliments, Critiques, and Resources

Manamon has very few bugs or errors in its operation. The only true bug I have personally encountered while playing on Windows 10 and Windows 8.1 systems is significant lag while walking in one specific part of an area at the very end of the game. After the game was released, the developer was very quick to investigate and correct any bugs that arose. The navigation system may take some getting used to, but it allows fluid movement through areas. The varied terrain sounds while moving and the high quality ambient sound throughout the game's areas make it quite immersive; the sound design in general is very well done. Other features extend this fluidity to the rest of the game. For example, it is possible to assign items to the number row for quick use, and in some long lists it is possible to move through items in increments. Gathering information about a Manamon is also intuitive and flexible: any time a Manamon has focus, you can learn about that creature. As described previously, hotkeys let you view a Manamon's description and read its name in various ways, and these hotkeys work in almost every context where a Manamon has focus. Additionally, if the Manamon is your own, you can view information about the specific creature, such as its statistics, health, and level, on the fly.

The most common critique in online discussions of the game is the lack of customization for Manamon techniques. Manamon learn many techniques as they gain levels, but aside from three items that a Manamon can hold to grant it that item's technique, there is no way of teaching a Manamon moves that it doesn't naturally learn. This can become a problem in cases where a Manamon can only learn techniques that an opponent is immune to, making it impossible for that Manamon to harm its opponent at all. In some cases, a Manamon will learn techniques that operate from its weaker statistic, lowering the technique's effectiveness. The developer of the game has already released updates that have changed some techniques to make them more effective, so balancing of the game appears to be ongoing. It is also possible for Manamon to equip different accessories that provide stat or technique boosts, which does allow for some customization.

There are some external resources that can be beneficial when playing the game. The forum topic for the game on Audiogames.net currently has over 3,400 posts and is an excellent source of information. The topic contains discussion about the game along with answers to most questions that can be asked about the various aspects of Manamon. There are also links to several recordings of people playing through the game buried somewhere in the topic. Jade's step-by-step guide to the game can be downloaded from the Audio Games Archive. The guide provides instructions for navigating the game areas as well as instructions for completing various puzzles. Information on game bosses is also included.

The Bottom Line

Manamon is an excellent game for those who enjoy turn-based role-playing games. It plays smoothly and the sound design is top notch. Manamon customization is limited, which can make some of the minor balance issues problematic, but the developer continues to make small changes, and the equipment system can compensate for some Manamon's shortcomings. The demo version provides access to the first two Stadiums, the equivalent of several hours of gameplay, so if you're interested, it's definitely worth a try. If you enjoy the demo, you will more than likely appreciate the full game.

Product Information

Product: Manamon
Developer: VGStorm
Price: $40


Advocating for Yourself in an Emergency Medical Situation: Advice for People with Visual Impairments

One morning I was standing in my bathroom about to get into the shower and prepare for my day when my world literally turned upside down.

One moment, I was standing, mind racing about my clothes, my work schedule, my coffee, my dog—and the next, I had the sensation that my thigh had been struck by a large object and I was instantly on my back on the cool tile floor. I knew immediately that I could not stand up. Slowly, carefully, I scooted backward out the door and across the bedroom, where I could reach a phone and call 911.

The paramedics talked to me on the phone. They told me they'd have to break down a door to rescue me and asked me to choose front door or back.

I heard breaking glass, men's voices, and, before long, I was placed on a stretcher and carried down my very steep stairs to the cold outdoors and the waiting ambulance.

"Two things I need you to get," I told them. "My guide dog and my iPhone."

Later, there would be a stretch of hours when I remembered none of this, but in that window of crisis, with no one but me to advocate for me, I gave clear directives. I told them how to fasten my golden retriever's guide harness and told them where the iPhone and its charger were located. In the ambulance, I called my daughter 1,000 miles away so that someone knew where I was going.

It turned out that my left femur, the longest bone in the body, compromised by cancer a decade earlier, had snapped and displaced. I spent eight hours in the emergency room, during which time my surgeon explained to me that serious reconstructive surgery was scheduled for the next day. A metal plate about eight inches long would be screwed to my bone and wired to my hip. The recovery period, during which I would be unable to bear any weight on that leg, would last about three months.

Nothing To Do with Blindness

Like many AccessWorld readers, I am a seasoned veteran of blindness. I mastered my alternative techniques long ago and think about blindness very little, if at all.

I live alone, manage my own home and work life, travel independently with a guide dog or white cane, and have a delectable array of technological tools to make everything from writing a book to color-coordinating a room manageable without sight.

My injury had nothing to do with blindness. My getting to the phone in a familiar environment didn't either. Directing the paramedics to get my dog and phone was, if anything, easier for me as a blind person because I know how to use my words to describe objects and their locations.

Once I was in that hospital, however, my familiar ground was gone. My daughter had immediately called two close friends, who met me at the emergency room; they later told me that, in my blurred state of shock compounded by morphine, I repeatedly asked, "Where am I and how did I get here?"

Of course. For blind people, independence is deeply rooted in our ability to take control of our own lives, and essential to taking control is the basic awareness of "Where am I and how did I get here?"

I was in the emergency room for eight hours before a room in the Joint and Spine Center of the hospital became available. About halfway through that time, my brain cleared and I became aware that I needed to be alert, to be my own advocate.

While my blindness and hearing impairment are inconsequential to me on a daily basis, they were front and center to these medical professionals who did not know me. Advocating for myself was a matter of survival.

Hear This

My gratitude is abundant for many things that occurred that traumatic day, but two particularly fortunate facts were that I was in a large, flat room rather than on a staircase when my femur fractured, and that my hearing aids were in my ears. Without them, communicating with paramedics or emergency medical personnel would have been next to impossible.

I had been in shock. I was in excruciating pain. I was told not to sit up or move my leg in any way as I could further displace the broken parts. Time and an IV drip of medication gradually returned my lucidity and I knew communication was key.

A first step toward self-advocacy was to ensure that my hearing aids could stay in my ears before, during, and after surgery. Without them, I explained, I might miss questions or information in preparation or recovery. Permission was granted. As it turned out, I never took both hearing aids out throughout my three-week hospital stay. Not being able to see people come and go, I knew I needed to hear them.

Next was establishing a certain style of communication with staff. While being transported to my hospital room, I began what would be my signature survival tool throughout my stay: engaging each person in dialogue and asking questions. What floor are we going to? What is the room number? What is the name of each drug you are asking me to take, and what is its purpose? (I happen to have a high sensitivity to all medications, so many routine doses were adjusted in these preliminary conversations, which helped me maintain clarity while also building relationships with medical staff.)

And about that medical staff. When you are in the hospital, a steady stream of people come and go, with shifts constantly cycling nurses, personal care assistants, doctors, physical therapists, occupational therapists, social workers, and housekeepers on and off duty. They might wear different colors and/or name badges, but for me, a blind person who doesn't have particularly stellar voice recognition skills, asking people to identify themselves was another key factor in maintaining my quality of care and wellbeing.

At the foot of my bed was a monitor that displayed constantly updated information specific to my treatment, and information is essential to advocating for oneself. The display included the names of my nurse and personal care assistant, my schedule of physical and occupational therapy, meal times, and special events available to patients (such as healing touch or yoga). It was all right there for me to read at any time, but in print, and therefore completely unavailable to me.

Keep it Light

Whether you are an introvert or an extrovert, keeping a running dialogue going in this kind of situation is integral to survival. Sometimes I asked questions in a straightforward way: "Can you put a note in my chart for staff to identify themselves when they come in? I'm good at being blind, but never did very well in the voice recognition department." Or, "Can you read my board to me? They haven't put one up in braille yet."

In other words, I was clear about my needs, but tried not to communicate those needs in any way that might be perceived as strident or critical.

I was there, as you recall, because my femur had fractured. I had had serious reconstructive surgery, and had to learn new skills like how to transfer safely from the bed to the wheelchair and from the wheelchair to the toilet or shower bench, how to stand up on my one good foot when necessary and not lose balance, and much more.

Even though my being there had everything to do with my leg and nothing to do with my blindness, rare was the nurse or aide who did not ask, "So, exactly what can you see?" Again, I tried to keep it light, but doing so and remaining patient wasn't always easy.

I frequently said things like, "I see with my hands. If you put my hand on it, I will see where it is." If accompanied by a relevant demonstration, that explanation was generally pretty effective.

I quickly learned to make sure everything I needed was within reach before a newcomer left the room. If a technician came to draw blood and moved my laptop out of the way to reach my arm, even though it was six inches away, that laptop was essentially invisible to me. I learned to make quick checks to locate the emergency call button, my iPhone, laptop, and water pitcher each time I returned to my bed or wheelchair from the bathroom or the physical therapy gym, or after any staff person had come to call. When moving about is next to impossible and a needed object has been moved from, say, the table on the left side of the bed to the table on the right, locating it is problematic for someone who can't see. I found that by routinely checking and interacting with staff about this environmental checklist, people learned and became much less likely to inadvertently move objects from one place to another.

Payoff in Wellness

While it might sound a little exhausting (and sometimes it can be), my continually engaging in conversations with all those responsible for my care enabled me to focus on getting stronger and getting well. Even while rooted in a hospital bed, unable to move without assistance, we can still advocate for ourselves, control our own environments to a point, and thus maintain our independence. The physical therapist who was at first troubled that I had no physical eyesight was laughing with me as I "drove" my wheelchair down the hall. Staff who began noticing that I was constantly using my laptop and iPhone eventually caught on to texting me my therapy schedule every evening as an alternative to expecting me to read that inaccessible monitor at the foot of my bed.

The doctor who discharged me told me that I was being released at 18 days rather than the anticipated 24 because I was "so fiercely independent" and determined "not to allow a disability be a disability." Interpret that as you will, but I believe what actually facilitated my speedier release was that by advocating for myself, I took the emphasis off my blindness and put it where it belonged: on my accident, surgery, and recovery. The payoff was that many members of the medical team learned something about blindness in the process and I was able to get home for Christmas!
