Full Issue: AccessWorld March 2008

Product Ratings

Product: Zoom-Twix.

Portability: 5.0.

Accuracy of OCR: 4.5.

Speed of document capture: 5.0.

Image capture: 5.0.

Documentation: 3.5.

Easy-to-learn operation: 3.0.

Stability of the camera stand: 2.5.




Product Features

Product: Zoom-Twix.

Distance viewing: Yes.

Desktop viewing: Yes.

Self-viewing: Yes.

Image capture and saving: Yes.

Optical character recognition (OCR) and text-to-speech through your computer: Yes.

Capture documents or books and save to your computer: Yes.

Text reformatting on the screen: Yes.

Self-Voicing user interface: Yes.


Calendar

March 11-15, 2008

California State University at Northridge (CSUN) Center on Disabilities' 23rd Annual International Conference: Technology and Persons with Disabilities Conference

Los Angeles, CA

Contact: Center on Disabilities, CSUN, 18111 Nordhoff Street, BH 110, Northridge, CA 91330-8340; phone: 818-677-2578; e-mail: conference@csun.edu; web site: www.csun.edu/cod/conf/index.htm.

June 12-14, 2008

Collaborative Assistive Technology Conference of the Rockies

Denver, CO

Contact: Assistive Technology Partners, Statewide Augmentative/Alternative Communication Program, University of Colorado at Denver and Health Sciences Center, 601 East 18th Avenue, Suite 130, Denver, CO 80203; phone: 303-315-1280; web site: www.assistivetechnologypartners.org.

June 29-July 5, 2008

National Federation of the Blind National Convention

Dallas, TX

Contact: National Federation of the Blind, 1800 Johnson Street, Baltimore, MD 21230; phone: 410-659-9314; e-mail: nfb@nfb.org; web site: www.nfb.org/nfb/national_convention_2008.asp.

July 5-12, 2008

American Council of the Blind National Convention

Louisville, KY

Contact: American Council of the Blind; phone: 202-467-5081; e-mail: info@acb.org; web site: www.acb.org/convention/info2008.html.

July 15-17, 2008

QAC Sight Village

Birmingham, UK

Contact: Queen Alexandra College; web site: www.qac.ac.uk/sightvillage/index.html.

September 5-6, 2008

Envision Conference: a multidisciplinary low vision rehabilitation and research conference

San Antonio, TX

Contact: Michael Epp, Envision; phone: 316-425-7159; e-mail: michael.epp@envisionus.com; web site: www.envisionconference.org.

October 16-18, 2008

26th Annual Closing the Gap Conference: Computer Technology in Special Education and Rehabilitation

Minneapolis, MN

Contact: Closing the Gap, P.O. Box 68, 526 Main Street, Henderson, MN 56044; phone: 507-248-3294; e-mail: info@closingthegap.com; web site: www.closingthegap.com.

January 28-31, 2009

Assistive Technology Industry Association (ATIA) 2009 Conference

Orlando, FL

Contact: ATIA, 401 North Michigan Avenue, Chicago, IL 60611; phone: 877-687-2842 or 312-321-5172; e-mail: info@atia.org; web site: www.atia.org.

Editor's Page

I recently attended the Assistive Technology Industry Association (ATIA) conference in Orlando, Florida. Here are a couple of observations based on my visits to various booths in the exhibit hall.

GW Micro and Ai Squared announced that Window-Eyes and ZoomText, respectively, will include scripting. Scripting allows the automation of tasks: you can write a program that lets the software perform a series of functions with a single keystroke. Both companies say that their scripting will be easy to use. With these additions, the products finally join JAWS, which has had a scripting language for years.

Several new closed-circuit televisions (CCTVs) were on display. There are more than 100 CCTVs on the market, with handheld models and laptop-compatible products leading the way. I realize that the market for low vision products is larger than the market for blindness products, but can even this larger market support so many products? I would not be surprised if, in the next year or two, some of these CCTV manufacturers buy each other out, merge, or go out of business.

In this issue, Deborah Kendrick interviews Bill McCann, president and founder of Dancing Dots. McCann left a job as a computer programmer with a major company to start Dancing Dots and developed its flagship product, the GOODFEEL music translator. After 15 years of hard work, Dancing Dots now has customers throughout the United States, Canada, and 40 other countries. Read this entrepreneur's story and listen to an original composition provided as a treat for AccessWorld readers.

Stephanie Bassler, a writer and web-accessibility expert, presents an overview of Web 2.0 and the access challenges it poses. Web 2.0 is a new way of using the web that lets users collaborate and share information online. Web 2.0 sites allow you to do something, such as publish words and pictures or keep a group calendar. This article covers social networking sites; project-management sites; blogs; and "wikis," web sites that allow users to add and edit information. Learn about Web 2.0 and accessibility.

Lee Huffman, of AFB TECH, evaluates two new products, the Zoom-Ex and Zoom-Twix, both from ABISee. The Zoom-Ex is a portable, lightweight, computer-compatible document scanner that converts the scanned page to speech and magnifies and wraps the lines on the screen to eliminate the need for an x-y table. The Zoom-Twix incorporates the physical design and features of the Zoom-Ex and facilitates live-distance, desktop, and self-viewing through the addition of a second camera attached to the Zoom-Ex stand.

William S. Carter and Guido D. Corona, of IBM's Human Ability and Accessibility Center, provide an introduction to virtual worlds and propose ways of making them accessible to people who are blind. Virtual worlds, such as Second Life, are places where sighted people play interactive games; visit "islands" replete with buildings, museums, and people; attend college lectures; transact imaginary or real business; chat with others; manipulate objects; and more. This article proposes early solutions to how these virtual worlds may be made accessible.

Deborah Kendrick reviews Google It! A Guide to the World's Most Popular Search Engine by Jonathan Mosen with Anna Dresner, published by National Braille Press. This tutorial offers many tips and tricks to enhance your Google searches. Do you know how to use Google as a dictionary or to track packages? This book has something for everyone. Read our review.

I report on the ninth annual conference of the Assistive Technology Industry Association (ATIA), held from January 30 to February 2, 2008, in Orlando, Florida. The ATIA conference featured many new products and updates of products, as well as a number of sessions of interest to people who are blind or have low vision. Learn what we found in the exhibit hall and conference sessions.

Jay Leventhal
Editor in Chief

A Reading Machine In Your Pocket: Introducing the KNFB Reader Mobile Edition

Washington, DC - Ray Kurzweil and James Gashel of K-NFB Reading Technology, Inc., announced a product on January 28 that had hundreds of blind people gathered for the National Federation of the Blind's 2008 Washington Seminar applauding and cheering with raucous enthusiasm. Estimated by Kurzweil to be 5,000 times smaller than his original Kurzweil Reading Machine, introduced in January 1976, the new KNFB Reader Mobile is loaded onto a Symbian-based Nokia N82 cell phone, which measures about two inches by four inches and weighs just four ounces.

With just the press of a few buttons, the phone can snap a picture of a memo, book page, or piece of U.S. currency, and read it instantly to a person who is blind, visually impaired, or has learning disabilities. The captured text also appears on the phone's screen in large font, with the spoken text highlighted, rendering it easily distinguishable from other text on the screen.

The phone itself is small enough to fit into a shirt pocket, in keeping with a prediction made by Ray Kurzweil in 2002. He said at that time that he believed a reading machine for people who are blind small enough to fit into a pocket could be ready for market within six years. The phone is sleek and extremely tactile, with buttons easily identified by touch.

If a call is received while reading a document, the user can take the call and return to the task at hand immediately once the call is completed.

The Nokia phone itself has myriad high-end features, including a web browser, e-mail capabilities, MP3 player, and GPS functions. Although these features require a cell phone screen reader to become completely accessible, such additional software is not required for the Reader. Both Mobile Speak and TALKS are compatible with the phone.

James Gashel, vice president of business development for KNFB Reading Technology, said that while many people will choose to purchase the unit to use only as a reading machine, many will also love the phone itself, and the full range of features made accessible with the addition of a screen reading package.

The KNFB Reader Mobile edition will sell for about $2,000 and will begin shipping on February 15. (Screen readers sell for about $300 and can be purchased from other sources.) The Nokia N82 is currently supported by T-Mobile and AT&T.

For more information, contact: KNFB Reading Technology: phone: 877-547-1500; web site: www.knfbreader.com.

Scanning and Reading on the Move: A Review of Zoom-Ex and Zoom-Twix

As part of AccessWorld's ongoing efforts to keep our readers abreast of options in assistive technology for people with low vision, we published a series of articles in May, July, and September 2006 on laptop-compatible closed-circuit televisions (CCTVs) entitled "Is This for Here or to Go?" As the demand for portable magnification continues to increase, more manufacturers are working to meet this market's need, resulting in additional models of laptop-compatible CCTVs with an ever-increasing number of features from which people with low vision can choose. In this article, we take a closer look at two more such devices, the Zoom-Ex and Zoom-Twix, both from ABISee.

ABISee is a relative newcomer to the field of assistive technology for people who are blind or have low vision. The Zoom-Ex is a portable, 1-pound, L-shaped, computer-compatible document scanner that converts the scanned page to speech, and magnifies and wraps the lines on the screen to eliminate the need for an x-y table. The second device, the Zoom-Twix, incorporates the physical design and features of the Zoom-Ex and facilitates live-distance, desktop, and self-viewing through the addition of a second camera that is attached to the Zoom-Ex stand.

To conduct these evaluations, I set up each device at my desk and used it for my daily office work, as well as for specific tests. This way, I learned firsthand the characteristics of the Zoom-Ex and its optional second camera, the "Frog Camera," as ABISee calls it, which turns the unit into the Zoom-Twix.

Because of the emphasis on portability, most people will use the device with a laptop computer. Therefore, I used an IBM ThinkPad laptop running Windows XP Professional Service Pack 2, with a 2.2 GHz Intel Core Duo processor and 1.96 GB of RAM. ZoomText 9.1 was also used in conjunction with the Zoom software to conduct the testing. The device was evaluated in four main areas: documentation, software installation and setup, minimum computer requirements and product design, and features.

Zoom-Ex and Zoom-Twix

The Zoom-Ex is shipped with a cloth carrying bag, the folding camera arm with a camera head, the cables for connecting it to a computer, a software installation and setup guide, two Quick Reference Guides for keyboard commands, and the software installation CD. The Zoom-Twix also comes with the second camera, which adds additional viewing functionality.

A man reading a book placed under the Zoom-Ex camera.

Caption: Reading a book with Zoom-Ex.

Documentation

The 11-page Software Installation Guide is detailed and walks you through each step of the software installation process, including the New Hardware Wizard. The guide is written so that even people who are not computer savvy can understand and complete the installation. The problem is that it is printed in standard 12-point type, which is too small for most people with low vision to read. The guide also contains drawings and examples of dialog boxes that are small and may not be readable by many people with low vision who purchase the product. Unless they already have a magnification device, it will be difficult for them to read the instructions that are needed to set up the device independently.

The Quick Reference Guide for Keyboard Commands is provided in a larger print format and lists all the hot keys and their functions. This list is useful, especially when you are first learning the devices, because there are no controls on the devices themselves; all features are controlled through the mouse or keyboard commands.

The Zoom-Ex User Manual is presented in two versions, one for people who are blind and another for people with low vision. The User Manual for people who are blind concentrates on describing features that will be useful to people who use nonvisual techniques to operate the device. The pictures have been removed, and the User Manual discusses using the device from the perspective of a person who is blind. This User Manual is available in electronic format on the software installation CD or in braille upon request.

The User Manual for people with low vision is available on the software installation CD and, like the version for blind users, is formatted so you can use the table of contents and links within its text to jump to specific sections of the document. Both manuals have a "Familiarizing Yourself with Zoom-Ex" section. This section allows you to practice viewing and listening to documents saved by the manufacturer to help prepare you to create, view, and listen to your own documents.

The 28-page User Manual is in 12-point type, although, because it is in electronic format, it can be enlarged and printed at a larger size. This is a problem if you do not have a printer connected to your computer at the time of installation. Most people with low vision would prefer a spiral-bound, large-print manual in addition to the electronic version, even if it means additional pages. Having to alter and print the User Manual, or switch back and forth between the application and the electronic manual, adds a layer of inconvenience when you are learning to use the device and its many features. The User Manuals, Quick Reference Guides for Keyboard Commands, Software Installation Instructions, Frequently Asked Questions, and other helpful supplementary documents are also available for download from the ABISee web site at www.abisee.com.

Software Installation and Setup

Installing the Zoom software and setting up the product are straightforward procedures. The installation and setup instructions provided with the device are easy to understand and follow, as are the software installation instructions on the screen. There is a potential challenge, though: the dialog boxes displayed onscreen during installation are larger than the standard computer font but not large enough for many people with low vision to read, and they are not self-voicing. Screen-magnification programs, such as ZoomText, will magnify the instructions on screen but will not read them aloud.

Minimum Computer Requirements and Product Design

According to ABISee, to use either device you need an IBM-compatible desktop or laptop computer running Windows XP (strongly recommended) or Vista. The computer must have one integrated high-speed USB 2.0 port to use the Zoom-Ex or two high-speed USB 2.0 ports to use the Zoom-Twix. A Pentium 1.3 GHz processor or equivalent AMD processor with a minimum of 256 MB of working memory (RAM) and a minimum of 30 MB of free disk space on your hard drive for the software is needed. Additional space is needed for saving documents or books or images that are captured from the Zoom-Twix's Frog Camera.

The device weighs approximately 1 pound and stands approximately 16 inches high. Its two "legs" fold out to form a right angle and provide the support for the device. The right angle formed by the supports provides the guide for proper placement of documents to be captured. You align the page with the supports, and the document is in position to be scanned or viewed by the camera. If you are using the Zoom-Twix, its adjustable Frog Camera attaches 11 inches high on the vertical support and provides distant views for zooming in on a chalkboard or watching a presentation. It also allows you to handwrite and complete forms more easily and enables self-viewing.

The device is connected to and derives its power from your desktop or laptop computer via the USB 2.0 port. There is no external battery or AC adapter. The device folds down like an umbrella and slips into a cloth carrying bag that will fit into most standard laptop bags.

The folded Zoom-Ex being held in a man's hand.

Caption: Zoom-Ex folds down to a portable, umbrella-like device weighing less than two pounds.

Features

Zoom-Ex and Zoom-Twix have many features. The best way to describe them is with seven words: view, capture, format, read, listen, save, and organize.

When you place a document under the Zoom-Ex camera, you can view it in Magnified (CCTV) mode and use the mouse or arrow keys to move the document electronically around on the screen. The actual paper does not move, and no x-y table is used. You can also format a document into a single column of text, word wrapped to fit your screen like a teleprompter, and read it onscreen at a magnification level of your choosing, with a high-contrast color combination if you like. You can then have the text read aloud in a voice and at a speed of your choosing, with each spoken word highlighted so you can follow along. Afterward, you can save the document for later use and organize your saved documents to suit your needs.

Another main feature of the device is the ability to scan and save entire books, regardless of the number of pages. Using this feature, you can choose to scan a book page by page or, with a paperback-size book, two pages at once. Unlike traditional scanners that take approximately 30 seconds to scan a page, Zoom-Ex can scan a page in 3 seconds.

When scanning the book, you do not need to move it, just turn the pages. The book can be scanned manually with one mouse click or key press per page, or it can be scanned in Auto mode. When you use Auto mode, you just turn the page, and the camera with motion-detection functionality knows when to snap the picture. When the page is scanned, you will hear a snapshot sound to provide auditory confirmation that the page has been scanned. It may take you 10 to 15 minutes to scan a 200-page book, but after that, as with a single document, you can format it to a larger-size type on the screen and have it read aloud to you page by page.
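The timing figures above are easy to check with a little arithmetic. The sketch below (the 1.5-second page-turning time is my own assumption, not a figure from ABISee) shows how the cited 3-second scans add up over a 200-page book:

```python
# Rough check of the scan-time figures cited for Zoom-Ex.
PAGES = 200
SCAN_SECONDS = 3    # per-page scan time cited in the article
TURN_SECONDS = 1.5  # assumed time for the reader to turn each page

scan_only_minutes = PAGES * SCAN_SECONDS / 60
with_turning_minutes = PAGES * (SCAN_SECONDS + TURN_SECONDS) / 60

print(scan_only_minutes)     # 10.0 -- the low end of "10 to 15 minutes"
print(with_turning_minutes)  # 15.0 -- the high end, once page turns are counted
```

The same arithmetic matches the manufacturer's "20 pages per minute" claim, since 60 seconds divided by 3 seconds per page is 20 pages.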

Once you begin reading or listening to the book, you can stop at any point and then come back to it later and pick up where you left off. Multiple books can be saved to your computer, and they can be organized to suit your needs.

The electronic book that you create can also be saved to a CD-RW or flash drive and taken with you. The electronic book can be read on any computer with the Zoom software loaded on it, without the need to have the Zoom-Ex or Zoom-Twix device connected.

Another use for the device is to create large-print documents. By placing a document under the camera and formatting it on the screen, you can create large-print documents or books in the size and color that you need. Documents that are scanned with Zoom-Ex can also be converted into text files that can be used in such applications as Word and Excel.

The Frog Camera of the Zoom-Twix provides the features that are familiar to the user of a more traditional style CCTV. It allows you to handwrite under the camera, such as writing checks or addressing cards, and to see distant objects, people, or items like classroom chalkboards up close on the laptop's screen. The images from the Frog Camera can be captured and saved to a file on your computer for later reference. The camera can also be tilted or positioned for self-viewing.

What Would Make It Better

The Zoom-Ex and Zoom-Twix offer valuable tools for people with various levels of visual impairment and do so in a portable device. As with all products, however, there is some room for improvement. The following are areas in which I believe the devices could be improved.

  1. The User Manuals for the devices could be improved to provide a better explanation of how to use the product. The format of the electronic manuals is good in that it allows you to jump to specific sections, but the manuals could be streamlined. Some information is repeated, some items are explained in almost excessive detail, and others could use more clarification.
  2. When Zoom-Ex is used in the Magnification (CCTV) mode to view saved documents, the image is not always as sharp and clear as it could be. The image can, of course, be formatted into a high-contrast word-wrapped image, but it is not the original image that some would like to see.
  3. I also had some difficulty clearly viewing round cans and bottles, especially ones with metallic or shiny plastic labels. This problem is not uncommon with CCTVs, especially when you use high-contrast display modes. I believe that improving this aspect of the Zoom-Ex camera would make the device more usable.
  4. One significant improvement would be to increase the stability of the physical design. At first, I did not think that stability would be an issue, but in real-world situations, the device will be bumped, for example, if you brush a book against its upright support or bump the camera arm as you stand up from your desk. The device tends to fall backward or to the left, which could easily damage the Zoom-Ex or Frog Camera. Finding the right balance between portability and stability is not always easy, and I believe that more work needs to be done in this regard.

The Bottom Line

If your work or schooling requires you to read a good bit of text, or if it is difficult to find leisure reading books in alternative formats and you need to create your own, the Zoom-Ex or Zoom-Twix is an option to consider.

It is important to keep in mind that these devices are used in conjunction with a computer, so computer literacy is important. This is not your father's CCTV. These devices, like other computer-compatible CCTVs, are somewhat more complex because of the number of features and their ability to save and organize files. The Zoom-Ex and Zoom-Twix have a number of keyboard commands to memorize, and it will require some practice to learn to use either device efficiently.

Manufacturer's Comments

ABISee

"We thank Lee Huffman for his review. His suggestions will be easy to implement; think of it as done. Note that Zoom-Ex is not a CCTV. It has line-wrapping software that allows the low-vision user to keep reading his printed page without moving or even touching it, just scroll down the screen. The magnified lines don't run off the screen because they are reformatted into a single column that is screen-wide. The whole printed page is in the memory, and for the blind users it converts to speech within 3 seconds. A Braille output model will be available by the time this review is published. Both low vision and blind users find Zoom-Ex uniquely convenient.

"Zoom-Ex and Zoom-Twix are a result of extensive research and development by ABISee engineers. We realized that simply taking off-the-shelf OCR, camera, and software and putting them together in one box would not create the right solution. Therefore, we developed our patented technology that suits the needs of both blind and low vision people.

"Zoom-Twix is more than Zoom-Ex. It makes a low vision student fully functional in the classroom environment. With one keystroke, Zoom-Twix, like Zoom-Ex, instantly magnifies any printed material and wraps the lines on the screen. With another keystroke, the user can see the blackboard magnified on the laptop screen in real time; there is no need to flip the camera, because it has two.

"Zoom-Ex scans books, 20 pages per minute, and converts them to text. Those scanned pages can be printed out in large font, which would be impossible without its line wrapping software. As to the blind users, in today's busy world it takes Zoom-Ex just a few seconds to scan the printed page, process it, and start speaking."

Product Information

Product: Zoom-Ex and Zoom-Twix.

Manufacturer: ABISee
Address: 141 Parker Street, Suite 201, Maynard, MA 01754; phone: 800-681-5909; web site: www.abisee.com.

Price: Zoom-Ex, $2,400; Zoom-Twix (with additional Frog Camera), $3,500.



Surfing into the Future: An Introduction to Web 2.0

The World Wide Web created a revolution in how people shop, acquire information, and interact with others. Now the web is undergoing a revolution of its own, and it is called Web 2.0.

So what is Web 2.0? First, let me address what it is not. It is not a separate, "all-new" version of the World Wide Web; rather, it is a new way of using the web that lets users collaborate and share information online. The term Web 2.0 was coined to sound like a new release of the web, much like the version number that software companies assign to each new release. Web 2.0 sites expect the user to contribute, so it may be best to think of them more as applications than as mere web sites. Web 2.0 sites allow you to do something, such as publish words and pictures or keep a group calendar.

What types of sites are considered Web 2.0, and what are they used for? Probably the most well-known type of Web 2.0 site is the social networking site, such as MySpace and Facebook. These sites allow users to put up their own content, share content with other sites, and browse the content of fellow members. They also have the facility for you to send messages to fellow members and even post your content on someone else's site if you have been given permission to do so by the other user. While these sites initially gained popularity with teenagers and college students as a way to keep in touch with friends and meet new people, many have become important ways for individuals and groups to communicate. For example, many secondary and postsecondary school clubs create MySpace pages where information regarding activities and upcoming events is posted. In many cases, these pages are the only places where this information is made available to members. If students who are visually impaired do not have reliable access to the pages, they are essentially cut out of the information loop for the organizations.

In addition to clubs, many businesses and political organizations use social networking sites for marketing and communication. For example, the Carroll Center for the Blind, in Newton, Massachusetts, thought that it needed to have a MySpace page as a way to get the word out about the center. Webmaster Mark Sadecki said that the center added pages on both MySpace and Facebook "not to direct people from our site to theirs (the content on MySpace is much more easily accessed on our home page), but to market to existing and future users of these sites. If they are using the sites, we assume that they are not having accessibility issues with them." Even though Sadecki experienced issues of accessibility on both sites, he said it is an important part of the center's marketing effort to have at least a basic page on both sites.

Another popular Web 2.0 application is project management. Project management sites provide individuals and organizations with a way to manage projects within their organizations. For example, the popular project management site Basecamp allows users to manage multiple projects at once, providing to-do lists, message centers, calendars, and reminders along the way. Multiple users can access and share information via an electronic whiteboard. This type of software has become common in the business world and is used to help companies manage all the details of large projects. Project management software used to come in a box, but now Web 2.0 versions like Basecamp allow people to use the software as a web-based tool.

Blogging is another popular Web 2.0 application. Short for "web log," a blog is essentially a compilation of diary entries arranged in reverse chronological order. Blogs can range from the personal to the political. They can include pictures, MP3 files, videos, and the like, which usually make accessibility difficult.

Another example of a Web 2.0 application is the wiki, a web site that allows users to add and edit information. A wiki usually provides information about multiple topics, and users can update, edit, and add to the information that is provided. As with blogs, accessibility issues in wikis are usually tied to the addition of media.

Sounds Great, So What Is the Problem?

Before you even get to whether the content of these sites is accessible, you need to get past the inaccessible elements of the sign-up process. All the Web 2.0 sites that were reviewed for this article require users to sign up, and all use a method called CAPTCHA (completely automated public Turing test to tell computers and humans apart) to verify that you are a human, not a computer. A CAPTCHA is a small graphic that contains text, numbers, or both. You are asked to type the characters that are displayed into a text field. CAPTCHAs are basically a Web 1.0 technology that is used to prevent automated systems, such as those used by spammers, from signing up for services.

Unfortunately, because they are graphic, CAPTCHAs are completely inaccessible to screen readers. Some CAPTCHAs include an audio alternative, but to defeat speech-recognition software, the quality of the audio is deliberately poor. Anyone with less-than-perfect hearing or with auditory-processing problems would find them difficult to use. (For an example of an audio CAPTCHA, visit www.recaptcha.com and follow the link for What Is ReCAPTCHA.) The World Wide Web Consortium (W3C) recommends that CAPTCHAs not be used at all because they are inherently inaccessible, but that if you must use them, you should provide an audio alternative. According to the W3C, CAPTCHAs are not especially effective in preventing automated sign-ups, and their limited value is not worth the loss of accessibility. Furthermore, audio alternatives are not accessible to braille-only users, such as people who are deaf-blind. Alternatives to CAPTCHAs include asking a question that only a human can answer or presenting check boxes that must be unchecked.
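The question-based alternative just described takes only a few lines to implement. The following is a minimal sketch; the questions, answers, and function names are hypothetical illustrations, not code from any site mentioned in this article:

```python
import random

# Hypothetical pool of questions that a human can answer but an automated
# sign-up script cannot easily guess. Each question maps to the set of
# replies the server will accept.
QUESTIONS = {
    "What color is the sky on a clear day?": {"blue"},
    "How many legs does a cat have?": {"4", "four"},
}

def pick_question():
    # Choose one question at random to show on the sign-up form.
    return random.choice(list(QUESTIONS))

def is_human_answer(question, reply):
    # Accept the reply if it matches one of the stored answers,
    # ignoring capitalization and surrounding spaces.
    return reply.strip().lower() in QUESTIONS[question]
```

For example, `is_human_answer("How many legs does a cat have?", " Four ")` accepts the reply. Because the question and answer are ordinary text, this check works equally well with a screen reader, a braille display, or magnification.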

Most Web 2.0 sites have not gotten the message. Neither YouTube nor MySpace bothers to provide an audio alternative. In fact, both offer a frustrating "If You Can't Read This" link next to the CAPTCHA, which at first gives you hope that an audio prompt will follow; alas, all the link does is refresh the screen with a different CAPTCHA. This lack of audio is inexcusable, since ready-made CAPTCHAs that include audio are freely available, most notably from Carnegie Mellon University's CAPTCHA project (www.captcha.net).

Once you manage to sign up, you will find a variety of obstacles to participation. First, because much of the content found on Web 2.0 sites is user generated, little attention is paid to making the content accessible. The average user just does not know anything about the need for accessibility or how to go about making the content more accessible. This situation can be aggravated by the fact that the applications that end-users use to put content on the web site typically do not provide any way to make their content more accessible. For example, MySpace is a social networking site where individuals and organizations can put up content on their personal MySpace pages. Content can include photographs and videos. Even if users are aware of accessibility issues and want to provide accessible content to visitors to their pages, there is no facility to do even something as simple as providing alt-text with photographs.
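
The fix being asked for here is tiny. As a sketch (the function name and upload flow are hypothetical, not MySpace's actual code), a photo-upload form would only need to carry one extra text field through to the generated markup:

```javascript
// Hypothetical sketch: if an upload form collected a short description
// along with the photograph, emitting accessible markup is one line.
function photoMarkup(src, description) {
  // Escape double quotes so a description cannot break out of the attribute.
  const alt = description.replace(/"/g, "&quot;");
  return `<img src="${src}" alt="${alt}">`;
}
```

A screen reader speaks the alt attribute in place of the image, so even this one field would make a user's photo gallery meaningful to a blind visitor.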

Web sites like YouTube that focus on videos are equally inaccessible. Even if users created audio descriptions that could be played simultaneously with their videos, the site has no built-in facility to upload or play them.

New Technology

Another big issue with Web 2.0 is the introduction of new technology that is intended to make these sites more dynamic. In a way, these sites are becoming more like television, where content is updated on the screen without the user having to do anything. Unfortunately, screen readers do not always notice the new content, or, worse, the new content can cause the screen reader to begin reading the page again from the top, basically hijacking control of the page away from the user.

As a group, these technologies are referred to as rich internet applications (RIAs). Unfortunately, RIAs provide web designers with a multitude of new options for web design, most of which are not accessible.

Here We Go Again ...

When computers moved from DOS to Windows, screen readers had to come up with a whole new way to read the screen. Most screen readers began to use an off-screen model and relied on interpreting objects to provide accurate information about which part of the screen had focus and what it said. The off-screen model's functionality was enhanced when the companies that produced the operating systems provided additional information and consistency through accessibility application programming interfaces (APIs), the best known of which was Microsoft Active Accessibility. APIs worked fairly well until the introduction of the World Wide Web and the proliferation of HTML in both web pages and documents. Manufacturers of screen readers had to scramble once again to find a way to read the underlying HTML efficiently.

As the web exploded in popularity, the number of authors of web sites expanded with it. Since HTML was simply a markup language, there were no real rules to govern its use, and thus very little consistency in how web pages were structured. This situation created an accessibility problem for users of screen readers because there was no reliable way to tell something as simple as whether a table was really a table or was just being used to format the page. The facility existed to provide alternative descriptions of pictures, but not everyone knew how to use it or even why they might want to.

Enter W3C. W3C is an international consortium, created by one of the inventors of the World Wide Web, as a way to "lead the World Wide Web to its full potential by developing protocols and guidelines that ensure long-term growth of the web" (www.w3.org/Consortium/Overview.html#mission). In 1997, W3C launched the Web Accessibility Initiative (WAI) to develop standards for authoring fully accessible web sites. The WAI provided the ground rules for accessible HTML, and a host of other web sites and applications used these guidelines to help web authors create accessible sites.

Unfortunately, with the advent of the new RIAs, we are back at the beginning of the process of making the web accessible once again. Much of the technology behind RIA is new, and no standards yet exist for web authors to make their sites accessible. According to W3C's "Roadmap for Accessible Rich Internet Applications," much of the accessibility architecture that is needed to create standards does not yet even exist. The little that does exist is so new that it will not work in all web browsers.

RIA accessibility is very much a work in progress, but one thing is clear: it will be extremely difficult to provide any kind of backward compatibility for RIA web sites. Old versions of screen readers simply will not work with the new technology, and neither will old versions of web browsers.

According to Eric Damery, vice president of software product management for Freedom Scientific, the company is actively working to make changes in future versions of JAWS and MAGic that will support accessible rich internet applications (ARIA), but these changes and the full ARIA standards are 9 to 12 months away and will be compatible only with Firefox, not Internet Explorer. Damery was hopeful that the problem would eventually be solved. When asked to compare this new technical struggle with the early battles to make the Internet accessible, he said that the assistive technology community is more involved with software developers now, so these issues are being addressed much earlier in the process.

The outlook is similar at GW Micro. According to Doug Geoffray, GW Micro's vice president of development, Window-Eyes added some Web 2.0 accessibility for things like tree views and buttons in the last version when it added Firefox compatibility. Geoffray said that IBM has worked hard to make sure that Firefox has all the support necessary for ARIA, but Microsoft does not have the same commitment for Internet Explorer. He also noted that the ARIA standards are "much cleaner than the WAI standards and right to the point," but he still believes that there is a "bumpy road" ahead for computer users who are visually impaired until Web 2.0 code is accessible and developers begin to use the more accessible code.

Geoffray was not willing to predict when Window-Eyes will make the changes that are necessary to take full advantage of ARIA, but he said that the company is actively pursuing these technologies and will pay greater attention to Web 2.0 compatibility after the release of the next version of Window-Eyes, which is due to come out this spring.

In the meantime, to be accessible, Web 2.0 sites will need to provide an alternative to RIA content, much like the "text-only" versions of web sites that were popular in the early days of the World Wide Web.

A big hurdle for accessibility is a technology called Ajax (the acronym for Asynchronous JavaScript and XML). Ajax is intended to make web pages more dynamic by making it possible to refresh parts of a web page without refreshing the entire page. Essentially, new content can appear on the page without the user having to do anything. Ajax presents several problems, however.

First, since Ajax is a form of JavaScript, anyone who turns JavaScript off or is still using technology that is not able to process JavaScript has no access to the information. Second, even if your screen reader will read the information, it will not tell you that the page has updated, so you will not even know that something has changed. This second problem is being addressed by the new ARIA standards and should be implemented by the end of 2008, but it will still depend on web developers understanding and implementing the new standards and on users upgrading to the newest screen readers.
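
The ARIA approach to this second problem is the "live region": a container marked up so that a supporting screen reader announces content that later arrives in it. A minimal sketch follows; the element ID is hypothetical, and, as noted above, only the newest browsers and screen readers honor these attributes.

```javascript
// Sketch of an ARIA live region. Content that Ajax later inserts into
// this container is announced by a supporting screen reader instead of
// appearing silently on the screen.
function liveRegionMarkup(id, politeness) {
  // "polite" waits for the user to pause; "assertive" interrupts at once.
  // aria-atomic="true" asks that the whole region be read, not just the
  // changed fragment.
  return `<div id="${id}" aria-live="${politeness}" aria-atomic="true"></div>`;
}
```

A stock ticker or news feed, for example, would use "polite" so that updates do not cut off whatever the user is currently reading.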

Fortunately, products such as the Dojo Toolkit (www.dojotoolkit.org) are available that build the ARIA standards into their tools for web developers, making it easier to create accessible Web 2.0 pages. Like the ARIA standards, the Dojo Toolkit is a work in progress, but its tools are impressive, and its definition and explanation of what makes a Web 2.0 site accessible are well written and fairly easy to understand.

In the meantime, while the standards are being finalized and the screen readers are catching up, here are some recommendations for what Web 2.0 sites can do to maximize accessibility. First, sites should use Ajax only as a layer on top of existing web sites, so that users can use the page, even if they cannot run JavaScript or recognize when updates have occurred. Second, sites should inform users, at the top of a form, if the form requires JavaScript to be submitted, and provide a link to a version of the form that does not require JavaScript. Finally, keyboard alternatives must be provided for any actions that require a mouse.
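
The first two recommendations amount to what web developers call progressive enhancement: the page must work with JavaScript turned off, and script is layered on afterward. A hedged sketch, in which the URLs and wording are hypothetical:

```javascript
// Sketch of the recommendations above: the form posts to an ordinary,
// non-JavaScript URL, and a notice with a fallback link sits at the top.
// A script may later intercept "submit" and replace it with an Ajax call,
// leaving this non-script path intact.
function accessibleFormMarkup(actionUrl, fallbackUrl) {
  return [
    `<form action="${actionUrl}" method="post">`,
    `  <p>This form uses JavaScript. A <a href="${fallbackUrl}">version that`,
    `  does not require JavaScript</a> is also available.</p>`,
    `  <input type="text" name="comment">`,
    `  <input type="submit" value="Send">`,
    `</form>`,
  ].join("\n");
}
```

Because the notice comes first in the markup, a screen-reader user hears it before reaching the form fields, rather than discovering the JavaScript requirement only after a failed submission.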

In the end, accessibility on the web requires a commitment to creating accessible web pages that developers still do not seem to have. Most of the Web 2.0 sites have accessibility issues that are left over from Web 1.0 and could be fixed simply by following the standards that already exist. For example, many of these sites do not even provide something as simple as the ability to add alt-text to your photographs, all of them use CAPTCHAs for security, and many use old-fashioned rollover menus that were never accessible. So while the new technology has presented new technological hurdles, the biggest hurdle still appears to be the lack of attention to accessibility. For every Web 2.0 web site I reviewed for this article, I searched for information on accessibility using the search engine provided by the site. For all these sites, the search returned no documents. I even tried calling the press office at MySpace for comments several times but never received a return telephone call. It would seem that the biggest hurdle is to get the owners of these web sites even to consider accessibility.

If you have comments about this article, e-mail us at accessworld@afb.net.

AccessWorld News

Accessibility Is a Right

Probably one of the most talked-about news items to surface during the ATIA 2008 conference, held in Orlando, Florida, in late January, was the launch of the new nonprofit AIR (Accessibility Is a Right) Foundation. Headquartered in Minneapolis, Minnesota, the foundation will make available to anyone who uses a computer anywhere in the world an application that renders computers instantly accessible to persons who are blind or have low vision. Through an Internet connection, users can access, free of charge, the SA To Go screen reader, powered by Serotek's award-winning System Access. With SA To Go, users can browse the web, read web pages, fill out forms, and use applications that reside on the host computer. When the connection is terminated, there is no trace of the software remaining, thus delivering an efficient, nonintrusive way for people who are blind or have low vision to use computers in libraries, Internet cafés, and other public settings. Art Schreiber, executive director of the new organization, said, "The basic tenet of the AIR Foundation is that accessibility is a fundamental human right, regardless of financial or geographic constraints."

SA To Go is available in English. The first priority of the AIR Foundation is to develop a version in Mandarin Chinese, with other languages soon to follow. Serotek Corporation will continue to sell its System Access product, which offers features that are not available in the free Internet-based application. For more information, visit www.AccessibilityIsaRight.org or telephone 877-369-0101.

Maestro 2.1 Adds Web Browser

HumanWare Canada has announced the addition of web access for its Maestro handheld product. Based on an off-the-shelf PDA (personal digital assistant) and with the addition of a tactile overlay, the Maestro has become popular with many customers who are blind or have low vision as a notetaker; DAISY book player; voice recorder; calendar; address book; portable music player; and, with the addition of Trekker, GPS (global positioning system) navigation tool. With version 2.1, customers can now surf the web, read web pages, navigate links, headings and frames, and fill out forms. Using screen-reading commands, users can access the Internet anywhere that wireless Internet capability is available.

Version 2.1 is available for download as a free upgrade to current Maestro customers. For information on the new features, visit www.humanware.ca/web/en/maestro-trekker-upgrade.html. For more information on the Maestro product itself, visit www.humanware.ca/web/en/maestro.html or phone 450-463-1717.

Perkins to Acquire ATC

The Perkins School for the Blind, in Watertown, Massachusetts, has recently announced its plans to acquire the assets of Adaptive Technology Consulting (ATC). Owned and operated by Gayle Yarnall, ATC is a private company that sells a wide range of assistive technology products, including screen readers, magnification products, braille displays, and embossers.

Along with hundreds of products, Perkins will also acquire the ATC staff, who have provided training in a variety of products. According to Steven Rothstein, president of the Perkins School, in a recent announcement: "We have been looking for a way to dramatically increase our technological expertise and offer a greater variety of adaptive devices to better educate students and to enhance lifestyle for all."

Exactly what the acquisition means for the school or the individuals it serves is not entirely clear at this time, but Perkins is definitely expanding its boundaries beyond the familiar (mechanical) Perkins Brailler. For more information, visit the Perkins School for the Blind at www.perkins.org or Adaptive Technology Consulting at http://adaptivetech.net.

A Low-Tech Innovation for Science and Math

Anyone who went through high school or college as a blind student before the 21st century can probably remember the struggle involved in getting concepts like graphs, charts, diagrams, and other pictorial representations off the page and into the brain without seeing them. Thanks to a $300,000 grant from the National Science Foundation and an innovative collaboration between Andy Van Schaack, lecturer at Vanderbilt University's Peabody College of Education and Human Development, and Joshua Miele, a blind researcher at the Smith-Kettlewell Eye Institute in San Francisco, that age-old conundrum may well soon be solved. Not only can their proposed product do the job, but it promises to do it at an affordable price.

Originally designed for the mainstream market, the basic product is a kind of computerized paper plus pen incorporating handwriting and audio recording. With notebooks called LiveScribe and a device called Smartpen, students will be able to touch raised drawings of figures and charts and then touch the Smartpen to specific points to hear an audio explanation. While writing notes in class, students can simultaneously record the lecture and later touch the pen to one handwritten note to hear in detail what the professor said at that particular time in the discussion.

Miele and Van Schaack are working with Sewell raised-line drawing paper as an add-on to the Smartpen and LiveScribe product, to incorporate raised graphics into the existing framework of touching the pen to a particular point on the paper to hear audio.

The LiveScribe notebooks are to be the size and price of traditional notebooks and the Smartpens are expected to cost approximately $200.

HumanWare Reorganizes

With offices in the United States, Canada, and New Zealand and assistive technology products that extend into the braille, speech, and screen-magnification markets, HumanWare is certainly one of the biggest, if not the biggest, companies in the assistive technology industry. The company recently announced a restructuring of its management staff in an attempt to serve its customers better in a growing number of countries.

In November, Gilles Pepin was named the company's new CEO. The reorganization includes the appointment of three vice presidents, for marketing, research and development, and operations. These vice presidents will be responsible for activity in New Zealand and Canada. The company's business managers in the United States, Europe, and Australia will each have expanded geographic responsibilities.

A recent news release illuminated some of the personnel shifts as follows: "The following people within the organization have been assigned new group responsibilities: Greg Brown as corporate controller, Richard Nadeau as vice president of operations, Pierre Hamel as vice president of research and development, and Ivan Lagacé as vice president of marketing. Ron Hathaway, managing director of Australia and Asia, will now add New Zealand sales to his responsibilities. Renee Gosselin is now the manager of market development for all products. Pedro Polson and Phil Rance are keeping the same positions."

The company offers a growing list of products for people who are blind, have low vision, or have learning disabilities. How the new reorganization will affect the development and delivery of products remains to be seen. For more information, visit the web site www.humanware.com.

Music Is His Life and His Livelihood: An Interview with Bill McCann of Dancing Dots

The story is familiar. A guy has a great job and is frequently promoted, but continues to harbor a dream of doing something else, something that he loves. He waffles. Then, despite the risk involved for their growing family, his wife, expecting their second child, tells him that she believes in him, that he must give up his nice job with the good paycheck and follow his dream.

It is not a fluffy romance from Danielle Steel or a new box-office shoo-in from Warner Brothers. It is the story of how one of the better-known, albeit small, companies in the assistive technology industry began.

A portrait of Bill McCann.

Caption: Bill McCann of Dancing Dots.

Follow the Sun

Bill McCann was working as a programmer for Sun Microsystems. It was not lost on him that in a country where the unemployment rate for people who are blind is appalling, his situation was not one to be easily discounted. He was a valued employee. He was routinely promoted. And, as he quipped more than a decade after leaving, he was even important enough to have an office with a window.

McCann was born with a small amount of vision, which he lost altogether at age 6. Or, in the more colorful way he described it: "I was born legally blind ... and at age 6 became illegally blind, which means I see nothing." At age 9, his love affair with the trumpet began. (Today, he enjoys all kinds of music, but said that hearing Louis Armstrong's trumpet remains his absolute favorite.)

Despite a successful job in computer programming, McCann's heart had always been with music. Playing the trumpet professionally (sometimes with his own small band, sometimes in a duet with his wife, Mary Ann, on the harp), he was increasingly obsessed with the notion of a computer program for braille music translation, a program that would allow musicians who are blind to create their own compositions or arrangements and actually produce hard-copy music scores that both blind and sighted musicians could read.

McCann had begun talking about the concept as early as 1979 while he was a student at the Philadelphia College of the Performing Arts (now known as the University of the Arts). He had even tossed the idea around with programmers in the assistive technology field who were capable of writing such a program. But no one else got around to it, and McCann finally realized that if he was ever going to have the music-translation program that kept percolating in his imagination, he would have to write it himself.

Window of Opportunity

It was 1991, and people who were blind who worked in the computer field knew that jobs would be changing until there was a full-blown and fully tested screen-reading solution for the new Microsoft Windows operating system. For McCann, it looked like a window of opportunity. He could voluntarily leave his comfortable job, get a great severance package to sustain him financially for a time, and figure out how to make his dream come true.

For the next year or two, McCann cobbled together a collection of ways to generate income and steps toward forming his own company. He was, in other words, a part-time professional musician, a part-time assistive technology trainer, a part-time student (learning the C programming language) and a part-time budding entrepreneur. He learned of a program at the Wharton School of Business at the University of Pennsylvania and wound up as the "project" for two undergraduates. The result was a business plan for a company that would become known as Dancing Dots and whose flagship product would be the GOODFEEL music-translation software. McCann acquired start-up money from a state program that funded new businesses involving technology. St. Lucy's, the school for blind children where he had once been a student, provided him with office space. But the best thing that happened in the formation of Dancing Dots, McCann said, was a meeting that occurred at a family gathering.

It was at a gathering of his in-laws that McCann met Albert Milani, the then-boyfriend and now husband of his wife's sister. A simple conversation about the work each man was doing led to the realization that Milani, an electrical engineer and whiz programmer, was the missing piece in McCann's overall plan.

"What I had before I met Albert," McCann recalled, "was a prototype. Albert turned it into a product." In 1994, Milani joined Dancing Dots full time and continues to be its chief technical officer.

Altogether, five programmers, including McCann and Milani, have been involved in the development of the GOODFEEL program. Software, McCann said, "is like your house; it's never really done."

On the Same Page

What is the GOODFEEL program? Put simply, it is the only software that makes it instantly possible for blind and sighted musicians to be literally on the same page. If a blind musician is given a printed score and needs to read it, Dancing Dots has software to convert it into braille. If the blind musician wants to produce braille and print copies of the same composition, the software can do that, too. And if a blind musician wants to turn a computer into an accessible recording studio, the task can be accomplished with McCann's product as well.

While GOODFEEL is the company's flagship product, Dancing Dots offers other solutions to blind musicians and sighted persons who want to share music with them. With Lime Aloud, for example, a student who is blind can create a piece of music using a screen reader and then e-mail it as an attachment to her teacher. Dancing Dots also publishes books on braille music, teaching the code to blind and sighted students alike.

After 15 years of hard work, Dancing Dots now has customers throughout the United States, Canada, and 40 other countries. As the company's president, McCann has been interviewed by the BBC and the Associated Press, been on television in Italy, and made the front page of a newspaper in Venice. He has enjoyed building friendships with celebrity musicians who are blind and who use his product and has delighted in interacting with blind children at a number of camps and schools.

Milestones

Maybe it is because music is naturally uplifting. Or maybe it is because McCann and Milani are such magnificent human beings. Whatever the reason, the names Dancing Dots and GOODFEEL are representative of the company's spirit. McCann is one of those rare individuals who just makes you feel happy to be in his company. In 2000, he and his wife, Mary Ann, built a house on a hill, and life there is infused with music. Although he does not perform professionally any more, if you talk long enough with him, you will learn about the family gatherings where he plays trumpet or keyboard, Mary Ann plays the harp, and all five children (aged 5 to 18) contribute with instruments ranging from voice to clarinet to glockenspiel to enliven the mix. His attitude, in other words, reflects the company's monikers: his spirit dances, and he makes others feel good.

Following his dream has been, he said, a wonderful ride so far, including myriad milestones along the way. Some of the milestones have included building relationships with well-known blind musicians, such as Marcus Roberts, Ronnie Milsap, Diane Schuur, and France's Jean-Philippe Rykiel. A serendipitous string of events led to Ray Charles showcasing the GOODFEEL product at a spectacular "party-jam session," delighting a few hundred attendees at the CSUN Technology and Persons with Disabilities conference in Los Angeles in 2003.

Originally, McCann said, he just planned to host a kind of jam session. He secured a large room in the hotel and invited a few people to perform and several others to enjoy. At an earlier conference, he had made the acquaintance of a musician from Paris who was a longtime friend of Ray Charles. Once the software was introduced to the legendary musician, Charles let McCann know that he would like to demonstrate what he had done.

The event was nothing short of magical. Charles took the stage and demonstrated how he composed some music with the GOODFEEL program on his computer. Then, with saxophone players gathered by Dancing Dots staff—musicians Charles had never met—32 bars of the new composition were printed out, and a performance was born. "If I made mistakes, play 'em," is what McCann remembers Charles saying to his newly assembled band. But there were no mistakes.

Still, McCann said, no single moment necessarily outshines the rest. He enjoys each opportunity to share his music-translation software with someone new—whether a famous blind musician or an 8-year-old child. At the time of our interview, he was particularly excited about an invitation he received from France. On January 4, 2009, in celebration of the bicentennial of Louis Braille's birth, the French government and association for the blind will hold a week-long series of events honoring the inventor of literacy for people who are blind. McCann has been invited to speak on—what else?—Braille's system of musical notation.

Other upcoming events of interest to AccessWorld readers will be two presentations at the 2008 CSUN conference by the staff of Dancing Dots and a two-week musical camp for blind youths at the Texas School for the Blind.

At the Friends in Art showcase at the 2005 convention of the American Council of the Blind, McCann performed an original composition on trumpet. Later, Gordon Kent, a well-known Washington, DC-based musician who is blind and who works for Dancing Dots, used the GOODFEEL software to craft a beautiful arrangement to complement McCann's performance. As a special treat, McCann shared that file with us, so that AccessWorld readers can hear, firsthand, one of the many things the software can achieve. Remember, as you listen to it, that the music was composed, independently, by a blind musician; the score was produced in braille by a blind musician; and the lovely accompanying arrangement was produced and recorded by another blind musician. Think about that for a minute and appreciate it. And then do what drives McCann and Dancing Dots all the time: Just feel and love the music.

"My People," original composition by Bill McCann, arranged by Gordon Kent, copyright Dancing Dots.

For more information on Dancing Dots products, visit the web site, www.DancingDots.com or telephone 610-783-6692.

If you have comments about this article, e-mail us at accessworld@afb.net.

ATIA 2008

The ninth annual Assistive Technology Industry Association (ATIA) conference was held from January 30 to February 2, 2008, at the Caribe Royale All-Suites Resort and Convention Center in Orlando, Florida. More than 2,300 people attended. ATIA is a not-for-profit membership organization of manufacturers, sellers, and providers of technology-based assistive devices and services. One in three people at this year's conference was a first-time attendee. Speakers and attendees came from places as far afield as Australia, Brazil, Canada, China, Europe, Guam, Israel, Japan, Malaysia, Puerto Rico, Singapore, South Africa, and Venezuela.

New Products

The product that attracted the most attention at ATIA was the KNFB Reader Mobile Edition, software that is loaded onto a Symbian-based Nokia N82 cell phone, which measures about 2 inches by 4 inches and weighs just 4 ounces. With just the press of a few buttons, the cell phone can snap a picture of a memo, book page, or piece of U.S. currency and read it instantly with synthetic speech. The text also appears on the cell phone's screen in large font, with the spoken text highlighted, rendering it easily distinguishable from other text on the screen.

The Nokia cell phone itself has myriad high-end features, including a web browser, e-mail capabilities, an MP3 player, and GPS functions. Although these features require a cell phone screen reader to be completely accessible, such additional software is not required for the Reader. Both Mobile Speak and TALKS are compatible with the cell phone.

The KNFB Reader Mobile Edition sells for about $2,000. Screen readers cost about $300. The Nokia N82 is currently supported by T-Mobile and AT&T, and is available from K-NFB Reading Technology, Inc. dealers.

Ai Squared was showing ZoomText Scripting Edition, which makes it possible to create scripts to customize the behavior of ZoomText and other applications, providing enhanced functionality and automation of many computing tasks. For example, you can automate the process of finding specific fields in a large database. A script is a text file that describes the steps that are required to complete a given task. ZoomText scripts can be written using industry-standard scripting languages, such as VBScript, JavaScript, or Perl. Scripts can be written in Notepad; no additional software is required.

GW Micro announced that the next version of Window-Eyes will include support for a scripting language. It will be possible to write scripts in several programming languages.

Clarity introduced the i-vu, a pocket-sized CCTV with 5-20x magnification on a 2-inch screen. It lets you view images in color, reverse image, and freeze frame.

Conference Sessions of Interest

Dusty Voorhees and Eric Damery, of Freedom Scientific, conducted a session on JAWS and MAGic. They demonstrated how speech from JAWS has been integrated into MAGic. They said that JAWS navigation keys, such as H for heading, would be added to MAGic in the future.

Doug Geoffray, of GW Micro, led a session on the new scripting capabilities that will be included in the next version of Window-Eyes. These scripts will streamline the screen reader's use with various programs. Scripts have already been written for Quicken and WinAmp.

Ike Presley, of the American Foundation for the Blind (AFB), presented a session on teaching the use of audio-assisted reading. The focus was on describing how to teach students to identify and take note of important information while reading audio books.

Kay Ferrell, of AFB, reported on the development of guidelines for describing audio material for children.

Cecilia Robinson, of the Region 4 Education Service Center, discussed assistive technology and resources that can help instructors teach the Nemeth Code to students.

Anne Taylor, of the National Federation of the Blind, discussed low-cost screen readers, including Nonvisual Desktop Access, Thunder, and System Access. The session covered the strengths and weaknesses of these products and made recommendations on when they would be viable alternatives to full-featured products.

Conference Access

There were good points and bad points regarding access for attendees who are blind at this year's conference. Volunteers at the conference's registration area were not helpful in answering questions or giving verbal directions for blind people to get to the Accessibility desk, where accessible conference materials were being distributed. They were not familiar with the format or contents of the conference CDs. Attendees were dismayed to find no braille on the covers of sections of the braille program. Instead, they found useless, raised-print letters. It may help the people who give out braille programs at registration to have print on the covers, but there must be braille on the covers as well.

On the other hand, carpeting and "bumpers" were used to guide people from the buildings with guest rooms across the parking lot to the convention center where sessions and exhibits were to be found. Braille menus were available and accurate in the Tropicale restaurant.

Leadership Forum

This year's conference included the second ATIA Leadership Forum on Accessibility. The forum provided an opportunity for more than 100 representatives from leading corporations, governmental agencies, and educational institutions to explore specific strategies for integrating accessibility throughout their enterprises.

During the opening general session, Frances West, IBM's director of human ability and accessibility, presented "Justifying Accessibility: The Business Case for Inclusion." Rob Sinclair, director of accessibility for Microsoft, discussed "Technology and the Accessible Workstyle." These two sessions provided a look at the actual benefit and value of investing in accessibility and the direction of accessible technology. General sessions provided case studies from Walgreens, Canon USA, Adobe, and CAP (Computer/Electronic Accommodations Program) within the Department of Defense. Each presenter discussed the strategies that were used, the benefits that were realized, and the lessons that were learned.

This is just a small sampling of the information and networking opportunities that the participants shared at the conference. The ATIA conference continues to grow and has become an important annual event in the field of assistive technology.

If you have comments about this article, e-mail us at accessworld@afb.net.

Exercise for Everyone

I'd like to add to the discussion on access to exercise. Responding to your superb article on gym equipment, I believe another major barrier is the excessively visual teaching style of most instruction. I've tried, unsuccessfully, to keep up in aerobics classes, only to find that I needed so much assistance that I slowed the pace for the entire group. Just watch any exercise video and you'll find the instructor shouting: "reach your arms up" or "stick your elbows out." Unless you are already an aerobics instructor, I challenge the average user to correctly follow the moves with the screen turned off. And audio exercise tapes rely on included booklets with diagrams to demonstrate their exercises.

I've heard of students whose success in exercise class occurred by meeting with the trainer beforehand to become completely familiar with all the movements. That strategy never worked for me, perhaps because I'm overweight, middle-aged and kinesthetically challenged, and because the instructors insisted they improvised and that the moves for each class session were always slightly different.

Vincent Martin, who wrote your January issue's letter to the editor, suggests we all get out and walk. I once lost 100 pounds walking, and couldn't agree more. When I ran all my errands on foot, I stayed slender.

When I was thin, I could do chin-ups, roller-blade, jump rope and ski. But when I was thin, I was very poor, because I've never been able to stay thin while working full-time. Sidewalks are disappearing from our suburbs. Some of us are stuck on or waiting outrageous periods for paratransit. And additional factors like arthritis or inflexible work hours can prevent us from getting out there. One major barrier to using the gym often is simply finding the time to commute to a gym when you don't drive.

Having equipment at home doesn't work when spouses don't want that ugly treadmill in the living room, or fixed income prevents the purchase of gear. My sighted friends can't afford a personal trainer either, which only shows that most barriers to exercise aren't unique to us, but visual impairment does intensify those barriers!

I thought it was interesting that Mr. Martin can "design a workout regimen with nothing but body-weight, dumbbells, a mat, and an exercise ball, and keep any sighted or visually impaired person in shape." Good for him, he's a Paralympic athlete. I wish he'd design an exercise program for me! Between commuting and working, I'm away from home 14 hours each weekday, and my boss recently ordered me to work an extra hour because he thought I was leaving too early! In between times, I try to be a good homemaker, and to get a few hours of sleep.

I'd really like to see someone get a grant to do just that—design an exercise program and deliver it for reasonable cost to any interested blind or visually impaired individual. I've written National Braille Press, for example, suggesting an exercise book that contains no-miss directions for the common moves with plans for active, sedentary, older, fatter, and also more athletic folks. They're interested but haven't found a suitable book yet to adapt.

A few individuals have created exercise tapes for blind people, but these are small enterprises that stay in business for only a few years. I'd love to see an open-source online training funded by a federal grant. I've written to colleges and universities, hoping they can interest some graduate students in designing such a program. I would be happy to be a guinea pig. I've written to radio reading services hoping they'd put an exercise hour on the air. I've written to the descriptive video folks, asking them to describe some of PBS's exercise shows. I'll be happy to put my pudgy self on video, demonstrating the moves so sighted trainers can get precision of instruction down. But nobody's taken me up on my offer! Maybe someone reading could forward this letter to a group who might.

Exercising would also be ideal material for a software program. You would perform a few simple fitness tests, and the software would instruct you in exercises that would be challenging, but not impossible for your particular fitness level. Instructions would be precise—rather than "stick your arms out," "extend your arms at right angles to your body, parallel to each other, palms facing towards the floor, fingers slightly spread apart."

Just as it is easier for a blind geek to figure out how to make JAWS co-exist with an unfriendly application, a physical education teacher who lost her sight would likely have less trouble accessing an exercise regimen than an out-of-shape klutzy blind person. Just as online access technology training is now becoming popular, I hope more organizations will find grant money to teach exercise to us remotely.

Deborah Norling
Milpitas, California

Google It! A Guide to the World's Most Popular Search Engine

Even people who do not use computers have some sense of what Google is. Google is the only Internet search tool that became such an overnight phenomenon that its name is now a widely recognized and accepted verb. Do you need to know something? Just Google it!

Jonathan Mosen is a familiar name to many users of assistive technology for people who are blind. He was first known as a voice on ACB Radio's Main Menu and later as a major presence at large assistive technology companies. But his work as an individual—just one smart blind guy sharing what he has learned with others—is perhaps the reason most people are familiar with his work. Google It! began as an audio tutorial; Anna Dresner, of National Braille Press, then updated it and brought it onto the braille page. Now even more people can appreciate what Mosen has presented in this powerful and concise package.

What Mosen and Dresner have done with this cleverly concise tutorial is to give us heaps of tips and tricks for harnessing the power of Google in ways that can spark many an "aha" moment for even the most sophisticated Internet seeker.

Did you know, for example, that Google can be a speedy and powerful dictionary? Did you know that you can use Google as a quick and easy way to check the status of your favorite stock? Or, if you are expecting a package, did you know that Google is so smart that it can recognize whether your tracking number is from UPS or FedEx and take you directly to the tracking information that you need?

Organized into 25 convenient "chapters" or categories, the book is a wonderful reference tool. In other words, after you work through it once, if you later need to use Google as an efficient means for checking flight status or weather reports, you can locate these areas in the table of contents and go directly to them.

"Working through" the book the first time is an apt description of how it can be best appreciated. With either the hardcopy braille version or the downloadable one on your braille PDA or notetaker, you can sit in front of your computer and experiment with each technique that Mosen describes and experience the amazing power firsthand. Incidentally, one of the most valuable pieces in the book may well be in the early section where Mosen explains how to customize your Google preferences. Learning, for instance, how to set preferences so that each new link that is followed can be closed independently, thus returning you to your Google search results, is worth the $12 price of the book in itself.

Examples are clearly illustrated, and each includes a sample of how the results page should appear. Because of these examples, I quickly realized that not all Google searches are created equal. For some time now, I have used the Google "accessible" link—the link that brings up only web pages that are accessible to users of screen readers. (This URL is www.labs.google.com/accessible.) Although this starting location is often useful, it falls short of making the most of several of the tricks that Mosen has to offer.

Using Google as a calculator, for instance, fell flat when I entered an equation into the search box on the "accessible" Google page. Upon returning to plain www.google.com, however, I found that using Google as a calculator worked like a charm.

If you just want to be a power searcher of information, this book will show you how to fine-tune your search terms to obtain the precise information that you want. If you are a news junkie, you can learn how to track and organize the news that is most interesting to you.

The book is a quick read, a handy reference, and a powerhouse of information. If there is anything you really want to know, just Google It! This book will show you how to do so in ways that will dazzle your friends and associates.

To order, call 800-548-7323 or visit www.nbp.org.

If you have comments about this article, e-mail us at accessworld@afb.net.

Exploring Methods of Accessing Virtual Worlds

"Musings on the Evolution and Longevity of Accessible Personal Digital Assistants," in the November 2007 issue of AccessWorld, left the reader in suspense. The author—slightly winded from climbing all the way from the hotel lobby to the 47th floor—once more became ridiculously lost trying to locate the ever-elusive elevator bank that would have taken him back to his own floor. After 10 minutes of senseless wandering, he asked directions of other blind guests, who seemed as befuddled as he was by the misleading echoes generated by the seemingly irregular open spaces around him. Attempting—and failing miserably—to detect the subliminal sound of the elevator bell just below the threshold of any reasonable hearing, perhaps while slightly dissociating from stress, he imagined himself immersed in the virtual online universe of Second Life—the popular but totally inaccessible interactive three-dimensional (3D) Internet environment used by more than 9 million sighted subscribers worldwide.

His admittedly humorous accessibility plight inside the real hotel gave him abundant time to reflect on its implications for a recently begun project: an early investigation of accessibility issues in the rapidly growing online world of 3D virtual environments.

In a virtual world like Second Life, users—called citizens—all of whom are sighted, see themselves immersed in a visible representation of some reality—ranging from the mundane to the fantastic—where they may play interactive games; visit "islands" replete with buildings, museums, and people; attend college lectures; transact imaginary or real business; chat with other citizens; manipulate objects; or otherwise get hopelessly lost. Virtual worlds are undoubtedly a highly visual experience, yet, as early as July 2007, we already had good reasons to be confident that they can be made accessible to people who are blind more easily than might outwardly be expected.

The 3D Internet

The two-dimensional (2D) Internet is filled with standardized features that yield many forms of content in a variety of formats, including HTML, dynamic HTML, video and audio streams, interactive widgets, and secure transactions. The growing presence of major mainstream enterprises like IBM in virtual worlds, such as Second Life, may be a telling sign that the 2D Internet paradigm is showing accelerating evolutionary paths to 3D extensions—sometimes with emerging capabilities to emulate real-world situations. Admittedly, the 3D Internet is in its infancy, and while it does not yet have many capabilities that real enterprises, schools and universities, governments, or even virtual e-tourists need to conduct business, a growing number of mainstream users find the overall virtual world experience far more immersive than that of the classic Internet. There is a growing opinion that virtual worlds may eventually replace the 2D Internet for many applications.

Accessibility Goals in Virtual Worlds

The 3D Internet's outwardly visual medium—containing complex and often absurdly detailed spaces, myriads of objects, a vast number of fancifully attired virtual "people," and a bewilderingly rich variety of modalities of interactions—presents unusual challenges for enabling access to users who are blind. Yet, we have reason to believe that there are many possible solutions to the access challenge. Virtual worlds are conceptual spaces that bear various degrees of correlation to the real world. By operating a mouse—or an equivalent device—sighted users move, learn, and interact by controlling a highly personalized iconic representation of self called an avatar. The often fancifully attired avatar is the user's point of regard—an extension of the 2D cursor into a content-rich 3D environment. The avatar has visible spatial and operational relationships with nearby objects and other avatars. Our challenge is to transform the visual operational paradigm into an equivalent nonvisual paradigm for users who are blind.

The ultimate goal of nonvisual accessibility to virtual worlds is to create an alternative paradigm that is both sensorially immersive and operationally effective. While a blind citizen of most any virtual world is shut out of the whole experience, we suggest that, in the future, it may be possible to create an experience for users who are blind that is as operationally efficient and emotionally fulfilling as that for sighted users. People who are blind may eventually control their virtual surroundings with predictive ease and comfort through alternative nonvisual operational methods, while enjoying a convincing sense of "being there" yielded by the spatial and tactile clues of a rich canvas of immersive soundscapes and semantically dense haptic stimuli. We suspect that totally immersive sensorial environments, by themselves, will remain insufficient to yield true operational accessibility and may, in isolation, be highly confusing to users who are blind. Rather, we are confident that sensorial immersion will eventually constitute a valuable augmentation of 3D extensions to more traditional software accessibility techniques that are derived from the familiar world of the 2D Internet. Some limitations of sensorial immersion are exemplified by the marginal accessibility of the self-voiced game Terraformers, discussed later, in which the admittedly interesting soundscape created by the virtual environment remains ancillary to the operational accessibility that is realizable by using a screen reader. In the real hotel where he resided during the 2007 convention of the National Federation of the Blind, the blind author remained lost and confused, while cutting only a dendritic path toward his goal in spite of the rich sonic and tactile feedback afforded by the environment. At each turn, he asked for directions—in other words, he sought a more deterministic method that would augment his senses and let him reach his destination.

The immediate goal of our project is to identify and develop a set of methodologies and components that are necessary and sufficient to constitute a minimal core of deterministic operational accessibility for people who are blind in virtual worlds. We believe that these new techniques can be successfully derived from existing software-accessibility paradigms. A blind citizen of a virtual world must be capable of doing the following:

  • Determining spatial and operational relations between his avatar and nearby objects—in other words, querying a "Where Am I?" function. The function may yield a text message, such as "Museum of Natural History. You are in the Cambrian explosion exhibit. You are surrounded by dozens of specimens of Opabinia regalis."
  • Querying objects' descriptions. For example, the citizen may ask what an "Opabinia regalis" is.
  • Discovering operational modalities for interactive objects—in other words, what can the citizen do to an Opabinia regalis?
  • Operating on objects—in other words, activating one of the various operations defined for the object.
  • Navigating and transporting to other locations or moving to the operational boundaries of different objects in the same neighborhood.

Admittedly, our goal is challenging. Yet in virtual worlds, many things already work in our favor. The real world has physical characteristics that impose operational limits. We cannot leap into the air and fly, walk through solid objects, change our size and shape in an instant, and certainly cannot teleport. If we had fallen down the infamous "blue stairs" in the real convention hotel, there would have been humiliating and potentially exceedingly painful consequences. Fortunately, none of these tawdry limitations need apply in a synthetic environment, a place where we—and a score of virtual world programmers—may all act as Dei ex machina and control and even bend the "forces of nature." Through early investigation, we may be starting to glimpse how implementations of virtual worlds can include accessibility features that enable users who are blind to participate as effectively, although perhaps not always as conveniently, as sighted users.

Applying 2D Internet Accessibility to a 3D Environment

Today, people who are blind can use the 2D Internet successfully. Textual web content and structural elements are made available and navigable through current screen-reader technology. If web accessibility guidelines are followed, even images may be described. Screen readers' conversion of web site content into synthetic speech or braille and users' ability to navigate a site with only the keyboard can often yield a satisfactory experience. It is our intent to extend this successful paradigm to yield an equally satisfactory experience for users who are blind in a virtual world.

Serendipitously, virtual world graphical user interfaces (GUIs) often inherit a considerable set of legacy 2D widgets. There already are toolbars, dropdown menus, text-entry fields, selection buttons, sliders, and many other familiar GUI components (see Second Life screen shot below). Traditional 2D interactive objects also often appear when one clicks on—or moves the avatar over—any of the nearly countless denizens of the virtual world. Software-accessibility techniques for these commonplace GUI components are already well understood and may be addressed by existing screen-reader and keyboard-navigation techniques.

An avatar is shown standing in front of the Reuters building. The user has mouse-clicked the Inventory button, and a list of things being carried is shown in a dialogue box.

Caption: A screen shot from Second Life.

Yet, how can the native 3D content of a virtual world be accessed? How can a citizen who is blind—who cannot see his avatar or any surrounding virtual objects—ever interact with the other residents of virtual space or participate in any way to transact virtual business?

We may think of a virtual world as a GUI application that behaves much like a vast extension of a classic web browser. The avatar's general vicinity may be thought to be analogous to a web site, and virtual objects nearby can be thought of as the web site's content. We may regard the operation of moving to a different location in the virtual world to be analogous to opening a different web site or web page. Undeniably, though, there are glaring differences between the spaces of virtual worlds and the classic Internet. Among other things, not only are there the familiar dimensions of height and width inherited from the 2D web site paradigm, but these two dimensions are augmented by the Z axis of depth. Screen-reader technology and keyboard navigation must be enhanced and extended by technical breakthroughs specific to 3D environments. We may also develop 2D operational models of virtual worlds, where current screen-reading technologies can already operate.

Virtual World Accessibility Techniques

Accessible applications in 2D have an architected hierarchical tree structure. A window has toolbars, and toolbars have buttons and dropdown menus; a web site has pages, and pages contain headers, text paragraphs, forms, links, and other static and dynamic elements. Rather than attempt to interpret or recognize the myriad visible shapes and light patterns painted on the computer display, modern screen readers are capable of traversing and peeking into these underlying abstract tree structures. Names, roles, states, and other accessible attributes of many standardized 2D GUI components are queried by the screen-reading software and are then spoken or brailled as the user traverses them by keyboard.

By extension, the "engineering struts" behind virtual worlds may be abstracted as a set of hierarchical structures and directed graphs consisting of nodes and edges that represent abstract spatial relations, objects, and object properties. Virtual buildings have rooms, rooms have windows and doors, and doors lead from one room to another. Some virtual objects may exist in proximity to the user's avatar, while other objects and spaces are far away. Some may be purely decorative, while others may have a highly operational value. Some objects are completely static, while others may be highly interactive. Some are atomic, while others contain structural subcomponents. These relationships and characteristics remain undeniable challenges to current software-accessibility technologies, but constitute exciting opportunities for developing breakthrough accessibility techniques. It is interesting that some aspects of accessibility to virtual worlds may let us harness legacy software technologies that are already serving users who are blind in more traditional 2D applications.
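To make the analogy concrete, here is a minimal sketch of such a traversal; the nested-dictionary structure is invented for illustration, and a real virtual world would expose a far richer graph. It walks a building/room hierarchy the way a screen reader walks a 2D widget tree:

```python
def walk(node, depth=0):
    """Depth-first traversal of a hypothetical accessibility tree,
    yielding one indented name per node, as a screen reader might."""
    yield "  " * depth + node["name"]
    for child in node.get("children", []):
        yield from walk(child, depth + 1)

# A toy hierarchy: a virtual building containing rooms and a door.
building = {
    "name": "virtual building",
    "children": [
        {"name": "lobby", "children": [
            {"name": "door to the exhibit hall"}]},
        {"name": "exhibit hall"},
    ],
}

for line in walk(building):
    print(line)
```

Traversing the abstract tree, rather than the rendered scene, is exactly the trick that made 2D screen readers practical.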

Imagine a scenario in which a user hits a hot key that causes the system to announce or braille a list of names of all the items within a 3-meter radius of the avatar (see avatar image below). This list may be numbered. The user may further specify that he or she requires only the names of nearby avatars while ignoring other objects, or may prefer to learn about avatars and interactive objects while ignoring items that are likely to be only decorative. A key combination may be designed to move the user's avatar to the proximity of a specific item, activate it, and then announce what operations are available in the particular context. In many virtual world implementations, a traditional, more familiar, and accessible 2D menu may be associated with an interactive item and may be accessed by known means. In an exploratory mood, a user may direct the GUI to transfer the avatar to the next frame or virtual sector of space to the north and generate a list of nearby items.

An avatar is standing in front of an interactive 3D display of IBM computer products. Each display item that can be selected and queried with a mouse is highlighted, and there is a number and a name beneath each of them.

Caption: An avatar stands in front of an interactive nearby object.

As we mentioned earlier, this is somewhat akin to browsing 2D web sites, except that in the 3D Internet, one is immersed in regions of virtual space instead of observing flat and stylized web pages. A user who is blind may also customize the size and shape of the area of perception—in other words, may tailor the virtual horizon around the avatar. The virtual horizon of perception may be extended or reduced as desired via keyboard control to span a useful region of space for the particular situation. The horizon may span hundreds of yards in an open and sparsely populated field of flowers. Conversely, while exploring a clockwork mechanism, a user may prefer to reduce his or her virtual field of view to span just a few virtual cubic inches.
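The radius query and adjustable horizon might be sketched as follows; the names (`Item`, `nearby`) and categories are assumptions for illustration, not part of any existing product:

```python
import math
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    position: tuple   # (x, y, z) in virtual meters
    category: str     # "avatar", "interactive", or "decorative"

def nearby(items, origin, horizon, categories=None):
    """Return a numbered list of item names inside the virtual horizon,
    optionally restricted to the categories the user cares about."""
    hits = [i.name for i in items
            if math.dist(origin, i.position) <= horizon
            and (categories is None or i.category in categories)]
    return [f"{n}. {name}" for n, name in enumerate(hits, start=1)]

scene = [
    Item("Reuters kiosk", (1, 0, 2), "interactive"),
    Item("potted fern", (2, 0, 1), "decorative"),
    Item("avatar: Anna", (0, 0, 2.5), "avatar"),
    Item("distant fountain", (40, 0, 3), "decorative"),
]

# Everything within a 3-meter horizon, decorations filtered out:
print(nearby(scene, (0, 0, 0), 3.0, {"avatar", "interactive"}))
```

Shrinking or stretching the `horizon` argument is the keyboard-controlled "field of view" described above: a few virtual inches for the clockwork mechanism, hundreds of yards for the field of flowers.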

As with 2D accessibility, objects in virtual worlds should be designed to support a set of accessibility attributes to be made available to assistive technologies. Not only should there be a name and a description of an object, but role, state, and perhaps spatial orientation and category attributes must also be defined. In the doorway image below, for example, the door to the Reuters building is in the open state and faces south. Accessibility requires that a user who is blind who is facing north should be able to determine independently if he or she can pass through the door.

A screenshot of the entrance to the Reuters building is shown with a circle around it. Next to the door is a list of accessibility attributes as follows: name: door, description: doorway to the Reuters building, role: an access portal, state: open, orientation: facing south, position: 115.99.25.

Caption: A doorway with an associated list of attributes.
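The doorway's attribute list maps naturally onto a small record. Here is a sketch, with field names modeled loosely on 2D accessibility conventions rather than on any shipping API, of how assistive technology might render such attributes as speech:

```python
# Accessible attributes for the doorway, as in the illustration above.
door = {
    "name": "door",
    "description": "doorway to the Reuters building",
    "role": "access portal",
    "state": "open",
    "orientation": "facing south",
    "position": (115, 99, 25),
}

def announce(obj):
    """Compose a screen-reader-style utterance from accessible attributes."""
    return (f"{obj['name']}, {obj['role']}, {obj['state']}; "
            f"{obj['description']}, {obj['orientation']}.")

print(announce(door))
```

Because the `state` attribute is part of the record, a user facing north can decide independently whether the door can be passed through, with no visual inspection required.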

Browsing the Virtual World

Wandering in an invisible environment is not a new challenge unique to 3D virtual worlds. Since the late 1970s, game players who are blind have sparred with text-based adventure games implemented for the Apple IIe and later for DOS with the sole help of an old screen reader. Who in the now-silvering crowd has not pitted his or her formerly younger wits against Cave and Zork? In the 1980s, some of us even enjoyed text-based MUD (multiuser dungeon) games in which the players interacted with an imaginary world via a command-line interface viewed on an ASCII character display. Simple keyboard commands were sufficient to navigate the world, and we received information about our surroundings through terse and often-glib bits of text that were verbalized by a Votrax speech synthesizer or displayed on a primitive refreshable braille display. We wandered inside vast mazes, found and picked up sundry objects, opened brown bags, extracted bottles of water, offered lunches "smelling of hot peppers" to ungrateful Wumpuses, and even wore cloves of garlic. We then vanquished trolls with a "bloody axe"—never try it with the "rusty knife"! As early as the days of MUDs, multiple players occupied the same virtual game space, and it was possible to engage in text chats with nearby players. The playing field was quickly leveled in a world where every one of us—blind and sighted alike—was served only terse and glib text as food for our wild and vivid 3D imaginations.

In recent times, a modern version of the text MUD, called Terraformers, was developed by Pin Interactive. In this game, the graphic video display is said to be optional and can—at least in theory—be turned off. The game is marginally accessible to players who are blind through keyboard navigation and self-voiced audio cues alone, although more functional accessibility requires the use of a screen reader.

Modern general-purpose virtual worlds are not games per se, although they often contain some gamelike components. In games, players overcome deliberately introduced obstacles and are intentionally misled in their quest. Game designers cleverly tune the degree of difficulty to prevent completion before the game space has been thoroughly explored. Navigational solutions that serve well in a game world may not perform optimally for users who are engaged in business transactions in virtual worlds, where speed and expediency may be paramount.

The objective for virtual worlds that are tuned to business applications is usually of a more functional and pragmatic nature than that of a game. The design of the user interface should concentrate on ease of use and the efficient realization of a user's intentions. Tawdry limitations that are imposed by physical laws governing the real world may be happily glossed over. "Teleportation" to a meeting, for example, must be available to any user, regardless of his or her disability. Accessibility may require a blind user to control a simple dialogue box to enter the meeting location or its coordinates in the virtual universe. Then, by arrowing through a familiar-looking pop-up menu or tree view, the blind citizen may determine that a number of seats are nearby and may locate a seat that is in the "empty" state, perhaps even next to a friend. A keystroke may then move the avatar to the selected seat, and perhaps one more command may cause the same avatar to "sit" down. A citizen may scan the list of nearby avatars to determine if the meeting chairperson is present and then join a live discussion with other colleagues using an accessible text or voice chat.

A blind visitor to a virtual world must be able to perform an initial exploration of the virtual environment with maximum ease. One simple technique emulates real-world tethered navigation methods. The user who is blind may connect his avatar to that of a sighted friend or volunteer in "follow" mode. The "blind" avatar would then travel behind or alongside its guide in an oversimplified imitation of a blind person following a sighted guide. As in the real world, the tethered technique has its limitations; eventually, the blind user of virtual worlds will adopt much more independent travel strategies.

In many cases, interaction with a virtual object requires that the avatar be positioned within a field of proximity to the object of interest. "Dendritically stumbling" around the desired object is not noticeably effective (remember the infamous blue hotel stairs?). A simple keyboard command should be implemented to move one's avatar automatically to an object's operational radius, perhaps even in "ghost" or "flying" mode, if physical obstacles intervene. Optionally, a kinder and gentler pathfinding algorithm may gradually guide the user to her or his destination while passing around any "solid" obstacles. The synthetic nature of the environment opens up unlimited possibilities.

Metadata (Annotations) for Objects and Spaces

In many cases, the simple names and properties of surrounding objects may not satisfy our curiosity (What do you mean by "green toad"?). We may require a more comprehensive description of our object of interest. Ideally, the creators of the items, or perhaps even just some helpful visitors, have added descriptions that the user interface can verbalize or braille. A recorded voice description may even be provided. (Ah, so this is what the infamous Texas Houston Toad looks like!) These are data about data, or metadata, and in this article, we refer to them as "annotations." Annotations serve the same purpose as alternative text attributes and longdesc (a link to a file with a detailed description of the image) on images in the 2D Internet. Some areas in virtual worlds are prone to being heavily annotated with descriptions that enhance their usable accessibility to people who are blind. Other areas may remain relatively barren of annotations, representing "blind spots" for some of us. Annotating virtual objects alone is not adequate for effective spatial orientation. Spaces must be annotated as well. Any significant region, sector, or discrete volume of virtual space requires a label and a description.

In some cases, what is outwardly perceived to be an object may instead be constructed purely as a space, such as an open door or doorframe. A mechanism should be created to annotate objects and spaces interactively as needed by different users, regardless of the objects' ownership. The same object may receive multiple annotations, each with a unique digital signature identifying its creator. Users who are blind may then decide to accept annotations created only by a circle of "known" users. To reduce "virtual graffiti," users who are blind may optionally accept annotations from contributors whose "reputation" exceeds a certain threshold of "trust." This annotation activity may take advantage of social networking behaviors and generate useful information that benefits all inhabitants of virtual worlds, regardless of their disabilities.
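The trust-based filtering policy sketched above might look like the following. The `Annotation` fields, the `readable_annotations` function, and the reputation scale are all hypothetical; no real virtual-world platform is implied.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """One user-contributed description of a virtual object or space.
    Fields are illustrative only."""
    object_id: str
    author: str
    reputation: float   # the author's community "trust" score, 0.0-1.0
    text: str

def readable_annotations(annotations, known_authors=(), min_reputation=0.8):
    """Keep annotations from the user's circle of 'known' contributors,
    plus those from strangers whose reputation clears the trust
    threshold -- the anti-graffiti policy described in the article."""
    return [a for a in annotations
            if a.author in known_authors or a.reputation >= min_reputation]
```

A screen reader or braille display would then render only the surviving annotations, so "virtual graffiti" from low-reputation strangers never reaches the user.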

Cognitive Filters in the Virtual World

Sighted people in the physical world perceive hundreds of visual objects at a glance but automatically ignore items that are not immediately important, operationally relevant, or otherwise interesting. Similarly, a person who is blind in a virtual setting may need to establish selective perception filters to limit the cognitive overload caused by overdetailed environments. There are essential operational properties of objects, but there is also a frequent abundance of decorative properties that have no operational value and can be safely ignored. We propose that virtual reality should first provide a full range of operational capability for users who are blind. Decorative aspects are secondary. It may not be essential to know that the avatar is standing in a virtual field of 4-inch-tall multicolored Portulaca flowers, the sun is shining 32.5 degrees above the horizon, and nearby avatars all look like feathered lizards, except for one who is impersonating a Houston Toad. More important: What are the names of the avatars surrounding the blind user? What are they chatting about? What objects can be manipulated in the vicinity, and what transactional options do they yield? What is my location, and where are my friends?

Multiple filtering modes should be provided to allow one easily to select different types of object awareness, depending on the setting or context. A social context may have perceptual requirements that are different from those of a business-related scene or an exploration activity. Some virtual world locations may offer special filter modes that visually impaired users may select to highlight particular features and activities. Adaptive user interfaces may be developed that learn a user's perceptual preferences in various types of situations over time.
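The context-dependent filter modes described above could amount to little more than a mapping from mode name to the object categories worth announcing. The mode names, category labels, and object dictionaries here are invented for this sketch.

```python
# Each mode names the object categories a blind user wants announced
# in that setting; everything else is suppressed as decorative detail.
FILTER_MODES = {
    "social":      {"avatar", "chat"},
    "business":    {"avatar", "document", "transaction"},
    "exploration": {"landmark", "exit", "avatar"},
}

def perceived(objects, mode):
    """Return only the objects whose category the active filter mode
    passes through, preserving scene order."""
    wanted = FILTER_MODES[mode]
    return [o for o in objects if o["category"] in wanted]
```

Switching modes as the user moves from a conversation to an errand then changes what the speech or braille interface reports, without the scene itself changing at all. An adaptive interface could go further and learn per-user category weights rather than using fixed sets.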

Virtual worlds are used for a variety of purposes by sighted users, including training, education, collaboration, simulations of real-world scenarios, modeling, and the delivery of various forms of entertainment. How may these purposes apply to the accessibility requirements of users who are blind? We posit that the exploratory research projects being launched in various accessibility research organizations, such as ours at the IBM Human Ability and Accessibility Center, are seminal to long-term accessibility for people who are blind and may, before long, enable the creation of elegant and interoperable standards for usable accessibility in virtual worlds.

The Bottom Line

We are confident that it is possible to provide a high degree of accessibility for people who are blind in virtual worlds. In our early research, we have attempted to identify some of the most promising software techniques among a wide range of possibilities. In the future, we hope that virtual worlds may serve as accessible models of the real world and that it may be possible to extend some virtual accessibility techniques to life in our physical space. The blind author cannot help musing that if he had had access to a virtual model of the hotel where he so often got lost last July, and perhaps the opportunity to explore it at some length in cyberspace, his wandering adventures in that unforgivingly solid reality might have been just a little more sure-footed.

If you have comments about this article, e-mail us at accessworld@afb.net.