The iPad was meant to revolutionize accessibility. What happened?
The default TouchChat display of PRC-Saltillo’s communication device, for example, consists of 12 rows of eight buttons displaying a mix of letters, object icons (“apple”), category icons (“food”), and navigation elements (back arrows)—many of them in garish neon colors. Part of what I find infuriating about the interface is how it treats every button identically—they’re all the same size, 200 by 200 pixels, and there’s no obvious logic to button placement, text size, or capitalization. Some words are oddly abbreviated (“DESCRB”), while others (“thank you”) are scaled down to fit the width of the box. The graphic for “cool” is a smiling stick figure giving a thumbs-up; aside from the fact that it’s redundant with “good” (a hand-only thumbs-up), “yes,” and “like” (both smiley faces), what if the user means cool in temperature?
There are no standard principles of information hierarchy or interface design for AAC devices—it’s up to Surabian to define the number and size of buttons on each screen, as well as icon size, type size, and whether a button’s position should change or remain fixed.
“Everything moves slowly because it has to be compatible with the past, which means if the past was kind of clunky, part of the present is kind of clunky too.”
Mark Surabian, AAC consultant
I’d called Surabian in hopes of being wowed. When he and I met up at a café in lower Manhattan, I got excited by the rolling briefcase at his side, thinking he might show me the coolest stuff happening in AAC. But I was again underwhelmed.
Because the reality is this: the last major advance in AAC technology happened 13 years ago, an eternity in technology time. On April 3, 2010, Apple released the iPad that Steve Jobs had unveiled that January. What for most people was basically a more convenient form factor was something far more consequential for non-speakers: a life-changing revolution in access to an attractive, portable, and powerful communication device for just a few hundred dollars. Like smartphones, iPads had built-in touch screens, but with the key advantage of more space to display dozens of icon-based buttons on a single screen. And for the first time, AAC users could use the same device they used for speaking to do other things: text, FaceTime, browse the web, watch movies, record audio, and share photos.
“School districts and parents were buying an iPad, bringing it to us, and saying ‘Make this work,’” wrote Heidi LoStracco and Renee Shevchenko, two Philadelphia-based speech-language pathologists who worked exclusively with non-speaking children. “It got to the point where someone was asking us for iPad applications for AAC every day. We would tell them, ‘There’s not really an effective AAC app out there yet, but when there is, we’ll be the first to tell you about it.’”
A piece of hardware, however impressively designed and engineered, is only as valuable as what a person can do with it. After the iPad’s release, the flood of new, easy-to-use AAC apps that LoStracco, Shevchenko, and their clients wanted never came.
Today there are about a half-dozen such apps, each retailing for $200 to $300, and all relying on 30-year-old conventions: users select from menus of crudely drawn icons to produce text and synthesized speech. Beyond the high price point, most AAC apps require customization by a trained specialist to be useful, which could be part of the reason access remains a problem; LoStracco and Shevchenko claim that only 10% of non-speaking people in the US are using the technology. (AAC Counts, a project of CommunicationFIRST, a national advocacy organization for people with speech disabilities, recently highlighted the need for better data about AAC users.)
There aren’t many other options, though the possibilities depend on the abilities of the user. Literate non-speakers with full motor control of their arms, hands, and fingers, for example, can use readily available text-to-speech software on a smartphone, tablet, or desktop or laptop computer. Those with limited fine motor control can use the same applications with the help of an eye-controlled laser pointer, a physical pointer attached to the head, or another person assisting them with a touch screen, mouse, or keyboard. The options dwindle for pre-literate and cognitively impaired users, who communicate with picture-based vocabularies.

For my daughter, I was briefly intrigued by a “mid-tech” option—the Logan ProxTalker, a 13-inch console with a built-in speaker and a kit of RFID-enabled sound tags. Each of the console’s five stations recognizes the tags, which are pre-programmed to speak the words for their icons. But then I saw the price: $3,000 for 140 tags. (For context, the National Institutes of Health estimates that the average five-year-old can recognize over 10,000 words.)