DJ Savarese

Disrupting the Garden Walls

I’ve been augmenting my communication since I was six years old. Circumventing the ableism encoded into the devices I use to speak has been much harder.


The first time that I borrowed someone else’s voice I was six years old. After surviving two years in care-less foster homes, I had recently moved in with a family I loved and hoped to be a part of permanently. But, I worried, would I really get to stay? At the time, with no other recognizable means to communicate, I used the trailer for the film Angels in the Outfield to ask my most pressing question: “Dad, when are we gonna be a family?” I knew exactly when the kid popped the question, and when I saw him turn to face his dad, I’d hit STOP on the VCR. It took a number of tries before my parents learned to hear me, but eventually they did and began reassuring me that I was there to stay.


We soon began creating a number of tools to communicate with each other. Cameras became our main translators. My parents took pictures of everything: activities, places, people, foods. They used the photos to make sure I understood them and to teach me how to make choices by picking photos of what I wanted. I could escape a plate of cauliflower by bringing them a logo for KFC. Each photograph was labelled with the printed word, so I could learn sight words and begin to understand what they were saying. Still, not everything could be captured in a photo. I needed other means of communication to say “stop” when my dad tickled me and I needed to catch my breath, and a way to say “bathroom,” when I needed to pee NOW. And so we developed a basic sign language to convey essential messages. Instead of insisting I join their speaking world, my parents learned these new languages with me.


When it was time to start regular kindergarten at my neighborhood school, I brought my languages with me. Before long, my classmates and I were all using photos and learning to spell with our fingers. But participating in school also required new technologies. I started using simple voice-output devices like the single-switch BIGmack and the Cheap Talk 8, which allowed me to play pre-recorded messages in either my mom’s or my dad’s voice to answer questions during class. Because I had learned to communicate in these ways, I was taught to read and write, first with laminated sight words and later with a seventeen-dollar label maker from Staples.


By the time I entered middle school in 2003, written English had become my dominant mode of communication, and I began to develop a public voice. As my language got more sophisticated, so did my devices. The Gemini—a large touchscreen laptop that weighed a quarter of what I did—allowed me to compose countless expressions with any degree of sophistication. In ninth grade, I got the Dynavox, a smaller but similarly heavy equivalent to the Gemini, with a clearer mechanical voice. It had a hard drive prepopulated with thousands of phrases, but they didn’t sound like me. With one finger, I laboriously programmed in as many of my own phrases as I could.


For more private conversations, I far preferred the silence of written words. I brought my labeler with me everywhere, using it to converse with friends and process trauma with my therapist. It wasn’t until the tenth grade, when I got my first laptop with text-to-speech software, that I had one lightweight device that allowed me to communicate silently or speak with a digital or recorded voice.


These are only some of the different technologies and modes of communication that I have used over the past two decades to gain entry to—and be heard in—speech-based society. Speech-generating computers and augmentative and alternative communication (AAC) devices, like the Gemini and Dynavox, have allowed me to contribute to discussions about my people, as well as the world around us. Because they are easy for hearing-based communities to comprehend and sophisticated enough for us to convey complicated ideas in an apparently timely and efficient manner, communication technologies have given me and other alternatively communicating people a voice to be heard by large groups of people over space and time. But those technologies have also worked to define—and confine—us through their economics, their software, and the ways in which they reinforce ableist culture and notions of how communication ought to be structured.


AAC devices have been around for over seventy years, yet most nonspeaking people in the US still experience widespread segregation in school and throughout their lives. I am one of only two alternatively communicating autistics to be fully mainstreamed from kindergarten through college graduation. One of the problems is accessibility. In our society, the Gemini and the Dynavox cost $12,000 and $9,000, respectively. Even at $500, an iPad with text-to-speech software is still unattainable for many disabled adults on social security, who receive $770 per month to cover all of their living expenses.


Even the relatively small number of us who can access these technologies are too often left to rely on prerecorded, preordained messages—to speak only the devices’ language. In our speech-centric, hearing-privileged society, speakers are unquestioningly assumed to be able-bodied, self-reliant individuals whose vocal cords effortlessly produce spoken words and whose ears naturally decode spoken language. AAC devices have been designed to mimic this narrow “ideal.” According to research, 90 percent of what a person says in a given day is made up of repetitive, automatic phrases. AAC devices are populated only with these generic messages.


This ableist design assumes speakers know best what others should say and limits the kinds of relationships nontraditional communicators can form. The technology also renders invisible how much effort and time it takes to communicate this way, and it requires nothing of the speaking, hearing world. When we choose not to use AAC devices—with their stiff, generic, confining, and inauthentic prerecorded messages—society usually stops offering us other ways to connect and instead declares us “uneducable,” “untrainable,” “asocial,” “unempathetic” and “willingly walled off from the world.”


I have come to think of ableism as the cultivated garden of a speech-based society. Many assistive technologies assume the disabled are outsiders, striving to inhabit that cultivated garden. These technologies don’t change the world we live in; they just allow a few of us to climb up and over the garden wall, helping us pass or pose as independent, able-bodied speakers. Once in the garden, we are seen as validating the status quo, further fortifying the very walls that many of us hope to dismantle with other technologies, other modes of communicating, other ways of being.


Hearing in Red


We do not have a ready word for the kind of flexible communication that I practice. Instead of being called “multimodal communicators,” as I choose to call us, people like me are labelled “nonspeaking.” People use the verbal/nonverbal binary to render nonspeakers unheard and therefore invisible. In a speech-centric, hearing-privileged world, we are always seen as disabled, lacking. “Success stories,” maybe; “inspirations,” perhaps—but always on others’ terms. What would it mean to build technologies that create opportunities for more multimodal communication and the dense interpersonal connections such communication offers?


For the past five years, I have been working through a combination of art and activism to imagine what these other modes of communication might be. My starting points are the modes of communication that I used as a child. An interdependent flourishing formed around the technologies my classmates and I used in my early school years. These technologies were more multisensory, more communal, and in a sense more democratic; in pictures, sign language, and tangible sight words, my parents, teachers, friends, and I were all learners, all teachers. Using these alternative, communal languages, others in our classroom considered “at risk for school failure” found their own pathways to literacy: some learned to spell with their fingers; others, learning English as a second language, used photographs as helpful translators; and visual learners found that pictures grounded in meaning what fleeting, spoken words could not.


A decade and a half later, I had another insight into multimodal communication. In 2016, I was well into my undergraduate thesis in Anthropology before I realized I was writing an autoethnographic study that completely left out the contributions of people who prefer nonalphabetic languages and that remained largely inaccessible to the majority of nontraditional communicators, who are never taught to read. A high achiever in mainstream education from kindergarten through college, I had come to communicate almost entirely in written English. What, I wondered, might I have lost in the process? I began engaging with the visual artwork of various autistics, eventually compelled by the drawings, paintings, and sculptures of seven artists to write a poetic series. By the last poem, modes of communication had begun to blur, as I proudly tell five-year-old impressionist artist Iris Grace that “I’m no longer visual exactly; nor am I verbal. When I type, my fingers speak / with an accent.”


Around the same time I was finishing my thesis, production was also wrapping up on Deej: Inclusion Shouldn’t be a Lottery, a documentary about my life, which I co-produced and narrated. In the first four minutes of the film, I use a myriad of technologies—my laptop and Dynavox, trees, walls, backpack, other people’s bodies and voices, film, literacy, and a combination of spoken poetry and animation—to maneuver my way past communication and physical barriers at my high school. That opening sequence is something of a model for what I imagine communication might be like in a world that doesn’t privilege speech over other, more interdependent, modes of communication.


The strongest examples of this multimodal way of communicating are the parts of the film in which my poetry and the oil-paint animations of the British artist Em Cooper converse with each other. In Cooper’s constantly flowing work, no image is static. Figures emerge briefly and then merge into the background, before re-forming into other figures; everything blurs into everything else. In a dynamic that seemed to reproduce the differences between speech-dominant cultures and more multimodal ways of connecting, other animators who were approached to work on the film took my words too literally, pairing the lines “The ear that hears the cardinal hears in red” with a cartoon cardinal and “The eye that spots the salmon sees in wet” with an animated salmon. Cooper’s brushstrokes, by contrast, were full of color, motion, and texture, occasionally offering a fleeting trace of vines, volcanoes, waves, or flags—metaphors I had used elsewhere in my writing to describe myself or challenge the world we lived in.


Cooper and I never thought we were taking everyone in the audience to the same destination; instead, we offered people multiple pathways into a world in which everything is interwoven, where motion, rhythm, pattern, color, sound, and texture freely interact, offering endlessly unfolding possibilities. I recognized, however, that this was a rarefied means of communicating, not something that could always be open to me, let alone to anyone else. Four years later, at the outset of the pandemic, I began to ask myself how technology might allow us to create new communities in which diverse bodies, voices, and languages might come together, as they had in my collaboration with Cooper, and thrive, much like we all had in kindergarten.


Cut off and segregated in my own home, I turned again to poetry and technology to create some alternative pathways: co-teaching multigenerational, global, and intersectional poetry writing courses for beginning poets, and collaborating with three fellow poets based on the artwork of the artist Malcolm Corley, who is also autistic. In both the courses and the collaboration, speakers and alternative communicators came together to make work that challenged the supremacy of speech-based culture. Traces of our entanglements live on in a chapbook, Studies in Brotherly Love. In the introduction, poet Claretta Holsey describes our modes of communication this way: “We crafted poems that speak to us and to our causes: awareness of performative utterances, as communication can and does happen outside of written text, outside of simple speech; embrace of Black vernacular, its rhythm and blues; recognition of the Black family as a model of resilience; respect for nature, which awes and overwhelms; respect for the body, made vulnerable by want.”


Imagining that technology alone can liberate us is a bit shortsighted and, in some ways, disabling. But if we imagine that the cultivated garden of a speech-based society is the only way of being, then the communication technologies we build will continue to keep us stuck in an inclusion/exclusion binary, in which some beings are seen as disposable and others are not.


-----


This article was originally published in Logic Magazine.


DJ Savarese is a public speaker, writer, and activist who works to make literacy-based education, communication, and inclusive lives a reality for all nontraditionally speaking people through artful advocacy, teaching, and community organizing. A 2017–19 OSF Human Rights Initiative Youth Fellow alum, he is also the co-producer of the Peabody Award-winning, Emmy-nominated documentary Deej: Inclusion Shouldn’t be a Lottery, which unearths discrepancies between insider and outsider perspectives on his lived experience as an alternatively communicating autistic person.


Open Minds Silicon Valley provides platforms to elevate the voices of diverse students, professionals, and families. We encourage writing submissions to be emailed to eric@openmindschool.org. We look forward to being in touch about possible feature options.
