Glass Ear (2014)
Persons with hearing loss use visual signals such as gestures and lip movements to interpret speech. While hearing aids and cochlear implants can improve sound recognition, they generally do not help the wearer localize sound, which is necessary to leverage these visual cues. In this project, we designed and evaluated visualizations for spatially locating sound on a head-mounted display (HMD). To investigate this design space, we developed eight high-level design dimensions. For each dimension, we created 3–12 example visualizations and evaluated them as a design probe with 24 deaf and hard-of-hearing participants (Study 1). We then implemented a real-time proof-of-concept HMD prototype and solicited feedback from four new participants (Study 2). Study 1 findings reaffirm past work on the challenges persons with hearing loss face in group conversations, support the general idea of sound-awareness visualizations on HMDs, and reveal preferences for specific design options. Although preliminary, Study 2 further contextualizes the design probe and uncovers directions for future work.
Dhruv Jain, Prof. Leah Findlater & Prof. Jon Froehlich
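The abstract does not spell out how the prototype estimates where a sound comes from. As a minimal, hypothetical sketch of one common approach (not necessarily the method used in this project), the Python snippet below estimates a sound's azimuth from two microphone channels using GCC-PHAT time-difference-of-arrival. The function names (`gcc_phat`, `doa_degrees`), the 14 cm microphone spacing, and the simulated signals are all illustrative assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the time delay (s) of `sig` relative to `ref` via GCC-PHAT.

    A negative result means `sig` arrived earlier than `ref`.
    """
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    # Phase transform: whiten the cross-spectrum so only phase (delay) remains.
    R = SIG * np.conj(REF)
    cc = np.fft.irfft(R / (np.abs(R) + 1e-15), n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    # Re-center the circular correlation around lag 0 and pick the peak.
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

def doa_degrees(tdoa, mic_distance):
    """Convert a time difference of arrival to an azimuth angle in degrees."""
    # Clamp to the physically valid range before taking arcsin.
    ratio = np.clip(tdoa * SPEED_OF_SOUND / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(ratio))

if __name__ == "__main__":
    fs = 16000              # sample rate (Hz)
    mic_distance = 0.14     # assumed 14 cm spacing across an HMD frame
    rng = np.random.default_rng(0)
    src = rng.standard_normal(fs)  # 1 s of noise as a stand-in sound source
    left = src
    # Delay the right channel by 5 samples: the sound reaches the left mic first.
    right = np.concatenate((np.zeros(5), src[:-5]))
    tdoa = gcc_phat(left, right, fs, max_tau=mic_distance / SPEED_OF_SOUND)
    # A negative azimuth here means the source is toward the left microphone.
    print(f"Estimated azimuth: {doa_degrees(tdoa, mic_distance):.1f} degrees")
```

In a system like the one described above, a visualization layer would then map this angle to an on-screen indicator on the HMD (e.g., an arrow or a highlighted segment of a ring) in real time.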