Products

Reports and Policy Filings

  • Vogler, C. (2019). Comments on Telecommunications for the Deaf and Hard of Hearing, Inc. et al. Petition for Declaratory Ruling and/or Rulemaking on Live Closed Captioning Quality Metrics and the Use of Automatic Speech Recognition Technologies. Filed with the FCC, CG Docket 05-231, RM-11848, October 15, 2019. https://www.fcc.gov/ecfs/filing/101630433070
    See also our news post on this matter.

Publications

  • Abraham Glasser, Joseline Garcia, Chang Hwang, Christian Vogler and Raja Kushalnagar. 2021. Effect of Caption Width on the TV User Experience by Deaf and Hard of Hearing Viewers. In Proceedings of the 2021 Web for All Conference (W4A ’21). ACM, New York, NY, USA.
  • Gabriella Wojtanowski, Colleen Gilmore, Barbra Seravalli, Kristen Fargas, Christian P. Vogler and Raja S. Kushalnagar. 2020. Alexa, Can You See Me? Making Individual Personal Assistants for the Home Accessible to Deaf Consumers. Journal on Technology and Persons with Disabilities, Oct 2020, 9(10), 128–146. http://hdl.handle.net/10211.3/210399
  • Athena Willis, Elizabeth Codick, Patrick Boudreault, Christian P. Vogler and Raja S. Kushalnagar. 2019. Multimodal Visual Languages User Interface for Deaf Readers. Journal on Technology and Persons with Disabilities, Oct 2019, 7(16), 172–182. http://hdl.handle.net/10211.3/210399
  • Jason Rodolitz, Evan Gambill, Brittany Willis, Christian P. Vogler and Raja S. Kushalnagar. 2019. Accessibility of Voice-Activated Agents for People who are Deaf or Hard of Hearing. Journal on Technology and Persons with Disabilities, Oct 2019, 7(16), 144–156. http://hdl.handle.net/10211.3/210397
  • Larwan Berke, Khaled Albusays, Matthew Seita and Matt Huenerfauth. (2019). Preferred appearance of captions generated by automatic speech recognition for deaf and hard-of-hearing viewers. In Proceedings of the 2019 ACM Conference on Human Factors in Computing Systems (CHI’19 Extended Abstracts). ACM, New York, NY, USA, Paper LBW1713, 6 pages. DOI: https://doi.org/10.1145/3290607.3312921
  • Sushant Kafle, Cecilia Alm, and Matt Huenerfauth. (2019). Modeling acoustic-prosodic cues for word importance prediction in spoken dialogues. In Proceedings of the 8th Workshop on Speech and Language Processing for Assistive Technologies (SLPAT’19). Collocated with the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL’19). Minneapolis, Minnesota, United States, June 7, 2019.
  • Larwan Berke, Matt Huenerfauth, and Kasmira Patel. (2019). Design and psychometric evaluation of American Sign Language translations of usability questionnaires. ACM Transactions on Accessible Computing 12(2), Article 6, 43 pages. https://doi.org/10.1145/3314205
  • Sushant Kafle and Matt Huenerfauth. (2019). Predicting the understandability of imperfect English captions for people who are deaf or hard of hearing. ACM Transactions on Accessible Computing 12(2), Article 7, 32 pages. https://doi.org/10.1145/3325862

Presentations

Software

  • Web-based player for generating caption stimuli: https://tap.gallaudet.edu/drrp/norman/