Reports and Policy Filings
- Vogler, C. (2019). Comments on Telecommunications for the Deaf and Hard of Hearing, Inc. et al. Petition for Declaratory Ruling and/or Rulemaking on Live Closed Captioning Quality Metrics and the Use of Automatic Speech Recognition Technologies. Filed with the FCC, CG Docket 05-231, RM-11848, October 15, 2019. https://www.fcc.gov/ecfs/filing/101630433070
See also our news post on this matter.
- Berke, L., Albusays, K., Seita, M., and Huenerfauth, M. (2019). Preferred appearance of captions generated by automatic speech recognition for deaf and hard-of-hearing viewers. In Proceedings of the 2019 ACM Conference on Human Factors in Computing Systems (CHI’19 Extended Abstracts). ACM, New York, NY, USA, Paper LBW1713, 6 pages. DOI: https://doi.org/10.1145/3290607.3312921
- Kafle, S., Alm, C.O., and Huenerfauth, M. (2019). Modeling acoustic-prosodic cues for word importance prediction in spoken dialogues. In Proceedings of the 8th Workshop on Speech and Language Processing for Assistive Technologies (SLPAT’19). Co-located with the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL’19). Minneapolis, Minnesota, United States, June 7, 2019.
- Berke, L., Huenerfauth, M., and Patel, K. (2019). Design and psychometric evaluation of American Sign Language translations of usability questionnaires. ACM Transactions on Accessible Computing 12(2), Article 6, 43 pages. https://doi.org/10.1145/3314205
- Kafle, S., and Huenerfauth, M. (2019). Predicting the understandability of imperfect English captions for people who are deaf or hard of hearing. ACM Transactions on Accessible Computing 12(2), Article 7, 32 pages. https://doi.org/10.1145/3325862
Web-based player for generating caption stimuli: https://tap.gallaudet.edu/drrp/norman/