About us

Gallaudet University

Christian Vogler is the principal investigator. He is a professor in the Communication Studies Program within the Department of Arts, Communication and Theatre, and is also the director of the Technology Access Program at Gallaudet University. He is deaf, a hearing aid user, and uses both spoken English and American Sign Language in his everyday communication. He earned his Ph.D. in Computer and Information Sciences at the University of Pennsylvania. He has co-directed the RERC on Telecommunications Access and currently directs the RERC on Technology for the Deaf and Hard of Hearing. He has actively worked on access matters for the deaf and hard of hearing in the FCC Disability Advisory Committee, the Emergency Access Advisory Committee, and the Communications Security, Reliability, and Interoperability Council. Dr. Vogler leads the overall DRRP and the engagement with stakeholders.

Raja Kushalnagar, who is deaf, is the co-lead investigator and scientific lead. He is a professor and the director of the Information Technology Program within the Department of Science, Technology and Mathematics at Gallaudet University. He earned his Ph.D. in Computer Science at the University of Houston and his J.D. at Texas Southern University. He has conducted numerous qualitative and quantitative accessibility studies, especially related to captioning. Dr. Kushalnagar also has extensive experience managing undergraduate research through directing a Research Experiences for Undergraduates Site program on Accessible Multimodal Interfaces. He has served on the FCC’s Consumer Advisory Committee on matters related to disability access. Three of his students have gone on to Ph.D. programs and received the prestigious NSF Graduate Research Fellowship.

Norman Williams is a senior research engineer in Gallaudet’s Technology Access Program, where he worked from 1990 to 2001 and has worked again from 2005 to the present. He holds a B.S. in Computer Science from Gallaudet University. Mr. Williams is deaf. For three years, he was a developer for Communication Services for the Deaf, Inc., designing and testing text and video relay service technologies. He was part of the RERC on Telecommunications Access, where he developed an interface for integrating messaging with real-time text, which was adopted by AOL in AIM 6.8 and for which he was awarded a patent in 2013. He has also contributed extensively to software for smart home alerting technologies for the deaf and hard of hearing, and is currently leading the technical setup of the collaboration with MITRE. Mr. Williams also pioneered the integration of computer functionality with TTY text telephony by developing a full-featured software program, Futura-TTY. He oversees the technical customizations of the captioning software and of other software and systems that need to be adapted for the purposes of the project.

Rochester Institute of Technology

Matt Huenerfauth is the project lead at the Rochester Institute of Technology. He is a professor in the College of Computing and Information Sciences and the director of the Center for Accessibility and Inclusion Research at RIT. His research group operates bilingually in English and ASL and includes over 30 members, a third of whom are Deaf or Hard of Hearing. He has secured nearly $5 million in external research funding, including a National Science Foundation CAREER Award in 2008. He has authored over 65 peer-reviewed scientific journal articles, book chapters, and conference papers. He is a Distinguished Member of the Association for Computing Machinery (ACM), vice-chair of the ACM SIGACCESS special interest group on accessible computing, editor-in-chief of the ACM Transactions on Accessible Computing (TACCESS), and a four-time winner of the Best Paper Award at the ACM SIGACCESS Conference on Computers and Accessibility (more than any other individual in the history of the conference). In 2018, RIT awarded him the Trustees Scholarship Award, the university’s highest honor for a faculty member in recognition of research achievements. He earned his Ph.D. in Computer and Information Sciences at the University of Pennsylvania in 2006.

Larwan Berke is a Ph.D. student (since 2015) in Computing and Information Sciences at the Rochester Institute of Technology, and he is a researcher at the Center for Accessibility and Inclusion Research. His research is in the areas of human-computer interaction and computing accessibility. He is a recipient of a National Science Foundation Graduate Research Fellowship and a Google Lime Scholarship. He received a Best Paper Award in 2019 from the ACM SIGACCESS Conference on Computers and Accessibility and a Best Paper Honorable Mention Award in 2018 from the ACM CHI Conference on Human Factors in Computing Systems. His dissertation research investigates the accessibility and usability of automatically generated captions.

Abraham Glasser is a Ph.D. student (since 2019) in Computing and Information Sciences at the Rochester Institute of Technology, and he is a researcher at the Center for Accessibility and Inclusion Research. He previously completed internships at Microsoft and at the NASA Kennedy Space Center, and he has conducted research at the NTID Center on Access Technology and through several NSF Research Experiences for Undergraduates programs. He received an Honorable Mention in the National Science Foundation Graduate Research Fellowship program, and he was the first-place winner in the Student Research Competition at the ACM CHI Conference on Human Factors in Computing Systems. His research investigates technologies for people who are Deaf or Hard of Hearing, including captioning and automatic speech recognition.

AppTek

Jintao Jiang is Chief Scientist at AppTek and has an extensive background in automatic speech recognition. Dr. Jiang leads AppTek’s side of the scientific effort, helping to design and advise on the UI and UX studies of deaf and hard of hearing viewers watching closed captioned videos. He is responsible for generating captions for the videos using AppTek’s ASR technology and for providing any updates to the technology and captions.

Steve Cook is a senior software engineer at AppTek. Mr. Cook leads AppTek’s technical effort, helping with and advising on the UI and UX design. He also provides consultation for the studies of deaf and hard of hearing viewers watching closed captioned videos, as well as logistical support for updates to the technology and captions.

Serge Proskuryakov is a software engineer at AppTek. He supports the technical effort behind the design of the UI and UX studies, and consults on the studies of deaf and hard of hearing viewers watching closed captioned videos.

Katie Nguyen is a data manager at AppTek. She manages AppTek’s efforts for this project and works with the Gallaudet PIs, consumer advocates, and NAB representatives to examine and develop criteria for selecting video materials and to choose the final list of captioned videos and stimuli for evaluation. Katie has led AppTek’s rapidly evolving Data and Operations team for over six years. Previously, she worked at SAIC, managing linguistic resource projects.

Dan Direnfeld is a Program Manager at AppTek. He supports AppTek’s efforts for this project and works with the Gallaudet PIs, consumer advocates, and NAB representatives to examine and develop criteria for selecting video materials and to choose the final list of captioned videos and stimuli for evaluation.