Jinjun Xiong, right, speaks with a student at the National AI Institute for Exceptional Education. Photo: Douglas Levere
Release Date: April 4, 2025
BUFFALO, N.Y. – A 5-year-old wearing headphones is instructed by a prerecorded voice to repeat what they hear word for word. Such a test, called sentence recall, measures auditory processing, memory and language skills.
It is one of many tools employed by speech-language pathologists (SLPs), schools and other organizations to screen children at risk for speech and language disorders.
Yet a nationwide shortage of SLPs makes these services difficult and expensive to access: sentence recall tests, which require time and expertise to administer, can easily cost families hundreds of dollars.
That could soon change due to advancements made at the National AI Institute for Exceptional Education, a University at Buffalo-led research organization creating artificial intelligence systems to ensure that children with speech and language disorders receive timely, effective assistance. The institute is developing AI-powered systems that could be made available to anyone with an internet connection.
The institute, which comprises dozens of researchers from nine universities who specialize in machine learning, natural language processing, education, social robotics and other fields, is supported by the National Science Foundation and the Institute of Education Sciences.
“This is only the beginning,” says the institute’s director and principal investigator Venu Govindaraju, PhD, a UB computer scientist who more than two decades ago co-led the development of a handwriting-recognition system that has saved the U.S. Postal Service billions of dollars by automating the sorting of mail.
The groundbreaking work that transformed the global postal industry is now being leveraged to aid children with dyslexia and dysgraphia.
“Inconsistent letter formation, crowded text, reversed letters, spelling mistakes and irregular use of case that are often present in child handwriting have challenged adoption of language models to address neurodevelopmental disabilities including dyslexia and dysgraphia,” says Govindaraju, who also serves as UB’s vice president for research and economic development. “There is so much potential for AI to increase and enhance how we diagnose and treat children with speech and language challenges.”
Through the development of a language model built specifically for child handwriting recognition – called Extended-TrOCR – SLPs, occupational therapists and educators will have a tool to make screening more efficient and enable timely intervention for dyslexia and dysgraphia.
Early prototypes are 90% effective, and getting better
Jinjun Xiong, PhD, a SUNY Empire Innovation Professor in the UB Department of Computer Science and Engineering, arrived in Buffalo in 2021 after working at IBM’s Thomas J. Watson Research Center for more than a decade.
As the institute’s scientific director, he oversees its research agenda and technology development.
That includes the development of the AI Screener, which screens young children who may need to receive formal diagnostics and assessment for their speech and language needs, and the AI Orchestrator, which acts as a virtual teaching assistant to SLPs by providing students with ability-based interventions.
The AI-powered sentence recall test the institute is developing is part of the AI Screener. It is based upon the work of University of Utah researcher Sean Redmond, who developed the Redmond Sentence Recall (RSR) through years of evidence-based research.
Xiong collaborated with Redmond to anonymize roughly 1,000 audio recordings of the RSR test and make them publicly available for researchers to use. Xiong and his team then created the AI-powered, automated screening solution, called AutoRSR, which is roughly 90% as accurate as humans administering the test, Xiong says.
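The institute has not published AutoRSR's internals; as a rough illustration of how an automated sentence-recall screener might score a response, the sketch below compares a speech-to-text transcript of the child's repetition against the target sentence at the word level. The function name and the similarity measure are illustrative assumptions, not the institute's method:

```python
from difflib import SequenceMatcher

def recall_score(target: str, response: str) -> float:
    """Word-level similarity between the target sentence and the
    transcribed repetition, as a fraction between 0.0 and 1.0.
    (Illustrative only; AutoRSR's actual scoring is not public.)"""
    target_words = target.lower().split()
    response_words = response.lower().split()
    # SequenceMatcher aligns the two word sequences and returns
    # 2 * matches / total_words, so a verbatim repetition scores 1.0.
    return SequenceMatcher(None, target_words, response_words).ratio()

# A verbatim repetition scores perfectly; dropping a word lowers the score.
print(recall_score("the dog ran home", "the dog ran home"))  # 1.0
print(recall_score("the dog ran home", "dog ran home"))
```

In practice such a score would sit downstream of a speech-recognition model transcribing the child's audio, with clinically validated thresholds deciding whether to flag a child for formal assessment.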
“We expect to further improve the model’s accuracy to match or even exceed the capability of speech language pathologists who administer the test,” says Xiong, who also directs the UB Institute for Artificial Intelligence and Data Science.
Xiong explains that because humans are unique, it is difficult to train SLPs uniformly, and they can make mistakes when fatigued or overworked. “On the contrary, the AI solution, once developed, will perform consistently across different settings, and is easy to be debugged if it fails,” he says.
Different languages, deploying tech to schools
Researchers are working to automate other screening tests, including tests for children who speak different languages, so the institute’s solutions can reach more people.
For the AI Orchestrator, researchers are developing a handful of tools, including AI-generated flashcards with pictures and words that SLPs utilize to help children with speech and sound disorders.
They’re also advancing technology that would record intervention sessions in which SLPs treat and interact with their clients. The system would interpret what took place in the session and then automatically generate the SOAP (subjective, objective, assessment and plan) report that health care professionals typically type themselves.
“It takes a great deal of time for SLPs to keep track of what’s happening in an intervention session, especially when working with a group of children. They take notes, organize their thoughts and finally write the SOAP report,” Xiong says. “We’re working to automate this task so SLPs have more time to focus on their interactions with children.”
Eventually, the institute plans to field-test the AI Screener and AI Orchestrator in roughly 80 classrooms, reaching 480 students — most of them in Western New York — before it scales out the solutions nationally to benefit more students and SLPs.
Meanwhile, some of the institute’s advanced AI models are scheduled to run at Empire AI, the $400 million statewide research consortium. The center, which Gov. Kathy Hochul and state lawmakers announced last year, aims to harness AI for the betterment of society and drive innovation in New York State. Its supercomputing center is located at UB.
“Our overall goal is to leverage our academic and technological resources to create AI systems that address societal problems – in this case, the shortage of speech-language pathologists – and ensure that millions of children receive the assistance they need and do not fall behind in their academic and socio-emotional development,” says Govindaraju.
Cory Nealon
Director of Media Relations
Engineering, Computer Science
Tel: 716-645-4614
cmnealon@buffalo.edu