Autism Digest

AI Screening Tool Seeks to Automate Documentation for Busy SLPs

With National Science Foundation funding, two SLPs are building AI language sampling into an autism screener that aims to maximize clinicians’ time for direct services.

As speech-language pathologists, we often find ourselves in a juggling act, balancing the responsibility of providing top-notch care with the pressure of administrative tasks. Late nights spent catching up on paperwork, weekends consumed by IEP management—it’s a familiar story for many of us. And while we pour our hearts into this work, documentation time often isn’t reimbursed, adding an extra layer of frustration to an already challenging situation.


What if we could use intuitive technology to streamline these tasks, freeing up our time and energy to focus on what truly matters: our clients? Intuitive technology refers to AI-powered tools that understand and learn from user interactions, adapting to provide a more personalized and efficient experience. In speech-language pathology, that means automating routine tasks and, in turn, gaining more time to engage with clients.


The technology’s promise drives my work with Lois Jean Brady, who co-founded iTherapy, LLC, with me. As SLPs who’ve worked in schools, private practice, and hospitals, among other settings, we know the struggle to find balance in our professional and personal lives. The latest project in our search for AI-powered solutions is the Evaluative Artificial Speech Intelligence-Autism Screener (EASI-AS, www.easi-as.com), funded by a Phase II National Science Foundation (NSF) Small Business Innovation Research (SBIR) grant.


Built-in documentation

EASI-AS draws its capabilities from large language models: applications trained on massive text datasets that enable them to understand requests and generate text in response. We’re using that technology to create a conversational interface that can populate reports and progress notes based on our real-time interactions with clients. The interface’s voice-to-text AI will transcribe SLPs’ dictation automatically and fill in the needed report fields; alternatively, users can type in information directly. The idea is that documentation feels like a natural extension of the evaluation process, not the time-consuming, onerous task it is now.
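To make the "fill in the needed report fields" idea concrete, here is a minimal, hypothetical sketch: it maps tagged dictation segments onto a report template. The field names and the simple "field: value" matching rule are illustrative assumptions for this article, not the EASI-AS implementation, which layers a large language model on top of this kind of structured output.

```python
# Illustrative sketch only: populate report fields from dictated segments.
# Field names are hypothetical, not the actual EASI-AS report schema.
REPORT_TEMPLATE = {
    "client_name": None,
    "presenting_concern": None,
    "observations": None,
}

def populate_report(dictation_segments, template=REPORT_TEMPLATE):
    """Fill report fields from segments like 'observations: engaged during play'."""
    report = dict(template)
    for segment in dictation_segments:
        field, _, value = segment.partition(":")
        key = field.strip().lower().replace(" ", "_")  # normalize to a field name
        if key in report:
            report[key] = value.strip()
    return report

# Example: transcribed dictation from an evaluation session.
notes = [
    "client name: Liam",
    "presenting concern: limited spontaneous speech",
    "observations: engaged when topics matched his interests",
]
report = populate_report(notes)
```

In practice, a language model would handle free-form dictation without requiring the clinician to speak in "field: value" pairs; the sketch only shows the destination structure.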


By easing the documentation burden on clinicians, an AI tool like this has the potential to help them manage a growing volume of evaluation referrals. This, in turn, could reduce families’ often lengthy wait times for evaluations, which can delay access to critical early intervention services.


The AI technology in EASI-AS will screen for autism by drawing on an extensive database of recordings of children’s speech. The tool will analyze linguistic and non-linguistic features, such as intonation patterns, stress, rhythm, pitch, and voice duration, that a meta-analysis has shown reliably distinguish the speech of people with autism, especially in adolescence and adulthood (see sources).
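As a simplified illustration of this kind of acoustic analysis, the sketch below estimates pitch and duration from an audio signal using plain autocorrelation, with a synthetic tone standing in for a recorded speech sample. Production pitch trackers are far more robust; treat this as a conceptual sketch of the features involved, not the EASI-AS analysis pipeline.

```python
import numpy as np

def estimate_pitch(signal, sample_rate, fmin=75.0, fmax=400.0):
    """Estimate fundamental frequency (Hz) via autocorrelation.

    Searches for the strongest self-similarity lag within the plausible
    pitch range fmin..fmax. Illustrative only; real speech needs voicing
    detection and a more robust tracker.
    """
    sig = signal - signal.mean()
    # Autocorrelation at non-negative lags 0..n-1.
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lo = int(sample_rate / fmax)   # shortest lag to consider
    hi = int(sample_rate / fmin)   # longest lag to consider
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sample_rate / lag

sample_rate = 8000
t = np.linspace(0, 0.5, 4000, endpoint=False)
tone = np.sin(2 * np.pi * 220.0 * t)  # synthetic stand-in for a vocalization

pitch = estimate_pitch(tone, sample_rate)      # close to 220 Hz
duration = len(tone) / sample_rate             # 0.5 seconds
```

Duration, mean pitch, and pitch variability over many utterances are the kinds of raw measurements that feed the prosodic features (intonation, stress, rhythm) described above.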


A mobile app will enable clinicians and caregivers to collect speech samples via smartphone, with the goal of providing accessible early screening to aid timely evaluation. As part of our NSF research, we are seeking speech samples to populate the database—see the EASI-AS website for details.


Scenario: Screener in action

What would using the system look like in practice? Let’s walk through a hypothetical scenario:


Maria, an SLP, is evaluating Liam, age 4, whose parents are concerned about his communication development. Using EASI-AS, she records a conversation with Liam as they play with some of his favorite toys. However, Liam seems uninterested in the toys and wants to talk about something else.


To engage him and elicit more language, Maria asks him for a topic. Liam excitedly says, “spaceships and dinosaurs,” which Maria enters into the app’s text-to-image AI feature.


An array of spaceships and dinosaurs pops up, spurring animated chatter from Liam. Maria then switches to the app’s augmented reality filters. Suddenly, Liam can see himself on the tablet’s screen with a funny astronaut helmet or dinosaur face. He giggles and makes roaring noises, which EASI-AS records and analyzes in real time.


By tapping into Liam’s interests this way, Maria can collect a natural, representative sample of his communication abilities. EASI-AS analyzes the sample through the lens of the Bloom-Lahey model of language development (focusing on form, content, and use; see sources). It examines mean length of utterance, syntax, vocabulary diversity, and pragmatics; screens for signs of autism; and transcribes the sample into the International Phonetic Alphabet, enabling Maria to quickly assess his articulation patterns.
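Two of the metrics named above, mean length of utterance (MLU) and vocabulary diversity, can be sketched in a few lines on a toy language sample. Clinical MLU is counted in morphemes rather than words, and vocabulary diversity has more robust measures than a raw type-token ratio, so this is a simplified illustration, not the EASI-AS analysis.

```python
import re

def language_sample_metrics(utterances):
    """Compute MLU (in words, a simplification of morphemes) and
    type-token ratio (unique words / total words) for a language sample."""
    all_words = []
    lengths = []
    for utt in utterances:
        words = re.findall(r"[a-z']+", utt.lower())
        lengths.append(len(words))
        all_words.extend(words)
    mlu = sum(lengths) / len(lengths)
    ttr = len(set(all_words)) / len(all_words)
    return mlu, ttr

# Toy sample in the spirit of the scenario above (hypothetical utterances).
sample = [
    "the dinosaur is big",
    "spaceship go up",
    "I like the big dinosaur",
]
mlu, ttr = language_sample_metrics(sample)  # MLU 4.0, TTR 0.75
```

An automated screener computes dozens of such measures across a much longer sample, then interprets them against developmental norms, which is where clinician judgment remains essential.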


In minutes, the app produces an auto-populated report with data-driven evaluation and treatment recommendations, which Maria reviews and shares with Liam’s parents.


Some caveats

Incorporating new AI technology into our services isn’t about replacing the expertise and judgment of SLPs. Rather, our aim is for AI to augment and enhance clinical decision-making. Just as we trust dishwashers to clean our dishes while still requiring us to load and unload them, these tools are designed to assist us, not to take over our roles.


However, as with any discussion of AI, it’s important to note some caveats. As SLPs working in this sphere, we design and implement AI tools thoughtfully, taking care to ensure they are used ethically and do not replace the critical human elements of our work.


We also prioritize client privacy and data security, encouraging users of AI to obtain informed consent for use of these tools in clinical practice—and to be transparent about their limitations in discussions with clients and families.


Aiding accessibility

Our work developing EASI-AS is informed by another app that leverages large language models, InnerVoice, which we previously created with Microsoft. Using prompts from an SLP, parent, or student, InnerVoice creates stories tailored to a student’s reading level and areas of needed skill development. Like EASI-AS, the app provides automated documentation, transcribing a client’s speech into the International Phonetic Alphabet and generating SOAP notes based on their reading or comprehension performance.


EASI-AS takes the concepts of intuitive technology and streamlined documentation even further. We also see potential for conversational interfaces to extend clients’ and families’ education and understanding beyond the therapy room, while making our services more inclusive. AI-powered conversational tools could help break down barriers by providing information in plain language, answering questions, and even offering guided practice of skills. For example, InnerVoice has been trained to provide information about communication development, speech and language interventions, and evidence-based practice.


Similarly, we believe that bringing conversational interfaces into interventions can help clients and their families better understand therapy goals and engage with targeted concepts. A child could, for instance, practice conversational turn-taking with an AI avatar that provides immediate feedback and guidance. With increased interactivity and personalization provided by these tools, clients can take a more active role in their own development—and build their confidence to practice their communication skills without AI.



To learn more about our work and join the conversation on AI possibilities for aiding SLPs’ practice, visit www.itherapyllc.com or www.easi-as.com. Automation, when approached thoughtfully and with a clear purpose, has the potential to help us reclaim the time and energy to do what we do best: engaging with our clients, developing innovative interventions, and nurturing the human connections at the heart of our services.



Matthew Guggemos, MS, CCC-SLP, is co-founder and chief technology officer of iTherapy, LLC, and chief technology officer for EASI-AS. He lives with Tourette syndrome, which drives his work on inclusive technologies. matthew@itherapyllc.com


Sources

Asghari, S. Z., Farashi, S., Bashirian, S., & Jenabi, E. (2021). Distinctive prosodic features of people with autism spectrum disorder: A systematic review and meta-analysis study. Scientific Reports, 11(1), 23093. https://doi.org/10.1038/s41598-021-02487-6


Bloom, L., & Lahey, M. (1978). Language development and language disorders. New York: John Wiley & Sons.


