Voice assistants, or VAs, offer great potential to help older adults manage their own health. But current commercial options, often designed for younger users and tethered to companion mobile apps, don't fit the specific needs of older adults, who rely far more heavily on voice interaction as dexterity and eyesight decline.

That’s why Johns Hopkins computer scientists have developed a new personal VA for older adults to help them take their health into their own hands. The team presented its work at the 2025 ACM Conference on Human Factors in Computing Systems.

“Older adults want more control in managing their own health,” says Amama Mahmood, Engr ’20 (MSE), ’25 (PhD), the study’s first author and a Malone Postdoctoral Fellow. “Continuous, need-based, and flexible voice support can help them achieve this and age more gracefully—which is everyone’s desire.”

Led by Mahmood, the Hopkins team also included Maia Stiber, Engr ’21 (MSE), ’25 (PhD); current computer science PhD students Shiye “Sally” Cao, Engr ’21, ’22 (MSE), and Victor Nikhil Antony; and John C. Malone Assistant Professor of Computer Science Chien-Ming Huang. The researchers involved older adults in every step of their design process, beginning with interviews of stakeholders living at home and in independent- and assisted-living communities to identify the challenges they face in managing their health.

They discovered two major issues: First, older adults struggle to read and understand the highly detailed after-visit summaries their care teams provide; second, they have trouble following prescribed medical regimens, often missing appointments and forgetting to take medications on time, a problem that is especially acute for those managing multiple chronic conditions.

“We also learned about what older adults find desirable in a voice assistant,” Mahmood says. “They want their interactions to be intuitive and consistent across multiple interactions, adaptive to their routines and needs, flexible in both conversation flow and functionality, and respectful of their autonomy.”

Based on these findings, the team designed a prototype VA by integrating a large language model, or LLM, into Amazon’s smart assistant, Alexa. From nothing more than a cell phone photo, the new system reads a patient’s after-visit summary, translates the medical information into more accessible terms, answers any questions the user may have, and creates medication reminders tailored to the user’s routine, drug interactions, and personal preferences.
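The team's own implementation runs through Alexa and is not reproduced here, but the described pipeline (photo, then text, then LLM, then reminders) can be sketched. The Python below is a minimal illustration under stated assumptions: pytesseract handles the OCR, OpenAI's chat-completions API stands in for the LLM, and the helper names (process_summary_photo, schedule_reminders) and model choice are invented for this example; the actual system negotiates reminder times conversationally rather than computing them.

```python
# Minimal sketch of the described pipeline, NOT the Hopkins team's code.
# Assumed components: pytesseract for OCR, OpenAI's chat API as the LLM.
import json

import pytesseract           # OCR library (assumed choice)
from PIL import Image
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You are a health assistant for older adults. Given this after-visit "
    "summary, (1) restate it in plain, non-technical language, and "
    "(2) list each medication as a JSON object with fields "
    '"name", "dose", "times_per_day", and "notes" (e.g., interactions).\n\n'
    "Summary:\n{text}"
)

def process_summary_photo(photo_path: str) -> str:
    """OCR a photo of an after-visit summary, then ask the LLM to simplify it."""
    text = pytesseract.image_to_string(Image.open(photo_path))
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the article does not name one
        messages=[{"role": "user", "content": PROMPT.format(text=text)}],
    )
    return response.choices[0].message.content

def schedule_reminders(medications: list[dict], wake_hour: int = 7) -> list[dict]:
    """Toy scheduler: space each medication's doses evenly across waking hours.
    (The real system tailors times to the user's routine through conversation.)"""
    reminders = []
    for med in medications:
        n = max(1, med.get("times_per_day", 1))
        step = max(1, 14 // n)  # spread doses over a ~14-hour waking day
        for i in range(n):
            reminders.append({"medication": med["name"],
                              "time": f"{wake_hour + i * step:02d}:00"})
    return reminders
```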

The researchers then refined their prototype based on feedback from several user workshops and validated its usability with in-home study sessions. Through conversation alone, the system successfully guided older adults in setting up medication reminders that aligned with their doctors’ instructions, the team reports.

Next, the team will conduct a large-scale study to see how well the system works over time and with diverse populations, such as older adults with mild cognitive impairment.

“However, several challenges must be addressed before deploying an LLM-powered voice assistant in the homes of older adults—a vulnerable population—without supervision,” Mahmood notes. “These include concerns around hallucinations, bias, and reproducibility issues associated with LLMs.”

The researchers also plan to explore additional uses of this VA technology, including integrating the system with smart pill boxes, using it to encourage better sleep habits, and investigating its potential as a tracker for users’ physical activity and mental well-being.

“The kind of support afforded by our system can help older adults maintain their autonomy and is especially valuable for those individuals without strong support systems,” Mahmood says. “Our work offers design implications that enhance the accessibility and effectiveness of VAs to support older adults in managing their health at home.”

This research was supported by the National Science Foundation.