Advancing Patient-Centered Radiology Reporting with LLMs

A recent publication from our lab marks an important contribution to patient-centered radiology AI. In "Self-reported comprehension of large language model-generated summaries of lung cancer screening reports," Juan A. Serna, Yannan Yu, Koharu Sakiyama, Jae Ho Sohn, and collaborators showed that patient-friendly summaries generated by large language models significantly improved understanding of lung cancer screening CT reports and reduced anxiety in a national cohort of 1,815 participants, with the largest benefits among individuals with lower English and health literacy. These findings provide early quantitative evidence that generative AI tools can meaningfully improve how patients interpret complex imaging results and may help reduce disparities in access to understandable medical information.

The study, published in Radiology Advances, reflects a major focus of the lab: developing clinically validated large language model tools that improve communication between radiologists and patients and help shape the future of patient-facing AI in medical imaging.

Find it at: https://doi.org/10.1093/radadv/umag008

Figure: Benefit of LLM-generated summaries for patient comprehension and anxiety.

Juan A. Serna, Yannan Yu, Parris Diaz, Koharu Sakiyama, Meng Ye, Alison Rustagi, Jae Ho Sohn. Self-reported comprehension of large language model-generated summaries of lung cancer screening reports: a vignette survey. Radiology Advances, Volume 3, Issue 2, March 2026, umag008. https://doi.org/10.1093/radadv/umag008