

Using AI Models to Assess Communication in Healthcare: a literature review

EALTHY

January 2025 | AI | Clinical practice | Research summary

Clay, T. J., Da Custodia Steel, Z. J., Jacobs, C. (November 15, 2024). Human-Computer Interaction: A Literature Review of Artificial Intelligence and Communication in Healthcare. Cureus 16(11): e73763. doi: 10.7759/cureus.73763

What this research was about and why it is important

The literature review provides an overview of AI's role in patient-doctor interactions, focusing on overcoming language barriers, summarising complex medical data, and offering empathetic responses. It challenges the assumption that AI is an inaccurate, irrelevant, misleading and unempathetic source of medical information for both healthcare professionals and laypeople.

The review discusses the potential of AI models as tools capable of rapidly summarizing vast amounts of data, providing patients with information on their medical conditions and supportive feedback, and helping healthcare professionals to enhance their clinical communication skills.

The review advocates for further research and policy development to mitigate risks and enhance AI's integration into clinical practice.

What the researchers did

  • They analysed the methodologies of existing studies into AI models, aiming to answer the following questions:
  1. What are the disciplines in healthcare (within the scope of the review) in which AI technology has been studied?
  2. What is the quality of the research in AI, including a formal assessment using a valid instrument?
  3. Do AI models provide empathetic communication?
  • They searched for papers published from 2018 onwards in the Medline PubMed database, using the following combination of keywords: “artificial intelligence”, and “health communication” or “communication”.
  • After several assessment procedures, they selected 10 papers from 381 screened. Those ten were scored according to the Standard Quality Assessment Criteria instrument (SQACI).

What the researchers found

  • The AI models used in the 10 selected studies include ChatGPT-2; ChatGPT; ChatGPT-4 and Gemini 1.0; ChatGPT-3.5 and AI-Guidebot; Med-PaLM and Flan-PaLM; AMIE; HAILEY; and an unspecified ChatGPT model.
  • This literature review confirmed the promising potential of AI models for studying communication situations in multiple specialties, such as oncology, paediatrics, radiology, primary care, and mental health.
  • AI models can be used to assess communication performance when sharing information in question-answer exchanges or initiating-responding conversations. They can also serve as a tool to improve the communication skills of healthcare professionals, e.g. to show empathy, in both oral and written communication. For instance, one of the studies revealed that AI provided more supportive, empathetic responses to patients living with mental health conditions compared to human-only feedback.
  • The analysis showed that AI-generated content was more understandable, comprehensible and accurate, and provided more satisfactory, age-appropriate answers to patients' questions than content produced by humans.
  • When used to assess doctor-patient conversations as part of OSCEs (objective structured clinical examinations), the AI models also rated better than primary care physicians in terms of clarity, structure, comprehensiveness, addressing and understanding patient concerns, showing empathy, building rapport, responding to emotions, and using appropriate language.
  • In clinical practice, AI summarized the results of MRI scans in better patient-centred language than humans. It also rated better than humans in terms of the quality of information provided and the empathetic manner in which it responded to patients' questions on a public social media forum.

Things to consider

  • The selected papers used different AI models, more specifically different generations of ChatGPT, to study various aspects of communication. In addition, some researchers compared AI-generated output with human performance, while others compared two AI models against each other.
  • The current challenges of using AI to search for medical information still pose the risk of inaccuracies and “hallucinations”*.
  • The need for further refinement of AI algorithms, training datasets and the data-processing mechanisms of AI models, as well as their reliability and consistency in sensitive healthcare settings, remains open for discussion.

 

*Incorrect or misleading results generated by AI models, caused by insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model.

 

Full-text available here: https://www.cureus.com/articles/312796-human-computer-interaction-a-literature-review-of-artificial-intelligence-and-communication-in-healthcare#!/