Abstract
Large language models (LLMs) have demonstrated potential to enhance various aspects of healthcare, including healthcare provider–patient communication. However, some have raised the concern that such communication may embed implicit communication norms that deviate from what patients want or need from conversations with their healthcare provider. This paper explores the possibility of using LLMs to enable patients to choose their preferred communication style when discussing their medical cases. Through a proof-of-concept demonstration using ChatGPT-4, we suggest that LLMs can emulate different healthcare provider–patient communication approaches, building on Emanuel and Emanuel's four models: paternalistic, informative, interpretive, and deliberative. This would allow patients to engage in a communication style that aligns with their individual needs and preferences. We also highlight risks associated with using LLMs in healthcare communication, such as reinforcing patients' biases and the persuasive capabilities of LLMs, which may lead to unintended manipulation.
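As a rough illustration of the kind of proof-of-concept the abstract describes, the sketch below shows how a system prompt might steer a chat model toward one of Emanuel and Emanuel's four communication styles chosen by the patient. The style descriptions, the `gpt-4` model name, and the use of the OpenAI Python client are our assumptions for illustration, not the paper's exact setup.

```python
# Sketch: steering a chat model toward one of Emanuel and Emanuel's four
# communication styles via a system prompt. The style descriptions, the
# model name, and the OpenAI client usage are illustrative assumptions.
from openai import OpenAI

STYLES = {
    "paternalistic": (
        "Act as a physician who recommends the course of action you judge "
        "best for the patient's health, explaining it with confidence."
    ),
    "informative": (
        "Act as a physician who neutrally presents all relevant facts and "
        "options, leaving the choice entirely to the patient."
    ),
    "interpretive": (
        "Act as a physician who helps the patient clarify their own values "
        "and then explains which options best fit those values."
    ),
    "deliberative": (
        "Act as a physician who engages the patient in dialogue about which "
        "health-related values are worth pursuing in this situation."
    ),
}

def respond(style: str, patient_message: str) -> str:
    """Return a reply in the patient's chosen communication style."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": STYLES[style]},
            {"role": "user", "content": patient_message},
        ],
    )
    return completion.choices[0].message.content

# Example: the patient opts for the deliberative style.
print(respond("deliberative", "My scan showed a small lung nodule. What now?"))
```

Keeping the style instructions in a single dictionary makes the patient's choice an explicit, inspectable parameter rather than an implicit norm baked into the model's default behavior, which is the core idea the abstract puts forward.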