ChatGPT in Healthcare

Published by: Harnoor Gill, B.S., B.A.

Advanced technology is quickly pervading many aspects of healthcare and has the potential to radically transform how care is delivered. In this essay, I will discuss how telemedicine has transformed healthcare practices, especially during and after the Covid-19 pandemic. I will also discuss how advancements building on telemedicine, namely artificial intelligence technology, can be used to enhance medical care and reduce physician burden. Lastly, I will examine how ChatGPT, an artificial intelligence chatbot developed by OpenAI, can be employed in the healthcare space to improve physicians’ bedside manner.


Telemedicine in Healthcare

The Covid-19 pandemic triggered a bold reemergence of technology and conspicuously restructured the concept of “work” in many realms, such as education and healthcare. Virtual means of communication became popular out of necessity. While remote healthcare delivery methods such as telemedicine were designed to maintain the thread of communication between physician and patient during an arduous time, they undoubtedly produced a marked shift in the culture of healthcare. Doctors more routinely set up remote online meetings and video calls with their patients, many of whom were located in “healthcare deserts”, or areas in which medical services were not readily available.1 Patients, in turn, had improved access to care and, in many cases, better adherence to their treatment plans.2 Thus, telemedicine’s role in supplementing conventional healthcare delivery provided vulnerable communities with adequate medical care at a time when resources were strained. However, telemedicine’s role in medicine may be both augmented and superseded by the influx of artificial intelligence technology entering the healthcare space.

Artificial Intelligence in Healthcare

The most novel of these advancements lies in the field of artificial intelligence (AI). AI systems learn from copious amounts of data to mimic human intellectual processes, such as decision-making. Because of the vast volume of data generated in the medical field, AI technology has begun to occupy a special place in the healthcare space, aiding clinicians in making major clinical decisions and freeing up time and cognitive space for other goals, such as tending to the humanistic aspects of the patient visit. Diagnostic imaging, for instance, is a major area in which AI is used. Using neural networks, AI can quickly and accurately identify associations within pathological states, gauge medical outcomes, and establish sound diagnoses. For example, AI can provide timely and potentially life-saving detection of stroke onset in at-risk patients by detecting abnormal or pathological movement. Other avenues for AI in stroke care include predicting the efficacy of stroke treatment and the long-term outcomes of that treatment, including stroke mortality.3 AI technology is not a replacement for physicians but rather a tool to assist and guide the physician in ascertaining the probabilities of certain pathological events.



ChatGPT Enhances Bedside Manner

Perhaps one of the most fascinating, and ironic, applications of AI technology in medicine is its ability to produce more humanistic patient encounters, or to improve physicians’ “bedside manner”. It is no well-kept secret in healthcare that many doctors suffer from burnout at some point in their careers, nor is it surprising that this burnout is correlated with lower expression of physician empathy toward patients.4 It has recently been shown that AI can be used to facilitate the physician’s interaction with the patient and provide language that is therapeutically empathetic toward the patient.

A new AI chatbot developed by OpenAI, ChatGPT (Chat Generative Pre-trained Transformer), is fulfilling this very role, and quite successfully. For instance, a recent study compared ChatGPT’s responses with physician responses to selected questions posted on a public social media forum. A team of licensed healthcare professionals evaluated and compared the quality of these responses. The evaluators preferred the chatbot’s answers in a whopping 78.6% of evaluations. The responses from ChatGPT were noted as generally more comprehensive and rated as better quality and significantly more empathetic.5 Thus, AI assistants can be used to guide clinicians’ language and create more meaningful patient interactions. With AI, we may be on the verge of a healthcare culture that, instead of alienating physicians, supports them in the emotional labor inherent in healthcare and reduces the immense burden placed on them to immaculately shapeshift into a humanistic role.

Of course, artificial intelligence cannot single-handedly solve the empathy problem. Institutions must support physicians by creating a better work environment and by encouraging the expression of emotions and the pursuit of therapy.4 Greater expression of empathy and a better physician-patient relationship also have several beneficial downstream effects, such as increased communication and trust, better patient adherence, and greater efficacy of treatment.4

Conclusion

It is undeniable that AI technology has the power to analyze a staggeringly large amount of data. In the healthcare world this is especially useful, as AI can use accumulated data to find predictive features of disease and support early, accurate diagnoses. A recent study has also shown that ChatGPT can outperform physicians in forming empathetic and comprehensive chat responses to patients’ questions. AI, therefore, can fill a niche for emotional intelligence and compassionate behavior as well, not solely for computational or algorithmic tasks. This is not to say that AI does not come with its own set of complications. As a novel frontier, AI remains largely unregulated and uncharted territory. Drawing ethical, moral, and legal parameters around the use of AI will be as important as the use of AI technology itself.


References


1. Preston J, Brown FW, Hartley B. Using telemedicine to improve health care in distant areas. Psychiatric Services (Washington, DC). 1992;43(1):25-32.

2. Haleem A, Javaid M, Singh RP, Suman R. Telemedicine for healthcare: Capabilities, features, barriers, and applications. Sens Int. 2021;2:100117. doi:10.1016/j.sintl.2021.100117

3. Jiang F, Jiang Y, Zhi H, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017;2(4):230-243. Published 2017 Jun 21. doi:10.1136/svn-2017-000101


4. Kelm Z, Womer J, Walter JK, et al. Interventions to cultivate physician empathy: a systematic review. BMC Med Educ. 2014;14:219. doi:10.1186/1472-6920-14-219

5. Ayers JW, Poliak A, Dredze M, et al. Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum. JAMA Intern Med. 2023;183(6):589–596. doi:10.1001/jamainternmed.2023.1838

6. Zeidner M, Hadar D, Matthews G, Roberts RD. Personal factors related to compassion fatigue in health professionals. Anxiety Stress Coping. 2013;26(6):595-609. doi:10.1080/10615806.2013.777045



















