New York American College of Emergency Physicians

Moshe Weizberg, MD FACEP

Medical Director, Emergency Department, Maimonides Midwood Community Hospital; Chair, New York ACEP Professional Development Committee

Interviewer Lauren Curato, DO FACEP

Assistant Professor, Department of Emergency Medicine, Columbia University Irving Medical Center/New York-Presbyterian Hospital

Interviewee ChatGPT 3.5

Chat Generative Pre-trained Transformer developed by OpenAI

Ask The Experts: ChatGPT 3.5

Each quarter, we in the Professional Development Committee interview a practicing Emergency Physician with expertise in a niche area. This month, we’ve interviewed a different type of “expert”: ChatGPT 3.5.

This Chat Generative Pre-trained Transformer is a form of Artificial Intelligence (AI). Although I’ve been only a casual observer of AI’s growing omnipresence in everyday life, I’ve become increasingly curious about its relevance to my life as an Emergency Physician. Articles from The NY Times to Annals of Emergency Medicine have covered topics such as physicians using ChatGPT to improve bedside manner, write insurance appeal letters, or even compare human versus AI differentials.

In the spirit of professional development, I wondered if ChatGPT had any non-sentient “ideas” about how it could be of service to the practice of Emergency Medicine. With no prodding whatsoever, it agreed to my interview request…(Interview has been edited for brevity)

Hello, ChatGPT. I’m an Emergency Physician working in New York. I’m interested in learning more about how you can assist me during my busy clinical shifts in the Emergency Department (ED).

ChatGPT: Hello, I’m glad you’re interested in exploring how I can support you during your shifts. As an AI language model, I can provide various forms of assistance, including clinical decision support, differential diagnosis assistance, patient education, documentation assistance, and more. How can I help you specifically during your shifts?

That’s great to hear. Let’s start with clinical decision support. During my shifts, I encounter complex cases where I need to quickly access evidence-based guidelines and treatment recommendations. Can you help me with that?

ChatGPT: Absolutely. I can provide real-time clinical decision support by helping you access relevant guidelines, treatment protocols, and drug information. Whether you need guidance on diagnostic criteria, treatment options, or management strategies for specific medical conditions, I can assist you by synthesizing information from a wide range of medical resources.

That would be helpful. I also frequently have medical students or resident doctors in training who need assistance with building a comprehensive and accurate differential diagnosis. Can you assist with that as well?

ChatGPT: Absolutely, I’d be happy to assist with building a comprehensive and accurate differential diagnosis. Just provide me with the relevant clinical information, including the patient’s presenting symptoms, medical history, physical exam findings, and any pertinent laboratory or imaging results, and I’ll generate a list of potential diagnoses to consider. Feel free to ask for clarification or additional information as needed, and I’ll do my best to help facilitate the learning process.

Another area where I could use your assistance is patient education and communication. Can you help me with that too?

ChatGPT: Certainly, I can assist you in providing patient education by generating easy-to-understand explanations and educational materials for patients and their families. Whether you need information about medical conditions, procedures, treatment options, or discharge instructions, I can help you communicate effectively with your patients and improve patient satisfaction.

Going back to your earlier offer of documentation assistance: I don’t think I’m ready for you to assist me with documentation yet. I’m leery about the medico-legal ramifications of using a bot to write my medical chart.

ChatGPT: That’s completely understandable. Medical documentation is a critical aspect of patient care, and ensuring accuracy and compliance with medico-legal standards is paramount. While I can provide assistance and information during busy shifts, it’s important to rely on established protocols and guidelines for medical documentation to mitigate any potential risks or concerns.

If you have any questions or need assistance with other aspects of patient care, such as clinical decision-making, differential diagnosis, or accessing medical resources, feel free to let me know. I’m here to help in any way that supports your practice and enhances patient care.

Thank you, ChatGPT. It sounds like you may be able to provide valuable support to me during my clinical shifts in the ED. I look forward to integrating your assistance into my workflow.

Since ChatGPT was so generous with its offers of help, I put it to the test. ChatGPT said it could help me with real-time clinical decision support, including relevant guidelines, protocols, and drug treatments. Could it? Not really; or at least, not quite yet.

I asked ChatGPT for help with some (hypothetical) clinical questions, some as mundane as the treatment of refractory crusted scabies and some as critical as the indications for catheter-directed thrombolytics for massive pulmonary embolism (PE). I quickly learned how to phrase my queries to get the types of answers I wanted, but the answer content still lacked anything resembling “intelligence.” Typically, the answer would give an accurate synopsis of [said condition] and some common medications or treatments, but no actionable details. When asked for references, the answer was usually something like: “While I can’t provide specific citations or direct references, the information provided is based on widely accepted medical knowledge and guidelines established by reputable organizations…” When pressed for details, such as in the above PE example, it provided a list of organizations such as the American College of Chest Physicians (ACCP) and the European Society of Cardiology (ESC). Missing, critically, was anything to convince me that the available data had been synthesized properly. In this regard, ChatGPT was far less reliable than our common peer-reviewed quick-reference resources (such as UpToDate).

It did a bit better in helping me “workshop” differentials for completeness. When offered a typical HPI (as could be cut and pasted from a chart, sans PHI, of course), ChatGPT constructed what I would regard as thorough differentials. This might help an Emergency Physician double-check a differential for completeness, or help a learner brainstorm further diagnoses.

There are two things ChatGPT did quite well. One is summarizing journal articles. I provided the titles of several rather lengthy articles, and for each I received a generally solid summary paragraph and bulleted outline. I found that ChatGPT performed well in summarizing without editorializing. ChatGPT also showed aptitude in explaining diagnoses in patient-friendly language. We’re often in the position of having to break bad and unexpected news, or even just assuage fears over “incidentalomas,” and ChatGPT demonstrated admirable capability in explaining various diagnoses in simple and understandable terms, with an appropriate level of detail, and even offering anticipatory guidance.

AI in general, and ChatGPT in particular, improves along an exponential curve. The above opportunities and shortcomings exist at a moment in time, and will be different before the ink is dry on this column. I intend to keep paying attention, so that I’m neither the first nor the last Emergency Physician to let ChatGPT help me do my job better.