Since ChatGPT was so generous with its offers of help, I put it to the test. ChatGPT said it could help me with real-time clinical decision support, drawing on relevant guidelines, protocols, and drug treatments. Could it? Not really, or at least not quite yet.
I asked ChatGPT for help with some (hypothetical) clinical questions, some as mundane as the treatment of refractory crusted scabies and some as critical as the indications for catheter-directed thrombolytics for massive pulmonary embolism (PE). I quickly learned how to phrase my queries for the types of answers I wanted, but the answer content still lacked anything resembling “intelligence.” Typically, the answer would give an accurate synopsis of [said condition] and some common medications or treatments, but no actionable details. When asked for references, the answer was usually something like: “While I can’t provide specific citations or direct references, the information provided is based on widely accepted medical knowledge and guidelines established by reputable organizations…” When pressed for details, as in the PE example above, it provided only a list of organizations such as the American College of Chest Physicians (ACCP) and the European Society of Cardiology (ESC). Missing, critically, was anything to convince me that the available data had been synthesized properly. In this regard, ChatGPT was far less reliable than our common peer-reviewed quick-reference resources (such as UpToDate).
It did a bit better in helping me “workshop” differentials for completeness. When offered a typical HPI (as could be cut and pasted from a chart, sans PHI, of course), ChatGPT constructed what I would regard as thorough differentials. This might help an Emergency Physician double-check a differential for completeness, or help a learner brainstorm further diagnoses.
There are two things ChatGPT did quite well. One is summarizing journal articles. I provided the titles of several rather lengthy articles, and for each I received a generally solid summary paragraph and a bulleted outline. I found that ChatGPT summarized well without editorializing. ChatGPT also showed aptitude in explaining diagnoses in patient-friendly language. We’re often in the position of having to break bad and unexpected news, or even just assuage fears over “incidentalomas,” and ChatGPT demonstrated admirable capability in explaining various diagnoses in simple and understandable terms, with an appropriate level of detail and even anticipatory guidance.
AI in general, and ChatGPT in particular, are improving along an exponential curve. The opportunities and shortcomings above exist at a moment in time and will be different before the ink is dry on this column. I intend to keep paying attention, so that I’m neither the first nor the last Emergency Physician to let ChatGPT help me do my job better.