Use of AI (Artificial Intelligence) in Medicine
AI has the potential to transform medicine for the better, but it is unlikely to completely replace physicians anytime soon. The human aspects of care, including empathy, compassion, critical thinking, and complex decision-making, are invaluable in providing holistic patient care beyond diagnosis and treatment decisions. AI does, however, have the capacity to assist in diagnostics, including generating differential diagnoses, recognizing patterns in the electronic medical record, and producing documents that previously carried a high administrative burden for physicians.
ChatGPT has passed the USMLE (the United States Medical Licensing Examination) and can work through internal medicine case files, indicating its versatility and potential for future clinical applications. Google and DeepMind have also developed Med-PaLM, a language model trained on several existing medical Q&A datasets to offer "safe and helpful answers" to questions posed by health care professionals and patients.
Most patients agree that they don't want a computer taking care of them, though they do appreciate comprehensive responses and improved accessibility for clinical answers. Interestingly, AI use in diagnostics is relatively underfunded. More generally, diagnostics have long been considered unattractive investments: while therapeutics see around $300 billion in research and development investment a year, diagnostics receive a modest $10 billion in private funding.
In a 2023 study, the Pew Research Center found significant discomfort among Americans with the idea of AI being used in their own health care. Six in ten U.S. adults say they would feel uncomfortable if their own health care provider relied on artificial intelligence to do things like diagnose disease and recommend treatments; a significantly smaller share (39%) say they would feel comfortable with this.
On the positive side, a larger share of Americans think the use of AI in health and medicine would reduce rather than increase the number of mistakes made by health care providers (40% vs. 27%).
And among the majority of Americans who see a problem with racial and ethnic bias in health care, a much larger share say the problem of bias and unfair treatment would get better (51%) rather than worse (15%) if AI were used more to do things like diagnose disease and recommend treatments for patients.
But there is wide concern about AI's potential impact on the personal connection between a patient and health care provider: 57% say the use of artificial intelligence to do things like diagnose disease and recommend treatments would make the patient-provider relationship worse. Only 13% say it would make the relationship better. The majority of Americans also do not think AI has a role in medicine in the form of a chatbot.
Overall, patients seem to see AI as another tool their human physician can use to assist in, but not take over, their care. Americans are looking for more face-to-face time with their physicians, not less. They are looking for more humanity, not less.
We believe AI has a role in reducing the administrative burden of documentation. At Fulcrum, we will be experimenting with a few models to decrease the time we spend documenting. This software records the visit and transcribes it into a clean, organized note. The recording is not shared with anyone else, is deleted after transcription, and the note is always reviewed before insertion into the patient's chart. This allows your physician to spend less time on administrative tasks and more time with their patients. If you would prefer that your physician not use AI during your visit, please let them know.