Can ChatGPT help doctors with diagnosis? What does the science say?

When given the same medical cases as healthcare professionals, ChatGPT-4 matched or exceeded their diagnostic accuracy

(Ernie Mundale – HealthDay News) – Doctors’ brains are excellent tools for decision-making, but even the best doctors could benefit from some diagnostic help from ChatGPT, a recent study suggests.

The main benefit comes from a reasoning process known as “probabilistic reasoning”: knowing the odds that something will happen (or not happen). “Humans have difficulty with probabilistic reasoning, the practice of making decisions based on calculating odds,” explained the study’s lead author, Dr. Adam Rodman of Beth Israel Deaconess Medical Center in Boston.

“Probabilistic reasoning is one of several components of diagnostic reasoning, which is an incredibly complex process that uses a variety of different cognitive strategies,” he explained in a Beth Israel press release. “We chose to evaluate probabilistic reasoning in isolation because it is a well-known area where humans could use support.”

The Beth Israel team used data from a previously published survey of 550 health professionals, each of whom was asked to perform probabilistic reasoning while diagnosing five different medical cases.

Probabilistic reasoning, which is fundamental to the medical diagnostic process, is a skill in which ChatGPT can offer health professionals significant support (Image: Infobae)

In the new study, Rodman’s team gave the same five cases to ChatGPT’s underlying AI algorithm, the large language model (LLM) ChatGPT-4.

The cases included information from common medical tests, such as a chest X-ray for pneumonia, a mammogram for breast cancer, a stress test for coronary artery disease, and a urine culture for urinary tract infection.

Based on this information, the chatbot used probabilistic logic to re-evaluate the likelihood of different diagnoses for the patient. When the test result was positive, the chatbot was more accurate than the human clinicians in two of the five cases, equally accurate in two others, and less accurate in one. The researchers considered this a “tie” between humans and the chatbot for medical diagnosis.

But ChatGPT-4 excelled when a patient’s test came back negative (rather than positive), proving more accurate in its diagnoses than the doctors in all five cases.

“Humans sometimes feel the risk is higher than it is after a negative test result, which can lead to overtreatment, more testing and too much medication,” said Rodman, an internal medicine physician and researcher in the Department of Medicine at Beth Israel.
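To make the arithmetic concrete, the sketch below shows Bayesian post-test probability updating, the standard formalism behind this kind of probabilistic reasoning. The function name and all numbers (pre-test probability, sensitivity, specificity) are illustrative assumptions, not figures from the study or from ChatGPT-4’s output.

```python
def post_test_probability(pre_test_prob, sensitivity, specificity, test_positive):
    """Update the probability of disease with Bayes' theorem after a test result.

    pre_test_prob: estimated probability of disease before testing (0..1)
    sensitivity:   P(test positive | disease present)
    specificity:   P(test negative | disease absent)
    """
    if test_positive:
        p_result_if_disease = sensitivity
        p_result_if_no_disease = 1 - specificity
    else:
        p_result_if_disease = 1 - sensitivity
        p_result_if_no_disease = specificity

    numerator = p_result_if_disease * pre_test_prob
    denominator = numerator + p_result_if_no_disease * (1 - pre_test_prob)
    return numerator / denominator


# Illustrative numbers only (not from the study): a 25% pre-test suspicion of
# pneumonia, and a chest X-ray assumed to be 90% sensitive and 95% specific.
print(post_test_probability(0.25, 0.90, 0.95, test_positive=True))   # ~0.86
print(post_test_probability(0.25, 0.90, 0.95, test_positive=False))  # ~0.03
```

The negative-test branch is the calculation the study found clinicians handled least well, tending to overestimate the remaining risk, and it is exactly where ChatGPT-4 outperformed them in all five cases.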

The study found that ChatGPT-4 is particularly effective at diagnosis when medical tests come back negative (Getty)

The study appears in the December 11 issue of JAMA Network Open. The researchers said it is therefore possible that doctors could one day work alongside artificial intelligence to diagnose patients more accurately.

Rodman described this possibility as “exciting.” “Even if imperfect, the ease of use [of chatbots] and their ability to be integrated into clinical workflows could, in theory, lead humans to make better decisions,” he said. “Future research into collective human and artificial intelligence is urgently needed.”

More information

Learn more about artificial intelligence and medicine at Harvard.

Source: Beth Israel Deaconess Medical Center, press release, December 11, 2023

*Ernie Mundale, HealthDay Reporter © The New York Times 2023
