Data with destiny: How medical workers think AI will change health care

Feelings about AI depend on how it is used, according to an information systems researcher.

Betsy Loeff

You're not alone if you suffer from techno-insecurity — the fear that technology like artificial intelligence could eliminate your job. Even people in health care, a field once considered a safe haven from unemployment, are starting to worry about AI replacing people.

This was among the findings of Assistant Professor of Information Systems Pascal Nitiema, who collected and analyzed more than 1,100 comments from MedScape, a leading online news forum for medical professionals. His research shows that feelings about AI depend on its applications. AI used as an administrative and diagnostic tool generally garnered favorable remarks, while AI applied to direct patient care prompted comments of caution and concern.

Loss leader

The leading topic of discussion among those who commented on MedScape’s AI-related news was job loss. “AI will replace the office workers and the physician,” one commentator wrote. “AI will be a game changer for future physicians, and some medical specialties will be completely replaced,” wrote another. Yet another predicted that once “AI becomes cheaper than a clinician, AI and technicians will replace many clinical functions in psychiatry and mental health.”

Are people needlessly worried that health care organizations will eventually pare down or eliminate clinical roles? “It may not happen during their lifetimes, but it will certainly change how health care workers diagnose diseases and deliver care,” Nitiema says.

As a physician himself, Nitiema notes that concerns about AI go beyond job loss. They also involve potential diagnostic and treatment errors, because good medicine relies on more than patient-reported symptoms. It also draws on a doctor’s physical examination, nearly a decade of schooling, and years of experience, he explains.

MedScape posters shared such apprehension. “New technologies may add something to the diagnostic process, but, at the end of the day, we should rely on our composite evaluation of the patient by history and examination,” said one person. Another added, “AI may help with atypical presentations, but attention to detail and the basics of history and physical (examination) are irreplaceable.”

AI presents liability issues, too, Nitiema points out. “If a patient can just purchase a device, put it on his chest, and the device will tell him a diagnosis of a heart condition, what’s the physician’s role in that process?” he asks. “Who is liable if the algorithm makes a mistake and the patient files a lawsuit? The manufacturer of the device? The physician who used the algorithm? The patient himself?”

Nitiema explains why any of the above could be culpable. “When a manufacturer puts out an algorithm, that algorithm is already trained with specific data provided by the manufacturer,” he says. This could be problematic because disease symptoms and treatment recommendations may differ across countries or regions. For instance, an AI developed with data from patients living in the U.S. may recommend an antibiotic that is effective for treating pneumonia or another bacterial infection in the States, but that same antibiotic may not work in another country because of local bacterial resistance.

Another issue is that algorithms adapt as more data is fed into them. "The algorithm put out by the manufacturer two years ago is completely different from today's version," Nitiema says. At that point, a manufacturer could argue innocence because the algorithm has changed significantly from what was originally released. As one MedScape reader posted, "How do you sue a computer?"

"It will be up to the legislative branch to develop regulations" related to AI in medicine, Nitiema says. Because algorithms often use patient data in their calculations, any such regulations should also address patient privacy.

Potential and doubt

While impacts on patient care garnered largely negative comments, the health care professionals posting on MedScape found AI’s use in some areas much more acceptable. Administration is one such area. "For instance, AI could find better allocations of physicians' time — detecting when physicians are needed or not," Nitiema says. AI could also help detect billing fraud by sniffing out providers with unusual reimbursement patterns.

On a grander scale, AI is good at helping health care professionals analyze disease outbreaks and epidemics. Still, its use concerns those professionals, particularly when it touches direct patient care.

According to Nitiema, roles and responsibilities must be carefully spelled out. For example, he points to a care provider using AI to help detect cancer through radiography. "Is he using AI as a tool? Or is he applying the AI and just taking the answer it gives? What is the provider’s role when using AI?" Nitiema asks. "That needs to be clarified." He adds that organizations and their care providers must work together to determine how to integrate AI into care operations and decision-making.

Another issue will be patient acceptance. "If you go to a physician and you list your symptoms, and then the physician types all those symptoms into ChatGPT, and then ChatGPT comes up with a diagnosis, will you trust the diagnosis?" Nitiema asks. "You always have a patient-physician relationship," he adds. "How does AI affect that?"

The questions and concerns he has seen health care providers voice lead Nitiema to conclude that AI’s inevitable expansion into the health care industry shouldn’t be left up to coders and device manufacturers. "The successful implementation of AI-powered technologies requires the involvement of all stakeholders," he says. "This includes health care organizations and their workers, insurance institutions, government agencies, and the patients themselves."