
Most Americans are uncomfortable with artificial intelligence in health care, survey finds

[Video: On GPS: How AI will impact the future of medicine (01:47) – Source: CNN]

CNN  — 

Most Americans feel “significant discomfort” about the idea of their doctors using artificial intelligence to help manage their health, a new survey finds, but they generally acknowledge AI’s potential to reduce medical mistakes and to curb some of the racial bias that can affect doctors’ decisions.

Artificial intelligence is the theory and development of computer programs that can solve problems and perform tasks that typically would require human intelligence – machines that can essentially learn like humans can, based on the input they have been given.

You probably already use technology that relies on artificial intelligence every day without even thinking about it.

When you shop on Amazon, for example, it’s artificial intelligence that guides the site to recommend cat toys if you’ve previously shopped for cat food. AI can also help unlock your iPhone, drive your Tesla, answer customer service questions at your bank and recommend the next show to binge on Netflix.

Americans may like these individualized services, but when it comes to AI and their health care, it may be a digital step too far for many.

Sixty percent of Americans who took part in a new survey by the Pew Research Center said that they would be uncomfortable with a health care provider who relied on artificial intelligence to do something like diagnose their disease or recommend a treatment. About 57% said that the use of artificial intelligence would make their relationship with their provider worse.

Only 38% felt that using AI to diagnose disease or recommend treatment would lead to better health outcomes; 33% said it would lead to worse outcomes; and 27% said it wouldn’t make much of a difference.

About 6 in 10 Americans said they would not want AI-driven robots to perform parts of their surgery. Nor do they like the idea of a chatbot working with them on their mental health; 79% said they wouldn’t want AI involved in their mental health care. There’s also concern about security when it comes to AI and health care records.

“Awareness of AI is still developing. So one dynamic here is, the public isn’t deeply familiar with all of these technologies. And so when you consider their use in a context that’s very personal, something that’s kind of high-stakes as your own health, I think that the notion that folks are still getting to know this technology is certainly one dynamic at play,” said Alec Tyson, Pew’s associate director of research.

The findings, released Wednesday, are based on a survey of 11,004 US adults conducted December 12-18, 2022, using the center’s American Trends Panel, an online survey group recruited through random sampling of residential addresses across the country. Pew weights the survey to reflect US demographics, including race, gender, ethnicity, education and political party affiliation.

The respondents expressed concern over the speed of the adoption of AI in health and medicine. Americans generally would prefer that health care providers move with caution and carefully consider the consequences of AI adoption, Tyson said.

But they’re not totally anti-AI when it comes to health care. They’re comfortable with using it to detect skin cancer, for instance; 65% thought it could improve the accuracy of a diagnosis. Some dermatologists are already exploring the use of AI technology in skin cancer diagnosis, with some limited success.

Four in 10 Americans think AI could also help providers make fewer mistakes, which are a serious problem in health care. A 2022 study found that medical errors cost about $20 billion a year and result in about 100,000 deaths annually.

Some Americans also think AI may be able to build more equity into the health care system.

Studies have shown that most providers have some form of implicit bias, with more positive attitudes toward White patients and negative attitudes toward people of color, and that could affect their decision-making.

Among the survey participants who understand that this kind of bias exists, the predominant view was that AI could help when it came to diagnosing a disease or recommending treatments, making those decisions more data-driven.

Tyson said that when people were asked to describe in their own words how they thought AI would help fight bias, one participant cited class bias: They believed that, unlike a human provider, an AI program wouldn’t make assumptions about a person’s health based on the way they dressed for the appointment.

“So this is a sense that AI is more neutral or at least less biased than humans,” Tyson said. However, AI is developed with human input, so experts caution that it may not always be entirely without bias.

Pew’s earlier surveys about artificial intelligence have found a general openness to AI, he said, particularly when it’s used to augment, rather than replace, human decision-making.

“AI as just a piece of the process in helping a human make a judgment, there is a good amount of support for that,” Tyson said. “Less so for AI to be the final decision-maker.”

For years, radiologists have used AI to analyze X-rays and CT scans to look for cancer and improve diagnostic capacity. About 30% of radiologists use AI as part of their practice, and that number is growing, a survey found – but more than 90% in that survey said they wouldn’t trust these tools for autonomous use.

Dr. Victor Tseng, a pulmonologist and medical director of California-based Ansible Health, said that his practice is one of many exploring the AI program ChatGPT. His group has set up a committee to look into its uses and to discuss the ethics of using it, so that guardrails are in place before it is put to work clinically.

Tseng’s group published a study this month that showed that ChatGPT could correctly answer enough practice questions that it would have passed the US Medical Licensing Examination.

Tseng said he doesn’t believe that AI will ever replace doctors, but he thinks technology like ChatGPT could make the medical profession more accessible. For example, a doctor could ask ChatGPT to simplify complicated medical jargon into language that someone with a seventh-grade education could understand.

“AI is here. The doors are open,” Tseng said.

The Pew survey findings suggest that attitudes could shift as Americans become more familiar with artificial intelligence. Respondents who were more familiar with a technology tended to be more supportive of it, but they still expressed caution about health care providers moving too quickly to adopt it.

“Whether you’ve heard a lot about AI, just a little or maybe even nothing at all, all of those segments of the public are really in the same space,” Tyson said. “They echo this sentiment of caution of wanting to move carefully in AI adoption in health care.”

Source: https://www.cnn.com/2023/02/22/health/artificial-intelligence-health-care/index.html