For Patients to Trust Medical AI, They Need to Understand It

  • December 9, 2021
  • euthinktank

AI holds great promise to increase the quality and reduce the cost of health care in developed and developing countries. But one obstacle to using it is that patients don't trust it. One key reason, the authors' research found, is that patients perceive medical AI to be a black box while believing they know more about physicians' decision-making processes than they actually do. A remedy: provide patients with an explanation of how both types of care providers make decisions.

Artificial intelligence-enabled health applications for diagnostic care are becoming widely available to consumers; some can even be accessed via smartphones. Google, for instance, recently announced its entry into this market with an AI-based tool that helps people identify skin, hair, and nail conditions. A major barrier to the adoption of these technologies, however, is that consumers tend to trust medical AI less than human health care providers. They believe that medical AI fails to cater to their unique needs and performs worse than comparable human providers, and they feel that they cannot hold AI accountable for mistakes in the same way they could a human.

This resistance to AI in the medical domain poses a challenge to policymakers who wish to improve health care and to companies selling innovative health services. Our research provides insights that could be used to overcome this resistance.

In a paper recently published in Nature Human Behaviour, we show that consumer adoption of medical AI has as much to do with their negative perceptions of AI care providers as with their unrealistically positive views of human care providers. Consumers are reluctant to rely on AI care providers because they do not believe they understand how AI makes medical decisions, nor do they objectively understand it; they view its decision-making as a black box. Consumers are also reluctant to utilize medical AI because they erroneously believe they better understand how humans make medical decisions.

Our research — consisting of five online experiments with nationally representative and convenience samples of 2,699 people and an online field study on Google Ads — shows how little consumers understand about how medical AI arrives at its conclusions. For instance, we tested how much nationally representative samples of Americans knew about how AI care providers make medical decisions, such as whether a skin mole is malignant or benign. Participants performed no better than if they had picked answers at random. But participants recognized their ignorance: they rated their understanding of how AI care providers make medical decisions as low.
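As an illustration of what "no better than chance" means here, whether quiz performance exceeds guessing can be checked with a simple binomial test. The quiz format below (10 items, two options each) is a hypothetical stand-in, not the actual materials from our studies:

```python
from math import comb

def binom_p_at_least(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance of scoring k or more by guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical quiz: 10 two-option questions, so guessing succeeds with p = 0.5 per item.
# Answering 6 of 10 correctly is entirely consistent with random guessing:
print(f"P(>= 6 correct by guessing) = {binom_p_at_least(6, 10, 0.5):.3f}")  # ~0.377
```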

By contrast, participants overestimated how well they understood how human doctors make medical decisions. Even though participants in our experiments possessed similarly little factual understanding of decisions made by AI and human care providers, they claimed to better understand how human decision-making worked.

In one experiment, we asked a nationally representative online sample of 297 U.S. residents to report how much they understood about how a doctor or an algorithm would examine images of their skin to identify cancerous skin lesions. Then we asked them to explain the human or the algorithmic provider's decision-making processes. (This type of intervention has been used before to shatter illusory beliefs about how well one understands causal processes. Most people, for instance, believe they understand how a helicopter works; only when asked to explain how it works do they realize they have no idea.)

After participants tried to provide an explanation, they rated their understanding of the human or algorithmic medical decision-making process again. We found that forcing people to explain the provider's decision-making processes reduced the extent to which participants felt that they understood decisions made by human providers but not decisions made by algorithmic providers. That's because their subjective understanding of how doctors made decisions had been inflated, while their subjective understanding of how AI providers made decisions was unaffected by having to provide an explanation — possibly because they had already felt the latter was a black box.

In another experiment, with a nationally representative sample of 803 Americans, we measured how well people subjectively felt they understood human or algorithmic decision-making processes for diagnosing skin cancer, and then tested how well they actually understood them. To do this, we created a quiz with the aid of medical experts: a team of dermatologists at a medical school in the Netherlands and a team of developers of a popular skin-cancer-detection application in Europe. We found that although participants reported a poorer subjective understanding of medical decisions made by algorithms than of decisions made by human providers, they possessed a similarly limited actual understanding of decisions made by human and algorithmic providers.

What can policymakers and firms do to encourage consumer uptake of medical AI?

We found two successful, slightly different interventions that involved explaining how providers — both algorithmic and human — make medical decisions. In one experiment, we explained how both types of providers use the ABCD framework (asymmetry, border, color, and diameter) to examine features of a mole to make a malignancy-risk assessment. In another experiment, we explained how both types of providers examine the visual similarity between a target mole and other moles known to be malignant.
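To make the ABCD logic concrete, here is a minimal Python sketch of what a rule of this kind computes. The feature scales, weights, and 6 mm diameter cutoff are illustrative assumptions, not the scoring used by any clinical tool or by the providers in our studies:

```python
from dataclasses import dataclass

@dataclass
class MoleFeatures:
    """Illustrative ABCD features of a skin mole (hypothetical 0-to-1 scales)."""
    asymmetry: float    # 0 (symmetric) to 1 (highly asymmetric)
    border: float       # 0 (smooth edge) to 1 (ragged edge)
    color: float        # 0 (uniform color) to 1 (many colors)
    diameter_mm: float  # diameter in millimeters

def abcd_risk_score(m: MoleFeatures) -> float:
    """Combine the four ABCD features into a single malignancy-risk score.

    The weights and the 6 mm cutoff are illustrative assumptions; a real
    provider (human or AI) would calibrate them against labeled clinical data.
    """
    diameter_flag = 1.0 if m.diameter_mm > 6.0 else 0.0
    return 0.3 * m.asymmetry + 0.3 * m.border + 0.2 * m.color + 0.2 * diameter_flag

# Example: an asymmetric, ragged, multi-colored 8 mm mole scores high.
mole = MoleFeatures(asymmetry=0.8, border=0.7, color=0.6, diameter_mm=8.0)
print(f"risk score: {abcd_risk_score(mole):.2f}")  # higher = riskier
```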

These interventions successfully reduced the difference in perceived understanding of algorithmic and human decision-making by increasing the perceived understanding of the former. In turn, the interventions increased participants’ intentions to utilize algorithmic care providers without reducing their intentions to utilize human providers.

The efficacy of these interventions is not confined to the laboratory. In a field study on Google Ads, we had users see one of two different ads for a skin-cancer-screening application in their search results. One ad offered no explanation and the other briefly explained how the algorithm works. After a five-day campaign, the ad explaining how the algorithm works produced more clicks and a higher click-through rate.
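The campaign result boils down to a comparison of two click-through rates. As a sketch of how such a comparison is typically tested, the code below runs a two-proportion z-test; the impression and click counts are hypothetical placeholders, since the raw campaign data are not reproduced here:

```python
from math import erf, sqrt

def two_proportion_ztest(clicks_a: int, n_a: int, clicks_b: int, n_b: int):
    """Two-sided z-test for a difference between two click-through rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)            # pooled CTR under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error of the difference
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided normal tail
    return p_a, p_b, z, p_value

# Hypothetical counts; the study reports only the direction of the effect.
ctr_explained, ctr_control, z, p = two_proportion_ztest(
    clicks_a=120, n_a=5000,  # ad explaining how the algorithm works
    clicks_b=80, n_b=5000,   # ad with no explanation
)
print(f"CTR explained: {ctr_explained:.2%}, CTR control: {ctr_control:.2%}, "
      f"z = {z:.2f}, p = {p:.4f}")
```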

AI-based health care services are instrumental to the mission of providing high-quality and affordable services to consumers in developed and developing nations. Our findings show how greater transparency — opening the AI black box — can help achieve this critical mission.
