Artificial intelligence is already in our hospitals. 5 questions people want answered

Artificial intelligence is already being used to improve health care. AI can detect patterns in medical images and use them to help diagnose disease. It can predict which patients in a hospital are likely to deteriorate. It can quickly summarise research papers to keep doctors up to date with the latest findings.

In other words, AI can shape, or even make, decisions that health professionals once made alone. And more applications are being developed all the time.

So what do consumers think about AI in health care? And how should their views shape the way it is used in future?

AI is already being used in health care, but not all of it is medical-grade.

What do consumers think?

AI systems are trained to find patterns in large data sets. They can use these patterns to make recommendations, suggest diagnoses, or initiate actions. They may also learn continuously, improving their performance over time.

When we bring together international evidence, including our own research, most consumers appear to accept the potential value of AI in health care.

That value might include, for example, better access to care or more accurate diagnoses. For now, though, these are mostly potential benefits rather than proven ones.

But consumers say their acceptance is conditional, and they raise five recurring concerns.

1. Does AI work?

AI tools must work as expected. Consumers often say AI should perform at least as well as a human doctor, and some say it should not be used at all if it leads to more medical errors or incorrect diagnoses.

2. Who’s responsible if AI gets it wrong?

Consumers are also concerned that AI systems may generate decisions, such as diagnoses or treatment plans, without any human input, making it unclear who is accountable when things go wrong. People generally want clinicians to stay responsible for final decisions and for protecting patients from harm.

3. Will AI make health care less fair?

If health care services already discriminate against some groups, AI systems can learn those patterns from the data and replicate, or even worsen, the discrimination. In this way, AI in health care could exacerbate existing health inequities. Consumers in our study made clear this was not acceptable.

4. Will AI dehumanise health care?

Consumers worry that AI will strip the “human” element out of health care, largely because AI is seen as lacking important human qualities such as empathy. They say AI tools should complement clinicians rather than replace them. Particularly when they feel vulnerable, consumers value a health professional’s communication skills, touch, and care.

5. Will AI de-skill our health workers?

Consumers also value the expertise of human clinicians. Women in our study of AI in breast screening worried about its potential impact on radiologists’ skills, which they viewed as an important shared resource, one that could be lost if AI tools are over-used.


Communities and consumers need a voice

AI is not simply a technical tool that can be dropped into the Australian health care system. Social and ethical considerations, including high-quality engagement with consumers and communities, must shape how AI is used in health care.

Communities also need support to build the digital literacy and digital skills required to access trustworthy, reliable health information, services, and resources.

Engagement with Aboriginal and Torres Strait Islander peoples must be respectful and must recognise Indigenous data sovereignty, which the Australian Institute of Aboriginal and Torres Strait Islander Studies defines as:

The right of Indigenous Peoples to control the collection, ownership, and use of data about Indigenous Communities, Peoples, Lands, and Resources.

This includes any use of such data for AI.

Respectful engagement with Aboriginal and Torres Strait Islander communities is essential. Thurtell/Getty Images

Before managers build AI into their health services, before regulators develop guidelines on how AI should and shouldn’t be used, and before clinicians buy a new AI tool, they must engage with consumers and communities.

We are making progress. Earlier this year we ran a citizens’ jury on AI in health care, sponsoring 30 Australians from every state and territory to spend three weeks learning about AI in health care and making recommendations to policymakers.

Their recommendations will be published in a forthcoming issue of the Medical Journal of Australia, and they have already informed a recently released national roadmap for AI in health care.

That’s not all.

Health professionals also need support to use AI well. They must learn to use digital health tools critically and to weigh up their pros and cons.

Our analyses of safety incidents reported to the US Food and Drug Administration show that the most serious harms were caused not by a faulty device itself, but by the way consumers and clinicians used it.

When should health workers tell patients that an AI tool will be used in their care? And when should they seek the patient’s informed consent for that use?

Last but not least, everyone involved in the development and use of AI needs to become accustomed to asking: Do consumers and communities believe this is an appropriate use of AI?

Only then will we have the AI-enabled health care system consumers actually want.
