AI’s impact on healthcare reliant on building trust

AI could revolutionise healthcare, but not without experts addressing privacy and security concerns, as well as longstanding issues with medical knowledge and practice across the industry.

The use of artificial intelligence (AI) in healthcare is growing rapidly. But while it is already providing patients with better care and support, distrust among the public and staff lingers.

Tyler Fletcher, executive vice president of healthcare data at GlobalData, said: “Without trust being built, the healthcare industry cannot fully explore the potential benefits brought by AI-boosted technology, leading patients to miss out on improved treatments and potentially suffer wider consequences.

“Patients’ mistrust in new technology is largely centred around concerns regarding the management, ownership and privacy of patient data as well as cybersecurity threats presented by AI. This also extends beyond AI to the systemic inequalities that still need addressing within healthcare.”

Fletcher argues that there is a responsibility to educate and consult on the potential risks involved, by experts who understand the technology and its impact on all stakeholders.

How AI can fill gaps in healthcare

Fletcher added: “Since the AI boom, its use has grown exponentially and spread to all areas of work, effectively embedding itself in our daily lives.

“Healthcare companies now have much bigger data sets and sources that are not only more accessible but also relevant to the practice. Combining this data with the applications of AI opens a world of innovation which is rapidly improving healthcare and research.

“However, whilst mistrust in data use drives industry-wide barriers to AI adoption in healthcare, building patients’ trust in AI requires us to acknowledge the legitimate reasons for that mistrust.

“Data security remains a pressing concern, with many people hesitant to hand over their personal data, and for good reason: healthcare is a highly targeted industry. Research from the Health Information Sharing and Analysis Center tracked 458 ransomware attacks on healthcare organisations in 2024.

“While AI can be used to accelerate the rate at which we bridge existing gaps in research, we must still also acknowledge the risks it brings with it.

“AI experts and technology consultants can bridge the gap, for example by building greater trust in medical practice among underserved demographics and in the datasets that represent them, but only if they acknowledge the risks inherent beyond the technology and the industry-wide limitations of medical research.”

Data management, ownership and privacy

Fletcher continued: “With the sheer volume of sensitive data collected and stored that keeps steadily growing, privacy concerns have become a main source of mistrust. People are questioning where their data is saved and for how long, who has access to it, and what it is being used for.

“Previous data controversies such as with Care.data or the General Practice Data for Planning and Research scheme have shown how a lack of public confidence can significantly inhibit innovation in healthcare.

“Digital rights group the Electronic Frontier Foundation (EFF) warned that women using period-tracking apps must ensure they know how their data is being used in the wake of the overturning of Roe v Wade, since some apps shared data with third parties.”

“Experts must seek to understand the positioning of the public and staff on this topic and engage people in a conversation about AI and the future of care.

“The management of patient data can be supported by AI models that prevent private and sensitive data from being exposed to the wrong people, and organisations must remain transparent with how they handle data and where AI and other tools play a part in its management and access.”
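As a minimal illustration of the kind of automated safeguard described above (not any specific product or the models Fletcher refers to), a system might redact obvious identifiers before records leave a secure environment. The patterns, function name and placeholder labels below are hypothetical, and real deployments would combine such rules with trained models and human review:

```python
import re

# Illustrative, rule-based sketch only; patterns are assumptions,
# not a complete or production-grade identifier list.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "NHS_NUMBER": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),  # 10-digit NHS number
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Contact jane.doe@example.org, NHS number 943 476 5919."
print(redact(note))  # Contact [EMAIL], NHS number [NHS_NUMBER].
```

Transparency about where such tooling sits in the data pipeline, as Fletcher argues, matters as much as the tooling itself.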

AI as a cybersecurity aid

Fletcher said: “Considering that 92% of healthcare organisations experienced a cyberattack in 2024, data breaches are also a cause of distress among patients and of concern within the industry.

“The digitalisation of the NHS will broaden the potential attack surface for cyber-attacks and data breaches, so organisations must invest in robust cybersecurity measures to uphold patient and staff trust.”

“AI’s ability to analyse databases and recognise patterns is invaluable in cybersecurity – it can identify unusual patterns that could indicate a breach, adapt and learn from new threats, and even prevent possible attacks by analysing potential weaknesses and recommending measures to address them.

“Experts positioning AI as an ally to data protection, rather than a cybersecurity threat, is crucial to inspiring confidence in the technology and its security.”
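The pattern recognition Fletcher describes can be sketched, very roughly, as statistical anomaly detection over access logs. Production systems use far richer learned models; the threshold, data and function name below are assumptions for illustration only:

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.0):
    """Flag hourly record-access counts sitting more than `threshold`
    standard deviations above the mean, a crude stand-in for the
    learned breach-detection models described in the article."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts)
            if sigma > 0 and (c - mu) / sigma > threshold]

# A clinician normally opens ~20 records per hour; hour 6 spikes to 400,
# the kind of unusual pattern that could indicate a breach in progress.
accesses = [18, 22, 19, 21, 20, 23, 400, 19]
print(flag_anomalies(accesses))  # [6]
```

The design point is the one Fletcher makes: the same pattern-spotting capability that raises privacy concerns is also what lets AI act as a data-protection ally.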

Building trust

He added: “Experts and organisations must understand that communication and transparency are critical enablers of public confidence in AI. These factors should inform how AI tools are designed and regulated, how staff are trained to use them, and how the public is engaged on the resulting trade-offs.

“It’s imperative we learn from the causes of mistrust that prevail in healthcare, particularly among demographics that have long been underserved, before we assume trust in new technologies.

“There must also be an active focus on removing biases from AI before it can credibly be presented as a solution to bias. The potential for a positive impact on the health industry, however, is huge.”
