AI in Healthcare: 5 Privacy & Security Considerations When Leveraging the Latest Technology

While artificial intelligence (AI) can benefit healthcare in numerous ways – improving patient outcomes, increasing efficiency, enhancing privacy and security – misuse of the technology can lead to pitfalls. In a webinar on the ethical and legal considerations surrounding AI, Iliana Peters, Shareholder at Polsinelli, and Chris Arnold, Imprivata FairWarning’s VP of Product Management & Engineering, discussed potential privacy and security issues when using AI in healthcare. The two industry experts also shared recommendations for healthcare professionals who are considering or already implementing AI-enabled technology.

Here are five considerations for adopting AI, along with ways that privacy and security experts can become involved in how AI is being used across the organization.

#1: Access controls

Machine learning – an aspect of AI that discerns patterns from the information it ingests – requires large amounts of data to learn accurately and effectively. As a result, you may end up with large data pools that increase vulnerability.

This vulnerability makes access controls a particular area of concern, according to Chris. Healthcare organizations should ask themselves:

  • Who can see that data?
  • Who has permission to manage the data?
  • Who has permission to make changes to the rules/algorithms/models the machine is using to learn?

And it’s not just who, but what, added Iliana. There may be other applications, systems, or enterprises accessing your information. It’s essential to understand all of the people and entities with access, and to ensure the controls on each match the level of access it actually needs.
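
To make the point concrete, here is a minimal sketch of role-based access checks over an ML data pool (the role names, permissions, and is_allowed function are hypothetical illustrations, not something discussed in the webinar): separate permissions govern who may read the data, who may manage it, and who may change the rules, algorithms, or models.

```python
from enum import Enum, auto

class Permission(Enum):
    READ_DATA = auto()      # view records in the training data pool
    MANAGE_DATA = auto()    # add, correct, or delete records
    MODIFY_MODEL = auto()   # change the rules, algorithms, or models

# Hypothetical role-to-permission mapping; a real deployment would load
# this from the organization's identity and access management system.
ROLE_PERMISSIONS = {
    "data_analyst": {Permission.READ_DATA},
    "data_steward": {Permission.READ_DATA, Permission.MANAGE_DATA},
    "ml_engineer": {Permission.READ_DATA, Permission.MODIFY_MODEL},
}

def is_allowed(role: str, permission: Permission) -> bool:
    """Return True only if the role explicitly grants the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# Example: a data analyst may read the data but not change the model.
assert is_allowed("data_analyst", Permission.READ_DATA)
assert not is_allowed("data_analyst", Permission.MODIFY_MODEL)
```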

#2: Who built your robot?

Next, said Chris, you should ask yourself: “Who built that machine learning system?”

  • Do you know who built the algorithm or model you’re using?
  • How can you trust them?

What procedures can you put in place to make sure the people giving you this information are doing so securely and with the best intentions?

Organizations using AI in healthcare should take care to avoid bias and the exacerbation of disparities – inherent bias is one of the most pressing concerns with AI. Learn the goals and motivations of those who programmed the machine, understand the data it’s drawing patterns from, and ask questions about any potential gaps that could compromise the outcome.
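
One lightweight way to keep those answers on record, sketched below with hypothetical field names rather than any prescribed standard, is to store provenance metadata alongside each model so reviewers can see who built it, what data it learned from, what it is intended for, and which gaps were identified.

```python
from dataclasses import dataclass, field

@dataclass
class ModelProvenance:
    """Hypothetical provenance record kept alongside a deployed model."""
    model_name: str
    developer: str                # person or vendor who built the model
    training_data_source: str     # where the training data came from
    intended_use: str             # e.g., research only vs. clinical decision support
    known_gaps: list[str] = field(default_factory=list)  # populations or scenarios not covered

record = ModelProvenance(
    model_name="readmission-risk-v2",
    developer="Example Analytics Vendor",
    training_data_source="2015-2019 inpatient encounters, single health system",
    intended_use="research only",
    known_gaps=["pediatric patients underrepresented"],
)
print(record)
```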

#3: Data integrity

As highlighted in the AMA’s policy recommendations on AI, “garbage in, garbage out” applies just as much to artificial intelligence as it does to everything else. If your database only includes information from a cohort of one million men, then your clinical decision support for women and children may be weak. It’s important to consider these built-in biases, Iliana emphasized.

And it’s not enough just to be aware that bias exists. In order to tackle the type of data corruption that bias causes, you must be able to detect and remediate hidden prejudices within the information that’s provided to the machine. Furthermore, the algorithm itself must be analyzed to ensure that nothing about the way it’s coded produces bias.

“You need robust data that is as bias-free as possible, and you need robust programming that is as bias-free as possible,” Iliana said. “That ensures a good outcome.”
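
As a small illustration of that point, the sketch below screens a dataset for demographic groups that are badly underrepresented before it is used for training. The group labels and the 10% threshold are illustrative assumptions, not clinical or regulatory standards.

```python
from collections import Counter

def underrepresented_groups(records, key, min_share=0.10):
    """Flag demographic groups whose share of the dataset falls below min_share.

    `records` is a list of dicts; `key` names the demographic field to check.
    The 10% threshold is illustrative only.
    """
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()
            if count / total < min_share}

# Example: a cohort dominated by one group yields weak support for the others.
sample = [{"sex": "male"}] * 950 + [{"sex": "female"}] * 40 + [{"sex": "unknown"}] * 10
print(underrepresented_groups(sample, "sex"))  # {'female': 0.04, 'unknown': 0.01}
```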

#4: Provider security

It’s equally important to assess the security posture of any provider of AI-enabled technology, said Iliana.

“Is security baked in?” Iliana added. “You don’t want to have to be asking questions later on about how the data of any particular technology you’re using is being secured by that technology.”

  • Are the people who developed your technology reputable?
  • Can you rely on them and openly discuss the security controls they have built into the technology to ensure your data is protected?

Chris added that evidence of security-rich DNA can come in the form of a SOC 2 Type 2 attestation, HITRUST certification, ISO 9001 compliance, and more. What security regulations and frameworks does the provider follow? Referenceable customers and past successes can also help demonstrate the partner’s commitment to security.
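
One way to make that diligence repeatable, sketched below with assumed field names, is to record which pieces of security evidence a provider has supplied and flag anything missing before the purchase moves forward.

```python
# Hypothetical due-diligence record for an AI technology provider.
REQUIRED_EVIDENCE = ["SOC 2 Type 2 report", "HITRUST certification", "customer references"]

vendor_evidence = {
    "SOC 2 Type 2 report": True,
    "HITRUST certification": False,
    "customer references": True,
}

missing = [item for item in REQUIRED_EVIDENCE if not vendor_evidence.get(item)]
if missing:
    print("Follow up before purchase:", ", ".join(missing))
else:
    print("All requested security evidence is on file.")
```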

#5: Board communications

Not only can you use your knowledge to discuss the benefits of AI with your board, but you can also highlight ways to better protect the privacy and security of patient data. All purchases should go through the regular IT requisition process regardless of who’s championing the technology. This helps ensure all relevant parties understand the risks involved and can establish plans to mitigate them.

And make sure your board understands that not all technology is suitable for all uses.

“These types of tools may not be designed for clinical decision support – they may just be designed for research,” Iliana explained.

AI can be revolutionary in healthcare – from improving diagnoses to protecting patient privacy – but it has its pitfalls. Understanding risks like medical bias, and taking steps to avoid them, helps ensure that the AI your organization adopts serves both the organization and its patients accurately.