4 Ways Healthcare Organizations Use Machine Learning in Privacy

Across all industries, artificial intelligence (AI) is becoming ubiquitous, and with its power to improve the patient experience, its impact is especially visible in healthcare. HIPAA requires that healthcare organizations protect patient data – regardless of the type of technology used to do so. More and more, organizations are choosing full lifecycle platforms with AI and machine learning baked in to detect advanced threats, streamline workflows, and manage false positive alerts. From patient care and privacy to advanced threats like drug diversion and identity theft, machine learning in healthcare privacy is becoming a vital way to transform the lives of both clinicians and patients. In fact, as many as 50 percent of healthcare organizations plan to adopt artificial intelligence in the next four years.

With its ability to leverage massive quantities of data, machine learning is an especially effective tool to safeguard patient privacy. Machine learning is a facet of AI that mimics human behavior, learning from large quantities of data to make decisions the way a person would — but with considerably more speed and efficiency.

Healthcare privacy officers can see myriad benefits from implementing machine learning, including:

1. Eliminating false positives

Machine learning's ability to work through massive amounts of data in a short period of time makes it well suited to alert triage. Monitoring patient record access for privacy and security incidents is essential for compliance, but analyzing a large volume of false positives is time-consuming for a privacy or compliance officer – especially when incident alerts reach into the hundreds or thousands. Machine learning algorithms can rapidly document and then close large numbers of false positives by learning from the way similar alerts have been handled in the past. Privacy, compliance, and security officers can then feel more confident that the remaining incident alerts are true violations.

In some full lifecycle platforms, this false positive reduction is tunable to your risk appetite: a more aggressive threshold auto-closes more alerts at the cost of a higher volume of false negatives, while a more conservative one surfaces more alerts for human review.
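
To make the idea concrete, here is a minimal, hypothetical sketch of that triage step in Python, assuming a labeled history of resolved alerts. The feature names and the alerts.csv file are illustrative assumptions, not any specific platform's schema.

```python
# Hypothetical sketch: learn from past alert dispositions to auto-close
# likely false positives. Features and data file are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Historical alerts: features describing each access event, plus the
# privacy officer's final disposition (1 = true violation, 0 = false positive).
alerts = pd.read_csv("alerts.csv")
features = ["same_department", "shared_patient_encounter",
            "off_hours_access", "record_count"]
X_train, X_test, y_train, y_test = train_test_split(
    alerts[features], alerts["true_violation"], test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# The tunable part: alerts scoring below this threshold are auto-closed.
# Raising it closes more alerts automatically but accepts more false
# negatives; lowering it surfaces more alerts for human review.
RISK_THRESHOLD = 0.10
scores = model.predict_proba(X_test)[:, 1]
auto_closed = scores < RISK_THRESHOLD
print(f"Auto-closed {auto_closed.sum()} of {len(scores)} alerts as likely false positives")
```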

2. Detecting anomalous behavior

Machine learning can also be used to detect unknown but risky behaviors – user activity that deviates from the normal workflow – helping to more accurately pinpoint potential security risks. A full lifecycle patient privacy platform might look across user activity and workflow, clustering these items and identifying users who fall far enough outside the bounds of a “normal cluster” that they could be considered anomalous (a simple clustering sketch follows the list below). Examples of anomalies include:

  • Identity theft
  • Sale of medical information
  • Workflow anomalies like accessing data from unexpected locations or at unusual hours
  • Mass snooping
  • Drug diversion
  • Compromised credentials
  • Suspicious patient activity/VIP access
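
Here is a minimal Python sketch of that clustering idea, assuming one row of behavioral features per user. The features, values, and DBSCAN parameters are assumptions for demonstration, not a production model.

```python
# Hypothetical sketch: cluster per-user access behavior and flag users who
# fall outside every "normal" cluster. Features and values are invented.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# One row per user: [records accessed per shift, share of off-hours
# accesses, distinct departments whose charts were opened].
user_features = np.array([
    [40, 0.05, 2],   # typical floor-nurse workflow
    [55, 0.10, 3],
    [38, 0.02, 2],
    [45, 0.08, 3],
    [300, 0.60, 9],  # far outside the normal workflow
])

X = StandardScaler().fit_transform(user_features)
labels = DBSCAN(eps=1.0, min_samples=3).fit_predict(X)

# DBSCAN labels points that belong to no dense cluster as -1; those
# users become anomaly candidates for privacy-officer review.
anomalous_users = np.where(labels == -1)[0]
print("Users flagged for review:", anomalous_users)
```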

3. Predicting cases of drug diversion

Plaguing hospitals throughout the United States, drug diversion poses an alarming security risk. Not only is it dangerous for facilities, patients, and the drug diverters themselves, it’s also expensive to leave unaddressed: in May 2018, Effingham Health System paid $4.1 million in a settlement to the United States after failing to provide effective procedures to guard against prescription drug theft.

With its keen ability to predict behavior, machine learning can help identify people who might be at risk for opioid addiction. In 2018, more than 115 people died each day from prescription opioid overdoses. For the most part, people divert drugs not with malicious intent but because they struggle with addiction.

“There are very few instances where drug diversion is made for financial gain. In the vast majority of cases, it’s addicts feeding their own addiction,” says Commander John Burke, President of Pharmaceutical Diversion Education.

Consequently, using AI to identify those at risk for opioid use and drug diversion can allow hospital staff to seek help for care workers who struggle with addiction, potentially saving lives in the process.
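
As a deliberately simplified sketch of the prediction step, the Python below compares each clinician's monthly dispensing-cabinet withdrawals to the peer baseline and flags statistical outliers for human review. The counts, names, and z-score threshold are hypothetical; real systems weigh far more signals.

```python
# Hypothetical sketch: flag clinicians whose controlled-substance
# withdrawals deviate sharply from the peer baseline. Data is invented.
import statistics

# Monthly controlled-substance withdrawals per clinician (assumed data).
withdrawals = {"rn_01": 22, "rn_02": 25, "rn_03": 19,
               "rn_04": 24, "rn_05": 61, "rn_06": 21}

mean = statistics.mean(withdrawals.values())
stdev = statistics.stdev(withdrawals.values())

# A z-score above 2 is treated as "worth a closer look," not as proof of
# diversion; flagged staff are routed to a human reviewer for follow-up.
for clinician, count in withdrawals.items():
    z = (count - mean) / stdev
    if z > 2:
        print(f"{clinician}: {count} withdrawals (z = {z:.1f}) flagged for review")
```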

4. Retaining employees

Machine learning can also predict which employees are likely to leave, giving a manager the chance to retain them by addressing their concerns early. And because the same signals can spot a potentially disgruntled employee, they can also flag the security risks associated with a departure.
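
Attrition prediction is commonly framed as a binary classification problem; the brief Python sketch below illustrates the idea with a logistic regression over invented HR features, which are assumptions for demonstration only.

```python
# Hypothetical sketch: estimate the probability an employee departs from
# a few HR signals. Features and training data are invented.
from sklearn.linear_model import LogisticRegression

# Each row: [tenure_years, overtime_hours_per_month, engagement_score]
X = [[1, 30, 2], [8, 5, 9], [2, 25, 3], [6, 8, 8],
     [1, 35, 1], [7, 4, 9], [3, 20, 4], [5, 10, 7]]
y = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 = left within the following year

model = LogisticRegression().fit(X, y)

# Score a current employee; a high probability is a prompt for the
# manager to check in, not an automatic action.
risk = model.predict_proba([[2, 28, 3]])[0][1]
print(f"Estimated departure risk: {risk:.0%}")
```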

Ethical and legal considerations of AI in patient privacy

As machine learning becomes more prominent in healthcare, principles for its regulated use must be considered. To that end, the AMA has released policy recommendations on AI and machine learning that help providers harness these tools while earning patient trust. The recommendations seek to preserve the integrity of PHI through transparency, adherence to best practices, attention to potential biases, and keeping a human in the loop.

With its ability to evaluate massive amounts of data to detect anomalous behaviors, discover the potential for drug diversion, and eliminate false positives, machine learning is a powerful tool for healthcare. Leveraged to improve privacy and HIPAA compliance, AI can help achieve the ultimate goal: improving patient trust and care.