3 Ways Stony Brook Medicine Transformed its Privacy Program Using AI in Healthcare

The goal of any healthcare privacy professional is to protect patients’ sensitive information. But with a small privacy team monitoring thousands of EMR users, where do you start? At Stony Brook Medicine, just two professionals revolutionized the organization’s privacy program using cutting-edge artificial intelligence (AI) in healthcare.

Stony Brook Medicine is an academic healthcare system in Suffolk County, NY with over 600 beds and 8,000 employees that focuses on research while delivering quality care to patients. But with only two professionals to monitor 12,000+ EMR users, the team faced steep hurdles in building a cohesive privacy program. After a HIPAA gap analysis, Stephanie Musso-Mantione, Chief HIPAA Privacy Officer at Stony Brook Medicine, knew the organization needed a proactive monitoring program to fill those gaps. She began manually reviewing access to VIP records, household and family snooping, and other reports, but was quickly inundated: the workload was too enormous to complete alone.

“It was a mountain I just couldn’t climb by myself.” – Stephanie Musso-Mantione, Chief HIPAA Privacy Officer at Stony Brook Medicine

After the addition of Cristina Striffler, Senior Privacy Analyst, Stony Brook implemented a proactive monitoring solution with AI and machine learning capabilities, which improved their privacy program in three crucial ways:

1. Saving time

Machine learning is a facet of AI that, like a human analyst, learns from experience, but with the added benefit of identifying patterns across massive volumes of data in a fraction of the time it would take a person. Monitoring access to patient records for privacy and security incidents is essential, but when it requires dismissing a large number of false positives, it becomes frustrating and time-consuming. AI and machine learning can identify and dismiss false positives automatically: by analyzing how similar alerts were handled in the past, the system learns what to dismiss and what to route to privacy, security, and compliance officers for further investigation. This frees up valuable time for professionals to focus on legitimate violations.
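In practice, this kind of alert triage is often built as a supervised classifier trained on analysts’ past dispositions. The sketch below illustrates the idea in Python; the features, labels, and confidence threshold are illustrative assumptions for this article, not details of Stony Brook’s actual system.

```python
# A minimal sketch of ML-based alert triage, assuming simplified
# binary features and hypothetical historical dispositions.
from sklearn.ensemble import RandomForestClassifier

# Each historical alert: [same_department, treatment_relationship,
# after_hours, record_is_restricted], encoded as 0/1.
past_alerts = [
    [1, 1, 0, 0],  # routine care access
    [1, 1, 1, 0],  # routine care, after hours
    [0, 0, 0, 1],  # cross-department access to a restricted record
    [0, 0, 1, 1],  # cross-department, after hours, restricted record
]
# How analysts resolved them: 0 = dismissed as false positive, 1 = escalated.
past_dispositions = [0, 0, 1, 1]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(past_alerts, past_dispositions)

# Score a new alert; auto-dismiss only when the model is confident it
# resembles alerts that analysts previously dismissed.
new_alert = [1, 1, 1, 0]
escalation_prob = model.predict_proba([new_alert])[0][1]
if escalation_prob < 0.2:  # assumed dismissal threshold
    print("Auto-dismissed as likely false positive")
else:
    print(f"Routed to privacy officer (risk score {escalation_prob:.2f})")
```

The key design point is the feedback loop: every disposition an analyst records becomes new training data, so the model keeps narrowing the stream of alerts that reach a human.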

At Stony Brook, implementing AI and machine learning allowed the privacy team to cut its alerts in half over the past year, gaining time back in their day without missing any high-risk access. Where they used to receive six to eight alerts at a time, they now receive about three.

“Machine learning absolutely does reduce more of those false positives that we were seeing. We have data that shows that our cases, our investigations, are cut in half since we implemented AI.” – Stephanie Musso-Mantione

2. Gaining insight into user behavior

Because AI can leverage immense quantities of data and recognize patterns, it can also detect behavioral deviations, pinpointing potential privacy and security risks. When a nurse in pediatrics accesses a record from the mental health department, AI can easily detect that type of workflow abnormality, allowing privacy and security professionals to learn of potential incidents quickly and efficiently.
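To make the idea concrete, a workflow-anomaly check can be as simple as comparing each access against a baseline of what a given role normally touches. The Python sketch below is a minimal illustration under an assumed log format and an assumed rarity threshold; real monitoring platforms use far richer behavioral models.

```python
# A minimal sketch of workflow-anomaly detection on EMR access logs.
# The log fields and the 2% rarity threshold are illustrative assumptions.
from collections import Counter

# Simulated access log: (user_role, department_of_accessed_record) pairs.
access_log = [("pediatrics_nurse", "pediatrics")] * 950
access_log += [("pediatrics_nurse", "emergency")] * 45
access_log += [("pediatrics_nurse", "mental_health")] * 5

# Baseline: how often does each role touch each department?
baseline = Counter(access_log)
role_totals = Counter(role for role, _ in access_log)

def is_anomalous(role, department, threshold=0.02):
    """Flag accesses to departments this role rarely touches."""
    share = baseline[(role, department)] / role_totals[role]
    return share < threshold

# A pediatrics nurse opening a mental-health record falls far outside
# the role's normal workflow, so it is flagged for human review.
print(is_anomalous("pediatrics_nurse", "mental_health"))  # True
print(is_anomalous("pediatrics_nurse", "pediatrics"))     # False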

AI can even detect dangerous behaviors. Threats like identity theft, drug diversion, and inappropriate access to a VIP patient’s records are the kinds of anomalies AI can surface. Detecting these risks early can protect the wellbeing of patients and staff, spare an organization reputational damage, and potentially save millions of dollars in HIPAA violation fines.

3. Building a culture of privacy

Because AI can efficiently detect anomalous behaviors, privacy and security professionals can use that knowledge to build a culture of privacy within their organizations. What is a culture of privacy? The International Association of Privacy Professionals (IAPP) defines culture as “an integrated pattern of human knowledge, belief, and behavior… that characterizes an organization.” A culture of privacy exists at a healthcare organization when patient privacy comes first, even when employees aren’t being monitored.

“We do tell our workforce that they need to do the right thing even when they know they aren’t being watched. And as I said, that starts, not only, we hope, from the top, but from this office. We certainly practice what we preach.” – Cristina Striffler, Senior Privacy Analyst at Stony Brook Medicine

At Stony Brook Medicine, harnessing the power of AI to identify workflow anomalies was only the first step. With the time regained from the proactive monitoring program, the team built a comprehensive onboarding program that includes monitoring training from the very beginning, so every employee knows how to protect patient privacy. They’ve raised their office’s visibility to the point where practitioners and other users proactively come to them when they need to access records they expect will trigger an alert.

“We also have staff that will proactively email us and say, ‘Just a heads up, we have this employee in our office who accessed the following coworker’s records in order to test new appointments or to test something,’ and so they let us know right up front that we might see.” – Cristina Striffler, Senior Privacy Analyst at Stony Brook Medicine

Even with just two people to monitor thousands of users, Stony Brook was able to build a world-class culture of compliance with the help of AI, all without losing the human touch. By regaining valuable time with tools that help them detect and remediate potential privacy incidents, they’ve earned enough trust among staff that employees proactively report business reasons for questionable access before it happens.

“The combination of the technology and the human elements really can’t be understated.” – Stephanie Musso-Mantione, Chief HIPAA Privacy Officer at Stony Brook Medicine