Why you need risk analytics
Gain visibility into risky, abnormal behavior to secure your organization
Protect patient privacy
Securing private health information
According to the HIPAA Privacy Rule, any healthcare provider who electronically transmits health information may only disclose protected health information (PHI):
- For treatment, payment and operations
- For public interest
- As a limited data set
- To the individual
- As authorized by the individual
Use and disclosure outside of these limitations constitutes a breach subject to civil money penalties (up to $1.5M per year) and potentially criminal prosecution (up to 10 years).
When privacy and security incidents occur, healthcare organizations suffer lost business, reputation, and profits. Already-thin margins shrink further as patients lose faith in your system and seek care elsewhere. While patient care is likely your number-one priority, successfully securing PHI and complying with regulations is critical, too.
- Increase in the number of patient records breached from 2018 to 2019
- Share of healthcare organizations that have experienced a breach since 2016
- Average cost of a healthcare breach
To secure PHI, healthcare organizations must address both predictable and unanticipated risks. That means organizations need a solution that is smart, adaptable, and attuned to the unique context of a healthcare environment. A combination of artificial intelligence and behavioral analytics is critical to responding efficiently and effectively to the risks unique to healthcare organizations.
Many risks to patient privacy are well known and can be concretely defined. To comply with the HIPAA Breach Notification Rule, access that is clearly impermissible must be addressed immediately. Other access can be monitored for trends that signal an uptick or a deviation from peers, and acted upon at a given threshold.
Common patient privacy use cases
Although familiar, these threats remain common and difficult to track, particularly when you lack the appropriate resources or rely on manual methods. Some key examples of known impermissible access that Imprivata FairWarning has been helping health systems address for many years include:
Snooping
Looking at the records of coworkers, supervisors, household members, neighbors, or VIP patients for reasons other than treatment, payment, or operations violates HIPAA and is unacceptable to patients and your organization. Unfortunately, identifying this impermissible access is nearly impossible without a tool to help you mine log records.
Imprivata helps healthcare providers identify and stop snooping of all types. Imprivata FairWarning Patient Privacy Intelligence monitors record access logs and notifies you when impermissible access occurs. It can also notify you when patterns of snooping behavior are detected.
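At a very high level, this kind of log mining amounts to joining access logs against workforce and relationship data. The sketch below is purely illustrative, with hypothetical field names and records; it is not Imprivata's actual detection logic.

```python
# Illustrative sketch: flagging potential snooping by joining access logs
# against care-team and household data. All names and records are hypothetical.

access_log = [
    {"user": "jdoe", "patient_id": "P100"},
    {"user": "jdoe", "patient_id": "P200"},
]

# Hypothetical directory mapping patients to users with a care relationship
care_team = {"P100": {"jdoe", "asmith"}}

# Hypothetical mapping of users to household members' patient IDs
households = {"jdoe": {"P200"}}

def flag_snooping(log):
    alerts = []
    for entry in log:
        user, pid = entry["user"], entry["patient_id"]
        on_care_team = user in care_team.get(pid, set())
        household_match = pid in households.get(user, set())
        # Accessing a household member's chart with no care relationship
        if household_match and not on_care_team:
            alerts.append((user, pid, "household-member access outside care team"))
    return alerts

print(flag_snooping(access_log))
```

A real system would draw on far richer context (addresses, org charts, appointment schedules), but the core idea is the same: access is suspicious when it lacks a documented treatment, payment, or operations relationship.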
Inappropriate record modification
Most healthcare organizations have a policy that prohibits users from viewing, editing or canceling their own records. While this activity could be innocuous, “self-modification” poses a risk of fraud, drug diversion, and financial loss to the organization.
Imprivata FairWarning Patient Privacy Intelligence uses AI and behavioral analytics to cross-reference user profile information against patient information. This feature triggers an alert to identify noncompliance. Awareness that this monitoring is in place also acts as a preemptive deterrent.
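Conceptually, catching self-modification means matching the identity attributes on a user's profile against the patient record being touched. The following is a hypothetical sketch, not the product's actual matching algorithm.

```python
# Hypothetical sketch: cross-referencing user profiles against patient
# records to catch "self-modification" (users editing their own charts).

users = {"u1": {"name": "Pat Lee", "dob": "1985-03-02"}}
patients = {"P7": {"name": "Pat Lee", "dob": "1985-03-02"}}

def self_access_alerts(events):
    alerts = []
    for e in events:
        u = users.get(e["user_id"], {})
        p = patients.get(e["patient_id"], {})
        # Match on identity attributes; a real system would use fuzzier
        # matching (nicknames, typos, multiple identifiers)
        if u and p and u["name"] == p["name"] and u["dob"] == p["dob"]:
            alerts.append(e)
    return alerts

events = [{"user_id": "u1", "patient_id": "P7", "action": "edit"}]
print(self_access_alerts(events))
```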
Excessive record exports
It’s not uncommon for a user to print records for a handful of patients for the day’s rounds or other purposes, but when a user exports a significantly larger number of patient records than usual, it may be a sign of patient poaching, fraud, or identity theft, and should be examined.
That’s why Imprivata FairWarning Patient Privacy Intelligence analyzes log records to identify users who are exporting unusually high amounts of data.
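One simple way to express "unusually high" is to compare each user's activity against their own historical baseline. The sketch below uses a z-score threshold with made-up numbers; the product's actual analytics are more sophisticated.

```python
# Sketch: flagging users whose daily export counts far exceed their own
# historical baseline. Thresholds and data are illustrative.
from statistics import mean, stdev

history = {"nurse1": [3, 5, 4, 6, 2], "clerk9": [10, 12, 9, 11, 10]}
today = {"nurse1": 4, "clerk9": 480}

def export_outliers(history, today, z_threshold=3.0):
    outliers = []
    for user, counts in history.items():
        mu, sigma = mean(counts), stdev(counts)
        observed = today.get(user, 0)
        # Guard against zero variance; flag only large upward deviations
        if sigma > 0 and (observed - mu) / sigma > z_threshold:
            outliers.append(user)
    return outliers

print(export_outliers(history, today))  # → ['clerk9']
```

A per-user baseline matters here: 480 exports might be routine for a billing clerk running batch jobs, but wildly abnormal for a clinician who normally touches a handful of charts a day.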
Access by terminated users
When former employees, inactive users, or third-party contractors continue accessing clinical applications and records despite their change in status, it creates significant risk. At best, their rights have been properly revoked and the only question is why they were still attempting access. If even a single login remains intact, that user could export data, steal identities, or insert malware.
Imprivata FairWarning Patient Privacy Intelligence confirms that users are active in the HR system and triggers an alert for anyone who has been terminated or is on leave.
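In essence, this check reconciles application logins against the HR roster. The sketch below is a hypothetical illustration of that reconciliation, with invented statuses and user names.

```python
# Sketch: reconciling application logins against an HR roster to alert on
# access by terminated or on-leave users. All data is hypothetical.

hr_roster = {
    "asmith": "active",
    "bjones": "terminated",
    "clee": "leave",
}

logins = ["asmith", "bjones", "clee", "contractor7"]

def stale_access_alerts(logins, roster):
    alerts = []
    for user in logins:
        status = roster.get(user, "unknown")
        # Anyone not actively employed should not be logging in at all;
        # accounts missing from HR entirely also warrant review
        if status != "active":
            alerts.append((user, status))
    return alerts

print(stale_access_alerts(logins, hr_roster))
```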
Compromised credentials
Unfortunately, stolen IDs and passwords can pose an even greater risk to an organization than a lost laptop or phone. Whoever holds the credentials has unfettered access to your system to remove information or inject threats. Worse, the legitimate user typically doesn’t know their credentials have been compromised until it’s too late.
Imprivata FairWarning Patient Privacy Intelligence monitors for abnormal behavior that can signal compromised credentials. Whether a user logs in from an unusual location, at an atypical time, or exhibits other anomalous patterns of behavior, Imprivata FairWarning Patient Privacy Intelligence can alert you to the threat early.
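A toy version of this idea compares each login against a per-user profile of usual hours and locations. The rule, the profile structure, and the data below are all assumptions for illustration only.

```python
# Sketch: a simple per-user rule for spotting logins that deviate from a
# user's usual hours and locations, a common signal of stolen credentials.

profile = {
    "jdoe": {"usual_hours": range(7, 19), "usual_locations": {"main-campus"}},
}

def unusual_login(user, hour, location):
    p = profile.get(user)
    if p is None:
        return True  # no baseline yet: treat as unusual and review
    off_hours = hour not in p["usual_hours"]
    new_location = location not in p["usual_locations"]
    return off_hours or new_location

print(unusual_login("jdoe", 3, "main-campus"))   # 3 a.m. login, off-hours
print(unusual_login("jdoe", 10, "main-campus"))  # mid-morning, usual site
```

Production systems learn these baselines statistically rather than hard-coding them, but the signal is the same: behavior that deviates from the account owner's established pattern.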
Unknown threats
Unfortunately, you don’t always know what behavior to look for to avoid a breach. This unanticipated risk is often what keeps privacy, compliance, and IT teams up at night.
Imprivata FairWarning Patient Privacy Intelligence uses behavioral-anomaly-detection AI to examine the data from multiple angles to determine if there is an issue that needs to be investigated. Our algorithms are built on the largest and most reliable healthcare application user activity dataset in the world, which drives the best predictive capability.