Aligning Your Healthcare Organization with AMA’s AI Policy Recommendations


AI can empower care providers when used properly

Artificial intelligence has the potential to transform healthcare for the better. AI can help care providers improve patient care by supporting more accurate diagnoses, increasing efficiency, and strengthening the security of patient data. And now, the AMA’s new AI policy recommendations make it clear where healthcare should be headed.

Thirty-one percent of healthcare executives say that AI is the most disruptive technology in the industry, outranking IoT, 3D printing, and robotics, according to PwC’s 2017 Global Digital IQ Survey. And as many as 50 percent of healthcare organizations plan to adopt artificial intelligence in the next four years.

AMA Releases AI Policy Recommendations

As an influx of vendors with machine learning capabilities enter the marketplace, healthcare organizations need to ensure they are harnessing the power of these advanced technologies in an ethical and legal way. The goal should always be to increase patient trust and improve patient care. If AI technology is not properly vetted, organizations may find themselves with legal, ethical, and regulatory burdens, along with diminished patient trust.

The American Medical Association recently passed its first policy recommendations on augmented intelligence (its term for artificial intelligence) to help care providers reap the benefits of AI and machine learning with the best interest of patients in mind. The policy recommendations highlight the need for thoughtfully designed AI that:

  • Preserves the security and integrity of personal data
  • Follows best practices in user-centered design
  • Conforms to reproducibility standards
  • Is transparent
  • Addresses bias
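
Transparency and reproducibility become concrete when provenance metadata is recorded alongside a deployed model. The sketch below shows one lightweight way to do that; the `ModelCard` structure and all field names and values are illustrative assumptions, not anything prescribed by the AMA.

```python
# Illustrative provenance record ("model card") for a deployed clinical model.
# All names and values are hypothetical examples, not AMA requirements.
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """Metadata that supports transparency and reproducibility audits."""
    name: str
    training_data: str                 # description of the data used to train the system
    algorithm: str                     # the learning algorithm used
    known_biases: list = field(default_factory=list)  # documented bias checks
    monitoring: str = "none"           # results of behavioral monitoring

card = ModelCard(
    name="sepsis-risk-v1",
    training_data="de-identified EHR records (hypothetical)",
    algorithm="gradient-boosted trees",
    known_biases=["under-representation of pediatric patients"],
    monitoring="monthly drift report",
)
print(asdict(card)["algorithm"])  # gradient-boosted trees
```

A record like this gives auditors and clinicians a single place to check what data, algorithm, and monitoring evidence stand behind a model's output.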

“As technology continues to advance and evolve, we have a unique opportunity to ensure that augmented intelligence is used to benefit patients, physicians, and the broad healthcare community,” said AMA Board Member Jesse M. Ehrenfeld, M.D., M.P.H.

3 Principles for the Ethical and Legal Use of AI in Healthcare

So how can care providers capture the benefits of AI while maintaining patient trust and avoiding ethical and legal blunders? Organizations around the globe are joining the AMA to take up the issue of ethical, or principled, AI. In each case, the goal has been to build a framework to help developers, vendors, end users, and consumers use this technology in a way that empowers its users and subjects.

Common to all these frameworks are three principles that serve as foundational considerations for the use of AI in healthcare:

  • Principle #1: Values alignment. When King Midas asked for everything he touched to turn to gold, he wanted to be rich, not turn his friends and family into hunks of precious metal. With values alignment, you want to make sure the system’s goals align with those of the recipient, patient, customer, and care provider so that there are no unintended outcomes.
  • Principle #2: Transparency. There must be a level of transparency associated with machine learning in terms of consent and the intended use of data. Furthermore, how the machine is learning should be clear to those involved; algorithms should be explicit and explainable, as much as possible. What ethical values were prominent in the engineering process? What evidence is there of compliance and of the methodology used (e.g., the data used to train the system, the algorithms used, the results of behavioral monitoring)?
  • Principle #3: A human in the loop. Many workers fear that AI will make them obsolete. AI that keeps humans out of the loop, however, is doomed to fail. Machines tend to see in black and white; having a human in the loop can clear up any gray areas. If a surgical robot is faced with a life-or-death decision, for example, a surgeon may be better equipped to make that final determination.
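
The human-in-the-loop principle can be sketched as a simple decision gate: the system acts on its own only when its confidence is high, and otherwise defers to a clinician. The function name, threshold value, and labels below are illustrative assumptions, not part of the AMA recommendations.

```python
# Hypothetical human-in-the-loop gate: defer gray-area cases to a clinician.
# The threshold and all names here are illustrative, not from the AMA policy.

CONFIDENCE_THRESHOLD = 0.90  # below this, a human must make the call

def route_decision(prediction: str, confidence: float) -> str:
    """Return who acts on a model prediction: the system or a clinician."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto: {prediction}"                 # clear-cut case: act, but log for audit
    return f"defer to clinician: {prediction}"       # gray area: human decides

print(route_decision("benign", 0.97))     # auto: benign
print(route_decision("malignant", 0.55))  # defer to clinician: malignant
```

In practice the threshold itself would be set and revisited by clinicians, which keeps the final say over ambiguous cases with a human.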

When considering the use of AI and machine learning in healthcare, care providers should first understand all of the ways AI can benefit their organization, and then learn the ethical and legal considerations for vetting and implementing these powerful technologies. Doing so can empower you to responsibly improve patient care, and keep AI from becoming an insider threat itself.