
As AI use expands, ethics at the leading edge


The FDA, he says, has never asserted authority over electronic health records. “That’s one area of protection where I’d like to see the FDA continue to develop and evolve rules and oversight.”


Ethically, the proprietary nature of AI algorithms in itself can be a problem, Dr. Jackson says. “If you’re going to let an algorithm loose on patients, you need to be able to evaluate its accuracy and safety.” When companies shield their software as intellectual property and make it hard to evaluate, the consequences can be serious, as Michigan Medicine’s external validation cohort study of the Epic Sepsis Model (ESM) revealed (Wong A, et al. JAMA Intern Med. 2021;181[8]:1065–1070).

In that study, the ESM was found to have poor discrimination and calibration in predicting the onset of sepsis at the hospitalization level. When used for alerting at a score threshold of six or higher (within Epic’s recommended range), it identified only seven percent of 2,552 patients with sepsis who were missed by a clinician (based on timely administration of antibiotics).

“Owing to the ease of integration within the EHR and loose federal regulations,” the authors write, “hundreds of US hospitals have begun using these algorithms.”

The ESM story is an object lesson in the need to control AI, Dr. Jackson says. “Epic has a long history of making its software difficult to evaluate. They see it as an intellectual property area. And it’s hard to critique and evaluate it in the public sphere. We need open evaluation, open monitoring, a lot of transparency, and those mechanisms aren’t well defined yet.”

Some of the most egregious cases of AI ethics problems are in the medical insurance space, he says, where companies use algorithms to deny care. “Historically, they’ve hired doctors to make those determinations, but it’s cheaper to hire algorithms, and they’re usually not designed to answer questions about why they rejected a care decision as medically unnecessary. So they can blame the algorithm, rather than take ownership for basically denying legally required care, which is completely unethical.”


With transparency, some of these outcomes could be avoided, Dr. Jackson says. “But there’s very little transparency. These algorithms are not being independently evaluated for accuracy or performance. There’s no monitoring going on and patients are getting hurt, because the companies are pushing the envelope to see what they can get away with.” It would be much more ethically defensible, he says, if the algorithms were implemented in a way that makes what is happening transparent to stakeholders.

“As human beings and as organizations and companies, we need to own our responsibilities for developing and implementing and using AI in ethical ways. It’s not okay, if something goes wrong, to point the finger at the AI and say, ‘Oops, the algorithm screwed up.’ No—someone used that algorithm and someone needs to be held accountable if they didn’t put the controls in place to make sure it was going to be used effectively and safely.”

AI model cards, which document a machine learning model to support transparency and accountability in the development and use of AI, are one way to bring more control. “The idea is that you document the algorithm, with some explanation of how it performs, so it adds a level of transparency. It’s not sufficient; it’s a step in the right direction,” Dr. Jackson explains.

He uses car safety as an analogy for the protections needed in software. “Brakes don’t solve the problem alone; neither do airbags or driver’s licenses. But you put them all together, it makes a pretty effective safety network. So the model card is one proposal that would ensure a bit more transparency in how the models perform.”
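To make the idea concrete, below is a minimal sketch of the kinds of fields a model card typically documents, written as a small Python data structure. The structure and all of the example values are hypothetical and illustrative only; they do not represent any vendor’s or standards body’s actual format.

    from dataclasses import dataclass

    @dataclass
    class ModelCard:
        """Minimal sketch of a model card: structured documentation
        that travels with a clinical AI model for transparency."""
        model_name: str
        version: str
        intended_use: str              # the clinical question the model is meant to answer
        out_of_scope_uses: list[str]   # uses the developer explicitly warns against
        training_data: str             # population and period the model was trained on
        evaluation: dict[str, float]   # performance metrics from independent validation
        known_limitations: list[str]   # failure modes, subgroups with degraded performance
        last_reviewed: str

    # Hypothetical example values, for illustration only.
    card = ModelCard(
        model_name="example-risk-model",
        version="2.1",
        intended_use="Early warning of clinical deterioration in adult inpatients",
        out_of_scope_uses=["pediatric patients", "outpatient triage"],
        training_data="Adult admissions, 2018-2020, single health system",
        evaluation={"AUROC": 0.78, "sensitivity_at_alert_threshold": 0.33},
        known_limitations=["Not externally validated", "Alert threshold set by vendor"],
        last_reviewed="2024-01",
    )
    print(card.intended_use)

Even a simple record like this makes it harder to deploy an algorithm without stating who it is for, how it was evaluated, and where it is known to fall short.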

The classic principles of medical ethics, rooted in frameworks such as the Belmont Report for the protection of human subjects of biomedical and behavioral research, are patient autonomy, beneficence, nonmaleficence, and justice. “All of those relate to AI in one way or another,” Dr. Anderson says.

Patient autonomy, for example, should affect how patients’ clinical data goes into the AI models, he explains. “Is your clinical data being used in a way that’s right and fair to the person? Beneficence involves making sure results are correct.” Nonmaleficence, the principle of “Do no harm,” could relate to making sure there are checks and balances against commercial interests of corporations overriding ethical treatment, he says.

As for justice, Dr. Anderson sees accessibility as a critical component. “As AI tools become available, to whom will they be available? If we can only have them in areas or hospitals that are relatively well funded with a lot of the expertise, what happens to the hospitals that don’t necessarily have five informatics experts? How do we make sure everyone can take advantage of technologies that have clear benefits?”

The White House’s Blueprint for an AI Bill of Rights and the AMA’s guidelines to advance AI in medical education through ethics, evidence, and equity point to some of the ethics glitches and gaps that have drawn attention, Dr. Jackson says.

At the CAP, too, members of the AI Committee have developed a document outlining ethics principles, which members of the Ethics and Professionalism Committee are vetting now. “But it’s very early stage,” Dr. Jackson says.

Dr. Powell cites a study of 253 articles on AI ethics in health care whose authors propose a responsible AI framework that encompasses five main themes for AI developers, health care professionals, and policymakers (Siala H, et al. Soc Sci Med. 2022;296:114782). Summarized by the acronym SHIFT, the themes (and some of the subthemes) are as follows:

  • Sustainability: responsible local leadership; societal impact on well-being of humans and the environment.
  • Human-centeredness: embedding humanness (recognition and empathy, for example) in AI agents to meet ethics of care requirements; the role of health professionals to maintain public trust.
  • Inclusivity: inclusive communication (patient-provider) and involvement in AI governance.
  • Fairness: alleviating algorithmic and data bias; health disparities in low-resource settings.
  • Transparency: safeguarding privacy; explainable AI-driven models and decisions; informed consent for data use.

In Dr. Powell’s view, pathologists are uniquely suited to helping medicine skirt some of the dangers of AI. “From a pathologist’s standpoint, of course, we would all be horrified that any sort of algorithm would be utilized clinically without having been rigorously validated at one’s own institution. We just need to remind everybody of that. It’s the basis of all clinical testing.”

Dr. Anderson is especially interested in the risks and potential benefits of AI in training, and he points to a few of the questions it raises:

  • Are trainees using AI in a HIPAA-compliant way?
  • Are trainees using AI in a way that is consistent with the educational mission?
  • Are they using it in a way that avoids plagiarism, and in a way that is safe for patients?

When Google searches were rolled out decades ago, he recalls, the initial reaction from some quarters was “to tell people not to Google stuff, that that was a really awful way and you should consult a textbook if you want the right answer. And now that’s silly.”

Large language models like ChatGPT could well follow the same course in terms of the degree to which they’ll be adopted, he says.

“They will find their way into our day-to-day work in some way, shape, or form. I have no doubt. It’s just a matter of to what extent and how they’re being used.”

Anne Paxton is a writer and attorney in Seattle.
