AI-supported Access Decisions May Not Reflect Our Values

July 12, 2023

Article by:

Camm Epstein
Founder
Currant Insights

As artificial intelligence (AI) permeates all facets of work and life, many thought leaders are increasingly concerned that AI poses an existential risk to humanity. A well-known thought experiment imagines how an AI algorithm designed to maximize the manufacturing of paper clips, a seemingly innocuous goal, could conclude that humans should be eliminated! And in a survey of AI researchers, 48% of respondents gave at least a 10% chance of an extremely bad long-term outcome (e.g., human extinction). Those reflecting on this risk imagine how artificial intelligence gone rogue could, at best, make decisions that do not reflect our collective societal values and, at worst, lead to extinction or an irreversible collapse of human civilization.

AI has been used beneficially to predict the efficacy of novel targeted drug therapies from antibiotics to oncolytics. It has been used successfully to help spot breast, lung, and prostate cancers — cancers with high incidence rates and greater utilization than rare cancers. When utilization is higher, the amount of data that can be used as training data is greater. (Ceteris paribus, the more training data, the better the predictions.) And early work has trained models to help detect amyloid-related imaging abnormalities (ARIA). These models will likely improve as more people with Alzheimer’s are treated with Leqembi, which requires a recent pretreatment baseline brain MRI; MRIs prior to the 5th, 7th, and 14th infusions; and, if indicated, an MRI if a patient experiences symptoms suggestive of ARIA. More utilization, more images, more training data, better predictions.

Payers have leveraged AI to manage customer-service inquiries, process claims, review charts, and detect fraud, waste, and abuse. When making access decisions, payers and health systems have also used AI to identify high-risk patients (for disease onset, hospitalization, rehospitalization, and noncompliance) and to review prior authorization (PA) requests. Danger, Will Robinson, Danger! While AI-supported access decisions do not pose an existential risk to all of humanity, they pose a potential risk to some individuals and small groups when training data underrepresent these patients or contain missing data. Resulting decisions may be biased, perpetuate inequities and disparities, and fail to reflect our collective societal values.

Small numbers

Machine learning requires training data, and too little of it can result in unacceptable and potentially harmful false-positive and false-negative rates. Identifying high-risk patients is a potentially cost-effective goal that can lead to targeted education, outreach, and efforts to reduce access barriers (e.g., patient support). But AI-based predictions of risk will likely be less accurate for rare diseases, or for small subgroups within more common conditions, due to a small-numbers problem. Other than increasing the amount of training data (which may not be possible due to limited utilization data), this problem has no good fix.
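The small-numbers problem is easy to see in a simulation. The sketch below is purely illustrative: the data, group sizes, and risk relationships are invented, and it assumes NumPy and scikit-learn are available. It simply shows how a model fit mostly on a well-represented group can post much higher error rates for a small subgroup whose risk relationship differs.

```python
# Illustrative sketch of the small-numbers problem (simulated data, not real claims).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, effect):
    """Simulate one subgroup: a single predictor whose link to the outcome is 'effect'."""
    x = rng.normal(size=(n, 1))
    y = (rng.random(n) < 1 / (1 + np.exp(-effect * x[:, 0]))).astype(int)
    return x, y

# A well-represented group and a small subgroup whose risk relationship differs.
x_common, y_common = make_group(5000, effect=2.0)
x_rare, y_rare = make_group(50, effect=-2.0)

model = LogisticRegression().fit(
    np.vstack([x_common, x_rare]),
    np.concatenate([y_common, y_rare]),
)

# The pooled model tracks the majority and misclassifies the small subgroup.
for name, xg, yg in [("common", x_common, y_common), ("rare", x_rare, y_rare)]:
    pred = model.predict(xg)
    fnr = np.mean(pred[yg == 1] == 0)  # false-negative rate within the group
    fpr = np.mean(pred[yg == 0] == 1)  # false-positive rate within the group
    print(f"{name}: false negatives {fnr:.2f}, false positives {fpr:.2f}")
```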

Missing data

Missing data may also reduce the accuracy of AI-based predictions for some groups. Suppose a health system, integrated delivery network (IDN), or payer with access to the electronic health record (EHR) trains an AI algorithm to extract predictive signals from unstructured EHR text to review PA requests. A PA algorithm trained on the available utilization data may not perform as well for some racial, ethnic, geographic, socioeconomic, and other underrepresented groups that often experience health disparities. Other than weighting the currently available data, there is no quick fix for this type of missing data.
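One common form of weighting is to reweight the available records by inverse group frequency so an underrepresented group is not drowned out by the majority. The sketch below is illustrative only; the features, outcomes, and group labels are simulated placeholders, and it assumes scikit-learn's sample_weight support. Weighting can reduce, but not eliminate, the distortion caused by missing records.

```python
# Illustrative sketch: inverse-frequency sample weights for an underrepresented group.
# Features, outcomes, and group labels are simulated placeholders, not real access data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_major, n_minor = 9000, 1000
group = np.array([0] * n_major + [1] * n_minor)  # 0 = well represented, 1 = underrepresented
X = rng.normal(size=(group.size, 3))
y = (rng.random(group.size) < 0.2).astype(int)   # placeholder outcome labels

# Weight each record by the inverse of its group's frequency so both groups
# contribute comparably to the loss being minimized.
freq = np.bincount(group) / group.size
weights = 1.0 / freq[group]

model = LogisticRegression().fit(X, y, sample_weight=weights)
```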

Even when the record itself is not missing, important information in the EHR (and thus in the training data) may be missing. There are several methods for dealing with this type of missing data; the best fixes arguably use predictive models to impute values when the data are missing completely at random (i.e., the missing observations are a random subset of all observations). However, when the missingness is not random, the imputed values may be biased, which, in turn, may bias AI-based predictions. For example, patient-reported outcomes (PROs) documented in the EHR may be used to satisfy some PA criteria. It is reasonable to assume that language barriers and cultural differences may systematically affect whether and how PROs are captured, making that missingness nonrandom and, by extension, affecting access. Unfortunately, this can perpetuate inequities and disparities.
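The distinction matters in practice. The sketch below, with invented numbers and assuming only NumPy, compares what a model relying on observed values would learn when a patient-reported score is missing completely at random versus when low scores are disproportionately unrecorded.

```python
# Illustrative sketch: missing completely at random (MCAR) vs. nonrandom missingness.
# The scores and missingness rates are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
scores = rng.normal(loc=50, scale=10, size=10_000)  # e.g., a patient-reported outcome score

# MCAR: a random 30% of scores go unrecorded.
mcar_missing = rng.random(scores.size) < 0.30

# Not at random: low scores (e.g., reflecting language barriers) go unrecorded more often.
mnar_missing = rng.random(scores.size) < np.where(scores < 45, 0.60, 0.10)

for label, missing in [("MCAR", mcar_missing), ("not at random", mnar_missing)]:
    observed_mean = scores[~missing].mean()  # what mean imputation would learn
    print(f"{label}: true mean {scores.mean():.1f}, observed mean {observed_mean:.1f}")
```

Under MCAR the observed mean tracks the true mean; under nonrandom missingness it is pulled upward, and any values imputed from it inherit that bias.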

Measuring success

Access decisions and their feedback loops are complex. They often reflect a confluence of variables including, but not limited to, competition, utilization, differentiation, negotiation, spend, and trend. Now imagine a brave new world where payers or health systems leverage AI to optimize access by informing formulary and medical policy decisions and, possibly, contracting decisions. Further, imagine a future where manufacturers similarly leverage AI to inform pricing and contracting decisions.

This vision faces both the small-numbers and missing-data problems. As for small numbers, some markets may have too few products or patients to yield accurate predictions. As for missing data, the dependent variable, some measure of success (however it is defined, e.g., lower costs or increased revenue, and operationalized), is often missing from access data. To be usable as training data, a measure of success would first have to be tracked. Using AI to predict access success would be challenging but potentially very rewarding.

This use case may currently sound like science fiction, but science fiction often becomes reality.

Mirror, mirror on the wall

Which AI-supported access decisions are fairest of them all? The fairest use cases are those with ample training data, little to no missing data (and, when data are missing, missing completely at random), and a valid measure of success. If AI were used to optimize access decisions in a way that minimizes bias and the risk of perpetuating past inequities, then it would likely make more sense to use it for prevalent conditions and/or competitive markets. AI-supported access decisions for most rare conditions could do too much harm.

Some AI-supported access decisions could pose serious and, at the extreme, existential risk to some vulnerable individuals and small groups. Those making AI-supported access decisions should be aware of the risks and try to mitigate biases, inequities, and disparities. Hopefully, future AI-supported access decisions will reflect our collective societal values.
