Life hacks are those simple, clever methods that boost productivity or efficiency. While the term life hack was coined in 2004, people have always looked for ways of accomplishing more with less. The wheel, which first appeared as the potter’s wheel around 3500 BC and was picked up (pun intended) as the wheelbarrow about 3,000 years later, was invented thousands of years after sewing needles and thousands of years before WD-40 and its many uses. Hacks are also common in health economics and outcomes research (HEOR), and they are the very reason payers and other access decision makers view manufacturer-sponsored HEOR insights with caution.
According to FDA guidance, manufacturers’ HEOR related to drugs, biologics, and medical devices can be shared with payers, formulary committees, and other entities (e.g., hospitals and health systems) that have HEOR knowledge and expertise and are responsible for coverage and reimbursement decisions, but only if it relates to the disease or condition, its manifestation, or related symptoms of the patient population in the FDA-approved labeling. Wow, that’s a mouthful! Importantly, this guidance gives manufacturers considerable latitude to research and disseminate insights related to duration of treatment, health care setting, burden of illness (e.g., missed days of work), dosing/use regimen, patient subgroups, length of stay, surrogate or intermediate endpoints, clinical outcome assessments (e.g., patient-reported outcomes) or other health outcome measures (e.g., quality-adjusted life years), adherence, or persistence. Let’s walk through a few of these options and some related hacks.
Selecting variables
In essence, HEOR analysts select variables on both sides of the equation: the dependent variables, which are the outcomes or endpoints, and the independent variables, also known as predictors, which include the treatment or intervention and any controls. And unlike clinical trials, in which variables are selected in advance of data collection, variable selection in HEOR is often done after the data are collected. Hack 1: If a predictor variable is not predictive, or is less predictive than desired, then try another; and if the outcome variable does not tell the desired story, then find one that does.
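To make Hack 1 concrete, here is a minimal Python sketch on simulated data. Every variable name is hypothetical, and the data contain no true treatment effect; the point is that scanning several candidate endpoints and reporting only the best-looking one all but guarantees a flattering result.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200
treated = rng.integers(0, 2, n)           # 1 = received treatment, 0 = control

# Candidate endpoints, all pure noise (no true treatment effect)
endpoints = {name: rng.normal(size=n) for name in [
    "overall_survival", "progression_free", "time_to_progression",
    "symptom_score", "qaly_proxy"]}

# Test each endpoint and keep whichever has the smallest P value
pvals = {name: stats.ttest_ind(y[treated == 1], y[treated == 0]).pvalue
         for name, y in endpoints.items()}
best = min(pvals, key=pvals.get)
print(f"'Best' endpoint: {best} (P = {pvals[best]:.3f})")
# With 5 endpoints and no real effect, the smallest P value is routinely
# far below what a single pre-specified test would have produced.
```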
Say, for example, that overall survival data in a clinical trial are sufficient for approval but perceived by payers and other access decision makers as lackluster. The manufacturer can look for a different endpoint with a shinier outcome, perhaps progression-free survival or time to progression, to augment the trial results and help tell a value story. And HEOR analysts can iteratively test many combinations of control variables when developing a desirable model, as sketched below. The FDA permits, and big data affords, the flexibility for manufacturers to develop predictive models that support a value proposition designed to resonate with payers or other access decision makers. While the FDA says that the rationale for and consequences of including and excluding specific variables should be discussed in the analysis, full transparency is quite rare. Many, many decisions are made when analyzing and reporting HEOR, and for practical or other reasons, the full details of these decisions are typically not disclosed.
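A hypothetical sketch of that kind of specification search, again on simulated data with no true effect: refit the same model under every combination of control variables and keep the one where the treatment coefficient looks strongest. The control names here are invented for illustration.

```python
from itertools import combinations

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
treated = rng.integers(0, 2, n)
controls = {name: rng.normal(size=n)
            for name in ["age", "severity", "comorbidity", "region_code"]}
outcome = rng.normal(size=n)              # no true treatment effect

best_p, best_spec = 1.0, None
names = list(controls)
for k in range(len(names) + 1):
    for subset in combinations(names, k):
        X = sm.add_constant(np.column_stack(
            [treated] + [controls[c] for c in subset]))
        p = sm.OLS(outcome, X).fit().pvalues[1]   # P value on treatment
        if p < best_p:
            best_p, best_spec = p, subset

print(f"Winning spec: controls={best_spec}, treatment P = {best_p:.3f}")
# All 16 specifications were tried, but only the winner gets reported.
```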
Defining subgroups
HEOR analysts can choose to focus on subgroups defined by demographics (e.g., age, sex, race, and socioeconomic status), clinical characteristics, and/or disease severity. Whereas trials specify inclusion and exclusion criteria up front, the selection of patient populations in HEOR is typically done on the back end. The FDA allows manufacturers to share response rates of HEOR-defined subgroups, even when they vary from the rates reflected in the FDA-approved labeling, as long as these groups are part of the patient population for the approved indication. Hack 2: If the data indicate that some subgroups respond to treatment or show a greater response, then highlight those.
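A minimal sketch of Hack 2, assuming simulated data in which every patient has the same 40% chance of responding. The subgroup labels are hypothetical; the point is that slicing after the fact and surfacing only the flattering slices will find “winners” by chance alone.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
responded = rng.random(n) < 0.40          # everyone responds 40% of the time
age_band = rng.choice(["18-39", "40-64", "65+"], n)
sex = rng.choice(["F", "M"], n)
severity = rng.choice(["mild", "moderate", "severe"], n)

overall = responded.mean()
for factor, values in [("age", age_band), ("sex", sex), ("severity", severity)]:
    for level in np.unique(values):
        rate = responded[values == level].mean()
        if rate > overall:                # surface only the flattering slices
            print(f"{factor}={level}: {rate:.1%} response vs {overall:.1%} overall")
# By chance alone, several subgroups will "outperform" the overall rate.
```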
Increasing observations
HEOR analysts can pursue an “acceptable” P value by selecting variables and defining subgroups or by simply increasing the number of observations. Hack 3: If the test of significance falls short, then simply increase the number of cases to attain a “significant” result.
There are several reasons why payers and other access decision makers think a “significant” difference may be insignificant, and one of those reasons relates to big data. If the sample is large enough, then even the smallest effects can be statistically significant — increasing the likelihood of spotting a “significant” effect that is not real or not clinically or economically significant. There is growing concern about determined researchers who “P-hack” their way to achieving statistically significant results — an undesirable but understandable behavior in response to pressures from publishers and research sponsors.
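The sample-size point is easy to demonstrate. In the sketch below, the (hypothetical) treatment effect is fixed at a clinically trivial 5% of a standard deviation; only the number of observations changes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
tiny_effect = 0.05                        # 5% of a standard deviation

for n in [100, 1_000, 10_000, 100_000]:
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(tiny_effect, 1.0, n)
    p = stats.ttest_ind(treated, control).pvalue
    print(f"n = {n:>7,}: P = {p:.4f}")
# The effect never changes, yet with enough observations it becomes
# "statistically significant" while remaining clinically negligible.
```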
As an aside, the convention of embracing a P value of .05 or smaller as evidence that the difference is real (and for those more comfortable with the jargon, as grounds to reject the null hypothesis) is not beyond reproach. By definition, a .05 threshold means that 1 in 20 tests of a treatment with no real effect will come up “significant” purely by chance! How many would bet the farm if the risk of losing were 1 in 20? P values were developed during an era of “little data,” when samples were small, and are increasingly less relevant in today’s era of big data.
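That 1-in-20 rate is easy to verify by simulation: test many “treatments” that have no effect at all and count how many clear the .05 bar.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
trials = 10_000
false_positives = 0
for _ in range(trials):
    a = rng.normal(size=50)               # two groups drawn from the same
    b = rng.normal(size=50)               # distribution: no real difference
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1
print(f"{false_positives / trials:.1%} of no-effect comparisons were 'significant'")
# Expect roughly 5%, i.e., about 1 in 20, exactly what the threshold permits.
```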
It cuts both ways
Paradoxically, flexibility is both a strength and a weakness of HEOR. On one hand, such flexibility increases the likelihood of discovering something new, important, and, above all, actionable. Progress in personalized medicine depends on data mining and on being less dependent on theory to guide inquiries, and finding a needle in the HEOR haystack can help to improve or save lives. Real-world evidence (RWE) is what payers really, really want, and to the extent HEOR can use RWE to validate or convincingly augment clinical trial conclusions, all the better. On the other hand, such flexibility can encourage bad behaviors like P-hacking, which threaten our ability to reproduce and replicate HEOR results and to spot differences that are real.
Payers and other access decision makers would likely be less critical of manufacturer-sponsored HEOR insights if they were more transparent and if failures were reported alongside the successes. Trials can result in failures. In contrast, manufacturer-sponsored HEOR never fails, at least not in the results that manufacturers share. For payers, knowing what does not work is as important as, if not more important than, knowing what works, as this knowledge can be incorporated into prior authorization criteria. Unfortunately, any resulting restrictions would be undesirable to manufacturers. This inherent conflict puts manufacturers at odds with payers and other access decision makers.
When reviewing manufacturer-sponsored HEOR results, payers and other access decision makers may never know what ends up on the cutting room floor, but the final cut can sometimes benefit all.