A recent FDA bulletin puts the success rate of Phase II clinical trials at just 18%, and of Phase III trials at 50%, citing insufficient efficacy and safety concerns as the most common reasons for drug trial failures. Given the challenges of disease heterogeneity and misleading inferences from Phase I data, U.S. regulators have suggested that the chances of clinical success might be improved if drug companies designed studies that stratified populations to better reflect “the intended use of the drug.”
Michael Liebman, CEO of IPQ Analytics, recently spoke with PropThink about how big pharma could follow the FDA’s advice with better outcomes. IPQ is an analytics firm that supports industry clients in improving planning and outcomes in the product development process. Liebman is a thought leader in the field of biomedical informatics. He served as Global Head of Computational Genomics at Roche Pharmaceuticals (RHHBY) and Director of Bioinformatics and Pharmacogenomics at Wyeth Pharmaceuticals, and today serves in leadership roles at numerous medical and scientific institutions (full profile) in addition to his role at IPQ.
PropThink: The successful sequencing of the human genome in 2003 provided drug researchers with new tools to understand the function of genetic factors in human disease. Could you comment on why, a decade later, big pharma is slow to translate findings in the lab into better treatments for common disorders such as obesity, dyslipidemia or hypertension?
Liebman: A disconnect exists between medical research and applying its discoveries to unmet clinical needs. Whether it be assessing the clinical utility of using Selective Estrogen Receptor Modulators (SERMs) for the chemoprevention of breast cancer in high-risk women, or evaluating the risks to patients who receive treatment with one of the new prescription weight-loss drugs, scientists and patients alike are looking for the “silver bullet” to solve their problems. Genomics is part of the solution – but not the answer to all of the problems out there, and it’s critical to understand the true complexity of real-world clinical problems, not just aggregate data.
PropThink: Could you elaborate on this?
Liebman: Computational modeling at the molecular level and new molecular technologies generate so much data – information that far outstrips the ability of scientists to convert it all into usable knowledge. Consequently, for the purposes of diagnosis and hypothesis generation, academic research tends to study disease processes at a single point in time. Researchers are like the proverbial man who looks for his keys under the street lamp: not because he dropped them there, but because the light’s better there.
Disease needs to be understood as a set of complex biological processes that evolve over time through the interaction of not just genetics but also environmental and lifestyle factors and the patient’s clinical history.
PropThink: So if I understand you correctly – though genomics and other high-throughput technologies can generate lots of data, it doesn’t mean the answers can be found specifically in the data.
Liebman: A fundamental problem that exists in research labs is the perception that “more data” implies “more knowledge.” This is reflected in the allure of – and the dependence on – technology to problem solve, rather than investing in the time to find out what the real problems are. Again, disease represents an evolving process, not a simple state, and all of these factors, including genomics, impact the disease process differently at each step in its evolution.
PropThink: The average cost to develop one new drug – pre-clinical through two Phase III trials – has risen from $300 million in 1987 to about $1.3 billion, according to the Tufts Center for the Study of Drug Development. Won’t your bi-directional “bench to bedside” approach increase an already costly approach to drug development?
Liebman: To the contrary – moving bedside problems into the lab, early, should actually prove to be the less expensive approach.
Companies prefer to develop a drug to be marketed to as broad a treatment population as possible – imposing a grouping of potential disease sub-types under one simple classification. Time and again, however, when the drug fails to hit its pivotal primary endpoints, companies are left scrambling to salvage the drug. They seek sub-populations in which the drug may have shown benefit.
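The dilution effect Liebman describes can be made concrete with a toy calculation. The numbers below are entirely hypothetical, invented only to show the mechanism: when two disease sub-types are pooled under one label, a strong effect in one sub-type is averaged away in the aggregate result.

```python
# Hypothetical illustration: pooling two disease sub-types under one
# classification masks a drug effect that exists only in one sub-type.
# All response scores are invented for demonstration.

def mean(xs):
    return sum(xs) / len(xs)

# Response scores (higher = better) under drug vs. placebo.
# Sub-type A responds to the drug; sub-type B does not.
drug_A    = [8, 9, 10, 9, 8]   # strong responders
placebo_A = [4, 5, 4, 5, 4]
drug_B    = [4, 5, 4, 5, 4]    # non-responders
placebo_B = [4, 5, 4, 5, 4]

# Pooled analysis: what a broad, unstratified trial would report.
pooled_effect = mean(drug_A + drug_B) - mean(placebo_A + placebo_B)

# Stratified analysis: what stratifying the disease first would reveal.
effect_A = mean(drug_A) - mean(placebo_A)
effect_B = mean(drug_B) - mean(placebo_B)

print(f"pooled effect:     {pooled_effect:.1f}")   # 2.2 — diluted
print(f"sub-type A effect: {effect_A:.1f}")        # 4.4 — the real signal
print(f"sub-type B effect: {effect_B:.1f}")        # 0.0
```

In this sketch the pooled trial sees only half the true effect in the responsive sub-type, which is exactly the situation that leaves companies “scrambling to salvage the drug” by hunting for sub-populations after the fact.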
My approach is to stratify the disease before development begins: to better assess market opportunity and efficacy requirements, to better identify appropriate test subjects before enrollment in clinical trials, and to shorten trials by making them more directed. By focusing on the economics earlier in the game, clinical trials can be designed optimally, likely resulting in fewer outcome surprises and lower costs, leading to quicker regulatory approval and a longer marketing window under patent protection. Shifting investment to earlier in the drug development process can significantly improve the financial return.
PropThink: Can you give an example?
Liebman: I believe we have shown this in two different ways that are critical for evolving the discovery process for new drugs and achieving their commercial success.
First, we have to evolve the focus on creating “blockbuster” drugs. As we’ve shown through pharmaco-economic modeling, investing in a better understanding of the disease and its complexity can help identify the optimal target patient earlier, evaluate the potential market more accurately, and speed up development by focusing clinical trials more tightly. This can lead to shorter, more cost-effective trials and result in earlier product marketing and an improved financial return through a longer protected lifetime for the product.
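The economics behind “a longer protected lifetime” can be sketched in a few lines. This is a back-of-the-envelope model with hypothetical sales and cost figures, not IPQ’s actual pharmaco-economic model; the one real fact it relies on is that a U.S. patent term runs roughly 20 years from filing, so every year saved in development is an extra year of patent-protected sales.

```python
# Hypothetical sketch of the pharmaco-economic argument: shorter, more
# directed trials mean a longer patent-protected marketing window.
# ANNUAL_SALES and DEV_COST_PER_YEAR are invented illustrative figures.

PATENT_LIFE = 20           # years from filing (approximate US term)
ANNUAL_SALES = 500.0       # $M/year while on patent (hypothetical)
DEV_COST_PER_YEAR = 100.0  # $M/year spent during development (hypothetical)

def net_return(dev_years):
    """Protected-market revenue minus development cost, in $M."""
    protected_years = max(PATENT_LIFE - dev_years, 0)
    return protected_years * ANNUAL_SALES - dev_years * DEV_COST_PER_YEAR

broad_trial = net_return(12)  # long, broadly-targeted program
stratified  = net_return(9)   # shorter, stratified, more directed program

print(f"12-year program: ${broad_trial:,.0f}M")
print(f" 9-year program: ${stratified:,.0f}M")
print(f"gain from 3 years saved: ${stratified - broad_trial:,.0f}M")
```

Under these made-up figures, each year shaved off development is worth the year’s avoided cost plus a full year of protected sales, which is why front-loading the investment in disease understanding can pay for itself many times over.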
Second, we need to recognize that “comparative effectiveness” needs to be more than just comparative efficacy. Although the focus of clinical development is on succeeding in clinical trials and achieving regulatory approval, it is critical to understand whether the physician will prescribe the drug and whether the patient will take the drug, because without these we only have laboratory success, not commercial success for the product, and potentially a financial deficit.
Both of these reflect an engineering or systems-based perspective on both the patient and the healthcare system, and recognize the actual complexity of healthcare in the real world. A patient is not just a clinical trial participant but must be treated as a system or a set of subsystems that have been acted on by many elements, from lifestyle choices [smoking, alcohol intake] to environmental stresses and, of course, a clinical history of other diseases and treatments.
PropThink: This approach of viewing the whole patient and applying customized treatments that reflect each person’s unique confluence of biology and experience sounds a lot like translational medicine/research.
Liebman: I actually have a different definition than the one that’s typically used today: the conventional use has TR/TM meaning the translation of research from the laboratory into clinical use. There is a problem and a notable gap in that only an extremely limited amount of laboratory research makes it into clinical use, although almost all academic researchers have this as their goal.
The problem is that they don’t start with a clearly defined clinical problem or unmet clinical need but rather something that’s of academic interest and assume that it will convert into clinical use – this is the “bench to bedside” transition that is the common definition.
I define the “bedside to bench to bedside” transition as being the true definition of TR/TM. I work with clinicians to build models of the full disease process to identify real clinical problems or gaps, bring those into the laboratory, and then their solutions will have immediate clinical utility. This differs from the focus of the NIH National Center for Advancing Translational Sciences (NCATS); but I believe it is the only way to make real progress . . . it’s construed to be a bit more applied than basic research.
Many academics will tell you that this is what they do . . . but the observed failure of research results to achieve true clinical utility tells me otherwise.
PropThink: Can you give us a concrete application of where modeling pathway behavior, starting at the point of hypothesis generation from the “bedside” [patient] improved the odds of trial success and simplified R&D effort?
Liebman: Yes, though I do need to maintain discretion regarding our client.
“PharmaCo” was developing a “New Product” believed to be superior to a similar formulation using a competing delivery system in treating brain metastases resulting from breast cancer. Based on preliminary results, PharmaCo decided to move ahead with a Phase I/IIa study.
PharmaCo engaged us to get an early assessment of the potential risks that the New Product could face during the clinical trial phases as well as commercialization. An early understanding of the risks would allow PharmaCo to take the appropriate upfront measures in the study design and implementation to mitigate the risks and improve the overall odds of trial success and commercial viability.
In particular, PharmaCo was interested in having us determine whether there was a better or alternative target indication for the New Product; whether there was a better alternative for the current Phase I/II trial; and whether there were other viable options for the proposed Phase IIb trial.
By identifying potential clinical problems – bridging the void from bedside to bench – we identified several additional new risk factors that could have negatively impacted the success of the clinical trial as well as commercialization. Specifically, our analysis identified issues with patient selection for the primary indication, as well as the established inclusion/exclusion criteria and potential adverse event risks (such as Herceptin drug interactions, through either direct or indirect mechanisms).
Additionally, we identified a second indication that had the potential to double the market opportunity and provide even more criteria for patient selection!
PropThink: A final question. Notwithstanding hard data on return-on-investment, how difficult has it been to convince and convert the current crop of industry leaders – getting them to focus more on causality than correlation and this approach of yours that “emphasizes solving problems over just generating hypotheses”?
Liebman: First, let’s be clear on the terms. Whereas current thinking identifies correlation between isolated variables as the key, we believe a systems-based, integrative approach will help determine causality, thereby improving diagnosis and treatment [outcome].
For us to establish a broad impact will require addressing current priorities, like ROI, and showing how the bottom line can change with new approaches. But the industry is extremely risk-averse, so it needs to be done in pockets before being considered for more general adoption by industry R&D. There are some individuals who “get it,” but they are frequently locked into a system that does not enable risk-taking, at least in the large corporations.
Small companies can be more flexible but it requires getting into their gestalt at the beginning, not after they’re underway. I have been told by CEOs of several small companies that they cannot afford to have me evaluate the risks, etc. of their company’s only product that is under development because they cannot go to their board and tell them that “this sole product is not positioned for success!”
PropThink: This kind of thinking – cash-strapped small companies managing costs at the expense of success – is part of what makes investing in biotechnology difficult. On the flip side, large pharma, with its deep pockets, seems entrenched in a methodology that generates few major treatment breakthroughs.
Regardless, insight into your approach at IPQ is appreciated, Dr. Liebman. Thanks for your time.
Liebman: You’re welcome.
Michael Liebman, PhD, is Founder and CEO of IPQ Analytics, a life sciences and healthcare analytics firm. Liebman is a leading scientist and thought leader in the field of biomedical informatics. He serves on 14 scientific advisory boards, including the PhRMA Foundation, and is on the Board of Directors of the Nathaniel Adamczyk Foundation for Pediatric ARDS. Michael has been the Managing Director of Strategic Medicine after serving as Executive Director of the Windber Research Institute since 2003. Previously, he was Director of Computational Biology and Biomedical Informatics at the University of Pennsylvania Cancer Center. He served as Global Head of Computational Genomics at Roche Pharmaceuticals and Director of Bioinformatics and Pharmacogenomics at Wyeth Pharmaceuticals. His research focuses on computational models of disease progression – stressing risk detection, disease-process and pathway modeling, analysis of lifestyle interactions, and causal biomarker discovery – and on moving bedside problems into the research laboratory to improve patient care and quality of life. He received a PhD in physical chemistry and protein crystallography from Michigan State University in 1977.