
Using AI to Accelerate Drug Discovery

Professor Amin Rostami-Hodjegan, PharmD, PhD, FCP, FAAPS, FJSSX, FBPhS, Chief Scientific Officer, Certara

Professor Piet van der Graaf, PharmD, PhD, FBPhS, Senior Vice President, Head of QSP, Certara

Artificial intelligence (AI) helps to address two opposite scenarios in drug discovery – instances when researchers have too much data and want to make sense of it, and instances when they have too little data and need to derive the most realistic virtual alternative describing the scenario. But using AI simply to optimize target affinity for virtual compounds does not address the key bottleneck. What is harder to predict is the combination of a compound's pharmacokinetics (PK), its efficacy for the desired effects, and its avoidance of unwanted effects. Physiologically based pharmacokinetic (PBPK) and quantitative systems pharmacology (QSP) models can provide those holistic answers and predict clinical endpoints during drug discovery.

Quantitative systems pharmacology combines computational modeling and experimental data to examine the relationships between a drug, the biological system, and the disease process. QSP converts biology into clinical pharmacology and clinical endpoints, and we use AI to further that translation. There are also instances where we have very limited actual data, and we use AI to mine the huge amount of knowledge in the public domain to help create the databases needed to build QSP models.
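
To make the modeling side concrete, the sketch below shows the kind of mechanistic core a QSP model is built around: coupled differential equations in which a drug concentration drives the turnover of a biomarker. It is a minimal, hypothetical illustration, not a Certara model; all names and parameter values are invented.

```python
# Minimal, hypothetical sketch of a QSP-style mechanistic core: a drug
# concentration (first-order elimination) inhibits production of a
# biomarker via an indirect-response model. All values are invented.
import numpy as np
from scipy.integrate import solve_ivp

def model(t, y, ke, kin, kout, imax, ic50):
    drug, biomarker = y
    d_drug = -ke * drug                                # drug elimination
    inhibition = imax * drug / (ic50 + drug)           # drug effect on production
    d_biomarker = kin * (1 - inhibition) - kout * biomarker
    return [d_drug, d_biomarker]

y0 = [10.0, 1.0]  # initial drug amount (a.u.) and baseline biomarker level
sol = solve_ivp(model, (0, 48), y0, args=(0.1, 0.5, 0.5, 0.9, 2.0),
                dense_output=True)
t = np.linspace(0, 48, 200)
drug, biomarker = sol.sol(t)
print(f"Biomarker nadir: {biomarker.min():.2f} (baseline 1.0)")
```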

What makes AI useful, as opposed to just another gadget, is asking the right questions. But what are the right questions to ask to find your way through that avalanche of data? How do you find a pathway for a drug that may or may not have an effect?

If you are researching a new mechanism of action in drug discovery, by definition, you do not have any clinical data. How are you going to predict whether that new mechanism is going to be useful in the clinic? And how do you decide how much of that drug to give so it is effective but not harmful? By employing QSP and physiologically based pharmacokinetic (PBPK) modeling, you can borrow the necessary data and knowledge from a variety of other sources to answer your questions. In effect, we use AI to build models that are a mathematical representation of our current understanding of the biology.

Managing Data Overload

AI allows us to easily mine millions of documents and unstructured data sources in a systematic and meaningful manner. That is more data than any human could review and digest.

We also use AI to seamlessly couple data that is in the public domain with a pharmaceutical company’s proprietary data to build a unique database. As each company has lots of internal databases, unstructured documents, papers, and lab books, the output will be specific to that company, even though the algorithm used may be the same. As the starting point is different for each company, they will get their own unique answer, depending on their data.

The ability to mine unstructured data is incredibly important for pharmaceutical companies. They gather so much data over the years, especially through mergers and acquisitions, that it is very difficult to organize it without integration tools such as Certara’s D360 system, or to extract pearls from that ocean of data without AI tools such as Certara’s Vyasa.

Our AI platform mines about six million public sources, including the massive regulatory databases and their associated filings, memos, and meeting records, at the click of a button. That is too much data for a person to sift through, but with AI we can do it quickly and completely. We ask a question and get real-time answers. We can ask for everything that has been written about a particular compound, class of compounds, or disease, and then search and mine the data.
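
The details of such platforms are proprietary, but the underlying idea of ranking documents against a free-text question can be sketched in a few lines. The toy example below uses simple TF-IDF scoring over an invented three-document corpus; a production system would use far richer language models and millions of sources.

```python
# Toy illustration (not Certara's platform) of ranking documents against
# a free-text question with TF-IDF.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Rivaroxaban exposure increases in renal impairment.",
    "CYP3A4 inhibitors raise rivaroxaban plasma concentrations.",
    "Methotrexate is used to treat rheumatoid arthritis.",
]
query = "drug-drug interactions of rivaroxaban in renal impairment"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)
scores = cosine_similarity(vectorizer.transform([query]), doc_vectors).ravel()
for score, doc in sorted(zip(scores, corpus), reverse=True):
    print(f"{score:.2f}  {doc}")
```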

Building a Biological Map

We start by asking questions, much as you would with ChatGPT, such as “How does A go to B, and how does B go to C?”, and build a biological map. We then request scientific references to support each step.

For example, we might begin by stating, “We have hormone X. How does hormone X go to compound Y?”, and the AI tool will populate the model with that information and the pertinent reference. Then, we might ask, “How fast does it go?”, to which the AI platform might respond, “I don’t know, I can’t find it”, thus identifying a new area for research. Then we might inquire, “How much of the hormone is there?”, and it will let us know the details, and perhaps that Smith discovered it in 1973.
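
One natural way to hold such a map in software is as a directed graph whose edges carry the rate and the supporting citation, with missing entries flagging gaps for new research. The sketch below, using the networkx library and the hypothetical entities from the example above, illustrates the idea.

```python
# Hypothetical biological map held as an annotated directed graph: each
# edge records the rate (if known) and the supporting citation; missing
# values flag gaps for new research.
import networkx as nx

pathway = nx.DiGraph()
pathway.add_edge("hormone X", "compound Y",
                 rate_per_hr=0.3, reference="Smith 1973")
pathway.add_edge("compound Y", "receptor Z",
                 rate_per_hr=None, reference=None)  # no source found yet

for src, dst, data in pathway.edges(data=True):
    status = "supported" if data["reference"] else "GAP: needs research"
    print(f"{src} -> {dst}: {status}")
```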

As there is a specific set of questions that we routinely ask when we build QSP models, such as “Does that step exist? How much of X is there, how much of Y, and how fast does it go?”, it is possible to automate some steps. But once you have built the QSP model, how do you validate it?

Validating QSP Models

Once you have conducted a clinical study, you can run statistical tests and retrospective analyses of the data to validate a model. But mechanistic modeling is prospective; it focuses on extrapolating to something that has not been done before. Therefore, it requires a different view of validation.

It is not useful to say, “I only believe the weather forecast when I’ve seen the weather.” When you have seen the weather, the forecast is obsolete. You can choose either to use the model or not, but not to say, “It’s not validated because we haven’t seen the weather yet,” because you are creating the model to get a glimpse of the future. That is what we are doing with mechanistic modeling, whether it is PBPK or QSP. Obviously, when a number of earlier forecasts have turned out to be true, a level of credibility builds around the models, but that is no guarantee of precise prediction of any future event. The distinctions between model qualification, verification, credibility, and validation are the subject of a recent in-depth article by Frechen and Rostami-Hodjegan.1

Thus, our model will not say exactly what is going to happen. It is not a deterministic model that delivers a single answer. Instead, it is a stochastic model that provides the most likely answer, alongside the least likely answers, and assigns a percentage value based on the likelihood of them happening. For example, it may state that the likelihood of it raining at 4:00 pm tomorrow is 70%, and then you must decide whether you should take an umbrella or walk without it!
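
The same probabilistic framing can be shown in a few lines of code: instead of a single predicted value, a simulation propagates parameter uncertainty and reports the probability of crossing a threshold. Everything in the sketch below (the model, the parameter distribution, the threshold) is an invented placeholder.

```python
# Sketch of a stochastic prediction: propagate parameter uncertainty
# through a trivial exposure calculation and report a probability rather
# than a single answer. Model, distribution, and threshold are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
clearance = rng.lognormal(mean=np.log(5.0), sigma=0.3, size=n)  # L/h, invented
dose = 100.0                                                    # mg
exposure = dose / clearance                       # crude steady-state proxy
p_above = (exposure > 25.0).mean()                # hypothetical safety threshold
print(f"P(exposure above threshold) = {p_above:.0%}")
```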

In other words, decisions still must be made in the face of uncertainty, with or without AI and a QSP model. The models just allow you to make better informed and optimal decisions. You are determining the probability of success, which is hardly ever 100%.

Most of the models we develop now describe sets of conditions that cannot easily be studied prospectively and in large numbers, but that do occur in the clinic when drugs are administered to patients.

Providing Real-world Validation

AI helps to manage information overload, particularly when the facts are sparse and seemingly unconnected, by going through data and extracting elements that are useful. It also gives us confidence in models by gathering indirect evidence that verifies the model-informed decisions.

For example, Joseph Grillo and his colleagues at the United States Food and Drug Administration (FDA) built drug-drug interaction (DDI) models for renally impaired people taking rivaroxaban more than 10 years ago.2 It is difficult to evaluate even one drug in people with renal impairment, for both practical and ethical reasons, so no one studies whether two drugs will have a different DDI in this patient group.

The FDA team used our tools instead to create models that predicted what would happen under those conditions. Their models showed that renal impairment put patients in a higher risk bracket for a certain combination of drugs. As a result, the FDA asked Johnson & Johnson to annotate their drug label to that effect even though they did not have clinical data to show it.

Now, 10 years later, an AI analysis of real-world data from people with renal impairment who received that combination of drugs despite the warning on the label has demonstrated, indirectly through bleeding side effects, that the predicted drug interactions were occurring.

Improving Upon the Rule of 5

The fundamental questions being investigated in the lab during drug discovery have not changed significantly during the past 20-30 years. We still need to know whether a drug is permeable enough to pass through the wall of the gastrointestinal tract if it is going to be given orally. Will it dissolve, at the dose we are giving, in the gastrointestinal fluids? Then, how is the liver going to deal with it? And how long will it stay in the body?

In the past, chemists would follow the Lipinski Rule of 5 and make decisions regarding candidate drugs based on their ranking against a small number of physicochemical properties and rules derived from simple plots. Many drugs were killed because they failed to meet a single criterion, even though they might have achieved an acceptable overall result had those criteria been considered in combination.
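
For reference, the rule-of-five check itself is simple to express. The sketch below applies its four criteria to aspirin using the open-source RDKit toolkit; counting failures per criterion makes it easy to see how a single miss could sink an otherwise promising compound.

```python
# The four rule-of-five criteria applied one by one (aspirin as the
# example molecule), using the open-source RDKit toolkit.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
checks = {
    "MW <= 500":         Descriptors.MolWt(mol) <= 500,
    "logP <= 5":         Descriptors.MolLogP(mol) <= 5,
    "H-donors <= 5":     Lipinski.NumHDonors(mol) <= 5,
    "H-acceptors <= 10": Lipinski.NumHAcceptors(mol) <= 10,
}
failures = [name for name, ok in checks.items() if not ok]
print(failures or "passes all four criteria")
```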

Now AI can tell us that a candidate molecule, based on its chemistry, will have a tendency toward lipophilicity and will be acidic with a pKa of x. Many of those elements are related to the molecule having a particular charge or a binding affinity for cytochrome P450 enzymes.

Chemists used to assess those elements individually and make determinations. But AI allows us to take all the available data and integrate it into a model, which can be used to answer our questions, such as “How long will the molecule stay in the body? Will it turn into an undesirable metabolite?”

We now have access to millions of chemical structures in silico, generating lots of physicochemical properties to feed into the model. Then we can do a multivariate analysis of all the elements together. AI can support the application of a Rule of 50, 5,000, or 5 million!
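
A minimal sketch of that shift from single-property cutoffs to a multivariate analysis is to fit one model over several descriptors at once. In the example below, random data stand in for real descriptors and outcomes; it shows the shape of the approach, not a validated model.

```python
# Sketch of a multivariate alternative to single-property cutoffs: fit a
# logistic regression over several descriptors at once. Random data stand
# in for real descriptors (e.g., MW, logP, donor/acceptor counts) and a
# toy "developable" label.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))                  # four invented descriptors
weights = np.array([-0.8, -1.2, -0.5, -0.3])   # invented ground truth
y = (X @ weights + rng.normal(size=500)) > 0   # toy outcome label

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)
print(f"Training accuracy: {model.score(X, y):.2f}")
```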

Progressing Beyond Target Affinity

Chemists initially began using AI to generate virtual chemical structures. They created millions of candidate molecules on the computer and then used AI to link them to pharmacological properties. But that approach focused only on finding affinity for targets. They optimized compounds so they were potent against their chosen target and then took them into the clinic. But several of the first AI-designed compounds have failed in clinical trials or been deprioritized.3

But finding potent molecules was never really the issue. The hardest part is optimizing the pharmacokinetics and then predicting biomarkers and actual clinical efficacy. We are now working on methods where we do all three simultaneously as the basis for translational virtual drug discovery. We can make a large number of virtual compounds in an iterative manner, optimize them not just for the pharmacology but also for the pharmacokinetics, and then feed them into a QSP model that can predict clinical outcomes on the computer for a whole range of compounds.
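
The iterative, multi-objective flavor of that loop can be sketched abstractly: generate candidates, score them jointly on predicted potency and predicted PK, keep the best, and mutate. In the toy example below, both scoring functions are invented stand-ins for trained AI models, and each "compound" is reduced to a single numeric feature.

```python
# Abstract sketch of an iterative, multi-objective design loop. The
# scoring functions are invented stand-ins for trained models.
import numpy as np

rng = np.random.default_rng(2)

def predicted_potency(x):    # stand-in for an AI affinity model
    return -np.abs(x - 2.0)

def predicted_half_life(x):  # stand-in for a PK/PBPK prediction
    return -np.abs(x - 3.0)

population = rng.normal(size=50)
for generation in range(20):
    score = predicted_potency(population) + predicted_half_life(population)
    parents = population[np.argsort(score)[-10:]]           # keep top 10
    children = rng.choice(parents, size=40) + rng.normal(scale=0.2, size=40)
    population = np.concatenate([parents, children])

score = predicted_potency(population) + predicted_half_life(population)
print(f"Best joint candidate: {population[np.argmax(score)]:.2f} "
      f"(the potency/PK trade-off lies between 2 and 3)")
```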

Predicting Clinical Endpoints

By combining QSP and AI, we can also predict clinical endpoints in discovery for novel mechanisms. This is remarkable because in discovery you are working with novel targets. In the past, the best you could do was to test the compound in animal models and maybe an organ-on-a-chip, but certainly not in patients, and you could never get to clinical endpoints, which are often soft, subjective scores, such as rating how you feel on a scale of 1 to 5.

But now we can use QSP to identify virtual biomarkers. We use QSP to capture the fundamental biology, and then simulate what happens to the biomarkers, cell types, and cytokines when we put a compound into that system.

While we cannot model actual clinical endpoints, such as “How do you feel?”, in a mechanistic way, we can calibrate a QSP model with known compounds, where we do know the clinical endpoints. Then, we can put new compounds into the model, look at the virtual biomarker output, and link that to known correlates with clinical endpoints. This allows us to predict clinical endpoints for novel mechanisms in the discovery phase.
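
A minimal sketch of that calibration step, assuming invented biomarker readouts and clinical scores, is to fit a simple statistical link from QSP-simulated biomarkers of known compounds to their observed clinical scores, then apply it to a new compound:

```python
# Sketch of the calibration step: learn a link from QSP-simulated
# biomarkers of known compounds to their observed clinical scores, then
# apply it to a new compound. All numbers are invented placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

# rows: known compounds; columns: simulated biomarker readouts
biomarkers_known = np.array([[0.9, 0.2], [0.5, 0.6], [0.2, 0.9]])
clinical_score = np.array([4.2, 2.8, 1.5])   # e.g., a 1-to-5 symptom scale

link = LinearRegression().fit(biomarkers_known, clinical_score)
new_compound = np.array([[0.4, 0.7]])        # QSP output, novel mechanism
print(f"Predicted clinical score: {link.predict(new_compound)[0]:.1f}")
```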

The beauty of this approach is that we start with very little actual data, and the QSP model generates a very large virtual biomarker dataset, including biomarkers that have been measured and those that have not, which is ideal for analyzing with AI.

Helping to Repurpose Drugs

In the past, researchers approached a disease from one angle, exploring the redundancy of a receptor or the lack of an enzyme or protein, and hitting just one target. But the network causing a disease is more complex than a single failing receptor. That is why QSP models are becoming increasingly relevant.

In some cases, when researchers have redefined diseases based on biomarkers and networks rather than symptoms, they have discovered that conditions that were previously considered distinct are, in fact, fundamentally the same. This revelation has provided a rationale for repurposing drugs for other conditions that would not previously have been considered because they did not share the same symptoms. That is another benefit coming from AI.
 
Consider immunology 10 years ago – there was a very narrow set of indications, essentially arthritis, psoriasis, and Crohn’s disease. But now immunology is recognized as a component in almost every disease, ranging from diabetes to Alzheimer’s disease and cardiovascular disease.

As a result, methotrexate, which was originally prescribed only for cancer, is now used to treat many immune-related diseases. Furthermore, dexamethasone, which was traditionally used to relieve inflammation, has shown efficacy in treating COVID-19 because it is an immunological as well as an infectious disease.

Offering Regulatory Support

The new AI application that the FDA is most excited about involves the interpretation of QSP models. When a company submits a QSP model to the FDA for review, that model is by nature very complex, and a member of the Agency’s team needs to evaluate it.

We can approach our QSP model in reverse and go through each step, annotating it with scientific references. We can employ a traffic light system, where green signifies that we found a reference for that step and it is qualified, while red means the information may be true, but we have not found a reference for it.

This process would help regulators to quickly get a sense of whether a QSP model is based on data from quality peer-reviewed papers in the scientific literature and therefore credible.
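
In software, that traffic-light review reduces to walking the annotated model and coloring each step by whether a supporting reference exists. A hypothetical sketch, reusing the entities from the earlier example:

```python
# Hypothetical sketch of the traffic-light review: walk the annotated
# model steps and color each by whether a supporting reference was found.
steps = [
    {"step": "hormone X -> compound Y", "reference": "Smith 1973"},
    {"step": "compound Y -> receptor Z", "reference": None},
]
for s in steps:
    light = "GREEN (qualified)" if s["reference"] else "RED (no reference)"
    print(f"{light:20s} {s['step']}")
```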

Conclusion

AI is providing tremendous support for the drug discovery process. Its applications range from reviewing millions of documents and unstructured data sources and gleaning relevant information, to helping build biological maps and QSP models of new mechanisms of action, validating QSP models using real-world data, and predicting clinical endpoints. It is an asset in situations where researchers are faced with either data overload or data sparsity.

References

1. Frechen S, Rostami-Hodjegan A. Quality Assurance of PBPK Modeling Platforms and Guidance on Building, Evaluating, Verifying and Applying PBPK Models Prudently under the Umbrella of Qualification: Why, When, What, How and By Whom? Pharm Res. 2022;39(8):1733-1748. doi:10.1007/s11095-022-03250-w
2. Grillo JA, McNair D, Zhao P. Coming full circle: The potential utility of real-world evidence to discern predictions from a physiologically based pharmacokinetic model. Biopharm Drug Dispos. 2023;44(4):344-347. doi:10.1002/bdd.2369
3. First AI-designed drugs fall short in the clinic following years of hype. Endpoints News. https://endpts.com/first-ai-designed-drugs-fall-short-in-the-clinic-following-years-of-hype/
4. Ribba B. Quantitative systems pharmacology in the age of artificial intelligence. CPT Pharmacometrics Syst Pharmacol. 2023. doi:10.1002/psp4.13047


Author Bio

Professor Amin Rostami-Hodjegan

Professor Amin Rostami-Hodjegan, PharmD, PhD, FCP, FAAPS, FJSSX, FBPhS, is the Senior Vice President of Research & Development and Chief Scientific Officer at Certara. Previously, he was co-founder of Simcyp Limited, a University of Sheffield spin-off which was acquired by Certara. Amin is also a Professor of Systems Pharmacology and the Director of the Centre for Applied Pharmacokinetic Research at the University of Manchester.

Professor Piet van der Graaf

Professor Piet van der Graaf, PharmD, PhD, FBPhS, is Senior Vice President and Head of QSP at Certara and Professor of Systems Pharmacology at Leiden University. He was co-founder of XenologiQ Limited, which was acquired by Certara in 2015. Before joining Certara, Piet was the CSO of the Leiden Academic Centre for Drug Research and held various research leadership positions at Pfizer across discovery and clinical development.
