Contributing author: Lars Johansson, Chief Scientific Officer @ Antaros Medical
Medical imaging in drug development is very different to medical imaging in clinical practice. These differences concern the intended purpose, the practical constraints, and the measurement and evaluation methods.
In clinical practice, imaging is used primarily by healthcare professionals for diagnosis and follow-up of disease progression/regression; that is, to detect a disease or stage its severity. In clinical trials for drug development, by contrast, imaging serves to quantify treatment effect in a standardised way that allows for comparisons.
Imaging in clinical practice
Healthcare professionals use medical imaging to help inform treatment decisions for an individual patient. In most cases, this means diagnosing or detecting disease, staging its severity, or assessing whether the current treatment plan is working. The key considerations are therefore the image’s sensitivity and specificity: as a clinical tool, the primary purpose of the image is to enable correct identification of those patients with the disease and correct identification of those patients without the disease.
Practically speaking, the image is usually taken at a single site and at a single timepoint. Assessment of the imaging is qualitative (non-numerical) in nature, and the image is used in conjunction with other clinical assessments to create an overall picture of the patient’s disease state. There is less need for standardisation of the image acquisition method, as variability between imaging sites and professionals will not necessarily impact the utility of the image as a diagnostic tool.
Furthermore, the chosen imaging modality will be based somewhat on the suspected disease, but also on the availability of scanners at the site and relative cost of the test. Preference is often given to the most readily available and cost-effective modality.
Imaging in clinical trials for drug development
Drug developers use medical imaging to help measure treatment efficacy at a group level. They do this by measuring quantitative changes in disease state that can be associated with the treatment being investigated. The clinical utility of a quantitative image measure is determined by its precision and power: it must reliably reproduce what it measures, and there must be a high probability of it finding a true difference between two groups.
In drug development and clinical trials, there are several motivations for including imaging, such as:
- To provide additional evidence of treatment effect to complement other available biomarkers.
- To provide further information beyond what is deducible from other available biomarkers to learn more about the treatment and its effects.
- To reduce the overall cost of the trial; either by reducing the time needed to see results or by reducing the number of patients required.
How imaging can help provide additional evidence to complement other available biomarkers
Generating additional evidence throughout the drug development process can be important for both internal and external purposes.
Firstly, imaging can help provide evidence and insights to inform internal go/no-go decisions. Drug development is very expensive, and clinical trials in drug development are inherently risky. Trial failures, where the predetermined outcome is not met, are much more expensive in later stages of development. Having as much evidence as possible in the earlier stages can save a lot of money and time in the long run. Obtaining early evidence of potential success or failure helps ensure that money and resources are allocated in the best way possible.
Furthermore, drug approval from regulatory authorities will rarely be granted solely based on the results of only one clinical outcome: the primary endpoint against which the trial’s success is determined. In certain disease areas, there are established and approved biomarkers that are used for obtaining approval. However, these endpoints can have limited utility if they are not sensitive enough and/or are highly susceptible to confounding factors (e.g., death or quality of life), or if they are expensive and time-consuming to measure (e.g., collecting tissue biopsies). In these instances, imaging can be used as a surrogate endpoint.
A surrogate endpoint can be used as a substitute for a clinical outcome when that outcome is not suitable and there is sufficient evidence of a link between the two. Establishing new surrogate endpoints is, quite frankly, a laborious process that necessitates generating a lot of evidence across several clinical trials. However, surrogate endpoints can be used at different levels of clinical validation to help make the case for a drug approval. One example is Total Kidney Volume (TKV), an imaging-based surrogate endpoint that has been designated a ‘reasonably likely’ surrogate endpoint by the Food and Drug Administration (FDA) and used as the basis for accelerated approval in patients with autosomal dominant polycystic kidney disease (ADPKD).
How imaging can provide further information beyond other biomarkers to learn more about the treatment and its effects
Imaging is a non-invasive tool that can provide certain information that is not always obtainable through other biomarkers. Providing an itemised list of everything that can be captured with imaging is beyond the scope of this article; however, two important examples relate to understanding mechanism or mode of action (MoA) and enabling a holistic perspective.
Imaging can be extremely useful in understanding mechanism or mode of action (MoA). Different imaging methods can be developed and tailored to help visualise or confirm biological processes and targets based on the treatment hypothesis. An example of this in practice is using PET tracers to investigate target engagement and receptor occupancy.
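To make the receptor-occupancy idea concrete, occupancy is commonly estimated from the change in a PET tracer’s binding potential between a baseline scan and a post-dose scan. The sketch below illustrates the standard calculation; the function name and the numbers are illustrative, not from any specific study.

```python
def receptor_occupancy(bp_baseline: float, bp_post_dose: float) -> float:
    """Estimate fractional receptor occupancy from PET binding potentials.

    Occupancy = (BP_baseline - BP_post_dose) / BP_baseline
    i.e., the fraction of baseline tracer binding displaced by the drug.
    """
    if bp_baseline <= 0:
        raise ValueError("baseline binding potential must be positive")
    return (bp_baseline - bp_post_dose) / bp_baseline

# Illustrative values: a dose that reduces the tracer's binding
# potential from 2.0 at baseline to 0.5 post-dose occupies ~75%
# of the available receptors.
print(f"{receptor_occupancy(2.0, 0.5):.0%}")  # 75%
```

Measuring occupancy across a range of doses in this way is how PET studies establish whether a drug engages its target, and at what dose.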
Another potential application is using imaging to gain a more holistic perspective. This can be through using imaging to investigate treatment effects across multiple organs or throughout the whole body, or by looking at both treatment efficacy and safety effects in the same examination. For example, imaging can provide a way of investigating cardiorenal effects in diseases that involve both the heart and the kidneys.
How imaging can reduce the cost of clinical trials
Imaging can also be used to reduce the costs of clinical trials. This is either through providing early insights and thereby shortening the required trial length, or by reducing the number of patients needed to observe an effect, thanks to higher precision.
An example which demonstrates both is the use of imaging to quantify liver fat. Magnetic Resonance Imaging Proton Density Fat Fraction (MRI-PDFF) is a non-invasive imaging measurement that can quantify liver fat quickly and safely. Using this method can reduce the cost of a clinical trial in many ways. Importantly, it is a highly precise and powered measurement, meaning it can detect differences faster and in smaller sample sizes, so fewer patients and less time are required for the trial. In addition, compared to the alternative of a tissue biopsy, the test is less expensive and less time-consuming. It is also safer for the patient, and its non-invasiveness can help ease the recruitment of patients for a trial.
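The sample-size arithmetic behind this can be sketched with the standard two-sample formula: the number of patients required per arm grows with the square of the measurement’s standard deviation, so a more precise measurement shrinks the trial substantially. The standard deviations and effect size below are illustrative, not trial data.

```python
import math
from statistics import NormalDist

def patients_per_arm(sd: float, effect: float,
                     alpha: float = 0.05, power: float = 0.80) -> int:
    """Normal-approximation sample size per arm for a two-sample comparison.

    n = 2 * (z_{1-alpha/2} + z_{power})^2 * (sd / effect)^2
    Required n scales with the square of the measurement's variability.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = z.inv_cdf(power)           # desired power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * (sd / effect) ** 2)

# Illustrative: detecting the same treatment effect with a precise
# measurement (sd = 5) versus a noisier one (sd = 10).
print(patients_per_arm(sd=5, effect=5))   # 16 per arm
print(patients_per_arm(sd=10, effect=5))  # 63 per arm
```

Halving the measurement noise here cuts the required sample roughly fourfold, which is the mechanism by which a precise endpoint like MRI-PDFF can make a trial smaller and cheaper.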
Imaging in clinical trials also has different practical constraints to imaging in clinical practice. Clinical trials can sometimes involve a single site but often are spread across multiple sites. Imaging is also required at multiple timepoints in order to investigate changes. Assessment of the imaging is quantitative (numerical) in nature, and is used to detect significance at the group level. Thus, there is greater need for standardisation to ensure that results across patients, scanners, and sites are comparable. Variability between imaging sites and professionals can significantly impact the utility of the images in clinical trials.
Choice of imaging modality is often predetermined based on the needs of the study and the disease area, and imaging sites will be selected based on whether they have the required scanners and personnel to implement the acquisition protocol.
A summary of these differences can be found below:
- Purpose: informing treatment decisions for an individual patient versus quantifying treatment effect at the group level.
- Sites and timepoints: usually a single site and a single timepoint versus multiple sites and multiple timepoints.
- Assessment: qualitative (non-numerical) versus quantitative (numerical).
- Standardisation: less critical for diagnosis versus essential for comparability across patients, scanners, and sites.
- Choice of modality: based on availability and cost at the site versus predetermined by the needs of the study and the disease area.
Example: Non-invasively assessing liver fibrosis
Liver fibrosis, the accumulation of extracellular matrix (ECM) proteins in the liver, is considered one of the main prognostic factors in many liver diseases. Both its presence and severity are correlated with the risk of developing cirrhosis and liver-related complications.
Historically, liver biopsy has been the gold standard for assessing liver fibrosis. However, liver biopsies have several limitations: they are expensive, invasive, and carry a risk of severe complications. There is also a risk of sampling bias, where the sample of tissue taken is not representative of the amount of fibrosis in the whole liver. Importantly, in the context of treatment decisions, a liver biopsy gives only a snapshot, not an insight into the dynamic changes during the process of fibrogenesis.
All this is to say that there is a need for non-invasive ways to assess liver fibrosis. An in-depth review of all possible methods is beyond the scope of this article; the updated EASL Clinical Practice Guidelines give a nice overview of the advantages and disadvantages of the main non-invasive tests (NITs) for assessing liver fibrosis. For this example, we will consider only clinical events and three imaging methods: FibroScan, Magnetic Resonance Elastography (MRE), and Platelet-derived growth factor receptor β (PDGFRβ) Positron Emission Tomography (PET) imaging.
Most patients with liver disease are not diagnosed or treated in the early stages of fibrosis. In the early stages the disease is asymptomatic; in fact, studies have shown that 6–7% of the adult population without liver disease have liver fibrosis. Therefore, one way of assessing liver fibrosis is through clinical events, as these manifest in the later disease stages. Clinical events include cirrhosis, hepatocellular carcinoma (HCC), liver failure, liver transplant, or death.
Fibrosis accumulates over time. From a clinical perspective, waiting for such events is undesirable, as this level of fibrosis is associated with a much higher relative risk of liver-related morbidity and mortality, and the treatment options offer less hopeful outcomes. In the context of clinical trials, using these events as outcomes requires long trials with many patients. Therefore, in both cases, alternative methods that can enable earlier detection of fibrosis are preferred.
FibroScan
FibroScan, developed by Echosens, is an ultrasound transient elastography method for measuring liver stiffness. Liver stiffness is associated with liver fibrosis, and FibroScan is currently the most widely used and validated technique in the clinic for assessing fibrosis. As a test it has very good clinical utility: the procedure is short, can be performed at the bedside by staff with minimal training, and gives immediate results.
However, liver stiffness is a physical tissue property that is not solely determined by the amount of fibrosis. Several other factors can also contribute to liver stiffness, and liver stiffness measurements (LSM) using ultrasound can overestimate fibrosis in the presence of space-occupying abnormalities like oedema or inflammation. The utility of FibroScan to assess fibrosis in a clinical trial setting is also limited by its reliability across devices and its applicability in patients with obesity or ascites. These factors would necessitate a trial with a larger sample of patients.
Magnetic Resonance Elastography (MRE)
MRE also assesses fibrosis through the measurement of liver stiffness, and a strong correlation has been shown between MRE-measured liver stiffness and histological stage of fibrosis. The key difference between MRE and ultrasound transient elastography is that MRE provides a quantitative map of tissue stiffness over the whole liver, while ultrasound transient elastography provides a spot measurement at limited depth in areas of liver that have an acoustic window.
MRE is not as commonly used in clinical practice, largely due to it being more costly and requiring an MRE facility with trained personnel to perform and analyse the imaging. It is, however, a very attractive option for clinical trials. It has been shown that MRE can detect earlier stages of fibrosis, and the method has very high repeatability, both factors that reduce the number of patients required. Furthermore, the MRE sequence itself has a short acquisition time and can be incorporated into a standard abdominal MRI protocol, which can give a comprehensive overview of the liver by including other measurements, such as fat content.
PDGFRβ PET imaging
Another method, which works upstream of fibrosis itself, is to quantify the activity of the cells producing fibrosis (fibrogenesis) using Positron Emission Tomography (PET) imaging with a radiolabelled PDGFRβ-binding affibody molecule. PDGFRβ is a known biomarker of activated hepatic stellate cells, which orchestrate the deposition of extracellular matrix (ECM) in the liver. Research into using a PDGFRβ radiotracer is ongoing and shows promising signs for the future use of this method in clinical trials. This approach could potentially provide a much faster readout, as inhibition of stellate cell activation should translate into inhibition of disease progression.
To summarise briefly everything that has been discussed:
- Imaging in clinical trials is different to imaging in clinical practice.
- There are several different motivations for including imaging in a clinical trial.
- Imaging methods have different advantages and disadvantages that can make them more suitable for clinical practice or clinical trials, as was shown using the example of assessing liver fibrosis.
The views and opinions expressed in this article are solely those of the contributing author/s. These views and opinions do not necessarily represent those of Antaros Medical.
If you have any questions regarding this article, please reach out to firstname.lastname@example.org