
Improving clinical trials with machine learning

Machine learning could improve our ability to determine whether a new drug works in the brain, potentially enabling researchers to detect drug effects that would be missed entirely by conventional statistical tests, finds a new UCL study published on 13th November in Brain.

 “Current statistical models are too simple. They fail to capture complex biological variations across people, discarding them as mere noise. We suspected this could partly explain why so many drug trials work in simple animals but fail in the complex brains of humans. If so, machine learning capable of modelling the human brain in its full complexity may uncover treatment effects that would otherwise be missed,” said the study’s lead author, Dr Parashkev Nachev (UCL Institute of Neurology).

To test the concept, the research team analysed large-scale data from patients with stroke, extracting the complex anatomical pattern of brain damage in each patient and, in the process, creating the largest collection of anatomically registered stroke images ever assembled. As an index of the impact of the stroke, they used gaze direction, measured objectively from the eyes as they appear on head CT scans taken on admission to hospital and on MRI scans typically done one to three days later.

They then simulated a large-scale meta-analysis of a set of hypothetical drugs, to see whether treatment effects of different magnitudes that would have been missed by conventional statistical analysis could be identified with machine learning. For example, given a drug that shrinks a brain lesion by 70%, they tested for a significant effect using conventional (low-dimensional) statistical tests and using high-dimensional machine learning methods.
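The sketch below gives a toy version of how one such simulated trial could be set up, contrasting a conventional test on a single summary measure with a model trained on the full anatomical pattern of damage. It is an illustration only: the lesion simulator, sample sizes, outcome measure and classifier are assumptions for the sake of the example, not the study's actual methods.

```python
# Toy sketch of one simulated trial (not the study's actual pipeline).
# A hypothetical drug shrinks each treated patient's lesion by 70%; we then ask
# whether (a) a conventional test on a single summary measure and (b) a model
# using the full damage pattern can detect the treatment.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_per_arm, n_voxels = 150, 2000

def random_lesion():
    """One patient's lesion: a contiguous block of damaged voxels at a random site."""
    mask = np.zeros(n_voxels)
    size = rng.integers(50, 400)
    start = rng.integers(0, n_voxels - size)
    mask[start:start + size] = 1.0
    return mask

def shrink(mask, fraction=0.7):
    """Hypothetical treatment: remove `fraction` of the damaged voxels."""
    out = mask.copy()
    damaged = np.flatnonzero(out)
    out[rng.choice(damaged, int(fraction * damaged.size), replace=False)] = 0.0
    return out

control = np.array([random_lesion() for _ in range(n_per_arm)])
treated = np.array([shrink(random_lesion()) for _ in range(n_per_arm)])

# A crude behavioural outcome: driven by damage to a small "critical" region,
# plus large individual variability, so the group summary is noisy.
critical = slice(900, 950)
def outcome(maps):
    return maps[:, critical].sum(axis=1) + rng.normal(0, 10, len(maps))

# (a) Conventional, low-dimensional test on the single outcome measure.
_, p_low = ttest_ind(outcome(control), outcome(treated))

# (b) High-dimensional approach: does the full anatomical pattern of damage
#     distinguish treated from untreated patients on held-out data?
#     (In a real analysis this accuracy would itself be tested, e.g. by permutation.)
X = np.vstack([control, treated])
y = np.r_[np.zeros(n_per_arm), np.ones(n_per_arm)]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
acc = LogisticRegression(max_iter=2000).fit(X_tr, y_tr).score(X_te, y_te)

print(f"low-dimensional p-value: {p_low:.3f}   pattern-model accuracy: {acc:.2f}")
```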

The machine learning technique took into account the presence or absence of damage across the entire brain, treating the stroke as a complex “fingerprint”, described by a multitude of variables.
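The short sketch below illustrates what such a "fingerprint" representation looks like in practice. The template grid, patient count and random masks are invented stand-ins for real registered lesion maps; the point is only that each patient becomes a long vector of presence/absence values rather than a single summary number.

```python
# Illustrative only: turn registered binary lesion masks into high-dimensional
# "fingerprint" vectors, one element per voxel.
import numpy as np

# Assume each patient's lesion mask is already registered to a common template
# grid; the shape below is an assumption, not the study's.
template_shape = (91, 109, 91)
n_patients = 3

# Stand-in data: random masks in place of real lesion maps (1 = damaged voxel).
rng = np.random.default_rng(0)
masks = rng.random((n_patients, *template_shape)) > 0.98

# Each row is one patient's "fingerprint": hundreds of thousands of
# presence/absence indicators rather than a single summary measure.
fingerprints = masks.reshape(n_patients, -1).astype(np.float32)
print(fingerprints.shape)        # (3, 902629)

# The conventional low-dimensional summary, by contrast, is just the volume:
lesion_volumes = fingerprints.sum(axis=1)
```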

“Stroke trials tend to use relatively few, crude variables, such as the size of the lesion, ignoring whether the lesion is centred on a critical area or at the edge of it. Our algorithm learned the entire pattern of damage across the brain instead, employing thousands of variables at high anatomical resolution. We used well-established methods of machine learning, teaching the algorithm on subsets of data and then testing its performance on other subsets it had not seen,” explained the study’s first author, Tianbo Xu (UCL Institute of Neurology).
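The following sketch shows this kind of held-out evaluation using a standard cross-validation workflow. The simulated fingerprints, the binary outcome and the choice of model are placeholders rather than the study's own data or algorithm; the point is that performance is always scored on patients the model has not seen.

```python
# Sketch of held-out evaluation on a high-dimensional lesion representation:
# fit on subsets of patients, score only on patients the model has never seen.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_patients, n_voxels = 300, 5000                     # illustrative sizes only

# Stand-in lesion fingerprints (1 = damaged voxel), in place of real data.
X = (rng.random((n_patients, n_voxels)) > 0.97).astype(float)

# Hypothetical binary outcome (e.g. abnormal vs normal gaze direction),
# driven by damage to a small "critical" set of voxels plus noise.
critical = rng.choice(n_voxels, size=50, replace=False)
y = (X[:, critical].sum(axis=1) + rng.normal(0, 0.5, n_patients) > 1).astype(int)

# Five-fold cross-validation: the model is trained on subsets of the data and
# tested on held-out subsets, guarding against overfitting the thousands of
# anatomical variables.
model = LogisticRegression(penalty="l2", C=1.0, max_iter=2000)
scores = cross_val_score(model, X, y, cv=5)
print("held-out accuracy per fold:", np.round(scores, 2))
```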

The advantage of the machine learning approach was particularly strong for interventions that reduce the volume of the lesion itself. With conventional low-dimensional models, the intervention would need to shrink the lesion by 78.4% of its volume for the effect to be detected in a trial more often than not, whereas the high-dimensional model would more likely than not detect an effect when the lesion was shrunk by only 55%.

“Conventional statistical models will miss an effect even if the drug typically reduces the size of the lesion by half, or more, simply because the complexity of the brain’s functional anatomy—when left unaccounted for—introduces so much individual variability in measured clinical outcomes. Yet saving 50% of the affected brain area is meaningful even if it doesn’t have a clear impact on behaviour. There’s no such thing as redundant brain,” said Dr Nachev.

The researchers say their findings demonstrate that machine learning could be invaluable to medical science, especially when the system under study—such as the brain—is highly complex.

“The real value of machine learning lies not so much in automating things we find easy to do naturally, but in formalising very complex decisions. Machine learning can combine the intuitive flexibility of a clinician with the formality of the statistics that drive evidence-based medicine. Models that pull together thousands of variables can still be rigorous and mathematically sound. We can now capture the complex relationship between anatomy and outcome with high precision,” said Dr Nachev.

“We hope that researchers and clinicians begin using our methods the next time they need to run a clinical trial,” said co-author Professor Geraint Rees (Dean, UCL Faculty of Life Sciences).

The study was funded by Wellcome and the National Institute for Health Research University College London Hospitals Biomedical Research Centre.
