Companies Making Automated Drug Discovery a Reality

by Andrii Buvailo, PhD          Biopharma insight



Data is king in modern biopharmaceutical research, and the ability to generate massive amounts of quality biomedical data represents tremendous research and business potential in the artificial intelligence (AI)-driven drug discovery realm. Challenges associated with big biological data, such as poor reproducibility, low accessibility, and limited standardization, represent a considerable bottleneck both for the advent of AI in drug discovery at scale and for industry leaders' ambitions to shift drug discovery from a largely artisanal process to so-called "industrialized" drug discovery.

 

AI-driven companies demonstrate progress in drug discovery

There is a growing wave of companies building a new generation of drug design platforms -- Recursion Pharmaceuticals (NASDAQ: RXRX), Insitro, Exscientia (NASDAQ: EXAI), Insilico Medicine, Deep Genomics, Valo Health, Relay Therapeutics (NASDAQ: RLAY), you name it -- companies that create highly integrated, automated, AI-driven, and data-centric drug design processes spanning biology modeling and target discovery all the way to lead generation and optimization (sometimes referred to as “end-to-end” platforms). These “digital biotechs” are trying to transform traditional drug discovery, a notoriously bespoke, artisanal process, into a more streamlined, repeatable, data-driven one -- something closer to an industrial conveyor line for drug candidates. Announcements by Exscientia (here), Deep Genomics (here), Insilico Medicine (here), and other companies point to a situation where the time for an entire preclinical program -- from building a disease hypothesis to the official nomination of a preclinical drug candidate -- has shrunk to as little as 11-18 months, at a fraction of the cost of a comparable project conducted “traditionally”. Even faster timelines are achieved in drug repurposing programs built around previously known drugs or drug candidates -- for example, using AI-generated knowledge graphs, as BenevolentAI (AMS: BAI) did in its baricitinib program, or using advanced multiomics analysis and network biology to derive precision biomarkers for better patient stratification and matching to novel indications, as Lantern Pharma (NASDAQ: LTRN) does to rapidly expand its clinical pipeline.
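To make the knowledge-graph approach to repurposing more concrete, here is a minimal Python sketch using networkx. The toy edges loosely follow the widely reported baricitinib hypothesis (a JAK inhibitor that also inhibits AAK1, a regulator of clathrin-mediated endocytosis involved in viral cell entry), but the graph, relation labels, and traversal are a simplified assumption for illustration, not BenevolentAI's actual data or method.

# Minimal knowledge-graph sketch for drug repurposing (toy data, illustrative only;
# not BenevolentAI's actual graph or pipeline).
import networkx as nx

G = nx.DiGraph()
# In a real system these edges would be mined from literature and curated databases.
G.add_edge("baricitinib", "JAK1", relation="inhibits")
G.add_edge("baricitinib", "AAK1", relation="inhibits")
G.add_edge("AAK1", "clathrin-mediated endocytosis", relation="regulates")
G.add_edge("clathrin-mediated endocytosis", "viral cell entry", relation="enables")

# A repurposing hypothesis is a path from a known drug to a disease mechanism.
for path in nx.all_simple_paths(G, source="baricitinib", target="viral cell entry"):
    print(" -> ".join(path))

Running the sketch prints the chain baricitinib -> AAK1 -> clathrin-mediated endocytosis -> viral cell entry, which is essentially the shape of hypothesis that graph-based systems surface at scale from far larger literature-derived graphs.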

However, many of those AI-driven “digital biotechs” still rely on community-generated data to train machine learning models, and this can become a limiting factor. While some of the leading players in the new wave, such as Recursion Pharmaceuticals and Insitro, are investing heavily in their own high-throughput lab facilities to generate unique biology data at scale, other companies appear more focused on algorithms, building AI systems on data from elsewhere while maintaining only limited in-house capabilities to run experiments.

Data generation is a bottleneck in AI-driven drug discovery

A common practice is to use community-generated, publicly available data. But it comes with a caveat: an overwhelming majority of published data may be biased or even poorly reproducible. It also lacks standardization -- experimental conditions may differ, leading to substantial variation in the data obtained by different research labs or companies. A lot has been written about this, and a decent summary of the topic was published in Nature: “The reproducibility crisis in the age of digital medicine”. For instance, one company reported that its in-house target validation effort was hampered by an inability to reproduce published data across several research fields: in-house results were consistent with published results for only 20-25% of the 67 target validation projects analyzed, according to the company’s report. There are numerous other reports citing poor reproducibility of experimental biomedical data.
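A back-of-the-envelope way to see this variability is to compare independent measurements of the same compound-target activity collected from different sources. The sketch below assumes a hypothetical CSV with columns resembling what public resources such as ChEMBL expose; the file name, column names, and the one-log-unit threshold are assumptions for illustration, not a prescribed method.

# Rough sketch: quantifying cross-source disagreement in public bioactivity data.
# The CSV file and column names are hypothetical stand-ins for fields found in
# public resources (compound, target, source, potency).
import pandas as pd

df = pd.read_csv("public_bioactivities.csv")  # columns: compound_id, target_id, source_lab, pIC50

spread = (
    df.groupby(["compound_id", "target_id"])["pIC50"]
      .agg(n_sources="count", pIC50_std="std")
      .reset_index()
)

# Flag pairs where independent sources disagree by more than ~1 log unit --
# a common rule of thumb for noisy public potency data.
suspect = spread[(spread["n_sources"] > 1) & (spread["pIC50_std"] > 1.0)]
print(f"{len(suspect)} compound-target pairs show large cross-source disagreement")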

This brings us to a known bottleneck of “industrializing drug discovery”: the need for large amounts of high-quality, highly contextualized, properly annotated biological data that are representative of the underlying biological processes and properties of cells and tissues.

For a wide-scale industrialization of drug discovery to occur, the crucial prerequisite is the emergence of widely adopted global industrial standards for data generation and validation -- and of an ecosystem of organizations “producing” vast amounts of novel data that follow such standards. Then, large drug makers and smaller companies alike would be able to adopt AI technologies to a much deeper extent. If we take the automotive industry as an example, a component of, say, an engine developed in one part of the world will often fit into a production line in another part of the world, so highly integrated processes can be built across geographies and companies in a “plug-and-play” paradigm.
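What such a standard could look like in practice is, of course, an open question. The sketch below is purely a hypothetical illustration of a shared assay-record format -- the field names and identifiers are assumptions, not an existing industry standard -- meant to show how data produced by one lab could plug directly into another organization's pipeline.

# Hypothetical sketch of a shared assay-record format (illustrative assumption,
# not an existing industry standard).
from dataclasses import dataclass, asdict
import json

@dataclass
class AssayRecord:
    compound_id: str   # e.g., an InChIKey
    target_id: str     # e.g., a UniProt accession
    assay_type: str    # e.g., "biochemical_IC50"
    value: float
    unit: str          # e.g., "nM"
    protocol_id: str   # reference to a versioned, publicly shared protocol
    lab_id: str        # who generated the data point

record = AssayRecord(
    compound_id="EXAMPLE-COMPOUND-001",
    target_id="P23458",  # JAK1, used here only as an example accession
    assay_type="biochemical_IC50",
    value=12.5,
    unit="nM",
    protocol_id="PROTO-0001",
    lab_id="LAB-042",
)
print(json.dumps(asdict(record), indent=2))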
