Pharma’s Data Dilemma: How Machine-Learning Tools Hold the Key to Unlocking Novel Insights

by Satnam Surae, Contributor

Topics: Tools & Methods   

Covid-19 is still having a devastating effect on many countries and economies worldwide, and it has brought to light the impact of diseases that affect some people far more severely than others. With SARS-CoV-2, ethnicity, gender, age, and underlying health conditions all play a part in disease severity and mortality, but there is mounting evidence that the status of the immune system early after infection can predict who will go on to develop the worst symptoms. The scale and threat of the pandemic have created a need for rapid analysis of the disease, because the better we understand it, the more easily we can help healthcare professionals make critical, personalised treatment decisions. Advanced technologies such as artificial intelligence and machine learning are playing a large role in this process, and in other diseases too, accelerating scientific research to enable major breakthroughs.

 

Digital first for pharma

The life science industry has been talking about artificial intelligence and machine learning for a long time, and the benefits of increased throughput, reduced human error, higher productivity, and savings in time and money have helped to cement automation within pharma and biopharma R&D. However, the sheer volume of high-quality data generated within drug development pipelines poses a significant challenge for scientists trying to derive insights from it.

The range of techniques in use generates data in many different formats and sizes, creating difficulties when structuring, sharing, and analysing the research. Data analysis is often still highly manual and requires specialist programming and data science skills; it has been estimated that two-thirds of researchers’ time is spent on processing data – valuable time that could instead be spent on higher-value scientific analysis and the complex tasks that researchers do best. For R&D labs to gain maximum value from data in a realistic timeframe, data processing and analysis technology must keep pace with automation innovation.

 

The value of flow cytometry in pharma research

Flow cytometry is a versatile and crucial technique in pharmaceutical research, used to investigate disease aetiology and alterations in immune responses, as well as for quantitative studies. Its data life cycle comprises several steps: data acquisition, processing, population selection (gating), results integration, data analytics, and insight generation. Unfortunately, the technique’s high-throughput, multiparameter functionality produces an immense output of highly complex data, especially with modern instruments. Interpreting that data correctly requires significant expertise, and there is a lack of standardisation in assay and instrument set-up. Flow cytometry has the potential to be used at every stage of drug discovery and development, so making it more efficient could have major positive consequences for big pharma.
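
To make the population-selection step concrete, below is a minimal sketch in Python of automated gating on synthetic two-marker data, using a Gaussian mixture model in place of manually drawn gates. The marker names, event counts, and cluster parameters are hypothetical and purely illustrative; a real workflow would start from FCS files and apply compensation and transformation first.

import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture

# Simulate two cell populations in a two-marker space.
# Marker names (CD3, CD19) and distributions are hypothetical.
rng = np.random.default_rng(0)
pop_a = rng.normal(loc=[2.0, 6.0], scale=0.4, size=(5000, 2))  # "T-cell-like" cluster
pop_b = rng.normal(loc=[6.5, 1.5], scale=0.5, size=(2000, 2))  # "B-cell-like" cluster
events = pd.DataFrame(np.vstack([pop_a, pop_b]), columns=["CD3", "CD19"])

# Automated "gating": fit a two-component Gaussian mixture and assign
# each event to a population instead of drawing gates by hand.
gmm = GaussianMixture(n_components=2, random_state=0).fit(events)
events["population"] = gmm.predict(events[["CD3", "CD19"]])

# A typical downstream readout: relative population frequencies.
print(events["population"].value_counts(normalize=True).round(3))

In practice, dedicated libraries handle FCS parsing, compensation, and scale transforms; the point of the sketch is simply that unsupervised clustering can stand in for some of the manual gating effort that consumes analysts’ time.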
