Machine learning can help researchers accomplish scientific tasks more quickly and effectively — if labs take steps to ensure seamless integration.
Big data has well and truly arrived. But for all its immense potential, it brings unprecedented challenges for industry researchers. “Scientists, from research techs to principal investigators, spend way too much time searching through data,” says Zareh Zurabyan, head of eLabNext, a lab digitization specialist.
It won’t be long before there is too much data to manage without assistance. “Rapidly accelerating data generation has outgrown the rate at which human beings can digest it in a meaningful way,” says eLabNext founder and managing director Erwin Seinen. “This means artificial intelligence (AI) will become a necessity, not just a novelty, for biotech.”
There are numerous opportunities for machine learning (ML) and other AI-based tools to transform the research process. Such algorithms can, for example, manage the execution of automated multi-instrument experiments, or identify patterns within a research pipeline. When such tools are directly coupled to electronic lab notebooks (ELNs), such as the one developed by eLabNext, it becomes possible to seamlessly integrate AI capabilities into researchers’ daily workflow.
Use of AI in drug development is not yet commonplace. Research from Deloitte in 2021 found that 38% of biopharma companies surveyed use AI day-to-day, and a further 31% are investigating the use of such tools.
“In many cases, we’ve observed that upper management is absolutely supportive of integrating AI into their research, drug discovery and precision medicine work,” says Taylor Chartier, founder and CEO of life sciences-focused AI company Modicus Prime. But she points out that introducing such capabilities can be a struggle. “Pharmaceutical companies are not software companies.” Effective digitization requires access to external AI-based tools and services that minimize the pain of integrating with existing workflows, she adds.
For labs that are software-naive, there are many benefits from adopting AI tools. And for those that are already digitized, there are strategies to make the most of AI technologies. eLabNext is supplementing its ELN platform with two AI add-ons, developed by Modicus Prime and ImmunoMind, a startup specializing in single-cell multi-omics for cell therapy development, with the aim of seamlessly meeting biopharma labs’ data needs, from image analysis to cell identification.
How can AI be of service?
It can be difficult for newcomers to the world of AI to distinguish hype from reality in terms of what the systems can do. “AI in general can solve three kinds of problems,” explains Vadim Nazarov, co-founder and CEO of ImmunoMind. “It can automate small and simple tasks, augment the performance of some complicated tasks or, if you have a lot of data, provide an opportunity to extract insight.”
Before a biopharma company decides to adopt a new AI-based system, it needs to have a clear understanding of the problems it wants to solve and how a tool can streamline or enhance existing processes. This puts the burden on AI firms to develop tools with clear and compelling applications. “There are some really incredible AI companies who have access to enormous magnitudes of data,” says Chartier. “But any time you introduce new technology, it has to be very relevant to the specific customer.”
For Modicus Prime, this has meant developing an AI-based image analysis tool that can be easily trained for specific research problems. Chartier notes that computational image analysis and interpretation is among the most mature applications for AI in the life sciences. Her company’s mpVision software can rapidly analyze most categories of biotech imaging data. For example, users can discriminate cells of interest from other cell types or from cellular debris, detect anomalies during drug production, characterize crystallization processes, or perform rapid quality control on biologics at any scale – from the lab bench to the manufacturing floor.
ImmunoMind’s software is designed for more specific immunological applications, drawing on proteomic, transcriptomic and other data types to help identify T cell subpopulations and characterize their physiological state. This can be especially important for quality control in areas like cancer immunotherapy, wherein subsets of donor-derived T cells are cultivated and genetically manipulated to selectively target and kill tumour tissue. “Finding relationships between gene expression and different cell phenotypes is extremely important for cell therapy development,” says Nazarov. “Those are tasks that simply can’t be solved with traditional statistical methods — only with machine-learning algorithms.”
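The kind of task Nazarov describes – grouping cells into subpopulations by their expression profiles – is commonly approached with unsupervised clustering. The sketch below is a generic illustration of that idea on synthetic data, not ImmunoMind’s actual pipeline; the population means and cluster count are invented for the example.

```python
# Illustrative sketch: unsupervised clustering of single-cell gene-expression
# profiles to recover cell subpopulations. Generic ML approach on synthetic
# data -- not ImmunoMind's proprietary method.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic expression matrix: 300 cells x 50 genes, drawn from three
# well-separated "subpopulations" with shifted mean expression levels.
populations = [rng.normal(loc=mu, scale=1.0, size=(100, 50)) for mu in (0.0, 3.0, 6.0)]
expression = np.vstack(populations)

# Cluster cells by expression profile; k is chosen to match the known structure.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(expression)
labels = model.labels_

# Each recovered cluster should contain roughly 100 cells.
sizes = sorted(np.bincount(labels).tolist())
print(sizes)
```

In real single-cell work the cluster count is not known in advance and the data are far noisier, which is why purpose-built tools layer curated reference data and quality control on top of such algorithms.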
The AI-ready lab
In the realm of AI, data is king. Algorithm performance strongly depends on the quality of both the training data and the experimental results subsequently fed into it.
ImmunoMind has addressed the former by assembling a curated training database based on multi-omic analysis of vast numbers of immune cells. The company then provides a user-friendly portal for researchers to extract insights about their own cells based on these data. “We work closely with customers to help them design experiments and quality control measures to eliminate all the risks associated with batch effects and bias of non-ideal experiments,” says Nazarov. By contrast, mpVision’s more generalized image-analysis framework is trained by the user; Chartier says as few as 20 representative images from a particular experimental process may be sufficient to prime the AI for assessing future data from the same pipeline.
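To make the “as few as 20 representative images” idea concrete, here is a minimal sketch of training a lightweight classifier on a tiny labeled image set and then scoring a new image from the same pipeline. The data, classes and model choice are all invented for illustration; mpVision’s actual model is not public.

```python
# Sketch of few-shot training: fit a simple classifier on ~20 labeled images,
# then score fresh images from the same pipeline. Purely illustrative --
# synthetic data, hypothetical classes, not mpVision's implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# 20 tiny synthetic "images" (8x8 grayscale), 10 per class:
# class 0 = dim background, class 1 = bright objects of interest.
normal = rng.uniform(0.0, 0.4, size=(10, 8, 8))
bright = rng.uniform(0.6, 1.0, size=(10, 8, 8))
X_train = np.vstack([normal, bright]).reshape(20, -1)  # flatten pixels to features
y_train = np.array([0] * 10 + [1] * 10)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score a fresh image simulated from the same pipeline.
new_image = rng.uniform(0.6, 1.0, size=(1, 64))
prediction = int(clf.predict(new_image)[0])
print(prediction)
```

Production tools typically extract richer features (for example, from a pretrained network) before classification, which is what makes such small training sets workable on real microscopy data.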
Neither add-on requires any formal training in AI or expertise in computational biology, and user-friendly interfaces are a standard component. Having the right underlying data management infrastructure – such as a laboratory information management system (LIMS) – is also crucial to make effective use of AI tools.
Zurabyan suggests that companies can further accelerate adoption by having personnel focused on this task. “One well-received approach is having team leads that dedicate time and effort to strategizing on implementation of new technologies within a specific timeframe, with very clear goals and milestones,” he says.
For lab staff who don’t fully understand what the algorithms are doing, there’s a natural fear of the unknown — indeed, many AI systems have been criticized as ‘black boxes’, relying on convoluted and obscure processes. But AI developers can achieve a measure of transparency by explaining the underlying mathematical models, and providing auditing procedures that allow users to check the machine’s work. This is especially critical for scientific software slated for use in tightly regulated environments such as good manufacturing practice (GMP) facilities. GxP-compliant ELNs such as the eLabNext platform do this by automatically tracking and logging the movement of data throughout the system and its various add-ons.
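The audit-trail idea – automatically logging every data movement so the machine’s work can be checked – can be sketched as a hash-chained, append-only log. This is a hypothetical minimal design for illustration, not eLabNext’s implementation; the class and field names are invented.

```python
# Minimal sketch of an append-only audit trail of the sort GxP-compliant
# systems maintain: each data movement is logged with a timestamp and a
# chained hash, so any later tampering is detectable. Hypothetical design,
# not eLabNext's implementation.
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value anchoring the chain

    def log(self, actor, action, record_id):
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "record": record_id,
            "prev": self._last_hash,
        }
        # Hash the entry together with the previous hash to form a chain.
        digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self._last_hash = digest
        self.entries.append(entry)

    def verify(self):
        # Recompute the chain; editing any entry breaks every later hash.
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != e["hash"]:
                return False
            prev = digest
        return True

trail = AuditTrail()
trail.log("t.chartier", "upload", "image-0042")
trail.log("v.nazarov", "annotate", "image-0042")
print(trail.verify())  # True for an untampered log
```

Because each hash incorporates the previous one, an auditor only needs the final hash to confirm that no earlier entry has been altered, which is the transparency property regulators look for.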
Chartier encourages researchers to think about AI systems as a potential assistant to their regular research routine. “AI is just helping them to do what they do well, but faster and more efficiently.”