A researcher at Boston University's DAMP Lab works with an Opentrons Flex robot. Credit: Opentrons Labworks Inc.
Pharmaceutical companies and research institutions are using artificial intelligence to design robotic experiments at scale, but they need to know whether AI-generated instructions will execute correctly before trusting them with valuable samples and reagents. Opentrons Labworks Inc. today announced Protocol Visualization for Opentrons Flex, a new simulation and visualization capability.
The feature lets scientists simulate and inspect robotic protocols in a dynamic virtual environment before running them on the Flex system. The interface enables users to examine every step of an automated workflow.
"This capability gives researchers a dynamic way to simulate and inspect robotic execution before an experiment begins, creating a clearer bridge between computational design and physical laboratory workflows," said James Atwood, CEO of Opentrons. "As AI systems propose more experiments, researchers need infrastructure that makes those experiments understandable, inspectable, and repeatable before they reach the bench."
Founded in 2013, Opentrons said it has more than 10,000 robotic systems deployed globally, including installations at leading research universities and many of the world's largest biopharma companies.
Visualization tool runs on existing protocols
Opentrons said the tool supports protocols authored across its software ecosystem, including OpentronsAI, the Python Protocol API, and the Protocol Designer application. Scientists can inspect workflows and track changes in liquid levels at microliter scale.
The system also includes a Slot Highlight view that provides additional detail for individual deck locations. This lets users monitor well volumes and module conditions throughout a run, the New York-based company explained.
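The per-well volume bookkeeping that such a view exposes can be sketched as a small simulation. This is purely illustrative: the `Deck` class and `transfer` method below are hypothetical stand-ins, not part of the Opentrons API.

```python
# Minimal sketch of tracking well volumes (in microliters) during a
# simulated run. Deck/load/transfer are illustrative names only, NOT
# the Opentrons Python Protocol API.

class Deck:
    def __init__(self):
        # Map (slot, well) -> current volume in microliters.
        self.volumes = {}

    def load(self, slot, well, volume_ul):
        self.volumes[(slot, well)] = volume_ul

    def transfer(self, src, dst, volume_ul):
        # Catch over-aspiration in simulation, before it happens on the robot.
        if self.volumes.get(src, 0.0) < volume_ul:
            raise ValueError(f"cannot aspirate {volume_ul} uL from {src}")
        self.volumes[src] -= volume_ul
        self.volumes[dst] = self.volumes.get(dst, 0.0) + volume_ul


deck = Deck()
deck.load("D3", "A1", 1000.0)                          # 1 mL of buffer in a reservoir
for col in range(1, 4):                                # fill three plate wells
    deck.transfer(("D3", "A1"), ("D2", f"A{col}"), 100.0)

print(deck.volumes[("D3", "A1")])                      # 700.0 uL remaining
```

Stepping a whole protocol through this kind of model is what lets a visualization layer show liquid levels changing at each action without touching hardware.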
For laboratories developing complex automation workflows, this level of inspection could support faster debugging and protocol refinement. Scientists can review workflows offline without interrupting active laboratory operations, noted Opentrons.
The new capability will be available through Opentrons App Version 9.0, scheduled for release in April 2026.
Opentrons CEO explains how lab feature works
Atwood replied to the following questions from The Robot Report:
Is Opentrons' simulation and inspection layer hardware-agnostic? If not, is there a specific set of procedures it covers?
Atwood: The simulation and visualization environment is designed specifically for protocols written for the Opentrons Flex robotic platform. The system takes any valid Flex protocol and allows the user to simulate and inspect how the robot will execute it. That includes everything from simple liquid transfers to complex workflows with thousands of robotic actions.
Scientists can step through protocols of virtually any size, from a handful of steps to workflows containing 10,000 or more actions, and observe pipetting, liquid handling, labware movements, and module states before running the experiment. Because the simulation environment mirrors the Flex execution environment, it lets researchers understand exactly how the robot will behave before committing reagents, consumables, and instrument time.
Why has the pharmaceutical industry been slow to address AI verification problems? How serious are they?
Atwood: Part of the challenge is that much of the expertise required to verify experiments has historically been tacit laboratory knowledge rather than formalized data. A lot of experimental troubleshooting relies on what experienced scientists notice at the bench: how a liquid behaves in a well plate, how a reaction looks as it proceeds, or whether something subtle seems off in the workflow. That kind of observational expertise is difficult to encode directly into AI systems.
In other words, the AI doesn't know what it doesn't know. Many of the verification challenges only become visible when experiments interact with the physical world. This is why the industry is now focusing on what we call physical AI: systems that combine language models with perception and real-world data.
Instead of relying solely on documentation or protocols, these systems increasingly need visual data, sensor data, and execution data from real experiments. The verification problem is serious, but it's also solvable as automation platforms generate more structured experimental data and as AI models begin interacting directly with laboratory environments.
Protocol Visualization for Opentrons Flex is designed to test robotic biopharma experiments at scale before execution. Source: Opentrons
Can you describe how Opentrons' generative AI works? How do you ensure repeatability and explainability?
Atwood: The generative AI behind OpentronsAI translates scientific intent into executable automation protocols.
At a high level, the system uses large language models combined with a retrieval-augmented generation (RAG) architecture. The models reference Opentrons' documentation and a large internal knowledge base of laboratory protocols and automation workflows developed and verified by Opentrons over many years.
When a scientist describes an experiment in natural language, the system retrieves relevant examples and structured knowledge from this database and uses that context to generate a protocol suitable for execution on the robot.
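The retrieve-then-generate flow described here can be illustrated with a toy retriever. Everything below is a stand-in under stated assumptions (keyword-overlap scoring, a three-entry knowledge base); it is not OpentronsAI's actual implementation, which would use learned embeddings over a much larger corpus.

```python
# Toy retrieval step for a RAG pipeline: score stored protocol snippets by
# word overlap with the scientist's request, then assemble the best matches
# into a generation prompt. Purely illustrative, not OpentronsAI code.

KNOWLEDGE_BASE = [
    {"name": "serial_dilution", "text": "serial dilution across a 96 well plate"},
    {"name": "pcr_setup",       "text": "pcr master mix setup with thermocycler"},
    {"name": "plate_wash",      "text": "wash a 96 well plate with buffer"},
]

def retrieve(query, k=2):
    """Return the k snippets whose text shares the most words with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(query_words & set(doc["text"].split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query):
    """Assemble the context a language model would see before generating."""
    context = "\n".join(doc["text"] for doc in retrieve(query))
    return f"Context:\n{context}\n\nWrite a Flex protocol for: {query}"

hits = retrieve("serial dilution on a 96 well plate")
print([doc["name"] for doc in hits])  # ['serial_dilution', 'plate_wash']
```

The design point the RAG step buys is grounding: the model generates against verified protocol examples rather than from its weights alone.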
Repeatability and explainability come from multiple layers of control. The generated protocols are fully inspectable Python-based automation workflows, meaning researchers can review, edit, and verify the steps before execution. Structured prompting also ensures that the input captures the information required to produce reliable automation protocols.
In practice, the AI helps scientists move faster from experimental idea to executable workflow, while still allowing human oversight and inspection before the experiment runs.
Did Opentrons work with specific lab automation vendors and end users to develop this offering, and if so, what did it learn?
Atwood: The AI capability was developed with extensive feedback from a cohort of beta users, including researchers building automated workflows on the Flex platform. One of the key insights was that as AI-generated protocols become more complex, researchers need better ways to inspect, understand, and debug workflows before execution.
In parallel, Opentrons is also collaborating with AI and robotics partners, including NVIDIA and HighRes Biosolutions, as part of broader efforts to connect AI systems with physical laboratory automation. These collaborations are helping push the ecosystem toward physical AI, where autonomous systems can reason about experiments, interact with robotic platforms, and adapt based on real-world feedback.
How is autonomous science evolving, and what challenges remain?
Atwood: The trajectory is toward increasingly autonomous laboratories, where AI systems can design experiments, execute them through robotics, observe outcomes, and refine future experiments based on the results.
Achieving that requires combining several capabilities:
- Reasoning systems, often powered by large language models, that can plan experiments
- Perception systems such as vision-language models that allow AI to observe what is happening in real experiments
- Physical AI systems that connect these models to laboratory automation platforms so experiments can be executed in the real world
Opentrons provides the infrastructure layer for this development, connecting AI-driven intent to reliable execution on automated lab hardware. The biggest challenge is building reliable feedback loops between digital intelligence and physical experiments. Unlike purely digital domains, biology requires capturing structured data from real laboratory environments: visual observations, instrument outputs, and environmental signals.
There has been rapid progress in this area, including advances in simulation and digital twin environments like NVIDIA Isaac, which help train and test AI systems before they interact with real laboratory hardware.
The post Opentrons introduces dynamic simulation, visualization for AI-generated lab workflows appeared first on The Robot Report.


