By Ina Reichel
Berkeley Lab (LBNL) hosted a workshop on Agentic AI for User Facilities from January 21 to 22, 2026, with about 100 registered participants from user facilities across the US national lab complex and European light sources. The workshop had two main goals: to identify cross-facility patterns, gaps, and design principles for agentic AI at DOE user facilities, and to ground agentic AI in domain realities by identifying domain-specific constraints, opportunities, and readiness.

“Agentic AI workflows are key to the various particle accelerator-related activities of the Genesis Mission,” explained Jean-Luc Vay, a senior scientist in Berkeley Lab’s Accelerator Technology & Applied Physics (ATAP) Division. “Those activities include the Multi-Office particle Accelerator Team (MOAT) seed project and the American Science Cloud Scientific User Facilities Infrastructure Partners (AmSC SUF IP) project, both led by LBNL, as well as the Nuclear Physics AI-Ready Accelerator Data (NARAD) project.” He added that “one of the goals is to deploy the Osprey agentic software, developed at LBNL, in accelerator control rooms at facilities of the seven participating laboratories: Argonne National Laboratory (ANL), Berkeley Lab, Brookhaven National Laboratory (BNL), Fermilab, Jefferson Lab, Oak Ridge National Laboratory (ORNL), and SLAC National Accelerator Laboratory.”
Thorsten Hellert, staff scientist in ATAP’s Advanced Light Source Accelerator Physics Program and chair of the workshop organizing committee, opened the plenary session, and Berkeley Lab Associate Lab Director for Computing Sciences Jonathan Carter welcomed the participants.
“The workshop is incredibly timely given DOE’s focus on AI across the complex in the form of the Genesis Mission,” said Carter. The Genesis Mission, he explained, will supercharge scientific discovery. “We’ll fully harness artificial intelligence across federal R&D and forge new public-private partnerships. It will boost the impact of R&D by a factor of two,” he added.
This was echoed by Vay: “The agentic AI workshop was a very timely opportunity to discuss the latest developments and the future of agentic AI as it applies not only to particle accelerators but also to synergistic user facilities.”
Eno Reyes, cofounder and CTO of Factory AI, a company bringing autonomy to software engineering with task-specific AI agents, gave the first plenary talk. His talk, “Software Development Agents: The Blueprint for Scientific AI,” emphasized the role of coding assistants in science and the importance of validation.
Kevin Yager, Brookhaven National Lab’s interim director of the Center for Functional Nanomaterials (CFN) and group leader of AI-Accelerated Nanoscience, then talked about “The Future of AI-Empowered Physical Sciences.” Yager said he envisions scientists soon being surrounded by a swarm of interconnected AI agents that handle much of the routine work, freeing scientists to focus on what they do best.

The slate of national lab speakers continued with Renan Souza from ORNL, Robert Underwood from ANL, and Esther Tsai from BNL. Their talks ranged from evaluating agentic AI systems to envisioning the field’s future.
In the contributed talks, Hellert spoke on a production-ready framework for deploying agentic AI in user facilities. He was followed by Alok Kamatar, a PhD student at the University of Chicago, who brought the discussion from the user facilities to the academic realm.
In the afternoon, participants divided into five topic-based breakout groups:
- Semantic understanding, context, and knowledge (chaired by Morgan Wall, scientific software engineer at LBNL’s Molecular Foundry)
- Models and reasoning for scientific agentic systems (chaired by Ed Barnard, data and analytics lead scientist at the Molecular Foundry, and Alex Hexemer, senior scientist and group lead at the ALS)
- Agent architectures and design patterns (chaired by Hellert)
- Tools, interfaces, and integration surfaces (chaired by Wahid Bhimji, division deputy for AI and Science at LBNL’s NERSC)
- Human-agent collaboration and organizational adoption (chaired by Kadidia Konate, data scientist at NERSC)
The groups were tasked with addressing the following questions:
- What problem is agentic AI solving in practice today?
- What makes an environment “agent-ready” for this topic?
- Where do agentic systems fail—and how do those failures surface?
- What design patterns are emerging across facilities?
- How does this relate to the Genesis Mission?
- What is the next concrete milestone for this topic (6–18 months)?
The participants reconvened, and each breakout group reported its findings. Many continued their discussions during the reception, which was held outdoors, giving participants the opportunity to admire the sunset.

The morning of the second day was dedicated to domain-specific breakout groups:
- Accelerators and control systems (chaired by Hellert)
- Beamlines and experimental stations (chaired by Hexemer)
- Computing and AI infrastructure (chaired by Steven Farrell, group lead at NERSC)
- Nanoscience, biology, and chemistry (chaired by Barnard)
They were tasked with addressing domain-specific questions:
- Which workflows in this domain are most promising for agentic AI?
- What domain constraints shape agent design?
- How “agent-ready” is this domain today?
- What does safe autonomy look like in this domain?
- What evidence would convince this community that agentic AI works?
At the end of the session, participants convened on Zoom for a brief report-out.
The participants then gathered to work collaboratively on a workshop report. Working in small groups throughout the afternoon, they drafted the sections of the report assigned to their group.
“The output of this workshop in the form of collective learning and the written report will undoubtedly be very useful to our accelerator-related activities within the Genesis Mission, as well as all the scientific and technological grand challenges that are supported by particle accelerators,” concluded Vay.
“The timing of this workshop could not have been better,” said Hellert. “As agentic AI matures quickly, there has been a clear gap across the DOE complex for a focused, cross-facility discussion on how these systems should be designed, evaluated, and deployed in practice. The fully subscribed participation reflects both the demand for this conversation and the need for a coordinated path forward.”