The nuclear criticality safety workforce is facing a sizeable generational and knowledge gap, as experienced experts begin to retire and the younger analysts replacing them arrive unfamiliar with facility histories.
Anneli Brackbill, a graduate student in the Department of Nuclear Engineering, saw a need to help minimize the potential repercussions of the transition by researching how to integrate AI into the field of nuclear criticality safety.
Brackbill developed a proof-of-concept machine learning (ML) tool that demonstrated the ability to take in safety evaluation documents, characterize them, and identify similar documents. The tool is meant to help analysts create Nuclear Criticality Safety Evaluations (NCSEs); it does not perform any of the actual analysis.
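The article does not describe the tool's internals, but characterizing documents and finding similar ones is commonly done by representing each document as a vector and ranking documents by how close those vectors are. The sketch below illustrates that general idea with TF-IDF vectors and cosine similarity; the example texts and the specific representation are assumptions for illustration, not details of Brackbill's tool.

```python
# Illustrative only: a generic way to characterize documents and rank them by
# similarity using TF-IDF vectors and cosine similarity. The example texts and
# the representation are assumptions, not details of the actual tool.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical snippets standing in for prior safety evaluation documents.
documents = [
    "Criticality safety evaluation for fissile material storage in a vault.",
    "Evaluation of glovebox operations involving plutonium solutions.",
    "Storage array analysis for drums of uranium oxide powder.",
]

# Characterize each document as a weighted term vector.
vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)

# Characterize a new evaluation and rank the existing documents by similarity.
query = "New glovebox operation handling plutonium solution transfers."
query_vector = vectorizer.transform([query])
scores = cosine_similarity(query_vector, doc_vectors).ravel()

for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.2f}  {documents[idx]}")
```

In a production tool, the document vectors would more likely come from a neural embedding model, but the ranking step works the same way.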

Through her research, Brackbill hopes to clearly communicate how the use of AI will help criticality safety analysts while maintaining the highest level of safety that the field is known for.
Brackbill’s research, which is funded by the operational criticality safety group at Los Alamos National Laboratory, won the Best Paper Award at the 2025 American Nuclear Society (ANS) Nuclear Criticality Safety Division (NCSD) Conference in Austin, Texas, in September.
“I was very surprised and honored,” Brackbill said. “Nuclear criticality safety is a pretty small but important field. There’s a lot of work to do and not a lot of people doing it, so we came into this just trying to make a tool that would help make it a bit easier.”
Communication and Validation
In developing the tool, Brackbill used large language models (LLMs) and retrieval-augmented generation (RAG). RAG improves the output of an LLM by having it reference an authoritative knowledge base outside of its training data before responding to a query.
The tool is designed to help write NCSEs, which can range from 20 to 100 pages in length. The actual modeling and decision making are still done by humans, but the large language model can provide a draft of the document to save time while also building a large database of information.
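As a rough illustration of that retrieval-augmented generation pattern, the sketch below retrieves the prior evaluations most relevant to a new process, folds them into a prompt, and asks a language model for a first draft. The retrieval method, prompt wording, and the call_llm() placeholder are assumptions made for the example, not Brackbill's actual implementation.

```python
# Minimal, self-contained sketch of the RAG drafting step: retrieve relevant
# prior evaluations, fold them into the prompt, and ask an LLM for a draft.
# retrieve(), call_llm(), and the prompt wording are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def retrieve(query: str, knowledge_base: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k prior documents most similar to the query."""
    vectorizer = TfidfVectorizer(stop_words="english")
    vectors = vectorizer.fit_transform(knowledge_base + [query])
    scores = cosine_similarity(vectors[-1], vectors[:-1]).ravel()
    return [knowledge_base[i] for i in scores.argsort()[::-1][:top_k]]


def call_llm(prompt: str) -> str:
    """Placeholder for whichever LLM backend the tool connects to."""
    raise NotImplementedError("Plug in an LLM client here.")


def draft_evaluation(new_process: str, knowledge_base: list[str]) -> str:
    """Produce a grounded first draft for a human analyst to review and revise."""
    # Retrieval: pull the most relevant prior evaluations from the knowledge base.
    context = "\n\n".join(retrieve(new_process, knowledge_base))

    # Augmentation: put the retrieved text into the prompt so the draft is
    # grounded in the facility's own documents, not just the model's training data.
    prompt = (
        "Using the prior nuclear criticality safety evaluations below as "
        "reference, draft the corresponding sections of a new evaluation for "
        "the process described. All modeling and decision making remain with "
        "the analyst.\n\n"
        f"Prior evaluations:\n{context}\n\nNew process:\n{new_process}"
    )

    # Generation: the LLM returns draft text; it performs no safety analysis itself.
    return call_llm(prompt)
```

The design keeps the human in the loop by construction: the model only drafts text from retrieved reference material, while the analyst supplies the modeling, judgment, and final approval.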

Vlad Sobes, associate professor and Charles P. Postelle Professor in nuclear engineering, compares the tool to an airplane on autopilot.
“The pilots are there. They’re highly trained. They’re monitoring the situation,” said Sobes, who is Brackbill’s PhD advisor. “The plane is flying itself, and the pilot is there, and the industry has not suffered from catastrophic loss in any sort of performance and safety.”
Brackbill faced some initial hesitance from the nuclear criticality safety community about the AI tool, with some wondering if it may replace them and others not fully trusting AI. But once she showed them how it worked and provided some initial results, they realized how valuable the tool could be in helping them.
“I think this sort of validates to me that we are communicating the message effectively,” Sobes said. “I think there has been a bad public perception of reliability, whereas we now have a clear vision of how this can be done, how it can be done safely, how it can be done effectively, and how it can be used for the good. People are both hearing and understanding what we’re saying.”
Positive Power of AI/ML
Brackbill spent the past three summers interning at Los Alamos National Laboratory. As she created the tool, she worked in close collaboration with the operational criticality safety group there.
“Before I began doing this project, I kind of had an idea of how all the processes work because of my previous work at Los Alamos,” Brackbill said. “The entire tool has been developed with that in mind, so it’ll actually be useful to people.”
Given the national lab’s interest and financial support, Sobes anticipates Los Alamos will be an early adopter of the AI tool if it eventually becomes a usable product in the industry.
“It’s not just an academic idea. We are integrated with an actual group that has this issue of an aging expert population and with young people coming in all the time,” Sobes said. “The project is just a demonstration of this capability, but it’s closely tied to reality and real use.”
Brackbill hopes the project not only benefits the nuclear industry but also shows the general public how AI/ML can be used for good.
“AI and machine learning have been in the news a lot lately, and there’s a lot of applications that people don’t necessarily love. But I think this is a great example of when it’s used intentionally for a very specific scenario, it can do really great things,” she said. “I don’t think it should be entirely discounted just because of the other things when there are niche applications that it’s really useful for.”
Contact
Rhiannon Potkey (rpotkey@utk.edu)