UF researchers develop new training method to help AI tools learn safely

As artificial intelligence becomes woven into everyday life, UF researchers are working to make sure the technology learns safely. A new paper from the University of Florida and Visa Research introduces a training method designed to prevent AI models from memorizing sensitive information — a growing privacy risk in modern machine learning systems.

The work, titled “Deep Learning with Plausible Deniability,” was showcased in early December at NeurIPS 2025, one of the world’s most prestigious AI conferences. The research was led by UF Ph.D. student Wenxuan Bao and UF associate professor Vincent Bindschaedler, Ph.D., in collaboration with Visa Research.

“We don’t want to design systems that maybe are the most intelligent systems without regard to how they process sensitive data,” said Bindschaedler, who is based in the UF Department of Computer & Information Science & Engineering. His work focuses on building what he calls “trustworthy machine learning,” a field that includes privacy, security and interpretability. 

Read full story on UF News