Multi-Institution Team Wins Two Awards at AAAI-26 Workshops

First author Canyu Chen led a multi-institution research team in developing a scalable approach to training AI agents without sacrificing users’ data privacy

A multi-institution team led by Northwestern Engineering’s Canyu Chen earned a Best Paper Award at the Association for the Advancement of Artificial Intelligence (AAAI) 2026 Workshop on Trust and Control in Agentic AI and an Outstanding Paper Award at the AAAI-26 Workshop on Personalization in the Era of Large Foundation Models, both held on January 27 in Singapore.

The premier AAAI Conference on Artificial Intelligence series promotes theoretical and applied AI research as well as intellectual interchange among researchers, practitioners, scientists, students, and engineers. The AAAI-26 workshop program included 52 workshops covering a wide range of topics in AI.

Chen is a PhD student in computer science and a member of the Machine Learning and Language (MLL) Lab. He was the first author of the winning paper, “Federated Agent Reinforcement Learning,” co-authored by Chen’s adviser, Manling Li, assistant professor of computer science, and Yiping Lu, assistant professor of industrial engineering and management sciences at the McCormick School of Engineering. Collaborators also included Dawn Song and Zhanhui Zhou (University of California, Berkeley), Tian Li and Zhaorun Chen (University of Chicago), Shizhe Diao (NVIDIA Research), and Kangyu Zhu (Brown University).

“Receiving this recognition from the community means a great deal to me,” Chen said. “This marks an important personal milestone in my ambition to build AI systems that people can truly trust.”

In this work, Chen and the research team developed a scalable approach to training AI agents without sacrificing users’ data privacy. Powered by large language models, AI agents, such as online shopping assistants, travel planners, or household robots, are designed to perform tasks without supervision.

To improve these autonomous systems, Chen explained, developers typically train AI agents using vast amounts of sensitive user data collected on centralized servers, which poses a significant privacy and security challenge.

“Our work enables AI agents to learn collaboratively across many users or organizations without sharing raw data,” Chen said. “We found that distributed learning works much better than each user training their own system in isolation, and the approach remains effective even when users have very different types of data.”
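The core idea Chen describes, many clients improving a shared model without exposing their raw data, can be illustrated with a minimal federated-averaging sketch. This is an illustrative toy (simple one-parameter least squares with made-up client data), not the paper's federated agent reinforcement learning method: each client runs gradient steps locally and sends back only updated weights, which the server averages.

```python
# Minimal sketch of the federated-averaging pattern: clients train
# locally on private data and share only model weights, never the data.
# Illustrative toy only; not the paper's actual algorithm.

def local_update(weights, data, lr=0.05, epochs=5):
    """One client's gradient steps on its private data
    (1-D least squares: y ~ w * x). Raw data never leaves this function."""
    w = weights
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(global_w, client_datasets):
    """Server broadcasts the global model, collects each client's
    updated weights (not data), and averages them."""
    updates = [local_update(global_w, d) for d in client_datasets]
    return sum(updates) / len(updates)

# Two clients holding different private samples of the same trend y = 2x.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0), (4.0, 8.0)],
]
w = 0.0
for _ in range(20):
    w = federated_round(w, clients)
# w converges toward 2.0, even though neither client shared its data.
```

In this toy, the averaged model recovers the shared trend even though each client sees different data, mirroring the team's finding that distributed learning outperforms isolated per-user training.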

Chen noted that the team’s approach paves the way to deploying AI agents in privacy-sensitive areas such as personal assistants, healthcare services, or enterprise applications, where sharing raw user data is impractical or undesirable.

“More broadly, we hope this work encourages the community to explore new ways of building AI systems that are both capable and trustworthy, making advanced AI technology more widely usable in real-world settings,” Chen said.


McCormick News Article