Hi, this is Canyu Chen (陈灿宇). I am a Computer Science Ph.D. student at Northwestern University and a member of the Northwestern MLL Lab, fortunately advised by Prof. Manling Li. I was a graduate visiting researcher at the University of California, Berkeley, hosted by Prof. Dawn Song. I received my B.S. from the University of Chinese Academy of Sciences. I have the privilege of collaborating closely with Prof. Dawn Song, Prof. James Evans, and Prof. Philip Torr. I am grateful for the previous mentorship of Prof. Kai Shu. I am a recipient of the prestigious Bloomberg Data Science Ph.D. Fellowship.

My research interests cover Foundation Agents, Trustworthiness, and Multimodality. FedAgent enables agent learning without sacrificing user data privacy (first author, Best Paper Award at the AAAI'26 TrustAgent Workshop, Outstanding Paper Award at the AAAI'26 PerFM Workshop). AgentTrust demonstrates the feasibility of simulating human trust behavior with LLM agents (co-first author, Outstanding Paper Award at the CIKM'25 LASS Workshop). LLMFake shows that LLM-generated misinformation can be more deceptive than human-written misinformation (first author, Didactic Paper Award at the NeurIPS'23 ICBINB Workshop). In the long run, I aim to pursue Safe and Aligned Artificial General Intelligence. I am an organizer of the ResponsibleFM community (Join the ResponsibleFM Community on Slack!), dedicated to advancing socially responsible and trustworthy foundation models (language and multimodal). I have started and led the LLMs Meet Misinformation initiative, aiming to combat misinformation in the age of LLMs. I am always happy to chat, discuss potential collaborations, or give talks about my research at related seminars. Feel free to contact me via WeChat ID: alexccychen or email: canyuchen AT u.northwestern.edu

Workshops & Tutorials

News

Publications (show selected / show by date)

(* indicates equal contributions)

2026

2025

2024

2023

2022

Invited Talks

Awards and Fellowships

Media Coverage