Hi, this is Canyu Chen (陈灿宇). I am a fourth-year Computer Science Ph.D. student at the Illinois Institute of Technology (IIT), which I joined in Fall 2021, advised by Prof. Kai Shu. Before IIT, I received my B.S. in Computer Science from the University of Chinese Academy of Sciences (UCAS) in 2020.
I focus on Truthful, Safe, and Responsible Large Language Models, with applications in Social Computing and Healthcare. I started and currently lead the LLMs Meet Misinformation initiative, which aims to combat misinformation in the age of LLMs. I am also an organizer of the OracleLLM community, dedicated to exploring and advancing the concept of LLMs-as-Oracles. In the long run, I aim to pursue Safe and Aligned Artificial General Intelligence. I am always happy to chat, discuss potential collaborations, or give talks about my research at related seminars. Feel free to contact me via email (cchen151 AT hawk.iit.edu) or WeChat (ID: alexccychen).
News
- [09/2024] Our paper Can Large Language Model Agents Simulate Human Trust Behavior? has been accepted to NeurIPS 2024. More details: [project website] [Code and results on GitHub]
- [09/2024] Our paper Can Large Language Models Identify Authorship? has been accepted to EMNLP 2024 Findings. More details: [project website] [Code on GitHub]
- [09/2024] Our new survey paper Authorship Attribution in the Era of LLMs: Problems, Methodologies, and Challenges has been accepted to SIGKDD Explorations 2024. More details: [project website] [Paper list on GitHub]
- [08/2024] I will give a Research Spotlight oral presentation titled "Combating Misinformation in the Age of LLMs" at The 2024 Summit on Responsible Decentralized Intelligence: Future of Decentralization and AI, hosted by The Berkeley Center for Responsible, Decentralized Intelligence (Berkeley RDI). [Slides] [YouTube]
- [07/2024] New preprint online: Can Editing LLMs Inject Harm? More details: [project website] [Code, Results, Dataset on GitHub]
- [07/2024] New preprint online: MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge for Text-to-Image Generation? More details: [project website] [Code on GitHub] [Models, Datasets, and Leaderboard on Hugging Face]
- [06/2024] Our paper Evaluating the Social Impact of Generative AI Systems in Systems and Society is forthcoming in the Oxford Handbook on the Foundations and Regulation of Generative AI (Oxford University Press).
- [06/2024] Invited by Prof. Yuxuan Liang to give a talk at Swarma Club: "Can Large Language Model Agents Simulate Human Trust Behaviors?". [Slides]
- [05/2024] Invited to talk at KAIST/IBS Data Science Group: "Combating Misinformation in the Age of Large Language Models (LLMs)". [Slides]
- [04/2024] Our survey paper Combating Misinformation in the Age of LLMs: Opportunities and Challenges is accepted to AI Magazine 2024. [publication] [paper list]
- [03/2024] Honored to receive the 🏆 Sigma Xi Student Research Award 2024 from Illinois Tech and the local Sigma Xi chapter. Thanks to Illinois Tech Today for the coverage.
Older News
- [02/2024] New preprint online: Can Large Language Model Agents Simulate Human Trust Behaviors? [project website] Code and results have been released for verification: [code and results]. Demos on Hugging Face: [Trust Game Demo] [Repeated Trust Game Demo].
- [01/2024] Our paper Can LLM-Generated Misinformation Be Detected? has been accepted to ICLR 2024. [project website] [dataset and code]
- [12/2023] Honored to receive 🏆 Didactic Paper Award (1/35 of all accepted papers) in workshop ICBINB@NeurIPS 2023 for Can LLM-Generated Misinformation Be Detected?.
- [10/2023] Started the LLMs Meet Misinformation initiative along with a new survey paper, Combating Misinformation in the Age of LLMs: Opportunities and Challenges [project website], and a paper list collecting related papers and resources [paper list].
- [10/2023] Honored to be covered by Illinois Tech News for our research on Trustworthy AI. [IIT News]
- [09/2023] New preprint online: Can LLM-Generated Misinformation Be Detected? [project website]. The dataset and code have been released: [dataset and code].
- [06/2023] I will attend FAccT 2023 as a volunteer. Welcome to Chicago, and glad to connect!
- [05/2023] One paper accepted at EACL 2023; I will attend online. Welcome to our poster!
- [04/2023] Glad to be invited by Prof. Lu Cheng to give a talk on AI Fairness at UIC. [Slides]
- [11/2022] Attending NeurIPS 2022 in person. See you in New Orleans!
- [08/2022] Attending KDD 2022 in person. Glad to meet old friends and make new ones!
Publications
2024
-
Can Large Language Model Agents Simulate Human Trust Behavior?
Chengxing Xie*, Canyu Chen*, Feiran Jia, Ziyu Ye, Shiyang Lai, Kai Shu, Jindong Gu, Adel Bibi, Ziniu Hu, David Jurgens, James Evans, Philip Torr, Bernard Ghanem, Guohao Li. (*equal contributions)
Published in Proceedings of the 38th Conference on Neural Information Processing Systems (NeurIPS 2024).
Also presented at the workshops AGI@ICLR 2024 and NLP+CSS@NAACL 2024, the Seventeenth Midwest Speech and Language Days Symposium (MSLD 2024, Oral), and the First Workshop on AI Behavioral Science (AIBS@KDD 2024, Oral).
[arXiv] [project website] [slides] [code and results]
Demos on Hugging Face: [Trust Game Demo] [Repeated Trust Game Demo]
Invited Talks: [Swarma Club]
-
Can Editing LLMs Inject Harm?
Canyu Chen*, Baixiang Huang*, Zekun Li, Zhaorun Chen, Shiyang Lai, Xiongxiao Xu, Jia-Chen Gu, Jindong Gu, Huaxiu Yao, Chaowei Xiao, Xifeng Yan, William Yang Wang, Philip Torr, Dawn Song, Kai Shu (*equal contributions)
Presented at the workshops TiFA@ICML 2024 (Lightning Talk) and NextGenAISafety@ICML 2024.
[arXiv] [project website] [poster] [Code, Results, and Dataset] [YouTube]
🏆 Award: Research Spotlight at The 2024 Summit on Responsible Decentralized Intelligence: Future of Decentralization and AI, hosted by The Berkeley Center for Responsible, Decentralized Intelligence (Berkeley RDI).
Invited Talks: [Berkeley Decentralization & AI Summit Research Spotlight Talk]
Included in Tutorial: [Knowledge Editing for Large Language Models@IJCAI 2024]
-
Combating Misinformation in the Age of LLMs: Opportunities and Challenges
Canyu Chen, Kai Shu.
Published in AI Magazine 2024 (Volume 45, Issue 3, Fall 2024), Highlight Article.
[publication] [arXiv] [project website] [Slides] [paper list] [YouTube]
Media Coverage: [Marktechpost AI Research News] [Reddit r/machinelearningnews] [Analytics Vidhya Blog].
Invited Talks: [Berkeley Decentralization & AI Summit Research Spotlight Talk] [KAIST/IBS Data Science Group] [Psych Methods].
-
Can LLM-Generated Misinformation Be Detected?
Canyu Chen, Kai Shu.
Published in Proceedings of The Twelfth International Conference on Learning Representations (ICLR 2024).
Also presented at the workshops RegML@NeurIPS 2023 (Oral) and ICBINB@NeurIPS 2023 (Spotlight), and at the AGI Leap Summit 2024 symposium.
[publication] [arXiv] [project website] [dataset and code] [Slides] [YouTube] [zhihu] [twitter/x.com] [LinkedIn]
🏆 Award: Didactic Paper Award in the workshop ICBINB@NeurIPS 2023 (1/35 of all accepted papers).
🏆 Award: Research Spotlight at The 2024 Summit on Responsible Decentralized Intelligence: Future of Decentralization and AI, hosted by The Berkeley Center for Responsible, Decentralized Intelligence (Berkeley RDI).
🏆 Award: Spotlight Research in AGI Leap Summit 2024.
🏆 Award: Third Place Award in the Illinois Tech College of Computing Poster Session 2024 (Ph.D. Group).
Included in the curriculum at: [The City University of New York].
Included in Tutorials: [Defending Against Generative AI Threats in NLP@SBP-BRiMS 2024] and [Preventing and Detecting Misinformation Generated by Large Language Models@SIGIR 2024].
Media Coverage: [The Register] [LLM Security] [Blog 1] [Blog 2].
Invited Talks: [Berkeley Decentralization & AI Summit Research Spotlight Talk] [AGI Leap Summit Spotlight Research Talk] [Tsinghua AI Time] [Psych Methods] [KAIST/IBS Data Science Group].
-
Authorship Attribution in the Era of LLMs: Problems, Methodologies, and Challenges
Baixiang Huang, Canyu Chen, Kai Shu.
Published in SIGKDD Explorations 2024.
[arXiv] [project website] [Paper list on GitHub]
-
Can Large Language Models Identify Authorship?
Baixiang Huang, Canyu Chen, Kai Shu.
Published in Findings of the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024 Findings, Long Paper).
[arXiv] [project website] [code]
-
MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge for Text-to-Image Generation?
Zhaorun Chen, Yichao Du, Zichen Wen, Yiyang Zhou, Chenhang Cui, Zhenzhen Weng, Haoqin Tu, Chaoqi Wang, Zhengwei Tong, Qinglan Huang, Canyu Chen, Qinghao Ye, Zhihong Zhu, Yuqing Zhang, Jiawei Zhou, Zhuokai Zhao, Rafael Rafailov, Chelsea Finn, Huaxiu Yao
Presented in workshop FM-Wild@ICML 2024.
[arXiv] [project website] [Code] [Models, Datasets, and Leaderboard on Hugging Face]
🏆 Award: #1 Paper of the Day on Hugging Face AK Daily Papers.
-
Model Attribution in Machine-Generated Disinformation: A Domain Generalization Approach with Supervised Contrastive Learning
Alimohammad Beigi, Zhen Tan, Nivedh Mudiam, Canyu Chen, Kai Shu, Huan Liu.
Published in Proceedings of The 11th IEEE International Conference on Data Science and Advanced Analytics (DSAA 2024)
[arXiv]
-
SST: Multi-Scale Hybrid Mamba-Transformer Experts for Long-Short Range Time Series Forecasting
Xiongxiao Xu, Canyu Chen, Yueqing Liang, Baixiang Huang, Guangji Bai, Liang Zhao, Kai Shu.
arXiv preprint. Aug. 2024.
[arXiv]
-
MetaGAD: Learning to Meta Transfer for Few-shot Graph Anomaly Detection.
Xiongxiao Xu, Kaize Ding, Canyu Chen, Kai Shu.
Published in Proceedings of The 11th IEEE International Conference on Data Science and Advanced Analytics (DSAA 2024)
[arXiv]
-
Introducing v0.5 of the AI Safety Benchmark from MLCommons
MLCommons AI Safety Working Group
arXiv preprint. Apr. 2024.
[arXiv] [official blog]
Media Coverage: [IEEE Spectrum] [AK Daily Papers] [Marktechpost] [AI Business] [EnterpriseAI News] [HPCwire] [Hackster.io] [ELBLOG.PL] [SiliconANGLE] [GoatStack.ai].
-
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Canyu Chen, Hal Daumé III, Jesse Dodge, Isabella Duan, Ellie Evans, Felix Friedrich, Avijit Ghosh, Usman Gohar, Sara Hooker, Yacine Jernite, Ria Kalluri, Alberto Lusoli, Alina Leidinger, Michelle Lin, Xiuzhu Lin, Sasha Luccioni, Jennifer Mickel, Margaret Mitchell, Jessica Newman, Anaelia Ovalle, Marie-Therese Png, Shubham Singh, Andrew Strait, Lukas Struppek, Arjun Subramonian
Forthcoming in the Oxford Handbook on the Foundations and Regulation of Generative AI, Oxford University Press. Jun. 2024.
[arXiv]
2023
-
PromptDA: Label-guided Data Augmentation for Prompt-based Few-shot Learners.
Canyu Chen, Kai Shu.
Published in Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2023, Main Conference Long Paper).
Also presented at the ENLSP@NeurIPS 2022 workshop (Oral Spotlight).
[arXiv] [code] [youtube] [bilibili] [slides] [poster]
-
Fair Classification via Domain Adaptation: A Dual Adversarial Learning Approach.
Yueqing Liang, Canyu Chen, Tian Tian, Kai Shu.
Published in Frontiers in Big Data, 2023.
[publication] [arXiv]
-
Attacking Fake News Detectors via Manipulating News Social Engagement.
Haoran Wang, Yingtong Dou, Canyu Chen, Lichao Sun, Philip S. Yu, Kai Shu.
Published in Proceedings of The ACM Web Conference 2023 (WWW 2023).
[arXiv] [code]
Media Coverage: [Montreal AI Ethics Institute].
2022
-
Combating Health Misinformation in Social Media: Characterization, Detection, Intervention, and Open Issues.
Canyu Chen*, Haoran Wang*, Matthew Shapiro, Yunyu Xiao, Fei Wang, Kai Shu. (*equal contributions)
arXiv preprint. Nov. 2022.
[arXiv]
-
When Fairness Meets Privacy: Fair Classification with Semi-Private Sensitive Attributes.
Canyu Chen, Yueqing Liang, Xiongxiao Xu, Shangyu Xie, Ashish Kundu, Ali Payani, Yuan Hong, Kai Shu.
Presented at the workshops TSRML@NeurIPS 2022 and AFCP@NeurIPS 2022.
[arXiv] [Video] [Slides] [Poster]
Media Coverage: [Illinois Tech News].
-
Artificial Intelligence Algorithms for Treatment of Diabetes.
Mudassir M. Rashid, Mohammad Reza Askari, Canyu Chen, Yueqing Liang, Kai Shu, Ali Cinar.
Published in Algorithms, 2022.
[Paper]
-
BOND: Benchmarking Unsupervised Outlier Node Detection on Static Attributed Graphs.
Kay Liu, Yingtong Dou, Yue Zhao, Xueying Ding, Xiyang Hu, Ruitong Zhang, Kaize Ding, Canyu Chen, Hao Peng, Kai Shu, Lichao Sun, Jundong Li, George H. Chen, Zhihao Jia, Philip S. Yu.
Published in Proceedings of the 36th Conference on Neural Information Processing Systems (NeurIPS 2022), Datasets and Benchmarks Track.
[arXiv] [code]
Invited Talks
-
[08/06/2024] "Combating Misinformation in the Age of LLMs" at The 2024 Summit on Responsible Decentralized Intelligence: Future of Decentralization and AI,
hosted by The Berkeley Center for Responsible, Decentralized Intelligence (Berkeley RDI)
[Slides]
[YouTube]
-
[06/26/2024] "Can Large Language Model Agents Simulate Human Trust Behaviors?" invited by Prof. Yuxuan Liang at Swarma Club
[Slides]
-
[05/10/2024] "Combating Misinformation in the Age of Large Language Models (LLMs)" invited by Wenchao Dong at KAIST/IBS Data Science Group
[Slides]
-
[04/18/2023] "Fairness in AI: An Introduction" invited by Prof. Lu Cheng at UIC
[Slides]
Awards and Fellowships
- Highlight Article in AI Magazine (Volume 45, Issue 3, Fall 2024).
- Research Spotlight at The 2024 Summit on Responsible Decentralized Intelligence: Future of Decentralization and AI, hosted by The Berkeley Center for Responsible, Decentralized Intelligence (Berkeley RDI).
- Great Review at ACL Rolling Review, April 2024.
- Travel Award for Seventeenth Midwest Speech and Language Days (MSLD 2024)
- Sigma Xi Student Research Award 2024 from Illinois Tech and the local Sigma Xi chapter. (An award of $500 given each year to up to two graduate students at Illinois Tech who have demonstrated significant promise in research and scholarship through their accomplishments; I was the sole awardee across the whole university in 2024.)
- Technical AI Safety Fellowship 2024 Spring from Harvard AI Safety Student Team.
- Third Place Award in the Illinois Tech College of Computing Poster Session 2024 (Ph.D. Group).
- Spotlight Research in the symposium AGI Leap Summit 2024.
- Didactic Paper Award (1/35 of all accepted papers) in the workshop ICBINB@NeurIPS 2023.
- NeurIPS 2023 Volunteer Award.
Media Coverage
- Illinois Tech Today: "Recognizing the Outstanding Work of Our Illinois Tech Faculty"
- Marktechpost AI Research News: "This AI Report from the Illinois Institute of Technology Presents Opportunities and Challenges of Combating Misinformation with LLMs"
- The Register: "It's true, LLMs are better than people – at creating convincing misinformation"
- Illinois Tech News: "Breaking Biases"
- Montreal AI Ethics Institute: "Attacking Fake News Detectors via Manipulating News Social Engagement"
- IEEE Spectrum: "Announcing a Benchmark to Improve AI Safety: MLCommons has made benchmarks for AI performance—now it's time to measure safety"