Zizhong Li

Hello! I am a second-year CS Ph.D. student at the University of California, Davis, advised by Dr. Jiawei Zhang. Prior to UC Davis, I received my B.E. degree in Computer Science from Tongji University in Shanghai, China.

My research interests lie in natural language processing, specifically nonparametric language modeling, misinformation & information retrieval, and multi-modal generative models.

Recent Papers

[08.2025] [Findings of EACL 2026🌟] Token-Level Precise Attack on RAG: Searching for the Best Alternatives to Mislead Generation arxiv blog page

  Zizhong Li, Haopeng Zhang, Jiawei Zhang

In this paper, we propose the Token-level Precise Attack on RAG (TPARAG), which leverages a lightweight white-box LLM as an attacker to generate and iteratively optimize malicious passages at the token level, and which applies to both white-box and black-box RAG systems.

[10.2024] A Survey of AI-Generated Video Evaluation arxiv

Xiao Liu∗, Xinhao Xiang∗, Zizhong Li∗1, Yongheng Wang, Zhuoheng Li, Zhuosheng Liu, Weidi Zhang, Weiqi Ye, Jiawei Zhang

  In this survey, we identify the emerging field of AI-Generated Video Evaluation (AIGVE), highlighting the importance of assessing how well AI-generated videos align with human perception and meet specific instructions.

[06.2024] [IJCNN 2025🌟] Learning by Ranking: Data-Efficient Knowledge Distillation from Black-Box LLMs for Information Retrieval arxiv blog page

  Zizhong Li, Haopeng Zhang, Jiawei Zhang

  In this paper, we introduce a data-efficient knowledge distillation training scheme that treats LLMs as black boxes and distills their knowledge via an innovative LLM ranker-retriever pipeline.

[03.2024] [NAACL 2024🌟] Unveiling the Magic: Investigating Attention Distillation in Retrieval-augmented Generation arxiv blog page

  Zizhong Li, Haopeng Zhang, Jiawei Zhang

In this paper, we conduct a comprehensive review of the attention distillation workflow, identifying the key factors that influence the learning quality of retrieval-augmented language models.

1. Equal Contribution.