About Me

I am an Assistant Professor in the College of Computer Science at Nankai University. Prior to joining Nankai University, I spent a rewarding year as a Postdoctoral Scientist at Amazon, where I worked with Prof. Rui Song, Prof. Han Zhao at UIUC, Prof. Hengrui Cai at UCI, and Prof. Sheng Wang at the University of Washington on developing large language models for seller-domain tasks. Before that, I received my Ph.D. degree in Computer Science and Engineering from the University of Notre Dame, advised by Prof. Meng Jiang. I also hold a Master’s degree in Electrical and Computer Engineering from UIUC and a Bachelor’s degree from Sun Yat-sen University.

Research Interest

My primary research interests lie in data mining, natural language processing, and machine learning. I define myself as a researcher who mines structural knowledge from heterogeneous data sources. I aim to use this knowledge as a first principle to enable intelligent systems to generate more accurate, trustworthy, and explainable results, ultimately helping to reduce human effort.

My current focus includes several key areas: NLP with Structural Knowledge, NLP for Science, Information Retrieval, Retrieval-Augmented Generation, and Large Language Model Reasoning. Below are some keywords that reflect my research interests:

  • Structural Knowledge Construction
  • Knowledge-enhanced Reasoning System
  • Trustworthy Large Language Model

Mentoring & Collaboration

  • Xianrui Zhong, UIUC. We work on RAG with Structural Knowledge.
  • Zehong Wang, University of Notre Dame. We work on Agentic LLM.
  • Mengxia Yu, University of Notre Dame. We work on Agentic LLM.
  • Bolian Li, Purdue University. We work on LLM alignment.
  • Yifan Wang, Purdue University. We work on LLM alignment.
  • Yanjin He, Peking University. We work on science of LLM and mathematical reasoning.
  • Yuyang Bai, XJTU. Our work on taxonomy construction has been published at CIKM 2024 and ACL 2025.
  • Zhenyu Wu, XJTU. Our work on mathematical reasoning has been published at EMNLP 2024 and ACL 2025.
  • Zhaoxuan Tan, University of Notre Dame. Our work on personalized LLMs has been published at EMNLP 2024.
  • Billy Porter, University of Notre Dame. We work on weakly supervised text classification. Billy is now a researcher at Google.

My career has been supported and inspired by many role models, including all of my advisors listed above, and many others.

I also hold the deepest respect for Prof. Thomas Huang, even though I never had the opportunity to be advised by him or study in his group or classes. His remarkable life and career have deeply inspired me, leading me to reflect thoughtfully on the academic path I hope to pursue and the kind of professor I strive to become.

What’s New

  • [August 2024] The code of CodeTaxo is available now. Feel free to generate your own taxonomy!
  • [August 2024] Three papers about (1) Taxonomy Expansion via Code Prompting (CodeTaxo), (2) Mathematical Reasoning (ProCo), and (3) Curriculum Learning (PUDF) are available on arXiv now!
  • [July 2024] Chain-of-Layer was accepted by CIKM 2024. The code of Chain-of-Layer is available now. Feel free to generate your own taxonomy!
  • [February 2024] Two papers about (1) Entity Linking via leveraging LLMs (ChatEL (Coming Soon)) and (2) Mathematical Reasoning (MinT) were accepted by LREC-COLING 2024. Huge congrats to Yifan Ding and Zhenwen Liang!
  • [February 2024] Three papers about (1) Taxonomy Induction via prompting LLMs (Chain-of-Layer), (2) personalized LLMs (OPPU), and (3) Entity Linking via leveraging LLMs (EntGPT) are available on arXiv now!
  • [October 2023] One paper was accepted by EMNLP 2023.
  • [April 2023] I am joining Tencent AI Lab as a full-time research intern this summer! I will work with Dr. Linfeng Song and Dr. Haitao Mi.
  • [May 2022] One paper was accepted by KDD 2022.
  • [May 2021] One paper was accepted by KDD 2021.
  • [May 2021] Our ICSE 2021 paper was selected for an ACM SIGSOFT Distinguished Paper Award!

Recent Publications

  • Qingkai Zeng, Yuyang Bai, Zhaoxuan Tan, Zhenyu Wu, Shangbin Feng, Meng Jiang. CodeTaxo: Enhancing Taxonomy Expansion with Limited Examples via Code Language Prompts, Findings of the Annual Meeting of the Association for Computational Linguistics (ACL), 2025. [PDF] [Code]

  • Qingkai Zeng, Yuyang Bai, Zhaoxuan Tan, Shangbin Feng, Zhenwen Liang, Zhihan Zhang, Meng Jiang. Chain-of-Layer: Iteratively Prompting Large Language Models for Taxonomy Induction from Limited Examples, In Proceedings of the ACM International Conference on Information & Knowledge Management (CIKM), 2024. [PDF] [Code]

  • Qingkai Zeng, Zhihan Zhang, Jinfeng Lin, Meng Jiang. Completing Taxonomies with Relation-Aware Mutual Attentions, In the Workshop on Mining and Learning with Graphs (MLG) in conjunction with the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2023. [PDF]

  • Qingkai Zeng, Jinfeng Lin, Wenhao Yu, Jane Cleland-Huang, Meng Jiang. Enhancing Taxonomy Completion with Concept Generation via Fusing Relational Representations, In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2021. [PDF]

  • Qingkai Zeng, Wenhao Yu, Mengxia Yu, Tianwen Jiang, Tim Weninger, Meng Jiang. Tri-Train: Automatic Pre-fine Tuning between Pre-training and Fine-tune Training for SciNER, Findings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020. [PDF]

  • Qingkai Zeng, Mengxia Yu, Wenhao Yu, Jinjun Xiong, Yiyu Shi, Meng Jiang. Faceted Hierarchy: A New Graph Type to Organize Scientific Concepts and a Construction Method, In the Workshop on Graph-Based Natural Language Processing (TextGraphs) at the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019. [PDF]

  • Zhaoxuan Tan, Qingkai Zeng, Yijun Tian, Zheyuan Liu, Bin Yin, Meng Jiang. Democratizing Large Language Models via Personalized Parameter-Efficient Fine-tuning, arXiv preprint arXiv:2402.04401 (2024). [PDF]

  • Jinfeng Lin, Yalin Liu, Qingkai Zeng, Meng Jiang, Jane Cleland-Huang. Traceability Transformed: Generating More Accurate Links with Pre-Trained BERT Models, In Proceedings of the IEEE/ACM International Conference on Software Engineering (ICSE), 2021. [PDF] (ACM SIGSOFT Distinguished Paper Award)

  • Chenguang Zhu, William Hinthorn, Ruochen Xu, Qingkai Zeng, Michael Zeng, Xuedong Huang, Meng Jiang. Boosting Factual Correctness of Abstractive Summarization with Knowledge Graph, In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL), 2021. [PDF]

Contact

  • Email: qzengnkcs [at] gmail [dot] com
  • Office: TBD
  • Location: TBD