2025

Natural Language Processing: Methods Based on Large Language Models (自然语言处理:基于大语言模型的方法)

Publishing House of Electronics Industry, ISBN: 9787121495984, 2025.

Che, Wanxiang and Guo, Jiang and Cui, Yiming

A survey of multilingual large language models

Patterns, 6, 2025.

Qin, Libo and Chen, Qiguang and Zhou, Yuhang and Chen, Zhi and Li, Yinghui and Liao, Lizi and Li, Min and Che, Wanxiang and Yu, Philip S

A survey of table reasoning with large language models

Frontiers of Computer Science, 199348, 2025.

Zhang, Xuanliang and Wang, Dingzirui and Dou, Longxu and Zhu, Qingfu and Che, Wanxiang

Abacus-SQL: A Text-to-SQL System Empowering Cross-Domain and Open-Domain Database Retrieval

Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), 118--128, 2025.

Xu, Keyan and Wang, Dingzirui and Zhang, Xuanliang and Zhu, Qingfu and Che, Wanxiang

Aware First, Think Less: Dynamic Boundary Self-Awareness Drives Extreme Reasoning Efficiency in Large Language Models

arXiv preprint arXiv:2508.11582, 2025.

Chen, Qiguang and Peng, Dengyun and Liu, Jinhao and Su, HuiKang and Guan, Jiannan and Qin, Libo and Che, Wanxiang

CAMERA: Multi-Matrix Joint Compression for MoE Models via Micro-Expert Redundancy Analysis

arXiv preprint arXiv:2508.02322, 2025.

Xu, Yuzhuang and Han, Xu and Zhang, Yuanchi and Wang, Yixuan and Liu, Yijun and Ji, Shiyu and Zhu, Qingfu and Che, Wanxiang

Can Large Language Models Understand You Better? An MBTI Personality Detection Dataset Aligned with Population Traits

Proceedings of the 31st International Conference on Computational Linguistics, 5071--5081, 2025.

Li, Bohan and Guan, Jiannan and Dou, Longxu and Feng, Yunlong and Wang, Dingzirui and Xu, Yang and Wang, Enbo and Chen, Qiguang and Wang, Bichen and Xu, Xiao and Zhang, Yimeng and Qin, Libo and Zhao, Yanyan and Zhu, Qingfu and Che, Wanxiang

Chart2Code53: A Large-Scale Diverse and Complex Dataset for Enhancing Chart-to-Code Generation

Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, 15839--15855, 2025.

Niu, Tianhao and Cui, Yiming and Wang, Baoxin and Xu, Xiao and Yao, Xin and Zhu, Qingfu and Wu, Dayong and Wang, Shijin and Che, Wanxiang

DAC: Decomposed Automation Correction for Text-to-SQL

Findings of the Association for Computational Linguistics: EMNLP 2025, 385--402, 2025.

Wang, Dingzirui and Dou, Longxu and Zhang, Xuanliang and Zhu, Qingfu and Che, Wanxiang

DLPO: Towards a Robust, Efficient, and Generalizable Prompt Optimization Framework from a Deep-Learning Perspective

Findings of the Association for Computational Linguistics: EMNLP 2025, 8311--8334, 2025.

Peng, Dengyun and Zhou, Yuhang and Chen, Qiguang and Liu, JinHao and Chen, Jingjing and Qin, Libo and Che, Wanxiang

Judge Q: Trainable Queries for Optimized Information Retention in KV Cache Eviction

arXiv preprint arXiv:2509.10798, 2025.

Liu, Yijun and Wang, Yixuan and Xu, Yuzhuang and Ji, Shiyu and Xu, Yang and Zhu, Qingfu and Che, Wanxiang

Lookahead Q-Cache: Achieving More Consistent KV Cache Eviction via Pseudo Query

Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, 34146--34162, 2025.

Wang, Yixuan and Ji, Shiyu and Liu, Yijun and Xu, Yuzhuang and Xu, Yang and Zhu, Qingfu and Che, Wanxiang

Manager: Aggregating Insights from Unimodal Experts in Two-Tower VLMs and MLLMs

IEEE Transactions on Circuits and Systems for Video Technology, 2025.

Xu, Xiao and Qin, Libo and Che, Wanxiang and Kan, Min-Yen

Mixpro: Simple yet effective data augmentation for prompt-based learning

International Journal of Machine Learning and Cybernetics, 1--20, 2025.

Li, Bohan and Dou, Longxu and Hou, Yutai and Feng, Yunlong and Mu, Honglin and Wang, Enbo and Zhu, Qingfu and Sun, Qinghua and Che, Wanxiang

MULTITAT: Benchmarking Multilingual Table-and-Text Question Answering

Findings of the Association for Computational Linguistics: EMNLP 2025, 626--647, 2025.

Zhang, Xuanliang and Wang, Dingzirui and Xu, Keyan and Zhu, Qingfu and Che, Wanxiang

MURRE: Multi-Hop Table Retrieval with Removal for Open-Domain Text-to-SQL

Proceedings of the 31st International Conference on Computational Linguistics, 5789--5806, 2025.

Zhang, Xuanliang and Wang, Dingzirui and Dou, Longxu and Zhu, Qingfu and Che, Wanxiang

RoT: Enhancing Table Reasoning with Iterative Row-Wise Traversals

Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, 559--579, 2025.

Zhang, Xuanliang and Wang, Dingzirui and Xu, Keyan and Zhu, Qingfu and Che, Wanxiang

SCITAT: A Question Answering Benchmark for Scientific Tables and Text Covering Diverse Reasoning Types

Findings of the Association for Computational Linguistics: ACL 2025, 3859--3881, 2025.

Zhang, Xuanliang and Wang, Dingzirui and Wang, Baoxin and Dou, Longxu and Lu, Xinyuan and Xu, Keyan and Wu, Dayong and Zhu, Qingfu

Tag-Evol: Achieving Efficient Instruction Evolving via Tag Injection

Findings of the Association for Computational Linguistics: ACL 2025, 7856--7869, 2025.

Wang, Yixuan and Zhou, Shiqi and Guo, Chuanzhe and Zhu, Qingfu

Towards reasoning era: A survey of long chain-of-thought for reasoning large language models

arXiv preprint arXiv:2503.09567, 2025.

Chen, Qiguang and Qin, Libo and Liu, Jinhao and Peng, Dengyun and Guan, Jiannan and Wang, Peng and Hu, Mengkang and Zhou, Yuhang and Gao, Te and Che, Wanxiang

Turning Trash into Treasure: Accelerating Inference of Large Language Models with Token Recycling

Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 6816--6831, 2025.

Luo, Xianzhen and Wang, Yixuan and Zhu, Qingfu and Zhang, Zhiming and Zhang, Xuanyu and Yang, Qing and Xu, Dongliang

Visual thoughts: A unified perspective of understanding multimodal chain-of-thought

arXiv preprint arXiv:2505.15510, 2025.

Cheng, Zihui and Chen, Qiguang and Xu, Xiao and Wang, Jiaqi and Wang, Weiyun and Fei, Hao and Wang, Yidong and Wang, Alex Jinpeng and Chen, Zhi and Che, Wanxiang and others

Principles and Applications of Large Models (大模型原理与应用)

Higher Education Press, ISBN: 9787040651317, 2025.

Liu, Cong and Zhang, Yanyong and Ding, Ning and Che, Wanxiang and Tao, Jianhua
