Luo Xianzhen

How Many Code and Test Cases Are Enough? Evaluating Test Cases Generation from a Binary-Matrix Perspective

ICLR 2026

Luo, Xianzhen and Huang, Jinyang and Zheng, Wenzhen and Zhu, Qingfu and Xu, Mingzheng and Xu, Yiheng and Fan, Yuantao and Qin, Libo and Che, Wanxiang

Turning Trash into Treasure: Accelerating Inference of Large Language Models with Token Recycling

Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 6816--6831, 2025.

Luo, Xianzhen and Wang, Yixuan and Zhu, Qingfu and Zhang, Zhiming and Zhang, Xuanyu and Yang, Qing and Xu, Dongliang

A Survey on Natural Language Processing for Programming

Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), 1690--1704, 2024.

Zhu, Qingfu and Luo, Xianzhen and Liu, Fang and Gao, Cuiyun and Che, Wanxiang

Make Some Noise: Unlocking Language Model Parallel Inference Capability through Noisy Training

Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 12914--12926, 2024.

Wang, Yixuan and Luo, Xianzhen and Wei, Fuxuan and Liu, Yijun and Zhu, Qingfu and Zhang, Xuanyu and Yang, Qing and Xu, Dongliang and Che, Wanxiang

Inverse is Better! Fast and Accurate Prompt for Few-shot Slot Tagging

Findings of the Association for Computational Linguistics: ACL 2022, 637--647, 2022.

Hou, Yutai and Chen, Cheng and Luo, Xianzhen and Li, Bohan and Che, Wanxiang
