Yuqun Zhang

Talk: When LLMs Meet Automated Program Repair and Decompilation
- Date and Time: April 1, 2025 at 12:30pm
- Location: PLSE Lab (CSE2 253)
Abstract
Large language models (LLMs) have been widely adopted for software engineering tasks. In this talk, I will first show that, for LLM-based automated program repair, the few-shot learning mechanism applied in many existing techniques leads to disparate repair effectiveness, while directly supplying auxiliary repair-relevant information to LLMs significantly improves function-level repair performance. I will then introduce SRepair, which adopts a dual-LLM framework to leverage the power of auxiliary repair-relevant information for advancing repair effectiveness. Finally, I will introduce LLM4Decompile, the first and largest open-source LLM series (1.3B to 33B parameters) trained to decompile binary code. LLM4Decompile outperforms GPT-4o and Ghidra in re-executability rate and can effectively refine the decompiled code produced by Ghidra.
Bio
Yuqun Zhang is an Assistant Professor in the Department of Computer Science and Engineering at Southern University of Science and Technology, Shenzhen, China. His research focuses on software testing and analysis, and AI-oriented software engineering. He received his PhD from UT Austin. He has been awarded two ACM SIGSOFT Distinguished Paper Awards (ISSTA 2019 and ICSE 2025).