Gu, S., Fang, C., Zhang, Q., Tian, F., Zhou, J., & Chen, Z. (2024). Improving LLM-based Unit Test Generation via Template-based Repair. In Proceedings of ACM Conference (Conference'17). ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn
This paper introduces TestART, a novel method addressing the limitations of existing LLM-based unit test generation techniques by integrating automated repair and iterative feedback mechanisms to enhance the correctness and coverage of generated test cases.
TestART employs a co-evolutionary approach combining automated generation and repair iterations. It leverages LLMs (specifically ChatGPT-3.5) for initial test case generation, followed by a template-based repair process to fix compilation errors, assertion failures, and runtime exceptions. Test coverage information is then fed back into the LLM through prompt injection, guiding the generation of improved test cases in subsequent iterations.
TestART effectively leverages the generative capabilities of LLMs while mitigating their limitations through automated repair and iterative feedback. This approach results in high-quality, human-readable unit test cases with improved correctness and coverage, surpassing the performance of existing state-of-the-art methods.
This research significantly contributes to the field of automated software testing by presenting a novel and effective approach for generating high-quality unit test cases using LLMs. TestART has the potential to reduce the time and effort required for software testing while improving software quality.
The current implementation of TestART focuses on Java code and utilizes a specific LLM (ChatGPT-3.5). Future research could explore the applicability of this approach to other programming languages and LLMs. Additionally, investigating the effectiveness of different repair template designs and feedback mechanisms could further enhance the performance of TestART.