Fault localization is a critical step in software debugging: identifying the specific program elements responsible for a failure. Various tools automate this process, but merely ranking program elements by suspiciousness is not enough; explaining why flagged code elements are suspicious is also crucial. FuseFL, an approach built on Large Language Models (LLMs), combines information such as spectrum-based fault localization (SBFL) results, test case outcomes, and code descriptions to improve fault localization. In a study using faulty code from the Refactory dataset, FuseFL outperformed other SBFL techniques and XAI4FL in localizing faults at the Top-1 position by significant margins. Human evaluation showed that FuseFL generated correct explanations in 22 out of 30 cases and achieved informativeness and clarity scores comparable to human-written explanations.
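To make the SBFL input concrete, here is a minimal sketch of spectrum-based suspiciousness scoring using the standard Ochiai formula, one of the classic SBFL metrics the paper compares against. This is an illustrative implementation, not FuseFL's actual code; the data structures (a per-line coverage map and a pass/fail map) are assumptions for the example.

```python
import math

def ochiai_suspiciousness(coverage, outcomes):
    """Rank program lines by Ochiai suspiciousness.

    coverage: dict mapping line number -> set of test ids that execute it
    outcomes: dict mapping test id -> True if the test passed
    """
    failed = {t for t, passed in outcomes.items() if not passed}
    total_failed = len(failed)
    scores = {}
    for line, tests in coverage.items():
        ef = len(tests & failed)   # failing tests that cover this line
        ep = len(tests) - ef       # passing tests that cover this line
        denom = math.sqrt(total_failed * (ef + ep))
        scores[line] = ef / denom if denom else 0.0
    # Most suspicious lines first
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy program spectrum: line 3 is covered only by the failing test t3
cov = {1: {"t1", "t2"}, 2: {"t1", "t2", "t3"}, 3: {"t3"}}
out = {"t1": True, "t2": True, "t3": False}
ranked = ochiai_suspiciousness(cov, out)  # line 3 ranks first
```

An LLM-based approach in the spirit of FuseFL would then feed such a ranking, together with the failing test's expected/actual outputs and a description of the code, into a prompt asking the model to localize and explain the fault.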
by Ratnadira Wi... at arxiv.org 03-18-2024
https://arxiv.org/pdf/2403.10507.pdf