Core Concepts
Enhancing multi-turn intent classification with Linguistic-Adaptive Retrieval-Augmentation (LARA) for large language models (LLMs).
Summary
The paper introduces LARA, a framework designed to improve accuracy in multi-turn intent classification tasks across six languages. By combining a fine-tuned smaller model with a retrieval-augmented mechanism integrated within the architecture of LLMs, LARA dynamically utilizes past dialogues and relevant intents to enhance context understanding. The adaptive retrieval techniques bolster cross-lingual capabilities without extensive retraining, achieving state-of-the-art performance.
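To make the described mechanism concrete, the sketch below shows one plausible way such a pipeline could be wired together: a fine-tuned single-turn model supplies candidate intents, similar labelled utterances are retrieved as in-context demonstrations, and an LLM makes the final multi-turn decision. All identifiers (`IntentExample`, `retrieve_demonstrations`, `build_prompt`, `classify`) and the simple bag-of-words retriever are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a retrieval-augmented multi-turn intent classifier,
# assuming a fine-tuned single-turn model has already proposed candidate
# intents. Names and the retriever are illustrative, not LARA's API.
from collections import Counter
from dataclasses import dataclass
from math import sqrt
from typing import Callable, List


@dataclass
class IntentExample:
    utterance: str   # a labelled single-turn utterance in the target language
    intent: str      # its gold intent label


def _bow(text: str) -> Counter:
    return Counter(text.lower().split())


def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve_demonstrations(query: str, pool: List[IntentExample],
                            candidate_intents: List[str], k: int = 4) -> List[IntentExample]:
    """Keep only examples whose intent was proposed by the single-turn model,
    then rank them by similarity to the current user turn."""
    filtered = [ex for ex in pool if ex.intent in candidate_intents]
    q = _bow(query)
    return sorted(filtered, key=lambda ex: _cosine(q, _bow(ex.utterance)), reverse=True)[:k]


def build_prompt(history: List[str], query: str, demos: List[IntentExample]) -> str:
    """Assemble an in-context prompt: retrieved demonstrations, then the dialogue."""
    lines = ["Classify the intent of the last user turn.", ""]
    for ex in demos:
        lines.append(f"Utterance: {ex.utterance}\nIntent: {ex.intent}\n")
    lines.append("Dialogue:")
    lines.extend(f"- {turn}" for turn in history)
    lines.append(f"- {query}")
    lines.append("Intent:")
    return "\n".join(lines)


def classify(history: List[str], query: str, pool: List[IntentExample],
             candidate_intents: List[str], llm: Callable[[str], str]) -> str:
    demos = retrieve_demonstrations(query, pool, candidate_intents)
    return llm(build_prompt(history, query, demos)).strip()
```

In practice the bag-of-words retriever would be replaced by a multilingual sentence encoder and `llm` by a chat-completion call; the key point is that the demonstration pool and candidate intents are selected per language and per dialogue, so no retraining of the LLM is required.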
Structure:
- Introduction to Chatbots and Intent Classification Challenges
- Proposed Solution: LARA Framework Overview
- Data Collection Challenges in Multi-Turn Dialogues
- Methodology: Linguistic-Adaptive Retrieval-Augmentation Process
- Experiments Conducted on E-commerce Multi-Turn Dataset Across Six Languages
- Comparison with Baselines and Performance Metrics Analysis
Stats
Comprehensive experiments demonstrate that LARA achieves state-of-the-art performance on multi-turn intent classification tasks, improving average accuracy by 3.67% over existing methods.