PipeRAG improves generation efficiency through pipeline parallelism, flexible retrieval intervals, and performance modeling.
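As a rough illustration of the pipelining idea only (not PipeRAG's actual implementation), the sketch below overlaps retrieval for the next generation interval with decoding of the current one, using a slightly stale prefix as the query; `retrieve` and `generate_chunk` are hypothetical placeholders, and the interval sizes stand in for flexible retrieval intervals.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical placeholders, not PipeRAG's API: a retriever keyed on the
# current prefix and a generator that emits a fixed-size chunk of tokens.
def retrieve(prefix: str) -> str:
    return f"<docs for '{prefix[-20:]}'>"

def generate_chunk(prefix: str, docs: str, chunk_size: int = 16) -> str:
    return " tok" * chunk_size

def pipelined_generate(prompt: str, intervals: list[int]) -> str:
    """Overlap retrieval for the next interval with generation of the
    current one; interval lengths may vary (flexible retrieval intervals)."""
    text = prompt
    docs = retrieve(text)  # initial retrieval before the pipeline starts
    with ThreadPoolExecutor(max_workers=1) as pool:
        for size in intervals:
            # Launch retrieval for the *next* interval on a slightly stale
            # prefix while the current interval is still being generated.
            future_docs = pool.submit(retrieve, text)
            text += generate_chunk(text, docs, chunk_size=size)
            docs = future_docs.result()  # ready by the time we need it
    return text

print(pipelined_generate("Question: ...", intervals=[8, 16, 32]))
```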
Transformer models can be optimized for efficient long-text classification on limited GPU resources.
Current LLMs struggle to effectively follow expert-written instructions for revising long-form answers in the scientific domain.
Understanding metaphors in natural language is essential for large language models, and the Metaphor Understanding Challenge Dataset (MUNCH) provides a challenging benchmark for evaluating how accurately LLMs interpret them.
English-centric Large Language Models demonstrate multilingual capabilities through decomposed prompting, surpassing iterative methods in efficacy and efficiency.
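As a generic illustration of decomposed prompting for multilingual inputs (the exact decomposition used in the summarized work may differ), the sketch below splits the task into translate, solve-in-English, and translate-back steps executed once each, rather than iterating; `call_llm` is a hypothetical stand-in for any completion API.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion call."""
    return "<model output>"

def decomposed_answer(question: str, source_lang: str) -> str:
    # Step 1: translate the input into English, the model's dominant language.
    translation = call_llm(
        f"Translate the following {source_lang} question into English:\n{question}"
    )
    # Step 2: solve the task in English on the translated input.
    answer = call_llm(f"Answer the question concisely:\n{translation}")
    # Step 3 (optional): translate the answer back into the source language.
    return call_llm(f"Translate into {source_lang}:\n{answer}")

print(decomposed_answer("¿Cuál es la capital de Francia?", "Spanish"))
```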
This work introduces a novel evaluation framework for Large Language Models (LLMs) that adapts Precision and Recall metrics from image generation to text generation, providing insights into the quality and diversity of generated text without requiring aligned corpora.
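A common way to adapt image-generation Precision and Recall to text is to embed real and generated samples and test mutual coverage of their supports with k-nearest-neighbour balls; the sketch below follows that recipe as an assumption (the summarized paper's exact formulation may differ), with random vectors standing in for sentence embeddings.

```python
import numpy as np

def knn_radii(X: np.ndarray, k: int) -> np.ndarray:
    """Distance from each point in X to its k-th nearest neighbour in X."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, k]  # column 0 is the point itself

def coverage(A: np.ndarray, B: np.ndarray, radii_B: np.ndarray) -> float:
    """Fraction of points in A that fall inside some k-NN ball around B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return float(np.mean((d <= radii_B[None, :]).any(axis=1)))

def precision_recall(real_emb: np.ndarray, gen_emb: np.ndarray, k: int = 3):
    # Precision: generated samples lying on the real-data manifold (quality).
    precision = coverage(gen_emb, real_emb, knn_radii(real_emb, k))
    # Recall: real samples covered by the generated manifold (diversity).
    recall = coverage(real_emb, gen_emb, knn_radii(gen_emb, k))
    return precision, recall

# Toy usage: random vectors stand in for encoder outputs of real/generated text.
rng = np.random.default_rng(0)
real = rng.normal(size=(200, 32))
gen = rng.normal(loc=0.2, size=(200, 32))
print(precision_recall(real, gen, k=3))
```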