Negative sampling is a crucial but often overlooked component of recommendation systems: selecting informative negatives can reveal users' genuine negative preferences and improve model performance.
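The summary leaves the sampling strategy unspecified; as context, here is a minimal sketch of the popularity-smoothed negative sampling commonly used as a baseline in implicit-feedback models. The function name, data layout, and the 0.75 exponent are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Minimal sketch of popularity-smoothed negative sampling for implicit
# feedback (an assumed baseline, not the paper's strategy). `interactions`
# maps each user to the set of item IDs they interacted with.
def sample_negatives(interactions, item_counts, num_neg=4, alpha=0.75, seed=0):
    rng = np.random.default_rng(seed)
    items = np.arange(len(item_counts))
    probs = item_counts ** alpha          # word2vec-style smoothing: harder
    probs /= probs.sum()                  # negatives than uniform sampling
    triples = []
    for user, pos_items in interactions.items():
        for pos in pos_items:
            negs = []
            while len(negs) < num_neg:
                cand = int(rng.choice(items, p=probs))
                if cand not in pos_items:  # reject observed positives
                    negs.append(cand)
            triples.append((user, pos, negs))
    return triples

# Toy usage: 2 users, 5 items with per-item interaction counts.
print(sample_negatives({0: {1, 2}, 1: {3}}, np.array([5.0, 3.0, 2.0, 4.0, 1.0]))[:2])
```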
Incorporating common-sense knowledge derived from large language models enhances knowledge-based recommendation systems, mitigating data sparsity and cold-start issues.
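To make the idea concrete, here is a hedged sketch of one way an LLM could supply common-sense edges for a sparse item knowledge graph, which is what gives cold-start items usable connections. The prompt, the `complete` callable, and the pipe-separated output format are all hypothetical, not the paper's pipeline.

```python
# Hedged sketch: augmenting a sparse item knowledge graph with common-sense
# triples elicited from an LLM. `complete` is a stand-in for any LLM client.
def common_sense_triples(item_title, complete):
    prompt = (
        f"List 3 common-sense facts about '{item_title}' as "
        "subject|relation|object triples, one per line."
    )
    triples = []
    for line in complete(prompt).strip().splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3:               # keep only well-formed triples
            triples.append(tuple(parts))
    return triples

def augment_kg(kg_triples, item_titles, complete):
    # Cold-start items gain KG edges even with no interaction history.
    for title in item_titles:
        kg_triples.extend(common_sense_triples(title, complete))
    return kg_triples

# Toy usage with a canned "LLM" response:
fake_llm = lambda _: "camping tent|used_for|outdoor sleeping\ncamping tent|made_of|nylon"
print(augment_kg([], ["camping tent"], fake_llm))
```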
Challenging the conventional two-backpropagation training strategy of recommendation models with a novel one-backpropagation approach that improves performance.
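The one-liner doesn't spell out the two strategies; one plausible reading, sketched below under that assumption, is that a standard two-tower loss backpropagates through both towers, while a one-backpropagation variant detaches one side so a single backward path carries the update.

```python
import torch

# Illustrative contrast (an assumed reading, not the paper's exact algorithm):
# "two backpropagations" lets gradients flow through both towers, while the
# one-backpropagation variant detaches the item side, treating it as a target.
torch.manual_seed(0)
user_tower = torch.nn.Linear(8, 4)
item_tower = torch.nn.Linear(8, 4)
u = user_tower(torch.randn(32, 8))
v = item_tower(torch.randn(32, 8))

two_bp_loss = -(u * v).sum(-1).mean()           # would update both towers
one_bp_loss = -(u * v.detach()).sum(-1).mean()  # single backward path

one_bp_loss.backward()
print(user_tower.weight.grad is not None)  # True: user tower received a gradient
print(item_tower.weight.grad is None)      # True: item tower did not
```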
Incorporating ID embeddings into LLMs improves recommendation system performance.
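As an illustration of the general recipe (an assumption, not this paper's exact architecture), collaborative ID embeddings can be projected into the LLM's token-embedding space and spliced in as soft tokens:

```python
import torch

# Sketch: map item-ID vectors into the LLM embedding space and prepend them
# to the token sequence. Dimensions and module names are illustrative.
class IDAdapter(torch.nn.Module):
    def __init__(self, num_items, id_dim=64, llm_dim=768):
        super().__init__()
        self.id_emb = torch.nn.Embedding(num_items, id_dim)  # collaborative signal
        self.proj = torch.nn.Linear(id_dim, llm_dim)         # into LLM space

    def forward(self, item_ids):
        return self.proj(self.id_emb(item_ids))              # (batch, n, llm_dim)

adapter = IDAdapter(num_items=1000)
token_embs = torch.randn(2, 10, 768)                 # stand-in for token embeddings
id_tokens = adapter(torch.tensor([[3, 7], [42, 9]]))
inputs = torch.cat([id_tokens, token_embs], dim=1)   # prepend soft ID tokens
print(inputs.shape)                                  # torch.Size([2, 12, 768])
```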
Introducing InBox, a novel embedding-based model that utilizes knowledge graph entities and relations to improve recommendation systems through box-based interest and concept modeling.
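For intuition, here is a toy version of box-based scoring in the spirit of box-embedding models such as Query2Box: a concept or interest is an axis-aligned box, an item is a point, and the score reflects how far the point falls outside the box. The exact scoring function is an assumption, not InBox's.

```python
import torch

# Toy box scoring: box = (center, offset); a point inside the box scores
# highest, with a mild pull toward the center (an assumed formulation).
def box_score(point, center, offset):
    lower, upper = center - offset.abs(), center + offset.abs()
    dist_out = (torch.relu(lower - point) + torch.relu(point - upper)).sum(-1)
    dist_in = (center - torch.clamp(point, lower, upper)).abs().sum(-1)
    return -(dist_out + 0.02 * dist_in)   # higher = better match

item = torch.randn(4)                      # item embedding (a point)
concept_center, concept_offset = torch.zeros(4), torch.ones(4)
print(box_score(item, concept_center, concept_offset))
```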
State space models offer an efficient solution for sequential recommendation, addressing the effectiveness-efficiency dilemma.
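The efficiency argument is easiest to see in the recurrence itself. Below is a toy linear state-space layer, h_t = A h_{t-1} + B x_t, y_t = C h_t, written as an explicit loop to show the constant per-step cost at inference, in contrast to attention's growing key-value cache; production SSMs use structured or selective parameterizations rather than these dense matrices.

```python
import torch

# Toy linear state-space layer over an embedded interaction sequence.
class TinySSM(torch.nn.Module):
    def __init__(self, d_in=32, d_state=16):
        super().__init__()
        self.A = torch.nn.Parameter(torch.eye(d_state) * 0.9)  # state decay
        self.B = torch.nn.Parameter(torch.randn(d_state, d_in) * 0.1)
        self.C = torch.nn.Parameter(torch.randn(d_in, d_state) * 0.1)

    def forward(self, x):                  # x: (seq_len, d_in)
        h = torch.zeros(self.A.shape[0])   # fixed-size state, O(1) memory
        ys = []
        for x_t in x:                      # constant work per step
            h = self.A @ h + self.B @ x_t
            ys.append(self.C @ h)
        return torch.stack(ys)             # (seq_len, d_in)

seq = torch.randn(20, 32)                  # embedded interaction history
print(TinySSM()(seq).shape)                # torch.Size([20, 32])
```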
The author introduces a novel multi-tower multi-interest framework that addresses key challenges in multi-interest learning, improving matching performance and easing industrial adoption.
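A sketch of the multi-interest matching idea, with the multi-tower wiring guessed for illustration rather than taken from the paper: the user gets K interest vectors, and an item is routed to and scored against its best-matching interest.

```python
import torch

# Assumed wiring: K small "towers" over the same history encoding produce
# K interest vectors; items score against the closest interest.
class MultiInterestUser(torch.nn.Module):
    def __init__(self, d=64, num_interests=4):
        super().__init__()
        self.towers = torch.nn.ModuleList(
            torch.nn.Linear(d, d) for _ in range(num_interests)
        )

    def forward(self, history_repr):                 # (batch, d)
        return torch.stack([t(history_repr) for t in self.towers], dim=1)

user = MultiInterestUser()
interests = user(torch.randn(8, 64))                 # (8, 4, 64)
item = torch.randn(8, 64)
scores = (interests * item.unsqueeze(1)).sum(-1)     # (8, 4): score per interest
best = scores.max(dim=1).values                      # route to closest interest
print(best.shape)                                    # torch.Size([8])
```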
The author proposes GPT-FedRec, a novel federated recommendation (FR) framework that leverages ChatGPT and a hybrid Retrieval-Augmented Generation (RAG) mechanism to address data sparsity and heterogeneity in FR, achieving superior performance over baseline methods.
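A minimal sketch of the hybrid-retrieval half of such a pipeline, with made-up scoring weights and prompt text, and with the federated/privacy machinery omitted: fuse a sparse keyword-overlap score with a dense cosine score, then build an LLM prompt from the top hits.

```python
import numpy as np

# Hybrid retrieval sketch (illustrative, not GPT-FedRec's implementation):
# blend sparse keyword overlap with dense embedding similarity.
def hybrid_retrieve(query, query_vec, docs, doc_vecs, k=2, w=0.5):
    q_terms = set(query.lower().split())
    sparse = np.array([len(q_terms & set(d.lower().split())) for d in docs], float)
    sparse /= sparse.max() or 1.0          # normalize; guard the all-zero case
    dense = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9
    )
    top = np.argsort(-(w * sparse + (1 - w) * dense))[:k]
    return [docs[i] for i in top]

docs = ["wireless noise cancelling headphones", "trail running shoes", "usb-c hub"]
doc_vecs = np.random.default_rng(0).normal(size=(3, 8))
# Pretend doc_vecs[0] is the dense query embedding for this toy run.
ctx = hybrid_retrieve("noise cancelling headset", doc_vecs[0], docs, doc_vecs)
prompt = "User history:\n" + "\n".join(ctx) + "\nRecommend the next item."
print(prompt)
```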
The author introduces the NoteLLM framework, which leverages Large Language Models (LLMs) to enhance item-to-item (I2I) note recommendation by compressing each note into a single embedding while generating its hashtags/categories in the same pass.
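The I2I side of such a setup reduces to nearest-neighbor search over compressed note embeddings. In the sketch below, a toy hashed bag-of-words encoder stands in for the LLM's compression step, and the hashtag-generation pass is omitted.

```python
import torch

# Toy stand-in for note compression: in the real setup the embedding would
# come from the LLM, not from this hashed bag-of-words encoder.
def embed_note(text, dim=64):
    vec = torch.zeros(dim)
    for tok in text.lower().split():
        vec[hash(tok) % dim] += 1.0
    return vec / (vec.norm() + 1e-9)       # unit-normalize for cosine scoring

notes = [
    "5 cozy cafes to work from in Kyoto #cafe #kyoto",
    "best matcha spots in Kyoto #matcha #kyoto",
    "home gym setup on a budget #fitness",
]
embs = torch.stack([embed_note(n) for n in notes])
sims = embs @ embs[0]                       # relatedness to the first note
print(sims.argsort(descending=True))        # note indices, most related first
```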
Our MENTOR method addresses label sparsity and modality alignment issues in multimodal recommendation by using self-supervised learning and enhancing the specific features of each modality. The approach applies multilevel tasks that align modalities effectively while preserving historical interaction information.
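One standard ingredient of such self-supervised alignment, shown here as a hedged sketch rather than MENTOR's full multilevel task set, is an InfoNCE-style contrastive loss that pulls the visual and textual views of the same item together while pushing different items apart.

```python
import torch
import torch.nn.functional as F

# Symmetric InfoNCE over item modality features: matching (visual, textual)
# pairs sit on the diagonal of the similarity matrix.
def modality_align_loss(vis, txt, tau=0.2):
    vis, txt = F.normalize(vis, dim=-1), F.normalize(txt, dim=-1)
    logits = vis @ txt.t() / tau               # (n_items, n_items)
    targets = torch.arange(vis.size(0))        # positives on the diagonal
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

vis_feats = torch.randn(16, 64)                # image-derived item features
txt_feats = torch.randn(16, 64)                # text-derived item features
print(modality_align_loss(vis_feats, txt_feats))
```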