This research explores the potential of large language models (LLMs), specifically GPT-3.5 Turbo, to solve the Travelling Salesman Problem (TSP). The authors conducted experiments with several approaches, including zero-shot in-context learning, few-shot in-context learning, and chain-of-thought (CoT) prompting.
The key highlights and insights from the study are:
The authors created a dataset of simulated journeys, using both TSPLIB instances and randomly generated points, to train and test the LLM.
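A minimal sketch of how the randomly generated part of such a dataset could be produced; the function name, coordinate range, and seeding scheme here are illustrative assumptions, not the paper's exact procedure (which also draws on TSPLIB instances):

```python
import random

def generate_tsp_instance(n_cities, seed=None, coord_range=100):
    """Generate a random TSP instance as a list of (x, y) city coordinates.

    Illustrative sketch only: the paper's exact generation procedure
    (and its handling of TSPLIB instances) may differ.
    """
    rng = random.Random(seed)  # local RNG so instances are reproducible
    return [(rng.uniform(0, coord_range), rng.uniform(0, coord_range))
            for _ in range(n_cities)]

# Example: a reproducible 10-city instance
cities = generate_tsp_instance(10, seed=42)
```

Seeding each instance makes the dataset reproducible, which matters when the same instances are reused for both prompting and fine-tuning experiments.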
They engineered in-context learning prompts using zero-shot, few-shot, and CoT techniques to assess the LLM's ability to solve the TSP without any task-specific training.
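The zero-shot/few-shot distinction can be sketched as a single prompt builder: with no examples the model sees only the target instance (zero-shot), while passing solved (cities, tour) pairs prepends demonstrations (few-shot). The wording and formatting below are assumptions for illustration, not the paper's actual prompts:

```python
def build_prompt(cities, examples=None):
    """Assemble a zero-shot or few-shot TSP prompt.

    `examples` is an optional list of (cities, tour) pairs; when given,
    they are prepended as solved demonstrations (few-shot prompting).
    The exact template used in the paper is not reproduced here.
    """
    lines = []
    for ex_cities, ex_tour in (examples or []):
        lines.append("Cities: " + "; ".join(f"({x:.1f}, {y:.1f})"
                                            for x, y in ex_cities))
        lines.append("Tour: " + " -> ".join(map(str, ex_tour)))
    # The target instance always comes last, followed by the task instruction.
    lines.append("Cities: " + "; ".join(f"({x:.1f}, {y:.1f})"
                                        for x, y in cities))
    lines.append("Find the shortest tour visiting every city exactly once. Tour:")
    return "\n".join(lines)
```

A CoT variant would additionally instruct the model to reason step by step before emitting the tour.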
The researchers fine-tuned GPT-3.5 Turbo on the TSP instances and evaluated its performance both on instances of the same size as the training data and on larger instances.
To improve the fine-tuned model's performance without additional training, the authors adopted a self-ensemble approach, which enhanced the quality of the solutions.
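One minimal reading of the self-ensemble idea is to sample several tours from the model for the same instance and keep the best one by tour length; the helper names below are illustrative, and the paper's exact selection rule may differ:

```python
import math

def tour_length(cities, tour):
    """Total Euclidean length of a closed tour over the given cities."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def self_ensemble(cities, candidate_tours):
    """Keep the shortest valid tour among several sampled model outputs.

    Sketch of a self-ensemble step: discard candidates that are not
    permutations of all cities, then pick the minimum-length survivor.
    """
    valid = [t for t in candidate_tours
             if sorted(t) == list(range(len(cities)))]
    return min(valid, key=lambda t: tour_length(cities, t))
```

Because sampling more candidates only requires extra inference calls, this improves solution quality without any further training, which matches the motivation stated above.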
The study evaluated the LLM's solutions using two metrics: the randomness score, which tests whether a solution could have been generated at random, and the gap, which measures how far the model's tour length is from the optimal tour length.
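The gap metric is commonly defined as the relative excess of the model's tour length over the optimum; assuming that standard definition (the paper may report it as a percentage), it is a one-liner:

```python
def optimality_gap(model_length, optimal_length):
    """Relative gap between the model's tour length and the optimum:
    gap = (model_length - optimal_length) / optimal_length.
    A gap of 0.0 means the model found an optimal tour.
    """
    return (model_length - optimal_length) / optimal_length
```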
The fine-tuned models demonstrated promising performance on problems identical in size to the training instances and generalized well to larger problems.
The research highlights the potential of using LLMs to solve combinatorial optimization problems, such as the Travelling Salesman Problem, and provides insights into effective prompting and fine-tuning techniques.
Key ideas extracted from the source content at arxiv.org
by Mahmoud Maso... at arxiv.org, 05-06-2024
https://arxiv.org/pdf/2405.01997.pdf