Core Concepts
DART-LLM is a system that uses large language models (LLMs) for dependency-aware multi-robot task decomposition and execution; by making the dependencies between subtasks explicit, it enables efficient parallel execution and collaboration in complex scenarios.
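A minimal sketch of the dependency-aware idea, assuming a hypothetical DAG of construction-style subtasks (the task names, the dictionary structure, and the wave-grouping via Kahn's algorithm are illustrative assumptions, not DART-LLM's actual output schema): subtasks whose dependencies are all satisfied form a "wave" that is safe to run in parallel.

```python
from collections import defaultdict, deque

# Hypothetical decomposition of an instruction into subtasks with
# dependencies (assumed structure, not DART-LLM's real output format).
subtasks = {
    "clear_debris":  [],                              # no prerequisites
    "survey_area":   [],                              # no prerequisites
    "dig_trench":    ["clear_debris"],                # needs cleared area
    "haul_material": ["clear_debris", "survey_area"],
    "grade_surface": ["dig_trench", "haul_material"],
}

def parallel_waves(tasks):
    """Group subtasks into waves with Kahn's algorithm: every task in
    a wave has all its dependencies met, so the wave can run in parallel."""
    indegree = {t: len(deps) for t, deps in tasks.items()}
    dependents = defaultdict(list)
    for t, deps in tasks.items():
        for d in deps:
            dependents[d].append(t)
    ready = deque(t for t, n in indegree.items() if n == 0)
    waves = []
    while ready:
        wave = list(ready)
        ready.clear()
        waves.append(wave)
        for t in wave:
            for nxt in dependents[t]:
                indegree[nxt] -= 1
                if indegree[nxt] == 0:
                    ready.append(nxt)
    return waves

print(parallel_waves(subtasks))
# [['clear_debris', 'survey_area'], ['dig_trench', 'haul_material'], ['grade_surface']]
```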
Stats
For task level L1, all tested LLMs achieved perfect scores across all metrics (SR=1.00, IPA=1.00, DSR=1.00, SGSR=1.00).
At task level L2, Llama 3.1's SR was 0.87, higher than GPT-3.5-turbo's 0.83.
At task level L3, the GPT-4o model maintained high performance with an SR of 0.97, IPA of 1.00, and both DSR and SGSR scores of 0.97.
At task level L3, GPT-3.5-turbo's SR dropped to 0.75.
Despite having only 8B parameters, Llama 3.1 outperformed GPT-3.5-turbo in some instances.
Quotes
"The key advantage of multi-robot systems lies in their ability to collaboratively solve problems that are difficult for a single robot to handle independently."
"By decomposing tasks into multiple subtasks with dependencies, DART-LLM effectively manages complex task sequences, facilitating parallel execution and collaborative cooperation in multi-robot systems."