Emergent coordination in multi-agent language models

Update: 2025-10-19
Description

This paper introduces an **information-theoretic framework** designed to determine when multi-agent Large Language Model (LLM) systems transition from simple aggregates to integrated, synergistic collectives. The research utilizes a **group guessing game without direct communication** to experimentally test how different prompt designs—specifically, a control condition, assigning agent **personas**, and adding a **Theory of Mind (ToM)** instruction—influence emergent coordination. Findings suggest that while all conditions show signs of **dynamic emergence capacity**, combining personas with the ToM prompt significantly improves **goal-directed synergy** and performance by fostering both identity-linked differentiation and collective alignment, mirroring principles of **collective intelligence in human groups**. The study applies various statistical and **information decomposition** methods, including the practical criterion and emergence capacity, to rigorously quantify and localize this emergent behavior across different LLMs like GPT-4.1 and Llama-3.1-8B.
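The information-decomposition idea behind "synergy" can be illustrated with a toy calculation. The sketch below is not the paper's exact measure (the paper uses its own practical criterion and emergence-capacity quantities); it computes an interaction-information proxy on an XOR system, where neither input alone predicts the target but the pair does, the canonical example of purely synergistic information.

```python
from collections import Counter
from math import log2

def mi(pairs):
    """Mutual information I(A;B) in bits from a list of (a, b) samples."""
    n = len(pairs)
    pa = Counter(a for a, _ in pairs)
    pb = Counter(b for _, b in pairs)
    pab = Counter(pairs)
    return sum(c / n * log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in pab.items())

# XOR: each "agent" output alone carries zero information about the
# target, but jointly the two determine it completely -- pure synergy.
samples = [(x1, x2, x1 ^ x2) for x1 in (0, 1) for x2 in (0, 1)]

i1  = mi([(x1, t) for x1, _, t in samples])          # 0.0 bits
i2  = mi([(x2, t) for _, x2, t in samples])          # 0.0 bits
i12 = mi([((x1, x2), t) for x1, x2, t in samples])   # 1.0 bit

synergy = i12 - i1 - i2  # interaction-information proxy: 1.0 bit
```

A positive value indicates the joint state of the agents predicts the collective outcome better than the sum of their individual contributions, the signature of an integrated rather than merely aggregated system.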


Enoch H. Kang