Agentic Simulation for Exploring Emergent Behaviors
This project investigated how collective beliefs and coordination emerge in multi-agent LLM systems under uncertainty, using a text-based agentic simulation framework. Building on a Werewolf social-deduction baseline, we designed a personality-driven “Shared Fictions” environment in which agents propose, negotiate, and adopt norms while managing shared resources and disaster risk. Through qualitative transcript analysis, we observed consistent coordination behaviors alongside a strong bias toward utilitarian, instrumental reasoning, even when agents were assigned heterogeneous personalities. These results highlight both the promise of LLM agents as tools for studying emergent social dynamics and their limitations in forming non-rational, symbolic shared beliefs. Overall, the work demonstrates that environment design and prompting strongly shape emergent behavior in artificial societies.
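The propose/negotiate/adopt loop described above can be sketched in miniature. This is an illustrative assumption, not the project's actual implementation: the `Agent` class, the rule-based stand-in for the LLM, and the majority-vote adoption rule are all hypothetical, chosen only to show the shape of one negotiation round in a "Shared Fictions"-style environment.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Hypothetical agent with a personality string that a real system
    would inject into every LLM prompt."""
    name: str
    personality: str
    adopted_norms: list = field(default_factory=list)

    def propose(self, resources: int) -> str:
        # A real system would query an LLM here; this deterministic rule
        # stands in for the utilitarian bias observed in transcripts.
        if resources < 50:
            return "ration shared resources until the stockpile recovers"
        return "invest surplus in disaster preparedness"

    def vote(self, proposal: str) -> bool:
        # Stub acceptance rule: approve proposals that reference
        # collective-welfare terms.
        return any(w in proposal for w in ("shared", "disaster"))

def negotiation_round(agents: list, resources: int) -> list:
    """One round: each agent proposes a norm; norms approved by a
    strict majority are adopted by every agent."""
    adopted = []
    for proposer in agents:
        proposal = proposer.propose(resources)
        votes = sum(a.vote(proposal) for a in agents)
        if votes > len(agents) / 2:
            adopted.append(proposal)
            for a in agents:
                a.adopted_norms.append(proposal)
    return adopted

agents = [
    Agent("A", "cautious archivist"),
    Agent("B", "risk-taking explorer"),
    Agent("C", "communal mediator"),
]
norms = negotiation_round(agents, resources=30)
print(norms[0])
```

With scarce resources (30 < 50), every stub agent proposes the same rationing norm and the vote passes, which mirrors the homogeneous, instrumental convergence the transcripts showed; swapping the stub for real LLM calls with personality prompts is where heterogeneity would (or, per the findings, largely would not) appear.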
