
Multi-Agent LLM Opinion Dynamics Simulator

AI + Research
Python · LLMs · Multi-Agent Systems · Simulation · NLP

A simulation system where multiple LLM-based agents with different political leanings and personalities discuss topics and update their opinions over time, revealing limitations in how current LLMs behave as social agents.

Problem

Understanding how opinions spread and evolve in groups is important for studying politics, online conversations, and polarization. But running real-world experiments is expensive and complicated. Large language models (LLMs) can now act as "agents" with different personalities, but most existing work only uses them for simple factual topics. Simulating realistic opinion change on open-ended or political issues is still very challenging because LLM agents often agree too easily or don't stay true to their assigned personas.

Overview

This project builds a simulation where multiple LLM-based agents—with different political leanings and personalities—discuss topics and update their opinions over time. Each agent starts with an initial belief rated on a five-point scale from strongly negative to strongly positive. The system then tracks how these beliefs shift as agents debate topics ranging from politics (e.g., gun control, welfare) to lighthearted issues (e.g., iPhone vs. Android, pineapple on pizza).
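The sketch below illustrates one way the agent state described above could be represented: a persona string, a belief on the five-point scale, and a memory of past messages. The field names and the `recall` helper are assumptions for illustration, not the project's actual schema.

```python
from dataclasses import dataclass, field

# Illustrative mapping of the five-point scale described above.
LIKERT = {-2: "strongly negative", -1: "negative", 0: "neutral",
          1: "positive", 2: "strongly positive"}

@dataclass
class Agent:
    name: str
    persona: str            # e.g. "libertarian, blunt, skeptical of regulation"
    belief: int = 0         # five-point scale: -2 .. +2
    memory: list[str] = field(default_factory=list)  # messages the agent has seen

    def recall(self, last_n: int = 5) -> str:
        """Return the most recent messages as context for the next prompt."""
        return "\n".join(self.memory[-last_n:])
```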

How It Works (Approach)

Each agent has a persona and a memory of past messages. At every step of the simulation, one agent writes a message, and another agent reviews it and updates their belief. This continues for 100 steps with random interactions. The system records how far opinions shift, how similar or different the agents become over time, and whether agents behave consistently with their assigned personas. Both positive and negative framings of each topic are tested to see how wording affects outcomes.
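A minimal sketch of that interaction loop follows, assuming the `Agent` structure above. `call_llm` stands in for whatever chat-completion API the project uses, and the prompt wording, belief parsing, and pairing scheme are assumptions rather than the actual implementation.

```python
import random

def call_llm(prompt: str) -> str:
    # Placeholder: plug in the LLM client of your choice here.
    raise NotImplementedError

def parse_belief(reply: str, fallback: int) -> int:
    """Extract a -2..+2 rating from the reviewer's reply; keep the old belief on failure."""
    for token in reply.split():
        try:
            value = int(token)
        except ValueError:
            continue
        if -2 <= value <= 2:
            return value
    return fallback

def run_simulation(agents, topic, steps=100, seed=0):
    rng = random.Random(seed)
    history = []
    for _ in range(steps):
        # Random pairwise interaction: one speaker, one listener.
        speaker, listener = rng.sample(agents, 2)
        message = call_llm(
            f"You are {speaker.persona}. Your stance on '{topic}' is "
            f"{speaker.belief} on a -2..+2 scale. Write a short argument."
        )
        reply = call_llm(
            f"You are {listener.persona}. You just read: '{message}'. "
            f"Your current stance on '{topic}' is {listener.belief}. "
            "Reply with your updated stance as a single integer from -2 to 2."
        )
        listener.belief = parse_belief(reply, fallback=listener.belief)
        listener.memory.append(message)
        history.append([a.belief for a in agents])  # snapshot for later metrics
    return history
```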

Impact / Value

The simulation reveals surprising limitations in how current LLMs behave as social agents. Across most scenarios, agents tended to move toward the positive side of the topic—even when their personas should disagree. They were overly agreeable, changed opinions too quickly, and often ignored their assigned political leanings. These findings help researchers identify where multi-agent LLM systems fall short and how future models or architectures might better represent real social dynamics.

Key Features

  • Multi-agent system with persona-conditioned LLM agents
  • Support for both political and neutral topics with positive/negative wording
  • Simulation of echo chambers, town halls, and minority-opinion scenarios
  • Automatic belief updating using a five-point Likert scale
  • Metrics for measuring opinion drift, group consensus, and belief diversity (see the sketch after this list)
  • Qualitative analysis of agent conversation patterns (e.g., over-agreement, inconsistency)
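For the metrics listed above, the following is a rough sketch of how opinion drift, consensus, and belief diversity could be computed from the belief snapshots recorded during a run; the exact definitions used in the project may differ.

```python
import statistics

def opinion_drift(history):
    """Mean absolute shift of each agent's belief from the first step to the last."""
    first, last = history[0], history[-1]
    return sum(abs(b1 - b0) for b0, b1 in zip(first, last)) / len(first)

def belief_diversity(history):
    """Standard deviation of beliefs at the final step (0 = identical beliefs)."""
    return statistics.pstdev(history[-1])

def consensus_reached(history, tolerance=0):
    """True if all agents end within `tolerance` points of one another."""
    final = history[-1]
    return max(final) - min(final) <= tolerance
```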