MAPS: Rethinking Dialogue Through Subjective Perspectives
- Molood Arman
- Jun 2
What does it mean for a dialogue system to understand? Not just to respond fluently—but to hold a perspective, to negotiate meaning, and to adapt through emotional and cognitive nuance?
In a world of increasingly powerful language models, this question feels urgent. Most current systems optimize for coherence, but flatten individuality. They respond, but do not remember—and certainly do not reason as distinct entities in shared space. That’s where MAPS comes in.
Introducing MAPS
MAPS (Multi-Agent Perspective Spaces) is a novel dialogue architecture that models subjective reasoning within multi-agent systems. Developed by Clément Bonnafous and me, it’s designed to let agents maintain cognitive individuality while progressively aligning on shared meaning.
In MAPS, agents don’t just exchange words—they think with biases, remember differently, and interpret the same utterance through different lenses.
Why It Matters
Traditional neural dialogue systems fall into two camps:
- Fluent but faceless: large language models generate smooth responses but suppress subjectivity.
- Interpretable but rigid: symbolic systems offer control but collapse in open-ended domains.
MAPS breaks this binary. It bridges fluency and interpretability by embedding cognitive profiles into each agent. Dialogue becomes not just a function of context, but a negotiation between minds.

The Core Components
MAPS achieves this through three intertwined mechanisms:
- Domain-Weighted Perspective Adapter: each agent is conditioned by a vector encoding its cognitive-emotional priorities (e.g., logical vs. empathetic). This shapes how it interprets the shared dialogue space.
- GRU-Based Dynamic Memory: agents evolve over time, accumulating context in recurrent states that reflect their subjective interpretation.
- Token-Level Attention: we visualize how each agent weighs different words, providing transparent access to its internal reasoning.
Together, these modules create what we call cognitive interpretability. A minimal sketch of how these pieces might fit together follows below.
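To make this concrete, here is a minimal PyTorch sketch of one way the three mechanisms could interact. The module names, dimensions, and the additive profile-conditioning scheme are illustrative assumptions, not the actual MAPS implementation:

```python
# Illustrative sketch of the three MAPS mechanisms; all names, sizes,
# and the conditioning scheme are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerspectiveAgent(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64, profile_dim=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Domain-weighted perspective adapter: projects a fixed
        # cognitive-emotional profile into the token representation space.
        self.adapter = nn.Linear(profile_dim, d_model)
        # GRU-based dynamic memory: the recurrent state accumulates the
        # agent's subjective reading of the dialogue so far.
        self.memory = nn.GRU(d_model, d_model, batch_first=True)
        # Token-level attention: query comes from the memory state,
        # keys come from the profile-conditioned tokens.
        self.attn_q = nn.Linear(d_model, d_model)
        self.attn_k = nn.Linear(d_model, d_model)

    def forward(self, token_ids, profile, h0=None):
        # token_ids: (batch, seq), profile: (batch, profile_dim)
        x = self.embed(token_ids)                     # (B, T, D)
        bias = self.adapter(profile).unsqueeze(1)     # (B, 1, D)
        x = x + bias                                  # condition tokens on the profile
        out, h = self.memory(x, h0)                   # update subjective memory
        q = self.attn_q(h[-1]).unsqueeze(1)           # (B, 1, D)
        k = self.attn_k(out)                          # (B, T, D)
        scores = (q @ k.transpose(1, 2)) / x.size(-1) ** 0.5
        weights = F.softmax(scores, dim=-1)           # (B, 1, T): inspectable per-token weights
        context = weights @ out                       # (B, 1, D)
        return context.squeeze(1), h, weights.squeeze(1)

# Two profiles read the same utterance through different lenses.
utterance = torch.randint(0, 1000, (1, 6))
logical  = torch.tensor([[1., 0., 0., 0., 0., 0., 0., 0.]])
empathic = torch.tensor([[0., 1., 0., 0., 0., 0., 0., 0.]])
agent = PerspectiveAgent()
_, _, w_logical  = agent(utterance, logical)
_, _, w_empathic = agent(utterance, empathic)
print(w_logical, w_empathic, sep="\n")  # different per-token attention weights
```

Running the same utterance through two different profiles yields two different attention distributions over its tokens, which is the kind of token-level transparency described above.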
Key Results
We tested MAPS on emotionally expressive (EmpatheticDialogues), open-ended (TopicalChat), and task-oriented (MultiWOZ) datasets. Across the board, MAPS:
- Reduced semantic bias between agents over time (showing convergence),
- Maintained high subjectivity scores (preserving individuality),
- Produced diverse yet relevant responses.
Even in goal-driven dialogues like MultiWOZ, agents preserved subtle stylistic and reasoning distinctions—one might confirm with factual clarity, another with empathetic tone.
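To give a feel for the convergence claim, here is a toy sketch of one possible measurement. The paper’s exact semantic-bias metric is not reproduced; cosine distance between the agents’ response embeddings is an assumed stand-in, and the embedding trajectory below is simulated:

```python
# Toy proxy for "semantic bias": cosine distance between the two
# agents' response embeddings at each turn (0 = identical meaning).
# The trajectory is simulated, not real MAPS output.
import torch
import torch.nn.functional as F

def semantic_bias(emb_a: torch.Tensor, emb_b: torch.Tensor) -> float:
    return 1.0 - F.cosine_similarity(emb_a, emb_b, dim=0).item()

torch.manual_seed(0)
shared = torch.randn(64)  # the meaning both agents converge toward
for turn in range(5):
    disagreement = torch.randn(64) * (1.0 - turn / 5)  # shrinking divergence
    agent_a = shared + disagreement
    agent_b = shared - disagreement
    print(f"turn {turn}: semantic bias = {semantic_bias(agent_a, agent_b):.3f}")
```

The printed bias shrinks turn by turn while the two vectors never become identical: convergence without uniformity.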
A Step Toward Human-Like Dialogue
MAPS doesn’t just simulate dialogue—it enacts deliberative understanding. It lets us see how perspectives shift, where disagreements arise, and how convergence happens without uniformity.
In a world increasingly shaped by human-AI interaction, we believe dialogue systems should reflect not just linguistic competence but also cognitive diversity.
MAPS offers a blueprint for that.
What’s Next?
We're currently exploring:
- Learning agent profiles dynamically (instead of setting them manually),
- Integrating MAPS with symbolic planners for long-term reasoning and collaboration.
The vision is clear: dialogue not as reaction, but as co-evolution. Not just coherence, but mutual meaning-making.
If you're interested in collaboration, research, or applications of MAPS in interpretability or human-AI interfaces, feel free to reach out.
Let’s build dialogues that think.
Links to the paper: