Tag: team-dynamics

  • Team Dynamics

    While we’ve explored agent functions in these posts, Sentienta is, at its core, a multi-agent system where cooperation and debate enhance reasoning.

    Multi-agent Debate (MAD) and Multi-agent Cooperative Decision Making (CDM) have recently become intense areas of research, with numerous survey papers exploring both classical (non-LLM) and fully LLM-based approaches ([1], [2], [3]). While these reviews typically provide a high-level overview of the domains in which MAD/CDM systems operate and their general structure, they offer limited detail on enabling effective interaction among LLMs through cooperative and critical dialogue. In this post, we aim to bridge this gap, focusing specifically on techniques for enhancing LLM-based systems.

    We’ll begin by reviewing the characteristics of effective team dynamics, human or otherwise. Teams are most productive when they display these behaviors:

    • Balanced Participation – Ensure all members contribute and have the opportunity to share their insights.
    • Critical Thinking – Evaluate ideas objectively, considering their strengths and weaknesses. Encourage discussion and rebuttals where needed.
    • Well-defined Expertise and Responsibilities – Each team member should bring something special to the discussion and be responsible for exercising that expertise.
    • Continuous Learning – Team members should reflect on past discussions and recall earlier decisions to refine the current dialog.
    • Defined Decision-Making Criteria – Teams should have a clear idea of how and when a problem is solved. This may or may not include a team-lead concluding the discussion.

    How might we get a team of LLM-based agents to exhibit these dynamics? LLMs are stateless, which means that whenever we want an agent to participate, it must be provided with the query, the context of the query, and any instructions on how best to answer it.

    As discussed here, the context for the query is provided as a transcript of the current and past dialogs. The system prompt is where the agent is given instructions for team dynamics and the persona that defines the agent’s expertise.
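
    To make this concrete, here is a minimal sketch of how a single agent turn might be packaged for a stateless LLM. The `Agent` type, the `build_messages` helper, and the chat-message format are illustrative assumptions, not Sentienta's actual code:

    ```python
    # Hypothetical sketch: assembling one stateless agent turn.
    from dataclasses import dataclass

    @dataclass
    class Agent:
        name: str
        persona: str            # defines the agent's expertise and responsibilities
        team_instructions: str  # the second-person team-dynamics instructions below

    def build_messages(agent: Agent, transcript: list[str], query: str) -> list[dict]:
        """Package everything the stateless LLM needs for a single turn."""
        system_prompt = f"{agent.persona}\n\n{agent.team_instructions}"
        context = "\n".join(transcript) if transcript else "(no prior discussion)"
        user_prompt = f"Transcript of current and past dialogs:\n{context}\n\nUser query: {query}"
        return [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ]
    ```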

    Here are some key points in the system prompt that address the team dynamics we’re looking for, stated in second-person instructions:

    Balanced Participation:

    **Brevity**: Keep answers short (1-2 sentences) to allow others to participate.

    **Avoid Repetition**: Do not repeat what you or others have already said. Only add new insights or alternative viewpoints.

    **Contribute**: Add new, relevant insights if your response is unique.

    Critical Thinking:

    **Critique**: Think critically about others’ comments and ask probing questions.

    **Listen and Engage**: Focus on understanding your teammates and ask questions that dig into their ideas. Listen for gaps in understanding and use questions to address those gaps.

    **Avoid Repetition**: Do not repeat what you or others have already said. Only add new insights or alternative viewpoints.

    **Prioritize Questions**: Lead with questions that advance the discussion, ensuring clarification or elaboration on points made by others before providing your own insights.

    Well-defined Expertise and Responsibilities:

    This is provided by the agent persona. In addition, there are these team instructions:

    **Engage**: Provide analysis, ask clarifying questions, or offer new ideas based on your expertise.

    Continuous Learning:

    **Read the Transcript**: Review past and current discussions. If neither has content, simply answer the user’s question.

    **Reference**: Answer questions from past dialogs when relevant.

    Defined Decision-Making Criteria:

    **Prioritize High-Value Contributions**: Respond to topics that have not yet been adequately covered or address any gaps in the discussion. If multiple agents are addressing the same point, seek consensus before contributing.

    **Silence**: If you find no specific question to answer or insight to add, do not respond.

    **Completion**: If you have nothing more to add to the discussion and the user’s query has been answered, simply state you have nothing to add.

    These instructions direct each agent to contribute based on their expertise, responding to both user queries and peer inputs. They emphasize brevity and silence when no meaningful input is available, ensuring discussions remain concise, non-redundant, and goal-oriented.
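
    For illustration, these instructions could be gathered into a single team-instructions block that is appended to each agent’s persona. The packaging below is a sketch that condenses the wording above; Sentienta’s exact prompt text may differ:

    ```python
    # Condensed team-dynamics block, usable as the team_instructions field
    # in the earlier sketch (the exact prompt text in Sentienta may differ).
    TEAM_INSTRUCTIONS = """\
    **Brevity**: Keep answers short (1-2 sentences) to allow others to participate.
    **Avoid Repetition**: Do not repeat what you or others have already said; only add new insights or alternative viewpoints.
    **Critique**: Think critically about others' comments and ask probing questions.
    **Engage**: Provide analysis, ask clarifying questions, or offer new ideas based on your expertise.
    **Read the Transcript**: Review past and current discussions; if neither has content, simply answer the user's question.
    **Prioritize High-Value Contributions**: Respond to topics not yet adequately covered or address gaps in the discussion.
    **Silence**: If you find no specific question to answer or insight to add, do not respond.
    **Completion**: If you have nothing more to add and the user's query has been answered, simply state you have nothing to add.
    """
    ```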

    Conclusion

    The team dialog evolves dynamically, with each agent addressing the user’s query according to these guidelines. The dialog continues until every agent has participated fully, typically responding several times to ideas offered by teammates. Once each agent decides there is nothing more to add, the discussion comes to an end.
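
    Continuing the earlier sketch, an orchestration loop of the kind described above might look as follows. The `call_llm` helper, the round limit, and the “nothing to add” check are assumptions for illustration, not Sentienta’s actual implementation:

    ```python
    # Hypothetical round-based discussion loop: each agent takes a turn per round,
    # and the dialog ends once a full round passes with no new contributions.
    NOTHING_TO_ADD = "nothing to add"

    def run_discussion(agents, query, call_llm, max_rounds=5):
        transcript: list[str] = []
        for _ in range(max_rounds):
            new_content = False
            for agent in agents:
                reply = call_llm(build_messages(agent, transcript, query)).strip()
                if not reply:
                    continue  # the agent chose silence this turn
                transcript.append(f"{agent.name}: {reply}")
                if NOTHING_TO_ADD not in reply.lower():
                    new_content = True  # this agent still added something new
            if not new_content:
                break  # every agent was silent or declared completion
        return transcript
    ```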

    References:

    [1] W. Jin, H. Du, B. Zhao, X. Tian, B. Shi, and G. Yang, “A Comprehensive Survey on Multi-Agent Cooperative Decision-Making: Scenarios, Approaches, Challenges and Perspectives.” Available at SSRN.

    [2] X. Li, S. Wang, S. Zeng, et al., “A survey on LLM-based multi-agent systems: workflow, infrastructure, and challenges,” Vicinagearth 1, 9 (2024). Available at DOI.

    [3] Y. Rizk, M. Awad, and E. W. Tunstel, “Decision Making in Multiagent Systems: A Survey,” IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 3, pp. 514-529, Sept. 2018, doi: 10.1109/TCDS.2018.2840971. Available at IEEE.