GameTalk: Training LLMs for Strategic Conversation

Victor Conchello Vendrell, Max Ruiz Luyten, Mihaela van der Schaar

Published January 22, 2026

Abstract

Strategic decision-making in multi-agent settings is a key challenge for large language models (LLMs), particularly when coordination and negotiation must unfold over extended conversations. While recent work has explored the use of LLMs in isolated decision tasks, little attention has been given to optimizing long-term objectives through dialogue. We introduce GameTalk, a framework for training LLMs to make strategic decisions via multi-turn interactions. Unlike prior work that focuses on single-turn objectives or static action prediction, we train LLMs to optimize a global objective across full conversations. We achieve this by adapting fine-tuning methods like GRPO, DPO, and STaR to incorporate reward signals that depend on the entire interaction. We evaluate this approach on a suite of increasingly complex games, designed to stress different aspects of reasoning, coordination, and opponent modeling. Our results show that GameTalk significantly outperforms untrained models, especially under reward shaping, with DPO consistently yielding the strongest gains. These findings position conversational fine-tuning as a promising path for LLMs to reason, negotiate, and act in interactive environments.
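
To make the conversation-level reward signal concrete, the following is a minimal Python sketch (illustrative only, not the authors' code) of GRPO-style credit assignment over whole dialogues. The helper names conversation_reward and grpo_advantages, and the toy scorer and dialogues, are hypothetical.

import statistics

def conversation_reward(conversation):
    # Hypothetical scorer: one scalar for the FULL dialogue, e.g. the
    # agent's final payoff in the game. Placeholder: count agent turns.
    return float(sum(1 for turn in conversation if turn["role"] == "agent"))

def grpo_advantages(group):
    # GRPO-style credit assignment: sample a group of complete
    # conversations for the same prompt, score each conversation as a
    # whole, and normalize rewards within the group. Every turn of a
    # conversation inherits that conversation's advantage, so the
    # training signal depends on the entire interaction.
    rewards = [conversation_reward(c) for c in group]
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero spread
    return [(r - mean) / std for r in rewards]

# Toy usage: two sampled dialogues for the same negotiation prompt.
group = [
    [{"role": "agent", "text": "I propose a 50/50 split."},
     {"role": "opponent", "text": "Deal."}],
    [{"role": "agent", "text": "I take everything."},
     {"role": "opponent", "text": "No deal."},
     {"role": "agent", "text": "Fine, 60/40?"}],
]
print(grpo_advantages(group))  # rewards [1.0, 2.0] -> advantages [-1.0, 1.0]

Under this reading, the reward shaping mentioned above would replace the single terminal score with intermediate per-turn signals while keeping the same group-normalized update.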

Keywords

multi-agent settings, large language models, strategic decision-making, multi-turn interactions, fine-tuning methods, GRPO, DPO, STaR, reward shaping, conversational fine-tuning
