A Policy-Gradient Approach to Solving Imperfect-Information Games with Best-Iterate Convergence

Mingyang Liu, Gabriele Farina, Asuman E. Ozdaglar

Abstract

Policy gradient methods have become a staple of any single-agent reinforcement learning toolbox, due to their combination of desirable properties: iterate convergence, efficient use of stochastic trajectory feedback, and theoretically sound avoidance of importance sampling corrections. In multi-agent imperfect-information settings (extensive-form games), however, it is still unknown whether the same desiderata can be achieved while retaining theoretical guarantees. Instead, sound methods for extensive-form games rely on approximating counterfactual values (as opposed to Q values), which are incompatible with policy gradient methodologies. In this paper, we investigate whether policy gradient can be safely used in two-player zero-sum imperfect-information extensive-form games (EFGs). We establish positive results, showing for the first time that a policy gradient method leads to provable best-iterate convergence to a regularized Nash equilibrium in self-play.
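
To make the setting concrete, below is a minimal, hypothetical sketch of entropy-regularized policy-gradient self-play on a toy two-player zero-sum matrix game (matching pennies). This is not the paper's algorithm, which concerns sequential, imperfect-information extensive-form games; the payoff matrix `A`, temperature `tau`, step size `eta`, and starting logits are illustrative choices made only for this example.

```python
# Hypothetical toy example, NOT the paper's algorithm: entropy-regularized
# softmax policy-gradient self-play on a 2x2 zero-sum matrix game.
import numpy as np

A = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])   # matching-pennies payoffs for the row player
tau = 0.1                      # entropy-regularization temperature (assumed value)
eta = 0.05                     # step size (assumed value)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

theta_x = np.array([1.0, 0.0])   # row player's logits (arbitrary start)
theta_y = np.array([0.0, 0.5])   # column player's logits (arbitrary start)

for _ in range(20000):
    x, y = softmax(theta_x), softmax(theta_y)
    qx = A @ y        # expected payoff of each row action vs. the column policy
    qy = -A.T @ x     # column player maximizes the negated payoff (zero-sum)
    # Exact gradient of  E_{a~pi}[q_a] + tau * H(pi)  with respect to the logits
    # under the softmax parameterization (policy-gradient form).
    gx = x * (qx - x @ qx) - tau * x * (np.log(x) - x @ np.log(x))
    gy = y * (qy - y @ qy) - tau * y * (np.log(y) - y @ np.log(y))
    theta_x += eta * gx   # simultaneous (self-play) update for both players
    theta_y += eta * gy

print("row policy   :", softmax(theta_x))
print("column policy:", softmax(theta_y))
# For matching pennies the entropy-regularized equilibrium coincides with the
# uniform Nash equilibrium, so both policies should approach (0.5, 0.5).
```

In this toy game the entropy terms make the regularized objective strongly concave-convex, so simultaneous gradient play with a small step size spirals into the unique regularized equilibrium. The paper's contribution is a guarantee of this flavor, best-iterate convergence to a regularized Nash equilibrium, for a genuine policy-gradient method in the far more general extensive-form setting.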

Download

Not available

Typo or question?

Get in touch!
gfarina AT mit.edu

Metadata

Venue: ICLR 2025
Topic: Decision Making, Optimization, and Computational Game Theory