Faure

Publications

Stability in games with continua of equilibria (journal article). Sebastian Bervoets and Mathieu Faure, Journal of Economic Theory, Volume 179, Issue C, pp. 131-162, 2019

The stability of Nash equilibria has often been studied by examining the asymptotic behavior of the best-response dynamics. This is generally done in games where interactions are global and equilibria are isolated. In this paper, we analyze stability in contexts where interactions are local and where there are continua of equilibria. We focus on the public good game played on a network, where the set of equilibria is known to depend on the network structure (Bramoullé and Kranton, 2007), and where, as we show, continua of equilibria often appear. We provide necessary and sufficient conditions for a component of Nash equilibria to be asymptotically stable vis-à-vis the best-response dynamics. Interestingly, we demonstrate that these conditions relate to the structure of the network in a simple way. We also provide corresponding results for several dynamical systems related to the best response.
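The best-response dynamics at issue here can be illustrated on the public good game of Bramoullé and Kranton (2007), where each player's best reply is to top up neighbors' efforts to a common target (normalized to 1 below). This is a minimal sketch, not the paper's analysis: a hypothetical three-player path network on which sequential best-response updates settle on the specialized equilibrium (1, 0, 1).

```python
def best_response(x, neighbors, i, target=1.0):
    # Bramoullé-Kranton linear best reply: contribute whatever is needed
    # to bring own-plus-neighbors' effort up to the target, if anything.
    return max(0.0, target - sum(x[j] for j in neighbors[i]))

def sequential_br_dynamics(neighbors, x0, rounds=50):
    # Asynchronous (round-robin) discrete-time best-response dynamics.
    x = list(x0)
    for _ in range(rounds):
        for i in range(len(x)):
            x[i] = best_response(x, neighbors, i)
    return x

# Hypothetical path network 0-1-2; the profile (1, 0, 1) is a Nash equilibrium.
neighbors = {0: [1], 1: [0, 2], 2: [1]}
x_star = sequential_br_dynamics(neighbors, [0.0, 0.0, 0.0])  # → [1.0, 0.0, 1.0]
```

On networks with continua of equilibria, the limit of such dynamics depends on the starting point, which is what makes set-valued stability notions necessary.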

Reinforcement Learning with Restrictions on the Action Set (journal article). Mario Bravo and Mathieu Faure, SIAM Journal on Control and Optimization, Volume 53, Issue 1, pp. 287-312, 2015

Consider a two-player normal-form game repeated over time. We introduce an adaptive learning procedure, where the players only observe their own realized payoff at each stage. We assume that agents do not know their own payoff function and have no information on the other player. Furthermore, we assume that they have restrictions on their own actions such that, at each stage, their choice is limited to a subset of their action set. We prove that the empirical distributions of play converge to the set of Nash equilibria for zero-sum and potential games, and games where one player has two actions.

Convergence of generalized urn models to non-equilibrium attractors (journal article). Mathieu Faure and Sebastian Schreiber, Stochastic Processes and their Applications, Volume 125, Issue 8, pp. 3053-3074, 2015

Generalized Pólya urn models have been used to model the establishment dynamics of a small founding population consisting of k different genotypes or strategies. As population sizes get large, these population processes are well approximated by a mean limit ordinary differential equation whose state space is the k-simplex. We prove that if this mean limit ODE has an attractor at which the temporal averages of the population growth rate are positive, then there is a positive probability of the population not going extinct (i.e., growing without bound) and its distribution converging to the attractor. Conversely, when the temporal averages of the population growth rate are negative along this attractor, the population distribution does not converge to the attractor. For the stochastic analog of the replicator equations, which can exhibit non-equilibrium dynamics, we show that verifying the conditions for convergence and non-convergence reduces to a simple algebraic problem. We also apply these results to selection-mutation dynamics to illustrate convergence to periodic solutions of these population genetics models with positive probability.
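As a toy illustration (the paper treats far more general urn schemes), a linear generalized urn in which a ball of type i is added with probability proportional to weights[i] * counts[i] has replicator-type mean dynamics, and the fitter type comes to dominate the population share:

```python
import random

def polya_urn(weights, counts, steps, rng):
    # Linear generalized Pólya urn: at each step one ball of type i is
    # added with probability proportional to weights[i] * counts[i].
    counts = list(counts)
    for _ in range(steps):
        scores = [w * c for w, c in zip(weights, counts)]
        r = rng.random() * sum(scores)
        acc = 0.0
        for i, s in enumerate(scores):
            acc += s
            if r < acc:
                counts[i] += 1
                break
    return counts

# Hypothetical two-type example: type 0 has twice the reinforcement weight.
rng = random.Random(0)
counts = polya_urn([2.0, 1.0], [1, 1], 5000, rng)
share = counts[0] / sum(counts)  # share of the fitter type grows toward 1
```

The mean limit ODE of this urn is a replicator-type flow on the simplex whose interior trajectories converge to the vertex of the fitter type.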

Online Learning and Game Theory. A Quick Overview with recent results and applications (journal article). Mathieu Faure, Pierre Gaillard, Bruno Gaujal and Vianney Perchet, ESAIM: Proceedings and Surveys (Aurélien Garivier, ed.), Volume 51, pp. 246-271, 2015

We study one of the main concepts of online learning and sequential decision problems, known as regret minimization. We investigate three different frameworks: when data are generated according to some i.i.d. process, when no assumptions whatsoever are made on their generation, and, finally, when they are the consequences of some sequential interactions between players. The overall objective is to provide a comprehensive introduction to this domain. In each of these main setups, we define classical algorithms and analyze their performances. Finally, we also show that some concepts of equilibria that emerged in game theory are learnable by players using online learning schemes, while some other concepts are not learnable.
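A classical regret-minimizing algorithm in the adversarial (no-assumption) framework is the exponentially weighted average forecaster (Hedge). A minimal sketch with a hypothetical loss sequence; for losses in [0, 1] its regret against the best fixed action is bounded by ln(N)/η + ηT/8:

```python
import math

def hedge(losses, eta=0.5):
    # Exponentially weighted average forecaster: keep one weight per action,
    # play proportionally to the weights, multiply each weight by
    # exp(-eta * loss) after observing the round's loss vector.
    n = len(losses[0])
    w = [1.0] * n
    total_loss = 0.0
    for round_losses in losses:
        s = sum(w)
        probs = [wi / s for wi in w]
        total_loss += sum(p * l for p, l in zip(probs, round_losses))
        w = [wi * math.exp(-eta * l) for wi, l in zip(w, round_losses)]
    return total_loss

# Hypothetical loss sequence over 40 rounds with two actions.
losses = [[1.0, 0.0]] * 30 + [[0.0, 1.0]] * 10
best_fixed = min(sum(l[a] for l in losses) for a in range(2))  # = 10.0
regret = hedge(losses) - best_fixed  # small and within the theoretical bound
```

With η = 0.5, N = 2 and T = 40 the bound is ln(2)/0.5 + 0.5·40/8 ≈ 3.89, and the realized regret here is well inside it.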

Quasi-stationary distributions for randomly perturbed dynamical systems (journal article). Mathieu Faure and Sebastian Schreiber, Annals of Applied Probability, Volume 24, Issue 2, pp. 553-598, 2014

We analyze quasi-stationary distributions {μ_ε}_{ε>0} of a family of Markov chains {X^ε}_{ε>0} that are random perturbations of a bounded, continuous map F : M → M, where M is a closed subset of ℝ^k. Consistent with many models in biology, these Markov chains have a closed absorbing set M_0 ⊂ M such that F(M_0) = M_0 and F(M∖M_0) = M∖M_0. Under some large deviations assumptions on the random perturbations, we show that, if there exists a positive attractor for F (i.e., an attractor for F in M∖M_0), then the weak* limit points of μ_ε are supported by the positive attractors of F. To illustrate the broad applicability of these results, we apply them to nonlinear branching process models of metapopulations, competing species, host-parasitoid interactions and evolutionary games.

Consistency of Vanishingly Smooth Fictitious Play (journal article). Michel Benaïm and Mathieu Faure, Mathematics of Operations Research, Volume 38, Issue 3, pp. 437-450, 2013

We discuss consistency of vanishingly smooth fictitious play, a strategy in the context of game theory which can be regarded as a smooth fictitious play procedure where the smoothing parameter is time-dependent and asymptotically vanishes. This answers a question initially raised by Drew Fudenberg and Satoru Takahashi.

Ergodic properties of weak asymptotic pseudo-trajectories for set-valued dynamical systems (journal article). Mathieu Faure and Gregory Roth, Stochastics and Dynamics, Volume 13, Issue 1, Article 1250011, 2013

Stochastic Approximation, Cooperative Dynamics and Supermodular Games (journal article). Mathieu Faure and Michel Benaïm, Annals of Applied Probability, Volume 22, Issue 5, pp. 2133-2164, 2012

This paper considers a stochastic approximation algorithm, with decreasing step size and martingale difference noise. Under very mild assumptions, we prove the nonconvergence of this process toward a certain class of repulsive sets for the associated ordinary differential equation (ODE). We then use this result to derive the convergence of the process when the ODE is cooperative in the sense of Hirsch [SIAM J. Math. Anal. 16 (1985) 423-439]. In particular, this allows us to extend significantly the main result of Hofbauer and Sandholm [Econometrica 70 (2002) 2265-2294] on the convergence of stochastic fictitious play in supermodular games.
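The algorithms covered by this result have the familiar Robbins-Monro form x_{n+1} = x_n + a_n (h(x_n) + noise). A minimal sketch under assumed ingredients (step size a_n = 1/(n+1), a hypothetical mean field h(x) = 1 - x whose ODE has the attractor x* = 1, and bounded i.i.d. noise as the martingale difference sequence):

```python
import random

def robbins_monro(h_noisy, x0, steps, rng):
    # Stochastic approximation with decreasing step size a_n = 1/(n+1):
    # x_{n+1} = x_n + a_n * (mean field + martingale-difference noise).
    x = x0
    for n in range(steps):
        x += (1.0 / (n + 1)) * h_noisy(x, rng)
    return x

# Hypothetical example: mean ODE dx/dt = 1 - x, globally attracted to x* = 1.
noisy_field = lambda x, rng: (1.0 - x) + rng.uniform(-1.0, 1.0)
rng = random.Random(42)
x_final = robbins_monro(noisy_field, 5.0, 20000, rng)  # ≈ 1.0
```

Here the mean ODE has a single globally attracting point; the paper's contribution concerns the harder cooperative case, where nonconvergence to repulsive sets must be ruled out first.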

Stochastic Approximations of Set-Valued Dynamical Systems: Convergence with Positive Probability to an Attractor (journal article). Mathieu Faure and Gregory Roth, Mathematics of Operations Research, Volume 35, Issue 3, pp. 624-640, 2010

A successful method to describe the asymptotic behavior of a discrete-time stochastic process governed by some recursive formula is to relate it to the limit sets of a well-chosen mean differential equation. Under an attainability condition, Benaïm proved that convergence to a given attractor of the flow induced by this dynamical system occurs with positive probability for a class of Robbins-Monro algorithms. Benaïm, Hofbauer, and Sorin generalised this approach to stochastic approximation algorithms whose average behavior is related to a differential inclusion instead. We pursue the analogy by extending to this setting the result of convergence with positive probability to an attractor.

Self-normalized Large Deviations for Markov Chains (journal article). Mathieu Faure, Electronic Journal of Probability, Volume 7, pp. 1-31, 2002

Learning with minimal information in continuous games (journal article). Sebastian Bervoets, Mario Bravo and Mathieu Faure, Theoretical Economics, forthcoming

While payoff-based learning models are almost exclusively devised for finite action games, where players can test every action, it is harder to design such learning processes for continuous games. We construct a stochastic learning rule, designed for games with continuous action sets, which requires no sophistication from the players and is simple to implement: players update their actions according to the variation in their own payoff between the current and previous action. We then analyze its behavior in several classes of continuous games and show that convergence to a stable Nash equilibrium is guaranteed in all games with strategic complements as well as in concave games, while convergence to Nash occurs in all locally ordinal potential games as soon as Nash equilibria are isolated.
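A deterministic caricature of such a payoff-comparison rule (illustrative only; the paper's actual process is stochastic and differs in detail): keep moving in the same direction while own payoff rises, otherwise reverse and shrink the step. On a hypothetical concave payoff it settles at the maximizer:

```python
def payoff_based_ascent(payoff, x0, step=1.0, iters=200):
    # Stylized payoff-comparison rule: compare own payoff at the current
    # and previous action; reverse direction and halve the step on a drop.
    x, direction = x0, 1.0
    prev = payoff(x)
    for _ in range(iters):
        x_new = x + direction * step
        cur = payoff(x_new)
        if cur < prev:
            direction = -direction
            step *= 0.5
        x, prev = x_new, cur
    return x

# Hypothetical concave payoff with maximizer x* = 2.
x_hat = payoff_based_ascent(lambda x: -(x - 2.0) ** 2, 0.0)  # ≈ 2.0
```

The player needs only two scalar payoff observations per period and no knowledge of the payoff function itself, which is the sense in which the information requirement is minimal.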