
Pierre Bertrand

Researcher, Aix-Marseille Université, Faculté d'économie et de gestion (FEG)

Econometrics, finance and mathematical methods
Status
Maître de conférences (associate professor)
Research area(s)
Game theory and social networks
PhD thesis
2021, Université Paris Cité (Paris 6)
Download
CV
Address

AMU - AMSE
5-9 Boulevard Maurice Bourdet, CS 50498
13205 Marseille Cedex 1

Abstract For some smooth special cases of generalized $\varphi$-divergences, as well as of new divergences (called scaled shift divergences), we derive approximations of the ubiquitous (weighted) $\ell_1$-distance and (weighted) $\ell_1$-norm.
Keywords Kullback-Leibler divergence, Divergence analysis, $\ell_1$-distance/norm, Generalized $\varphi$-divergences
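For context, a standard fact (not a result of this paper) makes the link between $\varphi$-divergences and the $\ell_1$-distance explicit: the $\ell_1$-distance between two probability vectors is itself the $\varphi$-divergence generated by $\varphi(t)=|t-1|$,

$$
D_{\varphi}(P\,\|\,Q) \;=\; \sum_{i} q_i\,\varphi\!\left(\frac{p_i}{q_i}\right),
\qquad
\varphi(t)=|t-1| \;\Longrightarrow\; D_{\varphi}(P\,\|\,Q)=\sum_i |p_i-q_i| = \|P-Q\|_1 .
$$

Smooth generators such as $\varphi(t)=t\log t$ (Kullback-Leibler) replace the kink of $|t-1|$ at $t=1$ with a differentiable function, which is the setting in which approximating the $\ell_1$-distance by smooth special cases becomes meaningful.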
Abstract This paper compares two coupling approaches as basic layers for building clustering criteria suited to modularizing and clustering very large networks. We briefly use "optimal transport theory" as a starting point, and as a means, to derive two canonical couplings: "statistical independence" and "logical indetermination". A symmetric list of properties is provided, notably the so-called "Monge's properties", applied to contingency matrices and justifying the $\otimes$ versus $\oplus$ notation. A study is proposed that highlights "logical indetermination", because it is by far the less well known of the two. Finally, we estimate the average difference between both couplings as the key explanation of their usually close results in network clustering.
Keywords Graph Theoretical Approaches, Optimal Transport, Correlation Clustering, Coupling Functions, Logical Indetermination, Mathematical Relational Analysis
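To make the comparison concrete, here is a minimal Python sketch (illustrative, not the paper's code) contrasting the two canonical couplings of marginals $p\in\mathbb{R}^n$ and $q\in\mathbb{R}^m$: the independence coupling $p_iq_j$, and the indetermination coupling taken here in the closed form $p_i/m + q_j/n - 1/(nm)$ often quoted for it (an assumption of this sketch; it has the right marginals but may have negative entries for very unbalanced marginals). The printed value is the average absolute gap between the two matrices, the quantity whose smallness the abstract invokes.

```python
import numpy as np

def independence_coupling(p, q):
    """Product coupling: pi_ij = p_i * q_j (statistical independence)."""
    return np.outer(p, q)

def indetermination_coupling(p, q):
    """Closed form assumed here for the 'logical indetermination' coupling:
    pi_ij = p_i/m + q_j/n - 1/(n*m). Its row/column sums are p and q, but
    entries can be negative for very unbalanced marginals."""
    n, m = len(p), len(q)
    return p[:, None] / m + q[None, :] / n - 1.0 / (n * m)

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(5))  # random marginal on n = 5 rows
q = rng.dirichlet(np.ones(7))  # random marginal on m = 7 columns

indep = independence_coupling(p, q)
indet = indetermination_coupling(p, q)

# Both matrices share the marginals p and q ...
assert np.allclose(indep.sum(axis=1), p) and np.allclose(indet.sum(axis=1), p)
assert np.allclose(indep.sum(axis=0), q) and np.allclose(indet.sum(axis=0), q)
# ... and are typically close, as measured by the average absolute gap.
print("mean |independence - indetermination| =", np.abs(indep - indet).mean())
```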
Abstract We propose a new random method to minimize deterministic continuous functions over subsets $\mathcal{S}$ of a high-dimensional space $\mathbb{R}^K$ without assuming convexity. Our procedure alternates between a Global Search (GS) regime to identify candidates and a Concentrated Search (CS) regime to improve an eligible candidate in the constraint set $\mathcal{S}$. Beyond the alternation between these two very different regimes, the originality of our approach lies in leveraging high dimensionality. We demonstrate rigorous concentration properties under the CS regime. In parallel, we also show that GS reaches any point in $\mathcal{S}$ in finite time. Finally, we demonstrate the relevance of our new method through two concrete applications. The first deals with the reduction of the $\ell_1$-norm of a LASSO solution. The second compresses a neural network by pruning weights while maintaining performance; our approach achieves significant weight reduction with minimal performance loss, offering an effective solution for network optimization.
Keywords High-dimensional optimization, Stochastic search, LASSO, Basis pursuit denoising, Neural network compression
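The GS/CS alternation is easy to picture. Below is a toy Python sketch of the general pattern (a generic illustration under simplifying assumptions, not the authors' exact algorithm): GS samples uniformly over a box and keeps feasible points of $\mathcal{S}$ that improve the objective, while CS perturbs the incumbent with Gaussian noise of shrinking scale and accepts only feasible improvements.

```python
import numpy as np

def gs_cs_minimize(f, in_S, K, lo=-1.0, hi=1.0, n_rounds=20,
                   gs_draws=500, cs_steps=200, sigma0=0.1, seed=0):
    """Toy alternation of a Global Search (GS: uniform draws over [lo, hi]^K,
    kept when they fall in S and improve f) and a Concentrated Search
    (CS: Gaussian perturbations of the incumbent with decaying scale)."""
    rng = np.random.default_rng(seed)
    best_x, best_f = None, np.inf
    for _ in range(n_rounds):
        # GS regime: explore the whole box to find eligible candidates in S.
        for _ in range(gs_draws):
            x = rng.uniform(lo, hi, size=K)
            if in_S(x) and f(x) < best_f:
                best_x, best_f = x, f(x)
        if best_x is None:
            continue  # no feasible candidate yet: keep exploring
        # CS regime: concentrate around the incumbent with shrinking noise.
        sigma = sigma0
        for _ in range(cs_steps):
            y = best_x + sigma * rng.standard_normal(K)
            if in_S(y) and f(y) < best_f:
                best_x, best_f = y, f(y)
            sigma *= 0.99
    return best_x, best_f

# Example: a non-convex objective over the half-space {x : sum(x) >= 0}.
K = 50
f = lambda x: np.sum(np.cos(3.0 * x)) + np.sum(x ** 2)
in_S = lambda x: x.sum() >= 0.0
x_best, f_best = gs_cs_minimize(f, in_S, K)
print("best value found:", f_best)
```

The half-space constraint keeps the sketch honest in high dimension (uniform draws land in it half the time); for thin constraint sets, the GS regime would need a smarter proposal than uniform sampling.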