
Pierre Bertrand

Faculty, Aix-Marseille Université, Faculté d'économie et de gestion (FEG)

Econometrics, Finance and Mathematical Methods
Status
Assistant professor
Research domain(s)
Game theory and social networks
Thesis
2021, Université Paris Cité
Address

AMU - AMSE
5-9 Boulevard Maurice Bourdet, CS 50498
13205 Marseille Cedex 1

Abstract For certain smooth special cases of generalized $\varphi$-divergences, as well as of new divergences (called scaled shift divergences), we derive approximations of the omnipresent (weighted) $\ell_1$-distance and (weighted) $\ell_1$-norm.
Keywords Kullback-Leibler divergence, Divergence analysis, $\ell_1$-distance/norm, Generalized $\varphi$-divergences
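
As background for this abstract (a standard identity, not taken from the paper itself): the non-smooth generator $\varphi(t)=|t-1|$ recovers the $\ell_1$-distance exactly, which is why smooth generators close to it yield the kind of approximations mentioned above:

$$D_\varphi(P\,\|\,Q) = \sum_i q_i\,\varphi\!\Big(\frac{p_i}{q_i}\Big), \qquad \varphi(t)=|t-1| \;\Longrightarrow\; D_\varphi(P\,\|\,Q) = \sum_i |p_i - q_i| = \|P-Q\|_1 .$$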
Abstract This paper aims to compare two coupling approaches as basic layers for building clustering criteria suited to modularizing and clustering very large networks. We briefly use "optimal transport theory" as a starting point, and as a way to derive two canonical couplings: "statistical independence" and "logical indetermination". A symmetric list of properties is provided, notably the so-called "Monge properties", applied to contingency matrices and justifying the $\otimes$ versus $\oplus$ notation. A study is proposed that highlights "logical indetermination", because it is by far the lesser known of the two. Finally, we estimate the average difference between the two couplings as the key explanation of their usually close results in network clustering.
Keywords Graph Theoretical Approaches, Optimal Transport, Correlation Clustering, Coupling Functions, Logical Indetermination, Mathematical Relational Analysis
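
To make the two couplings concrete, here is a minimal Python sketch. The closed form used for the "logical indetermination" coupling, $\pi_{ij} = p_i/m + q_j/n - 1/(nm)$, is an assumption drawn from the relational-analysis literature rather than a quotation from the paper, and it requires margins that keep all entries non-negative.

```python
import numpy as np

# Margins of a toy contingency table (proportions): p over n rows, q over m columns.
p = np.array([0.5, 0.3, 0.2])   # row margin
q = np.array([0.4, 0.4, 0.2])   # column margin
n, m = len(p), len(q)

# "Statistical independence" coupling: pi_ij = p_i * q_j.
independence = np.outer(p, q)

# Assumed form of the "logical indetermination" coupling:
# pi_ij = p_i/m + q_j/n - 1/(n*m). It reproduces the same margins,
# but needs (p, q) such that every entry stays non-negative.
indetermination = p[:, None] / m + q[None, :] / n - 1.0 / (n * m)

# Both couplings share the prescribed row margins...
assert np.allclose(independence.sum(axis=1), p)
assert np.allclose(indetermination.sum(axis=1), p)

# ...and their average elementwise difference is typically small,
# which is the closeness the abstract quantifies.
print(np.abs(independence - indetermination).mean())
```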
Abstract We propose a new random method to minimize deterministic continuous functions over subsets $\mathcal{S}$ of a high-dimensional space $\mathbb{R}^K$ without assuming convexity. Our procedure alternates between a Global Search (GS) regime, which identifies candidates, and a Concentrated Search (CS) regime, which improves an eligible candidate in the constraint set $\mathcal{S}$. Beyond the alternation between these two very different regimes, the originality of our approach lies in leveraging high dimensionality. We prove rigorous concentration properties under the CS regime. In parallel, we show that GS reaches any point in $\mathcal{S}$ in finite time. Finally, we demonstrate the relevance of the new method through two concrete applications. The first reduces the $\ell_1$-norm of a LASSO solution. The second compresses a neural network by pruning weights while maintaining performance; our approach achieves significant weight reduction with minimal performance loss, offering an effective solution for network optimization.
Keywords High-dimensional optimization, Stochastic search, Lasso, Basis pursuit denoising, Neural network compression
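
A schematic Python sketch of the GS/CS alternation described above. The objective, the constraint set $\mathcal{S}$, the switching rule, and the noise scale are all illustrative assumptions, not the paper's actual procedure or its theoretical guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

def gs_cs_minimize(f, in_S, K, lo, hi, n_iters=5000, cs_scale=0.05):
    """Schematic GS/CS alternation: GS samples uniformly over the box
    [lo, hi]^K to find eligible candidates in S; CS perturbs the best
    feasible candidate found so far with small Gaussian noise."""
    best_x, best_val = None, np.inf
    for t in range(n_iters):
        if best_x is None or t % 2 == 0:
            x = rng.uniform(lo, hi, size=K)                 # GS: global exploration
        else:
            x = best_x + cs_scale * rng.standard_normal(K)  # CS: local refinement
        if in_S(x):                        # keep only points in the constraint set S
            val = f(x)
            if val < best_val:
                best_x, best_val = x, val
    return best_x, best_val

# Toy example: a non-convex objective over a half-space of R^10.
K = 10
f = lambda x: np.sum(np.sin(3.0 * x)) + 0.5 * np.sum(x ** 2)
in_S = lambda x: x.sum() >= 0.0            # simple non-trivial constraint set
x_star, v_star = gs_cs_minimize(f, in_S, K, lo=-2.0, hi=2.0)
print(round(v_star, 3))
```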