Bernoulli and Gauss Take a Look at the MAP
Abstract
This article discusses two Maximum a Posteriori (MAP) interpretations for state-of-the-art methods used in sparse inverse problems: the joint-MAP and the marginal-MAP. Canonically rooted in a Bayesian framework, sparsity is modeled by a general spike and slab distribution. The focus is on recovering the support of the solution rather than the signal amplitudes. We study the prominent Bernoulli-Gaussian model, which leads to NP-hard optimization problems. We show that a judicious re-parametrization of the joint-MAP can serve as an effective surrogate for the marginal-MAP. Additionally, we explore common continuous relaxations of the support and unify them within a single parametrized distribution. After characterizing the behavior of several relaxations, strong links are established between the Bernoulli-Gaussian joint-MAP, the marginal-MAP, and well-studied methods such as the Lasso and Sparse Bayesian Learning. Finally, applying randomized rounding to both the joint-MAP and marginal-MAP problems yields valuable insights into obtaining sparse solutions with an emphasis on support recovery.
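For reference, the sketch below writes out a standard form of the Bernoulli-Gaussian spike-and-slab model and the joint-MAP criterion it induces; the notation (observations y, dictionary H, amplitudes x, binary support s, noise variance \sigma^2, slab variance \sigma_x^2, Bernoulli parameter p) is assumed here and the exact parametrization used in the paper may differ.

```latex
% Bernoulli--Gaussian spike-and-slab prior (standard form; notation assumed).
% s_i \in \{0,1\} encodes the support, x_i the amplitudes.
\begin{align*}
  s_i &\sim \mathrm{Bernoulli}(p), &
  x_i \mid s_i &\sim (1-s_i)\,\delta_0(x_i) + s_i\,\mathcal{N}(0,\sigma_x^2), \\
  y &= H x + n, & n &\sim \mathcal{N}(0,\sigma^2 I).
\end{align*}
% Joint-MAP over (x, s): taking -\log p(x, s \mid y) gives an
% \ell_0-type penalized least squares, hence the NP-hardness noted above.
\begin{equation*}
  \min_{x,\,s}\;
  \frac{1}{2\sigma^2}\,\lVert y - H x \rVert_2^2
  + \frac{1}{2\sigma_x^2}\,\lVert x \rVert_2^2
  + \lambda\,\lVert s \rVert_0
  \quad \text{s.t. } x_i = 0 \text{ whenever } s_i = 0,
\end{equation*}
% where the per-atom penalty collects the prior terms:
\begin{equation*}
  \lambda = \log\frac{1-p}{p} + \tfrac{1}{2}\log\!\left(2\pi\sigma_x^2\right).
\end{equation*}
```

The marginal-MAP discussed in the abstract instead maximizes the posterior of s alone, with the amplitudes x integrated out, which is what motivates comparing the two formulations on support recovery.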