Optimization methods for sparse covariance selection for Gaussian models
 
 
Description:  Recently proposed $l_1$-regularized maximum-likelihood optimization methods for learning sparse Markov networks result in convex problems that can be solved optimally and efficiently. However, the accuracy of such methods can be very sensitive to the choice of the regularization parameter, and optimal selection of this parameter remains an open problem. Herein, we propose a Bayesian approach that places a prior on the regularization parameter and investigates its effect. We study the resulting nonconvex optimization problem and describe an efficient approach to solving it. Our formulation yields promising empirical results on both synthetic data and a real-life application, brain imaging (fMRI) data.
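For context, the $l_1$-regularized maximum-likelihood formulation commonly used for sparse inverse covariance (precision matrix) estimation is the convex program

$$\max_{X \succ 0} \; \log\det X - \mathrm{tr}(SX) - \lambda \|X\|_1,$$

where $S$ is the empirical covariance matrix, $X$ the estimated precision matrix whose zero entries correspond to missing edges in the Markov network, and $\lambda > 0$ the regularization parameter; the exact formulation considered in the talk may differ in detail. Larger values of $\lambda$ yield sparser estimates, which is why accuracy is sensitive to its choice; placing a prior on $\lambda$, as proposed here, leads to the nonconvex problem mentioned above.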
Date:  2008-10-30
Start Time:   11:30
Speaker:  Katya Scheinberg (Math. Sciences, IBM Research)
Place:  5.5
Research Groups: -Numerical Analysis and Optimization
 