|
Description: |
Recently proposed $l_1$-regularized maximum-likelihood optimization
methods for learning sparse Markov networks result in
convex problems that can be solved optimally and efficiently.
However, the accuracy of such methods can be very sensitive to
the choice of the regularization parameter, and optimal selection of
this parameter remains an open problem. Herein, we propose a
Bayesian approach that places a prior on the regularization
parameter. We investigate the resulting
nonconvex optimization problem and describe an efficient approach
to solving it. Our formulation yields promising empirical results
on both synthetic data and a real-life application, brain
imaging (fMRI) data.
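As background for the talk, the standard $l_1$-regularized maximum-likelihood estimator for a sparse Gaussian Markov network (the "graphical lasso") can be sketched as follows; this is the convex baseline the abstract refers to, not the speaker's proposed Bayesian method. The sketch uses scikit-learn's `GraphicalLasso` on synthetic data and illustrates the sensitivity to the regularization parameter `alpha`: larger values drive more off-diagonal entries of the estimated precision matrix to zero, yielding a sparser network.

```python
# Illustrative sketch (not the speaker's proposed method): the
# l1-regularized maximum-likelihood "graphical lasso" estimator,
# showing how the recovered sparsity pattern depends on the
# regularization parameter alpha.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Synthetic data from a sparse precision matrix (a chain graph).
p = 5
prec = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(p), cov, size=500)

# Fit with two values of the regularization parameter.
sparse_fit = GraphicalLasso(alpha=0.2).fit(X)
dense_fit = GraphicalLasso(alpha=0.01).fit(X)

def n_zero_offdiag(precision, tol=1e-6):
    """Count off-diagonal entries of the precision matrix near zero."""
    off = precision - np.diag(np.diag(precision))
    return int(np.sum(np.abs(off) < tol))

# A larger alpha produces a sparser estimated Markov network,
# which is why choosing alpha well matters so much in practice.
print(n_zero_offdiag(sparse_fit.precision_),
      n_zero_offdiag(dense_fit.precision_))
```

The gap between the two fits is exactly the sensitivity the abstract highlights: the estimated graph structure changes with `alpha`, motivating a principled (here, Bayesian) treatment of that parameter.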
Area(s):
|
Date: |
|
Start Time: |
11:30 |
Speaker: |
Katya Scheinberg (Math. Sciences, IBM Research)
|
Place: |
5.5
|
Research Groups: |
-Numerical Analysis and Optimization
|
See more:
|