Optimal basis algorithm and its application to matrix scaling
 
 
Description: 

We present the optimal basis (OB) problem and the OB algorithm that we proposed in BIT (1997) {\bf 37}, 591-599. The OB problem is formulated as follows. Given $m+1$ points $\{x_i\}_{i=0}^{m}$ in $R^n$ that generate an $m$-dimensional linear manifold, construct for this manifold a maximally linearly independent basis consisting of vectors of the form $x_i-x_j$. This problem arises, e.g., in stable variants of the secant and interpolation methods, where the Jacobian matrix $f^\prime$ of a nonlinear mapping $f$ must be approximated using values of $f$ computed at $m+1$ points. In this case, it is also desirable to have a combination of finite differences with maximal linear independence. As a natural measure of linear independence, we consider the Hadamard condition number, which is minimized to find an optimal combination of $m$ pairs $\{x_i,x_j\}$ defining the optimal basis. This problem turns out not to be NP-hard: it can be reduced to the minimum spanning tree problem, which the greedy algorithm solves in $O(m^2)$ time. The cost of this reduction is equivalent to one $m\times n$ matrix-matrix multiplication and, by the Coppersmith-Winograd estimate, is $O(n^{2.376})$ for $m=n$. We discuss possible applications of the OB algorithm to the construction of simple non-diagonal prescaling procedures for iterative linear algebra solvers.
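To illustrate the reduction, here is a minimal Python/NumPy sketch; the function name optimal_basis, the use of Prim's greedy algorithm, and the choice of squared Euclidean norms $\|x_i-x_j\|^2$ as edge weights are illustrative assumptions based on this abstract, not the exact construction of the BIT paper. It forms the pairwise distances from one Gram-matrix product $XX^T$ and then grows a minimum spanning tree of the complete graph on the $m+1$ points in $O(m^2)$ time; the $m$ tree edges $(i,j)$ give the difference vectors $x_i-x_j$ of the basis.

import numpy as np

def optimal_basis(X):
    """X: (m+1) x n array of points x_0, ..., x_m.
    Returns the list of index pairs (i, j) and the m x n matrix whose rows
    are the corresponding difference vectors x_i - x_j."""
    m1 = X.shape[0]                          # m + 1 points
    G = X @ X.T                              # Gram matrix: one matrix-matrix product
    sq = np.diag(G)
    D = sq[:, None] + sq[None, :] - 2 * G    # squared pairwise distances ||x_i - x_j||^2

    # Prim's greedy algorithm on the complete graph over the points: O(m^2) overall.
    in_tree = np.zeros(m1, dtype=bool)
    in_tree[0] = True
    best = D[0].copy()                       # cheapest connection of each point to the tree
    parent = np.zeros(m1, dtype=int)
    pairs = []
    for _ in range(m1 - 1):
        j = np.argmin(np.where(in_tree, np.inf, best))
        pairs.append((j, parent[j]))
        in_tree[j] = True
        closer = D[j] < best
        best = np.where(closer, D[j], best)
        parent = np.where(closer, j, parent)

    B = np.array([X[i] - X[j] for i, j in pairs])
    return pairs, B

Calling pairs, B = optimal_basis(X) on an (m+1) x n array X returns the m chosen pairs and the basis matrix B whose rows span the manifold generated by the points; since the minimum spanning tree depends only on the ordering of the edge weights, using squared norms instead of norms yields the same tree.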

Date:  2014-07-25
Start Time:   11:30
Speaker:  Oleg Burdakov (Linköping Univ., Sweden)
Institution:  Linköping University
Place:  Room 5.5; DMUC
Research Groups: Numerical Analysis and Optimization
 
Attached Files: Abstract and Sponsors