Abstract

In this paper, we propose a new algorithm for the recovery of low-rank matrices from compressed linear measurements. The underlying idea of this algorithm is to closely approximate the rank function with a smooth function of the singular values and then to minimize the resulting approximation subject to the linear constraints. The accuracy of the approximation is controlled by a scaling parameter delta, where a smaller delta corresponds to a more accurate fit. The resulting optimization problem is nonconvex for any finite delta. Therefore, to reduce the risk of getting trapped in local minima, a sequence of optimizations is performed, starting with a rough approximation (a large delta) and proceeding to successively finer approximations of the rank with smaller values of delta. To solve the optimization problem for any delta > 0, it is converted to a new program whose cost is a function of two auxiliary positive semidefinite variables. The paper shows that this new program is concave and applies a majorize-minimize technique to solve it, which, in turn, leads to a small number of convex optimization iterations. This optimization scheme is also equivalent to a reweighted Nuclear Norm Minimization (NNM). For any delta > 0, we derive necessary and sufficient conditions for exact recovery that are weaker than those of NNM. On the numerical side, the proposed algorithm is compared to NNM and a reweighted NNM in solving affine rank minimization and matrix completion problems, demonstrating its considerable and consistent superiority in terms of success rate.
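
Since the abstract only outlines the optimization scheme, the following is a minimal sketch, not the paper's implementation, of the coarse-to-fine loop in which each inner step is a convex weighted nuclear norm problem. The function name graduated_reweighted_nnm, the measurement model trace(A_i^T X) = b_i, the choice of square matrices, and the particular weight-update rule are illustrative assumptions; the sketch relies on NumPy and CVXPY.

    import numpy as np
    import cvxpy as cp

    def graduated_reweighted_nnm(meas_mats, b, n, deltas, inner_iters=3):
        # Sketch: recover an n-by-n low-rank X satisfying trace(A_i^T X) = b_i
        # by solving a sequence of convex weighted nuclear norm problems while
        # shrinking the smoothing parameter delta (coarse-to-fine).
        W1 = np.eye(n)
        W2 = np.eye(n)
        X_val = np.zeros((n, n))
        for delta in deltas:                  # large delta first, then smaller
            for _ in range(inner_iters):
                X = cp.Variable((n, n))
                # Each inner step is convex: a weighted nuclear norm problem.
                cost = cp.normNuc(W1 @ X @ W2)
                constraints = [cp.trace(A.T @ X) == bi
                               for A, bi in zip(meas_mats, b)]
                cp.Problem(cp.Minimize(cost), constraints).solve()
                X_val = X.value
                # Illustrative weight update driven by the current singular
                # values; the paper's exact rule follows from its smooth rank
                # surrogate and majorize-minimize step.
                U, s, Vt = np.linalg.svd(X_val)
                d = (s**2 + delta**2) ** -0.25
                W1 = U @ np.diag(d) @ U.T
                W2 = Vt.T @ np.diag(d) @ Vt
        return X_val

A small usage example under the same assumptions: generate a rank-1 matrix, take random affine measurements, and run the sketch with a decreasing delta schedule.

    rng = np.random.default_rng(0)
    n, r, m = 8, 1, 40
    X_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
    meas_mats = [rng.standard_normal((n, n)) for _ in range(m)]
    b = np.array([np.trace(A.T @ X_true) for A in meas_mats])
    X_hat = graduated_reweighted_nnm(meas_mats, b, n, deltas=[1.0, 0.1, 0.01])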