giqpm {rnnmf}    R Documentation
Generalized Iterative Quadratic Programming Method for non-negative quadratic optimization.
Usage

giqpm(
  Gmat,
  dvec,
  x0 = NULL,
  tau = 0.5,
  annealing_rate = 0.25,
  check_optimal_step = TRUE,
  mult_func = NULL,
  grad_func = NULL,
  step_func = NULL,
  zero_tolerance = 1e-09,
  max_iterations = 1000L,
  min_xstep = 1e-09,
  verbosity = 0
)
Arguments

Gmat: a representation of the matrix G.

dvec: a representation of the vector d.

x0: the initial iterate. If none given, we spawn one of the same size as dvec.

tau: the starting shrinkage factor applied to the step length. Should be a value in (0, 1).

annealing_rate: the rate at which we scale the shrinkage factor towards 1. Should be a value in [0, 1).

check_optimal_step: if TRUE, we attempt to take the optimal step length in the given direction. If not, we merely take the longest feasible step in the step direction.

mult_func: a function which takes a matrix and a vector and performs matrix multiplication. The default does this on matrix and vector input, but the user can implement this for some implicit versions of the problem; see the sketch after this table.

grad_func: a function which takes the matrix, the vector d, and the current iterate, and computes the gradient G x + d. As with mult_func, the user can supply this for implicit versions of the problem.

step_func: a function which takes the vector gradient, the product G x, the matrix, the vector d, and the current iterate, and produces a step direction; the step functions in the Examples below show the expected signature.

zero_tolerance: elements of the iterate x smaller than this value are treated as zero.

max_iterations: the maximum number of iterations to perform.

min_xstep: the minimum L-infinity norm of the step taken. Once the step falls under this value, we terminate.

verbosity: controls whether we print information to the console.
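As an illustration of the implicit use case mentioned for mult_func (a sketch only; the argument order assumed here is matrix first, then vector): when G = t(A) %*% A for a tall matrix A, multiplication by G can be done with two thin products, without ever forming G. A matching grad_func would simply add dvec to this product.

## a hedged sketch, assuming mult_func is called as mult_func(matrix, vector);
## here the "matrix" handed to giqpm would be A itself, standing in for
## G = t(A) %*% A.
A <- matrix(rnorm(500 * 50), nrow = 500)
implicit_mult <- function(Amat, x, ...) {
  as.numeric(t(Amat) %*% (Amat %*% x))   # computes (t(A) %*% A) %*% x cheaply
}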
Details

Iteratively solves the problem

    \min_x \frac{1}{2} x^{\top} G x + d^{\top} x, \quad \mbox{subject to} \quad x \ge 0,

where the inequality constraint is elementwise.

This implementation allows the user to specify methods to perform matrix-by-vector multiplication, computation of the gradient (which should be G x + d), and computation of the step direction. By default we compute the optimal step length in the given step direction.
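Because the objective is quadratic, the optimal step length in a direction admits a closed form: for gradient g = G x + d and step direction p, the function \alpha \mapsto f(x + \alpha p) is minimized at \alpha^* = -(g^{\top} p) / (p^{\top} G p), which must then be truncated so the iterate stays nonnegative. A minimal sketch of this computation follows; the helper optimal_alpha is hypothetical, not a function of this package.

## a minimal sketch of the optimal feasible step length; `optimal_alpha`
## is a hypothetical helper, not part of rnnmf.
optimal_alpha <- function(Gmat, dvec, x, p) {
  gradf <- as.numeric(Gmat %*% x + dvec)                     # gradient G x + d
  alpha <- -sum(gradf * p) / as.numeric(t(p) %*% Gmat %*% p) # exact line search
  # truncate so that x + alpha * p stays elementwise nonnegative:
  neg <- p < 0
  if (any(neg)) { alpha <- min(alpha, min(-x[neg] / p[neg])) }
  alpha
}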
Value

a list with the elements

x: the final iterate.

iterations: the number of iterations taken.

converged: whether convergence was detected.
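A brief usage sketch, assuming the element names listed above (only x is exercised in the Examples below):

G0 <- diag(2)
d0 <- c(-1, 0.5)
y0 <- giqpm(G0, d0)
y0$x           # final iterate; approximately c(1, 0) for this toy problem
y0$iterations  # number of iterations taken
y0$converged   # whether convergence was detected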
Note

This package provides proof of concept code which is unlikely to be fast or robust, and may not solve the optimization problem at hand. User assumes all risk.
Author(s)

Steven E. Pav <shabbychef@gmail.com>
References

Pav, S. E. "An Iterative Algorithm for Regularized Non-negative Matrix Factorizations." Forthcoming. (2024)

Merritt, Michael, and Zhang, Yin. "Interior-point Gradient Method for Large-Scale Totally Nonnegative Least Squares Problems." Journal of Optimization Theory and Applications 126, no. 1 (2005): 191–202. https://scholarship.rice.edu/bitstream/handle/1911/102020/TR04-08.pdf
Examples

set.seed(1234)
ssiz <- 100
preG <- matrix(runif(ssiz * (ssiz + 20)), nrow = ssiz)
G <- preG %*% t(preG)   # a positive definite quadratic form
d <- -runif(ssiz)
y1 <- giqpm(G, d)
objective <- function(G, d, x) {
  as.numeric(0.5 * t(x) %*% (G %*% x) + t(x) %*% d)
}

# steepest descent; this does not converge to an actual solution!
steepest_step_func <- function(gradf, ...) { -gradf }
y2 <- giqpm(G, d, step_func = steepest_step_func)

# negative gradient scaled by the current iterate
scaled_step_func <- function(gradf, Gx, Gmat, dvec, x0, ...) { -gradf * abs(x0) }
y3 <- giqpm(G, d, step_func = scaled_step_func)

# scale by the square root of the current iterate instead
sqrt_step_func <- function(gradf, Gx, Gmat, dvec, x0, ...) { -gradf * abs(sqrt(x0)) }
y4 <- giqpm(G, d, step_func = sqrt_step_func)

# complementarity at a solution suggests steps of the form -gradf * x0
complementarity_stepfunc <- function(gradf, Gx, Gmat, dvec, x0, ...) { -gradf * x0 }
y5 <- giqpm(G, d, step_func = complementarity_stepfunc)

# compare objective values at the candidate solutions
objective(G, d, y1$x)
objective(G, d, y2$x)
objective(G, d, y3$x)
objective(G, d, y4$x)
objective(G, d, y5$x)
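# An informal check of a candidate solution (a sketch, not part of the
# package): at a constrained minimizer the KKT conditions require the
# gradient G x + d to be elementwise nonnegative and complementary to x,
# that is, x * (G x + d) = 0.
gradf1 <- as.numeric(G %*% y1$x + d)
min(gradf1)               # should be nonnegative, up to tolerance
max(abs(y1$x * gradf1))   # complementarity residual; should be near zero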