constrOptim               package:stats               R Documentation

Linearly constrained optimisation

Description:

     Minimise a function subject to linear inequality constraints
     using an adaptive barrier algorithm.

Usage:

     constrOptim(theta, f, grad, ui, ci, mu = 1e-04, control = list(),
                 method = if(is.null(grad)) "Nelder-Mead" else "BFGS",
                 outer.iterations = 100, outer.eps = 1e-05, ...)

Arguments:

   theta: Starting value: must be in the feasible region.

       f: Function to minimise (see below).

    grad: Gradient of 'f', or 'NULL' (see below).

      ui: Constraint matrix (see below).

      ci: Constraint vector (see below).

      mu: (Small) tuning parameter.

 control: Passed to 'optim'.

  method: Passed to 'optim'.

outer.iterations: Iterations of the barrier algorithm.

outer.eps: Criterion for relative convergence of the barrier
          algorithm.

     ...: Other arguments passed to 'optim', which will pass them to
          'f' and 'grad' if it does not use them.

Details:

     The feasible region is defined by 'ui %*% theta - ci >= 0'. The
     starting value must be in the interior of the feasible region,
     but the minimum may be on the boundary.

     A logarithmic barrier is added to enforce the constraints and
     then 'optim' is called. The barrier function is chosen so that
     the objective function should decrease at each outer iteration.
     Minima in the interior of the feasible region are typically
     found quite quickly, but a substantial number of outer
     iterations may be needed for a minimum on the boundary.

     The tuning parameter 'mu' multiplies the barrier term. Its
     precise value is often relatively unimportant. As 'mu' increases
     the augmented objective function becomes closer to the original
     objective function, but also less smooth near the boundary of
     the feasible region.

     Any 'optim' method that permits infinite values for the
     objective function may be used (currently all but "L-BFGS-B").

     The objective function 'f' takes as first argument the vector of
     parameters over which minimisation is to take place. It should
     return a scalar result. Optional arguments '...' will be passed
     to 'optim' and then (if not used by 'optim') to 'f'. As with
     'optim', the default is to minimise, but maximisation can be
     performed by setting 'control$fnscale' to a negative value.

     The gradient function 'grad' must be supplied except with
     'method = "Nelder-Mead"'. It should take arguments matching
     those of 'f' and return a vector containing the gradient.

Value:

     As for 'optim', but with two extra components: 'barrier.value'
     giving the value of the barrier function at the optimum, and
     'outer.iterations' giving the number of outer iterations (calls
     to 'optim').

References:

     K. Lange, _Numerical Analysis for Statisticians_. Springer,
     2001, p. 185ff.

See Also:

     'optim', especially 'method = "L-BFGS-B"', which does
     box-constrained optimisation.
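     The following sketch (not part of the original help page; the
     objects 'ui', 'ci', and 'theta0' are illustrative names) shows
     how inequality constraints are encoded: each row of 'ui' holds
     the coefficients of one inequality of the form
     'ui %*% theta >= ci', here 0 <= x1 <= 1 and x2 >= 0.

     ## Hypothetical illustration: encoding 0 <= x1 <= 1 and x2 >= 0
     ui <- rbind(c( 1, 0),   #  x1 >= 0
                 c(-1, 0),   # -x1 >= -1, i.e. x1 <= 1
                 c( 0, 1))   #  x2 >= 0
     ci <- c(0, -1, 0)
     theta0 <- c(0.5, 0.5)                     # strictly feasible start
     stopifnot(all(ui %*% theta0 - ci > 0))    # check interior point
     ## unconstrained minimum at c(2, -1) lies outside the region,
     ## so the constrained optimum is on the boundary
     constrOptim(theta0, function(x) sum((x - c(2, -1))^2),
                 grad = function(x) 2 * (x - c(2, -1)),
                 ui = ui, ci = ci)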
Examples:

     ## from optim
     fr <- function(x) {   ## Rosenbrock Banana function
         x1 <- x[1]
         x2 <- x[2]
         100 * (x2 - x1 * x1)^2 + (1 - x1)^2
     }
     grr <- function(x) {  ## Gradient of 'fr'
         x1 <- x[1]
         x2 <- x[2]
         c(-400 * x1 * (x2 - x1 * x1) - 2 * (1 - x1),
            200 *      (x2 - x1 * x1))
     }

     optim(c(-1.2, 1), fr, grr)

     # Box-constraint, optimum on the boundary
     constrOptim(c(-1.2, 0.9), fr, grr,
                 ui = rbind(c(-1, 0), c(0, -1)), ci = c(-1, -1))

     # x <= 0.9,  x - y >= 0.1
     constrOptim(c(.5, 0), fr, grr,
                 ui = rbind(c(-1, 0), c(1, -1)), ci = c(-0.9, 0.1))

     ## Solves linear and quadratic programming problems
     ## but needs a feasible starting value
     #
     # from example(solve.QP) in 'quadprog'
     # no derivative
     fQP <- function(b) { -sum(c(0, 5, 0) * b) + 0.5 * sum(b * b) }
     Amat <- matrix(c(-4, -3, 0, 2, 1, 0, 0, -2, 1), 3, 3)
     bvec <- c(-8, 2, 0)
     constrOptim(c(2, -1, -1), fQP, NULL, ui = t(Amat), ci = bvec)

     # derivative
     gQP <- function(b) { -c(0, 5, 0) + b }
     constrOptim(c(2, -1, -1), fQP, gQP, ui = t(Amat), ci = bvec)

     ## Now with maximisation instead of minimisation
     hQP <- function(b) { sum(c(0, 5, 0) * b) - 0.5 * sum(b * b) }
     constrOptim(c(2, -1, -1), hQP, NULL, ui = t(Amat), ci = bvec,
                 control = list(fnscale = -1))
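     The returned object also carries the two extra components
     described under Value. A brief illustrative continuation of the
     quadratic-programming example above ('res' is just a made-up
     name, not from the original help page):

     ## Illustrative only: inspect the extra components of the result
     res <- constrOptim(c(2, -1, -1), fQP, gQP, ui = t(Amat), ci = bvec)
     res$barrier.value      # barrier function value at the optimum
     res$outer.iterations   # number of outer iterations (calls to 'optim')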