optim                   package:stats                   R Documentation

General-purpose Optimization

Description:

General-purpose optimization based on Nelder-Mead, quasi-Newton and
conjugate-gradient algorithms. It includes an option for box-constrained
optimization and simulated annealing.

Usage:

    optim(par, fn, gr = NULL, ...,
          method = c("Nelder-Mead", "BFGS", "CG", "L-BFGS-B", "SANN"),
          lower = -Inf, upper = Inf,
          control = list(), hessian = FALSE)

Arguments:

par: Initial values for the parameters to be optimized over.

fn: A function to be minimized (or maximized), with first argument the
vector of parameters over which minimization is to take place. It should
return a scalar result.

gr: A function to return the gradient for the '"BFGS"', '"CG"' and
'"L-BFGS-B"' methods. If it is 'NULL', a finite-difference approximation
will be used. For the '"SANN"' method it specifies a function to
generate a new candidate point; if it is 'NULL', a default Gaussian
Markov kernel is used.

...: Further arguments to be passed to 'fn' and 'gr'.

method: The method to be used. See 'Details'.

lower, upper: Bounds on the variables for the '"L-BFGS-B"' method.

control: A list of control parameters. See 'Details'.

hessian: Logical. Should a numerically differentiated Hessian matrix be
returned?

Details:

Note that arguments after '...' must be matched exactly.

By default this function performs minimization, but it will maximize if
'control$fnscale' is negative.
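For example, a small illustrative sketch of maximization via a negative
'fnscale' (the quadratic objective here is purely illustrative):

    ## Maximize -(x1 - 1)^2 - (x2 - 2)^2, whose maximum is at (1, 2),
    ## by flipping the sign of the objective with 'fnscale = -1'.
    fmax <- function(x) -(x[1] - 1)^2 - (x[2] - 2)^2
    optim(c(0, 0), fmax, control = list(fnscale = -1))$par  # approx. c(1, 2)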
The default method is an implementation of that of Nelder and Mead
(1965), which uses only function values and is robust but relatively
slow. It will work reasonably well for non-differentiable functions.

Method '"BFGS"' is a quasi-Newton method (also known as a variable
metric algorithm), specifically that published simultaneously in 1970 by
Broyden, Fletcher, Goldfarb and Shanno. This uses function values and
gradients to build up a picture of the surface to be optimized.

Method '"CG"' is a conjugate-gradients method based on that by Fletcher
and Reeves (1964) (but with the option of Polak-Ribiere or
Beale-Sorenson updates). Conjugate-gradient methods are generally more
fragile than the BFGS method, but as they do not store a matrix they may
succeed on much larger optimization problems.

Method '"L-BFGS-B"' is that of Byrd et al. (1995), which allows box
constraints, that is, each variable can be given a lower and/or upper
bound. The initial value must satisfy the constraints. This uses a
limited-memory modification of the BFGS quasi-Newton method. If
non-trivial bounds are supplied, this method will be selected, with a
warning.

Nocedal and Wright (1999) is a comprehensive reference for the previous
three methods.

Method '"SANN"' is by default a variant of simulated annealing given in
Belisle (1992). Simulated annealing belongs to the class of stochastic
global optimization methods. It uses only function values but is
relatively slow. It will also work for non-differentiable functions.
This implementation uses the Metropolis function for the acceptance
probability. By default the next candidate point is generated from a
Gaussian Markov kernel with scale proportional to the actual
temperature. If a function to generate a new candidate point is given,
method '"SANN"' can also be used to solve combinatorial optimization
problems. Temperatures are decreased according to the logarithmic
cooling schedule given in Belisle (1992, p. 890); specifically, the
temperature is set to 'temp / log(((t-1) %/% tmax)*tmax + exp(1))',
where 't' is the current iteration step and 'temp' and 'tmax' are
specifiable via 'control', see below. Note that the '"SANN"' method
depends critically on the settings of the control parameters. It is not
a general-purpose method but can be very useful in getting to a good
value on a very rough surface.

Function 'fn' can return 'NA' or 'Inf' if the function cannot be
evaluated at the supplied value, but the initial value must have a
computable finite value of 'fn'. (Except for method '"L-BFGS-B"', where
the values should always be finite.)

'optim' can be used recursively, and for a single parameter as well as
many. It also accepts a zero-length 'par', and just evaluates the
function with that argument.

The 'control' argument is a list that can supply any of the following
components:

'trace' Non-negative integer. If positive, tracing information on the
progress of the optimization is produced. Higher values may produce more
tracing information: for method '"L-BFGS-B"' there are six levels of
tracing. (To understand exactly what these do, see the source code:
higher levels give more detail.)

'fnscale' An overall scaling to be applied to the value of 'fn' and 'gr'
during optimization. If negative, turns the problem into a maximization
problem. Optimization is performed on 'fn(par)/fnscale'.

'parscale' A vector of scaling values for the parameters. Optimization
is performed on 'par/parscale' and these should be comparable in the
sense that a unit change in any element produces about a unit change in
the scaled value.

'ndeps' A vector of step sizes for the finite-difference approximation
to the gradient, on the 'par/parscale' scale. Defaults to '1e-3'.

'maxit' The maximum number of iterations. Defaults to '100' for the
derivative-based methods, and '500' for '"Nelder-Mead"'. For '"SANN"',
'maxit' gives the total number of function evaluations; there is no
other stopping criterion. Defaults to '10000'.

'abstol' The absolute convergence tolerance. Only useful for
non-negative functions, as a tolerance for reaching zero.

'reltol' Relative convergence tolerance. The algorithm stops if it is
unable to reduce the value by a factor of 'reltol * (abs(val) + reltol)'
at a step. Defaults to 'sqrt(.Machine$double.eps)', typically about
'1e-8'.

'alpha', 'beta', 'gamma' Scaling parameters for the '"Nelder-Mead"'
method. 'alpha' is the reflection factor (default 1.0), 'beta' the
contraction factor (0.5) and 'gamma' the expansion factor (2.0).

'REPORT' The frequency of reports for the '"BFGS"', '"L-BFGS-B"' and
'"SANN"' methods if 'control$trace' is positive. Defaults to every 10
iterations for '"BFGS"' and '"L-BFGS-B"', or every 100 temperatures for
'"SANN"'.

'type' For the conjugate-gradients method. Takes value '1' for the
Fletcher-Reeves update, '2' for Polak-Ribiere and '3' for
Beale-Sorenson.

'lmm' An integer giving the number of BFGS updates retained in the
'"L-BFGS-B"' method. It defaults to '5'.

'factr' Controls the convergence of the '"L-BFGS-B"' method. Convergence
occurs when the reduction in the objective is within this factor of the
machine tolerance. Default is '1e7', that is a tolerance of about
'1e-8'.

'pgtol' Helps control the convergence of the '"L-BFGS-B"' method. It is
a tolerance on the projected gradient in the current search direction.
This defaults to zero, when the check is suppressed.

'temp' Controls the '"SANN"' method. It is the starting temperature for
the cooling schedule. Defaults to '10'.

'tmax' The number of function evaluations at each temperature for the
'"SANN"' method. Defaults to '10'.

Any names given to 'par' will be copied to the vectors passed to 'fn'
and 'gr'. Note that no other attributes of 'par' are copied over.
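As a small sketch of the cooling schedule described above, the
temperature used by '"SANN"' at iteration 't' can be computed directly
from the documented formula (here with the default 'temp = 10' and
'tmax = 10'):

    ## Temperature at iterations t = 1, 11, 21, ..., 101 under the
    ## default control values.
    temp <- 10; tmax <- 10
    t <- seq(1, 101, by = tmax)
    temp / log(((t - 1) %/% tmax) * tmax + exp(1))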
Value:

A list with components:

par: The best set of parameters found.

value: The value of 'fn' corresponding to 'par'.

counts: A two-element integer vector giving the number of calls to 'fn'
and 'gr' respectively. This excludes those calls needed to compute the
Hessian, if requested, and any calls to 'fn' to compute a
finite-difference approximation to the gradient.

convergence: An integer code. '0' indicates successful convergence.
Error codes are:

    '1' indicates that the iteration limit 'maxit' had been reached.

    '10' indicates degeneracy of the Nelder-Mead simplex.

    '51' indicates a warning from the '"L-BFGS-B"' method; see component
    'message' for further details.

    '52' indicates an error from the '"L-BFGS-B"' method; see component
    'message' for further details.

message: A character string giving any additional information returned
by the optimizer, or 'NULL'.

hessian: Only if argument 'hessian' is true. A symmetric matrix giving
an estimate of the Hessian at the solution found. Note that this is the
Hessian of the unconstrained problem even if the box constraints are
active.
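For example, the 'convergence' code and 'message' components can be
checked on the returned list (the quadratic objective here is purely
illustrative):

    res <- optim(c(-1.2, 1), function(x) sum((x - 1)^2), method = "BFGS")
    res$convergence   # 0 indicates successful convergence
    if (res$convergence != 0)
        message("optim did not converge: code ", res$convergence)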
Note:

'optim' will work with one-dimensional 'par's, but the default method
does not work well (and will warn). Use 'optimize' instead.

Source:

The code for methods '"Nelder-Mead"', '"BFGS"' and '"CG"' was based
originally on Pascal code in Nash (1990) that was translated by 'p2c'
and then hand-optimized. Dr Nash has agreed that the code can be made
freely available.

The code for method '"L-BFGS-B"' is based on Fortran code by Zhu, Byrd,
Lu-Chen and Nocedal obtained from Netlib (file 'opt/lbfgs_bcm.shar':
another version is in 'toms/778').

The code for method '"SANN"' was contributed by A. Trapletti.

References:

Belisle, C. J. P. (1992) Convergence theorems for a class of simulated
annealing algorithms on R^d. Journal of Applied Probability, 29,
885-895.

Byrd, R. H., Lu, P., Nocedal, J. and Zhu, C. (1995) A limited memory
algorithm for bound constrained optimization. SIAM Journal on Scientific
Computing, 16, 1190-1208.

Fletcher, R. and Reeves, C. M. (1964) Function minimization by conjugate
gradients. Computer Journal, 7, 148-154.

Nash, J. C. (1990) Compact Numerical Methods for Computers. Linear
Algebra and Function Minimisation. Adam Hilger.

Nelder, J. A. and Mead, R. (1965) A simplex method for function
minimization. Computer Journal, 7, 308-313.

Nocedal, J. and Wright, S. J. (1999) Numerical Optimization. Springer.

See Also:

'nlm', 'nlminb'.

'optimize' for one-dimensional minimization and 'constrOptim' for
constrained optimization.

Examples:

    require(graphics)

    fr <- function(x) {   ## Rosenbrock Banana function
        x1 <- x[1]
        x2 <- x[2]
        100 * (x2 - x1 * x1)^2 + (1 - x1)^2
    }
    grr <- function(x) {  ## Gradient of 'fr'
        x1 <- x[1]
        x2 <- x[2]
        c(-400 * x1 * (x2 - x1 * x1) - 2 * (1 - x1),
           200 *      (x2 - x1 * x1))
    }
    optim(c(-1.2, 1), fr)
    optim(c(-1.2, 1), fr, grr, method = "BFGS")
    optim(c(-1.2, 1), fr, NULL, method = "BFGS", hessian = TRUE)
    optim(c(-1.2, 1), fr, grr, method = "CG")
    optim(c(-1.2, 1), fr, grr, method = "CG", control = list(type = 2))
    optim(c(-1.2, 1), fr, grr, method = "L-BFGS-B")

    flb <- function(x) {
        p <- length(x)
        sum(c(1, rep(4, p - 1)) * (x - c(1, x[-p])^2)^2)
    }
    ## 25-dimensional box constrained
    optim(rep(3, 25), flb, NULL, method = "L-BFGS-B",
          lower = rep(2, 25), upper = rep(4, 25))  # par[24] is *not* at boundary

    ## "wild" function, global minimum at about -15.81515
    fw <- function(x)
        10 * sin(0.3 * x) * sin(1.3 * x^2) + 0.00001 * x^4 + 0.2 * x + 80
    plot(fw, -50, 50, n = 1000, main = "optim() minimising 'wild function'")
    res <- optim(50, fw, method = "SANN",
                 control = list(maxit = 20000, temp = 20, parscale = 20))
    res
    ## Now improve locally {typically only by a small bit}:
    (r2 <- optim(res$par, fw, method = "BFGS"))
    points(r2$par, r2$value, pch = 8, col = "red", cex = 2)

    ## Combinatorial optimization: Traveling salesman problem
    library(stats)  # normally loaded
    eurodistmat <- as.matrix(eurodist)

    distance <- function(sq) {  # Target function
        sq2 <- embed(sq, 2)
        return(sum(eurodistmat[cbind(sq2[, 2], sq2[, 1])]))
    }

    genseq <- function(sq) {  # Generate new candidate sequence
        idx <- seq(2, NROW(eurodistmat) - 1, by = 1)
        changepoints <- sample(idx, size = 2, replace = FALSE)
        tmp <- sq[changepoints[1]]
        sq[changepoints[1]] <- sq[changepoints[2]]
        sq[changepoints[2]] <- tmp
        return(sq)
    }

    sq <- c(1, 2:NROW(eurodistmat), 1)  # Initial sequence
    distance(sq)

    set.seed(123)  # chosen to get a good soln relatively quickly
    res <- optim(sq, distance, genseq, method = "SANN",
                 control = list(maxit = 30000, temp = 2000,
                                trace = TRUE, REPORT = 500))
    res  # Near optimum distance around 12842

    loc <- cmdscale(eurodist)
    rx <- range(x <- loc[, 1])
    ry <- range(y <- -loc[, 2])
    tspinit <- loc[sq, ]
    tspres  <- loc[res$par, ]
    s <- seq(NROW(tspres) - 1)
    plot(x, y, type = "n", asp = 1, xlab = "", ylab = "",
         main = "initial solution of traveling salesman problem")
    arrows(tspinit[s, 1], -tspinit[s, 2], tspinit[s + 1, 1], -tspinit[s + 1, 2],
           angle = 10, col = "green")
    text(x, y, labels(eurodist), cex = 0.8)
    plot(x, y, type = "n", asp = 1, xlab = "", ylab = "",
         main = "optim() 'solving' traveling salesman problem")
    arrows(tspres[s, 1], -tspres[s, 2], tspres[s + 1, 1], -tspres[s + 1, 2],
           angle = 10, col = "red")
    text(x, y, labels(eurodist), cex = 0.8)
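    ## Illustrative sketch: 'parscale' for parameters whose natural
    ## magnitudes differ greatly (the objective here is hypothetical).
    ## Optimization is performed on par/parscale, so a unit change in
    ## each scaled parameter has a comparable effect on the objective.
    fsc <- function(x) (x[1] - 1)^2 + (x[2]/1000 - 2)^2
    optim(c(0, 0), fsc, method = "BFGS",
          control = list(parscale = c(1, 1000)))$par   # approx. c(1, 2000)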