optimize                 package:stats                 R Documentation

One Dimensional Optimization

Description:

     The function 'optimize' searches the interval from 'lower' to
     'upper' for a minimum or maximum of the function 'f' with respect
     to its first argument.

     'optimise' is an alias for 'optimize'.

Usage:

     optimize(f, interval, ..., lower = min(interval),
              upper = max(interval), maximum = FALSE,
              tol = .Machine$double.eps^0.25)
     optimise(f, interval, ..., lower = min(interval),
              upper = max(interval), maximum = FALSE,
              tol = .Machine$double.eps^0.25)

Arguments:

       f: the function to be optimized.  The function is either
          minimized or maximized over its first argument depending on
          the value of 'maximum'.

interval: a vector containing the end-points of the interval to be
          searched for the minimum.

     ...: additional named or unnamed arguments to be passed to 'f'.

   lower: the lower end point of the interval to be searched.

   upper: the upper end point of the interval to be searched.

 maximum: logical.  Should we maximize or minimize (the default)?

     tol: the desired accuracy.

Details:

     Note that arguments after '...' must be matched exactly.

     The method used is a combination of golden section search and
     successive parabolic interpolation, and was designed for use with
     continuous functions.  Convergence is never much slower than that
     for a Fibonacci search.  If 'f' has a continuous second derivative
     which is positive at the minimum (which is not at 'lower' or
     'upper'), then convergence is superlinear, and usually of the
     order of about 1.324.

     The function 'f' is never evaluated at two points closer together
     than eps * |x_0| + (tol/3), where eps is approximately
     'sqrt(.Machine$double.eps)' and x_0 is the final abscissa
     'optimize()$minimum'.  If 'f' is a unimodal function and the
     computed values of 'f' are always unimodal when separated by at
     least eps * |x| + (tol/3), then x_0 approximates the abscissa of
     the global minimum of 'f' on the interval '[lower, upper]' with an
     error less than eps * |x_0| + tol.  If 'f' is not unimodal, then
     'optimize()' may approximate a local, but perhaps non-global,
     minimum to the same accuracy.

     The first evaluation of 'f' is always at
     x_1 = a + (1 - phi) * (b - a), where '(a, b) = (lower, upper)' and
     phi = (sqrt(5) - 1)/2 = 0.61803... is the golden section ratio.
     Almost always, the second evaluation is at x_2 = a + phi * (b - a).
     Note that a local minimum inside [x_1, x_2] will be found as the
     solution, even when 'f' is constant in there; see the last
     example.

     'f' will be called as 'f(x, ...)' for a numeric value of x.

Value:

     A list with components 'minimum' (or 'maximum') and 'objective'
     which give the location of the minimum (or maximum) and the value
     of the function at that point.

Source:

     A C translation of Fortran code based on the Algol 60 procedure
     'localmin' given in the reference.

References:

     Brent, R. (1973) _Algorithms for Minimization without
     Derivatives._ Englewood Cliffs, N.J.: Prentice-Hall.

See Also:

     'nlm', 'uniroot'.

Examples:

     require(graphics)

     f <- function(x, a) (x - a)^2
     xmin <- optimize(f, c(0, 1), tol = 0.0001, a = 1/3)
     xmin

     ## See where the function is evaluated:
     optimize(function(x) x^2 * (print(x) - 1), lower = 0, upper = 10)

     ## "wrong" solution with unlucky interval and piecewise constant f():
     f  <- function(x) ifelse(x > -1, ifelse(x < 4, exp(-1/abs(x - 1)), 10), 10)
     fp <- function(x) { print(x); f(x) }

     plot(f, -2, 5, ylim = 0:1, col = 2)
     optimize(fp, c(-4, 20))   # doesn't see the minimum
     optimize(fp, c(-7, 20))   # ok
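
     The return value described under Value can also be illustrated
     with a maximization run.  The following is a minimal sketch, not
     part of the original examples; 'sin' over the interval (0, pi) is
     an arbitrary test function chosen for illustration.

     ## Maximize instead of minimize; the result list then has
     ## components 'maximum' and 'objective'.
     xmax <- optimize(sin, interval = c(0, pi), maximum = TRUE)
     xmax$maximum     # location of the maximum, close to pi/2
     xmax$objective   # value of sin at that point, close to 1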
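
     As a hedged check of the claim in Details that the first
     evaluation is at x_1 = a + (1 - phi) * (b - a), one can compute
     that point for the tracing example above ('lower = 0',
     'upper = 10') and compare it with the first value printed.

     ## First golden-section point for (a, b) = (0, 10); this should
     ## match the first x printed by the tracing example above.
     phi <- (sqrt(5) - 1) / 2
     a <- 0; b <- 10
     a + (1 - phi) * (b - a)   # approximately 3.81966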