Built-in MATLAB routines can take advantage of your custom MATLAB functions if you pass them function pointers, which MATLAB calls "function handles". A number of MATLAB routines accept function handles in this manner. We will use the fminsearch routine as an example: it is a general-purpose implementation of the unconstrained nonlinear optimization method known as the Nelder-Mead simplex algorithm.

To be usable by fminsearch, your function must satisfy two requirements: first, it must take a single argument (call it x); second, it must return the scalar value of the function (call it f). Note that x may be an n-dimensional array, in which case fminsearch will perform n-dimensional optimization, but f must be scalar-valued. A very simple example of such a function is shown below:

File: my_function.m

function f = my_function(x)
    % Paraboloid f = x1^2 + x2^2: takes a single vector argument x and returns a scalar f
    f = x(1)^2 + x(2)^2;

You pass a handle to the function to be optimized into fminsearch. In terms of syntax, creating and using a function handle is simple. Once you have created a MATLAB function (e.g., my_function.m), you can create a handle to it by prepending the "@" character to its name directly in the argument list: fminsearch(@my_function, x0).


>> [xmin, fmin] = fminsearch(@my_function, x0);

The paraboloid f(x,y) = x^2 + y^2 is centered at (and has its minimum at) x = y = 0 and is defined on all of R^2. The minimum value of this function is clearly 0. fminsearch also requires a starting value, which we call x0 and which appears in the function call above.
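For example, the starting point (-1, -1) (the same one used in the runs below) can be supplied as follows, assuming my_function.m is saved on your MATLAB path:

>> x0 = [-1, -1];
>> [xmin, fmin] = fminsearch(@my_function, x0);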

Anonymous functions

Often we want to create a function directly in the interpreter without having to save it in a '.m' file. One way to do this is through anonymous functions (sometimes called lambda functions). To demonstrate this concept, we rewrite my_function (defined above) as an anonymous function, assign its handle to a variable, then pass that handle to fminsearch:


>> f = @(x) x(1)^2 + x(2)^2

f = 

    @(x)x(1)^2+x(2)^2

>> xmin = fminsearch(f, [-1 -1])

xmin =

   1.0e-04 *

    0.2102   -0.2548

Assigning the handle to a variable (f, above) is useful when we want to refer to the function in multiple places or call it more than once. For a one-time use, we can instead do the following, which is truly anonymous (the function has no name):


>> xmin = fminsearch(@(x) x(1)^2 + x(2)^2, [-1 -1])

xmin =

   1.0e-04 *

    0.2102   -0.2548

Finally, it is worth pointing out that, although MATLAB is not a language designed with functional programming in mind, some ideas from functional programming can be implemented in MATLAB. See part 1 and part 2 of Loren Shure's tutorials on Functional Programming in MATLAB, in which passing functions and anonymous functions play a big role. Limited support for closures (informally, a function created by another function) can also be achieved in MATLAB.
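As a minimal sketch of such a closure (the function name make_parabola and its parameter c are invented for illustration), the function below returns an anonymous function handle that "closes over" the value of c in effect when the handle was created; the returned handle can be passed straight to fminsearch.

File: make_parabola.m

function h = make_parabola(c)
    % Return a handle to a shifted paraboloid whose minimum lies at (c, c).
    % The anonymous function captures (closes over) the value of c supplied here.
    h = @(x) (x(1) - c)^2 + (x(2) - c)^2;

>> g = make_parabola(3);
>> xmin = fminsearch(g, [-1 -1]);

With this starting point, fminsearch should converge to a point near (3, 3).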

A major limitation to functional programming in MATLAB is that MATLAB does not perform tail-call optimization (TCO) on recursive functions, so 'for' loops and 'while' loops should still be used in most places. In some contexts it may still make sense to use recursion for simplicity, provided the recursion depth is known to be finite and relatively small. Tail calls are not a problem for most of the prominent C compilers, so a deeply recursive function could instead be implemented in C and called through the MEX interface described previously.
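As a small illustration of this trade-off (the function name below is invented), the recursive function sums the elements of a vector. Because MATLAB will not optimize the tail call away, each element costs a stack frame, so a for loop (or the built-in sum) is the better choice for long vectors.

File: vecsum_recursive.m

function s = vecsum_recursive(v)
    % Sum the elements of v by recursion. Reasonable only when the length of v
    % (i.e., the recursion depth) is known to be small, since MATLAB imposes a
    % recursion limit and does not perform tail-call optimization.
    if isempty(v)
        s = 0;
    else
        s = v(1) + vecsum_recursive(v(2:end));
    end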

More information on fminsearch

Here we go into more detail on fminsearch, as it is a very useful function in general. You can skip to the next section if you are not interested in fminsearch.

While it is sufficient to capture only the x and f values found at the minimum, you will usually want to capture the exitflag returned by fminsearch as well as the additional information found in the output structure. You may also want to specify a tighter minimization tolerance than the default.

To specify options for the optimization routine, use the optimset function provided by MATLAB. optimset takes a list of key-value pairs and returns an options structure suitable for passing to fminsearch. There are many possible options that optimset accepts, but for the purposes of this example we will discuss only TolFun, the tolerance on the function value that the minimization algorithm must reach before the problem is considered "solved" and the function returns.


>> options = optimset('TolFun', 1.e-16);
>> [xmin,fmin,exitflag,output] = ...
    fminsearch(@my_function, [-1,-1], options);

In the example above, four output arguments are captured: xmin, fmin, exitflag, and output. The exitflag indicates the success or failure of the method. If exitflag is 1 on return from fminsearch, the minimization routine converged to the desired tolerance TolFun specified in the call to optimset. If exitflag is 0, the maximum number of iterations or function evaluations was reached before the function value fell below TolFun. If exitflag is -1, the algorithm was terminated early by a user-supplied output function.


>> options = optimset('TolFun', 1.e-16);
>> [xmin,fmin,exitflag,output] = ...
    fminsearch(@my_function, [-1,-1], options);

>> xmin
xmin =
    1.0e-08 *
    0.2019 -0.6711
>> fmin
fmin =
    4.9112e-17

The output variable is a structure that contains additional data about the performance of the algorithm. Two of the most important fields are output.iterations, the number of Nelder-Mead iterations actually required to reach the minimum, and output.funcCount, the number of function evaluations required. (Note that a single iteration of Nelder-Mead requires more than one function evaluation; the actual number depends on several factors, including the dimension of the search space.) Other optimization routines are available in MATLAB, but fminsearch is particularly simple because Nelder-Mead does not require function gradients or Hessians for its minimization algorithm. In more sophisticated problems, the gradient and Hessian may be prohibitively expensive to compute or too "noisy" to be useful.


>> exitflag
exitflag =
    1
>> output.iterations
ans =
    67
>> output.funcCount
ans =
    125
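
In a script, rather than inspecting these values by hand, you might branch on exitflag after the call. A minimal sketch, reusing the variables from the session above:

if exitflag == 1
    fprintf('Converged: f = %g at (%g, %g)\n', fmin, xmin(1), xmin(2));
elseif exitflag == 0
    warning('Hit the iteration/function-evaluation limit before reaching TolFun.');
else
    warning('fminsearch was terminated early (exitflag = %d).', exitflag);
end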
 