Actual source code: projectedarmijo.h
// Context for a projected Armijo (nonmonotone) linesearch for
// constrained minimization.
//
// Given a function f, the current iterate x, and a projected descent
// direction d:
//   Find the smallest i in 0, 1, 2, ..., such that:
//
//      f(P(x + (beta**i)d)) <=
//        f(x) - (sigma)<grad f(x), x - P(x + (beta**i)d)>
//
// The nonmonotone modification of this linesearch replaces the f(x) term
// with a reference value, R, and seeks to find the smallest i such that:
//
//      f(P(x + (beta**i)d)) <=
//        R - (sigma)<grad f(x), x - P(x + (beta**i)d)>
//
// This modification affects neither the convergence nor the rate of
// convergence of an algorithm when R is chosen appropriately. Essentially,
// R must decrease on average in some sense. The benefit of a nonmonotone
// linesearch is that local minimizers can be avoided (by allowing an
// increase in function value), and typically fewer iterations are
// performed in the main code. A sketch of the acceptance test follows.
//
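// As an illustration only (this is a sketch, not TAO's implementation),
// a backtracking loop enforcing the acceptance test could look as
// follows, where fcn (objective), g (gradient at x), proj (elementwise
// projection, e.g. onto bound constraints), xt (work vector), n, and
// maxits are assumed helpers and data; passing R = f(x) recovers the
// plain Armijo test above:
//
//   int i, j;
//   double t = 1.0, rhs;                      /* t = beta**i             */
//   for (i = 0; i < maxits; ++i, t *= beta) {
//     for (j = 0; j < n; ++j)
//       xt[j] = proj(x[j] + t*d[j]);          /* P(x + (beta**i)d)       */
//     rhs = R;
//     for (j = 0; j < n; ++j)
//       rhs -= sigma * g[j] * (x[j] - xt[j]); /* R - sigma*<g, x - P(.)> */
//     if (fcn(xt, n) <= rhs) break;           /* sufficient decrease     */
//   }
//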
// The reference value is chosen based upon historical information
// consisting of function values at previous iterates. The amount of
// history used is determined by the memory size; the memory stores the
// previous function values. The memory is initialized to alpha*f(x^0)
// for some alpha >= 1, with alpha = 1 signifying that we always force
// decrease from the initial point.
//
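// A minimal sketch of this initialization (illustrative names, not
// TAO's API; f0 denotes f(x^0)) would be:
//
//   for (i = 0; i < memorySize; ++i) memory[i] = alpha * f0;
//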
// The reference value can be the maximum value in the memory or can be
// chosen to provide some mean descent. Elements are removed from the
// memory with a replacement policy that either removes the oldest
// value in the memory (FIFO) or the largest value in the memory (MRU).
//
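// As an example (illustrative only, not TAO's code), the "maximum"
// reference rule and the two replacement policies could be sketched as
// follows, where fnew is the function value at the newly accepted point
// and fifo selects the replacement policy:
//
//   /* reference value: maximum over the memory */
//   ref = memory[0];
//   for (i = 1; i < memorySize; ++i)
//     if (memory[i] > ref) ref = memory[i];
//
//   /* after accepting a point with value fnew, replace one entry */
//   if (fifo) {
//     memory[current] = fnew;                 /* oldest entry            */
//     current = (current + 1) % memorySize;
//   } else {
//     imax = 0;                               /* largest entry           */
//     for (i = 1; i < memorySize; ++i)
//       if (memory[i] > memory[imax]) imax = i;
//     memory[imax] = fnew;
//   }
//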
// Additionally, we can add a watchdog strategy to the search, which
// essentially accepts small directions and only checks the nonmonotone
// descent criterion every m steps. This strategy is NOT implemented in
// the code.
//
// Finally, care must be taken when steepest descent directions are used,
// for example, when the Newton direction does not satisfy a sufficient
// descent criterion. The code applies the same test regardless of the
// direction. This type of search may not be appropriate for all
// algorithms. For example, when a gradient direction is used, we may
// want to revert to the best point found and reset the memory so that
// we stay in an appropriate level set after taking a gradient step.
// This type of search is currently NOT supported by the code.
//
// References:
//   Armijo, "Minimization of Functions Having Lipschitz Continuous
//     First-Partial Derivatives," Pacific Journal of Mathematics, volume 16,
//     pages 1-3, 1966.
//   Ferris and Lucidi, "Nonmonotone Stabilization Methods for Nonlinear
//     Equations," Journal of Optimization Theory and Applications, volume 81,
//     pages 53-71, 1994.
//   Grippo, Lampariello, and Lucidi, "A Nonmonotone Line Search Technique
//     for Newton's Method," SIAM Journal on Numerical Analysis, volume 23,
//     pages 707-716, 1986.
//   Grippo, Lampariello, and Lucidi, "A Class of Nonmonotone Stabilization
//     Methods in Unconstrained Optimization," Numerische Mathematik, volume 59,
//     pages 779-805, 1991.
#ifndef __TAO_PROJECTEDARMIJO_H
#define __TAO_PROJECTEDARMIJO_H

#include "src/tao_impl.h"
#include "tao_solver.h"
typedef struct {
  TaoVec *work;            // Work vector

  double *memory;          // Memory of previous function values

  double alpha;            // Initial reference factor >= 1
  double beta;             // Steplength determination < 1
  double sigma;            // Acceptance criterion < 1
  double minimumStep;      // Minimum step size
  double lastReference;    // Reference value of last iteration

  int memorySize;          // Number of function values kept in memory
  int current;             // Current element for FIFO
  int referencePolicy;     // Integer for reference calculation rule
  int replacementPolicy;   // Policy for replacing values in memory
} TAO_PROJECTEDARMIJO;
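
// A hypothetical setup of this context (the values below are purely
// illustrative and are not claimed to be TAO's defaults; the policy
// fields are set elsewhere according to the chosen rules) might be:
//
//   TAO_PROJECTEDARMIJO ls;
//   ls.alpha       = 1.0;     /* force decrease from the initial point      */
//   ls.beta        = 0.5;     /* halve the step on each backtrack           */
//   ls.sigma       = 1.0e-4;  /* sufficient-decrease parameter              */
//   ls.minimumStep = 1.0e-20; /* smallest step accepted before failure      */
//   ls.memorySize  = 1;       /* memorySize = 1 yields a monotone search    */
//   ls.current     = 0;
//   ls.memory      = 0;       /* allocated with memorySize entries at setup */
//   ls.work        = 0;       /* work vector allocated at setup             */
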
#endif