Second order methods¶
Second-order methods use the Hessian, or an approximation of it, in the search for a minimum.
Secant method¶
It applies Newton's method using estimates of the second derivative obtained from \(\nabla f\), so it only requires knowing the gradient.
The secant method uses the last two iterates to approximate the second derivative:
\(f''(x^{(k)}) \approx \frac{f'(x^{(k)}) - f'(x^{(k-1)})}{x^{(k)} - x^{(k-1)}}\)
Substituting this approximation into Newton's method gives:
\(x^{(k+1)} = x^{(k)} - \frac{x^{(k)} - x^{(k-1)}}{f'(x^{(k)}) - f'(x^{(k-1)})} f'(x^{(k)})\)
The secant method requires an additional initial design point. It suffers from the same problems as Newton's method, and it may take longer to converge because of the approximation.
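A minimal sketch of the secant update in Python, assuming a 1D objective whose derivative `df` is available; the function name, tolerances, and the test objective below are illustrative, not taken from the text:

```python
import math

def secant_method(df, x0, x1, tol=1e-8, max_iter=100):
    """Minimize a 1D function given its derivative df, starting from x0 and x1."""
    g0 = df(x0)
    for _ in range(max_iter):
        g1 = df(x1)
        if abs(g1 - g0) < 1e-15:            # guard against division by zero
            break
        # Newton step with f'' approximated by (g1 - g0) / (x1 - x0)
        x2 = x1 - g1 * (x1 - x0) / (g1 - g0)
        if abs(x2 - x1) < tol:
            return x2
        x0, g0, x1 = x1, g1, x2
    return x1

# Example: minimize f(x) = x^2 + exp(-x), whose derivative is 2x - exp(-x)
x_min = secant_method(lambda x: 2 * x - math.exp(-x), 0.0, 1.0)
```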
Quasi-Newton methods¶
Like the secant method, quasi-Newton methods approximate the Hessian, or more specifically the inverse Hessian. The quasi-Newton update has the form:
\(x^{(k+1)} = x^{(k)} - \alpha^{(k)} Q^{(k)} \nabla f(x^{(k)})\)
where:
\(\alpha^{(k)}\) is a scalar step factor
\(Q^{(k)}\) is the approximation of the inverse Hessian at \(x^{(k)}\)
We start with \(Q^{(1)} = I\) and update it at each iteration (a sketch of the shared iteration follows the list below). There are two main variants:
Davidon-Fletcher-Powell (DFP)
Broyden-Fletcher-Goldfarb-Shanno (BFGS)
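Both variants plug an inverse-Hessian update into the same iteration. Below is a minimal sketch of that shared loop, assuming a crude backtracking line search; the names and tolerances are illustrative, not taken from the text:

```python
import numpy as np

def quasi_newton(f, grad, x0, update_Q, tol=1e-6, max_iter=200):
    """Generic quasi-Newton loop; update_Q is a DFP- or BFGS-style update."""
    x = np.asarray(x0, dtype=float)
    Q = np.eye(len(x))                       # Q^(1) = I
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -Q @ g                           # descent direction
        alpha = 1.0
        for _ in range(30):                  # crude backtracking line search (assumed)
            if f(x + alpha * d) < f(x):
                break
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        delta, gamma = x_new - x, g_new - g  # δ^(k) and γ^(k) as defined below
        Q = update_Q(Q, delta, gamma)
        x, g = x_new, g_new
    return x
```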
Both algorithms use the following definitions:
\(\gamma = \gamma^{(k)} = g^{(k+1)} - g^{(k)}\)
\(\delta = \delta^{(k)} = x^{(k+1)} - x^{(k)}\)
Davidon-Fletcher-Powell (DFP)¶
The DFP update of the inverse Hessian approximation is:
\(Q \leftarrow Q - \frac{Q\gamma\gamma^T Q}{\gamma^T Q \gamma} + \frac{\delta\delta^T}{\delta^T\gamma}\)
This update has the following properties:
\(Q\) remains symmetric and positive definite
If \(f(x) = \frac{1}{2}x^TAx + b^Tx + c\), then \(Q = A^{-1}\). Thus DFP has the same convergence properties as the conjugate gradient method.
For high-dimensional problems, the cost of storing and updating the \(n \times n\) matrix \(Q\) can be significant compared to methods such as the conjugate gradient method.
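As a sketch, the DFP update can be written with NumPy and plugged into the generic loop above (illustrative code, assuming the names introduced there):

```python
import numpy as np

def dfp_update(Q, delta, gamma):
    """DFP update of the inverse-Hessian approximation Q."""
    Qg = Q @ gamma
    return (Q
            - np.outer(Qg, Qg) / (gamma @ Qg)            # - Q γ γ^T Q / (γ^T Q γ)
            + np.outer(delta, delta) / (delta @ gamma))  # + δ δ^T / (δ^T γ)
```

For example, `quasi_newton(f, grad, x0, dfp_update)` runs DFP on an objective `f` with gradient `grad`.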
Broyden-Fletcher-Goldfarb-Shanno (BFGS)¶
BFGS does better than DFP with approximate line search, but it still requires storing a dense \(n\times n\) matrix. For very large problems this memory cost is a concern, and limited-memory BFGS (L-BFGS) can be used to approximate BFGS.
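A corresponding sketch of the standard BFGS inverse-Hessian update, in the same illustrative style as the DFP sketch above:

```python
import numpy as np

def bfgs_update(Q, delta, gamma):
    """BFGS update of the inverse-Hessian approximation Q:
    Q <- (I - ρ δ γ^T) Q (I - ρ γ δ^T) + ρ δ δ^T,  with ρ = 1 / (δ^T γ)."""
    rho = 1.0 / (delta @ gamma)
    I = np.eye(len(delta))
    V = I - rho * np.outer(delta, gamma)
    return V @ Q @ V.T + rho * np.outer(delta, delta)
```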
L-BFGS¶
L-BFGS stores only the last \(m\) values of \(\delta\) and \(\gamma\) rather than the full inverse Hessian, with \(i = 1\) indexing the oldest value and \(i = m\) the most recent. The computation of the descent direction at \(x\) begins with \(q^{(m)} = \nabla f(x)\); the remaining vectors \(q^{(i)}\) for \(i\) from \(m-1\) down to \(1\) are computed using:
These vectors are then used to compute another \(m+1\) vectors, starting with:
and proceeding with \(z^{(i)}\) for increasing \(i\) up to \(m\) according to:
The descent direction is \(d = -z^{(m)}\)
For minimization, the inverse Hessian \(Q\) must remain positive definite. The initial approximation \(Q^{(1)}\) is often set to the diagonal of:
Computing the diagonal of the expression above and substituting the result into \(z^{(1)} = Q^{(1)} q^{(1)}\) gives the starting vector \(z^{(1)}\).
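A sketch of the two-loop recursion in the same illustrative style; for simplicity it uses the common scalar scaling \((\delta^T\gamma / \gamma^T\gamma) I\) of the most recent pair as the initial inverse Hessian, rather than the diagonal initialization described above:

```python
import numpy as np

def lbfgs_direction(grad_x, deltas, gammas):
    """Compute the descent direction d = -Q grad_x from the last m pairs
    (deltas[i], gammas[i]); index 0 is the oldest pair, -1 the most recent."""
    q = np.array(grad_x, dtype=float)
    alphas = []
    for delta, gamma in zip(reversed(deltas), reversed(gammas)):  # newest to oldest
        a = (delta @ q) / (gamma @ delta)
        q -= a * gamma
        alphas.append(a)
    # initial inverse Hessian: scalar scaling from the most recent pair (assumed)
    z = q * (deltas[-1] @ gammas[-1]) / (gammas[-1] @ gammas[-1]) if deltas else q
    for (delta, gamma), a in zip(zip(deltas, gammas), reversed(alphas)):  # oldest to newest
        b = (gamma @ z) / (gamma @ delta)
        z = z + delta * (a - b)
    return -z
```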
In general, quasi-Newton methods perform well.