Quasi-Newton methods are optimization algorithms for finding local maxima and minima of functions.
They are based on Newton's method for finding a stationary point of a function,
where the gradient is zero.
Newton's method assumes that the function can be locally approximated as a quadratic in the region
around the optimum, and it uses the first and second derivatives (the gradient and the Hessian)
to find the stationary point.
In quasi-Newton methods, the Hessian matrix of the objective function does not need to be computed.
Instead, an approximation of the Hessian is updated by analyzing successive gradient vectors.
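As an illustration of this idea, the following is a minimal sketch of one well-known quasi-Newton method, BFGS, which maintains an approximation of the inverse Hessian and updates it from the step taken and the observed change in the gradient. The function name `bfgs_minimize` and the simple backtracking line search are assumptions of this sketch, not part of the text above.

```python
import numpy as np

def bfgs_minimize(f, grad, x0, tol=1e-8, max_iter=200):
    """Minimize f using a BFGS quasi-Newton iteration (illustrative sketch)."""
    n = x0.size
    H = np.eye(n)               # current approximation of the *inverse* Hessian
    x = x0.astype(float)
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g              # quasi-Newton search direction
        # Simple backtracking line search (Armijo sufficient-decrease condition)
        alpha = 1.0
        while f(x + alpha * p) > f(x) + 1e-4 * alpha * (g @ p):
            alpha *= 0.5
        x_new = x + alpha * p
        g_new = grad(x_new)
        s = x_new - x           # step taken
        y = g_new - g           # change in the gradient over that step
        sy = s @ y
        if sy > 1e-12:          # curvature condition; skip the update otherwise
            rho = 1.0 / sy
            I = np.eye(n)
            # BFGS update: refine the inverse-Hessian approximation using
            # only gradient information, no second derivatives computed
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Example: minimize the Rosenbrock function from a standard starting point
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([
    -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
    200 * (x[1] - x[0]**2),
])
x_star = bfgs_minimize(f, grad, np.array([-1.2, 1.0]))
```

Note that the Hessian approximation `H` is built entirely from successive gradient vectors (`y`) and steps (`s`), which is exactly what distinguishes quasi-Newton methods from Newton's method.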