Newton’s method for finding roots, following the Oscar Veliz video: the 1-D case, then the generalised and global variants.

newton’s method in 1-D

  • the starting point needs to be near the root
  • not guaranteed to converge
  • multiplying the update term f(x)/f'(x) by a damping factor in (0, 1] gives damped Newton’s method (sketched after the code below)
from typing import Callable, Tuple

def newton(
    func: Callable[[float], float],
    df: Callable[[float], float],  # derivative of func
    x0: float,                     # initial estimate
    max_iterations: int = 100,
    tolerance: float = 1e-12,
) -> Tuple[float, int]:
    curr_x: float = x0
    iteration: int = 0

    while iteration < max_iterations:
        delta_x: float = func(curr_x) / df(curr_x)

        # converged: the step size has dropped below the tolerance
        if abs(delta_x) < tolerance:
            break

        # Newton step: x_{n+1} = x_n - f(x_n)/f'(x_n)
        curr_x = curr_x - delta_x
        iteration += 1

    return curr_x, iteration  # root estimate, number of iterations
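
For example (a hedged usage sketch, reusing newton from above), the positive root of f(x) = x^2 - 2:

root, n_iter = newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
# root ≈ 1.4142135623..., i.e. sqrt(2)

And a minimal sketch of the damped variant from the list above, assuming a fixed damping factor alpha (in practice alpha is often chosen adaptively, e.g. by a line search); alpha = 1 recovers the plain method:

def damped_newton(
    func: Callable[[float], float],
    df: Callable[[float], float],
    x0: float,
    alpha: float = 0.5,  # hypothetical fixed damping factor in (0, 1]
    max_iterations: int = 100,
    tolerance: float = 1e-12,
) -> Tuple[float, int]:
    curr_x: float = x0
    iteration: int = 0
    while iteration < max_iterations:
        delta_x: float = func(curr_x) / df(curr_x)
        if abs(delta_x) < tolerance:
            break
        # damped step: x_{n+1} = x_n - alpha * f(x_n)/f'(x_n)
        curr_x = curr_x - alpha * delta_x
        iteration += 1
    return curr_x, iteration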

generalised newton’s method

The update becomes x_{n+1} = x_n - J(x_n)^{-1} f(x_n), where f is now a vector-valued function and J is its Jacobian (the matrix of all first-order partial derivatives). Or avoid forming the inverse and instead solve the system of equations J(x_n) delta = f(x_n) for the step delta, then set x_{n+1} = x_n - delta:

import jax
import jax.numpy as jnp

def fwd_solver(f, z_init):
    # fixed-point iteration: apply f until successive iterates stop moving
    z_prev, z = z_init, f(z_init)
    while jnp.linalg.norm(z_prev - z) > 1e-5:
        z_prev, z = z, f(z)
    return z

def newton_solver(f, z_init):
    # find a fixed point of f via Newton's method on the root function f(z) - z;
    # each step solves J(z) @ delta = f_root(z) rather than inverting the Jacobian
    f_root = lambda z: f(z) - z
    g = lambda z: z - jnp.linalg.solve(jax.jacobian(f_root)(z), f_root(z))
    return fwd_solver(g, z_init)
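
A hedged usage sketch: newton_solver finds a fixed point of f, i.e. a root of f(z) - z. The matrix W and the map here are made-up examples, not from the source:

W = jnp.array([[0.5, 0.1],
               [0.2, 0.4]])
f = lambda z: jnp.tanh(W @ z + 1.0)  # a contraction, so a unique fixed point exists

z_star = newton_solver(f, jnp.zeros(2))
print(z_star, f(z_star) - z_star)    # residual should be close to zero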

Colouring the domain by which root each starting point converges to gives a Newton fractal.
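
A hedged sketch for p(z) = z^3 - 1 (grid resolution and iteration count are arbitrary choices): run the Newton iteration on a complex grid, then colour each pixel by the root it lands nearest.

import numpy as np

roots = np.exp(2j * np.pi * np.arange(3) / 3)  # the three cube roots of unity
xs = np.linspace(-2, 2, 400)
z = xs[None, :] + 1j * xs[:, None]             # complex grid over [-2, 2]^2

for _ in range(40):
    z = z - (z**3 - 1) / (3 * z**2)            # Newton step for p(z) = z^3 - 1

# index of the nearest root is each pixel's colour, e.g. rendered with plt.imshow(colours)
colours = np.argmin(np.abs(z[..., None] - roots), axis=-1)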