Say we have a function $f$, how do we approximate its derivative $f'(x)$?
Of course we need Taylor's theorem! Suppose $f \in C^{n+1}([a, b])$; for any $x, x_0 \in [a, b]$ we have
$$f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(x_0)}{k!}(x - x_0)^k + \frac{f^{(n+1)}(\xi)}{(n+1)!}(x - x_0)^{n+1}$$
for some $\xi$ between $x_0$ and $x$
Now assume $f \in C^2$, take $n = 1$, $x_0 = x$, and evaluate at $x + h$:
$$f(x + h) = f(x) + f'(x)h + \frac{f''(\xi)}{2}h^2$$
We can rewrite this to get a familiar formula,
$$f'(x) = \frac{f(x + h) - f(x)}{h} - \frac{f''(\xi)}{2}h$$
In our methodology, we view this as an approximation that comes with an error term, and we arrive at the forward finite difference equation
$$f'(x) = \frac{f(x + h) - f(x)}{h} + E_f(h)$$
where $E_f(h) = -\frac{f''(\xi)}{2}h = O(h)$
We can also write the backward finite difference
$$f'(x) = \frac{f(x) - f(x - h)}{h} + E_b(h)$$
where $E_b(h) = \frac{f''(\xi)}{2}h = O(h)$
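As a sanity check, here is a minimal Python sketch comparing both differences against a known derivative (the test function $\sin$ and the step sizes are my own choices for illustration):

```python
import numpy as np

def forward_diff(f, x, h):
    """Forward finite difference: (f(x+h) - f(x)) / h, error O(h)."""
    return (f(x + h) - f(x)) / h

def backward_diff(f, x, h):
    """Backward finite difference: (f(x) - f(x-h)) / h, error O(h)."""
    return (f(x) - f(x - h)) / h

# Example: f = sin, so f'(1) = cos(1)
for h in [1e-1, 1e-2, 1e-3]:
    err = abs(forward_diff(np.sin, 1.0, h) - np.cos(1.0))
    print(f"h = {h:.0e}, forward error = {err:.2e}")  # error shrinks linearly with h
```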
Can we do better than an error of $O(h)$? We can if we know more about the derivatives
If we assume $f \in C^3$, we can combine higher order forward and backward expansions to yield the central finite difference,
$$f'(x) = \frac{f(x + h) - f(x - h)}{2h} + E_c(h)$$
where $E_c(h) = -\frac{f'''(\xi)}{6}h^2 = O(h^2)$
The key point is that this procedure has essentially the same computational cost but gives us $O(h^2)$, much faster convergence, at the cost of requiring more regularity
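The same kind of sketch shows the faster convergence (again with $\sin$ as an arbitrary test function):

```python
import numpy as np

def central_diff(f, x, h):
    """Central finite difference: (f(x+h) - f(x-h)) / (2h), error O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

for h in [1e-1, 1e-2, 1e-3]:
    err = abs(central_diff(np.sin, 1.0, h) - np.cos(1.0))
    print(f"h = {h:.0e}, central error = {err:.2e}")  # shrinking h by 10x cuts the error ~100x
```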
Now what if we'd like to calculate the integral $\int_a^b f(x)\,dx$? We use trapezoids
The line through the endpoints $(a, f(a))$ and $(b, f(b))$ is $p_1(x) = f(a) + \frac{f(b) - f(a)}{b - a}(x - a)$, and integrating it gives the trapezoidal rule
$$\int_a^b f(x)\,dx = \frac{b - a}{2}\left(f(a) + f(b)\right) + E$$
where $E = -\frac{(b - a)^3}{12} f''(\xi)$ for some $\xi \in (a, b)$ (see Lagrange interpolation)
We can divide the interval into strips to improve this approximation,
$$\int_a^b f(x)\,dx \approx \frac{h}{2}\left[f(x_0) + 2\sum_{i=1}^{n-1} f(x_i) + f(x_n)\right],$$
where $h = \frac{b - a}{n}$ and $x_i = a + ih$
We can view this as an inner product between values of $f$ and a vector of weights,
$$\int_a^b f(x)\,dx \approx \sum_{i=0}^{n} w_i f(x_i), \qquad w = h\left(\tfrac{1}{2}, 1, \dots, 1, \tfrac{1}{2}\right)$$
Observe that this is pretty similar to the equations we derived for differentiation; this is ultimately because they are both linear operators
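Here is a minimal sketch of the composite rule written explicitly as that inner product (the integrand $\sin$ is an arbitrary test case):

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule, written as an inner product w . f(x)."""
    x = np.linspace(a, b, n + 1)   # n strips, n + 1 nodes
    h = (b - a) / n
    w = np.full(n + 1, h)          # interior weights are h
    w[0] = w[-1] = h / 2           # endpoint weights are h/2
    return w @ f(x)

# Example: the integral of sin over [0, pi] is exactly 2
print(trapezoid(np.sin, 0.0, np.pi, 100))  # ~1.99984, error O(h^2)
```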
ODEs
Overview
Now say we have a first-order ODE with an initial condition,
$$y'(t) = f(t, y(t)), \quad y(t_0) = y_0$$
To guarantee a well-behaved solution, we assume $f$ is Lipschitz continuous in $y$: there is an $L > 0$ such that $|f(t, y_1) - f(t, y_2)| \le L|y_1 - y_2|$
Say we are given a physical system, such as the mass-spring oscillator,
$$m x''(t) + k x(t) = 0$$
We can convert this into a system of first-order equations by setting $y_1 = x$ and $y_2 = x'$,
$$y_1' = y_2, \qquad y_2' = -\frac{k}{m} y_1$$
Expressed in vector form, $\mathbf{y}' = A\mathbf{y}$,
where $\mathbf{y} = (y_1, y_2)^\top$ and $A = \begin{pmatrix} 0 & 1 \\ -k/m & 0 \end{pmatrix}$
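In code, the reduction to first order just stacks position and velocity into one state vector; a minimal sketch (the values of m and k are arbitrary):

```python
import numpy as np

m, k = 1.0, 4.0  # illustrative mass and spring constant

def f(t, y):
    """Right-hand side of y' = f(t, y) for the mass-spring oscillator,
    with y[0] = position x and y[1] = velocity x'."""
    return np.array([y[1], -(k / m) * y[0]])
```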
Most ODEs don’t have closed-form solutions, so numerical methods are very important here
Our general idea is to create a sequence of approximations $u_n \approx y(t_n)$ at evenly spaced time steps $t_n = t_0 + nh$, but there are multiple techniques to do this
Forward Euler uses the forward finite difference formula for the derivative,
$$u_{n+1} = u_n + h f(t_n, u_n)$$
This is as simple as it gets, but requires a “small enough” step size
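A minimal Forward Euler sketch, reusing the oscillator right-hand side f defined above (the step size and horizon are arbitrary):

```python
import numpy as np

def forward_euler(f, t0, y0, h, n_steps):
    """Explicit Euler: u_{n+1} = u_n + h * f(t_n, u_n)."""
    t, u = t0, np.asarray(y0, dtype=float)
    trajectory = [u]
    for _ in range(n_steps):
        u = u + h * f(t, u)
        t += h
        trajectory.append(u)
    return np.array(trajectory)

# Oscillator from above, started at x = 1, x' = 0
traj = forward_euler(f, 0.0, [1.0, 0.0], h=0.01, n_steps=1000)
```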
Backward Euler uses the backward finite difference formula,
$$u_{n+1} = u_n + h f(t_{n+1}, u_{n+1})$$
This cannot be solved explicitly because we have a potentially nonlinear equation in terms of $u_{n+1}$
Instead, we must solve for $u_{n+1}$ using a root-finding method, i.e. find the root of
$$g(u) = u - u_n - h f(t_{n+1}, u)$$
This method is more expensive per step but stable for any step size
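One possible sketch of Backward Euler, leaning on scipy.optimize.fsolve as the root-finder (Newton's method or bisection would work too):

```python
import numpy as np
from scipy.optimize import fsolve

def backward_euler(f, t0, y0, h, n_steps):
    """Implicit Euler: solve u_{n+1} = u_n + h * f(t_{n+1}, u_{n+1}) at each step."""
    t, u = t0, np.asarray(y0, dtype=float)
    trajectory = [u]
    for _ in range(n_steps):
        t_next = t + h
        # Root of g(v) = v - u_n - h f(t_{n+1}, v); the explicit step is a good initial guess
        g = lambda v: v - u - h * f(t_next, v)
        u = fsolve(g, u + h * f(t, u))
        t = t_next
        trajectory.append(u)
    return np.array(trajectory)
```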
Crank-Nicolson uses the trapezoidal rule for integration,
$$u_{n+1} = u_n + \frac{h}{2}\left[f(t_n, u_n) + f(t_{n+1}, u_{n+1})\right]$$
Again, this is an implicit equation that must be solved with a root-finding method
This method is unconditionally stable and ends up with an error of $O(h^2)$!
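A matching Crank-Nicolson sketch, with the same fsolve-based implicit solve:

```python
import numpy as np
from scipy.optimize import fsolve

def crank_nicolson(f, t0, y0, h, n_steps):
    """Trapezoidal scheme: u_{n+1} = u_n + (h/2) * (f(t_n, u_n) + f(t_{n+1}, u_{n+1}))."""
    t, u = t0, np.asarray(y0, dtype=float)
    trajectory = [u]
    for _ in range(n_steps):
        t_next = t + h
        fu = f(t, u)  # the explicit half of the trapezoidal average
        g = lambda v: v - u - (h / 2) * (fu + f(t_next, v))
        u = fsolve(g, u + h * fu)
        t = t_next
        trajectory.append(u)
    return np.array(trajectory)
```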
Theory
How do we actually analyze the errors? Let’s be more precise
Definition: A scheme is convergent if $\max_{0 \le n \le N} |e_n| \to 0$ as $h \to 0$
A scheme converges with order $q$ if $\max_{0 \le n \le N} |e_n| = O(h^q)$ as $h \to 0$
$e_n = y(t_n) - u_n$ is the error between the exact value and the approximated value
In practice, we split our error,
$$e_{n+1} = \left(y(t_{n+1}) - u_{n+1}^*\right) + \left(u_{n+1}^* - u_{n+1}\right)$$
where $u_{n+1}^*$ is the approximation made when $u_n = y(t_n)$ is exact (the ideal estimate)
$y(t_{n+1}) - u_{n+1}^*$ measures local consistency
$u_{n+1}^* - u_{n+1}$ measures global consistency (how errors from earlier steps propagate)
Definition: A scheme is consistent if $\tau(h) \to 0$ as $h \to 0$, where $\tau(h) = \max_n |\tau_n(h)|$ is called the global truncation error (the maximum of the local truncation errors $\tau_{n+1}(h) = \frac{y(t_{n+1}) - u_{n+1}^*}{h}$)
Let’s put this into practice
Theorem: If the solution $y$ of the Cauchy problem is in $C^2$, then Forward Euler is convergent with order 1
Proof sketch: Taylor expansion gives $y(t_{n+1}) = y(t_n) + h y'(t_n) + \frac{h^2}{2} y''(\xi_n)$ for some $\xi_n \in (t_n, t_{n+1})$
$y(t_{n+1}) - u_{n+1}^* = \frac{h^2}{2} y''(\xi_n)$, where $u_{n+1}^* = y(t_n) + h f(t_n, y(t_n))$
So $|y(t_{n+1}) - u_{n+1}^*| \le \frac{h^2}{2} M$ with $M = \max_t |y''(t)|$
$|u_{n+1}^* - u_{n+1}| \le (1 + hL)|e_n|$, using the Lipschitz continuity condition
We combine the two bounds, $|e_{n+1}| \le (1 + hL)|e_n| + \frac{h^2}{2} M$, and unrolling the recursion gives $|e_n| \le \frac{hM}{2L}\left(e^{L(t_n - t_0)} - 1\right) = O(h)$
If we run through similar steps with the other schemes, we find Backward Euler is also order 1 and Crank-Nicolson is order 2
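We can check these orders empirically on a problem with a known solution, reusing the solvers sketched above (the test problem $y' = -y$, $y(0) = 1$ is an arbitrary choice):

```python
import numpy as np

# Test problem y' = -y, y(0) = 1, with exact solution y(t) = exp(-t)
rhs = lambda t, y: -y

for h in [0.1, 0.05, 0.025]:
    n = int(round(1.0 / h))
    err_fe = abs(forward_euler(rhs, 0.0, [1.0], h, n)[-1, 0] - np.exp(-1.0))
    err_cn = abs(crank_nicolson(rhs, 0.0, [1.0], h, n)[-1, 0] - np.exp(-1.0))
    print(f"h = {h}: Forward Euler error = {err_fe:.2e}, Crank-Nicolson error = {err_cn:.2e}")
# Halving h roughly halves the Forward Euler error (order 1)
# and quarters the Crank-Nicolson error (order 2)
```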
We can go deeper by analyzing stability, which quantifies how a scheme responds to perturbations
Definition: A scheme is zero-stable if there exist $h_0, C > 0$ such that if $|\delta_n| \le \varepsilon$ for all $n$ then $\max_n |z_n - u_n| \le C\varepsilon$ for all $0 < h \le h_0$, where $z_0 = y_0 + \delta_0$, $z_{n+1} = z_n + h\left[f(t_n, z_n) + \delta_{n+1}\right]$, and $z_n$ is the estimate given a perturbation $\delta_n$ to each step
Theorem: A consistent scheme is convergent if and only if it is zero-stable
Definition: A scheme is absolutely stable if, for the simple system $y' = \lambda y$, $y(0) = 1$ with $\operatorname{Re}(\lambda) < 0$, the numerical solution satisfies $u_n \to 0$ as $n \to \infty$
For Forward Euler, $u_{n+1} = u_n + h\lambda u_n = (1 + h\lambda)u_n$, so $u_n = (1 + h\lambda)^n$
So it is stable if $|1 + h\lambda| < 1$, or $h < \frac{2}{|\lambda|}$ for real $\lambda < 0$, which we call conditional absolute stability
Backward Euler leads to the equation $u_n = \left(\frac{1}{1 - h\lambda}\right)^n$, and $\left|\frac{1}{1 - h\lambda}\right| < 1$ holds for every $h > 0$, which is unconditional
Crank-Nicolson leads to the equation $u_n = \left(\frac{1 + h\lambda/2}{1 - h\lambda/2}\right)^n$, and $\left|\frac{1 + h\lambda/2}{1 - h\lambda/2}\right| < 1$ is also unconditionally true
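A quick numerical illustration of the difference on $y' = \lambda y$ with $\lambda = -10$, where the Forward Euler threshold is $h < 2/|\lambda| = 0.2$:

```python
lam = -10.0  # illustrative decay rate
for h in [0.05, 0.3]:  # below and above the threshold 2/|lam| = 0.2
    fe = (1 + h * lam) ** 50           # Forward Euler amplification after 50 steps
    be = (1 / (1 - h * lam)) ** 50     # Backward Euler amplification after 50 steps
    print(f"h = {h}: |Forward Euler| = {abs(fe):.2e}, |Backward Euler| = {abs(be):.2e}")
# h = 0.05: both decay toward 0
# h = 0.3: Forward Euler grows like 2^50 while Backward Euler still decays
```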
This kind of analysis can be used for more complex equations too