What's the goal?
This note describes methods used to numerically evaluate a basis of the solution space
of $n^{\text{th}}$ order scalar equations of the form
\begin{equation}
q_0(t)y(t) + \sum_{i=1}^nq_i(t) y^{(i)}(t) = 0
\label{riccati:eq1}
\end{equation}
using an adaptive algorithm known as the Levin method.
For a description of the algorithm for small values of $n$, see the algorithm descriptions
in this
paper.
Please ensure a sufficient understanding of the material covered there before proceeding.
Discretizing the Riccati Equation
The $n^{\text{th}}$ derivative of the composition $f\circ g$ is given by Faà di Bruno's formula:
\begin{equation}
\frac{d^n}{dx^n}f(g(x)) = \sum_S A_{m_1, \ldots, m_n} f^{(m_1 + \cdots + m_n)}(g(x)) \prod_{j=1}^n(g^{(j)}(x))^{m_j},
\label{chain n}
\end{equation}
where the sum is taken over the collection $S$ of all sequences $(m_1, \ldots, m_n)$ such that
\begin{equation}
m_1 + 2m_2 + \ldots + nm_n = \sum_{i=1}^n im_i = n
\label{summation1}
\end{equation}
and
\begin{equation}
A_{m_1, \ldots, m_n} = \frac{n!}{m_1!(1!)^{m_1}\cdot m_2!(2!)^{m_2} \cdots m_n!(n!)^{m_n}}
\label{coefficients}
\end{equation}
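As a quick sanity check (a Python sketch, not part of the Fortran implementation), the collection $S$ and the coefficients $A_{m_1, \ldots, m_n}$ can be enumerated directly; summing $A$ over $S$ recovers the Bell numbers, since $A_{m_1, \ldots, m_n}$ counts the set partitions of an $n$-element set with $m_j$ blocks of size $j$:

```python
from math import factorial

def partitions_S(n):
    """Enumerate all sequences (m_1, ..., m_n) with sum_i i*m_i = n."""
    def rec(i, remaining, prefix):
        if i > n:
            if remaining == 0:
                yield tuple(prefix)
            return
        for m in range(remaining // i + 1):
            yield from rec(i + 1, remaining - i * m, prefix + [m])
    yield from rec(1, n, [])

def coefficient_A(ms):
    """Faa di Bruno coefficient A_{m_1,...,m_n}."""
    n = sum(i * m for i, m in enumerate(ms, start=1))
    denom = 1
    for j, m in enumerate(ms, start=1):
        denom *= factorial(m) * factorial(j) ** m
    return factorial(n) // denom

# Summing A over S gives the Bell number B_n (number of set partitions).
bell = [sum(coefficient_A(ms) for ms in partitions_S(n)) for n in range(1, 6)]
print(bell)  # [1, 2, 5, 15, 52]
```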
Substituting $f(t) = \exp(t)$ and $g(t) = \int r(t)dt$, we obtain
\begin{equation}
\frac{d^n}{dx^n}\exp\left(\int r(t)dt \right) = \exp\left(\int r(t)dt \right) \sum_S A_{m_1, \ldots, m_n} \prod_{j=1}^n(r^{(j-1)}(x))^{m_j}
\label{chain n2}
\end{equation}
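This identity is easy to spot-check numerically (a Python sketch, with $r(x) = x$ chosen purely for illustration): for $n = 2$ it predicts $\frac{d^2}{dx^2}e^{x^2/2} = (1 + x^2)e^{x^2/2}$, which a central finite difference confirms:

```python
from math import exp

# For r(x) = x we have g(x) = x**2/2, and for n = 2 the formula predicts
# d^2/dx^2 exp(x**2/2) = exp(x**2/2) * (r**2 + r') = exp(x**2/2) * (x**2 + 1),
# using the two sequences (m_1, m_2) = (2, 0) and (0, 1), both with A = 1.
def f(x):
    return exp(x * x / 2.0)

x, h = 0.5, 1e-4
fd_second = (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2  # central difference
predicted = (1.0 + x * x) * f(x)
print(abs(fd_second - predicted))  # O(h**2), well below 1e-5
```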
Therefore, the Riccati equation associated with the $n^{\text{th}}$ order equation
\begin{equation}
y^{(n)}(t) + k^nq(t)y(t) = 0
\label{helmholtz}
\end{equation}
is given by
\begin{equation}
\sum_S A_{m_1, \ldots, m_n} \prod_{j=1}^n(r^{(j-1)})^{m_j} + k^nq = 0
\label{riccati1}
\end{equation}
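For small $n$ the sum can be written out explicitly. The following Python sketch (an illustration, not the Fortran code) enumerates the terms and confirms that $n = 2$ recovers the classical Riccati equation $r' + r^2 + k^2q = 0$, while $n = 3$ gives $r'' + 3rr' + r^3 + k^3q = 0$:

```python
from math import factorial

def riccati_terms(n):
    """List the terms of the n-th order Riccati sum as (A, exponents) pairs,
    where exponents[j-1] is the power of r^(j-1) in the product."""
    def sequences(i, remaining, prefix):
        if i > n:
            if remaining == 0:
                yield tuple(prefix)
            return
        for m in range(remaining // i + 1):
            yield from sequences(i + 1, remaining - i * m, prefix + [m])
    terms = []
    for ms in sequences(1, n, []):
        denom = 1
        for j, m in enumerate(ms, start=1):
            denom *= factorial(m) * factorial(j) ** m
        terms.append((factorial(n) // denom, ms))
    return terms

# n = 2: the terms are r' (exponents (0, 1)) and r**2 (exponents (2, 0)),
# i.e. the classical Riccati equation r' + r**2 + k**2*q = 0.
print(riccati_terms(2))  # [(1, (0, 1)), (1, (2, 0))]
```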
To deploy Newton's method, we perturb the current iterate by a function $\delta$ of small magnitude,
\begin{equation}
r_{n+1}(t) = r_n(t) + \delta(t)
\label{perturb}
\end{equation}
and we wish to evaluate (\ref{chain n}) in the case where $g(t) = \int (r(t) + \delta(t))dt$. A direct substitution of (\ref{perturb}) into (\ref{chain n}) yields
\begin{equation}
\frac{d^n}{dx^n}\exp\left(\int (r(t) + \delta(t))dt \right) = \exp\left(\int (r(t) + \delta(t))dt \right) \sum_S A_{m_1, \ldots, m_n} \prod_{j=1}^n(r^{(j-1)} + \delta^{(j-1)})^{m_j}
\label{chain n3}
\end{equation}
and so the perturbed Riccati equation becomes
\begin{equation}
\sum_S A_{m_1, \ldots, m_n} \prod_{j=1}^n(r^{(j-1)} + \delta^{(j-1)})^{m_j} + k^nq = 0
\label{riccati2}
\end{equation}
The binomial theorem gives
\begin{equation}
(r^{(j-1)} + \delta^{(j-1)})^{m_j} = \sum_{k=0}^{m_j} \binom{m_j}{k}(r^{(j-1)})^{m_j - k}(\delta^{(j-1)})^k.
\label{binomial1}
\end{equation}
Since $\delta$ is of small magnitude, we assume that all terms quadratic or higher in $\delta$ and its derivatives are negligible, i.e. $(\delta^{(i_1)})^{j_1}(\delta^{(i_2)})^{j_2} \approx 0$ for all $i_1, i_2 \geq 0$ and $j_1, j_2 \geq 1$. Thus we retain only the terms $k = 0, 1$ in (\ref{binomial1}), yielding
\begin{equation}
(r^{(j-1)} + \delta^{(j-1)})^{m_j} \approx (r^{(j-1)})^{m_j} + m_j(r^{(j-1)})^{m_j - 1}\delta^{(j-1)}
\label{binomial2}
\end{equation}
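The truncation is easy to verify numerically (a Python sketch): halving the perturbation should quarter the error of the first-order approximation, confirming that the discarded terms are $O(\delta^2)$:

```python
# Check the first-order truncation (a + d)**m ≈ a**m + m*a**(m-1)*d:
# the discarded terms are O(d**2), so halving d should quarter the error.
a, m = 1.7, 4
errs = []
for d in (1e-2, 5e-3):
    exact = (a + d) ** m
    linear = a ** m + m * a ** (m - 1) * d
    errs.append(abs(exact - linear))
print(errs[0] / errs[1])  # close to 4, confirming O(d**2) error
```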
Further, under the assumption $b_ib_j \approx 0$ for all $i, j$, we have
\begin{equation}
\prod_{i=1}^n(a_i + b_i) \approx \prod_{i=1}^n a_i + \sum_j b_j\left(\prod_{i=1, i\neq j}^n a_i \right)
\label{sum of prod}
\end{equation}
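This product linearization can be spot-checked in the same way (a Python sketch with arbitrary small $b_i$):

```python
# Check prod(a_i + b_i) ≈ prod(a_i) + sum_j b_j * prod_{i != j} a_i
# for small b_i; the neglected terms are quadratic in b.
a = [1.5, 2.0, 0.7]
b = [1e-3, -2e-3, 5e-4]

exact = 1.0
for ai, bi in zip(a, b):
    exact *= ai + bi

prod_a = 1.0
for ai in a:
    prod_a *= ai

linear = prod_a
for j in range(len(a)):
    term = b[j]
    for i in range(len(a)):
        if i != j:
            term *= a[i]
    linear += term

print(abs(exact - linear))  # quadratic in b, here ~2e-6
```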
Therefore, utilizing (\ref{binomial2}) and (\ref{sum of prod}), we obtain
\begin{equation}
\prod_{j=1}^n(r^{(j-1)} + \delta^{(j-1)})^{m_j} \approx \prod_{j=1}^n (r^{(j-1)})^{m_j} + \sum_{j=1}^nm_j(r^{(j-1)})^{m_j - 1}\delta^{(j-1)}\left[\prod_{i=1, i\neq j}^n(r^{(i-1)})^{m_i} \right]
\end{equation}
Plugging this into the linearized Riccati equation (\ref{riccati2}), we obtain
\begin{equation}
\sum_S A_{m_1, \ldots, m_n} \left(\prod_{j=1}^n (r^{(j-1)})^{m_j} + \sum_{j=1}^nm_j(r^{(j-1)})^{m_j - 1}\delta^{(j-1)}\left[\prod_{i=1, i\neq j}^n(r^{(i-1)})^{m_i} \right]\right) + k^nq = 0
\end{equation}
\begin{equation}
\implies \boxed{\sum_{j=1}^n\left(\sum_Sm_j A_{m_1, \ldots, m_n}(r^{(j-1)})^{m_j - 1}\left[\prod_{i=1, i\neq j}^n(r^{(i-1)})^{m_i}\right]\right)\delta^{(j-1)} = - \sum_S A_{m_1, \ldots, m_n}\prod_{j=1}^n(r^{(j-1)})^{m_j} - k^nq}
\label{final_riccati}
\end{equation}
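To make the Newton iteration concrete, consider a toy Python example (an illustration, not the Fortran solver): take $n = 2$ with constant $q$ and $k^2q = -1$, so the Riccati equation $r' + r^2 + k^2q = 0$ has the constant root $r = 1$. For constant iterates $\delta' = 0$, and the boxed equation collapses to $2r_n\delta = -(r_n^2 - 1)$:

```python
# Toy illustration: n = 2, constant coefficients, k**2 * q = -1.
# The Riccati equation r' + r**2 + k**2*q = 0 has the constant root r = 1,
# and for constant iterates the linearized equation reduces to
# 2*r*delta = -(r**2 - 1).
r = 2.0                      # initial guess
for _ in range(6):
    residual = r * r - 1.0   # Riccati residual at the current iterate
    delta = -residual / (2.0 * r)
    r += delta
print(r)  # converges quadratically to 1.0
```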
Fortran Implementation
An implementation of the algorithm is provided under the \texttt{scalar\_levin\_n} subroutine
here.
The key piece of this algorithm is the discretization of (\ref{final_riccati}), given the values of the derivatives of $r$ at the discretization nodes.
The following block constructs the right hand side of (\ref{final_riccati}):
rhs_sum = 0 ! Reset the summation accumulator every Newton iteration
! Exterior sum over the collection S of exponent sequences
do i = 1, m
  prod = 1 ! Reset the product accumulator
  ! Compute the product nested in the sum on the RHS
  do j = 1, n
    if (j == 1) then
      prod = r ** sum_set(j, i)                    ! j = 1 uses the values of r itself
    else
      prod = prod * (rps(j-1, :)) ** sum_set(j, i) ! higher derivatives of r
    end if
  end do
  rhs_sum = rhs_sum + sum_coefs(i) * prod
end do
rhs = -rhs_sum - qs0
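For illustration, the same loop can be sketched in Python (the names mirror the Fortran: \texttt{sum\_set} holds the exponent sequences as columns, \texttt{sum\_coefs} the coefficients $A$, \texttt{r} and \texttt{rps} the nodewise values of $r$ and its derivatives, and \texttt{qs0} is assumed to store $k^nq$). For $n = 2$ the result must agree with $-(r^2 + r') - k^2q$:

```python
import numpy as np

# Python sketch of the RHS loop above, checked against the n = 2 case.
n = 2
sum_set = np.array([[2, 0], [0, 1]]).T   # columns are the sequences (m_1, m_2)
sum_coefs = np.array([1.0, 1.0])         # coefficients A for each sequence
r = np.array([0.5, 1.0, 2.0])            # r at the nodes
rps = np.array([[0.1, 0.2, 0.3]])        # rps[j-1] = j-th derivative of r
k, q = 3.0, np.array([1.0, 1.0, 1.0])

rhs_sum = np.zeros_like(r)
for i in range(sum_set.shape[1]):        # exterior sum over S
    prod = r ** sum_set[0, i]            # j = 1 term uses r itself
    for j in range(1, n):
        prod = prod * rps[j - 1] ** sum_set[j, i]
    rhs_sum += sum_coefs[i] * prod
rhs = -rhs_sum - k ** n * q

# For n = 2 this must equal -(r**2 + r') - k**2 * q.
print(np.allclose(rhs, -(r ** 2 + rps[0]) - k ** 2 * q))  # True
```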
Similarly, the following block constructs the more intricate left hand side of (\ref{final_riccati}):
amatr = 0
sum1 = 0
! Contribute the j = n term, which carries the differentiation matrix D^{(n-1)}
amatr = dd ** (n - 1) * adiffs(:, :, n-1)
! Contribute the j = 1 term, which carries no differentiation matrix
do j = 1, m
  prod = 1 ! Reset the nested product
  do k = 2, n
    prod = prod * (rps(k-1, :) ** sum_set(k, j))
  end do
  sum1 = sum1 + sum_set(1, j) * sum_coefs(j) * prod * (r ** (sum_set(1, j) - 1))
end do
do l = 1, kcheb
  amatr(l, l) = amatr(l, l) + sum1(l)
end do
! Contribute all remaining terms j = 2, ..., n-1
do i = 2, n - 1 ! outer sum over the derivative order of delta
  sum1 = 0
  sum2 = 0 ! Reset the inner summation
  do j = 1, m ! second sum over the collection S
    prod = 1 ! Reset the nested product
    do k = 1, n ! nested product over the derivatives of r
      if (k == i) then
        prod = prod ! skip the k = i factor; it is pulled out of the product
      else
        if (k == 1) then
          prod = prod * r ** sum_set(k, j)
        else
          prod = prod * (rps(k-1, :)) ** sum_set(k, j)
        end if
      end if
    end do
    sum1 = sum1 + sum_set(i, j) * sum_coefs(j) * (rps(i-1,:)) ** (sum_set(i, j)-1) * prod
  end do
  do l = 1, kcheb
    amatr(l, :) = amatr(l, :) + sum1(l) * dd**(i-1) * adiffs(l, :, i-1)
  end do
end do
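As a toy check of the assembly (a Python illustration, not the production code), consider $n = 2$, where (\ref{final_riccati}) reduces to $\delta' + 2r\delta = \texttt{rhs}$ and the discretized operator is $D + \mathrm{diag}(2r)$; here a crude finite-difference matrix stands in for the scaled spectral differentiation matrices \texttt{dd} and \texttt{adiffs}:

```python
import numpy as np

# n = 2 toy assembly: the linearized operator is D + diag(2*r), where D is
# a differentiation matrix for the chosen nodes. Here D is a crude first-order
# finite-difference matrix on a uniform grid with spacing h = 1.
h = 1.0
D = np.array([[-1.0, 1.0, 0.0],
              [0.0, -1.0, 1.0],
              [0.0, -1.0, 1.0]]) / h
r = np.ones(3)                   # constant r = 1 at the nodes
amatr = D + np.diag(2.0 * r)     # delta' term plus the diagonal term

# With constant r and constant rhs, the solution delta is constant:
# D @ delta = 0, so delta = rhs / (2*r).
rhs = np.full(3, 2.0)
delta = np.linalg.solve(amatr, rhs)
print(np.allclose(delta, 1.0))  # True
```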
Having discretized (\ref{final_riccati}), we have obtained a solvable linear system of the form $A[\delta] = [\texttt{rhs}]$. Solving this system numerically determines the values of the perturbation function $\delta$ at the discretization nodes.