Moore-Penrose continuation

CL_MATCONT implements a continuation method that differs slightly from pseudo-arclength continuation.

\begin{figure}
\begin{center}
\input{moore1.pstex_t}
\end{center}
\caption{Moore-Penrose continuation}
\end{figure}

\textbf{Definition 1.} Let $A$ be an $N\times (N+1)$ matrix with maximal rank. Then the Moore-Penrose inverse of $A$ is defined by $A^+ = A^T(AA^T)^{-1}$.
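Since the definition is purely algebraic, it is easy to check numerically. The following is a minimal Python sketch (an illustration added for this text; CL_MATCONT itself is MATLAB code) that computes $A^+$ for a random full-rank $N\times(N+1)$ matrix and verifies $AA^+ = I_N$:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 4
A = rng.standard_normal((N, N + 1))    # full rank with probability 1

# Moore-Penrose inverse per Definition 1: A^+ = A^T (A A^T)^{-1}
A_plus = A.T @ np.linalg.inv(A @ A.T)

assert np.allclose(A @ A_plus, np.eye(N))        # A A^+ = I_N
assert np.allclose(A_plus, np.linalg.pinv(A))    # agrees with numpy's pinv
\end{verbatim}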

Let $A$ be an $N\times (N+1)$ matrix with maximal rank. Consider the following linear system with $x,v \in \mathbf{R}^{N+1}$, $b \in \mathbf{R}^N$:
\begin{align}
Ax &= b \tag{8}\\
v^Tx &= 0 \tag{9}
\end{align}

where $x$ is a point on the curve and $v$ its tangent vector with respect to $A$, i.e. $Av=0$. Since $AA^+b = b$ and $v^TA^+b = \langle Av,(AA^T)^{-1}b\rangle = 0$, a solution of this system is
\begin{equation}
x = A^+b. \tag{10}
\end{equation}
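This claim can be verified numerically as well; in the sketch below (again Python, purely illustrative) the null vector $v$ of $A$ is taken from the singular value decomposition:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N = 4
A = rng.standard_normal((N, N + 1))
b = rng.standard_normal(N)

A_plus = A.T @ np.linalg.inv(A @ A.T)
x = A_plus @ b

# The last right singular vector spans the null space of A.
v = np.linalg.svd(A)[2][-1]

assert np.allclose(A @ x, b)      # equation (8)
assert np.isclose(v @ x, 0.0)     # equation (9)
\end{verbatim}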

Suppose we have a predicted point $X^0$ using (1). We want to find the point $x$ on the curve which is nearest to $X^0$, i.e. we are trying to solve the optimization problem:
\begin{equation}
\min_{x}\{\, \|x - X^0\| \mid F(x)=0 \,\} \tag{11}
\end{equation}
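To see why this leads to the system below, introduce a Lagrange multiplier $\lambda \in \mathbf{R}^N$ for the constraint $F(x)=0$ (a standard argument, made explicit here). Stationarity of $\frac{1}{2}\|x-X^0\|^2 - \lambda^TF(x)$ requires
\begin{displaymath}
x - X^0 = F_x(x)^T\lambda, \qquad \mbox{so} \qquad w^T(x-X^0) = (F_x(x)w)^T\lambda = 0
\end{displaymath}
for every $w$ with $F_x(x)w = 0$, i.e. the residual $x - X^0$ must be perpendicular to the tangent vector at $x$.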

So, the system we need to solve is:
\begin{align}
F(x) &= 0 \tag{12}\\
w^T(x - X^0) &= 0 \tag{13}
\end{align}

where $w$ is the tangent vector at point $x$. In Newton's method this system is solved using a linearization about $X^0$. Taylor expansion about $X^0$ gives:
\begin{align}
F(x) &= F(X^0) + F_x(X^0)(x - X^0) + \mathcal{O}(\|x - X^0\|^2) \tag{14}\\
w^T(x - X^0) &= v^T(x - X^0) + \mathcal{O}(\|x - X^0\|^2)\,. \tag{15}
\end{align}

Discarding the higher-order terms and comparing with (8) and (10) (with $A = F_x(X^0)$), the solution of the linearized system is:
\begin{equation}
x = X^0 - F_x^+(X^0)F(X^0)\,. \tag{16}
\end{equation}
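On a concrete problem the step (16) is easy to try out. The sketch below (Python, with a hypothetical test curve chosen for illustration: the unit circle $F(x) = x_1^2 + x_2^2 - 1$, so $N=1$) repeats the step a few times starting from a predicted point:

\begin{verbatim}
import numpy as np

def F(x):                     # unit circle: F maps R^2 to R^1
    return np.array([x[0]**2 + x[1]**2 - 1.0])

def Fx(x):                    # 1 x 2 Jacobian
    return np.array([[2.0 * x[0], 2.0 * x[1]]])

X = np.array([1.05, 0.10])    # predicted point X^0, slightly off the curve
for _ in range(5):
    J = Fx(X)
    J_plus = J.T @ np.linalg.inv(J @ J.T)   # Moore-Penrose inverse of F_x
    X = X - J_plus @ F(X)                   # step (16)

print(X, F(X))   # F(X) is ~0: X has converged to a point on the circle
\end{verbatim}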

However, the null vector of $F_x(X^0)$ is not known, so we approximate it by $V^0 = v_i$, the tangent vector at the previously computed point $x_i$. Geometrically this means that we solve $F(x)=0$ in a hyperplane perpendicular to the previous tangent vector; this is illustrated in Figure 3. In other words, the extra function $g(x)$ in (2) becomes:
\begin{equation}
g_k(x) = \langle x - X^{k}, V^k \rangle, \tag{17}
\end{equation}

where $F_x(X^{k-1})V^k=0$ for $k=1,2,\ldots$. Thus the Newton iteration is:
\begin{align}
X^{k+1} &= X^k - H_x^{-1}(X^k,V^k)\,H(X^k,V^k) \tag{18}\\
V^{k+1} &= V^k - H_x^{-1}(X^k,V^k)\,R(X^k,V^k) \tag{19}
\end{align}


where
\begin{equation}
H(X,V) = \left( \begin{array}{c} F(X)\\ 0 \end{array}\right),\qquad
H_x(X,V) = \left( \begin{array}{c} F_x(X)\\ V^T \end{array}\right) \tag{20}
\end{equation}


and
\begin{equation}
R(X,V) = \left( \begin{array}{c} F_x(X)V \\ 0 \end{array}\right)\,. \tag{21}
\end{equation}
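Putting (18)--(21) together gives the complete corrector loop. The following Python sketch (again on the hypothetical unit-circle problem, not the CL_MATCONT implementation) performs the iteration and normalizes the final tangent vector:

\begin{verbatim}
import numpy as np

def F(x):
    return np.array([x[0]**2 + x[1]**2 - 1.0])

def Fx(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]]])

X = np.array([1.05, 0.10])   # predictor X^0
V = np.array([0.0, 1.0])     # approximate tangent V^0 (previous tangent)

for _ in range(5):
    J = Fx(X)
    Hx = np.vstack([J, V[None, :]])       # bordered square matrix, see (20)
    H = np.concatenate([F(X), [0.0]])     # right-hand side of (18), see (20)
    R = np.concatenate([J @ V, [0.0]])    # right-hand side of (19), see (21)
    X = X - np.linalg.solve(Hx, H)        # iteration (18)
    V = V - np.linalg.solve(Hx, R)        # iteration (19)

V = V / np.linalg.norm(V)    # normalize the converged tangent vector
print(F(X), Fx(X) @ V)       # both ~0: X on the curve, V tangent at X
\end{verbatim}

Note that (18) and (19) share the same bordered matrix $H_x(X^k,V^k)$, so in practice a single factorization per iteration serves both linear solves.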

One can prove that, under the same conditions as for pseudo-arclength continuation, the Newton iterations (18) and (19) converge to a point $x_{i+1}$ on the curve and the corresponding tangent vector $v_{i+1}$, respectively. In pseudo-arclength continuation a tangent vector had to be computed whenever a new point was found; here the tangent vectors $V^k$ are already computed at each iterate in (19), so only a normalization of the final tangent vector is needed.