

5.1 A simple example

The method of separation of variables shall first be demonstrated for a simple example. The method as described here will work as long as the spatial region is finite and has homogeneous boundary conditions.


5.1.1 The physical problem

The problem is to find the unsteady pressure field $u(x,t)$ in a pipe with one end closed and the other open to the atmosphere:

Figure 5.1: Acoustics in a pipe.


5.1.2 The mathematical problem

Figure 5.2: Dependent variables.
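In terms of these variables, the mathematical problem (collecting the equations used in the remainder of this section) is the wave equation

\begin{displaymath}
u_{tt} = a^2 u_{xx} \qquad (0 < x < \ell)
\end{displaymath}

with a homogeneous boundary condition at each end, $u_x(0,t)=0$ at the closed end and $u(\ell,t)=0$ at the open end, and with initial conditions $u(x,0)=f(x)$ and $u_t(x,0)=g(x)$, where $f$ and $g$ are given functions.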


5.1.3 Outline of the procedure

We will try to find a solution of this problem in the form

\begin{displaymath}
u = \sum_n u_n(t) X_n(x)
\end{displaymath}

Here the $X_n$ will be cleverly chosen functions called ``eigenfunctions.'' The $u_n$ are coefficients, depending on time, that are found from plugging the expression for $u$ into the partial differential equation and the initial conditions.

There are two big reasons why the $X_n$ must be the eigenfunctions, rather than the $u_n$: it is the $x$-range, not the $t$-range, that is finite, and it is the boundary conditions in $x$, not the initial conditions in $t$, that are homogeneous.

(If the spatial range is infinite or semi-infinite, you may be able to use a Fourier transform. Alternatively, you may be able to use a Laplace transform in time.)


5.1.4 Step 1: Find the eigenfunctions

The first step is to find the eigenfunctions $X_n$.

The eigenfunctions are found from requiring that each individual term of the form $u_n(t) X_n(x)$ is capable of satisfying the homogeneous partial differential equation and the homogeneous boundary conditions.

In this particular example the partial differential equation is homogeneous. But even if it were not, i.e., if the partial differential equation were something like

\begin{displaymath}
u_{tt} = a^2 u_{xx} + q
\end{displaymath}

with $q$ a given function of $x$ and $t$, then still in this step you would use the homogeneous equation

\begin{displaymath}
u_{tt} = a^2 u_{xx}
\end{displaymath}

By convention, $u_n(t)$ is usually written as $T(t)$ and $X_n(x)$ as $X(x)$ in this step. To see when $X(x)T(t)$ satisfies the homogeneous partial differential equation, plug it in:

\begin{displaymath}[X(x)T(t)]_{tt} = a^2 [X(x)T(t)]_{xx}
\quad\quad\Rightarrow\quad\quad
X(x)T''(t) = a^2 X''(x)T(t)
\end{displaymath}

where primes indicate derivatives of the function with respect to its argument.

The trick is now to take the terms containing time to one side of the equation and the terms containing $x$ to the other side.

\begin{displaymath}
\frac{1}{a^2} \frac{T''(t)}{T(t)} = \frac{X''(x)}{X(x)}
\end{displaymath}

This trick is why this solution procedure is called the “method of separation of variables.”

The right hand side, $X''(x)/X(x)$, clearly does not depend on $t$; you would think, however, that it would depend on the position $x$, since both $X$ and $X''$ change when $x$ changes. But actually, $X''/X$ does not change with $x$ either: changing $x$ does nothing to $t$, so the left hand side does not change, and since the right hand side equals the left hand side, it cannot change either. So the right hand side does not depend on either $x$ or $t$; it must be a constant. By convention, we call the constant $-\lambda$:

\begin{displaymath}
\frac{T''}{a^2 T} = \frac{X''}{X} = \hbox{ constant } = -\lambda
\end{displaymath}

We also require $X$ to satisfy the same homogeneous boundary conditions as $u$. In this case, that means that at $x=0$ its $x$-derivative is zero, and that at $x=\ell$ $X$ itself is zero. So we get the following problem for $X$:

\begin{displaymath}
X'' + \lambda X = 0 \qquad X'(0) = 0 \qquad X(\ell)=0
\end{displaymath}

This is a boundary value problem involving an ordinary differential equation, not a partial differential equation.

Note that the problem for $X$ is completely homogeneous: $X(x)=0$ satisfies both the ordinary differential equation and the boundary conditions. This is similar to the eigenvalue problem for vectors, $A\vec v = \lambda \vec v$, which is certainly always true when $\vec v=0$. But for the eigenvalue problem, we are interested in nonzero vectors $\vec v$ for which $A\vec v=\lambda \vec v$. That only occurs for special values $\lambda_1,
\lambda_2, \ldots$ of $\lambda$.

Similarly, we are interested only in nonzero solutions $X(x)$ of the above ordinary differential equation and boundary conditions. Eigenvalue problems for functions such as the one above are called “Sturm-Liouville problems.” The biggest differences from matrix eigenvalue problems are that there are now infinitely many eigenvalues $\lambda_1, \lambda_2, \ldots$, and that the eigenfunctions $X_n(x)$ are functions of $x$ rather than vectors.

Fortunately, the above ordinary differential equation is simple: it is a constant coefficient one, so we write its characteristic polynomial:

\begin{displaymath}
k^2 + \lambda = 0 \quad\quad\Rightarrow\quad\quad k = \pm \sqrt{-\lambda} = \pm {\rm i} \sqrt{\lambda}
\end{displaymath}

We must now find all possible eigenvalues $\lambda$ and all corresponding eigenfunctions that satisfy the required boundary conditions. We must look at all possibilities, one at a time.

  1. Case $\lambda < 0$:

    Since the roots $k = \pm \sqrt{-\lambda}$ are real, the solution of the ordinary differential equation is

    \begin{displaymath}
X = A e^{\sqrt{-\lambda} x} + B e^{-\sqrt{-\lambda} x}
\end{displaymath}

    We try to satisfy the boundary conditions:

    \begin{displaymath}
X'(0) = 0 = A \sqrt{-\lambda} - B \sqrt{-\lambda} \quad\quad\Rightarrow\quad\quad B = A
\end{displaymath}


    \begin{displaymath}
X(\ell) = 0 = A \left( e^{\sqrt{-\lambda}\,\ell} + e^{-\sqrt{-\lambda}\,\ell} \right)
\quad\quad\Rightarrow\quad\quad A = 0
\end{displaymath}

    So $A=B=0$; there are no nontrivial solutions for $\lambda<0$.

  2. Case $\lambda = 0$:

    Since $k_1 = k_2 = 0$ we have a multiple root of the characteristic equation, and the solution is

    \begin{displaymath}
X = A e^{0x} + B x e^{0x} = A + B x
\end{displaymath}

    We try to satisfy the boundary conditions again:

    \begin{displaymath}
X'(0) = 0 = B \qquad X(\ell) = 0 = A
\end{displaymath}

    So $A=B=0$; there are again no nontrivial solutions.

  3. Case $\lambda > 0$:

    Since $k = \pm \sqrt{-\lambda} = \pm {\rm i} \sqrt{\lambda}$, the solution of the ordinary differential equation is, after cleanup,

    \begin{displaymath}
X = A \sin\left(\sqrt{\lambda} x\right) + B \cos\left(\sqrt{\lambda} x\right)
\end{displaymath}

    We try to satisfy the first boundary condition:

    \begin{displaymath}
X'(0) = 0 = A \sqrt{\lambda}
\end{displaymath}

    Since we are looking at the case $\lambda > 0$, this can only be true if $A=0$. So, we need

    \begin{displaymath}
X = B \cos\left(\sqrt{\lambda} x\right)
\end{displaymath}

    We now try to also satisfy the second boundary condition:

    \begin{displaymath}
X(\ell) = 0 = B \cos\left(\sqrt{\lambda}\,\ell\right)
\end{displaymath}

    For a nonzero solution, $B$ may not be zero, so the cosine must be zero. For positive argument, a cosine is zero at $\frac12\pi, \frac32\pi,\ldots$, so that our eigenvalues are

    \begin{displaymath}
\sqrt{\lambda_1} = \frac{\pi}{2\ell}, \quad
\sqrt{\lambda_2} = \frac{3\pi}{2\ell}, \quad
\sqrt{\lambda_3} = \frac{5\pi}{2\ell}, \quad
\ldots
\end{displaymath}

    Just as for eigenvectors, the eigenfunctions are only determined up to a constant; we must choose the one undetermined parameter $B$. Choosing each $B=1$, we get the eigenfunctions:

    \begin{displaymath}
X_1 = \cos\left(\frac{\pi x}{2\ell}\right), \quad
X_2 = \cos\left(\frac{3\pi x}{2\ell}\right), \quad
X_3 = \cos\left(\frac{5\pi x}{2\ell}\right), \quad
\ldots
\end{displaymath}

The eigenvalues and eigenfunctions have been found. If we want to evaluate them on a computer, we need a general formula for them. You can check that it is:

\begin{displaymath}
\lambda_n = \frac{(2n-1)^2 \pi^2}{4\ell^2}
\qquad X_n = \cos\left(\frac{(2n-1) \pi x}{2\ell}\right)
\qquad (n = 1, 2, 3, \ldots)
\end{displaymath}

Just try a few values of $n$ and compare with the results above. We have finished finding the eigenfunctions.
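If you do want to evaluate them on a computer, the following is a minimal Python sketch; the names and the choice $\ell=1$ are illustrative only, not part of the problem. It tabulates the general formula and numerically checks the boundary condition at $x=\ell$ and the relation $X_n'' = -\lambda_n X_n$:

\begin{verbatim}
import numpy as np

ell = 1.0  # pipe length; illustrative value

def lam(n):
    # eigenvalue lambda_n = (2n-1)^2 pi^2 / (4 ell^2)
    return (2*n - 1)**2 * np.pi**2 / (4 * ell**2)

def X(n, x):
    # eigenfunction X_n(x) = cos((2n-1) pi x / (2 ell))
    return np.cos((2*n - 1) * np.pi * x / (2 * ell))

x = np.linspace(0.0, ell, 11)
h = 1e-5  # step for a finite-difference check of X_n''
for n in (1, 2, 3):
    Xpp = (X(n, x + h) - 2*X(n, x) + X(n, x - h)) / h**2
    # X_n(ell) and the residual of X_n'' + lambda_n X_n
    # should both be (nearly) zero
    print(n, lam(n), X(n, ell), np.max(np.abs(Xpp + lam(n) * X(n, x))))
\end{verbatim}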


5.1.5 Should we solve the other equation?

If you look back to the beginning of the previous subsection, you may wonder about the function $T(t)$. It satisfied

\begin{displaymath}
\frac{T''}{a^2 T} = -\lambda
\end{displaymath}

Now that we have found the values for $\lambda$ from the $X$-problem, we could solve this ordinary differential equation too, and find functions $T_1(t),
T_2(t),\ldots$.

However, it is far more straightforward not to do so. Now that the eigenfunctions $X_n$ have been found, the general expression for the solution,

\begin{displaymath}
u = \sum_n u_n(t) X_n(x)
\end{displaymath}

can simply be plugged into the partial differential equation and its initial conditions to find the $u_n$, completing the solution.

However, most people do solve for the $T_n$ corresponding to each eigenvalue $\lambda_n$. If you want to follow the crowd, please keep in mind the following:

  1. The values of $\lambda$ can only be found from the Sturm-Liouville problem for $X$. The problem for $T$ is not a Sturm-Liouville problem and cannot produce the correct values for $\lambda$.
  2. The functions $T(t)$ do not satisfy the same initial conditions at time $t=0$ as $u$ does. That is unlike the $X_n$, which must satisfy the homogeneous boundary conditions.
  3. Finding $T$ is useless if the partial differential equation is inhomogeneous; it simply does not work, unless you add still more artificial tricks to the mix, as the book does.


5.1.6 Step 2: Solve the problem

Now that the eigenfunctions are known, the problem may be solved. To do so, everything needs to be written in terms of the eigenfunctions. And that means everything, including the partial differential equation and the initial conditions.

We first write our solution $u(x,t)$ in terms of the eigenfunctions:

\begin{displaymath}
u(x,t) = \sum_{n=1}^\infty u_n(t) X_n(x)
\end{displaymath}

The coefficients $u_n(t)$ are called the “Fourier coefficients” of $u$. The complete sum is called the “Fourier series” for $u$.

We know our eigenfunctions $X_n(x)$, but not yet our Fourier coefficients $u_n(t)$. In fact, the $u_n(t)$ are what is still missing: once we know the $u_n(t)$, we can find the solution $u$ that we want by doing the sum above, probably on a computer if we want high accuracy, or just summing the first few terms by hand if we accept some numerical error.

Next we write the complete partial differential equation, $u_{tt} =
a^2 u_{xx}$, in terms of the eigenfunctions:

\begin{displaymath}
\sum_{n=1}^\infty \ddot u_n(t) X_n(x) = a^2
\sum_{n=1}^\infty u_n(t) X_n''(x)
\end{displaymath}

This equation will always simplify; that is how the method of separation of variables works. Look up the differential equation for $X_n$ in subsection 5.1.4; it was

\begin{displaymath}
X_n''(x) = -\lambda_n X_n(x)
\end{displaymath}

Using this expression for $X_n''$, we can get rid of the $x$-derivatives in the partial differential equation to get

\begin{displaymath}
\sum_{n=1}^\infty \ddot u_n(t) X_n(x) = a^2
\sum_{n=1}^\infty \left(- \lambda_n u_n(t)\right) X_n(x)
\end{displaymath}

Now if two functions are equal, all their Fourier coefficients must be equal, so we have, for any value of $n$,

\begin{displaymath}
\ddot u_n(t) = - a^2 \lambda_n u_n(t) \qquad (\mbox{for }n=1,2,3,\ldots)
\end{displaymath}

That no longer contains $x$ at all. The partial differential equation has become a set of ordinary differential equations in $t$ only, and those are much easier to solve than the original partial differential equation. Getting rid of $x$ is really what the method of separation of variables does for us.

The above ordinary differential equations can be solved easily. For each value of $n$ it is a constant coefficient equation, so you write the characteristic equation $k^2 = -a^2 \lambda_n$. That gives $k=\pm{\rm i}\,a\sqrt{\lambda_n}$. Then the solution is

\begin{displaymath}
u_n(t) = C_{1n} e^{{\rm i} a \sqrt{\lambda_n}\, t} + C_{2n} e^{-{\rm i} a \sqrt{\lambda_n}\, t}
\end{displaymath}

or after cleaning up,

\begin{displaymath}
u_n(t) = D_{1n} \cos\left(a \sqrt{\lambda_n}\, t\right)
+ D_{2n} \sin\left(a \sqrt{\lambda_n}\, t\right)
\end{displaymath}
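Here the “cleaning up” is nothing more than Euler's formula, $e^{\pm{\rm i}\theta} = \cos\theta \pm {\rm i}\sin\theta$: writing out the two exponentials and collecting the cosine and sine terms shows that

\begin{displaymath}
D_{1n} = C_{1n} + C_{2n} \qquad D_{2n} = {\rm i}\left(C_{1n} - C_{2n}\right)
\end{displaymath}

so real values of $D_{1n}$ and $D_{2n}$ correspond to complex conjugate constants $C_{1n}$ and $C_{2n}$.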

So, we have already found our pressure a bit more precisely:

\begin{displaymath}
u(x,t)= \sum_{n=1}^\infty
\left[
D_{1n} \cos\left(a \sqrt{\lambda_n}\, t\right)
+ D_{2n} \sin\left(a \sqrt{\lambda_n}\, t\right)
\right] X_n(x)
\end{displaymath}

but we still need to figure out what the integration constants $D_{1n}$ and $D_{2n}$ are.

To do so, we also write the initial conditions $u(x,0)=f(x)$ and $u_t(x,0)=g(x)$ in terms of the eigenfunctions:

\begin{displaymath}
f(x) = \sum_{n=1}^\infty f_n X_n(x) \qquad
g(x) = \sum_{n=1}^\infty g_n X_n(x)
\end{displaymath}

Sometimes, when $f$ or $g$ is a simple function, such as the constant function 1, students do not write a Fourier series for it. But that does not work.

Using the Fourier series for $u$, $f$, and $g$ above, the two initial conditions become

\begin{displaymath}
\sum_{n=1}^\infty D_{1n} X_n(x) = \sum_{n=1}^\infty f_n X_n(x)
\end{displaymath}


\begin{displaymath}
\sum_{n=1}^\infty a \sqrt{\lambda_n}\, D_{2n} X_n(x)
= \sum_{n=1}^\infty g_n X_n(x).
\end{displaymath}

The Fourier coefficients must again be equal, so we conclude that the coefficients we are looking for are

\begin{displaymath}
D_{1n} = f_n \qquad D_{2n} = \frac{g_n}{a \sqrt{\lambda_n}}
\end{displaymath}

The Fourier series for $u$ now becomes

\begin{displaymath}
u(x,t)= \sum_{n=1}^\infty
\left[
f_n \cos\left(a \sqrt{\lambda_n}\, t\right)
+ \frac{g_n}{a \sqrt{\lambda_n}} \sin\left(a \sqrt{\lambda_n}\, t\right)
\right] X_n(x)
\end{displaymath}

where

\begin{displaymath}
\lambda_n = \frac{(2n-1)^2 \pi^2}{4\ell^2}
\qquad X_n = \cos\left(\frac{(2n-1) \pi x}{2\ell}\right)
\end{displaymath}

So, if we can find the Fourier coefficients $f_n$ and $g_n$ of functions $f(x)$ and $g(x)$, we are done.

Now $f(x)$ and $g(x)$ are, supposedly, given functions, but how do we find their Fourier coefficients? The answer is the following important formula:

\begin{displaymath}
\fbox{$ \displaystyle
f_n = \frac{\int_0^\ell f(x) X_n(x)\, {\rm d}x}{\int_0^\ell X_n(x)^2\, {\rm d}x}
$}
\end{displaymath}

This is called the “orthogonality relation.” Even if $f$ is some simple function like $f=1$, we still need to do those integrals. Only if $f=0$ can we immediately say that each Fourier coefficient $f_n$ is zero. The same holds for $g$:

\begin{displaymath}
g_n = \frac{\int_0^\ell g(x) X_n(x)\, {\rm d}x}{\int_0^\ell X_n(x)^2\, {\rm d}x}
\end{displaymath}

(These formulae work as long as the ordinary differential equation for the $X_n$ is of the form $AX_n''+BX_n=0$. What you do for more general differential equations will be covered later.)
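As a worked example, for the eigenfunctions found in step 1 the denominator in these formulae is the same for every $n$:

\begin{displaymath}
\int_0^\ell \cos^2\left(\frac{(2n-1)\pi x}{2\ell}\right) {\rm d}x = \frac{\ell}{2}
\end{displaymath}

so that $f_n = (2/\ell)\int_0^\ell f(x) X_n(x)\, {\rm d}x$. For the simple function $f=1$ mentioned above, this gives

\begin{displaymath}
f_n = \frac{2}{\ell}\int_0^\ell \cos\left(\frac{(2n-1)\pi x}{2\ell}\right) {\rm d}x
= \frac{4}{(2n-1)\pi}\sin\left(\frac{(2n-1)\pi}{2}\right)
= \frac{4\,(-1)^{n+1}}{(2n-1)\pi}
\end{displaymath}

so even for $f=1$, every single Fourier coefficient is nonzero.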

We are done! Or at least, we have done as much as we can do until someone tells us the actual functions $f(x)$ and $g(x)$. If they do, we just do the integrals above to find all the $f_n$ and $g_n$ (analytically or on a computer), and then we can sum the expression for $u(x,t)$ for any $x$ and $t$ that strike our fancy.
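As an illustration only, here is a minimal Python sketch of the whole recipe; the wave speed $a$, length $\ell$, number of terms kept, and the functions $f$ and $g$ are made-up example choices, not part of the problem statement:

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

a, ell, N = 1.0, 1.0, 50   # wave speed, pipe length, terms kept: illustrative
f = lambda x: 1.0          # example initial pressure distribution
g = lambda x: 0.0          # example initial time derivative

def lam(n):
    return (2*n - 1)**2 * np.pi**2 / (4 * ell**2)

def X(n, x):
    return np.cos((2*n - 1) * np.pi * x / (2 * ell))

def coef(h, n):
    # orthogonality relation: h_n = int h X_n dx / int X_n^2 dx
    num, _ = quad(lambda x: h(x) * X(n, x), 0.0, ell)
    den, _ = quad(lambda x: X(n, x)**2, 0.0, ell)  # equals ell/2 here
    return num / den

fn = [coef(f, n) for n in range(1, N + 1)]
gn = [coef(g, n) for n in range(1, N + 1)]

def u(x, t):
    # partial sum of the Fourier series for u(x,t)
    return sum((fn[n-1] * np.cos(a * np.sqrt(lam(n)) * t)
                + gn[n-1] / (a * np.sqrt(lam(n))) * np.sin(a * np.sqrt(lam(n)) * t))
               * X(n, x)
               for n in range(1, N + 1))

print(u(0.5, 0.0))  # should be close to f(0.5) = 1
\end{verbatim}

Note that only the coefficient integrals depend on $f$ and $g$; the eigenvalues and eigenfunctions are fixed once and for all by the boundary conditions.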

Note that we did not have to do anything with the boundary conditions $u_x(0,t)=0$ and $u(\ell,t)=0$. Since every eigenfunction $X_n$ satisfies them, the expression for $u$ above automatically also satisfies these homogeneous boundary conditions.