D.18 Eigenfunctions of commuting operators

Any two operators $A$ and $B$ that commute, $AB=BA$, have a common set of eigenfunctions, provided only that each has a complete set of eigenfunctions. (In other words, the operators do not necessarily have to be Hermitian. Unitary, anti-Hermitian, etcetera, operators all qualify.)

First note the following:

if $\alpha_i$ is an eigenfunction of $A$ with eigenvalue $a_i$, then $B\alpha_i$ is either also an eigenfunction of $A$ with eigenvalue $a_i$ or is zero.
To see that, note that since $A$ and $B$ commute, $AB\alpha_i = BA\alpha_i$, which equals $a_iB\alpha_i$. Comparing start and end, the combination $B\alpha_i$ must be an eigenfunction of $A$ with eigenvalue $a_i$ if it is not zero. (Eigenfunctions may not be zero.)
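Written out in full, the chain of equalities is

\begin{displaymath}
A \left(B\alpha_i\right) = \left(AB\right)\alpha_i = \left(BA\right)\alpha_i
= B \left(A\alpha_i\right) = a_i \left(B\alpha_i\right)
\end{displaymath}

so $B\alpha_i$ satisfies the eigenvalue equation of $A$ for eigenvalue $a_i$.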

Now assume that there is just a single independent eigenfunction $\alpha_i$ for each distinct eigenvalue $a_i$ of $A$. Then if $B\alpha_i$ is nonzero, it can only be a multiple of that single eigenfunction. By definition, that makes $\alpha_i$ an eigenfunction of $B$ too, with the multiple as its eigenvalue. On the other hand, if $B\alpha_i$ is zero, then $\alpha_i$ is still an eigenfunction of $B$, now with eigenvalue zero. So under the stated assumption, $A$ and $B$ have the exact same eigenfunctions, proving the assertion of this derivation.
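As a minimal concrete illustration (not part of the original argument): take

\begin{displaymath}
A = \left(\begin{array}{cc} 1 & 0 \\ 0 & 2 \end{array}\right)
\qquad
B = \left(\begin{array}{cc} b_{11} & b_{12} \\ b_{21} & b_{22} \end{array}\right)
\end{displaymath}

Here $A$ has a single independent eigenvector for each of its two distinct eigenvalues, namely the two unit vectors. Writing out $AB=BA$ gives $b_{12}=2b_{12}$ and $2b_{21}=b_{21}$, so $b_{12}=b_{21}=0$: any $B$ that commutes with this $A$ is diagonal too, and then the unit vectors are eigenvectors of $B$ as well.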

However, frequently there is degeneracy, i.e. there is more than one eigenfunction $\alpha_{i,1},\alpha_{i,2},\ldots$ for a single eigenvalue $a_i$. Then the fact that, say, $B\alpha_{i,1}$ is an eigenfunction of $A$ with eigenvalue $a_i$ no longer means that $B\alpha_{i,1}$ is a multiple of $\alpha_{i,1}$; it only means that $B\alpha_{i,1}$ is some combination of all of $\alpha_{i,1},\alpha_{i,2},\ldots$. That means that $\alpha_{i,1}$ is not in general an eigenfunction of $B$.
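A simple example of the problem (again just an illustration): take $A$ to be the $2\times2$ unit matrix, so that the single eigenvalue $1$ is doubly degenerate and the two unit vectors are perfectly good eigenvectors $\alpha_{1,1}$ and $\alpha_{1,2}$, and take

\begin{displaymath}
B = \left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right)
\end{displaymath}

which commutes with $A$. Then $B\alpha_{1,1}=\alpha_{1,2}$, a combination of the $\alpha$'s that is not a multiple of $\alpha_{1,1}$, so $\alpha_{1,1}$ is not an eigenvector of $B$. The combinations that are eigenvectors of both are $\alpha_{1,1}+\alpha_{1,2}$ and $\alpha_{1,1}-\alpha_{1,2}$.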

To deal with that, it has to be assumed that the problem has been numerically approximated by some finite-dimensional one. Then $A$ and $B$ will be matrices, and the number of independent eigenfunctions (or rather, eigenvectors now) of $A$ and $B$ will be finite and equal. That allows the problem to be addressed one eigenfunction at a time.

Assume now that $\beta$ is an eigenfunction of $B$, with eigenvalue $b$, that is not yet an eigenfunction of $A$ too. By completeness, it can still be written as a combination of the eigenfunctions of $A$, and more particularly as $\beta = \beta_{a_i}+\beta_o$, where $\beta_{a_i}$ is a combination of the eigenfunctions of $A$ with eigenvalue $a_i$ and $\beta_o$ a combination of the eigenfunctions of $A$ with other eigenvalues. There must be such eigenfunctions $\beta$ with $\beta_{a_i}$ nonzero, because without using the eigenfunctions $\alpha_{i,1},\alpha_{i,2},\ldots$ you cannot create as many independent eigenfunctions of $B$ as there are of $A$. By definition

\begin{displaymath}
B \left(\beta_{a_i} +\beta_o\right) = b \left(\beta_{a_i} +\beta_o\right)
\end{displaymath}

but that must mean that

\begin{displaymath}
B \beta_{a_i} = b \beta_{a_i}
\end{displaymath}

since if it is not, $B\beta_o$ cannot make up the difference; as seen earlier, $B\beta_o$ only consists of eigenfunctions of $A$ that do not have eigenvalue $a_i$. According to the above equation, $\beta_{a_i}$, which is already an eigenfunction of $A$ with eigenvalue $a_i$, is also an eigenfunction of $B$ with eigenvalue $b$. So replace one of $\alpha_{i,1}$, $\alpha_{i,2}$, ...with $\beta_{a_i}$. (If you write $\beta_{a_i}$ in terms of $\alpha_{i,1}$, $\alpha_{i,2}$, ..., the function you replace must be one that appears with a nonzero coefficient.) Similarly replace one of the eigenfunctions of $B$ with eigenvalue $b$ with $\beta_{a_i}$. Then $A$ and $B$ have one more common eigenfunction. Keep going in this way and eventually all eigenfunctions of $B$ are also eigenfunctions of $A$ and vice versa.
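To spell out that key step: substituting $\beta=\beta_{a_i}+\beta_o$ into the eigenvalue equation for $\beta$ and collecting terms gives

\begin{displaymath}
\left(B \beta_{a_i} - b \beta_{a_i}\right)
+ \left(B \beta_o - b \beta_o\right) = 0
\end{displaymath}

The first group of terms consists only of eigenfunctions of $A$ with eigenvalue $a_i$, the second only of eigenfunctions of $A$ with other eigenvalues. Since eigenfunctions of $A$ with different eigenvalues are linearly independent, each group must vanish separately, giving $B\beta_{a_i}=b\beta_{a_i}$ as claimed.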

Similar arguments can be used recursively to show that, more generally, a set of operators $A,B,C,\ldots$ that all commute have a single common set of eigenfunctions. The trick is to define an artificial new operator, call it $P$, that has the common eigenfunctions of $A$ and $B$, but whose eigenvalues are distinct for any two eigenfunctions unless these eigenfunctions have the same eigenvalues for both $A$ and $B$. Then the eigenfunctions of $P$, even if you mess with them, remain eigenfunctions of $A$ and $B$. So go find common eigenfunctions for $P$ and $C$.
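One concrete way to construct such a $P$, not spelled out above, is as the combination

\begin{displaymath}
P = A + \lambda B
\end{displaymath}

A common eigenfunction of $A$ and $B$ with eigenvalues $a$ and $b$ is an eigenfunction of $P$ with eigenvalue $a+\lambda b$. In the finite-dimensional setting there are only finitely many distinct eigenvalue pairs $(a,b)$, so the constant $\lambda$ can be chosen such that different pairs produce different values of $a+\lambda b$.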

The above derivation assumed that the problem was finite-dimensional, or was discretized in some way into a finite-dimensional one, as you do in numerical solutions. The latter is open to some suspicion, because even the most accurate numerical approximation is never truly exact. Unfortunately, in the infinite-dimensional case the derivation gets much trickier. However, as the hydrogen atom and harmonic oscillator eigenfunction examples indicate, typical infinite systems in nature do obey the theorem anyway.
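As a quick numerical check of the finite-dimensional argument, the sketch below (assuming Python with NumPy; the matrices are made-up examples) uses the $P=A+\lambda B$ combination above to produce a basis of common eigenvectors of two commuting matrices, one of which has a degenerate eigenvalue.

\begin{verbatim}
import numpy as np

# Two commuting real symmetric matrices; A has a doubly degenerate
# eigenvalue 2, so its unit eigenvectors are not automatically
# eigenvectors of B.
A = np.diag([2.0, 2.0, 5.0])
B = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])
assert np.allclose(A @ B, B @ A)      # A and B commute

# Diagonalize the single operator P = A + lam*B, with lam chosen so
# that distinct eigenvalue pairs (a, b) give distinct values a + lam*b.
lam = 0.1
_, V = np.linalg.eigh(A + lam * B)

# Each column of V is then a common eigenvector of A and B.
for v in V.T:
    for M in (A, B):
        Mv = M @ v
        assert np.allclose(Mv, (v @ Mv) * v)   # eigenvector check
print("common eigenvectors found")
\end{verbatim}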