
11.14 Application to Particles in a Box

This section applies the ideas developed in the previous sections to weakly interacting particles in a box. This allows some of the details of the shelves in figures 11.1 through 11.3 to be filled in for a concrete case.

For particles in a macroscopic box, the single-particle energy levels ${\vphantom' E}^{\rm p}$ are so closely spaced that they can be taken to be continuously varying. The one exception is the ground state when Bose-Einstein condensation occurs; that will be ignored for now. In the continuum approximation, the number of single-particle energy states in a macroscopically small energy range ${\rm d}{\vphantom' E}^{\rm p}$ is approximately, following (6.6),

\begin{displaymath}
\fbox{$\displaystyle
{\rm d}N = V n_s {\cal D}{ \rm d}{\vphantom' E}^{\rm p}
\qquad
{\cal D} = \frac{1}{4\pi^2}
\left(\frac{2m}{\hbar^2}\right)^{3/2}
\sqrt{{\vphantom' E}^{\rm p}}
$} %
\end{displaymath} (11.42)

Here $n_s = 2s+1$ is the number of spin states.

Now according to the derived distributions, the number of particles in a single energy state at energy ${\vphantom' E}^{\rm p}$ is

\begin{displaymath}
\iota = \frac{1}{e^{({\vphantom' E}^{\rm p}-\mu)/{k_{\rm B}}T}\pm1}
\end{displaymath}

where the plus sign applies for fermions and the minus sign for bosons. The $\pm1$ can be ignored completely for distinguishable particles.

To get the total number of particles, just integrate the particles per state $\iota$ over all states:

\begin{displaymath}
I = \int_{{\vphantom' E}^{\rm p}=0}^\infty
\iota V n_s {\cal D}{ \rm d}{\vphantom' E}^{\rm p}
= \frac{V n_s}{4\pi^2} \left(\frac{2m}{\hbar^2}\right)^{3/2}
\int_{{\vphantom' E}^{\rm p}=0}^\infty
\frac{\sqrt{{\vphantom' E}^{\rm p}}}
{e^{({\vphantom' E}^{\rm p}-\mu)/{k_{\rm B}}T}\pm1}
{ \rm d}{\vphantom' E}^{\rm p}
\end{displaymath}

and to get the total energy, integrate the energy of each single-particle state times the number of particles in that state over all states:

\begin{displaymath}
E = \int_{{\vphantom' E}^{\rm p}=0}^\infty
{\vphantom' E}^{\rm p} \iota V n_s {\cal D}{ \rm d}{\vphantom' E}^{\rm p}
= \frac{V n_s}{4\pi^2} \left(\frac{2m}{\hbar^2}\right)^{3/2}
\int_{{\vphantom' E}^{\rm p}=0}^\infty
\frac{{\vphantom' E}^{\rm p}\sqrt{{\vphantom' E}^{\rm p}}}
{e^{({\vphantom' E}^{\rm p}-\mu)/{k_{\rm B}}T}\pm1}
{ \rm d}{\vphantom' E}^{\rm p}
\end{displaymath}

The expression for the number of particles can be nondimensionalized by rearranging and taking a root to give

\begin{displaymath}
\fbox{$\displaystyle
\frac{\displaystyle
\frac{\hbar^2}{2m}\left(\frac{I}{V}\right)^{2/3}}{{k_{\rm B}}T}
=
\left(
\frac{n_s}{4\pi^2}
\int_{u=0}^\infty \frac{\sqrt{u}{ \rm d}u}{e^{u-u_0}\pm1}
\right)^{2/3}
\qquad
u \equiv \frac{{\vphantom' E}^{\rm p}}{{k_{\rm B}}T}\quad u_0 \equiv \frac{\mu}{{k_{\rm B}}T}
$} %
\end{displaymath} (11.43)

Note that the left hand side is a nondimensional ratio of a typical quantum microscopic energy, based on the average particle spacing $\sqrt[3]{V/I}$, to the typical classical microscopic energy ${k_{\rm B}}T$. This ratio is a key nondimensional number governing weakly interacting particles in a box. To put the typical quantum energy into context, a single particle in its own volume of size $V/I$ would have a ground state energy $3\pi^2\hbar^2/2m(V/I)^{2/3}$.
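As a rough numerical illustration (an addition, not part of the original text), the sketch below evaluates this energy ratio for helium gas at atmospheric conditions; the gas choice and conditions are assumed example values. The tiny result shows that such a gas is far from the quantum regime.

```python
import math

# Physical constants (SI), standard values.
hbar = 1.054571817e-34  # J s
kB   = 1.380649e-23     # J/K

# Assumed example: helium-4 gas at 0 degC and 1 atm.
m = 6.6464731e-27       # kg, helium-4 atom mass
T = 273.15              # K
P = 101325.0            # Pa
n = P / (kB * T)        # particle density I/V from the ideal gas law

quantum_energy   = hbar**2 / (2*m) * n**(2/3)  # typical quantum energy
classical_energy = kB * T                      # typical classical energy
ratio = quantum_energy / classical_energy      # left hand side of (11.43)
```

The ratio comes out around $2\times10^{-5}$, confirming that the classical energy dominates for an ordinary gas.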

Some references, [4], define a “thermal de Broglie wavelength” $\lambda_{\rm th}$ by writing the classical microscopic energy ${k_{\rm B}}T$ in a quantum-like way:

\begin{displaymath}
{k_{\rm B}}T \equiv 4\pi \frac{\hbar^2}{2m} \frac{1}{\lambda_{\rm th}^2}
\end{displaymath}

In some simple cases, you can think of this as roughly the quantum wavelength corresponding to the momentum of the particles. It allows various results that depend on the nondimensional ratio of energies to be reformulated in terms of a nondimensional ratio of lengths, as in

\begin{displaymath}
\frac{\displaystyle \frac{\hbar^2}{2m}\left(\frac{I}{V}\right)^{2/3}}{{k_{\rm B}}T}
=
\frac{1}{4\pi}
\left[\frac{\lambda_{\rm th}}{\left(V/I\right)^{1/3}}\right]^2
\end{displaymath}

Since the ratio of energies is fully equivalent, and has an unambiguous meaning, this book will refrain from making theory harder than needed by defining superfluous quantities. But in practice, thinking in terms of numerical values that are lengths is likely to be more intuitive than energies, and then the numerical value of the thermal wavelength would be the one to keep in mind.
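The equivalence of the energy ratio and the length ratio is easy to check numerically; the sketch below (an added illustration) uses assumed helium-gas numbers and the definition of $\lambda_{\rm th}$ above.

```python
import math

# Constants (SI) and assumed example conditions (helium-4 gas).
hbar = 1.054571817e-34  # J s
kB   = 1.380649e-23     # J/K
m    = 6.6464731e-27    # kg
T    = 273.15           # K
n    = 2.69e25          # particles per m^3, illustrative density I/V

# Thermal de Broglie wavelength from kB*T = 4*pi*(hbar^2/2m)/lambda_th^2:
lam_th = math.sqrt(4*math.pi * hbar**2 / (2*m) / (kB*T))

# Energy ratio and its reformulation as a length ratio:
energy_ratio = hbar**2/(2*m) * n**(2/3) / (kB*T)
length_ratio = (lam_th / (1/n)**(1/3))**2 / (4*math.pi)
```

The two ratios agree to rounding error, and $\lambda_{\rm th}$ comes out around half an angstrom, much smaller than the particle spacing: again the classical regime.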

Note that (11.43) provides a direct relationship between the ratio of typical quantum/classical energies on one side, and $u_0$, the ratio of atomic chemical potential $\mu$ to typical classical microscopic energy ${k_{\rm B}}T$, on the other side. While the two energy ratios are not the same, (11.43) makes them equivalent for systems of weakly interacting particles in boxes. Know one and you can in principle compute the other.

The expression for the system energy may be nondimensionalized in a similar way to get

\begin{displaymath}
\fbox{$\displaystyle
\frac{E}{I{k_{\rm B}}T}
=
\left.
\int_{u=0}^\infty \frac{u\sqrt{u}{ \rm d}u}{e^{u-u_0}\pm1}
\right/
\int_{u=0}^\infty \frac{\sqrt{u}{ \rm d}u}{e^{u-u_0}\pm1}
\qquad
u \equiv \frac{{\vphantom' E}^{\rm p}}{{k_{\rm B}}T}\quad u_0 \equiv \frac{\mu}{{k_{\rm B}}T}
$} %
\end{displaymath} (11.44)

The integral in the bottom arises when the ratio of energies that forms is eliminated using (11.43).

The quantity in the left hand side is the nondimensional ratio of the actual system energy over the system energy if every particle had the typical classical energy ${k_{\rm B}}T$. It too is a unique function of $u_0$, and as a consequence, also of the ratio of typical microscopic quantum and classical energies.
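The two integrals in (11.44) are straightforward to evaluate numerically for any $u_0$. The sketch below (an added illustration, not from the text) uses a simple midpoint rule and checks the classical limit: for $u_0$ strongly negative the $\pm1$ hardly matters and the ratio approaches $\frac32$, the ideal gas value found in subsection 11.14.4.

```python
import math

def shelf_integral(p, u0, sign, umax=60.0, steps=200000):
    """Midpoint-rule approximation of int_0^umax u^p du/(exp(u-u0)+sign)."""
    h = umax / steps
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) * h
        total += u**p / (math.exp(u - u0) + sign)
    return total * h

u0 = -10.0                         # classical regime: mu well below -kB*T
num = shelf_integral(1.5, u0, +1)  # numerator of (11.44), fermion sign
den = shelf_integral(0.5, u0, +1)  # denominator of (11.44)
energy_ratio = num / den           # E/(I kB T)
```

With the boson sign the answer in this limit is the same, since the $\pm1$ is negligible against the huge exponential.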


11.14.1 Bose-Einstein condensation

Bose-Einstein condensation is said to have occurred when in a macroscopic system the number of bosons in the ground state becomes a finite fraction of the number of particles $I$. It happens when the temperature is lowered sufficiently or the particle density is increased sufficiently or both.

According to derivation {D.57}, the number of particles in the ground state is given by

\begin{displaymath}
I_1 = \frac{N_1-1}{e^{({\vphantom' E}^{\rm p}_1-\mu)/{k_{\rm B}}T}-1}.
\end{displaymath} (11.45)

In order for this to become a finite fraction of the large number of particles $I$ of a macroscopic system, the denominator must become extremely small, hence the exponential must become extremely close to one, hence $\mu$ must come extremely close to the lowest energy level ${\vphantom' E}^{\rm p}_1$. To be precise, ${\vphantom' E}^{\rm p}_1-\mu$ must be small of order ${k_{\rm B}}T/I$; smaller than the classical microscopic energy by the humongous factor $I$. In addition, for a macroscopic system of weakly interacting particles in a box, ${\vphantom' E}^{\rm p}_1$ is extremely close to zero, (it is smaller than the microscopic quantum energy defined above by a factor $I^{2/3}$.) So condensation occurs when $\mu \approx {\vphantom' E}^{\rm p}_1 \approx 0$, the approximations being extremely close. If the ground state is unique, $N_1 = 1$, and Bose-Einstein condensation simply occurs when $\mu = {\vphantom' E}^{\rm p}_1 \approx 0$.

You would therefore expect that you can simply put $u_0 = \mu/{k_{\rm B}}T$ to zero in the integrals (11.43) and (11.44). However, if you do so, (11.43) fails to describe the number of particles in the ground state; it only gives the number of particles $I-I_1$ not in the ground state:

\begin{displaymath}
\frac{\displaystyle
\frac{\hbar^2}{2m}\left(\frac{I-I_1}{V}\right)^{2/3}}{{k_{\rm B}}T}
=
\left(
\frac{n_s}{4\pi^2}
\int_{u=0}^\infty \frac{\sqrt{u}{ \rm d}u}{e^u-1}
\right)^{2/3} \qquad\mbox{for BEC}
\end{displaymath} (11.46)

To see that the number of particles in the ground state is indeed not included in the integral, note that while the integrand does become infinite when $u\downarrow0$, it becomes infinite proportionally to $1/\sqrt{u}$, which integrates as proportional to $\sqrt{u}$, and $\sqrt{u_1} = \sqrt{{\vphantom' E}^{\rm p}_1/{k_{\rm B}}T}$ is vanishingly small, not finite. Arguments given in derivation {D.57} do show that the only significant error occurs for the ground state; the above integral does correctly approximate the number of particles not in the ground state when condensation has occurred.

The value of the integral can be found in mathematical handbooks, [41, p. 201, with typo], as $\frac12!\zeta\left(\frac32\right)$ with $\zeta$ the so-called Riemann zeta function, due to, who else, Euler. Euler showed that it is equal to a product of terms ranging over all prime numbers, but you do not want to know that. All you want to know is that $\zeta\left(\frac32\right)\approx2.612$ and that $\frac12! = \frac12\sqrt{\pi}$.
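That handbook value is simple to verify numerically. The sketch below (an added check) substitutes $u = t^2$ so that the integrable $1/\sqrt{u}$ singularity at $u=0$ disappears, then compares against $\frac12\sqrt{\pi}\,\zeta\left(\frac32\right)$.

```python
import math

def bose_integral(tmax=8.0, steps=100000):
    """int_0^inf sqrt(u)/(e^u - 1) du, via u = t^2 so the integrand is smooth."""
    h = tmax / steps
    s = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        # sqrt(u)/(e^u - 1) du becomes 2 t^2/(e^{t^2} - 1) dt
        s += 2.0 * t * t / math.expm1(t * t)
    return s * h

value = bose_integral()
expected = 0.5 * math.sqrt(math.pi) * 2.612   # (1/2)! * zeta(3/2)
```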

The Bose-Einstein temperature $T_B$ is the temperature at which Bose-Einstein condensation starts. That means it is the temperature for which $I_1 = 0$ in the expression above, giving

\begin{displaymath}
\frac{\displaystyle
\frac{\hbar^2}{2m}\left(\frac{I-I_1}{V}\right)^{2/3}}{{k_{\rm B}}T}
=
\frac{\displaystyle
\frac{\hbar^2}{2m}\left(\frac{I}{V}\right)^{2/3}}{{k_{\rm B}}T_B}
=
\left(
\frac{n_s}{4\pi^2}{\textstyle\frac12}\sqrt{\pi}\,
\zeta\left({\textstyle\frac32}\right)
\right)^{2/3} \quad T\mathrel{\raisebox{-.7pt}{$\leqslant$}}T_B
\end{displaymath} (11.47)

It implies that for a given system of bosons, at Bose-Einstein condensation there is a fixed numerical ratio between the microscopic quantum energy based on particle density and the classical microscopic energy ${k_{\rm B}}T_B$. That also illustrates the point made at the beginning of this subsection that both changes in temperature and changes in particle density can produce Bose-Einstein condensation.

The first equality in the equation above can be cleaned up to give the fraction of bosons in the ground state as:

\begin{displaymath}
\frac{I_1}{I} = 1 - \left(\frac{T}{T_B}\right)^{3/2} \qquad T\mathrel{\raisebox{-.7pt}{$\leqslant$}}T_B
\end{displaymath} (11.48)
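For a concrete feel (an added illustration; the liquid-helium numbers are assumed, not from the text), the sketch below evaluates $T_B$ from (11.47) at the density of liquid helium-4. The result, about 3.1 K, is in the same ballpark as the observed 2.17 K transition of liquid helium, even though helium atoms are certainly not weakly interacting.

```python
import math

hbar = 1.054571817e-34   # J s
kB   = 1.380649e-23      # J/K
m    = 6.6464731e-27     # kg, helium-4 atom
rho  = 145.0             # kg/m^3, rough liquid helium-4 density (assumed)
n    = rho / m           # particle density I/V
ns   = 1                 # helium-4 atoms: spin zero, one spin state

# Right hand side of (11.47), then solve for kB*T_B:
rhs = (ns/(4*math.pi**2) * 0.5*math.sqrt(math.pi) * 2.612) ** (2/3)
TB  = hbar**2/(2*m) * n**(2/3) / (rhs * kB)

def condensate_fraction(T):
    """Ground state fraction from (11.48), valid for T <= T_B."""
    return 1.0 - (T / TB)**1.5
```

At half the Bose-Einstein temperature, almost two thirds of the atoms would be in the ground state.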


11.14.2 Fermions at low temperatures

Another application of the integrals (11.43) and (11.44) is to find the Fermi energy ${\vphantom' E}^{\rm p}_{\rm F}$ and internal energy $E$ of a system of weakly interacting fermions for vanishing temperature.

For low temperatures, the nondimensional energy ratio $u_0 = \mu/{k_{\rm B}}T$ blows up, since ${k_{\rm B}}T$ becomes zero and the chemical potential $\mu$ does not; $\mu$ becomes the Fermi energy ${\vphantom' E}^{\rm p}_{\rm F}$, chapter 6.10. To deal with the blow up, the integrals can be rephrased in terms of $u/u_0 = {\vphantom' E}^{\rm p}/\mu$, which does not blow up.

In particular, the ratio (11.43) involving the typical microscopic quantum energy can be rewritten by taking a factor $u_0^{3/2}$ out of the integral and the root, and moving it to the other side, to give:

\begin{displaymath}
\frac{\displaystyle \frac{\hbar^2}{2m} \left(\frac{I}{V}\right)^{2/3}}{\mu}
=
\left(
\frac{n_s}{4\pi^2}
\int_{u/u_0=0}^\infty \frac{\sqrt{u/u_0}{ \rm d}(u/u_0)}{e^{u_0[(u/u_0)-1]}+ 1}
\right)^{2/3}
\end{displaymath}

Now since $u_0$ is large, the exponential in the denominator becomes extremely large for $u/u_0 > 1$, making the integrand negligibly small. Therefore the upper limit of integration can be limited to $u/u_0 = 1$. In that range, the exponential is vanishingly small, except for a negligibly small range around $u/u_0 = 1$, so it can be ignored. That gives

\begin{displaymath}
\frac{\displaystyle \frac{\hbar^2}{2m} \left(\frac{I}{V}\right)^{2/3}}{\mu}
=
\left(
\frac{n_s}{4\pi^2}
\int_{u/u_0=0}^1 \sqrt{u/u_0}{ \rm d}(u/u_0)
\right)^{2/3}
=
\left( \frac{n_s}{6\pi^2} \right)^{2/3}
\end{displaymath}

It follows that the Fermi energy is

\begin{displaymath}
{\vphantom' E}^{\rm p}_{\rm F} = \mu\vert _{T=0} =
\left(\frac{6\pi^2}{n_s}\right)^{2/3}
\frac{\hbar^2}{2m} \left(\frac{I}{V}\right)^{2/3}
\end{displaymath}

Physicists like to define a “Fermi temperature” as the temperature at which the classical microscopic energy ${k_{\rm B}}T$ becomes equal to the Fermi energy. It is

\begin{displaymath}
T_{\rm {F}} = \frac{1}{k_{\rm B}} \left(\frac{6\pi^2}{n_s}\right)^{2/3}
\frac{\hbar^2}{2m} \left(\frac{I}{V}\right)^{2/3} %
\end{displaymath} (11.49)

It may be noted that except for the numerical factor, the expression for the Fermi temperature $T_{\rm F}$ is the same as that for the Bose-Einstein condensation temperature $T_B$ given in the previous subsection.

Electrons have $n_s = 2$. For the valence electrons in typical metals, the Fermi temperatures are on the order of tens of thousands of Kelvin. The metal will melt before that temperature is reached. The valence electrons are pretty much the same at room temperature as they are at absolute zero.

The integral (11.44) can be integrated in the same way and then shows that $E = \frac35I\mu = \frac35I{\vphantom' E}^{\rm p}_{\rm F}$. In short, at absolute zero, the average energy per particle is $\frac35$ times ${\vphantom' E}^{\rm p}_{\rm F}$, the maximum single-particle energy.
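As an added example with assumed numbers, the Fermi energy and Fermi temperature for the valence electrons in copper can be estimated as below; the electron density is the standard free-electron value for copper, not a number from the text.

```python
import math

hbar = 1.054571817e-34   # J s
kB   = 1.380649e-23      # J/K
me   = 9.1093837e-31     # kg, electron mass
eV   = 1.602176634e-19   # J per electronvolt
n    = 8.47e28           # /m^3, copper valence electron density (assumed)
ns   = 2                 # two spin states per electron

EF = (6*math.pi**2/ns)**(2/3) * hbar**2/(2*me) * n**(2/3)  # Fermi energy
TF = EF / kB                                               # Fermi temperature
E_avg = 0.6 * EF       # average electron energy at absolute zero, (3/5) EF
```

This gives a Fermi energy around 7 eV and a Fermi temperature around 80,000 K, consistent with the tens-of-thousands-of-Kelvin claim above.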

It should be admitted that both of the results in this subsection have been obtained more simply in chapter 6.10. However, the analysis in this subsection can be used to find the corrected expressions when the temperature is fairly small but not zero, {D.62}, or for any temperature by brute-force numerical integration. One result is the specific heat at constant volume of the free-electron gas for low temperatures:

\begin{displaymath}
C_v = \frac{\pi^2}{2}\frac{k_{\rm B}T}{{\vphantom' E}^{\rm p}_{\rm {F}}} \frac{k_{\rm B}}{m}
(1 + \ldots)
\end{displaymath} (11.50)

where $k_{\rm B}/m$ is the gas constant $R$. All low-temperature expansions proceed in powers of $({k_{\rm B}}T/{\vphantom' E}^{\rm p}_{\rm F})^2$, so the dots in the expression for $C_v$ above are of that order. The specific heat vanishes at zero temperature and is typically small.
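To see how small, the sketch below (an added illustration) evaluates the leading term of (11.50) relative to the gas constant $R = k_{\rm B}/m$ for copper valence electrons at room temperature; the Fermi energy is an assumed example value of roughly 7 eV.

```python
import math

kB = 1.380649e-23    # J/K
EF = 1.13e-18        # J, roughly 7 eV: copper valence electrons (assumed)
T  = 300.0           # K

# Leading term of (11.50), expressed in units of R = kB/m:
Cv_over_R = (math.pi**2 / 2) * kB * T / EF
```

A classical gas of the same particles would have $C_v = \frac32 R$, so at room temperature the electrons contribute only about one percent of the classical prediction.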


11.14.3 A generalized ideal gas law

While the previous subsections produced a lot of interesting information about weakly interacting particles near absolute zero, how about some info about conditions that you can check in a T-shirt? And how about something mathematically simple, instead of elaborate integrals that produce weird functions?

Well, there is at least one. By definition, (11.8), the pressure is the expectation value of $-{\rm d}{\vphantom' E}^{\rm S}_q/{\rm d}V$ where the ${\vphantom' E}^{\rm S}_q$ are the system energy eigenvalues. For weakly interacting particles in a box, chapter 6.2 found that the single-particle energies are inversely proportional to the squares of the linear dimensions of the box, which means proportional to $V^{-2/3}$. Then so are the system energy eigenvalues, since they are sums of single-particle ones: ${\vphantom' E}^{\rm S}_q = \mbox{constant } V^{-2/3}$. Differentiating produces ${\rm d}{\vphantom' E}^{\rm S}_q/{\rm d}V = -\frac23{\vphantom' E}^{\rm S}_q/V$ and taking the expectation value

\begin{displaymath}
\fbox{$\displaystyle
PV={\textstyle\frac{2}{3}} E
$} %
\end{displaymath} (11.51)

This expression is valid for weakly interacting bosons and fermions even if the symmetrization requirements cannot be ignored.


11.14.4 The ideal gas

The weakly interacting particles in a box can be approximated as an ideal gas if the number of particles is so small, or the box so large, that the average number of particles in an energy state is much less than one.

Since the number of particles per energy state is given by

\begin{displaymath}
\iota = \frac{1}{e^{({\vphantom' E}^{\rm p}-\mu)/{k_{\rm B}}T}\pm 1}
\end{displaymath}

ideal gas conditions imply that the exponential must be much greater than one, and then the $\pm1$ can be ignored. That means that the difference between fermions and bosons, which accounts for the $\pm1$, can be ignored for an ideal gas. Both can be approximated by the distribution derived for distinguishable particles.

The energy integral (11.44) can now easily be done; the $e^{u_0}$ factor divides away and an integration by parts in the numerator produces $E = \frac32I{k_{\rm B}}T$. Plug it into the generalized ideal gas law (11.51) to get the normal “ideal gas law”

\begin{displaymath}
\fbox{$\displaystyle
PV=I k_{\rm B}T
\qquad\Longleftrightarrow\qquad
Pv=RT \quad R \equiv \frac{k_{\rm B}}{m}
$} %
\end{displaymath} (11.52)
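The $\frac32$ factor quoted above follows because, with the $\pm1$ dropped, both integrals in (11.44) turn into Gamma functions; a one-line check (added here for illustration):

```python
import math

# With the +-1 dropped, e^{u0} divides away and (11.44) becomes a ratio of
# Gamma functions: int_0^inf u^(3/2) e^-u du / int_0^inf u^(1/2) e^-u du.
ratio = math.gamma(2.5) / math.gamma(1.5)   # so E = (3/2) I kB T
```

Combined with $PV = \frac23 E$ from (11.51), this is exactly $PV = I k_{\rm B} T$.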

Also, following (11.34),

\begin{displaymath}
e = {\textstyle\frac{3}{2}} \frac{k_{\rm B}}{m} T = C_v T \qquad
h = {\textstyle\frac{5}{2}} \frac{k_{\rm B}}{m} T = C_p T \qquad
C_v = {\textstyle\frac{3}{2}} R \quad C_p = {\textstyle\frac{5}{2}} R
\end{displaymath}

but note that these formulae are specific to the simplistic ideal gases described by the model, (like noble gases.) For ideal gases with more complex molecules, like air, the specific heats are not constants, but vary with temperature, as discussed in section 11.15.

The ideal gas equation is identical to the one derived in classical physics. That is important since it establishes that what was defined to be the temperature in this chapter is in fact the ideal gas temperature that classical physics defines.

The integral (11.43) can be done using integration by parts and a result found in the notations section under “!”. It gives an expression for the single-particle chemical potential $\mu$:

\begin{displaymath}
-\frac{\mu}{{k_{\rm B}}T}
=
{\textstyle\frac{3}{2}}
\ln
\left[
\left(\frac{n_s\sqrt{\pi}}{8\pi^2}\right)^{2/3}
{k_{\rm B}}T\left/\frac{\hbar^2}{2m}\right.
\left(\frac{I}{V}\right)^{2/3}
\right]
\end{displaymath}

Note that the argument of the logarithm is essentially the ratio between the classical microscopic energy and the quantum microscopic energy based on average particle spacing. This ratio has to be big for an accurate ideal gas, to get the exponential in the particle energy distribution $\iota$ to be big.
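For example (an added illustration with assumed gas data, using the logarithm as written above), for helium at atmospheric conditions the logarithm is indeed large:

```python
import math

hbar = 1.054571817e-34   # J s
kB   = 1.380649e-23      # J/K
m    = 6.6464731e-27     # kg, helium-4 (assumed example gas)
T    = 273.15            # K
P    = 101325.0          # Pa
n    = P / (kB * T)      # I/V from the ideal gas law
ns   = 1

# Argument of the logarithm, then -mu/(kB T):
arg = (ns*math.sqrt(math.pi)/(8*math.pi**2))**(2/3) \
      * kB*T / (hbar**2/(2*m) * n**(2/3))
minus_mu_over_kBT = 1.5 * math.log(arg)
```

This comes out around 12, so $e^{-\mu/{k_{\rm B}}T}$ is of order $10^5$: the exponential in $\iota$ is indeed much greater than one and the ideal gas approximation is self-consistent.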

Next is the specific entropy $s$. Recall that the chemical potential is just the Gibbs free energy. By the definition of the Gibbs free energy, the specific entropy $s$ equals $(h-g)/T$. Now the specific Gibbs energy is just the Gibbs energy per unit mass, in other words, $\mu/m$, while $h/T = C_p$ as above. So

\begin{displaymath}
\fbox{$\displaystyle
s =
C_v
\ln
\left[
{k_{\rm B}}T\left/\frac{\hbar^2}{2m}\right.
\left(\frac{I}{V}\right)^{2/3}
\left(\frac{n_s\sqrt{\pi}}{8\pi^2}\right)^{2/3}
\right]
+ C_p
$} %
\end{displaymath} (11.53)

In terms of classical thermodynamics, $V/I$ is $m$ times the specific volume $v$. So classical thermodynamics takes the logarithm above apart as

\begin{displaymath}
s = C_v\ln(T) + R\ln(v) + \mbox{some combined constant}
\end{displaymath}

and then promptly forgets about the constant, damn units.


11.14.5 Blackbody radiation

This section takes a closer look at blackbody radiation, discussed earlier in chapter 6.8. Blackbody radiation is the basic model for absorption and emission of electromagnetic radiation. Electromagnetic radiation includes light and a wide range of other radiation, like radio waves, microwaves, and X-rays. All surfaces absorb and emit radiation; otherwise we would not see anything. But black surfaces are the easiest to understand theoretically.

No, a black body need not look black. If its temperature is high enough, it could look like the sun. What defines an ideal black body is that it absorbs, (internalizes instead of reflects,) all radiation that hits it. But it may be emitting its own radiation at the same time. And that makes a difference. If the black body is cool, you will need your infrared camera to see it; it would look really black to the eye. It is not reflecting any radiation, and it is not emitting any visible amount either. But if it is at the temperature of the sun, better take out your sunglasses. It is still absorbing all radiation that hits it, but it is emitting large amounts of its own too, and lots of it in the visible range.

So where do you get a nearly perfectly black surface? Matte black paint? A piece of blackboard? Soot? Actually, pretty much all materials will reflect in some range of wavelengths. You get the blackest surface by using no material at all. Take a big box and paint its interior the blackest you can. Close the box, then drill a very tiny hole in its side. From the outside, the area of the hole will be truly, absolutely black. Whatever radiation enters there is gone. Still, when you heat the box to very high temperatures, the hole will shine bright.

While any radiation entering the hole will most surely be absorbed somewhere inside, the inside of the box itself is filled with electromagnetic radiation, like a gas of photons, produced by the hot inside surface of the box. And some of those photons will manage to escape through the hole, making it shine.

The number of photons in the box may be computed from the Bose-Einstein distribution with a few caveats. The first is that there is no limit on the number of photons; photons will be created or absorbed by the box surface to achieve thermal equilibrium at whatever level is most probable at the given temperature. This means the chemical potential $\mu$ of the photons is zero, as you can check from the derivations in notes {D.57} and {D.58}.

The second caveat is that the usual density of states (6.6) is nonrelativistic. It does not apply to photons, which move at the speed of light. For photons you must use the density of modes (6.7).

The third caveat is that there are only two independent spin states for a photon. As a spin-one particle you would expect that photons would have the spin values 0 and $\pm1$, but the zero value does not occur in the direction of propagation, addendum {A.21.6}. Therefore the number of independent states that exist is two, not three. A different way to understand this is classical: the electric field can only oscillate in the two independent directions normal to the direction of propagation, (13.10); oscillation in the direction of propagation itself is not allowed by Maxwell’s laws because it would make the divergence of the electric field nonzero. The fact that there are only two independent states has already been accounted for in the density of modes (6.7).

The energy per unit box volume and unit frequency range found under the above caveats is Planck’s blackbody spectrum already given in chapter 6.8:

\begin{displaymath}
\rho(\omega) \equiv
\frac{{\rm d}(E/V)}{{\rm d}\omega} =
\frac{\hbar}{\pi^2c^3} \frac{\omega^3}{e^{\hbar\omega/{k_{\rm B}}T}-1}
\end{displaymath} (11.54)

The expression for the total internal energy per unit volume is called the “Stefan-Boltzmann formula.” It is found by integration of Planck’s spectrum over all frequencies just like for the Stefan-Boltzmann law in chapter 6.8:

\begin{displaymath}
\fbox{$\displaystyle
\frac{E}{V} =
\frac{\pi^2}{15\hbar^3c^3} (k_{\rm B}T)^4
$} %
\end{displaymath} (11.55)

The number of particles may be found similarly to the energy, by dropping the $\hbar\omega$ energy per particle from the integral. It is, [41, 36.24, with typo]:

\begin{displaymath}
\frac{I}{V} =
\frac{2\zeta(3)}{\pi^2\hbar^3c^3} (k_{\rm B}T)^3
\qquad\zeta(3)\approx 1.202 %
\end{displaymath} (11.56)

Taking the ratio with (11.55), the average energy per photon may be found:
\begin{displaymath}
\fbox{$\displaystyle
\frac{E}{I} =
\frac{\pi^4}{30\zeta(3)} k_{\rm B}T
\approx 2.7 {k_{\rm B}}T
$} %
\end{displaymath} (11.57)

The temperature has to be roughly 9 000 K for the average photon to become visible light. That is one reason a black body will look black at a room temperature of about 300 K. The solar surface has a temperature of about 6 000 K, so the visible light photons it emits are more energetic than average, but there are still plenty of them.
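The coefficient in (11.57) and the 9 000 K figure can be reproduced numerically (an added check; the choice of 550 nm as a typical visible wavelength is an assumed value):

```python
import math

h  = 6.62607015e-34   # J s, Planck's constant
c  = 2.99792458e8     # m/s, speed of light
kB = 1.380649e-23     # J/K

avg_coeff = math.pi**4 / (30 * 1.202)   # E/I = avg_coeff * kB * T, from (11.57)

# Temperature at which the average photon has the energy of green light:
E_photon = h * c / 550e-9               # 550 nm, an assumed visible wavelength
T = E_photon / (avg_coeff * kB)
```

The coefficient evaluates to about 2.70, and the temperature to a bit under 10 000 K.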

The entropy $S$ of the photon gas follows from integrating $\int{\rm d}E/T$ using (11.55), starting from absolute zero and keeping the volume constant:

\begin{displaymath}
\fbox{$\displaystyle
\frac{S}{V} =
\frac{4\pi^2}{45\hbar^3c^3} k_{\rm B}(k_{\rm B}T)^3
$} %
\end{displaymath} (11.58)

Dividing by (11.56) shows the average entropy per photon to be
\begin{displaymath}
\frac{S}{I} = \frac{2\pi^4}{45\zeta(3)} k_{\rm B} %
\end{displaymath} (11.59)

independent of temperature.

The generalized ideal gas law (11.51) does not apply to the pressure exerted by the photon gas, because the energy of the photons is $\hbar ck$ and that is proportional to the wave number instead of its square. The corrected expression is:

\begin{displaymath}
\fbox{$\displaystyle
PV={\textstyle\frac{1}{3}} E
$} %
\end{displaymath} (11.60)


11.14.6 The Debye model

To explain the heat capacity of simple solids, Debye modeled the energy in the crystal vibrations in very much the same way as the photon gas of the previous subsection. This subsection briefly outlines the main ideas.

For electromagnetic waves propagating with the speed of light $c$, substitute acoustical waves propagating with the speed of sound $c_{\rm s}$. For photons with energy $\hbar\omega$, substitute phonons with energy $\hbar\omega$. Since unlike electromagnetic waves, sound waves can vibrate in the direction of wave propagation, for the number of spin states substitute $n_s = 3$ instead of 2; in other words, just multiply the various expressions for photons by 1.5.

The critical difference for solids is that the number of modes, hence the frequencies, is not infinitely large. Since each individual atom has three degrees of freedom (it can move in three individual directions), there are $3I$ degrees of freedom, and reformulating the motion in terms of acoustic waves does not change the number of degrees of freedom. The shortest wavelengths will be comparable to the atom spacing, and no waves of shorter wavelength will exist. As a result, there will be a highest frequency $\omega_{\rm max}$. The “Debye temperature” $T_D$ is defined as the temperature at which the typical classical microscopic energy ${k_{\rm B}}T$ becomes equal to the maximum quantum microscopic energy $\hbar\omega_{\rm max}$:

\begin{displaymath}
\fbox{$\displaystyle
{k_{\rm B}}T_D=\hbar\omega_{\rm max}
$} %
\end{displaymath} (11.61)

The expression for the internal energy becomes, from (6.11) times 1.5:

\begin{displaymath}
\fbox{$\displaystyle
\frac{E}{V} = \int_0^{\omega_{\rm max}}
\frac{3\hbar}{2\pi^2c_{\rm s}^3}
\frac{\omega^3}{e^{\hbar\omega/{k_{\rm B}}T}-1}{ \rm d}\omega
$} %
\end{displaymath} (11.62)

If the temperatures are very low, the exponential will make the integrand zero except for very small frequencies. Then the upper limit is essentially infinite compared to the range of integration. That makes the energy proportional to $T^4$ just like for the photon gas, and the heat capacity is therefore proportional to $T^3$. At the other extreme, when the temperature is large, the exponential in the bottom can be expanded in a Taylor series and the energy becomes proportional to $T$, making the heat capacity constant.

The maximum frequency, hence the Debye temperature, can be found from the requirement that the number of modes is $3I$, to be applied by integrating (6.7), or an empirical value can be used to improve the approximation for whatever temperature range is of interest. Literature values are often chosen to approximate the low temperature range accurately, since the model works best for low temperatures. If integration of (6.7) is used at high temperatures, the law of Dulong and Petit results, as described in section 11.15.
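Both limits can be seen from a direct numerical evaluation. Using the $3I$ mode count, (11.62) can be rewritten (a standard manipulation, added here for illustration) as $E = 9 I k_{\rm B} T\, (T/T_D)^3 \int_0^{T_D/T} x^3/(e^x-1)\,{\rm d}x$ with $x = \hbar\omega/{k_{\rm B}}T$; the sketch below checks the two limits, with the 400 K Debye temperature just a representative assumed value.

```python
import math

def debye_E_over_IkT(T, TD, steps=20000):
    """E/(I kB T) in the Debye model: 9 (T/TD)^3 * int_0^{TD/T} x^3/(e^x-1) dx."""
    xD = TD / T
    h = xD / steps
    s = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        s += x**3 / math.expm1(x)
    return 9.0 * (T/TD)**3 * s * h

TD = 400.0                             # K, representative Debye temperature

high = debye_E_over_IkT(20*TD, TD)     # approaches 3: E -> 3 I kB T
low1 = debye_E_over_IkT(0.02*TD, TD)   # low T: E/(I kB T) scales as T^3
low2 = debye_E_over_IkT(0.04*TD, TD)   # doubling T multiplies it by 8
```

The high-temperature value approaches 3, giving the constant heat capacity $3k_{\rm B}$ per atom of Dulong and Petit; at low temperatures doubling the temperature multiplies $E/(I k_{\rm B} T)$ by $2^3$, the $T^4$ energy law.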

More sophisticated versions of the analysis exist to account for some of the very nontrivial differences between crystal vibrations and electromagnetic waves. They will have to be left to the literature.