

6.28 Thermoelectric Applications

Thermoelectric effects can be used to make solid-state refrigeration devices, to sense temperature differences, or to convert thermal energy directly into electricity. This section explains the underlying principles.

There are three different thermoelectric effects. They are named the Peltier, Seebeck, and Thomson effects after the researchers who first observed them. Thomson is better known as Kelvin.

These effects are not at all specific to semiconductors. However, semiconductors are particularly suitable for thermoelectric applications. The reason is that the nature of the current carriers in semiconductors can be manipulated. That is done by doping the material as described in section 6.23. In an n-type doped semiconductor, currents are carried by mobile electrons. In a p-type doped semiconductor, the currents are carried by mobile holes, quantum states from which electrons are missing. Electrons are negatively charged particles, but holes act as positively charged ones, because each hole marks the absence of a negatively charged electron.


6.28.1 Peltier effect

Thermoelectric cooling can be achieved through what is called the Peltier effect. The top part of figure 6.37 shows a schematic of a Peltier cooler. The typical device consists of blocks of a semiconductor like bismuth telluride that are alternately doped n-type and p-type. The blocks are electrically connected by strips of a metal like copper.

The connections are made such that when a current is passed through the device, both the n-type electrons and the p-type holes move towards the same side of the device. For example, in figure 6.37 both electrons and holes move to the top of the device. The current, however, is upward in the p-type blocks and downward in the n-type blocks. (Since electrons are negatively charged, their current is in the direction opposite to their motion.) The same current that enters a metal strip from one block leaves the strip again through the other block.

Figure 6.37: Peltier cooling. Top: physical device. Bottom: electron energy spectra of the semiconductor materials. Quantum states filled with electrons are shown in red.

Consider now a metal strip at the top of the device in figure 6.37. Such a strip needs to take in a stream of conduction-band electrons from an n-type semiconductor block A. It must drop the same number of electrons into the valence-band holes coming in from a p-type semiconductor block B to eliminate them. As illustrated by the top arrow between the spectra at the bottom of figure 6.37, this lowers the energy of the electrons. Therefore energy is released, and the top strips get hot.

However, a bottom strip needs to take electrons out of the valence band of a p-type semiconductor B to create the outgoing holes. It needs to put these electrons into the conduction band of an n-type semiconductor A. That requires energy, so the bottom strips lose energy and cool down. You might think of it as evaporative cooling: the bottom strips have to give up their electrons with the highest thermal energy.

The net effect is that the Peltier cooler acts as a heat pump that removes heat from the cold side and adds it to the hot side. It can therefore provide refrigeration at the cold side. At the time of writing, Peltier coolers use a lot more power to operate than a refrigerant-based device of the same cooling capability. However, the device is much simpler, and is therefore more suitable for various small applications. And it can easily regulate temperatures; a simple reversal of the current turns the cold side into the hot side.

Note that while the Peltier device connects p-type and n-type semiconductors, it does not act as a diode. In particular, even in the bottom strips there is no need to raise electrons over the band gap of the semiconductor to create the new electrons and holes. Copper does not have a band gap.

It is true that the bottom strips must take electrons out of the p-type valence band and put them into the n-type conduction band. However, as the spectra at the bottom of figure 6.37 show, the energy needed to do so is much less than the band gap. The reason is that the p-type spectrum is raised relative to the n-type one. That is an effect of the electrostatic potential energies, which are different in the two semiconductors. Even in thermal equilibrium, the spectra are at unequal levels. In particular, in equilibrium the electrostatic potentials adjust so that the chemical potentials, shown as red tick marks in the spectra, line up. The applied external voltage then decreases the energy difference even more.

The analysis of Peltier cooling can be phrased more generally in terms of properties of the materials involved. The “Peltier coefficient” ${\mathscr P}$ of a material is defined as the heat flow produced by an electric current, taken per unit current:

\begin{displaymath}
\fbox{$\displaystyle
{\mathscr P}\equiv \frac{\dot Q}{I}
$} %
\end{displaymath} (6.38)

Here $I$ is the current through the material and $\dot{Q}$ the heat flow it causes. Phrased another way, the Peltier coefficient is the thermal energy carried per unit charge. That gives it SI units of volts.

Now consider the energy balance of a top strip in figure 6.37. An electric current $I_{\rm{AB}}$ flows from material A to material B through the strip. (This current is negative as shown, but that is not important for the general formula.) The current brings along a heat flux $\dot{Q}_{\rm{A}} = {\mathscr P}_{\rm{A}}I_{\rm{AB}}$ from material A that flows into the strip. But a different heat flux $\dot{Q}_{\rm{B}} = {\mathscr P}_{\rm{B}}I_{\rm{AB}}$ leaves the strip through material B. The difference between what comes in and what goes out is what remains inside the strip to heat it:

\begin{displaymath}
\fbox{$\displaystyle
\dot Q = - \left({\mathscr P}_{\rm B} - {\mathscr P}_{\rm A}\right) I_{\rm AB}
$} %
\end{displaymath} (6.39)

This equation is generally valid; A and B do not need to be semiconductors. The difference in material Peltier coefficients is called the Peltier coefficient of the junction.

For the top strips in figure 6.37, $I_{\rm{AB}}$ is negative. Also, as discussed below, the n-type ${\mathscr P}_{\rm{A}}$ will be negative and the p-type ${\mathscr P}_{\rm{B}}$ positive. That makes the net heat flowing into the strip positive, as it should be. Note also that the opposite signs of n-type and p-type Peltier coefficients really help to make the net heat flow as big as possible.
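As a rough numerical illustration of equation (6.39), the sketch below evaluates the heat released in a junction strip. The coefficient values are hypothetical ballparks, not data: they correspond to a Seebeck coefficient of about ±200 μV/K times room temperature, via the Kelvin relation discussed in subsection 6.28.3.

```python
# Heat released in a strip joining material A to material B, eq. (6.39):
#     Qdot = -(P_B - P_A) * I_AB
def junction_heat(P_A, P_B, I_AB):
    """Net heat flow [W] remaining in the junction strip."""
    return -(P_B - P_A) * I_AB

# Assumed ballpark Peltier coefficients [V] for doped semiconductor legs:
P_n = -0.06   # n-type material A: negative coefficient
P_p = +0.06   # p-type material B: positive coefficient
I = -2.0      # A; negative, as for the top strips in figure 6.37

Q_top = junction_heat(P_n, P_p, I)
print(Q_top)  # positive: the top strip heats up
```

Reversing the sign of the current turns the same strip into the cooled side, which is how a Peltier device switches between heating and cooling.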

If there is a temperature gradient in the semiconductors in addition to the current, and there will be, it too will create a heat flow, {A.11}. This heat flow can be found using what is known as Fourier's law. It is bad news, as it removes heat from the hot side and conducts it to the cold side.

A more quantitative understanding of the Peltier effect can be obtained using some ballpark Peltier coefficients. Consider again the spectra in figure 6.37. In the n-type semiconductor, each conduction electron has an energy per unit charge of about

\begin{displaymath}
{\mathscr P}_{n\rm type} \sim
\frac{{\vphantom' E}^{\rm p}_{\rm c} + {\textstyle\frac{3}{2}} k_{\rm B}T - \mu}{-e}
\end{displaymath}

Here $-e$ in the denominator is the charge of the electron, while ${\vphantom' E}^{\rm p}_{\rm{c}}$ in the numerator is the energy at the bottom of the conduction band. It has been assumed that a typical electron in the conduction band has an additional random thermal energy equal to the classical value $\frac32{k_{\rm B}}T$. Further, the chemical potential, or Fermi level, $\mu$ has been taken as the zero level of energy.

The reason for doing the latter has to do with the fact that in thermal equilibrium, all solids in contact have the same chemical potential. That makes the chemical potential a convenient reference level of energy. The idea can be described graphically in terms of the spectra of figure 6.37. In the spectra, the chemical potential is indicated by the red tick marks on the vertical axes. Now consider again the energy change in transferring electrons between the n- and p-type materials. What determines it is how much the n-type electrons are higher in energy than the chemical potential and how much electrons put in the p-type holes are lower than it. (This assumes that the current remains small enough that the chemical potentials in the two semiconductors stay level. Otherwise the theoretical description would become much more difficult.)

As this picture suggests, for the holes in the p-type semiconductor, the energy should be taken to be increasing downwards in the electron spectrum. It takes more energy to create a hole by taking an electron up to the Fermi level if the hole is lower in the spectrum. Therefore the Peltier coefficient of the p-doped semiconductor is

\begin{displaymath}
{\mathscr P}_{p\rm type} \sim
\frac{\mu - {\vphantom' E}^{\rm p}_{\rm v} + {\textstyle\frac{3}{2}} k_{\rm B}T}{e}
\end{displaymath}

where ${\vphantom' E}^{\rm p}_{\rm{v}}$ is the electron energy at the top of the valence band. Because holes act as positively charged particles, the Peltier coefficient of a p-type semiconductor is positive. On the other hand, the Peltier coefficient of an n-type semiconductor is negative because of the negative charge in the denominator.

Note that both formulae are just ballparks. The thermal energy dragged along by a current is not simply the thermal equilibrium distribution of electron energy. The average thermal kinetic energy per current carrier to be used turns out to differ somewhat from $\frac32{k_{\rm B}}T$. The current is also associated with a flow of phonons; their energy should be added to the thermal energy that is carried directly by the electrons or holes, {A.11}. Such issues are far beyond the scope of this book.
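With the above caveats in mind, the two ballpark formulas can be evaluated numerically. In the sketch below, the band-edge positions are assumed illustrative values, not data for any particular material; with energies in eV and charge in units of $e$, the coefficients come out directly in volts.

```python
# Ballpark Peltier coefficients from the two formulas above, with the
# chemical potential mu taken as the zero level of energy.
kB = 8.617e-5   # Boltzmann constant [eV/K]
T = 300.0       # temperature [K]
mu = 0.0        # chemical potential as the zero level of energy [eV]

Ec = 0.10       # assumed conduction band bottom, relative to mu [eV]
Ev = -0.10      # assumed valence band top, relative to mu [eV]

# Energies in eV divided by charge in units of e give volts directly:
P_ntype = (Ec + 1.5 * kB * T - mu) / (-1.0)   # negative for n-type
P_ptype = (mu - Ev + 1.5 * kB * T) / (+1.0)   # positive for p-type

print(P_ntype, P_ptype)  # roughly -0.14 V and +0.14 V
```

Note how the symmetric band edges produce coefficients of equal magnitude and opposite sign, exactly the combination that maximizes the junction heat flow of equation (6.39).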

It is however interesting to compare the above semiconductor ballparks to one for metals:

\begin{displaymath}
{\mathscr P}_{\rm metal}
\sim - \frac{2\pi^2}{9}
\frac{k_{\rm B}T}{{\vphantom' E}^{\rm p}_{\rm F}} \frac{\frac32k_{\rm B}T}{e}
\end{displaymath}

This ballpark comes from assuming the spectrum of a free-electron gas, {A.11}. The final ratio is easily understood as the classical thermal kinetic energy $\frac32{k_{\rm B}}T$ per unit charge $e$. The ratio in front of it is the thermal energy divided by the Fermi energy ${\vphantom' E}^{\rm p}_{\rm{F}}$. As discussed in section 6.10, this fraction is much less than one. Its presence can be understood from the exclusion principle: as illustrated in figure 6.15, only a small fraction of the electrons pick up thermal energy in a metal.

The ballpark above implies that the Peltier coefficient of a metal is very much less than that of a doped semiconductor. It should however be noted that while the ballpark does give the rough order of magnitude of the Peltier coefficients of metals, they tend to be noticeably larger. Worse, there are quite a few metals whose Peltier coefficient is positive, contrary to what the ballpark above suggests.

To some extent, the lower Peltier coefficients of metals are compensated for by their larger electrical conductivity. A nondimensional figure of merit can be defined for thermoelectric materials as, {A.11}:

\begin{displaymath}
\frac{{\mathscr P}^2\sigma}{T\kappa}
\end{displaymath}

where $T$ is a typical operating absolute temperature. This figure of merit shows that a large Peltier coefficient is good, quadratically so, but so is a large electrical conductivity $\sigma$ and a low thermal conductivity $\kappa$. Unfortunately, metals also conduct heat well.
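To see what the figure of merit looks like in practice, the sketch below evaluates it for assumed room-temperature ballparks of a good thermoelectric semiconductor like bismuth telluride; all the numbers are illustrative assumptions, not measured data. Using ${\mathscr P} = {\mathscr S}T$ (the Kelvin relation of subsection 6.28.3), this is the quantity usually quoted as $ZT$.

```python
# Nondimensional thermoelectric figure of merit P^2 sigma / (T kappa).
def figure_of_merit(P, sigma, T, kappa):
    return P**2 * sigma / (T * kappa)

T = 300.0            # operating temperature [K]
S_semi = 200e-6      # assumed Seebeck coefficient [V/K]
P_semi = S_semi * T  # Peltier coefficient [V], via P = S*T
sigma_semi = 1e5     # assumed electrical conductivity [1/(ohm m)]
kappa_semi = 1.5     # assumed thermal conductivity [W/(m K)]

ZT = figure_of_merit(P_semi, sigma_semi, T, kappa_semi)
print(ZT)  # of order one for a good thermoelectric material
```

The quadratic dependence on ${\mathscr P}$ is visible here: halving the assumed Seebeck coefficient would cut the figure of merit by a factor of four.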


Key Points
• In the Peltier effect, a current produces cooling or heating when it passes through the contact area between two solids.

• The heat released is proportional to the current and the difference in Peltier coefficients of the materials.

• Connections between oppositely-doped semiconductors work well.


6.28.2 Seebeck effect

Thermoelectric temperature sensing and power generation can be achieved by what is known as the Seebeck effect. It is in some sense the opposite of the Peltier effect of the previous subsection.

Consider the configuration shown in figure 6.38. Blocks of n-type and p-type doped semiconductors are electrically connected at their tops using a copper strip. Copper strips are also attached to the bottoms of the semiconductor blocks. Unlike for the Peltier device, no external voltage source is attached. In the pure Seebeck effect, the bottom strips are not in electrical contact at all. So there is no current through the device. It is what is called an open-circuit configuration.

Figure 6.38: An example Seebeck voltage generator.

To achieve the Seebeck effect, heat from an external heat source is added to the top copper strip. That heats it up. Heat is allowed to escape from the bottom strips to, say, cooling water. This heat flow pattern is the exact opposite of the one for the Peltier cooler. If heat went out of the strips of your Peltier cooler at the cold side, it would melt your ice cubes.

But the Peltier cooler requires an external voltage to be supplied to keep the device running. The opposite happens for the Seebeck generator of figure 6.38. The device itself turns into an electric power supply. A voltage difference develops spontaneously between the bottom two strips.

That voltage difference can be used to determine the temperature of the top copper strip, assuming that the bottom strips are kept at a known temperature. A device that measures temperatures this way is called a “thermocouple.”

Alternatively, you can extract electrical power from the voltage difference between the two bottom terminals. In that case the Seebeck device acts as a “thermoelectric generator.” Of course, to extract power you need to allow some current to flow. That will reduce the voltage below the pure Seebeck value.

To describe why the device works physically is not that easy. To understand the basic idea, consider an arbitrary point P in the n-type semiconductor, as indicated in figure 6.38. Imagine yourself standing at this point, shrunk down to microscopic dimensions. Due to random heat motion, conduction electrons come at you randomly from both above and below. However, those coming from above are hotter and so they come towards you at a higher speed. Therefore, assuming that all else is the same, there is a net electron current downwards at your location. Of course, that cannot go on, because it moves negative charge down, charging the lower part of the device negative and the top positive. This will create an electric field that slows down the hot electrons going down and speeds up the cold electrons going up. The voltage gradient associated with this electric field is the Seebeck effect, {A.11}.

In the Seebeck effect, an incremental temperature change ${\rm d}T$ in a material causes a corresponding change in voltage ${\rm d}\varphi_\mu$ given by:

\begin{displaymath}
{\rm d}\varphi_\mu = - {\mathscr S}{\rm d}T
\end{displaymath}

The subscript on $\varphi_\mu$ indicates that the intrinsic chemical potential of the material must be included in addition to the electrostatic potential $\varphi$. In other words, $\varphi_\mu$ is the total chemical potential per unit electron charge. The coefficient ${\mathscr S}$ depends on material and temperature.

This coefficient is sometimes called the “Seebeck coefficient.” However, it is usually called the “thermopower” or “thermoelectric power.” These names are much better, because the Seebeck coefficient describes an open-circuit voltage, in which no power is produced. It has units of V/K. It is hilarious to watch the confused faces of those hated nonspecialists when a physicist with a straight face describes something that is not, and cannot be, a power as the thermopower.

The net voltage produced is the integrated total voltage change over the lengths of the two materials. If $T_{\rm{H}}$ is the temperature of the top strip and $T_{\rm{L}}$ that of the bottom ones, the net voltage can be written as:

\begin{displaymath}
\fbox{$\displaystyle
\varphi_{\rm B} - \varphi_{\rm A} =
\int_{T_{\rm L}}^{T_{\rm H}}
({\mathscr S}_{\rm B}-{\mathscr S}_{\rm A}) {\,\rm d}T
$} %
\end{displaymath} (6.40)

This is the voltage that will show up on a voltmeter connected between the bottom strips. Note that there is no need to use the chemical potential $\varphi_\mu$ in this expression: since the bottom strips are both copper and at the same temperature, their intrinsic chemical potentials are identical.

The above equation assumes that the copper strips conduct heat well enough that their temperature is constant (or alternatively, that materials A and B are in direct contact with each other at their top edges and with the voltmeter at their bottom edges). Otherwise you would need to add an integral over the copper.

Note from the above equation that, given the temperature $T_{\rm{L}}$ of the bottom strips, the voltage only depends on the temperature $T_{\rm{H}}$ of the top strip. In terms of figure 6.38, the detailed way that the temperature varies with height is not important, just that the end values are $T_{\rm{H}}$ and $T_{\rm{L}}$. That is great for your thermocouple application, because the voltage that you get only depends on the temperature at the tip of the thermocouple, the one you want to measure. It is not affected by whatever is the detailed temperature distribution in the two leads going to and from the tip. (As long as the material properties stay constant in the leads, that is. The temperature dependence of the Seebeck coefficients is not a problem.)
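The integral in equation (6.40) is easy to evaluate numerically. The sketch below does so for two assumed, purely illustrative Seebeck coefficient models that vary linearly with temperature; real thermocouple work would use tabulated coefficient data instead.

```python
# Open-circuit Seebeck voltage, eq. (6.40): integrate (S_B - S_A) dT
# from T_L to T_H, here by the midpoint rule.
def seebeck_voltage(S_A, S_B, T_L, T_H, n=1000):
    """Net voltage [V] between the two bottom terminals."""
    dT = (T_H - T_L) / n
    total = 0.0
    for i in range(n):
        T = T_L + (i + 0.5) * dT
        total += (S_B(T) - S_A(T)) * dT
    return total

# Assumed linear-in-T Seebeck coefficients [V/K] for the two legs:
S_A = lambda T: -2e-4 - 1e-7 * (T - 300.0)   # n-type leg
S_B = lambda T: +2e-4 + 1e-7 * (T - 300.0)   # p-type leg

V = seebeck_voltage(S_A, S_B, T_L=300.0, T_H=400.0)
print(V)  # a few hundredths of a volt for a 100 K difference
```

Consistent with the text, only the end temperatures enter the result: any intermediate temperature distribution along the legs integrates out.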

Figure 6.39: The Galvani potential jump over the contact surface does not produce a usable voltage.

It is sometimes suggested, even by some that surely know better like [22, p. 14-9], that the Seebeck potential is due to jumps in potential at the contact surfaces. To explain the idea, consider figure 6.39. In this figure, materials A and B have been connected directly in order to simplify the ideas. It turns out that the mean electrostatic potential inside material A immediately before the contact surface with material B is different from the mean electrostatic potential inside material B immediately after the contact surface. The difference is called the Galvani potential. It is due to the charge double layer that exists at the contact surface between different solids. This charge layer develops to ensure that the chemical potentials are the same at both sides of the contact surface. Equality of chemical potentials across contact surfaces is a requirement for thermal equilibrium. Electrostatic potentials can be different.

If you try to measure this Galvani potential directly, like with the bottom voltmeter in figure 6.39, you fail. The reason is that there are also Galvani potential jumps between materials A and B and the leads of your voltmeter. Assume for simplicity that the leads of your voltmeter are both made of copper. Because the chemical potentials are pairwise equal across the contact surfaces, all four chemical potentials are the same, including the two in the voltmeter leads. Therefore, the actual voltmeter can detect no difference between its two leads and gives a zero reading.

Now consider the top voltmeter in figure 6.39. This voltmeter does measure a voltage. Also in this case, the contact surfaces between the leads of the voltmeter and materials A and B are at a different temperature $T_{\rm{L}}$ than the temperature $T_{\rm{H}}$ of the contact surface between materials A and B. The suggestion is therefore sometimes made that changes in the Galvani potentials due to temperature differences produce the measured voltage. That would explain very neatly why the measured voltage only depends on the temperatures of the contact surfaces, not on the detailed temperature distributions along the lengths of the materials.

It may be neat, but unfortunately it is also all wrong. The fact that the dependence on the temperature distribution drops out of the final result is just a mathematical coincidence. As long as the changes in intrinsic chemical potential can be ignored, the Galvani potential jumps still sum to zero, not to the measured potential. After all, in that case the voltage changes over the lengths of the materials are the same as the chemical potential changes. And because they already sum to the measured voltage, there is nothing left for the Galvani jumps. Consider for example the free-electron gas model of metals. While its intrinsic chemical potential does change with temperature, {D.62}, that change is only one third of the potential change produced by the Seebeck coefficient given in addendum {A.11}. Galvani potential changes then sum to only a third of the measured potential. No, there is no partial credit.

Figure 6.40: The Seebeck effect is not directly measurable.

It should also be pointed out that the Seebeck effect of a material is not directly measurable. Figure 6.40 illustrates an attempt to directly measure the Seebeck effect of material A. Unfortunately, the only thing that changes compared to figure 6.39 is that the two leads of the voltmeter take over the place of material B. Unless the two leads are attached to points of equal temperature, they are an active part of the total Seebeck effect measured. (Superconductors do have zero Seebeck coefficient. However, finding superconductors that still are superconductors if they are in thermal contact with real-life temperatures is an obvious issue.)

Kelvin discovered that you can find the Seebeck coefficient ${\mathscr S}$ from the Peltier coefficient ${\mathscr P}$ simply by dividing by the absolute temperature. Unfortunately, the Peltier coefficient is not directly measurable either. Its effect too requires a second material to be present to compare against. It does show, however, that good materials for the Peltier effect are also good materials for the Seebeck effect.

You might wonder where the charges that transfer between the hot and cold sides in the Seebeck effect end up. In thermal equilibrium, the interiors of solids need to stay free of net electric charge, or a current would develop to eliminate the charge difference. But in the Seebeck effect, the solids are not in thermal equilibrium. It is therefore somewhat surprising that the interiors do remain free of net charge. At least, they do if the temperature variations are small enough, {A.11}. So the charges that transfer between hot and cold, and so give rise to the Seebeck potential difference, end up at the surfaces of the solids, not in the interior. Even in the Seebeck effect.


Key Points
• The Seebeck effect produces a usable voltage from temperature differences.

• It requires two different materials in electrical contact to span the temperature difference.

• The voltage is the difference in the integrals of the Seebeck coefficients of the two materials with respect to temperature.

• The Seebeck coefficient is usually called thermopower because it is not power.


6.28.3 Thomson effect

The “Thomson effect,” or “Kelvin heat,” describes the heat release in a material with a current through it. This heat release is directly measurable. That is unlike the Peltier and Seebeck effects, for which only the net effect of two different materials can be measured. Since the Peltier and Seebeck coefficients can be computed from the Thomson one, in principle the Thomson effect allows all three thermoelectric coefficients to be found without involving a second material.

Thomson, who later became Lord Kelvin, showed that the net energy accumulation per unit volume in a bar of material with a current through it can be written as:

\begin{displaymath}
\fbox{$\displaystyle
\dot e = \frac{{\rm d}}{{\rm d}x}\left(\kappa\frac{{\rm d}T}{{\rm d}x}\right)
+ \frac{j^2}{\sigma} - {\mathscr K}j \frac{{\rm d}T}{{\rm d}x}
$} %
\end{displaymath} (6.41)

Here $x$ is the position along the bar, $T$ is the temperature, $j$ is the current per unit area, and $\kappa$ and $\sigma$ are the thermal and electrical conductivities. The first term in the right hand side is the heat accumulation due to Fourier's law of heat conduction. The second term is the Joule heating that keeps your resistance heater working. The final term is the thermoelectric Thomson effect or Kelvin heat. (The term Kelvin effect is not used because it is already in common use for something else.) The coefficient ${\mathscr K}$ is called the “Kelvin coefficient” or “Thomson coefficient.” A derivation from the general equations of thermoelectrics is given in addendum {A.11}.

It may be noted that for devices in which the Thomson effect is important, the figure of merit introduced earlier becomes less meaningful. In such cases, a second nondimensional number based on the Kelvin coefficient will also affect device performance.

The other two thermoelectric coefficients can be computed from the Kelvin one using the Kelvin, or Thomson, relationships {A.11}:

\begin{displaymath}
\fbox{$\displaystyle
\frac{{\rm d}{\mathscr S}}{{\rm d}\ln T} = {\mathscr K}
\qquad
{\mathscr P}= {\mathscr S}T
$} %
\end{displaymath} (6.42)

By integrating ${\mathscr K}$ with respect to $\ln{T}$ you can find the Seebeck coefficient, and from that the Peltier one.

That requires of course that you find the Kelvin coefficient over the complete temperature range. But you only need to do it for one material. As soon as you accurately know the thermoelectric coefficients for one material, you can use that as the reference material to find the Peltier and Seebeck coefficients of every other material. Lead is typically used as the reference material, as it has relatively low thermoelectric coefficients.
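The procedure can be sketched numerically. Given a measured Kelvin coefficient ${\mathscr K}(T)$ and a starting Seebeck value, the first relation of (6.42) is integrated in $\ln T$ to recover ${\mathscr S}(T)$, and the second relation then gives ${\mathscr P} = {\mathscr S}T$. Both the coefficient model and the starting value below are assumed for illustration only.

```python
import math

# First Kelvin relation: dS/d(ln T) = K. Integrate from T_ref to T
# by the midpoint rule in the variable ln T.
def seebeck_from_kelvin(K, T_ref, S_ref, T, n=1000):
    """Seebeck coefficient [V/K] at temperature T."""
    dlnT = (math.log(T) - math.log(T_ref)) / n
    S = S_ref
    for i in range(n):
        lnT = math.log(T_ref) + (i + 0.5) * dlnT
        S += K(math.exp(lnT)) * dlnT
    return S

K = lambda T: 1e-6 * (T / 300.0)   # assumed Kelvin coefficient model [V/K]
S300 = 2e-4                        # assumed Seebeck value at 300 K [V/K]

S400 = seebeck_from_kelvin(K, 300.0, S300, 400.0)
P400 = S400 * 400.0                # second Kelvin relation: P = S*T
print(S400, P400)
```

Integrating in $\ln T$ rather than $T$ directly mirrors the form of the first Kelvin relation; with a tabulated Kelvin coefficient for a reference material like lead, the same loop would run over the measured data points.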

Of course, if it turns out that the data on your reference material are not as accurate as you thought they were, it would be very bad news. It will affect the accuracy of the thermoelectric coefficients of every other material that you found using this reference material. A prediction on whether such a thing was likely to happen for lead could be derived from what is known as Murphy's law.


Key Points
• The Thomson effect, or Kelvin heat, describes the internal heating in a material with a current going through it. More precisely, it describes the part of this heating that is due to the interaction of the current with the temperature changes.

• Unlike the Peltier and Seebeck coefficients, the Kelvin (Thomson) coefficient can be measured without involving a second material.

• The Kelvin (Thomson) relations allow you to compute the Peltier and Seebeck coefficients from the Kelvin one.