Notations

The below are the simplest possible descriptions of various symbols, just to help you keep reading if you do not remember/know what they stand for.

Watch it. There are so many ad hoc usages of symbols that some will have been overlooked here. Always use common sense first in guessing what a symbol means in a given context.

$\cdot$
A dot might indicate multiplication, and also many more prosaic things (punctuation signs, decimal points, ...).

$\times$
Multiplication symbol. May indicate:

$!$
Might be used to indicate a factorial. Example: $5!=1\times2\times3\times4\times5=120$.

The function that generalizes $n!$ to noninteger values of $n$ is called the gamma function; $n!=\Gamma(n+1)$. The gamma function generalization is due to, who else, Euler. (However, the fact that $n!=\Gamma(n+1)$ instead of $n!=\Gamma(n)$ is due to the idiocy of Legendre.) In Legendre-resistant notation,

\begin{displaymath}
n!=\int_0^{\infty} t^n e^{-t} { \rm d}{t}
\end{displaymath}

Straightforward integration shows that $0!$ is 1 as it should be, and integration by parts shows that $(n+1)!=(n+1)n!$, which ensures that the integral also produces the correct value of $n!$ for every integer value of $n$ above 0. The integral, however, exists for any real value of $n$ above $-1$, not just integers. The values of the integral are always positive, tending to positive infinity both for $n\downarrow-1$ (because the integral then blows up at small values of $t$) and for $n\uparrow\infty$ (because the integral then blows up at medium-large values of $t$). In particular, Stirling’s formula says that for large positive $n$, $n!$ can be approximated as

\begin{displaymath}
n! \sim \sqrt{2\pi n} n^n e^{-n} \left[1 + \ldots\right]
\end{displaymath}

where the value indicated by the dots becomes negligibly small for large $n$. The function $n!$ can be extended further to any complex value of $n$, except the negative integer values of $n$, where $n!$ is infinite, but is then no longer positive. Euler’s integral can be done for $n=-\frac12$ by making the change of variables $\sqrt{t}=u$, producing the integral $\int_0^\infty2e^{-u^2}{ \rm d}{u}$, or $\int_{-\infty}^{\infty}e^{-u^2}{ \rm d}{u}$, which equals $\sqrt{\int_{-\infty}^{\infty}e^{-x^2}{ \rm d}{x}\int_{-\infty}^{\infty}e^{-y^2}{ \rm d}{y}}$ and the integral under the square root can be done analytically using polar coordinates. The result is that

\begin{displaymath}
\left(-\frac12\right)! = \int_{-\infty}^{\infty}e^{-u^2}{ \rm d}{u} = \sqrt{\pi}
\end{displaymath}

To get $\frac12!$, multiply by $\frac12$, since $n!=n(n-1)!$.
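
For the record, here is a minimal numerical check of these factorial facts, using Python’s standard math module (the script itself is not part of the original text):

\begin{verbatim}
from math import gamma, factorial, sqrt, pi, e

# n! = Gamma(n+1) for whole numbers n
for n in range(6):
    assert factorial(n) == round(gamma(n + 1))

# (-1/2)! = Gamma(1/2) = sqrt(pi), and (1/2)! = (1/2) times (-1/2)!
print(gamma(0.5), sqrt(pi))         # both about 1.7724538509
print(gamma(1.5), 0.5 * sqrt(pi))   # both about 0.8862269255

# Stirling's formula: n! ~ sqrt(2 pi n) n^n e^-n for large n
n = 50
stirling = sqrt(2 * pi * n) * n**n * e**(-n)
print(stirling / factorial(n))      # about 0.998, approaching 1 for larger n
\end{verbatim}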

$\vert$
May indicate:

$\sum$
Summation symbol. Example: if in three dimensional space a vector $\vec f$ has components $f_1=2$, $f_2=1$, $f_3=4$, then $\sum_{\mbox{\scriptsize all }i} f_i$ stands for $2+1+4=7$.

$\int$
Integration symbol, the continuous version of the summation symbol. For example,

\begin{displaymath}
\int_{\mbox{\scriptsize all }x} f(x){ \rm d}x
\end{displaymath}

is the summation of $f(x){ \rm d}x$ over all little fragments ${\rm d}x$ that make up the entire $x$-range.
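
As a rough numerical sketch (not part of the original text), summing $f(x){ \rm d}x$ over many little fragments does indeed approach the integral:

\begin{verbatim}
# Approximate the integral of f(x) = x**2 over 0 <= x <= 1 by summing
# f(x) dx over many little fragments dx; the exact value is 1/3.
f = lambda x: x**2
n = 100000
dx = 1.0 / n
total = sum(f(i * dx) * dx for i in range(n))
print(total)   # about 0.33333, approaching 1/3 as dx shrinks
\end{verbatim}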

$\to$
May indicate:

$\vec{\phantom{a}}$
Vector symbol. An arrow above a letter indicates it is a vector. A vector is a quantity that requires more than one number to be characterized. Typical vectors in physics include position $\vec
r$, velocity $\vec v$, linear momentum $\vec p$, acceleration $\vec
a$, force $\vec F$, moment $\vec M$, etcetera.

$'$
May indicate:

$\nabla$
The spatial differentiation operator nabla. In Cartesian coordinates:

\begin{displaymath}
\nabla \equiv
\left(
\frac{\partial}{\partial x},
\frac{\partial}{\partial y},
\frac{\partial}{\partial z}
\right)
=
{\hat\imath}\frac{\partial}{\partial x} +
{\hat\jmath}\frac{\partial}{\partial y} +
{\hat k}\frac{\partial}{\partial z}
\end{displaymath}

Nabla can be applied to a scalar function $f$ in which case it gives a vector of partial derivatives called the gradient of the function:

\begin{displaymath}
{\rm grad} f = \nabla f =
{\hat\imath}\frac{\partial f}{\partial x} +
{\hat\jmath}\frac{\partial f}{\partial y} +
{\hat k}\frac{\partial f}{\partial z}
\end{displaymath}

Nabla can be applied to a vector in a dot product multiplication, in which case it gives a scalar function called the divergence of the vector:

\begin{displaymath}
{\rm div} \vec v = \nabla\cdot\vec v =
\frac{\partial v_x}{\partial x} +
\frac{\partial v_y}{\partial y} +
\frac{\partial v_z}{\partial z}
\end{displaymath}

or in index notation

\begin{displaymath}
{\rm div} \vec v = \nabla\cdot\vec v =
\sum_{i=1}^3 \frac{\partial v_i}{\partial x_i}
\end{displaymath}

Nabla can also be applied to a vector in a vectorial product multiplication, in which case it gives a vector function called the curl or rot of the vector. In index notation, the $i$-th component of this vector is

\begin{displaymath}
\left({\rm curl} \vec v\right)_i =
\left({\rm rot} \vec v\right)_i =
\frac{\partial v_{{\overline{\overline{\imath}}}}}{\partial x_{{\overline{\imath}}}} -
\frac{\partial v_{{\overline{\imath}}}}{\partial x_{{\overline{\overline{\imath}}}}}
\end{displaymath}

where ${\overline{\imath}}$ is the index following $i$ in the sequence 123123..., and ${\overline{\overline{\imath}}}$ the one preceding it (or the second following it).

The operator $\nabla^2$ is called the Laplacian. In Cartesian coordinates:

\begin{displaymath}
\nabla^2 \equiv
\frac{\partial^2}{\partial x^2}+
\frac{\partial^2}{\partial y^2}+
\frac{\partial^2}{\partial z^2}
\end{displaymath}

In non-Cartesian coordinates, don’t guess; look these operators up in a table book.
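
In Cartesian coordinates these definitions are easy to check symbolically. The following sketch does so with sympy; the chosen fields $f$ and $\vec v$ are arbitrary illustrations, not taken from this text:

\begin{verbatim}
import sympy as sp

x, y, z = sp.symbols('x y z')

# A scalar field f and a vector field v = (vx, vy, vz) to try things on.
f = x**2 * y * z
v = (x * y, y * z, z * x)

# Gradient: the vector of partial derivatives of f.
grad_f = [sp.diff(f, s) for s in (x, y, z)]

# Divergence: the sum of the partial derivatives dv_i/dx_i.
div_v = sum(sp.diff(vi, s) for vi, s in zip(v, (x, y, z)))

# Curl, written out per component from the index formula above.
curl_v = [sp.diff(v[2], y) - sp.diff(v[1], z),
          sp.diff(v[0], z) - sp.diff(v[2], x),
          sp.diff(v[1], x) - sp.diff(v[0], y)]

# Laplacian: the sum of the second partial derivatives of f.
lap_f = sum(sp.diff(f, s, 2) for s in (x, y, z))

print(grad_f)   # [2*x*y*z, x**2*z, x**2*y]
print(div_v)    # x + y + z
print(curl_v)   # [-y, -z, -x]
print(lap_f)    # 2*y*z
\end{verbatim}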

$^*$
A superscript star normally indicates a complex conjugate. In the complex conjugate of a number, every ${\rm i}$ is changed into a $-{\rm i}$.

$<$
Less than.

$>$
Greater than.

$\equiv$
Emphatic equals sign. Typically means “by definition equal” or “everywhere equal.”

$\sim$
Indicates approximately equal. Normally the approximation applies when something is small or large. Read it as “is approximately equal to.”

$\propto$
Proportional to. The two sides are equal except for some unknown constant factor.

$\Gamma$
(Gamma) May indicate:

$\Delta$
(capital delta) May indicate:

$\delta$
(delta) May indicate:

$\partial$
(partial) Indicates a vanishingly small change or interval of the following variable. For example, $\partial f/\partial x$ is the ratio of a vanishingly small change in function $f$ divided by the vanishingly small change in variable $x$ that causes this change in $f$. Such ratios define derivatives, in this case the partial derivative of $f$ with respect to $x$.

$\varepsilon$
(variant of epsilon) May indicate:

$\eta$
(eta) May be used to indicate a $y$-position.

$\Theta$
(capital theta) Used in this book to indicate some function of $\theta$ to be determined.

$\theta$
(theta) May indicate:

$\vartheta$
(variant of theta) An alternate symbol for $\theta$.

$\lambda$
(lambda) May indicate:

$\xi$
(xi) May indicate:

$\pi$
(pi) May indicate:

$\rho$
(rho) May indicate:

$\tau$
(tau) May indicate:

$\Phi$
(capital phi) May indicate:

$\phi$
(phi) May indicate:

$\varphi$
(variant of phi) May indicate:

$\omega$
(omega) May indicate:

$A$
May indicate:

$a$
May indicate:

absolute
May indicate:

adjoint
The adjoint $A^H$ or $A^\dagger$ of a matrix is the complex-conjugate transpose of the matrix.

Alternatively, it is the matrix you get if you take it to the other side of an inner product. (While keeping the value of the inner product the same regardless of whatever two vectors or functions may be involved.)

“Hermitian” matrices are “self-adjoint;” they are equal to their adjoint. “Skew-Hermitian” matrices are the negative of their adjoint.

“Unitary” matrices are the inverse of their adjoint. Unitary matrices generalize rotations and reflections of vectors. Unitary operators preserve inner products.

Fourier transforms are unitary operators on account of the Parseval equality that says that inner products are preserved.
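
As a concrete numpy-based sketch (the matrices and vectors below are arbitrary examples, not part of the original entry):

\begin{verbatim}
import numpy as np

A = np.array([[1 + 2j, 3j],
              [4, 5 - 1j]])

# The adjoint (Hermitian conjugate) is the complex-conjugate transpose.
A_H = A.conj().T

# Taking a matrix to the other side of the complex inner product:
# (A v, w) equals (v, A^H w), where (a, b) = sum of conj(a_i) b_i.
v = np.array([1 + 1j, 2])
w = np.array([0.5, -1j])
print(np.allclose(np.vdot(A @ v, w), np.vdot(v, A_H @ w)))   # True

# A unitary example: a rotation matrix (real, so its adjoint is its
# transpose); its adjoint is indeed its inverse.
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(U.conj().T @ U, np.eye(2)))                # True
\end{verbatim}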

angle
According to trigonometry, if the length of a segment of a circle is divided by its radius, it gives the total angular extent of the circle segment. More precisely, it gives the angle, in radians, between the line from the center to the start of the circle segment and the line from the center to the end of the segment. The generalization to three dimensions is called the “solid angle;” the total solid angle over which a segment of a spherical surface extends, measured from the center of the sphere, is the area of that segment divided by the square of the radius of the sphere.

$B$
May indicate:

$b$
May indicate:

basis
A basis is a minimal set of vectors or functions that you can write all other vectors or functions in terms of. For example, the unit vectors ${\hat\imath}$, ${\hat\jmath}$, and ${\hat k}$ are a basis for normal three-dimensional space. Every three-dimensional vector can be written as a linear combination of the three.

$C$
May indicate:

Cauchy-Schwarz inequality
The Cauchy-Schwarz inequality describes a limitation on the magnitude of inner products. In particular, it says that for any vectors $\vec v$ and $\vec w$

\begin{displaymath}
\vert\vec v^H \vec w\vert \le \vert\vec v\vert \vert\vec w\vert
\end{displaymath}

For example, if $\vec v$ and $\vec w$ are real vectors, the inner product is the dot product and we have

\begin{displaymath}
\vec v\cdot \vec w = \vert\vec v\vert \vert\vec w\vert\cos\theta
\end{displaymath}

where $\vert\vec v\vert$ is the length of vector $\vec v$ and $\vert\vec w\vert$ that of $\vec w$, and $\theta$ is the angle between the two vectors. Since a cosine is no larger than one in magnitude, the Cauchy-Schwarz inequality is therefore true for real vectors.
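
A quick numerical illustration of the inequality, with arbitrarily chosen complex vectors (not part of the original entry):

\begin{verbatim}
import numpy as np

v = np.array([1 + 2j, -1j, 3])
w = np.array([2, 1 - 1j, -1 + 0.5j])

lhs = abs(np.vdot(v, w))                        # |v^H w|
rhs = np.linalg.norm(v) * np.linalg.norm(w)     # |v| |w|
print(lhs <= rhs)                               # True
\end{verbatim}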

$\cos$
The cosine function, a periodic function oscillating between 1 and -1 as shown in [2, pp. 40-...].

curl
The curl of a vector field $\vec{v}$ is defined as ${\rm {curl}}\;\vec{v}={\rm {rot}}\;\vec{v}=\nabla\times\vec{v}$.

${\rm d}$
Indicates a vanishingly small change or interval of the following variable. For example, ${\rm d}x$ can be thought of as a small segment of the $x$-axis.

derivative
A derivative of a function is the ratio of a vanishingly small change in a function divided by the vanishingly small change in the independent variable that causes the change in the function. The derivative of $f(x)$ with respect to $x$ is written as ${\rm d}f/{\rm d}x$, or also simply as $f'$. Note that the derivative of function $f(x)$ is again a function of $x$: a ratio $f'$ can be found at every point $x$. The derivative of a function $f(x,y,z)$ with respect to $x$ is written as $\partial{f}/\partial{x}$ to indicate that there are other variables, $y$ and $z$, that do not vary.

determinant
The determinant of a square matrix $A$ is a single number indicated by $\vert A\vert$. If this number is nonzero, $A\vec v$ can be any vector $\vec w$ for the right choice of $\vec v$. Conversely, if the determinant is zero, $A\vec v$ can only produce a very limited set of vectors. But if it can produce a vector $\vec w$, it can do so for multiple vectors $\vec v$.

There is a recursive algorithm that allows you to compute determinants of increasingly bigger matrices in terms of determinants of smaller matrices. For a $1\times 1$ matrix consisting of a single number, the determinant is simply that number:

\begin{displaymath}
\left\vert a_{11} \right\vert = a_{11}
\end{displaymath}

(This determinant should not be confused with the absolute value of the number, which is written the same way. Since we normally do not deal with $1\times 1$ matrices, there is normally no confusion.) For $2\times 2$ matrices, the determinant can be written in terms of $1\times 1$ determinants:

\begin{displaymath}
\left\vert
\begin{array}{ll}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{array}
\right\vert
=
a_{11} \left\vert a_{22} \right\vert
- a_{12} \left\vert a_{21} \right\vert
\end{displaymath}

so the determinant is $a_{11}a_{22}-a_{12}a_{21}$ in short. For $3\times 3$ matrices, we have

\begin{eqnarray*}
\lefteqn{
\left\vert
\begin{array}{lll}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{array}
\right\vert
=} \\
&& a_{11}
\left\vert
\begin{array}{ll}
a_{22} & a_{23} \\
a_{32} & a_{33}
\end{array}
\right\vert
- a_{12}
\left\vert
\begin{array}{ll}
a_{21} & a_{23} \\
a_{31} & a_{33}
\end{array}
\right\vert
+ a_{13}
\left\vert
\begin{array}{ll}
a_{21} & a_{22} \\
a_{31} & a_{32}
\end{array}
\right\vert
\end{eqnarray*}

and we already know how to work out those $2\times 2$ determinants, so we now know how to do $3\times 3$ determinants. Written out fully:

\begin{displaymath}
a_{11}(a_{22}a_{33}-a_{23}a_{32})
-a_{12}(a_{21}a_{33}-a_{23}a_{31})
+a_{13}(a_{21}a_{32}-a_{22}a_{31})
\end{displaymath}

For $4\times 4$ determinants,

\begin{eqnarray*}
\lefteqn{
\left\vert
\begin{array}{llll}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34} \\
a_{41} & a_{42} & a_{43} & a_{44}
\end{array}
\right\vert
=} \\
&& a_{11}
\left\vert
\begin{array}{lll}
a_{22} & a_{23} & a_{24} \\
a_{32} & a_{33} & a_{34} \\
a_{42} & a_{43} & a_{44}
\end{array}
\right\vert
- a_{12}
\left\vert
\begin{array}{lll}
a_{21} & a_{23} & a_{24} \\
a_{31} & a_{33} & a_{34} \\
a_{41} & a_{43} & a_{44}
\end{array}
\right\vert
\\
&& {} + a_{13}
\left\vert
\begin{array}{lll}
a_{21} & a_{22} & a_{24} \\
a_{31} & a_{32} & a_{34} \\
a_{41} & a_{42} & a_{44}
\end{array}
\right\vert
- a_{14}
\left\vert
\begin{array}{lll}
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33} \\
a_{41} & a_{42} & a_{43}
\end{array}
\right\vert
\end{eqnarray*}

Etcetera. Note the alternating sign pattern of the terms.

As you might infer from the above, computing a good size determinant takes a large amount of work. Fortunately, it is possible to simplify the matrix to put zeros in suitable locations, and that can cut down the work of finding the determinant greatly. We are allowed to use the following manipulations without seriously affecting the computed determinant:

  1. We may “transpose” the matrix, i.e., change its columns into its rows.
  2. We can create zeros in a row by subtracting a suitable multiple of another row.
  3. We may also swap rows, as long as we remember that each time that we swap two rows, it will flip over the sign of the computed determinant.
  4. We can also multiply an entire row by a constant, but that will multiply the computed determinant by the same constant.
Applying these tricks in a systematic way, called “Gaussian elimination” or “reduction to lower triangular form”, we can eliminate all matrix coefficients $a_{ij}$ for which $j$ is greater than $i$, and that makes evaluating the determinant pretty much trivial.
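
A minimal sketch of that procedure in Python, checked against numpy (the function below is an illustration of the tricks above, not a quote from this text; it reduces the matrix to triangular form using row subtractions and sign-flipping row swaps):

\begin{verbatim}
import numpy as np

def determinant(matrix):
    # Determinant by Gaussian elimination: create zeros by subtracting
    # multiples of rows, and flip the sign for every row swap.
    a = np.array(matrix, dtype=float)
    n = len(a)
    sign = 1.0
    for i in range(n):
        # Swap in the row with the biggest entry in column i (pivoting).
        p = i + np.argmax(abs(a[i:, i]))
        if a[p, i] == 0.0:
            return 0.0        # nothing nonzero left: the determinant is zero
        if p != i:
            a[[i, p]] = a[[p, i]]
            sign = -sign      # each row swap flips the sign
        # Subtract multiples of row i to zero out the entries below it.
        for j in range(i + 1, n):
            a[j, i:] -= (a[j, i] / a[i, i]) * a[i, i:]
    # For a triangular matrix the determinant is the product of the diagonal.
    return sign * np.prod(np.diag(a))

A = [[2, 1, 3], [0, 4, 1], [5, 2, 0]]
print(determinant(A), np.linalg.det(A))   # both about -59.0
\end{verbatim}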

div(ergence)
The divergence of a vector field $\vec{v}$ is defined as ${\rm {div}}\;\vec{v}=\nabla\cdot\vec{v}$.

$e$
May indicate:

$e^{{\rm i}a x}$
Assuming that $a$ is an ordinary real number, and $x$ a real variable, $e^{{\rm i}a x}$ is a complex function of magnitude one. The derivative of $e^{{\rm i}a x}$ with respect to $x$ is ${\rm i}ae^{{\rm i}a x}$.
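
These statements follow from the Euler formula; the identities below are standard and restated here only for reference:

\begin{displaymath}
e^{{\rm i}a x} = \cos(ax) + {\rm i}\sin(ax)
\qquad
\left\vert e^{{\rm i}a x}\right\vert = \sqrt{\cos^2(ax)+\sin^2(ax)} = 1
\qquad
\frac{{\rm d}}{{\rm d}x} e^{{\rm i}a x} = {\rm i}a e^{{\rm i}a x}
\end{displaymath}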

eigenvector
A vector $\vec v$ is an eigenvector of a matrix $A$ if $\vec v$ is nonzero and $A\vec v=\lambda\vec v$ for some number $\lambda$ called the corresponding eigenvalue.

exponential function
A function of the form $e^{\ldots}$, also written as $\exp(\ldots)$. See function and $e$.

$F$
May indicate:

$f$
May indicate:

function
A mathematical object that associates values with other values. A function $f(x)$ associates every value of $x$ with a value $f$. For example, the function $f(x)=x^2$ associates $x=0$ with $f=0$, $x=\frac12$ with $f=\frac14$, $x=1$ with $f=1$, $x=2$ with $f=4$, $x=3$ with $f=9$, and more generally, any arbitrary value of $x$ with the square of that value $x^2$. Similarly, function $f(x)=x^3$ associates any arbitrary $x$ with its cube $x^3$, $f(x)=\sin(x)$ associates any arbitrary $x$ with the sine of that value, etcetera.

One way of thinking of a function is as a procedure that allows you, whenever given a number, to compute another number.

functional
A functional associates entire functions with single numbers. For example, the expectation energy is mathematically a functional: it associates any arbitrary wave function with a number: the value of the expectation energy if physics is described by that wave function.

$g$
May indicate:

grad(ient)
The gradient of a scalar $f$ is defined as ${\rm {grad}}\;f=\nabla{f}$.

$\Im$
The imaginary part of a complex number. If $c=c_r+{{\rm i}}c_i$ with $c_r$ and $c_i$ real numbers, then $\Im(c)=c_i$. Note that $c-c^*=2{\rm i}\Im(c)$.

$i$
May indicate: Not to be confused with ${\rm i}$.

${\rm i}$
The standard square root of minus one: ${\rm i}=\sqrt{-1}$, ${\rm i}^2 = -1$, $1/{\rm i}=-{\rm i}$, ${\rm i}^*=-{\rm i}$.

index notation
A more concise and powerful way of writing vector and matrix components by using a numerical index to indicate the components. For Cartesian coordinates, we might number the coordinates $x$ as 1, $y$ as 2, and $z$ as 3. In that case, a sum like $v_x+v_y+v_z$ can be more concisely written as $\sum_i v_i$. And a statement like $v_x\ne0,v_y\ne0,v_z\ne0$ can be more compactly written as $v_i\ne0$. To really see how it simplifies the notations, have a look at the matrix entry. (And that one shows only 2 by 2 matrices. Just imagine 100 by 100 matrices.)

iff
Emphatic “if.” Should be read as “if and only if.”

integer
Integer numbers are the whole numbers: $\ldots,-2,-1,0,1,2,3,4,\ldots$.

inverse
(Of matrices.) If a matrix $A$ converts a vector $\vec v$ into a vector $\vec w$, then the inverse of the matrix, $A^{-1}$, converts $\vec w$ back into $\vec v$.

In other words, $A^{-1} A = A A^{-1} = I$ with $I$ the unit, or identity, matrix.

The inverse of a matrix only exists if the matrix is square and has nonzero determinant.

$j$
May indicate:

$k$
May indicate:

$l$
May indicate:

$\ell$
May indicate:

$\lim$
Indicates the final result of an approaching process. $\lim_{\varepsilon\to 0}$ indicates for practical purposes the value of the following expression when $\varepsilon$ is extremely small.

linear combination
A very generic concept indicating sums of objects times coefficients. For example, a position vector ${\skew0\vec r}$ is the linear combination $x{\hat\imath}+y{\hat\jmath}+z{\hat k}$ with the objects the unit vectors ${\hat\imath}$, ${\hat\jmath}$, and ${\hat k}$ and the coefficients the position coordinates $x$, $y$, and $z$.

matrix
A table of numbers.

As a simple example, a two-dimensional matrix $A$ is a table of four numbers called $a_{11}$, $a_{12}$, $a_{21}$, and $a_{22}$:

\begin{displaymath}
\left(
\begin{array}{ll}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{array}
\right)
\end{displaymath}

unlike a two-dimensional (ket) vector $\vec v$, which would consist of only two numbers $v_1$ and $v_2$ arranged in a column:

\begin{displaymath}
\left(
\begin{array}{l}
v_1 \\
v_2
\end{array}
\right)
\end{displaymath}

(Such a vector can be seen as a “rectangular matrix” of size $2\times1$, but let’s not get into that.)

In index notation, a matrix $A$ is a set of numbers $\{a_{ij}\}$ indexed by two indices. The first index $i$ is the row number, the second index $j$ is the column number. A matrix turns a vector $\vec
v$ into another vector $\vec w$ according to the recipe

\begin{displaymath}
w_i = \sum_{\mbox{{\scriptsize all }}j} a_{ij} v_j \quad \mbox{for all $i$}
\end{displaymath}

where $v_j$ stands for “the $j$-th component of vector $\vec
v$,” and $w_i$ for “the $i$-th component of vector $\vec w$.”

As an example, the product of $A$ and $\vec v$ above is by definition

\begin{displaymath}
\left(
\begin{array}{ll}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{array}
\right)
\left(
\begin{array}{l}
v_1 \\
v_2
\end{array}
\right)
=
\left(
\begin{array}{l}
a_{11} v_1 + a_{12} v_2 \\
a_{21} v_1 + a_{22} v_2
\end{array}
\right)
\end{displaymath}

which is another two-dimensional ket vector.

Note that in matrix multiplications like the example above, in geometric terms we take dot products between the rows of the first factor and the column of the second factor.

To multiply two matrices together, just think of the columns of the second matrix as separate vectors. For example:

\begin{displaymath}
\left(
\begin{array}{ll}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{array}
\right)
\left(
\begin{array}{ll}
b_{11} & b_{12} \\
b_{21} & b_{22}
\end{array}
\right)
=
\left(
\begin{array}{ll}
a_{11} b_{11} + a_{12} b_{21} & a_{11} b_{12} + a_{12} b_{22} \\
a_{21} b_{11} + a_{22} b_{21} & a_{21} b_{12} + a_{22} b_{22}
\end{array}
\right)
\end{displaymath}

which is another two-dimensional matrix. In index notation, the $ij$ component of the product matrix has value $\sum_k a_{ik}b_{kj}$.
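
A small numpy sketch of both recipes, with arbitrary example numbers (not taken from the text):

\begin{verbatim}
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])
v = np.array([1, -1])

# Matrix times vector: w_i is the sum over j of a_ij v_j.
print(A @ v)             # [-1 -1]

# Matrix times matrix: the ij component is the sum over k of a_ik b_kj.
print(A @ B)             # [[19 22]
                         #  [43 50]]

# The same products written out as explicit index sums:
w_check = [sum(A[i, j] * v[j] for j in range(2)) for i in range(2)]
C_check = [[sum(A[i, k] * B[k, j] for k in range(2)) for j in range(2)]
           for i in range(2)]
print(w_check, C_check)  # [-1, -1] [[19, 22], [43, 50]]
\end{verbatim}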

The zero matrix is like the number zero; it does not change a matrix it is added to and turns whatever it is multiplied with into zero. A zero matrix is zero everywhere. In two dimensions:

\begin{displaymath}
\left(
\begin{array}{ll}
0 & 0 \\
0 & 0
\end{array}
\right)
\end{displaymath}

A unit matrix is the equivalent of the number one for matrices; it does not change the quantity it is multiplied with. A unit matrix is one on its “main diagonal” and zero elsewhere. The 2 by 2 unit matrix is:

\begin{displaymath}
\left(
\begin{array}{ll}
1 & 0 \\
0 & 1
\end{array}
\right)
\end{displaymath}

More generally the coefficients, $\{\delta_{ij}\}$, of a unit matrix are one if $i=j$ and zero otherwise.

The transpose of a matrix $A$, $A^T$, is what you get if you switch the two indices. Graphically, it turns its rows into its columns and vice versa. The Hermitian “adjoint” $A^H$ is what you get if you switch the two indices and then take the complex conjugate of every element. If you want to take a matrix to the other side of an inner product, you will need to change it to its Hermitian adjoint. “Hermitian matrices” are equal to their Hermitian adjoint, so this does nothing for them.

See also “determinant” and “eigenvector.”

$M$
May indicate:

$m$
May indicate:

$n$
May indicate: and maybe some other stuff.

natural
Natural numbers are the numbers: $1,2,3,4,\ldots$.

normal
A normal operator or matrix is one that has orthonormal eigenfunctions or eigenvectors. Since eigenvectors are not orthonormal in general, a normal operator or matrix is abnormal! Normal matrices are matrices that commute with their adjoint.

opposite
The opposite of a number $a$ is $-a$. In other words, it is the additive inverse.

perpendicular bisector
For two given points $P$ and $Q$, the perpendicular bisector consists of all points $R$ that are equally far from $P$ as they are from $Q$. In two dimensions, the perpendicular bisector is the line that passes through the point exactly half way in between $P$ and $Q$, and that is orthogonal to the line connecting $P$ and $Q$. In three dimensions, the perpendicular bisector is the plane that passes through the point exactly half way in between $P$ and $Q$, and that is orthogonal to the line connecting $P$ and $Q$. In vector notation, the perpendicular bisector of points $P$ and $Q$ is all points $R$ whose radius vector ${\skew0\vec r}$ satisfies the equation:

\begin{displaymath}
({\skew0\vec r}-{\skew0\vec r}_P)\cdot({\skew0\vec r}_Q-{\skew0\vec r}_P)
= {\textstyle\frac{1}{2}}
({\skew0\vec r}_Q-{\skew0\vec r}_P)\cdot({\skew0\vec r}_Q-{\skew0\vec r}_P)
\end{displaymath}

(Note that the halfway point ${\skew0\vec r}-{\skew0\vec r}_P={\textstyle\frac{1}{2}}({\skew0\vec r}_Q-{\skew0\vec r}_P)$ is included in this formula, as is the half way point plus any vector that is normal to $({\skew0\vec r}_Q-{\skew0\vec r}_P)$.)

phase angle
Any complex number can be written in “polar form” as $c=\vert c\vert e^{{\rm i}\alpha}$ where both the magnitude $\vert c\vert$ and the phase angle $\alpha$ are real numbers. Note that when the phase angle varies from zero to $2\pi$, the complex number $c$ varies from positive real to positive imaginary to negative real to negative imaginary and back to positive real. When the complex number is plotted in the complex plane, the phase angle is the direction of the number relative to the origin. The phase angle $\alpha$ is often called the argument, but so is about everything else in mathematics, so that is not very helpful.

In complex time-dependent waves of the form $e^{{\rm i}({\omega}t-\phi)}$, and its real equivalent $\cos({\omega}t-\phi)$, the phase angle $\phi$ gives the angular argument of the wave at time zero.
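
A short numerical illustration using Python’s standard cmath module (the example number is arbitrary):

\begin{verbatim}
import cmath

c = 3 + 4j
print(abs(c))                              # magnitude |c| = 5.0
alpha = cmath.phase(c)                     # phase angle, about 0.9273 radians

# Polar form: c = |c| e^{i alpha} reproduces the original number.
print(abs(c) * cmath.exp(1j * alpha))      # (3+4j), up to rounding
\end{verbatim}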

$q$
May indicate:

$R$
May indicate:

$\Re$
The real part of a complex number. If $c=c_r+{{\rm i}}c_i$ with $c_r$ and $c_i$ real numbers, then $\Re(c)=c_r$. Note that $c+c^*=2\Re(c)$.

$r$
May indicate:

$\vec r$
The position vector. In Cartesian coordinates $(x,y,z)$ or $x{\hat\imath}+y{\hat\jmath}+z{\hat k}$. In spherical coordinates $r\hat\imath_r$. Its three Cartesian components may be indicated by $r_1,r_2,r_3$ or by $x,y,z$ or by $x_1,x_2,x_3$.

reciprocal
The reciprocal of a number $a$ is $1/a$. In other words, it is the multiplicative inverse.

rot
The rot of a vector $\vec{v}$ is defined as ${\rm {curl}}\;\vec{v}={\rm {rot}}\;\vec{v}=\nabla\times\vec{v}$.

scalar
A quantity characterized by a single number.

$\sin$
The sine function, a periodic function oscillating between 1 and -1 as shown in [2, pp. 40-]. Good to remember: $\cos^2 \alpha + \sin^2 \alpha=1$.

Stokes' Theorem
This theorem, first derived by Kelvin and first published by someone else I cannot recall, says that for any reasonably smoothly varying vector $\vec v$,

\begin{displaymath}
\int_A \left(\nabla \times \vec v\right) \cdot { \rm d}\vec A
=
\oint \vec v \cdot {\rm d}\vec r
\end{displaymath}

where the first integral is over any smooth surface area $A$ and the second integral is over the edge of that surface. How did Stokes get his name on it? He tortured his students with it, that’s how!

symmetry
Symmetries are operations under which an object does not change. For example, a human face is almost, but not completely, mirror symmetric: it looks almost the same in a mirror as when seen directly. The electrical field of a single point charge is spherically symmetric; it looks the same from whatever angle you look at it, just like a sphere does. A simple smooth glass (like a glass of water) is cylindrically symmetric; it looks the same whatever way you rotate it around its vertical axis.

$t$
May indicate:

triple product
A product of three vectors. There are two different versions:

$u$
May indicate:

$V$
May indicate:

$v$
May indicate:

$\vec v$
May indicate:

vector
A quantity characterized by a list of numbers. A vector $\vec v$ in index notation is a set of numbers $\{v_i\}$ indexed by an index $i$. In normal three-dimensional Cartesian space, $i$ takes the values 1, 2, and 3, making the vector a list of three numbers, $v_1$, $v_2$, and $v_3$. These numbers are called the three components of $\vec v$.

vectorial product
A vectorial product, or cross product, is a product of vectors that produces another vector. If

\begin{displaymath}
\vec c=\vec a\times\vec b,
\end{displaymath}

it means in index notation that the $i$-th component of vector $\vec c$ is

\begin{displaymath}
c_i = a_{{\overline{\imath}}} b_{{\overline{\overline{\imath}}}}
- a_{{\overline{\overline{\imath}}}} b_{{\overline{\imath}}}
\end{displaymath}

where ${\overline{\imath}}$ is the index following $i$ in the sequence 123123..., and ${\overline{\overline{\imath}}}$ the one preceding it. For example, $c_1$ will equal $a_2b_3-a_3b_2$.
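
A short sketch of that index rule in Python, checked against numpy’s cross product (the helper function is illustrative, not from the text; it uses 0-based indices):

\begin{verbatim}
import numpy as np

def cross(a, b):
    # c_i = a_ibar * b_ibarbar - a_ibarbar * b_ibar, where ibar is the index
    # following i in the cycle 1,2,3,1,2,... and ibarbar the one preceding it.
    c = [0.0, 0.0, 0.0]
    for i in range(3):
        ibar = (i + 1) % 3         # index following i in the cycle
        ibarbar = (i + 2) % 3      # index preceding i
        c[i] = a[ibar] * b[ibarbar] - a[ibarbar] * b[ibar]
    return c

a = [1.0, 2.0, 3.0]
b = [4.0, 5.0, 6.0]
print(cross(a, b))      # [-3.0, 6.0, -3.0]
print(np.cross(a, b))   # [-3.  6. -3.]
\end{verbatim}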

$w$
May indicate:

$\vec w$
Generic vector.

$X$
Used in this book to indicate a function of $x$ to be determined.

$x$
May indicate:

$Y$
Used in this book to indicate a function of $y$ to be determined.

$y$
May indicate:

$Z$
Used in this book to indicate a function of $z$ to be determined.

$z$
May indicate: