

This page repeats the tensor notation segments of earlier pages nearly verbatim. If you have already read them, then there is nothing new here. You can continue to the next page, which addresses more advanced tensor notation topics.

Tensor notation relies on the summation convention: a subscript that appears twice in a term is automatically summed from 1 to 3. For example,

\[ a_i b_i \equiv a_1 b_1 + a_2 b_2 + a_3 b_3 \]

which is just the dot product of vectors \({\bf a}\) and \({\bf b}\). Note that any letter could be used as the index (as \(i\) was in this case) to invoke the automatic summation; however, it is critical that the same letter be used in both subscripts in order to do so.


The Kronecker delta, \(\delta_{ij}\), equals 1 when \(i = j\) and 0 otherwise. It should not be confused with the Dirac delta function, \(\delta(t)\), which is defined by

\[ \int f(t) \, \delta(t) \, dt = f(0) \]

The cross product, \({\bf c} = {\bf a} \times {\bf b}\), is written in tensor notation as

\[ c_i = \epsilon_{ijk} a_j b_k \]

where \(\epsilon_{ijk}\) is the alternating tensor: \( \epsilon_{123} = \epsilon_{231} = \epsilon_{312} = 1 \), while \( \epsilon_{321} = \epsilon_{213} = \epsilon_{132} = -1 \), and all other combinations equal zero. Summation of the \(j\) and \(k\) indices from 1 to 3 is implied because they are repeated as subscripts. In other words, it is shorthand for

\[ \matrix { c_i \; = \; \epsilon_{ijk} a_j b_k & = & \epsilon_{i11} a_1 b_1 & + & \epsilon_{i12} a_1 b_2 & + & \epsilon_{i13} a_1 b_3 & + & \\ & & \epsilon_{i21} a_2 b_1 & + & \epsilon_{i22} a_2 b_2 & + & \epsilon_{i23} a_2 b_3 & + & \\ & & \epsilon_{i31} a_3 b_1 & + & \epsilon_{i32} a_3 b_2 & + & \epsilon_{i33} a_3 b_3 } \]

The equation is still general until a particular component is chosen for \(i\) to be evaluated. Selecting \(i = 3\), for example, gives

\[ \matrix { c_3 \; = \; \epsilon_{3jk} a_j b_k & = & \epsilon_{311} a_1 b_1 & + & \epsilon_{312} a_1 b_2 & + & \epsilon_{313} a_1 b_3 & + & \\ & & \epsilon_{321} a_2 b_1 & + & \epsilon_{322} a_2 b_2 & + & \epsilon_{323} a_2 b_3 & + & \\ & & \epsilon_{331} a_3 b_1 & + & \epsilon_{332} a_3 b_2 & + & \epsilon_{333} a_3 b_3 } \]

All subscripts are now specified, and this permits evaluation of all alternating tensors. All of them will equal zero except two. This leaves

\[ c_3 \; = \; \epsilon_{3jk} a_j b_k \; = \; a_1 b_2 - a_2 b_1 \]

which is consistent with the determinant result (as it had better be). Results for the \(x\) and \(y\) components follow in the same way.
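
For completeness, choosing \(i = 1\) and \(i = 2\) gives the \(x\) and \(y\) components:

\[ c_1 \; = \; \epsilon_{1jk} a_j b_k \; = \; a_2 b_3 - a_3 b_2 \qquad \qquad c_2 \; = \; \epsilon_{2jk} a_j b_k \; = \; a_3 b_1 - a_1 b_3 \]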

Addition of vectors and tensors is written in tensor notation simply as

\[ c_i = a_i + b_i \quad \quad \text{and} \quad \quad c_{ij} = a_{ij} + b_{ij} \]


The area of a triangle bounded on two sides by vectors \({\bf a}\) and \({\bf b}\) is

\[ Area = {1 \over 2} | \, {\bf a} \times {\bf b} | \]

In tensor notation, this is written in two steps as

\[ c_i = \epsilon_{ijk} a_j b_k \quad \quad \quad \text{and} \quad \quad \quad Area = {1 \over 2} \sqrt{c_i c_i} \]

or in a single equation as

\[ Area = {1 \over 2} \sqrt{ \epsilon_{ijk} a_j b_k \epsilon_{imn} a_m b_n } \]

Note that each index appears exactly twice in the above equation because, by convention, an index is not permitted to appear more than twice.
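
As a quick check, take \({\bf a} = (3, 7, 2)\) and \({\bf b} = (1, 2, 3)\), the same vectors used in the dyadic product example below. Then \(c_i = \epsilon_{ijk} a_j b_k = (17, -7, -1)\), and

\[ Area = {1 \over 2} \sqrt{c_i c_i} = {1 \over 2} \sqrt{17^2 + (-7)^2 + (-1)^2} = {1 \over 2} \sqrt{339} \approx 9.2 \]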

A dyadic product of two vectors produces a second rank tensor and is written in tensor notation as

\[ c_{ij} = a_i b_j \]

For example, with \({\bf a} = (3, 7, 2)\) and \({\bf b} = (1, 2, 3)\), the dyadic product is

\[ \begin{eqnarray} {\bf a} \otimes {\bf b} & = & \left[ \matrix { 3*1 & 3*2 & 3*3 \\ 7*1 & 7*2 & 7*3 \\ 2*1 & 2*2 & 2*3 } \right] \\ \\ \\ & = & \left[ \matrix { 3 & 6 & 9 \\ 7 & 14 & 21 \\ 2 & 4 & 6 } \right] \\ \end{eqnarray} \]

Tensor notation dictates that the value of any \(c_{ij}\) component is simply \(a_i b_j\). For example, selecting \(i = 2\) and \(j = 3\) gives

\[ c_{23} \quad = \quad a_2 b_3 \quad = \quad 7 * 3 \quad = \quad 21 \]
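
Coding a dyadic product is equally direct. Here is a minimal Fortran sketch in the style of the matrix multiplication subroutine further down the page (the routine name aa_dyad_bb is an assumption for illustration, not from the original page):

      subroutine aa_dyad_bb(n,a,b,c)
c     dyadic product of two vectors: c(i,j) = a(i) * b(j)
c     (illustrative sketch, not from the original page)
      dimension a(n), b(n), c(n,n)
      do i = 1,n
         do j = 1,n
            c(i,j) = a(i) * b(j)
         end do
      end do
      return
      end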

The determinant of a matrix \({\bf A}\) is written in tensor notation as

\[ \text{det}( {\bf A} ) \; = \; \epsilon_{ijk} A_{i1} A_{j2} A_{k3} \; = \; {1 \over 6} \epsilon_{ijk} \epsilon_{rst} A_{ir} A_{js} A_{kt} \]

The inverse can be calculated using

\[ A^{-1}_{ij} = {1 \over 2 \, \text{det} ({\bf A}) } \epsilon_{jmn} \, \epsilon_{ipq} A_{mp} A_{nq} \]
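
To make the index gymnastics concrete, here is a Fortran sketch in the style of the matrix multiplication subroutine further down the page. It evaluates both formulas by brute-force summation over every index; the routine name aa_inv and the hard-coded 3x3 dimensions are assumptions for illustration, not from the original page.

      subroutine aa_inv(a, ainv, det)
c     evaluates det(A) = eps_ijk A_i1 A_j2 A_k3 and
c     Ainv_ij = eps_jmn eps_ipq A_mp A_nq / (2 det) for a 3x3 matrix
      dimension a(3,3), ainv(3,3), eps(3,3,3)
c     build the alternating tensor
      do i = 1,3
         do j = 1,3
            do k = 1,3
               eps(i,j,k) = 0
            end do
         end do
      end do
      eps(1,2,3) = 1
      eps(2,3,1) = 1
      eps(3,1,2) = 1
      eps(3,2,1) = -1
      eps(2,1,3) = -1
      eps(1,3,2) = -1
c     determinant: the repeated i, j, k imply a triple summation
      det = 0
      do i = 1,3
         do j = 1,3
            do k = 1,3
               det = det + eps(i,j,k) * a(i,1) * a(j,2) * a(k,3)
            end do
         end do
      end do
c     inverse: m, n, p, q are summed; i and j remain free indices
      do i = 1,3
         do j = 1,3
            s = 0
            do m = 1,3
             do n = 1,3
              do ip = 1,3
               do iq = 1,3
                s = s + eps(j,m,n) * eps(i,ip,iq) * a(m,ip) * a(n,iq)
               end do
              end do
             end do
            end do
            ainv(i,j) = s / (2 * det)
         end do
      end do
      return
      end

For the first matrix in the multiplication example below, this should return det = 2.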

Matrix multiplication, \({\bf C} = {\bf A} \, {\bf B}\), is written in tensor notation as

\[ C_{ij} = A_{ik} B_{kj} \]

(Note that no dot is used in tensor notation.) The repeated \(k\) in the two factors automatically implies summation over it:

\[ C_{ij} = A_{i1} B_{1j} + A_{i2} B_{2j} + A_{i3} B_{3j} \]

which is the \( (i,j)^{\text{th}} \) component of the matrix product. For example, the \(C_{23}\) component is

\[ C_{23} = A_{21} B_{13} + A_{22} B_{23} + A_{23} B_{33} \]

\[ \left[ \matrix { 1 & 2 & 3 \\ 4 & 2 & 2 \\ 2 & 3 & 4 } \right] \left[ \matrix { 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 9 } \right] = \left[ \matrix { 14 & 32 & 50 \\ 14 & 38 & 62 \\ 20 & 47 & 74 } \right] \]

In Fortran, this matrix multiplication is the familiar triple loop:

      subroutine aa_dot_bb(n,a,b,c)
c     computes the matrix product c = a * b for n x n matrices
      dimension a(n,n), b(n,n), c(n,n)
      do i = 1,n
         do j = 1,n
            c(i,j) = 0
            do k = 1,n
               c(i,j) = c(i,j) + a(i,k) * b(k,j)
            end do
         end do
      end do
      return
      end
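
Calling aa_dot_bb with n = 3 and the two matrices above reproduces the product shown. Note that the innermost k loop is exactly the summation implied by the repeated \(k\) in \(C_{ij} = A_{ik} B_{kj}\).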

The double dot product of two matrices is written \({\bf A} : {\bf B}\) and is defined as

\[ {\bf A} : {\bf B} = A_{ij} B_{ij} \]

Since the \(i\) and \(j\) subscripts appear in both factors, they are both summed to give

\[ \matrix { {\bf A} : {\bf B} \; = \; A_{ij} B_{ij} \; = & A_{11} * B_{11} & + & A_{12} * B_{12} & + & A_{13} * B_{13} & + \\ & A_{21} * B_{21} & + & A_{22} * B_{22} & + & A_{23} * B_{23} & + \\ & A_{31} * B_{31} & + & A_{32} * B_{32} & + & A_{33} * B_{33} & } \]

Using the same two matrices as in the multiplication example above gives

\[ \matrix { A_{ij} B_{ij} & = & 1 * 1 & + & 2 * 4 & + & 3 * 7 & + \\ & & 4 * 2 & + & 2 * 5 & + & 2 * 8 & + \\ & & 2 * 3 & + & 3 * 6 & + & 4 * 9 \\ & \\ & = & 124 } \]
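
The double dot product is also easily coded. A minimal Fortran sketch in the same style as aa_dot_bb (the function name aa_ddot_bb is an assumption for illustration):

      function aa_ddot_bb(n,a,b)
c     double dot product: a(i,j) * b(i,j), summed over both i and j
      dimension a(n,n), b(n,n)
      aa_ddot_bb = 0
      do i = 1,n
         do j = 1,n
            aa_ddot_bb = aa_ddot_bb + a(i,j) * b(i,j)
         end do
      end do
      return
      end

For the two matrices above, aa_ddot_bb(3,a,b) returns 124.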

\[ \qquad \; \; \text{velocity} \qquad = \qquad {d{\bf x} \over dt} \qquad = \qquad \left( {dx_1 \over dt} , {dx_2 \over dt} , {dx_3 \over dt} \right) \qquad \ = \qquad \dot{\bf x} \qquad = \qquad \dot{x}_i \qquad = \qquad x_{i,t} \]

\[ \text{acceleration} \qquad = \qquad {d{\bf v} \over dt} \qquad = \qquad \left( {dv_1 \over dt} , {dv_2 \over dt} , {dv_3 \over dt} \right) \qquad \ = \qquad \dot{\bf v} \qquad = \qquad \dot{v}_i \qquad = \qquad v_{i,t} \]

\[ \text{acceleration} \qquad = \qquad {d^2{\bf x} \over dt^2} \qquad = \qquad \left( {d^2x_1 \over dt^2} , {d^2x_2 \over dt^2} , {d^2x_3 \over dt^2} \right) \qquad \ = \qquad \ddot{\bf x} \qquad = \qquad \ddot{x}_i \qquad = \qquad x_{i,tt} \]

One can write the derivative with respect to \(t\) explicitly, use the overdot (probably the most popular choice), or use the comma notation, which is a popular subset of tensor notation. Note that the notation \(x_{i,tt}\) somewhat violates the tensor notation rule that repeated indices are automatically summed from 1 to 3. This is because time does not have three dimensions as space does, so it is understood that no summation is performed over \(t\).

Differentiation of a scalar, \(f\), with respect to \(x_j\) is written as

\[ {\partial f \over \partial x_{\!j}} \qquad \qquad \text{or} \qquad \qquad f_{,\,j} \]

Differentiation of a vector, \({\bf v}\), is

\[ {\partial {\bf v} \over \partial x_{\!j}} \qquad \qquad \text{or} \qquad \qquad \left( {\partial \, v_x \over \partial x_{\!j}} , {\partial \, v_y \over \partial x_{\!j}} , {\partial \, v_z \over \partial x_{\!j}} \right) \qquad \qquad \text{or} \qquad \qquad v_{i,\,j} \]

Differentiation of a tensor, \(\boldsymbol{\sigma}\), is

\[ {\partial \boldsymbol{\sigma} \over \partial x_{\!k}} \qquad \qquad \text{or} \qquad \qquad \sigma_{ij,k} \]

As with vectors, every component of a tensor is differentiated.

In tensor notation, the divergence of a vector \({\bf v}\) is written \(v_{i,i}\):

\[ \begin{eqnarray} v_{i,i} & = \; & {\partial v_1 \over \partial x_1} + {\partial v_2 \over \partial x_2} + {\partial v_3 \over \partial x_3} \\ \\ & = \; & {\partial v_x \over \partial x} + {\partial v_y \over \partial y} + {\partial v_z \over \partial z} \end{eqnarray} \]

As stated above, the divergence is written in tensor notation as \( v_{i,i}\). It is very important that both subscripts are the same because this dictates that they are automatically summed from 1 to 3. They can in fact be any letter one desires, so long as they are both the same letter.

For example, if \({\bf v} = (3x^2 - 2y, \; z^2 + x, \; y^3 - z)\), then its divergence is

\[ v_{i,i} \quad = \quad {\partial \over \partial x}(3x^2 - 2y) + {\partial \over \partial y}(z^2 + x) + {\partial \over \partial z}(y^3 - z) \quad = \quad 6x - 1 \]

The curl of a vector, \(\nabla \times {\bf v}\), is written in tensor notation as \( \epsilon_{ijk} v_{k,j} \). As with cross products, the fact that \(j\) and \(k\) both occur twice in \( \epsilon_{ijk} v_{k,j} \) dictates that both are automatically summed from 1 to 3. The term expands to

\[ \matrix { \epsilon_{ijk} v_{k,j} & = & \epsilon_{i11} v_{1,1} & + & \epsilon_{i12} v_{2,1} & + & \epsilon_{i13} v_{3,1} & + & \\ & & \epsilon_{i21} v_{1,2} & + & \epsilon_{i22} v_{2,2} & + & \epsilon_{i23} v_{3,2} & + & \\ & & \epsilon_{i31} v_{1,3} & + & \epsilon_{i32} v_{2,3} & + & \epsilon_{i33} v_{3,3} } \]

Choosing \(i = 2\), for example, gives

\[ \matrix { \epsilon_{2jk} v_{k,j} & = & \epsilon_{211} v_{1,1} & + & \epsilon_{212} v_{2,1} & + & \epsilon_{213} v_{3,1} & + & \\ & & \epsilon_{221} v_{1,2} & + & \epsilon_{222} v_{2,2} & + & \epsilon_{223} v_{3,2} & + & \\ & & \epsilon_{231} v_{1,3} & + & \epsilon_{232} v_{2,3} & + & \epsilon_{233} v_{3,3} } \]

All subscripts are now specified, and this permits evaluation of all alternating tensors. All of them will equal zero except two, leaving

\[ \epsilon_{2jk} v_{k,j} \; = \; v_{1,3} - v_{3,1} \; = \; {\partial \, v_x \over \partial z} - {\partial \, v_z \over \partial x} \]

which is again consistent with the determinant result (as it must be). Results for the \(x\) and \(z\) components follow in the same way.
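
For completeness, the \(i = 1\) and \(i = 3\) components work out to

\[ \epsilon_{1jk} v_{k,j} \; = \; v_{3,2} - v_{2,3} \; = \; {\partial \, v_z \over \partial y} - {\partial \, v_y \over \partial z} \qquad \qquad \epsilon_{3jk} v_{k,j} \; = \; v_{2,1} - v_{1,2} \; = \; {\partial \, v_y \over \partial x} - {\partial \, v_x \over \partial y} \]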

The Laplacian of a scalar function \(f({\bf x})\) is the divergence of its gradient, written in tensor notation as \(f,_{ii}\):

\[ f,_{ii} \equiv {\partial^{\,2} \! f({\bf x}) \over \partial \, x^2} + {\partial^{\,2} \! f({\bf x}) \over \partial \, y^2} + {\partial^{\,2} \! f({\bf x}) \over \partial \, z^2} \]

For example, let \(f({\bf x}) = 2 x^3 y - z \sin(y)\). Start by calculating the gradient of \(f({\bf x})\).

\[ f,_i = \left( 6 x^2 y, 2 x^3 - z \cos(y), - \sin(y) \right) \]

And the divergence of the gradient (which is the Laplacian after all) is

\[ f,_{ii} = \nabla \cdot \nabla f({\bf x}) = 12 x y + z \sin(y) \]

The usual product rule carries over to differentiation in tensor notation. The derivative of a dot product is

\[ ( a_i b_i ),_j = a_{i,j} b_i + a_i b_{i,j} \]

the derivative of a cross product is

\[ ( \epsilon_{ijk} a_j b_k ),_m = \epsilon_{ijk} a_{j,m} b_k + \epsilon_{ijk} a_j b_{k,m} \]

and the derivative of a dyadic product is

\[ ( a_i b_j ),_k = a_{i,k} b_j + a_i b_{j,k} \]