Tensor calculus

In mathematics, tensor calculus, tensor analysis, or Ricci calculus is an extension of vector calculus to tensor fields (tensors that may vary over a manifold, e.g. in spacetime).

Developed by Gregorio Ricci-Curbastro and his student Tullio Levi-Civita, it was used by Albert Einstein to develop his general theory of relativity. Contrasted with the infinitesimal calculus, tensor calculus allows presentation of physics equations in a form that is independent of the choice of coordinates on the manifold.

Tensor calculus has many real-life applications in physics and engineering, including elasticity, continuum mechanics, electromagnetism (see mathematical descriptions of the electromagnetic field), general relativity (see mathematics of general relativity), and quantum field theory.

Working with a main proponent of the exterior calculus, Élie Cartan, the influential geometer Shiing-Shen Chern summarizes the role of tensor calculus: "In our subject of differential geometry, where you talk about manifolds, one difficulty is that the geometry is described by coordinates, but the coordinates do not have meaning. They are allowed to undergo transformation. And in order to handle this kind of situation, an important tool is the so-called tensor analysis, or Ricci calculus, which was new to mathematicians. In mathematics you have a function, you write down the function, you calculate, or you add, or you multiply, or you can differentiate. You have something very concrete. In geometry the geometric situation is described by numbers, but you can change your numbers arbitrarily. So to handle this, you need the Ricci calculus."

Syntax
Tensor notation makes use of upper and lower indexes on objects to label a variable object as covariant (lower index), contravariant (upper index), or mixed covariant and contravariant (having both upper and lower indexes). In fact, conventional math syntax frequently makes use of covariant indexes when dealing with Cartesian coordinate systems $$(x_1, x_2, x_3)$$ without recognizing this as a limited use of tensor syntax with covariant indexed components.

Tensor notation allows an upper index on an object that may be confused with normal power operations from conventional math syntax. For example, in normal math syntax, $$e=mc^2= mcc$$. In tensor syntax, however, a parenthesis should be placed around an object before raising it to a power, to disambiguate a tensor index from a normal power operation. For example, in tensor syntax we would write $$e=m(c^1)^2= m(c^1)(c^1)$$ and $$e=m(c^2)^2= m(c^2)(c^2)$$. The number in the inner parenthesis distinguishes the contravariant component, while the number outside the parenthesis indicates the power to which the quantity is raised. Of course this is just an arbitrary equation; we could have specified that c is not a tensor, in which case the variable would not need a parenthesis to raise the quantity c to a power of 2. However, if c were a vector, it could be represented as a tensor, and that tensor index would need to be distinguished from a normal math index indicating a power.

Vector Decomposition
Tensor notation allows a vector ($$\vec{V}$$) to be decomposed into an Einstein summation representing the tensor contraction of a basis vector ($$\vec{Z}_i$$ or $$\vec{Z}^i$$) with a component vector ($$V_i$$ or $$V^i$$).

$$\vec{V} = V^i \vec{Z}_i = V_i \vec{Z}^i$$

Every vector has two different representations: one in terms of contravariant components ($$V^i$$) with a covariant basis ($$\vec{Z}_i$$), and the other in terms of covariant components ($$V_i$$) with a contravariant basis ($$\vec{Z}^i$$). (Tensor objects with all upper indexes are referred to as contravariant, and tensor objects with all lower indexes are referred to as covariant.) The need to distinguish between contravariant and covariant arises from the fact that when we dot an arbitrary vector with its basis vector for a particular coordinate system, there are two ways of interpreting the dot product: either we view it as the projection of the basis vector onto the arbitrary vector, or we view it as the projection of the arbitrary vector onto the basis vector. Both views of the dot product are entirely equivalent, but lead to different component elements and different basis vectors:

$$\vec{V} \cdot \vec{Z} = \vec{Z} \cdot \vec{V} = \vec{V}^\mathsf{T} \vec{Z} = \vec{Z}^\mathsf{T} \vec{V} = \operatorname{proj}_{\vec{Z}}(\vec{V}) = \operatorname{proj}_{\vec{V}}(\vec{Z}) = V_i \vec{Z}^i = V^i \vec{Z}_i$$

For example, in physics you start with a vector field, decompose it with respect to the covariant basis, and that is how you get the contravariant components. For orthonormal Cartesian coordinates, the covariant and contravariant bases are identical, since the basis set in that case is just the identity matrix. However, for a non-affine coordinate system such as polar or spherical coordinates, there is a need to distinguish between decomposition by use of the contravariant or covariant basis set for generating the components of the coordinate system.

Covariant vector decomposition
$$\vec{V} = V^i \vec{Z}_i$$

Contravariant vector decomposition
$$\vec{V} = V_i \vec{Z}^i$$
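The two decompositions can be illustrated numerically. Below is a minimal sketch in plain Python using an assumed two-dimensional skew basis (the specific vectors are an invented example, not from the text): the contravariant components come from dotting with the contravariant (dual) basis, the covariant components from dotting with the covariant basis, and both expansions recover the same vector.

```python
# Sketch: decompose a vector in a non-orthogonal ("skew") covariant basis
# Z_1, Z_2 and recover it from both component flavors. The basis below is
# a hypothetical example chosen for illustration.

Z_lo = [[1.0, 0.0],          # covariant basis vector Z_1
        [1.0, 1.0]]          # covariant basis vector Z_2 (non-orthonormal)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Covariant metric Z_ij = Z_i . Z_j, and its inverse Z^ij (2x2 by hand).
Z_dn = [[dot(Z_lo[i], Z_lo[j]) for j in range(2)] for i in range(2)]
det = Z_dn[0][0] * Z_dn[1][1] - Z_dn[0][1] * Z_dn[1][0]
Z_up = [[ Z_dn[1][1] / det, -Z_dn[0][1] / det],
        [-Z_dn[1][0] / det,  Z_dn[0][0] / det]]

# Contravariant (dual) basis: Z^i = Z^ij Z_j.
Z_hi = [[sum(Z_up[i][j] * Z_lo[j][k] for j in range(2)) for k in range(2)]
        for i in range(2)]

V = [3.0, 2.0]                               # the vector to decompose
v_up = [dot(V, Z_hi[i]) for i in range(2)]   # contravariant components V^i
v_dn = [dot(V, Z_lo[i]) for i in range(2)]   # covariant components V_i

# Both expansions reproduce V:  V = V^i Z_i = V_i Z^i.
V_from_up = [sum(v_up[i] * Z_lo[i][k] for i in range(2)) for k in range(2)]
V_from_dn = [sum(v_dn[i] * Z_hi[i][k] for i in range(2)) for k in range(2)]
print(V_from_up)   # ≈ [3.0, 2.0]
print(V_from_dn)   # ≈ [3.0, 2.0]
```

Note that $$V^i \ne V_i$$ here precisely because the basis is not orthonormal; for an identity basis the two component sets would coincide.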

Metric Tensor
The metric tensor represents a matrix with scalar elements ($$Z_{ij}$$ or $$Z^{ij}$$) and is a tensor object which is used to raise or lower the index on another tensor object by an operation called contraction, thus allowing a covariant tensor to be converted to a contravariant tensor, and vice versa.

Example of lowering index using metric tensor:

$$Z_i=Z_{ij}Z^j$$

Example of raising index using metric tensor:

$$Z^i=Z^{ij}Z_j$$

The metric tensor is defined as:

$$Z_{ij} = \vec{Z}_i \cdot \vec{Z}_j$$

$$Z^{ij} = \vec{Z}^i \cdot \vec{Z}^j$$

This means that if we take every permutation of a basis vector set, dot each pair against each other, and arrange the results into a square matrix, we have a metric tensor. The caveat here is which of the two vectors in the permutation is used for projection against the other vector; that is the distinguishing property of the covariant metric tensor in comparison with the contravariant metric tensor.

Two flavors of metric tensor exist: (1) the contravariant metric tensor ($$Z^{ij}$$), and (2) the covariant metric tensor ($$Z_{ij}$$). These two flavors of metric tensor are related by the identity:

$$Z_{ik}Z^{jk} = \delta^j_i$$
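The raising and lowering operations and the identity above can be checked numerically. The following is a small sketch in plain Python; the polar-coordinate basis at a sample point is an assumed example:

```python
import math

# Sketch: build the covariant metric Z_ij = Z_i . Z_j from the polar basis
# at a sample point (r, theta), invert it to get Z^ij, verify the identity
# Z_ik Z^jk = delta^j_i, and do a lower-then-raise round trip on components.

r, theta = 2.0, 0.7

# Covariant basis vectors of polar coordinates (tangents to coordinate lines).
Z = [[math.cos(theta),      math.sin(theta)],       # Z_r
     [-r * math.sin(theta), r * math.cos(theta)]]   # Z_theta

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

Z_dn = [[dot(Z[i], Z[j]) for j in range(2)] for i in range(2)]  # diag(1, r^2)

det = Z_dn[0][0] * Z_dn[1][1] - Z_dn[0][1] * Z_dn[1][0]
Z_up = [[ Z_dn[1][1] / det, -Z_dn[0][1] / det],
        [-Z_dn[1][0] / det,  Z_dn[0][0] / det]]                 # diag(1, 1/r^2)

# Identity: Z_ik Z^jk = delta^j_i.
delta = [[sum(Z_dn[i][k] * Z_up[j][k] for k in range(2)) for j in range(2)]
         for i in range(2)]

# Lower an index (V_i = Z_ij V^j), then raise it back (V^i = Z^ij V_j).
v_up = [3.0, 5.0]
v_dn = [sum(Z_dn[i][j] * v_up[j] for j in range(2)) for i in range(2)]
v_back = [sum(Z_up[i][j] * v_dn[j] for j in range(2)) for i in range(2)]
print(delta)    # ≈ identity matrix
print(v_back)   # ≈ [3.0, 5.0]
```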

For an orthonormal Cartesian coordinate system, the metric tensor is just the Kronecker delta $$\delta_{ij}$$ or $$\delta^{ij}$$, which is the tensor equivalent of the identity matrix, and $$\delta_{ij} = \delta^{ij} = \delta^i_j$$.

Jacobian
In addition, a tensor can be readily converted from an unbarred ($$x$$) to a barred ($$\bar{x}$$) coordinate system, the two systems having different sets of basis vectors:

$$f(x^1, x^2, \dots, x^n) = f\bigg(x^1(\bar{x}), x^2(\bar{x}), \dots, x^n(\bar{x})\bigg) = \bar{f}(\bar{x}^1, \bar{x}^2, \dots, \bar{x}^n)= \bar{f}\bigg(\bar{x}^1(x), \bar{x}^2(x), \dots, \bar{x}^n(x)\bigg)$$

by use of the relationships between the barred and unbarred coordinate systems ($$\bar{J}=J^{-1}$$). The Jacobian between the barred and unbarred systems is instrumental in defining the covariant and contravariant basis vectors: for these vectors to exist, they need to satisfy the following relationships relative to the barred and unbarred systems:

Contravariant vectors are required to obey the laws:

$$v^i = \bar{v}^r\frac{\partial x^i(\bar{x})}{\partial \bar{x}^r}$$

$$\bar{v}^i = v^r\frac{\partial \bar{x}^i(x)}{\partial x^r}$$

Covariant vectors are required to obey the laws:

$$v_i = \bar{v}_r\frac{\partial \bar{x}^r(x)}{\partial x^i}$$

$$\bar{v}_i = v_r\frac{\partial x^r(\bar{x})}{\partial \bar{x}^i}$$

There are two flavors of Jacobian matrix:

1. The $$J$$ matrix, representing the change from unbarred to barred coordinates. To find $$J$$, we take the "barred gradient", i.e. the partial derivatives with respect to $$\bar{x}^i$$:

$$J = \bar{\nabla} f(x(\bar{x}))$$

2. The $$\bar{J}$$ matrix, representing the change from barred to unbarred coordinates. To find $$\bar{J}$$, we take the "unbarred gradient", i.e. the partial derivatives with respect to $$x^i$$:

$$\bar{J} = \nabla \bar{f}(\bar{x}(x))$$
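As a sketch, the two Jacobians and the transformation laws can be verified numerically for an assumed example: take the unbarred system to be Cartesian $$(x, y)$$ and the barred system to be polar $$(r, \theta)$$, evaluate both Jacobians at a sample point, and check that $$\bar{J} = J^{-1}$$ and that a round trip through the contravariant laws recovers the original components.

```python
import math

# Sketch: unbarred = Cartesian (x, y), barred = polar (r, theta).
# J holds dx^i/dxbar^j, Jbar holds dxbar^i/dx^j, evaluated at one point.

r, theta = 2.0, 0.7
x, y = r * math.cos(theta), r * math.sin(theta)

# J: partials of (x, y) with respect to (r, theta).
J = [[math.cos(theta), -r * math.sin(theta)],
     [math.sin(theta),  r * math.cos(theta)]]

# Jbar: partials of (r, theta) with respect to (x, y),
# using r = sqrt(x^2 + y^2) and theta = atan2(y, x).
rr = math.hypot(x, y)
Jbar = [[ x / rr,     y / rr],
        [-y / rr**2,  x / rr**2]]

# The product Jbar . J should be the identity (so Jbar = J^{-1}).
prod = [[sum(Jbar[i][k] * J[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]

# Contravariant law: vbar^i = v^r dxbar^i/dx^r, then transform back
# with v^i = vbar^r dx^i/dxbar^r and recover the original components.
v = [1.0, 2.0]
v_bar = [sum(Jbar[i][j] * v[j] for j in range(2)) for i in range(2)]
v_back = [sum(J[i][j] * v_bar[j] for j in range(2)) for i in range(2)]
print(prod)     # ≈ identity
print(v_back)   # ≈ [1.0, 2.0]
```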

Gradient vector
Tensor calculus provides a generalization to the gradient vector formula from standard calculus that works in all coordinate systems:

$$\begin{matrix}\nabla F = \nabla_i F \vec{Z}^i & \nabla_i F = \frac{\partial F}{\partial Z^i}\end{matrix}$$

In contrast, for standard calculus, the gradient vector formula depends on the coordinate system in use (for example: the Cartesian gradient formula vs. the polar gradient formula vs. the spherical gradient formula, etc.). In standard calculus, each coordinate system has its own specific formula, unlike tensor calculus, which has a single gradient formula equivalent for all coordinate systems. This is made possible by tensor calculus's use of the metric tensor.
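The coordinate-independence of the gradient formula can be checked numerically. Below is a sketch in plain Python for a hypothetical test function $$F = x^2 y$$ (an assumed example): its covariant partials are taken in polar coordinates, expanded against the contravariant (dual) polar basis, and the result is compared with the familiar Cartesian gradient $$(2xy,\ x^2)$$.

```python
import math

# Sketch: evaluate grad F = (dF/dZ^i) Z^i in polar coordinates for the
# hypothetical function F = x^2 y, written in polar form as
# F = r^3 cos^2(theta) sin(theta), and compare with the Cartesian gradient.

r, theta = 2.0, 0.7
x, y = r * math.cos(theta), r * math.sin(theta)

# Covariant partials of F in polar coordinates (computed analytically).
dF_dr = 3 * r**2 * math.cos(theta)**2 * math.sin(theta)
dF_dt = r**3 * (math.cos(theta)**3 - 2 * math.cos(theta) * math.sin(theta)**2)

# Contravariant (dual) basis vectors of polar coordinates.
Z_r = [math.cos(theta), math.sin(theta)]
Z_t = [-math.sin(theta) / r, math.cos(theta) / r]

# grad F = (dF/dr) Z^r + (dF/dtheta) Z^theta -- same formula in any system.
grad = [dF_dr * Z_r[k] + dF_dt * Z_t[k] for k in range(2)]
grad_cartesian = [2 * x * y, x**2]
print(grad)             # matches grad_cartesian
print(grad_cartesian)
```

The $$1/r$$ factors in the dual basis are exactly what the hand-written polar gradient formula supplies, which is why the single tensor formula reproduces the coordinate-specific result.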