Advanced mathematics


 * This article is College level

Direct product
From Direct product

In mathematics, one can often define the direct product of objects already known, giving a new one. This is the Cartesian product of the underlying sets, together with a suitably defined structure on the product set. More abstractly, one talks about the product in category theory, which formalizes these notions.

Examples are the product of sets, groups (described below), rings, and other algebraic structures. The product of topological spaces is another instance.

There is also the direct sum – in some areas this is used interchangeably, while in others it is a different concept.

Examples

 * If we think of $$\mathbb{R}$$ as the set of real numbers, then the direct product $$\mathbb{R}\times \mathbb{R}$$ is just the Cartesian product $$\{ (x,y) \mid x,y \in \mathbb{R} \}$$.
 * If we think of $$\mathbb{R}$$ as the group of real numbers under addition, then the direct product $$\mathbb{R}\times \mathbb{R}$$ still has $$\{ (x,y) \mid x,y \in \mathbb{R} \}$$ as its underlying set. The difference between this and the preceding example is that $$\mathbb{R}\times \mathbb{R}$$ is now a group, and so we have to also say how to add their elements. This is done by defining $$(a,b) + (c,d) = (a+c, b+d)$$.
 * If we think of $$\mathbb{R}$$ as the ring of real numbers, then the direct product $$\mathbb{R}\times \mathbb{R}$$ again has $$\{ (x,y) \mid x,y \in \mathbb{R} \}$$ as its underlying set. The ring structure consists of addition defined by $$(a,b) + (c,d) = (a+c, b+d)$$ and multiplication defined by $$(a,b) (c,d) = (ac, bd)$$.
 * However, if we think of $$\mathbb{R}$$ as the field of real numbers, then the direct product $$\mathbb{R}\times \mathbb{R}$$ does not exist – naively defining addition and multiplication componentwise as in the above example would not result in a field, since the element $$(1,0)$$ does not have a multiplicative inverse.
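
A quick numeric sketch of these componentwise definitions (Python; the helper names are illustrative, not from the source):

```python
# Componentwise operations on R x R, modeled with plain tuples.
def add(p, q):
    """(a, b) + (c, d) = (a + c, b + d)"""
    return (p[0] + q[0], p[1] + q[1])

def mul(p, q):
    """(a, b)(c, d) = (ac, bd)"""
    return (p[0] * q[0], p[1] * q[1])

print(add((1, 2), (3, 4)))   # (4, 6)
print(mul((1, 2), (3, 4)))   # (3, 8)

# Why R x R is not a field: (1, 0)(c, d) = (c, 0) can never equal the
# multiplicative identity (1, 1), so (1, 0) has no inverse.
print(mul((1, 0), (5, 7)))   # (5, 0) -- the second coordinate stays 0
```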

In a similar manner, we can talk about the direct product of finitely many algebraic structures, e.g. $$\mathbb{R} \times \mathbb{R} \times \mathbb{R} \times \mathbb{R}$$. This relies on the fact that the direct product is associative up to isomorphism. That is, $$(A \times B) \times C \cong A \times (B \times C)$$ for any algebraic structures $$A$$, $$B$$, and $$C$$ of the same kind. The direct product is also commutative up to isomorphism, i.e. $$A \times B \cong B \times A$$ for any algebraic structures $$A$$ and $$B$$ of the same kind. We can even talk about the direct product of infinitely many algebraic structures; for example we can take the direct product of countably many copies of $$\mathbb R$$, which we write as $$\mathbb{R} \times \mathbb{R} \times \mathbb{R} \times \dotsb$$.

Group direct product
In group theory one can define the direct product of two groups (G, ∘) and (H, ∙), denoted by G × H. For abelian groups which are written additively, it may also be called the direct sum of two groups, denoted by $$G \oplus H$$.

It is defined as follows: (Note that (G, ∘) may be the same as (H, ∙))
 * the underlying set of the new group is the Cartesian product of the sets of elements of G and H, that is {(g, h): g ∈ G, h ∈ H};
 * on these elements put an operation, defined element-wise:
 * (g, h) × (g′, h′) = (g ∘ g′, h ∙ h′)

This construction gives a new group. It has a normal subgroup isomorphic to G (given by the elements of the form (g, 1)), and one isomorphic to H (comprising the elements (1, h)).

The reverse also holds: there is the following recognition theorem. If a group K contains two normal subgroups G and H, such that K = GH and the intersection of G and H contains only the identity, then K is isomorphic to G × H. A relaxation of these conditions, requiring only one subgroup to be normal, gives the semidirect product.

As an example, take as G and H two copies of the unique (up to isomorphisms) group of order 2, C2: say {1, a} and {1, b}. Then C2×C2 = {(1,1), (1,b), (a,1), (a,b)}, with the operation element by element. For instance, (1,b)*(a,1) = (1*a, b*1) = (a,b), and (1,b)*(1,b) = (1,b²) = (1,1).
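
The multiplication table of this small group can be checked mechanically (a Python sketch; representing a and b each by −1 under ordinary multiplication is a choice made here for illustration):

```python
from itertools import product

C2 = [1, -1]                   # {1, a} with a*a = 1, taking a -> -1
G = list(product(C2, C2))      # Cartesian product: the four elements of C2 x C2

def op(x, y):
    """Componentwise operation on the direct product."""
    return (x[0] * y[0], x[1] * y[1])

print(op((1, -1), (-1, 1)))    # (-1, -1), i.e. (1,b)*(a,1) = (a,b)
print(op((1, -1), (1, -1)))    # (1, 1),   i.e. (1,b)*(1,b) = (1,1)
```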

With a direct product, we get some natural group homomorphisms for free: the projection maps defined by
 * $$\begin{align} \pi_1: G \times H \to G, \ \ \pi_1(g, h) &= g \\ \pi_2: G \times H \to H, \ \ \pi_2(g, h) &= h \end{align}$$ called the coordinate functions.

Also, every homomorphism f to the direct product is totally determined by its component functions $$f_i = \pi_i \circ f$$.

For any group (G, ∘) and any integer n ≥ 0, repeated application of the direct product gives the group of all n-tuples $$G^n$$ (for n = 0 we get the trivial group), for example $$\mathbb{Z}^n$$ and $$\mathbb{R}^n$$.

Direct product of modules
The direct product for modules (not to be confused with the tensor product) is very similar to the one defined for groups above, using the Cartesian product with the operation of addition being componentwise, and the scalar multiplication just distributing over all the components. Starting from $$\mathbb{R}$$ we get $$\mathbb{R}^n$$, the prototypical example of a real n-dimensional vector space. The direct product of $$\mathbb{R}^m$$ and $$\mathbb{R}^n$$ is $$\mathbb{R}^{m+n}$$.

They are dual in the sense of category theory: the direct sum is the coproduct, while the direct product is the product.

Direct sum
From Direct sum

The direct sum is an operation from abstract algebra, a branch of mathematics. For example, the direct sum $$ \mathbf{R} \oplus \mathbf{R} $$, where $$ \mathbf{R} $$ is real coordinate space, is the Cartesian plane, $$ \mathbf{R} ^2 $$.

To see how the direct sum is used in abstract algebra, consider a more elementary structure in abstract algebra, the abelian group. The direct sum of two abelian groups $$A$$ and $$B$$ is another abelian group $$A\oplus B$$ consisting of the ordered pairs $$(a,b)$$ where $$a \in A$$ and $$b \in B$$. To add ordered pairs, we define the sum $$(a, b) + (c, d)$$ to be $$(a + c, b + d)$$; in other words, addition is defined coordinate-wise.
 * (Confusingly this ordered pair is also called the direct product of the two groups.)

A similar process can be used to form the direct sum of any two algebraic structures, such as rings, modules, and vector spaces.


 * This relies on the fact that the direct sum is associative up to isomorphism. That is, $$(A \oplus B) \oplus C \cong A \oplus (B \oplus C)$$ for any algebraic structures $$A$$, $$B$$, and $$C$$ of the same kind.
 * The direct sum is also commutative up to isomorphism, i.e. $$A \oplus B \cong B \oplus A$$ for any algebraic structures $$A$$ and $$B$$ of the same kind.

For finitely many summands, the direct sum and the direct product coincide; which name is used depends on how the operation is written. If the arithmetic operation is written as +, as it usually is in abelian groups, then we use the direct sum. If the arithmetic operation is written as × or ⋅ or using juxtaposition (as in the expression $$xy$$) we use direct product.

In the case where infinitely many objects are combined, most authors make a distinction between direct sum and direct product. As an example, consider the direct sum and direct product of infinitely many real lines. An element in the direct product is an infinite sequence, such as (1,2,3,...), but in the direct sum there is a requirement that all but finitely many coordinates be zero, so the sequence (1,2,3,...) would be an element of the direct product but not of the direct sum, while (1,2,0,0,0,...) would be an element of both. In more technical language, if the summands are $$(A_i)_{i \in I}$$, the direct sum $$\bigoplus_{i \in I} A_i$$ is defined to be the set of tuples $$(a_i)_{i \in I}$$ with $$a_i \in A_i$$ such that $$a_i=0$$ for all but finitely many i.
 * More generally, if a + sign is used, all but finitely many coordinates must be zero, while
 * if some form of multiplication is used, all but finitely many coordinates must be 1.

The direct sum $$\bigoplus_{i \in I} A_i$$ is contained in the direct product $$\prod_{i \in I} A_i$$, but is usually strictly smaller when the index set $$I$$ is infinite, because direct products do not have the restriction that all but finitely many coordinates must be zero.
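
One way to make the distinction concrete is to store only the finitely many nonzero coordinates of a direct-sum element (a Python sketch; the sparse-dict representation is a choice made here, not from the source):

```python
# An element of the direct sum of countably many copies of R has finite
# support, so (1, 2, 0, 0, ...) can be stored as a sparse dict.
x = {0: 1, 1: 2}
y = {1: 5, 3: -2}

def add(x, y):
    """Componentwise addition; the support stays finite."""
    return {i: x.get(i, 0) + y.get(i, 0) for i in set(x) | set(y)}

print(add(x, y))   # coordinates 0, 1 and 3: {0: 1, 1: 7, 3: -2}

# An element of the direct product such as (1, 2, 3, ...) has infinite
# support, so no finite dict can represent it: it lies in the product only.
```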

Spaces


From Topological space:

Around 1735, Euler discovered the formula $$V - E + F = 2$$ relating the number of vertices, edges and faces of a convex polyhedron, and hence of a planar graph. No metric is required to prove this formula. The study and generalization of this formula is the origin of topology.

A topological space is a set of points, along with a set of neighbourhoods for each point, satisfying a set of axioms relating points and neighbourhoods. The definition of a topological space relies only upon set theory and is the most general notion of a mathematical space that allows for the definition of concepts such as continuity, connectedness, and convergence.






 * style="width:20px"| 1.
 * style="width:250px"|$$d(x,y) \ge 0 $$
 * 2. || $$d(x,y) = 0 \quad$$ $$\quad x = y$$
 * 3. || $$d(x,z) \le d(x,y) + d(y, z)$$
 * 4. || $$d(x,y) = d(y,x) $$
 * }
 * 3. || $$d(x,z) \le d(x,y) + d(y, z)$$
 * 4. || $$d(x,y) = d(y,x) $$
 * }
 * }

From Normed vector space

A norm is the generalization to real vector spaces of the intuitive notion of distance in the real world. All norms on a finite-dimensional vector space are equivalent from a topological viewpoint as they induce the same topology (although the resulting metric spaces need not be the same).

From Norm (mathematics)


 * 1) $$\|\mathbf{v}\| \ge 0$$
 * 2) $$\|\mathbf{v}\| = 0 \iff \mathbf{v} = \mathbf{0}$$ (the zero vector)
 * 3) $$\|\mathbf{u} + \mathbf{v}\| \le \|\mathbf{u}\| + \|\mathbf{v}\|$$ (the triangle inequality)
 * 4) $$\|\mathbf{v}\| = \|-\mathbf{v}\|$$

A seminorm, on the other hand, is allowed to assign zero length to some non-zero vectors (in addition to the zero vector).
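
The axioms above are easy to spot-check numerically for the Euclidean norm (a numpy sketch using random vectors; a sanity check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
u, v = rng.normal(size=3), rng.normal(size=3)
norm = np.linalg.norm                      # Euclidean norm on R^3

assert norm(v) >= 0                        # 1) non-negativity
assert norm(np.zeros(3)) == 0              # 2) the zero vector has norm 0
assert norm(u + v) <= norm(u) + norm(v)    # 3) triangle inequality
assert np.isclose(norm(v), norm(-v))       # 4) symmetry under negation
print("all four axioms hold for this sample")
```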

Inner product
From Inner product space

In the following, the field of scalars denoted $F$ is either the field of real numbers $\mathbb{R}$ or the field of complex numbers $\mathbb{C}$.

An inner product space is a vector space $V$ over the field $F$ together with an inner product, i.e., with a map
 * $$ \langle \cdot, \cdot \rangle : V \times V \to F $$

that satisfies the following three properties for all vectors $x, y, z \in V$ and all scalars $a \in F$:
 * conjugate symmetry:
 * $$\langle x, y \rangle = \overline{\langle y, x \rangle}$$
 * linearity in the first argument:
 * $$\begin{align} \langle ax, y \rangle &= a \langle x, y \rangle \\ \langle x + y, z \rangle &= \langle x, z \rangle + \langle y, z \rangle \end{align}$$
 * positive-definiteness:
 * $$\langle x, x \rangle > 0,\quad x \in V \setminus \{\mathbf{0}\}.$$

Positive-definiteness and linearity, respectively, ensure that:
 * $$\begin{align} \langle x, x \rangle &= 0 \Rightarrow x = \mathbf{0} \\ \langle \mathbf{0}, \mathbf{0} \rangle &= \langle 0x, 0x \rangle = 0 \langle x, 0x \rangle = 0 \end{align}$$

Notice that conjugate symmetry implies that $\langle x, x \rangle$ is real for all $x \in V$, since we have:
 * $$\langle x, x \rangle = \overline{\langle x, x \rangle} \,.$$

Conjugate symmetry and linearity in the first variable imply


 * $$\begin{align} \langle x, a y \rangle &= \overline{\langle a y, x \rangle} = \overline{a} \overline{\langle y, x \rangle} = \overline{a} \langle x, y \rangle \\ \langle x, y + z \rangle &= \overline{\langle y + z, x \rangle} = \overline{\langle y, x \rangle} + \overline{\langle z, x \rangle} = \langle x, y \rangle + \langle x, z \rangle \,; \end{align}$$

that is, conjugate linearity in the second argument. So, {{yellow|an inner product is a {{Wikipedia link|sesquilinear form}}.}} Conjugate symmetry is also called Hermitian symmetry, and a conjugate-symmetric sesquilinear form is called a Hermitian form. While the above axioms are more mathematically economical, {{yellow|a compact verbal definition of an inner product is a positive-definite Hermitian form.}}

This important generalization of the familiar square expansion follows:
 * $$\langle x + y, x + y \rangle = \langle x, x \rangle + \langle x, y \rangle + \langle y, x \rangle + \langle y, y \rangle \,.$$

These properties, constituents of the above linearity in the first and in the second argument:
 * $$\begin{align} \langle x + y, z \rangle &= \langle x, z \rangle + \langle y, z \rangle \,, \\ \langle x, y + z \rangle &= \langle x, y\rangle + \langle x, z \rangle \end{align}$$

are otherwise known as {{Wikipedia link|additive map|additivity}}.

In the case of $F = \mathbb{R}$, conjugate symmetry reduces to symmetry, and sesquilinearity reduces to bilinearity. So, {{yellow|an inner product on a real vector space is a positive-definite symmetric bilinear form.}} That is,


 * $$\begin{align} \langle x, y \rangle &= \langle y, x \rangle \\ \Rightarrow \langle -x, x \rangle &= \langle x, -x \rangle \,, \end{align}$$

and the {{Wikipedia link|binomial expansion}} becomes:
 * $$\langle x + y, x + y \rangle = \langle x, x \rangle + 2\langle x, y \rangle + \langle y, y \rangle \,.$$

A common special case of the inner product, the scalar product or {{Wikipedia link|dot product}}, is written with a centered dot $$a \cdot b$$.
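
For the real case, symmetry, positive-definiteness, and the binomial expansion above can all be spot-checked with the dot product (a numpy sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.normal(size=4), rng.normal(size=4)
ip = np.dot                                # the dot product on R^n

assert np.isclose(ip(x, y), ip(y, x))      # symmetry
assert ip(x, x) > 0                        # positive-definiteness (x != 0 here)
# binomial expansion: <x+y, x+y> = <x,x> + 2<x,y> + <y,y>
assert np.isclose(ip(x + y, x + y), ip(x, x) + 2 * ip(x, y) + ip(y, y))
print("real inner product checks pass")
```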

Bra-ket
Several disciplines, notably quantum mechanics, instead take the first argument to be conjugate linear, rather than the second. In those disciplines we would write the product $⟨x, y⟩$ as $⟨y|x⟩$ (the bra-ket notation of quantum mechanics), respectively $y^{†}x$ (dot product as a case of the convention of forming the matrix product $AB$ as the dot products of rows of $A$ with columns of $B$). Here the kets and columns are identified with the vectors of $V$, and the bras and rows with the linear functionals (covectors) of the dual space $V^*$, with conjugacy associated with duality. This reverse order is now occasionally followed in the more abstract literature, taking $⟨x, y⟩$ to be conjugate linear in $x$ rather than $y$. A few instead find a middle ground by recognizing both $⟨\cdot, \cdot⟩$ and $⟨\cdot|\cdot⟩$ as distinct notations differing only in which argument is conjugate linear.

There are various technical reasons why it is necessary to restrict the basefield to $\mathbb{R}$ and $\mathbb{C}$ in the definition. Briefly, the basefield has to contain an ordered subfield in order for non-negativity to make sense, and therefore has to have characteristic equal to 0 (since any ordered field has to have such characteristic). This immediately excludes finite fields. The basefield has to have additional structure, such as a distinguished automorphism. More generally, any quadratically closed subfield of $\mathbb{R}$ or $\mathbb{C}$ will suffice for this purpose, e.g., the algebraic numbers or the constructible numbers. However, in these cases when it is a proper subfield (i.e., neither $\mathbb{R}$ nor $\mathbb{C}$) even finite-dimensional inner product spaces will fail to be metrically complete. In contrast, all finite-dimensional inner product spaces over $\mathbb{R}$ or $\mathbb{C}$, such as those used in quantum computation, are automatically metrically complete and hence Hilbert spaces.

List of Spaces
From List of vector spaces in mathematics:







Back to top

Commutator
In mathematics, the commutator gives an indication of the extent to which a certain binary operation fails to be commutative. There are different definitions used in group theory and ring theory.



In linear algebra, if two endomorphisms of a space are represented by commuting matrices in terms of one basis, then they are so represented in terms of every basis.

The anticommutator of two elements $a$ and $b$ of a ring or an associative algebra is defined by
 * $$\{a, b\} = ab + ba.$$

Sometimes $$[a,b]_+$$ is used to denote the anticommutator, while $$[a,b]_-$$ is then used for the commutator. The anticommutator is used less often, but can be used to define Clifford algebras and Jordan algebras, and in the derivation of the Dirac equation in particle physics.

The commutator of two operators acting on a Hilbert space is a central concept in quantum mechanics, since it quantifies how well the two observables described by these operators can be measured simultaneously. The uncertainty principle is ultimately a theorem about such commutators, by virtue of the Robertson–Schrödinger relation. In phase space, equivalent commutators of function star-products are called Moyal brackets and are completely isomorphic to the Hilbert-space commutator structures mentioned.

The commutator has the following properties:

Lie-algebra identities

 * 1) $$[A + B, C] = [A, C] + [B, C]$$
 * 2) $$[A, A] = 0$$
 * 3) $$[A, B] = -[B, A]$$
 * 4) $$[A, [B, C]] + [B, [C, A]] + [C, [A, B]] = 0$$

Relation (3) is called anticommutativity, while (4) is the Jacobi identity.

Additional identities

 * 1) $$[A, BC] = [A, B]C + B[A, C]$$
 * 2) $$[A, BCD] = [A, B]CD + B[A, C]D + BC[A, D]$$
 * 3) $$[A, BCDE] = [A, B]CDE + B[A, C]DE + BC[A, D]E + BCD[A, E]$$
 * 4) $$[AB, C] = A[B, C] + [A, C]B$$
 * 5) $$[ABC, D] = AB[C, D] + A[B, D]C + [A, D]BC$$
 * 6) $$[ABCD, E] = ABC[D, E] + AB[C, E]D + A[B, E]CD + [A, E]BCD$$
 * 7) $$[A, B + C] = [A, B] + [A, C]$$
 * 8) $$[A + B, C + D] = [A, C] + [A, D] + [B, C] + [B, D]$$
 * 9) $$[AB, CD] = A[B, C]D + [A, C]BD + CA[B, D] + C[A, D]B$$
 * 10) $$[[A, C], [B, D]] = [[[A, B], C], D] + [[[B, C], D], A] + [[[C, D], A], B] + [[[D, A], B], C]$$

If $A$ is a fixed element of a ring R, identity (1) can be interpreted as a Leibniz rule for the map $$\operatorname{ad}_A: R \rightarrow R$$ given by $$\operatorname{ad}_A(B) = [A, B]$$. In other words, the map adA defines a derivation on the ring R. Identities (2), (3) represent Leibniz rules for more than two factors, and are valid for any derivation. Identities (4)–(6) can also be interpreted as Leibniz rules. Identities (7), (8) express Z-bilinearity.

Some of the above identities can be extended to the anticommutator using the above ± subscript notation. For example:


 * 1) $$[AB, C]_- = A[B, C]_\mp \pm [A, C]_\mp B$$
 * 2) $$[AB, CD]_- = A[B, C]_\mp D \pm AC[B, D]_\mp + [A, C]_\mp DB \pm C[A, D]_\mp B$$
 * 3) $$\left[A, [B, C]_\pm\right] + \left[B, [C, A]_\pm\right] + \left[C, [A, B]_\pm\right] = 0$$
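
Since these identities hold in any associative algebra, they can be spot-checked with random matrices (a numpy sketch; a sample check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(2)
A, B, C = (rng.normal(size=(3, 3)) for _ in range(3))

def comm(X, Y):  return X @ Y - Y @ X    # commutator [X, Y]
def acomm(X, Y): return X @ Y + Y @ X    # anticommutator {X, Y}

# Jacobi identity (Lie-algebra identity 4)
assert np.allclose(comm(A, comm(B, C)) + comm(B, comm(C, A)) + comm(C, comm(A, B)), 0)
# Leibniz rule (additional identity 1): [A, BC] = [A, B]C + B[A, C]
assert np.allclose(comm(A, B @ C), comm(A, B) @ C + B @ comm(A, C))
# Mixed identity (1) with the lower signs: [AB, C] = A{B, C} - {A, C}B
assert np.allclose(comm(A @ B, C), A @ acomm(B, C) - acomm(A, C) @ B)
print("identities verified on a random sample")
```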

Linear groups
GLn(F) or GL(n, F), or simply GL(n), is the group of n×n invertible matrices with entries from the field F. The group operation is matrix multiplication. The group GL(n, F) and its subgroups are often called linear groups or matrix groups.


 * SL(n, F) or SLn(F) is the subgroup of GL(n, F) consisting of matrices with a determinant of 1.


 * U(n), the unitary group of degree n, is the group of n × n unitary matrices. The group operation is matrix multiplication. The determinant of a unitary matrix is a complex number with norm 1.


 * SU(n), the special unitary group of degree n, is the group of n × n unitary matrices with determinant 1.

Back to top

Symmetry groups

 * Poincaré group: boosts, rotations, translations


 * Lorentz group: boosts, rotations


 * The set of all boosts, however, does not form a subgroup, since composing two boosts does not, in general, result in another boost. (Rather, a pair of non-collinear boosts is equivalent to a boost and a rotation, and this relates to Thomas rotation.)

Aff(n,K): the affine group or general affine group of any affine space over a field K is the group of all invertible affine transformations from the space into itself.


 * E(n): rotations, reflections, and translations.


 * O(n): rotations, reflections





Clifford group: The set of invertible elements x such that for all v in V $$x v \alpha(x)^{-1}\in V .$$ The spinor norm Q is defined on the Clifford group by $$Q(x) = x^\mathrm{t}x.$$


 * PinV(K): The subgroup of elements of spinor norm 1. Maps 2-to-1 to the orthogonal group.


 * SpinV(K): The subgroup of elements of Dickson invariant 0 in PinV(K). When the characteristic is not 2, these are the elements of determinant 1. Maps 2-to-1 to the special orthogonal group. Elements of the spin group act as linear transformations on the space of spinors.

Back to top

Rotations





 * From Rotation group SO(3):

Consider the solid ball in R3 of radius π. For every point in this ball there is a rotation, with axis through the point and the origin, and rotation angle equal to the distance of the point from the origin. The two rotations through π and through −π are the same. So we identify (or "glue together") antipodal points on the surface of the ball.

The ball with antipodal surface points identified is a smooth manifold, and this manifold is diffeomorphic to the rotation group. It is also diffeomorphic to the real 3-dimensional projective space RP3, so the latter can also serve as a topological model for the rotation group.

These identifications illustrate that SO(3) is connected but not simply connected. As to the latter, consider the path running from the "north pole" straight through the interior down to the south pole. This is a closed loop, since the north pole and the south pole are identified. This loop cannot be shrunk to a point, since no matter how you deform the loop, the start and end point have to remain antipodal, or else the loop will "break open".



Surprisingly, if you run through the path twice, i.e., run from north pole down to south pole, jump back to the north pole (using the fact that north and south poles are identified), and then again run from north pole down to south pole, so that φ runs from 0 to 4π, you get a closed loop which can be shrunk to a single point: first move the paths continuously to the ball's surface, still connecting north pole to south pole twice. The second half of the path can then be mirrored over to the antipodal side without changing the path at all. Now we have an ordinary closed loop on the surface of the ball, connecting the north pole to itself along a great circle. This circle can be shrunk to the north pole without problems.

The same argument can be performed in general, and it shows that the fundamental group of SO(3) is the cyclic group of order 2. In physics applications, the non-triviality of the fundamental group allows for the existence of objects known as spinors, and is an important tool in the development of the spin–statistics theorem.

Back to top

Orientation entanglement

 * From Orientation entanglement

In three dimensions...the rotation group is not simply connected. Mathematically, one can tackle this problem by exhibiting the special unitary group SU(2), which is also the spin group in three Euclidean dimensions, as a double cover of SO(3).

SU(2) is the following group,


 * $$ \mathrm{SU}(2) = \left \{ \begin{pmatrix} \alpha&-\overline{\beta}\\ \beta & \overline{\alpha} \end{pmatrix}: \ \ \alpha,\beta\in\mathbf{C}, |\alpha|^2 + |\beta|^2 = 1\right \} ~,$$

where the overline denotes complex conjugation.
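
A quick numeric check that a matrix of this form is special unitary (numpy sketch; α and β are arbitrary values satisfying |α|² + |β|² = 1):

```python
import numpy as np

alpha, beta = 0.6 + 0.0j, 0.8j                 # |alpha|^2 + |beta|^2 = 1
M = np.array([[alpha, -np.conj(beta)],
              [beta,   np.conj(alpha)]])

assert np.allclose(M @ M.conj().T, np.eye(2))  # unitary: M M† = I
assert np.isclose(np.linalg.det(M), 1)         # determinant 1
print("M lies in SU(2)")
```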

For comparison: Using 2 × 2 complex matrices, the quaternion a + bi + cj + dk can be represented as



$$\begin{bmatrix} a+bi & c+di \\ -(c-di) & a-bi \end{bmatrix}.$$

If X = (x1,x2,x3) is a vector in R3, then we identify X with the 2 × 2 matrix with complex entries


 * $$X=\left(\begin{matrix}x_1&x_2-ix_3\\x_2+ix_3&-x_1\end{matrix}\right)$$

Note that −det(X) gives the square of the Euclidean length of X regarded as a vector, and that X is a Hermitian matrix, or better, a trace-zero Hermitian matrix.

The unitary group acts on X via


 * $$X\mapsto MXM^\dagger$$

where M ∈ SU(2). Note that, since M is unitary,


 * $$\det(MXM^\dagger) = \det(X)$$, and
 * $$MXM^\dagger$$ is trace-zero Hermitian.

Hence SU(2) acts via rotation on the vectors X. Conversely, since any change of basis which sends trace-zero Hermitian matrices to trace-zero Hermitian matrices must be unitary, it follows that every rotation also lifts to SU(2). However, each rotation is obtained from a pair of elements M and −M of SU(2). Hence SU(2) is a double-cover of SO(3). Furthermore, SU(2) is easily seen to be itself simply connected by realizing it as the group of unit quaternions, a space homeomorphic to the 3-sphere.

A unit quaternion has the cosine of half the rotation angle as its scalar part and the sine of half the rotation angle multiplying a unit vector along some rotation axis (here assumed fixed) as its pseudovector (or axial vector) part. If the initial orientation of a rigid body (with unentangled connections to its fixed surroundings) is identified with a unit quaternion having a zero pseudovector part and +1 for the scalar part, then after one complete rotation (2π rad) the pseudovector part returns to zero and the scalar part has become −1 (entangled). After two complete rotations (4π rad) the pseudovector part again returns to zero and the scalar part returns to +1 (unentangled), completing the cycle.
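
The sign flip after one full turn can be traced directly with half-angle unit quaternions (a sketch; the (scalar, 3-vector) pair representation is a choice made here for illustration):

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of quaternions given as (scalar, 3-vector) pairs."""
    s1, v1 = p
    s2, v2 = q
    return (s1 * s2 - np.dot(v1, v2),
            s1 * v2 + s2 * v1 + np.cross(v1, v2))

def rotor(angle, axis):
    """Unit quaternion cos(θ/2) + sin(θ/2)·axis for rotation by θ."""
    return (np.cos(angle / 2), np.sin(angle / 2) * np.asarray(axis, float))

start = (1.0, np.zeros(3))            # unentangled: scalar part +1
turn = rotor(2 * np.pi, [0, 0, 1])    # one complete rotation about z

after_one = quat_mul(turn, start)
after_two = quat_mul(turn, after_one)
print(round(after_one[0]))   # -1 : entangled after 2π
print(round(after_two[0]))   # +1 : back to +1 after 4π
```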

Back to top

Rotors
From Rotor

A rotor is an object in geometric algebra (a Clifford algebra) that represents a rotation about the origin. Rotors are normally motivated by considering an even number of reflections, which generate rotations (see also the Cartan–Dieudonné theorem).

The term originated with William Kingdon Clifford, in showing that the quaternion algebra is just a special case of Hermann Grassmann's "theory of extension" (Ausdehnungslehre). Hestenes defined a rotor to be any element $$R$$ of a geometric algebra that can be written as the product of an even number of unit vectors and satisfies $$\tilde{R} R = 1$$, where $$\tilde{R}$$ is the "reverse" of $$R$$—that is, the product of the same vectors, but in reverse order.

Using inverse
Reflections in geometric algebra may be represented as (minus) sandwiching a multivector M between a vector v perpendicular to the hyperplane of reflection and that vector's inverse v−1:


 * $$-v_1Mv_1^{-1}$$

Rotors are of even grade, and the composition of two reflections is a rotation:


 * $$v_2 v_1Mv_1^{-1} v_2^{-1}$$

Multiplying a vector by a vector ($$v_2 v_1$$) results in an even-grade element: a scalar plus a bivector. When used in this manner we call these even elements rotors.

Under a rotation generated by the rotor R, a general multivector M will transform double-sidedly as


 * $$RMR^{-1}.$$

The formulation above (using inverse) is self-normalizing.

Using reverse
For a Euclidean space, where unit vectors square to 1, it may be convenient to consider an alternative formulation, and some authors define the operation of reflection as (minus) the sandwiching of a multivector between unit (i.e. normalized) vectors:


 * $$-vMv, \quad v^2=1 ,$$

forming rotors that are automatically normalised:


 * $$RR^{\dagger}=R^{\dagger}R=1 .$$

The derived rotor action is then expressed as a sandwich product with the reverse:


 * $$RMR^{\dagger}$$

For a reflection for which the associated vector squares to a negative scalar, as may be the case with a space of mixed signature, such a vector can only be normalized up to the sign of its square, and additional bookkeeping of the sign of applying the rotor becomes necessary. The formulation in terms of the sandwich product with the inverse as above suffers no such shortcoming.

The composition of two consecutive rotations is given by


 * $$R_2R_1MR_1^{\dagger}R_2^{\dagger} = R_2R_1M(R_1R_2)^{\dagger}$$

The alternative formulation above (using reverse) is not self-normalizing and motivates the definition of a spinor in geometric algebra as an object that transforms single-sidedly – i.e. spinors may be regarded as non-normalised rotors in which the reverse rather than the inverse is used in the sandwich product.

Spinors


External link: An introduction to spinors

Spinors generalize the idea of rotors (using reverse). They may be regarded as non-normalised rotors which transform single-sidedly.

Note: The (real) even-graded elements in three dimensions are the quaternions, and the action of an even-graded element on a spinor is given by ordinary quaternionic multiplication.

A spinor transforms to its negative when the space is rotated through a complete turn from 0° to 360°. This property characterizes spinors.

Back to top

Spinors in three dimensions

 * From Spinors in three dimensions

The association of a spinor with a 2×2 complex Hermitian matrix was formulated by Élie Cartan.

In detail, given a vector x = (x1, x2, x3) of real (or complex) numbers, one can associate the complex matrix
 * $$\vec{x} \rightarrow X \ =\left(\begin{matrix}x_3&x_1-ix_2\\x_1+ix_2&-x_3\end{matrix}\right).$$

Matrices of this form have the following properties, which relate them intrinsically to the geometry of 3-space:
 * det X = −(length x)².
 * X² = (length x)²I, where I is the identity matrix.
 * $$\frac{1}{2}(XY+YX)=({\bold x}\cdot{\bold y})I$$
 * $$\frac{1}{2}(XY-YX)=iZ$$ where Z is the matrix associated to the cross product z = x × y.
 * If u is a unit vector, then −UXU is the matrix associated to the vector obtained from x by reflection in the plane orthogonal to u.
 * It is an elementary fact from linear algebra that any rotation in 3-space factors as a composition of two reflections. (Similarly, any orientation-reversing orthogonal transformation is either a reflection or the product of three reflections.) Thus if R is a rotation, decomposing as the reflection in the plane perpendicular to a unit vector u1 followed by the plane perpendicular to u2, then the matrix U2U1XU1U2 represents the rotation of the vector x through R. These properties are spot-checked numerically below.
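
A numpy sketch of the first two properties, using the matrix map defined above (a sample check for one vector):

```python
import numpy as np

def X(x):
    """Map a real 3-vector to the associated trace-zero Hermitian 2x2 matrix."""
    x1, x2, x3 = x
    return np.array([[x3, x1 - 1j * x2],
                     [x1 + 1j * x2, -x3]])

x = np.array([1.0, 2.0, 2.0])                       # length 3
Xm = X(x)

assert np.isclose(np.linalg.det(Xm), -x @ x)        # det X = -(length x)^2
assert np.allclose(Xm @ Xm, (x @ x) * np.eye(2))    # X^2 = (length x)^2 I
assert np.isclose(np.trace(Xm), 0) and np.allclose(Xm, Xm.conj().T)
print("Cartan's matrix properties hold")
```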

Having effectively encoded all of the rotational linear geometry of 3-space into a set of complex 2×2 matrices, it is natural to ask what role, if any, the 2×1 matrices (i.e., the column vectors) play. Provisionally, a spinor is a column vector
 * $$\xi=\left[\begin{matrix}\xi_1\\\xi_2\end{matrix}\right],$$ with complex entries ξ1 and ξ2.

The space of spinors is evidently acted upon by complex 2×2 matrices. Furthermore, the product of two reflections in a given pair of unit vectors defines a 2×2 matrix whose action on Euclidean vectors is a rotation, so there is an action of rotations on spinors.

Often, the first example of spinors that a student of physics encounters are the 2×1 spinors used in Pauli's theory of electron spin. The Pauli matrices are a vector of three 2×2 complex matrices that are used as spin operators.

Given a unit vector in 3 dimensions, for example (a, b, c), one takes a dot product with the Pauli spin matrices to obtain a spin matrix for spin in the direction of the unit vector.

The eigenvectors of that spin matrix are the spinors for spin-1/2 oriented in the direction given by the vector.

Example: u = (0.8, -0.6, 0) is a unit vector. Dotting this with the Pauli spin matrices gives the matrix:



$$S_u = (0.8,-0.6,0.0)\cdot \vec{\sigma}=0.8 \sigma_{1}-0.6\sigma_{2}+0.0\sigma_{3} = \begin{bmatrix} 0.0 & 0.8+0.6i \\ 0.8-0.6i & 0.0 \end{bmatrix}$$

The eigenvectors may be found by the usual methods of linear algebra, but a convenient trick is to note that a Pauli spin matrix is an involutory matrix, that is, the square of the above matrix is the identity matrix.

Thus a (matrix) solution to the eigenvector problem with eigenvalues of ±1 is simply 1 ± Su. That is,



$$S_u (1\pm S_u) = \pm 1 (1 \pm S_u)$$

One can then choose either of the columns of the eigenvector matrix as the vector solution, provided that the column chosen is not zero. Taking the first column of the above, eigenvector solutions for the two eigenvalues are:



$$\begin{bmatrix} 1.0+ (0.0)\\ 0.0 +(0.8-0.6i) \end{bmatrix}, \begin{bmatrix} 1.0- (0.0)\\ 0.0-(0.8-0.6i) \end{bmatrix}$$

The trick used to find the eigenvectors is related to the concept of ideals, that is, the matrix eigenvectors (1 ± Su)/2 are projection operators or idempotents and therefore each generates an ideal in the Pauli algebra. The same trick works in any Clifford algebra, in particular the Dirac algebra. These projection operators are also seen in density matrix theory where they are examples of pure density matrices.

More generally, the projection operator for spin in the (a, b, c) direction is given by
 * $$\frac{1}{2}\begin{bmatrix}1+c&a-ib\\a+ib&1-c\end{bmatrix}$$

and any non-zero column can be taken as the projection operator. While the two columns appear different, one can use a² + b² + c² = 1 to show that they are multiples (possibly zero) of the same spinor.
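
The worked example above is easy to replay numerically (a numpy sketch):

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

Su = 0.8 * s1 - 0.6 * s2 + 0.0 * s3     # spin matrix for u = (0.8, -0.6, 0)
assert np.allclose(Su @ Su, np.eye(2))  # involutory: Su^2 = I

# 1 ± Su solve the eigenvector problem: Su (1 ± Su) = ±(1 ± Su)
for sign in (+1, -1):
    P = np.eye(2) + sign * Su
    assert np.allclose(Su @ P, sign * P)
    print(sign, P[:, 0])                # first column as the eigen-spinor
```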

Back to top

Transforms like a tensor

 * From Tensor:

When changing from one orthonormal basis (called a frame) to another by a rotation, the components of a tensor transform by that same rotation. This transformation does not depend on the path taken through the space of frames. However, the space of frames is not simply connected (see orientation entanglement and the plate trick): there are continuous paths in the space of frames with the same beginning and ending configurations that are not deformable one into the other. It is possible to attach an additional discrete invariant to each frame that incorporates this path dependence, and which turns out (locally) to have values of ±1. A spinor is an object that transforms like a tensor under rotations in the frame, apart from a possible sign that is determined by the value of this discrete invariant.

Succinctly, spinors are elements of the spin representation of the rotation group, while tensors are elements of its tensor representations. Other classical groups have tensor representations, and so also tensors that are compatible with the group, but all non-compact classical groups have infinite-dimensional unitary representations as well.

Back to top

Column vectors

 * From Spinor:

"Quote from Elie Cartan: The Theory of Spinors, Hermann, Paris, 1966: "Spinors...provide a linear representation of the group of rotations in a space with any number $n$ of dimensions, each spinor having $2^\nu$ components where $n = 2\nu+1$ or $2\nu$." The star (*) refers to Cartan 1913."

(Note: $$\nu$$ is the number of independent rotation planes an object can have in n dimensions.)

Although spinors can be defined purely as elements of a representation space of the spin group (or its Lie algebra of infinitesimal rotations), they are typically defined as elements of a vector space that carries a linear representation of the Clifford algebra. The Clifford algebra is an associative algebra that can be constructed from Euclidean space and its inner product in a basis independent way. Both the spin group and its Lie algebra are embedded inside the Clifford algebra in a natural way, and in applications the Clifford algebra is often the easiest to work with. After choosing an orthonormal basis of Euclidean space, a representation of the Clifford algebra is generated by gamma matrices, matrices that satisfy a set of canonical anti-commutation relations. The spinors are the column vectors on which these matrices act. In three Euclidean dimensions, for instance, the Pauli spin matrices are a set of gamma matrices, and the two-component complex column vectors on which these matrices act are spinors. However, the particular matrix representation of the Clifford algebra, hence what precisely constitutes a "column vector" (or spinor), involves the choice of basis and gamma matrices in an essential way. As a representation of the spin group, this realization of spinors as (complex) column vectors will either be irreducible if the dimension is odd, or it will decompose into a pair of so-called "half-spin" or Weyl representations if the dimension is even.

In three Euclidean dimensions, for instance, spinors can be constructed by making a choice of Pauli spin matrices corresponding to (angular momenta about) the three coordinate axes. These are 2×2 matrices with complex entries, and the two-component complex column vectors on which these matrices act by matrix multiplication are the spinors. In this case, the spin group is isomorphic to the group of 2×2 unitary matrices with determinant one, which naturally sits inside the matrix algebra. This group acts by conjugation on the real vector space spanned by the Pauli matrices themselves, realizing it as a group of rotations among them, but it also acts on the column vectors (that is, the spinors).

Back to top

Electron spin

 * From Spinor:

In the 1920s physicists discovered that spinors are essential to describe the intrinsic angular momentum, or "spin", of the electron and other subatomic particles. More precisely, it is the fermions of spin-1/2 that are described by spinors, which is true both in the relativistic and non-relativistic theory. The wavefunction of the non-relativistic electron has values in 2-component spinors transforming under three-dimensional infinitesimal rotations. The relativistic Dirac equation for the electron is an equation for 4-component spinors transforming under infinitesimal Lorentz transformations, for which a substantially similar theory of spinors exists.

Back to top

Equivalence relation
From Equivalence relation:

An equivalence relation on a set (written here as =) has the following properties:


 * $a = a$ (reflexive property),
 * if $a = b$ then $b = a$ (symmetric property), and
 * if $a = b$ and $b = c$ then $a = c$ (transitive property).

As a consequence of the reflexive, symmetric, and transitive properties, any equivalence relation provides a partition of the underlying set into disjoint equivalence classes. Two elements of the given set are equivalent to each other if and only if they belong to the same equivalence class.

Back to top

Modular arithmetic
From Modular arithmetic :

Modular arithmetic can be handled mathematically by introducing a congruence relation on the integers that is compatible with addition, subtraction, and multiplication. For a positive integer n, two integers a and b are said to be congruent modulo n, written


 * $$a \equiv b \pmod n.$$




This means that there are integers p, q, and a common remainder r, with 0 ≤ r < n, such that
 * $$a = pn + r,$$
 * $$b = qn + r.$$

The number $n$ is called the modulus of the congruence.

The equivalence class consisting of the integers congruent to a modulo n, is called the congruence class or residue class.
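
A small sketch of congruence and residue classes in Python (illustrative):

```python
n = 5

def congruent(a, b, n):
    """a ≡ b (mod n) iff n divides a - b."""
    return (a - b) % n == 0

print(congruent(17, 2, n))   # True: 17 = 3*5 + 2 and 2 = 0*5 + 2

# The congruence class of 2 modulo 5 is 2 + 5Z (shown here truncated):
print([a for a in range(-10, 11) if congruent(a, 2, n)])   # [-8, -3, 2, 7]
```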

Back to top

Quadratic reciprocity


From Quadratic reciprocity:

Consider the polynomial $$f(n) = n^2 - 5$$ and its values for $$n \in \N,$$ and look at the prime factorizations of those values.

The prime factors $$p$$ dividing $$f(n)$$ are $$p=2,5$$, and every prime whose final digit is $$1$$ or $$9$$; no primes ending in $$3$$ or $$7$$ ever appear.

Now, $$p$$ is a prime factor of $$n^2-5$$ whenever $$n^2 - 5 \equiv 0\ (\text{mod }p)$$


 * In other words, whenever $$n^2 \equiv 5 \ (\text{mod }p)$$.

In this case, this happens exactly when $$n^2 \equiv p \ (\text{mod }5)$$ for some $$n$$, i.e. when $$p$$ is itself a square modulo $$5$$.

The law of quadratic reciprocity gives a similar characterization of prime divisors of $$f(n)=n^2 - q$$ for any prime q, which leads to a characterization for any integer $$q$$.

{{math_theorem|name=Law of quadratic reciprocity|Let p and q be distinct odd prime numbers, and define the {{Wikipedia link|Legendre symbol}} as:


 * $$\left(\frac{q}{p}\right) = \left\{\begin{array}{rl} 1 & \text{if }\, n^2 \equiv q \pmod p\, \text{ for some integer } n,\\ 0 & \text{if } q \equiv 0 \pmod p, \\ -1 & \text{otherwise.} \end{array}\right.$$

Where:


 * $$ \left(\frac{q}{p}\right) \equiv q^{\frac{p-1}{2}} \pmod p $$

Then:
 * $$ \left(\frac{p}{q}\right) \left(\frac{q}{p}\right) = (-1)^{\frac{p-1}{2}\frac{q-1}{2}}.$$

}}

This law allows the easy calculation of whether there exists any integer solution for a quadratic equation of the form $$n^2\equiv q \, \pmod{p}$$ for p an odd prime. However, it gives no help at all for finding a specific solution; for this, other methods are required.
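
Euler's criterion above gives a direct way to compute the Legendre symbol, and the reciprocity law can then be spot-checked (a Python sketch):

```python
def legendre(q, p):
    """Legendre symbol (q/p) for an odd prime p, via Euler's criterion."""
    s = pow(q, (p - 1) // 2, p)
    return -1 if s == p - 1 else s        # map the residue p-1 back to -1

print(legendre(5, 11), legendre(5, 13))   # 1 -1  (4^2 = 16 ≡ 5 mod 11)

# Law of quadratic reciprocity for a few pairs of distinct odd primes:
for p, q in [(3, 5), (5, 11), (7, 13), (11, 19)]:
    lhs = legendre(p, q) * legendre(q, p)
    rhs = (-1) ** (((p - 1) // 2) * ((q - 1) // 2))
    assert lhs == rhs
print("reciprocity holds on these samples")
```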

Ideal
To understand the concept of an ideal, consider how ideals arise in the construction of rings of "elements modulo". For concreteness, let us look at the ring ℤn of integers modulo a given integer n ∈ ℤ (note that ℤ is a commutative ring). The key observation here is that we obtain ℤn by taking ℤ and wrapping it around itself so that n gets identified with 0 (since n is congruent to 0 modulo n). However, when doing so we must ensure that the resulting structure is again a ring. This requirement forces us to make some additional identifications. The notion of an ideal arises when we ask the question: "What is the exact set of integers that we are forced to identify with 0?" The answer is the set nℤ = {nm | m∈ℤ} of all integers congruent to 0 modulo n. That is, we must wrap ℤ around itself infinitely many times so that the integers ..., n ⋅ -2, n ⋅ -1, n ⋅ +1, n ⋅ +2, ... will all align with 0. If we look at what properties this set must satisfy in order to ensure that ℤn is a ring, then we arrive at the definition of an ideal. Indeed, one can directly verify that nℤ is an ideal of ℤ.

Remark. Identifications with elements other than 0 also need to be made. For example, the elements in 1 + nℤ must be identified with 1, the elements in 2 + nℤ must be identified with 2, and so on. Those, however, are uniquely determined by nℤ since ℤ is an additive group.

Back to top

Coset
Let G be the additive group of the integers, Z = ({..., −2, −1, 0, 1, 2, ...}, +) and H the subgroup (mZ, +) = ({..., −2m, −m, 0, m, 2m, ...}, +) where m is a positive integer. Then the cosets of H in G are the m sets mZ, mZ + 1, ..., mZ + (m − 1), where mZ + a = {..., −2m+a, −m+a, a, m+a, 2m+a, ...}. There are no more than m cosets, because mZ + m = m(Z + 1) = mZ. The coset (mZ + a, +) is the congruence class of a modulo m.

Another example of a coset comes from the theory of vector spaces. The elements (vectors) of a vector space form an abelian group under vector addition. It is not hard to show that subspaces of a vector space are subgroups of this group. For a vector space V, a subspace W, and a fixed vector a in V, the sets
 * $$\{x \in V \colon x = a + n, n \in W\}$$

are called affine subspaces, and are cosets (both left and right, since the group is abelian). In terms of vectors, these affine subspaces are all the "lines" or "planes" parallel to the subspace, which is a line or plane going through the origin.

Back to top

Polynomial ring
From Polynomial ring:



These operations are defined according to the ordinary rules for manipulating algebraic expressions. Specifically, if


 * $$p = p_0 + p_1 X + p_2 X^2 + \cdots + p_m X^m,$$
 * $$q = q_0 + q_1 X + q_2 X^2 + \cdots + q_n X^n,$$

then
 * $$p + q = r_0 + r_1 X + r_2 X^2 + \cdots + r_k X^k,$$
 * where
 * $$r_i=p_i+q_i$$
 * $$k = \max(m, n)$$

and
 * $$pq = s_0 + s_1 X + s_2 X^2 + \cdots + s_l X^l,$$
 * where
 * $$s_i=p_0 q_i + p_1 q_{i-1} + \cdots + p_i q_0.$$
 * $$l = m + n,$$

The scalar multiplication is
 * $$p_0(q_0 + q_1 X + \dots + q_nX^n) = p_0q_0 +(p_0q_1)X + \cdots + (p_0q_n)X^n$$

Gauss remarked that the procedure of division with remainder can also be defined for polynomials: given two polynomials p and q with q ≠ 0, one can write


 * $$ p = uq + r,$$

where the quotient u and the remainder r are polynomials, and the degree of r is strictly less than the degree of q. Moreover, a decomposition of this form is unique. The quotient and the remainder are found using polynomial long division.



$$\begin{array}{r} x^2 + \phantom{1}x + 3\\ x-3 \, \overline{) \, x^3 - 2x^2 + 0x - 4}\\ \underline{x^3 - 3x^2 {\color{White} {} + 0x - 4}}\\ x^2 + 0x {\color{White} {} - 4}\\ \underline{x^2 - 3x {\color{White} {} - 4}}\\ 3x - 4\\ \underline{3x - 9}\\ 5 \end{array}$$


 * therefore


 * $$x^3 - 2x^2 + 0x - 4 = (x^2 + \phantom{1}x + 3)(x-3) + 5$$
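
The worked division can be reproduced with sympy (a sketch):

```python
from sympy import symbols, div, expand

x = symbols('x')
p = x**3 - 2*x**2 - 4
q = x - 3

u, r = div(p, q, x)              # division with remainder: p = u*q + r
print(u, r)                      # x**2 + x + 3   5
assert expand(u*q + r) == expand(p)
```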

Using the existence of greatest common divisors, Gauss was able to simultaneously rigorously prove the fundamental theorem of arithmetic for integers and its generalization to polynomials.

For polynomials over the integers, over the rational numbers, or over a finite field, there are efficient algorithms for computing the factorization that are implemented in computer algebra systems (see factorization of polynomials).

Many classes of rings, such as unique factorization domains, regular rings, group rings, rings of formal power series, Ore polynomials, and graded rings, are generalizations of polynomial rings.

Further reading: Polynomial ring

Back to top

Commutative algebra
From Commutative algebra :

Commutative algebra is essentially the study of the rings occurring in algebraic number theory and algebraic geometry.

The notion of localization of a ring (in particular the localization with respect to a prime ideal, the localization consisting in inverting a single element, and the total quotient ring) is one of the main differences between commutative algebra and the theory of non-commutative rings. It leads to an important class of commutative rings, the local rings, that have only one maximal ideal. The set of the prime ideals of a commutative ring is naturally equipped with a topology, the Zariski topology. All these notions are widely used in algebraic geometry and are the basic technical tools for the definition of scheme theory, a generalization of algebraic geometry introduced by Grothendieck.

Many other notions of commutative algebra are counterparts of geometrical notions occurring in algebraic geometry. This is the case of Krull dimension, primary decomposition, regular rings, Cohen–Macaulay rings, Gorenstein rings, and many other notions.

Back to top

localization of a ring
From Localization (commutative algebra)

In commutative algebra and algebraic geometry, localization is a formal way to introduce "denominators" to a given ring or module. That is, it introduces a new ring/module out of an existing one so that it consists of fractions $$\frac{m}{s},$$ such that the denominator s belongs to a given subset S of R. If S is the set of the non-zero elements of an integral domain, then the localization is the field of fractions: this case generalizes the construction of the field Q of rational numbers from the ring Z of rational integers.

The technique has become fundamental, particularly in algebraic geometry, as it provides a natural link to sheaf theory. In fact, the term localization originated in algebraic geometry: if R is a ring of functions defined on some geometric object V, and one wants to study this variety "locally" near a point p, then one considers the set S of all functions which are not zero at p and localizes R with respect to S. The resulting ring R* contains only information about the behavior of V near p (cf. the example given at local ring).

An important related process is completion: one often localizes a ring/module, then completes.

Back to top

Completion of a ring
From Completion of a ring

In abstract algebra, a completion is any of several related functors on rings and modules that result in complete topological rings and modules. Completion is similar to localization, and together they are among the most basic tools in analysing commutative rings. Complete commutative rings have a simpler structure than general ones, and Hensel's lemma applies to them. In algebraic geometry, a completion of a ring of functions R on a space X concentrates on a formal neighborhood of a point of X: heuristically, this is a neighborhood so small that all Taylor series centered at the point are convergent. An algebraic completion is constructed in a manner analogous to completion of a metric space with Cauchy sequences, and agrees with it in case R has a metric given by a non-Archimedean absolute value.

Einstein notation
From Einstein notation:

According to this convention, when an index variable appears twice in a single term and is not otherwise defined (see free and bound variables), it implies summation of that term over all the values of the index. So where the indices can range over the set $$\{1, 2, 3\}$$,


 * $$y = \sum_{i = 1}^3 c_i x^i = c_1 x^1 + c_2 x^2 + c_3 x^3$$

is simplified by the convention to:


 * $$y = c_i x^i$$.

Any tensor $$\mathbf{T}$$ in $$V \otimes V$$ can be written as:


 * $$\mathbf{T} = T^{ij}\mathbf{e}_{ij}$$.

The matrix product of two matrices $$\mathbf{A}$$ and $$\mathbf{B}$$ is:


 * $$ \mathbf{C}_{ik} = (\mathbf{A} \mathbf{B})_{ik} =\sum_{j=1}^N A_{ij} B_{jk}$$

equivalent to


 * $$C^i{}_k = A^i{}_j B^j{}_k $$
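
numpy's einsum implements exactly this convention: repeated indices in the subscript string are summed (a sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 5))
B = rng.normal(size=(5, 6))

# C^i_k = A^i_j B^j_k : the repeated index j is summed over
C = np.einsum('ij,jk->ik', A, B)
assert np.allclose(C, A @ B)

# y = c_i x^i : a fully repeated index contracts to a scalar
c, xv = rng.normal(size=3), rng.normal(size=3)
assert np.isclose(np.einsum('i,i->', c, xv), c @ xv)
```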

Back to top

Metric contraction
From Covariance and contravariance of vectors

Vector components are contravariant; upper indices represent components of vectors:


 * $$\mathbf{v} = v^i e_i = \begin{bmatrix}e_1 & e_2 & \cdots & e_n\end{bmatrix} \begin{bmatrix}v^1 \\ v^2 \\ \vdots \\ v^n\end{bmatrix} \qquad$$

Covector components are covariant; lower indices represent components of covectors:


 * $$ \mathbf{c} = c_i e^i = \begin{bmatrix}c_1 & c_2 & \cdots & c_n\end{bmatrix} \begin{bmatrix}e^1 \\ e^2 \\ \vdots \\ e^n\end{bmatrix}$$

We ordinarily think of the product of a vector and a covector as a scalar. In Einstein notation:


 * $$\mathbf{v} \cdot \mathbf{c} = v^{\color{red}i} c_{\color{red}i} = v^1 c_1 + v^2 c_2 + v^3 c_3 $$

This is an example of tensor contraction. The important thing to note is that by the Einstein summation convention when both vectors use the same index i then tensor contraction occurs.

But the product of a vector and a covector can also be a (1,1) tensor. In Einstein notation:


 * $$A^i{}_j = v^{\color{red}i} c_{\color{red}j} = (v c)^i{}_j$$

The important thing to note is that by the Einstein summation convention when the vectors use different indices i and j then there is no tensor contraction.

From Tensor contraction

Contraction on a pair of indices that are either both contravariant or both covariant is not possible in general. However, in the presence of an inner product (also known as a metric) g, such contractions are possible. One uses the metric to raise or lower one of the indices, as needed, and then one uses the usual operation of contraction. The combined operation is known as metric contraction.


 * $$g^{ij}A_j = \cancel{g}^{i \cancel{j}}A_\cancel{j} = A^i\,,$$

Metric contraction can be used to find the dot product of 2 vectors

$$\mathbf{u} \cdot \mathbf{v} = \begin{bmatrix} u^1 & u^2 & u^3 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\   0 & 1 & 0 \\    0 & 0 & 1 \\  \end{bmatrix} \begin{bmatrix} v^1 \\ v^2 \\ v^3 \end{bmatrix}= \begin{bmatrix} u^1 & u^2 & u^3 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}= u^1 v_1 + u^2 v_2 + u^3 v_3$$

Not only can the dot product be written using a (0,2) tensor, but the dot product is sometimes said to BE a (0,2) tensor: a map from two vectors to a scalar, or equivalently a map from a vector to a covector.
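
Lowering an index with the metric and then contracting can be written the same way (an einsum sketch; the Euclidean metric is assumed, as in the example above):

```python
import numpy as np

g = np.eye(3)                          # Euclidean metric g_ij
u = np.array([1.0, 2.0, 3.0])          # contravariant components u^i
v = np.array([4.0, 5.0, 6.0])          # contravariant components v^i

v_low = np.einsum('ij,j->i', g, v)     # lower an index: v_i = g_ij v^j
dot = np.einsum('i,i->', u, v_low)     # contract: u^i v_i
assert np.isclose(dot, u @ v)
print(dot)                             # 32.0
```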

Back to top

Calculus of variations


Whereas calculus is concerned with infinitesimal changes of variables, calculus of variations is concerned with infinitesimal changes of the underlying function itself.

Calculus of variations is a field of mathematical analysis that uses variations, which are small changes in functions and functionals, to find maxima and minima of functionals.

A simple example of such a problem is to find the curve of shortest length connecting two points. If there are no constraints, the solution is obviously a straight line between the points. However, if the curve is constrained to lie on a surface in space, then the solution is less obvious, and possibly many solutions may exist. Such solutions are known as geodesics. A related problem is posed by Fermat's principle: light follows the path of shortest optical length connecting two points, where the optical length depends upon the material of the medium. One corresponding concept in mechanics is the principle of least action.

Given a functional that you want to minimize, of the form


 * $$\int_a^b F(x,y(x),y'(x)) \, \mathrm{d}x$$

you then solve the corresponding Euler–Lagrange equation for y(x):


 * $$\frac{\partial F}{\partial y} - \frac{d}{dx} \frac{\partial F}{\partial y'} = 0$$
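
sympy can produce this equation directly. A sketch for the shortest-path functional with integrand $$F = \sqrt{1 + y'^2}$$, whose extremals should be straight lines:

```python
from sympy import Function, symbols, sqrt, simplify
from sympy.calculus.euler import euler_equations

x = symbols('x')
y = Function('y')

F = sqrt(1 + y(x).diff(x)**2)         # arc-length integrand
eq, = euler_equations(F, y(x), x)     # dF/dy - d/dx dF/dy' = 0
print(simplify(eq))                   # reduces to y''(x) = 0: straight lines
```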

Back to top

Coordinate-free
From Coordinate-free: A coordinate-free, or component-free, treatment of a scientific theory or mathematical topic develops its concepts on any form of manifold without reference to any particular coordinate system.

Coordinate-free treatments generally allow for simpler systems of equations and inherently constrain certain types of inconsistency, allowing greater mathematical elegance at the cost of some abstraction from the detailed formulae needed to evaluate these equations within a particular system of coordinates.

Coordinate-free treatments were the only available approach to geometry (and are now known as synthetic geometry) before the development of analytic geometry by Descartes. After several centuries of generally coordinate-based exposition, the modern tendency is generally to introduce students to coordinate-free treatments early on, and then to derive the coordinate-based treatments from the coordinate-free treatment, rather than vice versa.

Fields that are now often introduced with coordinate-free treatments include vector calculus, tensors, and differential geometry.

In physics, the existence of coordinate-free treatments of physical theories is a corollary of the principle of general covariance.

Back to top