Advanced mathematics


 * This article is a continuation of Intermediate mathematics

Equivalence relation
From Equivalence relation:

An equivalence relation is a generalization of the concept of "is equal to". It has the following properties:


 * $a = a$ (reflexive property),
 * if $a = b$ then $b = a$ (symmetric property), and
 * if $a = b$ and $b = c$ then $a = c$ (transitive property).

As a consequence of the reflexive, symmetric, and transitive properties, any equivalence relation provides a partition of the underlying set into disjoint equivalence classes. Two elements of the given set are equivalent to each other if and only if they belong to the same equivalence class.
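A relation on a finite set can be checked against these three axioms directly, and its equivalence classes recovered as a partition. A minimal sketch in Python (the "same parity" example relation is my own):

```python
# Check that a relation on a finite set is an equivalence relation,
# then recover its equivalence classes. "Same parity" on {0, ..., 5}
# is used as a hypothetical example relation.
S = range(6)
rel = lambda a, b: a % 2 == b % 2  # same parity

reflexive  = all(rel(a, a) for a in S)
symmetric  = all(rel(b, a) for a in S for b in S if rel(a, b))
transitive = all(rel(a, c) for a in S for b in S for c in S
                 if rel(a, b) and rel(b, c))
assert reflexive and symmetric and transitive

# The equivalence classes partition S into disjoint subsets:
# each element lands in exactly one class.
classes = []
for a in S:
    for cls in classes:
        if rel(a, next(iter(cls))):
            cls.add(a)
            break
    else:
        classes.append({a})

print(classes)  # [{0, 2, 4}, {1, 3, 5}]
```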


Modular arithmetic
From Modular arithmetic:

Modular arithmetic can be handled mathematically by introducing a congruence relation on the integers:


 * $$a \equiv b \pmod n.$$


 * Two numbers $a$ and $b$ are said to be congruent modulo $n$ if $a$ and $b$ have the same remainder when divided by the positive integer $n$:


 * $$a = pn + r,$$
 * $$b = qn + r,$$
 * where $0 \le r < n$ is the common remainder.

The number $n$ is called the modulus of the congruence.

The congruence relation satisfies all the conditions of an equivalence relation.

The equivalence class consisting of the integers congruent to $a$ modulo $n$ is called the congruence class or residue class of $a$ modulo $n$.
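As a quick illustration, congruence modulo $n$ can be tested by comparing remainders, and the residue classes partition the integers. A minimal Python sketch (the function name is my own):

```python
def congruent(a, b, n):
    """True when a ≡ b (mod n), i.e. a and b leave the same remainder."""
    return a % n == b % n

n = 5
assert congruent(13, 3, n)       # 13 = 2*5 + 3 and 3 = 0*5 + 3
assert not congruent(13, 4, n)

# Sort a window of integers into their residue classes mod n.
# Python's % always returns a remainder in {0, ..., n-1}.
classes = {r: [a for a in range(-10, 11) if a % n == r] for r in range(n)}
print(classes[3])  # [-7, -2, 3, 8]
```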


Quadratic reciprocity


From Quadratic reciprocity:

Consider the polynomial $$f(n) = n^2 - 5$$ and its values for $$n \in \N.$$ Examining the prime factorizations of these values reveals a pattern:

The prime factors $$p$$ dividing $$f(n)$$ are $$p=2,5$$, and every prime whose final digit is $$1$$ or $$9$$; no primes ending in $$3$$ or $$7$$ ever appear.

Now, $$p$$ is a prime factor of $$n^2-5$$ whenever $$n^2 - 5 \equiv 0\ (\text{mod }p)$$


 * In other words, whenever $$n^2 \equiv 5 \ (\text{mod }p)$$.

In this case, this happens precisely when $$m^2 \equiv p \ (\text{mod }5)$$ for some integer $$m$$, that is, when $$p \equiv \pm 1 \ (\text{mod }5)$$.

The law of quadratic reciprocity gives a similar characterization of prime divisors of $$f(n)=n^2 - q$$ for any prime q, which leads to a characterization for any integer $$q$$.

{{math_theorem|name=Law of quadratic reciprocity|Let p and q be distinct odd prime numbers, and define the {{Link|Legendre symbol}} as:


 * $$\left(\frac{q}{p}\right) = \left\{\begin{array}{rl} 1 & \text{if }\, n^2 \equiv q \pmod p\, \text{ for some integer } n,\\ 0 & \text{if } q \equiv 0 \pmod p, \\ -1 & \text{otherwise.} \end{array}\right.$$

Where:


 * $$ \left(\frac{q}{p}\right) \equiv q^{\frac{p-1}{2}} \pmod p $$

Then:
 * $$ \left(\frac{p}{q}\right) \left(\frac{q}{p}\right) = (-1)^{\frac{p-1}{2}\frac{q-1}{2}}.$$

}}

This law allows the easy calculation of whether there exists any integer solution $n$ for a quadratic equation of the form $$n^2\equiv q \, \pmod{p}$$ for p an odd prime. However, it gives no help at all for finding a specific solution; for this, one uses other methods, such as the Tonelli–Shanks algorithm.
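The Legendre symbol is straightforward to compute with the congruence $$\left(\frac{q}{p}\right) \equiv q^{\frac{p-1}{2}} \pmod p$$ quoted above (Euler's criterion), which makes the reciprocity law checkable numerically. A minimal Python sketch (the helper name `legendre` is my own):

```python
def legendre(q, p):
    """Legendre symbol (q/p) for an odd prime p via Euler's criterion:
    (q/p) ≡ q^((p-1)/2) (mod p), mapped to the representatives {-1, 0, 1}."""
    r = pow(q, (p - 1) // 2, p)  # three-argument pow: modular exponentiation
    return r - p if r == p - 1 else r

# 5 is a square mod 11 (4^2 = 16 ≡ 5), but not mod 13.
assert legendre(5, 11) == 1
assert legendre(5, 13) == -1

# Check the law of quadratic reciprocity for a few odd prime pairs.
primes = [3, 5, 7, 11, 13]
for p in primes:
    for q in primes:
        if p != q:
            assert (legendre(p, q) * legendre(q, p)
                    == (-1) ** ((p - 1) // 2 * ((q - 1) // 2)))
```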

Polynomial ring
From Polynomial ring:

The polynomial ring, K[X], in X over a field K is defined as the set of expressions, called polynomials in X, of the form


 * $$p = p_0 + p_1 X + p_2 X^2 + \cdots + p_{m - 1} X^{m - 1} + p_m X^m,$$
 * where $p_0, p_1, \dots, p_m$, the coefficients of $p$, are elements of $K$.

The polynomial ring in X over K is equipped with an addition, a multiplication and a scalar multiplication that make it a commutative algebra. These operations are defined according to the ordinary rules for manipulating algebraic expressions. Specifically, if


 * $$p = p_0 + p_1 X + p_2 X^2 + \cdots + p_m X^m,$$
 * $$q = q_0 + q_1 X + q_2 X^2 + \cdots + q_n X^n,$$

then
 * $$p + q = r_0 + r_1 X + r_2 X^2 + \cdots + r_k X^k,$$
 * where
 * $$r_i = p_i + q_i \quad \text{(with } p_i = 0 \text{ for } i > m \text{ and } q_i = 0 \text{ for } i > n\text{)}$$
 * $$k = \max(m, n)$$

and
 * $$pq = s_0 + s_1 X + s_2 X^2 + \cdots + s_l X^l,$$
 * where
 * $$s_i=p_0 q_i + p_1 q_{i-1} + \cdots + p_i q_0.$$
 * $$l = m + n,$$

The scalar multiplication is
 * $$p_0(q_0 + q_1 X + \cdots + q_n X^n) = (p_0 q_0) + (p_0 q_1)X + \cdots + (p_0 q_n)X^n$$

It is easy to verify that these three operations satisfy the axioms of a commutative algebra. Therefore, polynomial rings are also called polynomial algebras.
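Representing a polynomial by its list of coefficients $[p_0, p_1, \dots, p_m]$, the addition and multiplication rules above translate directly into code. A minimal sketch in Python (helper names are my own):

```python
def poly_add(p, q):
    """r_i = p_i + q_i with k = max(m, n); absent coefficients count as 0."""
    k = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(k)]

def poly_mul(p, q):
    """s_i = p_0*q_i + p_1*q_{i-1} + ... + p_i*q_0, with l = m + n."""
    s = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            s[i + j] += pi * qj
    return s

# (1 + 2X)(3 + X) = 3 + 7X + 2X^2
assert poly_mul([1, 2], [3, 1]) == [3, 7, 2]
assert poly_add([1, 2], [3, 1, 4]) == [4, 3, 4]
```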

Gauss remarked that the procedure of division with remainder can also be defined for polynomials: given two polynomials p and q with q ≠ 0, one can write


 * $$ p = uq + r,$$

where the quotient u and the remainder r are polynomials, and the degree of r is strictly less than the degree of q. Moreover, a decomposition of this form is unique. The quotient and the remainder are found using polynomial long division.



 * $$\begin{array}{r} x^2 + \phantom{1}x + 3\\ x-3 \, \overline{) \, x^3 - 2x^2 + 0x - 4}\\ \underline{x^3 - 3x^2 {\color{White} {} + 0x - 4}}\\ x^2 + 0x {\color{White} {} - 4}\\ \underline{x^2 - 3x {\color{White} {} - 4}}\\ 3x - 4\\ \underline{3x - 9}\\ 5 \end{array}$$


 * therefore


 * $$x^3 - 2x^2 + 0x - 4 = (x^2 + x + 3)(x-3) + 5$$
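The division step can be sketched in Python as the usual long-division loop: repeatedly cancel the leading term of the remainder. The helper below (name and representation are my own; polynomials are coefficient lists in increasing degree) reproduces the worked example above:

```python
def poly_divmod(p, q):
    """Return (u, r) with p = u*q + r and deg(r) < deg(q).
    Polynomials are coefficient lists in increasing degree."""
    r = list(p)
    u = [0] * max(len(p) - len(q) + 1, 1)
    while len(r) >= len(q) and any(r):
        d = len(r) - len(q)          # degree gap between r and q
        c = r[-1] / q[-1]            # coefficient that cancels the leading term
        u[d] = c
        for i, qi in enumerate(q):
            r[i + d] -= c * qi
        while r and r[-1] == 0:      # drop the (now zero) leading terms
            r.pop()
    return u, r

# x^3 - 2x^2 + 0x - 4 = (x^2 + x + 3)(x - 3) + 5
u, r = poly_divmod([-4, 0, -2, 1], [-3, 1])
assert u == [3, 1, 1] and r == [5]
```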

Using the existence of greatest common divisors, Gauss was able to simultaneously rigorously prove the fundamental theorem of arithmetic for integers and its generalization to polynomials.

For polynomials over the integers, over the rational numbers, or over a finite field, there are efficient algorithms for computing the factorization that are implemented in computer algebra systems (see factorization of polynomials).


Einstein notation
From Einstein notation:

According to this convention, when an index variable appears twice in a single term and is not otherwise defined (see free and bound variables), it implies summation of that term over all the values of the index. So where the indices can range over the set $\{1, 2, 3\}$,


 * $$y = \sum_{i = 1}^3 c_i x^i = c_1 x^1 + c_2 x^2 + c_3 x^3$$

is simplified by the convention to:


 * $$y = c_i x^i$$.

Any tensor $\mathbf{T}$ in $V \otimes V$ can be written as:


 * $$\mathbf{T} = T^{ij}\mathbf{e}_{ij}$$.

The matrix product of two matrices $\mathbf{A}$ and $\mathbf{B}$ is:


 * $$ \mathbf{C}_{ik} = (\mathbf{A} \mathbf{B})_{ik} =\sum_{j=1}^N A_{ij} B_{jk}$$

equivalent to


 * $$C^i{}_k = A^i{}_j B^j{}_k $$
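The contraction $$C^i{}_k = A^i{}_j B^j{}_k$$ can be spelled out in a few lines of plain Python; the repeated index j becomes the summed inner loop (the matrices and size N are illustrative):

```python
# Matrix product written as the Einstein-notation contraction
# C^i_k = A^i_j B^j_k: the repeated index j is summed over.
A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]
N = 2

C = [[sum(A[i][j] * B[j][k] for j in range(N))  # implicit sum over j
      for k in range(N)]
     for i in range(N)]

print(C)  # [[19, 22], [43, 50]]
```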


Metric contraction
From Covariance and contravariance of vectors

Vectors are written as linear combinations of the basis vectors $e_i$. Upper indices represent components of vectors (contravariant):


 * $$\mathbf{v} = v^i e_i = \begin{bmatrix}e_1 & e_2 & \cdots & e_n\end{bmatrix} \begin{bmatrix}v^1 \\ v^2 \\ \vdots \\ v^n\end{bmatrix} \qquad$$

Covectors are written as linear combinations of the dual basis covectors $e^i$. Lower indices represent components of covectors (covariant):


 * $$ \mathbf{c} = c_i e^i = \begin{bmatrix}c_1 & c_2 & \cdots & c_n\end{bmatrix} \begin{bmatrix}e^1 \\ e^2 \\ \vdots \\ e^n\end{bmatrix}$$

We ordinarily think of the product of a vector and a covector as a scalar. In Einstein notation:


 * $$\mathbf{v} \cdot \mathbf{c} = v^{\color{red}i} c_{\color{red}i} = v^1 c_1 + v^2 c_2 + v^3 c_3 $$

This is an example of tensor contraction. The important thing to note is that by the Einstein summation convention when both vectors use the same index i then tensor contraction occurs.

But the product of a vector and a covector can also be a (1,1) tensor. In Einstein notation:


 * $$A^i{}_j = v^{\color{red}i} c_{\color{red}j} = (v c)^i{}_j$$

The important thing to note is that by the Einstein summation convention when the vectors use different indices i and j then there is no tensor contraction.

From Tensor contraction

Contraction on a pair of indices that are either both contravariant or both covariant is not possible in general. However, in the presence of an inner product (also known as a metric) g, such contractions are possible. One uses the metric to raise or lower an index as needed, and then one uses the usual operation of contraction. The combined operation is known as metric contraction.


 * $$g^{ij}A_j = \cancel{g}^{i \cancel{j}}A_\cancel{j} = A^i\,,$$

Metric contraction can be used to find the dot product of two vectors. With the Euclidean metric $$g_{ij} = \delta_{ij}$$:

$$\mathbf{u} \cdot \mathbf{v} = u^i g_{ij} v^j = \begin{bmatrix} u^1 & u^2 & u^3 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} v^1 \\ v^2 \\ v^3 \end{bmatrix} = \begin{bmatrix} u^1 & u^2 & u^3 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = u^1 v_1 + u^2 v_2 + u^3 v_3$$

Not only can the dot product be written using a (0,2) tensor, but the dot product is sometimes said to be a (0,2) tensor, which can be viewed as a map from two vectors to a scalar or as a map from a vector to a covector.
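The two-step recipe above, lower an index with the metric and then contract, is easy to spell out in code. A sketch in plain Python (the helper names are my own), using the Euclidean metric and, for contrast, a non-trivial diagonal metric:

```python
def lower(g, v):
    """v_i = g_ij v^j: use the metric to lower the index of a vector."""
    n = len(v)
    return [sum(g[i][j] * v[j] for j in range(n)) for i in range(n)]

def contract(u, c):
    """u^i c_i: ordinary contraction of a vector with a covector."""
    return sum(ui * ci for ui, ci in zip(u, c))

u = [1, 2, 3]
v = [4, 5, 6]

# With the Euclidean metric, metric contraction is the usual dot product.
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert contract(u, lower(identity, v)) == 32   # 1*4 + 2*5 + 3*6

# A different metric gives a different inner product.
g = [[2, 0, 0], [0, 1, 0], [0, 0, 1]]
assert contract(u, lower(g, v)) == 36          # 1*8 + 2*5 + 3*6
```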


Geometric calculus


From Geometric calculus:

Geometric calculus extends the geometric algebra to include differentiation and integration. The formalism is powerful and can be shown to encompass other mathematical theories including differential geometry and differential forms.

With a geometric algebra given, let a and b be vectors and let F(a) be a multivector-valued function of a vector. The directional derivative of F(a) along b is defined as


 * $$\nabla_b F(a) = \lim_{\epsilon \rightarrow 0}{\frac{F(a + \epsilon b) - F(a)}{\epsilon}}$$

provided that the limit exists, where the limit is taken for scalar ε. This is similar to the usual definition of a directional derivative but extends it to functions that are not necessarily scalar-valued.
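For intuition, the limit can be approximated numerically with a small scalar ε. A sketch in Python for a vector-valued (rather than fully multivector-valued) function; the helper name and example function are my own:

```python
def directional_derivative(F, a, b, eps=1e-6):
    """Finite-difference approximation of the limit
    (F(a + eps*b) - F(a)) / eps, applied componentwise."""
    fa = F([ai + eps * bi for ai, bi in zip(a, b)])
    f0 = F(a)
    return [(x - y) / eps for x, y in zip(fa, f0)]

# A hypothetical vector-valued F, illustrating that the definition is
# not restricted to scalar-valued functions.
F = lambda v: [v[0] ** 2, v[0] * v[1]]

d = directional_derivative(F, [2.0, 3.0], [1.0, 0.0])
# Analytically, the derivative of (x^2, xy) along (1, 0) at (2, 3) is (4, 3).
assert abs(d[0] - 4.0) < 1e-4 and abs(d[1] - 3.0) < 1e-4
```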

Next, choose a set of basis vectors $$\{e_i\}$$ and consider the operators, denoted $$(\partial_i)$$, that perform directional derivatives in the directions of $$(e_i)$$:
 * $$\partial_i : F \mapsto (x\mapsto \nabla_{e_i} F(x))$$

Then, using the Einstein summation convention, consider the operator:
 * $$e^i\partial_i$$

which means:
 * $$F \mapsto e^i\partial_i F$$

or, more verbosely:
 * $$F \mapsto (x\mapsto e^i\nabla_{e_i} F(x))$$

It can be shown that this operator is independent of the choice of frame, and can thus be used to define the geometric derivative:
 * $$\nabla = e^i\partial_i$$

This is similar to the usual definition of the gradient, but it, too, extends to functions that are not necessarily scalar-valued.

It can be shown that the directional derivative is linear regarding its direction, that is:
 * $$\nabla_{\alpha a + \beta b} = \alpha\nabla_a + \beta\nabla_b$$

From this it follows that the directional derivative is the inner product of its direction with the geometric derivative. All that needs to be observed is that the direction $$a$$ can be written $$a = (a\cdot e^i) e_i$$, so that:
 * $$\nabla_a = \nabla_{(a\cdot e^i)e_i} = (a\cdot e^i)\nabla_{e_i} = a\cdot(e^i\nabla_{e_i}) = a\cdot \nabla$$

For this reason, $$\nabla_a F(x)$$ is often written $$a\cdot \nabla F(x)$$.

The standard order of operations for the geometric derivative is that it acts only on the function closest to its immediate right. Given two functions F and G, then for example we have


 * $$\nabla FG = (\nabla F)G.$$

Although the partial derivative exhibits a product rule, the geometric derivative only partially inherits this property. Consider two functions F and G:


 * $$\begin{align}\nabla(FG) &= e^i\partial_i(FG) \\ &= e^i((\partial_iF)G+F(\partial_iG)) \\ &= e^i(\partial_iF)G+e^iF(\partial_iG) \end{align}$$

Since the geometric product is not commutative, with $$e^iF \ne Fe^i$$ in general, we cannot proceed further without new notation. A solution is to adopt the overdot notation, in which the scope of a geometric derivative with an overdot is the multivector-valued function sharing the same overdot. In this case, if we define


 * $$\dot{\nabla}F\dot{G}=e^iF(\partial_iG),$$

then the product rule for the geometric derivative is


 * $$\nabla(FG) = \nabla FG+\dot{\nabla}F\dot{G}$$

Let F be an r-grade multivector. Then we can define an additional pair of operators, the interior and exterior derivatives,


 * $$\nabla \cdot F = \langle \nabla F \rangle_{r-1} = e^i \cdot \partial_i F$$


 * $$\nabla \wedge F = \langle \nabla F \rangle_{r+1} = e^i \wedge \partial_i F.$$

In particular, if F is grade 1 (vector-valued function), then we can write


 * $$\nabla F = \nabla \cdot F + \nabla \wedge F$$

and identify the divergence and curl as


 * $$\nabla \cdot F = \operatorname{div} F$$


 * $$\nabla \wedge F = I \, \operatorname{curl} F.$$

Note, however, that these two operators are considerably weaker than the geometric derivative counterpart for several reasons. Neither the interior derivative operator nor the exterior derivative operator is invertible.

The reason for defining the geometric derivative and integral as above is that they allow a strong generalization of Stokes' theorem. Let $$\mathsf{L}(A;x)$$ be a multivector-valued function of r-grade input A and general position x, linear in its first argument. Then the fundamental theorem of geometric calculus relates the integral of a derivative over the volume V to the integral over its boundary:

$$\int_V \dot{\mathsf{L}} \left(\dot{\nabla} dX;x \right) = \oint_{\partial V} \mathsf{L} (dS;x)$$

As an example, let $$\mathsf{L}(A;x)=\langle F(x) A I^{-1} \rangle$$ for a vector-valued function F(x) and a (n-1)-grade multivector A. We find that


 * $$\begin{align}\int_V \dot{\mathsf{L}} \left(\dot{\nabla} dX;x \right) &= \int_V \langle\dot{F}(x)\dot{\nabla} dX I^{-1} \rangle \\ &= \int_V \langle\dot{F}(x)\dot{\nabla} |dX| \rangle \\ &= \int_V \nabla \cdot F(x) |dX|. \end{align}$$

and likewise


 * $$\begin{align}\oint_{\partial V} \mathsf{L} (dS;x) &= \oint_{\partial V} \langle F(x) dS I^{-1} \rangle \\ &= \oint_{\partial V} \langle F(x) \hat{n} |dS| \rangle \\ &= \oint_{\partial V} F(x) \cdot \hat{n} |dS| \end{align}$$

Thus we recover the divergence theorem,


 * $$\int_V \nabla \cdot F(x) |dX| = \oint_{\partial V} F(x) \cdot \hat{n} |dS|.$$


Calculus of variations


Whereas calculus is concerned with infinitesimal changes of variables, calculus of variations is concerned with infinitesimal changes of the underlying function itself.

Calculus of variations is a field of mathematical analysis that uses variations, which are small changes in functions and functionals, to find maxima and minima of functionals.

A simple example of such a problem is to find the curve of shortest length connecting two points. If there are no constraints, the solution is obviously a straight line between the points. However, if the curve is constrained to lie on a surface in space, then the solution is less obvious, and possibly many solutions may exist. Such solutions are known as geodesics. A related problem is posed by Fermat's principle: light follows the path of shortest optical length connecting two points, where the optical length depends upon the material of the medium. One corresponding concept in mechanics is the principle of least action.

Given a functional that you want to minimize,


 * $$\int_a^b F(x,y(x),y'(x)) \, \mathrm{d}x$$

you then solve the corresponding Euler–Lagrange equation for $y(x)$:


 * $$\frac{\partial F}{\partial y} - \frac{d}{dx} \frac{\partial F}{\partial y'} = 0$$
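For example, for the shortest curve between two points one minimizes arc length with $$F(x,y,y') = \sqrt{1 + y'^2}.$$ Since $F$ does not depend on $y$, the first term vanishes and the Euler–Lagrange equation reduces to


 * $$\frac{d}{dx} \frac{\partial F}{\partial y'} = \frac{d}{dx}\, \frac{y'}{\sqrt{1 + y'^2}} = 0,$$

so $y'$ is constant and the extremal is a straight line, in agreement with the unconstrained case mentioned above.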
