Exterior algebra

In mathematics, the exterior product or wedge product of vectors is an algebraic construction used in geometry to study areas, volumes, and their higher-dimensional analogues. The exterior product of two vectors u and v, denoted by u ∧ v, is called a bivector and lives in a space called the exterior square, a vector space that is distinct from the original space of vectors. The magnitude of u ∧ v can be interpreted as the area of the parallelogram with sides u and v, which in three dimensions can also be computed using the cross product of the two vectors. Like the cross product, the exterior product is anticommutative, meaning that u ∧ v = −(v ∧ u) for all vectors u and v, but, unlike the cross product, the exterior product is associative. One way to visualize a bivector is as a family of parallelograms all lying in the same plane, having the same area, and with the same orientation, that is, a choice of clockwise or counterclockwise.

When regarded in this manner, the exterior product of two vectors is called a 2-blade. More generally, the exterior product of any number k of vectors can be defined and is sometimes called a k-blade. It lives in a space known as the kth exterior power. The magnitude of the resulting k-blade is the volume of the k-dimensional parallelotope whose edges are the given vectors, just as the magnitude of the scalar triple product of vectors in three dimensions gives the volume of the parallelepiped generated by those vectors.

The exterior algebra, or Grassmann algebra after Hermann Grassmann, is the algebraic system whose product is the exterior product. The exterior algebra provides an algebraic setting in which to answer geometric questions. For instance, blades have a concrete geometric interpretation, and objects in the exterior algebra can be manipulated according to a set of unambiguous rules. The exterior algebra contains objects that are not only k-blades, but sums of k-blades; such a sum is called a k-vector. The k-blades, because they are simple products of vectors, are called the simple elements of the algebra. The rank of any k-vector is defined to be the smallest number of simple elements of which it is a sum. The exterior product extends to the full exterior algebra, so that it makes sense to multiply any two elements of the algebra. Equipped with this product, the exterior algebra is an associative algebra, which means that α ∧ (β ∧ γ) = (α ∧ β) ∧ γ for any elements α, β, γ. The k-vectors have degree k, meaning that they are sums of products of k vectors. When elements of different degrees are multiplied, the degrees add like multiplication of polynomials. This means that the exterior algebra is a graded algebra.

The definition of the exterior algebra makes sense for spaces not just of geometric vectors, but of other vector-like objects such as vector fields or functions. In full generality, the exterior algebra can be defined for modules over a commutative ring, and for other structures of interest in abstract algebra. It is one of these more general constructions where the exterior algebra finds one of its most important applications, where it appears as the algebra of differential forms that is fundamental in areas that use differential geometry. The exterior algebra also has many algebraic properties that make it a convenient tool in algebra itself. The association of the exterior algebra to a vector space is a type of functor on vector spaces, which means that it is compatible in a certain way with linear transformations of vector spaces. The exterior algebra is one example of a bialgebra, meaning that its dual space also possesses a product, and this dual product is compatible with the exterior product. This dual algebra is precisely the algebra of alternating multilinear forms, and the pairing between the exterior algebra and its dual is given by the interior product.

Areas in the plane
The Cartesian plane R2 is a vector space equipped with a basis consisting of a pair of unit vectors


 * $${\mathbf e}_1 = \begin{bmatrix}1\\0\end{bmatrix},\quad {\mathbf e}_2 = \begin{bmatrix}0\\1\end{bmatrix}.$$

Suppose that


 * $${\mathbf v} = \begin{bmatrix}a\\b\end{bmatrix} = a {\mathbf e}_1 + b {\mathbf e}_2, \quad {\mathbf w} = \begin{bmatrix}c\\d\end{bmatrix} = c {\mathbf e}_1 + d {\mathbf e}_2$$

are a pair of given vectors in R2, written in components. There is a unique parallelogram having v and w as two of its sides. The area of this parallelogram is given by the standard formula:


 * $$ \text{Area} = \left| \det \begin{bmatrix} {\mathbf v} & {\mathbf w} \end{bmatrix} \right| = \left| \det \begin{bmatrix} a & c \\ b & d \end{bmatrix} \right| = \left| ad - bc \right| .$$

Consider now the exterior product of v and w:



 * $$\begin{align} {\mathbf v}\wedge {\mathbf w} & = (a{\mathbf e}_1 + b{\mathbf e}_2) \wedge (c{\mathbf e}_1 + d{\mathbf e}_2) \\ & = ac{\mathbf e}_1 \wedge {\mathbf e}_1+ ad{\mathbf e}_1 \wedge {\mathbf e}_2+bc{\mathbf e}_2 \wedge {\mathbf e}_1+bd{\mathbf e}_2 \wedge {\mathbf e}_2 \\ & = \left ( ad - bc \right ){\mathbf e}_1 \wedge {\mathbf e}_2 \end{align}$$

where the first step uses the distributive law for the exterior product, and the last uses the fact that the exterior product is alternating, and in particular e2 ∧ e1 = −(e1 ∧ e2). (The fact that the exterior product is alternating also forces $$ \mathbf e_1 \wedge \mathbf e_1 = \mathbf e_2 \wedge \mathbf e_2 = 0 $$.) Note that the coefficient in this last expression is precisely the determinant of the matrix [v w]. The fact that this may be positive or negative has the intuitive meaning that v and w may be oriented in a counterclockwise or clockwise sense as the vertices of the parallelogram they define. Such an area is called the signed area of the parallelogram: the absolute value of the signed area is the ordinary area, and the sign determines its orientation.
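As a quick numerical illustration (a minimal sketch with arbitrarily chosen vectors, using NumPy), the wedge coefficient ad − bc agrees with the determinant of [v w], and swapping the factors flips the sign:

```python
import numpy as np

# Signed area of the parallelogram with sides v and w in R^2.
# The wedge coefficient (ad - bc) of e1 ^ e2 equals det [v w].
v = np.array([3.0, 1.0])   # a = 3, b = 1
w = np.array([1.0, 2.0])   # c = 1, d = 2

wedge_coeff = v[0] * w[1] - v[1] * w[0]           # ad - bc
det = np.linalg.det(np.column_stack([v, w]))      # det [v w]

assert np.isclose(wedge_coeff, det)
assert np.isclose(abs(wedge_coeff), 5.0)          # ordinary (unsigned) area
# Swapping the sides reverses the orientation, hence the sign:
assert np.isclose(w[0] * v[1] - w[1] * v[0], -wedge_coeff)
```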

The fact that this coefficient is the signed area is not an accident. In fact, it is relatively easy to see that the exterior product should be related to the signed area if one tries to axiomatize this area as an algebraic construct. In detail, if A(v, w) denotes the signed area of the parallelogram of which the pair of vectors v and w form two adjacent sides, then A must satisfy the following properties:
 * 1) A(jv, kw) = jkA(v, w) for any real numbers j and k, since rescaling either of the sides rescales the area by the same amount (and reversing the direction of one of the sides reverses the orientation of the parallelogram).
 * 2) A(v, v) = 0, since the area of the degenerate parallelogram determined by v (i.e., a line segment) is zero.
 * 3) A(w, v) = −A(v, w), since interchanging the roles of v and w reverses the orientation of the parallelogram.
 * 4) A(v + jw, w) = A(v, w), for real j, since adding a multiple of w to v affects neither the base nor the height of the parallelogram and consequently preserves its area.
 * 5) A(e1, e2) = 1, since the area of the unit square is one.

With the exception of the last property, the exterior product of two vectors satisfies the same properties as the area. In a certain sense, the exterior product generalizes the final property by allowing the area of a parallelogram to be compared to that of any "standard" chosen parallelogram in a parallel plane (here, the one with sides e1 and e2). In other words, the exterior product provides a basis-independent formulation of area.
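The properties above can be checked numerically when A is taken to be the 2 × 2 determinant; a minimal sketch with arbitrarily chosen vectors and scalars:

```python
import numpy as np

def A(v, w):
    """Signed area of the parallelogram with sides v, w: a 2x2 determinant."""
    return np.linalg.det(np.column_stack([v, w]))

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
v, w = np.array([2.0, 1.0]), np.array([-1.0, 3.0])
j, k = 2.5, -4.0

assert np.isclose(A(j * v, k * w), j * k * A(v, w))   # 1) scaling
assert np.isclose(A(v, v), 0.0)                       # 2) degenerate parallelogram
assert np.isclose(A(w, v), -A(v, w))                  # 3) orientation reversal
assert np.isclose(A(v + j * w, w), A(v, w))           # 4) shear invariance
assert np.isclose(A(e1, e2), 1.0)                     # 5) unit square
```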

Cross and triple products
For vectors in R3, or any other 3-dimensional real vector space with a fixed orientation, the exterior algebra is closely related to the cross product and triple product. Using the standard basis {e1, e2, e3}, the exterior product of a pair of vectors


 * $$ \mathbf{u} = u_1 \mathbf{e}_1 + u_2 \mathbf{e}_2 + u_3 \mathbf{e}_3 $$

and


 * $$ \mathbf{v} = v_1 \mathbf{e}_1 + v_2 \mathbf{e}_2 + v_3 \mathbf{e}_3 $$

is


 * $$ \mathbf{u} \wedge \mathbf{v} = (u_1 v_2 - u_2 v_1) (\mathbf{e}_1 \wedge \mathbf{e}_2) + (u_3 v_1 - u_1 v_3) (\mathbf{e}_3 \wedge \mathbf{e}_1) + (u_2 v_3 - u_3 v_2) (\mathbf{e}_2 \wedge \mathbf{e}_3) $$

where {e1 ∧ e2, e3 ∧ e1, e2 ∧ e3} is a basis for the three-dimensional space Λ2(R3). The coefficients above are the same as those in the usual definition of the cross product of vectors in three dimensions with a fixed orientation, the only differences being that the exterior product is not an ordinary vector, but instead is a 2-vector, and that the exterior product does not depend on the choice of orientation.
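The match between the wedge coefficients and the cross-product components can be verified directly; a minimal sketch with arbitrarily chosen vectors:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# Coefficients of u ^ v in the basis {e1^e2, e3^e1, e2^e3}:
c12 = u[0] * v[1] - u[1] * v[0]   # coefficient of e1 ^ e2
c31 = u[2] * v[0] - u[0] * v[2]   # coefficient of e3 ^ e1
c23 = u[1] * v[2] - u[2] * v[1]   # coefficient of e2 ^ e3

# They agree with the components of the cross product: u x v = (c23, c31, c12).
assert np.allclose(np.cross(u, v), [c23, c31, c12])
```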

Bringing in a third vector


 * $$ \mathbf{w} = w_1 \mathbf{e}_1 + w_2 \mathbf{e}_2 + w_3 \mathbf{e}_3, $$

the exterior product of three vectors is


 * $$ \mathbf{u} \wedge \mathbf{v} \wedge \mathbf{w} = (u_1 v_2 w_3 + u_2 v_3 w_1 + u_3 v_1 w_2 - u_1 v_3 w_2 - u_2 v_1 w_3 - u_3 v_2 w_1) (\mathbf{e}_1 \wedge \mathbf{e}_2 \wedge \mathbf{e}_3) $$

where e1 ∧ e2 ∧ e3 is the basis vector for the one-dimensional space Λ3(R3). The scalar coefficient is the triple product of the three vectors.
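The scalar coefficient above can be compared against the determinant and the triple product u · (v × w); a minimal sketch with arbitrarily chosen vectors:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([0.0, 1.0, 4.0])
w = np.array([5.0, 6.0, 0.0])

# Coefficient of e1 ^ e2 ^ e3 in u ^ v ^ w, expanded as in the text:
coeff = (u[0]*v[1]*w[2] + u[1]*v[2]*w[0] + u[2]*v[0]*w[1]
         - u[0]*v[2]*w[1] - u[1]*v[0]*w[2] - u[2]*v[1]*w[0])

# It equals det [u v w], i.e. the scalar triple product:
assert np.isclose(coeff, np.linalg.det(np.column_stack([u, v, w])))
assert np.isclose(coeff, np.dot(u, np.cross(v, w)))
```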

The cross product and triple product in three dimensions each admit both geometric and algebraic interpretations. The cross product u × v can be interpreted as a vector which is perpendicular to both u and v and whose magnitude is equal to the area of the parallelogram determined by the two vectors. It can also be interpreted as the vector consisting of the minors of the matrix with columns u and v. The triple product of u, v, and w is geometrically a (signed) volume. Algebraically, it is the determinant of the matrix with columns u, v, and w. The exterior product in three dimensions allows for similar interpretations: it, too, can be identified with oriented areas or volumes spanned by the two or three vectors involved. In fact, in the presence of a positively oriented orthonormal basis, the exterior product generalizes these geometric notions to higher dimensions.

Formal definitions and algebraic properties
The exterior algebra $Λ(V)$ of a vector space $V$ over a field $K$ is defined as the quotient algebra of the tensor algebra $T(V)$ by the two-sided ideal $I$ generated by all elements of the form $x ⊗ x$ for $x ∈ V$ (i.e. all tensors that can be expressed as the tensor product of a vector in $V$ by itself). The ideal $I$ contains the ideal $J$ generated by elements of the form $$x\otimes y + y \otimes x = (x+y)\otimes (x+y) - x\otimes x - y\otimes y $$ and these ideals coincide if (and only if) $$\operatorname{char}(K) \ne 2$$:
 * $$I \supseteq J \text{ with equality iff } \operatorname{char}(K) \ne 2$$.

We define
 * $$\bigwedge(V) = T(V)/I $$

The exterior product $∧$ of two elements of $Λ(V)$ is the product induced by the tensor product $⊗$ of $T(V)$. That is, if
 * $$\pi:T(V)\to \bigwedge(V) = T(V)/I$$

is the canonical surjection, and $a$ and $b$ are in $Λ(V)$, then there are $$\alpha$$ and $$\beta$$ in $T(V)$ such that $$a=\pi(\alpha)$$ and $$b=\pi(\beta),$$ and
 * $$a\wedge b = \pi(\alpha\otimes\beta).$$

It results from the definition of a quotient algebra that the value of $$a\wedge b$$ does not depend on a particular choice of $$\alpha$$ and $$\beta$$. We have (in all characteristics) $$ x \otimes y = - y \otimes x \bmod I$$.

As $T^0(V) = K$, $T^1(V) = V$, and $$\left(T^0(V)\oplus T^1(V)\right)\cap I= \{ 0 \} $$, the inclusions of $K$ and $V$ in $T(V)$ induce injections of $K$ and $V$ into $Λ(V)$. These injections are commonly considered as inclusions, and called natural embeddings, natural injections or natural inclusions. The word canonical is also commonly used in place of natural.

Alternating product
The exterior product is by construction alternating on elements of V, which means that x ∧ x = 0 for all x ∈ V. It follows that the product is also anticommutative on elements of V, for supposing that x, y ∈ V,


 * $$0 = (x+y)\wedge (x+y) = x\wedge x + x\wedge y + y\wedge x + y\wedge y = x\wedge y + y\wedge x$$

hence


 * $$ x \wedge y = - ( y \wedge x ) .$$

More generally, if σ is a permutation of the integers [1, ..., k], and x1, x2, ..., xk are elements of V, it follows that


 * $$x_{\sigma(1)}\wedge x_{\sigma(2)}\wedge\cdots\wedge x_{\sigma(k)} = \operatorname{sgn}(\sigma)x_1\wedge x_2\wedge\cdots \wedge x_k,$$

where sgn(σ) is the signature of the permutation σ.

In particular, if xi = xj for some i ≠ j, then the following generalization of the alternating property also holds:
 * $$x_{1}\wedge x_{2}\wedge\cdots\wedge x_{k} = 0.$$
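The sign rules above can be made concrete for wedge monomials of basis vectors: sorting the indices by adjacent swaps contributes one factor of −1 per swap, and a repeated index kills the monomial. A minimal illustrative sketch; `normalize` is a helper name chosen here:

```python
def normalize(indices):
    """Return (sign, sorted indices) of e_{i1} ^ ... ^ e_{ik}; sign 0 if it vanishes."""
    idx = list(indices)
    sign = 1
    for i in range(len(idx)):                 # bubble sort, tracking parity
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    if len(set(idx)) < len(idx):              # repeated factor: the product is 0
        return 0, tuple(idx)
    return sign, tuple(idx)

assert normalize((1, 2)) == (1, (1, 2))
assert normalize((2, 1)) == (-1, (1, 2))       # e2 ^ e1 = -(e1 ^ e2)
assert normalize((3, 1, 2)) == (1, (1, 2, 3))  # a cyclic permutation is even
assert normalize((1, 1))[0] == 0               # e1 ^ e1 = 0
```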

Exterior power
The kth exterior power of V, denoted Λk(V), is the vector subspace of Λ(V) spanned by elements of the form
 * $$x_1\wedge x_2\wedge\cdots\wedge x_k,\quad x_i\in V, i=1,2,\ldots, k.$$

If α ∈ Λk(V), then α is said to be a k-vector. If, furthermore, α can be expressed as an exterior product of k elements of V, then α is said to be decomposable. Although decomposable k-vectors span Λk(V), not every element of Λk(V) is decomposable. For example, in R4, the following 2-vector is not decomposable:
 * $$\alpha = e_1\wedge e_2 + e_3\wedge e_4.$$

(This is a symplectic form, since α ∧ α ≠ 0.)

Basis and dimension
If the dimension of V is n and {e1, ..., en} is a basis of V, then the set
 * $$\{e_{i_1}\wedge e_{i_2}\wedge\cdots\wedge e_{i_k} \mid 1\le i_1 < i_2 < \cdots < i_k \le n\}$$

is a basis for Λk(V). The reason is the following: given any exterior product of the form
 * $$v_1\wedge\cdots\wedge v_k ,$$

every vector vj can be written as a linear combination of the basis vectors ei; using the bilinearity of the exterior product, this can be expanded to a linear combination of exterior products of those basis vectors. Any exterior product in which the same basis vector appears more than once is zero; any exterior product in which the basis vectors do not appear in the proper order can be reordered, changing the sign whenever two basis vectors change places. In general, the resulting coefficients of the basis k-vectors can be computed as the k × k minors of the matrix that describes the vectors vj in terms of the basis ei.
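The expansion just described can be verified numerically: expand v1 ∧ ⋯ ∧ vk multilinearly over the basis with sign bookkeeping, and compare each resulting coefficient to the corresponding k × k minor. A minimal NumPy sketch with randomly chosen vectors; `perm_sign` is a helper defined here:

```python
import itertools
import numpy as np

def perm_sign(seq):
    """Sign of the permutation that sorts seq; 0 if seq has repeats."""
    if len(set(seq)) < len(seq):
        return 0
    seq, s = list(seq), 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                s = -s
    return s

n, k = 4, 2
rng = np.random.default_rng(0)
M = rng.integers(-3, 4, size=(n, k)).astype(float)  # columns are v1, ..., vk

# Multilinear expansion: each vj contributes one basis index; reorder with signs.
coeffs = {c: 0.0 for c in itertools.combinations(range(n), k)}
for idx in itertools.product(range(n), repeat=k):
    s = perm_sign(idx)
    if s:
        coeffs[tuple(sorted(idx))] += s * np.prod([M[idx[c], c] for c in range(k)])

# Each coefficient equals the k x k minor of M on those rows.
for rows, c in coeffs.items():
    assert np.isclose(c, np.linalg.det(M[list(rows), :]))
```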

By counting the basis elements, the dimension of Λk(V) is equal to a binomial coefficient:
 * $$\dim {\bigwedge}^k(V) = \binom{n}{k} .$$

In particular, Λk(V) = {0} for k > n.

Any element of the exterior algebra can be written as a sum of k-vectors. Hence, as a vector space the exterior algebra is a direct sum
 * $$\bigwedge(V) = {\bigwedge}^0(V)\oplus {\bigwedge}^1(V) \oplus {\bigwedge}^2(V) \oplus \cdots \oplus {\bigwedge}^n(V)$$

(where by convention Λ0(V) = K and Λ1(V) = V), and therefore its dimension is equal to the sum of the binomial coefficients, which is 2n.
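The dimension count is easy to confirm by enumerating basis k-vectors as strictly increasing index tuples; a short sketch for n = 5:

```python
import itertools
from math import comb

n = 5
basis = list(range(n))
for k in range(n + 1):
    # Basis k-vectors correspond to index tuples i1 < i2 < ... < ik.
    assert sum(1 for _ in itertools.combinations(basis, k)) == comb(n, k)

# The total dimension of the exterior algebra is the sum of binomials, 2^n.
assert sum(comb(n, k) for k in range(n + 1)) == 2 ** n
```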

Rank of a k-vector
If α ∈ Λk(V), then it is possible to express α as a linear combination of decomposable k-vectors:


 * $$ \alpha = \alpha^{(1)} + \alpha^{(2)} + \cdots + \alpha^{(s)}$$

where each α(i) is decomposable, say


 * $$\alpha^{(i)} = \alpha^{(i)}_1\wedge\cdots\wedge\alpha^{(i)}_k,\quad i=1,2,\ldots, s.$$

The rank of the k-vector α is the minimal number of decomposable k-vectors in such an expansion of α. This is similar to the notion of tensor rank.

Rank is particularly important in the study of 2-vectors. The rank of a 2-vector α can be identified with half the rank of the matrix of coefficients of α in a basis. Thus if ei is a basis for V, then α can be expressed uniquely as


 * $$\alpha = \sum_{i,j}a_{ij}e_i\wedge e_j$$

where aij = −aji (the matrix of coefficients is skew-symmetric). The rank of the matrix (aij) is therefore even, and is twice the rank of the form α.

In characteristic 0, the 2-vector α has rank p if and only if


 * $$\underset{p}{\underbrace{\alpha\wedge\cdots\wedge\alpha}}\not= 0$$

and


 * $$\underset{p+1}{\underbrace{\alpha\wedge\cdots\wedge\alpha}} = 0.$$
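For the non-decomposable example α = e1 ∧ e2 + e3 ∧ e4 above, the rank can be read off from the skew-symmetric coefficient matrix; a minimal sketch (the matrix convention here records the coefficient of ei ∧ ej for i < j, which does not affect the rank):

```python
import numpy as np

# alpha = e1^e2 + e3^e4 in R^4, as a skew-symmetric coefficient matrix.
a = np.zeros((4, 4))
a[0, 1], a[2, 3] = 1.0, 1.0
a -= a.T                      # now a[1,0] = -1, a[3,2] = -1: skew-symmetric

matrix_rank = np.linalg.matrix_rank(a)
assert matrix_rank == 4        # even, as expected for a skew-symmetric matrix
assert matrix_rank // 2 == 2   # alpha has rank 2, so it is not decomposable
```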

Graded structure
The exterior product of a k-vector with a p-vector is a (k + p)-vector, once again invoking bilinearity. As a consequence, the direct sum decomposition of the preceding section


 * $$\bigwedge(V) = {\bigwedge}^0(V)\oplus {\bigwedge}^1(V) \oplus {\bigwedge}^2(V) \oplus \cdots \oplus {\bigwedge}^n(V)$$

gives the exterior algebra the additional structure of a graded algebra, that is


 * $${\bigwedge}^k(V)\wedge{\bigwedge}^p(V)\subseteq {\bigwedge}^{k+p}(V).$$

Moreover, if $K$ is the basis field, we have
 * $${\bigwedge}^0(V)=K$$

and
 * $${\bigwedge}^1(V)=V.$$

The exterior product is graded anticommutative, meaning that if α ∈ Λk(V) and β ∈ Λp(V), then


 * $$\alpha\wedge\beta = (-1)^{kp}\beta\wedge\alpha.$$

In addition to studying the graded structure on the exterior algebra, homological algebra studies additional graded structures on exterior algebras, such as those on the exterior algebra of a graded module (a module that already carries its own gradation).

Universal property
Let $V$ be a vector space over the field $K$. Informally, multiplication in $Λ(V)$ is performed by manipulating symbols and imposing a distributive law, an associative law, and using the identity $v ∧ v = 0$ for $v ∈ V$. Formally, $Λ(V)$ is the "most general" algebra in which these rules hold for the multiplication, in the sense that any unital associative $K$-algebra containing $V$ with alternating multiplication on $V$ must contain a homomorphic image of $Λ(V)$. In other words, the exterior algebra has the following universal property:

Given any unital associative $K$-algebra $A$ and any $K$-linear map $j : V → A$ such that $j(v)j(v) = 0$ for every $v$ in $V$, then there exists precisely one unital algebra homomorphism $f : Λ(V) → A$ such that $j(v) = f(i(v))$ for all $v$ in $V$ (here $i$ is the natural inclusion of $V$ in $Λ(V)$, see above).

To construct the most general algebra that contains $V$ and whose multiplication is alternating on $V$, it is natural to start with the most general associative algebra that contains $V$, the tensor algebra $T(V)$, and then enforce the alternating property by taking a suitable quotient. We thus take the two-sided ideal $I$ in $T(V)$ generated by all elements of the form $v ⊗ v$ for $v$ in $V$, and define $Λ(V)$ as the quotient


 * $$\bigwedge(V) = T(V)/I\ $$

(and use $∧$ as the symbol for multiplication in $Λ(V))$. It is then straightforward to show that $Λ(V)$ contains $V$ and satisfies the above universal property.

As a consequence of this construction, the operation of assigning to a vector space $V$ its exterior algebra $Λ(V)$ is a functor from the category of vector spaces to the category of algebras.

Rather than defining $Λ(V)$ first and then identifying the exterior powers $Λ^{k}(V)$ as certain subspaces, one may alternatively define the spaces $Λ^{k}(V)$ first and then combine them to form the algebra $Λ(V)$. This approach is often used in differential geometry and is described in the next section.

Generalizations
Given a commutative ring R and an R-module M, we can define the exterior algebra Λ(M) just as above, as a suitable quotient of the tensor algebra T(M). It will satisfy the analogous universal property. Many of the properties of Λ(M) also require that M be a free module. Where finite dimensionality is used, the properties further require that M be finitely generated and projective. Generalizations to the most common situations can be found in the standard references.

Exterior algebras of vector bundles are frequently considered in geometry and topology. There are no essential differences between the algebraic properties of the exterior algebra of finite-dimensional vector bundles and those of the exterior algebra of finitely generated projective modules, by the Serre–Swan theorem. More general exterior algebras can be defined for sheaves of modules.

Alternating operators
Given two vector spaces V and X, an alternating operator from Vk to X is a multilinear map


 * $$ f\colon V^k \to X $$

such that whenever v1, ..., vk are linearly dependent vectors in V, then


 * $$ f(v_1,\ldots, v_k)=0$$

The map


 * $$ w\colon V^k \to {\bigwedge}^k(V) $$

which associates to k vectors from V their exterior product, i.e. their corresponding k-vector, is also alternating. In fact, this map is the "most general" alternating operator defined on Vk: given any other alternating operator f : Vk → X, there exists a unique linear map φ : Λk(V) → X with f = φ ∘ w. This universal property characterizes the space Λk(V) and can serve as its definition.

Alternating multilinear forms
The above discussion specializes to the case when X = K, the base field. In this case an alternating multilinear function
 * $$f : V^k \to K\ $$

is called an alternating multilinear form. The set of all alternating multilinear forms is a vector space, as the sum of two such maps, or the product of such a map with a scalar, is again alternating. By the universal property of the exterior power, the space of alternating forms of degree k on V is naturally isomorphic with the dual vector space (ΛkV)∗. If V is finite-dimensional, then the latter is naturally isomorphic to Λk(V∗). In particular, the dimension of the space of alternating maps from Vk to K is the binomial coefficient $\binom{n}{k}$.

Under this identification, the exterior product takes a concrete form: it produces a new anti-symmetric map from two given ones. Suppose ω : Vk → K and η : Vm → K are two anti-symmetric maps. As in the case of tensor products of multilinear maps, the number of variables of their exterior product is the sum of the numbers of their variables. It is defined as follows:


 * $$\omega\wedge\eta=\frac{(k+m)!}{k!\,m!}\operatorname{Alt}(\omega\otimes\eta),$$

where the alternation Alt of a multilinear map is defined to be the average of the sign-adjusted values over all the permutations of its variables:


 * $$\operatorname{Alt}(\omega)(x_1,\ldots,x_k)=\frac{1}{k!}\sum_{\sigma\in S_k}\operatorname{sgn}(\sigma)\,\omega(x_{\sigma(1)},\ldots,x_{\sigma(k)}).$$

This definition of the exterior product is well-defined even if the field K has positive characteristic, if one considers an equivalent version of the above that does not use factorials or any constants:


 * $${\omega \wedge \eta(x_1,\ldots,x_{k+m})} = \sum_{\sigma \in Sh_{k,m}} \operatorname{sgn}(\sigma)\,\omega(x_{\sigma(1)}, \ldots, x_{\sigma(k)}) \eta(x_{\sigma(k+1)}, \ldots, x_{\sigma(k+m)}),$$

where here Shk,m ⊂ Sk+m is the subset of (k, m) shuffles: permutations σ of the set {1, 2, ..., k + m} such that σ(1) < σ(2) < ... < σ(k), and σ(k + 1) < σ(k + 2) < ... < σ(k + m).
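The two formulas (the Alt version with its factorial prefactor, and the shuffle version) can be checked against each other; a minimal sketch for a 1-form ω and an alternating 2-form η on R3, with arbitrarily chosen forms and arguments:

```python
import itertools

def sgn(p):
    """Sign of a permutation given as a tuple of indices."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

# Illustrative forms on R^3 (k = 1, m = 2); eta is alternating by construction.
omega = lambda x: x[0] + 2 * x[1] - x[2]
eta = lambda x, y: x[0] * y[1] - x[1] * y[0]

def wedge_alt(x1, x2, x3):
    """((k+m)!/(k! m!)) Alt(omega (x) eta): full sum over S_3, divided by k! m!."""
    args = [x1, x2, x3]
    return sum(sgn(p) * omega(args[p[0]]) * eta(args[p[1]], args[p[2]])
               for p in itertools.permutations(range(3))) / (1 * 2)

def wedge_shuffle(x1, x2, x3):
    """Shuffle formula: only permutations increasing within each block."""
    args = [x1, x2, x3]
    return sum(sgn(p) * omega(args[p[0]]) * eta(args[p[1]], args[p[2]])
               for p in itertools.permutations(range(3)) if p[1] < p[2])

x1, x2, x3 = [1, 0, 2], [0, 3, 1], [2, 1, 1]
assert abs(wedge_alt(x1, x2, x3) - wedge_shuffle(x1, x2, x3)) < 1e-12
```

Since η is alternating, the two internal orderings of its block contribute equally after the sign adjustment, which is exactly what the division by k! m! compensates for.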

Bialgebra structure
There is a correspondence between the graded dual of the graded algebra Λ(V) and alternating multilinear forms on V. The exterior algebra (as well as the symmetric algebra) inherits a bialgebra structure, and, indeed, a Hopf algebra structure, from the tensor algebra. See the article on tensor algebras for a detailed treatment of the topic.

The exterior product of multilinear forms defined above is dual to a coproduct defined on Λ(V), giving the structure of a coalgebra. The coproduct is a linear function Δ : Λ(V) → Λ(V) ⊗ Λ(V) which is given by
 * $$\Delta(v) = 1 \otimes v + v \otimes 1$$

on elements v ∈ V. The symbol 1 stands for the unit element of the field K. Recall that K ⊂ Λ(V), so that the above really does lie in Λ(V) ⊗ Λ(V). This definition of the coproduct is extended to the full space Λ(V) by (linear) homomorphism. That is, for v, w ∈ V, one has, by definition, the homomorphism
 * $$\Delta(v\wedge w)= \Delta(v) \wedge \Delta(w)$$

The correct form of this homomorphism is not what one might naively write, but has to be the one carefully defined in the article on coalgebras. In this case, one obtains

 * $$\Delta(v \wedge w) = 1 \otimes (v \wedge w) + v \otimes w - w \otimes v + (v \wedge w) \otimes 1.$$

Extending to the full space Λ(V), one has, in general,
 * $$\Delta(x_1\wedge\cdots\wedge x_k) = \Delta(x_1) \wedge\cdots \wedge \Delta(x_k)$$

Expanding this out in detail, one obtains the following expression on decomposable elements:
 * $$\Delta(x_1\wedge\cdots\wedge x_k) = \sum_{p=0}^k \; \sum_{\sigma\in Sh(p+1,k-p)} \; \operatorname{sgn}(\sigma) (x_{\sigma(0)}\wedge\cdots\wedge x_{\sigma(p)})\otimes (x_{\sigma(p+1)}\wedge\cdots\wedge x_{\sigma(k)}).$$

where the second summation is taken over all (p + 1, k − p)-shuffles. The above is written with a notational trick, to keep track of the field element 1: the trick is to write $$x_0=1$$, and this is shuffled into various locations during the expansion of the sum over shuffles. The shuffle follows directly from the first axiom of a coalgebra: the relative order of the elements $$x_k$$ is preserved in the riffle shuffle: the riffle shuffle merely splits the ordered sequence into two ordered sequences, one on the left, and one on the right.

Observe that the coproduct preserves the grading of the algebra. That is, one has that
 * $$\Delta:{\bigwedge}^k(V) \to \bigoplus_{p=0}^k {\bigwedge}^p(V) \otimes{\bigwedge}^{k-p}(V)$$

The tensor symbol ⊗ used in this section should be understood with some caution: it is not the same tensor symbol as the one being used in the definition of the alternating product. Intuitively, it is perhaps easiest to think of it as just another, but different, tensor product: it is still (bi-)linear, as tensor products should be, but it is the product that is appropriate for the definition of a bialgebra, that is, for creating the object Λ(V) ⊗ Λ(V). Any lingering doubt can be shaken by pondering the equalities (1 ⊗ v) ∧ (1 ⊗ w) = 1 ⊗ (v ∧ w) and (v ⊗ 1) ∧ (1 ⊗ w) = v ⊗ w, which follow from the definition of the coalgebra, as opposed to naive manipulations involving the tensor and wedge symbols. This distinction is developed in greater detail in the article on tensor algebras. Here, there is much less of a problem, in that the alternating product ∧ clearly corresponds to multiplication in the bialgebra, leaving the symbol ⊗ free for use in the definition of the bialgebra. In practice, this presents no particular problem, as long as one avoids the fatal trap of replacing alternating sums of ⊗ by the wedge symbol, with one exception. One can construct an alternating product from ⊗, with the understanding that it works in a different space. Immediately below, an example is given: the alternating product for the dual space can be given in terms of the coproduct. The construction of the bialgebra here parallels the construction in the tensor algebra article almost exactly, except for the need to correctly track the alternating signs for the exterior algebra.

In terms of the coproduct, the exterior product on the dual space is just the graded dual of the coproduct:


 * $$(\alpha\wedge\beta)(x_1\wedge\cdots\wedge x_k) = (\alpha\otimes\beta)\left(\Delta(x_1\wedge\cdots\wedge x_k)\right)$$

where the tensor product on the right-hand side is of multilinear linear maps (extended by zero on elements of incompatible homogeneous degree: more precisely, α ∧ β = ε ∘ (α ⊗ β) ∘ Δ, where ε is the counit, as defined presently).

The counit is the homomorphism ε : Λ(V) → K that returns the 0-graded component of its argument. The coproduct and counit, along with the exterior product, define the structure of a bialgebra on the exterior algebra.

With an antipode defined on homogeneous elements by $$S(x)=(-1)^{\binom{\text{deg}\, x\, +1}{2}}x$$, the exterior algebra is furthermore a Hopf algebra.

Interior product
Suppose that V is finite-dimensional. If V∗ denotes the dual space to the vector space V, then for each α ∈ V∗, it is possible to define an antiderivation on the algebra Λ(V),


 * $$i_\alpha:{\bigwedge}^k V\rightarrow{\bigwedge}^{k-1}V.$$

This derivation is called the interior product with α, or sometimes the insertion operator, or contraction by α.

Suppose that w ∈ ΛkV. Then w is a multilinear mapping of V∗ to K, so it is defined by its values on the k-fold Cartesian product V∗ × V∗ × ... × V∗. If u1, u2, ..., uk−1 are k − 1 elements of V∗, then define


 * $$(i_\alpha {\mathbf w})(u_1,u_2,\ldots,u_{k-1})={\mathbf w}(\alpha,u_1,u_2,\ldots, u_{k-1}).$$

Additionally, let iαf = 0 whenever f is a pure scalar (i.e., belonging to Λ0V).
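The definition can be made concrete by viewing a decomposable 2-vector w = v1 ∧ v2 as an alternating function of two covectors via a determinant; a minimal sketch with arbitrarily chosen data, verifying the standard identity iα(v1 ∧ v2) = α(v1)v2 − α(v2)v1:

```python
import numpy as np

# w = v1 ^ v2, viewed as an alternating function of two covectors:
# w(alpha, beta) = det [[alpha(v1), alpha(v2)], [beta(v1), beta(v2)]].
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 3.0, 1.0])

def w(alpha, beta):
    return np.linalg.det(np.array([[alpha @ v1, alpha @ v2],
                                   [beta @ v1, beta @ v2]]))

alpha = np.array([2.0, -1.0, 1.0])   # a covector, identified with its components

# Interior product: (i_alpha w)(u) = w(alpha, u), one degree lower.
i_alpha_w = lambda u: w(alpha, u)

# Known identity: i_alpha(v1 ^ v2) = alpha(v1) v2 - alpha(v2) v1.
expected = (alpha @ v1) * v2 - (alpha @ v2) * v1
for u in np.eye(3):                   # evaluate on the dual basis covectors
    assert np.isclose(i_alpha_w(u), expected @ u)

# Inserting alpha twice gives w(alpha, alpha) = 0, reflecting i_alpha o i_alpha = 0.
assert np.isclose(w(alpha, alpha), 0.0)
```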

Axiomatic characterization and properties
The interior product satisfies the following properties:


 * 1) For each k and each α ∈ V∗,
 * $$i_\alpha:{\bigwedge}^k V\rightarrow {\bigwedge}^{k-1}V.$$
 * (By convention, Λ−1V = {0}.)
 * 2) If v is an element of V (= Λ1V), then iαv = α(v) is the dual pairing between elements of V and elements of V∗.
 * 3) For each α ∈ V∗, iα is a graded derivation of degree −1:
 * $$i_\alpha (a\wedge b) = (i_\alpha a)\wedge b + (-1)^{\deg a}a\wedge (i_\alpha b).$$

These three properties are sufficient to characterize the interior product as well as define it in the general infinite-dimensional case.

Further properties of the interior product include:
 * $$i_\alpha\circ i_\alpha = 0.$$
 * $$i_\alpha\circ i_\beta = -i_\beta\circ i_\alpha.$$

Hodge duality
Suppose that V has finite dimension n. Then the interior product induces a canonical isomorphism of vector spaces
 * $${\bigwedge}^k(V^*) \otimes {\bigwedge}^n(V) \to {\bigwedge}^{n-k}(V) $$

by the recursive definition
 * $$i_{\alpha \wedge \beta} = i_\beta \circ i_\alpha .$$

In the geometrical setting, a non-zero element of the top exterior power Λn(V) (which is a one-dimensional vector space) is sometimes called a volume form (or orientation form, although this term may sometimes lead to ambiguity). The name orientation form comes from the fact that a choice of preferred top element determines an orientation of the whole exterior algebra, since it is tantamount to fixing an ordered basis of the vector space. Relative to the preferred volume form σ, the isomorphism between an element $$\alpha \in {\bigwedge}^k(V^*)$$ and its Hodge dual is given explicitly by


 * $$ {\bigwedge}^k(V^*) \to {\bigwedge}^{n-k}(V) : \alpha \mapsto i_\alpha\sigma .$$

If, in addition to a volume form, the vector space V is equipped with an inner product identifying V with V∗, then the resulting isomorphism is called the Hodge star operator, which maps an element to its Hodge dual:


 * $$\star : {\bigwedge}^k(V) \rightarrow {\bigwedge}^{n-k}(V) .$$

The composition of $$\star$$ with itself maps Λk(V) → Λk(V) and is always a scalar multiple of the identity map. In most applications, the volume form is compatible with the inner product in the sense that it is an exterior product of an orthonormal basis of V. In this case,
 * $$\star \circ \star = (-1)^{k(n-k) + q}\,\mathrm{id} \colon {\bigwedge}^k(V) \to {\bigwedge}^k(V)$$

where id is the identity mapping, and the inner product has metric signature (p, q), with p pluses and q minuses.

Inner product
For V a finite-dimensional space, an inner product on V defines an isomorphism of V with V∗, and so also an isomorphism of ΛkV with (ΛkV)∗. The pairing between these two spaces also takes the form of an inner product. On decomposable k-vectors,
 * $$\left\langle v_1\wedge\cdots\wedge v_k, w_1\wedge\cdots\wedge w_k\right\rangle = \det(\langle v_i,w_j\rangle),$$

the determinant of the matrix of inner products. In the special case vi = wi, the inner product is the square norm of the k-vector, given by the determinant of the Gram matrix (⟨vi, vj⟩). This is then extended bilinearly (or sesquilinearly in the complex case) to a non-degenerate inner product on ΛkV. If ei, i = 1, 2, ..., n, form an orthonormal basis of V, then the vectors of the form
 * $$e_{i_1}\wedge\cdots\wedge e_{i_k},\quad i_1 < \cdots < i_k,$$

constitute an orthonormal basis for Λk(V).

It is not hard to show that for vectors v1, v2, ..., vk in Rn, ‖v1 ∧ v2 ∧ ... ∧ vk‖ is the volume of the parallelepiped spanned by these vectors.
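This can be checked numerically: the squared norm of the k-vector is the Gram determinant, and the k-volume of the parallelepiped can be computed independently from a QR factorization. A minimal sketch with randomly chosen vectors:

```python
import numpy as np

# Three vectors in R^5; the norm of v1 ^ v2 ^ v3 is the 3-volume they span.
rng = np.random.default_rng(1)
V = rng.standard_normal((5, 3))           # columns are v1, v2, v3

gram = V.T @ V                             # Gram matrix (<v_i, v_j>)
sq_norm = np.linalg.det(gram)              # squared norm of the 3-vector

# |det R| from a (reduced) QR factorization is the volume of the parallelepiped.
_, R = np.linalg.qr(V)
assert np.isclose(np.sqrt(sq_norm), abs(np.linalg.det(R)))
```

The agreement is no accident: det(Gram) = det(RᵀQᵀQR) = det(R)².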

With respect to the inner product, exterior multiplication and the interior product are mutually adjoint. Specifically, for v ∈ Λk−1(V), w ∈ Λk(V), and x ∈ V,
 * $$\langle x\wedge\mathbf{v}, \mathbf{w}\rangle = \langle \mathbf{v}, i_{x^\flat}\mathbf{w}\rangle$$

where x♭ ∈ V∗ is the linear functional defined by
 * $$x^\flat(y) = \langle x, y\rangle$$

for all y ∈ V. This property completely characterizes the inner product on the exterior algebra.

Indeed, more generally for v ∈ Λk−l(V), w ∈ Λk(V), and x ∈ Λl(V), iteration of the above adjoint properties gives
 * $$\langle \mathbf{x} \wedge\mathbf{v}, \mathbf{w}\rangle = \langle \mathbf{v}, i_{\mathbf{x}^\flat}\mathbf{w}\rangle$$

where now x♭ ∈ Λl(V∗) ≃ (Λl(V))∗ is the dual l-vector defined by
 * $$\mathbf{x}^\flat(\mathbf{y}) = \langle \mathbf{x}, \mathbf{y}\rangle$$

for all y ∈ Λl(V).

Functoriality
Suppose that V and W are a pair of vector spaces and f : V → W is a linear map. Then, by the universal construction, there exists a unique homomorphism of graded algebras


 * $$\bigwedge(f) : \bigwedge(V)\rightarrow \bigwedge(W)$$

such that


 * $$\bigwedge(f)\left|_{{\bigwedge}^1(V)}\right. = f : V={\bigwedge}^1(V)\rightarrow W={\bigwedge}^1(W).$$

In particular, Λ(f) preserves homogeneous degree. The k-graded components of Λ(f) are given on decomposable elements by
 * $$\bigwedge(f)(x_1\wedge \cdots \wedge x_k) = f(x_1)\wedge\cdots\wedge f(x_k).$$

Let


 * $${\bigwedge}^k(f) = \bigwedge(f)\left|_{{\bigwedge}^k(V)}\right. : {\bigwedge}^k(V) \rightarrow {\bigwedge}^k(W).$$

The components of the transformation Λk(f) relative to a basis of V and W form the matrix of k × k minors of f. In particular, if V = W and V is of finite dimension n, then Λn(f) is a mapping of a one-dimensional vector space ΛnV to itself, and is therefore given by a scalar: the determinant of f.
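The matrix of k × k minors is classically called the kth compound matrix, and functoriality Λk(f ∘ g) = Λk(f) ∘ Λk(g) becomes the Cauchy–Binet identity for compound matrices. A minimal NumPy sketch with randomly chosen matrices; `compound` is a helper defined here:

```python
import itertools
import numpy as np

def compound(A, k):
    """k-th compound matrix: k x k minors of A, indexed by increasing
    k-subsets of rows and columns. This is the matrix of wedge^k(A)."""
    n = A.shape[0]
    subsets = list(itertools.combinations(range(n), k))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in subsets]
                     for r in subsets])

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Functoriality: wedge^k(AB) = wedge^k(A) wedge^k(B)  (Cauchy-Binet).
assert np.allclose(compound(A @ B, 2), compound(A, 2) @ compound(B, 2))

# Top power: wedge^n(A) is the 1x1 matrix [det A].
assert np.isclose(compound(A, 4)[0, 0], np.linalg.det(A))
```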

Exactness
If


 * $$0\rightarrow U\rightarrow V\rightarrow W\rightarrow 0$$

is a short exact sequence of vector spaces, then


 * $$0\to {\bigwedge}^1(U)\wedge\bigwedge(V) \to \bigwedge(V)\rightarrow \bigwedge(W)\rightarrow 0$$

is an exact sequence of graded vector spaces, as is
 * $$0\to \bigwedge(U)\to\bigwedge(V).$$

Direct sums
In particular, the exterior algebra of a direct sum is isomorphic to the tensor product of the exterior algebras:


 * $$\bigwedge(V\oplus W)\cong\bigwedge(V)\otimes\bigwedge(W).$$

This is a graded isomorphism; i.e.,


 * $${\bigwedge}^k(V\oplus W)\cong\bigoplus_{p+q=k} {\bigwedge}^p(V)\otimes{\bigwedge}^q(W).$$
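On dimensions, the graded isomorphism reduces to Vandermonde's identity C(m+n, k) = Σ C(m, p) C(n, q); the following short Python check (our own illustration) confirms it for small m and n:

```python
from math import comb

# dim Λ^k(V ⊕ W) = C(m+n, k) must equal Σ_{p+q=k} C(m, p) C(n, q),
# where m = dim V and n = dim W
for m in range(6):
    for n in range(6):
        for k in range(m + n + 1):
            lhs = comb(m + n, k)
            rhs = sum(comb(m, p) * comb(n, k - p) for p in range(k + 1))
            assert lhs == rhs
```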

Slightly more generally, if


 * $$0\to U\to  V\to  W\to  0$$

is a short exact sequence of vector spaces, then Λk(V) has a filtration


 * $$0 = F^0 \subseteq F^1 \subseteq \cdots \subseteq F^k \subseteq F^{k+1} = {\bigwedge}^k(V)$$

with quotients


 * $$F^{p+1}/F^p = {\bigwedge}^{k-p}(U) \otimes {\bigwedge}^p(W).$$

In particular, if U is 1-dimensional then


 * $$0\to U \otimes {\bigwedge}^{k-1}(W) \to  {\bigwedge}^k(V)\to  {\bigwedge}^k(W)\to  0$$

is exact, and if W is 1-dimensional then


 * $$0\to {\bigwedge}^k(U) \to  {\bigwedge}^k(V)\to  {\bigwedge}^{k-1}(U) \otimes W\to  0$$

is exact.

Alternating tensor algebra
If K is a field of characteristic 0, then the exterior algebra of a vector space V can be canonically identified with the vector subspace of T(V) consisting of antisymmetric tensors. Recall that the exterior algebra is the quotient of T(V) by the ideal I generated by x ⊗ x.

Let Tr(V) be the space of homogeneous tensors of degree r. This is spanned by decomposable tensors


 * $$v_1\otimes\cdots\otimes v_r,\quad v_i\in V.$$

The antisymmetrization (or sometimes the skew-symmetrization) of a decomposable tensor is defined by


 * $$\operatorname{Alt}(v_1\otimes\cdots\otimes v_r) = \frac{1}{r!}\sum_{\sigma\in\mathfrak{S}_r} \operatorname{sgn}(\sigma) v_{\sigma(1)}\otimes\cdots\otimes v_{\sigma(r)}$$

where the sum is taken over the symmetric group $$\mathfrak{S}_r$$ of permutations on the symbols {1, ..., r}. This extends by linearity and homogeneity to an operation, also denoted by Alt, on the full tensor algebra T(V). The image Alt(T(V)) is the alternating tensor algebra, denoted A(V). This is a vector subspace of T(V), and it inherits the structure of a graded vector space from that on T(V). It carries an associative graded product $$\widehat{\otimes}$$ defined by


 * $$t~\widehat{\otimes}~s = \operatorname{Alt}(t\otimes s).$$

Although this product differs from the tensor product, the kernel of Alt is precisely the ideal I (again, assuming that K has characteristic 0), and there is a canonical isomorphism


 * $$A(V)\cong \bigwedge(V).$$
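The antisymmetrization map is easy to implement directly from the formula. The Python sketch below (our own encoding of tensors as component dictionaries) checks that Alt is a projection, i.e. Alt ∘ Alt = Alt:

```python
from itertools import permutations, product
from fractions import Fraction
from math import factorial

def sign(perm):
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def alt(t, r, n):
    # Alt(t)^{i_1...i_r} = (1/r!) Σ_σ sgn(σ) t^{i_{σ(1)}...i_{σ(r)}}
    out = {}
    for idx in product(range(n), repeat=r):
        s = Fraction(0)
        for perm in permutations(range(r)):
            s += sign(perm) * t.get(tuple(idx[p] for p in perm), 0)
        out[idx] = s / factorial(r)
    return out

def decomposable(*vs):
    # components of the decomposable tensor v_1 ⊗ ... ⊗ v_r
    n = len(vs[0])
    t = {}
    for idx in product(range(n), repeat=len(vs)):
        val = Fraction(1)
        for vec, i in zip(vs, idx):
            val *= vec[i]
        t[idx] = val
    return t

t = decomposable([1, 2, 0], [0, 1, 3], [4, 0, 1])
a = alt(t, 3, 3)
assert alt(a, 3, 3) == a        # Alt is a projection onto alternating tensors
assert a[(0, 0, 1)] == 0        # repeated indices kill the alternating part
```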

Index notation
Suppose that V has finite dimension n, and that a basis e1, ..., en of V is given. Then any alternating tensor t ∈ Ar(V) ⊂ Tr(V) can be written in index notation as


 * $$t = t^{i_1i_2\cdots i_r}\, {\mathbf e}_{i_1}\otimes {\mathbf e}_{i_2}\otimes\cdots\otimes {\mathbf e}_{i_r},$$

where ti1⋅⋅⋅ir is completely antisymmetric in its indices.

The exterior product of two alternating tensors t and s of ranks r and p is given by


 * $$t~\widehat{\otimes}~s = \frac{1}{(r+p)!}\sum_{\sigma\in {\mathfrak S}_{r+p}}\operatorname{sgn}(\sigma)t^{i_{\sigma(1)}\cdots i_{\sigma(r)}} s^{i_{\sigma(r+1)} \cdots i_{\sigma(r+p)}} {\mathbf e}_{i_1}\otimes {\mathbf e}_{i_2} \otimes \cdots\otimes {\mathbf e}_{i_{r+p}}.$$

The components of this tensor are precisely the skew part of the components of the tensor product t ⊗ s, denoted by square brackets on the indices:


 * $$(t~\widehat{\otimes}~s)^{i_1\cdots i_{r+p}} = t^{[i_1\cdots i_r}s^{i_{r+1}\cdots i_{r+p}]}.$$

The interior product may also be described in index notation as follows. Let $$t = t^{i_0i_1\cdots i_{r-1}}$$ be an antisymmetric tensor of rank r. Then, for α ∈ V∗, iαt is an alternating tensor of rank r − 1, given by


 * $$(i_\alpha t)^{i_1\cdots i_{r-1}}=r\sum_{j=0}^n\alpha_j t^{ji_1\cdots i_{r-1}}.$$

where n is the dimension of V.
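The following Python sketch (our own encoding; note that the factor r is a convention tied to the Alt-normalized wedge, and other texts omit it) implements this formula and checks that applying i_α twice gives zero:

```python
from itertools import product

def interior(alpha, t, r, n):
    # (i_α t)^{i_1...i_{r-1}} = r Σ_j α_j t^{j i_1 ... i_{r-1}}
    return {idx: r * sum(alpha[j] * t.get((j,) + idx, 0) for j in range(n))
            for idx in product(range(n), repeat=r - 1)}

# a rank-2 antisymmetric tensor on R^3: components u_i v_j − u_j v_i
u, v = [1, 2, 0], [0, 1, 3]
t = {(i, j): u[i] * v[j] - u[j] * v[i] for i in range(3) for j in range(3)}

alpha = [2, -1, 1]
s = interior(alpha, t, 2, 3)     # a rank-1 tensor
z = interior(alpha, s, 1, 3)     # a scalar (rank 0)
assert z[()] == 0                # i_α ∘ i_α = 0 on antisymmetric tensors
```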

Linear algebra
In applications to linear algebra, the exterior product provides an abstract algebraic manner for describing the determinant and the minors of a matrix. For instance, it is well known that the magnitude of the determinant of a square matrix is equal to the volume of the parallelotope whose sides are the columns of the matrix. This suggests that the determinant can be defined in terms of the exterior product of the column vectors. Likewise, the k × k minors of a matrix can be defined by looking at the exterior products of column vectors chosen k at a time. These ideas can be extended not just to matrices but to linear transformations as well: the magnitude of the determinant of a linear transformation is the factor by which it scales the volume of any given reference parallelotope. So the determinant of a linear transformation can be defined in terms of what the transformation does to the top exterior power. The action of a transformation on the lesser exterior powers gives a basis-independent way to talk about the minors of the transformation.

Technical details: Definitions
Let $$V$$ be an n-dimensional vector space over field $$K$$ with basis $$\{e_1,\ldots,e_n\}$$.
 * For $$A \in \operatorname{End}(V)$$, define $${\bigwedge}^k A \in \operatorname{End}({\bigwedge}^k V) $$ on simple tensors by


 * $${\bigwedge}^k A(v_1 \wedge\cdots\wedge v_k)=Av_1 \wedge \cdots \wedge Av_k $$


 * and expand the definition linearly to all tensors. More generally, we can define $${\bigwedge}^p A^k \in \operatorname{End}({\bigwedge}^p V), (p \geq k) $$ on simple tensors by


 * $$\left ({\bigwedge}^p A^k \right )(v_1 \wedge \cdots \wedge v_p)=\sum_{1 \leq i_1 < \cdots < i_k \leq p} v_1 \wedge \cdots \wedge Av_{i_1} \wedge \cdots \wedge Av_{i_k} \wedge \cdots \wedge v_p$$


 * i.e. choose k components on which A would act, then sum up all results obtained from different choices. If $$p<k$$, define $${\bigwedge}^p A^k=0 $$. Since $${\bigwedge}^n V $$ is 1-dimensional with basis $$e_1 \wedge \cdots \wedge e_n$$, we can identify $${\bigwedge}^n A^k $$ with the unique number $$\kappa \in K $$ satisfying


 * $${\bigwedge}^n A^k (e_1 \wedge \cdots \wedge e_n) = \kappa (e_1 \wedge \cdots \wedge e_n) .$$


 * For $$\varphi \in \operatorname{End}({\bigwedge}^p V)$$, define the exterior transpose $$\varphi^\mathrm{T} \in \operatorname{End}({\bigwedge}^{n-p} V)$$ to be the unique operator satisfying


 * $$\forall \omega_p \in {\bigwedge}^p V, \omega_{n-p} \in {\bigwedge}^{n-p}V, (\varphi^\mathrm{T}\omega_{n-p})\wedge\omega_p=\omega_{n-p}\wedge(\varphi \omega_p)$$


 * For $$A \in \operatorname{End}(V)$$, define $$\det A={\bigwedge}^n A^n, \operatorname{Tr}(A)={\bigwedge}^n A^1, \operatorname{adj} A=({\bigwedge}^{n-1} A^{n-1})^\mathrm{T} $$. These definitions are equivalent to the usual ones.
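Expanding the definition of Λn Ak in the basis e1 ∧ ⋯ ∧ en shows that it equals the sum of the k × k principal minors of A, so Λn A1 = Tr A and Λn An = det A. The Python sketch below (helper names are ours) computes it this way:

```python
from itertools import combinations, permutations

def det(m):
    # Leibniz formula; fine for the small matrices used here
    n = len(m)
    total = 0
    for perm in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        prod = 1
        for i, pi in enumerate(perm):
            prod *= m[i][pi]
        total += -prod if inv % 2 else prod
    return total

def wedge_power_scalar(A, k):
    # Λ^n A^k: expanding the definition in the basis e_1 ∧ ... ∧ e_n gives
    # the sum of all k × k principal minors of A
    return sum(det([[A[i][j] for j in S] for i in S])
               for S in combinations(range(len(A)), k))

A = [[2, 1, 0],
     [1, 3, 1],
     [0, 1, 4]]

assert wedge_power_scalar(A, 1) == 2 + 3 + 4       # Tr A
assert wedge_power_scalar(A, 3) == det(A)          # det A
```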

Basic properties
All results obtained from other definitions of the determinant, trace and adjoint can be obtained from this definition (since these definitions are equivalent). Here are some basic properties related to these new definitions:


 * $$(\cdot)^\mathrm{T}$$ is $$K$$-linear.


 * $$(AB)^\mathrm{T} = B^\mathrm{T} A^\mathrm{T}.$$


 * We have a canonical isomorphism
 * $$\begin{cases}\psi:\operatorname{End}({\bigwedge}^k V) \cong \operatorname{End}({\bigwedge}^{n-k} V) \\ A \mapsto A^\mathrm{T} \end{cases}$$
 * However, there is no canonical isomorphism between $${\bigwedge}^k V$$ and $${\bigwedge}^{n-k} V.$$


 * $$\operatorname{Tr} \left ({\bigwedge}^k A \right ) = {\bigwedge}^n A^k.$$ The entries of the transposed matrix of $${\bigwedge}^k A$$ are $$k \times k$$-minors of $$A$$.


 * $$\forall k \leqslant n-1, p \leqslant k, A \in \operatorname{End}(V),$$
 * $$\sum_{q=0}^p \left ({\bigwedge}^{n-k} A^{p-q} \right )^\mathrm{T} \left ({\bigwedge}^k A^q \right ) = \left ({\bigwedge}^n A^p \right ) \operatorname{Id} \in \operatorname{End}\left({\bigwedge}^k V\right).$$
 * In particular,
 * $$\left ({\bigwedge}^{n-1} A^{p-1} \right )^\mathrm{T} A + \left ({\bigwedge}^{n-1} A^p \right )^\mathrm{T} = \left ({\bigwedge}^n A^p \right ) \operatorname{Id}$$
 * and hence
 * $$(\operatorname{adj} A)A = \left ({\bigwedge}^{n-1} A^{n-1} \right )^\mathrm{T} A = \left ({\bigwedge}^n A^n \right ) \operatorname{Id} =(\det A)\operatorname{Id}.$$


 * $$\left ({\bigwedge}^{n-1} A^p \right )^\mathrm{T} = \sum_{q=0}^p \left ({\bigwedge}^n A^{p-q} \right )(-A)^q = \sum_{q=0}^p \operatorname{Tr} \left ({\bigwedge}^{p-q} A \right )(-A)^q.$$
 * In particular,
 * $$\operatorname{adj} A = \sum_{q=0}^{n-1} \left ({\bigwedge}^n A^{n-q-1} \right )(-A)^q.$$


 * $$\operatorname{Tr} \left ({\bigwedge}^k \operatorname{adj} A \right ) = {\bigwedge}^n (\operatorname{adj} A)^k = (\det A)^{k-1} \left ({\bigwedge}^n A^{n-k} \right ) = (\det A)^{k-1} \operatorname{Tr} \left ({\bigwedge}^{n-k} A \right ).$$


 * $$\operatorname{Tr} \left ( \left ({\bigwedge}^{n-1} A^k \right )^\mathrm{T} \right )=(n-k) {\bigwedge}^n A^k=(n-k) \operatorname{Tr}\left ({\bigwedge}^k A \right ).$$


 * The characteristic polynomial $$\operatorname{ch}_A(t)$$ of $$A \in \operatorname{End}(V)$$ can be given by
 * $$\operatorname{ch}_A(t)=\sum_{k=0}^n \operatorname{Tr} \left ({\bigwedge}^k A \right )(-t)^{n-k} = \sum_{k=0}^n \left ({\bigwedge}^n A^k \right )(-t)^{n-k} .$$
 * Similarly,
 * $$\operatorname{ch}_{\operatorname{adj} A}(t)=\sum_{k=0}^n \left ({\bigwedge}^n(\operatorname{adj} A)^k \right )(-t)^{n-k} = \sum_{k=0}^n (\det A)^{k-1} \left ({\bigwedge}^n A^{n-k} \right )(-t)^{n-k}$$
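The first formula can be checked numerically; the following Python sketch (our own helpers, with Λn Ak computed as the sum of k × k principal minors) compares it with det(A − t·Id) at several values of t:

```python
from itertools import combinations, permutations

def det(m):
    n = len(m)
    total = 0
    for perm in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        prod = 1
        for i, pi in enumerate(perm):
            prod *= m[i][pi]
        total += -prod if inv % 2 else prod
    return total

def wedge_power_scalar(A, k):
    # Λ^n A^k as the sum of k × k principal minors of A
    return sum(det([[A[i][j] for j in S] for i in S])
               for S in combinations(range(len(A)), k))

A = [[2, 1, 0],
     [1, 3, 1],
     [0, 1, 4]]
n = len(A)

# ch_A(t) = Σ_k (Λ^n A^k)(−t)^{n−k} should agree with det(A − t·Id)
for t in range(-3, 4):
    lhs = sum(wedge_power_scalar(A, k) * (-t) ** (n - k) for k in range(n + 1))
    shifted = [[A[i][j] - (t if i == j else 0) for j in range(n)] for i in range(n)]
    assert lhs == det(shifted)
```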

Leverrier's algorithm
$${\bigwedge}^n A^k$$ is the coefficient of the $$(-t)^{n-k}$$ term in the characteristic polynomial. These scalars also appear in the expressions for $$\left({\bigwedge}^{n-1} A^p\right)^\mathrm{T}$$ and $${\bigwedge}^n (\operatorname{adj} A)^k$$. Leverrier's algorithm is an economical way of computing $${\bigwedge}^n A^k$$ and $${\bigwedge}^{n-1} A^k$$:


 * Set $${\bigwedge}^{n-1} A^0 = \operatorname{Id}$$;


 * For $$k=n-1,n-2,\ldots,1,0$$,


 * $${\bigwedge}^n A^{n-k} = \frac{1}{n-k} \operatorname{Tr}(A \circ {\bigwedge}^{n-1} A^{n-k-1});$$


 * $${\bigwedge}^{n-1} A^{n-k} = {\bigwedge}^n A^{n-k} \cdot \operatorname{Id} - A \circ {\bigwedge}^{n-1} A^{n-k-1}.$$
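The recursion can be implemented directly, writing the operators (Λn−1 Ap)T as ordinary matrices. The Python sketch below (our own variable names, using exact rational arithmetic) recovers the characteristic-polynomial coefficients and the adjugate, and checks (adj A)A = (det A) Id:

```python
from fractions import Fraction

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def leverrier(A):
    n = len(A)
    I = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    A = [[Fraction(x) for x in row] for row in A]
    M = [row[:] for row in I]     # Λ^{n-1} A^0, identified with Id
    coeffs = [Fraction(1)]        # Λ^n A^0 = 1
    adj = M
    for p in range(1, n + 1):
        AM = matmul(A, M)
        # Λ^n A^p = Tr(A ∘ Λ^{n-1} A^{p-1}) / p
        c = sum(AM[i][i] for i in range(n)) / p
        coeffs.append(c)
        adj = M                   # at p = n this is Λ^{n-1} A^{n-1}, i.e. adj A
        M = [[c * I[i][j] - AM[i][j] for j in range(n)] for i in range(n)]
    return coeffs, adj

A = [[2, 1, 0],
     [1, 3, 1],
     [0, 1, 4]]
coeffs, adjA = leverrier(A)
assert coeffs == [1, 9, 24, 18]                   # Λ^n A^k for k = 0, ..., n

# (adj A) A = (det A) Id
AF = [[Fraction(x) for x in row] for row in A]
prod = matmul(adjA, AF)
assert all(prod[i][j] == (18 if i == j else 0) for i in range(3) for j in range(3))
```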

Physics
In physics, many quantities are naturally represented by alternating operators. For example, if the motion of a charged particle is described by velocity and acceleration vectors in four-dimensional spacetime, then normalization of the velocity vector requires that the electromagnetic force must be an alternating operator on the velocity. Its six degrees of freedom are identified with the electric and magnetic fields.

Linear geometry
The decomposable k-vectors have geometric interpretations: the bivector u ∧ v represents the plane spanned by the vectors, "weighted" with a number, given by the area of the oriented parallelogram with sides u and v. Analogously, the 3-vector u ∧ v ∧ w represents the spanned 3-space weighted by the volume of the oriented parallelepiped with edges u, v, and w.
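In coordinates these weights are determinants; a minimal Python illustration (our own example vectors):

```python
# u ∧ v in R^2: the coefficient of e1 ∧ e2 is det[u v], whose absolute value
# is the area of the parallelogram with sides u and v
u, v = (3, 0), (1, 2)
wedge = u[0] * v[1] - u[1] * v[0]
assert abs(wedge) == 6

# u ∧ v ∧ w in R^3: the coefficient of e1 ∧ e2 ∧ e3 is det[u v w],
# the signed volume of the parallelepiped with edges u, v, w
u3, v3, w3 = (1, 0, 0), (1, 2, 0), (1, 1, 3)
vol = (u3[0] * (v3[1] * w3[2] - v3[2] * w3[1])
       - u3[1] * (v3[0] * w3[2] - v3[2] * w3[0])
       + u3[2] * (v3[0] * w3[1] - v3[1] * w3[0]))
assert vol == 6
```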

Projective geometry
Decomposable k-vectors in ΛkV correspond to weighted k-dimensional subspaces of V. In particular, the Grassmannian of k-dimensional subspaces of V, denoted Grk(V), can be naturally identified with an algebraic subvariety of the projective space P(ΛkV). This is called the Plücker embedding.

Differential geometry
The exterior algebra has notable applications in differential geometry, where it is used to define differential forms. Differential forms are mathematical objects that evaluate the length of vectors, areas of parallelograms, and volumes of higher-dimensional bodies, so they can be integrated over curves, surfaces and higher dimensional manifolds in a way that generalizes the line integrals and surface integrals from calculus. A differential form at a point of a differentiable manifold is an alternating multilinear form on the tangent space at the point. Equivalently, a differential form of degree k is a linear functional on the k-th exterior power of the tangent space. As a consequence, the exterior product of multilinear forms defines a natural exterior product for differential forms. Differential forms play a major role in diverse areas of differential geometry.

In particular, the exterior derivative gives the exterior algebra of differential forms on a manifold the structure of a differential graded algebra. The exterior derivative commutes with pullback along smooth mappings between manifolds, and it is therefore a natural differential operator. The exterior algebra of differential forms, equipped with the exterior derivative, is a cochain complex whose cohomology is called the de Rham cohomology of the underlying manifold and plays a vital role in the algebraic topology of differentiable manifolds.

Representation theory
In representation theory, the exterior algebra is one of the two fundamental Schur functors on the category of vector spaces, the other being the symmetric algebra. Together, these constructions are used to generate the irreducible representations of the general linear group.

Superspace
The exterior algebra over the complex numbers is the archetypal example of a superalgebra, which plays a fundamental role in physical theories pertaining to fermions and supersymmetry. A single element of the exterior algebra is called a supernumber or Grassmann number. The exterior algebra itself is then just a one-dimensional superspace: it is just the set of all of the points in the exterior algebra. The topology on this space is essentially the weak topology, the open sets being the cylinder sets. An $n$-dimensional superspace is just the $n$-fold product of exterior algebras.

Lie algebra homology
Let L be a Lie algebra over a field K. Then it is possible to define the structure of a chain complex on the exterior algebra of L. This is a K-linear mapping


 * $$\partial : {\bigwedge}^{p+1}L\to{\bigwedge}^p L$$

defined on decomposable elements by


 * $$\partial (x_1\wedge\cdots\wedge x_{p+1}) = \frac{1}{p+1}\sum_{j<\ell}(-1)^{j+\ell+1}[x_j,x_\ell]\wedge x_1\wedge\cdots\wedge \hat{x}_j\wedge\cdots\wedge\hat{x}_\ell\wedge\cdots\wedge x_{p+1}.$$

The Jacobi identity holds if and only if ∂∂ = 0, and so this is a necessary and sufficient condition for an anticommutative nonassociative algebra L to be a Lie algebra. Moreover, in that case ΛL is a chain complex with boundary operator ∂. The homology associated to this complex is the Lie algebra homology.
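As an illustration, the following Python sketch (our own encoding, for the 3-dimensional solvable Lie algebra with basis x, y, z and brackets [x, y] = y, [x, z] = 2z, [y, z] = 0) computes ∂ on Λ3 and Λ2 with the normalization above and confirms ∂∂ = 0:

```python
from fractions import Fraction

n = 3  # basis e_0 = x, e_1 = y, e_2 = z

def bracket(i, j):
    # structure constants of the example Lie algebra
    table = {(0, 1): [0, 1, 0],      # [x, y] = y
             (0, 2): [0, 0, 2]}      # [x, z] = 2z; [y, z] = 0
    if (i, j) in table:
        return [Fraction(c) for c in table[(i, j)]]
    if (j, i) in table:
        return [-Fraction(c) for c in table[(j, i)]]
    return [Fraction(0)] * n

def d2(c2):
    # ∂ : Λ^2 → Λ^1,  ∂(e_i ∧ e_j) = (1/2)[e_i, e_j]
    out = [Fraction(0)] * n
    for (i, j), coef in c2.items():
        br = bracket(i, j)
        for m in range(n):
            out[m] += coef * br[m] / 2
    return out

def d3(coef):
    # ∂ on coef · (e_0 ∧ e_1 ∧ e_2):
    # (1/3)([e0,e1] ∧ e2 − [e0,e2] ∧ e1 + [e1,e2] ∧ e0)
    out = {(0, 1): Fraction(0), (0, 2): Fraction(0), (1, 2): Fraction(0)}
    for sgn, (a, b), rest in [(1, (0, 1), 2), (-1, (0, 2), 1), (1, (1, 2), 0)]:
        br = bracket(a, b)
        for m in range(n):
            if br[m] == 0 or m == rest:   # e_m ∧ e_m = 0
                continue
            i, j = (m, rest) if m < rest else (rest, m)
            s = 1 if m < rest else -1     # reorder e_m ∧ e_rest
            out[(i, j)] += Fraction(sgn * s, 3) * coef * br[m]
    return out

w = d3(Fraction(1))                # ∂(x ∧ y ∧ z) = (1/3)(1 + 2) y ∧ z = y ∧ z
assert w == {(0, 1): 0, (0, 2): 0, (1, 2): 1}
assert d2(w) == [0, 0, 0]          # ∂∂ = 0, reflecting the Jacobi identity
```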

Homological algebra
The exterior algebra is the main ingredient in the construction of the Koszul complex, a fundamental object in homological algebra.

History
The exterior algebra was first introduced by Hermann Grassmann in 1844 under the blanket term of Ausdehnungslehre, or Theory of Extension. This referred more generally to an algebraic (or axiomatic) theory of extended quantities and was one of the early precursors to the modern notion of a vector space. Saint-Venant also published similar ideas of exterior calculus, for which he claimed priority over Grassmann.

The algebra itself was built from a set of rules, or axioms, capturing the formal aspects of Cayley and Sylvester's theory of multivectors. It was thus a calculus, much like the propositional calculus, except focused exclusively on the task of formal reasoning in geometrical terms. In particular, this new development allowed for an axiomatic characterization of dimension, a property that had previously only been examined from the coordinate point of view.

The import of this new theory of vectors and multivectors was lost to mid-19th-century mathematicians, until being thoroughly vetted by Giuseppe Peano in 1888. Peano's work also remained somewhat obscure until the turn of the century, when the subject was unified by members of the French geometry school (notably Henri Poincaré, Élie Cartan, and Gaston Darboux) who applied Grassmann's ideas to the calculus of differential forms.

A short while later, Alfred North Whitehead, borrowing from the ideas of Peano and Grassmann, introduced his universal algebra. This then paved the way for the 20th century developments of abstract algebra by placing the axiomatic notion of an algebraic system on a firm logical footing.