A group is abelian if its binary operator is commutative. That is, $a \cdot b = b \cdot a$ for all elements $a$ and $b$.
An affine space can be thought of as a vector space that has "lost its origin".
Formally, an affine space is a set $A$ of points together with a vector space $V$ of "displacements". Points in $A$ can be subtracted to form a vector. That is, $b - a \in V$ for any $a, b \in A$. Subtraction must satisfy the following properties:
These conditions are called the Weyl axioms. An alternative formulation is to assert that $V$ acts on $A$ by a free and transitive action (treating $V$ as a group under vector addition).
When $V$ is equipped with an inner product, $A$ becomes a metric space with the metric $d(a, b) = \lVert b - a \rVert$. In particular, if $V = \mathbb{R}^n$ is equipped with an inner product, then $A$ is said to be euclidean.
The (algebraic) dual space $V^*$ of the vector space $V$ over the scalar field $F$ consists of all linear mappings of the form $f : V \to F$. $V^*$ is itself a vector space over $F$. Elements of $V^*$ may be referred to as linear functionals or covectors or dual vectors. The notion of a dual space is integral in the theory of tensors.
$V$ and $V^*$ are isomorphic if they are finite-dimensional. That is, they share the same dimension. Moreover, for any non-degenerate bilinear form $\langle \cdot, \cdot \rangle$ on $V$, the map $v \mapsto \langle v, \cdot \rangle$ is an isomorphism between $V$ and $V^*$. Thus, a finite-dimensional inner product space has a "natural" isomorphism to its dual.
Regardless of the dimensionality of $V$, there exists a natural homomorphism from $V$ to the dual of its dual, $V^{**}$. This natural homomorphism maps each $v \in V$ to the "evaluation map" $\mathrm{ev}_v$ defined by $\mathrm{ev}_v(f) = f(v)$. For finite-dimensional $V$, this homomorphism is also an isomorphism.
The existence of a natural isomorphism means that $V$ and $V^{**}$ may roughly be treated as equivalent.
Let $e_1, \ldots, e_n$ form a basis of $V$. Define the corresponding dual basis as the sequence of covectors $e^1, \ldots, e^n$ satisfying $e^i(e_j) = \delta^i_j$.
Here, $\delta^i_j = 1$ if $i = j$ and $0$ otherwise (this quantity is sometimes called the Kronecker delta). The dual basis is a basis of $V^*$.
A bilinear form over a vector space $V$ (over some scalar field $F$) is a mapping $B : V \times V \to F$ that is linear in each argument.
Bilinear forms may be characterized by additional properties they possess. Let $B$ be a bilinear form. Then $B$ is:
Bilinear forms may be represented as matrices for finite-dimensional vector spaces. Let $V = F^n$. Then every bilinear form can be written in the following fashion:
$$B(x, y) = x^\top A y,$$
where $A$ is the $n \times n$ matrix representation of $B$, and $x$ and $y$ are column vector representations of the arguments.
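As a quick numerical sketch (the matrix `A` and vectors below are arbitrary examples, not taken from the text), the value of a bilinear form is just $x^\top A y$:

```python
import numpy as np

# Hypothetical example: a bilinear form on R^3 represented by the matrix A.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 1.0]])

x = np.array([1.0, 0.0, 2.0])
y = np.array([0.5, 1.0, -1.0])

# B(x, y) = x^T A y
print(x @ A @ y)

# Bilinearity check: B(x, 2u + 3v) == 2 B(x, u) + 3 B(x, v)
u, v = np.array([1.0, 1.0, 0.0]), np.array([0.0, 2.0, 1.0])
assert np.isclose(x @ A @ (2 * u + 3 * v), 2 * (x @ A @ u) + 3 * (x @ A @ v))
```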
Cayley's theorem asserts that a permutation group and an abstract group are essentially equivalent concepts. It is clear that a permutation group meets the definition of an abstract group. Cayley's theorem states that every abstract group is isomorphic to some permutation group.
For a given $g \in G$, the bijection $L_g : G \to G$ defined by $L_g(x) = gx$ is the left action by $g$. Similarly, the map $R_g(x) = xg$ is the right action by $g$. The mapping from $g$ to the corresponding left action (or right action) is an isomorphism onto a group of permutations of $G$.
A change of basis is the process of converting the components of a vector (or matrix or tensor) with respect to one basis into components with respect to another basis.
Let $V$ be a finite-dimensional vector space with dimension $n$. And let $B$ and $B'$ be two bases on $V$. Then there uniquely exists an $n \times n$ matrix $P$, called a transition matrix or change of basis matrix, such that
$$[v]_{B'} = P \, [v]_B$$
for all $v \in V$. Here, $[v]_B$ and $[v]_{B'}$ are the column vectors of $v$ written with respect to $B$ and $B'$, respectively.
Transition matrices have a number of properties:
Each basis on $V$ is uniquely associated with a linear isomorphism $F^n \to V$. Let $\phi_B$ and $\phi_{B'}$ be the isomorphisms corresponding to $B$ and $B'$ respectively. Then the transition matrix $P$ is the matrix representation of the map $F^n \to F^n$ given by $\phi_{B'}^{-1} \circ \phi_B$.
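A small numpy sketch of this construction (the two bases below are made-up examples): the columns of `B` and `Bp` hold the basis vectors in standard coordinates, so they play the role of the isomorphisms above, and the transition matrix is obtained by composing one with the inverse of the other.

```python
import numpy as np

# Columns of B and Bp are two (hypothetical) bases of R^2, in standard coordinates.
B  = np.array([[1.0, 1.0],
               [0.0, 1.0]])
Bp = np.array([[2.0, 0.0],
               [1.0, 1.0]])

# Transition matrix taking B-coordinates to B'-coordinates: P = Bp^{-1} B.
P = np.linalg.solve(Bp, B)

v_B  = np.array([3.0, -1.0])       # coordinates of some vector w.r.t. B
v    = B @ v_B                     # the vector itself (standard coordinates)
v_Bp = P @ v_B                     # its coordinates w.r.t. B'

assert np.allclose(Bp @ v_Bp, v)   # both coordinate vectors describe the same v
```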
Chebyshev's inequality states that for any random variable $X$ with expectation $\mu$ and variance $\sigma^2$, the following holds for any $k > 0$:
$$P(|X - \mu| \geq k\sigma) \leq \frac{1}{k^2}.$$
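A quick Monte Carlo sanity check (a sketch using an arbitrary example distribution): the empirical tail probability should never exceed the $1/k^2$ bound.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=1_000_000)  # arbitrary example distribution
mu, sigma = x.mean(), x.std()

for k in (1.5, 2.0, 3.0):
    empirical = np.mean(np.abs(x - mu) >= k * sigma)
    print(f"k={k}: P(|X - mu| >= k*sigma) ~ {empirical:.4f} <= {1 / k**2:.4f}")
```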
Let $V$ be a finite-dimensional real or complex vector space and $T : V \to V$ be a linear transformation. The determinant of $T$, notated as either $\det T$ or as $|T|$, is a number that may be defined in various ways.
Before defining the determinant, it is worth listing some of its properties:
Let $T$ specifically be given by a matrix $A$; then:
Let $\lambda_1, \ldots, \lambda_n$ be the algebraically distinct eigenvalues of $T$. The determinant of $T$ is defined as their product:
$$\det T = \lambda_1 \lambda_2 \cdots \lambda_n.$$
Here, "algebraically distinct" refers to the fact that the eigenvalues may not be numerically distinct, but act as distinct factors of the characteristic polynomial of $T$:
$$p(z) = (z - \lambda_1)(z - \lambda_2) \cdots (z - \lambda_n).$$
This definition may seem circular since the characteristic polynomial is itself usually defined by the determinant ($p(z) = \det(zI - A)$). However, it is possible to define this polynomial in other ways. This is the approach taken, for example, in Axler's Linear Algebra Done Right.
It is possible to define the determinant in the context of Grassmann (exterior) algebra.
Let $V$ be $n$-dimensional. Then the exterior power $\Lambda^n V$ is one-dimensional. The corresponding exterior power of $T$, $\Lambda^n T : \Lambda^n V \to \Lambda^n V$, can be written in the form $\Lambda^n T = c \cdot \mathrm{id}$. And $c$ can thus be defined as the determinant of $T$.
Let $A$ be the matrix representation of $T$ with respect to some basis of $V$. Then it is possible to define the determinant of $T$ as a function of the corresponding matrix elements $a_{ij}$.
Suppose $V$ is $n$-dimensional. A bijection of the form $\sigma : \{1, \ldots, n\} \to \{1, \ldots, n\}$ is called an ($n$-fold) permutation. The sign of this permutation, denoted as $\operatorname{sgn}(\sigma)$, is equal to 1 if $\sigma$ can be written as an even number of "swaps" (permutations that swap two elements but leave the others unchanged) and -1 if the number of swaps is odd.
Then the determinant of $T$ and $A$ is:
$$\det A = \sum_{\sigma} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i\sigma(i)}.$$
Here, the summation is over all possible $n$-fold permutations $\sigma$.
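The permutation-based (Leibniz) formula can be transcribed almost directly into code. This is only a sketch for small matrices (the sum has $n!$ terms) and not how determinants are computed in practice; the example matrix is arbitrary.

```python
import itertools
import math
import numpy as np

def sign(perm):
    """Sign of a permutation given as a tuple of 0-based indices."""
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det_leibniz(A):
    """Determinant via the sum over all n-fold permutations."""
    n = len(A)
    return sum(sign(p) * math.prod(A[i][p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

A = np.array([[1.0, 2.0, 0.0],
              [3.0, 1.0, 4.0],
              [0.0, 1.0, 2.0]])
assert np.isclose(det_leibniz(A), np.linalg.det(A))
```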
A differentiable manifold of dimension $n$ is a mathematical structure consisting of a set $M$ and an equivalence class of atlases.
The study of differentiable manifolds is a central focus of differential topology.
The dual numbers can be thought of as an extension of the real numbers with an infinitesimal offset. A dual number may be written as $a + b\epsilon$, where $a$ and $b$ are real numbers. And $\epsilon$ may be thought of as "infinitesimal". That is, $\epsilon$ is distinct from zero despite the fact that $\epsilon^2 = 0$.
The expression is linear in $a$ and $b$ and obeys the following rule of multiplication:
$$(a + b\epsilon)(c + d\epsilon) = ac + (ad + bc)\epsilon.$$
For any polynomial $p$, it may be readily shown that:
$$p(a + b\epsilon) = p(a) + b\,p'(a)\,\epsilon.$$
And for any analytic function $f$, one can similarly extend $f$ to the dual numbers:
$$f(a + b\epsilon) = f(a) + b\,f'(a)\,\epsilon.$$
With this, it is possible to "automatically differentiate" a function $f$ at $x$ by calculating the "epsilon" component of $f(x + \epsilon)$.
The ForwardDiff.jl Julia package follows this exact approach, utilizing a multidimensional analogue of dual numbers to calculate gradients.
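A minimal Python sketch of the same idea (not ForwardDiff.jl's actual implementation): a dual number carries a value and an "epsilon" coefficient, and arithmetic propagates both.

```python
from dataclasses import dataclass

@dataclass
class Dual:
    a: float        # real part
    b: float = 0.0  # epsilon (derivative) part

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps, since eps^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f at x + eps and read off the epsilon component."""
    return f(Dual(x, 1.0)).b

# Example: p(x) = 3x^2 + 2x + 1, so p'(2) = 14.
p = lambda x: 3 * x * x + 2 * x + 1
assert derivative(p, 2.0) == 14.0
```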
In linear algebra, the eigendecomposition of a square matrix $A$, also known as diagonalization, is the representation of $A$ in the following factored form:
$$A = Q \Lambda Q^{-1},$$
where $\Lambda$ is a diagonal matrix whose diagonal entries are eigenvalues of $A$, and $Q$ is a matrix whose corresponding columns are eigenvectors. Here, $A$, $\Lambda$, and $Q$ share the same dimensions.
A matrix can be eigendecomposed if and only if it is diagonalizable.
Let $A = Q \Lambda Q^{-1}$ be the eigendecomposition of $A$ and let $f$ be some analytic function. Then it can be shown that
$$f(A) = Q f(\Lambda) Q^{-1}.$$
Moreover, the calculation of $f(\Lambda)$ is straightforward. It is a diagonal matrix whose $i$th diagonal element is $f(\lambda_i)$, where $\lambda_i$ is the eigenvalue located at the $i$th index of $\Lambda$.
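A numpy sketch (using an arbitrary symmetric positive-definite example matrix): the eigendecomposition reconstructs $A$, and applying $f(x) = \sqrt{x}$ to the eigenvalues yields a matrix square root.

```python
import numpy as np

# Arbitrary symmetric positive-definite example matrix.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

eigvals, Q = np.linalg.eig(A)
Lam = np.diag(eigvals)

# Reconstruction: A = Q Lam Q^{-1}
assert np.allclose(Q @ Lam @ np.linalg.inv(Q), A)

# f(A) = Q f(Lam) Q^{-1}, applied to f = sqrt:
sqrtA = Q @ np.diag(np.sqrt(eigvals)) @ np.linalg.inv(Q)
assert np.allclose(sqrtA @ sqrtA, A)   # the square root squares back to A
```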
Let $T$ be a linear operator on a vector space $V$. An eigenvalue of $T$ is a scalar $\lambda$ such that $Tv = \lambda v$ for some non-zero $v$, called an eigenvector. The set of eigenvalues of $T$ forms the eigenspectrum of $T$. The span of all eigenvectors corresponding to a given eigenvalue forms an eigenspace.
The following properties hold regardless of the dimensionality of $V$:
The following properties apply when $V$ is finite-dimensional and the underlying field is real or complex.
Here, "algebraically distinct" eigenvalues means that the eigenvalues appear as algebraically distinct roots in the charachteristic polynomial. That is, let the characteristic polynomial of be written as
Then
A euclidean space is a mathematical formalization and abstraction of physical space. The study of objects in euclidean space is known as euclidean geometry.
Formally, a euclidean space is an affine space whose underlying vector space is equipped with an inner product. In particular, a euclidean space is a metric space.
A field is a commutative ring in which the non-zero elements form a group under multiplication.
Subsets of fields that are themselves fields with the inherited addition and multiplication operators are called subfields.
Some elementary examples of fields:
A filtration on a measurable space $(\Omega, \mathcal{F})$ is a collection of sub-$\sigma$-algebras $\{\mathcal{F}_t\}_{t \in T}$ (indexed by $t \in T$, where $T$ is some ordered indexing set) such that $\mathcal{F}_s \subseteq \mathcal{F}_t$ for all $s \leq t$. Intuitively, a filtration is a model of how information is gained over time (interpreting $T$ to be the time axis).
Galilean Relativity refers to a theory of relativity consistent with Newton's laws.
It is named after Galileo's thought experiment involving a vessel traveling with a perfectly uniform and linear motion. An observer contained within the vessel, observing only phenomena also contained within the vessel, will have no means of determining the vessel's speed or direction of travel.
A galilean space(time) is a four-dimensional affine space $A^4$. Points in this space are called "events". The set of displacements forms a four-dimensional, real vector space $V$.
There exists a rank-1 linear map $t : V \to \mathbb{R}$ mapping spatio-temporal displacements to time intervals. Two events $a$ and $b$ are simultaneous if $t(b - a) = 0$.
The three-dimensional quotient space is euclidean (that is, equipped with an inner product $\langle \cdot, \cdot \rangle$). From this, the distance between simultaneous events $a$ and $b$ is defined as
$$d(a, b) = \lVert \pi(b - a) \rVert,$$
where $\pi$ is the natural projection (epimorphism) from $V$ onto the quotient space.
An isomorphism between galilean spaces is a bijection that preserves galilean structure (affinity, euclidean distance between simultaneous events, and time intervals).
All galilean spaces are isomorphic to $\mathbb{R} \times \mathbb{R}^3$, where the $\mathbb{R}$ and $\mathbb{R}^3$ components are each equipped with the standard inner product. Isomorphisms of spacetime onto $\mathbb{R} \times \mathbb{R}^3$ are called inertial reference frames or galilean coordinates.
Automorphisms of $\mathbb{R} \times \mathbb{R}^3$ are called galilean transformations, which form the galilean group. The galilean group is generated from the following galilean transformations:
A homomorphism is a mapping that preserves algebraic structure. In the context of group theory, a homomorphism $\phi$ between a group $G$ and a group $H$ is a mapping $\phi : G \to H$ with the following property for all $a, b \in G$:
$$\phi(ab) = \phi(a)\phi(b).$$
$\phi$ may be categorized as a special "type" of homomorphism according to additional properties it may have:
Let $G$ be a group and $V$ be a vector space over the field $F$. A representation with respect to these objects is a homomorphism $\rho$ from $G$ to $GL(V)$, the general linear group of $V$. That is, a representation is just a group action consisting of linear transformations. The study of group representations forms much of representation theory.
Usually, $F = \mathbb{C}$ and $G$ is finite. In this case, the representation (or, rather, its image $\rho(G)$) may be identified with a finite set of matrices under some coordinate system.
The subspace $W \subseteq V$ is said to be invariant with respect to $\rho$ if $\rho(g)w \in W$ for all $g \in G$ and $w \in W$.
The representation is said to be irreducible over $V$ if the only invariant subspaces are $V$ and the subspace consisting of just the zero element.
An important problem of representation theory is decomposing $V$ into irreducible components. That is, write $V = W_1 \oplus \cdots \oplus W_k$ with each $W_i$ invariant and irreducible and none of the $W_i$ equal to $V$. If this is possible, then the representation is said to be fully reducible.
A group is a simple algebraic structure that represents a composable set of permutations. There are two popular definitions of a group: one as a set of permutations and one as a set equipped with a binary operator. Both definitions are essentially equivalent by Cayley's Theorem.
One formulation of a group is as a set of permutations. That is, a permutation group is a set $G$ of bijections of some set $X$ with the following closure properties:
An abstract group is a set $G$ together with a mapping $\cdot : G \times G \to G$ called a binary operator. We write $\cdot(a, b)$ as $a \cdot b$ or simply as $ab$.
An abstract group must satisfy the following properties:
It is clear that a permutation group is an abstract group by having composition as the chosen binary operator.
Note: For notational convenience, the binary operator is usually assumed and a group is identified by its underlying set. So, for example, claiming that "$g$ is an element of the group $G$" means that "$g$ is an element of the underlying set of the group $G$".
An identity matrix is a square matrix with 1's on the diagonal and 0's everywhere else. The identity matrix is often written as $I$ (or $I_n$ for the $n \times n$ case). For example,
$$I_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
The identity matrix acts as the identity element of the general linear group $GL_n$. That is, for any $n \times n$ matrix $A$, $AI = IA = A$.
An inner product space is a vector space $V$ over a scalar field $F$ together with a bilinear form $\langle \cdot, \cdot \rangle : V \times V \to F$, called an inner product, that satisfies the following properties:
The vector space $\mathbb{R}^n$ together with an inner product is called a euclidean space. An inner product space with complex scalars is sometimes called a unitary space.
The function $\lVert v \rVert = \sqrt{\langle v, v \rangle}$ defines a norm on $V$, called the inner product norm.
The inner product norm further satisfies the Cauchy-Schwarz inequality and the triangle inequality.
A linear transformation $T$ is unitary if it preserves the inner product: $\langle Tu, Tv \rangle = \langle u, v \rangle$ for all $u, v \in V$.
Linear Algebra is the study of linear structure.
The most basic objects are vector spaces defined over some given field. Vector spaces are basically spaces of elements (vectors) that can be combined linearly. Examples of vector spaces include:
Vector spaces often become metric spaces via the inclusion of an inner product. The archetypal example of an inner product space is a euclidean space.
Morphisms of vector spaces -- that is, maps between vector spaces that preserve linearity -- are called linear maps. Linear maps can be represented as matrices, rectangular numerical arrays, via a choice of basis. Matrices may be combined and manipulated numerically.
Linear maps that are also isomorphisms are called linear transformations. Linear transformations that are also isometries (preserving an inner product) are called unitary (or orthogonal for real scalar fields).
An important operation on linear transformations is their eigendecomposition to some canonical form using eigenvalues. Eigenvalues are usually defined using determinants.
Linear maps take in a single vector argument. Maps that take in multiple vector arguments and are linear in each argument are called tensors. The study of tensors belongs to multilinear algebra, a sub-field of linear algebra. This includes exterior algebra, which has extensive applications in physics and geometry.
Linear algebra has notable applications in the following:
Let $A$ be a square matrix with real or complex entries. The characteristic polynomial of this matrix is defined as:
$$p(z) = \det(zI - A).$$
Here, $I$ is the identity matrix of the same dimension as $A$. Of interest is this polynomial written in factored form:
$$p(z) = (z - \lambda_1)(z - \lambda_2) \cdots (z - \lambda_n).$$
Here, the $\lambda_i$'s are the "algebraically distinct" eigenvalues of $A$ if $A$ is a complex matrix. Otherwise, if $A$ is a real matrix, then only the real roots are considered eigenvalues.
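As a small numpy sketch (arbitrary example matrix): `np.poly` returns the coefficients of a square matrix's characteristic polynomial, and its roots are the eigenvalues.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

coeffs = np.poly(A)        # coefficients of det(zI - A), highest degree first
print(coeffs)              # [ 1. -5.  6.]  ->  z^2 - 5z + 6

roots = np.roots(coeffs)   # the eigenvalues 2 and 3
assert np.allclose(np.sort(roots), np.sort(np.linalg.eigvals(A)))
```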
A linear combination is an expression of the form
$$a_1 v_1 + a_2 v_2 + \cdots + a_n v_n,$$
where $v_1, \ldots, v_n$ are vectors belonging to some vector space and $a_1, \ldots, a_n$ are scalars.
Let $S$ be a set of vectors in some vector space $V$. These vectors are mutually linearly dependent if there exists some finite subset $\{v_1, \ldots, v_n\} \subseteq S$ and a sequence of non-zero scalars $a_1, \ldots, a_n$ such that
$$a_1 v_1 + a_2 v_2 + \cdots + a_n v_n = 0.$$
If $S$ is not linearly dependent, then its constituent vectors are linearly independent.
A linear map is a mapping between two vector spaces that preserves linearity.
Formally, a map $f$ from vector space $V$ to vector space $W$ is linear if the following equation holds for all vectors $u, v \in V$ and scalars $a, b$:
$$f(au + bv) = a f(u) + b f(v).$$
Of course, $V$ and $W$ should have the same underlying scalar field for this to make sense.
A linear map may sometimes be called a linear transformation, especially if $V = W$. The term linear operator is also common, especially in physics.
Given a linear map $f : V \to W$, one can construct a number of relevant vector subspaces of $V$ or $W$:
The rank-nullity theorem states that the nullity and rank of $f$ sum up to the dimension of $V$:
$$\dim \ker f + \dim \operatorname{im} f = \dim V.$$
The rank-nullity theorem is basically a manifestation of the first isomorphism theorem for groups.
Consider a linear map of the form $f : F^n \to F^m$ for some scalar field $F$. Then there uniquely exists an $m \times n$ matrix $A$ such that $f(x) = Ax$ for all $x \in F^n$, with $x$ being treated as a "column vector" (an $n \times 1$ matrix).
More generally, let $f : V \to W$ where $\dim V = n$ and $\dim W = m$. And consider isomorphisms $\phi : F^n \to V$ and $\psi : F^m \to W$. These isomorphisms may be identified with a basis on $V$ and a basis on $W$. With respect to these bases, one can define the matrix of $f$ as the matrix corresponding to $\psi^{-1} \circ f \circ \phi$. See Change of Basis for more details.
The algebra of linear maps corresponds directly to matrix algebra. Let $f$ and $g$ be linear maps between finite-dimensional vector spaces. Then, with respect to some given bases:
Here, $c$ is a scalar.
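A numpy sketch of the correspondence (arbitrary example matrices): composing linear maps corresponds to multiplying their matrices, and adding or scaling maps corresponds to adding or scaling matrices.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0]])   # matrix of f
B = np.array([[0.0, 1.0],
              [1.0, 1.0]])   # matrix of g
x = np.array([3.0, -1.0])

# Composition: (f o g)(x) = A (B x) = (A B) x
assert np.allclose(A @ (B @ x), (A @ B) @ x)

# Sum and scaling: (f + 2g)(x) = (A + 2B) x
assert np.allclose(A @ x + 2 * (B @ x), (A + 2 * B) @ x)
```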
Markov's inequality states that for any non-negative random variable $X$ for which $E[X]$ exists,
$$P(X \geq a) \leq \frac{E[X]}{a}$$
for any $a > 0$. An immediate consequence of this theorem is Chebyshev's inequality.
Matrices may be produced from binary and unary operations on other matrices.
In the following, suppose that $A$ and $B$ are matrices with elements $a_{ij}$ and $b_{ij}$, respectively.
A matrix is a rectangular array of numbers, commonly used in applications of linear algebra.
For example, the following is a two-by-four ($2 \times 4$) matrix of real numbers:
This matrix has two rows and four columns.
A matrix may generally contain any number of rows or columns. The entries of a matrix usually belong to some specified field, usually $\mathbb{R}$ or $\mathbb{C}$.
Matrix variables are often denoted by capital letters ($A$), sometimes bolded ($\mathbf{A}$).
An entry of a matrix may be located by specifying which row and column the entry belongs to. This can be done by supplying an "index" to the desired row and column. For example, denote the entries of the aforementioned matrix as $a_{ij}$. Then $a_{12}$ is the entry in the first row (counting top-to-bottom) and second column (counting left-to-right).
Matrices may be combined and transformed according to the conventions of matrix algebra.
In linear algebra, an $m \times n$ matrix is a representation of a linear map of the form $F^n \to F^m$ for some field $F$.
A metric space is a space equipped with an abstract notion of distance, called a metric. Metric spaces form an important class of topological spaces. The archetypal example of a metric space is a euclidean space.
Formally, a metric space is a set $X$ equipped with a metric or distance function $d : X \times X \to \mathbb{R}$ satisfying the following properties for all $x, y, z \in X$:
The last of these properties is known as the triangle inequality. An important corollary of these properties is that $d(x, y) \geq 0$ for all $x, y \in X$.
In a topological space, a neighborhood is a set of points surrounding some particular point. Formally, a neighborhood around a given point is a set of points containing an open set that itself contains the given point. The given point is said to be in the interior of the neighborhood.
Consider a set $X$. For each $x \in X$, suppose $N(x)$ is a collection of subsets of $X$ obeying the following axioms:
Then $N$ is said to be a system of neighborhoods for $X$.
Given such a system of neighborhoods, a set $U$ is said to be open if $U \in N(x)$ for every $x \in U$. The collection of such sets forms a topological space. More importantly, every topological space can be formed in this manner. It can be shown that $N(x)$ is then the collection of all neighborhoods around the point $x$.
Some authors use the above fact to define a topological space using such a system of neighborhoods instead of by the properties of its open sets. The neighborhood formulation, while less verbose, is arguably more intuitive.
Every topological space can be uniquely specified from the system of neighborhoods of each of its points.
A normed vector space is a vector space $V$ over $\mathbb{R}$ or $\mathbb{C}$ together with a function $\lVert \cdot \rVert : V \to \mathbb{R}$, the norm, satisfying the following properties for all $u, v \in V$ and scalars $a$:
A normed vector space is also a metric space under the metric $d(u, v) = \lVert u - v \rVert$.
A seminorm has the properties of a norm, except that $\lVert v \rVert$ may be zero for nonzero $v$.
For any two vectors $u$ and $v$ in a real inner product space, the polarization identity asserts that:
$$\langle u, v \rangle = \frac{1}{4}\left( \lVert u + v \rVert^2 - \lVert u - v \rVert^2 \right).$$
(A similar identity, with additional imaginary terms, holds in the complex case.) Essentially, this means that inner products can be recovered from norms.
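A quick numerical check of the real-case identity (arbitrary example vectors):

```python
import numpy as np

u = np.array([1.0, -2.0, 0.5])
v = np.array([3.0,  1.0, 2.0])

lhs = u @ v   # the standard inner product
rhs = 0.25 * (np.linalg.norm(u + v)**2 - np.linalg.norm(u - v)**2)
assert np.isclose(lhs, rhs)
```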
On Linux, procfs is a virtual filesystem, mounted at /proc, containing information on processes, threads, and the overall system. Directories of the form /proc/[0-9]+ contain process-specific information. Paths of the form /proc/[a-z]+ contain system-specific information.
Sourced from the RHEL 6 Deployment Guide.
Process-specific data:
/proc/[pid]/ is a directory containing process information for the process with identifier [pid]. The path /proc/self/ links to the directory corresponding to the calling process.
/proc/[pid]/cwd links to the process's working directory.
/proc/[pid]/fd/ is a directory containing links to file descriptors opened by the process.
/proc/[pid]/environ contains the process's environment variables.
/proc/[pid]/exe links to the process's executable.
/proc/[pid]/maps contains the process's memory maps.
/proc/[pid]/mem contains a mapping to the process's memory. This file is not normally available without attaching via ptrace.
/proc/[pid]/task/[tid]/ is a directory for the process's thread with identifier [tid].
System-wide data:
/proc/bus/ contains information about available buses. In particular, /proc/bus/pci contains information about available PCI devices.
/proc/cpuinfo contains information about the system's CPU (model name, cache size, feature flags, ...).
/proc/filesystems contains a list of filesystems.
/proc/iomem maps memory regions to physical devices.
/proc/kcore contains a view into the system's memory.
/proc/loadavg shows relative load across CPU cores.
/proc/locks lists the file locks held by the kernel.
/proc/meminfo displays statistics on memory usage.
/proc/modules contains a list of kernel modules.
/proc/mounts contains a list of filesystem mounts.
/proc/stat contains a large amount of statistics collected since the system was last restarted.
/proc/uptime shows how long the system has been running since the last restart.
/proc/version shows the kernel version, including compiler info.
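A small Python sketch of reading a few of these files (Linux only; the exact fields present vary by kernel version):

```python
from pathlib import Path

# System-wide information.
print(Path("/proc/version").read_text().strip())      # kernel version string
print(Path("/proc/uptime").read_text().split()[0])    # seconds since boot

# Memory statistics: /proc/meminfo is a simple "key: value kB" listing.
meminfo = {}
for line in Path("/proc/meminfo").read_text().splitlines():
    key, _, value = line.partition(":")
    meminfo[key] = value.strip()
print(meminfo.get("MemTotal"), meminfo.get("MemAvailable"))

# Process-specific information for the calling process via /proc/self/.
print(Path("/proc/self/cwd").resolve())               # working directory
print(Path("/proc/self/exe").resolve())               # executable path
```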
Let $X$ and $Y$ be sets, the latter of which is equipped with a topology $\tau$. And let $f : X \to Y$ be a mapping.
The pullback topology is the topology on $X$ consisting of pre-images (under $f$) of elements of $\tau$. The pullback topology is the smallest topology on $X$ that makes $f$ continuous.
A quotient group $G/N$ for a given group $G$ and a "normal" subgroup $N$ of $G$ is a group that has a "coarser" or "more relaxed" algebraic structure relative to $G$. The notion of a quotient group is essential in a number of isomorphism theorems.
Let $G$ be a group with subgroup $H$.
A left coset of $H$, denoted as $gH$ for some $g \in G$, is the set consisting of elements of the form $gh$ for $h \in H$. That is, $gH$ is the image of $H$ under the left action induced by $g$.
Similarly, a right coset $Hg$ of $H$ is the image of $H$ under the right action induced by $g$.
The subgroup $H$ is said to be normal if $gH = Hg$ for all $g \in G$. In this case, there is no distinction between "right" and "left" cosets.
The cosets of a normal subgroup $N$ themselves form a group $G/N$, with the group operation given by $(aN)(bN) = (ab)N$.
Let $G$ be finite and $N$ be a normal subgroup. Lagrange's theorem states that
$$|G/N| = \frac{|G|}{|N|}.$$
A quotient (vector) space is an extension of the concept of a quotient group for vector spaces.
Let $W$ be a vector subspace of $V$. Since $W$ is a normal subgroup of $V$ with respect to vector addition, one can construct the quotient group $V/W$ consisting of cosets of $W$ (affine subspaces parallel to $W$).
Moreover, $V/W$ also inherits a form of scalar multiplication, making it a vector space. Let $u + W$ be a vector in the quotient space. That is, $u + W$ is a coset of $W$. Scalar multiplication of every point in the coset yields another coset of $W$. Hence, scalar multiplication is well-defined on the quotient space.
The definition of a quotient vector space can be readily generalized to a module. That is, one can construct quotient modules in a similar fashion.
A sigma algebra is a space of mathematical statements that may be combined using a countable number of boolean operations.
More precisely, a sigma algebra $\mathcal{F}$ over a set $\Omega$ is a collection of subsets of $\Omega$ with the following closure properties:
The tuple $(\Omega, \mathcal{F})$ is called a measurable space. $\mathcal{G}$ is said to be a sub-sigma algebra of $\mathcal{F}$ if $\mathcal{G}$ is itself a sigma algebra and $\mathcal{G} \subseteq \mathcal{F}$. $\mathcal{G}$ is then said to be "coarser" than $\mathcal{F}$.
Elements of a sigma algebra are called measurable sets.
Sigma algebras are often generated from other sigma algebras:
Let $F$ be a field of scalars, and $F^n$ be the corresponding $n$-dimensional cartesian vector space. The standard basis is the basis $e_1, \ldots, e_n$ of $F^n$ defined such that the $j$th component of $e_i$ is 1 if $i = j$ and 0 otherwise.
Let $G$ be a group. If $H \subseteq G$, then $H$ is a subgroup of $G$ if $H$ is itself a group under the operation inherited from $G$. Equivalently, $H$ is a subgroup of $G$ if $ab^{-1} \in H$ whenever $a, b \in H$.
The tangent bundle $TM$ of a differentiable manifold $M$ is the union of the tangent spaces at all points of $M$:
$$TM = \bigcup_{p \in M} T_p M.$$
The tangent bundle itself is a manifold whose dimension is twice that of $M$. If $M$ is of class $C^k$, then $TM$ is of class $C^{k-1}$.
The manifold structure of $TM$ is derived from the manifold structure of $M$. Let $(U, \phi)$ be a chart on $M$. Then one can construct an analogous chart on $TM$ in the following way:
Let $(p, v) \in TM$ with $p \in U$. Then define the chart by setting its first $n$ components to $\phi(p)$. The second $n$ components are set to $(\phi \circ \gamma)'(0)$, where $\gamma$ is a differentiable curve in $M$ with $\gamma(0) = p$ and tangent vector $v$ at $p$.
The tangent space $T_p M$ of a differentiable manifold $M$ attached at a point $p$ is the set of tangent vectors at $p$.
A tangent vector at $p$ is an equivalence class of curves attached at $p$. A curve attached at $p$ is a function $\gamma : (-\epsilon, \epsilon) \to M$ with $\gamma(0) = p$ such that $\phi \circ \gamma$ is differentiable for every chart $(U, \phi)$ whose domain contains $p$.
Two curves $\gamma_1$ and $\gamma_2$ are equivalent if $\gamma_1(0) = \gamma_2(0) = p$ and $(\phi \circ \gamma_1)'(0) = (\phi \circ \gamma_2)'(0)$ for some (hence any) chart $\phi$. Thus, any vector in $T_p M$ can be uniquely specified by its representation $(\phi \circ \gamma)'(0)$.
We write $v = [\gamma]$ if the curve $\gamma$ belongs to the equivalence class $v$.
The tangent space has a natural vector space structure. Let $u$ and $v$ be tangent vectors attached at $p$. Then define $u + v$ to be the vector whose coordinate representation under some chart containing $p$ in its domain is the sum of the coordinate representations of $u$ and $v$. A similar construction can be made for scalar multiplication $au$. This construction can be shown to satisfy the properties of a vector space and can be shown to be invariant under the choice of chart.
The dual space of the tangent space is known as the cotangent space $T_p^* M$.
Tensors are the fundamental tools of multilinear algebra, describing the relationship between multiple vectors and covectors.
Let $V$ be a finite-dimensional vector space over some scalar field $F$. And let $V^*$ be its corresponding dual space. A tensor is a multilinear map of the form
$$T : \underbrace{V^* \times \cdots \times V^*}_{p} \times \underbrace{V \times \cdots \times V}_{q} \to F.$$
Such a tensor is said to be of type $(p, q)$ (or valence $(p, q)$ in some literature). If $p = 0$, the tensor is said to be covariant. If $q = 0$, the tensor is said to be contravariant. Otherwise, the tensor is said to be mixed.
Let $e_1, \ldots, e_n$ be a basis of $V$. And let $e^1, \ldots, e^n$ be the corresponding dual basis in $V^*$ (that is, $e^i(e_j) = \delta^i_j$). Then a natural basis for the space of $(p, q)$-typed tensors on $V$ is given by the tensor products
$$e_{i_1} \otimes \cdots \otimes e_{i_p} \otimes e^{j_1} \otimes \cdots \otimes e^{j_q}$$
for all possible sequences $(i_1, \ldots, i_p)$ and $(j_1, \ldots, j_q)$ in $\{1, \ldots, n\}$. The dimension of the space of $(p, q)$-typed tensors is thus $n^{p+q}$.
A topological space is a set equipped with topological structure. Essentially, a topological space is defined so that the "limit" of a sequence of "points" in the space may be defined.
A topological space may be defined as a set $X$ together with a topology $\tau$, which consists of subsets of $X$. Elements of this topology are called open sets. A topology satisfies the following axioms:
A set is said to be closed if its complement is open. The set of closed sets uniquely specifies the space's topology.
Topologies are rarely defined by specifying the open sets directly. Rather, they are usually generated from simpler constructs. For example:
Let $A$ be an $n \times n$ matrix. The trace of this matrix is the sum of its diagonal elements:
$$\operatorname{tr} A = \sum_{i=1}^{n} a_{ii}.$$
The trace can also be calculated as the sum of $A$'s eigenvalues (adjusting for multiplicities). That is, let the characteristic polynomial of $A$ be given by $p(z) = (z - \lambda_1)(z - \lambda_2) \cdots (z - \lambda_n)$. Then
$$\operatorname{tr} A = \lambda_1 + \lambda_2 + \cdots + \lambda_n.$$
Since the eigenvalues of a matrix are invariant under coordinate transforms, the trace of a linear transformation of a finite-dimensional space can be defined as the trace of one of its matrix representations.
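A numpy check of both characterizations (arbitrary example matrix): the sum of the diagonal equals the sum of the eigenvalues.

```python
import numpy as np

A = np.array([[1.0, 4.0, 0.0],
              [2.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

assert np.isclose(np.trace(A), A.diagonal().sum())
assert np.isclose(np.trace(A), np.linalg.eigvals(A).sum().real)
```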
Traces satisfy the following properties:
A unitary transformation is an isomorphism of an inner product space. That is, it is a linear transformation $U$ preserving inner products:
$$\langle Uv, Uw \rangle = \langle v, w \rangle.$$
For euclidean vector spaces (real scalars), unitary transformations are called orthogonal transformations.
The eigenspaces of a unitary transformation corresponding to distinct eigenvalues are orthogonal.
The matrix representation of a unitary transformation is called a unitary matrix. Such matrices have the following properties:
Here, $U^\dagger$ denotes the conjugate transpose of $U$ (obtained from $U$ by taking the complex conjugate of each element and then transposing the matrix).
For euclidean spaces, a unitary matrix is specifically called an orthogonal matrix.
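A numpy sketch (using an arbitrary rotation matrix as the example): an orthogonal/unitary matrix satisfies $U^\dagger U = I$ and preserves inner products and norms.

```python
import numpy as np

theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a rotation: orthogonal (real unitary)

assert np.allclose(U.conj().T @ U, np.eye(2))      # U^dagger U = I

v = np.array([1.0, 2.0])
w = np.array([-0.5, 3.0])
assert np.isclose((U @ v) @ (U @ w), v @ w)        # inner products preserved
assert np.isclose(np.linalg.norm(U @ v), np.linalg.norm(v))
```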
A basis $B$ of a vector space $V$ over a scalar field $F$ is a set of vectors that are linearly independent and span $V$.
More explicitly, let $B$ be a basis. Then every $v \in V$ can be uniquely written in the form $v = a_1 b_1 + \cdots + a_k b_k$ for some finite collection of vectors $b_i \in B$ and non-zero scalars $a_i \in F$.
If $B$ is finite, then $V$ is said to have finite dimension. And the dimension of $V$ is the cardinality of $B$. Otherwise, $V$ is said to be infinite-dimensional.
All bases of a vector space share the same cardinality.
Let $V$ be $n$-dimensional ($n$ finite). Then a basis $b_1, \ldots, b_n$ may be uniquely identified with a linear isomorphism $\phi : F^n \to V$ via the following construction:
$$\phi(x) = x_1 b_1 + \cdots + x_n b_n.$$
Here, $x_i$ is the $i$th component of $x$. The inverse $\phi^{-1}$ is a coordinate system on $V$.
Converting from one basis to another may be done using transition matrices.
Vector spaces are used to model a collection of objects -- vectors -- that can be combined in a linear way to form more vectors. Vector spaces may also be called linear spaces.
Formally, a vector space over a field $F$ (with $F$ usually being either $\mathbb{R}$ or $\mathbb{C}$) of "scalars" is a set $V$ of "vectors" together with two operations: