A group is abelian if its binary operator is commutative. That is, $ab = ba$ for all $a, b \in G$.
An affine space can be thought of as a vector space that has "lost its origin".
Formally, an affine space is a set of points $A$ together with a vector space $V$ of "displacements". Points in $A$ can be subtracted to form a vector. That is, $b - a \in V$ for $a, b \in A$. Subtraction must satisfy the following properties:
These conditions are called the Weyl axioms. An alternative formulation is to assert that $V$ acts on $A$ via a free and transitive action (treating $V$ as a group under vector addition).
The (algebraic) dual space $V^*$ of the vector space $V$ over the scalar field $F$ consists of all linear mappings of the form $f : V \to F$. $V^*$ is itself a vector space over $F$. Elements of $V^*$ may be referred to as linear functionals, covectors, or dual vectors. The notion of a dual space is integral to the theory of tensors.
$V$ and $V^*$ are isomorphic if they are finite-dimensional. That is, they share the same dimension. Moreover, for any non-degenerate bilinear form $B$ on $V$, the bijection $v \mapsto B(v, \cdot)$ is an isomorphism between $V$ and $V^*$. Thus, a finite-dimensional inner product space has a "natural" isomorphism to its dual.
Regardless of the dimensionality of $V$, there exists a natural homomorphism from $V$ to the dual of its dual, $V^{**}$. This natural homomorphism maps each $v \in V$ to the "evaluation map" $\hat{v}$ defined by $\hat{v}(f) = f(v)$. For finite-dimensional $V$, this homomorphism is also an isomorphism.
The existence of a natural isomorphism means that $V$ and $V^{**}$ may roughly be treated as equivalent.
Let $e_1, \dots, e_n$ form a basis of $V$. Define the corresponding dual basis as the sequence of covectors $e^1, \dots, e^n \in V^*$ satisfying the following:

$$e^i(e_j) = \delta^i_j$$

Here, $\delta^i_j = 1$ if $i = j$, otherwise $\delta^i_j = 0$ (this quantity is sometimes called the Kronecker delta). The dual basis is a basis of $V^*$.
A bilinear form over a vector space $V$ (over some scalar field $F$) is a mapping $B : V \times V \to F$ that is linear in each argument.
Bilinear forms may be characterized by additional properties they possess. Let $B$ be a bilinear form. Then $B$ is:
Bilinear forms may be represented as matrices for finite-dimensional vector spaces. Let $\dim V = n$. Then every bilinear form can be written in the following fashion:

$$B(x, y) = x^\top A y$$

where $A$ is the $n \times n$ matrix representation of $B$, and $x$ and $y$ are column vector representations of the two arguments.
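As a worked illustration, assuming a two-dimensional real space with matrix entries $a_{ij} = B(e_i, e_j)$ for a basis $e_1, e_2$, the matrix form expands to:

```latex
B(x, y) = x^\top A y
        = \begin{pmatrix} x_1 & x_2 \end{pmatrix}
          \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
          \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}
        = \sum_{i,j} a_{ij} \, x_i y_j
```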
Cayley's theorem asserts that a permutation group and an abstract group are essentially equivalent concepts. It is clear that a permutation group meets the definition of an abstract group. Cayley's theorem states that an abstract group is isomorphic to some permutation group.
For a given $g \in G$, the bijection defined by $x \mapsto gx$ is the left action by $g$. Similarly, the map $x \mapsto xg$ is the right action by $g$. The mapping from $g$ to the corresponding left action is an injective homomorphism from $G$ into a permutation group, which yields the isomorphism of Cayley's theorem.
| Item | Variant | kcal / 100g |
| --- | --- | --- |
Let $u$, $v$ be vectors in an inner product space. The Cauchy-Schwarz inequality asserts that

$$|\langle u, v \rangle| \leq \|u\| \, \|v\|$$
A change of basis is a process of converting the components of a vector (or matrix or tensor) with respect to one basis to components in another basis.
$$[v]_{B'} = P \, [v]_B$$

for all $v \in V$. Here, $[v]_B$ and $[v]_{B'}$ are the column vectors written with respect to the bases $B$ and $B'$, respectively.
Transition matrices have a number of properties:
Each basis on $V$ is uniquely associated with a linear isomorphism $F^n \to V$. Let $\phi$ and $\phi'$ be the isomorphisms corresponding to the bases $B$ and $B'$ respectively. Then the transition matrix is the matrix representation of the map $F^n \to F^n$ given by $\phi'^{-1} \circ \phi$.
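The conversion can be sketched numerically. A minimal pure-Python example (the basis vectors and helper names are illustrative, not from the original note):

```python
# Two bases of R^2, each given as a list of column vectors.
B  = [[1, 0], [0, 1]]          # standard basis
Bp = [[1, 1], [1, -1]]         # a second basis (illustrative choice)

def mat_from_cols(cols):
    """Matrix whose columns are the given vectors."""
    return [[cols[j][i] for j in range(len(cols))] for i in range(len(cols[0]))]

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def inv2(A):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

M = mat_from_cols(Bp)          # maps Bp-coordinates to standard coordinates
P = inv2(M)                    # transition matrix: [v]_Bp = P [v]_std

v = [3, 1]                     # coordinates in the standard basis
v_p = matvec(P, v)
print(v_p)                     # -> [2.0, 1.0], since 2*(1,1) + 1*(1,-1) = (3,1)
```

The columns of $M$ are simply the second basis written in the first basis, so the transition matrix in the other direction is its inverse.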
A combination of Chinese congee and Greek avgolemono. Good use of leftover chicken.
Combine one part jasmine rice to about ten parts chicken stock, by weight. Half a cup of rice (or 115g) and four cups of stock should be good for four servings. Season with salt and add coarsely chopped ginger to taste. Cook in a pressure cooker until thick and hot. Remove the ginger. Optionally thicken further with an immersion blender.
Add cooked chicken meat (thighs work best), shredded or chopped into small pieces. Add in one large egg yolk per two cups of stock. Temper the yolks with the porridge to prevent curdling. Add lemon juice to taste, but be generous. Add in some rice flake noodles for texture. Wait for rice noodles to soften before serving. Garnish with whatever you like, such as: green onion, fried scallions, fermented soy beans, chili paste, hot sauce, cilantro, mint, dill, youtiao, a poached egg.
Vegetarian Alternative: Replace chicken and chicken stock with cooked mushrooms and mushroom stock.
Before defining the determinant, it is helpful to list some of its properties:
Let $A$ specifically be a square matrix; then:
Let $\lambda_1, \dots, \lambda_n$ be the algebraically distinct eigenvalues of $A$. The determinant of $A$ is defined as their product:

$$\det A = \lambda_1 \lambda_2 \cdots \lambda_n$$
Here, "algebraically distinct" refers to the fact that the eigenvalues may not be numerically distinct, but act as distinct factors of the characteristic polynomial of :
This definition may seem circular, since the characteristic polynomial is itself usually defined by the determinant ($p(\lambda) = \det(A - \lambda I)$). However, it is possible to define this polynomial in other ways. This is the approach taken, for example, in Axler's Linear Algebra Done Right.
It is possible to define the determinant in the context of Grassmann algebra.
Let $V$ be $n$-dimensional. Then the exterior power $\Lambda^n V$ is one-dimensional. The corresponding exterior power of a linear map $f : V \to V$, written $\Lambda^n f$, can be written in the form $c \cdot \mathrm{id}$. And $c$ can thus be defined as the determinant of $f$.
Let $A$ be the matrix representation of $f$ with respect to some basis of $V$. Then it is possible to define the determinant of $f$ as a function of the corresponding matrix elements $a_{ij}$.
Suppose $V$ is $n$-dimensional. A bijection of the form $\sigma : \{1, \dots, n\} \to \{1, \dots, n\}$ is called an ($n$-fold) permutation. The sign of this permutation, denoted as $\operatorname{sgn} \sigma$, is equal to $1$ if $\sigma$ can be written as an even number of "swaps" (permutations that swap two elements but leave the others unchanged) and $-1$ if the number of swaps is odd.
Then the determinant of $f$ (and of $A$) is:

$$\det A = \sum_{\sigma} \operatorname{sgn} \sigma \prod_{i=1}^{n} a_{i \sigma(i)}$$

Here, the above summation is over all possible $n$-fold permutations $\sigma$.
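The permutation-sum formula can be transcribed directly into code. A brute-force Python sketch (it enumerates all $n!$ permutations, so it is only illustrative):

```python
from itertools import permutations
from math import prod

def sign(perm):
    """Sign of a permutation (a tuple of 0-based indices), via inversion count."""
    inversions = sum(1 for i in range(len(perm))
                     for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])
    return 1 if inversions % 2 == 0 else -1

def det(A):
    """Determinant as a sum over all n-fold permutations."""
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

print(det([[1, 2], [3, 4]]))  # -> -2
```

Counting inversions is one standard way to compute the sign; a permutation is even exactly when its inversion count is even.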
The dual numbers can be thought of as an extension of the real numbers with an infinitesimal offset. A dual number may be written as $a + b\varepsilon$, where $a$ and $b$ are real numbers. And $\varepsilon$ may be thought to be "infinitesimal". That is, $\varepsilon$ is distinct from zero despite the fact that $\varepsilon^2 = 0$.
The expression is linear in $a$ and $b$ and obeys the following rule of multiplication:

$$(a + b\varepsilon)(c + d\varepsilon) = ac + (ad + bc)\varepsilon$$

For any polynomial $p$, it may be readily shown that:

$$p(a + b\varepsilon) = p(a) + b \, p'(a) \, \varepsilon$$

And for any analytic function $f$, one can similarly extend $f$ to the dual numbers:

$$f(a + b\varepsilon) = f(a) + b \, f'(a) \, \varepsilon$$

With this, it is possible to "automatically differentiate" a function $f$ at $x$ by calculating the "epsilon" component of $f(x + \varepsilon)$.
The ForwardDiff.jl Julia package follows this exact approach, utilizing a multidimensional analogue of dual numbers to calculate gradients.
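The trick is easy to reproduce. Below is a minimal Python sketch of dual-number forward-mode differentiation (the same idea, not ForwardDiff.jl itself; class and function names are illustrative):

```python
class Dual:
    """A dual number a + b*eps, with eps^2 = 0."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f at x + eps and read off the epsilon component."""
    return f(Dual(x, 1.0)).b

# f(x) = 3x^2 + 2x, so f'(4) = 6*4 + 2 = 26
print(derivative(lambda x: 3 * x * x + 2 * x, 4.0))  # -> 26.0
```

Any function built from overloaded arithmetic gets its derivative "for free"; this is exactly the polynomial identity above, carried out by the multiplication rule.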
$$A = Q \Lambda Q^{-1}$$

where $\Lambda$ is a diagonal matrix whose diagonal entries are eigenvalues of $A$, and $Q$ is a matrix whose corresponding columns are eigenvectors. Here, $A$, $\Lambda$, and $Q$ share the same dimensions.
A matrix can be eigendecomposed if and only if it is diagonalizable.
Let $A = Q \Lambda Q^{-1}$ be the eigendecomposition of $A$ and let $f$ be some analytic function. Then it can be shown that

$$f(A) = Q \, f(\Lambda) \, Q^{-1}$$

Moreover, the calculation of $f(\Lambda)$ is straightforward. It is a diagonal matrix whose $i$th diagonal element is $f(\lambda_i)$, where $\lambda_i$ is the eigenvalue located at the $i$th index of $\Lambda$.
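As a concrete check, take a particular $2 \times 2$ matrix whose eigendecomposition is worked out by hand below (an illustrative example, not from the original note), and apply $f(x) = x^2$ through the decomposition:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# A has eigenvalues 2 and 3 with eigenvectors (1, -1) and (0, 1),
# so A = Q diag(2, 3) Q^{-1} with:
A    = [[2, 0], [1, 3]]
Q    = [[1, 0], [-1, 1]]       # columns are the eigenvectors
Qinv = [[1, 0], [1, 1]]

f = lambda x: x * x            # the analytic function to apply

# f(Lambda): apply f to each diagonal entry
fL = [[f(2), 0], [0, f(3)]]

fA = matmul(matmul(Q, fL), Qinv)
print(fA)                      # -> [[4, 0], [5, 9]], which equals A @ A
```

The same recipe gives matrix exponentials, square roots, and so on, by swapping in a different `f`.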
Let $f$ be a linear operator on a vector space $V$. An eigenvalue of $f$ is a scalar $\lambda$ such that $f(v) = \lambda v$ for some non-zero $v$, called an eigenvector. The set of eigenvalues of $f$ forms the eigenspectrum of $f$. The span of all eigenvectors corresponding to a given eigenvalue forms an eigenspace.
The following properties hold regardless of the dimensionality of :
The following properties apply when $V$ is finite-dimensional and the underlying field is real or complex.
Here, "algebraically distinct" eigenvalues means that the eigenvalues appear as algebraically distinct roots in the charachteristic polynomial. That is, let the characteristic polynomial of be written as
A euclidean space is a mathematical formalization and abstraction of physical space. The study of objects in euclidean space is known as euclidean geometry.
Subsets of fields that are themselves fields with the inherited addition and multiplication operators are called subfields.
Some elementary examples of fields:
Galilean Relativity refers to a theory of relativity consistent with Newton's laws.
It is named after Galileo's thought experiment involving a vessel traveling with a perfectly uniform and linear motion. An observer contained within the vessel, observing only phenomena also contained within the vessel, will have no means of determining the vessel's speed or direction of travel.
A galilean space(time) is a four-dimensional affine space . Points in this space are called "events". The set of displacements forms a four-dimensional, real vector space .
There exists a rank-1 linear map $t : \mathbb{R}^4 \to \mathbb{R}$ mapping spatio-temporal displacements to time intervals. Two events $a$ and $b$ are simultaneous if $t(b - a) = 0$.
The three-dimensional quotient space is euclidean (that is, equipped with an inner product). From this, the distance between simultaneous events $a$ and $b$ is defined as

$$d(a, b) = \| \pi(b - a) \|$$

where $\pi$ is the natural projection (epimorphism) from $\mathbb{R}^4$ to the quotient space.
An isomorphism between galilean spaces is a bijection that preserves galilean structure (affinity, euclidean distance between simultaneous events, and time intervals).
All galilean spaces are isomorphic to $\mathbb{R} \times \mathbb{R}^3$, where the $\mathbb{R}$ and $\mathbb{R}^3$ components are each equipped with the standard inner product. Isomorphisms of spacetime onto $\mathbb{R} \times \mathbb{R}^3$ are called inertial reference frames or galilean coordinates.
Automorphisms of $\mathbb{R} \times \mathbb{R}^3$ are called galilean transformations, which form the galilean group. The galilean group is generated from the following galilean transformations:
A homomorphism is a mapping that preserves algebraic structure. In the context of group theory, a homomorphism between groups $G$ and $H$ is a mapping $f : G \to H$ with the following property for all $a, b \in G$:

$$f(ab) = f(a) f(b)$$
$f$ may be categorized as a special "type" of homomorphism according to additional properties it may hold:
Let $G$ be a group and $V$ be a vector space over the field $F$. A representation with respect to these objects is a homomorphism from $G$ to $GL(V)$, the general linear group of $V$. That is, a representation is just a group action consisting of linear transformations. The study of group representations forms much of representation theory.
Usually, the field is $\mathbb{C}$ and $G$ is finite. In this case, the representation (or, rather, its image) may be identified as a finite set of matrices under some coordinate system.
The subspace $W \subseteq V$ is said to be invariant with respect to the representation $\rho$ if $\rho(g) w \in W$ for all $g \in G$ and $w \in W$.
The representation is said to be irreducible over $V$ if the only invariant subspaces are $V$ and the subspace consisting of just the zero element.
An important problem of representation theory is decomposing a representation into irreducible components. That is, writing $V = W_1 \oplus \dots \oplus W_k$ with each $W_i$ invariant and irreducible and none of them equal to $V$. If this is possible, then the representation is said to be fully reducible.
A group is a simple algebraic structure that represents a composable set of permutations. There are two popular definitions of a group: one as a set of permutations and one as a set equipped with a binary operator. Both definitions are essentially equivalent by Cayley's Theorem.
One formulation of a group is as a set of permutations. That is, a permutation group is a set of bijections of some set $X$ with the following closure properties:
An abstract group is a set $G$ together with a mapping $\cdot : G \times G \to G$ called a binary operator. We write $\cdot(a, b)$ as $a \cdot b$ or simply as $ab$.
An abstract group must satisfy the following properties:
It is clear that a permutation group is an abstract group by having composition as the chosen binary operator.
Note: For notational convenience, the binary operator is usually assumed and a group is identified by the underlying set. So, for example, claiming that " is an element of the group " means that " is an element of the underlying set of the group ".
An inner product space is a vector space $V$ over a scalar field $F$ together with a bilinear form $\langle \cdot, \cdot \rangle : V \times V \to F$, called an inner product, that satisfies the following properties:
A real vector space together with an inner product is called a euclidean space. An inner product space with complex scalars is sometimes called a unitary space.
A linear transformation is unitary if the inner product is preserved.
Linear Algebra is the study of linear structure.
Morphisms of vector spaces -- that is, maps between vector spaces that preserve linearity -- are called linear maps. Linear maps can be represented as matrices, rectangular numerical arrays, via a basis. Matrices may be combined and manipulated numerically according to matrix algebra.
Linear maps that are also isomorphisms are called linear transformations. Linear transformations that are also isometries (preserving an inner product) are called unitary (or orthogonal for real scalar fields).
Linear maps take in a single vector argument. Maps that take in multiple vector arguments and are linear in each argument are called tensors. The study of tensors belongs to multilinear algebra, a sub-field of linear algebra. This includes exterior algebra, which has extensive applications in physics and geometry.
Linear algebra has notable applications in the following:
Let $A$ be a square matrix with real or complex entries. The characteristic polynomial of this matrix is defined as:

$$p(\lambda) = \det(A - \lambda I)$$

Here, $I$ is the identity matrix of the same dimension as $A$. Of interest is this polynomial written in factored form:

$$p(\lambda) = (\lambda_1 - \lambda)(\lambda_2 - \lambda) \cdots (\lambda_n - \lambda)$$

Here, the $\lambda_i$'s are the "algebraically distinct" eigenvalues of $A$ if $A$ is a complex matrix. Otherwise, if $A$ is a real matrix, then only the real roots are considered eigenvalues.
A linear combination is an expression of the form

$$a_1 v_1 + a_2 v_2 + \dots + a_n v_n$$

where $v_1, \dots, v_n$ are vectors belonging to some vector space and $a_1, \dots, a_n$ are scalars.
Let $S$ be a set of vectors in some vector space $V$. These vectors are mutually linearly dependent if there exists some finite subset $\{v_1, \dots, v_n\} \subseteq S$ and a sequence of non-zero scalars $a_1, \dots, a_n$ such that

$$a_1 v_1 + a_2 v_2 + \dots + a_n v_n = 0$$

If $S$ is not linearly dependent, then its constituent vectors are linearly independent.
A linear map is a mapping between two vector spaces that preserves linearity.
Formally, a map $f$ from vector space $V$ to vector space $W$ is linear if the following equation holds for all vectors $u, v \in V$ and scalars $a$:

$$f(a u + v) = a f(u) + f(v)$$

Of course, $V$ and $W$ should have the same underlying scalar field for this to make sense.
A linear map may sometimes be called a linear transformation, especially if $V = W$. The term linear operator is also common, especially in physics.
Given a linear map $f : V \to W$, one can construct a number of relevant vector subspaces of $V$ or $W$:
The rank-nullity theorem states that the nullity and rank of $f$ sum to the dimension of its domain $V$:

$$\dim \ker f + \dim \operatorname{im} f = \dim V$$
The rank-nullity theorem is essentially a manifestation of the first isomorphism theorem for groups.
Consider a linear map of the form $f : F^n \to F^m$ for some scalar field $F$. Then there uniquely exists an $m \times n$ matrix $A$ such that $f(x) = A x$ for all $x \in F^n$, with $x$ being treated as a "column vector" ($n \times 1$ matrix).
More generally, let $f : V \to W$ where $\dim V = n$ and $\dim W = m$. And consider isomorphisms $\phi : F^n \to V$ and $\psi : F^m \to W$. These isomorphisms may be identified with bases on $V$ and $W$. With respect to these bases, one can define the matrix of $f$ as the matrix corresponding to $\psi^{-1} \circ f \circ \phi : F^n \to F^m$. See Change of Basis for more details.
The algebra of linear maps corresponds directly to matrix algebra. Let $f$ and $g$ be linear maps between finite-dimensional vector spaces. Then, with respect to some given basis:
Here, $a$ is a scalar.
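For instance, composition of maps corresponds to matrix multiplication. A small pure-Python check (the two matrices are chosen arbitrarily for illustration):

```python
def matmul(A, B):
    """Matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(A, x):
    """Matrix-vector product."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

F = [[0, -1], [1, 0]]   # matrix of f: rotation by 90 degrees
G = [[2, 0], [0, 2]]    # matrix of g: scaling by 2

x = [1, 0]
# Applying f then g agrees with applying the single matrix GF.
assert matvec(G, matvec(F, x)) == matvec(matmul(G, F), x)
```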
Matrices may be produced from binary and unary operations on other matrices.
In the following, suppose that $A$ and $B$ are matrices with elements $a_{ij}$ and $b_{ij}$, respectively.
A matrix is a rectangular array of numbers, commonly used in applications of linear algebra.
For example, the following is a two-by-four ($2 \times 4$) matrix of real numbers:

$$\begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \end{pmatrix}$$
This matrix has two rows and four columns.
A matrix may generally contain any number of rows or columns. The entries of a matrix usually belong to some specified field, usually or .
Matrix variables are often denoted by capital letters ($A$), sometimes bolded ($\mathbf{A}$).
An entry of a matrix may be located by specifying which row and column the entry belongs to. This can be done by supplying an "index" to the desired row and column. For example, denote the entries of the aforementioned matrix as $a_{ij}$. Then $a_{12}$ is the entry in the first row (counting top-to-bottom) and second column (counting left-to-right).
Matrices may be combined and transformed according to the conventions of matrix algebra.
In linear algebra, an $m \times n$ matrix is a representation of a linear map of the form $F^n \to F^m$ for some field $F$.
A metric space is a space equipped with an abstract notion of distance, called a metric. Metric spaces form an important class of topological spaces. The archetypal example of a metric space is a euclidean space.
Formally, a metric space is a set $X$ equipped with a metric or distance function $d : X \times X \to \mathbb{R}$ satisfying the following properties for all $x, y, z \in X$:
The last of these properties is known as the triangle inequality. An important corollary of these properties is that $d(x, y) \geq 0$ for all $x, y \in X$.
Minimalist, non-traditional miso soup.
500 ml of broth/stock. I use 10g of Better than Bouillon concentrated mushroom stock. Traditionally, fish stock (Dashi) would be used.
Boil the broth with other desired toppings until the toppings are soft. For the additional ingredients, just about anything would work: green onions, tofu, seaweed, fried bean curd, egg, spinach, &c. I like mine plain and consumed like tea.
Turn the heat off, temper in about 20g of miso paste with some of the warm broth, and add back to the soup. Serve hot.
In a topological space, a neighborhood is a set of points surrounding some particular point. Formally, a neighborhood around a given point is a set of points containing an open set that itself contains the given point. The given point is said to be in the interior of the neighborhood.
Consider a set $X$. For each $x \in X$, suppose $N(x)$ is a collection of subsets of $X$ obeying the following axioms:
Then $N$ is said to be a system of neighborhoods for $X$.
Given such a system of neighborhoods, a set $U \subseteq X$ is said to be open if $U \in N(x)$ for each $x \in U$. The collection of such sets forms a topological space. More importantly, every topological space can be formed in this manner. It can be shown that $N(x)$ is the collection of all neighborhoods around the point $x$.
Some authors use the above fact to define a topological space using such a system of neighborhoods instead of by the properties of its open sets. The neighborhood formulation, while less verbose, is arguably more intuitive.
Every topological space can be uniquely specified from the system of neighborhoods of each of its points.
A normed vector space is a vector space $V$ over $\mathbb{R}$ or $\mathbb{C}$ together with a function $\| \cdot \| : V \to \mathbb{R}$, the norm, satisfying the following properties for all $u, v \in V$ and scalars $a$:
A normed vector space is also a metric space under the metric $d(u, v) = \|u - v\|$.
A seminorm has the properties of the norm, except that $\|v\|$ may be zero for nonzero $v$.
procfs is a virtual filesystem, mounted at `/proc`, containing information on processes, threads, and the overall system.
Directories of the form `/proc/[0-9]+` contain process-specific information. Paths of the form `/proc/[a-z]+` contain system-specific information.
Sourced from the RHEL 6 Deployment Guide.
`/proc/[pid]/` is a directory containing process information for the process with identifier `[pid]`. The path `/proc/self/` links to the directory corresponding to the calling process.
`/proc/[pid]/cwd` links to the process's working directory.
`/proc/[pid]/fd/` is a directory containing links to the file descriptors opened by the process.
`/proc/[pid]/environ` contains process-specific environment variables.
`/proc/[pid]/exe` links to the process's executable.
`/proc/[pid]/maps` contains the process's memory maps.
`/proc/[pid]/mem` contains a mapping to the process's memory. This file is not normally available without attaching to the process (e.g. via ptrace).
`/proc/[pid]/task/[tid]/` is a directory for the process's thread with identifier `[tid]`.
`/proc/bus/` contains information about available buses. In particular, `/proc/bus/pci` contains information about available PCI devices.
`/proc/cpuinfo` contains information about the system's CPU (model name, cache size, feature flags, ...).
`/proc/filesystems` contains a list of filesystems supported by the kernel.
`/proc/iomem` maps memory regions to physical devices.
`/proc/kcore` contains a view into the system's memory.
`/proc/loadavg` shows relative load across CPU cores.
`/proc/locks` lists the file locks held by the kernel.
`/proc/meminfo` displays statistics on memory usage.
`/proc/modules` contains a list of kernel modules.
`/proc/mounts` contains a list of filesystem mounts.
`/proc/stat` contains a large number of statistics collected since the system was last restarted.
`/proc/uptime` shows how long the system has been running since the last restart.
`/proc/version` shows the kernel version, including compiler info.
A quotient group for a given group and a "normal" subgroup of is a group that has a "coarser" or "more relaxed" algebraic structure relative to . The notion of a quotient group is essential in a number of isomorphism theorems.
Let $G$ be a group with subgroup $H$.
A left coset of $H$, denoted as $gH$ for some $g \in G$, is the set consisting of elements of the form $gh$ for $h \in H$. That is, $gH$ is the image of $H$ under the left action induced by $g$.
Similarly, a right coset of $H$, denoted $Hg$, is the image of $H$ under the right action induced by $g$.
The subgroup $H$ is said to be normal if $gH = Hg$ for all $g \in G$. In this case, there is no distinction between "right" and "left" cosets.
The cosets of a normal subgroup $H$ themselves form a group, the quotient group $G/H$.
Let $G$ be finite and $H$ normal. Lagrange's theorem states that

$$|G/H| = \frac{|G|}{|H|}$$
Let $W$ be a vector subspace of $V$. Since $W$ is a normal subgroup of $V$ with respect to vector addition, one can construct the quotient group $V/W$ consisting of cosets of $W$ (affine hyperplanes parallel to $W$).
Moreover, $V/W$ also inherits a form of scalar multiplication, making it itself a vector space. Let $v + W$ be a vector in the quotient space. That is, $v + W$ is a coset of $W$. Scalar multiplication of every point in $v + W$ yields another coset of $W$. Hence, scalar multiplication is well-defined on the quotient space.
The definition of a quotient vector space can be readily generalized to a module. That is, one can construct quotient modules in a similar fashion.
Ring buffers store their data in contiguous vectors, generally making them faster than linked lists as queue implementations.
Ring buffers have a maximum capacity. Appends may therefore be destructive. For example, when pushing data into the head of a full-capacity buffer, data is written into the memory address previously occupied by the tail. Visually:
(Ouroboros, source: Wikimedia)
Data is stored in a pre-allocated vector. The length of this vector is the buffer's capacity and should not be confused with the size of the buffer. In general, only a region of this vector is valid buffer data. The size of a buffer is no greater than its capacity.
The valid buffer region can be defined by a pointer or index that indicates the "start" or "head" of the buffer as well as the size of the valid buffer region. A pointer to the tail of the buffer region may alternatively be supplied.
The valid buffer region is generally not contiguous in memory. In particular, it may "wrap around" to the beginning of the vector. This is where the namesake of the ring buffer comes in. We can think of the first and last memory cell in the data vector to be connected together.
Consider, for example, the ring buffer represented in the above image. The data vector has a capacity of 8, with memory cells enumerated 1 through 8. The head pointer is 6. The tail pointer is 2. Since the head pointer is larger than the tail pointer, the buffer wraps around the end. The buffer consists of five cells: 6, 7, 8, 1, and 2. Cells 3, 4, and 5 are not part of the valid buffer region. But they may be used if data is pushed to the head or tail of the buffer.
Pushing and popping data from either end of the buffer may involve incrementing or decrementing the head and/or tail pointers. If, in the above example, four elements were pushed to the tail of the buffer, then the tail pointer will be incremented four times and the head pointer will be incremented once. The incrementing of the head pointer in this case indicates that data is being overwritten due to the buffer being at full capacity.
Below is a minimal and performant Julia implementation of a queue implemented as a ring buffer. For simplicity, the size of the buffer is used instead of a pointer to the tail.
This can be made into a deque with similarly implemented `pushfirst!` procedures. Automatic resizing is possible with a copy, which linearizes the queue.
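The head-plus-size scheme described above can also be sketched in Python (a minimal analogue, with illustrative names; overwriting the oldest element when full, as in the image):

```python
class RingQueue:
    """FIFO queue backed by a fixed-capacity ring buffer (head index + size)."""
    def __init__(self, capacity):
        self.data = [None] * capacity
        self.head = 0          # index of the oldest element
        self.size = 0

    def push(self, x):
        """Append at the tail; overwrites the oldest element when full."""
        tail = (self.head + self.size) % len(self.data)
        self.data[tail] = x
        if self.size == len(self.data):
            # Buffer was full: the head is overwritten, so advance it.
            self.head = (self.head + 1) % len(self.data)
        else:
            self.size += 1

    def pop(self):
        """Remove and return the oldest element."""
        if self.size == 0:
            raise IndexError("empty ring buffer")
        x = self.data[self.head]
        self.head = (self.head + 1) % len(self.data)
        self.size -= 1
        return x

q = RingQueue(3)
for i in (1, 2, 3, 4):
    q.push(i)
print(q.pop())                 # -> 2 (the 1 was overwritten by the 4)
```

All index arithmetic is modulo the capacity, which is exactly the "wrap around" behavior described above.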
A sigma algebra is a space of mathematical statements that may be combined using a countable number of boolean operations.
More precisely, a sigma algebra over a set $X$ is a collection $\Sigma$ of subsets of $X$ with the following closure properties:
The tuple $(X, \Sigma)$ is called a measurable space. $\Sigma'$ is said to be a sub-sigma algebra of $\Sigma$ if $(X, \Sigma')$ is a measurable space and $\Sigma' \subseteq \Sigma$. $\Sigma'$ is then said to be "coarser" than $\Sigma$.
Elements of a sigma algebra are called measurable sets.
Sigma algebras are often generated from other sigma algebras:
Let $G$ be a group. If $H \subseteq G$, then $H$ is a subgroup of $G$ if $H$ is itself a group under the inherited binary operator. Equivalently, $H$ is a subgroup of $G$ if $a b^{-1} \in H$ whenever $a, b \in H$.
Such a tensor is said to be of type $(p, q)$ (or $\binom{p}{q}$ in some literature). If $p = 0$, the tensor is said to be covariant. If $q = 0$, the tensor is said to be contravariant. Otherwise, the tensor is said to be mixed.
Let $e_1, \dots, e_n$ be a basis of $V$. And let $e^1, \dots, e^n$ be the corresponding basis in $V^*$ (that is, $e^i(e_j) = \delta^i_j$). Then a natural basis for the space of $(p, q)$-typed tensors on $V$ is given by

$$e_{i_1} \otimes \dots \otimes e_{i_p} \otimes e^{j_1} \otimes \dots \otimes e^{j_q}$$

for all possible sequences $i_1, \dots, i_p$ and $j_1, \dots, j_q$ in $\{1, \dots, n\}$. The dimension of the space of $(p, q)$-typed tensors is thus $n^{p+q}$.
A topological space is a set equipped with topological structure. Essentially, a topological space is defined so that the "limit" of a sequence of "points" in the space may be defined.
A topological space may be defined as a set $X$ together with a topology $\tau$, which consists of subsets of $X$. Elements of this topology are called open sets. A topology satisfies the following axioms:
A set is said to be closed if its complement is open. The set of closed sets uniquely specifies the space's topology.
Topologies are rarely defined by specifying the open sets directly. Rather, they are usually generated from simpler constructs. For example:
Let $A$ be an $n \times n$ matrix. The trace of this matrix is the sum of its diagonal elements:

$$\operatorname{tr} A = \sum_{i=1}^{n} a_{ii}$$

The trace can also be calculated as the sum of $A$'s eigenvalues (adjusting for multiplicities). That is, let the characteristic equation of $A$ be given by $p(\lambda) = (\lambda_1 - \lambda)(\lambda_2 - \lambda) \cdots (\lambda_n - \lambda)$. Then

$$\operatorname{tr} A = \lambda_1 + \lambda_2 + \dots + \lambda_n$$
Since the eigenvalues of a matrix are invariant under coordinate transforms, the trace of a linear transformation of a finite-dimensional space can be defined as the trace of one of its matrix representations.
Traces satisfy the following properties:
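Two standard trace properties, linearity and the cyclic property $\operatorname{tr}(AB) = \operatorname{tr}(BA)$, can be checked numerically (the matrices below are arbitrary examples):

```python
def trace(A):
    """Sum of the diagonal elements."""
    return sum(A[i][i] for i in range(len(A)))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 5], [6, 7]]

assert trace(matmul(A, B)) == trace(matmul(B, A))   # tr(AB) = tr(BA)

add = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
assert trace(add) == trace(A) + trace(B)            # linearity
```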
A unitary transformation is an isomorphism of an inner product space. That is, it is a linear transformation $f$ preserving inner products:

$$\langle f(u), f(v) \rangle = \langle u, v \rangle$$
For euclidean vector spaces (real scalars), unitary transformations are called orthogonal transformations.
The eigenspaces of a unitary transformation corresponding to distinct eigenvalues are orthogonal.
The matrix representation of a unitary transformation is called a unitary matrix. Such matrices have the following properties:
Here, $A^\dagger$ denotes the conjugate transpose of $A$ (obtained from $A$ by taking the complex conjugate of each element and then transposing the matrix).
For euclidean spaces, a unitary matrix is specifically called an orthogonal matrix.
More explicitly, let $B$ be a basis. Then, every $v \in V$ can be uniquely written in the form $v = a_1 e_1 + \dots + a_n e_n$ for some scalars $a_1, \dots, a_n$ and basis vectors $e_1, \dots, e_n \in B$.
If $B$ is finite, then $V$ is said to have a finite dimension. And the dimension of $V$ is the cardinality of $B$. Otherwise, $V$ is said to be infinite-dimensional.
All bases of a vector space share the same cardinality.
Let $V$ be $n$-dimensional ($n$ finite). Then a basis may be uniquely identified with a linear isomorphism $\phi : F^n \to V$ via the following construction:
Here, $x_i$ is the $i$th component of $x \in F^n$. The inverse $\phi^{-1}$ is a coordinate system on $V$.
Converting from one basis to another may be done using transition matrices.
Vector spaces are used to model a collection of objects -- vectors -- that can be combined in a linear way to form more vectors. Vector spaces may also be called linear spaces.
Formally, a vector space over a field (with usually being either or ) of "scalars" is a set together with two operations:
A linear map is a homomorphism between two vector spaces. That is, it is a map from one vector space to another vector space that preserves its linear structure (vector addition and scalar multiplication).
A vector space may be equipped with a basis. If this basis is finite, then the vector space is said to have a finite dimension. The dimension of the vector space is the cardinality of any of its bases (invariant across bases).
If $V$ is $n$-dimensional, then it is isomorphic to the cartesian space $F^n$ (see below).
A vector subspace $W$ is a subset of some vector space $V$ that is itself a vector space when inheriting vector addition and scalar multiplication from $V$. Equivalently, $W$ is a vector subspace whenever it is closed under vector addition and scalar multiplication.
Some common examples of vector spaces include: