19.2 Vector spaces
19.2.1 The vector space axioms
There are eight axioms in total, but I find it easier to remember them this way:
A vector space is a set $V$ over a field $F$ (elements of which are called \say{scalars}) equipped with an operator called \say{vector addition}, which takes two elements of $V$ and returns a single element of $V$, and an operator called \say{scalar multiplication}, which takes a scalar and a vector and outputs a vector.
We have the following eight axioms: the four Abelian group axioms for vector addition (associativity, commutativity, existence of a zero vector, and existence of additive inverses), two distributivity axioms, compatibility of scalar multiplication, and the action of the scalar identity.
The first four axioms are equivalent to stating that $(V, +)$ must be an Abelian group.
We have two kinds of distributivity. The first is that if $\lambda \in F$ and $u, v \in V$, then
\[ \lambda(u + v) = \lambda u + \lambda v. \tag{19.40} \]
The second is that if $\lambda, \mu \in F$ and $v \in V$, then
\[ (\lambda + \mu)v = \lambda v + \mu v. \tag{19.41} \]
The neutral element of multiplication in $F$ (e.g. in $\mathbb{R}$ this is $1$) has the following property:
\[ 1v = v. \tag{19.42} \]
We also have a kind of \say{multiplicative distributivity},
\[ \lambda(\mu v) = (\lambda \mu) v. \tag{19.43} \]
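As a quick sanity check (not a proof), the four scalar axioms (19.40)-(19.43) can be verified numerically on sample vectors; the sketch below assumes we represent vectors in $\mathbb{R}^3$ as NumPy arrays, so that array arithmetic plays the role of vector addition and scalar multiplication.

```python
# Numerical sanity check (not a proof!) of axioms (19.40)-(19.43)
# in the vector space R^3 over R, with NumPy arrays as vectors.
import numpy as np

rng = np.random.default_rng(0)
u, v = rng.normal(size=3), rng.normal(size=3)
lam, mu = 2.5, -1.5

# (19.40): lambda (u + v) = lambda u + lambda v
assert np.allclose(lam * (u + v), lam * u + lam * v)
# (19.41): (lambda + mu) v = lambda v + mu v
assert np.allclose((lam + mu) * v, lam * v + mu * v)
# (19.42): the neutral scalar 1 acts trivially
assert np.allclose(1 * v, v)
# (19.43): lambda (mu v) = (lambda mu) v
assert np.allclose(lam * (mu * v), (lam * mu) * v)
print("all four scalar axioms hold on this sample")
```

Of course, checking random samples only illustrates the axioms; they hold for $\mathbb{R}^3$ by the field axioms of $\mathbb{R}$.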
Not exactly the most exciting stuff, but we can’t build castles without foundations! I’m not a structural engineer, but I’m pretty sure this is a true statement.
19.2.2 Linear independence
This is a concept in linear algebra.
Let $v_1, v_2, \ldots, v_n$ be some vectors in a vector space $V$, and let $\lambda_1, \lambda_2, \ldots, \lambda_n$ be some scalars in the field $F$ (over which this vector space is defined).
We say these vectors are linearly independent if and only if
\[ \lambda_1 v_1 + \lambda_2 v_2 + \cdots + \lambda_n v_n = 0 \implies \lambda_1 = \lambda_2 = \cdots = \lambda_n = 0. \]
In words, this means \say{if the only values of the $\lambda_i$s which satisfy $\lambda_1 v_1 + \cdots + \lambda_n v_n = 0$ are when all the $\lambda_i$s are zero, then the vectors are linearly independent}.
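For vectors in $\mathbb{R}^n$, a practical way to test this condition is to check whether the matrix with the vectors as rows has rank equal to the number of vectors. A minimal sketch (the function name `linearly_independent` is my own, not a standard API):

```python
import numpy as np

def linearly_independent(vectors):
    """Vectors (given as rows) are linearly independent iff the matrix
    they form has rank equal to the number of vectors."""
    A = np.array(vectors, dtype=float)
    return bool(np.linalg.matrix_rank(A) == len(vectors))

print(linearly_independent([[1, 0, 0], [0, 1, 0]]))  # True: standard basis vectors
print(linearly_independent([[1, 2, 3], [2, 4, 6]]))  # False: second row = 2 * first row
```

The rank criterion is equivalent to the definition above: a rank deficit means some non-trivial combination of the rows is zero.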
Change of basis
Let us suppose that we have two sets of basis vectors for the same vector space $V$. It doesn’t really matter what we call them, but $B$ and $C$ are names as good as any. These vectors can be written in the form
\[ B = \{b_1, b_2, \ldots, b_n\}, \quad C = \{c_1, c_2, \ldots, c_n\}. \]
For any vector $v \in V$ we can always write it in the $B$ co-ordinate system by writing the vector as a linear combination (this is always possible because $B$ is a basis for $V$) of the vectors in $B$. We can write this as
\[ [v]_B = (\lambda_1, \lambda_2, \ldots, \lambda_n), \]
where $\lambda_1, \lambda_2, \ldots, \lambda_n$ are such that
\[ v = \lambda_1 b_1 + \lambda_2 b_2 + \cdots + \lambda_n b_n. \]
That is, they are the coefficients needed to write $v$ as a linear combination of the vectors in $B$. This also helps to understand why, for example, the vector space of $2 \times 2$ symmetric matrices, i.e. those of the form
\[ \begin{pmatrix} a & b \\ b & c \end{pmatrix}, \tag{19.49} \]
is three-dimensional; we can write every such matrix as a vector of dimension $3$, where each coefficient denotes what to multiply each basis vector by to obtain our specific vector.
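To illustrate the coordinate idea concretely, here is a hypothetical sketch mapping a $2 \times 2$ symmetric matrix to its three coordinates with respect to one possible basis (the basis matrices `E1`, `E2`, `E3` and the helper `to_coords` are my own choices, purely for illustration):

```python
import numpy as np

# One possible basis for the space of 2x2 symmetric matrices:
E1 = np.array([[1, 0], [0, 0]])
E2 = np.array([[0, 1], [1, 0]])
E3 = np.array([[0, 0], [0, 1]])

def to_coords(S):
    """Coordinates (a, b, c) of a symmetric matrix [[a, b], [b, c]]."""
    return np.array([S[0, 0], S[0, 1], S[1, 1]])

S = np.array([[4.0, 7.0], [7.0, 9.0]])
a, b, c = to_coords(S)
# Reconstruct S as a linear combination of the three basis matrices:
assert np.array_equal(a * E1 + b * E2 + c * E3, S)
print(to_coords(S))
```

Three basis matrices, three coordinates: this is exactly what "three-dimensional" means here.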
But what if we want to find a way to translate $[v]_B$ into $[v]_C$? This is actually doable using a single matrix. Here’s how. We start by applying the definition of $[v]_B$; that is, we have that $[v]_B = (\lambda_1, \ldots, \lambda_n)$ if and only if
\[ v = \sum_{j=1}^{n} \lambda_j b_j. \]
To find $[v]_C$, it is sufficient to write the $b_j$ in terms of the basis vectors in $C$. How do we do this? A straightforward approach is to write every vector in $B$ in terms of those in $C$ and then to substitute for them, which removes all the $b$-vectors and means that we instead have $c$-vectors.
Because $B$ and $C$ are both bases for $V$, we can write every vector in $B$ in terms of those in $C$,
\[ b_j = \sum_{i=1}^{n} a_{ij} c_i. \]
We can then substitute this into the linear combination of $v$ in terms of the basis vectors in $B$, giving
\[ v = \sum_{j=1}^{n} \lambda_j b_j = \sum_{j=1}^{n} \lambda_j \sum_{i=1}^{n} a_{ij} c_i. \]
This looks scary, but we just need to stick to the definitions and keep our goal in mind: writing $v$ in terms of all the $c_i$. We can move things around to obtain
\[ v = \sum_{i=1}^{n} \Big( \sum_{j=1}^{n} a_{ij} \lambda_j \Big) c_i. \]
Therefore, we have that
\[ [v]_C = A [v]_B, \quad \text{where } A = (a_{ij}). \]
Alternatively: the change of basis matrix has as its $j$th column the scalars needed to write the $j$th element of the old basis $B$ as a linear combination of the vectors of the new basis $C$.
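The construction can be sketched numerically. Assuming $V = \mathbb{R}^2$ with a basis $B$ and a second basis $C$ of my own choosing, column $j$ of the change of basis matrix holds the coordinates of $b_j$ in the basis $C$, which we can find by solving a small linear system:

```python
import numpy as np

# Basis B (here the standard basis) and a second basis C of R^2,
# chosen purely for illustration.
b1, b2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
c1, c2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])

# Column j of the change-of-basis matrix A holds the coordinates of
# b_j written in the basis C, found here by solving C x = b_j.
C = np.column_stack([c1, c2])
A = np.column_stack([np.linalg.solve(C, b) for b in (b1, b2)])

v_B = np.array([3.0, 5.0])   # coordinates of v in basis B
v_C = A @ v_B                # coordinates of v in basis C
# Check: both coordinate vectors describe the same underlying vector.
assert np.allclose(v_B[0] * b1 + v_B[1] * b2, v_C[0] * c1 + v_C[1] * c2)
print(v_C)
```

Solving $Cx = b_j$ is just asking "which combination of the $c_i$ gives $b_j$?", i.e. computing the $a_{ij}$ from the derivation above.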
Let $V$ be a vector space. The set $W$ is a subspace of $V$ if $W$ is a vector space, and $W$ is a subset of $V$.
Showing that something is a subspace. Suppose we have a vector space $V$, and we want to prove that $W$ is a subspace of $V$. The steps to do so are these:
Show that the zero vector is in the subspace in question.
Show that $W \subseteq V$, using the standard technique for showing that something is a subset of something else (as in Section TODO: write).
Then we must show that $W$ is closed under vector addition and scalar multiplication. The rest of the vector space axioms follow from the fact that $W \subseteq V$ and $V$ is a vector space.
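The steps above can be sketched on a concrete candidate subspace, say $W = \{(x, 2x) : x \in \mathbb{R}\} \subseteq \mathbb{R}^2$ (an example of my own). Spot-checking closure on samples illustrates, but of course does not prove, the general claim:

```python
import numpy as np

# Candidate subspace W = {(x, 2x) : x in R} of V = R^2.
def in_W(v):
    """Membership test: the second coordinate is twice the first."""
    return bool(np.isclose(v[1], 2 * v[0]))

zero = np.zeros(2)
u, w = np.array([1.0, 2.0]), np.array([-3.0, -6.0])

assert in_W(zero)           # step 1: the zero vector is in W
assert in_W(u) and in_W(w)  # both sample vectors lie in W
assert in_W(u + w)          # closed under vector addition (on this sample)
assert in_W(5.0 * u)        # closed under scalar multiplication (on this sample)
print("W passes the subspace checks on these samples")
```

A real proof would take arbitrary $(x, 2x)$ and $(y, 2y)$ and show their sum $(x + y, 2(x + y))$ and any scalar multiple $(\lambda x, 2\lambda x)$ have the same form.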
This theorem is given both as an example of how to prove facts about vector spaces and because it is important in its own right.
Let $V$ be a vector space, and $U$ and $W$ be subspaces of $V$. Prove that $U \cup W$ is a subspace of $V$ if and only if $U \subseteq W$ or $W \subseteq U$.
To prove this, first we will show the \say{if} direction, and then the \say{only if} direction.
If. Without loss of generality, assume that $U \subseteq W$, in which case $U \cup W = W$, and this is a subspace of $V$ as $W$ is a subspace of $V$. The proof for the other case follows by swapping $U$ and $W$ in the proof.
Only if. This direction requires a bit more intuition about which directions to explore. First we will assume that $U \cup W$ is a subspace, and then we will assume that the consequent is untrue (i.e. that neither $U \subseteq W$ nor $W \subseteq U$ holds), in which case there exist $u$ and $w$ such that
\[ u \in U,\; u \notin W \quad \text{and} \quad w \in W,\; w \notin U. \tag{19.83} \]
We can then ask (this is the core idea in the proof, which is not immediately obvious, to me at least) about the status of $u + w$. As $u, w \in U \cup W$, and by assumption $U \cup W$ is a subspace (and therefore by definition closed under vector addition), it must be that $u + w \in U \cup W$. Then either $u + w \in U$ or $u + w \in W$ (by definition of the set union).
If $u + w \in U$, then also $w = (u + w) - u \in U$, which is a contradiction, as by definition of $u$ and $w$ (Equation 19.83) $w \notin U$.
If $u + w \in W$, a very similar thing is the case; also $u = (u + w) - w \in W$, which is a contradiction, as $u \notin W$.
Therefore, by contradiction, this direction of the theorem must be true.
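A concrete instance of the proof’s key step, using the example (my own choice) $U =$ the $x$-axis and $W =$ the $y$-axis in $\mathbb{R}^2$, where neither subspace contains the other:

```python
import numpy as np

# U = x-axis and W = y-axis in R^2: both are subspaces, neither
# contains the other, and their union is not a subspace.
def in_U(v):
    return bool(np.isclose(v[1], 0.0))  # x-axis: second coordinate zero

def in_W(v):
    return bool(np.isclose(v[0], 0.0))  # y-axis: first coordinate zero

u = np.array([1.0, 0.0])  # u is in U but not in W
w = np.array([0.0, 1.0])  # w is in W but not in U
assert in_U(u) and not in_W(u)
assert in_W(w) and not in_U(w)

s = u + w  # the vector (1, 1)
# s lies in neither U nor W, so U ∪ W is not closed under addition:
assert not in_U(s) and not in_W(s)
print("u + w =", s, "escapes the union")
```

This is exactly the $u + w$ from the proof: its membership in $U \cup W$ would force it into $U$ or $W$, and here it visibly lands in neither.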