Chapter VS Vector Spaces

With a computational toolkit now in place, we can begin our study of linear algebra at a more theoretical level.

Linear algebra is the study of two fundamental objects: vector spaces and linear transformations (see Chapter LT). This chapter focuses on the former. The power of mathematics often derives from generalizing many different situations into one abstract formulation, and that is exactly what we will do throughout this chapter.

Annotated Acronyms VS.
Definition VS.

The most fundamental object in linear algebra is a vector space. Or perhaps the most fundamental object is a vector, and a vector space is important because it is a collection of vectors. Either way, Definition VS is critical: all of our remaining theorems that assume we are working with a vector space can trace their lineage back to this definition.
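For quick reference, here is a compact paraphrase of the ten properties (see Definition VS for the authoritative statement): for all \(u, v, w \in V\) and all complex scalars \(\alpha, \beta\),
\begin{align*}
&\text{AC: } u + v \in V  &  &\text{SC: } \alpha u \in V\\
&\text{C: } u + v = v + u  &  &\text{AA: } u + (v + w) = (u + v) + w\\
&\text{Z: } u + 0 = u  &  &\text{AI: } u + (-u) = 0\\
&\text{SMA: } \alpha(\beta u) = (\alpha\beta)u  &  &\text{DVA: } \alpha(u + v) = \alpha u + \alpha v\\
&\text{DSA: } (\alpha + \beta)u = \alpha u + \beta u  &  &\text{O: } 1u = u
\end{align*}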

Theorem TSS.

Checking all ten properties of a vector space (Definition VS) can get tedious. But if you have a subset of a known vector space, then Theorem TSS considerably shortens the verification. Also, proofs of closure (the last two conditions in Theorem TSS) are a good way to practice a common style of proof.
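In outline, Theorem TSS says a subset \(W\) of a vector space \(V\) is itself a vector space (a subspace) provided three conditions hold (a paraphrase; see the theorem for the precise statement):
\begin{align*}
&\text{1. Nonempty: } W \neq \emptyset\\
&\text{2. Additive closure: } x, y \in W \implies x + y \in W\\
&\text{3. Scalar closure: } \alpha \in \mathbb{C},\ x \in W \implies \alpha x \in W
\end{align*}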

Theorem VRRB.

The proof of uniqueness in this theorem is a very typical employment of the hypothesis of linear independence. But that is not why we mention it here. This theorem is critical to our first section about representations, Section VR, via Definition VR.
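Paraphrasing the statement: if \(B = \{v_1, v_2, \ldots, v_n\}\) is a basis of a vector space \(V\), then every \(w \in V\) can be written in exactly one way as
\[ w = a_1 v_1 + a_2 v_2 + \cdots + a_n v_n\text{.} \]
Existence of the scalars comes from \(B\) spanning \(V\); their uniqueness comes from the linear independence of \(B\).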

Theorem CNMB.

Having just defined a basis (Definition B), we discover that the columns of a nonsingular matrix form a basis of \(\complex{m}\text{.}\) Much of what we know about nonsingular matrices is either contained in this statement or made much more evident by it.
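A small illustrative example (not from the text): the matrix
\[ A = \begin{bmatrix} 1 & 2 \\ 1 & 3 \end{bmatrix} \]
is nonsingular, since it row-reduces to the identity matrix, so by Theorem CNMB its columns form a basis of \(\complex{2}\text{.}\)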

Theorem SSLD.

This theorem is a key juncture in our development of linear algebra. You have probably already realized how useful Theorem G is. All four parts of Theorem G have proofs that finish with an application of Theorem SSLD.
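Paraphrasing Theorem SSLD: if a vector space \(V\) is spanned by a set of \(t\) vectors, then any set of more than \(t\) vectors from \(V\) must be linearly dependent. This single counting principle is what drives all four parts of Theorem G.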

Theorem RPNC.

This simple relationship between the rank, nullity, and number of columns of a matrix might be surprising. But with simplicity comes power, as this theorem can be very useful. It will be generalized in the very last theorem of Chapter LT, Theorem RPNDD.
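For a matrix \(A\) with \(n\) columns, the relationship reads
\[ r(A) + n(A) = n \]
where \(r(A)\) denotes the rank and \(n(A)\) the nullity. A quick illustrative check: if a \(3 \times 5\) matrix row-reduces to a form with 2 pivot columns, its rank is 2, so its nullity must be \(5 - 2 = 3\text{.}\)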

Theorem G.

A whimsical title, but the intent is to make sure you do not miss this one. Much of the interaction between bases, dimension, linear independence, and spanning is captured in this theorem.
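Paraphrasing the four parts: suppose \(V\) has dimension \(t\) and \(S\) is a set of vectors from \(V\). Then
\begin{align*}
&\text{1. If } |S| > t\text{, then } S \text{ is linearly dependent.}\\
&\text{2. If } |S| < t\text{, then } S \text{ does not span } V\text{.}\\
&\text{3. If } |S| = t \text{ and } S \text{ is linearly independent, then } S \text{ spans } V\text{.}\\
&\text{4. If } |S| = t \text{ and } S \text{ spans } V\text{, then } S \text{ is linearly independent.}
\end{align*}
In the last two cases, \(S\) is a basis of \(V\).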

Theorem RMRT.

This one is a real surprise. Why should a matrix and its transpose both row-reduce to the same number of nonzero rows?
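A small illustrative example:
\[ A = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \end{bmatrix}, \qquad A^{T} = \begin{bmatrix} 1 & 2 \\ 2 & 4 \\ 3 & 6 \end{bmatrix}\text{.} \]
Each matrix row-reduces to a single nonzero row, so \(r(A) = r(A^{T}) = 1\text{,}\) just as Theorem RMRT guarantees.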