
Section SLT Surjective Linear Transformations

The companion to an injection is a surjection. Surjective linear transformations are closely related to spanning sets and ranges. So as you read this section, reflect back on Section ILT and note the parallels and the contrasts. In the next section, Section IVLT, we will combine the two properties.

Subsection SLT Surjective Linear Transformations

As usual, we lead with a definition.

Definition SLT. Surjective Linear Transformation.

Suppose \(\ltdefn{T}{U}{V}\) is a linear transformation. Then \(T\) is surjective if for every \(\vect{v}\in V\) there exists a \(\vect{u}\in U\) so that \(\lteval{T}{\vect{u}}=\vect{v}\text{.}\)

Given an arbitrary function, it is possible for there to be an element of the codomain that is not an output of the function (think about the function \(y=f(x)=x^2\) and the codomain element \(y=-3\)). For a surjective function, this never happens. If we choose any element of the codomain (\(\vect{v}\in V\)) then there must be an input from the domain (\(\vect{u}\in U\)) which will create the output when used to evaluate the linear transformation (\(\lteval{T}{\vect{u}}=\vect{v}\)). Some authors prefer the term onto where we use surjective, and we will sometimes refer to a surjective linear transformation as a surjection.

Subsection ESLT Examples of Surjective Linear Transformations

It is perhaps most instructive to examine a linear transformation that is not surjective first.

Example NSAQ. Not surjective, Archetype Q.

Archetype Q is the linear transformation

\begin{equation*} \ltdefn{T}{\complex{5}}{\complex{5}},\quad \lteval{T}{\colvector{x_1\\x_2\\x_3\\x_4\\x_5}}= \colvector{-2 x_1 + 3 x_2 + 3 x_3 - 6 x_4 + 3 x_5\\ -16 x_1 + 9 x_2 + 12 x_3 - 28 x_4 + 28 x_5\\ -19 x_1 + 7 x_2 + 14 x_3 - 32 x_4 + 37 x_5\\ -21 x_1 + 9 x_2 + 15 x_3 - 35 x_4 + 39 x_5\\ -9 x_1 + 5 x_2 + 7 x_3 - 16 x_4 + 16 x_5}\text{.} \end{equation*}

We will demonstrate that

\begin{equation*} \vect{v}=\colvector{-1\\2\\3\\-1\\4} \end{equation*}

is an unobtainable element of the codomain. Suppose to the contrary that \(\vect{u}\) is an element of the domain such that \(\lteval{T}{\vect{u}}=\vect{v}\text{.}\)

Then

\begin{align*} \colvector{-1\\2\\3\\-1\\4}&=\vect{v}=\lteval{T}{\vect{u}} =\lteval{T}{\colvector{u_1\\u_2\\u_3\\u_4\\u_5}}\\ &=\colvector{ -2 u_1 + 3 u_2 + 3 u_3 - 6 u_4 + 3 u_5\\ -16 u_1 + 9 u_2 + 12 u_3 - 28 u_4 + 28 u_5\\ -19 u_1 + 7 u_2 + 14 u_3 - 32 u_4 + 37 u_5\\ -21 u_1 + 9 u_2 + 15 u_3 - 35 u_4 + 39 u_5\\ -9 u_1 + 5 u_2 + 7 u_3 - 16 u_4 + 16 u_5}\\ &= \begin{bmatrix} -2&3&3&-6&3\\ -16&9&12&-28&28\\ -19&7&14&-32&37\\ -21&9&15&-35&39\\ -9&5&7&-16&16 \end{bmatrix} \colvector{u_1\\u_2\\u_3\\u_4\\u_5}\text{.} \end{align*}

Now we recognize the appropriate input vector \(\vect{u}\) as a solution to a linear system of equations. Form the augmented matrix of the system, and row-reduce to

\begin{equation*} \begin{bmatrix} \leading{1} & 0 & 0 & 0 & -1 & 0\\ 0 & \leading{1} & 0 & 0 & -\frac{4}{3} & 0\\ 0 & 0 & \leading{1} & 0 & -\frac{1}{3} & 0\\ 0 & 0 & 0 & \leading{1} & -1 & 0\\ 0 & 0 & 0 & 0 & 0 & \leading{1}\\ \end{bmatrix}\text{.} \end{equation*}

With a leading 1 in the last column, Theorem RCLS tells us the system is inconsistent. From the absence of any solutions we conclude that no such vector \(\vect{u}\) exists, and by Definition SLT, \(T\) is not surjective.

Again, do not concern yourself with how \(\vect{v}\) was selected, as this will be explained shortly. However, do understand why this vector provides enough evidence to conclude that \(T\) is not surjective.
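For readers who want to experiment outside of Sage, the inconsistency argument above can be checked with a short plain-Python sketch using exact rational arithmetic. The helper `rref` and all variable names are our own illustrative choices, not from the text.

```python
from fractions import Fraction

def rref(M):
    """Row-reduce a matrix (a list of rows) exactly; return (RREF, pivot columns)."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]          # scale to a leading 1
        for i in range(rows):
            if i != r and M[i][c] != 0:             # clear the rest of the column
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

# the matrix of Archetype Q and the unobtainable codomain vector v
A = [[-2, 3, 3, -6, 3],
     [-16, 9, 12, -28, 28],
     [-19, 7, 14, -32, 37],
     [-21, 9, 15, -35, 39],
     [-9, 5, 7, -16, 16]]
v = [-1, 2, 3, -1, 4]

_, pivots_A = rref(A)
_, pivots_aug = rref([row + [b] for row, b in zip(A, v)])
assert len(pivots_A) == 4    # the coefficient matrix has rank 4
assert 5 in pivots_aug       # pivot in the final column: the system is inconsistent
```

A pivot in the last column plays exactly the role of the leading 1 in the final column of the row-reduced augmented matrix displayed above.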

Here is a cartoon of a non-surjective linear transformation. Notice that the central feature of this cartoon is that the vector \(\vect{v}\in V\) does not have an arrow pointing to it, implying there is no \(\vect{u}\in U\) such that \(\lteval{T}{\vect{u}}=\vect{v}\text{.}\) Even though this happens again with a second unnamed vector in \(V\text{,}\) it only takes one occurrence to destroy the possibility of surjectivity.

Figure NSLT. Non-Surjective Linear Transformation

To show that a linear transformation is not surjective, it is enough to find a single element of the codomain that is never created by any input, as in Example NSAQ. However, to show that a linear transformation is surjective we must establish that every element of the codomain occurs as an output of the linear transformation for some appropriate input.

Example SAR. Surjective, Archetype R.

Archetype R is the linear transformation

\begin{equation*} \ltdefn{T}{\complex{5}}{\complex{5}},\quad \lteval{T}{\colvector{x_1\\x_2\\x_3\\x_4\\x_5}}= \colvector{-65 x_1 + 128 x_2 + 10 x_3 - 262 x_4 + 40 x_5\\ 36 x_1 - 73 x_2 - x_3 + 151 x_4 - 16 x_5\\ -44 x_1 + 88 x_2 + 5 x_3 - 180 x_4 + 24 x_5\\ 34 x_1 - 68 x_2 - 3 x_3 + 140 x_4 - 18 x_5\\ 12 x_1 - 24 x_2 - x_3 + 49 x_4 - 5 x_5}\text{.} \end{equation*}

To establish that \(T\) is surjective we must begin with a totally arbitrary element of the codomain, \(\vect{v}\text{,}\) and somehow find an input vector \(\vect{u}\) such that \(\lteval{T}{\vect{u}}=\vect{v}\text{.}\) We desire,

\begin{align*} \lteval{T}{\vect{u}}&=\vect{v}\\ \colvector{-65 u_1 + 128 u_2 + 10 u_3 - 262 u_4 + 40 u_5\\ 36 u_1 - 73 u_2 - u_3 + 151 u_4 - 16 u_5\\ -44 u_1 + 88 u_2 + 5 u_3 - 180 u_4 + 24 u_5\\ 34 u_1 - 68 u_2 - 3 u_3 + 140 u_4 - 18 u_5\\ 12 u_1 - 24 u_2 - u_3 + 49 u_4 - 5 u_5} &= \colvector{v_1\\v_2\\v_3\\v_4\\v_5}\\ \begin{bmatrix} -65&128&10&-262&40\\ 36&-73&-1&151&-16\\ -44&88&5&-180&24\\ 34&-68&-3&140&-18\\ 12&-24&-1&49&-5 \end{bmatrix} \colvector{u_1\\u_2\\u_3\\u_4\\u_5} &= \colvector{v_1\\v_2\\v_3\\v_4\\v_5}\text{.} \end{align*}

We recognize this equation as a system of equations in the variables \(u_i\text{,}\) but our vector of constants contains symbols. In general, we would have to row-reduce the augmented matrix by hand, due to the symbolic final column. However, in this particular example, the \(5\times 5\) coefficient matrix is nonsingular and so has an inverse (Theorem NI, Definition MI).

\begin{equation*} \inverse{ \begin{bmatrix} -65&128&10&-262&40\\ 36&-73&-1&151&-16\\ -44&88&5&-180&24\\ 34&-68&-3&140&-18\\ 12&-24&-1&49&-5 \end{bmatrix} } = \begin{bmatrix} -47 & 92 & 1 & -181 & -14 \\ 27 & -55 & \frac{7}{2} & \frac{221}{2} & 11\\ -32 & 64 & -1 & -126 & -12\\ 25 & -50 & \frac{3}{2} & \frac{199}{2} & 9 \\ 9 & -18 & \frac{1}{2} & \frac{71}{2} & 4 \end{bmatrix} \end{equation*}

so we find that

\begin{align*} \colvector{u_1\\u_2\\u_3\\u_4\\u_5} &= \begin{bmatrix} -47 & 92 & 1 & -181 & -14 \\ 27 & -55 & \frac{7}{2} & \frac{221}{2} & 11\\ -32 & 64 & -1 & -126 & -12\\ 25 & -50 & \frac{3}{2} & \frac{199}{2} & 9 \\ 9 & -18 & \frac{1}{2} & \frac{71}{2} & 4 \end{bmatrix} \colvector{v_1\\v_2\\v_3\\v_4\\v_5}\\ &= \colvector{-47 v_1 + 92 v_2 + v_3 - 181 v_4 - 14 v_5\\ 27 v_1 - 55 v_2 + \frac{7}{2} v_3 + \frac{221}{2} v_4 + 11 v_5\\ -32 v_1 + 64 v_2 - v_3 - 126 v_4 - 12 v_5\\ 25 v_1 - 50 v_2 + \frac{3}{2} v_3 + \frac{199}{2} v_4 + 9 v_5\\ 9 v_1 - 18 v_2 + \frac{1}{2} v_3 + \frac{71}{2} v_4 + 4 v_5}\text{.} \end{align*}

This establishes that if we are given any output vector \(\vect{v}\text{,}\) we can use its components in this final expression to formulate a vector \(\vect{u}\) such that \(\lteval{T}{\vect{u}}=\vect{v}\text{.}\) So by Definition SLT we now know that \(T\) is surjective. You might try to verify this condition in its full generality (i.e. evaluate \(T\) with this final expression and see if you get \(\vect{v}\) as the result), or test it more specifically for some numerical vector \(\vect{v}\) (see Exercise SLT.C20).
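A quick numeric check of this sort (in the spirit of Exercise SLT.C20) is easy to script. In the hedged Python sketch below, the matrix, its inverse, and the chosen output vector are copied from the example, while the helper `matvec` and the particular choice of \(\vect{v}\) are our own.

```python
from fractions import Fraction as F

# the matrix of Archetype R and its inverse, copied from the text
B = [[-65, 128, 10, -262, 40],
     [36, -73, -1, 151, -16],
     [-44, 88, 5, -180, 24],
     [34, -68, -3, 140, -18],
     [12, -24, -1, 49, -5]]
Binv = [[-47, 92, 1, -181, -14],
        [27, -55, F(7, 2), F(221, 2), 11],
        [-32, 64, -1, -126, -12],
        [25, -50, F(3, 2), F(199, 2), 9],
        [9, -18, F(1, 2), F(71, 2), 4]]

def matvec(M, x):
    return [sum(a * b for a, b in zip(row, x)) for row in M]

v = [1, 2, 3, 4, 5]       # an arbitrary numeric output vector
u = matvec(Binv, v)       # the input proposed by the inverse formula
assert matvec(B, u) == v  # T(u) = v, as surjectivity promises
```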

Here is the cartoon for a surjective linear transformation. It is meant to suggest that for every output in \(V\) there is at least one input in \(U\) that is sent to that output, even though we have depicted several inputs sent to each output. The key feature of this cartoon is that there are no vectors in \(V\) without an arrow pointing to them.

Figure SLT. Surjective Linear Transformation

Let us now examine a surjective linear transformation between abstract vector spaces.

Archetype V is defined by

\begin{equation*} \ltdefn{T}{P_3}{M_{22}},\quad\lteval{T}{a+bx+cx^2+dx^3}= \begin{bmatrix} a+b & a-2c\\ d & b-d \end{bmatrix}\text{.} \end{equation*}

To establish that the linear transformation is surjective, begin by choosing an arbitrary output. In this example, we need to choose an arbitrary \(2\times 2\) matrix, say

\begin{equation*} \vect{v}=\begin{bmatrix}x&y\\z&w\end{bmatrix} \end{equation*}

and we would like to find an input polynomial

\begin{equation*} \vect{u}=a+bx+cx^2+dx^3 \end{equation*}

so that \(\lteval{T}{\vect{u}}=\vect{v}\text{.}\) So we have,

\begin{align*} \begin{bmatrix}x&y\\z&w\end{bmatrix}&=\vect{v}\\ &=\lteval{T}{\vect{u}}\\ &=\lteval{T}{a+bx+cx^2+dx^3}\\ &=\begin{bmatrix} a+b & a-2c\\ d & b-d \end{bmatrix}\text{.} \end{align*}

Matrix equality leads us to the system of four equations in the four unknowns, \(a,\,b,\,c,\,d\text{,}\)

\begin{align*} a+b&=x\\ a-2c&=y\\ d&=z\\ b-d&=w \end{align*}

which can be rewritten as a matrix equation,

\begin{equation*} \begin{bmatrix} 1 & 1 & 0 & 0\\ 1 & 0 & -2 & 0 \\ 0 & 0 & 0 & 1\\ 0 & 1 & 0 & -1 \end{bmatrix} \colvector{a\\b\\c\\d} = \colvector{x\\y\\z\\w}\text{.} \end{equation*}

The coefficient matrix is nonsingular, hence it has an inverse,

\begin{equation*} \inverse{ \begin{bmatrix} 1 & 1 & 0 & 0\\ 1 & 0 & -2 & 0 \\ 0 & 0 & 0 & 1\\ 0 & 1 & 0 & -1 \end{bmatrix} } = \begin{bmatrix} 1 & 0 & -1 & -1\\ 0 & 0 & 1 & 1\\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & -\frac{1}{2}\\ 0 & 0 & 1 & 0 \end{bmatrix} \end{equation*}

so we have

\begin{align*} \colvector{a\\b\\c\\d}&= \begin{bmatrix} 1 & 0 & -1 & -1\\ 0 & 0 & 1 & 1\\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & -\frac{1}{2}\\ 0 & 0 & 1 & 0 \end{bmatrix} \colvector{x\\y\\z\\w}\\% &= \colvector{ x-z-w\\ z+w\\ \frac{1}{2}(x-y-z-w)\\ z }\text{.} \end{align*}

So the input polynomial \(\vect{u}=(x-z-w)+(z+w)x+\frac{1}{2}(x-y-z-w)x^2+zx^3\) will yield the output matrix \(\vect{v}\text{,}\) no matter what form \(\vect{v}\) takes. This means by Definition SLT that \(T\) is surjective. All the same, let us do a concrete demonstration and evaluate \(T\) with \(\vect{u}\text{,}\)

\begin{align*} \lteval{T}{\vect{u}}&=\lteval{T}{(x-z-w)+(z+w)x+\frac{1}{2}(x-y-z-w)x^2+zx^3}\\ &= \begin{bmatrix} (x-z-w)+(z+w) & (x-z-w) - 2(\frac{1}{2}(x-y-z-w))\\ z & (z+w)-z \end{bmatrix}\\ &= \begin{bmatrix} x & y\\ z & w \end{bmatrix}\\ &=\vect{v}\text{.} \end{align*}
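This round trip can also be scripted. The plain-Python sketch below (function names are ours) encodes \(T\) and the preimage formula from the inverse matrix, then confirms \(\lteval{T}{\vect{u}}=\vect{v}\) for one arbitrary choice of matrix entries, using exact fractions for the coefficient \(c\text{.}\)

```python
from fractions import Fraction as F

def T(a, b, c, d):
    # entries of the output 2x2 matrix, read row by row
    return (a + b, a - 2*c, d, b - d)

def preimage(x, y, z, w):
    # coefficients (a, b, c, d) read off from the inverse matrix
    return (x - z - w, z + w, F(x - y - z - w, 2), z)

x, y, z, w = 5, -2, 1, 3       # an arbitrary target matrix [[x, y], [z, w]]
assert T(*preimage(x, y, z, w)) == (x, y, z, w)
```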

Subsection RLT Range of a Linear Transformation

For a linear transformation \(\ltdefn{T}{U}{V}\text{,}\) the range is a subset of the codomain \(V\text{.}\) Informally, it is the set of all outputs that the transformation creates when fed every possible input from the domain. It will have some natural connections with the column space of a matrix, so we will keep the same notation, and if you think about your objects, then there should be little confusion. Here is the careful definition.

Definition RLT. Range of a Linear Transformation.

Suppose \(\ltdefn{T}{U}{V}\) is a linear transformation. Then the range of \(T\) is the set

\begin{equation*} \rng{T}=\setparts{\lteval{T}{\vect{u}}}{\vect{u}\in U}\text{.} \end{equation*}

Example RAO. Range, Archetype O.

Archetype O is the linear transformation

\begin{equation*} \ltdefn{T}{\complex{3}}{\complex{5}},\quad \lteval{T}{\colvector{x_1\\x_2\\x_3}}= \colvector{-x_1 + x_2 - 3 x_3\\ -x_1 + 2 x_2 - 4 x_3\\ x_1 + x_2 + x_3\\ 2 x_1 + 3 x_2 + x_3\\ x_1 + 2 x_3 }\text{.} \end{equation*}

To determine the elements of \(\complex{5}\) in \(\rng{T}\text{,}\) find those vectors \(\vect{v}\) such that \(\lteval{T}{\vect{u}}=\vect{v}\) for some \(\vect{u}\in\complex{3}\text{,}\)

\begin{align*} \vect{v} =\lteval{T}{\vect{u}} &=\colvector{-u_1 + u_2 - 3 u_3\\ -u_1 + 2 u_2 - 4 u_3\\ u_1 + u_2 + u_3\\ 2 u_1 + 3 u_2 + u_3\\ u_1 + 2 u_3}\\ &= \colvector{-u_1\\-u_1\\u_1\\2 u_1\\ u_1} + \colvector{u_2\\2u_2\\u_2\\3u_2\\ 0} + \colvector{-3u_3\\-4u_3\\u_3\\u_3\\ 2 u_3} = u_1\colvector{-1\\-1\\1\\2\\1} + u_2\colvector{1\\2\\1\\3\\ 0} + u_3\colvector{-3\\-4\\1\\1\\2}\text{.} \end{align*}

This says that every output of \(T\) (in other words, the vector \(\vect{v}\)) can be written as a linear combination of the three vectors

\begin{align*} \colvector{-1\\-1\\1\\2\\1}&& \colvector{1\\2\\1\\3\\ 0}&& \colvector{-3\\-4\\1\\1\\2} \end{align*}

using the scalars \(u_1,\,u_2,\,u_3\text{.}\) Furthermore, since \(\vect{u}\) can be any element of \(\complex{3}\text{,}\) every such linear combination is an output. This means that

\begin{equation*} \rng{T}=\spn{\set{ \colvector{-1\\-1\\1\\2\\1},\, \colvector{1\\2\\1\\3\\ 0},\, \colvector{-3\\-4\\1\\1\\2} }}\text{.} \end{equation*}

The three vectors in this spanning set for \(\rng{T}\) form a linearly dependent set (check this!). So we can find a more economical presentation by any of the various methods from Section CRS and Section FS. We will place the vectors into a matrix as rows, row-reduce, toss out zero rows and appeal to Theorem BRS, so we can describe the range of \(T\) with a basis,

\begin{equation*} \rng{T}=\spn{\set{ \colvector{1\\0\\-3\\-7\\-2},\,\colvector{0\\1\\2\\5\\1} } }\text{.} \end{equation*}
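To see, without row-reducing by hand, that this two-vector basis reproduces the original spanning set, here is a small illustrative check in Python (variable names are ours). Because the basis vectors begin with the \(2\times 2\) identity pattern, the coefficients expressing any vector of the span are simply its first two entries.

```python
# the original spanning vectors for rng(T), and the claimed basis
span = [(-1, -1, 1, 2, 1), (1, 2, 1, 3, 0), (-3, -4, 1, 1, 2)]
b0 = (1, 0, -3, -7, -2)
b1 = (0, 1, 2, 5, 1)

def combo(c0, c1):
    # the linear combination c0*b0 + c1*b1
    return tuple(c0 * p + c1 * q for p, q in zip(b0, b1))

# each spanning vector is recovered using its first two entries as scalars
assert all(v == combo(v[0], v[1]) for v in span)
```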

We know that the span of a set of vectors is always a subspace (Theorem SSS), so the range computed in Example RAO is also a subspace. This is no accident; the range of a linear transformation is always a subspace.

We can apply the three-part test of Theorem TSS. First, \(\zerovector_U\in U\) and \(\lteval{T}{\zerovector_U}=\zerovector_V\) by Theorem LTTZZ, so \(\zerovector_V\in\rng{T}\) and we know that the range is nonempty.

Next, suppose that \(\vect{x},\,\vect{y}\in\rng{T}\text{.}\) Is \(\vect{x}+\vect{y}\in\rng{T}\text{?}\) If \(\vect{x},\,\vect{y}\in\rng{T}\text{,}\) then we know there are vectors \(\vect{w},\,\vect{z}\in U\) such that \(\lteval{T}{\vect{w}}=\vect{x}\) and \(\lteval{T}{\vect{z}}=\vect{y}\text{.}\) Because \(U\) is a vector space, additive closure (Property AC) implies that \(\vect{w}+\vect{z}\in U\text{.}\)

Then

\begin{align*} \lteval{T}{\vect{w}+\vect{z}}&=\lteval{T}{\vect{w}}+\lteval{T}{\vect{z}}&& \knowl{./knowl/definition-LT.html}{\text{Definition LT}}\\ &=\vect{x}+\vect{y}&& \text{Definition of }\vect{w}, \vect{z}\text{.} \end{align*}

So we have found an input, \(\vect{w}+\vect{z}\text{,}\) which when fed into \(T\) creates \(\vect{x}+\vect{y}\) as an output. This qualifies \(\vect{x}+\vect{y}\) for membership in \(\rng{T}\text{.}\) So we have additive closure.

Now suppose that \(\alpha\in\complexes\) and \(\vect{x}\in\rng{T}\text{.}\) Is \(\alpha\vect{x}\in\rng{T}\text{?}\) If \(\vect{x}\in\rng{T}\text{,}\) then there is a vector \(\vect{w}\in U\) such that \(\lteval{T}{\vect{w}}=\vect{x}\text{.}\) Because \(U\) is a vector space, scalar closure implies that \(\alpha\vect{w}\in U\text{.}\) Then

\begin{align*} \lteval{T}{\alpha\vect{w}}&=\alpha\lteval{T}{\vect{w}}&& \knowl{./knowl/definition-LT.html}{\text{Definition LT}}\\ &=\alpha\vect{x}&& \text{Definition of }\vect{w}\text{.} \end{align*}

So we have found an input (\(\alpha\vect{w}\)) which when fed into \(T\) creates \(\alpha\vect{x}\) as an output. This qualifies \(\alpha\vect{x}\) for membership in \(\rng{T}\text{.}\) So we have scalar closure and Theorem TSS tells us that \(\rng{T}\) is a subspace of \(V\text{.}\)

Let us compute another range, now that we know in advance that it will be a subspace.

Example FRAN. Full range, Archetype N.

Archetype N is the linear transformation

\begin{equation*} \ltdefn{T}{\complex{5}}{\complex{3}},\quad \lteval{T}{\colvector{x_1\\x_2\\x_3\\x_4\\x_5}}= \colvector{2 x_1 + x_2 + 3 x_3 - 4 x_4 + 5 x_5\\ x_1 - 2 x_2 + 3 x_3 - 9 x_4 + 3 x_5\\ 3 x_1 + 4 x_3 - 6 x_4 + 5 x_5}\text{.} \end{equation*}

To determine the elements of \(\complex{3}\) in \(\rng{T}\text{,}\) find those vectors \(\vect{v}\) such that \(\lteval{T}{\vect{u}}=\vect{v}\) for some \(\vect{u}\in\complex{5}\text{,}\)

\begin{align*} \vect{v}&=\lteval{T}{\vect{u}}\\ &= \colvector{ 2 u_1 + u_2 + 3 u_3 - 4 u_4 + 5 u_5\\ u_1 - 2 u_2 + 3 u_3 - 9 u_4 + 3 u_5\\ 3 u_1 + 4 u_3 - 6 u_4 + 5 u_5}\\ &= \colvector{2u_1\\u_1\\3u_1}+ \colvector{u_2\\-2u_2\\0}+ \colvector{3u_3\\3u_3\\4u_3}+ \colvector{-4u_4\\-9u_4\\-6u_4}+ \colvector{5u_5\\3u_5\\5u_5}\\ &= u_1\colvector{2\\1\\3}+ u_2\colvector{1\\-2\\0}+ u_3\colvector{3\\3\\4}+ u_4\colvector{-4\\-9\\-6}+ u_5\colvector{5\\3\\5}\text{.} \end{align*}

This says that every output of \(T\) (in other words, the vector \(\vect{v}\)) can be written as a linear combination of the five vectors

\begin{align*} \colvector{2\\1\\3}&& \colvector{1\\-2\\0}&& \colvector{3\\3\\4}&& \colvector{-4\\-9\\-6}&& \colvector{5\\3\\5} \end{align*}

using the scalars \(u_1,\,u_2,\,u_3,\,u_4,\,u_5\text{.}\) Furthermore, since \(\vect{u}\) can be any element of \(\complex{5}\text{,}\) every such linear combination is an output. This means that

\begin{equation*} \rng{T}=\spn{\set{ \colvector{2\\1\\3},\, \colvector{1\\-2\\0},\, \colvector{3\\3\\4},\, \colvector{-4\\-9\\-6},\, \colvector{5\\3\\5} }}\text{.} \end{equation*}

The five vectors in this spanning set for \(\rng{T}\) form a linearly dependent set (Theorem MVSLD). So we can find a more economical presentation by any of the various methods from Section CRS and Section FS. We will place the vectors into a matrix as rows, row-reduce, toss out zero rows and appeal to Theorem BRS, so we can describe the range of \(T\) with a (nice) basis,

\begin{equation*} \rng{T}=\spn{\set{ \colvector{1\\0\\0},\,\colvector{0\\1\\0},\,\colvector{0\\0\\1} } }=\complex{3}\text{.} \end{equation*}
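A quick way to confirm that the range is all of \(\complex{3}\) is to find three of the five spanning vectors with a nonzero determinant. The Python sketch below (our own helper, not from the text) does exactly that with the first three spanning vectors.

```python
def det3(a, b, c):
    # determinant of the 3x3 matrix with rows a, b, c (cofactor expansion)
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          - a[1] * (b[0] * c[2] - b[2] * c[0])
          + a[2] * (b[0] * c[1] - b[1] * c[0]))

# nonzero determinant: these three spanning vectors alone span C^3,
# so the remaining two are redundant and rng(T) = C^3
assert det3((2, 1, 3), (1, -2, 0), (3, 3, 4)) != 0
```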

In contrast to injective linear transformations having small (trivial) kernels (Theorem KILT), surjective linear transformations have large ranges, as indicated in the next theorem.

Theorem RSLT. Range of a Surjective Linear Transformation.

Suppose that \(\ltdefn{T}{U}{V}\) is a linear transformation. Then \(T\) is surjective if and only if \(\rng{T}=V\text{.}\)

(⇒)

By Definition RLT, we know that \(\rng{T}\subseteq V\text{.}\) To establish the reverse inclusion, assume \(\vect{v}\in V\text{.}\) Then since \(T\) is surjective (Definition SLT), there exists a vector \(\vect{u}\in U\) so that \(\lteval{T}{\vect{u}}=\vect{v}\text{.}\) However, the existence of \(\vect{u}\) gains \(\vect{v}\) membership in \(\rng{T}\text{,}\) so \(V\subseteq\rng{T}\text{.}\) Thus, \(\rng{T}=V\text{.}\)

(⇐) 

To establish that \(T\) is surjective, choose \(\vect{v}\in V\text{.}\) Since we are assuming that \(\rng{T}=V\text{,}\) \(\vect{v}\in\rng{T}\text{.}\) This says there is a vector \(\vect{u}\in U\) so that \(\lteval{T}{\vect{u}}=\vect{v}\text{,}\) i.e. \(T\) is surjective.

We are now in a position to revisit our first example in this section, Example NSAQ. In that example, we showed that Archetype Q is not surjective by constructing a vector in the codomain where no element of the domain could be used to evaluate the linear transformation to create the output, thus violating Definition SLT. Just where did this vector come from?

The short answer is that the vector

\begin{equation*} \vect{v}=\colvector{-1\\2\\3\\-1\\4} \end{equation*}

was constructed to lie outside of the range of \(T\text{.}\) How was this accomplished? First, the range of \(T\) is given by

\begin{equation*} \rng{T}=\spn{\set{ \colvector{1\\0\\0\\0\\1},\,\colvector{0\\1\\0\\0\\-1},\, \colvector{0\\0\\1\\0\\-1},\,\colvector{0\\0\\0\\1\\2} } }\text{.} \end{equation*}

Suppose an element of the range, \(\vect{v^*}\text{,}\) has its first four components equal to \(-1\text{,}\) \(2\text{,}\) \(3\text{,}\) \(-1\text{,}\) in that order. Then to be an element of \(\rng{T}\text{,}\) we would have

\begin{equation*} \vect{v^*}=(-1)\colvector{1\\0\\0\\0\\1}+(2)\colvector{0\\1\\0\\0\\-1}+(3) \colvector{0\\0\\1\\0\\-1}+(-1)\colvector{0\\0\\0\\1\\2} =\colvector{-1\\2\\3\\-1\\-8}\text{.} \end{equation*}

So the only vector in the range with these first four components specified must have \(-8\) in the fifth component. Setting the fifth component to any other value (say, 4) will result in a vector (\(\vect{v}\) in Example NSAQ) outside of the range. Any attempt to find an input for \(T\) that will produce \(\vect{v}\) as an output will be doomed to failure.
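This computation of the forced fifth component is easy to script. In the sketch below (names ours), the four scalars in the linear combination are read off from the first four components, because the basis vectors begin with the identity pattern.

```python
# basis for the range of Archetype Q, from the display above
basis = [(1, 0, 0, 0, 1), (0, 1, 0, 0, -1), (0, 0, 1, 0, -1), (0, 0, 0, 1, 2)]
first_four = (-1, 2, 3, -1)

# these four scalars are forced, so they determine the fifth component
# of any range vector whose first four components are -1, 2, 3, -1
fifth = sum(c * b[4] for c, b in zip(first_four, basis))
assert fifth == -8   # hence (-1, 2, 3, -1, 4) lies outside the range
```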

Whenever the range of a linear transformation is not the whole codomain, we can employ this device and conclude that the linear transformation is not surjective. This is another way of viewing Theorem RSLT. For a surjective linear transformation, the range is all of the codomain and there is no choice for a vector \(\vect{v}\) that lies in \(V\text{,}\) yet not in the range. For every one of the archetypes that is not surjective, there is an example presented of exactly this form.

In Example RAO the range of Archetype O was determined to be

\begin{equation*} \rng{T}=\spn{\set{ \colvector{1\\0\\-3\\-7\\-2},\,\colvector{0\\1\\2\\5\\1} } } \end{equation*}

a subspace of dimension 2 in \(\complex{5}\text{.}\) Since \(\rng{T}\neq\complex{5}\text{,}\) Theorem RSLT says \(T\) is not surjective.

The range of Archetype N was computed in Example FRAN to be

\begin{equation*} \rng{T}=\spn{\set{ \colvector{1\\0\\0},\,\colvector{0\\1\\0},\,\colvector{0\\0\\1} } }\text{.} \end{equation*}

Since the basis for this subspace is the set of standard unit vectors for \(\complex{3}\) (Theorem SUVB), we have \(\rng{T}=\complex{3}\) and by Theorem RSLT, \(T\) is surjective.

Sage SLT. Surjective Linear Transformations.

The situation in Sage for surjective linear transformations is similar to that for injective linear transformations. One distinction: what your text calls the range of a linear transformation is called the image of a transformation, obtained with the .image() method. Sage's term is more commonly used, so you are likely to see it again. As before, we consider two examples; first up is Example RAO.

Besides showing the relevant commands in action, this demonstrates one half of Theorem RSLT. Now a reprise of Example FRAN.

Previously, we have chosen elements of the codomain which have nonempty or empty preimages. We can now explain how to do this predictably. Theorem RPI explains that elements of the codomain with nonempty preimages are precisely elements of the image. Consider the non-surjective linear transformation T from above.

Now any linear combination of the basis vectors b0 and b1 must be an element of the image. Moreover, the first two slots of the resulting vector will equal the two scalars we choose to create the linear combination. But most importantly, see that the three remaining slots will be uniquely determined by these two choices. This means there is exactly one vector in the image with these values in the first two slots. So if we construct a new vector with these two values in the first two slots, and make any part of the last three slots even slightly different, we will form a vector that cannot be in the image, and will thus have a preimage that is an empty set. Whew, that is probably worth reading carefully several times, perhaps in conjunction with the example following.

Subsection SSSLT Spanning Sets and Surjective Linear Transformations

Just as injective linear transformations are allied with linear independence (Theorem ILTLI, Theorem ILTB), surjective linear transformations are allied with spanning sets.

Theorem SSRLT. Spanning Set for Range of a Linear Transformation.

Suppose that \(\ltdefn{T}{U}{V}\) is a linear transformation and \(S=\set{\vect{u}_1,\,\vect{u}_2,\,\vect{u}_3,\,\ldots,\,\vect{u}_t}\) spans \(U\text{.}\) Then \(R=\set{\lteval{T}{\vect{u}_1},\,\lteval{T}{\vect{u}_2},\,\lteval{T}{\vect{u}_3},\,\ldots,\,\lteval{T}{\vect{u}_t}}\) spans \(\rng{T}\text{.}\)

We need to establish that \(\rng{T}=\spn{R}\text{,}\) a set equality. First we establish that \(\rng{T}\subseteq\spn{R}\text{.}\) To this end, choose \(\vect{v}\in\rng{T}\text{.}\) Then there exists a vector \(\vect{u}\in U\) such that \(\lteval{T}{\vect{u}}=\vect{v}\) (Definition RLT). Because \(S\) spans \(U\text{,}\) there are scalars \(\scalarlist{a}{t}\) such that

\begin{equation*} \vect{u}=\lincombo{a}{u}{t}\text{.} \end{equation*}

Then

\begin{align*} \vect{v} &=\lteval{T}{\vect{u}}&& \knowl{./knowl/definition-RLT.html}{\text{Definition RLT}}\\ &=\lteval{T}{\lincombo{a}{u}{t}}&& \knowl{./knowl/definition-SSVS.html}{\text{Definition SSVS}}\\ &=a_1\lteval{T}{\vect{u}_1}+a_2\lteval{T}{\vect{u}_2}+a_3\lteval{T}{\vect{u}_3}+\ldots+a_t\lteval{T}{\vect{u}_t}&& \knowl{./knowl/theorem-LTLC.html}{\text{Theorem LTLC}} \end{align*}

which establishes that \(\vect{v}\in\spn{R}\) (Definition SS). So \(\rng{T}\subseteq\spn{R}\text{.}\)

To establish the opposite inclusion, choose an element of the span of \(R\text{,}\) say \(\vect{v}\in\spn{R}\text{.}\) Then there are scalars \(\scalarlist{b}{t}\) so that

\begin{align*} \vect{v} &=b_1\lteval{T}{\vect{u}_1}+b_2\lteval{T}{\vect{u}_2}+b_3\lteval{T}{\vect{u}_3}+\cdots+b_t\lteval{T}{\vect{u}_t}&& \knowl{./knowl/definition-SS.html}{\text{Definition SS}}\\ &=\lteval{T}{\lincombo{b}{u}{t}}&& \knowl{./knowl/theorem-LTLC.html}{\text{Theorem LTLC}}\text{.} \end{align*}

This demonstrates that \(\vect{v}\) is an output of the linear transformation \(T\text{,}\) so \(\vect{v}\in\rng{T}\text{.}\) Therefore \(\spn{R}\subseteq\rng{T}\text{,}\) so we have the set equality \(\rng{T}=\spn{R}\) (Definition SE). In other words, \(R\) spans \(\rng{T}\) (Definition SSVS).

Theorem SSRLT provides an easy way to begin the construction of a basis for the range of a linear transformation, since the construction of a spanning set requires simply evaluating the linear transformation on a spanning set of the domain. In practice the best choice for a spanning set of the domain would be as small as possible, in other words, a basis. The resulting spanning set for the codomain may not be linearly independent, so to find a basis for the range might require tossing out redundant vectors from the spanning set. Here is an example.

Define the linear transformation \(\ltdefn{T}{M_{22}}{P_2}\) by

\begin{equation*} \lteval{T}{ \begin{bmatrix} a&b\\c&d \end{bmatrix}} =\left(a+2b+8c+d\right)+\left(-3a+2b+5d\right)x+\left(a+b+5c\right)x^2\text{.} \end{equation*}

A convenient spanning set for \(M_{22}\) is the basis

\begin{equation*} S=\set{ \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix},\, \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix},\, \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix},\, \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} }\text{.} \end{equation*}

So by Theorem SSRLT, a spanning set for \(\rng{T}\) is

\begin{align*} R &=\set{ \lteval{T}{\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}},\, \lteval{T}{\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}},\, \lteval{T}{\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}},\, \lteval{T}{\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}} }\\ &=\set{1-3x+x^2,\,2+2x+x^2,\,8+5x^2,\,1+5x}\text{.} \end{align*}

The set \(R\) is not linearly independent, so if we desire a basis for \(\rng{T}\text{,}\) we need to eliminate some redundant vectors. Two particular relations of linear dependence on \(R\) are

\begin{align*} (-2)(1-3x+x^2)+(-3)(2+2x+x^2)+(8+5x^2)&=0+0x+0x^2=\zerovector\\ (1-3x+x^2)+(-1)(2+2x+x^2)+(1+5x)&=0+0x+0x^2=\zerovector\text{.} \end{align*}

These, individually, allow us to remove \(8+5x^2\) and \(1+5x\) from \(R\) without destroying the property that \(R\) spans \(\rng{T}\text{.}\) The two remaining vectors are linearly independent (check this!), so we can write

\begin{equation*} \rng{T}=\spn{\set{1-3x+x^2,\,2+2x+x^2}} \end{equation*}

and see that \(\dimension{\rng{T}}=2\text{.}\)
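The evaluations of \(T\) on the basis of \(M_{22}\) and the two relations of linear dependence can be verified mechanically. Here is an illustrative Python check (our own encoding), representing each polynomial by its coefficient triple (constant, \(x\text{,}\) \(x^2\)).

```python
def T(a, b, c, d):
    # coefficients (constant, x, x^2) of T applied to [[a, b], [c, d]]
    return (a + 2*b + 8*c + d, -3*a + 2*b + 5*d, a + b + 5*c)

# images of the standard basis of M22, written as coefficient triples
R = [T(1, 0, 0, 0), T(0, 1, 0, 0), T(0, 0, 1, 0), T(0, 0, 0, 1)]
assert R == [(1, -3, 1), (2, 2, 1), (8, 0, 5), (1, 5, 0)]

def lin(coeffs, vecs):
    # linear combination of polynomial coefficient triples
    return tuple(sum(c * v[i] for c, v in zip(coeffs, vecs)) for i in range(3))

# the two relations of linear dependence used to prune R down to a basis
assert lin((-2, -3, 1, 0), R) == (0, 0, 0)
assert lin((1, -1, 0, 1), R) == (0, 0, 0)
```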

Elements of the range are precisely those elements of the codomain with nonempty preimages.

(⇒) 

If \(\vect{v}\in\rng{T}\text{,}\) then there is a vector \(\vect{u}\in U\) such that \(\lteval{T}{\vect{u}}=\vect{v}\text{.}\) This qualifies \(\vect{u}\) for membership in \(\preimage{T}{\vect{v}}\text{,}\) and thus the preimage of \(\vect{v}\) is not empty.

(⇐) 

Suppose the preimage of \(\vect{v}\) is not empty, so we can choose a vector \(\vect{u}\in U\) such that \(\lteval{T}{\vect{u}}=\vect{v}\text{.}\) Then \(\vect{v}\in\rng{T}\text{.}\)

Now would be a good time to return to Figure KPI, which depicted the preimages of a non-surjective linear transformation. The vectors \(\vect{x},\,\vect{y}\in V\) were elements of the codomain whose preimages were empty, as we expect for a non-surjective linear transformation from the characterization in Theorem RPI.

Theorem SLTB. Surjective Linear Transformations and Bases.

Suppose that \(\ltdefn{T}{U}{V}\) is a linear transformation and \(B=\set{\vect{u}_1,\,\vect{u}_2,\,\vect{u}_3,\,\ldots,\,\vect{u}_m}\) is a basis of \(U\text{.}\) Then \(T\) is surjective if and only if \(C=\set{\lteval{T}{\vect{u}_1},\,\lteval{T}{\vect{u}_2},\,\lteval{T}{\vect{u}_3},\,\ldots,\,\lteval{T}{\vect{u}_m}}\) spans \(V\text{.}\)

(⇒)

Assume \(T\) is surjective. Since \(B\) is a basis, we know \(B\) is a spanning set of \(U\) (Definition B). Then Theorem SSRLT says that \(C\) spans \(\rng{T}\text{.}\) But the hypothesis that \(T\) is surjective means \(V=\rng{T}\) (Theorem RSLT), so \(C\) spans \(V\text{.}\)

(⇐) 

Assume that \(C\) spans \(V\text{.}\) To establish that \(T\) is surjective, we will show that every element of \(V\) is an output of \(T\) for some input (Definition SLT). Suppose that \(\vect{v}\in V\text{.}\) As an element of \(V\text{,}\) we can write \(\vect{v}\) as a linear combination of the spanning set \(C\text{.}\) So there are scalars, \(\scalarlist{b}{m}\text{,}\) such that

\begin{equation*} \vect{v}=b_1\lteval{T}{\vect{u}_1}+b_2\lteval{T}{\vect{u}_2}+b_3\lteval{T}{\vect{u}_3}+\cdots+b_m\lteval{T}{\vect{u}_m}\text{.} \end{equation*}

Now define the vector \(\vect{u}\in U\) by

\begin{equation*} \vect{u}=\lincombo{b}{u}{m}\text{.} \end{equation*}

Then

\begin{align*} \lteval{T}{\vect{u}}&=\lteval{T}{\lincombo{b}{u}{m}}\\ &=b_1\lteval{T}{\vect{u}_1}+b_2\lteval{T}{\vect{u}_2}+b_3\lteval{T}{\vect{u}_3}+\cdots+b_m\lteval{T}{\vect{u}_m}&& \knowl{./knowl/theorem-LTLC.html}{\text{Theorem LTLC}}\\ &=\vect{v}\text{.} \end{align*}

So, given any choice of a vector \(\vect{v}\in V\text{,}\) we can design an input \(\vect{u}\in U\) to produce \(\vect{v}\) as an output of \(T\text{.}\) Thus, by Definition SLT, \(T\) is surjective.

Subsection SLTD Surjective Linear Transformations and Dimension

Theorem SLTD. Surjective Linear Transformations and Dimension.

Suppose that \(\ltdefn{T}{U}{V}\) is a surjective linear transformation. Then \(\dimension{U}\geq\dimension{V}\text{.}\)

Suppose to the contrary that \(m=\dimension{U}\lt\dimension{V}=t\text{.}\) Let \(B\) be a basis of \(U\text{,}\) which will then contain \(m\) vectors. Apply \(T\) to each element of \(B\) to form a set \(C\) that is a subset of \(V\text{.}\) By Theorem SLTB, \(C\) is a spanning set of \(V\) with \(m\) or fewer vectors. So we have a set of \(m\) or fewer vectors that span \(V\text{,}\) a vector space of dimension \(t\text{,}\) with \(m\lt t\text{.}\) However, this contradicts Theorem G, so our assumption is false and \(\dimension{U}\geq\dimension{V}\text{.}\)

The linear transformation in Archetype T is

\begin{equation*} \ltdefn{T}{P_4}{P_5},\quad \lteval{T}{p(x)}=(x-2)p(x)\text{.} \end{equation*}

Since \(\dimension{P_4}=5\lt 6=\dimension{P_5}\text{,}\) \(T\) cannot be surjective, for then it would violate Theorem SLTD.

Notice that the previous example made no use of the actual formula defining the function. Merely a comparison of the dimensions of the domain and codomain is enough to conclude that the linear transformation is not surjective. Archetype O and Archetype P are two more examples of linear transformations that have “small” domains and “big” codomains, resulting in an inability to create all possible outputs. Thus they are non-surjective linear transformations.

Subsection CSLT Composition of Surjective Linear Transformations

In Subsection LT.NLTFO we saw how to combine linear transformations to build new linear transformations, specifically, how to build the composition of two linear transformations (Definition LTC). It will be useful later to know that the composition of surjective linear transformations is again surjective, so we prove that here.

Theorem CSLTS. Composition of Surjective Linear Transformations is Surjective.

Suppose that \(\ltdefn{T}{U}{V}\) and \(\ltdefn{S}{V}{W}\) are surjective linear transformations. Then \(\ltdefn{\left(\compose{S}{T}\right)}{U}{W}\) is a surjective linear transformation.

That the composition is a linear transformation was established in Theorem CLTLT, so we need only establish that the composition is surjective. Applying Definition SLT, choose \(\vect{w}\in W\text{.}\)

Because \(S\) is surjective, there must be a vector \(\vect{v}\in V\text{,}\) such that \(\lteval{S}{\vect{v}}=\vect{w}\text{.}\) With the existence of \(\vect{v}\) established, that \(T\) is surjective guarantees a vector \(\vect{u}\in U\) such that \(\lteval{T}{\vect{u}}=\vect{v}\text{.}\) Now,

\begin{align*} \lteval{\left(\compose{S}{T}\right)}{\vect{u}}&=\lteval{S}{\lteval{T}{\vect{u}}}&& \knowl{./knowl/definition-LTC.html}{\text{Definition LTC}}\\ &=\lteval{S}{\vect{v}}&& \text{Definition of }\vect{u}\\ &=\vect{w}&& \text{Definition of }\vect{v}\text{.} \end{align*}

This establishes that any element of the codomain (\(\vect{w}\)) can be created by evaluating \(\compose{S}{T}\) with the right input (\(\vect{u}\)). Thus, by Definition SLT, \(\compose{S}{T}\) is surjective.
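The structure of this proof can be made concrete with a toy example. Below is a hedged Python sketch (entirely our own) using two coordinate projections, which are linear and surjective; the proof's recipe of chaining preimages produces an input for the composition.

```python
# toy surjections: T projects C^4 -> C^3 and S projects C^3 -> C^2,
# each by dropping trailing coordinates
def T(u):
    return (u[0], u[1], u[2])

def S(v):
    return (v[0], v[1])

def ST(u):
    # the composition (S o T)
    return S(T(u))

# for any target w, padding with zeros gives a preimage under S o T
w = (7, -3)
assert ST((w[0], w[1], 0, 0)) == w
```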

Sage CSLT. Composition of Surjective Linear Transformations.

As we mentioned in the last section, experimenting with Sage is a worthwhile complement to other methods of learning mathematics. We have purposely avoided providing illustrations of deeper results, such as Theorem ILTB and Theorem SLTB, which you should now be equipped to investigate yourself. For completeness, and since composition will be very important in the next few sections, we will provide an illustration of Theorem CSLTS. Similar to what we did in the previous section, we choose dimensions suggested by Theorem SLTD, and then use randomly constructed matrices to form a pair of surjective linear transformations.

Reading Questions SLT Reading Questions

1.

Suppose \(\ltdefn{T}{\complex{5}}{\complex{8}}\) is a linear transformation. Why is \(T\) not surjective?

2.

What is the relationship between a surjective linear transformation and its range?

3.

There are many similarities and differences between injective and surjective linear transformations. Compare and contrast these two different types of linear transformations. (This means going well beyond just stating their definitions.)

Exercises SLT Exercises

C10.

Each archetype below is a linear transformation. Compute the range for each.

Archetype M, Archetype N, Archetype O, Archetype P, Archetype Q, Archetype R, Archetype S, Archetype T, Archetype U, Archetype V, Archetype W, Archetype X

C20.

Example SAR concludes with an expression for a vector \(\vect{u}\in\complex{5}\) that we believe will create the vector \(\vect{v}\in\complex{5}\) when used to evaluate \(T\text{.}\) That is, \(\lteval{T}{\vect{u}}=\vect{v}\text{.}\) Verify this assertion by actually evaluating \(T\) with \(\vect{u}\text{.}\) If you do not have the patience to push around all these symbols, try choosing a numerical instance of \(\vect{v}\text{,}\) compute \(\vect{u}\text{,}\) and then compute \(\lteval{T}{\vect{u}}\text{,}\) which should result in \(\vect{v}\text{.}\)

C22.

The linear transformation \(\ltdefn{S}{\complex{4}}{\complex{3}}\) is not surjective. Find an output \(\vect{w}\in\complex{3}\) that has an empty pre-image (that is, \(\preimage{S}{\vect{w}}=\emptyset\)).

\begin{equation*} \lteval{S}{\colvector{x_1\\x_2\\x_3\\x_4}}= \colvector{ 2x_1+x_2+3x_3-4x_4\\ x_1+3x_2+4x_3+3x_4\\ -x_1+2x_2+x_3+7x_4 } \end{equation*}
Solution

To find an element of \(\complex{3}\) with an empty pre-image, we will compute the range \(\rng{S}\) of the linear transformation and then find an element outside of this set.

By Theorem SSRLT we can evaluate \(S\) with the elements of a spanning set of the domain and create a spanning set for the range.

\begin{align*} \lteval{S}{\colvector{1\\0\\0\\0}}&=\colvector{2\\1\\-1} & \lteval{S}{\colvector{0\\1\\0\\0}}&=\colvector{1\\3\\2} & \lteval{S}{\colvector{0\\0\\1\\0}}&=\colvector{3\\4\\1} & \lteval{S}{\colvector{0\\0\\0\\1}}&=\colvector{-4\\3\\7} \end{align*}

So

\begin{equation*} \rng{S}=\spn{\set{ \colvector{2\\1\\-1},\, \colvector{1\\3\\2},\, \colvector{3\\4\\1},\, \colvector{-4\\3\\7} }}\text{.} \end{equation*}

This spanning set is obviously linearly dependent, so we can reduce it to a basis for \(\rng{S}\) using Theorem BRS, where the elements of the spanning set are placed as the rows of a matrix. The result is that

\begin{equation*} \rng{S}=\spn{\set{ \colvector{1\\0\\-1},\, \colvector{0\\1\\1} }}\text{.} \end{equation*}

Therefore, the unique vector in \(\rng{S}\) with a first slot equal to 6 and a second slot equal to 15 will be the linear combination

\begin{equation*} 6\colvector{1\\0\\-1}+15\colvector{0\\1\\1}=\colvector{6\\15\\9}\text{.} \end{equation*}

So, any vector with first two components equal to 6 and 15, but with a third component different from 9, such as

\begin{equation*} \vect{w}=\colvector{6\\15\\-63} \end{equation*}

will not be an element of the range of \(S\) and will therefore have an empty pre-image.

Another strategy on this problem is to guess. Almost any vector will lie outside the range of \(S\text{;}\) you would have to be unlucky to randomly choose an element of the range. This is because the codomain has dimension 3, while the range is “much smaller” at a dimension of 2. You still need to check that your guess lies outside of the range, which generally will involve solving a system of equations that turns out to be inconsistent.
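The consistency check behind the guessing strategy can be automated: \(\vect{w}\in\rng{S}\) exactly when the augmented system is consistent, i.e. when augmenting the matrix of \(S\) with \(\vect{w}\) does not raise the rank. A plain Python sketch, with a small exact-arithmetic row reduction written for this illustration:

```python
from fractions import Fraction

def rank(M):
    """Row-reduce a copy of M over the rationals and count pivot rows."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[2, 1, 3, -4], [1, 3, 4, 3], [-1, 2, 1, 7]]   # matrix of S
w = [6, 15, -63]                                    # the guessed output

aug = [row + [wi] for row, wi in zip(A, w)]
print(rank(A))    # 2: the range of S has dimension 2, as found above
print(rank(aug))  # 3: augmenting with w raises the rank, so S(u) = w
                  # has no solution and w is not in rng(S)
```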

C23.

Determine whether or not the following linear transformation \(\ltdefn{T}{\complex{5}}{P_3}\) is surjective:

\begin{align*} \lteval{T}{\colvector{a\\b\\c\\d\\e}} &= a + (b + c)x + (c + d)x^2 + (d + e)x^3\text{.} \end{align*}
Solution

The linear transformation \(T\) is surjective if for any \(p(x) = \alpha + \beta x + \gamma x^2 + \delta x^3\text{,}\) there is a vector \(\vect{u} = \colvector{a\\b\\c\\d\\e}\) in \(\complex{5}\) so that \(\lteval{T}{\vect{u}} = p(x)\text{.}\) We need to be able to solve the system

\begin{align*} a &=\alpha\\ b+c &= \beta\\ c + d &= \gamma\\ d + e &= \delta\text{.} \end{align*}

This system has an infinite number of solutions, one of which is \(a = \alpha\text{,}\) \(b = \beta\text{,}\) \(c = 0\text{,}\) \(d = \gamma\) and \(e = \delta - \gamma\text{,}\) so that

\begin{align*} \lteval{T}{\colvector{\alpha\\ \beta\\0\\ \gamma \\ \delta - \gamma}} &= \alpha + (\beta + 0)x + (0 + \gamma) x^2 + (\gamma + (\delta - \gamma)) x^3\\ &= \alpha + \beta x + \gamma x^2 + \delta x^3\\ &= p(x)\text{.} \end{align*}

Thus, \(T\) is surjective, since for every vector \(\vect{v} \in P_3\text{,}\) there exists a vector \(\vect{u} \in \complex{5}\) so that \(\lteval{T}{\vect{u}} = \vect{v}\text{.}\)
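The formula \(\vect{u}=\colvector{\alpha\\ \beta\\0\\ \gamma\\ \delta-\gamma}\) can be spot-checked on a numerical instance. The coefficient values below are an arbitrary choice for this sketch; polynomials are represented by their coefficient lists.

```python
# Spot-check of the pre-image formula: build u from (alpha, beta, gamma, delta)
# and confirm T(u) returns the coefficients of p(x).

def T(a, b, c, d, e):
    """T : C^5 -> P3, returned as the coefficient list [const, x, x^2, x^3]."""
    return [a, b + c, c + d, d + e]

alpha, beta, gamma, delta = 7, -2, 5, 11     # arbitrary target polynomial
u = (alpha, beta, 0, gamma, delta - gamma)   # the pre-image from the solution
print(T(*u))   # [7, -2, 5, 11] -- exactly the coefficients of p(x)
```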

C24.

Determine whether or not the linear transformation \(\ltdefn{T}{P_3}{\complex{5}}\) below is surjective:

\begin{align*} \lteval{T}{a + bx + cx^2 + dx^3} &= \colvector{a + b \\ b + c \\ c + d \\ a + c\\ b + d}\text{.} \end{align*}
Solution

According to Theorem SLTD, if a linear transformation \(\ltdefn{T}{U}{V}\) is surjective, then \(\dimension{U}\ge\dimension{V}\text{.}\) In this example, \(U = P_3\) has dimension 4, and \(V = \complex{5}\) has dimension 5, so \(T\) cannot be surjective. (There is no way \(T\) can “expand” the domain \(P_3\) to fill the codomain \(\complex{5}\text{.}\))

C25.

Define the linear transformation

\begin{equation*} \ltdefn{T}{\complex{3}}{\complex{2}},\quad \lteval{T}{\colvector{x_1\\x_2\\x_3}}=\colvector{2x_1-x_2+5x_3\\-4x_1+2x_2-10x_3}\text{.} \end{equation*}

Find a basis for the range of \(T\text{,}\) \(\rng{T}\text{.}\) Is \(T\) surjective?

Solution

To find the range of \(T\text{,}\) apply \(T\) to the elements of a spanning set for \(\complex{3}\) as suggested in Theorem SSRLT. We will use the standard basis vectors (Theorem SUVB).

\begin{equation*} \rng{T}= \spn{\set{\lteval{T}{\vect{e}_1},\,\lteval{T}{\vect{e}_2},\,\lteval{T}{\vect{e}_3}}}= \spn{\set{\colvector{2\\-4},\,\colvector{-1\\2},\,\colvector{5\\-10}}}\text{.} \end{equation*}

Each of these vectors is a scalar multiple of the others, so we can toss two of them in reducing the spanning set to a linearly independent set (or be more careful and apply Theorem BCS on a matrix with these three vectors as columns). The result is the basis of the range,

\begin{equation*} \set{\colvector{1\\-2}}\text{.} \end{equation*}

Since \(\rng{T}\) has dimension \(1\text{,}\) and the codomain has dimension \(2\text{,}\) they cannot be equal. So Theorem RSLT says \(T\) is not surjective.
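The claim that each image \(\lteval{T}{\vect{e}_i}\) is a scalar multiple of \(\colvector{1\\-2}\) is easy to confirm mechanically; this plain Python sketch uses exact rational arithmetic.

```python
from fractions import Fraction

# Verify that every image T(e_i) is a scalar multiple of the basis vector
# (1, -2), so rng(T) is the one-dimensional span of that single vector.

images = [(2, -4), (-1, 2), (5, -10)]   # T(e1), T(e2), T(e3) from above
basis = (1, -2)

for v in images:
    scalar = Fraction(v[0], basis[0])   # the only candidate multiplier
    print(scalar, v == tuple(scalar * b for b in basis))
```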

C26.

Let \(\ltdefn{T}{\complex{3}}{\complex{3}} \) be given by \(\lteval{T}{\colvector{a\\b\\c}} = \colvector{a + b + 2c\\ 2c\\ a + b + c}\text{.}\) Find a basis of \(\rng{T}\text{.}\) Is \(T\) surjective?

Solution

The range of \(T\) is

\begin{align*} \rng{T} &= \setparts{\colvector{a + b + 2c\\ 2c\\a + b + c}}{a,b,c\in\complexes}\\ &= \setparts{a\colvector{1\\0\\1} + b \colvector{1\\0\\1}+ c\colvector{2\\2\\1}}{a,b,c\in\complexes}\\ &= \spn{\set{\colvector{1\\0\\1},\colvector{2\\2\\1}}}\text{.} \end{align*}

Since the vectors \(\colvector{1\\0\\1}\) and \(\colvector{2\\2\\1}\) are linearly independent (why?), a basis of \(\rng{T}\) is \(\set{\colvector{1\\0\\1},\colvector{2\\2\\1}}\text{.}\) Since the dimension of the range is 2 and the dimension of the codomain is 3, \(T\) is not surjective.
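The “(why?)” above has a short answer: two vectors are linearly dependent exactly when one is a scalar multiple of the other, and no scalar \(t\) satisfies \(t\colvector{1\\0\\1}=\colvector{2\\2\\1}\) (the second components already force \(t\cdot 0=2\)). A sketch of that test, with an illustrative helper name:

```python
# Check whether v is a scalar multiple of a nonzero vector u: every nonzero
# slot of u must produce the same ratio, and zero slots of u force zeros in v.

def is_multiple(u, v):
    """True when v = t * u for a single scalar t (u assumed nonzero)."""
    ratios = {vi / ui for ui, vi in zip(u, v) if ui != 0}
    zeros_ok = all(vi == 0 for ui, vi in zip(u, v) if ui == 0)
    return zeros_ok and len(ratios) == 1

print(is_multiple((1, 0, 1), (2, 2, 1)))   # False: the pair is independent
print(is_multiple((1, 2, 3), (2, 4, 6)))   # True: a dependent pair, for contrast
```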

C27.

Let \(\ltdefn{T}{\complex{3}}{\complex{4}}\) be given by \(\lteval{T}{\colvector{a\\b\\c}} = \colvector{a + b -c\\ a - b + c\\ -a + b + c\\a + b + c}\text{.}\) Find a basis of \(\rng{T}\text{.}\) Is \(T\) surjective?

Solution

The range of \(T\) is

\begin{align*} \rng{T} &= \setparts{\colvector{a + b -c \\ a - b + c\\ -a + b + c\\a + b + c}}{a,b,c\in\complexes}\\ &= \setparts{a\colvector{1\\1\\-1\\1} + b \colvector{1\\-1\\1\\1} + c\colvector{-1\\1\\1\\1}}{a,b,c\in\complexes}\\ &= \spn{\set{\colvector{1\\1\\-1\\1},\colvector{1\\-1\\1\\1}, \colvector{-1\\1\\1\\1}}}\text{.} \end{align*}

By row reduction (not shown), we can see that the set

\begin{gather*} \set{ \colvector{1\\1\\-1\\1},\, \colvector{1\\-1\\1\\1},\, \colvector{-1\\1\\1\\1} } \end{gather*}

is linearly independent, so is a basis of \(\rng{T}\text{.}\) Since the dimension of the range is 3 and the dimension of the codomain is 4, \(T\) is not surjective. (We should have anticipated that \(T\) was not surjective since the dimension of the domain is smaller than the dimension of the codomain.)

C28.

Let \(\ltdefn{T}{\complex{4}}{ M_{22}}\) be given by \(\lteval{T}{\colvector{a\\b\\c\\d}} = \begin{bmatrix} a + b & a + b + c\\ a + b + c & a + d\end{bmatrix}\text{.}\) Find a basis of \(\rng{T}\text{.}\) Is \(T\) surjective?

Solution

The range of \(T\) is

\begin{align*} \rng{T} &= \setparts{ \begin{bmatrix} a + b & a + b + c\\ a + b + c& a + d \end{bmatrix}}{a,b,c,d\in\complexes}\\ &= \setparts{ a\begin{bmatrix} 1 & 1 \\1 & 1 \end{bmatrix} + b\begin{bmatrix} 1 & 1 \\1 & 0 \end{bmatrix} + c\begin{bmatrix} 0 & 1 \\1 & 0 \end{bmatrix} + d\begin{bmatrix} 0 & 0 \\0 & 1 \end{bmatrix} }{a,b,c,d\in\complexes}\\ &=\spn{\set{ \begin{bmatrix} 1 & 1 \\1 & 1 \end{bmatrix}, \begin{bmatrix} 1 & 1 \\1 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\1 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\0 & 1 \end{bmatrix}}}\\ &=\spn{\set{ \begin{bmatrix} 1 & 1 \\1 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\1 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\0 & 1 \end{bmatrix} }}\text{.} \end{align*}

Can you explain the last equality above?

These three matrices are linearly independent, so a basis of \(\rng{T}\) is

\begin{equation*} \set{ \begin{bmatrix} 1 & 1 \\1 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\1 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\0 & 1 \end{bmatrix}}\text{.} \end{equation*}

Thus, \(T\) is not surjective, since the range has dimension 3 which is shy of \(\dimension{M_{22}}=4\text{.}\) (Notice that the range is actually the subspace of symmetric \(2 \times 2\) matrices in \(M_{22}\text{.}\))
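The parenthetical observation, that every output of \(T\) is a symmetric matrix, can be checked directly: the two off-diagonal entries are both \(a+b+c\). A quick sketch with arbitrary sample inputs:

```python
# Every output of T is symmetric, since both off-diagonal entries equal a + b + c.

def T(a, b, c, d):
    """T : C^4 -> M22, returned as a nested list of rows."""
    return [[a + b, a + b + c], [a + b + c, a + d]]

M = T(3, -1, 4, 2)              # arbitrary input chosen for the sketch
print(M[0][1] == M[1][0])       # True: the output is symmetric
```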

C29.

Let \(\ltdefn{T}{P_2}{P_4}\) be given by \(\lteval{T}{p(x)} = x^2 p(x)\text{.}\) Find a basis of \(\rng{T}\text{.}\) Is \(T\) surjective?

Solution

If we transform the basis of \(P_2\text{,}\) then Theorem SSRLT guarantees we will have a spanning set of \(\rng{T}\text{.}\) A basis of \(P_2\) is \(\set{1, x, x^2}\text{.}\) If we transform the elements of this set, we get the set \(\set{x^2, x^3, x^4}\) which is a spanning set for \(\rng{T}\text{.}\) These three vectors are linearly independent, so \(\set{x^2, x^3, x^4}\) is a basis of \(\rng{T}\text{.}\)

Since \(\rng{T}\) has dimension \(3\text{,}\) and the codomain has dimension \(5\text{,}\) they cannot be equal. So Theorem RSLT says \(T\) is not surjective.

C30.

Let \(\ltdefn{T}{P_4}{P_3}\) be given by \(\lteval{T}{p(x)} = p^\prime(x)\text{,}\) where \(p^\prime(x)\) is the derivative. Find a basis of \(\rng{T}\text{.}\) Is \(T\) surjective?

Solution

If we transform the basis of \(P_4\text{,}\) then Theorem SSRLT guarantees we will have a spanning set of \(\rng{T}\text{.}\) A basis of \(P_4\) is \(\set{1, x, x^2, x^3, x^4}\text{.}\) If we transform the elements of this set, we get the set \(\set{0, 1, 2x, 3x^2, 4x^3}\) which is a spanning set for \(\rng{T}\text{.}\) Reducing this to a linearly independent set, we find that \(\{1, 2x, 3x^2, 4x^3\}\) is a basis of \(\rng{T}\text{.}\) Since \(\rng{T}\) and \(P_3\) both have dimension 4, \(T\) is surjective.
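The computation in C30 translates directly to coefficient lists: represent \(p\in P_4\) by \([c_0,\dots,c_4]\), differentiate, and count the nonzero images of the basis. A plain Python sketch:

```python
# Images of the basis of P4 under differentiation span rng(T); after dropping
# the zero vector, 4 independent images remain, matching dim(P3) = 4.

def deriv(coeffs):
    """Derivative of sum(coeffs[k] * x^k), as a coefficient list one shorter."""
    return [k * coeffs[k] for k in range(1, len(coeffs))]

basis_P4 = [[1, 0, 0, 0, 0], [0, 1, 0, 0, 0], [0, 0, 1, 0, 0],
            [0, 0, 0, 1, 0], [0, 0, 0, 0, 1]]      # 1, x, x^2, x^3, x^4
images = [deriv(p) for p in basis_P4]
print(images)  # [[0,0,0,0], [1,0,0,0], [0,2,0,0], [0,0,3,0], [0,0,0,4]]

spanning = [v for v in images if any(v)]   # discard the zero vector
print(len(spanning))   # 4 = dim(P3), consistent with T being surjective
```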

C40.

Show that the linear transformation \(T\) is not surjective by finding an element of the codomain, \(\vect{v}\text{,}\) such that there is no vector \(\vect{u}\) with \(\lteval{T}{\vect{u}}=\vect{v}\text{.}\)

\begin{equation*} \ltdefn{T}{\complex{3}}{\complex{3}},\quad \lteval{T}{\colvector{a\\b\\c}}= \colvector{2a+3b-c\\2b-2c\\a-b+2c} \end{equation*}
Solution

We wish to find an output vector \(\vect{v}\) that has no associated input. This is the same as requiring that there is no solution to the equality

\begin{equation*} \vect{v}=\lteval{T}{\colvector{a\\b\\c}}=\colvector{2a+3b-c\\2b-2c\\a-b+2c} =a\colvector{2\\0\\1}+b\colvector{3\\2\\-1}+c\colvector{-1\\-2\\2}\text{.} \end{equation*}

In other words, we would like to find an element of \(\complex{3}\) not in the set

\begin{equation*} Y=\spn{\set{\colvector{2\\0\\1},\,\colvector{3\\2\\-1},\,\colvector{-1\\-2\\2}}}\text{.} \end{equation*}

If we make these vectors the rows of a matrix, and row-reduce, Theorem BRS provides an alternate description of \(Y\text{,}\)

\begin{equation*} Y=\spn{\set{\colvector{2\\0\\1},\,\colvector{0\\4\\-5}}}\text{.} \end{equation*}

If we add these vectors together, and then change the third component of the result, we will create a vector that lies outside of \(Y\text{,}\) say \(\vect{v}=\colvector{2\\4\\9}\text{.}\)
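Membership in \(Y\) is easy to test with the reduced spanning set: the first two slots of a candidate pin down the coefficients \(a\) and \(b\text{,}\) so only the third slot remains to be checked. A sketch, with an illustrative helper name:

```python
# Test membership in Y = span{(2, 0, 1), (0, 4, -5)}: any element of Y is
# a*(2, 0, 1) + b*(0, 4, -5), forcing a = v1/2 and b = v2/4, so membership
# comes down to the third component.

def in_Y(v):
    a, b = v[0] / 2, v[1] / 4          # pinned down by the first two slots
    return a * 1 + b * (-5) == v[2]    # third slot must then agree

print(in_Y((2, 4, 9)))    # False: v = (2, 4, 9) has an empty pre-image
print(in_Y((2, 4, -4)))   # True: the unaltered sum of the two spanning vectors
```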

M60.

Suppose \(U\) and \(V\) are vector spaces. Define the function \(\ltdefn{Z}{U}{V}\) by \(\lteval{Z}{\vect{u}}=\zerovector_{V}\) for every \(\vect{u}\in U\text{.}\) Then by Exercise LT.M60, \(Z\) is a linear transformation. Formulate a condition on \(V\) that is equivalent to \(Z\) being a surjective linear transformation. In other words, fill in the blank to complete the following statement (and then give a proof): \(Z\) is surjective if and only if \(V\) is . (See Exercise ILT.M60, Exercise IVLT.M60.)

T15.

Suppose that \(\ltdefn{T}{U}{V}\) and \(\ltdefn{S}{V}{W}\) are linear transformations. Prove the following relationship between ranges,

\begin{equation*} \rng{\compose{S}{T}}\subseteq\rng{S}\text{.} \end{equation*}
Solution

This question asks us to establish that one set (\(\rng{\compose{S}{T}}\)) is a subset of another (\(\rng{S}\)). Choose an element in the “smaller” set, say \(\vect{w}\in\rng{\compose{S}{T}}\text{.}\) Then we know that there is a vector \(\vect{u}\in U\) such that

\begin{equation*} \vect{w}=\lteval{\left(\compose{S}{T}\right)}{\vect{u}}=\lteval{S}{\lteval{T}{\vect{u}}}\text{.} \end{equation*}

Now define \(\vect{v}=\lteval{T}{\vect{u}}\text{,}\) so that then

\begin{equation*} \lteval{S}{\vect{v}}=\lteval{S}{\lteval{T}{\vect{u}}}=\vect{w}\text{.} \end{equation*}

This statement is sufficient to show that \(\vect{w}\in\rng{S}\text{,}\) so \(\vect{w}\) is an element of the “larger” set, and \(\rng{\compose{S}{T}}\subseteq\rng{S}\text{.}\)

T20.

Suppose that \(A\) is an \(m\times n\) matrix. Define the linear transformation \(T\) by

\begin{equation*} \ltdefn{T}{\complex{n}}{\complex{m}},\quad \lteval{T}{\vect{x}}=A\vect{x}\text{.} \end{equation*}

Prove that the range of \(T\) equals the column space of \(A\text{,}\) \(\rng{T}=\csp{A}\text{.}\)

Solution

This is an equality of sets, so we want to establish two subset conditions (Definition SE).

First, show \(\csp{A}\subseteq\rng{T}\text{.}\) Choose \(\vect{y}\in\csp{A}\text{.}\) Then by Definition CSM and Definition MVP there is a vector \(\vect{x}\in\complex{n}\) such that \(A\vect{x}=\vect{y}\text{.}\) Then

\begin{align*} \lteval{T}{\vect{x}}&=A\vect{x}&& \text{Definition of }T\\ &=\vect{y}\text{.} \end{align*}

This statement qualifies \(\vect{y}\) as a member of \(\rng{T}\) (Definition RLT), so \(\csp{A}\subseteq\rng{T}\text{.}\)

Now, show \(\rng{T}\subseteq\csp{A}\text{.}\) Choose \(\vect{y}\in\rng{T}\text{.}\) Then by Definition RLT, there is a vector \(\vect{x}\) in \(\complex{n}\) such that \(\lteval{T}{\vect{x}}=\vect{y}\text{.}\) Then

\begin{align*} A\vect{x} &=\lteval{T}{\vect{x}}&& \text{Definition of }T\\ &=\vect{y}\text{.} \end{align*}

So by Definition CSM and Definition MVP, \(\vect{y}\) qualifies for membership in \(\csp{A}\) and so \(\rng{T}\subseteq\csp{A}\text{.}\)
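The heart of T20 is that evaluating \(T\) on the standard basis recovers exactly the columns of \(A\text{,}\) a spanning set for \(\csp{A}\text{.}\) A numeric sketch; the matrix below is an arbitrary choice for illustration.

```python
# T(e_j) = A e_j is the j-th column of A, so the images of the standard basis
# are precisely the spanning set of csp(A).

def matvec(A, x):
    """Multiply matrix A (list of rows) by vector x."""
    return [sum(row[j] * x[j] for j in range(len(x))) for row in A]

A = [[1, 2, 0], [0, 1, 3]]     # a 2 x 3 matrix, so T : C^3 -> C^2
n = len(A[0])
std_basis = [[1 if i == j else 0 for i in range(n)] for j in range(n)]

images = [matvec(A, e) for e in std_basis]
columns = [[A[i][j] for i in range(len(A))] for j in range(n)]
print(images == columns)   # True: T(e_j) is the j-th column of A
```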