The neat thing about linear algebra in general is that some seemingly simple concepts can be interpreted in a bunch of different ways, and can be shown to represent different ideas or different problems. And that's what I'm going to do in this video.
I'm going to explore the nullspace, or even better, I'm going to explore the relationship where I have some matrix A times some vector x, and that product is equal to the 0 vector.
And we, of course, saw in the last few videos that the nullspace of A is equal to all of the vectors x in Rn.
So this will have n components.
This would have to be an m by n matrix.
If this were an m by a matrix, I'd say all of the vectors in Ra.
So this number right here has to be the same as that number
in order for the matrix vector multiplication to be valid.
But the nullspace of A is all of the vectors in Rn that
satisfy this equation.
Where if I take A and I multiply it times any one of
the vectors in the nullspace, I should get the 0 vector.
And this is going to have m components, and we've seen that in previous videos. I'll put it this way: the 0 vector is going to be a member of Rm.
So that's what our nullspace is.
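Just to make that definition concrete, here's a quick numerical sketch. The matrix here is my own made-up example, not one from the video; the third column is the sum of the first two, which makes it easy to exhibit a nullspace member:

```python
import numpy as np

# A hypothetical 2x3 matrix (m = 2, n = 3). The third column equals
# the sum of the first two, so the nullspace contains a nonzero vector.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 9.0]])

# x = (1, 1, -1) gives col1 + col2 - col3, which is the zero vector,
# so x is a member of the nullspace of A.
x = np.array([1.0, 1.0, -1.0])

def in_nullspace(A, x, tol=1e-12):
    """Check whether A @ x is the zero vector in R^m."""
    return bool(np.all(np.abs(A @ x) < tol))

print(in_nullspace(A, x))                          # True
print(in_nullspace(A, np.array([1.0, 0.0, 0.0])))  # False
```

The tolerance is there because floating-point arithmetic rarely gives an exact zero for less tidy matrices.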
Let's explore it a little bit.
We know already that our matrix can be rewritten like this.
We could just write it as a set of column vectors.
I could say this right here, that's v1, then I'm going to
have v2, and I have n columns.
So this last column right here is going to be v sub n.
So if I define my vectors this way, that's the first vector, that's the second vector, then I can rewrite my matrix A.
I could say A is equal to just a bunch of column vectors.
v1, v2, all the way to vn.
And we multiply this matrix times a vector x, with components x1, x2, all the way to xn. We've seen in the past, in the matrix vector product definition video, that this can be interpreted as, and this actually just comes straight out of the definition, the same thing as x1 times the first column, plus x2 times the second column, and you just keep adding them up, all the way to xn times the nth column. This comes straight out of our definition of matrix vector products.
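You can check that column-combination view numerically. This is a sketch with an arbitrary example matrix of my own, not one from the video:

```python
import numpy as np

# A @ x equals x1*v1 + x2*v2 + ... + xn*vn, where the v's are the
# columns of A. Matrix and vector here are arbitrary examples.
A = np.array([[1.0, 0.0, 2.0],
              [3.0, 1.0, 4.0]])
x = np.array([2.0, -1.0, 5.0])

# Build the same product explicitly as a linear combination of columns.
combo = sum(x[i] * A[:, i] for i in range(A.shape[1]))

print(A @ x)   # [12. 25.]
print(combo)   # [12. 25.], the same vector
```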
Now, if we're saying that Ax is equal to 0, we're looking for the solution set to that. If we're looking for the solution set to Ax is equal to the 0 vector, that means we're trying to find the solution set where this sum equals 0.
We want to figure out the x1's, x2's, x3's, all the way
to xn's, that make this equal the 0 vector.
What are we doing?
We're taking linear combinations
of our column vectors.
We're taking linear combinations of our column
vectors, and seeing if we can take some linear combination
and get it to the 0 vector.
Now, this should start ringing bells in your head.
This little equation, or this little expression right here,
should start ringing bells.
This was part of how we defined linear independence. We said that this was the definition of linear independence, or we proved that this fell out of the definition of linear independence: if I have a
bunch of vectors, v1, v2, all the way to vn, we say that
they are linearly independent.
There's kind of a non-mathematical way of describing it, though I guess this is mathematical as well: look, none of those vectors can be represented as a combination of the other ones.
And then we showed that that means that the only solution to this equation would be that x1, x2, all of the coefficients on this, have to be equal to 0.
That this is the only solution.
Linear independence means that this is the only solution to
this equation right now.
If the only way that you get the 0 vector, by taking a combination of all of these column vectors, is to have all of these guys equal 0, then you are linearly independent.
Likewise, if v1, v2, all the way to vn are linearly
independent, then the only solution to this is for these
coefficients to be 0.
And we saw that in our video on linear independence.
Now, if all of these coefficients are 0,
what does that mean?
That means that our vector x is the 0 vector,
and only the 0 vector.
That's the only solution.
So we have something interesting here.
If our column vectors are linearly independent, if v1,
v2, all the way to vn, are linearly independent, then
that means that the only solution to Ax equals 0, is
that x has to be equal to 0 vector.
Or put another way, the solution set of this equation, which is really just the nullspace, since the nullspace is all of the x's that satisfy this equation, contains only the 0 vector. So the nullspace of A has to only contain the 0 vector.
So that's an interesting result.
If we're linearly independent, then the nullspace of A only
contains the 0 vector.
Which is another way of saying what I already wrote down, that x1, x2, all of them, have to be equal to 0.
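Here's a quick numerical way to check that equivalence. This is a sketch using the rank of the matrix: independent columns mean the rank equals the number of columns n, which is exactly when Ax = 0 forces x = 0. The matrices are made-up examples:

```python
import numpy as np

# Columns independent <=> rank(A) = n <=> nullspace of A is just {0}.
A_indep = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [1.0, 1.0]])   # 3x2, independent columns
A_dep = np.array([[1.0, 2.0],
                  [2.0, 4.0],
                  [3.0, 6.0]])     # second column = 2 * first column

def nullspace_is_trivial(A):
    """True when the only solution to A x = 0 is x = 0."""
    return bool(np.linalg.matrix_rank(A) == A.shape[1])

print(nullspace_is_trivial(A_indep))  # True
print(nullspace_is_trivial(A_dep))    # False
```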
Now if I were to multiply this equation out and get it into
reduced row echelon form, what does that mean?
We saw in a previous video that the nullspace of A is
equal to the nullspace of the reduced row echelon form of A.
And the nullspace of A contains only the 0 vector, because its column vectors are linearly independent, and that means that the nullspace of the reduced row echelon form of A must also contain only the 0 vector.
And that means that if I take the reduced row echelon form of A, and I multiply that times x, and I want to solve this equation, the only solution right here is x is equal to the 0 vector.
And if you think about what that means, if this is the
only solution, that means that this reduced row echelon form
has no free variables.
It literally would just have to look like this.
So this is x: x1, x2, all the way to xn. In order for the reduced row echelon form of A to have a unique solution, and that unique solution being 0, the reduced row echelon form is going to have to look like this: 1 times x1 plus 0 times all the other ones, so you're going to have a bunch of 0's, and you're going to have 1 times x2, plus 0's times everything else. And those 1's are going to go all the way down the diagonal, so it's going to look like that, and then that is going to be equal to the 0 vector.
And this is going to be a square matrix, where this has
to be n, and this has to be n.
How do I know that?
Because I said that x1, x2, and all of these have to be
equal to 0.
So they have to be equal to 0.
If I just write them as a system of equations, if I
write x1 is equal to 0, x2 is equal to 0, x3 is equal to 0,
all the way to xn is equal to 0.
This system of equations, if I wrote it as an augmented matrix (remember, this is 1 times x1 plus 0's, 1 times x2 plus 0's, and so on), and we've done this multiple times, would look like this: a 1 followed by a bunch of 0's in each row, with the 1's going down the diagonal, and then a column of 0's on the right.
So that's where I'm getting it from.
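You can watch this happen computationally. Here's a minimal Gauss-Jordan sketch of my own (not a library routine from the video), which reduces a matrix to reduced row echelon form; for a square matrix with independent columns it returns the identity:

```python
import numpy as np

def rref(A, tol=1e-12):
    """Plain Gauss-Jordan reduction to reduced row echelon form."""
    R = A.astype(float).copy()
    rows, cols = R.shape
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        # Pick the largest entry in this column at or below pivot_row.
        pivot = pivot_row + int(np.argmax(np.abs(R[pivot_row:, col])))
        if abs(R[pivot, col]) < tol:
            continue                    # free column, no pivot here
        R[[pivot_row, pivot]] = R[[pivot, pivot_row]]
        R[pivot_row] /= R[pivot_row, col]
        for r in range(rows):
            if r != pivot_row:
                R[r] -= R[r, col] * R[pivot_row]
        pivot_row += 1
    return R

# A square matrix with linearly independent columns: its RREF is the
# identity, 1's down the diagonal and 0's everywhere else.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(rref(A))   # [[1. 0.], [0. 1.]]
```

If the columns were dependent, a column would lose its pivot and you'd see a free variable instead of a 1 on the diagonal.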
If we are linearly independent, the nullspace of
A is going to be just a 0 vector, and if the nullspace
of A is just a 0 vector, then the nullspace of the reduced
row echelon form is only the 0 vector.
The only solution is all of the x's equal to 0.
Which means the reduced row echelon form of A has to essentially just be 1's down the diagonal, with 0's everywhere else.
So anyway, I just want to make this-- this is kind of a neat
by-product of an
interpretation of the nullspace.
Let me write that.
Let me summarize our results.
If the nullspace of A just equals the 0 vector, then, and you can go both ways here, that's true if and only if the column vectors of A are linearly independent. And all of that's true if and only if x1, x2, all of these, are equal to 0, and this is the only solution.
And then that implies, though I didn't do it as precisely as I would have liked, that the reduced row echelon form of A is essentially going to be a square n by n matrix.
And, by the way, this can only be true if we're dealing with
an n by n matrix to begin with.
And maybe I'll do that a little bit more precisely in a future video.
But then the reduced row echelon form of A is going to
have to look like this, just a bunch of 1's down the
diagonal, with 0's everywhere else.
And these all imply each other.
Now, what if the nullspace of A contains some other vectors?
Well, then we would have to say that the column vectors of
A are linearly dependent.
And if they're linearly dependent, then we wouldn't
have a reduced row echelon form of A that looked like
this, you would have something that would have some free variables that allow you to create more solutions there.
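To see that dependent case concretely, here's a sketch with a made-up 3x3 matrix whose third column is the sum of the first two, so a nonzero vector lands in the nullspace and the rank falls short of n:

```python
import numpy as np

# Dependent columns: col3 = col1 + col2, so x = (1, 1, -1) is a nonzero
# member of the nullspace, and the solution set is bigger than {0}.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x = np.array([1.0, 1.0, -1.0])

print(A @ x)                          # [0. 0. 0.], the zero vector
print(np.linalg.matrix_rank(A) < 3)   # True: rank deficit => free variable
```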
But anyway, I just wanted to give you this angle on how you can interpret the nullspace and how it relates to linear independence.