Inverse matrices, column space and null space | Essence of linear algebra, chapter 7

As you can probably tell by now, the bulk of this series is on understanding matrix and vector operations through that more visual lens of linear transformations. This video is no exception, describing the concepts of inverse matrices, column space, rank, and null space through that lens. A forewarning though: I’m not going to talk about the methods for actually computing these things, and some would argue that that’s pretty important. There are a lot of very good resources for learning those methods outside this series. Keywords: “Gaussian elimination” and “Row echelon form.” I think most of the value that I actually have to add here is on the intuition half. Plus, in practice, we usually get software to compute this stuff for us anyway.
First, a few words on the usefulness of linear algebra. By now, you already have a hint for how it’s used in describing the manipulation of space, which is useful for things like computer graphics and robotics. But one of the main reasons that linear algebra is more broadly applicable, and required for just about any technical discipline, is that it lets us solve certain systems of equations.

When I say “system of equations,” I mean you have a list of variables, things you don’t know, and a list of equations relating them. In a lot of situations, those equations can get very complicated, but, if you’re lucky, they might take on a certain special form. Within each equation, the only thing happening to each variable is that it’s scaled by some constant, and the only thing happening to each of those scaled variables is that they’re added to each other. So, no exponents or fancy functions, or multiplying two variables together; things like that. The typical way to organize this sort of special system of equations is to throw all the variables on the left, and put any lingering constants on the right. It’s also nice to vertically line up the common variables, and to do that you might need to throw in some zero coefficients whenever the variable doesn’t show up in one of the equations. This is called a “linear system of equations.”
You might notice that this looks a lot like matrix-vector multiplication. In fact, you can package all of the equations together into a single vector equation, where you have the matrix containing all of the constant coefficients, and a vector containing all of the variables, and their matrix-vector product equals some different constant vector. Let’s name that constant matrix A, denote the vector holding the variables with a boldface x, and call the constant vector on the right-hand side v.
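To make that packaging concrete, here is an illustrative example (coefficients chosen arbitrarily, not taken from the video): a three-equation linear system on the left, and the same information bundled into the single equation Ax = v on the right.

```latex
\begin{aligned}
2x + 5y + 3z &= -3 \\
4x + 0y + 8z &= 0 \\
1x + 3y + 0z &= 2
\end{aligned}
\qquad\Longleftrightarrow\qquad
\underbrace{\begin{bmatrix} 2 & 5 & 3 \\ 4 & 0 & 8 \\ 1 & 3 & 0 \end{bmatrix}}_{A}
\underbrace{\begin{bmatrix} x \\ y \\ z \end{bmatrix}}_{\mathbf{x}}
=
\underbrace{\begin{bmatrix} -3 \\ 0 \\ 2 \end{bmatrix}}_{\mathbf{v}}
```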
This is more than just a notational trick to get our system of equations written on one line. It sheds light on a pretty cool geometric interpretation for the problem. The matrix A corresponds with some linear transformation, so solving Ax = v means we’re looking for a vector x which, after applying the transformation, lands on v. Think about what’s happening here for a moment. You can hold in your head this really complicated idea of multiple variables all intermingling with each other just by thinking about squishing and morphing space and trying to figure out which vector lands on another. Cool, right?
To start simple, let’s say you have a system with two equations and two unknowns. This means that the matrix A is a 2×2 matrix, and v and x are each two-dimensional vectors. Now, how we think about the solutions to this equation depends on whether the transformation associated with A squishes all of space into a lower dimension, like a line or a point, or if it leaves everything spanning the full two dimensions where it started. In the language of the last video, we subdivide into the case where A has zero determinant, and the case where A has nonzero determinant.
Let’s start with the most likely case, where the determinant is nonzero, meaning space does not get squished into a zero-area region. In this case, there will always be one and only one vector that lands on v, and you can find it by playing the transformation in reverse. Following where v goes as we rewind the tape like this, you’ll find the vector x such that A times x equals v. When you play the transformation in reverse, it actually corresponds to a separate linear transformation, commonly called “the inverse of A,” denoted A to the negative one. For example, if A was a counterclockwise rotation by 90º, then the inverse of A would be a clockwise rotation by 90º. If A was a rightward shear that pushes j-hat one unit to the right, the inverse of A would be a leftward shear that pushes j-hat one unit to the left.
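As a quick numerical check of that rotation example, here is a minimal sketch in NumPy (the library choice is mine, not something the video uses):

```python
import numpy as np

# Counterclockwise rotation by 90º: i-hat goes to (0, 1), j-hat goes to (-1, 0).
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# Numerically computed inverse: the clockwise rotation by 90º, [[0, 1], [-1, 0]].
A_inv = np.linalg.inv(A)
print(A_inv)

# Applying A and then A inverse lands every vector back where it started,
# so the product is the 2x2 identity matrix.
print(A_inv @ A)
```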
In general, A inverse is the unique transformation with the property that if you first apply A, then follow it with the transformation A inverse, you end up back where you started. Applying one transformation after another is captured algebraically with matrix multiplication, so the core property of this transformation A inverse is that A inverse times A equals the matrix that corresponds to doing nothing. The transformation that does nothing is called the “identity transformation.” It leaves i-hat and j-hat each where they are, unmoved, so its columns are one, zero, and zero, one.
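Written out symbolically for this 2D case, that core property reads

```latex
A^{-1} A = I = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},
```

where the columns of the identity matrix I are exactly the untouched i-hat and j-hat.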
Once you find this inverse, which, in practice, you do with a computer, you can solve your equation by multiplying this inverse matrix by v. And again, what this means geometrically is that you’re playing the transformation in reverse, and following v. This nonzero determinant case, which for a random choice of matrix is by far the most likely one, corresponds with the idea that if you have two unknowns and two equations, it’s almost certainly the case that there’s a single, unique solution.
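Here is what “finding the inverse with a computer” might look like for a made-up 2×2 system, again sketched in NumPy (in practice np.linalg.solve is preferred; the explicit inverse is shown only to mirror the narration):

```python
import numpy as np

# A made-up system: 2x + 1y = 5 and 1x + 3y = 10, i.e. A @ x = v.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
v = np.array([5.0, 10.0])

# The determinant is nonzero (5), so an inverse exists and the solution is unique.
print(np.linalg.det(A))

# "Play the transformation in reverse" on v by multiplying with A inverse.
x = np.linalg.inv(A) @ v
print(x)                        # [1. 3.]

# Same answer without explicitly forming the inverse (numerically preferable).
print(np.linalg.solve(A, v))    # [1. 3.]
```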
This idea also makes sense in higher dimensions, when the number of equations equals the number of unknowns. Again, the system of equations can be translated to the geometric interpretation where you have some transformation, A, and some vector, v, and you’re looking for the vector x that lands on v. As long as the transformation A doesn’t squish all of space into a lower dimension, meaning its determinant is nonzero, there will be an inverse transformation, A inverse, with the property that if you first do A, then you do A inverse, it’s the same as doing nothing. And to solve your equation, you just have to multiply that reverse transformation matrix by the vector v.

But when the determinant is zero, and the transformation associated with this system of equations squishes space into a smaller dimension, there is no inverse. You cannot un-squish a line to turn it into a plane. At least, that’s not something that a function can do. That would require transforming each individual vector into a whole line full of vectors. But functions can only take a single input to a single output.

Similarly, for three equations in three unknowns, there will be no inverse if the corresponding transformation squishes 3D space onto a plane, or even if it squishes it onto a line, or a point. Those all correspond to a determinant of zero, since any region is squished into something with zero volume. It’s still possible that a solution exists even when there is no inverse; it’s just that when your transformation squishes space onto, say, a line, you have to be lucky enough that the vector v lives somewhere on that line.
You might notice that some of these zero-determinant cases feel a lot more restrictive than others. Given a 3×3 matrix, for example, it seems a lot harder for a solution to exist when it squishes space onto a line compared to when it squishes things onto a plane, even though both of those are zero determinant. We have some language that’s a bit more specific than just saying “zero determinant.” When the output of a transformation is a line, meaning it’s one-dimensional, we say the transformation has a “rank” of one. If all the vectors land on some two-dimensional plane, we say the transformation has a “rank” of two. So the word “rank” means the number of dimensions in the output of a transformation. For instance, in the case of 2×2 matrices, rank 2 is the best that it can be. It means the basis vectors continue to span the full two dimensions of space, and the determinant is nonzero. But for 3×3 matrices, rank 2 means that we’ve collapsed, but not as much as we would have for a rank 1 situation. If a 3D transformation has a nonzero determinant, and its output fills all of 3D space, it has a rank of 3.
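For readers who want to poke at these rank numbers themselves, here is a small NumPy sketch with matrices made up for illustration:

```python
import numpy as np

# Rank 3: the columns span all of 3D space, and the determinant is nonzero.
full = np.array([[1.0, 0.0, 1.0],
                 [0.0, 2.0, 0.0],
                 [0.0, 0.0, 3.0]])

# Rank 2: the third column is the sum of the first two,
# so every output lands on a plane.
plane = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0],
                  [0.0, 0.0, 0.0]])

# Rank 1: every column is a multiple of the first, so every output lands on a line.
line = np.array([[1.0, 2.0, 3.0],
                 [2.0, 4.0, 6.0],
                 [0.0, 0.0, 0.0]])

for name, M in [("full", full), ("plane", plane), ("line", line)]:
    print(name, np.linalg.matrix_rank(M), np.linalg.det(M))
# Prints ranks 3, 2, 1; the determinant is nonzero only in the full-rank case.
```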
This set of all possible outputs for your matrix, whether it’s a line, a plane, 3D space, whatever, is called the “column space” of your matrix. You can probably guess where that name comes from. The columns of your matrix tell you where the basis vectors land, and the span of those transformed basis vectors gives you all possible outputs. In other words, the column space is the span of the columns of your matrix. So, a more precise definition of rank would be that it’s the number of dimensions in the column space. When this rank is as high as it can be, meaning it equals the number of columns, we call the matrix “full rank.”
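One standard computational counterpart of this idea (a sketch with made-up numbers and a helper function of my own naming) is the rank test on the augmented matrix: Ax = v has a solution exactly when v lies in the column space of A, which is the same as saying that tacking v onto A as an extra column does not raise the rank.

```python
import numpy as np

# A rank-1 matrix: it squishes the 2D plane onto the line spanned by (1, 2).
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

def has_solution(A, v):
    # v is in the column space of A exactly when appending it as an extra
    # column leaves the rank unchanged.
    augmented = np.column_stack([A, v])
    return np.linalg.matrix_rank(augmented) == np.linalg.matrix_rank(A)

print(has_solution(A, np.array([1.0, 2.0])))   # True: (1, 2) lies on that line
print(has_solution(A, np.array([1.0, 0.0])))   # False: (1, 0) lies off the line
```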
Notice, the zero vector will always be included in the column space, since linear transformations must keep the origin fixed in place. For a full rank transformation, the only vector that lands at the origin is the zero vector itself, but for matrices that aren’t full rank, which squish to a smaller dimension, you can have a whole bunch of vectors that land on zero. If a 2D transformation squishes space onto a line, for example, there is a separate line in a different direction, full of vectors that get squished onto the origin. If a 3D transformation squishes space onto a plane, there’s also a full line of vectors that land on the origin. If a 3D transformation squishes all the space onto a line, then there’s a whole plane full of vectors that land on the origin.

This set of vectors that lands on the origin is called the “null space” or the “kernel” of your matrix. It’s the space of all vectors that become null, in the sense that they land on the zero vector. In terms of the linear system of equations, when v happens to be the zero vector, the null space gives you all of the possible solutions to the equation.
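And as a final sketch (using SciPy’s null_space helper and the same made-up rank-1 matrix as above), here is the null space of a 2D transformation that squishes the plane onto a line:

```python
import numpy as np
from scipy.linalg import null_space

# Rank 1: this matrix squishes all of 2D space onto the line spanned by (1, 2).
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

# An orthonormal basis for the null space: a single column, pointing along
# the direction (-2, 1) (up to sign).
N = null_space(A)
print(N)

# Every vector along that direction lands on the origin, so A @ N is
# (numerically) the zero vector.
print(A @ N)
```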
So that’s a very high-level overview of how to think about linear systems of equations geometrically. Each system has some kind of linear transformation associated with it, and when that transformation has an inverse, you can use that inverse to solve your system. Otherwise, the idea of column space lets us understand when a solution even exists, and the idea of a null space helps us to understand what the set of all possible solutions can look like.

Again, there’s a lot that I haven’t covered here, most notably how to compute these things. I also had to limit my scope to examples where the number of equations equals the number of unknowns. But the goal here is not to try to teach everything; it’s that you come away with a strong intuition for inverse matrices, column space, and null space, and that those intuitions make any future learning that you do more fruitful.

Next video, by popular request, will be a brief footnote about nonsquare matrices. Then, after that, I’m going to give you my take on dot products, and something pretty cool that happens when you view them under the light of linear transformations. See you then!


  1. Very interesting how systems of equations is in the 7th episode of this series whereas in school, it's the first thing you learn. Really shows the difference between institutionalized learning and conceptual/intuitive learning.

  2. so "a inverse" is the same as 1/a, and there will only be an inverse if a is anything but 0, since you can't divide by 0. Your explanation however made it so much clearer visually and I'm now back deep down in the world of math, but I enjoy every single second

  3. For 7:47 if the det(A) = 0 and we do A*v = x. As the vector A cannot span the subspace and is linearly dependent, it can be replaced with a scalar: b*v = x. Which means that v is an eigenvector and x is the eigenvalue. Please let me know if I am wrong.

  4. Why is the coordinate (1,2) written vertically in matrix form as 2×1, while the equation 1x+2y is written horizontally in matrix form as 1×2?

  5. Hey 3Blue1Brown, thanks for the videos.
    I got a problem imagining the transformations of defective matrices regarding null space, column space, etc. (For example: the eigenvector of a defective matrix being in the null space and the column space at the same time.)
    I'm trying to link these transformations of defective matrices to the animations in minute 10-11, but cannot get the hang of it, even after hours and hours of mental acrobatics.
    Could you maybe help out?

  6. I worship you, man, really. What the hell is the "span" of your creativity? I think its "column space" has an infinite set of elements. All the "dimensions" of my comprehension squish into a single point in front of your understanding, and I feel astonished when my "null space" gets filled with intuition, and it loses its "function" because from that single point, it explodes into space-time and intuition is everywhere!

  7. you are a genius. please do such a video on every topic in the world! i seriously watch this because it's interesting and gives me a cool perspective on the material, which i'm sure will help me more than any other calculation-explanation video! thank you a lot!!!

  8. I just discovered that i really like understanding mathematics from a high level point of view instead of low level. Who would have guessed.

  9. At 7:36 why do you say we need to be lucky enough for the vector v to be on the line? Won't it always be on the line? As both the basis vectors after transformation are on the same line, won't their linear combination (span) be that line?

  10. These videos are so amazing because they connect all the dots and show us the big picture of a subject that we are learning. Thanks!

  11. Our maths professor explained everything mentioned in this playlist almost only numerically. Watching this got me more mind blows than ever before in such a short amount of time!

    PS: You truly saved my semester, dear sir

  12. I just heard two terms I've heard of in computers/computer science, namely null and kernel. I wonder if there is a correlation to the terms being used in computing applications to the way they are used in linear algebra?

  13. 7:00 – the main problem is that it would be impossible to determine which of the infinitely many solutions is the correct one, as all these points in the higher dimensional space were mapped to the same point in the lower dimensional space.

  14. When we are talking about rank and nullity, is this connected to linear independence? Ie when the rank of a 3D matrix becomes 2, does that mean the transformation has created one linearly independent vector thereby creating a null space of 1 and rank of 2?? Thanks!

  15. this series would've seriously been super useful when i was still in school lol. much suffering was had by my younger self when the teacher only taught how to memorise and not visualise

  16. I got a solid A in my Linear Algebra course, but I have taken away more intuition about the subject in these videos than I did in that course. It isn't a slight at my old instructor either, you are legitimately an amazing instructor. Thank you so much for giving us these gifts.

  17. please help me understand the null space,
    what I think about the null space is that all the vectors that lie in the null space and then get squished to a lower dimension are the only vectors that have solutions, i.e. Ax = [zero vector].
    Is that correct?
    Correct me if I am wrong.

  18. I'll say it once more. The animations in these videos are beautiful. It's very pleasing for an OCD mind.

  19. How did you figure out the intuition behind linear algebra? I'm pretty sure I could search the web for hours on end without finding anyone else willing to explain how it actually works. Was it an exceptional book/paper you read or did you think it all through on your own?

  20. This series is incredible – I use linear algebra every day but the deep level of intuition I'm gaining is astounding (and I'm only half way through!)

  21. when i see the number of views of these type of videos, full of knowledge, and then i see stupid and empty videos without meaning with millions of views, i really wonder about people

  22. when you talk about powers, 0 is zero dimension, 1 is first dimension, 2 is second and 3 is third dimension but what about more numbers?
    how do we express these powers with geometry?

  23. i just had an exam of computer graphics and most of the syllabus was related to this and now i found this…………i have comp architecture exam tomorrow but still watching this :3

  24. Great video, but you should have included a brief guide to finding inverses of 2×2 matrices. The process looks intimidating on paper, but is somewhat intuitive.

  25. Now I feel that I’m really dumb to not open YouTube and watch this series while I was taking my Linear Algebra course. I was just telling myself that those computations of finding the basis vectors by Gauss/Jordan elimination methods were useless and would not help me as a student in my major. It turns out it really does afterwards.

  26. Is linear algebra like doing math where terms are a stack of Double-ended queue? Maybe not the best analogy

  27. 9:03 column space in my textbook is the same thing as range. However, 3Blue1Brown defined colspace as the set of ALL possible outputs of A(v). wasn't that what CODOMAIN is? i thought codomain was the set of all possible outputs, and range was the set of actual outputs.

  28. I wish I had seen these videos 15 years ago when I was in college; the animation definitely helps a lot with the core concepts.

  29. How does determinant equal to zero squish the space? It just reorients the vector such that they have no volume between them. Shouldn't there be an inverse transformation that can bring the vectors back to their initial positions?

  30. What is any vector transformation that transforms three dimensions into the point of/at origin? I would be interested in seeing this. I suppose after this course, I should be able to see x hat as negative coming back some number of steps, and the same with y hat and k hat. But then if we treat that situation as a new beginning, then there is no means of figuring out a singular inverse matrix, but rather an infinite number of inverse matrices. And so we like Zeno before us, must just accept that the rock has landed (breaking Rank), and reset the matrix to x hat = 1, y hat = 1 and k hat =1. We arrive oddly at the invisible transformation from x hat = 1, y hat = 1 and k hat =1 to x hat = 1, y hat = 1 and k hat =1: didn't anything just happen? Nope.

  33. I thought the vectors would be lines that are being transformed, but now it looks like the coordinate system is being transformed to fit the lines. Can someone tell me how to interpret this?

  34. This is really helpful towards building the intuition on linear algebra. The null space idea somehow illustrated the case for the homogeneous equation very vividly. Thank you so much!

  35. I know I'm coming in late but if someone could explain the concept of the adjoint of a matrix, and then give the visual intuition of how
    inv(A) = (1/det(A)) * adj(A),
    I'd be really glad. Thank you 🙂

  36. Wonderful explanation! One question, at 7:42 how come the v vector has not ended up at the 1D line after the transformation, since with the animation shown at 7:33 it seems that all vectors are squished and fall on the line? I've been trying to get my head around this for quite a while.

  37. This is what education really means; this is the most elegant explanation of matrices, which I struggled with for a long time. I only want to know how the producer came up with these ideas.

  38. i love you, seriously this is amazing. i mean you revealed the simplicity and the beauty of linear algebra.
    When i studied it i was sure it was more logical, but the book was extremely (lame) and now i am wandering the internet to get it the RIGHT way, like here.

  39. WHY???
    Why do Bachelor courses not give us THOSE insights, instead of the nitty-gritty calculations only!!!
    I'm so disappointed!
    Thanks 3Blue1Brown, you're the best!

  40. So beautiful! Understanding math more, I always feel I get to know more of God, who created everything so uniquely.

  41. Sir, please keep the speed of the audio a little slower. I find it difficult to follow. Otherwise your explanation is perfect. Nobody taught me like you do. Thank you, sir.

  42. I am having trouble visualizing the 3D transformations. Can someone tell me a software where I can visualize 3D transformations?

  43. I just hope one day textbooks will be written by people who have the ABILITY to teach not by crocodiles expecting everyone to understand their language

  44. It would be great if you could create intuitive and geometrical videos to show attributes of symmetric matrices, upper triangular and lower triangular matrices and what it means to do elimination. You are great at this. Thanks.

  45. so i have some questions, just to clarify my understanding:
    – Can the column space ever be equal to the span of the original, untransformed basis vectors? Is it not possible because the grid lines wouldn't be in their original positions anymore?

    -When a matrix is not full rank, it means that some (or all) of its columns will be linearly dependent? (since we have lost one or more dimensions)

    – If det(A) = 0, we may or may not have a solution. We will have a solution if the "output" vector v lies in the column space. We may (or will?) have multiple solutions if the "output" vector lies on [0,0]. The solutions in this second case are the null space. Right? Did I understand this well?
    Thanks a lot to anyone who can help, the videos are very well made, but it's still taking me some time to properly wrap my head around the concepts

  46. Unlike other videos who just explain how everything is as it is, you really dive deep into why it is right. You are the reason why I finally understand all of these concepts, by just watching the video once.

  47. 11:12 At this moment it dawned on me! This is where the determinant method (that's what we call it in school) for solving two equations with two unknowns comes from!
    It's essentially putting your constants into a 2×2 matrix and then finding the inverse of that. Amazing! 😀

  48. So, when a transformation squishes any vector space to a point (Det=0), then the inverse is not possible!
    Let's call this BlackHole Transformation!
