Three-dimensional linear transformations | Essence of linear algebra, chapter 5

[classical music] “Lisa: Well, where’s my dad? Frink: Well, it should be obvious to even the most dimwitted individual who holds an advanced degree in hyperbolic topology that Homer Simpson has stumbled into … (dramatic pause) … the third dimension.”

Hey folks, I’ve got a relatively quick video for you today, just sort of a footnote between chapters. In the last two videos I talked about linear transformations and matrices, but I only showed the specific case of transformations that take two-dimensional vectors to other two-dimensional vectors. In general throughout the series we’ll work mainly in two dimensions, mostly because it’s easier to actually see on the screen and wrap your mind around, but more importantly than that, once you get all the core ideas in two dimensions, they carry over pretty seamlessly to higher dimensions. Nevertheless, it’s good to peek our heads outside of flatland now and then to, you know, see what it means to apply these ideas in more than just those two dimensions.

For example, consider a linear transformation with three-dimensional vectors as inputs and three-dimensional vectors as outputs. We can visualize this by smooshing around all the points in three-dimensional space, as represented by a grid, in such a way that keeps the grid lines parallel and evenly spaced and which fixes the origin in place. And just as with two dimensions, every point of space that we see moving around is really just a proxy for a vector with its tip at that point; what we’re really doing is thinking about input vectors moving over to their corresponding outputs. And just as with two dimensions, one of these transformations is completely described by where the basis vectors go. But now there are three standard basis vectors that we typically use: the unit vector in the x-direction, i-hat; the unit vector in the y-direction, j-hat; and a new guy, the unit vector in the z-direction, called k-hat.

In fact, I think it’s easier to think about these transformations by only following those basis vectors, since the full 3-D grid representing all points can get kind of messy. By leaving a copy of the original axes in the background, we can think about the coordinates where each of these three basis vectors lands. Record the coordinates of these three vectors as the columns of a 3×3 matrix. This gives a matrix that completely describes the transformation using only nine numbers.

As a simple example, consider the transformation that rotates space 90 degrees around the y-axis. That would mean it takes i-hat to the coordinates [0, 0, -1] on the z-axis; it doesn’t move j-hat, so it stays at the coordinates [0, 1, 0]; and then k-hat moves over to the x-axis at [1, 0, 0]. Those three sets of coordinates become the columns of a matrix that describes that rotation transformation.

To see where a vector with coordinates (x, y, z) lands, the reasoning is almost identical to what it was for two dimensions: each of those coordinates can be thought of as instructions for how to scale each basis vector, so that they add together to get your vector. And the important part, just like the 2-D case, is that this scaling-and-adding process works both before and after the transformation. So to see where your vector lands, you multiply those coordinates by the corresponding columns of the matrix, and then you add together the three results.

Multiplying two matrices is also similar: whenever you see two 3×3 matrices getting multiplied together, you should imagine first applying the transformation encoded by the right one, then applying the transformation encoded by the left one. It turns out that 3-D matrix multiplication is actually pretty important for fields like computer graphics and robotics, since things like rotations in three dimensions can be pretty hard to describe, but they’re easier to wrap your mind around if you can break them down as the composition of separate, easier-to-think-about rotations.

Performing this matrix multiplication numerically is, once again, pretty similar to the two-dimensional case. In fact, a good way to test your understanding of the last video would be to try to reason through what specifically this matrix multiplication should look like, thinking closely about how it relates to the idea of applying two successive transformations in space.

In the next video I’ll start getting into the determinant.
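The column picture described above translates directly into code. Here is a minimal Python sketch (my own illustration, not from the video, with hypothetical names like `apply` and `compose`): a 3×3 matrix is stored as the three columns where i-hat, j-hat, and k-hat land; applying it to a vector scales those columns by the vector’s coordinates and adds them; and multiplying two matrices applies the left transformation to each column of the right one.

```python
# The 90-degree rotation about the y-axis from the video, stored as three
# columns: where i-hat, j-hat, and k-hat land.
ROT_Y_90 = [
    [0, 0, -1],  # column 1: image of i-hat
    [0, 1, 0],   # column 2: image of j-hat (unmoved)
    [1, 0, 0],   # column 3: image of k-hat
]

def apply(columns, v):
    """Scale each transformed basis vector by v's coordinates and add them up."""
    x, y, z = v
    c1, c2, c3 = columns
    return [x * c1[k] + y * c2[k] + z * c3[k] for k in range(3)]

def compose(m2, m1):
    """Matrix product m2 @ m1: apply m1 first, then m2.
    Each column of the product is m2 applied to the matching column of m1."""
    return [apply(m2, col) for col in m1]

print(apply(ROT_Y_90, [1, 0, 0]))  # i-hat lands at [0, 0, -1]
```

Composing the rotation with itself gives a 180-degree rotation about the y-axis, which sends i-hat to [-1, 0, 0], matching the idea that the product encodes the two successive transformations.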


  1. Really good video! I would love to see a video about Quaternions as well! I think they're amazing mathematical tools for 3D rotation, and honestly, I don't really understand how they really work 😥. It would be great if you explained them in one of your videos, I really enjoy them😄

  2. This is pretty straightforward if one grasps the previous chapters. A (linear) change of basis is a linear transformation.

  3. I wish you'd make a video about vector transformations between the Cartesian coordinate system and cylindrical/spherical systems, but from the same viewpoint you had in this video. Although I know the mathematical aspect of these transformations, I have a big problem understanding them conceptually. Of course, I think these transformations aren't linear ones anymore??

  4. Actually, I would like to think a non-square matrix to be a square one, by adding zeros into a new column or a new row. It will give a much better intuition into this kind of transformation, by applying knowledge from previous chapters.

  5. There's an old programming book titled "flights of fantasy". The author uses a lot of matrices for the 3d calculations and finally now, 20 years later, I understand why.

  6. Just stay calm, Frinky. These babies will be in the stores while he's still grappling with the pickle matrix!! Bhay-gn-flay-vn!!

  7. At this point in the tutorial, I just want to give you all my money. You've made my mathematics make so much sense, I feel like a child with an endless supply of candy!

  8. I got a question: at 2:00–2:15 you said that I should put the 3 basis vectors inside of a 3×3 matrix to make the matrix multiplication work later on, BUT does this also apply to row vectors, because you store them here as column vectors. I might be wrong because I've only started studying linear algebra recently, but wouldn't this mess things up if I would've wanted to use row vectors, right?

  9. at 1:52 you showed the basis vectors with the z-direction pointing up. Is that really true? Because in 2d, x goes from left to right and y goes from bottom to top. Adding the 3rd dimension and naming it z, shouldn't z then go from outside the screen to inside the screen (so to speak)? To put it another way: If in 2d i^ is the x-axis, going from left to right, and j^ is the y-axis, going from bottom to top, then why is everything named differently in 3d?

  10. I learned linear algebra a long time ago and I still use matrices and transformations, but these videos have helped me visualize them in ways I never did. Thanks!

  11. I wish I had known these and other concepts earlier in time, maybe I would see the world differently. Thanks for the most appreciated effort spent on these videos. Hoping to see more

  12. I enjoyed every second of this series. You explained the material better than any professor I have encountered during my bachelor's and master's degrees. Well done.

  13. pls pls pls make one moving from one dimension to another, like 2 to 3 dimensions. I know you briefly explained one moving to lower dimension( in the subspaces one).

  14. so basically you always and only need n^2 numbers to define that transformation, with n being the number of dimensions, right? E.g., a 2-D space would require 4 numbers, a 3-D space requires 9, 4-D requires 16, etc.

  15. Hey, does this mean that getting a matrix into row-echelon form is the same as solving for where each of the basis vectors ends up?

  16. Hello guys, I have a question:
    For the thought experiment he put forth at the end of the video, does he simply end up with vectors that lie in the same plane? Because the first transformation puts the i, j, and k vectors in the same plane (so they no longer can span all of 3-D space). Then he has a second transformation which, if observed on its own, would still span all of 3-D space.
    However, I do not believe any matrix multiplication will increase the span of a set of vectors. Is this correct? (I plotted the points on a 3-D graphing tool to aid in my evaluations).

  17. So I am at this chapter and I can already visualize "How and why, maximum number of zeroes that a 3×3 matrix (or n*n) can have"…while many try to prove using 'determinant '.
    I can never thank you enough.

  18. This just made sense of the hours of struggling I experienced with the XNA game studio way back in 2010. MAN I wish I'd had this so clearly demonstrated back then!

  19. So does this mean that a 10×10 matrix is essentially a transformation of a 10-dimensional vector space? If so, that means multiplying two 10×10 matrices would result in the rotation of a 10-dimensional space. That's impossible to imagine.

  20. what is he doing at 31 seconds when he puts the transformed vectors tail to tail and draws the vertical line equivalent to (-1,2)??

  21. This is a big service you've done for all generations… any way I could do a one time payment to support?

  22. I learned this when I studied robotics. Sadly, I took a linear algebra course before, but the teacher didn't tell us what matrices represent (I believe he didn't even know) so the first classes of robotics were like black magic to me.

  23. so a linear transformation is just multivariable calculus with multi outputs with each output corresponding to a variable instead of one?

  24. I was stuck trying to understand Linear Algebra without going into too many details until thankfully I found these videos, THANKS ALOT

  25. I realise every time your video ends, there is a soft smile on my face and feeling of content. Your videos are SOUL FOOD

  26. I count myself as one of the luckiest people to have stumbled on this playlist just before taking linear algebra in college

  27. How do we relate 3D rotation matrix (3 x 3 matrix) and its 3D angular velocity matrix (skew-symmetric matrix)? It seems like all the explanations so far are awfully complicated.

  28. Matrix multiplication makes absolute sense to me after these videos. You basically apply a linear transformation to each vector of the first matrix which is your base, and the result is a new transformation for the input vector.

  29. Wow, I'm speechless. In a matter of hours, these videos caused me to go from hating linear algebra to one of my favorite subjects. At my high school, since not many people are advanced enough to do linear algebra, we have to teach ourselves. There are about 20 or so kids in my class and we each alternate teaching the class, and everyone is new to teaching. So, we ended up doing about 6 numerical theorems for every section without thinking about it, many without proofs. I was desperate for some type of visualization of linear algebra, and I just found the complete gold mine.

  30. I am going through these videos to refresh my memory on linear algebra from my second semester, because I will be doing machine and deep learning in my 7th semester starting last week. Thank you very much. So far it is a beautiful series. I am looking forward to go through all 4 seasons and hope by the end I will understand the basics of neural networks so I can follow the course in my university.
