If I have a vector sitting here in 2D space, we have a standard way to describe it with coordinates. In this case, the vector has coordinates [3, 2], which means going from its tail to its tip involves moving 3 units to the right and 2 units up. Now, the more linear-algebra-oriented way to describe coordinates is to think of each of these numbers as a scalar, a thing that stretches or squishes vectors. You think of that first coordinate as scaling i-hat, the vector with length 1 pointing to the right, while the second coordinate scales j-hat, the vector with length 1 pointing straight up. The tip-to-tail sum of those two scaled vectors is what the coordinates are meant to describe. You can think of these two special vectors as encapsulating all of the implicit assumptions

of our coordinate system. The fact that the first number indicates rightward motion, that the second one indicates upward motion, and exactly how far a unit of distance is: all of that is tied up in the choice of i-hat and j-hat as the vectors which our scalar coordinates are meant to actually scale. Any way to translate between vectors and sets of numbers is called a coordinate system, and the two special vectors, i-hat and j-hat, are called the basis vectors of our standard coordinate system. What I’d like to talk about here is the idea of using a different set of basis

vectors. For example, let’s say you have a friend, Jennifer, who uses a different set of basis vectors, which I’ll call b1 and b2. Her first basis vector, b1, points up and to the right a little bit, and her second vector, b2, points left and up. Now, take another look at that vector that I showed earlier, the one that you and I would describe using the coordinates [3, 2] with our basis vectors i-hat and j-hat. Jennifer would actually describe this vector with the coordinates [5/3, 1/3]. What this means is that the particular way to get to that vector using her two basis vectors is to scale b1 by 5/3, scale b2 by 1/3, then add them both together. In a little bit, I’ll show you how you could

have figured out those two numbers, 5/3 and 1/3. In general, whenever Jennifer uses coordinates to describe a vector, she thinks of her first coordinate as scaling b1 and her second coordinate as scaling b2, and she adds the results. What she gets will typically be completely different from the vector that you and I would think of as having those coordinates. To be a little more precise about the setup

here: her first basis vector, b1, is something that we would describe with the coordinates [2, 1], and her second basis vector, b2, is something that we would describe as [-1, 1]. But it’s important to realize that from her perspective, in her system, those vectors have coordinates [1, 0] and [0, 1]. They are what define the meaning of the coordinates [1, 0] and [0, 1] in her world. So, in effect, we’re speaking different languages. We’re all looking at the same vectors in space, but Jennifer uses different words and numbers

to describe them. Let me say a quick word about how I’m representing things here. When I animate 2D space, I typically use this square grid, but that grid is just a construct, a way to visualize our coordinate system, and so it depends on our choice of basis. Space itself has no intrinsic grid. Jennifer might draw her own grid, which would be an equally made-up construct, nothing more than a visual tool to help follow the meaning of her coordinates. Her origin, though, would actually line up with ours, since everybody agrees on what the coordinates [0, 0] should mean. It’s the thing that you get when you scale any vector by 0. But the direction of her axes and the spacing of her grid lines will be different, depending on her choice of basis vectors. So, after all this is set up, a pretty natural question to ask is: how do we translate between coordinate systems? If, for example, Jennifer describes a vector

with coordinates [-1, 2], what would that be in our coordinate system? How do you translate from her language to ours? Well, what her coordinates are saying is that this vector is -1·b1 + 2·b2. And from our perspective, b1 has coordinates [2, 1] and b2 has coordinates [-1, 1], so we can actually compute -1·b1 + 2·b2 as they’re represented in our coordinate system. Working this out, you get a vector with coordinates [-4, 1]. So that’s how we would describe the vector that she thinks of as [-1, 2]. This process of scaling each of her basis vectors by the corresponding coordinate of some vector, then adding them together, might feel somewhat familiar: it’s matrix-vector multiplication, with a matrix whose columns represent Jennifer’s basis vectors in our language. In fact, once you understand matrix-vector multiplication as applying a certain linear transformation, say, by watching what I consider to be the most important video in this series, chapter 3, there’s a pretty intuitive way to think about

what’s going on here. A matrix whose columns represent Jennifer’s basis vectors can be thought of as a transformation that moves our basis vectors, i-hat and j-hat, the things we think of when we say [1, 0] and [0, 1], to Jennifer’s basis vectors, the things she thinks of when she says [1, 0] and [0, 1]. To show how this works, let’s walk through what it would mean to take the vector that we think of as having coordinates [-1, 2] and apply that transformation. Before the linear transformation, we’re thinking of this vector as a certain linear combination of our basis vectors: -1 × i-hat + 2 × j-hat. And the key feature of a linear transformation is that the resulting vector will be that same linear combination, but of the new basis vectors: -1 times the place where i-hat lands + 2 times the place where j-hat lands. So what this matrix does is transform our misconception of what Jennifer

means into the actual vector that she’s referring to. I remember that when I was first learning this, it always felt kind of backwards to me. Geometrically, this matrix transforms our grid into Jennifer’s grid, but numerically, it’s translating a vector described in her language to our language. What made it finally click for me was thinking about how it takes our misconception of what Jennifer means, the vector we get using the same coordinates but in our system, then transforms it into the vector that she really meant. What about going the other way around? In the example I used earlier in this video, when I had the vector with coordinates [3,

2] in our system, how did I compute that it would have coordinates [5/3, 1/3] in Jennifer’s system? You start with that change of basis matrix that translates Jennifer’s language into ours, then you take its inverse. Remember, the inverse of a transformation is a new transformation that corresponds to playing that first one backwards. In practice, especially when you’re working in more than two dimensions, you’d use a computer to compute the matrix that actually represents this inverse. In this case, the inverse of the change of basis matrix that has Jennifer’s basis as its columns ends up working out to have columns [1/3, -1/3] and [1/3, 2/3]. So, for example, to see what the vector [3, 2] looks like in Jennifer’s system, we multiply this inverse change of basis matrix by the vector [3, 2], which works out to be [5/3, 1/3]. So that, in a nutshell, is how to translate the description of individual

vectors back and forth between coordinate systems. The matrix whose columns represent Jennifer’s basis vectors, but written in our coordinates, translates vectors from her language into our language, and the inverse matrix does the opposite. But vectors aren’t the only thing that we describe using coordinates. For this next part, it’s important that you’re all comfortable representing transformations with matrices, and that you know how matrix multiplication corresponds to composing successive transformations. Definitely pause and take a look at chapters 3 and 4 if any of that feels uneasy. Consider some linear transformation, like a 90° counterclockwise rotation. When you and I represent this with a matrix, we follow where the basis vectors i-hat and

j-hat each go. i-hat ends up at the spot with coordinates [0, 1], and j-hat ends up at the spot with coordinates [-1, 0], so those coordinates become the columns of our matrix. But this representation is heavily tied up in our choice of basis vectors, from the fact that we’re following i-hat and j-hat in the first place, to the fact that we’re recording their landing spots in our own coordinate system. How would Jennifer describe this same 90° rotation

of space? You might be tempted to just translate the columns of our rotation matrix into Jennifer’s language, but that’s not quite right. Those columns represent where our basis vectors i-hat and j-hat go, but the matrix that Jennifer wants should represent where her basis vectors land, and it needs to describe those landing spots in her language. Here’s a common way to think of how this is done. Start with any vector written in Jennifer’s

language. Rather than trying to follow what happens to it in terms of her language, first we’re going to translate it into our language using the change of basis matrix, the one whose columns represent her basis vectors in our language. This gives us the same vector, but now written in our language. Then, apply the transformation matrix to what you get by multiplying it on the left. This tells us where that vector lands, but still in our language. So, as a last step, apply the inverse change of basis matrix, multiplied on the left as usual, to get the transformed vector, but now in Jennifer’s language. Since we could do this with any vector written in her language, first applying the change of basis, then the transformation, then the inverse change of basis, that composition of three matrices gives us the transformation matrix in Jennifer’s language. It takes in a vector in her language and spits out the transformed version of that vector in her language. For this specific example, when Jennifer’s basis vectors look like [2, 1] and [-1, 1] in our language, and when the transformation is a 90° rotation, the product of these three matrices, if you work through it, has columns [1/3, 5/3] and [-2/3, -1/3]. So if Jennifer multiplies that matrix by the coordinates of a vector in her system, it will return the 90°-rotated version of that vector, expressed in her coordinate system. In general, whenever you see an expression

like A^(-1) M A, it suggests a mathematical sort of empathy. That middle matrix represents a transformation of some kind, as you see it, the outer two matrices represent the empathy, the shift in perspective, and the full matrix product represents that same transformation, but as someone else sees it. For those of you wondering why we care about alternate coordinate systems, the next video on eigenvectors and eigenvalues will give a really important example of this. See you then!
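For readers who want to check the arithmetic in the video, here is a small sketch in Python with NumPy. The basis vectors, the [-1, 2] and [3, 2] examples, and the 90° rotation are all taken from the video; the variable names are my own:

```python
import numpy as np

# Change-of-basis matrix: columns are Jennifer's basis vectors b1 and b2,
# written in our coordinates.
A = np.array([[2.0, -1.0],
              [1.0,  1.0]])

# Her [-1, 2], translated into our language: -1*b1 + 2*b2.
ours = A @ np.array([-1.0, 2.0])      # [-4, 1]

# Our [3, 2], translated into her language via the inverse matrix.
A_inv = np.linalg.inv(A)
hers = A_inv @ np.array([3.0, 2.0])   # [5/3, 1/3]

# A 90-degree counterclockwise rotation, written in our coordinates...
M = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# ...and the same rotation written in Jennifer's language: A^{-1} M A.
M_jennifer = A_inv @ M @ A            # columns [1/3, 5/3] and [-2/3, -1/3]
```

Reading `M_jennifer` column by column gives exactly the [1/3, 5/3] and [-2/3, -1/3] quoted in the video.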

The matrix "A" can be seen as a linear transformation, transforming a vector "u" into a vector "v" : "Au = v". So computing the coordinates of the vector "v" in Jennifer's basis knowing its coordinates in "our basis" corresponds to finding the vector u : "u = A^{-1}v"

Could you please do a video on Fermat's Last Theorem… thanks.

why rotating 90 degrees in our "language" is the same as rotating 90 degrees in her language? (why not 90 degrees in our vector space corresponds to perhaps 80 degrees of rotation in her vector space)?

The origin is always the same because "zero" is a concept more abstract than finite numbers.

The problem I have is how to find out the transformation matrix T between the bases, if both are "complicated" and multidimensional. :s

I just had to sit through a much less intuitive textbook presentation of this very idea than you do, Grant, and I am not sure I could have handled it without 3B1B and the pi-creatures as my emotional support animals on this journey. Thanks!

Thank you very much, please do more videos

I have one question: in mechanics we say pressure is a scalar and force is a vector. By definition, pressure is force * (1/area), which is a vector times a scalar. Normally that would give us a smaller or larger vector, but the result is a scalar. WHY??????

Hello! This course is amazing! Is the manim code for the lessons available Somewhere?

Please check this out for another INTUITIVE explanation for " Change of Basis "….. AWESOME

https://www.youtube.com/watch?v=Qp96zg5YZ_8

Hi I really enjoyed this series of linear algebra. I was wondering if you guys could do a similarly enlightening series on classification of PDEs into elliptic, parabolic and hyperbolic. I'm really looking for the explanation and intuition of it all, not just substituting B^2 – 4*A*C.

2:12 The look of pure confusion on the blue guy's face as Jennifer tries to use her skewed, rotated coordinate system…

Dammit Jennifer

I would just like to say that what you are doing is really fantastic! High-quality content like this is hard to find, and I believe that you and Khan Academy really are at the start of a new revolution in teaching math & science! Much love from a science student in Norway

the essence of the video: the change of the vector basis is carried out by linear transformation

Hey, there! At 11:55, the vector (the yellow one) you find by rotating 90° in Jennifer's language is (-1, 1). But the arrow appears to be (-2, 1). Why is that?

Lovely as it gets, mind blowing. Please continue the good work. Teachers like yourself are a blessing

Omg i was hoping you would be talking about matrix vector multiplication because that's what i would have used there, and you did !

If you do a series on tensor analysis/differential geometry I will lose my shit.

Please make a video about SVD!!! I feel like that is one of the most elusive yet crowning achievements of LinAlg

Everyone should watch this knee down the floor and saying thank god

Chapter 9 or 13 ?

Exactly what I needed! I had to watch it twice, because I'm not a native English speaker, but in the end, everything seemed logical and completely intuitive. Thank you

I hesitate to question anything about the best YouTube channel I've ever encountered, but I gotta ask: in a video all about expanding beyond two rigid ideas for what defines a space, did we really have to make the girl pink?

Still, eternally grateful for what Grant is doing for the world and my continually-blown mind.

A mathematical sort of empathy. That's deep.

Dang, I remember finding this a year ago, and it's still as mind-blowing as it was then.

Thank you! @3Blue1Brown ! Oh, a question: Is the inv(A)MA remark at the end of the video supposed to hint at diagonalization, or can it? Just wanting to reinforce this topic I'm studying in class right now.

I wish my professor taught the way you do!

This video explains why I'll never understand women.

Why the hell don't you write textbooks? I wouldn't feel sour about paying for a book from you. I doubt anyone'd have much trouble understanding content with how beautifully you break things down.

I feel like I understand women much better now.

Now I just have to figure out her similarity transformation matrix.

You are a god.

One word : Wow!

Amazing how the concept of applying a transformation, in this case rotation, expressed in one system to a vector represented in another system is still the same even if you are dealing with a completely different structure. Like when working with quaternions, the computations are exactly the same

quat^-1 * vector * quat

In fact this video helped me a lot to understand what I am really doing when projecting a vector in one system to another given the quaternion that represents the orientation of that other system. And in fact your video on quaternions helped me a lot to understand them, and since that I’m a fan of your channel, trying to catch up with all the content you’ve uploaded.
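For anyone curious how the quaternion version of this comment parallels the A^(-1) M A pattern, here is a minimal sketch. Note that conventions vary; this uses the Hamilton product with the q v q^(-1) ordering rather than the q^(-1) v q ordering written above:

```python
import numpy as np

def quat_mul(p, q):
    # Hamilton product of two quaternions given as (w, x, y, z).
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw * qw - px * qx - py * qy - pz * qz,
        pw * qx + px * qw + py * qz - pz * qy,
        pw * qy - px * qz + py * qw + pz * qx,
        pw * qz + px * qy - py * qx + pz * qw,
    ])

def rotate(v, q):
    # q v q^{-1}; for a unit quaternion the inverse is the conjugate.
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, np.array([0.0, *v])), q_conj)[1:]

# A 90-degree rotation about the z-axis.
theta = np.pi / 2
q = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
v = rotate([1.0, 0.0, 0.0], q)   # [1, 0, 0] lands on [0, 1, 0]
```

The sandwich structure is the same "translate, transform, translate back" idea as the change-of-basis product in the video.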

this should have at least 10 million views

your videos sometimes give me a headache!!!!

The empathy analogy is probably the best intuition I had about linear algebra ever.

Your course makes up for the rote equation memorization of thousands of students.

Thank you.

A^{-1}MA explained in the BEST way possible in 12 minutes. unbelievably good.

Thanks, now I can pass my test tomorrow.

This is the first time I've come across anyone talking about the seeming backwardness of putting together transformations for changes of basis. It's something I've never quite wrapped my head around, and I'm so fricking excited for that "aha" moment!

Jennifer just playing hard to get

6:59 — the first transformation transforms Jennifer's grid into our grid (not the other way round), and Jennifer's language into our language (correct)

This Video is wonderful!

Not entirely true about using a computer to calculate the inverse. In practice we never wanna compute inverses because they are computationally heavy. You'd use other methods and tricks to avoid actually doing the inverse.
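To illustrate this commenter's point with NumPy: you would typically call np.linalg.solve on the system A u = v rather than forming inv(A) explicitly. A sketch using the basis numbers from the video:

```python
import numpy as np

# Jennifer's basis in our coordinates (numbers from the video).
A = np.array([[2.0, -1.0],
              [1.0,  1.0]])
v_ours = np.array([3.0, 2.0])

# Solve A u = v directly instead of forming inv(A) and multiplying:
# cheaper and numerically better behaved for larger systems.
v_hers = np.linalg.solve(A, v_ours)   # [5/3, 1/3]
```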

I am finding the interpretation of inv(A) to be a bit difficult from the perspective of "transforming a set of basis vectors back to the original one"… since inv(A)*A = I (the matrix with i, j, k, etc. in the columns)… but how does inv(A)*v yield Jennifer's vector from a geometric standpoint?

@10:39 this is easier to memorise than the change of basis theorem

Wow… hard to master how to do this. By the way, when I first saw this video I didn't really get it, but I didn't think it would really matter, since everyone should agree on using the identity matrix's version of i and j. It seems pretty useless until you reach the next 2 episodes, which somewhat rely on it for nice tricks, so pay attention to it ^

At about 11:53, when you rotate the yellow vector 90 degrees counterclockwise, you also rotated Jennifer's basis. My belief is that Jennifer's language should not be changed for this problem; only the yellow vector rotates. I tried to add the new vector components using the rotated basis and could not make any sense of the calculated components [-1, 1] to come up with the yellow vector. But if Jennifer's original basis remains fixed, then I can see how the scaled basis (components) added up to make the yellow vector.

i love the way you do the little anthropomorphic pi people in your vids.

I think this can be applied in Special Relativity

Screw off Jennifer

so good.

Hey, isn't the transformation matrix that records where my basis vectors go after a transformation, in my coordinates, the same thing as the matrix that records Jennifer's basis vectors in my coordinates so that we jump from her language to our language? After all, there is no grid in space, rather just vectors..

do reply please

If only I had discovered this series of videos before I failed my algebra exam… Thank you so much for this wonderful channel.

This is GOD'S WORK!

After a few weeks of delving into Eigenvalues, Eigenvectors, Symmetric Matrices, Positive Definite Matrices, Similar Matrices… in order to try and understand Dimensionality Reduction for Machine Learning through Singular Value Decomposition and Principal Component Analysis, I've returned to this video. NOW I think I understand what is going on. It all has to do with Eigendecomposition of a matrix and using this technique as a "change of basis" to change the axes and define the Principal Components?!
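A rough sketch of that reading of PCA, on synthetic data of my own invention: the eigenvectors of the covariance matrix form a new basis, and rewriting the data in that basis decorrelates the coordinates:

```python
import numpy as np

# Synthetic correlated 2D data (invented for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 1.0],
                                          [1.0, 2.0]])
X = X - X.mean(axis=0)

# Eigenvectors of the covariance matrix form an orthonormal basis.
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# Rewriting the data in that basis is exactly a change of basis,
# and it decorrelates the coordinates (the principal components).
scores = X @ eigvecs
new_cov = np.cov(scores, rowvar=False)   # off-diagonal entries near zero
```

The axes of the new basis are the principal components; dropping the low-variance ones is the dimensionality reduction step the comment describes.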

Why Jennifer transform why not Kate?

I’m no PhD so it’s possible I’ve got this wrong. However, if I’m not mistaken, there are two conventions or ways of defining the change-of-basis matrix from basis B to basis C, depending on whether basis B is expressed in terms of basis C, or whether basis C is expressed in terms of basis B. The choice of convention carries over to the way matrix similarity is defined, too. I’ve noticed most U.S. texts and notes (at least at the elementary level) adopt the “old basis in terms of new basis” convention. Could somebody confirm if what I wrote is the case?

Very nice video! What I don't understand is how do we calculate the vectors (2, 1), (-1, 1)? In other words, how do we transform Jennifer's basis vectors to their representation in our basis?

You are such a legend man. Keep it up. You made LA so intuitive. Truly doing a service to mankind!

It is not the spoon that bends, it is only yourself

@thesickbeat yoooo van Goth

Hi! First of all, THANK YOU for this series. I've understood the essence of linear algebra. I love it!

And then, I would like to kindly draw attention to a little discrepancy in this video. At 10:27, you show a vector which has coordinates [-1, 2] in Jennifer's language. But then at 11:48, this vector has changed to [1, 2], so the product of Jennifer's rotation matrix and this vector [1, 2] equals [-1, 1]. But in the earlier case (with the vector [-1, 2]), the product should equal [-5/3, -7/3]. It slightly confused me for a moment when my product was different from yours.

But once again, thank you for your fantastic job. You have a special gift for conveying knowledge in a comprehensible and human-related language.

As I watched this video for the 3rd time, trying to wrap my mind around this immensely refreshing view of this topic. I had the most enlightening thought. The change of basis is just a projection from one space to another. (It was inspired by this video, but reinforced by Geometric Algebra by Eric Chisolm, a pdf on arxiv, where I tried to understand why we need a coordinate system to begin with.) These projections of space on to other spaces seems like a pretty natural interpretation of what's happening, right? Just kind of wish I had this understanding sooner 😂 I'm grateful for these videos! Wait… is it possible to do a series on Tensor Calculus?!

Get a new wife no one cares about stubborn Jennifer.

all the coordinates in different bases describe the same point/vector; a coordinate is a scalar, so this must be satisfied:

B_1*vec{c_1} = B_2*vec{c_2}, c for coefficients; one of the bases is usually the identity, and then you can get the coordinates in the other basis.
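A sketch of that identity in NumPy, with B_1 taken from the video and B_2 taken as the identity for illustration: since B_1 c_1 = B_2 c_2, the new coordinates come from solving a linear system:

```python
import numpy as np

# B1 c1 = B2 c2: the same vector written in two bases.
B1 = np.array([[2.0, -1.0],
               [1.0,  1.0]])  # Jennifer's basis vectors as columns
B2 = np.eye(2)                # the standard basis

c1 = np.array([-1.0, 2.0])    # coordinates relative to B1

# Solve B2 c2 = B1 c1 for c2 (with B2 = I this is just B1 @ c1).
c2 = np.linalg.solve(B2, B1 @ c1)     # [-4, 1]
```

With a non-identity B2, the same one-liner translates between any two bases expressed in a common reference frame.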

dammit jennifer, can't you be like everyone else?!

The backwards-ness at 6:50 still gives me brain twinges

Phenomenal. Never have to ponder for hours when reading any papers involving A^(-1)MA. Perfect.

That A^(-1)MA format looks familiar. It's what we use when we convert local coordinates to global coordinates.

Can someone explain the last problem? I am having difficulty understanding, and am very confused about how he got those numbers. Step by step would really help

this series of videos is absolutely amazing! thank you so much for making these videos! I cannot imagine the amount of painstaking work and time that these would have taken!! thanks again!!!

does anyone know the song used in the first seconds of the video?

I think this video cries to be reworked. Incredibly overcomplicated.

We learned C^(-1)BC in linear algebra, but did not learn WHY nor any application for it. Our textbook also offered no context. Thank you for explaining.

But how would you explain translations? What if Jennifer's origin was shifted too?

Isn’t this the other way around for the dot product? i.e. we have the linear transformation matrix (which is the transpose vector / dual vector) then we multiply it by a vector in “our language” to get that vector in “their language”?

Beautiful.

Beautiful! gave me tears 🙂

nice video!

I watched this video 10 times now. There are still gaps. I don't use change of basis on a day-to-day basis. I wish there were real-life applications you could show us. But anyway, thank you, this is a lot better than what I learned in class years ago. I hope the blue guy and Jennifer get married in real life.

6:57 actually this matrix transforms "the grid Jennifer sees" into "the grid we see", since everyone sees their own basis as the standard unit grid. In this way, the matrix transforms a vector "in her language" to "our language". This would be a more natural way to describe it. : )

Tensor PTSD

I literally watched all the advertisements before all of the videos in this series because you deserve all the advertising dollars. It's the least I can do to thank you for deconstructing all these topics in a visually satisfying way that was not accomplished in school.

I can only grasp about half of what you're saying, but that half I really love. I got to precalculus in high school and some limits, but your videos have inspired me to relearn what I have forgotten and try to go forward to this point. I really would love to get a grasp of linear algebra.

I wish I had listened to this lecture when I was in university…

This chapter was prominently enlightening and inspirational.

Man, I just realized something crazy. I recently started studying abstract algebra, and I just realized that a linear transformation from a vector space A to a vector space B is an isomorphism (assuming it has an inverse; if not, it's a homomorphism), since it commutes with vector addition and scalar multiplication. That really helps me when thinking about different bases. It's kind of like when you have two isomorphic sets: when you think of an element in one of them, you have an associated element in the other set. That's kind of what this is like: each vector in one vector space has an associated vector in another vector space with a different basis.

What is the difference between a change of basis and a linear transformation?

Can anyone explain what he meant between 7:00 and 7:13

This is sick! I can now see the true beauty of T^n = (A D A')^n = A(D^n)A'

To Jennifer the Transformation has no rotation and is just stretching everything. But the rotation is defined in our system, so she has to do a change of basis to calculate this stretch (A'TA) only once and is then able to quickly calculate the n-th power.

To be able to do the same trick from our perspective, we need an extra change of basis to get to her perspective, and then look back at our own perspective from her perspective

It's genius! I wish I could have watched these videos when I was still in college.
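The T^n trick described in the comments above can be sketched like this, with a hypothetical diagonalizable T (not a matrix from the video):

```python
import numpy as np

# A hypothetical diagonalizable transformation T.
T = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# Change of basis into T's eigenbasis: columns of P are eigenvectors.
eigvals, P = np.linalg.eig(T)

# In that basis T is diagonal, so the n-th power is just eigvals**n:
# T^n = P D^n P^{-1}, the A(D^n)A' pattern from the comment.
n = 5
Tn = P @ np.diag(eigvals ** n) @ np.linalg.inv(P)
```

The expensive repeated multiplication is replaced by one change of basis, a cheap elementwise power, and a change back.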

just love it..

Is there any change of basis which can shift the origin?

I think the answer to my question is buried in this but I'm not sure. Jennifer and I agree that Central Park is at (1,1) on both of our maps but she says MacDonald's is at (3,3) which is my (5,3). Since we don't agree on where McDonald's is, I got 13 other map location coordinates from her and I'm trying to figure out her basis vectors.

Central Park My(1,1) = Jennifer(1,1), My(2,1)=Jennifer(1,2), M(3,1)=J(1,3), M(4,1)=J(5,3), Squiggly's House M(5,1)=J(5,2), Ophelia's House M(5,2)=J(5,1), M(4,2)=J(2,1), M(3,2)=J(2,2), M(2,2)=J(2,3), M(1,2)=J(4,3), M(1,3)=J(4,2), M(2,3)=J(4,1), M(3,3)=J(3,1), M(4,3)=J(3,2), MacDonald's M(5,3)=J(3,3).
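One way to attack this kind of puzzle, sketched here with stand-in data taken from the video rather than the map coordinates above: if you know several vectors in both languages, each pair satisfies A u = v, so stacking the pairs gives a least-squares problem for the change-of-basis matrix A:

```python
import numpy as np

# Vectors known in both languages, one pair per row
# (stand-in data: the two translation examples from the video).
U = np.array([[-1.0, 2.0],      # her coordinates
              [5/3,  1/3]])
V = np.array([[-4.0, 1.0],      # our coordinates for the same vectors
              [3.0,  2.0]])

# Each pair satisfies A @ u = v, i.e. U @ A.T = V. With two or more
# independent pairs, least squares recovers A (and averages out noise).
At, *_ = np.linalg.lstsq(U, V, rcond=None)
A = At.T                        # columns = her basis vectors in our language
```

One caveat for the map problem: if the two maps also disagree on the origin, the relationship is affine rather than linear, and you would need to fit a translation term as well.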

it's like when you look at an interesting object and then run to your friend and point to the same object. The direction of your finger and distance from the object will be different from where you originally saw it from.

So in essence, a linear transformation and a change of basis are one and the same.

Grant says at 6:48: "Geometrically this matrix [[2 1], [-1 1]] transforms our grid into Jennifer's grid, but numerically it's translating a vector described in her language to our language."

I think a better interpretation is to think that the matrix [[2 1], [-1 1]] expresses her perspective in our perspective. That is, the matrix [[2 1], [-1 1]] does not transform our grid into her grid; rather, it EXPRESSES her grid in our grid system. It is as if her grid is overlaid or superimposed on our grid, and we are reading off from our grid anything that she expresses with her grid. We are still in our grid. A weak analogy would be that we are wearing corrective lenses (i.e. the matrix [[2 1], [-1 1]]) which translate her worldview into our worldview.

I think that here we cannot speak of transformation like before, because here the two grid systems are coexistent and simultaneous.

When we adopt this interpretation, everything seems natural and logical.

Make a tensor video plsss

this video is already a repetition; you have already shown this in the transformations video

Jennifer is a bitch

best teacher of my life

HER NAME IS JENNIFER