Source: Tensors for Beginners by eigenchris (YouTube)
In the last video [5. Covector Components] we saw that every covector can be written as a linear combination of the dual basis covectors. We saw that a covector's components can be obtained by counting how many covector lines each basis vector pierces. We also saw that covector components transform in the opposite way to vector components.
So now we're going to confirm mathematically what the covector transformation rules should be. The first thing we need to get out of the way is figuring out how covectors themselves transform. Not the covector components, but the covectors themselves, the epsilons; after that we'll work out how the covector components transform.
With vectors, we saw how to get the new basis vectors out of the old basis vectors; that is the forward transform. Now we want to do the same thing for covectors: build the new dual basis out of the old dual basis, as some linear combination of the old dual basis covectors.

What are the Q coefficients that let us do this? To figure these Q coefficients out, we start by applying ε^1_tilde to e_1, replacing ε^1_tilde with its expansion in the old ε's, and expanding using the linearity rules.

We know that ε^1(e_1) = 1 and ε^2(e_1) = 0, so ε^1_tilde(e_1) = Q^1_1.

And if we apply ε^1_tilde to e_2, we get ε^1_tilde(e_2) = Q^1_2.

Ok. So an alternative way of writing ε^1_tilde is with ε^1_tilde(e_1) and ε^1_tilde(e_2) as its coefficients, and these parts are just numbers.
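The on-screen equations are not reproduced in this transcript; a reconstruction of the steps above from the dual basis definition, in two dimensions:

```latex
\tilde{\varepsilon}^1 = Q^1{}_1\,\varepsilon^1 + Q^1{}_2\,\varepsilon^2
\\
\tilde{\varepsilon}^1(e_1) = Q^1{}_1\,\varepsilon^1(e_1) + Q^1{}_2\,\varepsilon^2(e_1)
                           = Q^1{}_1\cdot 1 + Q^1{}_2\cdot 0 = Q^1{}_1
\\
\tilde{\varepsilon}^1(e_2) = Q^1{}_1\,\varepsilon^1(e_2) + Q^1{}_2\,\varepsilon^2(e_2) = Q^1{}_2
\\
\Rightarrow\quad
\tilde{\varepsilon}^1 = \tilde{\varepsilon}^1(e_1)\,\varepsilon^1 + \tilde{\varepsilon}^1(e_2)\,\varepsilon^2
```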

Now let's bring out the backward transformation, which lets us write the old basis vectors in terms of the new basis vectors, and substitute it into ε^1_tilde(e_1) and ε^1_tilde(e_2).


Now, using the linearity of covectors, the ε^1_tilde(e_1_tilde) terms go to 1 while the ε^1_tilde(e_2_tilde) terms go to 0, and we get the new dual basis covector ε^1_tilde in terms of the old ones.

And turning to ε^2_tilde, we can do the same thing.

You'll notice that these Q coefficients and the B coefficients are exactly the same. This means that to go from the old dual basis to the new dual basis we use the backward transformation.
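A reconstruction of this two-dimensional computation (using e_i = Σ_j B^j_i ẽ_j for the backward transform, as in the earlier videos):

```latex
e_1 = B^1{}_1\,\tilde{e}_1 + B^2{}_1\,\tilde{e}_2, \qquad
e_2 = B^1{}_2\,\tilde{e}_1 + B^2{}_2\,\tilde{e}_2
\\
\tilde{\varepsilon}^1(e_1) = B^1{}_1\,\tilde{\varepsilon}^1(\tilde{e}_1)
                           + B^2{}_1\,\tilde{\varepsilon}^1(\tilde{e}_2) = B^1{}_1, \qquad
\tilde{\varepsilon}^1(e_2) = B^1{}_2
\\
\Rightarrow\quad
\tilde{\varepsilon}^1 = B^1{}_1\,\varepsilon^1 + B^1{}_2\,\varepsilon^2, \qquad
\tilde{\varepsilon}^2 = B^2{}_1\,\varepsilon^1 + B^2{}_2\,\varepsilon^2
```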

Ok, let's prove that in any number of dimensions. I have some tools here. I have the dual basis definition,


I have the forward and backward transformations,

and I have a reminder that the forward and backward transformations are inverses, because when we multiply them we get the Kronecker delta.
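In symbols (reconstructed; F is the forward transform, B the backward transform, δ the Kronecker delta):

```latex
\varepsilon^i(e_j) = \delta^i{}_j
\\
\tilde{e}_i = \sum_j F^j{}_i\, e_j, \qquad e_i = \sum_j B^j{}_i\, \tilde{e}_j
\\
\sum_j F^i{}_j\, B^j{}_k = \sum_j B^i{}_j\, F^j{}_k = \delta^i{}_k
```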

Ok, let's start by writing each new dual basis covector ε_tilde as a linear combination of the old epsilons, with unknown coefficients Q.

Now apply both sides to the new basis vectors e_k_tilde. The left-hand side becomes a Kronecker delta, by the dual basis definition. On the right-hand side, the new basis vectors can be replaced by linear combinations of the old basis vectors using the forward transformation.

We can take the scaling constants outside, and ε^j(e_l) also becomes a Kronecker delta by the dual basis definition.

This Kronecker delta term becomes 0 whenever j does not equal l, so we can ignore all the terms where j does not equal l and only keep the ones where j equals l. That means we can replace the l with j.

And so now we have that Q multiplied by F gives us the Kronecker delta. In other words, Q and F are inverses. But we already know that the inverse of F is B, and since F has only one inverse, Q must be equal to B. So Q is the backward transform: we move from the old dual basis to the new dual basis with the backward transform. Now you can understand why we write the dual basis index at the top: dual basis covectors transform in the opposite way that basis vectors do.
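The full chain of the derivation above, reconstructed in index notation:

```latex
\tilde{\varepsilon}^i = \sum_j Q^i{}_j\, \varepsilon^j
\\
\delta^i{}_k
  = \tilde{\varepsilon}^i(\tilde{e}_k)
  = \sum_j Q^i{}_j\, \varepsilon^j\!\Big(\sum_l F^l{}_k\, e_l\Big)
  = \sum_j \sum_l Q^i{}_j\, F^l{}_k\, \varepsilon^j(e_l)
  = \sum_j \sum_l Q^i{}_j\, F^l{}_k\, \delta^j{}_l
  = \sum_j Q^i{}_j\, F^j{}_k
\\
\Rightarrow\quad Q = F^{-1} = B, \qquad
\tilde{\varepsilon}^i = \sum_j B^i{}_j\, \varepsilon^j
```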

Here we summarize the basis vector and dual basis covector transformation rules; again, they are opposite.

So now that we know how the basis covectors transform, figuring out how the components transform is really easy. We just write the covector α as a sum of the old dual basis covectors.

We use the forward transformation to rewrite the old dual basis in terms of the new, we rearrange the sums, and we see that the part in the middle has to be equal to the new covector components, α_tilde.

And so we find that the forward transform takes us from the old covector components to the new, and likewise the backward transform takes us from the new covector components back to the old. So covector components transform the same way that basis vectors do.
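Reconstructed in index notation (using ε^j = Σ_i F^j_i ε̃^i, the inverse of the dual basis transformation rule):

```latex
\alpha = \sum_j \alpha_j\, \varepsilon^j
       = \sum_j \alpha_j \sum_i F^j{}_i\, \tilde{\varepsilon}^i
       = \sum_i \Big( \sum_j F^j{}_i\, \alpha_j \Big)\, \tilde{\varepsilon}^i
\quad\Rightarrow\quad
\tilde{\alpha}_i = \sum_j F^j{}_i\, \alpha_j
```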
--------------------
Let's do a sanity check to see that this makes sense.
Here we have a covector sitting in a space with basis e_1 and e_2, and it looks like e_1 pierces 2 lines and e_2 pierces 2 lines, so the components in this basis are [2 2]. Now what if we make these basis vectors twice as big? We get a new basis, e_1_tilde and e_2_tilde. What are the covector components now? Well, e_1_tilde pierces 4 lines and e_2_tilde pierces 4 lines as well, so the components are [4 4].

This means that when we increase the size of the basis vectors, we also increase the size of the covector components. So covector components transform in the same way that the basis vectors do. Hopefully that makes intuitive sense now.
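A minimal numeric sketch of this sanity check (assuming NumPy; the forward transform that doubles every basis vector is F = 2I):

```python
import numpy as np

# Doubling the basis vectors: forward transform F = 2I, backward B = F^-1.
F = 2.0 * np.eye(2)
B = np.linalg.inv(F)

# Old covector components [2 2], as in the piercing-lines picture.
alpha = np.array([2.0, 2.0])

# Covector components transform covariantly (with F, like basis vectors):
# alpha_tilde_i = sum_j F^j_i alpha_j, i.e. F transposed acting on alpha.
alpha_tilde = F.T @ alpha
print(alpha_tilde)  # [4. 4.] -- the components doubled too

# Dual basis covectors transform contravariantly (with B = F^-1):
print(np.allclose(B @ F, np.eye(2)))  # True
```

The components double along with the basis, matching the [2 2] → [4 4] count above.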
--------------------
Here we summarize all the transformation rules we have learned so far in these videos.
We started with the transformation rule for basis vectors, and then we figured out the transformation rule for vector components, which is the opposite of the basis vector rule; such components are called 'contravariant'.
Then, earlier in this video, we found that the transformation rule for basis covectors is also the opposite of the basis vector rule, so basis covectors also transform by the contravariant rule.
And finally, we just found out that covector components transform in the same way that basis vectors do. This means that covector components transform covariantly, and that's why we write their index at the bottom, just like basis vectors, because they transform in the same way.
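All four rules side by side (reconstructed from the derivations in this video and the earlier ones):

```latex
\begin{array}{lll}
\text{basis vectors}        & \tilde{e}_i = \sum_j F^j{}_i\, e_j                      & \text{covariant}\\
\text{vector components}    & \tilde{v}^i = \sum_j B^i{}_j\, v^j                      & \text{contravariant}\\
\text{dual basis covectors} & \tilde{\varepsilon}^i = \sum_j B^i{}_j\, \varepsilon^j & \text{contravariant}\\
\text{covector components}  & \tilde{\alpha}_i = \sum_j F^j{}_i\, \alpha_j            & \text{covariant}
\end{array}
```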
--------------------
So, that finishes covectors. In the next video we'll talk about our third example of a tensor, which is linear maps.
----------------------
[Prev] 5. Covector Components
[Next] 7. Linear Maps
------------------------------------
[구구단만 알아도 '텐서']
-1. Motivation
0. Tensor Definition
1. Forward and Backward Transformation
2. Vector Definition
3. Vector Transformation Rules
4. What's a Covector?
5. Covector Components
6. Covector Transformation Rules
7. Linear Maps
8. Linear Map Transformation Rules
9. Metric Tensor
10. Bilinear Form
11. Linear Maps are Vector-Covector Pairs
12. Bilinear Forms are Covector-Covector Pairs
13. Tensor Product vs. Kronecker Product
14. Tensors are General Vector-Covector Combinations
15. Tensor Product Spaces
16. Raising/Lowering Indexes