So far we have covered nearly every kind of tensor: vectors, covectors, linear maps, bilinear forms, and so on. Along the way we focused on the transformation rules that describe how components respond to a change of coordinates. Now let's raise the level.
Up to now we have been cataloguing kinds of tensors; from here on we shift perspective and discuss how tensors are built. Specifically, we will look at the tensor product, which combines vectors and covectors and is arguably the most natural way to define a tensor. This is a very different approach from the way we have treated tensors so far (tracking how components transform under a change of coordinates). Learning the tensor product should give you a broader understanding of tensors.
As the first installment on the tensor product, this post covers vector-covector pairs.
Over the next several posts on the tensor product, be aware that, for clarity, we will sometimes depart from the standard tensor-product notation found in textbooks and online references.
Earlier we said that a tensor is a "combination of vectors and covectors". Vectors and covectors are the building blocks of tensors: a tensor is what you get by combining these building blocks via the tensor product. In fact, linear maps and bilinear forms are themselves built from vectors and covectors through the tensor product.
So let's see how linear maps and bilinear forms are built from vectors and covectors.
[Let's listen to how this abstract mathematics is explained in English; frankly, it is quite hard to render into Korean.]
[2:13] We already know that when we multiply a row vector and a column vector in this order, row vector first, we end up with just a scalar.
But if we reverse the order, column vector first, we get something different. Instead of a scalar, we end up with a matrix, and the entries of the matrix are these, here.
OK, we have a matrix, and a matrix is basically a linear map, right? So there we go: combining a vector and a covector together gives us a linear map.
So this method of combining a column vector and a row vector is sort of the first step to understanding the tensor product, although we need to explore quite a bit more before we can say what the tensor product is. Before we go deeper, I'm going to ask you to consider something.
[2:57] Say we have this linear map, represented as a matrix with components 4, 400, 8, 800. Now, can we go in the reverse direction? Can we figure out components a, b, c, d that break this matrix back up into a column vector and a row vector? You may want to pause the video and try it yourself, but I'll go ahead and give you the answer: the column vector (4, 8) and the row vector (1, 100). Multiplying them together gives this matrix.
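The decomposition above is easy to check numerically. Here is a minimal sketch using NumPy (the variable names are mine, not from the video):

```python
import numpy as np

# Column vector (4, 8) and row vector (1, 100) from the video
col = np.array([[4], [8]])      # shape (2, 1)
row = np.array([[1, 100]])      # shape (1, 2)

# Column-times-row (outer) product rebuilds the original matrix
M = col @ row
print(M)
# [[  4 400]
#  [  8 800]]
```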
OK. Now consider this example, where we have the same matrix but the 800 has been changed to 1200. Can we do the same thing and break this matrix up into a column vector and a row vector? You can try all you like, but it turns out to be impossible.
I'll prove it. The matrix elements would have to equal ac, ad, bc, and bd, which means ac = 4, bc = 8, ad = 400, and bd = 1200. The first pair of equations (ac = 4, bc = 8) tells us that b = 2a. Applying the same reasoning to the second pair (ad = 400, bd = 1200), we can show that b = 3a. So how can b equal 2a and 3a at the same time? That would imply 2a = 3a, which means a has to be zero. But that is obviously false; otherwise the entire first row of the matrix would be zero, and it is NOT. So a = 0 is a contradiction, and it is impossible to solve for a, b, c, and d such that this column vector and this row vector multiply to give this matrix.
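The impossibility argument has a well-known linear-algebra restatement: a matrix factors as a column times a row exactly when its rank is at most 1. A quick numerical check (sketch; the names are mine):

```python
import numpy as np

pure   = np.array([[4, 400], [8, 800]])    # columns are scalar multiples: rank 1
impure = np.array([[4, 400], [8, 1200]])   # changing 800 -> 1200 breaks that

print(np.linalg.matrix_rank(pure))    # 1: decomposable into column x row
print(np.linalg.matrix_rank(impure))  # 2: no such decomposition exists
```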
[5:00] What we've learned here is that some matrices can be broken up into a column vector and a row vector, and other matrices can't be. So we have two categories of matrices: pure and impure. A pure matrix's components can be written as the product of a column vector's and a row vector's components, whereas an impure matrix's components cannot.
Here are a couple of examples of pure matrices, and it turns out that pure matrices are actually really boring when they are used as linear maps. That's because all the output vectors lie along the same direction. The reason is that, as you'll notice for any pure matrix, the columns are all scalar multiples of each other: (4, 8) multiplied by 100 gives (400, 800), and (1/2, 1) multiplied by 2 gives (1, 2).

Recall that a matrix's columns tell us where each basis vector goes when it is put through the linear map. If all the matrix's columns are scalar multiples of each other, then all the basis vectors give outputs that point in the same direction, and that means every possible input vector going through the linear map is sent to that same direction. That's why pure matrices are not a very interesting set of transformations; what they can do is really limited.
[06:35] But impure matrices are the more interesting ones. They can send the basis vectors in different directions, so we can get more interesting transformations.
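The "every output lands on one line" behavior is easy to see directly: whatever input we feed a pure matrix, the output is proportional to its column vector. A small sketch (the inputs are chosen arbitrarily):

```python
import numpy as np

P = np.array([[4, 400], [8, 800]])  # pure: second column = 100 x first column

inputs = [np.array([1, 0]), np.array([0, 1]), np.array([3, -2])]
outputs = [P @ v for v in inputs]

# Every output lies along the (1, 2) direction, i.e. out[1] == 2 * out[0]
for v, out in zip(inputs, outputs):
    print(v, "->", out)
```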

So we have a bit of a problem here. We can construct pure matrices using column vector-row vector products, but those are the boring ones. How can we construct impure matrices, the interesting ones, using column vector-row vector products?

--------------------------------------------
[06:58] What I'm going to do is build four special vector-covector pairs using the old e-basis and the old ε dual basis.

So this matrix, with a 1 in the top-left and zeros everywhere else, can be written as a product: it is just the product of the basis vector e_1 and the dual basis covector ε^1.

We can do the same thing with the other basis vectors and covectors, getting the pairs (e_1, ε^1), (e_1, ε^2), (e_2, ε^1), (e_2, ε^2). That gives us four matrices.

[7:30] And you'll notice that with these four matrices we can take a linear combination: if we scale each matrix by a different amount and add them all together, we can get any general 2x2 matrix we like, just by picking the scaling numbers.

So really, this set of four products forms a basis for the matrices, that is, for the linear maps from the vector space V to itself.
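This claim is easy to verify numerically: scaling the four e_i ε^j matrices by the components L^i_j and summing reconstructs any 2x2 matrix, including the impure ones. A sketch (the target matrix is chosen arbitrarily):

```python
import numpy as np

e   = [np.array([1, 0]), np.array([0, 1])]   # basis vectors e_1, e_2
eps = [np.array([1, 0]), np.array([0, 1])]   # dual basis covectors ε^1, ε^2

# The four pure basis matrices e_i ε^j (outer products)
basis = [[np.outer(e[i], eps[j]) for j in range(2)] for i in range(2)]

L = np.array([[4, 400], [8, 1200]])          # an impure (rank-2) matrix

# Linear combination: L = Σ_ij L[i][j] * (e_i ε^j)
rebuilt = sum(L[i, j] * basis[i][j] for i in range(2) for j in range(2))
print(rebuilt)
```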

So any general linear map L can be written as this linear combination.

If we collect the coefficients, we can summarize this using Einstein notation and see that any linear map L can be written using these components, L^i_j.

Now you might be thinking: "All I see here is a vector and a covector written next to each other; how can this be a linear map?"
[8:24] Well, suppose we think of some linear map L as a linear combination of these basis linear maps, and we also have a vector v, which is a linear combination of the basis vectors.

What do we get when L acts on the input vector v?
To find out, we just substitute this expression for L and this expression for the input v. As you can see here, the dual basis covector ε^j acts on the input vector only, not on anything else; that's what covectors do: they act on vectors. [#1]
So we can use the linearity of ε^j to pull the scaling coefficient v^k out in front, leaving ε^j acting on e_k. [#2]
Remember, by definition this is just the Kronecker delta δ^j_k. [#3]
And by the Kronecker delta index-cancellation rule, we can cancel the k index and replace it with j. [#4]
And finally we get this [#5], which is the output vector written as a linear combination of the e-basis vectors.
And these coefficients are just numbers we can get from the standard matrix-multiplication rule. [#6]
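The steps [#1] through [#5] can be collected into a single derivation in Einstein notation (summing over repeated indices):

```latex
\begin{align*}
L(v) &= \big(L^i{}_j\, e_i\, \epsilon^j\big)\big(v^k e_k\big) \\
     &= L^i{}_j\, v^k\, e_i\, \epsilon^j(e_k)
        && \text{covector acts on the input; pull out } v^k \ [\#1][\#2] \\
     &= L^i{}_j\, v^k\, \delta^j{}_k\, e_i
        && \epsilon^j(e_k) = \delta^j{}_k \ [\#3] \\
     &= L^i{}_j\, v^j\, e_i
        && \text{cancel the } k \text{ index} \ [\#4][\#5]
\end{align*}
```

The resulting coefficients L^i_j v^j are exactly the numbers produced by the standard matrix-multiplication rule [#6].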
So, as you can see, this e-ε product pair really is a linear map: it takes an input vector, transforms it, and gives us an output vector. So vector-covector pairs really are linear maps. [#7]
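Numerically, a vector-covector pair acts on an input v in two steps: the covector eats the input (a dot product), and the resulting scalar scales the vector. This agrees with ordinary matrix multiplication by the corresponding pure matrix. A sketch:

```python
import numpy as np

u   = np.array([4, 8])      # vector part (column)
eps = np.array([1, 100])    # covector part (row)
v   = np.array([2, 3])      # an arbitrary input vector

# (u ε)(v): the covector eats the input, the scalar result scales the vector
pair_action = u * eps.dot(v)

# Same thing via the pure matrix u ε acting on v
matrix_action = np.outer(u, eps) @ v

print(pair_action, matrix_action)
```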

------------------------------------------------------------
[9:50] Now, earlier I gave you this set of four linear maps and said they form a basis for all possible linear maps from V to V. But just as with vectors, there is nothing special about this basis; we can just as easily choose another one. Even though these matrices look very nice and simple, we can just as easily take another set of basis matrices.

For example, these matrices here also form a basis for the set of 2x2 matrices. That might look very hard to believe, but if we choose the scaling numbers right, we can indeed get any 2x2 matrix we want.

And likewise, we don't have to choose that "old" set of four linear maps; we can just as easily choose this "new" set of four linear maps instead, built from the "new" basis vector and "new" dual basis covector pairs [as the basis for V-to-V maps].

This would be an equally valid choice of basis.

--------------------------------------------------
Let's sum up this video. We've learned that we can combine vectors and covectors to get linear maps.
Doing this with arrays, we put the column vector first on the left [#1] and the row vector second on the right [#2], and the result of that multiplication is a matrix [#3].
We can also do this algebraically, by just writing the vector next to the covector like this [#4]. That gives a linear map that can take a vector as an input, just as we showed before.
But the PROBLEM is that a single vector and a single covector combined like this [#5] create a PURE matrix, or pure linear map, and those are really boring, because they send all the output vectors in the same direction.
[11:30] So to get more interesting linear maps, we need to combine a bunch of pure linear maps together in a linear combination [#6]. That gives us the more interesting IMPURE linear maps. [#7]

The circled-times symbol ⊗ is introduced as the notation for the tensor product.

--------------------------------------
[Prev] 10. Bilinear Forms
[Next] 12. Bilinear Forms are Covector-Covector Pairs
------------------------------------
[Tensors, knowing only your times tables]
-1. Motivation
0. Tensor Definition
1. Forward and Backward Transformations
2. Vector Definition
3. Vector Transformation Rules
4. What's a Covector?
5. Covector Components
6. Covector Transformation Rules
7. Linear Maps
8. Linear Map Transformation Rules
9. Metric Tensor
10. Bilinear Forms
11. Linear Maps are Vector-Covector Pairs
12. Bilinear Forms are Covector-Covector Pairs
13. Tensor Product vs. Kronecker Product
14. Tensors are General Vector-Covector Combinations
15. Tensor Product Spaces
16. Raising/Lowering Indexes
-------------------------