In the past two videos we have talked about what the tensor product does and how it works, but we haven't really given a formal definition of it. In this video I'm going to define formally what the tensor product is, and also introduce the idea of tensor product spaces.
[0:23]------------------------------------------------------------
In a previous video, I showed how combining a vector and a covector using the tensor product gives us a linear map. The coefficients of this linear map are just the entries of the array given by the Kronecker product of the column array representing the vector and the row array representing the covector.
[0:41]-----------------------------------------------------------
We also showed that combining two covectors using the tensor product gives us a bilinear form, whose coefficients are just the entries of the array given by the Kronecker product of the two row arrays associated with these covectors.
[0:53]--------------------------------------------------------------
So in this video we're finally going to properly define what the tensor product operation is.
[0:58]-----------------------------------------------------------
So all of this tensor product stuff essentially amounts to doing an array multiplication with a column on the left and a row on the right. Now consider what happens if we scale this by some number n. That's the same as scaling the column by n, or scaling the row by n. In either case we get the same result, which is the matrix up there.
Likewise, when we have the tensor product of a vector and a covector scaled by some number n, we can either scale the vector by n or scale the covector by n, and we get the same result either way. So all three things here are the same linear map.
[]-------------------------------------------------------------
If we wanted to, we could rewrite these array multiplications as Kronecker products, and rewrite these tensor products using the circled-times symbol. It's all basically the same thing.
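As a sanity check, the scaling rule can be verified numerically. Below is a small sketch using NumPy's `kron` (with made-up component values) showing that scaling the Kronecker product of a column and a row is the same as scaling either factor first:

```python
import numpy as np

# Hypothetical components: a vector as a column array, a covector as a row array
v = np.array([[2], [3]])      # vector components (column)
a = np.array([[5, 7]])        # covector components (row)
n = 4                         # an arbitrary scaling factor

# The tensor product corresponds to the Kronecker product of the arrays
scaled_product = n * np.kron(v, a)      # scale the whole product
scaled_column  = np.kron(n * v, a)      # scale the vector first
scaled_row     = np.kron(v, n * a)      # scale the covector first

# All three give the same matrix: n(v ⊗ α) = (nv) ⊗ α = v ⊗ (nα)
print(np.array_equal(scaled_product, scaled_column))  # True
print(np.array_equal(scaled_product, scaled_row))     # True
```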
[1:45]--------------------------------------------------------------
Now let's consider addition. Say we have two column-row array multiplications where the column array is the same in both terms. In this case we can factor out the column and rewrite things like this. Both expressions result in the same matrix.
Likewise, if we have a sum of two tensor products where the tensor on the left is the same in both terms, we can factor it out like this, and both expressions give the same linear map.
Alternatively, if we have a sum like this where the same row array appears in both terms, we can factor out the row.
And the same goes for the tensor product: we can factor out the tensor on the right to get this, and both expressions are the same linear map.
Finally, consider the case where the array terms have nothing in common. There's basically no factoring we can do. We could multiply everything out separately and add the results together into a single array, but there's no way to take this expression and factor it into something simpler. The same goes for a sum of tensor products: when the two terms have nothing in common, we can't simplify at all, so we just leave it as it is.
* We've gone at length through how the tensor product respects scaling and addition for columns (vectors) and rows (covectors). It may look obvious, but the point is that the tensor product and the Kronecker product are the same operation, and we are proving its linearity through matrix multiplication.
Of course we can rewrite the array multiplications as Kronecker products, and rewrite the tensor products with the circled-times symbol. It's all the same thing.
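The adding rule can be checked the same way; a minimal sketch (hypothetical values) showing that a sum of Kronecker products with a common column factors into one product:

```python
import numpy as np

# Hypothetical components for one vector and two covectors
v = np.array([[1], [2]])          # shared column (vector)
a = np.array([[3, 4]])            # covector α
b = np.array([[5, 6]])            # covector β

# A sum of two tensor products with a common left factor...
term_sum = np.kron(v, a) + np.kron(v, b)
# ...equals the product with the right factors added first: v ⊗ (α + β)
factored = np.kron(v, a + b)
print(np.array_equal(term_sum, factored))  # True
```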
[3:00]------------------------------------------------------------
OK. What we've just done is come up with scaling and adding rules. We can write them out either in the notation I've been using, or in the circled-times notation that you'll find in textbooks.
[3:14]-----------------------------------------------------
So basically, what the scaling rule says is that when we have a bunch of tensors multiplied together like this, and we scale the entire product by some number n, we can choose to move the n inside and scale any one of the tensors we like: a, b, c, d, or e, but only one of them. And if we use the circled-times notation instead, we can rewrite things like this, moving n inside to scale any one of the five tensors.
* This is the analogue of factoring out common factors and the distributive law in polynomial algebra, again showing the linearity of tensor (vector and covector) operations. Note that since these are matrix operations, the order of multiplication must not be changed.
[The content is tedious, but I'm transcribing it as English listening practice. Watching multiplication and addition spelled out at this length, let's not sneer that 'Westerners are weak at their times tables.' A tensor is not just a number but an object with a considerable level of abstraction. The video is carefully showing that such objects and their operations also have linearity, so they can be handled with simple arithmetic. Proving that basic addition and multiplication still work when introducing a new object and operation is how mathematics shows this isn't just arbitrary talk.]
[3:43]--------------------------------------------------------------
What the adding rules say is that when we have a sum of tensor products like this:
Whenever both terms have something in common on the left, we can factor it out; in this case we can factor out a and b. And when both terms have something in common on the right, we can factor that out too; in this case we can factor out d and e.
And if we want to rewrite these things using circled-times notation, we get this: we can factor out a tensor b, and then factor out d tensor e. It's the same thing, just written a little differently.
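These multi-factor scaling and factoring rules can likewise be checked numerically. A sketch with hypothetical 1-D component arrays, using iterated Kronecker products:

```python
import numpy as np
from functools import reduce

# Five hypothetical 1-D component arrays standing in for tensors a, b, c, d, e
a, b, c, d, e = (np.array([1., 2.]), np.array([3., 4.]), np.array([5., 6.]),
                 np.array([7., 8.]), np.array([9., 10.]))
n = 3.0

def kron(*xs):
    """Iterated Kronecker product: a ⊗ b ⊗ c ⊗ ..."""
    return reduce(np.kron, xs)

# Scaling rule: n(a⊗b⊗c⊗d⊗e) equals the product with n moved onto any ONE factor
lhs = n * kron(a, b, c, d, e)
assert np.allclose(lhs, kron(n * a, b, c, d, e))
assert np.allclose(lhs, kron(a, b, n * c, d, e))

# Adding rule: common factors a⊗b on the left and d⊗e on the right factor out:
# a⊗b⊗c1⊗d⊗e + a⊗b⊗c2⊗d⊗e = a⊗b⊗(c1+c2)⊗d⊗e
c1, c2 = np.array([1., 1.]), np.array([2., 5.])
summed = kron(a, b, c1, d, e) + kron(a, b, c2, d, e)
assert np.allclose(summed, kron(a, b, c1 + c2, d, e))
print("scaling and adding rules check out")
```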
[4:13]------------------------------------------------------------------
Since we've come up with scaling and adding rules, hopefully you know what that means: if we can scale and we can add, that means we have a vector space.
[The 'space' we're dealing with here is not 'space' in the geometric sense. It refers to the set in which certain operations take place and where their results live (inputs, outputs, and operations included).]
So we know that the vectors v, w, e_1, e_2 all live in the vector space V. And we also know that covectors like α, β, ε_1, ε_2 all live in the vector space V*.
We know that we can make vector-covector pairs like vα, vβ, wα, wβ, and so on.
And we can add them and scale them using the rules we just talked about here.
So that means these must be vectors in some vector space. Which vector space do they live in? The answer is that all of these live in the vector space V ⊗ V* (V circled-times V*).

And you'll notice we're using the circled-times symbol here again, and this is actually a new use of the symbol that we've never seen before, because rather than combining vectors and/or tensors, we're combining vector spaces.
[5:15]-------------------------------------------------------------
In total, we've come across three different uses of the circled-times symbol. There's the Kronecker product, which combines two arrays into a new array. There's the tensor product, which combines two tensors into a new tensor. And in the new usage here, we have the tensor product of vector spaces, which combines two vector spaces into a new vector space.

[5:32]-------------------------------------------------------------
So the term 'tensor product' can refer to two different things here. There's the tensor product of tensors, which I like to think of as the little tensor product, because it combines individual tensors. But there's also the tensor product of vector spaces, which I like to think of as the BIG tensor product, because it combines entire vector spaces together.

[5:55]--------------------------------------------------------
We've built a new vector space by combining V and V* together,

and the vectors in this vector space follow these scaling and adding rules up here.

[6:09]----------------------------------------------------------
We have this vector space V ⊗ V*. So what are the elements of this vector space? Well, the members of this vector space are (1,1)-tensors, or basically vector-covector pairs and their linear combinations.

What can these tensors do? Well, remember that vector components always have an upstairs index, while covector components always have a downstairs index.

So if we take the coefficients L here and do a summation with vector components like this, summing over j, we end up with vector components as the output, because we have one upstairs index i left over. So: vector in, vector out, which is essentially a linear map.

We can also do a summation with covector components like this, summing over i. The output will have the index j, which is downstairs, so we basically end up with the covector components of the output. So: covector in, covector out. This is a map from V* to V*.
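These two readings of the same (1,1)-tensor can be sketched in component form with NumPy's `einsum` (all numbers here are made up):

```python
import numpy as np

L = np.array([[1., 2.], [3., 4.]])   # hypothetical (1,1)-tensor components L^i_j
v = np.array([5., 6.])               # vector components v^j (upstairs index)
alpha = np.array([7., 8.])           # covector components α_i (downstairs index)

# Sum over j: L^i_j v^j leaves an upstairs i, so the output is a vector (V -> V)
vec_out = np.einsum('ij,j->i', L, v)

# Sum over i: α_i L^i_j leaves a downstairs j, so the output is a covector (V* -> V*)
covec_out = np.einsum('i,ij->j', alpha, L)

print(vec_out)    # same as L @ v
print(covec_out)  # same as alpha @ L
```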

[7:11]------------------------------------------------------------
We can also provide L with both vector components and covector components and do two summations, over i and j. Now it gives a scalar as the output, since the output has no indexes left unsummed. So in this case L can be viewed as a function taking a vector-covector pair to a scalar.

[7:31]-------------------------------------------------------
And finally, we can do the same thing with the inputs reversed, in which case L is a function taking a covector-vector pair to a scalar.

So the elements of this vector space (V ⊗ V*) can be interpreted as any of these things, depending on the number and type of inputs we give them. All of these are (1,1)-tensors, and all of them are members of the vector space (V ⊗ V*).
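The full contraction to a scalar can also be sketched numerically, with hypothetical components for L and its inputs:

```python
import numpy as np

L = np.array([[1., 2.], [3., 4.]])   # hypothetical L^i_j
v = np.array([5., 6.])               # vector components v^j
alpha = np.array([7., 8.])           # covector components α_i

# Feeding L both a covector and a vector and summing over i and j
# leaves no free indexes, so the output is a scalar: α_i L^i_j v^j
s1 = np.einsum('i,ij,j->', alpha, L, v)
# Listing the inputs in the other order contracts the same indexes,
# so it gives the same number
s2 = np.einsum('ij,j,i->', L, v, alpha)
print(np.isclose(s1, s2))  # True
```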

[7:57]----------------------------------------------------------
So let's consider something else. We have these same rules for the tensor product, but what if we use the tensor product to combine two covectors together? Covectors like α, β, and the εs live in V*. When we have a covector-covector pair like αβ or ε_1ε_2, it turns out that they live in the vector space (V* ⊗ V*).

[8:21]------------------------------------------------------
So the elements of (V* ⊗ V*) are (0,2)-tensors, which are covector-covector pairs and their linear combinations. And we already know that if we take these components and do two summations with two sets of vector components like this, we end up with a scalar. This of course is a bilinear form, which takes a pair of vectors and outputs a scalar.

[Ref]------------------------------------
'Bilinear forms are covector-covector pairs.' A bilinear form is a function that takes two vectors and outputs a scalar.

[8:45]-------------------------------------------------------
We can also do a single summation over i with one set of vector components, and we're left with the index j downstairs. So the output will be a set of covector components. In this case, W is a map from vectors to covectors: a map from V to V*.
Alternatively, we could instead choose to do the summation with the vector components over the j index, ending up with covector components with the i index on the bottom. This is another map from V to V*, but it's a different map from the one above, because we're doing the summation differently.
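A sketch of these three readings of a (0,2)-tensor W, with hypothetical component values; `np.einsum` makes the index bookkeeping explicit:

```python
import numpy as np

W = np.array([[1., 2.], [3., 4.]])   # hypothetical (0,2)-tensor components W_ij
v = np.array([5., 6.])               # vector components v^i
w = np.array([7., 8.])               # vector components w^j

# Two summations: W_ij v^i w^j is a scalar (a bilinear form: two vectors in, number out)
scalar = np.einsum('ij,i,j->', W, v, w)

# One summation over i leaves downstairs j: a map V -> V*
covec_j = np.einsum('ij,i->j', W, v)
# One summation over j instead leaves downstairs i: a DIFFERENT map V -> V*
covec_i = np.einsum('ij,j->i', W, v)

print(np.array_equal(covec_j, covec_i))  # False in general, since W isn't symmetric
```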

[9:21]-------------------------------------------------------------
So the elements of (V* ⊗ V*), which are (0,2)-tensors, can be any of these things depending on the inputs we give them.

[9:30]--------------------------------------------------------------
* Using examples, the video explains at length the basis vectors and basis covectors, tensors covering both vectors and covectors, the rules of tensor operations, and the notation (upstairs and downstairs indexes) for the components of tensors that become elements of the new spaces generated by the tensor product. It's somewhat tedious, but learn it well: raising and lowering indexes will then come for free.
So really, the basic building blocks of all these vector spaces are the two vector spaces V and V*, and those contain tensors whose components are shown here. Remember, vector components have an upstairs index and covector components have a downstairs index.
We can combine these two vector spaces into new vector spaces using the tensor product, and these spaces will have these components. Again, indexes from V go upstairs and indexes from V* go downstairs.
And we can continue to make larger and larger vector spaces using the tensor product. All these new vector spaces contain tensors whose components have different combinations of upstairs and downstairs indexes, depending on whether they are constructed using V or V*.
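Components of tensors in these larger product spaces can be sketched as multi-index arrays; for example, a pure tensor built from two vectors and a covector (all values hypothetical):

```python
import numpy as np

u = np.array([1., 2.])      # vector components u^i   (from V, index upstairs)
v = np.array([3., 4.])      # vector components v^j   (from V, index upstairs)
alpha = np.array([5., 6.])  # covector components α_k (from V*, index downstairs)

# A pure tensor in V ⊗ V ⊗ V* has components T^{ij}_k = u^i v^j α_k
T = np.einsum('i,j,k->ijk', u, v, alpha)
print(T.shape)       # (2, 2, 2): one axis per factor of the product space
print(T[1, 0, 1])    # u^1 * v^0 * α_1 = 2 * 3 * 6 = 36.0
```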

* Taking tensor products of the vector space V and the covector space V* builds ever more complex new vector spaces. Contracting with vectors and covectors only ever yields a vector, a covector, or a real number, so the result alone can't indicate which space a tensor belongs to; instead, the structure of the tensor product is reflected in the tensor's component indexes: upstairs for vectors, downstairs for covectors. The tensor product of vectors and covectors is linear, but in composite constructions the order must not be changed. Remember it is a Kronecker product: a matrix multiplication with vectors as column arrays and covectors as row arrays. The spaces generated by (V ⊗ V*) and (V* ⊗ V) are different.
[10:19]----------------------------------------------------------------
* Note: the audio here sounds like 'cracked', but presumably means the 'correct' component indexes.
Say we have some new tensor from a vector space we've never seen before. We can easily get the correct component indexes just by looking at the vector space.
Looking at this vector space, we see that its basis could be made up of a covector, a vector, a covector, and another covector in combination. And we get the component indexes just by placing the indexes in the opposite positions from the ones we see in the basis, so that all the summations work out properly.

Let's look at an example of a tensor with a complicated structure. This tensor lives in a space built from more than one tensor product. We could write it out in full, with all the basis vectors and covectors and the summation signs, but the abbreviated Einstein notation is more convenient. Just from the component (T^j_ikl), the coefficient of a basis combination, we can tell which basis elements the tensor is built from. So when writing a tensor, we omit not only the summation signs but the basis vectors as well, and write only the components. A 'component' is, after all, just a number.
[10:46]-----------------------------------------------------------------
And now we can ask: how can this tensor T act on other tensors? We can basically do any summation we like, as long as upstairs indexes are matched with downstairs indexes, and downstairs indexes are matched with upstairs indexes.

So we can do something like this with four summations, and you can see all the indexes are positioned properly.

Or we can do this, again with four summations, and the indexes work out.

Or we can do three summations, with the three indexes involved positioned correctly, leaving the index i on the output.

Or we can do this with two summations, where we have the indexes k and l left on the output.
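These contraction patterns can be sketched with `np.einsum`, assuming hypothetical components T stored with axis order (i, j, k, l), where j is the upstairs index:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical components T_i^j_kl: axis order (i, j, k, l); j is upstairs
T = rng.normal(size=(2, 2, 2, 2))
v1, v2, v3 = rng.normal(size=2), rng.normal(size=2), rng.normal(size=2)  # vectors
alpha = rng.normal(size=2)                                               # covector

# Four summations: every index paired (upstairs with downstairs) -> a scalar
scalar = np.einsum('ijkl,i,j,k,l->', T, v1, alpha, v2, v3)

# Three summations, leaving downstairs i free -> covector components (element of V*)
covec = np.einsum('ijkl,j,k,l->i', T, alpha, v2, v3)

# Two summations, leaving downstairs k and l free -> components of a (0,2)-tensor
two_lower = np.einsum('ijkl,i,j->kl', T, v1, alpha)

print(covec.shape, two_lower.shape)  # (2,) (2, 2)
```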

[11:23]---------------------------------------------------------
So in the first example, T acts like a function that takes a vector, a covector, a vector, and a vector as inputs, and outputs a number, since all the indexes are summed over.

Here the inputs are components with three upper indexes. Remember, components with upper indexes are vector components, so these are components of a tensor from the space V ⊗ V ⊗ V. And these are just covector components, so that's from V*. Again the output is a scalar, since all the indexes are summed over.

Here we have covector components, so that's from V*, and components with two upper indexes, so those are components from (V ⊗ V). Since the lower index i isn't summed over, it's left over in the output, so the output is an element of V*.

Here we have an input with one upper index and one lower index, so those are the components of a tensor from (V ⊗ V*). Since the two lower indexes k and l remain and aren't summed over, we end up with an output from (V* ⊗ V*).

So we have all these functions here, and I'm sure we could come up with many more examples of what T can do. But what do all these functions have in common?
[12:34]----------------------------------------------------------
Let's take a look at this function. It has four inputs,

and we're going to take all the inputs except one and hold them constant, like they're fixed in stone and will never change.

[Scaling] If we scale the input w, we can just bring the scaling coefficient n outside. Basically, we can either scale the input before, or scale the output after.
[Adding] Also, if we replace this input vector's components with a sum of two sets of components, we can just distribute the rest and get a sum of outputs. So basically, we can either add the inputs or add the outputs.

And the same goes for this example. If we freeze all the inputs except one in stone, we can scale the remaining input before or scale the output after. Also, we can either add the inputs or add the outputs.

So basically, all these maps are linear if we hold all the inputs constant except one.

And there's a word for that. We call functions that behave this way 'multilinear maps'. So a multilinear map is a function that is linear when all inputs except one are held constant. In other words, a multilinear map is a function that obeys these two properties (scaling and adding).
The first property says that when all inputs except one are held constant, scaling the remaining input variable is the same as scaling the output of the function.
The second property says that when we hold all inputs constant except one, feeding a sum into that input is the same as taking the sum of these two outputs here. So we can compute the two outputs separately, add them, and get the same answer.
And just to be clear, these properties need to hold for all the input slots. It doesn't matter which inputs are held constant and which input is allowed to vary: these properties have to be true for every input.
Another way of saying this is that multilinear functions are linear in each input variable. So part of what a tensor is, when we use it as a function, is that it obeys these multilinearity properties.
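A quick numerical check of multilinearity, using a hypothetical bilinear form as the tensor:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(2, 2))              # hypothetical (0,2)-tensor (a bilinear form)

def f(v, w):
    """Use B as a function of two vectors: B_ij v^i w^j."""
    return np.einsum('ij,i,j->', B, v, w)

v1, v2, w = rng.normal(size=2), rng.normal(size=2), rng.normal(size=2)
n = 3.0

# Hold the second input fixed; the map is linear in the first input:
assert np.isclose(f(n * v1, w), n * f(v1, w))           # scaling rule
assert np.isclose(f(v1 + v2, w), f(v1, w) + f(v2, w))   # adding rule

# The same properties hold in the second slot, with the first input held fixed:
assert np.isclose(f(w, n * v1), n * f(w, v1))
assert np.isclose(f(w, v1 + v2), f(w, v1) + f(w, v2))
print("f is linear in each input slot: a multilinear map")
```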

[14:50]-----------------------------------------------------------
Let's summarize what we've learned in this video.

We learned the formal definition of the tensor product, which is a way of combining tensors that obeys these scaling and adding rules.
We also learned that the tensors we get from the tensor product form new vector spaces, which are denoted using the tensor product of vector spaces, like this.
And finally, we learned that all tensors are multilinear maps, which means they're functions that take some number of inputs and are linear in one input variable while all the other inputs are held constant.
[Previous] 14. Tensors are a general vector-covector combination
[Next] 16. Raising/Lowering Indexes
------------------------------------
['Tensors' with just your times tables]
-1. Motivation
0. Tensor Definition
1. Forward and Backward Transformations
2. Vector Definition
3. Vector Transformation Rules
4. What's a Covector?
5. Covector Components
6. Covector Transformation Rules
7. Linear Maps
8. Linear Map Transformation Rules
9. Metric Tensor
10. Bilinear Forms
11. Linear Maps are Vector-Covector Pairs
12. Bilinear Forms are Covector-Covector Pairs
13. Tensor Product vs. Kronecker Product
14. Tensors are General Vector-Covector Combinations
15. Tensor Product Spaces
16. Raising/Lowering Indexes
-------------------------