
The magic of tensor algebra: Part 5 - Actions on tensors and some other theoretical questions

Contents


  1. What is a tensor and what is it for?
  2. Vector and tensor operations. Ranks of tensors
  3. Curvilinear coordinates
  4. Dynamics of a point in tensor representation
  5. Actions on tensors and some other theoretical questions
  6. Kinematics of a free rigid body. The nature of angular velocity
  7. The finite rotation of a rigid body. Properties of the rotation tensor and a method for calculating it
  8. On contractions of the Levi-Civita tensor
  9. Deriving the angular velocity tensor through the parameters of the finite rotation. Applying our heads and Maxima
  10. Obtaining the angular velocity vector. Working on the shortcomings
  11. Acceleration of a point of a body in free motion. Angular acceleration of a rigid body
  12. Rodrigues-Hamilton parameters in rigid body kinematics
  13. The Maxima CAS in problems of transforming tensor expressions. Angular velocity and acceleration in Rodrigues-Hamilton parameters
  14. A non-standard introduction to rigid body dynamics
  15. Non-free rigid body motion
  16. Properties of the inertia tensor of a rigid body
  17. A sketch about Janibekov's nut
  18. Mathematical modeling of the Janibekov effect


Introduction


Before continuing the story about the applied aspects of tensor calculus, it is absolutely necessary to touch on the topics indicated in the title. These questions have surfaced implicitly in all previous parts of the cycle. However, I allowed some inaccuracies; in particular, in articles 1 and 2 I referred to the tensor forms of the scalar and vector products simply as "contraction", although in reality they are a combination of contraction and tensor multiplication. Addition, multiplication of a tensor by a number, and the tensor product were mentioned only in passing, and symmetric and antisymmetric tensors were not discussed at all.

In this post we will discuss tensor operations in more detail; for the exercises ahead we will need to handle them confidently.

In addition, the concepts of symmetric and antisymmetric tensors are important. We will learn that any tensor can be decomposed into symmetric and antisymmetric parts, and that the antisymmetric part of a tensor can be associated with a pseudovector. Many physical quantities (for example, angular velocity) are pseudovectors, and it is the tensor approach to describing physical phenomena that reveals the true nature of such quantities.

1. Four basic actions on tensors


1.1. Multiplication of a tensor by a scalar and addition of tensors (linear combination)


Multiplying a tensor by a number means multiplying each component of the original tensor by that number. The result is a tensor of the same rank as the original.

Only tensors of the same rank can be added. In index-free notation, a linear combination of tensors looks like this

\mathbf{C} = \lambda\,\mathbf{A} + \mu\,\mathbf{B}

where $\lambda,\,\mu$ are scalars. Passing to component notation, for second-rank tensors, for example, this operation looks as follows

c_{ij} = \lambda\,a_{ij} + \mu\,b_{ij}
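For concreteness, here is a minimal numpy sketch (my own illustration, with arbitrary example values not taken from the article): both operations act componentwise and preserve the rank.

    import numpy as np

    # components of two second-rank tensors (arbitrary example values)
    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    B = np.array([[0.0, 1.0], [1.0, 0.0]])
    lam, mu = 2.0, -1.0

    # c_ij = lam * a_ij + mu * b_ij: componentwise, rank is unchanged
    C = lam * A + mu * B
    print(C)    # [[2. 3.]
                #  [5. 8.]]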

1.2. Multiplication of tensors


Multiplication can be performed on tensors of any rank. The result is a tensor whose rank is the sum of the ranks of the factors. Let, for example, $\mathbf{a}$ be a tensor of rank (0,1) and $\mathbf{B}$ a tensor of rank (0,2). Then the result of their multiplication is a tensor $\mathbf{C}$ of rank (0,3)

\mathbf{a}\,\mathbf{B} = \mathbf{C}

or, in component form

a_{i}\,B_{jk} = C_{ijk}

We have already encountered the tensor product in the second article, when considering the dyad. Let us return to it by multiplying two vectors

\mathbf{c} = \mathbf{a}\,\mathbf{b}

which in component form

c^{ij} = a^{\,i}\,b^{\,j}

gives a matrix representation of the resulting dyad

\mathbf{c} = \begin{bmatrix} a^1\,b^1 & a^1\,b^2 & a^1\,b^3 \\ a^2\,b^1 & a^2\,b^2 & a^2\,b^3 \\ a^3\,b^1 & a^3\,b^2 & a^3\,b^3 \end{bmatrix}

From the last examples, in particular, it is clear that in the general case the tensor product is not commutative

\mathbf{a}\,\mathbf{b} \ne \mathbf{b}\,\mathbf{a}

which is easy to verify by writing the multiplication in component form and writing out the matrix representation of the dyad

d^{\,ij} = b^{\,i}\,a^{\,j}

\mathbf{d} = \begin{bmatrix} b^1\,a^1 & b^1\,a^2 & b^1\,a^3 \\ b^2\,a^1 & b^2\,a^2 & b^2\,a^3 \\ b^3\,a^1 & b^3\,a^2 & b^3\,a^3 \end{bmatrix}

It is obvious that $\mathbf{c} \ne \mathbf{d}$, but it is also obvious that

\mathbf{d} = \mathbf{c}^{\,T}
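Both claims are easy to check numerically; a quick numpy sketch of mine (the vectors are arbitrary):

    import numpy as np

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([4.0, 5.0, 6.0])

    # c^ij = a^i b^j and d^ij = b^i a^j -- two tensor (outer) products
    c = np.einsum('i,j->ij', a, b)    # same as np.outer(a, b)
    d = np.einsum('i,j->ij', b, a)

    print(np.array_equal(c, d))       # False: the tensor product is not commutative
    print(np.array_equal(d, c.T))     # True:  swapping the factors transposes the dyad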

The last equality is a consequence of performing yet another action on tensors.

1.3. Rearrangement of tensor indices


In this case a new set of quantities is formed from the components of the original tensor, with a different order of indices; the rank of the tensor does not change. For example, from a tensor $\mathbf{A}$ of rank (0,3) one can obtain three other tensors $\mathbf{B}$, $\mathbf{C}$ and $\mathbf{D}$ such that

B_{ijk} = A_{jik}, \quad C_{ijk} = A_{kji}, \quad D_{ijk} = A_{ikj}

For second-rank tensors only one permutation is possible; it is called transposition

d_{ij} = c_{ji} \quad \Rightarrow \quad \mathbf{d} = \mathbf{c}^{\,T}

Above, when we examined the non-commutativity of the tensor product and swapped the vectors forming the dyad, we were in fact permuting indices, because a permutation of the factors leads to a permutation of the indices of the resulting tensor.
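In numpy terms an index permutation is a permutation of array axes; a sketch of mine, with einsum labels chosen to mirror the formulas above:

    import numpy as np

    A = np.arange(27.0).reshape(3, 3, 3)   # components A_ijk of a rank-(0,3) tensor

    B = np.einsum('jik->ijk', A)           # B_ijk = A_jik
    C = np.einsum('kji->ijk', A)           # C_ijk = A_kji
    D = np.einsum('ikj->ijk', A)           # D_ijk = A_ikj

    # the rank (the number of axes) does not change under any permutation
    print(A.shape == B.shape == C.shape == D.shape)   # True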

1.4. Contraction


Contraction is the summation of the components of a tensor over a pair of indices. This action is performed on a single tensor and yields a tensor whose rank is smaller by two. For a second-rank tensor, say, contraction gives a scalar, called the first principal invariant, or trace, of the tensor

A_{i}^{\,i} = I_1(\mathbf{A}) = \mathop{\mathrm{tr}}\mathbf{A}

Contraction is always performed over a pair of indices of different variance (one index must be an upper index and the other a lower one).

Very often contraction is combined with the product of tensors; this combination is sometimes called the inner product of tensors. In that case the tensors are first multiplied, and then the resulting tensor of total rank is contracted. An example is the notation for the scalar product that we used earlier

c = a_{\,i}\,b^{\,i}

which is equivalent to the index-free notation

c = \mathbf{a} \cdot \mathbf{b}

The dot, which resembles a scalar product in index-free notation, denotes precisely this combination of multiplication and contraction, with the contraction performed over the pair of indices adjacent to the dot. Let us show the whole process in expanded form. The covector $\mathbf{a}$ and the vector $\mathbf{b}$, when multiplied, form a tensor $\mathbf{C}$ of rank (1,1)

C_i^{\,j} = a_i\,b^{\,j}

or

\mathbf{C} = \mathbf{a}\,\mathbf{b} = \begin{bmatrix} a_1\,b^1 & a_2\,b^1 & a_3\,b^1 \\ a_1\,b^2 & a_2\,b^2 & a_3\,b^2 \\ a_1\,b^3 & a_2\,b^3 & a_3\,b^3 \end{bmatrix}

We contract the resulting tensor over its single pair of indices

C_k^{\,k} = a_1\,b^1 + a_2\,b^2 + a_3\,b^3 = c

However, this dot should not be regarded as a scalar product, because, for example, an operation such as

A_{ij} = B_{ik}\,C_j^{\,k}

is the same multiplication combined with contraction,

\mathbf{A} = \mathbf{B} \cdot \mathbf{C}

but in terms of the actions performed it is equivalent to the product of the matrices that represent the components of the tensors.
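All the contractions of this section map directly onto numpy's einsum. A sketch of mine, assuming an orthonormal basis so that upper and lower indices need not be distinguished numerically:

    import numpy as np

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([4.0, 5.0, 6.0])
    B = np.arange(9.0).reshape(3, 3)
    C = np.arange(9.0, 18.0).reshape(3, 3)   # stored with the upper index k as the row

    # trace: contraction of a second-rank tensor over its pair of indices
    print(np.einsum('ii', B) == np.trace(B))        # True

    # scalar product c = a_i b^i: multiply, then contract
    print(np.einsum('i,i', a, b) == a @ b)          # True

    # A_ij = B_ik C^k_j: multiplication combined with contraction,
    # equivalent to the product of the component matrices
    A = np.einsum('ik,kj->ij', B, C)
    print(np.array_equal(A, B @ C))                 # True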

2. Symmetric and antisymmetric tensors


Let us formulate a definition:
A tensor is said to be symmetric with respect to a pair of indices if it does not change with the interchange of these indices.

B_{ij\cdots} = B_{ji\cdots}

If a tensor does not change under the permutation of any two indices, it is absolutely symmetric.

And one more definition:
A tensor is called antisymmetric with respect to a pair of indices if, when they are rearranged, the tensor changes sign

B_{ij\cdots} = -B_{ji\cdots}

If the tensor changes sign when rearranging any two indices, then it is absolutely antisymmetric.

Any tensor can be decomposed, with respect to a chosen pair of indices, into a symmetric and an antisymmetric part. This is very easy to prove. Let a tensor $\mathbf{A}$ be given; let us perform equivalent transformations on it

A_{ij\cdots} = \frac{1}{2}\,A_{ij\cdots} + \frac{1}{2}\,A_{ij\cdots} = \frac{1}{2}\,A_{ij\cdots} + \frac{1}{2}\,A_{ij\cdots} + \frac{1}{2}\,A_{ji\cdots} - \frac{1}{2}\,A_{ji\cdots} =

= \frac{1}{2}\left(A_{ij\cdots} + A_{ji\cdots}\right) + \frac{1}{2}\left(A_{ij\cdots} - A_{ji\cdots}\right) = A_{(ij)\cdots} + A_{[ij]\cdots}

where the symmetric part of the tensor is

A_{(ij)\cdots} = \frac{1}{2}\left(A_{ij\cdots} + A_{ji\cdots}\right),

and its antisymmetric part

A_{[ij]\cdots} = \frac{1}{2}\left(A_{ij\cdots} - A_{ji\cdots}\right).

To leave no doubt, let us prove, for the tensors we obtained, symmetry

A_{(ji)\cdots} = \frac{1}{2}\left(A_{ji\cdots} + A_{ij\cdots}\right) = A_{(ij)\cdots}

and antisymmetry

A_{[ji]\cdots} = \frac{1}{2}\left(A_{ji\cdots} - A_{ij\cdots}\right) = -\frac{1}{2}\left(A_{ij\cdots} - A_{ji\cdots}\right) = -A_{[ij]\cdots}
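The whole decomposition is easy to verify numerically; a minimal numpy sketch of mine for a second-rank tensor:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.random((3, 3))             # an arbitrary second-rank tensor

    S = 0.5 * (A + A.T)                # symmetric part A_(ij)
    W = 0.5 * (A - A.T)                # antisymmetric part A_[ij]

    print(np.allclose(A, S + W))       # True: the decomposition is exact
    print(np.allclose(S, S.T))         # True: S is symmetric
    print(np.allclose(W, -W.T))        # True: W is antisymmetric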

As for second-rank tensors: if such a tensor is symmetric, it is also absolutely symmetric, and the same applies to an antisymmetric second-rank tensor. These properties follow directly from our definitions, since a second-rank tensor has only one pair of indices.

An antisymmetric tensor has a curious property. Let the second-rank tensor $\mathbf{B}$ be antisymmetric. Then its components satisfy the condition

B_{ij} = -B_{ji}

This condition can hold only if the diagonal components of the tensor are zero: under the permutation of indices (and transposition of the component matrix) the diagonal components map into themselves, and the only number that is the negative of itself is zero. The components placed symmetrically about the main diagonal have opposite signs.

Thus, of the nine components of an antisymmetric second-rank tensor, only three are independent (we are, of course, working in three-dimensional space). Three independent components are exactly what is needed to form a vector (or covector). It is logical to assume that there may be a vector uniquely determined by a given antisymmetric tensor. Let us try to find such a vector.

3. The companion vector of a second-rank tensor


To get to grips with this question I googled diligently, until the keys on my keyboard nearly overheated. I did not find a sensible and at the same time elegant answer to the question posed in this section's title, so I offer my own answer, which is in some way a compilation and reworking of the information I found.

Recall the Levi-Civita tensor, about which I have already written in detail here, and construct the following tensor

C_{ij} = \varepsilon_{ijk}\,a^{\,k} \quad (1)

Let us prove that tensor (1) is antisymmetric by interchanging its indices

C_{ji} = \varepsilon_{jik}\,a^{\,k} = -\varepsilon_{ijk}\,a^{\,k} = -C_{ij} \quad (2)

The minus sign in (2) arises because the Levi-Civita tensor is an absolutely antisymmetric third-rank tensor: a permutation of its indices corresponds to a permutation of the basis vectors in the mixed product on which this tensor is built. Thus, tensor (1) is indeed antisymmetric. We can then easily recover the vector $a^{\,k}$

\varepsilon^{\,ijl}\,C_{ij} = \varepsilon^{\,ijl}\,\varepsilon_{ijk}\,a^{\,k} = 2\,\delta_{k}^{\,l}\,a^{\,k} = 2\,a^{\,l}

a^{\,l} = \frac{1}{2}\,\varepsilon^{\,ijl}\,C_{ij} \quad (3)

Note: where the two Kronecker deltas in the derivation of (3) come from can be found in the eighth article of the cycle.

Expression (3) recovers the vector corresponding to the antisymmetric tensor $C_{ij}$. The third-rank tensor in (3) is the contravariant Levi-Civita tensor, which repeats the properties of its covariant fellow, with the only difference that

\varepsilon^{ijk} = \begin{cases} +\cfrac{1}{\sqrt{g}}, & P(i,j,k) = +1 \\ -\cfrac{1}{\sqrt{g}}, & P(i,j,k) = -1 \\ \;\;0, & i = j \,\vee\, j = k \,\vee\, k = i \end{cases} \quad (4)

for a right-handed coordinate system (for a left-handed one, the signs of the nonzero components must be reversed). Taking the properties of tensor (4) into account, the components of vector (3) are determined uniquely

a^{\,1} = \frac{1}{2\sqrt{g}}\left(C_{23} - C_{32}\right) = \frac{1}{\sqrt{g}}\,C_{23}, \quad a^{\,2} = \frac{1}{\sqrt{g}}\,C_{31}, \quad a^{\,3} = \frac{1}{\sqrt{g}}\,C_{12}

or, if we write out the component matrix of the antisymmetric tensor $C_{ij}$, we see the following pattern

\mathbf{C} = \begin{bmatrix} 0 & \sqrt{g}\,a^3 & -\sqrt{g}\,a^2 \\ -\sqrt{g}\,a^3 & 0 & \sqrt{g}\,a^1 \\ \sqrt{g}\,a^2 & -\sqrt{g}\,a^1 & 0 \end{bmatrix}

We note one more fact that cannot go unmentioned, though we leave a rigorous proof beyond the scope of this article (we will return to it later). If the tensor $C_{ij}$ is a true tensor, then the corresponding vector (3) is a pseudovector, or axial vector. A pseudovector transforms like a vector under rotations of the coordinate axes, but when the basis changes from right-handed to left-handed (or vice versa), it reverses its direction (all of its components change sign).

If in (1) the vector $a^{\,k}$ is a true vector, then the antisymmetric tensor formed from it is a pseudotensor: its components transform like those of a true tensor under rotations of the coordinate axes, but change sign when the orientation of the basis changes.

Thus, any antisymmetric tensor can be associated with a pseudovector obtained according to expression (3).
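Relations (1) and (3) can be illustrated numerically; a sketch of mine, assuming a Cartesian right-handed basis, so that $g = 1$ and the placement of indices does not matter numerically:

    import numpy as np

    # permutation symbol e_ijk (equals the Levi-Civita tensor when g = 1)
    eps = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[i, j, k] = 1.0     # even permutations of (1, 2, 3)
        eps[i, k, j] = -1.0    # odd permutations

    a = np.array([1.0, 2.0, 3.0])

    # (1): C_ij = eps_ijk a^k -- the antisymmetric tensor built from a
    C = np.einsum('ijk,k->ij', eps, a)
    print(np.allclose(C, -C.T))          # True: C is antisymmetric

    # (3): a^l = 1/2 eps^ijl C_ij recovers the companion vector
    a_back = 0.5 * np.einsum('ijl,ij->l', eps, C)
    print(np.allclose(a_back, a))        # True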

Now let us show that a symmetric tensor has no corresponding pseudovector, or rather, that this pseudovector is zero. Suppose we are given a symmetric tensor $\mathbf{G}$, for which the equality holds

G_{ij} = G_{ji} \quad (5)

Suppose there is a vector

b^{\,k} = \frac{1}{2}\,\varepsilon^{\,ijk}\,G_{ij} \quad (6)

Let us interchange the indices in (6), taking the symmetry (5) into account

b^{\,k} = \frac{1}{2}\,\varepsilon^{\,jik}\,G_{ji} = -\frac{1}{2}\,\varepsilon^{\,ijk}\,G_{ij} = -b^{\,k} \quad (7)

Expression (7) can hold in only one case:

b^{\,k} = -b^{\,k} \quad \Rightarrow \quad b^{\,k} = 0 \quad (8)

That is, multiplying a symmetric tensor by the Levi-Civita tensor and then contracting over two pairs of indices gives the zero vector. If we do the same with an arbitrary second-rank tensor,

\frac{1}{2}\,\varepsilon^{ijk}\,T_{ij} = \frac{1}{2}\,\varepsilon^{ijk}\left(T_{ij}^{\,S} + T_{ij}^{\,A}\right) = \frac{1}{2}\,\varepsilon^{ijk}\,T_{ij}^{\,A}

the output will be a pseudovector corresponding to its antisymmetric part.
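A numerical check of this last claim, again in a Cartesian basis (my own sketch):

    import numpy as np

    # permutation symbol, as in the previous sketch
    eps = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[i, j, k], eps[i, k, j] = 1.0, -1.0

    T = np.arange(9.0).reshape(3, 3)   # an arbitrary second-rank tensor
    S = 0.5 * (T + T.T)                # its symmetric part
    W = 0.5 * (T - T.T)                # its antisymmetric part

    b_T = 0.5 * np.einsum('ijk,ij->k', eps, T)
    b_S = 0.5 * np.einsum('ijk,ij->k', eps, S)
    b_W = 0.5 * np.einsum('ijk,ij->k', eps, W)

    print(np.allclose(b_S, 0.0))       # True: the symmetric part contributes nothing
    print(np.allclose(b_T, b_W))       # True: only the antisymmetric part survives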

Conclusion


This has turned out to be another immersion in the theory of tensor calculus. But the immersion is undoubtedly necessary: we will use the results collected in this article in the further articles of the cycle. Thank you, readers, for your attention!

To be continued…

Source: https://habr.com/ru/post/261991/

