DUAL BASES, OR AN INTRODUCTION TO TENSOR ANALYSIS ENGLISH VERSION II, revised and expanded
Stanislav Shcherbakov
Stanislav Shcherbakov
Master's degree student in Theoretical and Mathematical Physics, Herzen State Pedagogical University of Russia, St. Petersburg
BSc Mathematics, Sholokhov Moscow State University for Humanities, 2008
stasvs111@gmail.com
skype: stasvs5
Copyright © 2016 by Stanislav Shcherbakov
Editorial Área de Innovación y Desarrollo, S.L.
All rights reserved. This publication may not be reproduced, distributed, publicly communicated or used, in whole or in part, without prior authorization.
© Text rights: the author
ÁREA DE INNOVACIÓN Y DESARROLLO, S.L.
C/ Els Alzamora, 17 - 03802 - ALCOY (ALICANTE) SPAIN
info@3ciencias.com
First edition: April 2016
ISBN: 978-84-945424-1-1
DOI: http://dx.doi.org/10.17993/IngyTec.2016.15
CONTENTS
I. DUAL BASES
  I.1 CONCEPTS OF COVARIANT AND CONTRAVARIANT COORDINATE REPRESENTATION
  I.2 TRANSFORMATION MATRICES
  I.3 LOCAL BASES AND TRANSFORMATION MATRICES FORMATION
  I.4 DUAL BASES IN THREE DIMENSIONS
  I.5 PARTIAL DERIVATIVES AS BASE VECTORS OF DUAL BASES IN THREE DIMENSIONS
II. TENSORS
  II.1 METRIC TENSORS
  II.2 METRIC TENSORS AND LOCAL BASES
  II.3 TRANSFORMATION LAWS FOR TENSORS
BIBLIOGRAPHY
I.1 CONCEPTS OF COVARIANT AND CONTRAVARIANT COORDINATE REPRESENTATION

The representation of a vector may not be unique even within a single coordinate system if that system is not orthogonal (an oblique coordinate system, for example, is not orthogonal). How, then, should we interpret a pair such as (2, 6)? An oblique coordinate system with coordinate axes X1 and X2 is shown in fig. 1, together with two ways of representing the components of a point P. Let the point P be the end of a position vector P; then we have the following [see: Reflections on Relativity, 5.2 Tensors, Contravariant and Covariant].
Fig. 1. Ways of coordinate representation and interpretation: the contravariant way, and the second way of representing the components P1, P2 of the point P on the axes X1 and X2.
The first way of coordinate interpretation. We interpret a pair of numbers (a, b) as follows: we measure x^1 = a unit vectors along the X1 coordinate axis, and x^2 = b unit vectors along the X2 coordinate axis.

The second way of coordinate interpretation. We measure x^1 = a unit vectors perpendicular to the X2 coordinate axis, and x^2 = b unit vectors perpendicular to the X1 coordinate axis.

The first way of coordinate interpretation is contravariant. (We mark contravariant components by placing the index at the top of the coordinate symbol, as in X^1, X^2; here i, 1, 2 are not powers but coordinate component indices.) The second way of coordinate interpretation is quite different from the first, but both are correct. Moreover, even these two are not the only possible methods of coordinate interpretation. We now modify them: the first way remains the same, but the second we change as follows [see: Reflections on Relativity, 5.2 Tensors, Contravariant and Covariant; Ş. Selçuk Bayin, Mathematical Methods in Science and Engineering].

In the modified second way of coordinate representation, we draw perpendiculars from the point P to the coordinate axes X1 and X2 (as shown in fig. 2). Now we have the following (see fig. 2).
Fig. 2. Covariant and contravariant components of the vector P.
We obtain a component P_1 at the intersection of the coordinate axis X1 with the perpendicular drawn from the end of the vector P, and a component P_2 at the intersection of the coordinate axis X2 with the perpendicular drawn from the end of the vector P. Now both the covariant and the contravariant components of a vector lie on the same axes. We use this to form dual bases. Let us define the condition of dual bases first [see: Young, Eutiquo C., Vector and Tensor Analysis]. This condition is

e^i · e_k = 1 if i = k, and e^i · e_k = 0 if i ≠ k,
where e_i are the base vectors of the original basis and e^i those of the dual one. This formula is based on the dot product of the base vectors e^i and e_k, for which we have

e^k · e_i = |e^k| |e_i| cos Θ,

where Θ is the angle between the directions of the base vectors e^k and e_i. We now consider a very important fact: cos 90° = 0 and cos 0° = 1, so the dot product of base vectors e^k and e_i lying on mutually perpendicular axes is zero regardless of their lengths. This gives us the founding principle of dual bases (see fig. 3): e^i · e_j = 1 if i = j, and e^i · e_j = 0 if i ≠ j; in particular,

e^i · e_i = |e^i| |e_i| cos Θ = 1.
Fig. 3. Dual basis formation in a two-dimensional case: a) an oblique basis; b) formation of the dual base vector e^1 at 90° to e_2.
We now perform the dual basis formation in the two-dimensional case. Fig. 3 a) shows an oblique basis, and fig. 3 b) shows the formation of the dual base vector e^1 (e^1 is dual to e_1; e^1 is perpendicular to e_2). Now we adjust the length of e^1 so that the conditions

e^1 · e_1 = 1,  e^1 · e_2 = 0

are satisfied; that is, we should have

e^1 · e_1 = |e^1| |e_1| cos Θ = 1

[see: Simmonds, James G., A Brief on Tensor Analysis]. The vector e^2 we build in the same way. The result is shown in fig. 4.
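This duality condition is easy to verify numerically. The sketch below uses an arbitrarily chosen oblique 2D basis (illustrative values, not the one in the figures): in matrix language, if the rows of a matrix are the original base vectors, the dual base vectors are the rows of the inverse of its transpose, which is exactly the "adjust the length so that e^1 · e_1 = 1" construction described above.

```python
import numpy as np

# Oblique 2D basis (rows are the base vectors e_1, e_2); values are illustrative.
e = np.array([[1.0, 0.0],
              [1.0, 1.0]])   # e_2 makes a 45-degree angle with e_1

# The dual basis satisfies e^i . e_j = delta^i_j, which in matrix form reads
# E_dual @ e.T = I, so the dual base vectors are the rows of inv(e.T).
e_dual = np.linalg.inv(e.T)

# Check the duality condition e^i . e_j = 1 if i == j else 0.
gram = e_dual @ e.T
print(np.allclose(gram, np.eye(2)))  # True
```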
Fig. 4. Dual bases U and V with a common origin. (We let the basis U be the original one and the basis V the dual one.) Here only the positions of the base vectors e_i, e'_i, e^i, e'^i on the coordinate axes are shown; their scale relations are not worked out.

We see two coordinate systems (U and V) in fig. 4. The base vectors of these coordinate systems are e_i and e'_i (the basis e_i and its reciprocal (dual) basis e^i belong to the coordinate system U; the basis e'_i and its dual e'^i belong to the coordinate system V).
I.2 TRANSFORMATION MATRICES

Now the transformation of base vectors from the "old" basis to the "new" one becomes quite simple (here we let the "old" basis be that of the coordinate system U and the "new" basis that of the coordinate system V) [see: Young, Eutiquo C., Vector and Tensor Analysis]. The dot product is commutative, that is, for example, e'_i · e^j = e^j · e'_i, and e^i · e_i = 1 for dual bases, so we have

e'_i = (e'_i · e^j) e_j   and   e'^i = (e'^i · e_j) e^j.

Here (e'_i · e^j) and (e'^i · e_j) are dot products of base vectors. These formulas we represent by matrices (in a two-dimensional space here):

[e'_1]   [e'_1·e^1  e'_1·e^2] [e_1]
[e'_2] = [e'_2·e^1  e'_2·e^2] [e_2]   - a covariant transformation;

[e'^1]   [e'^1·e_1  e'^1·e_2] [e^1]
[e'^2] = [e'^2·e_1  e'^2·e_2] [e^2]   - a contravariant transformation.
A transformation from the "new" basis to the "old" one is also possible:

e_i = (e_i · e'^j) e'_j   and   e^i = (e^i · e'_j) e'^j.

We can represent this in matrix form:

[e_1]   [e_1·e'^1  e_1·e'^2] [e'_1]
[e_2] = [e_2·e'^1  e_2·e'^2] [e'_2]

[e^1]   [e^1·e'_1  e^1·e'_2] [e'^1]
[e^2] = [e^2·e'_1  e^2·e'_2] [e'^2]

All the terms in these matrices are dot products:

e'_i · e^i = |e'_i| |e^i| cos Θ   or   e_i · e'^i = |e_i| |e'^i| cos Θ.

The vectors e'_i and e^i belong to the same axis, and so do e_i and e'^i, so

cos Θ = cos 0 = 1,

and hence

e'_i · e^i = |e'_i| |e^i|   and   e_i · e'^i = |e_i| |e'^i|.

Now we turn to non-basis vectors. We have:
[a'_1]   [e'_1·e^1  e'_1·e^2] [a_1]
[a'_2] = [e'_2·e^1  e'_2·e^2] [a_2]   - a covariant transformation;

[a'^1]   [e'^1·e_1  e'^1·e_2] [a^1]
[a'^2] = [e'^2·e_1  e'^2·e_2] [a^2]   - a contravariant transformation;

and the inverse transformations

[a_1]   [e_1·e'^1  e_1·e'^2] [a'_1]
[a_2] = [e_2·e'^1  e_2·e'^2] [a'_2]

[a^1]   [e^1·e'_1  e^1·e'_2] [a'^1]
[a^2] = [e^2·e'_1  e^2·e'_2] [a'^2]

The components a_i of a vector A transform in the same fashion as the "old" basis vectors e_i do, so the basis e_i, the components a_i of the vector A, and the transformation matrix itself we call "covariant". But the components a^i of the vector A transform just as the basis vectors e^i of the dual basis do, so the basis e^i, the components a^i, and their transformation matrix we call "contravariant". The components of a vector A are shown in fig. 5.
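A minimal numerical sketch of these transformation laws, with two arbitrarily chosen oblique bases (the numbers are illustrative, not taken from the figures):

```python
import numpy as np

# Two oblique 2D bases with a common origin (rows are base vectors).
E_old = np.array([[1.0, 0.0],
                  [1.0, 1.0]])          # "old" basis e_1, e_2 (system U)
E_new = np.array([[2.0, 1.0],
                  [0.5, 2.0]])          # "new" basis e'_1, e'_2 (system V)
E_old_dual = np.linalg.inv(E_old.T)     # dual basis e^1, e^2
E_new_dual = np.linalg.inv(E_new.T)     # dual basis e'^1, e'^2

# Covariant transformation matrix: entries e'_i . e^j.
C = E_new @ E_old_dual.T
# Contravariant transformation matrix: entries e'^i . e_j.
D = E_new_dual @ E_old.T

a = np.array([3.0, -1.0])               # a non-basis vector, Cartesian components
a_cov = E_old @ a                       # covariant components a_i = e_i . a
a_con = E_old_dual @ a                  # contravariant components a^i = e^i . a

# Covariant components transform with C, contravariant ones with D:
print(np.allclose(C @ a_cov, E_new @ a))        # True
print(np.allclose(D @ a_con, E_new_dual @ a))   # True
```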
Fig. 5. Components of a non-base vector A in the coordinate systems U and V.
I.3 LOCAL BASES AND TRANSFORMATION MATRICES FORMATION

Now we turn to practical ways of forming the basis vectors e_i and e^i. The way to form them should be well defined, easy to understand and applicable to a maximum number of practical problems. We use partial derivatives as basis vectors [see: Spiegel, Murray R., Vector Analysis and an Introduction to Tensor Analysis; Young, Eutiquo C., Vector and Tensor Analysis]. We have already introduced the coordinate systems U and V. Now we introduce an orthogonal coordinate system (O, x, y) and a position vector of the common origin of the coordinate systems U and V, that is,

R = x i + y j,

where i, j are unit base vectors in the orthogonal coordinate system, and x, y are the functions defined below (x^1 = x, x^2 = y here). We represent the lines on which the axes U1, U2, V1, V2 lie as functions of the orthogonal coordinates:

u^1 = u^1(x^1, x^2),  u^2 = u^2(x^1, x^2),
v^1 = v^1(x^1, x^2),  v^2 = v^2(x^1, x^2).

We also have the inverse functions:

x^1 = x^1(u^1, u^2),  x^2 = x^2(u^1, u^2),
x^1 = x^1(v^1, v^2),  x^2 = x^2(v^1, v^2).

Now we express the basis vectors as partial derivatives of the position vector R and as gradients of the functions u^1, u^2, v^1, v^2. These vectors are base vectors, but in the general case they are not unit vectors. The vectors e_i = ∂R/∂u^i of the coordinate system U lie on the u^i axes (this is the expression of the contravariant coordinate interpretation principle); for example, e_1 = ∂R/∂u^1 lies on the u^1 axis. The vectors e'_i = ∂R/∂v^i of the coordinate system V lie on the v^i axes (this also is the expression of the contravariant coordinate interpretation principle). The gradients grad u^i are perpendicular to the other coordinate axes (this is the expression of the covariant coordinate interpretation principle); for example, grad u^1 is perpendicular to the u^2 axis and therefore lies on the v^1 axis. The gradients grad v^i lie on the u^i axes. Then we have the base vectors e^i = grad u^i and e'^i = grad v^i in the coordinate systems U and V.
Now we can write the base vectors for a two-dimensional basis (we take partial derivatives of the position vector R with respect to the contravariant coordinates u^i and v^i in both coordinate systems). For e_i = ∂R/∂u^i we have:

e_1 = ∂R/∂u^1 = (∂x^1/∂u^1, ∂x^2/∂u^1) - lies on the u^1 axis (this is the expression of the contravariant coordinate interpretation principle);

e_2 = ∂R/∂u^2 = (∂x^1/∂u^2, ∂x^2/∂u^2) - lies on the u^2 axis (this is the expression of the contravariant coordinate interpretation principle).

For e^i = grad u^i we have:

e^1 = (∂u^1/∂x^1, ∂u^1/∂x^2),
e^2 = (∂u^2/∂x^1, ∂u^2/∂x^2).

For e'_i = ∂R/∂v^i we have:

e'_1 = (∂x^1/∂v^1, ∂x^2/∂v^1),
e'_2 = (∂x^1/∂v^2, ∂x^2/∂v^2).

For e'^i = grad v^i we have:

e'^1 = (∂v^1/∂x^1, ∂v^1/∂x^2),
e'^2 = (∂v^2/∂x^1, ∂v^2/∂x^2).

Now we write these in a slightly more compact form:

e_1 = (∂x^i/∂u^1),  e_2 = (∂x^i/∂u^2),  e^1 = (∂u^1/∂x^i),  e^2 = (∂u^2/∂x^i),
e'_1 = (∂x^i/∂v^1),  e'_2 = (∂x^i/∂v^2),  e'^1 = (∂v^1/∂x^i),  e'^2 = (∂v^2/∂x^i).

We see this in fig. 6.
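As a concrete curvilinear illustration of e_i = ∂R/∂u^i and e^i = grad u^i, the sketch below uses polar coordinates (my choice of example, not one from the text) and checks the duality condition with central finite differences:

```python
import numpy as np

# Polar coordinates as a concrete curvilinear example:
# x = r cos(t), y = r sin(t);  inverse: r = hypot(x, y), t = atan2(y, x).
def R(r, t):
    return np.array([r * np.cos(t), r * np.sin(t)])

def uu(x, y):
    return np.array([np.hypot(x, y), np.arctan2(y, x)])

r0, t0 = 2.0, 0.7          # an arbitrary point away from the origin
x0, y0 = R(r0, t0)
h = 1e-6                    # step for central differences

# Tangent (local) base vectors e_i = dR/du^i.
e1 = (R(r0 + h, t0) - R(r0 - h, t0)) / (2 * h)
e2 = (R(r0, t0 + h) - R(r0, t0 - h)) / (2 * h)

# Dual base vectors e^i = grad u^i.
g1 = np.array([(uu(x0 + h, y0)[0] - uu(x0 - h, y0)[0]) / (2 * h),
               (uu(x0, y0 + h)[0] - uu(x0, y0 - h)[0]) / (2 * h)])
g2 = np.array([(uu(x0 + h, y0)[1] - uu(x0 - h, y0)[1]) / (2 * h),
               (uu(x0, y0 + h)[1] - uu(x0, y0 - h)[1]) / (2 * h)])

# Duality condition e^i . e_j = delta^i_j.
G = np.array([[g1 @ e1, g1 @ e2],
              [g2 @ e1, g2 @ e2]])
print(np.allclose(G, np.eye(2), atol=1e-5))  # True
```

Note that e_2 = ∂R/∂t has length r, while grad t has length 1/r: the base vectors are not unit vectors in general, exactly as stated above.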
Fig. 6. Dual bases and the coordinate systems U and V represented in the orthogonal coordinates. The position vector R is drawn to the common origin of the coordinate systems U and V; its partial derivatives with respect to u^i and v^i are the base vectors in the respective coordinate systems. Here only the positions of the base vectors e_i, e'_i, e^i, e'^i on the coordinate axes are shown; their scale relations are not worked out.
Now we can represent the covariant and contravariant transformation matrices in a two-dimensional basis using partial derivatives as base vectors. We have the following.

Covariant transformation: symbolically, e'_i = (e'_i · e^j) e_j; in terms of partial derivatives this is

[∂x^i/∂v^1]   [(∂x^i/∂v^1)·(∂u^1/∂x^i)  (∂x^i/∂v^1)·(∂u^2/∂x^i)] [∂x^i/∂u^1]
[∂x^i/∂v^2] = [(∂x^i/∂v^2)·(∂u^1/∂x^i)  (∂x^i/∂v^2)·(∂u^2/∂x^i)] [∂x^i/∂u^2],

which we represent symbolically as

[e'_1]   [e'_1·e^1  e'_1·e^2] [e_1]
[e'_2] = [e'_2·e^1  e'_2·e^2] [e_2]

and

[a'_1]   [e'_1·e^1  e'_1·e^2] [a_1]
[a'_2] = [e'_2·e^1  e'_2·e^2] [a_2],

so,

[a'_1]   [(∂u^1/∂x^i)(∂x^i/∂v^1)  (∂u^2/∂x^i)(∂x^i/∂v^1)] [a_1]
[a'_2] = [(∂u^1/∂x^i)(∂x^i/∂v^2)  (∂u^2/∂x^i)(∂x^i/∂v^2)] [a_2].

This matrix we can write in a more compact form:

[(∂u^1/∂x^i)(∂x^i/∂v^1)  (∂u^2/∂x^i)(∂x^i/∂v^1)]   [∂u^1/∂v^1  ∂u^2/∂v^1]
[(∂u^1/∂x^i)(∂x^i/∂v^2)  (∂u^2/∂x^i)(∂x^i/∂v^2)] = [∂u^1/∂v^2  ∂u^2/∂v^2].

This matrix (its terms are partial derivatives of the "old" coordinates with respect to the "new" ones),

[∂u^1/∂v^1  ∂u^2/∂v^1]
[∂u^1/∂v^2  ∂u^2/∂v^2] = (∂u^i/∂v^j),

is the covariant transformation matrix in two dimensions.
Contravariant transformation: symbolically, e'^i = (e'^i · e_j) e^j; in terms of partial derivatives this is

[∂v^1/∂x^i]   [(∂v^1/∂x^i)·(∂x^i/∂u^1)  (∂v^1/∂x^i)·(∂x^i/∂u^2)] [∂u^1/∂x^i]
[∂v^2/∂x^i] = [(∂v^2/∂x^i)·(∂x^i/∂u^1)  (∂v^2/∂x^i)·(∂x^i/∂u^2)] [∂u^2/∂x^i],

which is just

[e'^1]   [e'^1·e_1  e'^1·e_2] [e^1]
[e'^2] = [e'^2·e_1  e'^2·e_2] [e^2]

and

[a'^1]   [e'^1·e_1  e'^1·e_2] [a^1]
[a'^2] = [e'^2·e_1  e'^2·e_2] [a^2],

so,

[a'^1]   [(∂v^1/∂x^i)(∂x^i/∂u^1)  (∂v^1/∂x^i)(∂x^i/∂u^2)] [a^1]
[a'^2] = [(∂v^2/∂x^i)(∂x^i/∂u^1)  (∂v^2/∂x^i)(∂x^i/∂u^2)] [a^2].

This matrix we write in a more compact form:

[(∂v^1/∂x^i)(∂x^i/∂u^1)  (∂v^1/∂x^i)(∂x^i/∂u^2)]   [∂v^1/∂u^1  ∂v^1/∂u^2]
[(∂v^2/∂x^i)(∂x^i/∂u^1)  (∂v^2/∂x^i)(∂x^i/∂u^2)] = [∂v^2/∂u^1  ∂v^2/∂u^2].

So, this matrix (its terms are partial derivatives of the "new" coordinates with respect to the "old" ones),

[∂v^1/∂u^1  ∂v^1/∂u^2]
[∂v^2/∂u^1  ∂v^2/∂u^2] = (∂v^i/∂u^j),

is the contravariant transformation matrix in two dimensions. Here we should clarify why, for example, the equation
(∂v^1/∂x^i)(∂x^i/∂u^1) = ∂v^1/∂u^1

is true. Let us write the sum out:

(∂v^1/∂x^i)(∂x^i/∂u^1) = (∂v^1/∂x^1)(∂x^1/∂u^1) + (∂v^1/∂x^2)(∂x^2/∂u^1)
= (∂x^1/∂u^1)(∂v^1/∂x^1) + (∂x^2/∂u^1)(∂v^1/∂x^2) = ∂v^1/∂u^1.

Here

∂x^i/∂x^j = 1 if i = j,  and  ∂x^i/∂x^j = 0 if i ≠ j

(if x^i does not depend on x^j, that is i ≠ j, then in taking the derivative x^i is a constant, and the derivative of a constant is zero). As for the sum

(∂v^1/∂x^1)(∂x^1/∂u^1) + (∂v^1/∂x^2)(∂x^2/∂u^1),

we consider the following. Let W be a function of x^1 and x^2. Then

dW = (∂W/∂x^1) dx^1 + (∂W/∂x^2) dx^2.

But if we divide dW by dW, we get unity, that is, dW/dW = 1, or

dW/dW = (∂W/∂x^1)(dx^1/dW) + (∂W/∂x^2)(dx^2/dW) = 1,

and by the same chain-rule reasoning,

(∂v^1/∂x^1)(∂x^1/∂u^1) + (∂v^1/∂x^2)(∂x^2/∂u^1) = ∂v^1/∂u^1.
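The same chain-rule identity can be checked numerically. The sketch below uses a hypothetical linear coordinate map u = A x, so that the Jacobians are constant matrices; the entries of A are arbitrary illustrative values:

```python
import numpy as np

# Hypothetical linear coordinate map u = A x, so du/dx = A and dx/du = inv(A).
A = np.array([[1.0, -1.0],
              [1.0,  2.0]])
du_dx = A                   # entries du^i/dx^k
dx_du = np.linalg.inv(A)    # entries dx^k/du^j

# Chain rule: sum_k (du^i/dx^k)(dx^k/du^j) = du^i/du^j = delta^i_j.
print(np.allclose(du_dx @ dx_du, np.eye(2)))  # True
```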
I.4 DUAL BASES IN THREE DIMENSIONS

Now we form dual bases in three dimensions using the scalar triple product [see: Borisenko A.I., Tarapov I.E., Vector Analysis and the Elements of Tensor Calculus]. Let the original basis be e_1, e_2, e_3. We should follow the rule "the j-th coordinate is parallel to the j-th axis", and the condition of the dual basis "the j-th coordinate is perpendicular to all coordinate axes but the j-th" (this is the second way of coordinate interpretation). To meet these conditions we should have the following: every base vector of the dual basis is orthogonal to two base vectors of the original one (for example, e^1 is perpendicular to e_2 and e_3), and it crosses the coordinate axis of the original base vector in question at an angle less than 90°. Then we have the following:

1) e^1 ⊥ e_2, e^1 ⊥ e_3, and e^1 makes an acute angle with e_1;
2) e^2 ⊥ e_1, e^2 ⊥ e_3, and e^2 makes an acute angle with e_2;
3) e^3 ⊥ e_1, e^3 ⊥ e_2, and e^3 makes an acute angle with e_3.

Here "⊥" means "perpendicular". We should state here that both bases are interchangeable: we can take the dual basis and build a basis dual to it (and get the basis we called the original one as a result). Then we also have the following:

4) e_1 ⊥ e^2, e_1 ⊥ e^3, and e_1 makes an acute angle with e^1;
5) e_2 ⊥ e^1, e_2 ⊥ e^3, and e_2 makes an acute angle with e^2;
6) e_3 ⊥ e^1, e_3 ⊥ e^2, and e_3 makes an acute angle with e^3.

All these relations lead to the conclusion: "the edges of the parallelepiped built on the base vectors of the dual basis e^1, e^2, e^3 are perpendicular to the faces of the parallelepiped built on the base vectors of the original basis e_1, e_2, e_3", and the converse statement, with the roles of the two bases exchanged, is also true. All the facts cited above are connected to the definition of duality of vectors: if the angle between the vectors e^i and e_j is 90°, then e^i · e_j = 0, because cos 90° = 0; and if this angle is 0°, then cos 0° = 1; so e^i · e_j = 1 if i = j, and e^i · e_j = 0 if i ≠ j. We shall need this condition to form dual bases:

e^i · e_k = 1 if i = k, and e^i · e_k = 0 if i ≠ k.
Now let us form a dual basis. We use conditions 1), 2), 3) to do this. We need to get a vector of the dual basis which is perpendicular to two vectors of the original basis, so we employ a cross product:

e^1 = m (e_2 × e_3).   (*)

Here the cross product of the base vectors e_2 and e_3 produces a vector perpendicular to both of them. This resulting vector lies on the line on which e^1 rests (just as required by condition 1)), and the scalar m brings the length of e_2 × e_3 to the required length of e^1. We find m from the condition of duality of vectors:

e^1 · e_1 = 1.

Taking the dot product of (*) with e_1 we get

m · e_1 · (e_2 × e_3) = 1,

and then

m = 1 / (e_1 · (e_2 × e_3)).

We now insert this formula for m into (*) and get the required expression for e^1:

e^1 = (e_2 × e_3) / (e_1 · (e_2 × e_3)) = (e_2 × e_3) / V',

where V' is the volume of the parallelepiped built on the base vectors of the original basis (that is, on e_1, e_2, e_3). We can get the expressions for e^2 and e^3 in the same fashion:

e^2 = (e_3 × e_1) / (e_2 · (e_3 × e_1)) = (e_3 × e_1) / V',
e^3 = (e_1 × e_2) / (e_3 · (e_1 × e_2)) = (e_1 × e_2) / V'.
Now we can write the general formula:

e^i = (e_j × e_k) / (e_l · (e_m × e_n)),

where the indices (i, j, k) and (l, m, n) are cyclic (even) permutations of 1, 2, 3. We have expressed the base vectors of the dual basis (e^1, e^2, e^3) via the base vectors of the original basis (e_1, e_2, e_3). However, these bases are interchangeable, so we can express the base vectors of the original basis through the base vectors of the dual one. We use the same process of dual basis formation, but now the "original" basis is (e^1, e^2, e^3) and the "dual" one is (e_1, e_2, e_3); here we use conditions 4), 5), 6). We get the following formulas:

e_1 = (e^2 × e^3) / (e^1 · (e^2 × e^3)) = (e^2 × e^3) / Ṽ,
e_2 = (e^3 × e^1) / (e^2 · (e^3 × e^1)) = (e^3 × e^1) / Ṽ,
e_3 = (e^1 × e^2) / (e^3 · (e^1 × e^2)) = (e^1 × e^2) / Ṽ,

where Ṽ is the volume of the parallelepiped built on the base vectors (e^1, e^2, e^3). Now we generalize this formula:

e_i = (e^j × e^k) / (e^l · (e^m × e^n)),

where the indices (i, j, k) and (l, m, n) are cyclic (even) permutations of 1, 2, 3.

If one of the bases, say e_1, e_2, e_3, is orthonormal, then its dual just coincides with it. The reason is that the interpretation of "the j-th coordinate as parallel to the j-th axis" (the contravariant way of coordinate interpretation) and the interpretation of "the j-th coordinate as perpendicular to all coordinate axes but the j-th" (the covariant way of interpretation) coincide here: applied to position the same geometric object (say, a point P), they give the same result. In this case we have

e_1 = e^1 = i_1 (or i),  e_2 = e^2 = i_2 (or j),  e_3 = e^3 = i_3 (or k).

The lengths of these vectors are equal to unity and they are mutually perpendicular. This follows from

i_l · i_k = 1 if l = k, and i_l · i_k = 0 if l ≠ k,

or, equivalently, from

i_l × i_m = i_n,  i_n × i_n = 0,

where the indices (l, m, n) are cyclic (even) permutations of 1, 2, 3.
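A short numerical sketch of the cross-product construction, with an arbitrarily chosen oblique 3D basis (illustrative values); it checks the duality condition and the fact that the dual of the dual basis recovers the original one:

```python
import numpy as np

# An oblique 3D basis (rows are e_1, e_2, e_3); values are illustrative.
e = np.array([[1.0, 0.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.5, 0.5, 3.0]])
e1, e2, e3 = e

# Volume of the parallelepiped built on the original base vectors.
V = np.dot(e1, np.cross(e2, e3))

# Dual base vectors e^i = (e_j x e_k) / V for cyclic (i, j, k).
ed = np.array([np.cross(e2, e3), np.cross(e3, e1), np.cross(e1, e2)]) / V

# Duality condition e^i . e_j = delta^i_j.
print(np.allclose(ed @ e.T, np.eye(3)))  # True

# The dual of the dual basis recovers the original basis.
Vd = np.dot(ed[0], np.cross(ed[1], ed[2]))
edd = np.array([np.cross(ed[1], ed[2]), np.cross(ed[2], ed[0]),
                np.cross(ed[0], ed[1])]) / Vd
print(np.allclose(edd, e))  # True
```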
I.5 PARTIAL DERIVATIVES AS BASE VECTORS OF DUAL BASES IN THREE DIMENSIONS

Now let us use partial derivatives and gradients as dual base vectors in three dimensions [see: Spiegel, Murray R., Vector Analysis and an Introduction to Tensor Analysis]. We introduce the orthogonal coordinate system (O, x, y, z) and the coordinate systems U and V, just as we did in the two-dimensional case, but here all these coordinate systems are three-dimensional. We have three surfaces

U1(x, y, z) = c1,  U2(x, y, z) = c2,  U3(x, y, z) = c3.

These surfaces intersect, forming the following curves: the curve u^1 results from the intersection of U2(x, y, z) = c2 and U3(x, y, z) = c3; the curve u^2 from the intersection of U1(x, y, z) = c1 and U3(x, y, z) = c3; and the curve u^3 from the intersection of U1(x, y, z) = c1 and U2(x, y, z) = c2. All three surfaces intersect at the point P. A position vector R we draw from the origin of the orthogonal coordinate system to the point P. We represent the vector R as

R = x i + y j + z k

in the orthogonal coordinate system. Here i, j, k are unit base vectors in the orthogonal coordinate system, and x, y, z are the functions defined below. Now we can form an oblique coordinate system with its origin at the point P. Its base vectors are the partial derivatives, with respect to u^1, u^2, u^3, of the function defining the position vector R; these partial derivatives (and base vectors) are tangential to the curves u^1, u^2, u^3 at the point P. This is a local basis. If the constants c1, c2, c3 were different, we would have a different point of intersection of the surfaces, and so, of course, the tangent vectors to the curves u^1, u^2, u^3 would be different too. We can represent the curves u^1, u^2, u^3 as functions of the variables x, y, z of the orthogonal coordinate system; conversely, the variables x, y, z we can express as functions of u^1, u^2, u^3. That is:

u^1 = u^1(x, y, z),  u^2 = u^2(x, y, z),  u^3 = u^3(x, y, z),

with the inverses

x = x(u^1, u^2, u^3),  y = y(u^1, u^2, u^3),  z = z(u^1, u^2, u^3).

Then the base vector tangential to the axis u^1 is

e_1 = ∂R/∂u^1 = (∂x/∂u^1, ∂y/∂u^1, ∂z/∂u^1),

the base vector tangential to the axis u^2 is

e_2 = ∂R/∂u^2 = (∂x/∂u^2, ∂y/∂u^2, ∂z/∂u^2),

and the base vector tangential to the axis u^3 is

e_3 = ∂R/∂u^3 = (∂x/∂u^3, ∂y/∂u^3, ∂z/∂u^3).

They form a basis of the coordinate system U. We can, of course, set x = x^1, y = x^2, z = x^3, or x^i, i = 1, 2, 3 (see fig. 7).
Fig. 7. Oblique coordinate system in three dimensions. Transition to a local basis.
Now we form the basis dual to the basis e_1, e_2, e_3 which we have just built. The gradient grad U1 at the point P is perpendicular to the surface U1 = c1, so we have e^1 = grad U1. Applying the same procedure to the other gradients at the point P, we can form a basis:

e^1 = grad U1 = (∂u^1/∂x^1, ∂u^1/∂x^2, ∂u^1/∂x^3),
e^2 = grad U2 = (∂u^2/∂x^1, ∂u^2/∂x^2, ∂u^2/∂x^3),
e^3 = grad U3 = (∂u^3/∂x^1, ∂u^3/∂x^2, ∂u^3/∂x^3).

We use the base vectors e^1, e^2, e^3 of the dual basis of the coordinate system U to build the original base vectors of the coordinate system V (the original base vectors of V would differ from e^1, e^2, e^3 only in length): the axes on which e^1, e^2, e^3 lie serve as the axes on which the base vectors e'_1, e'_2, e'_3 of the coordinate system V lie. Now we choose surfaces V1, V2, V3 that all intersect at the point P (the common origin of the U and V coordinate systems). Then the intersections of the surfaces

V1 = c1,  V2 = c2,  V3 = c3

form the coordinate curves v^1, v^2, v^3 (the curve v^1 is formed by the intersection of V2 and V3, the curve v^2 by the intersection of V1 and V3, and the curve v^3 by the intersection of V1 and V2). We form the vectors tangential to the curves v^1, v^2, v^3 by taking partial derivatives of the vector function defining the position vector

R = x i + y j + z k,

where i, j, k are unit base vectors in the orthogonal coordinate system and x, y, z are the functions. That is, tangent to the curve v^1 is the basis vector

e'_1 = ∂R/∂v^1 = (∂x/∂v^1, ∂y/∂v^1, ∂z/∂v^1),

tangent to the curve v^2 is the basis vector

e'_2 = ∂R/∂v^2 = (∂x/∂v^2, ∂y/∂v^2, ∂z/∂v^2),

and tangent to the curve v^3 is the basis vector

e'_3 = ∂R/∂v^3 = (∂x/∂v^3, ∂y/∂v^3, ∂z/∂v^3).

These vectors form a basis of the coordinate system V. We can, of course, set x = x^1, y = x^2, z = x^3, or x^i, i = 1, 2, 3. All this follows from the fact that we choose the surfaces V1, V2, V3 so that their intersections produce the curves v^1, v^2, v^3 and all three surfaces intersect at the origin P. Now we can form a reciprocal (dual) basis in the coordinate system V. The gradient grad Vj at the point P is perpendicular to the surface Vj, so we have

e'^j = grad Vj,  j = 1, 2, 3.

Then

e'^1 = grad V1 = (∂v^1/∂x^1, ∂v^1/∂x^2, ∂v^1/∂x^3),
e'^2 = grad V2 = (∂v^2/∂x^1, ∂v^2/∂x^2, ∂v^2/∂x^3),
e'^3 = grad V3 = (∂v^3/∂x^1, ∂v^3/∂x^2, ∂v^3/∂x^3).
Now we can render this in a slightly different way. The base vectors e'_i (the "original" base vectors of the coordinate system V) and the base vectors e^i (the dual base vectors of the coordinate system U) are perpendicular to the surfaces Ui = ci. These surfaces Ui = ci are the faces of the parallelepiped built on the base vectors

e_i = ∂R/∂u^i = (∂x/∂u^i, ∂y/∂u^i, ∂z/∂u^i)

- vectors lying on the u^i axes. We should mention again that the curves (axes) u^i are produced by the intersection of the surfaces Uj = cj and Uk = ck (j, k ≠ i). So, the volume of the parallelepiped built on the base vectors e_i of the U coordinate system (which we consider the "original" basis), that is, on

e_i = ∂R/∂u^i = (∂x^j/∂u^i)

(we set x = x^1, y = x^2, z = x^3, or x^i, i = 1, 2, 3 here), is equal to the determinant

V(in basis U) = det [ ∂x^1/∂u^1  ∂x^2/∂u^1  ∂x^3/∂u^1
                      ∂x^1/∂u^2  ∂x^2/∂u^2  ∂x^3/∂u^2
                      ∂x^1/∂u^3  ∂x^2/∂u^3  ∂x^3/∂u^3 ] = det(∂x^j/∂u^i) = det(∂R/∂u^i).

Likewise, the base vectors e'^i (of the coordinate system V) and the base vectors e_i (of the coordinate system U) are perpendicular to the surfaces Vi = ki. These surfaces Vi = ki are the faces of the parallelepiped built on the base vectors

e'_i = ∂R/∂v^i = (∂x/∂v^i, ∂y/∂v^i, ∂z/∂v^i)

- vectors lying parallel to the v^i axes. We should mention here that the axes (curves) v^i are formed by the intersection of the surfaces Vj = kj and Vk = kk (j, k ≠ i). So, the volume of the parallelepiped built on the base vectors e'_i of the coordinate system V (the dual one), that is, on

e'_i = ∂R/∂v^i = (∂x^j/∂v^i),

if we set x = x^1, y = x^2, z = x^3, or x^i, i = 1, 2, 3, is equal to the determinant

V(in basis V) = det [ ∂x^1/∂v^1  ∂x^2/∂v^1  ∂x^3/∂v^1
                      ∂x^1/∂v^2  ∂x^2/∂v^2  ∂x^3/∂v^2
                      ∂x^1/∂v^3  ∂x^2/∂v^3  ∂x^3/∂v^3 ] = det(∂x^j/∂v^i) = det(∂R/∂v^i).

So, the edges of the parallelepiped built on the original base vectors e_1, e_2, e_3 are perpendicular to the faces of the parallelepiped built on the base vectors e^1, e^2, e^3 [see: Borisenko A.I., Tarapov I.E., Vector Analysis and the Elements of Tensor Calculus]. Now we have dual bases. Let us check the relationship
e^i · e_k = 1 if i = k, and e^i · e_k = 0 if i ≠ k

for the base vectors e_1, e_2, e_3 and e^1, e^2, e^3. Here the angle between e_1 and e^1 is acute, e_1 is perpendicular to e^2 and e^3, and e^1 is perpendicular to e_2 and e_3. Also, the angle between e_2 and e^2 is acute, e_2 is perpendicular to e^1 and e^3, and e^2 is perpendicular to e_1 and e_3. The angle between e_3 and e^3 is acute, e_3 is perpendicular to e^1 and e^2, and e^3 is perpendicular to e_1 and e_2. The same relations hold for e'_1, e'_2, e'_3 and e'^1, e'^2, e'^3 (see fig. 8).
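As a concrete three-dimensional illustration, the sketch below uses spherical coordinates (my choice of example, not one from the text) and checks both the duality condition for the tangent and gradient base vectors and the parallelepiped-volume determinant identity, using central finite differences:

```python
import numpy as np

# Spherical coordinates as a concrete 3D curvilinear example:
# x = r sin(t) cos(p), y = r sin(t) sin(p), z = r cos(t).
def R(u):
    r, t, p = u
    return np.array([r * np.sin(t) * np.cos(p),
                     r * np.sin(t) * np.sin(p),
                     r * np.cos(t)])

def U(x):
    r = np.linalg.norm(x)
    return np.array([r, np.arccos(x[2] / r), np.arctan2(x[1], x[0])])

u0 = np.array([2.0, 0.8, 0.3])   # an arbitrary point P
x0 = R(u0)
h = 1e-6
I = np.eye(3)

# Tangent base vectors e_i = dR/du^i (rows of e) and gradient base
# vectors e^i = grad U^i (rows of ed), both by central differences at P.
e = np.array([(R(u0 + h * I[i]) - R(u0 - h * I[i])) / (2 * h) for i in range(3)])
ed = np.array([[(U(x0 + h * I[k])[i] - U(x0 - h * I[k])[i]) / (2 * h)
                for k in range(3)] for i in range(3)])

# Duality condition e^i . e_j = delta^i_j.
print(np.allclose(ed @ e.T, np.eye(3), atol=1e-4))  # True

# Parallelepiped volume e_1 . (e_2 x e_3) equals det of the Jacobian dx/du.
print(np.isclose(np.dot(e[0], np.cross(e[1], e[2])), np.linalg.det(e.T)))
```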
Fig. 8. Application of partial derivatives and gradients as base vectors of the original and dual bases.
We can build the transformation matrices in three dimensions just as we built the two-dimensional ones, and we get the same matrices (but in three dimensions). For example, for the covariant transformation e'_i = (e'_i · e^j) e_j we have

[∂x^1/∂v^1  ∂x^2/∂v^1  ∂x^3/∂v^1] [∂u^1/∂x^1  ∂u^2/∂x^1  ∂u^3/∂x^1]   [∂u^1/∂v^1  ∂u^2/∂v^1  ∂u^3/∂v^1]
[∂x^1/∂v^2  ∂x^2/∂v^2  ∂x^3/∂v^2] [∂u^1/∂x^2  ∂u^2/∂x^2  ∂u^3/∂x^2] = [∂u^1/∂v^2  ∂u^2/∂v^2  ∂u^3/∂v^2]
[∂x^1/∂v^3  ∂x^2/∂v^3  ∂x^3/∂v^3] [∂u^1/∂x^3  ∂u^2/∂x^3  ∂u^3/∂x^3]   [∂u^1/∂v^3  ∂u^2/∂v^3  ∂u^3/∂v^3]

Here the rows of the first matrix are the base vectors

e'_i = (∂x/∂v^i, ∂y/∂v^i, ∂z/∂v^i) = (∂x^1/∂v^i, ∂x^2/∂v^i, ∂x^3/∂v^i)

(if we set x = x^1, y = x^2, z = x^3), and the columns of the second matrix are the base vectors

e^i = (∂u^i/∂x^1, ∂u^i/∂x^2, ∂u^i/∂x^3)

(the second matrix is the transpose of the matrix whose rows are the vectors e^i). So every entry of the product is a dot product, in agreement with the condition e'_i = (e'_i · e^j) e_j, and such a matrix multiplication leads to the formation of the covariant transformation matrix (∂u^i/∂v^j). Using the same algorithm and the condition e'^i = (e'^i · e_j) e^j, we can produce the contravariant transformation matrix (∂v^i/∂u^j).

The matrix (∂u^i/∂v^j) is covariant and the matrix (∂v^i/∂u^j) is contravariant if the coordinate system U is chosen to be the "original" one and the coordinate system V is chosen to be the dual one. We had a detailed discussion of the formation of these matrices in the two-dimensional case; the same principle of formation holds in the three-dimensional case.
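A numerical sketch of these matrices, using hypothetical linear coordinate maps x = A u and x = B v so that the Jacobians are constant (the entries of A and B are arbitrary illustrative values); it checks that the covariant matrix (∂u^i/∂v^j) and the contravariant matrix (∂v^i/∂u^j) are mutually inverse:

```python
import numpy as np

# Hypothetical linear coordinate maps x = A u and x = B v, so the Jacobians
# dx/du = A and dx/dv = B are constant, with du/dx = inv(A), dv/dx = inv(B).
A = np.array([[1.0, 0.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.5, 0.5, 3.0]])
B = np.array([[2.0, 0.0, 0.2],
              [0.0, 1.5, 0.1],
              [0.0, 0.0, 1.0]])

# Covariant matrix du^i/dv^j = (du/dx)(dx/dv), by the chain rule.
cov = np.linalg.inv(A) @ B
# Contravariant matrix dv^i/du^j = (dv/dx)(dx/du).
con = np.linalg.inv(B) @ A

# The two transformation matrices are mutually inverse.
print(np.allclose(cov @ con, np.eye(3)))  # True
```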
II.1 METRIC TENSORS

Transformation matrices we have already discussed. Now let us discuss matrices determining the length of a vector [see: Reflections on Relativity, 5.2 Tensors, Contravariant and Covariant, URL: http://www.mathpages.com/RR/s5-02/5-02.htm; Kay, David C., Tensor Calculus, Schaum's Outline; Spiegel, Murray R., Advanced Mathematics for Engineers and Scientists]. The square of a vector's length we give by a metric tensor. The dot product of a vector with itself is the square of its length in orthogonal coordinates, that is,

P² = x^1 x^1 + x^2 x^2 + … + x^n x^n.

However, that is not true if the coordinate system is non-orthogonal; in that case we represent the length of a vector by a metric tensor. In contravariant coordinates the metric tensor is given by the terms g_ij placed before the coordinate variables x^i in the formula

P² = g_11 x^1 x^1 + g_22 x^2 x^2 + … + g_nn x^n x^n + 2 g_12 x^1 x^2 + 2 g_13 x^1 x^3 + 2 g_23 x^2 x^3 + … + 2 g_nm x^n x^m.

In covariant coordinates the metric tensor is given by the terms g^ij placed before the coordinate variables x_i in the formula

P² = g^11 x_1 x_1 + g^22 x_2 x_2 + … + g^nn x_n x_n + 2 g^12 x_1 x_2 + 2 g^13 x_1 x_3 + 2 g^23 x_2 x_3 + … + 2 g^nm x_n x_m.

Here g_ij = g_ji and g^ij = g^ji, but in orthogonal coordinates g_ij = 1 if i = j and g_ij = 0 if i ≠ j (and likewise g^ij = 1 if i = j and g^ij = 0 if i ≠ j). The most important fact about dual bases is that the square of a vector's length in non-orthogonal coordinates we can also represent as follows:

P² = x^1 x_1 + x^2 x_2 + … + x^n x_n.

All the expressions given here for the square of a vector's length are defined by metric tensor matrices; these matrices are square and symmetric. The square of a vector's length is given by the Pythagorean theorem in orthogonal coordinates (here in two dimensions), but we use the law of cosines to represent a vector's length in the case of an oblique coordinate system. That we explain as follows. We think of the legs a and b of a right triangle as the components of a vector (the hypotenuse c) in an orthogonal coordinate system. Then the square of the length of the hypotenuse is the sum of the squares of the legs (the Pythagorean theorem), that is,
a² + b² = c².

However, in an oblique (rectilinear but non-orthogonal) coordinate system this is not true. By the law of cosines, the square of the length of one side of a non-right triangle (say, a) is the sum of the squares of the two other sides (say, b and c) minus twice the product of the lengths of those sides and the cosine of the angle between them, that is,

a² = b² + c² − 2bc cos Θ,

where Θ is the angle between b and c. Here we employed the contravariant way of coordinate interpretation, that is, "components are measured along the coordinate axes": in the oblique-coordinate case of the law of cosines, a is a vector, b and c are its components in the oblique coordinate system, and Θ is the angle between the coordinate axes. The term "−2bc cos Θ" looks like a manifestation of the non-zero terms of the form 2 g_nm x^n x^m or 2 g^nm x_n x_m, but it is not so; what term it actually is, and why, is explained later in this chapter.

Now we find the length of a vector P geometrically, using the law of cosines. Let us look at fig. 9. Here we see two oblique coordinate systems (U and V) with a common origin. The U1 axis is perpendicular to the V2 axis, and the U2 axis is perpendicular to the V1 axis. Let us consider the coordinate system U as the "original" one and the coordinate system V as the dual one. (We should mention again that these coordinate systems are interchangeable, and we could have chosen the coordinate system V to be the "original" one and the coordinate system U to be the dual one.) See fig. 9.
Fig. 9. Coordinate systems U and V. Contra- and covariant components of a vector P in both coordinate systems are shown. Also shown are the angles between the axes: U1 and U2 – w; V1 and V2 – W; U1 and V1 – Θ; U2 and V2 – Θ.
Relations between the angles are:

$$W + w = \pi, \qquad \Theta = W - \frac{\pi}{2} = \frac{\pi}{2} - w.$$

Now we find the length of the vector P using the theorem of cosines. The length of P is invariant, and we should get the same result in both of these coordinate representations. Geometrically (see fig. 10), in the contravariant components of the coordinate system U:

$$P^2 = (u^1)^2 + (u^2)^2 - 2u^1 u^2 \cos W.$$
Fig. 10. Finding the length of the vector P in contravariant components of the coordinate system U.
The angle W is unrelated to the basis U, so we change W for w:

$$\cos W = -\cos w,$$

and then

$$P^2 = (u^1)^2 + 2u^1 u^2 \cos w + (u^2)^2.$$
Geometrically (see fig. 11), in the contravariant components of the coordinate system V:

$$P^2 = (v^1)^2 + (v^2)^2 - 2v^1 v^2 \cos w.$$

We used the theorem of cosines here.
Fig. 11. Finding the length of the vector P in the contravariant components of the coordinate system V.
The angle w is unrelated to the basis V, so we change w for W:

$$\cos w = -\cos W.$$

Then we have

$$P^2 = (v^1)^2 + 2v^1 v^2 \cos W + (v^2)^2.$$
However, we cannot find the length of the vector P in covariant components of either coordinate system using the theorem of cosines directly, because we would get the length of a segment of a line crossing the vector P (see fig. 12).
Fig. 12. Segments whose lengths we would obtain by applying the theorem of cosines directly in covariant components of the coordinate systems U and V.
Still, we can find the length of the vector P in covariant components using the theorem of cosines. We use the following geometric relations to do that (see fig. 13):

$$v_1 = u^1 \cos\Theta \qquad \text{and} \qquad v_2 = u^2 \cos\Theta.$$
Fig. 13. Geometric relations for finding the length of the vector P in covariant components of the coordinate system V.
Now we have:

$$P^2 = (u^1)^2 + 2u^1 u^2 \cos w + (u^2)^2 = \left(\frac{v_1}{\cos\Theta}\right)^2 + 2\,\frac{v_1}{\cos\Theta}\cdot\frac{v_2}{\cos\Theta}\,\cos w + \left(\frac{v_2}{\cos\Theta}\right)^2 = \frac{(v_1)^2 - 2v_1 v_2 \cos W + (v_2)^2}{\cos^2\Theta}$$

(in the last step we used $\cos w = -\cos W$). But

$$\cos\Theta = \sin w = \sin W,$$

and then the length of the vector P in covariant components of the coordinate system V is

$$P^2 = \frac{(v_1)^2 - 2v_1 v_2 \cos W + (v_2)^2}{\sin^2 W}.$$
We get the length of the vector P in covariant components of the coordinate system U in the same way (see fig. 14):
Fig. 14. Geometric relations for finding the length of the vector P in covariant components of the coordinate system U.
We have

$$u_1 = v^1 \cos\Theta \qquad \text{and} \qquad u_2 = v^2 \cos\Theta.$$

So,

$$P^2 = (v^1)^2 + 2v^1 v^2 \cos W + (v^2)^2 = \left(\frac{u_1}{\cos\Theta}\right)^2 + 2\,\frac{u_1}{\cos\Theta}\cdot\frac{u_2}{\cos\Theta}\,\cos W + \left(\frac{u_2}{\cos\Theta}\right)^2 = \frac{(u_1)^2 - 2u_1 u_2 \cos w + (u_2)^2}{\cos^2\Theta}$$

(here we used $\cos W = -\cos w$). But, using again

$$\cos\Theta = \sin w = \sin W,$$

we get

$$P^2 = \frac{(u_1)^2 - 2u_1 u_2 \cos w + (u_2)^2}{\sin^2 w}.$$
Now we can combine all the formulas for the length of the vector P:

$$P^2 = (u^1)^2 + 2u^1 u^2 \cos w + (u^2)^2 = \frac{(u_1)^2 - 2u_1 u_2 \cos w + (u_2)^2}{\sin^2 w} = (v^1)^2 + 2v^1 v^2 \cos W + (v^2)^2 = \frac{(v_1)^2 - 2v_1 v_2 \cos W + (v_2)^2}{\sin^2 W}.$$
Here we used the following relations between the angles:

$$W + w = \pi, \qquad \Theta = W - \frac{\pi}{2} = \frac{\pi}{2} - w$$

(just a geometric fact), and

$$\cos w = -\cos W, \qquad \sin w = \sin W$$

(these we got by inserting $W = \pi - w$ or $w = \pi - W$ into the trigonometric reduction formulas), and

$$\cos\Theta = \sin w = \sin W, \qquad \sin\Theta = \cos w = -\cos W.$$

For the last two we used

$$\Theta = W - \frac{\pi}{2} = \frac{\pi}{2} - w,$$

so that

$$\cos\Theta = \cos\!\left(\frac{\pi}{2} - w\right) = \sin w = \cos\!\left(W - \frac{\pi}{2}\right) = \sin W,$$

$$\sin\Theta = \sin\!\left(\frac{\pi}{2} - w\right) = \cos w = \sin\!\left(W - \frac{\pi}{2}\right) = -\cos W.$$

We inserted these relations into the cos and sin formulas and then applied the trigonometric reduction formulas. All these four formulas for the length of the vector
P we can express through symmetric square matrices. These matrices are metric tensors. Now we can write them:

$$g_{ij}(u) = \begin{pmatrix} g_{11} & g_{12} \\ g_{21} & g_{22} \end{pmatrix} = \begin{pmatrix} 1 & \cos w \\ \cos w & 1 \end{pmatrix}$$

for

$$P^2 = (u^1)^2 + 2u^1 u^2 \cos w + (u^2)^2;$$

$$g^{ij}(u) = \begin{pmatrix} g^{11} & g^{12} \\ g^{21} & g^{22} \end{pmatrix} = \frac{1}{\sin^2 w}\begin{pmatrix} 1 & -\cos w \\ -\cos w & 1 \end{pmatrix}$$

for

$$P^2 = \frac{(u_1)^2 - 2u_1 u_2 \cos w + (u_2)^2}{\sin^2 w};$$

$$g_{ij}(v) = \begin{pmatrix} g_{11} & g_{12} \\ g_{21} & g_{22} \end{pmatrix} = \begin{pmatrix} 1 & \cos W \\ \cos W & 1 \end{pmatrix}$$

for

$$P^2 = (v^1)^2 + 2v^1 v^2 \cos W + (v^2)^2;$$

$$g^{ij}(v) = \begin{pmatrix} g^{11} & g^{12} \\ g^{21} & g^{22} \end{pmatrix} = \frac{1}{\sin^2 W}\begin{pmatrix} 1 & -\cos W \\ -\cos W & 1 \end{pmatrix}$$

for

$$P^2 = \frac{(v_1)^2 - 2v_1 v_2 \cos W + (v_2)^2}{\sin^2 W}.$$

Now the square of the length of the vector P (we let $P^2 = S^2$) is:

$$S^2 = g_{11}x^1 x^1 + g_{12}x^1 x^2 + g_{21}x^2 x^1 + g_{22}x^2 x^2$$

(length of the vector P in contravariant components),

$$S^2 = g^{11}x_1 x_1 + g^{12}x_1 x_2 + g^{21}x_2 x_1 + g^{22}x_2 x_2$$

(length of P in covariant components). These formulas we can give in a more compact form:

$$S^2 = g_{ij}x^i x^j \qquad \text{and} \qquad S^2 = g^{ij}x_i x_j.$$

Here the symbol x is used to show that all these formulas express the general principle of representing the length of a vector via a metric tensor.
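A minimal numeric sketch (assumed angle and components, not values from the text) of the compact formula $S^2 = g_{ij}x^i x^j$ for the basis U:

```python
import math

# Assumed angle w between the U axes and arbitrary contravariant components.
w = math.radians(70)
g = [[1.0, math.cos(w)], [math.cos(w), 1.0]]   # metric tensor g_ij for basis U
x = [2.0, 1.5]                                 # contravariant components x^1, x^2

# S^2 = g_ij x^i x^j, summed over both indices
S2 = sum(g[i][j] * x[i] * x[j] for i in range(2) for j in range(2))

# ...which is exactly the expanded theorem-of-cosines formula
expanded = x[0]**2 + 2 * x[0] * x[1] * math.cos(w) + x[1]**2
assert abs(S2 - expanded) < 1e-12
```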
II.2 METRIC TENSORS AND LOCAL BASES

However, the geometry we did in the last chapter has brought us a bit too far: the metric tensor is a second order tensor [see: Spiegel, Murray R., Vector Analysis and an Introduction to Tensor Analysis; Kay, David C., Tensor Calculus, Schaum's Outline]. Now we should define a first order tensor and the laws of formation of a tensor of n-th order. Let us return to a two-dimensional local basis (see the geometric situation described in "I.3 Local Bases and Transformation Matrices Formation"). We defined a local basis there for the general case. Now we choose one of the local bases, in order to form a local basis specific to some particular case (there are infinitely many of them). We represent the vector P and the dual bases U and V (which have a common origin) in the orthogonal basis (see fig. 15). We have already chosen the local basis, so the geometric situation is now static, unchanging. That is possible if its position vector R is fixed. So we can now get rid of this position vector R, because it is constant here. If a local basis is chosen and fixed, then the coordinate axes of the coordinate systems U and V are fixed in their present positions too. Now we can put the origins of all these coordinate systems at the same point (that is, the coordinate systems U, V and the orthogonal one now have a common origin). This is possible because all the elements of the orthogonal coordinate system can be transported along the position vector R. Now all the partial derivatives we use here to build base vectors, that is,

$$\frac{\partial u^i}{\partial x^j},\quad \frac{\partial u_i}{\partial x^j},\quad \frac{\partial v^i}{\partial x^j},\quad \frac{\partial v_i}{\partial x^j},\quad \frac{\partial x^j}{\partial u^i},\quad \frac{\partial x^j}{\partial u_i},\quad \frac{\partial x^j}{\partial v^i},\quad \frac{\partial x^j}{\partial v_i},$$

are calculated at the same point P, and their values are fixed (for the moment at which we are considering them, or for this particular position of the point P only). That is, all three coordinate systems have fixed axes, and their positions in relation to each other are fixed too. These partial derivatives we use as base vectors in the coordinate systems U and V. Now we have fixed base vectors of the "original" basis, that is,

$$e_i{}^{contr} = \frac{\partial x^j}{\partial u^i},\quad e'_i{}^{contr} = \frac{\partial x^j}{\partial v^i},\quad e_i{}^{cov} = \frac{\partial x^j}{\partial u_i},\quad e'_i{}^{cov} = \frac{\partial x^j}{\partial v_i}$$

(see an explanation later), and base vectors of the dual basis (they are formed of the gradients of the coordinates), which are

$$e^i{}_{contr} = \frac{\partial u^i}{\partial x^j},\quad e'^i{}_{contr} = \frac{\partial v^i}{\partial x^j},\quad e^i{}_{cov} = \frac{\partial u_i}{\partial x^j},\quad e'^i{}_{cov} = \frac{\partial v_i}{\partial x^j}.$$
We express all the elements of the coordinate systems U and V in the orthogonal basis of the orthogonal coordinate system, which we have already introduced. So the orthogonal coordinate system is "original" to the coordinate systems U and V (U and V are oblique coordinate systems, that is, their axes are straight lines but the angles between the axes are not 90°). Then the base vectors of the orthogonal basis play the role of the original base vectors, and the matrix transforming the original base vectors into the base vectors of the coordinate system U or V is a covariant one. Then all the components of the elements of the coordinate systems U and V expressed in orthogonal coordinates transform covariantly. The matrices of this transformation are

$$\frac{\partial x^j}{\partial u^i} \qquad \text{and} \qquad \frac{\partial x^j}{\partial v^i}$$

(for the bases U and V respectively). Base vectors of the coordinate systems U and V transform contravariantly. The matrices of this transformation are

$$\frac{\partial u^i}{\partial x^j} \qquad \text{and} \qquad \frac{\partial v^i}{\partial x^j}.$$

(Base vectors of the original basis transform covariantly; this is the historically accepted convention. Base vectors of oblique coordinate systems, if we represent them in the orthogonal basis, transform contravariantly.) That is just the situation we have here (see fig. 15):
Fig. 15. Dual coordinate systems U and V represented in the orthogonal coordinate system with axes X1 and X2. The axis X1 coincides with the axis V1, and the axis X2 coincides with the axis U2.
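Before forming the matrices, here is a small numeric sketch of my own (assumed angle and components) of the invariance at work here: the components of P transform one way while the base vectors transform the opposite way, so reassembling the vector from its oblique components returns the original orthogonal components.

```python
import math

# Assumed angle w between the U axes; x, y are arbitrary orthogonal components.
w = math.radians(70)
s, c = math.sin(w), math.cos(w)

x, y = 2.0, 1.5
u1 = x / s                     # contravariant u^1 (inverse relation derived in the text)
u2 = -c / s * x + y            # contravariant u^2

# Base vectors of U expressed in the orthogonal basis: dx^j/du^i
e1 = (s, c)                    # (dx/du^1, dy/du^1)
e2 = (0.0, 1.0)                # (dx/du^2, dy/du^2)

# Reassembling u^i e_i gives back (x, y): the vector P is invariant.
P = (u1 * e1[0] + u2 * e2[0], u1 * e1[1] + u2 * e2[1])
assert abs(P[0] - x) < 1e-12 and abs(P[1] - y) < 1e-12
```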
Now we form a square matrix expressing the length of the vector P (here we consider the geometrically found length of the vector P). Then we find the matrices of metric tensors, using components of the vector P in the oblique coordinate systems U and V (see fig. 15). We do not consider the vector P any more after that (its components in the coordinate systems U and V now play the role which the vector P itself has been playing so far). Then we compare the results obtained by both methods.

We can express components of the vector P geometrically in matrix form (see fig. 15). Base vectors are unitary and mutually perpendicular in the orthogonal coordinate system (O, x, y). We use this for the numerical representation of the length of the vector P. Now we tie the three coordinate systems (orthogonal, U, V) together. We can do this because the v₁, v¹ components of the vector P lie on the X1 axis, and the u₂, u² components of the vector P lie on the X2 axis. Now we set

$$v_1 = x, \qquad u_2 = y$$

to tie the three coordinate systems up. (We could have set $v^1 = x$, $u^2 = y$.) Then, using the relations of hypotenuse and legs in a right triangle and applying the sine and cosine relations of hypotenuse and legs, we find formulas for the length of the vector P (these formulas are vector sums, see fig. 15). For the basis U: contravariant components of the vector P in the orthogonal basis are expressed as follows:

$$u^1 = \frac{1}{\cos\Theta}\,x + 0\cdot y = \frac{1}{\sin w}\,x + 0\cdot y,$$

$$u^2 = -\frac{\sin\Theta}{\cos\Theta}\,x + y = -\frac{\cos w}{\sin w}\,x + y.$$
65
That is, we have a matrix
u1 sin1 w 0 x 2 cos w 1 , y u sin w or
u j u jx (a) x i
i
Now we write it's inverse
x sin w 0 u1 y cos w 1 2 , u or
x i x iu u (b) j
j
Covariant components of the vector P we express as follows:
u1 u1 u 2 sin xsin w ycos w u2 0 x y. Than, we have a matrix
u1 sin w cos w x , u 0 1 y 2 or
66
ui j ui j x (c) x and it's inverse
x sin1 w y 0
w u cos sin w 1 1 u2
or
x x ui ui . (d) j
j
For the basis V we have: contravariant components of the vector P in the orthogonal basis are as follows:

$$v^1 = x + \frac{\sin\Theta}{\cos\Theta}\,y = x + \frac{\cos w}{\sin w}\,y, \qquad v^2 = 0\cdot x + \frac{1}{\sin w}\,y,$$

and the matrix is

$$\begin{pmatrix} v^1 \\ v^2 \end{pmatrix} = \begin{pmatrix} 1 & \frac{\cos w}{\sin w} \\ 0 & \frac{1}{\sin w} \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}, \quad\text{or}\quad v^i = \frac{\partial v^i}{\partial x^j}\,x^j, \tag{e}$$

and its inverse is

$$\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1 & -\cos w \\ 0 & \sin w \end{pmatrix}\begin{pmatrix} v^1 \\ v^2 \end{pmatrix}, \quad\text{or}\quad x^i = \frac{\partial x^i}{\partial v^j}\,v^j. \tag{f}$$

Covariant components of the vector P are as follows:

$$\begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ -\cos w & \sin w \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}, \quad\text{or}\quad v_i = \frac{\partial v_i}{\partial x^j}\,x^j, \tag{g}$$

and the inverse matrix is

$$\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ \frac{\cos w}{\sin w} & \frac{1}{\sin w} \end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}, \quad\text{or}\quad x^i = \frac{\partial x^i}{\partial v_j}\,v_j. \tag{h}$$
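The eight matrices (a)-(h) come in mutually inverse pairs, which can be verified numerically (this sketch is mine; w is an assumed angle):

```python
import math

w = math.radians(70)
s, c = math.sin(w), math.cos(w)

# Each pair (M, N) below should satisfy M N = I.
pairs = [
    ([[1/s, 0.0], [-c/s, 1.0]], [[s, 0.0], [c, 1.0]]),    # (a), (b)
    ([[s, c], [0.0, 1.0]], [[1/s, -c/s], [0.0, 1.0]]),    # (c), (d)
    ([[1.0, c/s], [0.0, 1/s]], [[1.0, -c], [0.0, s]]),    # (e), (f)
    ([[1.0, 0.0], [-c, s]], [[1.0, 0.0], [c/s, 1/s]]),    # (g), (h)
]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

for M, N in pairs:
    MN = matmul(M, N)
    assert all(abs(MN[i][j] - (1.0 if i == j else 0.0)) < 1e-12
               for i in range(2) for j in range(2))
```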
We represented these matrices as Jacobians, and now we can compare them to the base vectors

$$e^i{}_{contr} = \frac{\partial u^i}{\partial x^j},\quad e^i{}_{cov} = \frac{\partial u_i}{\partial x^j},\quad e'^i{}_{contr} = \frac{\partial v^i}{\partial x^j},\quad e'^i{}_{cov} = \frac{\partial v_i}{\partial x^j},$$

$$e_i{}^{contr} = \frac{\partial x^j}{\partial u^i},\quad e_i{}^{cov} = \frac{\partial x^j}{\partial u_i},\quad e'_i{}^{contr} = \frac{\partial x^j}{\partial v^i},\quad e'_i{}^{cov} = \frac{\partial x^j}{\partial v_i}.$$
(Until this chapter we did not give these additional marks to the base vectors.) We have:

$$J_1 = \begin{pmatrix} \dfrac{\partial u^1}{\partial x} & \dfrac{\partial u^1}{\partial y} \\ \dfrac{\partial u^2}{\partial x} & \dfrac{\partial u^2}{\partial y} \end{pmatrix} = \begin{pmatrix} \dfrac{1}{\sin w} & 0 \\ -\dfrac{\cos w}{\sin w} & 1 \end{pmatrix}, \qquad J_1^T = \begin{pmatrix} \dfrac{1}{\sin w} & -\dfrac{\cos w}{\sin w} \\ 0 & 1 \end{pmatrix};$$

the rows of $J_1$ are

$$\operatorname{grad} u^i = \frac{\partial u^i}{\partial x^j} = e^i{}_{contr}.$$

Then

$$J_2 = \begin{pmatrix} \dfrac{\partial u_1}{\partial x} & \dfrac{\partial u_1}{\partial y} \\ \dfrac{\partial u_2}{\partial x} & \dfrac{\partial u_2}{\partial y} \end{pmatrix} = \begin{pmatrix} \sin w & \cos w \\ 0 & 1 \end{pmatrix}, \qquad J_2^T = \begin{pmatrix} \sin w & 0 \\ \cos w & 1 \end{pmatrix};$$

the rows of $J_2$ are

$$\operatorname{grad} u_i = \frac{\partial u_i}{\partial x^j} = e^i{}_{cov}.$$

Further,

$$J_3 = \begin{pmatrix} \dfrac{\partial v^1}{\partial x} & \dfrac{\partial v^1}{\partial y} \\ \dfrac{\partial v^2}{\partial x} & \dfrac{\partial v^2}{\partial y} \end{pmatrix} = \begin{pmatrix} 1 & \dfrac{\cos w}{\sin w} \\ 0 & \dfrac{1}{\sin w} \end{pmatrix}, \qquad J_3^T = \begin{pmatrix} 1 & 0 \\ \dfrac{\cos w}{\sin w} & \dfrac{1}{\sin w} \end{pmatrix};$$

the rows of $J_3$ are

$$\operatorname{grad} v^i = \frac{\partial v^i}{\partial x^j} = e'^i{}_{contr}.$$

And

$$J_4 = \begin{pmatrix} \dfrac{\partial v_1}{\partial x} & \dfrac{\partial v_1}{\partial y} \\ \dfrac{\partial v_2}{\partial x} & \dfrac{\partial v_2}{\partial y} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ -\cos w & \sin w \end{pmatrix}, \qquad J_4^T = \begin{pmatrix} 1 & -\cos w \\ 0 & \sin w \end{pmatrix};$$

the rows of $J_4$ are

$$\operatorname{grad} v_i = \frac{\partial v_i}{\partial x^j} = e'^i{}_{cov}.$$
Then, for

$$\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} \sin w & 0 \\ \cos w & 1 \end{pmatrix}\begin{pmatrix} u^1 \\ u^2 \end{pmatrix}$$

we have

$$J_5 = \begin{pmatrix} \dfrac{\partial x}{\partial u^1} & \dfrac{\partial x}{\partial u^2} \\ \dfrac{\partial y}{\partial u^1} & \dfrac{\partial y}{\partial u^2} \end{pmatrix} = \begin{pmatrix} \sin w & 0 \\ \cos w & 1 \end{pmatrix}, \qquad J_5^T = \begin{pmatrix} \sin w & \cos w \\ 0 & 1 \end{pmatrix};$$

the rows of $J_5^T$ are

$$\frac{\partial x^j}{\partial u^i} = e_i{}^{contr}.$$

For

$$\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} \dfrac{1}{\sin w} & -\dfrac{\cos w}{\sin w} \\ 0 & 1 \end{pmatrix}\begin{pmatrix} u_1 \\ u_2 \end{pmatrix}$$

we have

$$J_6 = \begin{pmatrix} \dfrac{1}{\sin w} & -\dfrac{\cos w}{\sin w} \\ 0 & 1 \end{pmatrix}, \qquad J_6^T = \begin{pmatrix} \dfrac{1}{\sin w} & 0 \\ -\dfrac{\cos w}{\sin w} & 1 \end{pmatrix};$$

the rows of $J_6^T$ are

$$\frac{\partial x^j}{\partial u_i} = e_i{}^{cov}.$$

For

$$\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ \dfrac{\cos w}{\sin w} & \dfrac{1}{\sin w} \end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}$$

we have

$$J_7 = \begin{pmatrix} 1 & 0 \\ \dfrac{\cos w}{\sin w} & \dfrac{1}{\sin w} \end{pmatrix}, \qquad J_7^T = \begin{pmatrix} 1 & \dfrac{\cos w}{\sin w} \\ 0 & \dfrac{1}{\sin w} \end{pmatrix};$$

the rows of $J_7^T$ are

$$\frac{\partial x^j}{\partial v_i} = e'_i{}^{cov}.$$

For

$$\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1 & -\cos w \\ 0 & \sin w \end{pmatrix}\begin{pmatrix} v^1 \\ v^2 \end{pmatrix}$$

we have

$$J_8 = \begin{pmatrix} 1 & -\cos w \\ 0 & \sin w \end{pmatrix}, \qquad J_8^T = \begin{pmatrix} 1 & 0 \\ -\cos w & \sin w \end{pmatrix};$$

the rows of $J_8^T$ are

$$\frac{\partial x^j}{\partial v^i} = e'_i{}^{contr}.$$
We now form metric tensors using these matrices. A twice contravariant metric tensor in the basis U we form as follows:

$$g^{ij}(u) = J_1 J_1^T = \begin{pmatrix} \dfrac{1}{\sin w} & 0 \\ -\dfrac{\cos w}{\sin w} & 1 \end{pmatrix}\begin{pmatrix} \dfrac{1}{\sin w} & -\dfrac{\cos w}{\sin w} \\ 0 & 1 \end{pmatrix} = \frac{1}{\sin^2 w}\begin{pmatrix} 1 & -\cos w \\ -\cos w & 1 \end{pmatrix} = \begin{pmatrix} g^{11} & g^{12} \\ g^{21} & g^{22} \end{pmatrix} = e^i{}_{contr}\cdot e^j{}_{contr}.$$

That is just the same result we obtained geometrically. Then, a twice covariant metric tensor in the basis U we form as follows:

$$g_{ij}(u) = J_5^T J_5 = \begin{pmatrix} \sin w & \cos w \\ 0 & 1 \end{pmatrix}\begin{pmatrix} \sin w & 0 \\ \cos w & 1 \end{pmatrix} = \begin{pmatrix} 1 & \cos w \\ \cos w & 1 \end{pmatrix} = \begin{pmatrix} g_{11} & g_{12} \\ g_{21} & g_{22} \end{pmatrix} = e_i{}^{contr}\cdot e_j{}^{contr},$$

and again, just the same result we obtained geometrically. Then, a twice contravariant metric tensor in the basis V is formed as follows:
$$g^{ij}(v) = J_3 J_3^T = \begin{pmatrix} 1 & \dfrac{\cos w}{\sin w} \\ 0 & \dfrac{1}{\sin w} \end{pmatrix}\begin{pmatrix} 1 & 0 \\ \dfrac{\cos w}{\sin w} & \dfrac{1}{\sin w} \end{pmatrix} = \frac{1}{\sin^2 w}\begin{pmatrix} 1 & \cos w \\ \cos w & 1 \end{pmatrix} = \frac{1}{\sin^2 W}\begin{pmatrix} 1 & -\cos W \\ -\cos W & 1 \end{pmatrix} = \begin{pmatrix} g^{11} & g^{12} \\ g^{21} & g^{22} \end{pmatrix} = e'^i{}_{contr}\cdot e'^j{}_{contr},$$

because the angle W belongs here, and not w. Again, just the same result we obtained geometrically. Then, a twice covariant metric tensor in the basis V we form as follows:

$$g_{ij}(v) = J_8^T J_8 = \begin{pmatrix} 1 & 0 \\ -\cos w & \sin w \end{pmatrix}\begin{pmatrix} 1 & -\cos w \\ 0 & \sin w \end{pmatrix} = \begin{pmatrix} 1 & -\cos w \\ -\cos w & 1 \end{pmatrix} = \begin{pmatrix} 1 & \cos W \\ \cos W & 1 \end{pmatrix} = \begin{pmatrix} g_{11} & g_{12} \\ g_{21} & g_{22} \end{pmatrix} = e'_i{}^{contr}\cdot e'_j{}^{contr},$$

because the angle W belongs here, and not w. Again, just the same result we obtained geometrically. All these calculations are possible because each product $J_i J_j^T$ consists of dot products of the vectors which are the rows of the Jacobians $J_i$.
The second important thing is that:

$$J_1^T = \begin{pmatrix} \dfrac{1}{\sin w} & -\dfrac{\cos w}{\sin w} \\ 0 & 1 \end{pmatrix} = J_6, \quad\text{that is,}\quad \left(e^i{}_{contr}\right)^T = e_i{}^{cov};$$

$$J_2^T = \begin{pmatrix} \sin w & 0 \\ \cos w & 1 \end{pmatrix} = J_5, \quad\text{that is,}\quad \left(e^i{}_{cov}\right)^T = e_i{}^{contr};$$

$$J_3^T = \begin{pmatrix} 1 & 0 \\ \dfrac{\cos w}{\sin w} & \dfrac{1}{\sin w} \end{pmatrix} = J_7, \quad\text{that is,}\quad \left(e'^i{}_{contr}\right)^T = e'_i{}^{cov};$$

$$J_4^T = \begin{pmatrix} 1 & -\cos w \\ 0 & \sin w \end{pmatrix} = J_8, \quad\text{that is,}\quad \left(e'^i{}_{cov}\right)^T = e'_i{}^{contr};$$

if we set $x = x^1$, $y = x^2$, and where we have these base vectors:

$$e^i{}_{contr} = \frac{\partial u^i}{\partial x^j},\quad e^i{}_{cov} = \frac{\partial u_i}{\partial x^j},\quad e'^i{}_{contr} = \frac{\partial v^i}{\partial x^j},\quad e'^i{}_{cov} = \frac{\partial v_i}{\partial x^j},$$

$$e_i{}^{contr} = \frac{\partial x^j}{\partial u^i},\quad e_i{}^{cov} = \frac{\partial x^j}{\partial u_i},\quad e'_i{}^{contr} = \frac{\partial x^j}{\partial v^i},\quad e'_i{}^{cov} = \frac{\partial x^j}{\partial v_i}.$$
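The transpose relations can be verified directly (a sketch of mine, with an assumed angle w and the matrices as reconstructed in this chapter):

```python
import math

w = math.radians(70)
s, c = math.sin(w), math.cos(w)

J1 = [[1/s, 0.0], [-c/s, 1.0]]; J6 = [[1/s, -c/s], [0.0, 1.0]]
J2 = [[s, c], [0.0, 1.0]];      J5 = [[s, 0.0], [c, 1.0]]
J3 = [[1.0, c/s], [0.0, 1/s]];  J7 = [[1.0, 0.0], [c/s, 1/s]]
J4 = [[1.0, 0.0], [-c, s]];     J8 = [[1.0, -c], [0.0, s]]

def transpose(M):
    return [[M[j][i] for j in range(2)] for i in range(2)]

# J1^T = J6, J2^T = J5, J3^T = J7, J4^T = J8
assert transpose(J1) == J6
assert transpose(J2) == J5
assert transpose(J3) == J7
assert transpose(J4) == J8
```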
II.3 TRANSFORMATION LAWS FOR TENSORS

A scalar is a tensor of rank zero and has only one component. A vector is a tensor of the first rank and has r components (in r-dimensional space). A metric tensor is a tensor of the second rank and has a square matrix of components (metric tensors, their derivation from tensors of the first rank, and their square matrices we discussed in previous chapters) [see: Беклемишев Д.В., Курс аналитической геометрии и линейной алгебры; Сборник задач по аналитической геометрии и линейной алгебре, под ред. Беклемишева Д.В.; Проскуряков И.В., Сборник задач по линейной алгебре; Kay, David C., Tensor Calculus, Schaum's Outline; Heinbockel J.H., Introduction to Tensor Calculus and Continuum Mechanics; Young, Eutiquio C., Vector and Tensor Analysis; and others]. Now we define tensors of various ranks and their transformation laws. A tensor of rank r in an n-dimensional space has $n^r$ components. This is true for covariant and contravariant tensors, but the situation with mixed tensors is a bit more complicated.

1. A covariant tensor of the first rank transforms as follows:

$$a'_i = \frac{\partial u^p}{\partial v^i}\,a_p.$$

2. A contravariant tensor of the first rank transforms as follows:

$$a'^i = \frac{\partial v^i}{\partial u^p}\,a^p.$$

3. A covariant tensor of the second rank transforms as follows:

$$a'_{ij} = \frac{\partial u^p}{\partial v^i}\,\frac{\partial u^q}{\partial v^j}\,a_{pq}.$$

4. A contravariant tensor of the second rank transforms as follows:

$$a'^{ij} = \frac{\partial v^i}{\partial u^p}\,\frac{\partial v^j}{\partial u^q}\,a^{pq}.$$

5. A covariant tensor of the n-th rank transforms as follows:

$$a'_{ij\ldots k} = \frac{\partial u^p}{\partial v^i}\,\frac{\partial u^q}{\partial v^j}\cdots\frac{\partial u^r}{\partial v^k}\,a_{pq\ldots r}.$$

6. A contravariant tensor of the n-th rank transforms as follows:

$$a'^{ij\ldots k} = \frac{\partial v^i}{\partial u^p}\,\frac{\partial v^j}{\partial u^q}\cdots\frac{\partial v^k}{\partial u^r}\,a^{pq\ldots r}.$$

7. Mixed tensors of the second rank transform as follows:

$$a'_i{}^{\bullet j} = \frac{\partial u^p}{\partial v^i}\,\frac{\partial v^j}{\partial u^q}\,a_p{}^{\bullet q}, \qquad a'^i{}_{\bullet j} = \frac{\partial v^i}{\partial u^p}\,\frac{\partial u^q}{\partial v^j}\,a^p{}_{\bullet q}.$$
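Law 3, for instance, can be written out in code with hypothetical numbers (the 2×2 values below are arbitrary, chosen only to illustrate the index pattern of summing over p and q):

```python
# T[p][i] stands for the matrix du^p/dv^i (assumed values, not from the text).
T = [[1.0, 0.5], [0.2, 1.0]]
a = [[2.0, 1.0], [3.0, 4.0]]    # a_pq, an arbitrary covariant tensor

# a'_ij = sum over p, q of T[p][i] * T[q][j] * a[p][q]
a_new = [[sum(T[p][i] * T[q][j] * a[p][q]
              for p in range(2) for q in range(2))
          for j in range(2)]
         for i in range(2)]

# Hand check of one component:
# a'_11 = 1*1*2 + 1*0.2*1 + 0.2*1*3 + 0.2*0.2*4 = 2.96
assert abs(a_new[0][0] - 2.96) < 1e-12
```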
That is because a product of two matrices does not commute in general. Mixed tensors of the n-th rank transform just as covariant or contravariant tensors of the n-th rank do, but we should always keep in mind the rule that a product of two matrices does not commute in general (that is, point 7 here). We have a transformation formula

$$a'^{ij\ldots k}{}_{mn\ldots l} = \frac{\partial v^i}{\partial u^p}\,\frac{\partial v^j}{\partial u^q}\cdots\frac{\partial v^k}{\partial u^r}\;\frac{\partial u^t}{\partial v^m}\,\frac{\partial u^h}{\partial v^n}\cdots\frac{\partial u^g}{\partial v^l}\;a^{pq\ldots r}{}_{th\ldots g}.$$

Here all the contravariant matrices come first, then all the covariant ones. However, in general the situation can be a bit more complicated. It should be stressed again that a product of two matrices does not commute (in general), and when we deal with a mixed tensor of the n-th rank we should always remember it. Besides, the rank of the tensor in the last formula is n, and it is a mixed tensor. So, if the number of matrices
$$\frac{\partial v^i}{\partial u^p}\,\frac{\partial v^j}{\partial u^q}\cdots\frac{\partial v^k}{\partial u^r}$$

is c, and the number of matrices

$$\frac{\partial u^t}{\partial v^m}\,\frac{\partial u^h}{\partial v^n}\cdots\frac{\partial u^g}{\partial v^l}$$

is d, then this tensor is c times contravariant and d times covariant, and

$$c + d = n.$$

Now we have to discuss volumes. In the case of two dimensions we used the area of a parallelogram as a two-dimensional "volume"; in the case of three dimensions we used the volume of a parallelepiped built on three base vectors. In the two-dimensional case we calculated the area of the parallelogram as a vector product of the two base vectors on which this parallelogram is built. In the three-dimensional case we calculated the volume of the parallelepiped using a scalar triple product of the base vectors on which this parallelepiped is built. In both cases we calculated a determinant of a matrix whose rows or columns are the base vectors (on which the parallelogram or parallelepiped in question is built). But there is no such concept as a scalar triple product of three vectors in the case of an n-dimensional "volume". Still, a determinant
of a matrix built on n-dimensional (base) vectors (which are the rows or columns of such a matrix) is an n-dimensional "volume". Metric tensors and transformation matrices in n dimensions are composed in just the same way as in two or three dimensions. Transformation laws for tensors in n dimensions are the same as in the two- or three-dimensional case.
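The determinant-as-volume statement can be illustrated with assumed base vectors in two and three dimensions (the determinants are written out by hand; this example is mine, not from the text):

```python
# Rows of each matrix are the base vectors on which the parallelogram
# (2D) or parallelepiped (3D) is built; values are arbitrary assumptions.
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

basis2 = [[1.0, 0.0], [0.6, 0.8]]
area = abs(det2(basis2))            # parallelogram area: 0.8

basis3 = [[1.0, 0.0, 0.0],
          [0.0, 1.0, 0.0],
          [0.5, 0.5, 2.0]]
volume = abs(det3(basis3))          # parallelepiped volume: 2.0

assert abs(area - 0.8) < 1e-12 and abs(volume - 2.0) < 1e-12
```

The same pattern (the absolute value of the determinant of the matrix of base vectors) extends to n dimensions, which is the point made above.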
BIBLIOGRAPHY

1. Беклемишев Д.В. Курс аналитической геометрии и линейной алгебры. – М.: Физматлит, 2003. – 304 с.
2. Борисенко А.И., Тарапов И.Е. Векторный анализ и начала тензорного исчисления. – М.: Высшая школа, 1966. – 252 с.
3. Проскуряков И.В. Сборник задач по линейной алгебре. – М.: Лаборатория базовых знаний, 2003. – 384 с.
4. Сборник задач по аналитической геометрии и линейной алгебре / под ред. Д.В. Беклемишева. – М.: Физматлит, 2001. – 496 с.
5. Heinbockel J.H. Introduction to Tensor Calculus and Continuum Mechanics. – Free Internet Edition, 1996. – 367 p.
6. Kay, David C. Tensor Calculus (Schaum's Outline). – McGraw-Hill, 1988. – 228 p.
7. Reflections on Relativity. 5.2. Tensors, Contravariant and Covariant. URL: http://www.mathpages.com/RR/s5-02/5-02.htm
8. Simmonds, James G. A Brief on Tensor Analysis. – Springer, 1994, 1982. – 112 p.
9. Spiegel, Murray R. Advanced Mathematics for Engineers and Scientists (Schaum's Outline). – McGraw-Hill, 1971. – 407 p.
10. Spiegel, Murray R. Vector Analysis and an Introduction to Tensor Analysis (Schaum's Outline). – McGraw-Hill, 1959. – 225 p.
11. Bayin, Ş. Selçuk. Mathematical Methods in Science and Engineering. – Wiley-Interscience, 2006. – 679 p.
12. Young, Eutiquio C. Vector and Tensor Analysis. – Taylor & Francis Group, 1993. – 498 p.