From: film.art@gmail.com (JanPB)
Newsgroups: sci.physics.relativity
Subject: Re: Vector notation?
Date: Wed, 7 Aug 2024 09:47:13 +0000
Organization: novaBBS

On Fri, 2 Aug 2024 10:54:50 +0000, Stefan Ram wrote:

> Mikko wrote or quoted:
>>Matrices do not match very well with the needs of physics. Many
>>physical quantities require more general hypermatrices. But then one
>>must be very careful that the multiplications are done correctly.
>
> In the meantime, I have written about this for the case of a ( 0, 2 )
> tensor, i.e., a bilinear form (such as "eta"). It turns out that for
> this case, a single simple rule for matrix multiplication suffices.
>
> To give it the right context, the following text starts with a small
> introduction to the linear algebra of vectors and forms and arrives
> at the actual matrix multiplication only near the end.
>
> (If one is not into bilinear algebra, one may stop reading now!)
>
> The point is that the matrix representation of a ( 0, 2 )-tensor
> should actually be a row of rows, not a row of columns, as one often
> sees in certain texts. A row of columns, on the other hand, would be
> suitable for a ( 1, 1 )-tensor. I got this from a text by Viktor T.
> Toth; all errors here are my own, though.
>
> But since I want to start with the basics, this matrix representation
> will only be dealt with towards the end of this text, to which
> impatient readers may of course jump.
>
> In this text, I limit myself to the real vector spaces R, R^1, R^2,
> etc. For a vector space R^n, let the set of indices be
> I := { i | 0 <= i < n }.
>
> Forms
>
> The structure-preserving mappings f of a vector space into the field
> R are precisely the linear mappings of a vector to R.
>
> I call such a linear mapping f of a vector to R a /form/ or a
> /covector/.
>
> Let f_i be n forms. If the tuple ( f_i( v ))_{i∈I} of a vector v is
> equal to the tuple ( f_i( w ))_{i∈I} of a vector w if and only if
> v=w, I call the tuple ( f_i )_{i∈I} a /basis/ of the vector space.
> The numbers v^i := f_i( v ) are the /(contravariant) coordinates/ of
> the vector v in the basis ( f_i )_{i∈I}.
>
> I call the vector e_i, for which f_j( e_i ) is 1 for i=j and 0 for
> i<>j, the i-th /basis vector/ of the basis ( f_i )_{i∈I}.
>
> If f is a form, then the tuple ( f( e_i ))_{i∈I} gives the
> /(covariant) coordinates/ of the form f.
>
> Matrices
>
> We write the covariant coordinates f_i of a form f in a basis B as a
> "horizontal" 1xn matrix M( B, f ):
>
> ( f_0, f_1, ..., f_(n-1) ).
>
> The contravariant coordinates v^i of a vector v we write in a basis
> B as a "vertical" nx1 matrix M( B, v ):
>
> ( v^0     )
> ( v^1     )
> ( ...     )
> ( v^(n-1) ).
>
> The application f( v ) of a form to a vector then results from the
> matrix multiplication M( B, f ) X M( B, v ).
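[To make this rule concrete, here is a minimal numerical sketch in
Python; numpy is assumed, and the coordinate values are invented
purely for illustration:

    import numpy as np

    # Covariant coordinates of a form f, as a "horizontal" 1x3 matrix:
    f = np.array([[1.0, 2.0, 3.0]])

    # Contravariant coordinates of a vector v, as a "vertical" 3x1 matrix:
    v = np.array([[4.0], [5.0], [6.0]])

    # f(v) = M(B,f) X M(B,v): the sum over i of column i of the first
    # matrix times row i of the second matrix.
    print(f @ v)   # [[32.]], since 1*4 + 2*5 + 3*6 = 32
]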
> Rule For The Matrix Multiplication X
>
> .---------------------------------------------------------------.
> | The /multiplication X/ of a 1xn matrix with an nx1 matrix is  |
> | a sum with n summands, where summand i is the product of      |
> | column i of the first matrix with row i of the second matrix. |
> '---------------------------------------------------------------'
>
> ( 0, 2 )-Tensors
>
> We also call the forms (covectors) "( 0, 1 )-tensors" to express
> that they linearly make a scalar out of 0 covectors and one vector.
>
> Accordingly, a /( 0, 2 )-tensor/ is a bilinear mapping (a bilinear
> form) that makes a scalar out of 0 covectors and /two/ vectors.
>
> Matrix Representation Of ( 0, 2 )-Tensors
>
> Following Viktor T. Toth, for us the matrix representation of a
> ( 0, 2 )-tensor f is a horizontal 1xn matrix M( B, f ) whose
> individual components are themselves horizontal 1xn matrices of
> scalars. The scalar at position j of component i of M( B, f ) is
> f( e^i, e^j ), where the superscripts here do not indicate
> components of e but label basis vectors.
>
> (PS: Here I am not sure about the correct order, "f( e^i, e^j )" or
> "f( e^j, e^i )", but this is a technical detail.)
>
> Let us now look at the case n=3 and see how we calculate the
> application of such a tensor f to two vectors v and w with the
> matrix representations:
>
>                                                           ( v^0 )   ( w^0 )
> ( (f_00,f_01,f_02) (f_10,f_11,f_12) (f_20,f_21,f_22) ) X ( v^1 ) X ( w^1 )
>                                                           ( v^2 )   ( w^2 )
>
> We start with the first product:
>
>                                                           ( v^0 )
> ( (f_00,f_01,f_02) (f_10,f_11,f_12) (f_20,f_21,f_22) ) X ( v^1 )
>                                                           ( v^2 ).
>
> According to our rule for the matrix multiplication X, this is the
> sum
>
> v^0*(f_00,f_01,f_02) + v^1*(f_10,f_11,f_12) + v^2*(f_20,f_21,f_22) =
>
> (v^0*f_00, v^0*f_01, v^0*f_02) +
> (v^1*f_10, v^1*f_11, v^1*f_12) +
> (v^2*f_20, v^2*f_21, v^2*f_22) =
>
> (v^0*f_00 + v^1*f_10 + v^2*f_20,
>  v^0*f_01 + v^1*f_11 + v^2*f_21,
>  v^0*f_02 + v^1*f_12 + v^2*f_22).
>
> This is again a "horizontal" 1xn matrix (written vertically here
> because it does not fit on one line), which can be multiplied by the
> vertical nx1 matrix for w according to our rule for the matrix
> multiplication X:
>
> (v^0*f_00 + v^1*f_10 + v^2*f_20,
>  v^0*f_01 + v^1*f_11 + v^2*f_21,   ( w^0 )
>  v^0*f_02 + v^1*f_12 + v^2*f_22) X ( w^1 )
>                                    ( w^2 ).
>
> According to our rule for the matrix multiplication X, this results
> in the number
>
> w^0*(v^0*f_00 + v^1*f_10 + v^2*f_20) +
> w^1*(v^0*f_01 + v^1*f_11 + v^2*f_21) +
> w^2*(v^0*f_02 + v^1*f_12 + v^2*f_22).
>
> So the multiplication of the given matrix representation of a
> ( 0, 2 )-tensor with the matrix representations of two vectors
> correctly results in a /number/, using the single uniform rule for
> the matrix multiplication X.
>
> In the literature (especially on special relativity), the "Minkowski
> metric", which is a ( 0, 2 )-tensor, is written as a row of
> /columns/. The application to two vectors would then be:
>
> ( f_00, f_01, f_02 )   ( v^0 )   ( w^0 )
> ( f_10, f_11, f_12 )   ( v^1 )   ( w^1 )
> ( f_20, f_21, f_22 )   ( v^2 )   ( w^2 )  =
>
> ( f_00*v^0 + f_01*v^1 + f_02*v^2 )   ( w^0 )
> ( f_10*v^0 + f_11*v^1 + f_12*v^2 )   ( w^1 )
> ( f_20*v^0 + f_21*v^1 + f_22*v^2 )   ( w^2 )
>
> Now the product of /two column vectors/ appears, which is not
> defined as a matrix multiplication! (Matrix multiplication is not
> the same as the dot product of two vectors.)
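[As a numerical check of the "row of rows" rule above, a small Python
sketch; numpy is assumed, the component values are invented, and
numpy's einsum is used only to compare against the usual index
expression f_ij v^i w^j:

    import numpy as np

    # Component array of a (0,2)-tensor f, with f[i][j] = f(e_i, e_j):
    f = np.arange(9.0).reshape(3, 3)
    v = np.array([1.0, 2.0, 3.0])
    w = np.array([4.0, 5.0, 6.0])

    # Step 1: contract v against the outer ("row of rows") index,
    # leaving a single horizontal row: v^0*f[0] + v^1*f[1] + v^2*f[2].
    row = sum(v[i] * f[i] for i in range(3))

    # Step 2: contract that row against w, giving a number.
    result = sum(row[j] * w[j] for j in range(3))

    # Compare with the standard index formula f_ij v^i w^j:
    print(result, np.einsum('ij,i,j->', f, v, w))   # both print 462.0
]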
A nice summary. In the matrix language we are either in R^n or, at
most, in a general vector space V with a fixed basis. This allows the
standard (typically not stated explicitly) identification of V with
V* (the dual space of V). This unstated identification can be
confusing.

But any bilinear map corresponds isomorphically to an element of
V* x V* (where "x" is the tensor product), and the standard
identification above then yields an element T of V x V* which acts
on a* in V* and b in V. When T, a*, and b are written as matrices
[T], [a*], and [b], then

  T(a*, b) = [a]^T . [T] . [b]

...since covectors like a* are written as transposed (row) vectors.

--
Jan
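[A numerical check of this last formula, as a sketch in Python; numpy
is assumed, and the matrices [T], [a], [b] below are invented example
values:

    import numpy as np

    # [T]: the n x n matrix of the bilinear map, here a 2-d eta = diag(1, -1).
    T = np.array([[1.0,  0.0],
                  [0.0, -1.0]])

    # [a]: the column vector identified with the covector a*, so that
    # a* itself is written as the row [a]^T; [b]: an ordinary column vector.
    a = np.array([[2.0], [3.0]])
    b = np.array([[5.0], [7.0]])

    # T(a*, b) = [a]^T . [T] . [b], a 1x1 matrix holding the scalar:
    print(a.T @ T @ b)   # [[-11.]], since 2*5 - 3*7 = -11
]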