MATRICES
We have emphasized above that the simplest formulation of aeroelastic problems usually results in integral equations. The exact solution of such equations is difficult to obtain except in special cases. In general, however, it is permissible to approximate the integral equation by a matrix equation and to obtain the solution by numerical methods. In the present section the meaning of matrices will be explained. Their connection with the finite-difference approximation will be examined in the next section. Practical examples are given in the next chapter.
A table of $m \times n$ numbers arranged in a rectangular array of m rows and n columns is called a matrix with m rows and n columns, or a matrix of order $m \times n$. If $a_{ij}$ is the element in the $i$th row and $j$th column, then the matrix can be written down in the following pictorial form:
$$\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}$$
We shall denote a matrix either by a single symbol in boldface type such as A, or by a symbol $(a_{ij})$, the first subscript referring to the row and the second to the column. A square matrix of order $n \times n$ is a particular case. So is a single-column matrix of m rows (order $m \times 1$) or a single-row matrix of n columns (order $1 \times n$). A special square matrix denoted by 0, called a zero matrix, is a square matrix all of whose elements are zero. Another special square matrix denoted by 1, called a unit matrix, is the following:
$$\mathbf{1} = (\delta_{ij})$$
where
$$\delta_{ij} = \begin{cases} 1 & (i = j) \\ 0 & (i \neq j) \end{cases}$$
These are, respectively,
$$\mathbf{0} = \begin{pmatrix} 0 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix}, \qquad \mathbf{1} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}$$
Two matrices are equal when the corresponding elements of each are equal. The addition, subtraction, multiplication, and division of matrices are defined as follows:
Addition. The sum of two matrices A and B is written as A + B and stands for the matrix with elements $a_{ij} + b_{ij}$.
Subtraction. The matrix $-\mathbf{A}$ is defined as the matrix with elements $-a_{ij}$, and A $-$ B is defined as the matrix with elements $a_{ij} - b_{ij}$. For addition and subtraction to be significant, the matrices must have the same number of rows and the same number of columns.
It is clear that the associative law
$$(\mathbf{A} + \mathbf{B}) + \mathbf{C} = \mathbf{A} + (\mathbf{B} + \mathbf{C}) \tag{1}$$
and the commutative law
$$\mathbf{A} + \mathbf{B} = \mathbf{B} + \mathbf{A} \tag{2}$$
both hold for addition of matrices.
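As a quick numerical check (an illustration added here; NumPy and the sample matrices are assumptions, not part of the original text), the element-wise definitions and Eqs. 1 and 2 can be verified directly, since NumPy's `+` operator implements exactly the element-wise sum:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
C = np.array([[0, 1], [1, 0]])

# Element-wise sum: (A + B)_ij = a_ij + b_ij
print(A + B)                          # [[ 6  8] [10 12]]

# Associative and commutative laws, Eqs. 1 and 2
assert ((A + B) + C == A + (B + C)).all()
assert (A + B == B + A).all()
```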
Multiplication. The product of the matrices $\mathbf{A} = (a_{ij})$, $\mathbf{B} = (b_{ij})$, written as AB, is a matrix C with elements $c_{ij}$ given by
$$c_{ij} = a_{ik}b_{kj} = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{in}b_{nj} \tag{3}$$
The summation convention (Chapter 1) is used in Eq. 3. In order that the product AB of two matrices be well defined, the number of rows in the matrix B must be precisely the number of columns in the matrix A. The product is then a matrix with as many rows as A and as many columns as B.
The commutative law of multiplication does not necessarily hold even if A and B are square. For BA must be defined as the matrix whose elements are $b_{ik}a_{kj}$, and this will be equal to $a_{ik}b_{kj}$ only in special cases; in general,
$$\mathbf{AB} \neq \mathbf{BA} \tag{4}$$
Pairs of matrices that satisfy AB = BA are said to commute; those that satisfy AB = $-$BA, to anticommute.
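The definition of Eq. 3 and the failure of the commutative law are easy to verify numerically. The sketch below (an added illustration; the helper `matmul` and the sample matrices are hypothetical choices, and NumPy is assumed) forms $c_{ij} = a_{ik}b_{kj}$ with explicit loops and compares the result against the built-in product:

```python
import numpy as np

def matmul(A, B):
    """Matrix product by Eq. 3: c_ij = a_ik * b_kj, summed over k."""
    assert A.shape[1] == B.shape[0]   # rows of B must equal columns of A
    C = np.zeros((A.shape[0], B.shape[1]))
    for i in range(A.shape[0]):
        for j in range(B.shape[1]):
            C[i, j] = sum(A[i, k] * B[k, j] for k in range(A.shape[1]))
    return C

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [1., 0.]])

assert np.allclose(matmul(A, B), A @ B)   # agrees with the built-in product
print(A @ B)   # [[2. 1.] [4. 3.]]
print(B @ A)   # [[3. 4.] [1. 2.]] -- AB != BA in general (Eq. 4)
```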
The associative law
(AB)C = A(BC) (5)
and the distributive law
A(B + C) = AB + AC (6)
hold, provided the order is maintained and the operations are significant. Consequently, the products in Eq. 5 can be written without parentheses as ABC, since the position of the parentheses is irrelevant. It follows that all positive powers of a given matrix commute; for $\mathbf{A}^2\mathbf{A} = \mathbf{A}\mathbf{A}^2$, and $\mathbf{A}^m\mathbf{A}^n = \mathbf{A}^n\mathbf{A}^m$ (m, n positive integers) follows by induction.
The unit matrix 1 has the interesting property that it commutes with all square matrices of the same order. In fact,
A1 = 1A = A (7)
The product of two matrices may be a zero matrix without either factor being zero. As an example,
$$\mathbf{A} = (1, 1, 0), \qquad \mathbf{B} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}, \qquad \mathbf{AB} = (1)(0) + (1)(0) + (0)(1) = 0$$
The transposed matrix of a matrix A is the matrix formed from A by interchanging its rows and columns. We shall denote it by $\mathbf{A}'$ and its elements by $a'_{ij}$. Then
$$a'_{ij} = a_{ji} \tag{8}$$
Since
$$(\mathbf{AB})'_{ij} = a_{jk}b_{ki} = a'_{kj}b'_{ik} = b'_{ik}a'_{kj} = (\mathbf{B}'\mathbf{A}')_{ij} \tag{9}$$
it follows that the transposed matrix of the product AB, denoted by $(\mathbf{AB})'$, is equal to the product $\mathbf{B}'\mathbf{A}'$.
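A short check of Eq. 8 and the reversal rule of Eq. 9 (an added sketch; NumPy's `.T` attribute is its transpose operator, and the matrices are arbitrary choices):

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])     # order 2 x 3
B = np.array([[1, 0], [0, 1], [1, 1]])   # order 3 x 2

# Eq. 8: the (i, j) element of A' is a_ji
assert A.T[2, 1] == A[1, 2]

# Eq. 9: (AB)' = B'A'
assert ((A @ B).T == B.T @ A.T).all()
```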
Symmetry Properties. A matrix is said to be symmetrical if it is unaltered by interchanging rows and columns; i.e.,
$$a_{ij} = a_{ji} \quad \text{or} \quad \mathbf{A} = \mathbf{A}' \tag{10}$$
It is antisymmetrical or skew-symmetrical if the sign is changed when rows and columns are interchanged; i.e.,
$$a_{ij} = -a_{ji} \quad \text{or} \quad \mathbf{A} = -\mathbf{A}' \tag{11}$$
A diagonal matrix is one all of whose elements are zero except those in the leading diagonal, $a_{11}, a_{22}, \cdots, a_{nn}$. All pairs of diagonal matrices of the same order commute.
Inverse of a Matrix and the Solution of Linear Equations. The inverse $a^{-1}$, or reciprocal, of a real number a is well defined if $a \neq 0$. Analogously, if A is a square matrix of order n and if the determinant $|a_{ij}| \neq 0$, then there exists a unique matrix, written as $\mathbf{A}^{-1}$ in analogy to the inverse of a number, with the properties
$$\mathbf{A}\mathbf{A}^{-1} = \mathbf{1}, \qquad \mathbf{A}^{-1}\mathbf{A} = \mathbf{1} \tag{12}$$
The matrix $\mathbf{A}^{-1}$, if it exists, is called the inverse matrix of A. The necessary and sufficient condition that a matrix $\mathbf{A} = (a_{ij})$ have an inverse is that the associated determinant $|a_{ij}| \neq 0$. The determinant $|a_{ij}|$ is formed by the elements of the square matrix A and is usually referred to as the determinant of the matrix A. If $|a_{ij}|$ vanishes, A has no reciprocal and is said to be singular.
The practical calculation of the inverse of a matrix can be shortened by properly arranging the scheme of computation. The method of Crout[11] is the best known.
We can now define division of matrices as follows: Division by a nonsingular matrix is defined as multiplication by its reciprocal, but the quotient depends on the order of the factors, as with a product. In general, $\mathbf{A}^{-1}\mathbf{B}$ is not equal to $\mathbf{B}\mathbf{A}^{-1}$.
Since
$$\mathbf{A}\mathbf{B}\mathbf{B}^{-1}\mathbf{A}^{-1} = \mathbf{A}\mathbf{1}\mathbf{A}^{-1} = \mathbf{1} \tag{13}$$
it follows that $\mathbf{B}^{-1}\mathbf{A}^{-1}$ is the reciprocal of AB, that is, $(\mathbf{AB})^{-1}$. Hence, in forming the reciprocal of a product, the order of the factors must be inverted.
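These properties can be checked numerically. In the sketch below (an added illustration; the matrices are arbitrary, and NumPy's `inv` routine, which factorizes the matrix by an LU decomposition in the same family as Crout's scheme, is an assumption rather than the book's computing procedure), Eq. 12 and the reversal of factors in $(\mathbf{AB})^{-1}$ are both verified:

```python
import numpy as np

A = np.array([[4., 1.], [2., 3.]])
B = np.array([[1., 2.], [0., 1.]])

# The inverse exists because the determinant is nonzero
assert abs(np.linalg.det(A)) > 1e-12

Ainv = np.linalg.inv(A)
assert np.allclose(A @ Ainv, np.eye(2))   # Eq. 12: A A^-1 = 1
assert np.allclose(Ainv @ A, np.eye(2))   # Eq. 12: A^-1 A = 1

# Reciprocal of a product: the order of the factors is inverted
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))
```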
The inverse of a matrix has a simple application to the solution of n nonhomogeneous linear algebraic equations in n unknowns $x_1, x_2, \cdots, x_n$.
A set of linear equations
$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\ &\;\;\vdots \\ a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n \end{aligned} \tag{14}$$
may be written in the abbreviated form
$$a_{ij}x_j = b_i \tag{15}$$
where i and j run from 1 to n. If we think of $(x_i)$ as a matrix with a single column, Eq. 15 may be written as
$$\mathbf{A}\mathbf{X} = \mathbf{B} \tag{16}$$
where B is also a matrix with a single column. If we assume that the determinant of the matrix A is not zero, the inverse matrix $\mathbf{A}^{-1}$ will exist, and we shall have by matrix multiplication
$$\mathbf{A}^{-1}(\mathbf{A}\mathbf{X}) = \mathbf{A}^{-1}\mathbf{B}$$
Since $\mathbf{A}^{-1}\mathbf{A} = \mathbf{1}$ and $\mathbf{1}\mathbf{X} = \mathbf{X}$, we obtain the solution of Eq. 16:
$$\mathbf{X} = \mathbf{A}^{-1}\mathbf{B} \tag{17}$$
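Eq. 17 translates directly into a computation. A minimal sketch follows (added for illustration; the coefficient matrix and right-hand side are arbitrary choices, and NumPy is assumed). In numerical practice one usually calls a solver rather than forming $\mathbf{A}^{-1}$ explicitly, but both routes give the solution of Eq. 16:

```python
import numpy as np

A = np.array([[2., 1.], [1., 3.]])   # nonsingular coefficient matrix
b = np.array([3., 5.])               # right-hand side, a single column

x = np.linalg.inv(A) @ b             # Eq. 17: X = A^-1 B
x2 = np.linalg.solve(A, b)           # preferred: solve without forming A^-1

assert np.allclose(x, x2)
assert np.allclose(A @ x, b)         # satisfies Eq. 16: AX = B
print(x)                             # [0.8 1.4]
```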
Multiplication of Matrices by Numbers. If $\mathbf{A} = (a_{ij})$ is a matrix, not necessarily a square matrix, and c is a number, real or complex, then cA denotes the matrix $(ca_{ij})$. This operation of multiplication by numbers enables us to consider matrix polynomials of the type
$$c_0\mathbf{A}^n + c_1\mathbf{A}^{n-1} + \cdots + c_{n-1}\mathbf{A} + c_n\mathbf{1}$$
where $c_0, c_1, \cdots, c_n$ are numbers, A is a square matrix, and 1 is the unit matrix of the same order as A.
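Such a polynomial is evaluated with ordinary matrix products and multiplications by numbers; a minimal sketch (added here; the coefficients and the matrix are arbitrary choices, and the helper `matrix_poly` is hypothetical):

```python
import numpy as np

def matrix_poly(coeffs, A):
    """Evaluate c0*A^n + c1*A^(n-1) + ... + cn*1 by Horner's rule."""
    P = coeffs[0] * np.eye(A.shape[0])
    for c in coeffs[1:]:
        P = P @ A + c * np.eye(A.shape[0])
    return P

A = np.array([[1., 2.], [0., 3.]])
print(matrix_poly([2., -1., 5.], A))   # 2A^2 - A + 5*1 = [[ 6. 14.] [ 0. 20.]]
```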
Characteristic Equation of a Matrix and the Cayley-Hamilton Theorem. If $\mathbf{A} = (a_{ij})$ is a given square matrix of order n, one can form the matrix $\lambda\mathbf{1} - \mathbf{A}$, which is called the characteristic matrix of A. The determinant of this matrix, considered as a function of $\lambda$, is a polynomial of degree n in $\lambda$ and is called the characteristic function of A. More explicitly, let $f(\lambda) = |\lambda\mathbf{1} - \mathbf{A}|$, where $|\lambda\mathbf{1} - \mathbf{A}|$ denotes the determinant of $\lambda\mathbf{1} - \mathbf{A}$; then $f(\lambda)$ has the form $f(\lambda) = \lambda^n + a_1\lambda^{n-1} + \cdots + a_{n-1}\lambda + a_n$. Since $a_n = f(0)$, we see that $a_n = |-\mathbf{A}|$. The algebraic equation of degree n for $\lambda$
$$f(\lambda) = 0 \tag{18}$$
is the characteristic equation of the matrix A, and the roots of the equation are the characteristic roots of A.*
We shall quote the famous Cayley-Hamilton theorem without proof: Let
$$f(\lambda) = \lambda^n + a_1\lambda^{n-1} + \cdots + a_{n-1}\lambda + a_n$$
be the characteristic function of a matrix A, and let 1 and 0 be the unit matrix and zero matrix, respectively, with an order equal to that of A. Then the matrix polynomial equation
$$\mathbf{X}^n + a_1\mathbf{X}^{n-1} + \cdots + a_{n-1}\mathbf{X} + a_n\mathbf{1} = \mathbf{0} \tag{19}$$
is satisfied by $\mathbf{X} = \mathbf{A}$.
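The theorem is easily checked numerically. In this sketch (an added illustration; the sample matrix is arbitrary, and NumPy is assumed), `np.poly` returns the coefficients $1, a_1, \cdots, a_n$ of the characteristic polynomial, and substituting X = A into Eq. 19 by Horner's rule yields the zero matrix:

```python
import numpy as np

A = np.array([[2., 1.], [1., 3.]])

# Coefficients 1, a1, ..., an of |lambda*1 - A|; here approximately [1. -5. 5.]
coeffs = np.poly(A)

# Evaluate A^n + a1*A^(n-1) + ... + an*1 (Eq. 19 with X = A)
P = np.zeros_like(A)
for c in coeffs:
    P = P @ A + c * np.eye(2)
print(P)   # the zero matrix, to rounding error, as the theorem asserts
```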
Differentiation and Integration of Matrices Depending on a Numerical Variable. We shall have occasion to differentiate or integrate matrices whose elements depend on a numerical variable. The definitions are as follows. Let A(t) be a matrix depending on a numerical variable t, so that the elements of A(t) are numerical functions of t.
Then the derivative of A(t), written as $\dfrac{d\mathbf{A}(t)}{dt}$, is the matrix whose elements are the derivatives of the corresponding elements of A(t):
$$\frac{d\mathbf{A}(t)}{dt} = \left(\frac{da_{ij}(t)}{dt}\right)$$
Similarly, we define the integral of A(t) as the matrix whose elements are the integrals of the corresponding elements:
$$\int \mathbf{A}(t)\, dt = \left(\int a_{ij}(t)\, dt\right)$$
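For instance (a worked example added for illustration), differentiation and integration proceed element by element:
$$\mathbf{A}(t) = \begin{pmatrix} t & t^2 \\ 1 & \sin t \end{pmatrix}, \qquad \frac{d\mathbf{A}(t)}{dt} = \begin{pmatrix} 1 & 2t \\ 0 & \cos t \end{pmatrix}, \qquad \int_0^t \mathbf{A}(\tau)\, d\tau = \begin{pmatrix} t^2/2 & t^3/3 \\ t & 1 - \cos t \end{pmatrix}$$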