Linear dependence and independence of the rows and columns of a matrix. Linear combinations of rows and columns of matrices


Let k rows and k columns (k ≤ min(m, n)) be chosen arbitrarily in a matrix A of size m × n. The matrix elements located at the intersections of the selected rows and columns form a square matrix of order k, whose determinant is called a minor M of order k, or a k-th order minor of the matrix A.

The rank of a matrix A is the maximum order r of its nonzero minors, and any nonzero minor of order r is a basis minor. Notation: rank A = r. If rank A = rank B and the matrices A and B have the same size, then A and B are called equivalent. Notation: A ~ B.

The main methods for calculating the rank of a matrix are the method of bordering minors and the method of elementary transformations.

Bordering minor method

The essence of the bordering minors method is as follows. Suppose a nonzero minor of order k has already been found in the matrix. Then we consider only those minors of order k+1 that contain (i.e., border) this nonzero k-th order minor. If all of them are equal to zero, then the rank of the matrix equals k; otherwise, among the bordering minors of order (k+1) there is a nonzero one, and the whole procedure is repeated for it.
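As a rough computational sketch of this procedure (the matrix A, the tolerance, and the helper name rank_by_bordering below are our own illustration choices, not part of the original text), one can grow a nonzero minor by one row and one column at a time and stop as soon as every bordering minor vanishes:

```python
import numpy as np
from itertools import product

def rank_by_bordering(A, tol=1e-10):
    """Rank via the bordering-minors idea: grow a nonzero minor one row and
    one column at a time until every bordering minor is (numerically) zero."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    nonzero = np.argwhere(np.abs(A) > tol)      # look for a nonzero 1x1 minor
    if nonzero.size == 0:
        return 0                                # the zero matrix has rank 0
    rows, cols = [nonzero[0][0]], [nonzero[0][1]]
    while True:
        for i, j in product(range(m), range(n)):
            if i in rows or j in cols:
                continue
            minor = A[np.ix_(rows + [i], cols + [j])]
            if abs(np.linalg.det(minor)) > tol:  # a nonzero bordering minor: extend
                rows.append(i)
                cols.append(j)
                break
        else:                                    # all bordering minors are zero
            return len(rows)

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])
print(rank_by_bordering(A), np.linalg.matrix_rank(A))   # 2 2
```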

Linear independence of rows (columns) of a matrix

The concept of matrix rank is closely related to the concept of linear independence of its rows (columns).

Matrix rows e1, e2, …, ek

are called linearly dependent if there exist numbers λ1, λ2, …, λk, not all equal to zero, such that the following equality is true:

λ1 e1 + λ2 e2 + … + λk ek = 0 (the zero row).

The rows of matrix A are called linearly independent if the above equality is possible only in the case when all the numbers λ1 = λ2 = … = λk = 0.

The linear dependence and independence of the columns of matrix A are determined in a similar way.

If some row (a l) of matrix A, where (a l) = (a l1, a l2, …, a ln), can be represented as a sum of other rows multiplied by numbers, then the row (a l) is called a linear combination of those rows.

The concept of a linear combination of columns is defined in a similar way. The following theorem about the basis minor is valid.

The basis rows and basis columns are linearly independent. Any row (or column) of matrix A is a linear combination of the basis rows (columns), i.e., of the rows (columns) intersecting the basis minor. Thus, the rank of matrix A, rank A = r, is equal to the maximum number of linearly independent rows (columns) of matrix A.

That is, the rank of a matrix is the order of the largest square submatrix, inside the matrix whose rank is being determined, whose determinant is not equal to zero. If the original matrix is not square, or if it is square but its determinant is zero, then the rows and columns of the lower-order square submatrices are chosen arbitrarily.

In addition to determinants, the rank of a matrix can be computed through the number of its linearly independent rows or columns. The maximum number of linearly independent rows always coincides with the maximum number of linearly independent columns, and this common number is the rank. For example, if a matrix has 3 linearly independent rows (and therefore 3 linearly independent columns), its rank is three.
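For instance, a quick numerical check with numpy (the 3 × 5 matrix below is our own example; its third row is the sum of the first two, so only two rows are linearly independent):

```python
import numpy as np

A = np.array([[1., 0., 2., 1., 3.],
              [0., 1., 1., 4., 2.],
              [1., 1., 3., 5., 5.]])   # row 3 = row 1 + row 2

print(np.linalg.matrix_rank(A))        # 2
```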

Examples of finding the rank of a matrix

Using the method of bordering minors, find the rank of the matrix

Solution. The second-order minor M 2 is nonzero, and

the third-order minor M 3 bordering it is also nonzero. However, both fourth-order minors bordering M 3

are equal to zero. Therefore, the rank of matrix A is 3, and a basis minor is, for example, the minor M 3 presented above.

The method of elementary transformations is based on the fact that elementary transformations of a matrix do not change its rank. Using these transformations, one can bring the matrix to a form in which all its elements except a 11, a 22, …, a rr (r ≤ min(m, n)) are equal to zero, which obviously means that rank A = r. Note that if an n-th order matrix has upper triangular form, that is, all elements below the main diagonal are equal to zero, then its determinant is equal to the product of the elements on the main diagonal. This property can be used when calculating the rank of a matrix by the method of elementary transformations: reduce the matrix to triangular form, and then, examining the corresponding determinants, we find that the rank of the matrix is equal to the number of nonzero elements on the main diagonal.
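A minimal sketch of this method in Python (numpy; the function name rank_by_elimination and the test matrix are our own, hypothetical choices): the matrix is reduced to row echelon form by elementary row operations, and the number of pivot rows is the rank.

```python
import numpy as np

def rank_by_elimination(A, tol=1e-10):
    """Reduce A to row echelon form with elementary row operations
    and count the nonzero (pivot) rows."""
    A = np.asarray(A, dtype=float).copy()
    m, n = A.shape
    row = 0
    for col in range(n):
        pivot = row + np.argmax(np.abs(A[row:, col]))   # best pivot in this column
        if abs(A[pivot, col]) < tol:
            continue                                    # no pivot in this column
        A[[row, pivot]] = A[[pivot, row]]               # swap rows
        A[row+1:] -= np.outer(A[row+1:, col] / A[row, col], A[row])  # eliminate below
        row += 1
        if row == m:
            break
    return row

A = np.array([[2., 4., 1.],
              [4., 8., 2.],
              [1., 3., 0.]])
print(rank_by_elimination(A))        # 2
print(np.linalg.matrix_rank(A))      # 2 (cross-check)
```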

Using the method of elementary transformations, find the rank of the matrix

Solution. Let us denote the i-th row of matrix A by the symbol α i . At the first stage, we will perform elementary transformations

At the second stage, we perform the transformations

As a result we get

where are some numbers (some of these numbers or even all of them may be equal to zero). This means that there are the following equalities between the elements of the columns:

From (3.3.1) it follows that

If equality (3.3.3) holds only when all the coefficients are equal to zero, then the rows are called linearly independent. Relation (3.3.2) shows that if one of the rows is linearly expressed in terms of the others, then the rows are linearly dependent.

The converse is also easy to see: if the rows are linearly dependent, then there is a row that is a linear combination of the remaining rows.

Let, for example, in (3.3.3), then .

Definition. Let a certain minor of order r be selected in matrix A, and let a minor of order (r+1) of the same matrix entirely contain this minor. We say in this case that the second minor borders the first (or is a bordering minor for it).

Now we will prove an important lemma.

Lemma on bordering minors. If a minor of order r of the matrix A is different from zero, and all the minors bordering it are equal to zero, then any row (column) of the matrix A is a linear combination of the rows (columns) that make up this minor.

Proof. Without loss of generality, we will assume that the nonzero minor of order r is located in the upper left corner of the matrix A:



.

For the first r rows of matrix A, the statement of the lemma is obvious: it is enough to include in the linear combination that same row with a coefficient equal to one, and the rest with coefficients equal to zero.

Let us now prove that the remaining rows of matrix A are linearly expressed through the first r rows. To do this, we construct a minor of order (r+1) by adding to the original minor the k-th row (k > r) and the l-th column:

.

The resulting minor is equal to zero for all k and l. If l ≤ r, then it is equal to zero as a determinant containing two identical columns. If l > r, then the resulting minor is a bordering minor for the original minor and, therefore, is equal to zero by the conditions of the lemma.

Let us expand this minor along the elements of the last, l-th, column:

Assuming , we get:

(3.3.6)

Expression (3.3.6) means that the kth row of matrix A is linearly expressed through the first r rows.

Since the values of the minors of a matrix do not change under transposition (by a property of determinants), everything proved above is also true for columns. The lemma is proved.

Corollary I. Any row (column) of a matrix is a linear combination of its basis rows (columns). Indeed, the basis minor of the matrix is nonzero, and all minors bordering it are equal to zero.

Corollary II. An n-th order determinant is equal to zero if and only if it contains linearly dependent rows (columns). The sufficiency of the linear dependence of rows (columns) for the determinant to equal zero was proved earlier as a property of determinants.

Let us prove the necessity. Suppose we are given a square matrix of order n whose only n-th order minor, its determinant, is equal to zero. It follows that the rank of this matrix is less than n, i.e. there is at least one row that is a linear combination of the basis rows of this matrix.

Let us prove another theorem about the rank of the matrix.

Theorem. The maximum number of linearly independent rows of a matrix is equal to the maximum number of its linearly independent columns and is equal to the rank of this matrix.

Proof. Let the rank of matrix A be equal to r. Then any of its r basis rows are linearly independent; otherwise the basis minor would be equal to zero. On the other hand, any r+1 or more rows are linearly dependent. Assuming the contrary, we could find, by Corollary II of the previous lemma, a nonzero minor of order greater than r, which contradicts the fact that the maximum order of nonzero minors is r. Everything proved for rows is also true for columns.

In conclusion, we outline one more way of finding the rank of a matrix: the rank can be determined by finding a nonzero minor of maximum order.

At first glance, this requires computing a finite, but perhaps very large, number of minors of the matrix.

The following theorem, however, makes it possible to simplify this considerably.

Theorem. If a minor of order r of matrix A is nonzero, and all the minors bordering it are equal to zero, then the rank of the matrix is equal to r.

Proof. It is enough to show that any system of S > r rows of the matrix is linearly dependent under the conditions of the theorem (it will then follow that r is the maximum number of linearly independent rows of the matrix, and that any of its minors of order greater than r is equal to zero).

Let us assume the opposite: let these S rows be linearly independent. By the lemma on bordering minors, each of them is linearly expressed in terms of the rows containing the given minor, and those rows, since the minor is nonzero, are linearly independent:

Now consider the following linear combination:

or

Using (3.3.7) and (3.3.8), we obtain

,

which contradicts the linear independence of these rows.

Consequently, our assumption is incorrect and, therefore, any S>r rows under the conditions of the theorem are linearly dependent. The theorem has been proven.

Let's consider the rule for calculating the rank of a matrix - the method of bordering minors, based on this theorem.

When calculating the rank of a matrix, one should pass from minors of lower orders to minors of higher orders. If a nonzero minor of order r has already been found, then it is necessary to calculate only the minors of order (r+1) bordering it. If they are all equal to zero, then the rank of the matrix is equal to r. This method is also used when we not only calculate the rank of the matrix but also determine which columns (rows) make up its basis minor.

Example. Calculate the rank of the matrix using the bordering minors method

Solution. The second-order minor, located in the upper left corner of matrix A, is non-zero:

.

However, all third-order minors bordering it are equal to zero (the corresponding determinant computations are not reproduced here).

Therefore, the rank of matrix A is equal to two: rank A = 2.

The first and second rows and the first and second columns of this matrix are the basis rows and columns. The remaining rows and columns are linear combinations of them. In fact, the following equalities hold for the rows:

In conclusion, we note the validity of the following properties:

1) the rank of the product of matrices is not greater than the rank of each of the factors;

2) the rank of the product of an arbitrary matrix A by a nonsingular square matrix Q, on the right or on the left, is equal to the rank of the matrix A.
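Both properties are easy to check numerically; in the sketch below (our own illustration with randomly generated matrices) the factors have ranks 3 and 2, and Q is nonsingular with probability 1:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 5))  # a 4x5 matrix of rank 3
B = rng.standard_normal((5, 2))                                # rank 2 (with probability 1)
Q = rng.standard_normal((4, 4))                                # nonsingular (with probability 1)

print(np.linalg.matrix_rank(A))          # 3
print(np.linalg.matrix_rank(A @ B))      # 2: never exceeds min(rank A, rank B)
print(np.linalg.matrix_rank(Q @ A))      # 3: multiplying by a nonsingular Q keeps the rank
```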

Polynomial matrices

Definition. A polynomial matrix, or λ-matrix, is a rectangular matrix whose elements are polynomials in one variable λ with numerical coefficients.

Elementary transformations can be performed on λ-matrices. These include:

Rearranging two rows (columns);

Multiplying a row (column) by a number other than zero;

Adding to one row (column) another row (column) multiplied by any polynomial.

Two λ-matrices of the same size are said to be equivalent if one can pass from one to the other using a finite number of elementary transformations.

Example. Prove matrix equivalence

, .

1. Swap the first and second columns in the matrix:

.

2. From the second row, subtract the first, multiplied by ( ):

.

3. Multiply the second row by (–1) and note that

.

4. Subtract from the second column the first, multiplied by , we get

.

The set of all λ-matrices of a given size is divided into disjoint classes of equivalent matrices: matrices that are equivalent to one another belong to the same class, and non-equivalent ones to different classes.

Each class of equivalent matrices is characterized by a canonical, or normal, matrix of the given size.

Definition. A canonical, or normal, λ-matrix of size m × n is a matrix whose main diagonal contains polynomials e1(λ), e2(λ), …, ep(λ), where p is the smaller of the numbers m and n; the polynomials that are not identically zero have leading coefficients equal to 1, and each subsequent polynomial is divisible by the previous one. All elements outside the main diagonal are 0.

From the definition it follows that if among the polynomials there are polynomials of degree zero, then they are at the beginning of the main diagonal. If there are zeros, they are at the end of the main diagonal.

The matrix of the previous example is canonical. Matrix

also canonical.

Each class of λ-matrices contains a unique canonical λ-matrix, i.e. every λ-matrix is equivalent to a unique canonical matrix, which is called the canonical form, or normal form, of that matrix.

The polynomials located on the main diagonal of the canonical form of a given λ-matrix are called the invariant factors of this matrix.

One method of calculating the invariant factors is to reduce the given λ-matrix to canonical form.

Thus, for the matrix of the previous example, the invariant factors are

From the above it follows that having the same set of invariant factors is a necessary and sufficient condition for the equivalence of λ-matrices.

Reducing a λ-matrix to canonical form reduces to determining its invariant factors

e_k(λ) = d_k(λ) / d_(k−1)(λ), k = 1, 2, …, r; d_0(λ) = 1,

where r is the rank of the λ-matrix and d_k(λ) is the greatest common divisor of its k-th order minors, taken with leading coefficient equal to 1.
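Assuming this is the intended formula, the following sympy sketch (the helper name invariant_factors and the diagonal example matrix are ours, not from the text) computes d_k(λ) as the monic GCD of all k-th order minors and takes quotients of consecutive ones:

```python
from functools import reduce
from itertools import combinations
import sympy as sp

lam = sp.symbols('lambda')

def invariant_factors(M):
    """Invariant factors e_k = d_k / d_(k-1), where d_k is the monic GCD
    of all k-th order minors of the polynomial matrix M (and d_0 = 1)."""
    m, n = M.shape
    d_prev = sp.Integer(1)
    factors = []
    for k in range(1, min(m, n) + 1):
        minors = [M[list(r), list(c)].det()
                  for r in combinations(range(m), k)
                  for c in combinations(range(n), k)]
        d_k = reduce(sp.gcd, minors)
        if d_k == 0:                                # every k-th order minor vanishes: rank < k
            break
        d_k = sp.Poly(d_k, lam).monic().as_expr()   # leading coefficient 1
        factors.append(sp.cancel(d_k / d_prev))
        d_prev = d_k
    return factors

# hypothetical example: diag(lambda, lambda*(lambda + 1)) is already in canonical form
M = sp.Matrix([[lam, 0],
               [0, lam * (lam + 1)]])
print(invariant_factors(M))    # [lambda, lambda**2 + lambda], i.e. e1 = lambda, e2 = lambda*(lambda + 1)
```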

Example. Let the following λ-matrix be given

.

Solution. Obviously, the greatest common divisor of the first order, i.e. .

Let's define second-order minors:

, etc.

Already these data are enough to draw a conclusion: therefore, .

We define

,

Hence, .

Thus, the canonical form of this matrix is the following λ-matrix:

.

A matrix polynomial is an expression of the form

where λ is a variable and the coefficients are square matrices of order n with numerical elements.

If the leading coefficient matrix is nonzero, then S is called the degree of the matrix polynomial and n its order.

Any square λ-matrix can be represented as a matrix polynomial. Obviously, the converse is also true, i.e. any matrix polynomial can be represented as a square λ-matrix.

The validity of these statements clearly follows from the properties of operations on matrices. Let's look at the following examples:

Example. Represent a polynomial matrix

in the form of a matrix polynomial as follows

.

Example. Matrix polynomial

can be represented as the following polynomial matrix ( -matrix)

.

This interchangeability of matrix polynomials and polynomial matrices plays a significant role in the mathematical apparatus of factor and component analysis methods.

Matrix polynomials of the same order can be added, subtracted, and multiplied in the same way as ordinary polynomials with numerical coefficients. It should be remembered, however, that multiplication of matrix polynomials is, generally speaking, not commutative, since matrix multiplication is not commutative.
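A small numerical sketch of this non-commutativity (the coefficient matrices and the helper poly_mul below are our own illustration):

```python
import numpy as np

# Two first-degree matrix polynomials P(x) = A0 + A1*x and Q(x) = B0 + B1*x
A0, A1 = np.array([[1., 0.], [0., 0.]]), np.array([[0., 1.], [0., 0.]])
B0, B1 = np.array([[0., 0.], [1., 0.]]), np.eye(2)

def poly_mul(P, Q):
    """Coefficient matrices of the product of two matrix polynomials,
    each given as a list of coefficient matrices [C0, C1, ...]."""
    out = [np.zeros_like(P[0]) for _ in range(len(P) + len(Q) - 1)]
    for i, Ai in enumerate(P):
        for j, Bj in enumerate(Q):
            out[i + j] = out[i + j] + Ai @ Bj
    return out

PQ = poly_mul([A0, A1], [B0, B1])
QP = poly_mul([B0, B1], [A0, A1])
print(np.allclose(PQ[0], QP[0]))   # False: already the constant terms A0@B0 and B0@A0 differ
```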

Two matrix polynomials are said to be equal if their coefficients, i.e. the corresponding coefficient matrices at the same powers of the variable, are equal.

The sum (difference) of two matrix polynomials is the matrix polynomial whose coefficient at each power of the variable is equal to the sum (difference) of the coefficients at the same power in the two polynomials.

To multiply one matrix polynomial by another, each term of the first matrix polynomial is multiplied by each term of the second, the resulting products are added, and like terms are combined.

The degree of a product of matrix polynomials is less than or equal to the sum of the degrees of the factors.

Operations on matrix polynomials can be performed via operations on the corresponding λ-matrices.

To add (subtract) matrix polynomials, it is enough to add (subtract) the corresponding λ-matrices. The same applies to multiplication: the λ-matrix of a product of matrix polynomials is equal to the product of the λ-matrices of the factors.

On the other hand, and can be written in the form

where B 0 is a non-singular matrix.

When dividing by there is a unique right quotient and a right remainder

where the degree of R 1 is less than the degree of the divisor, or R 1 = 0 (division without remainder); a left quotient and a left remainder exist and are unique under the same assumption on the leading coefficient.

Linear independence of matrix rows

Given a matrix of size m × n

Let's denote the rows of the matrix as follows:

Two rows are called equal if their corresponding elements are equal.

Let us introduce the operations of multiplying a row by a number and adding rows as operations carried out element by element:

Definition. A row is called a linear combination of matrix rows if it is equal to the sum of the products of these rows by arbitrary real numbers (any numbers):

Definition. The rows of the matrix are called linearly dependent , if there are numbers that are not simultaneously equal to zero, such that a linear combination of matrix rows is equal to the zero row:

Where . (1.1)

Linear dependence of the rows of a matrix means that at least one row of the matrix is a linear combination of the others.

Definition. If the linear combination of rows (1.1) equals the zero row only when all the coefficients are zero, then the rows are called linearly independent.

Matrix rank theorem. The rank of a matrix is equal to the maximum number of its linearly independent rows (columns), through which all its other rows (columns) are linearly expressed.

The theorem plays a fundamental role in matrix analysis, in particular, in the study of systems of linear equations.

6, 13, 14, 15, 16. Vectors. Operations on vectors (addition, subtraction, multiplication by a number), n-dimensional vectors. The concept of a vector space and its basis.

A vector is a directed segment with starting point A and end point B (which can be moved parallel to itself).

Vectors can be denoted either by two capital letters or by one lowercase letter with a bar or an arrow.

The length (or modulus) of a vector is the number equal to the length of the segment AB representing the vector.

Vectors lying on the same line or on parallel lines are called collinear .

If the beginning and end of a vector coincide, such a vector is called the zero vector and is denoted by 0. The length of the zero vector is zero.

1) The product of a vector and a number:

is a vector whose length equals the product of the absolute value of the number and the length of the vector, and whose direction coincides with the direction of the vector if the number is positive, and is opposite to it if the number is negative.

2) The vector opposite to a given vector is the product of that vector and the number (−1).

3) The sum of two vectors is the vector whose beginning coincides with the beginning of the first vector and whose end coincides with the end of the second, provided that the beginning of the second vector coincides with the end of the first (the triangle rule). The sum of several vectors is defined similarly.



4) The difference of two vectors is the sum of the first vector and the vector opposite to the second.

Scalar product

Definition: The scalar product of two vectors is a number equal to the product of the lengths of these vectors and the cosine of the angle between them:

n-dimensional vector and vector space

Definition. An n-dimensional vector is an ordered collection of n real numbers written in the form x = (x1, x2, …, xn), where xi is the i-th component of the vector x.

The concept of an n-dimensional vector is widely used in economics; for example, a certain set of goods can be characterized by a vector x = (x1, x2, …, xn), and the corresponding prices by y = (y1, y2, …, yn).

- Two n-dimensional vectors are equal if and only if their corresponding components are equal, i.e. x = y if xi = yi, i = 1, 2, …, n.

- The sum of two vectors of the same dimension n is the vector z = x + y whose components are equal to the sums of the corresponding components of the summands, i.e. zi = xi + yi, i = 1, 2, …, n.

- The product of a vector x and a real number λ is the vector whose components are equal to the products of λ and the corresponding components of x, i.e. λxi, i = 1, 2, …, n (see the sketch below).
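A minimal numerical sketch of these componentwise operations (the vectors below are our own example):

```python
import numpy as np

x = np.array([2., 0., 1., 5.])      # an n-dimensional vector, here n = 4
y = np.array([1., 3., 1., 1.])

print(np.array_equal(x, y))         # False: the vectors differ in some components
print(x + y)                        # componentwise sum: [3. 3. 2. 6.]
print(2.5 * x)                      # multiplication by a number: [ 5.   0.   2.5 12.5]
```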

Linear operations on any vectors satisfy the following properties:



1) - commutative (commutative) property of the sum;

2) - associative (combinative) property of the sum;

3) - an associative property with respect to a numerical factor;

4) - distributive (distributive) property relative to the sum of vectors;

5) - distributive property with respect to the sum of numerical factors;

6) There is a zero vector such that for any vector (the special role of the zero vector);

7) For any vector there is an opposite vector such that ;

8) for any vector (special role of the numerical factor 1).

Definition. The set of vectors with real components, on which the operations of adding vectors and multiplying a vector by a number satisfying the above eight properties (taken as axioms) are defined, is called a vector space.

Dimension and basis of vector space

Definition. A linear space is called n-dimensional if it contains n linearly independent vectors, while any n + 1 of its vectors are already linearly dependent. In other words, the dimension of a space is the maximum number of linearly independent vectors it contains. The number n is called the dimension of the space.

A set of n linearly independent vectors of an n-dimensional space is called a basis.

7. Eigenvectors and eigenvalues ​​of a matrix. Characteristic equation of a matrix.

Definition. A nonzero vector is called an eigenvector of a linear operator if there is a number λ such that:

The number λ is called the eigenvalue of the operator (of the matrix A) corresponding to this eigenvector.

This relation can be written in matrix form:

where X is the column matrix of the coordinates of the vector, or in expanded form:

Let us rewrite the system so that there are zeros on the right-hand sides:

or in matrix form: (A − λE)X = 0. The resulting homogeneous system always has the zero solution. For a nonzero solution to exist, it is necessary and sufficient that the determinant of the system be equal to zero: |A − λE| = 0.

The determinant |A − λE| is a polynomial of degree n in λ. This polynomial is called the characteristic polynomial of the operator (of the matrix A), and the equation |A − λE| = 0 is called the characteristic equation of the operator (of the matrix A).
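As a small numerical illustration (the 2 × 2 matrix below is our own example, not the one from the exercise that follows), numpy gives both the coefficients of the characteristic polynomial and the eigenvalues with their eigenvectors:

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])

# Coefficients of the characteristic polynomial det(lambda*E - A)
print(np.poly(A))             # [ 1. -4.  3.], i.e. lambda**2 - 4*lambda + 3

vals, vecs = np.linalg.eig(A)
print(vals)                   # the eigenvalues 3 and 1 (order not guaranteed)
print(vecs)                   # each column is an eigenvector for the matching eigenvalue;
                              # for lambda = 3 it is proportional to (1, 1)
```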

Example:

Find the eigenvalues ​​and eigenvectors of the linear operator given by the matrix.

Solution: We compose the characteristic equation or , whence the eigenvalue of the linear operator .

We find the eigenvector corresponding to the eigenvalue. To do this, we solve the matrix equation:

Or , or , from where we find: , or

Or .

Let us assume that , we obtain that the vectors , for any, are eigenvectors of a linear operator with eigenvalue .

Likewise, vector .

8. A system of n linear equations with n variables (general form). Matrix form of such a system. Solution of a system (definition). Consistent and inconsistent, definite and indefinite systems of linear equations.

Solving a system of linear equations with unknowns

Systems of linear equations are widely used in economics.

The system of linear equations with variables has the form:

,

where a ij and b i are arbitrary numbers called, respectively, the coefficients of the variables and the free terms of the equations.

Brief entry: ().

Definition. A solution of the system is a set of values of the variables which, when substituted, turns each equation of the system into a true equality.

1) A system of equations is called consistent if it has at least one solution, and inconsistent if it has no solutions.

2) A consistent system of equations is called definite if it has a unique solution, and indefinite if it has more than one solution.

3) Two systems of equations are called equivalent if they have the same set of solutions.

Let's write the system in matrix form:

Let us denote:

A – the matrix of coefficients of the variables, or the matrix of the system, X – the column matrix of variables, B – the column matrix of free terms.

Since the number of columns of the matrix A equals the number of rows of the matrix X, their product

is a column matrix. The elements of the resulting matrix are the left-hand sides of the original system. Based on the definition of matrix equality, the original system can be written in the form AX = B.

Cramer's theorem. Let Δ be the determinant of the matrix of the system, and let Δj be the determinant of the matrix obtained from it by replacing the j-th column with the column of free terms. Then, if Δ ≠ 0, the system has a unique solution determined by the formulas

xj = Δj / Δ, j = 1, 2, …, n (Cramer's formulas).
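A minimal sketch of Cramer's formulas in Python (numpy; the function name cramer and the 2 × 2 system are our own illustration, not the example that follows):

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's formulas: x_j = det(A_j) / det(A),
    where A_j is A with its j-th column replaced by b."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("det(A) = 0: Cramer's formulas do not apply")
    x = np.empty(len(b))
    for j in range(len(b)):
        Aj = A.copy()
        Aj[:, j] = b                 # replace the j-th column by the free terms
        x[j] = np.linalg.det(Aj) / d
    return x

A = [[2., 1.], [1., 3.]]
b = [5., 10.]
print(cramer(A, b))                  # [1. 3.]
print(np.linalg.solve(A, b))         # cross-check: [1. 3.]
```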

Example. Solve a system of equations using Cramer's formulas

Solution. The determinant of the system matrix is nonzero; therefore, the system has a unique solution. Let us calculate the determinants obtained by replacing the first, second, and third columns, respectively, with the column of free terms:

According to Cramer's formulas:

9. The Gauss method for solving a system of n linear equations with n variables. The concept of the Jordan–Gauss method.

The Gauss method is the method of successive elimination of variables.

The Gauss method consists in reducing the system, by means of elementary row transformations and column permutations, to an equivalent system of step (triangular) form, from which all the variables are found successively, starting with the last (by number) variable.

It is convenient to carry out the Gaussian transformations not on the equations themselves but on the extended matrix of their coefficients, obtained by appending the column of free terms to the matrix of the system:

.

It should be noted that the Gauss method can solve any system of equations of the form .
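A minimal sketch of the Gauss method for a system with a unique solution (numpy; the function name gauss_solve and the 3 × 3 system are our own illustration, not the example that follows):

```python
import numpy as np

def gauss_solve(A, b):
    """Gaussian elimination on the augmented matrix, then back substitution.
    Assumes the system has a unique solution (square A, nonzero pivots after swaps)."""
    M = np.hstack([np.asarray(A, float), np.asarray(b, float).reshape(-1, 1)])
    n = len(M)
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))              # choose a pivot row
        M[[k, p]] = M[[p, k]]                            # swap rows
        M[k+1:] -= np.outer(M[k+1:, k] / M[k, k], M[k])  # eliminate below the pivot
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):                       # back substitution
        x[k] = (M[k, -1] - M[k, k+1:n] @ x[k+1:]) / M[k, k]
    return x

A = [[2., 1., -1.], [-3., -1., 2.], [-2., 1., 2.]]
b = [8., -11., -3.]
print(gauss_solve(A, b))            # [ 2.  3. -1.]
```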

Example. Solve the system using the Gauss method:

Let us write down the extended matrix of the system.

Step 1. Let us swap the first and second rows so that the leading element of the first row becomes equal to 1.

Step 2. Multiply the elements of the first row by (–2) and (–1) and add them to the elements of the second and third rows so that zeros appear under the element in the first column. .

For consistent systems of linear equations, the following theorems hold:

Theorem 1. If the rank of the matrix of a consistent system is equal to the number of variables, i.e. r = n, then the system has a unique solution.

Theorem 2. If the rank of the matrix of a consistent system is less than the number of variables, i.e. r < n, then the system is indefinite and has an infinite number of solutions.

Definition. A basis minor of a matrix is any nonzero minor whose order is equal to the rank of the matrix.

Definition. The unknowns whose coefficients enter the basis minor are called basic, and the remaining unknowns are called free (non-basic).

Solving the system of equations in this case means expressing the basic unknowns (since the determinant composed of their coefficients is not equal to zero) in terms of the free unknowns.

Let us express the basic variables in terms of free ones.

From the second row of the resulting matrix we express the variable:

From the first row we express:

General solution of the system of equations: , .

A system of vectors of the same order is called linearly dependent if the zero vector can be obtained from these vectors by a suitable linear combination (in which not all coefficients are zero, since that case is trivial). Otherwise, the vectors are called linearly independent. For example, the following three vectors:

are linearly dependent, as is easy to check. In the case of linear dependence, at least one vector can always be expressed as a linear combination of the other vectors; in our example this is easy to verify with the appropriate calculations. This leads to the following definition: a vector is linearly independent of other vectors if it cannot be represented as a linear combination of them.
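This is easy to reproduce numerically; in the sketch below the three vectors are our own hypothetical example (the ones from the text are not reproduced here), with a3 constructed as a1 + 2·a2:

```python
import numpy as np

a1, a2 = np.array([1., 2., 0.]), np.array([0., 1., 1.])
a3 = a1 + 2 * a2                      # a3 is a linear combination of a1 and a2

V = np.column_stack([a1, a2, a3])
print(np.linalg.matrix_rank(V))       # 2 < 3, hence the three vectors are linearly dependent

# A nontrivial combination giving the zero vector comes from the null space of V
_, _, vt = np.linalg.svd(V)
coeffs = vt[-1]                       # the right singular vector for the zero singular value
print(np.allclose(V @ coeffs, 0))     # True: coeffs[0]*a1 + coeffs[1]*a2 + coeffs[2]*a3 = 0
```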

Let us consider a system of vectors without specifying whether it is linearly dependent or linearly independent. For each system of column vectors it is possible to determine the maximum possible number of linearly independent vectors. This number is called the rank of the vector system. Since every matrix can be viewed as a system of column vectors, the rank of a matrix is defined as the maximum number of linearly independent column vectors it contains. Row vectors can also be used to determine the rank of a matrix; both approaches give the same result for the same matrix, and the rank cannot exceed the smaller of the numbers of rows and columns. The rank of a square matrix of order n ranges from 0 to n. If all vectors are zero, the rank of such a matrix is zero; if all vectors are linearly independent of one another, the rank of the matrix equals their number. If we form a matrix from the vectors above, then the rank of this matrix is 2: since each of the vectors can be obtained from the other two by a linear combination, the rank is less than 3,

but any two of these vectors are linearly independent, hence the rank equals 2.

A square matrix is called singular (degenerate) if its column vectors or row vectors are linearly dependent. The determinant of such a matrix equals zero and its inverse matrix does not exist, as noted above; these statements are equivalent to one another. Accordingly, a square matrix is called nonsingular (nondegenerate) if its column vectors, or equivalently its row vectors, are linearly independent. The determinant of such a matrix is nonzero and its inverse matrix exists (compare with p. 43).

The rank of a matrix has a fairly obvious geometric interpretation. If the rank of the matrix equals r, then the vectors are said to span an r-dimensional space; if the rank is less than the number of vectors, the vectors lie in an r-dimensional subspace that contains all of them. Thus, the rank of the matrix corresponds to the minimum required dimension of the space "which contains all the vectors"; an r-dimensional subspace of an n-dimensional space is called an r-dimensional hyperplane. The rank of the matrix corresponds to the smallest dimension of the hyperplane in which all the vectors still lie.

Note that the rows and columns of a matrix can be regarded as arithmetic vectors of dimensions n and m, respectively. Thus, a matrix of size m × n can be interpreted as a set of m n-dimensional or of n m-dimensional arithmetic vectors. By analogy with geometric vectors, we introduce the concepts of linear dependence and linear independence of the rows and columns of a matrix.

4.8.1. Definition. A row is called a linear combination of rows of a matrix with given coefficients if each of its elements equals the corresponding linear combination of the elements of those rows.

4.8.2. Definition.

Rows are called linearly dependent if there is a nontrivial linear combination of them equal to the zero row, i.e. there exist numbers, not all equal to zero, such that the corresponding linear combination of the rows is the zero row.

4.8.3. Definition.

Rows are called linearly independent if only their trivial linear combination (with all coefficients equal to zero) is equal to the zero row.

4.8.4. Theorem. (Criterion for linear dependence of matrix rows)

In order for the rows to be linearly dependent, it is necessary and sufficient that at least one of them is a linear combination of the others.

Proof:

Necessity. Let the rows be linearly dependent; then there is a nontrivial linear combination of them equal to the zero row:

.

Without loss of generality, assume that the first coefficient of the linear combination is nonzero (otherwise the rows can be renumbered). Dividing this relation by that coefficient, we get


,

that is, the first row is a linear combination of the others.

Sufficiency. Let one of the rows be a linear combination of the others; then

that is, there is a nontrivial linear combination of the rows equal to the zero row,

which means the rows are linearly dependent, which is what needed to be proved.

Comment.

Similar definitions and statements can be formulated for the columns of the matrix.

§4.9. Matrix rank.

4.9.1. Definition. A minor of order k of a matrix of size m × n is the determinant of order k with elements located at the intersection of some k of its rows and k of its columns.

4.9.2. Definition. A nonzero minor of order r of a matrix of size m × n is called a basis minor if all minors of the matrix of order r + 1 are equal to zero.

Comment. A matrix can have several basis minors. Obviously, they will all be of the same order. It is also possible that for a matrix of size m × n a minor of order min(m, n) is different from zero while minors of order min(m, n) + 1 do not exist, that is, the basis minor has the largest possible order.

4.9.3. Definition. The rows (columns) that form the basis minor are called basic rows (columns).

4.9.4. Definition. The rank of a matrix is the order of its basis minor. The rank of a matrix A is denoted by rank A or r(A).

Comment.

Note that, since the rows and columns of a determinant have equal status, the rank of a matrix does not change when it is transposed.

4.9.5. Theorem. (Invariance of matrix rank under elementary transformations)

The rank of a matrix does not change during its elementary transformations.

No proof.

4.9.6. Theorem. (About the basic minor).

The basis rows (columns) are linearly independent. Any row (column) of a matrix can be represented as a linear combination of its basis rows (columns).

Proof:

Let's do the proof for strings. The proof of the statement for columns can be carried out by analogy.

Let the rank of a matrix of size m × n be equal to r, and let a basis minor of it be fixed. Without loss of generality, we assume that the basis minor is located in the upper left corner (otherwise, the matrix can be reduced to this form using elementary transformations):

.

Let us first prove the linear independence of the basis rows. We will carry out the proof by contradiction. Let us assume that the basis rows are linearly dependent. Then, according to Theorem 4.8.4, one of them can be represented as a linear combination of the remaining basis rows. Therefore, if we subtract this linear combination from that row, we get a zero row, which means that the basis minor is equal to zero, which contradicts the definition of a basis minor. Thus, we have obtained a contradiction; therefore, the linear independence of the basis rows has been proved.

Let us now prove that every row of the matrix can be represented as a linear combination of the basis rows. If the number of the row in question is from 1 to r, then obviously it can be represented as a linear combination with coefficient 1 for that row itself and zero coefficients for the remaining basis rows. Let us now show that if the row number k is from r + 1 to m, it can also be represented as a linear combination of the basis rows. Consider the minor of the matrix obtained from the basis minor by adding the k-th row and an arbitrary j-th column:

Let us show that this minor is equal to zero for any row number k from r + 1 to m and for any column number j from 1 to n.

Indeed, if the column number j is from 1 to r, then we obtain a determinant with two identical columns, which is obviously equal to zero. If the column number j is from r + 1 to n, then the constructed determinant is a minor of the original matrix of higher order than the basis minor, which means it is equal to zero by the definition of a basis minor. Thus, it has been proved that this minor is zero for any row number k from r + 1 to m and for any column number j from 1 to n. Expanding it along the last column, we get:

Here the coefficients are the corresponding cofactors (algebraic complements). Note that the cofactor of the element standing in the added row and column is nonzero, since up to sign it coincides with the basis minor. Therefore, the elements of the row k can be represented as a linear combination of the corresponding elements of the basis rows with coefficients that do not depend on the column number j:

Thus, we have proven that an arbitrary row of a matrix can be represented as a linear combination of its basis rows. The theorem has been proven.

Lecture 13

4.9.7. Theorem. (On the rank of a non-singular square matrix)

In order for a square matrix to be nonsingular, it is necessary and sufficient that the rank of the matrix be equal to the size of this matrix.

Proof:

Necessity. Let a square matrix of size n be nondegenerate; then its determinant is nonzero, and therefore the determinant of the matrix is a basis minor, i.e. rank A = n.

Sufficiency. Let rank A = n; then the order of the basis minor equals the size of the matrix, so the basis minor is the determinant of the matrix A, which is therefore nonzero by the definition of a basis minor.

Corollary.

In order for a square matrix to be non-singular, it is necessary and sufficient that its rows be linearly independent.

Proof:

Necessity. Since the square matrix is nonsingular, its rank is equal to the size of the matrix, that is, the determinant of the matrix is a basis minor. Therefore, by Theorem 4.9.6 on the basis minor, the rows of the matrix are linearly independent.

Sufficiency. Since all rows of the matrix are linearly independent, its rank is not less than the size of the matrix, which means that the rank equals the size of the matrix; therefore, by the previous Theorem 4.9.7, the matrix is nondegenerate.

4.9.8. The method of bordering minors for finding the rank of a matrix.

Note that part of this method has already been implicitly described in the proof of the basis minor theorem.

4.9.8.1. Definition. A minor is called bordering relative to a given minor if it is obtained from that minor by adding one new row and one new column of the original matrix.

4.9.8.2. The procedure for finding the rank of a matrix using the bordering minors method.

    We find any current minor of the matrix that is different from zero.

    We calculate all minors bordering it.

    If they are all equal to zero, then the current minor is a basis minor, and the rank of the matrix is equal to the order of the current minor.

    If among the bordering minors there is at least one non-zero, then it is considered current and the procedure continues.

Using the method of bordering minors, we find the rank of the matrix

.

It is easy to specify the current non-zero second order minor, e.g.

.

We calculate the minors bordering it:




Consequently, since all the bordering minors of the third order are equal to zero, this minor is a basis minor, that is, the rank of the matrix is equal to 2.
Comment. From the example considered, it is clear that the method is quite labor-intensive. Therefore, in practice, the method of elementary transformations is much more often used, which will be discussed below.

4.9.9. Finding the rank of a matrix using the method of elementary transformations.

Based on Theorem 4.9.5, it can be asserted that the rank of a matrix does not change under elementary transformations (that is, the ranks of equivalent matrices are equal). Therefore, the rank of a matrix is equal to the rank of the echelon (step) matrix obtained from the original one by elementary transformations. The rank of an echelon matrix is obviously equal to the number of its nonzero rows.
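For instance, sympy carries out this reduction exactly (the matrix below is our own example, not the one in the exercise that follows): rref returns the reduced echelon form together with the pivot columns, and the number of pivots equals the rank:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 0, 1]])

R, pivots = A.rref()          # reduced row echelon form and the pivot column indices
print(R)                      # Matrix([[1, 0, 1], [0, 1, 1], [0, 0, 0]])
print(len(pivots), A.rank())  # 2 2: the number of nonzero rows equals the rank
```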

Let's determine the rank of the matrix

by the method of elementary transformations.

Let us reduce the matrix to echelon form:

The number of nonzero rows of the resulting echelon matrix is three; therefore, the rank of the matrix equals 3.

4.9.10. Rank of a system of linear space vectors.

Consider a system of vectors of some linear space. If it is linearly dependent, a linearly independent subsystem can be distinguished in it.

4.9.10.1. Definition. The rank of a system of vectors of a linear space is the maximum number of linearly independent vectors of this system.

Comment. If a system of vectors is linearly independent, then its rank is equal to the number of vectors in the system.

Let us formulate a theorem showing the connection between the concepts of the rank of a system of vectors in a linear space and the rank of a matrix.

4.9.10.2. Theorem. (On the rank of a system of vectors in linear space)

The rank of a system of vectors in a linear space is equal to the rank of a matrix whose columns or rows are the coordinates of vectors in some basis of the linear space.

No proof.

Corollary.

In order for a system of vectors in a linear space to be linearly independent, it is necessary and sufficient that the rank of the matrix, the columns or rows of which are the coordinates of vectors in a certain basis, is equal to the number of vectors in the system.

The proof is obvious.

4.9.10.3. Theorem (On the dimension of a linear span).

The dimension of the linear span of a system of vectors of a linear space is equal to the rank of this system of vectors:

No proof.






