Linear Algebra: Gateway to Mathematics uses linear algebra as a vehicle to introduce students to the inner workings of mathematics. The structures and techniques of mathematics in turn provide an accessible framework to illustrate the powerful and beautiful results about vector spaces and linear transformations.
Linear Algebra: A Pathway to Abstract Mathematics
- Author: G. Viglino
- Publisher: Ramapo College of New Jersey
- Year: 2018
- Language: English
- Pages: 395
- Edition: 3
CONTENTS
Mathematics does not run on batteries
The first six chapters may provide a full plate for most one-semester courses. If not, then Chapter 7 (on inner product spaces) is offered for dessert.
We have made every effort to provide a leg-up for the step you are about to take. Our primary goal was to write a readable book, without compromising mathematical integrity. Along the way, you will encounter numerous Check Your Understanding boxes de...
CHAPTER 1
MATRICES AND SYSTEMS OF
LINEAR EQUATIONS
Brackets are used to denote sets. In particular,
$\{a\}$ denotes the set containing but one element—the element $a$.
An (ordered) n-tuple is an expression of the form $(a_1, a_2, \ldots, a_n)$, where each $a_i$ is a real number (written $a_i \in \mathbb{R}$), for $1 \le i \le n$.
The $x_j$'s denote variables (or unknowns), while the $a_{ij}$'s and $b_i$'s are constants (or scalars).
We say that the n-tuple $(c_1, c_2, \ldots, c_n)$ is a solution of the system of m equations in n unknowns:
$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\ &\;\;\vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= b_m \end{aligned}$$
if each equation in the system is satisfied when $c_j$ is substituted for $x_j$, for $1 \le j \le n$.
The set of all solutions of a system of equations is said to be the solution set of that system.
Equivalent Systems of Equations
You used this third maneuver a lot when eliminating a variable from a given system of equations. For example:
Equivalent System of Equations
Elementary Operations on Systems of Linear Equations
Interchange the order of any two equations in the system.
Multiply both sides of an equation by a nonzero number.
Add a multiple of one equation to another equation.
Augmented Matrices
Augmented Matrix
Figure 1.1
Elementary Matrix Row Operations
Interchange the order of any two rows in the matrix.
Multiply each element in a row of the matrix by a nonzero number.
Add a multiple of one row of the matrix to another row of the matrix.
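The three elementary row operations are easy to experiment with on a computer as well as on the TI-84+. Here is a minimal sketch in Python with NumPy (not used in the text); the matrix is a hypothetical one, chosen only to illustrate the three operations:

```python
import numpy as np

# A hypothetical 3-by-4 augmented matrix (floats, so scaling behaves as expected).
A = np.array([[2., 4., -2., 6.],
              [1., 1.,  3., 1.],
              [3., 0.,  1., 5.]])

A[[0, 1]] = A[[1, 0]]       # interchange rows 1 and 2
A[1] = 0.5 * A[1]           # multiply each element of row 2 by the nonzero number 1/2
A[2] = A[2] + (-3) * A[0]   # add -3 times row 1 to row 3

print(A)
```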
DEFINITION 1.1
Systems of equations associated with equivalent augmented matrices are themselves equivalent (same solution set).
Figure 1.2
Pivoting About a Pivot Point
Pivot Point
Pivoting
Elementary Row Operation
Notation
Switch row i with row j:
Multiply each entry in row i by a nonzero number c:
Multiply each entry in row i by a number c, and add the resulting row to row j:
EXAMPLE 1.1
The TI-84+ calculator is featured throughout the text.
Answer: See page B-1.
Row-Reduced-Echelon Form
A matrix satisfying (i), (ii) and a slightly weaker form of (i):
The first non-zero entry in any row is 1, and the entries below (only) that leading-one are 0
is said to be in row-echelon form.
DEFINITION 1.2
The matrices
are in row-echelon form
Answer: Yes: (a), (c), and (d).
No: (b) [fails (ii)]
EXAMPLE 1.2
In harmony with graphing calculators, we will adopt the notation rref(A) to denote the row-reduced-echelon form of a matrix A.
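The same notation is available outside of the calculator. For instance, SymPy's rref method returns the row-reduced-echelon form together with the pivot columns; the matrix below is hypothetical, not the one in Example 1.3:

```python
from sympy import Matrix

# Hypothetical augmented matrix of a system of 3 equations in 3 unknowns.
A = Matrix([[1, 2, -1, 3],
            [2, 1,  1, 0],
            [1, 1,  1, 2]])

R, pivots = A.rref()   # rref(A) and the indices of its pivot (leading-one) columns
print(R)
print(pivots)
```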
EXAMPLE 1.3
Answer:
Gauss, Karl Friedrich (1777-1855), the great German mathematician and astronomer.
Wilhelm Jordan (1842-1899), German professor of geodesy.
Gauss-Jordan Elimination Method
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16. Construct a system of three equations in three unknowns, x, y, and z such that is a solution of the system.
17. Construct a system of four equations in four unknowns, x, y, z, and w with solution set .
18.
19.
20.
21.
22.
23.
24.
25.
26. Offer an argument to justify the following claim:
27. The system of equations has a solution for all .
28. The system of equations always has a solution for all .
29. The system of equations can never have more than one solution.
30. The systems of equations associated with the two augmented matrices:
31. If the matrix A has n rows, and if rref(A) contains fewer than n leading ones, then the last row of rref(A) must consist entirely of zeros.
§2. Consistent and Inconsistent Systems of Equations
Consistent Inconsistent
EXAMPLE 1.4
EXAMPLE 1.5
EXAMPLE 1.6
Any variable that is not associated with a leading one in the row-reduced echelon form of an augmented matrix is said to be a free variable. In the current setting, the variable w is a free variable (see rref in Figure 1.3).
Figure 1.3
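As a computer-algebra supplement (not part of the text), the sketch below builds a hypothetical consistent system in x, y, z, and w whose rref has no leading one in w's column, so that w is a free variable, and then expresses the solutions in terms of w:

```python
from sympy import Matrix, symbols, linsolve

x, y, z, w = symbols('x y z w')

# Hypothetical augmented matrix of a consistent system with free variable w:
#   x + y + z +  w = 4
#       y + z + 2w = 3
#           z + 3w = 2
S = Matrix([[1, 1, 1, 1, 4],
            [0, 1, 1, 2, 3],
            [0, 0, 1, 3, 2]])

print(S.rref()[0])                 # no leading one in w's column: w is free
print(linsolve(S, (x, y, z, w)))   # solutions expressed in terms of w
```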
Answer: (a) Inconsistent
(b)
(c):
EXAMPLE 1.7
Unlike the TI-84+, the TI-89 and above have symbolic capabilities. In particular:
EXAMPLE 1.8
Here, unlike with the smaller system of equations in Example 1.8, the TI-89 (or higher) is of little help:
The last row of the above rref matrix tells us that there is no solution to the system, but it “lies,” for solutions do exist for certain values of a, b, c, and d [see ()].
Answer: (a) It is consistent for all a, b, and c.
(b) Consistent if and only if
Figure 1.4
Coefficient Matrix
Let P and Q be two propositions (a proposition is a mathematical statement that is either true or false). To say “P if and only if Q” (also written in the form $P \Leftrightarrow Q$) is to say that if P is true then so is Q (also written $P \Rightarrow Q$), and if Q is true then so is P (also written $Q \Rightarrow P$).
THEOREM 1.2
EXAMPLE 1.9
Answer: (a) Yes (b) No
Homogeneous Systems of Equations
A system with fewer equations than unknowns (“wide”) is said to be underdetermined.
A system with more equations than unknowns (“tall”) is said to be overdetermined.
A square system is a system which contains as many equations as unknowns.
THEOREM 1.3
EXAMPLE 1.10
Figure 1.5
Answer:
While underdetermined (“wide”) homogeneous systems of equations are guaranteed to always have non-trivial solutions, this is not the case with overdetermined (“tall”) systems of equations [see Exercises 27-28], or with square systems of equations.
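A quick way to see the guaranteed nontrivial solutions of a “wide” homogeneous system with a computer algebra system (an aside, not from the text): ask for a basis of the solution space of a hypothetical coefficient matrix; a nonempty answer exhibits nontrivial solutions.

```python
from sympy import Matrix

# Hypothetical coefficient matrix of a "wide" homogeneous system:
# 2 equations in 4 unknowns.
A = Matrix([[1, 2, -1, 3],
            [2, 4,  1, 0]])

# nullspace() returns a basis for the solution set of Ax = 0;
# since the system is underdetermined, the list is guaranteed to be nonempty.
print(A.nullspace())
```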
THEOREM 1.4
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20. Let S be a homogeneous system of equations. Prove that the last column of contains only zeros.
21. Prove that if , then the system of equations:
22. (a) Show that if and are solutions of the system: , then, is also a solution for any given .
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.
35.
36.
37. For what values of a, b, c, and d will the homogeneous system of equations have a unique solution:
38. Show that if is a solution of a given two by two homogeneous system of equations, then is also a solution for any .
39. Show that if and are solutions of a given two by two homogeneous system of equations, then is also a solution.
40. Let M be the solution set of and let T be the solution set of the corresponding homogeneous system . Show that:
41. Let M be the solution set of and let T be the solution set of the corresponding homogeneous system . Show that for any , .
42. The system of equations associated with the augmented matrix is consistent, independent of the values of the entries a through f.
43. The system of equations associated with the augmented matrix is consistent, independent of the values of the entries a through f.
44. The system of equations associated with the augmented matrix is consistent if and only if .
45. If a homogeneous system of equations has a nontrivial solution, then it has infinitely many solutions.
46. If the homogeneous system has only the trivial solution, then the system has a unique solution for all .
47. Any system S of linear equations in n unknowns with has nontrivial solutions.
48. A system of n linear equations in m unknowns S is consistent if and only if has m leading ones.
n-tuple
Solution Set of a System of Equations
An n-tuple $(c_1, c_2, \ldots, c_n)$ is a solution of the system of m equations in n unknowns
$$a_{i1}x_1 + a_{i2}x_2 + \cdots + a_{in}x_n = b_i, \qquad 1 \le i \le m$$
if each of the m equations is satisfied when $c_j$ is substituted for $x_j$, for $1 \le j \le n$.
Consistent and Inconsistent Systems of Equations
A system of equations is said to be consistent if it has a non-empty solution set. A system of equations that has no solution is said to be inconsistent.
Equivalent Systems of Equations
Two systems of equations are said to be equivalent if they have equal solution sets.
Overdetermined, Underdetermined, and Square Systems of Equations
A system of m equations in n unknowns is said to be:
Overdetermined if $m > n$ (more equations than unknowns).
Underdetermined if $m < n$ (fewer equations than unknowns).
Square if $m = n$.
Elementary Equation Operations
The following three operations on a system of linear equations are said to be elementary equation operations:
Interchange the order of any two equations in the system.
Multiply both sides of an equation by a nonzero number.
Add a multiple of one equation to another equation.
Elementary row operations do not alter the solution sets of systems of equations.
Matrices
Since A has 3 rows and 4 columns, it is said to be a three-by- four matrix. When the number of rows of a matrix equals the number of columns, the matrix is said to be a square matrix.
Elementary Row Operations
The following three operations on any given matrix are said to be elementary row operations:
Interchange the order of any two rows in the matrix.
Multiply each element in a row of the matrix by a nonzero number.
Add a multiple of one row of the matrix to another row of the matrix.
Equivalent Matrices
Two matrices are equivalent if one can be derived from the other by means of a sequence of elementary row operations.
Augmented Matrix
Equivalent systems of equations correspond to equivalent augmented matrices.
Row-Reduced-Echelon Form of a Matrix
Gauss-Jordan Elimination Method.
Coefficient Matrix
Spanning Theorem
Homogeneous System of Equations
Trivial Solution
Fundamental Theorem of Homogeneous Systems
You can use rref[coef(S)] to solve a homogeneous system of equations S.
Linear Independence
Theorem
CHAPTER 2
VECTOR SPACES
§1. Vectors in the Plane and Beyond
Figure 2.1
Figure 2.2
EXAMPLE 2.1
Pick up the top vector and move it 2 units down and 3 units to the right so that its initial point . In the process, the original terminal point is also moved 2 units down and 3 units to the right, coming to rest at .
Figure 2.3
Note that the two-tuple in the expression appears in bold-face, so as to distinguish it from the form which represents a point in the plane.
Figure 2.4
DEFINITION 2.1
Scalar Product and Sums of Vectors
Figure 2.5
DEFINITION 2.2
Vector Addition
(a) (b)
Figure 2.6
While identical in shape, the “+” in differs in spirit from that in : the latter represents the familiar sum of two numbers, as in , while the former represents the newly defined sum of two n-tuples, as in:
DEFINITION 2.3
EXAMPLE 2.2
Answer:
Euclidean Vector Spaces
DEFINITION 2.4
No direction is associated with the zero vector. A zero force, for example, is no force at all, and its “direction” would be a moot point.
DEFINITION 2.5
THEOREM 2.1
To emphasize the important role played by definitions, the symbol instead of will temporarily be used to indicate a step in the proof which follows directly from a definition. In addition, the abbreviation “PofR” will be used to denote that a step follows from a property of the real numbers.
This associative property eliminates the need for including parentheses when summing more than two vectors. In particular,
is perfectly well defined.
In this, and any other abstract math course:
definitions Rule!
Just look at the above proof. It contains but one “logical step,” the step labeled PofR; all other steps hinge on definitions.
Answer: See page B-3.
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19. For , , and , find scalars r and s such that:
20. Find scalars r, s, and t, such that:
21. Find scalars r, s, and t, such that:
22. Show that there do not exist scalars r, s, and t, such that
23. Find the vector of length 5 that has the same direction as the vector with initial point and terminal point .
24. Find the vector of length 5 that is in the opposite direction of the vector with initial point and terminal point .
25. Prove Theorem 2.1(ii) for: (a) (b) .
26. Prove Theorem 2.1(v) for: (a) (b) .
27. Prove Theorem 2.1(i) for: (a) (b) (c)
28. Prove Theorem 2.1(iii) for: (a) (b) (c)
29. Prove Theorem 2.1(vi) for: (a) (b) (c)
30. Prove Theorem 2.1(viii) for: (a) (b) (c)
31. Prove that if v, w, and z, are vectors in such that , then .
32. For , if then .
33. For , if then .
34. For and , if then .
35. For , if and only if or .
§2. Abstract Vector Spaces
The elements of a vector space V are called vectors, and will be denoted by bold-faced letters (like v). Scalars will continue to be denoted by non-bold-faced letters (like r).
DEFINITION 2.6
A set is said to be closed, with respect to an operation, if elements of that set subjected to that operation remain in the set. For example, the set of positive integers is closed under addition (the sum of two positive integers is again a positive integer).
V is closed under addition:
For every in V,
V is closed under scalar multiplication:
For every and ,
We also point out that, by convention, no meaning is attributed to an expression of the form $\mathbf{v}r$, wherein a vector v appears to the left of a scalar r.
Matrix Spaces
EXAMPLE 2.3
We are again using to indicate that equality follows from a definition, and “PofR” for “Property of the Real numbers.”
THEOREM 2.2
Answer: See page B-3.
Polynomial Spaces
In particular:
The Greek letter $\Sigma$ (Sigma) is used to denote a sum.
THEOREM 2.3
Answer: See page B-3.
Function Spaces
All “objects” in mathematics are sets, and functions are no exceptions. The function f given by , for example, is that subset of the plane, typically called the graph of f:
Pictorially:
THEOREM 2.4
A function $f$ is defined to be equal to a function $g$ if $f(x) = g(x)$ for every $x$ in the domain.
The fact that is closed under addition and scalar multiplication is self-evident.
As you can see, we elected to use the letter Z, rather than the symbol 0, for our zero vector. It’s just that an expression like would strongly suggest that a multiplication by zero is being performed, which is not the case.
Answer: See page B-3.
Additional Examples
EXAMPLE 2.4
EXAMPLE 2.5
Answer: Zero vector:
Inverse of :
EXAMPLE 2.6
EXAMPLE 2.7
Answer: See page B-4.
1. , , and .
2. , , and .
3. , , and .
4. , , and .
5. , , and .
6. , , and .
7. , , and .
8. , , and .
9. , , and .
10. , , and .
11. , and .
12. ; , , ; and , .
13. , , and .
14. , , and .
15. , , and .
16. , , .
17. Complete the proof of Theorem 2.2.
18. Complete the proof of Theorem 2.3.
19. Complete the proof of Theorem 2.4.
20. Establish the remaining three axioms for the space of Example 2.4.
21. Establish the remaining six axioms for the space of Example 2.5.
22. A polynomial is an expression of the form for which there exists an m such that for . Show that, with respect to the following operations, the set of all polynomials is a vector space:
23. Let V be a vector space, and let . If , then .
24. Let V be a vector space, and let . If and , then .
25. Let V be a vector space, and let . If and , then .
26. Let V be a vector space, and let . If , then .
27. Let V be a vector space, and let . If and , then .
§3. Properties of Vector Spaces
Axiom (iii) asserts the existence of a zero vector, but makes no claim as to its uniqueness, and Axiom (iv) only asserts that every vector has an additive inverse (could it have several?).
Strategy for (a): Assume that 0 and are any two zeros, and then go on to show .
Strategy for (b): Assume that a vector v has two additive inverses, and , and then go on to show that .
Answer: See page B-4.
Answer: See page B-4.
Multiplying any vector by the scalar 0 results in the vector 0.
Answer: See page B-4.
Multiplying any vector by the scalar $-1$ results in the additive inverse of that vector.
Answer: See page B-5.
Strategy: Show that if you add to v you end up with the vector 0.
Subtraction
DEFINITION 2.7
A definition is the introduction of a new word in the language of mathematics. As such, one must understand all of the words used in its description. This is so in Definition 2.7, where the “new word” on the left of the equal sign is described entirely in terms of previously defined concepts.
Answer: See page B-5.
1. Theorem 2.11 (iv): If and , then .
2. Theorem 2.11 (v): If and , then .
3. Theorem 2.11 (ix): .
4. Theorem 2.11 (xiii): .
5. Theorem 2.11 (xiv): .
6. Theorem 2.11 (xvi): .
7. Theorem 2.11 (xvii): .
8. Theorem 2.11 (xviii): .
9. Show that for any vector v in a vector space V, and any : .
10. Show that for any vector v in a vector space V and any integer : .
11. Let v, w, and z be any vectors in a vector space V, and let , with . Show that if , then .
12. Let v and w be vectors in a vector space V, with . Show that if , then .
13. Let v and w be vectors in a vector space V. Show that if and , then .
14. Show that for any v and w in a vector space V, and for any :
15. Let v and w be non-zero vectors in a vector space V. Show that if , with not both r and s equal to 0, then there exist unique numbers a and b such that and .
16. All vector spaces contain infinitely many vectors.
17. Any vector space that contains more than one vector must contain an infinite number of vectors.
18. For any vector v in a vector space V and any :
19. Let and be vector spaces. Let with operations given by:
20. Let and be vector spaces. Let with operations given by:
§4. Subspaces
DEFINITION 2.8
(i)
(ii)
(v)
(vi)
(vii)
(viii)
EXAMPLE 2.8
The “ticket” to be in S is that the third component is equal to the sum of its first two components.
Since has the “ticket,” it is in S.
Answer: See page B-5.
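As an aside, the “ticket” point of view lends itself to a quick numerical check. The sketch below assumes, as the margin note describes, that S is the subset of $\mathbb{R}^3$ consisting of triples whose third component equals the sum of the first two, and verifies closure for a couple of hypothetical vectors:

```python
import numpy as np

def in_S(v, tol=1e-12):
    """The 'ticket': the third component equals the sum of the first two."""
    return abs(v[2] - (v[0] + v[1])) < tol

u = np.array([1.0,  2.0, 3.0])
w = np.array([4.0, -1.0, 3.0])

print(in_S(u), in_S(w))            # both vectors hold the ticket
print(in_S(u + w), in_S(5 * u))    # so do their sum and any scalar multiple
```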
EXAMPLE 2.9
Answer: See Page B-5.
The “ticket” needed for a function f to get into S is that it maps 9 to 0.
EXAMPLE 2.10
Answer: Not a subspace.
EXAMPLE 2.11
Answer: See page B-5.
Intersection and Union of Subspaces
S intersect T
(a)
S union T
(b)
In the exercises you are asked to show that the intersection of any number of subspaces of V is again a subspace of V.
Figure 2.7
EXAMPLE 2.12
Answer: See page B-6.
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31. The subset of even functions:
32. The subset of odd functions:
33. The subset of increasing functions:
34. The subset of decreasing functions:
35. The subset of bounded functions:
36. (Calculus dependent)
37. (Calculus dependent)
38. (Calculus dependent)
39. Let V be a vector space. Show that:
40. (PMI) Establish the following generalization of Theorem 2.14.
41. (PMI) Let be vectors in a vector space V. Show that is a subspace of V.
42. Let S and T be subspaces of a vector space V. Show that is a subspace of V.
43. Let S and T be subspaces of a vector space V, with . Show that every vector in the subspace of the previous exercise can be uniquely expressed as a sum of a vector in S with a vector in T.
44. If S and T are both subsets of a vector space V, and if neither S nor T is a subspace of V, then cannot be a subspace of V.
45. If S and T are both subsets of a vector space V, and if neither S nor T is a subspace of V, then cannot be a subspace of V.
46. If S and T are subspaces of a vector space V, then (see Exercise 43).
47. If S and T are subspaces of a vector space V, then (see Exercise 43).
48. If S, T, and W are subspaces of a vector space V, then is also a subspace of V (see Exercise 43).
49. If S, T, and W are subspaces of a vector space V, then (see Exercise 42).
50. If S and T are subspaces of a vector space V with , then is a subspace of V.
51. If S is a subspace of a vector space V, and if T is a subspace of S, then T is a subspace of V.
52. If a vector space has two distinct subspaces, then it has infinitely many distinct subspaces.
§5. Lines and Planes
Subspaces of
THEOREM 2.15
For any vector :
The vector v is said to be a direction vector for the line, and the vector u is said to be a translation vector.
THEOREM 2.16
Those not satisfied with this geometrical proof are invited to consider Exercise 62.
Figure 2.8
Note that the set:
This brings us to the so-called parametric representation of L:
Answer:
(a)
(b)
EXAMPLE 2.13
Subspaces of
One cannot envision a line in for . We can, however, define, in vector form, the line passing through:
and
in to be the set:
where:
and:
THEOREM 2.17
EXAMPLE 2.14
The line can also be expressed in parametric form (see margin note of Example 2.13):
Answer: See page B-6
THEOREM 2.18
Figure 2.9
THEOREM 2.19
EXAMPLE 2.15
P consists of all points such that:
The above is said to be a parametric representation of the plane (with parameters r and s).
Answer: See page B-6
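The parametric form is convenient for generating points of the plane. A minimal sketch in Python (with hypothetical translation and direction vectors, since those of Example 2.15 are not reproduced here):

```python
import numpy as np

# Hypothetical data: a translation vector u to a point of the plane,
# and two linearly independent direction vectors v and w.
u = np.array([1.0, 0.0, 2.0])
v = np.array([1.0, 1.0, 0.0])
w = np.array([0.0, 1.0, 1.0])

def plane_point(r, s):
    """The point u + r*v + s*w of the plane, for parameters r and s."""
    return u + r * v + s * w

print(plane_point(0, 0), plane_point(1, -2))
```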
THEOREM 2.20
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13. Exercise 1
14. Exercise 2
15. Exercise 3
16. Exercise 4
17. Exercise 5
18. Exercise 6
19. Exercise 7
20. Exercise 8
21. Exercise 1
22. Exercise 2
23. Exercise 3
24. Exercise 4
25. Exercise 5
26. Exercise 6
27. Exercise 7
28. Exercise 8
29.
30.
31.
32.
33.
34.
35.
36.
37.
38.
39.
40.
41. Exercise 29
42. Exercise 30
43. Exercise 31
44. Exercise 32
45. Exercise 33
46. Exercise 34
47. Exercise 35
48. Exercise 36
49.
50.
51.
52.
53.
54.
55.
56.
57.
58.
59.
60.
61. Complete the proof of Theorem 2.15. Incidentally:
62. Prove Theorem 2.16.
63. Prove Theorem 2.17.
64. Prove Theorem 2.18.
65. Prove Theorem 2.19.
66. Prove Theorem 2.20.
Euclidean Vector Space
Abstract Vector Space
Subtraction
Uniqueness of 0 and
Cancellation Properties
Zero Properties
Inverse Properties
Subspace
A nonempty subset S of V which is itself a vector space under the vector addition and scalar multiplication operations of the space V.
Closure says it all
A one liner
Intersection of subspaces
Proper Subspaces
Vector form of lines
Vector form of planes
CHAPTER 3
BASES AND DIMENSION
§1. Spanning Sets
DEFINITION 3.1
EXAMPLE 3.1
Note that, except for the last column, this augmented matrix is the same as that of Example 3.1.
Some Added Insight on Example 3.1
Answer: (a) No. (b) Yes.
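The added insight can also be phrased for a machine: a vector b lies in the span of given vectors exactly when the system whose augmented matrix has those vectors (and b) as columns is consistent. A sketch with hypothetical vectors in $\mathbb{R}^3$:

```python
from sympy import Matrix

# Hypothetical vectors in R^3 and a candidate vector b.
v1, v2, v3 = Matrix([1, 0, 1]), Matrix([2, 1, 0]), Matrix([0, 1, -2])
b = Matrix([3, 2, -1])

A = Matrix.hstack(v1, v2, v3)       # columns are the given vectors
augmented = Matrix.hstack(A, b)

# b is in Span{v1, v2, v3} exactly when the system is consistent,
# i.e. when the coefficient and augmented matrices have the same rank.
print(A.rank() == augmented.rank())
```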
THEOREM 3.1
DEFINITION 3.2
EXAMPLE 3.2
System S was solved directly in Example 1.7, page 16. In that example, we labeled the variables x, y, and z, instead of r, s, and t.
Answer: See page B-7.
EXAMPLE 3.3
EXAMPLE 3.4
Answer:
See Page B-8.
EXAMPLE 3.5
THEOREM 3.2
Answer: See Page B-8.
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26. For what values of c do the vectors span ?
27. For what values of c do the vectors span ?
28. For what values of a and b do the vectors and span ?
29. Show that for any given set of vectors , for every .
30. Let the set of vectors and be such that for and for . Prove that .
31. Show that if span a vector space V, then for any vector the vectors also span V.
32. Show that a nonempty subset S of a vector space V is a subspace of V if and only if for every .
33. Let denote the vector space of all polynomials of Exercise 22, page 50. Show that no finite set of vectors in spans .
34. Let S be a subset of a vector space V. Prove that is the intersection of all subspaces of V which contain the set S.
35. If the vectors u and v span V, then so do the vectors u and .
36. If the vectors u and v span V, then so do the vectors u and .
37. If the vectors u and v are contained in the space spanned by the vectors w and z, then .
38. If , and if for , then .
39. If and are finite sets of vectors in a vector space V, then:
40. If and are finite sets of vectors in a vector space V, then:
41. If and are finite sets of vectors in a vector space V, then:
42. If and are subspaces of a vector space V, then:
§2. Linear Independence
Note that if each $r_i = 0$, then surely $r_1\mathbf{v}_1 + r_2\mathbf{v}_2 + \cdots + r_n\mathbf{v}_n$
will equal zero.
To say that $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is linearly independent is to say that no other linear combination of the vectors equals 0.
DEFINITION 3.3
EXAMPLE 3.6
EXAMPLE 3.7
Most graphing calculators do not have the capability of “rref-ing” a “tall matrix.” But you can always add enough zero columns to arrive at a square matrix:
Answer: Yes.
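A computer algebra system has no trouble with a tall matrix, so the zero-column trick is not needed there. A sketch (hypothetical vectors, not those of Example 3.7):

```python
from sympy import Matrix

# Hypothetical vectors in R^4, written as the columns of a "tall" 4-by-3 matrix.
A = Matrix([[1, 0, 2],
            [0, 1, 1],
            [1, 1, 3],
            [2, 1, 0]])

# The columns are linearly independent exactly when every column of rref(A)
# contains a leading one, i.e. when the rank equals the number of vectors.
print(A.rref()[0])
print(A.rank() == A.shape[1])
```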
EXAMPLE 3.8
Answer: See page B-8.
THEOREM 3.3
EXAMPLE 3.9
Answer: See page B-9.
THEOREM 3.4
Contrapositive Proof
Let P and Q be two propositions.
You can prove that:
by showing that:
(After all if Not-Q implies Not-P, then you certainly cannot have P without having Q: think about it)
THEOREM 3.5
In the exercises you are invited to establish the converse of this theorem.
THEOREM 3.6
Answer: See page B-9.
THEOREM 3.7
Answer: See page B-9.
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27. , where denotes the set of positive numbers.
28. For what real numbers a is a linearly dependent set in ?
29. For what real numbers a is a linearly dependent set in ?
30. For what real numbers a is a linearly dependent set in ?
31. Find a value of a for which is a linearly dependent set in the function space .
32. Find a value of a for which is a linearly dependent set in the function space .
33. Let v be any nonzero vector in a vector space V. Prove that is a linearly independent set.
34. Prove that every nonempty subset of a linearly independent set is again linearly independent.
35. Prove that if is a linearly dependent set in a vector space V, then so is the set for any set of vectors in V.
36. Establish the converse of Theorem 3.6.
37. Let be a set of vectors in a space V. Show that if there exists any vector which can be uniquely expressed as a linear combination of the vectors in S then S is linearly independent.
38. Show that is a linearly independent set in the vector space of Example 2.5, page 47.
39. Let and be linearly independent sets of vectors in a vector space V with . Prove that is also a linearly independent set.
40. If is a linearly dependent set, then for some scalar r.
41. If is a linearly dependent set, then for some scalars r and s.
42. If is a linearly independent set of vectors in a vector space V, then is also linearly independent.
43. If is a linearly independent set of vectors in a vector space V, then is also linearly independent.
44. For any three nonzero distinct vectors in a vector space V, is linearly dependent.
45. If is a linearly independent set of vectors in a vector space V, and if then is also linearly independent.
46. If is a linearly independent set of vectors in a vector space V, and if a is any nonzero number, then is also linearly independent.
47. If and are linearly independent sets of vectors in a vector space V, then is also a linearly independent set.
48. If and are linearly independent sets of vectors in a vector space V, then is also a linearly independent set.
§3. Bases
DEFINITION 3.4 Basis
Standard bases in
EXAMPLE 3.10
If you take the time to solve the system directly, you will find that:
Figure 3.1
EXAMPLE 3.11
In words: There cannot be more linearly independent vectors than the number of vectors in any spanning set.
THEOREM 3.8
Since is a solution of ():
THEOREM 3.9
DEFINITION 3.5
In the exercises you are asked to show that the polynomial space of Exercise 22, page 50, is an infinite dimensional space.
So, if the number of vectors equals the dimension of the space, then to show that those vectors form a basis you do not have to establish both linear independence and spanning, for either implies the other.
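In computational terms (an aside, with a hypothetical set of vectors): for three vectors in $\mathbb{R}^3$, a single rank or rref computation settles both independence and spanning at once.

```python
from sympy import Matrix, eye

# Hypothetical: three vectors in R^3 placed as the columns of a square matrix.
A = Matrix([[1, 1, 0],
            [0, 1, 1],
            [1, 0, 1]])

# Since the number of vectors equals dim(R^3) = 3, checking that rref(A) is the
# identity (equivalently, that the rank is 3) establishes that they form a basis.
print(A.rref()[0] == eye(3))
```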
THEOREM 3.10
The cycle:
insures that the validity of any of the three propositions implies that of the other two.
THEOREM 3.11
THEOREM 3.12
Procedure: Keep adding vectors, while maintaining linear independence, till you end up with n linearly independent vectors.
EXAMPLE 3.12
EXAMPLE 3.13
Figure 3.2
Note that c and d are the free variables in rref[coef(S)]
THEOREM 3.13
1. (a) Prove that is a basis for . Express as a linear combination of the vectors in .
2. (a) Prove that is a basis for , and express as a linear combination of the vectors in .
3. (a) Prove that is a basis for , and express as a linear combination of the vectors in .
4.
5.
6.
7.
8. (a) Prove that the matrix space has dimension 4.
9.
10.
11.
12.
13.
14.
15.
16. (a) Prove that the polynomial space is of dimension 4.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.
35.
36.
37.
38.
39.
40.
41.
42. Show that is a subspace of , and then find a basis for that subspace.
43. Show that is a subspace of , and then find a basis for that subspace.
44. Show that is a subspace of , and then find a basis for that subspace.
45. Show that is a subspace of , and then find a basis for that subspace.
46. Find all values of c for which is a basis for .
47. Find all values of c for which is a basis for .
48. Find a basis for the vector space of Example 2.5, page 47.
49. Suppose is a basis for a vector space V. For what values of a and b is a basis for V?
50. Let S be a subspace of V with . Prove that .
51. Suppose that is a linearly independent set of vectors in a space V of dimension n, and that spans V. Prove that .
52. A set of vectors S in a finite dimensional vector space V is said to be a maximal linearly independent set if it is not a proper subset of any linearly independent set. Prove that a set of vectors is a basis for V if and only if it is a maximal linearly independent set.
53. A set of vectors S in a finite dimensional vector space V is said to be a minimal spanning set if no proper subset of S spans V. Prove that a set of vectors is a basis for V if and only if it is a minimal spanning set.
54. Let H and K be finite dimensional subspaces of a vector space V with , and let . Prove that . (Note: you were asked to show that is a subspace of V in Exercise 42, page 67.)
55. Let H and K be finite dimensional subspaces of a vector space V, and let . Prove that:
56. Prove that the polynomial space of Exercise 22, page 50, is not finite dimensional by showing that it does not have a finite basis.
57. (Calculus dependent) Show that is a subspace of the polynomial space P of Exercise 22, page 50. Find a basis for S.
58. Prove that a vector space V is infinite dimensional (not finite dimensional) if and only if for any positive integer n, there exists a set of n linearly independent vectors in V.
59. If is a basis for a vector space V, and if , , and are nonzero scalars, then is also a basis for V.
60. If is a linearly independent set of vectors in a space V of dimension n, and if , then is a basis for V.
61. If is a linearly independent set of vectors in a space V of dimension n, and if , then is a basis for V.
62. If is a spanning set of vectors in a space V of dimension n, then is a basis for V.
63. If is a spanning set of vectors in a space V of dimension n, and if , then is a basis for V.
64. If is a basis for a vector space V, then is also a basis for V.
65. It is possible to have a basis for the polynomial space which consists entirely of polynomials of degree 2.
66. Let be a spanning set for a space V of dimension n satisfying the property that . If you delete any vector from the set , then the resulting set of n vectors will be a basis for V.
67. If V is a space of dimension n, then V contains a subspace of dimension m for every integer .
Linear Combination
Spanning
If , then is said to span the vector space V.
If every vector in a set is contained in the space spanned by another set , then is a subset of .
Linearly Independent Set
Unique representation.
No vector can be built from the rest.
Expanding a linearly independent set.
Linear Independence Theorem.
Linear independence in .
Basis
All bases for a vector space contain the same number of vectors.
You can show that a set of n vectors in an n-dimensional vector space is a basis by either showing that they span the space, or by showing that they form a linearly independent set—you don’t have to do both:
Expansion Theorem
Reduction Theorem
Reducing a set of vectors S in to a basis for Span(S)
CHAPTER 4
LINEARITY
§1. Linear Transformations
A linear transformation is also called a linear function, or a linear map. A linear map from a vector space to itself is said to be a linear operator.
DEFINITION 4.1
EXAMPLE 4.1
A smoother approach:
EXAMPLE 4.2
You can also show that the above function is not linear by demonstrating, for example, that . To show that a function is not linear you need only come up with a specific counterexample which “shoots down” either (1) or (2) of Definition 4.1.
Answer: Yes.
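To illustrate the counterexample idea numerically (with a made-up map, not the function of Example 4.2): a single pair of vectors at which additivity fails already shows that a map is not linear.

```python
import numpy as np

# A hypothetical map T: R^2 -> R^2 (chosen to be nonlinear).
def T(v):
    x, y = v
    return np.array([x * y, x + y])

u = np.array([1.0, 2.0])
w = np.array([3.0, 1.0])

# One counterexample to T(u + w) = T(u) + T(w) suffices:
print(T(u + w), T(u) + T(w))
```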
In order to distinguish where the different zeros reside, we are using $\mathbf{0}_V$ and $\mathbf{0}_W$ to indicate the zero vector in V and in W, respectively.
Answer: See page B-12.
You can perform the vector operations in V and then apply T to that result: , or you can first apply T and then perform the vector operations in W: . Either way, you will end up at the same vector in W.
Answer: See page B-12.
See Theorem 2.13, page 61
A Linear map is completely determined by its action on a basis
Yes:
A linear transformation is completely determined by its action on a basis of V
Answer: See page B-12.
EXAMPLE 4.3
Answer:
(a)
(b)
Composition of Linear Functions
EXAMPLE 4.4
Answer: (a) See page B-13.
(b)
1. , where .
2. , where .
3. , where .
4. , where .
5. , where .
6. , where .
7. , where .
8. , where .
9. , where .
10. , where .
11. , where .
12. , where .
13. , where .
14. , where .
15. , where .
16. , where .
17. , where .
18. , where , and V is the vector space of Example 2.5, page 47.
19. Let the linear map be such that:
20. Let the linear map be such that:
21. Let the linear map be such that:
22. Let the linear map be such that:
23. Show that there cannot exist a linear transformation such that:
24. Show that there cannot exist a linear transformation such that:
25. Show that the identity function , given by for every v in V, is linear.
26. Show that the zero function , given by for every v in V, is linear. (Referring to the equation , where does 0 live?)
27. In precalculus and calculus, functions of the form are typically called linear functions. Give necessary and sufficient conditions for a function of that form to be a linear operator on the vector space .
28. Show that for any the function , where is linear. (See Theorem 2.4, page 44.)
29. (Calculus Dependent) Let be the subspace of the function space consisting of all differentiable functions. Let be given by , where denotes the derivative of f. Show that T is linear.
30. (Calculus Dependent) Show that the function , given by is linear.
31. (Calculus Dependent) Show that if the linear function is such that , and , then T is the derivative function.
32. (Calculus Dependent) Let denote the vector space of all real-valued functions that are integrable over the interval . Let be given by . Show that T is linear.
33. Let be linear and let S be a subspace of V. Show that is a subspace of W.
34. (PMI) Use the Principle of Mathematical Induction to prove Theorem 4.3.
35. Let , with addition and scalar multiplication given by:
36. (a) Show that if a function satisfies the property that for every and , then is a linear function: which is to say, that it must also satisfy the property that for every .
37. Let satisfy the condition that for every . Show that:
38. Let satisfy the condition that for every . Show that:
39. , where is given by and by .
40. , where is given by and by
41. , where is given by and by .
42. , where is given by and by .
43. , where is given by , by , and by .
44. , where is given by , by , and by .
45. (PMI) Let be linear, for . Show that is linear.
46. For any the function given by is linear.
47. For any the function given by is linear.
48. Let be linear. If is a linearly independent subset of W then is a linearly independent subset of V.
49. Let be linear. If is a linearly independent subset of V then is a linearly independent subset of W.
50. If for given functions and the composite function is linear, then both f and g must be linear.
51. If for given functions and the composite function is linear, then f must be linear.
52. If, for given functions and , the composite function is linear, then g must be linear.
§2. Kernel and Image
Figure 4.1
DEFINITION 4.2
DEFINITION 4.3
In particular, if $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$
is a basis for V, then
$\{T(\mathbf{v}_1), T(\mathbf{v}_2), \ldots, T(\mathbf{v}_n)\}$ will span the image of T.
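For a map given by a matrix, both subspaces can be read off with a computer algebra system. A sketch with a hypothetical matrix (standard bases assumed):

```python
from sympy import Matrix

# Hypothetical matrix of a linear map T: R^4 -> R^3 with respect to the standard bases.
A = Matrix([[1, 2, 0, 1],
            [0, 1, 1, 1],
            [1, 3, 1, 2]])

print(A.nullspace())     # a basis for the kernel of T
print(A.columnspace())   # a basis for the image of T

# nullity + rank equals the dimension of the domain (the dimension theorem).
print(len(A.nullspace()) + len(A.columnspace()) == A.shape[1])
```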
EXAMPLE 4.5
Answer: (a) See page B-13.
(b) ,
Why can’t you simply show just one of the two?
EXAMPLE 4.6
System S is certainly easy enough to solve directly. Still:
Recall that:
Answer: See page B-14.
One-To-One and Onto Functions
The first part of this theorem is telling us that if a linear map is “one-to-one at 0,” then it is one-to-one everywhere. Certainly not true for other functions:
DEFINITION 4.4
EXAMPLE 4.7
Answer: See page B-14.
1. , where
2. , where
3. , where
4. , where
5. , where
6. , where
7. , where
8. , where
9. , where
10. , where
11. , where
12. , where
13. , where
14. , where
15. , where
16. , where
17. , where
18. , where
19. , where
20. , where
21. , where
22. , where
23. , where
24. where
25. where
26. where
27. where
28. Let be given by .
29. Let be given by .
30. Determine a basis for the kernel and image of the linear transformation which maps to , to , and to .
31. Determine a basis for the kernel and image of the linear transformation which maps to , to , and to .
32. Determine a basis for kernel and image of the linear transformation which maps to , to , to 5, and to .
33. Find, if one exists, a linear transformation such that:
34. Find, if one exists, a linear transformation such that:
35. Let , and let be the linear operator , for . Enumerate the possible values of and .
36. Let be linear with and . Enumerate the possible values of and .
37. Let be linear with and . Enumerate the possible values of and .
38. Let be a one-to-one linear map. Determine the rank and nullity of T.
39. Let be an onto linear map. Determine the rank and nullity of T.
40. Let be a linear operator, with . Prove that if and only if T is one-to-one.
41. Let be linear, with . Prove that is a basis for V if and only if is a basis for W.
42. Give an example of a linear transformation such that .
43. Let be a linear transformation, with . Prove that T is one-to-one if and only if .
44. Let be linear, with . Prove that T is one-to-one if and only if T is onto.
45. Let and be linear.
46. If then .
47. There exists a one-to-one linear map .
48. There exists a one-to-one linear map .
49. There exists an onto linear map .
50. There exists an onto linear map .
51. If is linear and , then T cannot be onto.
52. If is linear and , then T cannot be one-to-one.
53. If is linear and , then T cannot be onto.
54. If is linear and , then T cannot be one-to-one.
55. There exists a linear transformation such that .
56. There exists a linear transformation such that .
57. There exists a linear transformation such that .
58. There exists a linear transformation such that .
59. If is linear and if W is finite dimensional, then V is finite dimensional.
60. If is linear and if V is finite dimensional, then W is finite dimensional.
61. If is linear and if V is finite dimensional, then is finite dimensional.
62. If is linear and if is finite dimensional, then V is finite dimensional. If is linear and if is finite dimensional, then either V or W is finite dimensional.
63. Let and be linear. If , and , then .
64. Let and be linear. If , , and , then .
65. Let and be linear. If , , and , then .
66. Let and be linear, with and . If T is one- to-one and L is onto, then .
§3. Isomorphisms
Bijections and Inverse Functions
DEFINITION 4.5
A bijection serves to pair the elements of A with those of B (see margin).
Only bijections
have inverses
Figure 4.2
DEFINITION 4.6
Answer: See page B-14.
Back to Linear Algebra
EXAMPLE 4.8
Answer: See page B-14.
DEFINITION 4.7 Isomorphism
EXAMPLE 4.9
This theorem asserts that “isomorphic” is an equivalence relation on any set of vector spaces. See Exercises 37-39.
Answer: See page B-15.
DEFINITION 4.8
Answer: See page B-15.
A rose by any other name
Answer: See page B-16.
EXAMPLE 4.10
For , and :
Answer: See page B-16.
turns out to be the zero in X.
turns out to be the inverse of .
EXAMPLE 4.11
Answer: See page B-17.
1. , where .
2. , where .
3. , where .
4. , where .
5. , where .
6. , where .
7. , where .
8. , where .
9. , where .
10. , where .
11. ,where .
12. , where .
13. , where .
14. , where .
15. , where .
16. , where .
17. , where .
18. , where .
19. , where .
20. , where .
21. , where .
22. , where .
23. given by .
24. given by .
25. Show that if the functions and have inverses, then the function also has an inverse and that .
26. For , let be given by . For what values of r is an isomorphism?
27. For a vector in the space V let be given by .
28. Find a specific isomorphism from to .
29. Show that the vector space of Example 2.4, page 46, is isomorphic to the vector space of real numbers, .
30. Find an isomorphism between the vector space of Example 2.5, page 47 and .
31. Suppose that a linear transformation is one-to-one, and that is a linearly independent subset of V. Show that is a linearly independent subset of W. (In particular, the above holds if T is an isomorphism.)
32. Suppose that a linear transformation is onto, and that is a spanning set for V. Show that is a spanning set for W. (In particular, the above holds if T is an isomorphism.)
33. Prove that a linear transformation is an isomorphism if and only if for any given basis for V, is a basis for W.
34. Let V be a vector space of dimension n, and let be the vector space of linear transformations from to (see Exercise 35, page 122). Prove that is also of dimension n and is therefore isomorphic to V. (The space is called the dual space of V.)
35. Let V be a vector space of dimension n, and let W be a vector space of dimension m. Let be the vector space of linear transformations from to W (see Exercise 35, page 122). Prove that .
36. A partition of a set X is a collection of mutually disjoint (nonempty) subsets of X whose union equals X. (In words: a partition breaks the set X into disjoint pieces.)
37. Show that the relation defined by if and only if is an equivalence relation on the set Q of rational numbers (“fractions”).
38. Show that the relation if the vector space V is isomorphic to the vector space W is an equivalence relation on any set of vector spaces.
39. (a) If is an onto function, then so is the function onto for any function .
40. (a) Let and . If is onto, then f must also be onto.
41. (a) If is a one-to-one function, then so is the function one-to-one for any function .
42. If and are isomorphisms, then .
43. If is an isomorphism, and if , then given by is also an isomorphism.
44. Let and be linear. If is an isomorphism, then T and L must both be isomorphisms.
45. If and are isomorphisms, then so is the function given by an isomorphism.
Linear Transformation
The two conditions for linearity can be incorporated into one statement.
The above result can be extended to encompass n-vectors and scalars.
Linear transformations map zeros to zeros and inverses to inverses.
A linear transformation is completely determined by its action on a basis.
A method for constructing all linear transformations from a finite dimensional vector space to any other vector space.
The composition of linear maps is linear.
Kernel
Image
Both the kernel and image of a linear transformation are subspaces.
Nullity
Rank
The Dimension
Theorem.
One-To-One
Onto
Bijection
The composite of bijections is again a bijection.
Inverse Function
The inverse of a linear bijection is again linear.
Isomorphism
Every vector space is isomorphic to itself. If V is isomorphic to W, then W is isomorphic to V. If V is isomorphic to W, and W is isomorphic to Z, then V is isomorphic to Z.
All n-dimensional vector spaces are isomorphic to Euclidean n-space.
CHAPTER 5
MATRICES AND LINEAR MAPS
§1. Matrix Multiplication
Take two matrices of equal dimension, and simply multiply corresponding entries to obtain their product.
As with:
In general, we will use:
or
to denote an m by n matrix with entries .
DEFINITION 5.1
In Words: To get the $ij^{\text{th}}$ entry of AB, run across the $i^{\text{th}}$ row of A and down the $j^{\text{th}}$ column of B, multiplying and adding along the way (see margin).
Note: The above is meaningful only if the number of columns of the matrix on the left equals the number of rows of the matrix on the right.
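A quick numerical illustration (hypothetical matrices): the product below is defined because the 2-by-3 matrix on the left has as many columns as the 3-by-2 matrix on the right has rows.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])      # 2 by 3
B = np.array([[1,  0],
              [0,  1],
              [2, -1]])        # 3 by 2

print(A @ B)   # defined: a 2-by-2 matrix
print(B @ A)   # also defined here (3 by 3); in general AB and BA need not agree
```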
EXAMPLE 5.1
Answer: (a)
(b) Number of columns in A does not equal the number of rows in B.
Matrix multiplication is not commutative.
THEOREM 5.1
The properties of this theorem are not particularly difficult to establish. The trick is to carefully keep track of the entries of the matrices along the way.
Why are we restricting this discussion to square matrices?
Because:
Powers of Square Matrices
DEFINITION 5.2
In , .
Why not for ?
Answer: See page B-18.
THEOREM 5.2
Column and Row Spaces
DEFINITION 5.3
THEOREM 5.3
DEFINITION 5.4
EXAMPLE 5.2
Answer:
System of Equations Revisited
Figure 5.1
THEOREM 5.4
Answer: See page B-18.
THEOREM 5.5
Prove that the solution set of any homogeneous system of m equations in n unknowns is a subspace of .
From matrices to Linear Transformations
Note that:
THEOREM 5.6
THEOREM 5.7
DEFINITION 5.5
THEOREM 5.8
Note: The dimension of the null space of A is called the nullity of A
Fair terminology, in that .
EXAMPLE 5.3
Answer:
Answer: See page B-18.
THEOREM 5.9
1.
2.
3.
4.
5.
6.
7.
8.
9. (a) Show that each column of (as a vertical two-tuple) is a linear combination of the columns of .
10. Let and let be the column matrix whose entry is 1 and all other entries are 0. Show that is the column of A, for .
11. (a) (Dilation and Contraction) Let for . Show that maps every point in the plane to a point r times as far from the origin.
12. Show that for any given linear transformation there exists a unique matrix such that .
13. Let . Prove that if for every , then .
14. Determine all such that for every .
15. Prove Theorem 5.1(ii).
16. Prove Theorem 5.1(iii).
17. Prove Theorem 5.1(iv).
18. A square matrix for which if is said to be a diagonal matrix. Show that if is a diagonal matrix and if is a column matrix, then . For example: .
19. The transpose of a matrix is the matrix , where . In other words, the transpose of A is that matrix obtained by interchanging the rows and columns of A.
20. A square matrix A is symmetric if the transpose of A equals A: (see Exercise 19).
21. A square matrix A is said to be skew-symmetric if (see Exercise 19).
22. A matrix is said to be idempotent if .
23. A matrix is said to be nilpotent if for some integer k.
24. The sum of the diagonal entries in the matrix is called the trace of A and is denoted by : .
25.
26.
27.
28.
29.
30.
31. (PMI) Show that if , then for any positive integer n, .
32. (PMI) Let and . Show that if , then for every positive integer n.
33. (PMI) Show that if the entries in each column of sum to 1, then the entries in each column of also sum to 1, for any positive integer m.
34. (PMI) Show that if is a diagonal matrix, then so is . (See Exercise 18.)
35. (PMI) Show that if is an idempotent matrix, then for all integers . (See Exercise 22.)
36. (PMI) Show that for any , and for any positive integer n, . (See Exercise 19.)
37. (PMI) Let , for . Show that:
38. (PMI) Let for . show that:
39. For and , if then either or .
40. Let A and B be two-by-two matrices with . If , then .
41. If A and B are square matrices of the same dimension, and if AB is idempotent, then . (See Exercise 22.)
42. For all , .
43. For any given matrix , all entries in the matrix are nonnegative.
44. For and , if A has a column consisting entirely of 0’s, then so does AB.
45. For and , if A has a row consisting entirely of 0’s, then so does AB.
46. For and , if A has two identical columns, then so does AB.
47. For and , if A has two identical rows, then so does AB.
48. For , is a subspace of .
49. For , is a subspace of .
50. For , is a subspace of . (See Exercise 24.)
51. If A is a nilpotent matrix, then so is . (See Exercise 23.)
52. A is idempotent if and only if is idempotent. (See Exercises 22 and 19.)
Since all identity matrices are square, we can get away with specifying just one of their dimensions, as with:
instead of .
We will soon show that a matrix A can have but one inverse.
§2. Invertible Matrices
Invertible Matrices
DEFINITION 5.6
EXAMPLE 5.4
Actually, you need not verify that both AB and BA equal I, for if one does, then so must the other (Theorem 5.19).
EXAMPLE 5.5
The system of equations on the right also has no solution.
Answer: Invertible with inverse .
From the given conditions, we know that and exist. What we do here is to show that the product is, in fact, the inverse of AB.
Answer: See page B-19.
The inverse of a product of invertible matrices is the product of their inverses, in the reverse order.
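A numerical spot-check of this fact (an aside, with two hypothetical invertible matrices):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 1.]])
B = np.array([[1., 3.],
              [0., 1.]])

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)   # the inverses, in the reverse order
print(np.allclose(lhs, rhs))                # True
```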
DEFINITION 5.7
Elementary Matrices
Elementary row operations were introduced on page 3.
DEFINITION 5.8
Answer: See page B-19
Figure 5.2
Answer: (a)
(b)
EXAMPLE 5.6
Answer: A is invertible with inverse:
Answer: See page B-20.
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25. Prove Theorem 5.11(ii).
26. Prove Theorem 5.12.
27. Prove Theorem 5.13.
28. Prove that if A is invertible, then .
29. Let . Prove that there exists an invertible matrix such that .
30. Let be a linearly independent set of vectors in , and let be invertible. Show that is linearly independent.
31. Let be a basis for , and let be invertible. Show that is also a basis.
32. Show that a (square) matrix that has a row consisting entirely of zeros cannot be invertible.
33. Show that a (square) matrix that has a column consisting entirely of zeros cannot be invertible.
34. Show that if a row of a (square) matrix is a multiple of one of its other rows, then it is not invertible.
35. State necessary and sufficient conditions for a diagonal matrix to be invertible. (See Exercise 18, page 161.)
36. Prove that is invertible if and only if the rows of A constitute a basis for .
37. Prove that is invertible if and only if the columns of A constitute a basis for .
38. Prove that the transpose of an invertible matrix A is invertible, and that . (See Exercise 19, page 161.)
39. Prove that if a symmetric matrix is invertible, then its inverse is also symmetric. (See Exercise 20, page 161.)
40. Prove that if is an idempotent invertible matrix, then . (See Exercise 22, page 162.)
41. Prove that every nilpotent matrix is singular. (See Exercise 23, page 162.)
42. (a) Prove that is invertible if and only if .
43. Let be such that . Show that A is invertible.
44. Let be such that , with . Show that A is invertible.
45. Let be such that . Show that A is invertible.
46. (PMI) Show that if is invertible, then so is for every positive integer n.
47. (PMI) Let A and B be invertible matrices of the same dimension with . Show that:
48. If A is invertible, then so is , and .
49. If is a linearly independent set in the vector space , and if is not the zero vector, then is linearly independent.
50. Let A be an invertible matrix, and . If , then .
51. Let be invertible, and . If , then
52. If A and B are invertible matrices, then is also invertible, and .
53. If and , then A is not invertible.
54. If a square matrix A is singular, then .
55. If A and B are matrices, and if is invertible, then both A and B are invertible.
56. If A and B are matrices, and if is singular, then both A and B are singular.
§3. Matrix Representation of Linear Maps
DEFINITION 5.9
EXAMPLE 5.7
Answer:
Throughout this section the term “basis” will be understood to mean “ordered basis.”
We remind you that we are using to denote
(gamma) is the Greek letter c.
Note the order of the two subscripts in . It kind of “looks backward,” since T “goes from to .” As you will soon see, however, the chosen order is the more suitable for the task at hand, that of representing a linear map in matrix form.
DEFINITION 5.10
EXAMPLE 5.8
Answer:
Figure 5.2
Note that the dimensions match up:
Since
EXAMPLE 5.9
Incidentally, noting that the coefficient matrix of system () is identical to that of (*) we could save a bit of time by doing this
Answer: See page B-20.
EXAMPLE 5.10
Answer: See page B-21.
We recall that denotes the identity matrix of dimension n, and that denotes the identity map from V to V.
Answer: See page B-21.
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23. (Calculus Dependent) Let be the linear map given by and let be the differentiation linear function: . Determine the given matrices for the basis of , and the basis of .
24. (Calculus Dependent) Let V be the subspace of spanned by the three vectors 1, , and . Let be the differentiation operator. Determine for , and show directly that .
25. (Calculus Dependent) Let be the differentiation operator. Determine for , and show directly that .
26. Find the linear function , if for .
27. Find the linear function if for and .
28. Find the linear function if for and .
29. Find the linear function if for and .
30. Let and be the linear maps given by:
31. Let and be the linear maps given by:
32. Prove that the linear function of Theorem 5.21 is an isomorphism.
33. Prove Theorem 5.22.
34. Let be a linear map from a vector space V of dimension n to a vector space W of dimension m. Let and be bases for V and W, respectively. Show that if is such that for every , then .
35.
36.
37.
38.
39. Let be given by (See Exercise 19, page 161). Let:
40. Let be a linear operator. A nontrivial subspace of V is said to be invariant under T if . Assume that and . Show that there exists a basis for V such that , where is the zero matrix.
41. Let be a linear function and let and be bases for the finite dimensional vector spaces V and W, respectively. Let . Show that:
42. Let V and W be vector spaces of dimensions n and m, respectively. Prove that the vector space of Exercise 35, page 122, is isomorphic to .
43. (PMI) Let be vector spaces and let be a basis for , . Let be a linear map, . Use the Principle of Mathematical Induction to show that .
44. Let be an isomorphism, and let be a basis for V. Then, for every , , where .
45. Let be linear, and let and be bases for V and W, respectively. Let be defined by . Then: .
46. Let be a basis for V, and let . If is a linear operator on V, then .
47. If is the zero transformation from the n-dimensional vector space V to the m- dimensional vector space W, then is the zero matrix for every pair of bases and for V and W, respectively.
48. Let be the identity map on a space V of dimension n, and let and be (ordered) bases for V. Then if and only if .
49. Let be given by . There exists a basis such that is a diagonal matrix (See Exercise 18, page 161).
50. Let be given by . There exists a basis such that is a diagonal matrix (See Exercise 18, page 161).
51. For and and any basis for : .
§4. Change of Basis
Change of Base Matrix
EXAMPLE 5.11
Answer: See page B-21.
EXAMPLE 5.12
Figure 5.3
Answer:
The adjacent identity map is pointing in two directions. The left-to-right direction gives rise to the change-of-base matrix , while the right-to-left direction brings us to . Are and related? Yes:
In other words: and are invertible, with each being the inverse of the other.
A generalization of this result appears in Exercise 24.
In reading the composition of functions, you kind of have to read from right to left: the right-most function being performed first.
Figure 5.4
EXAMPLE 5.13
Answer: See page B-22.
In Exercise 21 you are asked to show that “similar” is an equivalence relation on . (See Exercises 37-39, page 147 for the definition of an equivalence relation).
DEFINITION 5.11
The column of , namely:
equals the column of P,
since:
EXAMPLE 5.14
Answer: See page B-22.
1. , , , and .
2. , , , and .
3. , , , and .
4. , , , and ,
5. , , , and .
6. , , , and .
7. , , , and .
8. Find the coordinates of the point in the xy-plane with respect to the coordinate axes obtained by rotating the standard axes in a counterclockwise direction. (See Example 5.12.)
9. , given by , , and .
10. , given by , , and .
11. , given by , , and .
12. , given by , , and .
13. (Calculus Dependent) , given by , , and .
14. Let be the linear operator given by . Find a basis for such that , where and .
15. Let be a linear operator. Find the basis for such that , where: and .
16. Let be a linear operator. Find the basis for such that , where and .
17. Show that and are similar.
18. Show that and are not similar.
19. Find all matrices that are similar to the identity matrix .
20. Let be the linear map given by .
21. Show that “similar” is an equivalence relation on . (See Exercises 37-39, page 147 for the definition of an equivalence relation).
22. Show that in the proof of Theorem 5.27 is a basis for V.
23. Let be similar. Show that there exists a linear operator and bases and for such that and .
24. (A generalization of Theorem 5.26) Let be a linear map from the finite dimensional vector space V to the finite dimensional vector space W. Let and be bases for V, and let and be bases for W. Prove that: .
25. , , , , , and
26. , , , , , and .
27. , , , , , and .
28. , , , , , and .
29. , , , , and .
30. Let be linear. Let be bases for the n-dimensional space V, and let be bases for the m-dimensional space W. Prove that there exists an invertible matrix and an invertible matrix such that .
31. Let and be linear maps. Let be bases for V, be bases for W, and be bases for Z. Show that .
32. Let be a linear operator, and let and be a bases for V. If , then .
33. If A and B are similar matrices, then and are also similar.
34. If A and B are similar invertible matrices, then and are also similar.
35. If A and B are similar matrices, then at least one of them must be invertible.
36. If A and B are similar matrices, then so are their transposes. (See Exercise 19, page 161.)
37. If A and B are similar matrices, and if A is symmetric, then so is B. (See Exercise 20, page 161.)
38. If A and B are similar matrices, and if A is idempotent, then so is B. (See Exercise 22, page 162.)
39. If A and B are similar matrices, then . (See Exercise 24, page 162.)
A connection between matrix multiplication and linear transformations.
Properties
Coordinate
Vector
Matrix Representation of a Linear Map
The matrix representation of a linear map T describes the “action” of T.
The matrix of a composition of functions is the product of the matrices of those functions.
Relating coordinate vectors with respect to different bases.
The matrix of the inverse of a transformation is the inverse of the matrix of that transformation.
Relating matrix representations of a linear operator with respect to different bases.
Similar Matrices
Similar matrices represent the same linear map with respect to different bases.
CHAPTER 6
Determinants and Eigenvectors
§1. Determinants
DEFINITION 6.1 Determinant
EXAMPLE 6.1
Note that the sign of the has an alternating checkerboard pattern
Note: is called the minor of , and is called the cofactor of A
EXAMPLE 6.2
Answer: See page B-23.
An upper triangular matrix is a square matrix with zero entries below its main diagonal. For example:
A lower triangular matrix is a square matrix with zero entries above its main diagonal. For example:
Answer: See page B-23.
Prove that the determinant of a lower triangular matrix equals the product of the entries along its diagonal.
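For instance, the analogous computation for an upper triangular matrix (a 3 by 3 matrix of our own choosing), expanding repeatedly along the first column:
\[
\det\begin{bmatrix} 2 & 5 & 7\\ 0 & 3 & 1\\ 0 & 0 & 4\end{bmatrix}
= 2\det\begin{bmatrix} 3 & 1\\ 0 & 4\end{bmatrix}
= 2\,(3\cdot 4 - 1\cdot 0) = 2\cdot 3\cdot 4 = 24 .
\]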
Row Operations and Determinants
Matrices A and B differ only in the row, and that row has been removed from both A and B to arrive at the matrices and .
Answer: See page B-23.
EXAMPLE 6.3
Answer: 15
Note that
The restriction is imposed in (b) since we are concerned with elementary row operations (see page 3).
Answer: See page B-24.
You can add this result to the list of equivalences for invertibility appearing in Theorem 5.17, page 172:
(vi)
If , then:
If , then its last row consists entirely of zeros, and
Augustin Cauchy, a prolific French mathematician (1789-1857).
Answer: See page B-24.
For the brave at heart:
The column-expansion part of the theorem is relegated to the exercises.
Proof of the Laplace Expansion Theorem
This will show that the expansion about any row equals that of expanding about the first row.
Figure 6.1
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23. While one can certainly find matrices such that , prove that one cannot find matrices such that .
24. Show that the matrix is invertible if and only if the numbers a, b, and c, are all distinct.
25. Prove that if a matrix A contains a row (or column) consisting entirely of zeros, then .
26. If is a diagonal matrix and if is the column matrix whose entry is 1 and all other entries are 0, then .
27. Let . Prove that , where denotes the transpose of A (see Exercise 19, page 161).
28. Prove that if is skew-symmetric, then (see Exercise 21, page 162). What conclusion can you draw from this result?
29. For , let B be obtained from A by interchanging pairs of rows of A m times. Express as a function of m and .
30. Let A be similar to B (see Definition 5.11, page 195). Prove that:
31. Show that is an equation of the line passing through the points and in .
32. Show that is an equation of the plane passing through the points , , and in .
33. Show that the area of the triangle with vertices at , , and is given by , where the sign () is chosen to yield a positive number.
34. (Cramer’s Rule) If is a system of n equations in n unknowns, with A invertible, then the system has a unique solution [Theorem 5.17(ii), page 172]. Cramer’s rule asserts that:
35. Prove the “column-expansion-part” of Theorem 6.3 (Laplace Expansion Theorem).
36. For any m and , .
37. For any and : .
38. Prove that for and any positive integer m: .
39. If is of the form , where I is the identity matrix, 0 is the zero matrix, and X and Y are and matrices, respectively, then: .
40. If is of the form , where X and Z are square matrices and 0 is a zero matrix, then: .
41. For , if , then .
42. For , if , then A is the zero matrix.
43. For , if , then both A and B are invertible and .
44. For any , .
45. For any , .
46. If is nilpotent, then (see Exercise 23, page 162).
47. If and , and if , then: .
,
The German word eigen translates to: characteristic.
At one time, eigenvalues were called latent values, and it is for this reason that (lambda), the Greek letter for “l”, is used.
We remind you that we use to denote , and that is the vector in “column form.”
§2. Eigenspaces
DEFINITION 6.2
EXAMPLE 6.4
Recall that null(A) denotes the solution set of the homogeneous system of equations .
DEFINITION 6.3
EXAMPLE 6.5
is the solution set of the homogeneous system:
Answer:
Characteristic Polynomials
How does one go about finding the eigenvalues of a matrix?
DEFINITION 6.4
For , the n-degree polynomial is said to be the characteristic polynomial of A, and is said to be the characteristic equation of A.
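As a small illustration (the matrix is our own, and either sign convention for the characteristic polynomial produces the same roots):
\[
A=\begin{bmatrix}2 & 1\\ 1 & 2\end{bmatrix}:\qquad
\det(A-\lambda I)=\det\begin{bmatrix}2-\lambda & 1\\ 1 & 2-\lambda\end{bmatrix}
=(2-\lambda)^2-1=\lambda^{2}-4\lambda+3=(\lambda-1)(\lambda-3),
\]
so the eigenvalues of this A are 1 and 3.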
EXAMPLE 6.6
A better choice is to expand about the second column. If you do, pay particular attention to the checkerboard sign pattern of page 206.
EXAMPLE 6.7
A TI-92 teaser:
Answer: See page B-24.
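Readers without a TI-84 or TI-92 at hand can run the same kind of check with any linear-algebra package. Below is a minimal sketch using a matrix of our own choosing (not the matrix of the example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)   # columns of `eigenvectors` are eigenvectors of A
print(eigenvalues)                             # 3 and 1, in some order

lam, v = eigenvalues[0], eigenvectors[:, 0]
print(np.allclose(A @ v, lam * v))             # True: A v = lambda v
```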
Turning to Linear Operators
Compare with Definition 6.3, page 218.
DEFINITION 6.5
Note that the linear map T stretches the eigenvector by its eigenvalue 4, and by :
EXAMPLE 6.8
Compare with Definition 6.4, page 219.
Compare with Example 6.5.
DEFINITION 6.6
EXAMPLE 6.9
Answer:
Note that
(Why?)
Theorem 5.26, page 193, and Exercise 30(b), page 216, tell us that
for any bases and
DEFINITION 6.7
Let be a linear operator on a vector space V of dimension n. The characteristic polynomial of T is the n-degree polynomial where is any basis for V, and is said to be the characteristic equation of T.
Compare with Theorem 6.8
EXAMPLE 6.10
Answer: See page B-25.
1.
2.
3.
4.
5.
6.
7.
8.
The adjacent example illustrates how the above result can be used to factor certain polynomials.
9.
10.
11.
12.
13.
14.
15. given by .
16. where and .
17. given by .
18. given by .
19. given by .
20. , where .
21. , where .
22. , where .
23. , where and .
24. given by .
25. given by .
26. , if , , and .
27. , if .
28. given by .
29. , where , , , and .
30. , where I is the identity map: .
31. , where Z is the zero map: .
32. (Calculus Dependent) Let V be the vector space of differentiable functions, and let be the derivative operator. Show that is an eigenvector for D.
33. Prove that a square matrix A is invertible if and only if 0 is not an eigenvalue of A.
34. Let A be an invertible matrix with eigenvalue and corresponding eigenvector v. Prove that is an eigenvalue of with corresponding eigenvector v.
35. Let and be distinct eigenvalues of . Prove that
36. (a) Show that similar matrices have equal characteristic polynomials (see Definition 5.11, page 195).
37. Let , with P invertible. Prove that if is an eigenvector of A, then is an eigenvector of .
38. Let . Prove that a, and c are eigenvalues of A.
39. For , find necessary and sufficient conditions for A to have:
40. Let . Prove that , and are eigenvalues of A.
41. (a) Let be a linear operator with eigenvalue . Prove that:
42. For , show that .
43. Prove that 0 is an eigenvalue for a linear operator if and only if .
44. Show that if v is an eigenvector for the linear operator , then so is for any .
45. Let be an isomorphism. Show that v is an eigenvector in V if and only if is an eigenvector in W.
46. Let v be an eigenvector for the linear operators and . Show that v is also an eigenvector for the linear operator . Find a relation between the eigenvalues corresponding to v for T, L, and .
47. Show that if and are distinct eigenvalues of a linear operator , then .
48. Let and be eigenvectors corresponding to distinct eigenvalues and of a linear operator . Show that is a linearly independent set.
49. Let be a linear operator on a vector space V of dimension n, and let be an isomorphism. Prove that is an eigenvalue of T if and only if is an eigenvalue of the matrix , where S is the standard basis of , and that .
50. Let be an isomorphism. Show that if v is an eigenvector of the linear operator , then is an eigenvector of the linear operator .
51. Let be a basis for a space V of dimension n, and a linear operator. Prove that if is an eigenvector of T with eigenvalue , then is an eigenvector of with eigenvalue .
52. Show that if is an eigenvalue of then is also an eigenvalue of the transpose . (See Exercise 19, page 162)
53. Show that if is nilpotent, then 0 is the only eigenvalue of A. (See Exercise 23, page 163.)
54. Show that the characteristic polynomial of can be expressed in the form , where Trace(A) denotes the trace of A (see Exercise 24, page 163).
55. Let . Prove that the characteristic polynomial of A is of the form , and that . (This is the Cayley-Hamilton Theorem for square matrices of dimension 2.)
56. (PMI) Let . Use the Principle of Mathematical Induction to show that the coefficient of the leading term of the characteristic polynomial of A is .
57. (PMI) Let . Show that the constant term of the characteristic polynomial of A is .
58. (PMI) Let be the distinct eigenvalues of A for . Prove that are the distinct eigenvalues of .
59. (PMI) Let A be a square matrix with eigenvalue and corresponding eigenvector v. Show that for any positive integer n, is an eigenvalue of with corresponding eigenvector v.
60. (PMI) Let A be a square matrix with eigenvalue and corresponding eigenvector v. Show that for any integer n, is an eigenvalue of with corresponding eigenvector v.
61. (PMI) Let be an eigenvalue for a linear operator . Use the Principle of Mathematical Induction to show that is an eigenvalue for , where is defined inductively as follows: , and .
62. If is an eigenvalue for then it is also an eigenvalue for , where .
63. If is an eigenvalue for the two operators and , then it is also an eigenvalue for the operator , where .
64. For , if and are eigenvalues of A and B, respectively, then is an eigenvalue of .
65. For , if and are eigenvalues for A and B, respectively, then is an eigenvalue for AB.
66. If is an eigenvalue of the linear operator , then is an eigenvalue of .
67. If and are eigenvalues for the linear operators and , respectively, then is an eigenvalue for .
68. If and are eigenvalues for the linear operators and , respectively, then is an eigenvalue for .
69. If v is an eigenvector for and , then v is also an eigenvector for .
70. If is a linear operator with eigenvector v, then is also an eigenvector of T for every .
71. For , if and only if is the only eigenvalue of .
72. Let T be a linear operator on a vector space V of dimension n. Let be an eigenvalue for T and let be a basis for . Then, for any , is an eigenvalue for , and is a basis for .
§3. Diagonalization
DEFINITION 6.8
The ‘s can be zero and need not be distinct (several of the eigenvectors in may share a common eigenvalue).
Answer: See page B-25.
Since is an eigenvector corresponding to :
Answer: See page B-26.
Returning to Matrices
DEFINITION 6.9
This theorem asserts that any diagonalizable matrix is similar to a diagonal matrix. The converse also holds (Exercise 37). And so we have:
is diagonalizable if and only if it is similar to a diagonal matrix.
(See page 193)
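A hedged numerical illustration of this statement, using a matrix of our own choosing: gather eigenvectors of A into the columns of P; then the product of P-inverse, A, and P is a diagonal matrix carrying the eigenvalues of A.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # eigenvalues 5 and 2

eigenvalues, P = np.linalg.eig(A)   # columns of P are eigenvectors of A
D = np.linalg.inv(P) @ A @ P        # a matrix similar to A
print(np.round(D, 10))              # diagonal, with the eigenvalues of A on its diagonal
print(np.allclose(A, P @ np.diag(eigenvalues) @ np.linalg.inv(P)))   # True
```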
EXAMPLE 6.11
Answer: See page B-26.
Answer:
Algebraic and geometric multiplicity of eigenvalues
EXAMPLE 6.12
Recall that is the linear operator given by:
Recall that the column of consists of the coefficients of the vector with respect to the basis .
EXAMPLE 6.13
Answer: See page B-27.
1. given by .
2. given by .
3. given by .
4. where and .
5. where and .
6. given by .
7. given by .
8. where , , and .
9. where , , and .
10. given by .
11. given by .
12. given by .
13. where , , , and .
14. where , , , and .
15. given by .
16. given by .
17. given by .
18. where , and .
19. given by .
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.
35. Let be such that . Show that:
36. Let be diagonalizable. Prove that the rank of A is equal to the number of nonzero eigenvalues of A.
37. Prove that if is similar to a diagonal matrix, then A is diagonalizable.
38. Let . Prove that A and its transpose have the same eigenvalues, and that they occur with equal algebraic multiplicity (see Exercise 19, page 161).
39. Let . Prove that if is an eigenvalue of A with geometric multiplicity d, then is an eigenvalue of its transpose with geometric multiplicity d (see Exercise 19, page 161).
40. Let be an isomorphism on a finite dimensional vector space. Prove that:
41. Let be a linear operator on a space of dimension n. If are distinct eigenvalues of T, and if there exists a basis for V such that is a diagonal matrix, then .
42. Let be the distinct eigenvalues of a linear operator on a vector space V of dimension n. The operator T is diagonalizable if and only if .
43. If are both diagonalizable, then so is .
44. If are such that is diagonalizable, then both A and B are diagonalizable.
§4. Applications
Fibonacci numbers and beyond
Leonardo Fibonacci (Italian; circa 1170-1250) is considered by many to be the best mathematician of the Middle Ages. The sequence bearing his name evolved from the following question he posed and resolved in 1220:
Assume that pairs of rabbits do not produce offspring during their first month of life, but will produce a new pair of offspring each month thereafter. Assuming that no rabbit dies, how many pairs of rabbits will there be after k months?
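As a brief computational aside (not part of the text), tracking the pair counts month by month produces the Fibonacci sequence, and the ratios of consecutive terms approach the golden ratio discussed next:

```python
def fibonacci(k):
    """Return the first k Fibonacci numbers: 1, 1, 2, 3, 5, ..."""
    terms = [1, 1]
    while len(terms) < k:
        terms.append(terms[-1] + terms[-2])   # each month's count is the sum of the previous two
    return terms[:k]

fibs = fibonacci(12)
print(fibs)                   # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
print(fibs[-1] / fibs[-2])    # about 1.618..., approaching the golden ratio
```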
The number has an interesting history dating back to the time of Pythagoras (c. 500 B.C.). It is called the golden ratio ( is the first letter in the Greek spelling of Phidias, a sculptor who used the golden ratio in his work).
Basically, and for whatever aesthetic reason, it is generally maintained that the most “visually appealing” partition of a line segment into two pieces is that for which the ratio of the length of the longer piece L to the length of the shorter pi...
Recursive Relation
Answer:
Systems of Differential Equations (calculus dependent)
If the derivative of a function is zero, then the function must be constant.
In alternate notation form:
EXAMPLE 6.14
Any other two eigenvectors corresponding to the two eigenvalues will do just as well.
Answer: See page B-28.
Answer:
EXAMPLE 6.15
Answer: 1.3 years
1. , , and for .
2. , , and for .
3. , , and for .
4. , , and for .
5. , , and for .
6. , , and for and .
7. , , , and for .
8. , , , and for .
9. (PMI) Let denote the Fibonacci number. Prove that , for .
10. Let and be the first two elements of a sequence and let be a recurrence relation which defines the remaining elements of the sequence. Prove that if the quadratic equation has two distinct solutions, and , then for some .
11. Let the entries of the matrices and be differentiable functions, and let C be a matrix with scalar entries (real numbers). Given that the dimensions of the matrices are such that the operations can be performed, prove that:
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22. Given enough space and nourishment, the rates of growth of plants A and B are given by and , respectively, where t denotes the number of months after planting. One year, 50 of A and 30 of B were planted, and in such a fashion that the rates of gro...
23. Assume that initially, tank A contains 20 gallons of a liquid solution that is 10% alcohol, and that tank B contains 30 gallons of a solution that is 20% alcohol. At time , the mixture in A is pumped to B at a rate of 1 gallon per minute, while that...
Stochos: Greek for “guess.” Stochastikos: Greek for “one who predicts the future.” Andrei Markov: Russian mathematician (1856-1922).
Transition matrices are also called probability matrices.
Since the entries in the transition matrix are probabilities, they must lie between 0 and 1 (inclusive). Moreover, since the entries down either column account for all possible outcomes (staying in Y, or leaving Y, for example), their sum must equal ...
Figure 6.2
If T is the transition matrix of a Markov process with initial-state matrix , then the state matrix in the chain is given by:
Answer: 757, 686, and 636 of the current freshmen will live in the dorm in their sophomore, junior, and senior year, respectively.
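As a hedged illustration with a made-up two-state chain (not the dormitory data): repeatedly multiplying the current state matrix by the transition matrix T produces the later state matrices.

```python
import numpy as np

# Column-stochastic transition matrix: entry (i, j) is the probability of
# moving to state i, given that the process is currently in state j.
T = np.array([[0.9, 0.3],
              [0.1, 0.7]])

x = np.array([0.5, 0.5])      # initial-state matrix (equal probabilities, say)
for k in range(1, 4):
    x = T @ x                 # state matrix after k steps: T^k times the initial state
    print(k, x)
```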
Powers of the Transition Matrix
DEFINITION 6.10
Let T denote the transition matrix of a Markov chain. If the process starts in state j, then the element in the row of the column of represents the probability of ending up at state i after m steps.
EXAMPLE 6.16
DEFINITION 6.11
EXAMPLE 6.17
For example:
is regular, since:
DEFINITION 6.12
Note that it is possible to eventually go from any state to any other state in a regular Markov chain (see Theorem 6.24).
In other words, is that matrix obtained by interchanging the rows and columns of A. For example:
DEFINITION 6.13
A-1: If and , then . [Exercise 19(f), page 161.]
A-2: If is an eigenvalue of then is also an eigenvalue of . (Exercise 52, page 231.)
Note that is an eigenvector of the transpose of T, and not necessarily of T.
In a sense, independently of its initial state:
The fixed state of a regular transition matrix is also the final state of the matrix
That “(s)” in is not an exponent; it is there to indicate that we are considering the matrix
EXAMPLE 6.18
How large is large enough? If the rows look different, then take a higher power.
Answer: Approximately 41%, 26%, and 33% of the population will vote Democratic, Republican, and Green, respectively.
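To see the fixed-state claim numerically (again with an illustrative matrix rather than the election data), one can either raise T to a high power, whose columns then all approximate the fixed state, or take an eigenvector of T with eigenvalue 1 and scale it so that its entries sum to 1.

```python
import numpy as np

T = np.array([[0.9, 0.3],
              [0.1, 0.7]])               # regular: every entry is already positive

# Approach 1: a high power of T has (nearly) identical columns -- the fixed state.
print(np.linalg.matrix_power(T, 50))

# Approach 2: the eigenvector belonging to eigenvalue 1, normalized so its entries sum to 1.
eigenvalues, eigenvectors = np.linalg.eig(T)
k = np.argmin(np.abs(eigenvalues - 1.0))
fixed = eigenvectors[:, k].real
fixed = fixed / fixed.sum()
print(fixed)                              # [0.75 0.25] for this particular T
```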
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12. Determine the probability of ending up at states A and B after two steps of the Markov chain associated with the transition matrix in Exercise 9, given that you are initially in state:
13. Determine the probability of ending up at states A, B and C after two steps of the Markov chain associated with the transition matrix in Exercise 10, given that you are initially in state:
14. Determine the probability of ending up at states A, B, C and D after two steps of the Markov chain associated with the transition matrix in Exercise 11, given that you are initially in state:
15.
16.
17.
18.
19.
20.
21. Show that the matrix is not a regular matrix, by:
22. Show that for any transition matrix T, the system of equations stemming from has infinitely many solutions.
23. Let be a regular transition matrix. Prove that is a factor of the characteristic polynomial of T.
24. Show that if the entries in each column of sum to k, then k is an eigenvalue of A.
25. Referring to the proof of Theorem 6.26, show that:
26. Establish Theorem 6.26 for an arbitrary transition matrix T.
27. Prove that if is any eigenvalue of a regular transition matrix, then .
28. Show that if is an eigenvalue of a regular transition matrix, then .
29. (Rapid Transit) A study has shown that in a certain city, if a daily (including Saturday and Sunday) commuter uses rapid transit on a given day, then he will do so again on his next commute with probability 0.85, and that a commuter who does not ...
30. (Dental Plans) A company offers its employees 3 different dental plans: A, B, and C. Last year, 550 employees were in plan A, 340 in plan B, and 260 were in plan C. This year, there are 500 employees in plan A, 360 in plan B, and 290 in plan C. A...
31. (Campus Life) The following transition matrix gives the probabilities that a student living in the Dorms, at Home, or Off-campus (but not at home), will be living in the Dorms, at Home, or Off-campus (but not at home) next year (assume that all f...
32. (Higher Learning) The transition matrix below represents the probabilities that a female child will receive a Doctorate, a Masters, or a Bachelors (terminal degree), or No degree; given that her mother received a D, M, B (terminal degree), or No ...
33. (HMO Plans) A company offers its employees 5 different HMO health plans: A, B, C, D, and E. An employee can switch plans in January of each year, resulting in the following transition matrix:
34. (Mouse in Maze) On Monday, a mouse is placed in a maze consisting of paths A and B. At the end of path A is a cheese treat, and at the end of path B there is bread. Experience has shown that if the mouse takes path A, then there is a 0.9 probabil...
35. (Cities, Suburbs, and Country) Within the period of a year, 2% of a population currently residing in cities will move to the suburbs, while 2% of them will move to the country. 4% of those living in the suburbs will move to the cities, while 3% o...
36. (Crop Rotation) A farmer rotates a field between crops of beans, potatoes and carrots. If she grows beans this year, then next year she will grow potatoes or carrots, each with 0.5 probability. If she grows carrots, then she will grow beans with ...
37. (Wolf Pack) A wolf pack hunts on one of four regions: A, B, C, and D:
Eigenvalue and Eigenvector
Eigenspace
Characteristic Polynomial and Characteristic Equation
Diagonal Matrix
Diagonalizable Matrices and Linear Operators
Algebraic and Geometric Multiplicity of Eigenvalues
Diagonalizing a Matrix
is a fixed state for a transition matrix if .
If T is the transition matrix of a Markov process with initial-state matrix , then the state matrix in the chain is given by:
CHAPTER 7
Inner Product Spaces
Basically, an inner product space is a vector space augmented with an additional structure, one that will enable us to generalize the familiar concepts of distance and angles in the plane to general vector spaces.
§1. Dot Product
DEFINITION 7.1
Answer: See page B-30.
DEFINITION 7.2
Figure 7.1
is defined to be the length of v.
is defined to be the distance between .
Answer: See page B-30.
Angle Between Vectors
Figure 7.2
For any , is defined to be that angle whose cosine is x.
In Exercise 44 you are asked to verify that
Assuring us that: exists.
DEFINITION 7.3
EXAMPLE 7.1
Answer:
We remind you that, for any , is that angle such that .
So, if , then , or:.
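A quick numerical instance of the angle formula, with vectors of our own choosing:
\[
\mathbf{u}=(1,0),\ \mathbf{v}=(1,1):\qquad
\cos\theta=\frac{\mathbf{u}\cdot\mathbf{v}}{\|\mathbf{u}\|\,\|\mathbf{v}\|}
=\frac{1}{1\cdot\sqrt{2}}=\frac{\sqrt{2}}{2},
\quad\text{so } \theta=\frac{\pi}{4}\ (45^{\circ}).
\]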
Orthogonal Vectors in
The angle between the vectors depicted in the adjacent figure has a measure of (), and we say that those vectors are perpendicular or orthogonal. Appealing to Definition 7.3 we see that:
Answer: See page B-31.
DEFINITION 7.4
Note: The zero vector in is orthogonal to every vector in .
Orthogonal
Projection
Figure 7.3
EXAMPLE 7.2
Answer:
EXAMPLE 7.3
Answer: (a) (b) 4
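For reference, and in whatever notation the figure uses, the projection of a vector u onto a nonzero vector v can be written as follows; the numerical instance uses vectors of our own choosing:
\[
\operatorname{proj}_{\mathbf{v}}\mathbf{u}=\frac{\mathbf{u}\cdot\mathbf{v}}{\mathbf{v}\cdot\mathbf{v}}\,\mathbf{v};
\qquad
\mathbf{u}=(3,4),\ \mathbf{v}=(1,0):\quad
\operatorname{proj}_{\mathbf{v}}\mathbf{u}=\frac{3}{1}\,(1,0)=(3,0).
\]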
Planes Revisited
Note that a normal to the plane can easily be spotted from any of the above forms. For example, is a normal to the plane:
EXAMPLE 7.4
Answer: See page B-31.
EXAMPLE 7.5
Answer: See page B-31.
EXAMPLE 7.6
Any point satisfying the equation would do just as well.
Answer: See page B-32.
Note that if you apply this formula in Example 7.6 you obtain:
Cross Product
DEFINITION 7.5
The cross product of and is denoted by , and is expressed in the form:
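One common way to remember the cross product, whatever form the display above takes, is as a symbolic determinant expanded along its first row:
\[
\mathbf{u}\times\mathbf{v}
=\det\begin{bmatrix}\mathbf{i} & \mathbf{j} & \mathbf{k}\\ u_1 & u_2 & u_3\\ v_1 & v_2 & v_3\end{bmatrix}
=(u_2v_3-u_3v_2)\,\mathbf{i}-(u_1v_3-u_3v_1)\,\mathbf{j}+(u_1v_2-u_2v_1)\,\mathbf{k}.
\]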
EXAMPLE 7.7
Answer: See page B-33.
1.
2.
3.
4.
5.
6. Find all values of c such that .
7. Find all values of a such that the vector is orthogonal to the vector .
8. Find all values of a such that the vector is orthogonal to the vector .
9. Find all values of a and b such that the vector is orthogonal to the vector .
10.
11.
12.
13.
14.
15.
16.
17.
18.
19. Find the distance from the point to the line L in passing through the points and .
20. Find the distance from the point to the line L in passing through the points and .
21. Find the distance from the point to the line L in passing through the points and .
22. Find the distance from the point to the plane .
23. Find the distance from the point to the plane .
24. Determine the angle of intersection of the planes and . Suggestions: Consider the normals to those planes.
25. Find the set of vectors in orthogonal to:
26.
27.
28. Find the angle between a main diagonal and an adjacent edge of a cube of volume .
29. Prove Theorem 7.1(i).
30. Prove Theorem 7.1(ii).
31. Prove Theorem 7.1(iv).
32. Establish the following properties for and :
33. Show that two nonzero vectors and are normal to a given plane if and only if each is a scalar multiple of the other.
34. (Normal form equation of a line in ) Express the line in the form , where , p is a point on the line, and is a normal to the line.
35. Let , and let . Show that . (See Exercise 19, page 161).
36. Show that the function given by is linear. What is the kernel of ?
37. Let . Show that if for every , then .
38. (Pythagorean Theorem in ) Let . Show that if and only if .
39. (Parallelogram Law in ) Let . Show that:
40. Let . Prove that if and only if and are orthogonal.
41. Prove that if is such that , then is a basis for .
42. Let . Prove that if u is orthogonal to each , , then u is orthogonal to every .
43. (Cauchy-Schwarz Inequality in ) Show that if , then .
44. Use the above Cauchy-Schwarz Inequality to show that for any nonzero vectors :
45. Establish the following properties for and :
46. (Metric Space Structure of ) Define the distance between two vectors to be . Prove that for all :
47. (PMI) Use the principle of mathematical induction to show that for any and any : .
48. Let . If and if , then .
49. Let . If for every , then .
50. Let . If and with and multiples of v, and if and are orthogonal to u, then and .
51. Let , with . If u is orthogonal to both v and z, then for some .
52. The function given by is linear.
53. for all .
§2. Inner Product
While the scalar product assigns a vector to a scalar r and a vector v, the inner product assigns a real number to a pair of vectors.
DEFINITION 7.6
Why are we requiring the c’s to be positive?
EXAMPLE 7.8
For :
Answer: See page B-33.
In the exercises you are asked to establish the following generalization and combination of (b) and (c).
Answer: See page B-33.
distance in an inner product space
Answer: See page B-33.
DEFINITION 7.7
EXAMPLE 7.9
Answer: (a) (b)
The Cauchy-Schwarz Inequality
The proof sketched out in Exercise 43, page 291, can also be used to establish this result.
Answer: See page B-34.
The Cauchy-Schwarz inequality plays a hidden role in this definition. (Where?)
DEFINITION 7.8
EXAMPLE 7.10
Answer:
1. The magnitude of the vector .
2. The magnitude of the vector .
3. The distance between the vectors and .
4. The distance between the vectors and .
5. The angle between the vectors and .
6. The angle between the vectors and .
7. Verify that the Cauchy-Schwarz inequality holds for the vectors and .
8. Verify that the Cauchy-Schwarz inequality holds for the vectors and .
9. The magnitude of the vector .
10. The magnitude of the vector .
11. The distance between the vectors and .
12. The distance between the vectors and .
13. The angle between the vectors and .
14. The angle between the vectors and .
15. Verify that the Cauchy-Schwarz inequality holds for the vectors and .
16. Verify that the Cauchy-Schwarz inequality holds for the vectors and .
17. For in the vector space , define:
18. The magnitude of the vector .
19. The magnitude of the vector .
20. The distance between the vectors and .
21. The angle between the vectors and .
22. Verify that the Cauchy-Schwarz inequality holds for the vectors and .
23. Verify that is an inner product on the polynomial space .
24. (Calculus Dependent) (a) Show that is a subset of the function vector space of Theorem 2.4, page 44.
25. The magnitude of the vector in the inner product space .
26. The distance between the vectors and in the inner product space .
27. The angle between the vectors and in the inner product space .
28. The magnitude of the vector in the inner product space .
29. The distance between the vectors and in the inner product space .
30. The angle between the vectors and in the inner product space .
31. The magnitude of the vector in the inner product space .
32. The distance between the vectors and in the inner product space .
33. The angle between the vectors and in the inner product space .
34. Verify that the Cauchy-Schwarz inequality holds for the vectors and in the inner product space .
35. Verify that the Cauchy-Schwarz inequality holds for the vectors and in the inner product space .
36. Prove that ordinary multiplication in the set of real numbers R is an inner product on the vector space .
37. Prove Theorem 7.3(a).
38. Prove Theorem 7.3(b).
39. Prove Theorem 7.3(c).
40. Prove Theorem 7.3(e).
41. Prove Theorem 7.3(f).
42. Let , V an inner product space. Show that
43. Let , V an inner product space. Show that .
44. Let , V an inner product space. Show that if and only if .
45. Let , V an inner product space. Show that is a subspace of V.
46. (PMI) Let V be an inner product space. Use the principle of mathematical induction to show that for any and any :
47. Let , V an inner product space. If and if , then .
48. Let . If for every , then .
49. There exists an inner product on for which .
50. There exists an inner product on for which .
51. There exists an inner product on for which .
§3. Orthogonality
DEFINITION 7.9
EXAMPLE 7.11
THEOREM 7.7
Answer: See page B-34.
Normalization
DEFINITION 7.10
DEFINITION 7.11
THEOREM 7.8
Answer: See page B-34.
THEOREM 7.9
Note: To obtain an orthonormal basis for an inner product space, simply normalize the orthogonal basis generated by the Gram-Schmidt process.
EXAMPLE 7.12
Multiplying any in the Gram-Schmidt process by a nonzero constant will not alter that vector’s “orthogonality” feature, but will simplify subsequent calculations.
This brute-force approach is not always practical. Software packages such as Maple and MATLAB include the Gram-Schmidt process as a built-in procedure. Yes, the Gram-Schmidt process works off of a basis for the inner product space, but that is not a problem...
Answer: See page B-35.
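For readers who want to experiment, here is a minimal sketch of the process for the ordinary dot product on R^n; it is not code from the text, and the packages mentioned above include polished versions.

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a list of linearly independent vectors into an orthonormal list."""
    orthonormal = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for u in orthonormal:
            w = w - np.dot(w, u) * u               # remove the component of w along u
        orthonormal.append(w / np.linalg.norm(w))  # normalize the orthogonal remainder
    return orthonormal

basis = [(1, 1, 0), (1, 0, 1), (0, 1, 1)]
for u in gram_schmidt(basis):
    print(np.round(u, 4))
```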
Orthogonal Complement
Examples of a subspace W and its orthogonal complement:
If W is a line passing through the origin, its orthogonal complement is the plane passing through the origin with normal W.
If W is a plane passing through the origin, its orthogonal complement is the line passing through the origin orthogonal to W.
THEOREM 7.10
In this part of the theorem we assume that W is finite dimensional (the result does, however, hold in general).
Answer: See page B-35.
Compare with Theorem 7.2, page 283.
THEOREM 7.11
Note: is said to be the orthogonal projection of v onto W, and we write: .
EXAMPLE 7.13
We know that will turn out to be of dimension 2. How?
and
Since is orthogonal to , so is
Answer: See page B-35.
Consider Example 7.3, page 284.
THEOREM 7.12
Answer: 3
1. in the Euclidean inner product space .
2. in the weighted inner product space of Example 7.8, page 292, with .
3. in the weighted inner product space of Example 7.8, page 292, with .
4. in the polynomial inner product space of CYU 7.11, page 293.
5. in the inner product space of Exercise 17, page 298.
6. (Calculus Dependent) in the inner product space of Exercise 24, page 299.
7. (Calculus Dependent) in the inner product space of Exercise 24, page 299.
8. Use Theorem 7.8 to express in the Euclidean inner product space as a linear combination of the vectors in the orthonormal basis .
9. Use Theorem 7.8 to express in the polynomial inner product space of CYU 7.11, page 293, as a linear combination of the vectors in the orthonormal basis .
10. Find all values of a for which is an orthogonal set in the Euclidean inner product space .
11. Find all values of a and b for which is an orthogonal set in the Euclidean inner product space .
12. Find all values of a and b for which is an orthogonal set in the weighted inner product space of Example 7.8, page 292, with .
13. Find all values of a and b for which is an orthogonal set in the weighted inner product space of Example 7.8, page 292, with .
14. Find all values of a, b, and c for which is an orthogonal set in the weighted inner product space of Example 7.8, page 292, with .
15. (Calculus Dependent) Find all values of a and b for which is an orthogonal set in the inner product space of Exercise 24, page 299.
16. (Calculus Dependent) Find all values of a and b for which is an orthogonal set in the inner product space of Exercise 24, page 299.
17. in the Euclidean inner product space .
18. in the Euclidean inner product space .
19. in the Euclidean inner product space
20. in the Euclidean inner product space .
21. in the weighted inner product space of Example 7.8, page 292, with .
22. in the polynomial inner product space of CYU 7.11, page 293.
23. The solution set of in the Euclidean inner product space .
24. The solution set of in the Euclidean inner product space .
25. (Calculus Dependent) in the inner product space of Exercise 24, page 299.
26. (Calculus Dependent) in the inner product space of Exercise 24, page 299.
27. Find an orthonormal basis for in the Euclidean inner product space .
28. Find an orthonormal basis for in the Euclidean inner product space .
29. Find an orthonormal basis for in the Euclidean inner product space .
30. Find an orthonormal basis for in the weighted inner product space of Example 7.8, page 292, with .
31. Find an orthonormal basis for in the polynomial inner product space of CYU 7.11, page 293.
32. , .
33. , .
34. , .
35. , .
36. , .
37. Find a basis for the orthogonal complement of the subspace in the weighted inner product space of Example 7.8, page 292, with , and express the vector as a sum of a vector in W and a vector in .
38. Find a basis for the orthogonal complement of the subspace in the polynomial inner product space of CYU 7.11, page 293, and express the vector as a sum of a vector in W and a vector in .
39. Prove that the standard basis of page 94 is an orthonormal basis in the Euclidean inner product space .
40. Prove that is an orthonormal basis in the polynomial inner product space of CYU 7.11, page 277.
41. Let V be an inner product space. Prove that and that .
42. Let in an inner product space V. Prove that if and only if for all , .
43. Let be an orthogonal set in an inner product space V. Show that if and , then .
44. Let be a subspace in an inner product space V. Prove that .
45. Let w be a vector in an inner product space V of dimension n. Prove that is a subspace of V of dimension .
46. Let S be a subset of an inner product space V. Prove that is a subspace of V.
47. Let S be a subspace of an inner product space V of dimension n. Prove that
48. Let be an orthonormal basis in an inner product space V. Show that for any . (See Definition 5.9, page 178.)
49. Show that the following are equivalent:
50. Prove that every orthogonal matrix is invertible, and that its inverse is also orthogonal.
51. Prove that a product of orthogonal matrices (of the same dimension) is again orthogonal.
52. Prove that if A is orthogonal, then .
53. Prove that if A is orthogonal then the rows of A also constitute an orthonormal set.
54. Prove that if A is orthogonal, and if B is equivalent to A, then B is also orthogonal.
55. Prove that every orthogonal matrix is of the form or where .
56. Show that every orthogonal matrix is of the form or .
57. Show that every orthogonal matrix corresponds to either a rotation or a reflection about a line through the origin in .
58. (a) Prove that the null space of is the orthogonal complement of the row space of A.
59. (Bessel’s Equality) Let be an orthonormal basis for an inner product space V. Prove that for any : .
60. If is an orthogonal set in an inner product space V, then is an orthogonal set for all scalars .
61. If is an orthonormal set in an inner product space V, then is an orthonormal set for all scalars .
62. Let W be a subspace of an inner product space V. If with , then .
63. Let be a basis for an inner product space V such that each for is orthogonal to every for . If , then .
64. Let be an orthogonal basis for an inner product space V. If , then .
§4. The Spectral Theorem
In other words, the row of A is the column of . For example:
The transpose of a matrix is the matrix , where
DEFINITION 7.12
THEOREM 7.13
We remind you that we are using to denote . For we now define to be the dot product of the corresponding vertical n-tuples (see margin). It is easy to show that ,with defined to be , is an inner product space (see Definition 7.6, page 292). Note, tha...
THEOREM 7.14
EXAMPLE 7.14
Note:
Answer: See page B-37.
THEOREM 7.15
Answer: See page B-37.
Symmetric Operators
Compare with Theorem 7.15.
DEFINITION 7.13
THEOREM 7.16
THEOREM 7.17
Answer: See page B-38.
Note that V contains an orthonormal basis if and only if it contains a normal basis.
THEOREM 7.18
Answer: See page B-38.
Matrix Version of the Spectral Theorem
THEOREM 7.19
Recall that for :
(see page 307)
DEFINITION 7.14
THEOREM 7.20
THEOREM 7.21
Answer: See page B-39.
Note: In the literature the term orthogonally diagonalizable is typically used to refer to what we are calling .
DEFINITION 7.15
THEOREM 7.22
See Theorem 5.26, page 193
(1), (2), (3) and () tell us that:
is a diagonal matrix, with an orthonormal matrix.
In particular:
A is !
Moreover:
with the column of P.
Answer: See page B-40.
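A numerical companion to the matrix version of the theorem, using a symmetric matrix of our own choosing: numpy's eigh routine returns an orthonormal matrix of eigenvectors P, and the product of the transpose of P, A, and P is diagonal.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                   # real symmetric

eigenvalues, P = np.linalg.eigh(A)           # columns of P form an orthonormal basis of eigenvectors
print(np.allclose(P.T @ P, np.eye(2)))       # True: P is an orthogonal (orthonormal) matrix
print(np.round(P.T @ A @ P, 10))             # diagonal matrix of the eigenvalues of A
```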
1.
2.
3.
4.
5.
6.
7.
8.
9. Exercise 5.
10. Exercise 6.
11. Exercise 7.
12. Exercise 8.
13.
14.
15.
16.
17. Verify that is a symmetric operator on the weighted inner product space with . Verify that is an orthonormal basis in this inner product space, and determine .
18. (a) Verify that is a symmetric operator on the standard inner product space : . (b) Use the Gram-Schmidt process of page 303 on the basis to arrive at the orthonormal basis . Verify that is not symmetric, and that is symmetric.
19. Let denote the standard Euclidean dot product inner product space. Find a symmetric linear operator and a basis for which .
20. Let denote the weighted inner product space with . Find a symmetric linear operator and a basis for which .
21. Let denote the standard inner product space : . Find a symmetric linear operator and a basis for which
22. Show that for any both and are symmetric.
23. Show that if are orthonormally diagonalizable, then so is:
24. (PMI) Show that if is orthonormally diagonalizable, then so is for any positive integer n.
25. (PMI) Show that if is orthonormally diagonalizable for , then so is .
26. Show that if is an invertible orthonormally diagonalizable matrix, then so is .
27. Prove that if A is a real symmetric matrix, then the eigenvalues of A are real.
28. If is a symmetric matrix, then so is .
29. If is a symmetric matrix, then so is .
30. If are symmetric matrices, then so is .
31. If are symmetric matrices, then so is .
32. If are orthonormally diagonalizable, then so is .
33. If is orthonormally diagonalizable, then so is .
34. If is orthonormally diagonalizable, then so is .
35. Let V be an inner product space. If is a symmetric operator, then so is for every .
36. Let V be an inner product space. If and are symmetric operators, then so is .
37. Let V be an inner product space. If and are symmetric operators, then so is .
Dot Product
Properties
distributive property:
Angle between
vectors
Orthogonal Vectors
Properties
Distance
Cauchy-Schwarz
Inequality
Properties
Angle between
vectors
Orthogonal Set
Theorem
Unit Vector
Orthonormal Set
Theorem
Gram-Schmidt
Process
Orthogonal
Complement
Properties
Decomposition
Symmetric Matrix
Theorems
Symmetric
Operator
Theorems
Spectral Theorem
(PMI)
Let denote a proposition that is either true or false, depending on the value of the integer n.
If:
I. is True.
And if, from the assumption that:
II. is True
one can show that:
III. is also True.
then the proposition is valid for all integers
The Principle of Mathematical Induction might have been better labeled a Principle of Mathematical Deduction; for:
Inductive reasoning is a process used to formulate a hypothesis or conjecture, while deductive reasoning is a process used to rigorously establish whether or not the conjecture is valid.
Figure 1.1
The sum of the first 3 odd integers is $1+3+5=9=3^2$. The sum of the first 4 odd integers is $1+3+5+7=16=4^2$. Suggesting that the sum of the first k odd integers is $k^2$
(see Exercise 1).
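In outline, and assuming the conjectured formula above, the induction step simply adds the next odd integer, $2k+1$, to both sides:
\[
\underbrace{1+3+\cdots+(2k-1)}_{=\,k^{2}}+(2k+1)=k^{2}+2k+1=(k+1)^{2}.
\]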
EXAMPLE 1.1
EXAMPLE 1.2
III: We need to show that holds for ; which is to say, that:
Recall that:.
EXAMPLE 1.3
EXAMPLE 1.4
Appendix C
Answers to Selected Exercises
1.1 Systems of Linear Equations, page 11.
1.2 Consistent and Inconsistent Systems of Equations, page 23.
2.1 Vectors in the Plane and Beyond, page 38.
1.
3.
5.
7. 9. 11. 15. 17.
19. (a) (b) (c)
21. 23.
2.2 Abstract Vectors Spaces. page 49.
2.3 Properties of Vectors Spaces, page 57.
(All exercises call for either a proof or a counterexample)
2.4 Subspaces, page 65.
2.5 Lines and Planes, page 73.
1. 3. 5.
43. 45.
47. 49.
51. 53.
55.
3.1 Spanning Sets, page 84.
3.2 Linear Independence, page 91.