A First Course in LINEAR ALGEBRA
an Open Text

BASE TEXTBOOK
VERSION 2017 – REVISION A
ADAPTABLE | ACCESSIBLE | AFFORDABLE

by Lyryx Learning, based on the original text by K. Kuttler
Creative Commons License (CC BY)

Lyryx with Open Texts: advancing learning
Champions of Access to Knowledge
OPEN TEXT

All digital forms of access to our high-quality open texts are entirely FREE! All content is reviewed for excellence and is wholly adaptable; custom editions are produced by Lyryx for those adopting Lyryx assessment. Access to the original source files is also open to anyone!

ONLINE ASSESSMENT

We have been developing superior online formative assessment for more than 15 years. Our questions are continuously adapted with the content and reviewed for quality and sound pedagogy. To enhance learning, students receive immediate personalized feedback. Student grade reports and performance statistics are also provided.
SUPPORT

Access to our in-house support team is available 7 days/week to provide prompt resolution to both student and instructor inquiries. In addition, we work one-on-one with instructors to provide a comprehensive system, customized for their course. This can include adapting the text, managing multiple sections, and more!

INSTRUCTOR SUPPLEMENTS

Additional instructor resources are also freely accessible. Product dependent, these supplements include: full sets of adaptable slides and lecture notes, solutions manuals, and multiple choice question banks with an exam building tool.

Contact Lyryx Today!
info@lyryx.com
A First Course in Linear Algebra: an Open Text
BE A CHAMPION OF OER!

Contribute suggestions for improvements, new content, or errata:

• A new topic
• A new example
• An interesting new question
• A new or better proof to an existing theorem
• Any other suggestions to improve the material

Contact Lyryx at info@lyryx.com with your ideas.
CONTRIBUTIONS
Ilijas Farah, York University
Ken Kuttler, Brigham Young University
Lyryx Learning Team
Bruce Bauslaugh
Peter Chow
Nathan Friess
Stephanie Keyowski
Claude Laflamme
Martha Laflamme
Jennifer MacKenzie
Tamsyn Murnaghan
Bogdan Sava
Larissa Stone
Ryan Yee
Ehsun Zahedi
LICENSE
Creative Commons License (CC BY): This text, including the art and illustrations, is available under the Creative Commons license (CC BY), allowing anyone to reuse, revise, remix, and redistribute the text. To view a copy of this license, visit https://creativecommons.org/licenses/by/4.0/
Base Text Revision History
Current Revision: Version 2017 — Revision A
Extensive edits, additions, and revisions have been completed by the editorial staff at Lyryx Learning.
All new content (text and images) is released under the same license as noted above.
2017 A
• Lyryx: Front matter has been updated, including cover, copyright, and revision pages.
• I. Farah: Contributed edits and revisions, particularly the proofs in the Properties of Determinants II: Some Important Proofs section.
2016 B
• Lyryx: The text has been updated with the addition of subsections on Resistor Networks and the
Matrix Exponential based on original material by K. Kuttler.
• Lyryx: New example 7.35 on Random Walks developed.
2016 A
• Lyryx: The layout and appearance of the text has been updated, including the title page and newly designed back cover.
2015 A
• Lyryx: The content was modified and adapted with the addition of new material and several images throughout.
• Lyryx: Additional examples and proofs were added to existing material throughout.
2012 A
• Original text by K. Kuttler of Brigham Young University. That version is used under Creative Commons license CC BY (https://creativecommons.org/licenses/by/3.0/), made possible by funding from The Saylor Foundation’s Open Textbook Challenge. See Elementary Linear Algebra (https://www.saylor.org/site/wp-content/uploads/2012/02/Elementary-Linear-Algebra-1-30-11-Kuttler-OTC.pdf) for more information and the original version.
Contents
Preface

1 Systems of Equations
    1.1 Systems of Equations, Geometry
    1.2 Systems Of Equations, Algebraic Procedures
        1.2.1 Elementary Operations
        1.2.2 Gaussian Elimination
        1.2.3 Uniqueness of the Reduced Row-Echelon Form
        1.2.4 Rank and Homogeneous Systems
        1.2.5 Balancing Chemical Reactions
        1.2.6 Dimensionless Variables
        1.2.7 An Application to Resistor Networks

2 Matrices
    2.1 Matrix Arithmetic
        2.1.1 Addition of Matrices
        2.1.2 Scalar Multiplication of Matrices
        2.1.3 Multiplication of Matrices
        2.1.4 The ijth Entry of a Product
        2.1.5 Properties of Matrix Multiplication
        2.1.6 The Transpose
        2.1.7 The Identity and Inverses
        2.1.8 Finding the Inverse of a Matrix
        2.1.9 Elementary Matrices
        2.1.10 More on Matrix Inverses
    2.2 LU Factorization
        2.2.1 Finding An LU Factorization By Inspection
        2.2.2 LU Factorization, Multiplier Method
        2.2.3 Solving Systems using LU Factorization
        2.2.4 Justification for the Multiplier Method

3 Determinants
    3.1 Basic Techniques and Properties
        3.1.1 Cofactors and 2×2 Determinants
        3.1.2 The Determinant of a Triangular Matrix
        3.1.3 Properties of Determinants I: Examples
        3.1.4 Properties of Determinants II: Some Important Proofs
        3.1.5 Finding Determinants using Row Operations
    3.2 Applications of the Determinant
        3.2.1 A Formula for the Inverse
        3.2.2 Cramer’s Rule
        3.2.3 Polynomial Interpolation

4 R^n
    4.1 Vectors in R^n
    4.2 Algebra in R^n
        4.2.1 Addition of Vectors in R^n
        4.2.2 Scalar Multiplication of Vectors in R^n
    4.3 Geometric Meaning of Vector Addition
    4.4 Length of a Vector
    4.5 Geometric Meaning of Scalar Multiplication
    4.6 Parametric Lines
    4.7 The Dot Product
        4.7.1 The Dot Product
        4.7.2 The Geometric Significance of the Dot Product
        4.7.3 Projections
    4.8 Planes in R^n
    4.9 The Cross Product
        4.9.1 The Box Product
    4.10 Spanning, Linear Independence and Basis in R^n
        4.10.1 Spanning Set of Vectors
        4.10.2 Linearly Independent Set of Vectors
        4.10.3 A Short Application to Chemistry
        4.10.4 Subspaces and Basis
        4.10.5 Row Space, Column Space, and Null Space of a Matrix
    4.11 Orthogonality and the Gram-Schmidt Process
        4.11.1 Orthogonal and Orthonormal Sets
        4.11.2 Orthogonal Matrices
        4.11.3 Gram-Schmidt Process
        4.11.4 Orthogonal Projections
        4.11.5 Least Squares Approximation
    4.12 Applications
        4.12.1 Vectors and Physics
        4.12.2 Work

5 Linear Transformations
    5.1 Linear Transformations
    5.2 The Matrix of a Linear Transformation I
    5.3 Properties of Linear Transformations
    5.4 Special Linear Transformations in R^2
    5.5 One to One and Onto Transformations
    5.6 Isomorphisms
    5.7 The Kernel And Image Of A Linear Map
    5.8 The Matrix of a Linear Transformation II
    5.9 The General Solution of a Linear System

6 Complex Numbers
    6.1 Complex Numbers
    6.2 Polar Form
    6.3 Roots of Complex Numbers
    6.4 The Quadratic Formula

7 Spectral Theory
    7.1 Eigenvalues and Eigenvectors of a Matrix
        7.1.1 Definition of Eigenvectors and Eigenvalues
        7.1.2 Finding Eigenvectors and Eigenvalues
        7.1.3 Eigenvalues and Eigenvectors for Special Types of Matrices
    7.2 Diagonalization
        7.2.1 Similarity and Diagonalization
        7.2.2 Diagonalizing a Matrix
        7.2.3 Complex Eigenvalues
    7.3 Applications of Spectral Theory
        7.3.1 Raising a Matrix to a High Power
        7.3.2 Raising a Symmetric Matrix to a High Power
        7.3.3 Markov Matrices
            7.3.3.1 Eigenvalues of Markov Matrices
        7.3.4 Dynamical Systems
        7.3.5 The Matrix Exponential
    7.4 Orthogonality
        7.4.1 Orthogonal Diagonalization
        7.4.2 The Singular Value Decomposition
        7.4.3 Positive Definite Matrices
            7.4.3.1 The Cholesky Factorization
        7.4.4 QR Factorization
            7.4.4.1 The QR Factorization and Eigenvalues
            7.4.4.2 Power Methods
        7.4.5 Quadratic Forms

8 Some Curvilinear Coordinate Systems
    8.1 Polar Coordinates and Polar Graphs
    8.2 Spherical and Cylindrical Coordinates

9 Vector Spaces
    9.1 Algebraic Considerations
    9.2 Spanning Sets
    9.3 Linear Independence
    9.4 Subspaces and Basis
    9.5 Sums and Intersections
    9.6 Linear Transformations
    9.7 Isomorphisms
        9.7.1 One to One and Onto Transformations
        9.7.2 Isomorphisms
    9.8 The Kernel And Image Of A Linear Map
    9.9 The Matrix of a Linear Transformation

A Some Prerequisite Topics
    A.1 Sets and Set Notation
    A.2 Well Ordering and Induction

B Selected Exercise Answers

Index
Preface
A First Course in Linear Algebra presents an introduction to the fascinating subject of linear algebra for students who have a reasonable understanding of basic algebra. Major topics of linear algebra are presented in detail, with proofs of important theorems provided. Separate sections may be included in which proofs are examined in further depth; in general these can be excluded without loss of continuity. Where possible, applications of key concepts are explored. In an effort to assist those students who are interested in continuing on in linear algebra, connections to additional topics covered in advanced courses are introduced.
Each chapter begins with a list of desired outcomes which a student should be able to achieve upon
completing the chapter. Throughout the text, examples and diagrams are given to reinforce ideas and
provide guidance on how to approach various problems. Students are encouraged to work through the
suggested exercises provided at the end of each section. Selected solutions to these exercises are given at
the end of the text.
As this is an open text, you are encouraged to interact with the textbook through annotating, revising,
and reusing to your advantage.
1. Systems of Equations
1.1 Systems of Equations, Geometry
Outcomes
A. Relate the types of solution sets of a system of two (three) variables to the intersections of lines in a plane (the intersections of planes in three-space).
As you may remember, linear equations like 2x + 3y = 6 can be graphed as straight lines in the coordinate plane. We say that this equation is in two variables, in this case x and y. Suppose you have two such
equations, each of which can be graphed as a straight line, and consider the resulting graph of two lines.
What would it mean if there exists a point of intersection between the two lines? This point, which lies on
both graphs, gives x and y values for which both equations are true. In other words, this point gives the
ordered pair (x,y) that satisfies both equations. If the point (x,y) is a point of intersection, we say that (x,y) is a solution to the two equations. In linear algebra, we are often concerned with finding the solution(s)
to a system of equations, if such solutions exist. First, we consider graphical representations of solutions
and later we will consider the algebraic methods for finding solutions.
When looking for the intersection of two lines in a graph, several situations may arise. The following picture demonstrates the possible situations when considering two equations (two lines in the graph)
involving two variables.
[Three diagrams of pairs of lines in the xy-plane: lines crossing at a single point (One Solution), parallel lines (No Solutions), and coincident lines (Infinitely Many Solutions).]
In the first diagram, there is a unique point of intersection, which means that there is only one (unique)
solution to the two equations. In the second, there are no points of intersection and no solution. When no
solution exists, this means that the two lines are parallel and they never intersect. The third situation which
can occur, as demonstrated in diagram three, is that the two lines are really the same line. For example,
x + y = 1 and 2x + 2y = 2 are equations which, when graphed, yield the same line. In this case there are infinitely many points which are solutions of these two equations, as every ordered pair which is on the
graph of the line satisfies both equations. When considering linear systems of equations, there are always
three types of solutions possible: exactly one (unique) solution, infinitely many solutions, or no solution.
Example 1.1: A Graphical Solution
Use a graph to find the solution to the following system of equations
x + y = 3
y − x = 5
Solution. Through graphing the above equations and identifying the point of intersection, we can find the
solution(s). Remember that we must have either one solution, infinitely many, or no solutions at all. The
following graph shows the two equations, as well as the intersection. Remember, the point of intersection
represents the solution of the two equations, or the (x,y) which satisfies both equations. In this case, there is one point of intersection at (−1,4), which means we have one unique solution, x = −1, y = 4.
[Graph: the lines x + y = 3 and y − x = 5 in the xy-plane, intersecting at the point (x,y) = (−1,4).]
♠
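For readers who like to verify such graphical results numerically, the following minimal sketch (our own aside, using Python and NumPy rather than anything prescribed by the text) solves the system of Example 1.1.

```python
# Solve the system of Example 1.1 numerically:  x + y = 3,  y - x = 5.
import numpy as np

A = np.array([[1.0, 1.0],    # coefficients in x + y = 3
              [-1.0, 1.0]])  # coefficients in -x + y = 5 (i.e., y - x = 5)
b = np.array([3.0, 5.0])

print(np.linalg.solve(A, b))  # [-1.  4.], matching the intersection point (-1, 4)
```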
In the above example, we investigated the intersection point of two equations in two variables, x and
y. Now we will consider the graphical solutions of three equations in two variables.
Consider a system of three equations in two variables. Again, these equations can be graphed as
straight lines in the plane, so that the resulting graph contains three straight lines. Recall the three possible
types of solutions; no solution, one solution, and infinitely many solutions. There are now more complex
ways of achieving these situations, due to the presence of the third line. For example, you can imagine
the case of three intersecting lines having no common point of intersection. Perhaps you can also imagine
three intersecting lines which do intersect at a single point. These two situations are illustrated below.
[Two diagrams of three lines in the xy-plane: three lines with no common point of intersection (No Solution), and three lines meeting at a single point (One Solution).]
Consider the first picture above. While all three lines intersect with one another, there is no common
point of intersection where all three lines meet at one point. Hence, there is no solution to the three
equations. Remember, a solution is a point (x,y) which satisfies all three equations. In the case of the second picture, the lines intersect at a common point. This means that there is one solution to the three
equations whose graphs are the given lines. You should take a moment now to draw the graph of a system
which results in three parallel lines. Next, try the graph of three identical lines. Which type of solution is
represented in each of these graphs?
We have now considered the graphical solutions of systems of two equations in two variables, as well
as three equations in two variables. However, there is no reason to limit our investigation to equations in
two variables. We will now consider equations in three variables.
You may recall that equations in three variables, such as 2x + 4y − 5z = 8, form a plane. Above, we were looking for intersections of lines in order to identify any possible solutions. When graphically solving
systems of equations in three variables, we look for intersections of planes. These points of intersection
give the (x,y,z) that satisfy all the equations in the system. What types of solutions are possible when working with three variables? Consider the following picture involving two planes, which are given by
two equations in three variables.
Notice how these two planes intersect in a line. This means that the points (x,y,z) on this line satisfy both equations in the system. Since the line contains infinitely many points, this system has infinitely
many solutions.
It could also happen that the two planes fail to intersect. However, is it possible to have two planes
intersect at a single point? Take a moment to attempt drawing this situation, and convince yourself that it
is not possible! This means that when we have only two equations in three variables, there is no way to
have a unique solution! Hence, the types of solutions possible for two equations in three variables are no
solution or infinitely many solutions.
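The matrix rank ideas that make this precise are developed later in the text, but the fact can be previewed computationally. In the sketch below, the second plane's equation is made up purely for illustration; the point is that a 2 × 3 coefficient matrix can never have rank 3, so two equations can never determine all three variables uniquely.

```python
# Two equations in three variables: the coefficient matrix is 2 x 3, so its
# rank is at most 2 -- fewer than the 3 variables, hence no unique solution.
import numpy as np

A = np.array([[2.0, 4.0, -5.0],   # 2x + 4y - 5z = 8 (the plane mentioned above)
              [1.0, -1.0, 3.0]])  # x - y + 3z = 2 (a hypothetical second plane)
print(np.linalg.matrix_rank(A))   # 2, which is less than 3
```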
Now imagine adding a third plane. In other words, consider three equations in three variables. What
types of solutions are now possible? Consider the following diagram.
[Figure: three planes with no point common to all three; the arrow labeled "New Plane" marks the added third plane.]
In this diagram, there is no point which lies in all three planes. There is no intersection between all
planes so there is no solution. The picture illustrates the situation in which the line of intersection of the
new plane with one of the original planes forms a line parallel to the line of intersection of the first two
planes. However, in three dimensions, it is possible for two lines to fail to intersect even though they are
not parallel. Such lines are called skew lines.
Recall that when working with two equations in three variables, it was not possible to have a unique
solution. Is it possible when considering three equations in three variables? In fact, it is possible, and we
demonstrate this situation in the following picture.
[Figure: three planes intersecting in a single point; the arrow labeled "New Plane" marks the added third plane.]
In this case, the three planes have a single point of intersection. Can you think of other types of
solutions possible? Another is that the three planes could intersect in a line, resulting in infinitely many
solutions, as in the following diagram.
We have now seen how three equations in three variables can have no solution, a unique solution, or
intersect in a line resulting in infinitely many solutions. It is also possible that the three equations graph
the same plane, which also leads to infinitely many solutions.
You can see that when working with equations in three variables, there are many more ways to achieve
the different types of solutions than when working with two variables. It may prove enlightening to spend
time imagining (and drawing) many possible scenarios, and you should take some time to try a few.
You should also take some time to imagine (and draw) graphs of systems in more than three variables.
Equations like x + y − 2z + 4w = 8 with more than three variables are often called hyperplanes. You may soon realize that it is tricky to draw the graphs of hyperplanes! Through the tools of linear algebra, we
can algebraically examine these types of systems which are difficult to graph. In the following section, we
will consider these algebraic tools.
Exercises
Exercise 1.1.1 Graphically, find the point (x_1,y_1) which lies on both lines, x + 3y = 1 and 4x − y = 3. That is, graph each line and see where they intersect.

Exercise 1.1.2 Graphically, find the point of intersection of the two lines 3x + y = 3 and x + 2y = 1. That is, graph each line and see where they intersect.
Exercise 1.1.3 You have a system of k equations in two variables, k ≥ 2. Explain the geometric significance of
(a) No solution.
(b) A unique solution.
(c) An infinite number of solutions.
1.2 Systems Of Equations, Algebraic Procedures
Outcomes
A. Use elementary operations to find the solution to a linear system of equations.
B. Find the row-echelon form and reduced row-echelon form of a matrix.
C. Determine whether a system of linear equations has no solution, a unique solution or an
infinite number of solutions from its row-echelon form.
D. Solve a system of equations using Gaussian Elimination and Gauss-Jordan Elimination.
E. Model a physical system with linear equations and then solve.
We have taken an in-depth look at graphical representations of systems of equations, as well as how to
find possible solutions graphically. Our attention now turns to working with systems algebraically.
Definition 1.2: System of Linear Equations
A system of linear equations is a list of equations,
a_{11}x_1 + a_{12}x_2 + · · · + a_{1n}x_n = b_1
a_{21}x_1 + a_{22}x_2 + · · · + a_{2n}x_n = b_2
...
a_{m1}x_1 + a_{m2}x_2 + · · · + a_{mn}x_n = b_m

where the a_{ij} and b_j are real numbers. The above is a system of m equations in the n variables x_1, x_2, · · · , x_n. Written more simply in terms of summation notation, the above can be written in the form

∑_{j=1}^{n} a_{ij}x_j = b_i,   i = 1, 2, 3, · · · , m
The relative size of m and n is not important here. Notice that we have allowed a_{ij} and b_j to be any real number. We can also call these numbers scalars. We will use this term throughout the text, so keep in mind that the term scalar just means that we are working with real numbers.
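As a computational aside, the summation form of Definition 1.2 translates directly into arrays. The sketch below (our own illustration in Python/NumPy; the coefficients are borrowed from Example 1.9 later in this chapter) checks whether a candidate tuple satisfies every equation at once.

```python
# The system  sum_j a_ij * x_j = b_i  as arrays: one row of A per equation.
import numpy as np

A = np.array([[1.0, 3.0, 6.0],
              [2.0, 7.0, 14.0],
              [0.0, 2.0, 5.0]])
b = np.array([25.0, 58.0, 19.0])

x = np.array([1.0, 2.0, 3.0])   # candidate solution (x_1, x_2, x_3)

# A @ x computes sum_j a_ij * x_j for every i in one step.
print(np.allclose(A @ x, b))    # True: the candidate solves all three equations
```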
Now, suppose we have a system where b_i = 0 for all i. In other words, every equation equals 0. This is a special type of system.
Definition 1.3: Homogeneous System of Equations
A system of equations is called homogeneous if each equation in the system is equal to 0. A
homogeneous system has the form
a_{11}x_1 + a_{12}x_2 + · · · + a_{1n}x_n = 0
a_{21}x_1 + a_{22}x_2 + · · · + a_{2n}x_n = 0
...
a_{m1}x_1 + a_{m2}x_2 + · · · + a_{mn}x_n = 0

where the a_{ij} are scalars and the x_i are variables.
Recall from the previous section that our goal when working with systems of linear equations was to
find the point of intersection of the equations when graphed. In other words, we looked for the solutions to
the system. We now wish to find these solutions algebraically. We want to find values for x_1, · · · , x_n which solve all of the equations. If such a set of values exists, we call (x_1, · · · , x_n) the solution set.
Recall the above discussions about the types of solutions possible. We will see that systems of linear
equations will have one unique solution, infinitely many solutions, or no solution. Consider the following
definition.
Definition 1.4: Consistent and Inconsistent Systems
A system of linear equations is called consistent if there exists at least one solution. It is called
inconsistent if there is no solution.
If you think of each equation as a condition which must be satisfied by the variables, consistent would
mean there is some choice of variables which can satisfy all the conditions. Inconsistent would mean there
is no choice of the variables which can satisfy all of the conditions.
The following sections provide methods for determining if a system is consistent or inconsistent, and
finding solutions if they exist.
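As a preview of one such method: a standard criterion (developed later in the text through the notion of rank) says a system is consistent exactly when adjoining the constants to the coefficient matrix does not raise the rank. The sketch below is our own aside, not part of the original exposition.

```python
# Consistency test via rank: consistent iff rank(A) == rank([A | b]).
import numpy as np

def is_consistent(A, b):
    augmented = np.column_stack([A, b])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(augmented)

# x + y = 1 and 2x + 2y = 2 are the same line: consistent.
print(is_consistent(np.array([[1.0, 1.0], [2.0, 2.0]]), np.array([1.0, 2.0])))  # True

# x + y = 1 and x + y = 3 are parallel lines: inconsistent.
print(is_consistent(np.array([[1.0, 1.0], [1.0, 1.0]]), np.array([1.0, 3.0])))  # False
```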
1.2.1. Elementary Operations
We begin this section with an example. Recall from Example 1.1 that the solution to the given system was
(x,y) = (−1,4).
Example 1.5: Verifying an Ordered Pair is a Solution
Algebraically verify that (x,y) = (−1,4) is a solution to the following system of equations.
x + y = 3
y − x = 5
Solution. By graphing these two equations and identifying the point of intersection, we previously found
that (x,y) = (−1,4) is the unique solution. We can verify algebraically by substituting these values into the original equations, and ensuring that
the equations hold. First, we substitute the values into the first equation and check that it equals 3.
x + y = (−1) + (4) = 3

This equals 3 as needed, so we see that (−1,4) is a solution to the first equation. Substituting the values into the second equation yields
y − x = (4) − (−1) = 4 + 1 = 5

which is true. For (x,y) = (−1,4) each equation is true; therefore, this is a solution to the system. ♠
Now, the interesting question is this: If you were not given these numbers to verify, how could you
algebraically determine the solution? Linear algebra gives us the tools needed to answer this question.
The following basic operations are important tools that we will utilize.
Definition 1.6: Elementary Operations
Elementary operations are those operations consisting of the following.
1. Interchange the order in which the equations are listed.
2. Multiply any equation by a nonzero number.
3. Replace any equation with itself added to a multiple of another equation.
It is important to note that none of these operations will change the set of solutions of the system of
equations. In fact, elementary operations are the key tool we use in linear algebra to find solutions to
systems of equations.
Consider the following example.
Example 1.7: Effects of an Elementary Operation
Show that the system

x + y = 7
2x − y = 8

has the same solution as the system

x + y = 7
−3y = −6
Solution. Notice that the second system has been obtained by taking the second equation of the first system
and adding -2 times the first equation, as follows:
2x − y + (−2)(x + y) = 8 + (−2)(7)
By simplifying, we obtain
−3y = −6

which is the second equation in the second system. Now, from here we can solve for y and see that y = 2. Next, we substitute this value into the first equation as follows
x + y = x + 2 = 7
Hence x = 5 and so (x,y) = (5,2) is a solution to the second system. We want to check if (5,2) is also a solution to the first system. We check this by substituting (x,y) = (5,2) into the system and ensuring the equations are true.
x + y = (5) + (2) = 7
2x − y = 2(5) − (2) = 8
Hence, (5,2) is also a solution to the first system. ♠
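The same check is easy to script. A minimal sketch (our own, in Python):

```python
# Verify that (5, 2) satisfies both systems from Example 1.7.
x, y = 5, 2
print(x + y == 7, 2 * x - y == 8)  # original system: True True
print(x + y == 7, -3 * y == -6)    # transformed system: True True
```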
This example illustrates how an elementary operation applied to a system of two equations in two
variables does not affect the solution set. However, a linear system may involve many equations and many
variables and there is no reason to limit our study to small systems. For any size of system in any number
of variables, the solution set is still the collection of solutions to the equations. In every case, the above
operations of Definition 1.6 do not change the set of solutions to the system of linear equations.
In the following theorem, we use the notation E_i to represent an equation, while b_i denotes a constant.
Theorem 1.8: Elementary Operations and Solutions
Suppose you have a system of two linear equations

E_1 = b_1
E_2 = b_2     (1.1)

Then the following systems have the same solution set as 1.1:

1. E_2 = b_2
   E_1 = b_1     (1.2)

2. E_1 = b_1
   kE_2 = kb_2     (1.3)

   for any scalar k, provided k ≠ 0.

3. E_1 = b_1
   E_2 + kE_1 = b_2 + kb_1     (1.4)

   for any scalar k (including k = 0).
Before we proceed with the proof of Theorem 1.8, let us consider this theorem in the context of Example 1.7. Then,

E_1 = x + y, b_1 = 7
E_2 = 2x − y, b_2 = 8

Recall the elementary operations that we used to modify the system in the solution to the example. First, we added (−2) times the first equation to the second equation. In terms of Theorem 1.8, this action is given by

E_2 + (−2)E_1 = b_2 + (−2)b_1

or

2x − y + (−2)(x + y) = 8 + (−2)7

This gave us the second system in Example 1.7, given by

E_1 = b_1
E_2 + (−2)E_1 = b_2 + (−2)b_1
From this point, we were able to find the solution to the system. Theorem 1.8 tells us that the solution
we found is in fact a solution to the original system.
We will now prove Theorem 1.8.
Proof.
1. The proof that the systems 1.1 and 1.2 have the same solution set is as follows. Suppose that (x_1, · · · , x_n) is a solution to E_1 = b_1, E_2 = b_2. We want to show that this is a solution to the system in 1.2 above. This is clear, because the system in 1.2 is the original system, but listed in a different order. Changing the order does not affect the solution set, so (x_1, · · · , x_n) is a solution to 1.2.
2. Next we want to prove that the systems 1.1 and 1.3 have the same solution set. That is, E_1 = b_1, E_2 = b_2 has the same solution set as the system E_1 = b_1, kE_2 = kb_2 provided k ≠ 0. Let (x_1, · · · , x_n) be a solution of E_1 = b_1, E_2 = b_2. We want to show that it is a solution to E_1 = b_1, kE_2 = kb_2. Notice that the only difference between these two systems is that the second involves multiplying the equation E_2 = b_2 by the scalar k. Recall that when you multiply both sides of an equation by the same number, the sides are still equal to each other. Hence if (x_1, · · · , x_n) is a solution to E_2 = b_2, then it will also be a solution to kE_2 = kb_2. Hence, (x_1, · · · , x_n) is also a solution to 1.3.

Similarly, let (x_1, · · · , x_n) be a solution of E_1 = b_1, kE_2 = kb_2. Then we can multiply the equation kE_2 = kb_2 by the scalar 1/k, which is possible only because we have required that k ≠ 0. Just as above, this action preserves equality and we obtain the equation E_2 = b_2. Hence (x_1, · · · , x_n) is also a solution to E_1 = b_1, E_2 = b_2.
3. Finally, we will prove that the systems 1.1 and 1.4 have the same solution set. We will show that any solution of E_1 = b_1, E_2 = b_2 is also a solution of 1.4. Then, we will show that any solution of 1.4 is also a solution of E_1 = b_1, E_2 = b_2.

Let (x_1, · · · , x_n) be a solution to E_1 = b_1, E_2 = b_2. Then in particular it solves E_1 = b_1. Hence, it solves the first equation in 1.4. Similarly, it also solves E_2 = b_2. By our proof of 1.3, it also solves kE_1 = kb_1. Notice that if we add E_2 and kE_1, this is equal to b_2 + kb_1. Therefore, if (x_1, · · · , x_n) solves E_1 = b_1, E_2 = b_2 it must also solve E_2 + kE_1 = b_2 + kb_1.

Now suppose (x_1, · · · , x_n) solves the system E_1 = b_1, E_2 + kE_1 = b_2 + kb_1. Then in particular it is a solution of E_1 = b_1. Again by our proof of 1.3, it is also a solution to kE_1 = kb_1. Now if we subtract these equal quantities from both sides of E_2 + kE_1 = b_2 + kb_1 we obtain E_2 = b_2, which shows that the solution also satisfies E_1 = b_1, E_2 = b_2.
♠
Stated simply, the above theorem shows that the elementary operations do not change the solution set
of a system of equations.
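Theorem 1.8 can also be illustrated numerically. The sketch below (our own aside; it uses NumPy's solver as a shortcut rather than the hand methods of this section) applies the operation of part 3 to the system of Example 1.7 and confirms that the solution is unchanged.

```python
# Apply E2 -> E2 + k*E1 to  x + y = 7,  2x - y = 8  and compare solutions.
import numpy as np

A = np.array([[1.0, 1.0], [2.0, -1.0]])
b = np.array([7.0, 8.0])
k = -2.0

A2, b2 = A.copy(), b.copy()
A2[1] += k * A2[0]   # new second row of coefficients: E2 + k*E1
b2[1] += k * b2[0]   # new second constant: b2 + k*b1

print(np.linalg.solve(A, b))    # [5. 2.]
print(np.linalg.solve(A2, b2))  # [5. 2.] -- the same solution set
```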
We will now look at an example of a system of three equations and three variables. Similarly to the previous examples, the goal is to find values for x, y, z such that each of the given equations is satisfied when these values are substituted in.
Example 1.9: Solving a System of Equations with Elementary Operations
Find the solutions to the system,
x + 3y + 6z = 25
2x + 7y + 14z = 58
2y + 5z = 19     (1.5)
Solution. We can relate this system to Theorem 1.8 above. In this case, we have
E_1 = x + 3y + 6z, b_1 = 25
E_2 = 2x + 7y + 14z, b_2 = 58
E_3 = 2y + 5z, b_3 = 19
Theorem 1.8 claims that if we do elementary operations on this system, we will not change the solution
set. Therefore, we can solve this system using the elementary operations given in Definition 1.6. First,
replace the second equation by (−2) times the first equation added to the second. This yields the system
x + 3y + 6z = 25
y + 2z = 8
2y + 5z = 19     (1.6)
Now, replace the third equation with (−2) times the second added to the third. This yields the system
x + 3y + 6z = 25
y + 2z = 8
z = 3     (1.7)
At this point, we can easily find the solution. Simply take z = 3 and substitute this back into the previous equation to solve for y, and similarly to solve for x.
x + 3y + 6(3) = x + 3y + 18 = 25
y + 2(3) = y + 6 = 8
z = 3
The second equation is now
y + 6 = 8
You can see from this equation that y = 2. Therefore, we can substitute this value into the first equation as follows:
x + 3(2) + 18 = 25
By simplifying this equation, we find that x = 1. Hence, the solution to this system is (x,y,z) = (1,2,3). This process is called back substitution.
Alternatively, in 1.7 you could have continued as follows. Add (−2) times the third equation to the second and then add (−6) times the second to the first. This yields

x + 3y = 7
y = 2
z = 3
Now add (−3) times the second to the first. This yields

x = 1
y = 2
z = 3
a system which has the same solution set as the original system. This avoided back substitution and led
to the same solution set. It is your decision which you prefer to use, as both methods lead to the correct
solution, (x,y,z) = (1,2,3). ♠
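Back substitution is mechanical enough to automate. The following minimal sketch (our own, in Python/NumPy) solves the triangular system 1.7 from the last equation upward.

```python
# Back substitution for an upper-triangular system U x = c.
import numpy as np

U = np.array([[1.0, 3.0, 6.0],   # x + 3y + 6z = 25
              [0.0, 1.0, 2.0],   #     y + 2z = 8
              [0.0, 0.0, 1.0]])  #          z = 3
c = np.array([25.0, 8.0, 3.0])

def back_substitute(U, c):
    n = len(c)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                        # last equation first
        x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

print(back_substitute(U, c))  # [1. 2. 3.], i.e. (x, y, z) = (1, 2, 3)
```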
1.2.2. Gaussian Elimination
The work we did in the previous section will always find the solution to the system. In this section, we
will explore a less cumbersome way to find the solutions. First, we will represent a linear system with
an augmented matrix. A matrix is simply a rectangular array of numbers. The size or dimension of a
matrix is defined as m × n, where m is the number of rows and n is the number of columns. In order to construct an augmented matrix from a linear system, we create a coefficient matrix from the coefficients
of the variables in the system, as well as a constant matrix from the constants. The coefficients from one
equation of the system create one row of the augmented matrix.
For example, consider the linear system in Example 1.9
x + 3y + 6z = 25
2x + 7y + 14z = 58
2y + 5z = 19
This system can be written as an augmented matrix, as follows
[ 1 3 6 | 25 ]
[ 2 7 14 | 58 ]
[ 0 2 5 | 19 ]
Notice that it has exactly the same information as the original system. Here it is understood that the first column contains the coefficients from x in each equation, in order: 1, 2, 0. Similarly, we create a column from the coefficients on y in each equation, 3, 7, 2, and a column from the coefficients on z in each equation, 6, 14, 5. For a system of more than three variables, we would continue in this way, constructing a column for each variable. Similarly, for a system of fewer than three variables, we simply construct a column for each variable. Finally, we construct a column from the constants of the equations: 25, 58, 19.
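In code, this column-by-column construction might look as follows; the sketch is our own and the text itself prescribes no software.

```python
# Assemble the augmented matrix of Example 1.9 from its columns.
import numpy as np

x_col = np.array([1.0, 2.0, 0.0])        # coefficients of x, in order
y_col = np.array([3.0, 7.0, 2.0])        # coefficients of y
z_col = np.array([6.0, 14.0, 5.0])       # coefficients of z
constants = np.array([25.0, 58.0, 19.0]) # the constants

augmented = np.column_stack([x_col, y_col, z_col, constants])
print(augmented)
```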
The rows of the augmented matrix correspond to the equations in the system. For example, the top row in the augmented matrix, [ 1 3 6 | 25 ], corresponds to the equation

x + 3y + 6z = 25.
Consider the following definition.
Definition 1.10: Augmented Matrix of a Linear System
For a linear system of the form

a_{11}x_1 + · · · + a_{1n}x_n = b_1
...
a_{m1}x_1 + · · · + a_{mn}x_n = b_m

where the x_i are variables and the a_{ij} and b_i are constants, the augmented matrix of this system is given by

[ a_{11} · · · a_{1n} | b_1 ]
[   ...        ...    | ... ]
[ a_{m1} · · · a_{mn} | b_m ]
Now, consider elementary operations in the context of the augmented matrix. The elementary operations in Definition 1.6 can be used on the rows just as we used them on equations previously. Changes to a system of equations as a result of an elementary operation are equivalent to changes in the augmented
matrix resulting from the corresponding row operation. Note that Theorem 1.8 implies that any elementary
row operations used on an augmented matrix will not change the solution to the corresponding system of
equations. We now formally define elementary row operations. These are the key tool we will use to find
solutions to systems of equations.
Definition 1.11: Elementary Row Operations
The elementary row operations (also known as row operations) consist of the following
1. Switch two rows.
2. Multiply a row by a nonzero number.
3. Replace a row by any multiple of another row added to it.
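These three operations are straightforward to express in code. The sketch below is our own illustration (rows are indexed from 0, as is conventional in programming) acting on a NumPy augmented matrix.

```python
# The three elementary row operations of Definition 1.11.
import numpy as np

def switch_rows(M, i, j):
    M[[i, j]] = M[[j, i]]      # 1. switch two rows

def scale_row(M, i, k):
    assert k != 0              # 2. multiply a row by a nonzero number
    M[i] = k * M[i]

def add_multiple(M, i, j, k):
    M[i] = M[i] + k * M[j]     # 3. add k times row j to row i

M = np.array([[1.0, 3.0, 6.0, 25.0],
              [2.0, 7.0, 14.0, 58.0],
              [0.0, 2.0, 5.0, 19.0]])
add_multiple(M, 1, 0, -2.0)    # the first step in solving Example 1.9
print(M[1])                    # [0. 1. 2. 8.]
```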
Recall how we solved Example 1.9. We can do the exact same steps as above, except now in the
context of an augmented matrix and using row operations. The augmented matrix of this system is
[ 1 3 6 | 25 ]