
OPEN TEXT

All digital forms of access to our high-quality open texts are entirely FREE! All content is reviewed for excellence and is wholly adaptable; custom editions are produced by Lyryx for those adopting Lyryx assessment. Access to the original source files is also open to anyone!

ONLINE ASSESSMENT

We have been developing superior online formative assessment for more than 15 years. Our questions are continuously adapted with the content and reviewed for quality and sound pedagogy. To enhance learning, students receive immediate personalized feedback. Student grade reports and performance statistics are also provided.

SUPPORT

Access to our in-house support team is available 7 days/week to provide prompt resolution to both student and instructor inquiries. In addition, we work one-on-one with instructors to provide a comprehensive system, customized for their course. This can include adapting the text, managing multiple sections, and more!

INSTRUCTOR SUPPLEMENTS

Additional instructor resources are also freely accessible. Product dependent, these supplements include: full sets of adaptable slides and lecture notes, solutions manuals, and multiple choice question banks with an exam building tool.

Contact Lyryx Today!

info@lyryx.com

advancing learning

A First Course in Linear Algebra: an Open Text

BE A CHAMPION OF OER!

Contribute suggestions for improvements, new content, or errata:

A new topic

A new example

An interesting new question

A new or better proof to an existing theorem

Any other suggestions to improve the material

Contact Lyryx at info@lyryx.com with your ideas.

CONTRIBUTIONS

Ilijas Farah, York University

Ken Kuttler, Brigham Young University

Lyryx Learning Team

Bruce Bauslaugh

Peter Chow

Nathan Friess

Stephanie Keyowski

Claude Laflamme

Martha Laflamme

Jennifer MacKenzie

Tamsyn Murnaghan

Bogdan Sava

Larissa Stone

Ryan Yee

Ehsun Zahedi

LICENSE

Creative Commons License (CC BY): This text, including the art and illustrations, is available under
the Creative Commons license (CC BY), allowing anyone to reuse, revise, remix, and redistribute the text.

To view a copy of this license, visit https://creativecommons.org/licenses/by/4.0/


Base Text Revision History

Current Revision: Version 2017 — Revision A

Extensive edits, additions, and revisions have been completed by the editorial staff at Lyryx Learning.

All new content (text and images) is released under the same license as noted above.

2017 A

• Lyryx: Front matter has been updated including cover, copyright, and revision pages.

• I. Farah: Contributed edits and revisions, particularly the proofs in the Properties of Determinants II:
Some Important Proofs section.

2016 B

• Lyryx: The text has been updated with the addition of subsections on Resistor Networks and the

Matrix Exponential based on original material by K. Kuttler.

• Lyryx: New example 7.35 on Random Walks developed.

2016 A

• Lyryx: The layout and appearance of the text have been updated, including the title page and newly
designed back cover.

2015 A

• Lyryx: The content was modified and adapted with the addition of new material and several images
throughout.

• Lyryx: Additional examples and proofs were added to existing material throughout.

2012 A

• Original text by K. Kuttler of Brigham Young University. That version is used under Creative Commons
license CC BY (https://creativecommons.org/licenses/by/3.0/) made possible by funding from
The Saylor Foundation’s Open Textbook Challenge. See Elementary Linear Algebra for more information
and the original version.

Contents

Contents iii

Preface 1

1 Systems of Equations 3

1.1 Systems of Equations, Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

1.2 Systems Of Equations, Algebraic Procedures . . . . . . . . . . . . . . . . . . . . . . . . 7

1.2.1 Elementary Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

1.2.2 Gaussian Elimination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

1.2.3 Uniqueness of the Reduced Row-Echelon Form . . . . . . . . . . . . . . . . . . 25

1.2.4 Rank and Homogeneous Systems . . . . . . . . . . . . . . . . . . . . . . . . . . 28

1.2.5 Balancing Chemical Reactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

1.2.6 Dimensionless Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

1.2.7 An Application to Resistor Networks . . . . . . . . . . . . . . . . . . . . . . . . 38

2 Matrices 53

2.1 Matrix Arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

2.1.1 Addition of Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

2.1.2 Scalar Multiplication of Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . 57

2.1.3 Multiplication of Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

2.1.4 The ijth Entry of a Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

2.1.5 Properties of Matrix Multiplication . . . . . . . . . . . . . . . . . . . . . . . . . 67

2.1.6 The Transpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

2.1.7 The Identity and Inverses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

2.1.8 Finding the Inverse of a Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

2.1.9 Elementary Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79

2.1.10 More on Matrix Inverses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87

2.2 LU Factorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

2.2.1 Finding An LU Factorization By Inspection . . . . . . . . . . . . . . . . . . . . . 99

2.2.2 LU Factorization, Multiplier Method . . . . . . . . . . . . . . . . . . . . . . . . 100

2.2.3 Solving Systems using LU Factorization . . . . . . . . . . . . . . . . . . . . . . . 101

2.2.4 Justification for the Multiplier Method . . . . . . . . . . . . . . . . . . . . . . . . 102


3 Determinants 107

3.1 Basic Techniques and Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

3.1.1 Cofactors and 2×2 Determinants . . . . . . . . . . . . . . . . . . . . . . . . . . 107

3.1.2 The Determinant of a Triangular Matrix . . . . . . . . . . . . . . . . . . . . . . . 112

3.1.3 Properties of Determinants I: Examples . . . . . . . . . . . . . . . . . . . . . . . 114

3.1.4 Properties of Determinants II: Some Important Proofs . . . . . . . . . . . . . . . 118

3.1.5 Finding Determinants using Row Operations . . . . . . . . . . . . . . . . . . . . 123

3.2 Applications of the Determinant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130

3.2.1 A Formula for the Inverse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130

3.2.2 Cramer’s Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

3.2.3 Polynomial Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138

4 Rn 145

4.1 Vectors in Rn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145

4.2 Algebra in Rn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148

4.2.1 Addition of Vectors in Rn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148

4.2.2 Scalar Multiplication of Vectors in Rn . . . . . . . . . . . . . . . . . . . . . . . . 150

4.3 Geometric Meaning of Vector Addition . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

4.4 Length of a Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155

4.5 Geometric Meaning of Scalar Multiplication . . . . . . . . . . . . . . . . . . . . . . . . . 159

4.6 Parametric Lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161

4.7 The Dot Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166

4.7.1 The Dot Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167

4.7.2 The Geometric Significance of the Dot Product . . . . . . . . . . . . . . . . . . . 170

4.7.3 Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173

4.8 Planes in Rn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179

4.9 The Cross Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182

4.9.1 The Box Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188

4.10 Spanning, Linear Independence and Basis in Rn . . . . . . . . . . . . . . . . . . . . . . . 192

4.10.1 Spanning Set of Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192

4.10.2 Linearly Independent Set of Vectors . . . . . . . . . . . . . . . . . . . . . . . . . 194

4.10.3 A Short Application to Chemistry . . . . . . . . . . . . . . . . . . . . . . . . . . 200

4.10.4 Subspaces and Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201

4.10.5 Row Space, Column Space, and Null Space of a Matrix . . . . . . . . . . . . . . . 211

4.11 Orthogonality and the Gram Schmidt Process . . . . . . . . . . . . . . . . . . . . . . . . 232

4.11.1 Orthogonal and Orthonormal Sets . . . . . . . . . . . . . . . . . . . . . . . . . . 233

4.11.2 Orthogonal Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238


4.11.3 Gram-Schmidt Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241

4.11.4 Orthogonal Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244

4.11.5 Least Squares Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251

4.12 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261

4.12.1 Vectors and Physics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261

4.12.2 Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264

5 Linear Transformations 269

5.1 Linear Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269

5.2 The Matrix of a Linear Transformation I . . . . . . . . . . . . . . . . . . . . . . . . . . . 272

5.3 Properties of Linear Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281

5.4 Special Linear Transformations in R2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286

5.5 One to One and Onto Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292

5.6 Isomorphisms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298

5.7 The Kernel And Image Of A Linear Map . . . . . . . . . . . . . . . . . . . . . . . . . . . 310

5.8 The Matrix of a Linear Transformation II . . . . . . . . . . . . . . . . . . . . . . . . . . 315

5.9 The General Solution of a Linear System . . . . . . . . . . . . . . . . . . . . . . . . . . . 321

6 Complex Numbers 329

6.1 Complex Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329

6.2 Polar Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336

6.3 Roots of Complex Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339

6.4 The Quadratic Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343

7 Spectral Theory 347

7.1 Eigenvalues and Eigenvectors of a Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . 347

7.1.1 Definition of Eigenvectors and Eigenvalues . . . . . . . . . . . . . . . . . . . . . 347

7.1.2 Finding Eigenvectors and Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . 350

7.1.3 Eigenvalues and Eigenvectors for Special Types of Matrices . . . . . . . . . . . . 356

7.2 Diagonalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362

7.2.1 Similarity and Diagonalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362

7.2.2 Diagonalizing a Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364

7.2.3 Complex Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369

7.3 Applications of Spectral Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372

7.3.1 Raising a Matrix to a High Power . . . . . . . . . . . . . . . . . . . . . . . . . . 373

7.3.2 Raising a Symmetric Matrix to a High Power . . . . . . . . . . . . . . . . . . . . 375

7.3.3 Markov Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378

7.3.3.1 Eigenvalues of Markov Matrices . . . . . . . . . . . . . . . . . . . . . 384


7.3.4 Dynamical Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384

7.3.5 The Matrix Exponential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392

7.4 Orthogonality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401

7.4.1 Orthogonal Diagonalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401

7.4.2 The Singular Value Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . 409

7.4.3 Positive Definite Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417

7.4.3.1 The Cholesky Factorization . . . . . . . . . . . . . . . . . . . . . . . . 420

7.4.4 QR Factorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422

7.4.4.1 The QR Factorization and Eigenvalues . . . . . . . . . . . . . . . . . . 424

7.4.4.2 Power Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424

7.4.5 Quadratic Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427

8 Some Curvilinear Coordinate Systems 439

8.1 Polar Coordinates and Polar Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439

8.2 Spherical and Cylindrical Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . 449

9 Vector Spaces 455

9.1 Algebraic Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455

9.2 Spanning Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471

9.3 Linear Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475

9.4 Subspaces and Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483

9.5 Sums and Intersections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498

9.6 Linear Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499

9.7 Isomorphisms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505

9.7.1 One to One and Onto Transformations . . . . . . . . . . . . . . . . . . . . . . . . 505

9.7.2 Isomorphisms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509

9.8 The Kernel And Image Of A Linear Map . . . . . . . . . . . . . . . . . . . . . . . . . . . 518

9.9 The Matrix of a Linear Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524

A Some Prerequisite Topics 537

A.1 Sets and Set Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537

A.2 Well Ordering and Induction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539

B Selected Exercise Answers 543

Index 591

Preface

A First Course in Linear Algebra presents an introduction to the fascinating subject of linear algebra for
students who have a reasonable understanding of basic algebra. Major topics of linear algebra are presented
in detail, with proofs of important theorems provided. Separate sections may be included in which proofs
are examined in further depth; in general these can be excluded without loss of continuity.

Where possible, applications of key concepts are explored. In an effort to assist those students who are
interested in continuing on in linear algebra, connections to additional topics covered in advanced courses
are introduced.

Each chapter begins with a list of desired outcomes which a student should be able to achieve upon

completing the chapter. Throughout the text, examples and diagrams are given to reinforce ideas and

provide guidance on how to approach various problems. Students are encouraged to work through the

suggested exercises provided at the end of each section. Selected solutions to these exercises are given at

the end of the text.

As this is an open text, you are encouraged to interact with the textbook through annotating, revising,

and reusing to your advantage.


1. Systems of Equations

1.1 Systems of Equations, Geometry

Outcomes

A. Relate the types of solution sets of a system of two (three) variables to the intersections of

lines in a plane (the intersection of planes in three space)

As you may remember, linear equations like 2x+3y = 6 can be graphed as straight lines in the coordinate plane. We say that this equation is in two variables, in this case x and y. Suppose you have two such

equations, each of which can be graphed as a straight line, and consider the resulting graph of two lines.

What would it mean if there exists a point of intersection between the two lines? This point, which lies on

both graphs, gives x and y values for which both equations are true. In other words, this point gives the

ordered pair (x,y) that satisfies both equations. If the point (x,y) is a point of intersection, we say that (x,y) is a solution to the two equations. In linear algebra, we often are concerned with finding the solution(s)

to a system of equations, if such solutions exist. First, we consider graphical representations of solutions

and later we will consider the algebraic methods for finding solutions.

When looking for the intersection of two lines in a graph, several situations may arise. The follow-

ing picture demonstrates the possible situations when considering two equations (two lines in the graph)

involving two variables.

[Figure: three graphs of a pair of lines in the xy-plane — intersecting at one point (One Solution), parallel (No Solutions), and coincident (Infinitely Many Solutions).]

In the first diagram, there is a unique point of intersection, which means that there is only one (unique)

solution to the two equations. In the second, there are no points of intersection and no solution. When no

solution exists, this means that the two lines are parallel and they never intersect. The third situation which

can occur, as demonstrated in diagram three, is that the two lines are really the same line. For example,

x+ y = 1 and 2x+ 2y = 2 are equations which when graphed yield the same line. In this case there are infinitely many points which are solutions of these two equations, as every ordered pair which is on the

graph of the line satisfies both equations. When considering linear systems of equations, there are always
three types of solutions possible: exactly one (unique) solution, infinitely many solutions, or no solution.


Example 1.1: A Graphical Solution

Use a graph to find the solution to the following system of equations

x + y = 3
y − x = 5

Solution. Through graphing the above equations and identifying the point of intersection, we can find the

solution(s). Remember that we must have either one solution, infinitely many, or no solutions at all. The

following graph shows the two equations, as well as the intersection. Remember, the point of intersection

represents the solution of the two equations, or the (x,y) which satisfies both equations. In this case, there is one point of intersection at (−1,4), which means we have one unique solution, x = −1, y = 4.

[Figure: graphs of x + y = 3 and y − x = 5 in the xy-plane, intersecting at (x,y) = (−1,4).]
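The graphical answer can be double-checked numerically. The following is a minimal sketch (not part of the text) using NumPy; note that np.linalg.solve handles only the unique-solution case:

```python
import numpy as np

# Coefficient matrix and right-hand sides for x + y = 3 and -x + y = 5.
A = np.array([[1.0, 1.0],    #  x + y = 3
              [-1.0, 1.0]])  # -x + y = 5
b = np.array([3.0, 5.0])

# A unique intersection point exists, so solve() succeeds.
solution = np.linalg.solve(A, b)
print(solution)  # expect x = -1, y = 4
```

If the lines were parallel or coincident, the coefficient matrix would be singular and solve() would raise an error instead.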

In the above example, we investigated the intersection point of two equations in two variables, x and

y. Now we will consider the graphical solutions of three equations in two variables.

Consider a system of three equations in two variables. Again, these equations can be graphed as

straight lines in the plane, so that the resulting graph contains three straight lines. Recall the three possible

types of solutions; no solution, one solution, and infinitely many solutions. There are now more complex

ways of achieving these situations, due to the presence of the third line. For example, you can imagine

the case of three intersecting lines having no common point of intersection. Perhaps you can also imagine

three intersecting lines which do intersect at a single point. These two situations are illustrated below.

[Figure: two graphs of three lines in the xy-plane — three pairwise-intersecting lines with no common point (No Solution), and three lines meeting at a single point (One Solution).]


Consider the first picture above. While all three lines intersect with one another, there is no common

point of intersection where all three lines meet at one point. Hence, there is no solution to the three

equations. Remember, a solution is a point (x,y) which satisfies all three equations. In the case of the second picture, the lines intersect at a common point. This means that there is one solution to the three

equations whose graphs are the given lines. You should take a moment now to draw the graph of a system

which results in three parallel lines. Next, try the graph of three identical lines. Which type of solution is

represented in each of these graphs?

We have now considered the graphical solutions of systems of two equations in two variables, as well

as three equations in two variables. However, there is no reason to limit our investigation to equations in

two variables. We will now consider equations in three variables.

You may recall that equations in three variables, such as 2x+ 4y− 5z = 8, form a plane. Above, we were looking for intersections of lines in order to identify any possible solutions. When graphically solving

systems of equations in three variables, we look for intersections of planes. These points of intersection

give the (x,y,z) that satisfy all the equations in the system. What types of solutions are possible when working with three variables? Consider the following picture involving two planes, which are given by

two equations in three variables.

Notice how these two planes intersect in a line. This means that the points (x,y,z) on this line satisfy both equations in the system. Since the line contains infinitely many points, this system has infinitely

many solutions.
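A line of solutions like this can be verified directly. The two planes below are hypothetical stand-ins, not equations from the text; eliminating x by hand gives y = 1 + z/2 and x = 2 − 3z/2, one solution for every value of z:

```python
# Hypothetical planes: x + y + z = 3 and x - y + 2z = 1.
# Back-substitution gives a parametric line of solutions: one point per z.
def point_on_line(t):
    # (x, y, z) for the free parameter t = z
    return (2.0 - 1.5 * t, 1.0 + 0.5 * t, t)

# Any choice of the parameter yields a point lying on BOTH planes.
for t in (-2.0, 0.0, 5.0):
    x, y, z = point_on_line(t)
    assert abs(x + y + z - 3.0) < 1e-9   # first plane
    assert abs(x - y + 2 * z - 1.0) < 1e-9  # second plane
```

Since every value of the parameter works, the solution set is an entire line, i.e. infinitely many solutions.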

It could also happen that the two planes fail to intersect. However, is it possible to have two planes

intersect at a single point? Take a moment to attempt drawing this situation, and convince yourself that it

is not possible! This means that when we have only two equations in three variables, there is no way to

have a unique solution! Hence, the types of solutions possible for two equations in three variables are no

solution or infinitely many solutions.

Now imagine adding a third plane. In other words, consider three equations in three variables. What

types of solutions are now possible? Consider the following diagram.

[Figure: a third plane (labelled "New Plane") added to two intersecting planes, with no point common to all three.]

In this diagram, there is no point which lies in all three planes. There is no intersection between all


planes so there is no solution. The picture illustrates the situation in which the line of intersection of the

new plane with one of the original planes forms a line parallel to the line of intersection of the first two

planes. However, in three dimensions, it is possible for two lines to fail to intersect even though they are

not parallel. Such lines are called skew lines.

Recall that when working with two equations in three variables, it was not possible to have a unique

solution. Is it possible when considering three equations in three variables? In fact, it is possible, and we

demonstrate this situation in the following picture.

[Figure: three planes (one labelled "New Plane") meeting at a single point.]

In this case, the three planes have a single point of intersection. Can you think of other types of

solutions possible? Another is that the three planes could intersect in a line, resulting in infinitely many

solutions, as in the following diagram.

We have now seen how three equations in three variables can have no solution, a unique solution, or

intersect in a line resulting in infinitely many solutions. It is also possible that the three equations graph

the same plane, which also leads to infinitely many solutions.

You can see that when working with equations in three variables, there are many more ways to achieve

the different types of solutions than when working with two variables. It may prove enlightening to spend

time imagining (and drawing) many possible scenarios, and you should take some time to try a few.

You should also take some time to imagine (and draw) graphs of systems in more than three variables.

Equations like x+y−2z+4w = 8 with more than three variables are often called hyperplanes. You may soon realize that it is tricky to draw the graphs of hyperplanes! Through the tools of linear algebra, we

can algebraically examine these types of systems which are difficult to graph. In the following section, we

will consider these algebraic tools.


Exercises

Exercise 1.1.1 Graphically, find the point (x1,y1) which lies on both lines, x+ 3y = 1 and 4x− y = 3. That is, graph each line and see where they intersect.

Exercise 1.1.2 Graphically, find the point of intersection of the two lines 3x+ y = 3 and x+2y = 1. That is, graph each line and see where they intersect.

Exercise 1.1.3 You have a system of k equations in two variables, k ≥ 2. Explain the geometric signifi- cance of

(a) No solution.

(b) A unique solution.

(c) An infinite number of solutions.

1.2 Systems Of Equations, Algebraic Procedures

Outcomes

A. Use elementary operations to find the solution to a linear system of equations.

B. Find the row-echelon form and reduced row-echelon form of a matrix.

C. Determine whether a system of linear equations has no solution, a unique solution or an

infinite number of solutions from its row-echelon form.

D. Solve a system of equations using Gaussian Elimination and Gauss-Jordan Elimination.

E. Model a physical system with linear equations and then solve.

We have taken an in depth look at graphical representations of systems of equations, as well as how to

find possible solutions graphically. Our attention now turns to working with systems algebraically.


Definition 1.2: System of Linear Equations

A system of linear equations is a list of equations,

a11 x1 + a12 x2 + ··· + a1n xn = b1
a21 x1 + a22 x2 + ··· + a2n xn = b2
...
am1 x1 + am2 x2 + ··· + amn xn = bm

where the aij and bj are real numbers. The above is a system of m equations in the n variables
x1, x2, ··· , xn. Written more simply in terms of summation notation, the above can be written in the form

∑_{j=1}^{n} aij xj = bi,  i = 1, 2, 3, ··· , m

The relative size of m and n is not important here. Notice that we have allowed aij and bj to be any
real number. We can also call these numbers scalars. We will use this term throughout the text, so keep
in mind that the term scalar just means that we are working with real numbers.
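As a concrete check of the summation form, the sketch below evaluates ∑_j aij xj for each equation of a small system; the coefficients and the candidate point are invented for illustration, not taken from the text:

```python
import numpy as np

# A hypothetical 2x3 system:
#   x1 + 2*x2 -   x3 = 5
#        3*x2 + 4*x3 = 6
a = np.array([[1.0, 2.0, -1.0],
              [0.0, 3.0,  4.0]])
b = np.array([5.0, 6.0])

x = np.array([1.0, 2.0, 0.0])  # one candidate point (x1, x2, x3)

# Evaluate sum over j of a_ij * x_j for each equation i,
# exactly as in the summation notation of Definition 1.2.
lhs = np.array([sum(a[i, j] * x[j] for j in range(3)) for i in range(2)])
print(lhs)  # compare with b: equal entries mean x satisfies every equation
```

Here lhs equals b, so this particular point solves the system; a mismatch in any entry would mean that equation fails.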

Now, suppose we have a system where bi = 0 for all i. In other words every equation equals 0. This is a special type of system.

Definition 1.3: Homogeneous System of Equations

A system of equations is called homogeneous if each equation in the system is equal to 0. A

homogeneous system has the form

a11 x1 + a12 x2 + ··· + a1n xn = 0
a21 x1 + a22 x2 + ··· + a2n xn = 0
...
am1 x1 + am2 x2 + ··· + amn xn = 0

where aij are scalars and xi are variables.

Recall from the previous section that our goal when working with systems of linear equations was to

find the point of intersection of the equations when graphed. In other words, we looked for the solutions to

the system. We now wish to find these solutions algebraically. We want to find values for x1, · · · ,xn which solve all of the equations. If such a set of values exists, we call (x1, · · · ,xn) the solution set.

Recall the above discussions about the types of solutions possible. We will see that systems of linear

equations will have one unique solution, infinitely many solutions, or no solution. Consider the following

definition.

Definition 1.4: Consistent and Inconsistent Systems

A system of linear equations is called consistent if there exists at least one solution. It is called

inconsistent if there is no solution.


If you think of each equation as a condition which must be satisfied by the variables, consistent would

mean there is some choice of variables which can satisfy all the conditions. Inconsistent would mean there

is no choice of the variables which can satisfy all of the conditions.

The following sections provide methods for determining if a system is consistent or inconsistent, and

finding solutions if they exist.
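One way to make the consistent/inconsistent distinction computable is the rank test; rank is developed later in the text (Section 1.2.4), so treat this NumPy sketch as a preview under that assumption, not the method of this section:

```python
import numpy as np

def classify(A, b):
    # Compare the rank of the coefficient matrix with the rank of the
    # augmented matrix [A | b] (rank criterion, previewed from later sections).
    rank_A = np.linalg.matrix_rank(A)
    rank_aug = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rank_A < rank_aug:
        return "no solution"          # inconsistent
    if rank_A == A.shape[1]:
        return "unique solution"      # consistent, rank equals variable count
    return "infinitely many solutions"  # consistent, with free variables

print(classify(np.array([[1.0, 1.0], [-1.0, 1.0]]), np.array([3.0, 5.0])))  # intersecting lines
print(classify(np.array([[1.0, 1.0], [2.0, 2.0]]), np.array([1.0, 3.0])))   # parallel lines
print(classify(np.array([[1.0, 1.0], [2.0, 2.0]]), np.array([1.0, 2.0])))   # the same line twice
```

The three test systems mirror the three pictures from Section 1.1: intersecting, parallel, and coincident lines.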

1.2.1. Elementary Operations

We begin this section with an example. Recall from Example 1.1 that the solution to the given system was

(x,y) = (−1,4).

Example 1.5: Verifying an Ordered Pair is a Solution

Algebraically verify that (x,y) = (−1,4) is a solution to the following system of equations.

x + y = 3
y − x = 5

Solution. By graphing these two equations and identifying the point of intersection, we previously found

that (x,y) = (−1,4) is the unique solution. We can verify algebraically by substituting these values into the original equations, and ensuring that

the equations hold. First, we substitute the values into the first equation and check that it equals 3.

x + y = (−1) + (4) = 3

This equals 3 as needed, so we see that (−1,4) is a solution to the first equation. Substituting the values
into the second equation yields

y − x = (4) − (−1) = 4 + 1 = 5

which is true. For (x,y) = (−1,4) each equation is true and therefore, this is a solution to the system. ♠
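The substitution check above is mechanical enough to automate. A tiny sketch for this particular system (the helper name is invented, not from the text):

```python
def is_solution(x, y):
    # Substitute (x, y) into both equations of the system
    # x + y = 3 and y - x = 5, and require both to hold.
    return x + y == 3 and y - x == 5

print(is_solution(-1, 4))  # True: (-1, 4) satisfies both equations
print(is_solution(0, 3))   # False: satisfies the first equation only
```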

Now, the interesting question is this: If you were not given these numbers to verify, how could you

algebraically determine the solution? Linear algebra gives us the tools needed to answer this question.

The following basic operations are important tools that we will utilize.

Definition 1.6: Elementary Operations

Elementary operations are those operations consisting of the following.

1. Interchange the order in which the equations are listed.

2. Multiply any equation by a nonzero number.

3. Replace any equation with itself added to a multiple of another equation.

It is important to note that none of these operations will change the set of solutions of the system of

equations. In fact, elementary operations are the key tool we use in linear algebra to find solutions to

systems of equations.
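The three elementary operations of Definition 1.6 can be sketched as functions acting on the rows of an array that stores each equation's coefficients followed by its constant (augmented matrices are formalized later in the text, so this layout and the function names are assumptions for illustration):

```python
import numpy as np

def interchange(A, i, j):
    # Operation 1: swap equations i and j (rows of the array).
    B = A.copy()
    B[[i, j]] = B[[j, i]]
    return B

def scale(A, i, k):
    # Operation 2: multiply equation i by a nonzero scalar k.
    if k == 0:
        raise ValueError("k must be nonzero")
    B = A.copy()
    B[i] = k * B[i]
    return B

def add_multiple(A, i, j, k):
    # Operation 3: replace equation i with itself plus k times equation j.
    B = A.copy()
    B[i] = B[i] + k * B[j]
    return B
```

Each function returns a new array rather than modifying its input, so the original system is left intact for comparison.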


Consider the following example.

Example 1.7: Effects of an Elementary Operation

Show that the system

x + y = 7
2x − y = 8

has the same solution as the system

x + y = 7
−3y = −6

Solution. Notice that the second system has been obtained by taking the second equation of the first system

and adding -2 times the first equation, as follows:

2x− y+(−2)(x+ y) = 8+(−2)(7)

By simplifying, we obtain

−3y = −6

which is the second equation in the second system. Now, from here we can solve for y and see that y = 2. Next, we substitute this value into the first equation as follows

x+ y = x+2 = 7

Hence x = 5 and so (x,y) = (5,2) is a solution to the second system. We want to check if (5,2) is also a solution to the first system. We check this by substituting (x,y) = (5,2) into the system and ensuring the equations are true.

x + y = (5) + (2) = 7
2x − y = 2(5) − (2) = 8

Hence, (5,2) is also a solution to the first system. ♠
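If you have a numerical tool at hand, you can confirm the conclusion of this example. Here is a NumPy sketch (an illustration, not part of the text): both the original system and the system obtained by the elementary operation are solved, and the solutions agree.

```python
# Compare the solution of the original system of Example 1.7
#   x + y = 7,  2x - y = 8
# with the system produced by the elementary operation (add -2 times
# the first equation to the second):  x + y = 7,  -3y = -6.
import numpy as np

A1 = np.array([[1.0, 1.0],
               [2.0, -1.0]])
b1 = np.array([7.0, 8.0])

A2 = np.array([[1.0, 1.0],
               [0.0, -3.0]])
b2 = np.array([7.0, -6.0])

s1 = np.linalg.solve(A1, b1)
s2 = np.linalg.solve(A2, b2)
print(s1, s2)  # both systems give x = 5, y = 2
```

That the two answers coincide is exactly what it means for an elementary operation to leave the solution set unchanged.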

This example illustrates how an elementary operation applied to a system of two equations in two

variables does not affect the solution set. However, a linear system may involve many equations and many

variables and there is no reason to limit our study to small systems. For any size of system in any number

of variables, the solution set is still the collection of solutions to the equations. In every case, the above

operations of Definition 1.6 do not change the set of solutions to the system of linear equations.

In the following theorem, we use the notation Ei to represent an equation, while bi denotes a constant.

1.2. Systems Of Equations, Algebraic Procedures 11

Theorem 1.8: Elementary Operations and Solutions

Suppose you have a system of two linear equations

E1 = b1
E2 = b2     (1.1)

Then the following systems have the same solution set as 1.1:

1. E2 = b2
   E1 = b1     (1.2)

2. E1 = b1
   kE2 = kb2     (1.3)

   for any scalar k, provided k ≠ 0.

3. E1 = b1
   E2 + kE1 = b2 + kb1     (1.4)

   for any scalar k (including k = 0).

Before we proceed with the proof of Theorem 1.8, let us consider this theorem in context of Example

1.7. Then,

E1 = x + y,    b1 = 7
E2 = 2x − y,   b2 = 8

Recall the elementary operations that we used to modify the system in the solution to the example. First,

we added (−2) times the first equation to the second equation. In terms of Theorem 1.8, this action is given by

E2 + (−2)E1 = b2 + (−2)b1

or

2x − y + (−2)(x + y) = 8 + (−2)7

This gave us the second system in Example 1.7, given by

E1 = b1
E2 + (−2)E1 = b2 + (−2)b1

From this point, we were able to find the solution to the system. Theorem 1.8 tells us that the solution

we found is in fact a solution to the original system.

We will now prove Theorem 1.8.

Proof.

1. The proof that the systems 1.1 and 1.2 have the same solution set is as follows. Suppose that

(x1, · · · ,xn) is a solution to E1 = b1,E2 = b2. We want to show that this is a solution to the system in 1.2 above. This is clear, because the system in 1.2 is the original system, but listed in a different

order. Changing the order does not affect the solution set, so (x1, · · · ,xn) is a solution to 1.2.


2. Next we want to prove that the systems 1.1 and 1.3 have the same solution set. That is, E1 = b1, E2 = b2 has the same solution set as the system E1 = b1, kE2 = kb2, provided k ≠ 0. Let (x1, · · · ,xn) be a solution of E1 = b1, E2 = b2. We want to show that it is a solution to E1 = b1, kE2 = kb2. Notice that the only difference between these two systems is that the second involves multiplying the equation E2 = b2 by the scalar k. Recall that when you multiply both sides of an equation by the same number, the sides are still equal to each other. Hence if (x1, · · · ,xn) is a solution to E2 = b2, then it will also be a solution to kE2 = kb2. Hence, (x1, · · · ,xn) is also a solution to 1.3. Similarly, let (x1, · · · ,xn) be a solution of E1 = b1, kE2 = kb2. Then we can multiply the equation kE2 = kb2 by the scalar 1/k, which is possible only because we have required that k ≠ 0. Just as above, this action preserves equality and we obtain the equation E2 = b2. Hence (x1, · · · ,xn) is also a solution to E1 = b1, E2 = b2.

3. Finally, we will prove that the systems 1.1 and 1.4 have the same solution set. We will show that

any solution of E1 = b1,E2 = b2 is also a solution of 1.4. Then, we will show that any solution of 1.4 is also a solution of E1 = b1,E2 = b2. Let (x1, · · · ,xn) be a solution to E1 = b1,E2 = b2. Then in particular it solves E1 = b1. Hence, it solves the first equation in 1.4. Similarly, it also solves E2 = b2. By our proof of 1.3, it also solves kE1 = kb1. Notice that if we add E2 and kE1, this is equal to b2+kb1. Therefore, if (x1, · · · ,xn) solves E1 = b1,E2 = b2 it must also solve E2+kE1 = b2+kb1. Now suppose (x1, · · · ,xn) solves the system E1 = b1,E2 + kE1 = b2 + kb1. Then in particular it is a solution of E1 = b1. Again by our proof of 1.3, it is also a solution to kE1 = kb1. Now if we subtract these equal quantities from both sides of E2 + kE1 = b2 + kb1 we obtain E2 = b2, which shows that the solution also satisfies E1 = b1,E2 = b2.

Stated simply, the above theorem shows that the elementary operations do not change the solution set

of a system of equations.

We will now look at an example of a system of three equations and three variables. Similarly to the

previous examples, the goal is to find values for x,y,z such that each of the given equations are satisfied

when these values are substituted in.

Example 1.9: Solving a System of Equations with Elementary Operations

Find the solutions to the system,

x + 3y + 6z = 25
2x + 7y + 14z = 58
2y + 5z = 19     (1.5)

Solution. We can relate this system to Theorem 1.8 above. In this case, we have

E1 = x + 3y + 6z,     b1 = 25
E2 = 2x + 7y + 14z,   b2 = 58
E3 = 2y + 5z,         b3 = 19

Theorem 1.8 claims that if we do elementary operations on this system, we will not change the solution

set. Therefore, we can solve this system using the elementary operations given in Definition 1.6. First,


replace the second equation by (−2) times the first equation added to the second. This yields the system

x + 3y + 6z = 25
y + 2z = 8
2y + 5z = 19     (1.6)

Now, replace the third equation with (−2) times the second added to the third. This yields the system

x + 3y + 6z = 25
y + 2z = 8
z = 3     (1.7)

At this point, we can easily find the solution. Simply take z = 3 and substitute this back into the previous equation to solve for y, and similarly to solve for x.

x + 3y + 6(3) = x + 3y + 18 = 25
y + 2(3) = y + 6 = 8
z = 3

The second equation is now

y+6 = 8

You can see from this equation that y = 2. Therefore, we can substitute this value into the first equation as follows:

x+3(2)+18 = 25

By simplifying this equation, we find that x = 1. Hence, the solution to this system is (x,y,z) = (1,2,3). This process is called back substitution.

Alternatively, in 1.7 you could have continued as follows. Add (−2) times the third equation to the second and then add (−6) times the second to the first. This yields

x + 3y = 7
y = 2
z = 3

Now add (−3) times the second to the first. This yields

x = 1
y = 2
z = 3

a system which has the same solution set as the original system. This avoided back substitution and led

to the same solution set. It is your decision which you prefer to use, as both methods lead to the correct

solution, (x,y,z) = (1,2,3). ♠
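The back substitution described above can be sketched in a few lines of Python (an aside, not part of the text): starting from the bottom equation of the triangular system 1.7, each step substitutes the values already found into the equation above it.

```python
# Back substitution on the triangular system 1.7 from Example 1.9:
#   x + 3y + 6z = 25
#       y + 2z  = 8
#            z  = 3
# Work from the bottom equation up, substituting known values.
z = 3
y = 8 - 2 * z           # from y + 2z = 8
x = 25 - 3 * y - 6 * z  # from x + 3y + 6z = 25
print((x, y, z))  # (1, 2, 3)
```

Notice that the order matters: z must be known before y can be computed, and both before x.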


1.2.2. Gaussian Elimination

The work we did in the previous section will always find the solution to the system. In this section, we

will explore a less cumbersome way to find the solutions. First, we will represent a linear system with

an augmented matrix. A matrix is simply a rectangular array of numbers. The size or dimension of a

matrix is defined as m× n where m is the number of rows and n is the number of columns. In order to construct an augmented matrix from a linear system, we create a coefficient matrix from the coefficients

of the variables in the system, as well as a constant matrix from the constants. The coefficients from one

equation of the system create one row of the augmented matrix.

For example, consider the linear system in Example 1.9

x + 3y + 6z = 25
2x + 7y + 14z = 58
2y + 5z = 19

This system can be written as an augmented matrix, as follows

[ 1 3 6 | 25 ]
[ 2 7 14 | 58 ]
[ 0 2 5 | 19 ]

Notice that it has exactly the same information as the original system. Here it is understood that the

first column contains the coefficients from x in each equation, in order,

[ 1 ]
[ 2 ]
[ 0 ]

Similarly, we create a column from the coefficients on y in each equation,

[ 3 ]
[ 7 ]
[ 2 ]

and a column from the coefficients on z in each equation,

[ 6 ]
[ 14 ]
[ 5 ]

For a system of more than three variables, we would continue in this way, constructing a column for each variable. Similarly, for a system of fewer than three variables, we simply construct a column for each variable.

Finally, we construct a column from the constants of the equations,

[ 25 ]
[ 58 ]
[ 19 ]

The rows of the augmented matrix correspond to the equations in the system. For example, the top

row in the augmented matrix,

[ 1 3 6 | 25 ]

corresponds to the equation

x+3y+6z = 25.
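The construction just described can be sketched in code by stacking the coefficient matrix next to the constant column. A NumPy illustration (the variable names are our own, not from the text):

```python
# Build the augmented matrix of the system in Example 1.9 by placing
# the coefficient matrix and the constant column side by side.
import numpy as np

coefficients = np.array([[1, 3, 6],
                         [2, 7, 14],
                         [0, 2, 5]])
constants = np.array([[25],
                      [58],
                      [19]])
augmented = np.hstack([coefficients, constants])
print(augmented)
```

Each row of the result carries exactly the same information as one equation of the system.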

Consider the following definition.


Definition 1.10: Augmented Matrix of a Linear System

For a linear system of the form

a11x1 + · · · + a1nxn = b1
...
am1x1 + · · · + amnxn = bm

where the xi are variables and the aij and bi are constants, the augmented matrix of this system is given by

[ a11 · · · a1n | b1 ]
[  ...      ...  | ... ]
[ am1 · · · amn | bm ]

Now, consider elementary operations in the context of the augmented matrix. The elementary opera-

tions in Definition 1.6 can be used on the rows just as we used them on equations previously. Changes to

a system of equations as a result of an elementary operation are equivalent to changes in the augmented

matrix resulting from the corresponding row operation. Note that Theorem 1.8 implies that any elementary

row operations used on an augmented matrix will not change the solution to the corresponding system of

equations. We now formally define elementary row operations. These are the key tool we will use to find

solutions to systems of equations.

Definition 1.11: Elementary Row Operations

The elementary row operations (also known as row operations) consist of the following

1. Switch two rows.

2. Multiply a row by a nonzero number.

3. Replace a row by any multiple of another row added to it.
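The three row operations of this definition can be sketched as small Python functions (an aside, not part of the text; the function names are our own). Each returns a new matrix rather than modifying its argument.

```python
# The three elementary row operations of Definition 1.11, sketched as
# functions acting on a NumPy array (rows are indexed from 0).
import numpy as np

def switch_rows(M, i, j):
    M = M.copy()
    M[[i, j]] = M[[j, i]]   # swap rows i and j
    return M

def scale_row(M, i, k):
    assert k != 0, "a row may only be scaled by a nonzero number"
    M = M.copy()
    M[i] = k * M[i]
    return M

def add_multiple(M, i, j, k):
    # Replace row i by row i plus k times row j.
    M = M.copy()
    M[i] = M[i] + k * M[j]
    return M

A = np.array([[1.0, 3, 6, 25],
              [2, 7, 14, 58],
              [0, 2, 5, 19]])
step1 = add_multiple(A, 1, 0, -2)   # (-2) times row 1 added to row 2
print(step1[1])  # [0. 1. 2. 8.]
```

The call shown reproduces the first step used on the system of Example 1.9.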

Recall how we solved Example 1.9. We can do the exact same steps as above, except now in the

context of an augmented matrix and using row operations. The augmented matrix of this system is

[ 1 3 6 | 25 ]
[ 2 7 14 | 58 ]
[ 0 2 5 | 19 ]

Thus the first step in solving the system given by 1.5 would be to take (−2) times the first row of the augmented matrix and add it to the second row,

[ 1 3 6 | 25 ]
[ 0 1 2 | 8 ]
[ 0 2 5 | 19 ]


Note how this corresponds to 1.6. Next take (−2) times the second row and add to the third,

[ 1 3 6 | 25 ]
[ 0 1 2 | 8 ]
[ 0 0 1 | 3 ]

This augmented matrix corresponds to the system

x + 3y + 6z = 25
y + 2z = 8
z = 3

which is the same as 1.7. By back substitution you obtain the solution x = 1,y = 2, and z = 3.

Through a systematic procedure of row operations, we can simplify an augmented matrix and carry it

to row-echelon form or reduced row-echelon form, which we define next. These forms are used to find

the solutions of the system of equations corresponding to the augmented matrix.

In the following definitions, the term leading entry refers to the first nonzero entry of a row when

scanning the row from left to right.

Definition 1.12: Row-Echelon Form

An augmented matrix is in row-echelon form if

1. All nonzero rows are above any rows of zeros.

2. Each leading entry of a row is in a column to the right of the leading entries of any row above

it.

3. Each leading entry of a row is equal to 1.

We also consider another reduced form of the augmented matrix which has one further condition.

Definition 1.13: Reduced Row-Echelon Form

An augmented matrix is in reduced row-echelon form if

1. All nonzero rows are above any rows of zeros.

2. Each leading entry of a row is in a column to the right of the leading entries of any rows above

it.

3. Each leading entry of a row is equal to 1.

4. All entries in a column above and below a leading entry are zero.

Notice that the first three conditions on a reduced row-echelon form matrix are the same as those for

row-echelon form.

Hence, every reduced row-echelon form matrix is also in row-echelon form. The converse is not

necessarily true; we cannot assume that every matrix in row-echelon form is also in reduced row-echelon


form. However, it often happens that the row-echelon form is sufficient to provide information about the

solution of a system.

The following examples describe matrices in these various forms. As an exercise, take the time to

carefully verify that they are in the specified form.

Example 1.14: Not in Row-Echelon Form

The following augmented matrices are not in row-echelon form (and therefore also not in reduced

row-echelon form).

[ 0 0 0 | 0 ]
[ 1 2 3 | 3 ]
[ 0 1 0 | 2 ]
[ 0 0 0 | 1 ]
[ 0 0 0 | 0 ]

,

[ 1 2 | 3 ]
[ 2 4 | −6 ]
[ 4 0 | 7 ]

,

[ 0 2 3 | 3 ]
[ 1 5 0 | 2 ]
[ 7 5 0 | 1 ]
[ 0 0 1 | 0 ]

Example 1.15: Matrices in Row-Echelon Form

The following augmented matrices are in row-echelon form, but not in reduced row-echelon form.

[ 1 0 6 5 8 | 2 ]
[ 0 0 1 2 7 | 3 ]
[ 0 0 0 0 1 | 1 ]
[ 0 0 0 0 0 | 0 ]

,

[ 1 3 5 | 4 ]
[ 0 1 0 | 7 ]
[ 0 0 1 | 0 ]
[ 0 0 0 | 1 ]
[ 0 0 0 | 0 ]

,

[ 1 0 6 | 0 ]
[ 0 1 4 | 0 ]
[ 0 0 1 | 0 ]
[ 0 0 0 | 0 ]

Notice that we could apply further row operations to these matrices to carry them to reduced row-

echelon form. Take the time to try that on your own. Consider the following matrices, which are in

reduced row-echelon form.

Example 1.16: Matrices in Reduced Row-Echelon Form

The following augmented matrices are in reduced row-echelon form.

[ 1 0 0 5 0 | 0 ]
[ 0 0 1 2 0 | 0 ]
[ 0 0 0 0 1 | 1 ]
[ 0 0 0 0 0 | 0 ]

,

[ 1 0 0 | 0 ]
[ 0 1 0 | 0 ]
[ 0 0 1 | 0 ]
[ 0 0 0 | 1 ]
[ 0 0 0 | 0 ]

,

[ 1 0 0 | 4 ]
[ 0 1 0 | 3 ]
[ 0 0 1 | 2 ]

One way in which the row-echelon form of a matrix is useful is in identifying the pivot positions and

pivot columns of the matrix.


Definition 1.17: Pivot Position and Pivot Column

A pivot position in a matrix is the location of a leading entry in the row-echelon form of a matrix.

A pivot column is a column that contains a pivot position.

For example, consider the following.

Example 1.18: Pivot Position

Let

A =
[ 1 2 3 | 4 ]
[ 3 2 1 | 6 ]
[ 4 4 4 | 10 ]

Where are the pivot positions and pivot columns of the augmented matrix A?

Solution. The row-echelon form of this matrix is

[ 1 2 3 | 4 ]
[ 0 1 2 | 3/2 ]
[ 0 0 0 | 0 ]

This is all we need in this example, but note that this matrix is not in reduced row-echelon form.

In order to identify the pivot positions in the original matrix, we look for the leading entries in the

row-echelon form of the matrix. Here, the entry in the first row and first column, as well as the entry in

the second row and second column are the leading entries. Hence, these locations are the pivot positions.

We identify the pivot positions in the original matrix, marking the entries in those positions (the 1 in row one, column one, and the 2 in row two, column two):

[ (1) 2 3 | 4 ]
[ 3 (2) 1 | 6 ]
[ 4 4 4 | 10 ]

Thus the pivot columns in the matrix are the first two columns. ♠

The following is an algorithm for carrying a matrix to row-echelon form and reduced row-echelon

form. You may wish to use this algorithm to carry the above matrix to row-echelon form or reduced

row-echelon form yourself for practice.


Algorithm 1.19: Reduced Row-Echelon Form Algorithm

This algorithm provides a method for using row operations to take a matrix to its reduced row-

echelon form. We begin with the matrix in its original form.

1. Starting from the left, find the first nonzero column. This is the first pivot column, and the

position at the top of this column is the first pivot position. Switch rows if necessary to place

a nonzero number in the first pivot position.

2. Use row operations to make the entries below the first pivot position (in the first pivot column)

equal to zero.

3. Ignoring the row containing the first pivot position, repeat steps 1 and 2 with the remaining

rows. Repeat the process until there are no more rows to modify.

4. Divide each nonzero row by the value of the leading entry, so that the leading entry becomes

1. The matrix will then be in row-echelon form.

The following step will carry the matrix from row-echelon form to reduced row-echelon form.

5. Moving from right to left, use row operations to create zeros in the entries of the pivot columns

which are above the pivot positions. The result will be a matrix in reduced row-echelon form.

Most often we will apply this algorithm to an augmented matrix in order to find the solution to a system

of linear equations. However, we can use this algorithm to compute the reduced row-echelon form of any

matrix which could be useful in other applications.
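As an aside (not part of the text), Algorithm 1.19 can be sketched in Python using exact rational arithmetic. Note that this sketch creates each leading 1 and clears the entries both below and above it in a single left-to-right sweep, rather than as the separate passes of steps 4 and 5; under that simplification the final reduced row-echelon form is the same. The function name is our own.

```python
# A sketch of Algorithm 1.19 with exact rational arithmetic.
from fractions import Fraction

def rref(rows):
    M = [[Fraction(x) for x in row] for row in rows]
    m, n = len(M), len(M[0])
    pivot_row = 0
    pivot_cols = []
    for col in range(n):
        # Step 1: find a nonzero entry in this column at or below
        # pivot_row, and switch rows to bring it into pivot position.
        pivot = next((r for r in range(pivot_row, m) if M[r][col] != 0), None)
        if pivot is None:
            continue  # no pivot in this column
        M[pivot_row], M[pivot] = M[pivot], M[pivot_row]
        # Step 4 (done eagerly): divide the pivot row by its leading entry.
        lead = M[pivot_row][col]
        M[pivot_row] = [x / lead for x in M[pivot_row]]
        # Steps 2 and 5: make every other entry in the pivot column zero.
        for r in range(m):
            if r != pivot_row and M[r][col] != 0:
                k = M[r][col]
                M[r] = [a - k * b for a, b in zip(M[r], M[pivot_row])]
        pivot_cols.append(col)
        pivot_row += 1
        if pivot_row == m:
            break
    return M, pivot_cols

# The matrix from Example 1.18:
M, pivots = rref([[1, 2, 3, 4], [3, 2, 1, 6], [4, 4, 4, 10]])
for row in M:
    print(row)
print("pivot columns:", pivots)
```

Exact fractions are used so that no rounding occurs; a floating-point version would need a tolerance when testing entries against zero.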

Consider the following example of Algorithm 1.19.

Example 1.20: Finding Row-Echelon Form and Reduced Row-Echelon Form of a Matrix

Let

A =
[ 0 −5 −4 ]
[ 1 4 3 ]
[ 5 10 7 ]

Find the row-echelon form of A. Then complete the process until A is in reduced row-echelon form.

Solution. In working through this example, we will use the steps outlined in Algorithm 1.19.

1. The first pivot column is the first column of the matrix, as this is the first nonzero column from the

left. Hence the first pivot position is the one in the first row and first column. Switch the first two

rows to obtain a nonzero entry in the first pivot position (the 1 in the first row and first column below).

[ 1 4 3 ]
[ 0 −5 −4 ]
[ 5 10 7 ]


2. Step two involves creating zeros in the entries below the first pivot position. The first entry of the

second row is already a zero. All we need to do is subtract 5 times the first row from the third row.

The resulting matrix is

[ 1 4 3 ]
[ 0 −5 −4 ]
[ 0 −10 −8 ]

3. Now ignore the top row. Apply steps 1 and 2 to the smaller matrix

[ −5 −4 ]
[ −10 −8 ]

In this matrix, the first column is a pivot column, and −5 is in the first pivot position. Therefore, we need to create a zero below it. To do this, add (−2) times the first row (of this matrix) to the second. The resulting matrix is

[ −5 −4 ]
[ 0 0 ]

Our original matrix now looks like

[ 1 4 3 ]
[ 0 −5 −4 ]
[ 0 0 0 ]

We can see that there are no more rows to modify.

4. Now, we need to create leading 1s in each row. The first row already has a leading 1 so no work is

needed here. Divide the second row by −5 to create a leading 1. The resulting matrix is

[ 1 4 3 ]
[ 0 1 4/5 ]
[ 0 0 0 ]

This matrix is now in row-echelon form.

5. Now create zeros in the entries above pivot positions in each column, in order to carry this matrix

all the way to reduced row-echelon form. Notice that there is no pivot position in the third column

so we do not need to create any zeros in this column! The column in which we need to create zeros

is the second. To do so, subtract 4 times the second row from the first row. The resulting matrix is

[ 1 0 −1/5 ]
[ 0 1 4/5 ]
[ 0 0 0 ]

This matrix is now in reduced row-echelon form. ♠
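If SymPy is available, the computation in this example can be checked directly: the `Matrix.rref` method returns the reduced row-echelon form together with the indices of the pivot columns. (This library call is an aside, not part of the text.)

```python
# Check the result of Example 1.20 with SymPy's built-in rref method.
from sympy import Matrix, Rational

A = Matrix([[0, -5, -4],
            [1, 4, 3],
            [5, 10, 7]])
R, pivot_columns = A.rref()
print(R)              # reduced row-echelon form, with exact fractions
print(pivot_columns)  # indices of the pivot columns
```

The exact fractions −1/5 and 4/5 appear in the result, matching the hand computation above.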

The above algorithm gives you a simple way to obtain the row-echelon form and reduced row-echelon

form of a matrix. The main idea is to do row operations in such a way as to end up with a matrix in

row-echelon form or reduced row-echelon form. This process is important because the resulting matrix

will allow you to describe the solutions to the corresponding linear system of equations in a meaningful

way.


In the next example, we look at how to solve a system of equations using the corresponding augmented

matrix.

Example 1.21: Finding the Solution to a System

Give the complete solution to the following system of equations

2x + 4y − 3z = −1
5x + 10y − 7z = −2
3x + 6y + 5z = 9

Solution. The augmented matrix for this system is

[ 2 4 −3 | −1 ]
[ 5 10 −7 | −2 ]
[ 3 6 5 | 9 ]

In order to find the solution to this system, we wish to carry the augmented matrix to reduced row-

echelon form. We will do so using Algorithm 1.19. Notice that the first column is nonzero, so this is our

first pivot column. The first entry in the first row, 2, is the first leading entry and it is in the first pivot

position. We will use row operations to create zeros in the entries below the 2. First, replace the second

row with −5 times the first row plus 2 times the second row. This yields

[ 2 4 −3 | −1 ]
[ 0 0 1 | 1 ]
[ 3 6 5 | 9 ]

Now, replace the third row with −3 times the first row plus 2 times the third row. This yields

[ 2 4 −3 | −1 ]
[ 0 0 1 | 1 ]
[ 0 0 19 | 21 ]

Now the entries in the first column below the pivot position are zeros. We now look for the second pivot column, which in this case is column three. Here, the 1 in the second row and third column is in the pivot position. We need to do just one row operation to create a zero below it.

Taking −19 times the second row and adding it to the third row yields

[ 2 4 −3 | −1 ]
[ 0 0 1 | 1 ]
[ 0 0 0 | 2 ]

We could proceed with the algorithm to carry this matrix to row-echelon form or reduced row-echelon

form. However, remember that we are looking for the solutions to the system of equations. Take another

look at the third row of the matrix. Notice that it corresponds to the equation

0x + 0y + 0z = 2


There is no solution to this equation because for all x,y,z, the left side will equal 0, and 0 ≠ 2. This shows there is no solution to the given system of equations. In other words, this system is inconsistent. ♠
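Inconsistency can also be detected numerically: a row of the form [ 0 0 0 | c ] with c nonzero appears during elimination exactly when the coefficient matrix has smaller rank than the augmented matrix. A NumPy sketch (an aside, not part of the text):

```python
# Detect the inconsistent system of Example 1.21 by comparing the rank
# of the coefficient matrix with the rank of the augmented matrix.
import numpy as np

A = np.array([[2.0, 4.0, -3.0],
              [5.0, 10.0, -7.0],
              [3.0, 6.0, 5.0]])
b = np.array([[-1.0], [-2.0], [9.0]])

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))
print(rank_A, rank_Ab)  # the ranks differ, so no solution exists
```

Here the second column of A is twice the first, which is why the coefficient matrix does not have full rank while the augmented matrix does.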

The following is another example of how to find the solution to a system of equations by carrying the

corresponding augmented matrix to reduced row-echelon form.

Example 1.22: An Infinite Set of Solutions

Give the complete solution to the system of equations

3x − y − 5z = 9
y − 10z = 0
−2x + y = −6     (1.8)

Solution. The augmented matrix of this system is

[ 3 −1 −5 | 9 ]
[ 0 1 −10 | 0 ]
[ −2 1 0 | −6 ]

In order to find the solution to this system, we will carry the augmented matrix to reduced row-echelon

form, using Algorithm 1.19. The first column is the first pivot column. We want to use row operations to

create zeros beneath the first entry in this column, which is in the first pivot position. Replace the third

row with 2 times the first row added to 3 times the third row. This gives

[ 3 −1 −5 | 9 ]
[ 0 1 −10 | 0 ]
[ 0 1 −10 | 0 ]

Now, we have created zeros beneath the 3 in the first column, so we move on to the second pivot column

(which is the second column) and repeat the procedure. Take −1 times the second row and add to the third row.

[ 3 −1 −5 | 9 ]
[ 0 1 −10 | 0 ]
[ 0 0 0 | 0 ]

The entry below the pivot position in the second column is now a zero. Notice that we have no more pivot

columns because we have only two leading entries.

At this stage, we also want the leading entries to be equal to one. To do so, divide the first row by 3.

[ 1 −1/3 −5/3 | 3 ]
[ 0 1 −10 | 0 ]
[ 0 0 0 | 0 ]

This matrix is now in row-echelon form.

Let’s continue with row operations until the matrix is in reduced row-echelon form. This involves

creating zeros above the pivot positions in each pivot column. This requires only one step, which is to add


1/3 times the second row to the first row.

[ 1 0 −5 | 3 ]
[ 0 1 −10 | 0 ]
[ 0 0 0 | 0 ]

This is in reduced row-echelon form, which you should verify using Definition 1.13. The equations

corresponding to this reduced row-echelon form are

x − 5z = 3
y − 10z = 0

or

x = 3 + 5z
y = 10z

Observe that z is not restrained by any equation. In fact, z can equal any number. For example, we can

let z = t, where we can choose t to be any number. In this context t is called a parameter. Therefore, the solution set of this system is

x = 3 + 5t
y = 10t
z = t

where t is arbitrary. The system has an infinite set of solutions which are given by these equations. For

any value of t we select, x,y, and z will be given by the above equations. For example, if we choose t = 4 then the corresponding solution would be

x = 3 + 5(4) = 23
y = 10(4) = 40
z = 4

♠

In Example 1.22 the solution involved one parameter. It may happen that the solution to a system

involves more than one parameter, as shown in the following example.

Example 1.23: A Two Parameter Set of Solutions

Find the solution to the system

x + 2y − z + w = 3
x + y − z + w = 1
x + 3y − z + w = 5

Solution. The augmented matrix is

[ 1 2 −1 1 | 3 ]
[ 1 1 −1 1 | 1 ]
[ 1 3 −1 1 | 5 ]

We wish to carry this matrix to row-echelon form. Here, we will outline the row operations used. However,

make sure that you understand the steps in terms of Algorithm 1.19.


Take −1 times the first row and add to the second. Then take −1 times the first row and add to the third. This yields

[ 1 2 −1 1 | 3 ]
[ 0 −1 0 0 | −2 ]
[ 0 1 0 0 | 2 ]

Now add the second row to the third row and divide the second row by −1.

[ 1 2 −1 1 | 3 ]
[ 0 1 0 0 | 2 ]
[ 0 0 0 0 | 0 ]     (1.9)

This matrix is in row-echelon form and we can see that x and y correspond to pivot columns, while

z and w do not. Therefore, we will assign parameters to the variables z and w. Assign the parameter s

to z and the parameter t to w. Then the first row yields the equation x + 2y − s + t = 3, while the second row yields the equation y = 2. Since y = 2, the first equation becomes x + 4 − s + t = 3, showing that the solution is given by

x = −1 + s − t
y = 2
z = s
w = t

It is customary to write this solution in the form

[ x ]   [ −1 + s − t ]
[ y ] = [ 2 ]
[ z ]   [ s ]
[ w ]   [ t ]     (1.10)

This example shows a system of equations with an infinite solution set which depends on two param-

eters. It can be less confusing in the case of an infinite solution set to first place the augmented matrix in

reduced row-echelon form rather than just row-echelon form before seeking to write down the description

of the solution.

In the above steps, this means we don’t stop with the row-echelon form in equation 1.9. Instead we

first place it in reduced row-echelon form as follows.

[ 1 0 −1 1 | −1 ]
[ 0 1 0 0 | 2 ]
[ 0 0 0 0 | 0 ]

Then the solution is y = 2 from the second row and x = −1+ z−w from the first. Thus letting z = s and w = t, the solution is given by 1.10.

You can see here that there are two paths to the answer, and both yield the same solution.

Hence, either approach may be used. The process which we first used in the above solution is called

Gaussian Elimination. This process involves carrying the matrix to row-echelon form, converting back to

equations, and using back substitution to find the solution. When you do row operations until you obtain

reduced row-echelon form, the process is called Gauss-Jordan Elimination.


We have now found solutions for systems of equations with no solution and infinitely many solutions,

with one parameter as well as two parameters. Recall the three types of solution sets which we discussed

in the previous section; no solution, one solution, and infinitely many solutions. Each of these types of

solutions could be identified from the graph of the system. It turns out that we can also identify the type

of solution from the reduced row-echelon form of the augmented matrix.

• No Solution: In the case where the system of equations has no solution, the row-echelon form of

the augmented matrix will have a row of the form

[ 0 0 0 | 1 ]

This row indicates that the system is inconsistent and has no solution.

• One Solution: In the case where the system of equations has one solution, every column of the

coefficient matrix is a pivot column. The following is an example of an augmented matrix in reduced

row-echelon form for a system of equations with one solution.

[ 1 0 0 | 5 ]
[ 0 1 0 | 0 ]
[ 0 0 1 | 2 ]

• Infinitely Many Solutions: In the case where the system of equations has infinitely many solutions,

the solution contains parameters. There will be columns of the coefficient matrix which are not

pivot columns. The following are examples of augmented matrices in reduced row-echelon form for

systems of equations with infinitely many solutions.

[ 1 0 0 | 5 ]
[ 0 1 2 | −3 ]
[ 0 0 0 | 0 ]

or

[ 1 0 0 | 5 ]
[ 0 1 0 | −3 ]

1.2.3. Uniqueness of the Reduced Row-Echelon Form

As we have seen in earlier sections, we know that every matrix can be brought into reduced row-echelon

form by a sequence of elementary row operations. Here we will prove that the resulting matrix is unique;

in other words, the resulting matrix in reduced row-echelon form does not depend upon the particular

sequence of elementary row operations or the order in which they were performed.

Let A be the augmented matrix of a homogeneous system of linear equations in the variables x1,x2, · · · ,xn which is also in reduced row-echelon form. The matrix A divides the set of variables into two different types.

We say that xi is a basic variable whenever A has a leading 1 in column number i, in other words, when

column i is a pivot column. Otherwise we say that xi is a free variable.

Recall Example 1.23.


Example 1.24: Basic and Free Variables

Find the basic and free variables in the system

x+2y− z+w = 3 x+ y− z+w = 1

x+3y− z+w = 5

Solution. Recall from the solution of Example 1.23 that the row-echelon form of the augmented matrix of

this system is given by

[ 1 2 −1 1 | 3 ]
[ 0 1 0 0 | 2 ]
[ 0 0 0 0 | 0 ]

You can see that columns 1 and 2 are pivot columns. These columns correspond to variables x and y,

making these the basic variables. Columns 3 and 4 are not pivot columns, which means that z and w are

free variables.

We can write the solution to this system as

x = −1 + s − t
y = 2
z = s
w = t

Here the free variables are written as parameters, and the basic variables are given by linear functions

of these parameters. ♠
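The pivot columns, and hence the basic and free variables, can be read off mechanically. A SymPy sketch (an aside, not part of the text):

```python
# Read off the basic and free variables of Example 1.24 from the pivot
# columns of the reduced row-echelon form, computed with SymPy.
from sympy import Matrix

augmented = Matrix([[1, 2, -1, 1, 3],
                    [1, 1, -1, 1, 1],
                    [1, 3, -1, 1, 5]])
R, pivots = augmented.rref()

variables = ["x", "y", "z", "w"]
# The last column holds the constants, so only pivot indices that fall
# within the variable columns name basic variables.
basic = [variables[i] for i in pivots if i < len(variables)]
free = [v for i, v in enumerate(variables) if i not in pivots]
print(basic, free)
```

The guard on the pivot indices matters for inconsistent systems, where the constants column itself can contain a pivot.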

In general, all solutions can be written in terms of the free variables. In such a description, the free

variables can take any values (they become parameters), while the basic variables become simple linear

functions of these parameters. Indeed, a basic variable xi is a linear function of only those free variables

xj with j > i. This leads to the following observation.

Proposition 1.25: Basic and Free Variables

If xi is a basic variable of a homogeneous system of linear equations, then any solution of the system

with xj = 0 for all those free variables xj with j > i must also have xi = 0.

Using this proposition, we prove a lemma which will be used in the proof of the main result of this

section below.

Lemma 1.26: Solutions and the Reduced Row-Echelon Form of a Matrix

Let A and B be two distinct augmented matrices for two homogeneous systems of m equations in n

variables, such that A and B are each in reduced row-echelon form. Then, the two systems do not

have exactly the same solutions.

Proof. With respect to the linear systems associated with the matrices A and B, there are two cases to

consider:


• Case 1: the two systems have the same basic variables

• Case 2: the two systems do not have the same basic variables

In case 1, the two matrices will have exactly the same pivot positions. However, since A and B are not

identical, there is some row of A which is different from the corresponding row of B and yet the rows each

have a pivot in the same column position. Let i be the index of this column position. Since the matrices are

in reduced row-echelon form, the two rows must differ at some entry in a column j > i. Let these entries be a in A and b in B, where a 6= b. Since A is in reduced row-echelon form, if x j were a basic variable for its linear system, we would have a = 0. Similarly, if x j were a basic variable for the linear system of the matrix B, we would have b = 0. Since a and b are unequal, they cannot both be equal to 0, and hence x j cannot be a basic variable for both linear systems. However, since the systems have the same basic

variables, x j must then be a free variable for each system. We now look at the solutions of the systems in

which x j is set equal to 1 and all other free variables are set equal to 0. For this choice of parameters, the

solution of the system for matrix A has x j = −a, while the solution of the system for matrix B has x j = −b, so that the two systems have different solutions.

In case 2, there is a variable xi which is a basic variable for one matrix, let’s say A, and a free variable

for the other matrix B. The system for matrix B has a solution in which xi = 1 and x j = 0 for all other free variables x j. However, by Proposition 1.25 this cannot be a solution of the system for the matrix A. This

completes the proof of case 2. ♠

Now, we say that the matrix B is equivalent to the matrix A provided that B can be obtained from A

by performing a sequence of elementary row operations beginning with A. The importance of this concept

lies in the following result.

Theorem 1.27: Equivalent Matrices

The two linear systems of equations corresponding to two equivalent augmented matrices have

exactly the same solutions.

The proof of this theorem is left as an exercise.
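As a quick numerical sanity check of Theorem 1.27 (not a proof), the sketch below applies one elementary row operation to the augmented matrix of a small made-up system and confirms that its solution still satisfies the transformed system; the particular system and the helper function are our own illustration.

```python
# Illustration of Theorem 1.27: an elementary row operation on an
# augmented matrix does not change the solution set.
# System (made up for illustration): x + 2y = 5, 3x + y = 5.
# Each augmented row [a, b, c] encodes the equation a*x + b*y = c.
rows = [[1, 2, 5],
        [3, 1, 5]]

def satisfies(rows, x, y):
    """True if (x, y) solves every equation in the augmented rows."""
    return all(a * x + b * y == c for a, b, c in rows)

# The unique solution of this system is x = 1, y = 2.
assert satisfies(rows, 1, 2)

# Elementary row operation: add -3 times row 1 to row 2.
rows2 = [rows[0],
         [rows[1][j] - 3 * rows[0][j] for j in range(3)]]

# The same (x, y) still solves the transformed system.
assert satisfies(rows2, 1, 2)
print("row operation preserved the solution")
```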

Now, we can use Lemma 1.26 and Theorem 1.27 to prove the main result of this section.

Theorem 1.28: Uniqueness of the Reduced Row-Echelon Form

Every matrix A is equivalent to a unique matrix in reduced row-echelon form.

Proof. Let A be an m×n matrix and let B and C be matrices in reduced row-echelon form, each equivalent to A. It suffices to show that B =C.

Let A+ be the matrix A augmented with a new rightmost column consisting entirely of zeros. Similarly,

augment matrices B and C each with a rightmost column of zeros to obtain B+ and C+. Note that B+ and

C+ are matrices in reduced row-echelon form which are obtained from A+ by respectively applying the

same sequence of elementary row operations which were used to obtain B and C from A.

Now, A+, B+, and C+ can all be considered as augmented matrices of homogeneous linear systems

in the variables x1,x2, · · · ,xn. Because B+ and C+ are each equivalent to A+, Theorem 1.27 ensures that


all three homogeneous linear systems have exactly the same solutions. By Lemma 1.26 we conclude that

B+ =C+. By construction, we must also have B =C. ♠

According to this theorem we can say that each matrix A has a unique reduced row-echelon form.

1.2.4. Rank and Homogeneous Systems

There is a special type of system which requires additional study. This type of system is called a homo-

geneous system of equations, which we defined above in Definition 1.3. Our focus in this section is to

consider what types of solutions are possible for a homogeneous system of equations.

Consider the following definition.

Definition 1.29: Trivial Solution

Consider the homogeneous system of equations given by

a11x1 + a12x2 + · · · + a1nxn = 0
a21x1 + a22x2 + · · · + a2nxn = 0
...
am1x1 + am2x2 + · · · + amnxn = 0

Then, x1 = 0,x2 = 0, · · · ,xn = 0 is always a solution to this system. We call this the trivial solution .

If the system has a solution in which not all of the x1, · · · ,xn are equal to zero, then we call this solution nontrivial . The trivial solution does not tell us much about the system, as it says that 0 = 0! Therefore, when working with homogeneous systems of equations, we want to know when the system has a nontrivial

solution.

Suppose we have a homogeneous system of m equations, using n variables, and suppose that n > m. In other words, there are more variables than equations. Then, it turns out that this system always has

a nontrivial solution. Not only will the system have a nontrivial solution, but it also will have infinitely

many solutions. It is also possible, but not required, to have a nontrivial solution if n = m or n < m.

Consider the following example.

Example 1.30: Solutions to a Homogeneous System of Equations

Find the nontrivial solutions to the following homogeneous system of equations

2x + y − z = 0
x + 2y − 2z = 0

Solution. Notice that this system has m = 2 equations and n = 3 variables, so n > m. Therefore by our previous discussion, we expect this system to have infinitely many solutions.

The process we use to find the solutions for a homogeneous system of equations is the same process


we used in the previous section. First, we construct the augmented matrix, given by

[ 2  1  −1  0 ]
[ 1  2  −2  0 ]

Then, we carry this matrix to its reduced row-echelon form, given below.

[ 1  0   0  0 ]
[ 0  1  −1  0 ]

The corresponding system of equations is

x = 0
y − z = 0

Since z is not restrained by any equation, we know that this variable will become our parameter. Let z = t where t is any number. Therefore, our solution has the form

x = 0
y = t
z = t

Hence this system has infinitely many solutions, with one parameter t. ♠
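It is worth checking such a parametric solution directly. The short sketch below (our own illustration) substitutes x = 0, y = t, z = t into both original equations for several values of the parameter t.

```python
# Check the solution of Example 1.30: x = 0, y = t, z = t should satisfy
# 2x + y - z = 0 and x + 2y - 2z = 0 for every value of the parameter t.
for t in [-2, 0, 1, 5]:
    x, y, z = 0, t, t
    assert 2 * x + y - z == 0
    assert x + 2 * y - 2 * z == 0
print("x = 0, y = t, z = t solves the system for all tested t")
```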

Suppose we were to write the solution to the previous example in another form. Specifically,

x = 0
y = 0 + t
z = 0 + t

can be written as

[ x ]   [ 0 ]     [ 0 ]
[ y ] = [ 0 ] + t [ 1 ]
[ z ]   [ 0 ]     [ 1 ]

Notice that we have constructed a column from the constants in the solution (all equal to 0), as well as a

column corresponding to the coefficients on t in each equation. While we will discuss this form of solution

more in further chapters, for now consider the column of coefficients of the parameter t. In this case, this

is the column

[ 0 ]
[ 1 ]
[ 1 ]

This column has a special name: it is called a basic solution. The basic solutions of a system are

columns constructed from the coefficients on parameters in the solution. We often denote basic solutions

by X1,X2 etc., depending on how many solutions occur. Therefore, Example 1.30 has the basic solution

X1 =

[ 0 ]
[ 1 ]
[ 1 ]

We explore this further in the following example.


Example 1.31: Basic Solutions of a Homogeneous System

Consider the following homogeneous system of equations.

x + 4y + 3z = 0
3x + 12y + 9z = 0

Find the basic solutions to this system.

Solution. The augmented matrix of this system and the resulting reduced row-echelon form are

[ 1   4  3  0 ]               [ 1  4  3  0 ]
[ 3  12  9  0 ]   → · · · →   [ 0  0  0  0 ]

When written in equations, this system is given by

x+4y+3z = 0

Notice that only x corresponds to a pivot column. In this case, we will have two parameters, one for y and

one for z. Let y = s and z = t for any numbers s and t. Then, our solution becomes

x = −4s − 3t
y = s
z = t

which can be written as

[ x ]   [ 0 ]     [ −4 ]     [ −3 ]
[ y ] = [ 0 ] + s [  1 ] + t [  0 ]
[ z ]   [ 0 ]     [  0 ]     [  1 ]

You can see here that we have two columns of coefficients corresponding to parameters, specifically one

for s and one for t. Therefore, this system has two basic solutions! These are

X1 =

[ −4 ]
[  1 ]
[  0 ]

and X2 =

[ −3 ]
[  0 ]
[  1 ]

♠

We now present a new definition.

Definition 1.32: Linear Combination

Let X1, · · · ,Xn,V be column matrices. Then V is said to be a linear combination of the columns X1, · · · ,Xn if there exist scalars, a1, · · · ,an such that

V = a1X1+ · · ·+anXn

A remarkable result of this section is that a linear combination of the basic solutions is again a solution

to the system. Even more remarkable is that every solution can be written as a linear combination of these


solutions. Therefore, if we take a linear combination of the two solutions to Example 1.31, this would also

be a solution. For example, we could take the following linear combination

3 [ −4 ]     [ −3 ]   [ −18 ]
  [  1 ] + 2 [  0 ] = [   3 ]
  [  0 ]     [  1 ]   [   2 ]

You should take a moment to verify that

[ x ]   [ −18 ]
[ y ] = [   3 ]
[ z ]   [   2 ]

is in fact a solution to the system in Example 1.31.
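This verification can also be done mechanically. The sketch below (our own check) confirms that X1, X2, and the combination 3X1 + 2X2 each satisfy both equations of Example 1.31.

```python
# Verify the basic solutions of Example 1.31 and their linear combination.
# The equations are x + 4y + 3z = 0 and 3x + 12y + 9z = 0 (the second is
# just 3 times the first).
X1 = [-4, 1, 0]
X2 = [-3, 0, 1]
combo = [3 * a + 2 * b for a, b in zip(X1, X2)]   # 3*X1 + 2*X2
assert combo == [-18, 3, 2]

for x, y, z in (X1, X2, combo):
    assert x + 4 * y + 3 * z == 0
    assert 3 * x + 12 * y + 9 * z == 0
print("X1, X2 and 3*X1 + 2*X2 are all solutions")
```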

Another way in which we can find out more information about the solutions of a homogeneous system

is to consider the rank of the associated coefficient matrix. We now define what is meant by the rank of a

matrix.

Definition 1.33: Rank of a Matrix

Let A be a matrix and consider any row-echelon form of A. Then, the number r of leading entries

of A does not depend on the row-echelon form you choose, and is called the rank of A. We denote

it by rank(A).

Similarly, we could count the number of pivot positions (or pivot columns) to determine the rank of A.

Example 1.34: Finding the Rank of a Matrix

Consider the matrix

[ 1  2  3 ]
[ 1  5  9 ]
[ 2  4  6 ]

What is its rank?

Solution. First, we need to find the reduced row-echelon form of A. Through the usual algorithm, we find

that this is

[ 1  0  −1 ]
[ 0  1   2 ]
[ 0  0   0 ]

Here we have two leading entries, or two pivot positions, in the first two rows. The rank of A is r = 2. ♠

Notice that we would have achieved the same answer if we had found the row-echelon form of A

instead of the reduced row-echelon form.
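The rank can also be computed mechanically. Below is a minimal row-reduction routine, written for clarity rather than efficiency; the function and its structure are our own sketch, not code from the text. It uses exact rational arithmetic so that no pivots are lost to rounding, and applied to the matrix of Example 1.34 it reports rank 2.

```python
from fractions import Fraction

def rank(matrix):
    """Rank = number of pivots found while reducing to reduced row-echelon form."""
    A = [[Fraction(v) for v in row] for row in matrix]
    nrows, ncols = len(A), len(A[0])
    r = 0                                   # index of the next pivot row
    for c in range(ncols):
        # find a row at or below r with a nonzero entry in column c
        pivot = next((i for i in range(r, nrows) if A[i][c] != 0), None)
        if pivot is None:
            continue                        # no pivot in this column
        A[r], A[pivot] = A[pivot], A[r]     # move pivot row into place
        A[r] = [v / A[r][c] for v in A[r]]  # scale so the pivot is 1
        for i in range(nrows):              # clear the rest of the column
            if i != r and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

# Matrix of Example 1.34: its rank should be 2.
print(rank([[1, 2, 3], [1, 5, 9], [2, 4, 6]]))   # -> 2
```

Because the routine counts pivots, it gives the same answer whether you stop at a row-echelon form or carry on to the reduced row-echelon form, just as the text observes.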

Suppose we have a homogeneous system of m equations in n variables, and suppose that n > m. From our above discussion, we know that this system will have infinitely many solutions. If we consider the


rank of the coefficient matrix of this system, we can find out even more about the solution. Note that we

are looking at just the coefficient matrix, not the entire augmented matrix.

Theorem 1.35: Rank and Solutions to a Homogeneous System

Let A be the m× n coefficient matrix corresponding to a homogeneous system of equations, and suppose A has rank r. Then, the solution to the corresponding system has n− r parameters.

Consider our above Example 1.31 in the context of this theorem. The system in this example has m = 2 equations in n = 3 variables. First, because n > m, we know that the system has a nontrivial solution, and therefore infinitely many solutions. This tells us that the solution will contain at least one parameter. The

rank of the coefficient matrix can tell us even more about the solution! The rank of the coefficient matrix

of the system is 1, as it has one leading entry in row-echelon form. Theorem 1.35 tells us that the solution

will have n− r = 3−1 = 2 parameters. You can check that this is true in the solution to Example 1.31. Notice that if n = m or n < m, it is possible to have either a unique solution (which will be the trivial

solution) or infinitely many solutions.

We are not limited to homogeneous systems of equations here. The rank of a matrix can be used to

learn about the solutions of any system of linear equations. In the previous section, we discussed that a

system of equations can have no solution, a unique solution, or infinitely many solutions. Suppose the

system is consistent, whether it is homogeneous or not. The following theorem tells us how we can use

the rank to learn about the type of solution we have.

Theorem 1.36: Rank and Solutions to a Consistent System of Equations

Let A be the m× (n+1) augmented matrix corresponding to a consistent system of equations in n variables, and suppose A has rank r. Then

1. the system has a unique solution if r = n

2. the system has infinitely many solutions if r < n

We will not present a formal proof of this, but consider the following discussions.

1. No Solution The above theorem assumes that the system is consistent, that is, that it has a solution.

It turns out that it is possible for the augmented matrix of a system with no solution to have any

rank r as long as r ≥ 1. Therefore, we must know that the system is consistent in order to use this theorem!

2. Unique Solution Suppose r = n. Then, there is a pivot position in every column of the coefficient matrix of A. Hence, there is a unique solution.

3. Infinitely Many Solutions Suppose r < n. Then there are infinitely many solutions. There are fewer pivot positions (and hence fewer leading entries) than columns, meaning that not every column is a

pivot column. The columns which are not pivot columns correspond to parameters. In fact, in this

case we have n− r parameters.


1.2.5. Balancing Chemical Reactions

The tools of linear algebra can also be used in the subject area of Chemistry, specifically for balancing

chemical reactions.

Consider the chemical reaction

SnO2 + H2 → Sn + H2O

Here the elements involved are tin (Sn), oxygen (O), and hydrogen (H). A chemical reaction occurs and

the result is a combination of tin (Sn) and water (H2O). When considering chemical reactions, we want

to investigate how much of each element we began with and how much of each element is involved in the

result.

An important theory we will use here is the mass balance theory. It tells us that we cannot create or

delete elements within a chemical reaction. For example, in the above expression, we must have the same

number of oxygen, tin, and hydrogen on both sides of the reaction. Notice that this is not currently the

case. For example, there are two oxygen atoms on the left and only one on the right. In order to fix this,

we want to find numbers x,y,z,w such that

xSnO2 + yH2 → zSn+wH2O

where both sides of the reaction have the same number of atoms of the various elements.

This is a familiar problem. We can solve it by setting up a system of equations in the variables x,y,z,w.

Thus you need

Sn :  x = z
O :   2x = w
H :   2y = 2w

We can rewrite these equations as

Sn :  x − z = 0
O :   2x − w = 0
H :   2y − 2w = 0

The augmented matrix for this system of equations is given by

 

[ 1  0  −1   0  0 ]
[ 2  0   0  −1  0 ]
[ 0  2   0  −2  0 ]

 

The reduced row-echelon form of this matrix is

 

[ 1  0  0  −1/2  0 ]
[ 0  1  0  −1    0 ]
[ 0  0  1  −1/2  0 ]

 

The solution is given by

x − (1/2)w = 0
y − w = 0
z − (1/2)w = 0


which we can write as

x = (1/2)t
y = t
z = (1/2)t
w = t

For example, let w = 2 and this would yield x = 1,y = 2, and z = 1. We can put these values back into the expression for the reaction which yields

SnO2 +2H2 → Sn+2H2O

Observe that each side of the expression contains the same number of atoms of each element. This means

that it preserves the total number of atoms, as required, and so the chemical reaction is balanced.
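Counting atoms on each side can be automated; a minimal sketch (our own illustration):

```python
# Atom counts for SnO2 + 2 H2 -> Sn + 2 H2O.
# Each dict maps element -> number of atoms contributed by that side.
left  = {"Sn": 1, "O": 2, "H": 2 * 2}        # one SnO2 and two H2
right = {"Sn": 1, "O": 2 * 1, "H": 2 * 2}    # one Sn and two H2O
assert left == right
print("SnO2 + 2 H2 -> Sn + 2 H2O is balanced")
```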

Consider another example.

Example 1.37: Balancing a Chemical Reaction

Potassium is denoted by K, oxygen by O, phosphorus by P and hydrogen by H. Consider the

reaction given by

KOH + H3PO4 → K3PO4 + H2O

Balance this chemical reaction.

Solution. We will use the same procedure as above to solve this problem. We need to find values for

x,y,z,w such that

xKOH + yH3PO4 → zK3PO4 + wH2O

preserves the total number of atoms of each element.

Finding these values can be done by finding the solution to the following system of equations.

K :  x = 3z
O :  x + 4y = 4z + w
H :  x + 3y = 2w
P :  y = z

The augmented matrix for this system is

 

[ 1  0  −3   0  0 ]
[ 1  4  −4  −1  0 ]
[ 1  3   0  −2  0 ]
[ 0  1  −1   0  0 ]

 

and the reduced row-echelon form is

 

[ 1  0  0  −1    0 ]
[ 0  1  0  −1/3  0 ]
[ 0  0  1  −1/3  0 ]
[ 0  0  0   0    0 ]

 


The solution is given by

x − w = 0
y − (1/3)w = 0
z − (1/3)w = 0

which can be written as

x = t
y = (1/3)t
z = (1/3)t
w = t

Choose a value for t, say 3. Then w = 3 and this yields x = 3,y = 1,z = 1. It follows that the balanced reaction is given by

3KOH + 1H3PO4 → 1K3PO4 + 3H2O

Note that this results in the same number of atoms on both sides. ♠

Of course these numbers you are finding would typically be the number of moles of the molecules on

each side. Thus three moles of KOH added to one mole of H3PO4 yields one mole of K3PO4 and three

moles of H2O.
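As with the previous reaction, the balanced coefficients can be checked against the original atom-balance equations; a short sketch (our own):

```python
# Check that (x, y, z, w) = (3, 1, 1, 3) satisfies the atom-balance
# equations for x KOH + y H3PO4 -> z K3PO4 + w H2O.
x, y, z, w = 3, 1, 1, 3
assert x == 3 * z                 # potassium (K)
assert x + 4 * y == 4 * z + w     # oxygen (O)
assert x + 3 * y == 2 * w         # hydrogen (H)
assert y == z                     # phosphorus (P)
print("3 KOH + H3PO4 -> K3PO4 + 3 H2O is balanced")
```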

1.2.6. Dimensionless Variables

This section shows how solving systems of equations can be used to determine appropriate dimensionless

variables. It is only an introduction to this topic and considers a specific example of a simple airplane

wing shown below. We assume for simplicity that it is a flat plane at an angle to the wind which is blowing

against it with speed V as shown.

[Figure: a flat wing with chord A and span B, inclined at angle θ to a wind blowing with speed V]

The angle θ is called the angle of incidence, B is the span of the wing and A is called the chord. Denote by l the lift. Then this should depend on various quantities like θ ,V ,B,A and so forth. Here is a table which indicates various quantities on which it is reasonable to expect l to depend.

Variable             Symbol   Units
chord                A        m
span                 B        m
angle of incidence   θ        m^0 kg^0 sec^0
speed of wind        V        m sec^−1
speed of sound       V0       m sec^−1
density of air       ρ        kg m^−3
viscosity            µ        kg sec^−1 m^−1
lift                 l        kg sec^−2 m


Here m denotes meters, sec refers to seconds and kg refers to kilograms. All of these are likely familiar

except for µ , which we will discuss in further detail now.

Viscosity is a measure of how much internal friction is experienced when the fluid moves. It is roughly

a measure of how “sticky" the fluid is. Consider a piece of area parallel to the direction of motion of the

fluid. To say that the viscosity is large is to say that the tangential force applied to this area must be large

in order to achieve a given change in speed of the fluid in a direction normal to the tangential force. Thus

µ (area)(velocity gradient) = tangential force

Hence

(units on µ) (m^2) ( m / (sec m) ) = kg sec^−2 m

Thus the units on µ are kg sec^−1 m^−1

as claimed above.

Returning to our original discussion, you may think that we would want

l = f (A,B,θ ,V ,V0,ρ , µ)

This is very cumbersome because it depends on seven variables. Also, it is likely that without much care,

a change in the units such as going from meters to feet would result in an incorrect value for l. The way to

get around this problem is to look for l as a function of dimensionless variables multiplied by something

which has units of force. It is helpful because first of all, you will likely have fewer independent variables

and secondly, you could expect the formula to hold independent of the way of specifying length, mass and

so forth. One looks for

l = f (g1, · · · , gk) ρV^2 AB

where the units on ρV^2 AB are

( kg / m^3 ) ( m / sec )^2 ( m^2 ) = ( kg × m ) / sec^2

which are the units of force. Each of these gi is of the form

A^x1 B^x2 θ^x3 V^x4 V0^x5 ρ^x6 µ^x7    (1.11)

and each gi is independent of the dimensions. That is, this expression must not depend on meters, kilo-

grams, seconds, etc. Thus, placing in the units for each of these quantities, one needs

m^x1 m^x2 ( m^x4 sec^−x4 ) ( m^x5 sec^−x5 ) ( kg m^−3 )^x6 ( kg sec^−1 m^−1 )^x7 = m^0 kg^0 sec^0

Notice that there are no units on θ because it is just the radian measure of an angle. Hence its dimensions consist of length divided by length, thus it is dimensionless. Then this leads to the following equations for

the xi.

m :   x1 + x2 + x4 + x5 − 3x6 − x7 = 0
sec : −x4 − x5 − x7 = 0
kg :  x6 + x7 = 0

The augmented matrix for this system is

[ 1  1  0  1  1  −3  −1  0 ]
[ 0  0  0  1  1   0   1  0 ]
[ 0  0  0  0  0   1   1  0 ]
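Before row reducing, candidate exponent vectors can be tested against these three equations directly. The sketch below (our own illustration; the choice of groups is ours) checks two plausible dimensionless combinations, the aspect ratio A/B and the speed ratio V/V0.

```python
# Each candidate is (x1, ..., x7), the exponents of (A, B, theta, V, V0, rho, mu)
# in expression (1.11). A/B and V/V0 are two plausible dimensionless groups
# (our choice, for illustration); both should satisfy the m, sec, and kg
# balance equations derived in the text.
candidates = {
    "A/B":  (1, -1, 0, 0, 0, 0, 0),
    "V/V0": (0, 0, 0, 1, -1, 0, 0),
}
for name, (x1, x2, x3, x4, x5, x6, x7) in candidates.items():
    assert x1 + x2 + x4 + x5 - 3 * x6 - x7 == 0   # meters
    assert -x4 - x5 - x7 == 0                     # seconds
    assert x6 + x7 == 0                           # kilograms
    print(name, "is dimensionless")
```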
