ABSTRACT ALGEBRA
Second Edition
ABSTRACT ALGEBRA
A First Course
Second Edition
Dan Saracino Colgate University
WAVELAND PRESS, INC.
Long Grove, Illinois
For information about this book, contact: Waveland Press, Inc. 4180 IL Route 83, Suite 101 Long Grove, IL 60047-9580 (847) 634-0081 info@waveland.com www.waveland.com
Copyright © 2008, 1980 by Dan Saracino
10-digit ISBN 1-57766-536-8  13-digit ISBN 978-1-57766-536-6
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means without permission in writing from the publisher.
Printed in the United States of America
7 6 5 4 3 2
CONTENTS
0  Sets and Induction  1
1  Binary Operations  10
2  Groups  16
3  Fundamental Theorems about Groups  25
4  Powers of an Element; Cyclic Groups  33
5  Subgroups  43
6  Direct Products  55
7  Functions  59
8  Symmetric Groups  66
9  Equivalence Relations; Cosets  80
10  Counting the Elements of a Finite Group  88
11  Normal Subgroups  99
12  Homomorphisms  109
13  Homomorphisms and Normal Subgroups  121
14  Direct Products and Finite Abelian Groups  133
15  Sylow Theorems  143
16  Rings  153
17  Subrings, Ideals, and Quotient Rings  164
18  Ring Homomorphisms  177
19  Polynomials  191
20  From Polynomials to Fields  205
21  Unique Factorization Domains  211
22  Extensions of Fields  227
23  Constructions with Straightedge and Compass  240
24  Normal and Separable Extensions  249
25  Galois Theory  265
26  Solvability  279
Suggestions for Further Reading  297
Answers to Selected Exercises  301
Index  307
PREFACE
This book is intended for use in a junior-senior level course in abstract algebra. The main change from the first edition is the addition of five new sections on field extensions and Galois theory, providing enough material for a two- semester course. More minor changes include the simplification of some points in the presentation, the addition of some new exercises, and the updating of some historical material.
In the earlier sections of the book I have preserved the emphasis on providing a large number of examples and on helping students learn how to write proofs. In the new sections the presentation is at a somewhat higher level. Unusual features, for a book that is still relatively short, are the inclusion of full proofs of both directions of Gauss' theorem on constructible regular polygons and Galois' theorem on solvability by radicals, a Galois-theoretic proof of the Fundamental Theorem of Algebra, and a proof of the Primitive Element Theorem.
A one-semester course should probably include the material of Sections 0-13, and some of the material on rings in Section 16 and the following sections. Sections 14 and 15 allow the inclusion of some deeper results on groups. The results of Section 14 are used in Section 15, and the First Sylow Theorem from Section 15 is used in Sections 25 and 26.
In two semesters it should be possible to cover the whole book, possibly omitting Section 21.
I want to express my appreciation to my students who used the manuscript for the five new sections as a text and pointed out to me parts of the presentation that needed clarification. I also want to thank all those who have sent me comments about the book over the years, and those who suggested that a new edition would be a good idea. I hope this second edition will be useful.
Dan Saracino
SECTION 0
SETS AND INDUCTION
One of the most fundamental notions in any part of mathematics is that of a set. You are probably already familiar with the basics about sets, but we will start out by running through them quickly, if for no other reason than to establish some notational conventions. After these generalities, we will make some remarks about the set of positive integers, and in particular about the method of mathematical induction, which will be useful to us in later proofs.
For us, a set will be just a collection of entities, called the elements or members of the set. We indicate that some object x is an element of a set S by writing x ∈ S. If x is not an element of S, we write x ∉ S.
In order to specify a set S, we must indicate which objects are elements of S. If S is finite, we can do this by writing down all the elements inside braces. For example, we write
S={1,2,3,4}
to signify that S consists of the positive integers 1,2,3, and 4. If S is infinite, then we cannot list all its elements, but sometimes we can give enough of them to make it clear what set S is. For instance,
S= {1,4, 7,10,13,16, ... }
indicates the set of all positive integers that are of the form 1 + 3k for some nonnegative integer k.
We can also specify a set by giving a criterion that determines which objects are in the set. Using this method, the set {1, 2, 3, 4} could be denoted by

{x | x is a positive integer ≤ 4},
where the vertical bar stands for the words "such that." Likewise, the set {1,4,7, 10, 13, 16, ... } could be written as
{x | x = 1 + 3k for some nonnegative integer k}.
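Both ways of specifying a set carry over directly to computation. The following Python sketch is only an illustration (the names and the cutoff at 100 are arbitrary choices, made so that the computation is finite): it builds a set once by listing elements and once by a criterion.

    # The set {1, 2, 3, 4}, specified by listing its elements.
    S = {1, 2, 3, 4}
    print(S)

    # The set {1, 4, 7, 10, 13, 16, ...} specified by a criterion,
    # truncated at 100 so that the computation is finite.
    T_by_formula = {1 + 3 * k for k in range(34)}
    T_by_criterion = {x for x in range(1, 101) if (x - 1) % 3 == 0}
    print(T_by_formula == T_by_criterion)   # True: the two descriptions agree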
Some sets occur so frequently that it is worthwhile to adopt special notations for them. For example, we use
Z to denote the set of all integers,
Q to denote the set of all rational numbers,
R to denote the set of all real numbers, and
C to denote the set of all complex numbers.
The symbol ∅ denotes the empty set or null set, i.e., the set with no elements.

Sometimes we wish to express the fact that one set is included in another,
i.e., that every element of the first set is also an element of the second set. We do so by saying that the first set is a subset of the second.
DEFINITION If S and T are sets, then we say that S is a subset of T, and write S ⊆ T, if every element of S is an element of T.
Examples If S = {1, 2, 3} and T = {1, 2, 3, 4, 5}, then S ⊆ T.

If S = {π, √2} and T = {π, 5, √2}, then S ⊆ T. We write S ⊈ {5, √2} because π ∈ S but π ∉ {5, √2}.

If we let

Z+ = {x | x is a positive integer},

then Z+ ⊆ Z; similarly, we have Q+ ⊆ Q and R+ ⊆ R.
Observe that for any set S, S ⊆ S, that is, S is a subset of itself. Also observe that ∅ ⊆ S, no matter what set S is. Perhaps the best way to see this is to ask yourself how it could be false. If ∅ ⊈ S, then there is some x ∈ ∅ which is not in S; but this is nonsense, because there is no x ∈ ∅, period.
We say that two sets S and T are equal, and we write S = T, if S and T have the same elements. Clearly, then, saying that S = T is equivalent to saying that both S ⊆ T and T ⊆ S. If S ⊆ T but S ≠ T, we say that S is a proper subset of T. If we wish to emphasize that S is a proper subset, we write S ⊊ T.

Very often we consider sets that are obtained by performing some operation on one or more given sets. For example, if S and T are sets, then their intersection, denoted by S ∩ T, is defined by

S ∩ T = {x | x ∈ S and x ∈ T}.

The union of S and T, denoted by S ∪ T, is given by

S ∪ T = {x | x ∈ S or x ∈ T or both}.
The union and intersection of more than two sets are defined in an analogous way; for instance,
S ∩ T ∩ U = {x | x ∈ S and x ∈ T and x ∈ U}.
Examples Let S = {1, 2, 3, 4, 5} and T = {2, 4, 6}. Then S ∩ T = {2, 4} and S ∪ T = {1, 2, 3, 4, 5, 6}.
Again let S = {1, 2, 3, 4, 5}. Then S ∩ R = S and S ∪ R = R.
We can illustrate many of the notions we have introduced by generalizing this last example.
THEOREM 0.1 Let S and T be sets. Then S ⊆ T if and only if S ∩ T = S.

PROOF. We must show that S ⊆ T implies S ∩ T = S, and that conversely S ∩ T = S implies S ⊆ T.

Assume S ⊆ T. To show that S ∩ T = S we have to show that S ∩ T ⊆ S and S ⊆ S ∩ T. The first is clear; for the second we must show that every element of S is an element of S and of T. Clearly any element of S is an element of S; and since we are assuming S ⊆ T, any element of S is an element of T, too, so we are done with the first half of the proof.

Now assume S ∩ T = S; we show that S ⊆ T. Why is it true that any element of S is an element of T? Because any element of S is an element of S ∩ T by our assumption, and any element of S ∩ T is clearly an element of T. □
It is also true that S ⊆ T if and only if S ∪ T = T. The proof of this is left as an exercise.

As a matter of notation, we adopt the abbreviation "iff" for "if and only if." Thus we say that S ⊆ T iff S ∪ T = T. Sometimes the symbol ⇔ is used in place of iff; using ⇔, we would write S ⊆ T ⇔ S ∪ T = T.
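Theorem 0.1, together with the companion statement about unions, is easy to test on particular finite sets. The following Python sketch is only an illustration; the sets S and T and the helper name are arbitrary choices.

    def is_subset(S, T):
        # S is a subset of T when every element of S is an element of T.
        return all(x in T for x in S)

    S = {1, 2, 3}
    T = {1, 2, 3, 4, 5}

    # Theorem 0.1: S is a subset of T if and only if S intersect T equals S.
    print(is_subset(S, T) == ((S & T) == S))   # True

    # The companion fact left as an exercise: S is a subset of T iff S union T equals T.
    print(is_subset(S, T) == ((S | T) == T))   # True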
One set that is particularly important in mathematics is the set Z+ of positive integers. We will see that, in abstract algebra, concepts defined in terms of positive integers can often help to clarify what is going on. For this reason, methods for working with integers can be very valuable tools. Perhaps the most useful strategy for proving things about Z+ is the method of mathematical induction.
Suppose we have in mind a statement P(n) about the integer n. For example, P(n) might say "n is even," or "n is the square of some integer," or "If p is a prime, then every group of order pⁿ has nontrivial center" (whatever that means). Mathematical induction provides us with a way of trying to prove that P(n) is true for every positive n.
The technique rests on an intuitively acceptable axiom called the
Well-Ordering Principle: Every nonempty subset of Z+ has a smallest element.
The well-ordering principle yields two slightly different forms of induction, both of which are good to know.
THEOREM 0.2 (Mathematical Induction, first form) Suppose P(n) is a statement about positive integers, and we know two things:

i) P(1) is true;

ii) for every positive m, if P(m) is true, then P(m + 1) is true.

Under these circumstances, we can conclude that P(n) is true for all positive n.
PROOF. Suppose P(n) is false for some positive n. Then S = {n | n ∈ Z+ and P(n) is false} is a nonempty subset of Z+. By the Well-Ordering Principle, S has a smallest element, say n₀. Clearly n₀ ≠ 1, because P(1) is true by (i). Therefore, n₀ - 1 is a positive integer, and P(n₀ - 1) is true because n₀ - 1 is smaller than n₀. By (ii), this means that P(n₀ - 1 + 1) is true; that is, P(n₀) is true, and this contradicts the fact that P(n₀) is false!

Since the supposition that P(n) is false for some n has led us to a contradiction, we conclude that P(n) holds for all n ∈ Z+. □
What you do to prove something by induction, then, is this. You first show that P(1) is true (this is usually trivial). You then show that for an arbitrary positive m, if P(m) is true, then P(m + 1) is true. You do this by assuming that P(m) is true and using this assumption to establish that P(m + 1) is true.

Sometimes people are bothered by the word "assuming" in the last sentence. They get worried that "assuming that P(m) is true" amounts to assuming what we are trying to prove. But it does not, for we are not assuming that P(m) is true for all m. Rather we are arguing, for an arbitrary fixed m, that if P(m) is true for that m, then so is P(m + 1). The only way of doing this is to show that P(m + 1) is true on the basis of the assumption that P(m) is.
Examples
1. You may recall that, in calculus, when you are evaluating definite integrals from the definition as a limit of Riemann sums, you run into sums such as 1 + 2 + 3 + ... + n, and you need formulas for these sums in terms of n. The formula for 1 + 2 + ... + n, for example, is n(n + 1)/2. Let's prove it, by induction.

We take for P(n) the statement that

1 + 2 + ... + n = n(n + 1)/2;
we hope to show that P(n) is true for all positive n. First we check P(1): It says that 1 = 1(1 + 1)/2, which is certainly true. Second, we assume that P(m) is true for some arbitrary positive m, and we use this assumption to show that P(m + 1) is true. That is, we assume that
1 + 2 + ... + m = m(m + 1)/2    [0.1]

and we try to show that

1 + 2 + ... + m + (m + 1) = (m + 1)(m + 1 + 1)/2.    [0.2]

The natural thing to do is to add m + 1 to both sides of Eq. [0.1]:

1 + 2 + ... + m + (m + 1) = [m(m + 1)/2] + m + 1.

Now

[m(m + 1)/2] + m + 1 = [m(m + 1) + 2(m + 1)]/2 = [(m + 1)(m + 2)]/2,

so we have Eq. [0.2]. Therefore, by induction, P(n) holds for all positive n, and we are done.
2. A similar formula, also used in calculus, is

1² + 2² + ... + n² = n(n + 1)(2n + 1)/6.
If we take this equation as P(n), then P(1) says

1² = 1(1 + 1)(2 + 1)/6,

which is true. To show that P(m) implies P(m + 1), we assume that

1² + 2² + ... + m² = m(m + 1)(2m + 1)/6,    [0.3]

and we try to show that

1² + 2² + ... + (m + 1)² = (m + 1)(m + 2)[2(m + 1) + 1]/6.    [0.4]

If we add (m + 1)² to both sides of Eq. [0.3], we conclude that

1² + 2² + ... + m² + (m + 1)² = m(m + 1)(2m + 1)/6 + (m + 1)²,

and the right-hand side is

[m(m + 1)(2m + 1) + 6(m + 1)²]/6 = (m + 1)[m(2m + 1) + 6(m + 1)]/6
= (m + 1)(2m² + 7m + 6)/6 = (m + 1)(m + 2)(2m + 3)/6,

so we have Eq. [0.4]. Thus our formula is established by induction.
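Induction establishes both closed forms for every positive n at once; still, a quick numerical check for small n is a useful guard against algebra slips. A minimal Python sketch (the bound 50 is an arbitrary choice):

    # Check 1 + 2 + ... + n = n(n + 1)/2 and
    # 1^2 + 2^2 + ... + n^2 = n(n + 1)(2n + 1)/6 for n = 1, ..., 50.
    for n in range(1, 51):
        assert sum(range(1, n + 1)) == n * (n + 1) // 2
        assert sum(k * k for k in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
    print("both formulas check out for n = 1, ..., 50")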
3. The following popular example illustrates the fact that some care is necessary in trying to prove something by induction. We shall "prove" that all horses are the same color.
Let P(n) be the statement: "For every set of n horses, all the horses in the set are the same color." We "prove" P(n) for all n by induction. Clearly, P(1) is true since any horse is the same color as itself. Now assume that P(m) is true and let us show that P(m + 1) holds. Let S be a set of m + 1 horses; say the horses in S are h₁, h₂, ..., hₘ₊₁ (h for horse). Now h₁, h₂, ..., hₘ comprise a set of m horses, so since P(m) holds, h₁, h₂, ..., hₘ are all the same color. Likewise, h₂, h₃, ..., hₘ₊₁ make up a set of m horses, so h₂, h₃, ..., hₘ₊₁ are all the same color. Combining these statements, we see that all m + 1 horses are the same color (for instance, they are all the same color as h₂).
There must be something wrong with this, but what?
The second form of induction is similar to the first, except that (ii) is modified somewhat.
THEOREM 0.3 (Mathematical Induction, second form) Suppose P(n) is a statement about positive integers and we know two things:

i) P(1) is true;

ii) for every positive m, if P(k) is true for all positive k < m, then P(m) is true.
Under these circumstances, we can conclude that P(n) is true for all positive n.
The verification that this works is like that for the first form, and we leave it as an exercise.
The part that one assumes in trying to establish (ii) in a proof by induction (either form) is called the inductive hypothesis. Thus in the first form, the inductive hypothesis is the assumption that P(m) is true, from which we try to argue that P(m + 1) is true. In the second form, the inductive hypothesis is that P(k) is true for all positive k < m, from which we try to argue that P(m) is true.
Which form of induction one should use in any particular case depends on the situation at hand. We have seen some examples where the first form is appropriate, and we would like to close with a significant instance where the second form is called for.
If a and b are integers, then we say that a divides b, and we write a | b, if there is an integer c such that b = ac. If no such c exists, we write a ∤ b. For instance, 2 | 6, 7 | 14, and 3 ∤ 10. A positive integer p is called a prime if p > 1 and the only positive integers that divide p are 1 and p. The first few primes are 2, 3, 5, 7, 11, 13, 17, ....
The Fundamental Theorem of Arithmetic asserts that every positive n> 1 can be written as the product of finitely many primes, and that except for a possible rearrangement of the factors, there is only one such factorization.
THEOREM 0.4 (Fundamental Theorem of Arithmetic) Let n > 1. Then there are primes p₁, p₂, p₃, ..., pᵣ (not necessarily distinct) such that n = p₁p₂...pᵣ. Moreover, if n = q₁q₂...qₛ is another such factorization, then r = s and the pᵢ's are the qⱼ's, possibly rearranged.
PROOF. We prove only the existence of a factorization here. The proof of the uniqueness assertion requires some more groundwork and is left to the exercises in Section 4.
We wish to prove that P(n) holds for every n ≥ 2, where P(n) says that n can be written as a product of primes. Accordingly, we start our induction at 2 rather than at 1; we show (see Exercise 0.13) that P(2) is true and that for any m > 2, if P(k) holds for all 2 ≤ k < m, then P(m) holds. P(2) is true because 2 is itself a prime. Now let m > 2 and assume that P(k) holds for all 2 ≤ k < m. If m is a prime, then P(m) holds. If m is not a prime, then we can write m = ab, where neither a nor b is m. Thus 2 ≤ a, b < m, so by the inductive hypothesis we can write

a = p₁p₂...pₜ,   b = q₁q₂...qᵤ

for primes pᵢ, qⱼ. Then

m = ab = p₁p₂...pₜq₁q₂...qᵤ,

and we have shown that m can be factored into primes. □
Notice that all we can say about a and b here is that they are less than m, so we need the inductive hypothesis as in the second form of induction, in order for this proof to go through.
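The existence half of the argument is essentially a recursive recipe: a number m > 1 either is prime or factors as m = ab with both factors smaller, and those factors are handled by the inductive hypothesis. The Python sketch below follows that recipe; the helper names are ad hoc, and the trial-division test for finding a divisor is just the simplest choice, not something dictated by the proof.

    def smallest_divisor(m):
        # Smallest d > 1 dividing m; such a d is necessarily prime.
        d = 2
        while d * d <= m:
            if m % d == 0:
                return d
            d += 1
        return m   # no divisor up to sqrt(m), so m itself is prime

    def factor(m):
        # Return a list of primes whose product is m (for m > 1),
        # mirroring the second form of induction.
        d = smallest_divisor(m)
        if d == m:                          # m is prime: P(m) holds outright
            return [m]
        return factor(d) + factor(m // d)   # both pieces are >= 2 and < m

    print(factor(360))   # [2, 2, 2, 3, 3, 5]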
EXERCISES
Sets
0.1 Let S = {2, 5, √2, 25, π, 5/2} and let T = {4, 25, √2, 6, 3/2}. Find S ∩ T and S ∪ T.

0.2 With S and T as in Exercise 0.1, show that
Z ∩ (S ∪ T) = (Z ∩ S) ∪ (Z ∩ T) and Z ∪ (S ∩ T) = (Z ∪ S) ∩ (Z ∪ T).
0.3 Let S and T be sets. Prove that
S ∩ (S ∪ T) = S and S ∪ (S ∩ T) = S.

0.4 Let S and T be sets. Prove that S ∪ T = T iff S ⊆ T.

0.5 Let A, B, and C be sets. Prove that A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C).

0.6 Let A, B, and C be sets. Prove that A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C).
Induction
0.7 What's wrong with the proof that all horses are the same color?
0.8 Prove that

1 + 3 + 5 + ... + (2n + 1) = (n + 1)²

for all n ≥ 1.

0.9 Prove that

2 + 4 + 6 + ... + 2n = n(n + 1)

for all n ≥ 1.

0.10 Prove that

for all n ≥ 1.
0.11 Prove Theorem 0.3.
0.12 Prove the following more general form of Theorem 0.2:
THEOREM. Suppose P(n) is a statement about positive integers and c is some fixed positive integer. Assume
i) P(c) is true; and
ii) for every m ≥ c, if P(m) is true, then P(m + 1) is true.

Then P(n) is true for all n ≥ c.
0.13 Prove the following more general version of Theorem 0.3:
THEOREM. Suppose P(n) is a statement about positive integers and c is some fixed positive integer. Assume
i) P(c) is true; and
ii) for every m > c, if P(k) is true for all k such that c ≤ k < m, then P(m) is true.

Then P(n) is true for all n ≥ c.
0.14 Prove that

1·2 + 2·3 + 3·4 + ... + (n - 1)n = (n - 1)(n)(n + 1)/3

for all n ≥ 2.
0.15 By trying a few cases, guess at a formula for
1/(1·2) + 1/(2·3) + 1/(3·4) + ... + 1/((n - 1)n),   n ≥ 2.
Try to prove that your guess is correct.
0.16 Prove that for every n ≥ 1, 3 divides n³ - n.
0.17 Prove that if a set S has n elements (where n ∈ Z+), then S has 2ⁿ subsets.
0.18 The Fibonacci sequence f₁, f₂, f₃, ... is defined as follows:

f₁ = f₂ = 1, f₃ = 2, f₄ = 3, f₅ = 5, f₆ = 8, ...,

and in general,

fₙ = fₙ₋₁ + fₙ₋₂ for all n ≥ 3.

Prove that f₅ₖ is divisible by 5 for every k ≥ 1, that is, 5 divides every 5th member of the sequence.

0.19 As in the preceding exercise, let fₖ denote the kth Fibonacci number. Prove that for every n ≥ 1,

(fₙ₊₁)² - fₙfₙ₊₂ = (-1)ⁿ.

0.20 Again let fₖ denote the kth Fibonacci number. Let

α = (1 + √5)/2 and β = (1 - √5)/2.

Prove that for every n ≥ 1,

fₙ = (αⁿ - βⁿ)/√5.

0.21 The nth Fermat number is Fₙ = 2^(2ⁿ) + 1. Prove that for every n ≥ 1,

F₀F₁F₂...Fₙ₋₁ = Fₙ - 2.
0.22 Prove that the following statement is true for every integer n ≥ 1: If the number of squares in a "checkerboard" is 2ⁿ × 2ⁿ and we remove any one square, then the remaining part of the board can be broken up into L-shaped regions each consisting of 3 squares.
SECTION 1
BINARY OPERATIONS
In high school algebra, we spend a great deal of time solving equations involving real or complex numbers. At the heart of it, what we are really doing is answering questions about the addition and multiplication of these numbers.
In abstract algebra, we take a more general view, starting from the observation that addition and multiplication are both just ways of taking two elements and producing a third, in such a way that certain laws are obeyed. We study the situation where we have a set and one or more "operations" for producing outputs from given inputs, subject to some specified rules.
From this description, it is probably not clear to you why abstract algebra is any more profitable than going off and counting the grains of sand on the nearest beach. But it can be very profitable, for a number of reasons. For one thing, the abstract approach may clarify our thinking about familiar situations by stripping away irrelevant aspects of what is happening. For another, it may lead us to consider new systems that are valuable because they shed light on old problems. Yet again, a general approach can save us effort by dealing with a number of specific situations all at once. And finally, although it can take some time to appreciate this, abstraction can be just plain beautiful.
DEFINITION If S is a set, then a binary operation * on S is a function that associates to each ordered pair (s₁, s₂) of elements of S an element of S, which we denote by s₁ * s₂.
Observe that the definition says ordered pair. Thus (s₁, s₂) is not necessarily the same thing as (s₂, s₁), and s₁ * s₂ is not necessarily the same thing as s₂ * s₁. Notice too that * must assign an element of S to each and every pair (s₁, s₂), including those pairs in which s₁ and s₂ are the same element of S.
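In computational terms, a binary operation on S is just a two-argument function whose value always lies back in S. The Python sketch below is only an illustration (the set S and the operation are arbitrary choices, not examples from the text); it checks this "closure" condition over every ordered pair, including pairs whose two entries coincide.

    S = {1, 2, 3, 4}

    def star(a, b):
        # A candidate operation on S; it is a binary operation only if the
        # result lies in S for every ordered pair (a, b).
        return abs(a - b) + 1

    closed = all(star(a, b) in S for a in S for b in S)
    print(closed)   # True: |a - b| + 1 always lands in {1, 2, 3, 4} here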
Examples
1. Addition is a binary operation on Z+: a * b = a + b. Subtraction (a * b = a - b) is not; but subtraction is a binary operation on Z.

2. Multiplication (a * b = a·b) is a binary operation on Z+ or Z or Q or R. Division (a * b = a/b) is a binary operation on R+ or Q+ but not on Z or Z+ or Q or R.

3. a * b = a³ + b² + 1 is a binary operation on Z, R, Q, Z+, R+, or Q+.

4. Let X be some set and let S be the set of all subsets of X. For example, if X = {1}, then S = {∅, {1}}, and if X = {1, 2}, then

S = {∅, {1}, {2}, {1, 2}}.

The operation of intersection is a binary operation on S, since if A, B are elements of S, then A * B = A ∩ B is an element of S. Similarly, union gives us another binary operation on S.
5. Let S be the set of all 2 × 2 matrices with real entries. Thus an element of S looks like

( a  b )
( c  d ),

where a, b, c, d ∈ R. Define x * y to be the matrix product of x and y, that is,

( a  b ) ( e  f )   ( ae + bg  af + bh )
( c  d ) ( g  h ) = ( ce + dg  cf + dh ).

Then * is a binary operation on S.

The definition of binary operation doesn't impose any restrictions on *,
and in general a binary operation can be wildly misbehaved. Ordinarily, we want to consider operations that have at least something in common with familiar, concrete examples.
DEFINITIONS If * is a binary operation on S, then * is called commutative if s₁ * s₂ = s₂ * s₁ for every s₁, s₂ ∈ S. On the other hand, * is called associative if (s₁ * s₂) * s₃ = s₁ * (s₂ * s₃) for every s₁, s₂, s₃ ∈ S.
Examples
1. Subtraction on Z is neither commutative nor associative. For example,

1 - 2 ≠ 2 - 1 and (1 - 2) - 3 ≠ 1 - (2 - 3).

2. Let a * b = 2(a + b) on Z; then clearly * is commutative. Now

(a * b) * c = 2(a + b) * c = 2(2(a + b) + c) = 4a + 4b + 2c,
and

a * (b * c) = a * 2(b + c) = 2(a + 2(b + c)) = 2a + 4b + 4c.

Since these are not always the same, * is not associative.

3. Let a * b = 2ab on Z. Commutativity is again clear, and since

(a * b) * c = 2ab * c = 4abc and a * (b * c) = a * 2bc = 4abc,

* is also associative in this case.
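Claims like these can also be probed by brute force over a small range of integers: a single counterexample settles non-associativity, while an exhaustive check over a finite sample is merely supporting evidence for the algebra above. A rough Python sketch (the sample range and names are arbitrary choices):

    def commutative_on(op, values):
        return all(op(a, b) == op(b, a) for a in values for b in values)

    def associative_on(op, values):
        return all(op(op(a, b), c) == op(a, op(b, c))
                   for a in values for b in values for c in values)

    sample = range(-5, 6)
    star2 = lambda a, b: 2 * (a + b)   # the operation of Example 2
    star3 = lambda a, b: 2 * a * b     # the operation of Example 3

    print(commutative_on(star2, sample), associative_on(star2, sample))  # True False
    print(commutative_on(star3, sample), associative_on(star3, sample))  # True True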
4. Multiplication of 2 × 2 matrices is associative: (x * y) * z = x * (y * z) for all 2 × 2 matrices x, y, z, as may be verified by multiplying out both sides. This operation is not commutative, however. For example,

( 0  1 ) ( 1  2 )   ( 3  4 )        ( 1  2 ) ( 0  1 )   ( 2  1 )
( 1  0 ) ( 3  4 ) = ( 1  2 )   but  ( 3  4 ) ( 1  0 ) = ( 4  3 ).

Before we consider a slightly more complicated example, observe that
these first four examples say something about the relationship between commutativity and associativity-namely, that there isn't any! A binary operation can be commutative with or without being associative, and it can be noncommutative with or without being associative.
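Returning to Example 4, the two products displayed there are easy to recompute. Here is a small Python sketch with 2 × 2 matrices written as nested lists; it is only an illustration of the arithmetic, not a general matrix library.

    def matmul(x, y):
        # Product of 2 x 2 matrices given as [[a, b], [c, d]].
        (a, b), (c, d) = x
        (e, f), (g, h) = y
        return [[a * e + b * g, a * f + b * h],
                [c * e + d * g, c * f + d * h]]

    A = [[0, 1], [1, 0]]
    B = [[1, 2], [3, 4]]

    print(matmul(A, B))   # [[3, 4], [1, 2]]
    print(matmul(B, A))   # [[2, 1], [4, 3]] -- different, so * is not commutative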
5. We introduce another operation on sets: If A, B are sets, then A Δ B denotes the symmetric difference of A and B. This is by definition the set of all elements that belong to either A or B, but not to both. Thus

A Δ B = (A - B) ∪ (B - A),

where A - B denotes the set of elements of A that are not elements of B, and B - A denotes the set of elements of B that are not elements of A. We can also write

A Δ B = (A ∪ B) - (A ∩ B).

In Fig. 1.1, if A is the set of points inside the square, and B is the set of points inside the circle, then A Δ B consists of the points in the shaded region:

Figure 1.1
For example, if A = {1, 2, 3, 4, 5} and B = {2, 4, 6}, then A Δ B = {1, 3, 5, 6}. If

A = {x | x ∈ Z and x > 0} and B = {x | x ∈ Z and x < 0},

then

A Δ B = {x | x ∈ Z and x ≠ 0}.

For any set A, A Δ A = ∅ and A Δ ∅ = A.

Now let X be a set and let S be the set of subsets of X. Then A * B = A Δ B defines a binary operation on S. It is easy to see that Δ is commutative (Exercise 1.7). We claim that Δ is an associative operation on S; that is, if A, B, C are subsets of X, then

(A Δ B) Δ C = A Δ (B Δ C).
To verify this we have to show that each side of the equation is contained in the other. Let's assume that x ∈ (A Δ B) Δ C and show that x ∈ A Δ (B Δ C).

Since x ∈ (A Δ B) Δ C, we have either:

Case I. x ∈ (A Δ B) and x ∉ C, or

Case II. x ∉ (A Δ B) and x ∈ C.

In Case I, x ∈ (A Δ B) implies that either x ∈ A and x ∉ B, or x ∉ A and x ∈ B. If the first of these holds, then we have x ∈ A, x ∉ B, x ∉ C, so that x ∈ A and x ∉ (B Δ C), so x ∈ A Δ (B Δ C). If the second holds, then we have x ∉ A, x ∈ B, x ∉ C, so that x ∉ A and x ∈ (B Δ C), so x ∈ A Δ (B Δ C).

In Case II, x ∉ A Δ B implies that either x ∈ A and x ∈ B, or x ∉ A and x ∉ B. If the first alternative holds, then we have x ∈ A, x ∈ B, and x ∈ C, so that x ∈ A and x ∉ (B Δ C), and therefore x ∈ A Δ (B Δ C). If the second alternative holds, then we have x ∉ A, x ∉ B, and x ∈ C, so x ∉ A and x ∈ (B Δ C), and therefore x ∈ A Δ (B Δ C).

This completes the proof that (A Δ B) Δ C ⊆ A Δ (B Δ C). The proof of the reverse inclusion is left as an exercise.
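Since only finitely many sets are involved once X is fixed and small, the associativity of Δ can also be confirmed exhaustively, say over all subsets of {1, 2, 3}. This does not replace the argument above, but it is a quick sanity check; a minimal Python sketch:

    from itertools import combinations

    X = {1, 2, 3}
    # All subsets of X, as frozensets so they can be compared by value.
    subsets = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

    def sym_diff(A, B):
        return (A - B) | (B - A)   # elements in exactly one of A and B

    ok = all(sym_diff(sym_diff(A, B), C) == sym_diff(A, sym_diff(B, C))
             for A in subsets for B in subsets for C in subsets)
    print(ok)   # True: (A delta B) delta C = A delta (B delta C) for all subsets of X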
Before leaving these ideas it might be well to say a bit more about the need for talking about them in the first place. We are all so familiar with the commutativity and associativity of addition and multiplication on the integers, rationals, and reals that we take them for granted. It never occurs to us that they could be called into question. The point is that as we broaden our perspective and consider less familiar systems, we run into many where commutativity and associativity no longer hold true. Thus we have to be careful in working with such systems; we can only use commutativity and associativity in carrying out calculations if we have first ascertained that they happen to hold for the system we are working with. Calculations with the
integers are easy because we can replace xy by yx whenever we feel like it, and we never have to worry about (x + y) + z being the same as x + (y + z). In general, we are not so lucky, and we do have to worry about such things.
EXERCISES
1.1 For the given sets S and T, find S Δ T.

a) S = {2, 5, √2, 25, π, 5/2}, T = {4, 25, √2, 6, 3/2}
b)S={(! ~),(~ ~),(g !I)'(! !)}, T= {(~ ~),(g n,u ~)}
1.2 If A, B, and C are the sets of points inside the three circles below, what region represents (A Δ B) Δ C?

1.3 In each case, determine whether or not the given * is a binary operation on the given set S.
a) S = Z, a * b = a + b²

b) S = Z, a * b = a²b³

c) S = Q, a * b = a/(a² + b²)

d) S = Z, a * b = (a² + 2ab + b²)/(a + b)

e) S = Z, a * b = a + b - ab

f) S = R, a * b = b

g) S = {1, -2, 3, 2, -4}, a * b = |b|

h) S = {1, 6, 3, 2, 18}, a * b = ab.
i) S = the set of all 2 × 2 matrices with real entries, and if
then
j) S = the set of all subsets of a set X, A * B = (A Δ B) Δ B.
1.4 Is division a commutative operation on R+? Is it associative?
1.5 If S is the set of subsets of a set X, is the operation of intersection commutative on S? Is it associative?

1.6 For each case in Exercise 1.3 in which * is a binary operation on S, determine whether * is commutative and whether it is associative.
1.7 Show that symmetric difference is a commutative operation.
1.8 Complete the proof that symmetric difference is an associative operation.
1.9 If S is a finite set, then we can define a binary operation on S by writing down all the values of s₁ * s₂ in a table. For instance, if S = {a, b, c, d}, then the following gives a binary operation on S.

                 Columns
            *    a    b    c    d
            a    a    c    b    d
    Rows    b    c    a    d    b
            c    b    d    a    c
            d    d    b    c    a

Here, for s₁, s₂ ∈ S, s₁ * s₂ is the element in row s₁ and column s₂. For example, c * b = d.
Is the above binary operation commutative? Is it associative?
1.10 How many binary operations are there on a set S with n elements? How many of these binary operations are commutative?
SECTION 2
GROUPS
As we have said, algebra is the study of various kinds of abstract systems. One of the most fundamental of these systems is the kind known as a group. In this section we shall get acquainted with this concept.
Certainly one of the attractive features of modern mathematics is the intertwining that takes place among its different branches. If you study algebra, analysis, topology, or mathematical logic, for example, you will see that certain ideas recur in all of them. The notion of a group is one of these ideas that seem to crop up everywhere. Beyond that, groups are important in areas where mathematics is used as a tool, such as chemistry, quantum mechanics, and elementary particle physics.
DEFINITION Suppose that:

i) G is a set and * is a binary operation on G,

ii) * is associative,

iii) there is an element e in G such that x * e = e * x = x for all x in G, and

iv) for each element x ∈ G there is an element y ∈ G such that x * y = y * x = e.
Then G, together with the binary operation *, is called a group, and is denoted by (G, *).
Observe that (iii) implies that G is nonempty. The element e in (iii) is called an identity element of G (we shall soon see that there is only one such element, so we will be able to call it the identity element of G). The element y in (iv) is called an inverse of x; we shall see that any element x has only one inverse, so we can call it the inverse of x. It is worth emphasizing the fact that the single identity element e satisfies x * e = e * x = x for any choice of x, while the y in (iv) depends on x. We shall see that two distinct elements of G can
never have the same inverse, so that if we picked a different x we would have a different y.
It is also worth emphasizing that we do not assume that * is commutative. Groups with the special added property that their operation is commutative are called abelian, after the Norwegian mathematician Niels Abel (1802-1829).
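For a finite set with an explicitly given operation, conditions (i)-(iv) can be checked mechanically. The Python sketch below is only an illustration, with its names and example chosen arbitrarily; it uses the subsets of {1, 2} under the symmetric difference operation Δ from Section 1, which do in fact satisfy all four conditions, with the empty set as identity element.

    from itertools import combinations

    X = {1, 2}
    G = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]
    star = lambda A, B: (A - B) | (B - A)        # symmetric difference

    closed = all(star(a, b) in G for a in G for b in G)                            # (i)
    assoc = all(star(star(a, b), c) == star(a, star(b, c))
                for a in G for b in G for c in G)                                  # (ii)
    identities = [e for e in G if all(star(a, e) == star(e, a) == a for a in G)]   # (iii)
    e = identities[0]
    has_inverses = all(any(star(x, y) == star(y, x) == e for y in G) for x in G)   # (iv)

    print(closed, assoc, identities, has_inverses)
    # True True [frozenset()] True -- so this (G, *) satisfies the group axioms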
Before we begin investigating the properties of groups in general, we will examine a number of examples in order to try to convey some impression of the universality of this notion. Historically, of course, the examples preceded the abstract definition; the abstract concept of "group" came into being only when people realized that many objects they were working with had various structural characteristics in common, and got the idea that some economy of effort could be achieved by studying these common features abstractly rather than over and over again in each separate case. In discussing the development of the subject, E. T. Bell has remarked that "wherever groups disclosed themselves, or could be introduced, simplicity crystallized out of comparative chaos."
We begin with the most familiar examples.
1. (Z, +). 0 is an identity element, and -x is an inverse for x:

x + (-x) = (-x) + x = 0.

We are all familiar with the fact that + is an associative binary operation on Z.

Similarly, (Q, +) and (R, +) are groups.

2. (Q+, ·). Multiplication is an associative binary operation on Q+, 1 is an identity element, and 1/x is an inverse for x since x · (1/x) = (1/x) · x = 1.

Similarly, (R+, ·) is a group. Observe, however, that (Z+, ·) is not a group, since no element other than 1 has an inverse. (R - {0}, ·) and (Q - {0}, ·) are groups. The set R- of negative real numbers under multiplication is not a group because multiplication is not a binary operation on R-.

3. The set of n-tuples (a₁, ..., aₙ) of real numbers is a group under the binary operation given by addition of n-tuples:

(a₁, a₂, ..., aₙ) * (b₁, b₂, ..., bₙ) = (a₁ + b₁, a₂ + b₂, ..., aₙ + bₙ).

(0, 0, ..., 0) is an identity element and (-a₁, -a₂, ..., -aₙ) is an inverse for (a₁, a₂, ..., aₙ). The fact that this operation is associative follows from the associativity of addition on R.
If you have studied vector spaces, then you can see that, more generally, the elements of any vector space form a group under vector addition.
4. Consider the binary operation on R given by a * b = 2(a + b). Then (R, *) fails to be a group for several reasons. First of all, * is not associative. Secondly, there is no identity element. For suppose e were one. Then we would have a * e = a for every a, that is, 2(a + e) = a. But this implies that e = -a/2, and clearly no one number can satisfy this equation for all a's. Thus there is no identity element e.

However, if we try instead R - {0} with the binary operation a * b = 2ab, then we do get a group. For now * is associative, and 1/2 is an identity element because for any a we have a * (1/2) = 2a · (1/2) = a, and similarly (1/2) * a = 2 · (1/2) · a = a. What is an inverse for a? If x is to work, then we need a * x = 1/2 = x * a, that is, 2ax = 1/2 = 2xa. Clearly x = 1/(4a) meets the requirement.
5. Consider the set of 2 × 2 matrices with real entries, under the binary operation given by matrix multiplication. Is this a group? The only candidate for an identity element is

I = ( 1  0 )
    ( 0  1 ),

because this is the one and only matrix E satisfying AE = EA = A for every 2 × 2 matrix A. But then an inverse for a matrix A would have to be a matrix B such that

AB = BA = I.

Since such a matrix need not exist (try the matrix A all of whose entries are 0), we do not have a group.
However, we can attempt to salvage something here by throwing away the matrices that do not have inverses. That is, we now let G be the set of all invertible 2 × 2 real matrices. An immediate question is whether matrix multiplication still gives us a binary operation on this smaller set. That is, if A and B are invertible, then is AB invertible? Recalling that matrix multiplication is associative, we see that if A' and B' are inverses for A and B, respectively, then

(AB)(B'A') = ((AB)B')A' = (A(BB'))A' = (AI)A' = AA' = I,

and similarly (B'A')(AB) = I,
so B'A' serves as an inverse for AB. Thus we do have a binary operation on the restricted set G, and it is
associative. There is an identity element in G, because the identity matrix I is invertible.
Finally, any A ∈ G has an inverse A' which is in G, because the equations AA' = A'A = I tell us that A' is invertible (its inverse is A). Thus G is indeed a group under matrix multiplication. It is called the general linear group of degree 2 over R and denoted by GL(2, R).
If

A = ( a  b )
    ( c  d )

is a matrix, then the number ad - bc is called the determinant of A. Determinants are often very useful in working with matrices. For instance, it is easy to check that if ad - bc ≠ 0, then A is invertible, and the matrix

(  d/(ad - bc)   -b/(ad - bc) )
( -c/(ad - bc)    a/(ad - bc) )

is an inverse for A.
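The inverse formula is easy to exercise numerically. The short Python sketch below applies it to one sample matrix (an arbitrary choice) and verifies the defining equations AA' = A'A = I; the Fraction type is used only to keep the arithmetic exact.

    from fractions import Fraction

    def matmul(x, y):
        # Product of 2 x 2 matrices given as [[a, b], [c, d]].
        (a, b), (c, d) = x
        (e, f), (g, h) = y
        return [[a * e + b * g, a * f + b * h],
                [c * e + d * g, c * f + d * h]]

    def inverse_2x2(A):
        (a, b), (c, d) = A
        det = a * d - b * c            # the determinant ad - bc
        if det == 0:
            raise ValueError("matrix is not invertible")
        # The matrix (d, -b; -c, a) divided entrywise by ad - bc, as in the text.
        return [[Fraction(d, det), Fraction(-b, det)],
                [Fraction(-c, det), Fraction(a, det)]]

    A = [[2, 1], [5, 3]]               # determinant 2*3 - 1*5 = 1
    A_inv = inverse_2x2(A)
    I = [[1, 0], [0, 1]]
    print(matmul(A, A_inv) == I and matmul(A_inv, A) == I)   # True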
6. Let G consist of all real-valued functions on the real line, with the
binary operation given by pointwise addition of functions: If f, g ∈ G then f + g is the function whose value at any x ∈ R is f(x) + g(x).

It is clear that this does give us a binary operation on G. To check associativity, we must verify that

(f + g) + h = f + (g + h)

for all f, g, h ∈ G, and this means that for every x ∈ R,

[(f + g) + h](x) = [f + (g + h)](x), that is, (f + g)(x) + h(x) = f(x) + (g + h)(x), that is,

[f(x) + g(x)] + h(x) = f(x) + [g(x) + h(x)],

which is true since f(x), g(x), and h(x) are real numbers and addition of real numbers is associative.