
INTRODUCTION TO DATA MINING

INTRODUCTION TO DATA MINING SECOND EDITION

PANG-NING TAN

Michigan State University

MICHAEL STEINBACH

University of Minnesota

ANUJ KARPATNE

University of Minnesota

VIPIN KUMAR

University of Minnesota

330 Hudson Street, NY NY 10013

Director, Portfolio Management: Engineering, Computer Science & Global Editions: Julian Partridge

Specialist, Higher Ed Portfolio Management: Matt Goldstein

Portfolio Management Assistant: Meghan Jacoby

Managing Content Producer: Scott Disanno

Content Producer: Carole Snyder

Web Developer: Steve Wright

Rights and Permissions Manager: Ben Ferrini

Manufacturing Buyer, Higher Ed, Lake Side Communications Inc (LSC): Maura Zaldivar-Garcia

Inventory Manager: Ann Lam

Product Marketing Manager: Yvonne Vannatta

Field Marketing Manager: Demetrius Hall

Marketing Assistant: Jon Bryant

Cover Designer: Joyce Wells, jWellsDesign

Full-Service Project Management: Chandrasekar Subramanian, SPi Global

Copyright ©2019 Pearson Education, Inc. All rights reserved. Manufactured in the United States of America. This publication is protected by Copyright, and permission should be obtained from the publisher prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or by any means, electronic, mechanical, photocopying, recording, or likewise. For information regarding permissions, request forms and the appropriate contacts within the Pearson Education Global Rights & Permissions department, please visit www.pearsonhighed.com/permissions/.

Many of the designations by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and the publisher was aware of a trademark claim, the designations have been printed in initial caps or all caps.

Library of Congress Cataloging-in-Publication Data on File

Names: Tan, Pang-Ning, author. | Steinbach, Michael, author. | Karpatne, Anuj, author. | Kumar, Vipin, 1956- author.

Title: Introduction to Data Mining / Pang-Ning Tan, Michigan State University, Michael Steinbach, University of Minnesota, Anuj Karpatne, University of Minnesota, Vipin Kumar, University of Minnesota.

Description: Second edition. | New York, NY : Pearson Education, [2019] | Includes bibliographical references and index.

Identifiers: LCCN 2017048641 | ISBN 9780133128901 | ISBN 0133128903

Subjects: LCSH: Data mining.

Classification: LCC QA76.9.D343 T35 2019 | DDC 006.3/12–dc23 LC record available at https://lccn.loc.gov/2017048641


ISBN-10: 0133128903

ISBN-13: 9780133128901

To our families …

Preface to the Second Edition Since the first edition, roughly 12 years ago, much has changed in the field of data analysis. The volume and variety of data being collected continues to increase, as has the rate (velocity) at which it is being collected and used to make decisions. Indeed, the term big data has been used to refer to the massive and diverse data sets now available. In addition, the term data science has been coined to describe an emerging area that applies tools and techniques from various fields, such as data mining, machine learning, statistics, and many others, to extract actionable insights from data, often big data.

The growth in data has created numerous opportunities for all areas of data analysis. The most dramatic developments have been in the area of predictive modeling, across a wide range of application domains. For instance, recent advances in neural networks, known as deep learning, have shown impressive results in a number of challenging areas, such as image classification, speech recognition, as well as text categorization and understanding. While not as dramatic, other areas, e.g., clustering, association analysis, and anomaly detection have also continued to advance. This new edition is in response to those advances.

Overview As with the first edition, the second edition of the book provides a comprehensive introduction to data mining and is designed to be accessible and useful to students, instructors, researchers, and professionals. Areas covered include data preprocessing, predictive modeling, association analysis, cluster analysis, anomaly detection, and avoiding false discoveries. The goal is to present fundamental concepts and algorithms for each topic, thus providing the reader with the necessary background for the application of data mining to real problems. As before, classification, association analysis, and cluster analysis are each covered in a pair of chapters. The introductory chapter covers basic concepts, representative algorithms, and evaluation techniques, while the following chapter discusses advanced concepts and algorithms. As before, our objective is to provide the reader with a sound understanding of the foundations of data mining, while still covering many important advanced topics. Because of this approach, the book is useful both as a learning tool and as a reference.

To help readers better understand the concepts that have been presented, we provide an extensive set of examples, figures, and exercises. The solutions to the original exercises, which are already circulating on the web, will be made public. The exercises are mostly unchanged from the last edition, with the exception of new exercises in the chapter on avoiding false discoveries. New exercises for the other chapters and their solutions will be available to instructors via the web. Bibliographic notes are included at the end of each chapter for readers who are interested in more advanced topics, historically important papers, and recent trends. These have also been significantly updated. The book also contains a comprehensive subject and author index.

What is New in the Second Edition? Some of the most significant improvements in the text have been in the two chapters on classification. The introductory chapter uses the decision tree classifier for illustration, but the discussion on many topics—those that apply across all classification approaches—has been greatly expanded and clarified, including topics such as overfitting, underfitting, the impact of training size, model complexity, model selection, and common pitfalls in model evaluation. Almost every section of the advanced classification chapter has been significantly updated. The material on Bayesian networks, support vector machines, and artificial neural networks has been significantly expanded. We have added a separate section on deep networks to address the current developments in this area. The discussion of evaluation, which occurs in the section on imbalanced classes, has also been updated and improved.

The changes in association analysis are more localized. We have completely reworked the section on the evaluation of association patterns (introductory chapter), as well as the sections on sequence and graph mining (advanced chapter). Changes to cluster analysis are also localized. The introductory chapter adds a discussion of K-means initialization techniques and updates the discussion of cluster evaluation. The advanced clustering chapter adds a new section on spectral graph clustering. Anomaly detection has been greatly revised and expanded. Existing approaches—statistical, nearest neighbor/density-based, and clustering-based—have been retained and updated, while new approaches have been added: reconstruction-based, one-class classification, and information-theoretic. The reconstruction-based approach is illustrated using autoencoder networks that are part of the deep learning paradigm. The data chapter has been updated to include discussions of mutual information and kernel-based techniques.

The last chapter, which discusses how to avoid false discoveries and produce valid results, is completely new, and is novel among other contemporary textbooks on data mining. It supplements the discussions in the other chapters with a discussion of the statistical concepts (statistical significance, p-values, false discovery rate, permutation testing, etc.) relevant to avoiding spurious results, and then illustrates these concepts in the context of data mining techniques. This chapter addresses the increasing concern over the validity and reproducibility of results obtained from data analysis. The addition of this last chapter is a recognition of the importance of this topic and an acknowledgment that a deeper understanding of this area is needed for those analyzing data.

The data exploration chapter and the appendices have been removed from the print edition of the book, but they will remain available on the web. A new appendix provides a brief discussion of scalability in the context of big data.

To the Instructor As a textbook, this book is suitable for a wide range of students at the advanced undergraduate or graduate level. Since students come to this subject with diverse backgrounds that may not include extensive knowledge of statistics or databases, our book requires minimal prerequisites. No database knowledge is needed, and we assume only a modest background in statistics or mathematics, although such a background will make for easier going in some sections. As before, the book, and more specifically, the chapters covering major data mining topics, are designed to be as self-contained as possible. Thus, the order in which topics can be covered is quite flexible. The core material is covered in Chapters 2 (data), 3 (classification), 5 (association analysis), 7 (clustering), and 9 (anomaly detection). We recommend at least a cursory coverage of Chapter 10 (Avoiding False Discoveries) to instill in students some caution when interpreting the results of their data analysis. Although the introductory data chapter (2) should be covered first, the basic classification (3), association analysis (5), and clustering chapters (7) can be covered in any order. Because of the relationship of anomaly detection (9) to classification (3) and clustering (7), these chapters should precede Chapter 9. Various topics can be selected from the advanced classification, association analysis, and clustering chapters (4, 6, and 8, respectively) to fit the schedule and interests of the instructor and students. We also advise that the lectures be augmented by projects or practical exercises in data mining. Although they are time consuming, such hands-on assignments greatly enhance the value of the course.

Support Materials The following support materials are available to all readers of this book at http://www-users.cs.umn.edu/~kumar/dmbook:

PowerPoint lecture slides

Suggestions for student projects

Data mining resources, such as algorithms and data sets

Online tutorials that give step-by-step examples for selected data mining techniques described in the book using actual data sets and data analysis software

Additional support materials, including solutions to exercises, are available only to instructors adopting this textbook for classroom use. The book’s resources will be mirrored at www.pearsonhighered.com/cs-resources.

Comments and suggestions, as well as reports of errors, can be sent to the authors through dmbook@cs.umn.edu.

Acknowledgments Many people contributed to the first and second editions of the book. We begin by acknowledging our families to whom this book is dedicated. Without their patience and support, this project would have been impossible.

We would like to thank the current and former students of our data mining groups at the University of Minnesota and Michigan State for their contributions. Eui-Hong (Sam) Han and Mahesh Joshi helped with the initial data mining classes. Some of the exercises and presentation slides that they created can be found in the book and its accompanying slides. Students in our data mining groups who provided comments on drafts of the book or who contributed in other ways include Shyam Boriah, Haibin Cheng, Varun Chandola, Eric Eilertson, Levent Ertöz, Jing Gao, Rohit Gupta, Sridhar Iyer, Jung-Eun Lee, Benjamin Mayer, Aysel Ozgur, Uygar Oztekin, Gaurav Pandey, Kashif Riaz, Jerry Scripps, Gyorgy Simon, Hui Xiong, Jieping Ye, and Pusheng Zhang. We would also like to thank the students of our data mining classes at the University of Minnesota and Michigan State University who worked with early drafts of the book and provided invaluable feedback. We specifically note the helpful suggestions of Bernardo Craemer, Arifin Ruslim, Jamshid Vayghan, and Yu Wei.

Joydeep Ghosh (University of Texas) and Sanjay Ranka (University of Florida) class tested early versions of the book. We also received many useful suggestions directly from the following UT students: Pankaj Adhikari, Rajiv Bhatia, Frederic Bosche, Arindam Chakraborty, Meghana Deodhar, Chris Everson, David Gardner, Saad Godil, Todd Hay, Clint Jones, Ajay Joshi, Joonsoo Lee, Yue Luo, Anuj Nanavati, Tyler Olsen, Sunyoung Park, Aashish Phansalkar, Geoff Prewett, Michael Ryoo, Daryl Shannon, and Mei Yang.

Ronald Kostoff (ONR) read an early version of the clustering chapter and offered numerous suggestions. George Karypis provided invaluable LaTeX assistance in creating an author index. Irene Moulitsas also provided assistance with LaTeX and reviewed some of the appendices. Musetta Steinbach was very helpful in finding errors in the figures.

We would like to acknowledge our colleagues at the University of Minnesota and Michigan State who have helped create a positive environment for data mining research. They include Arindam Banerjee, Dan Boley, Joyce Chai, Anil Jain, Ravi Janardan, Rong Jin, George Karypis, Claudia Neuhauser, Haesun Park, William F. Punch, György Simon, Shashi Shekhar, and Jaideep Srivastava. The collaborators on our many data mining projects, who also have our gratitude, include Ramesh Agrawal, Maneesh Bhargava, Steve Cannon, Alok Choudhary, Imme Ebert-Uphoff, Auroop Ganguly, Piet C. de Groen, Fran Hill, Yongdae Kim, Steve Klooster, Kerry Long, Nihar Mahapatra, Rama Nemani, Nikunj Oza, Chris Potter, Lisiane Pruinelli, Nagiza Samatova, Jonathan Shapiro, Kevin Silverstein, Brian Van Ness, Bonnie Westra, Nevin Young, and Zhi-Li Zhang.

The departments of Computer Science and Engineering at the University of Minnesota and Michigan State University provided computing resources and a supportive environment for this project. ARDA, ARL, ARO, DOE, NASA, NOAA, and NSF provided research support for Pang-Ning Tan, Michael Steinbach, Anuj Karpatne, and Vipin Kumar. In particular, Kamal Abdali, Mitra Basu, Dick Brackney, Jagdish Chandra, Joe Coughlan, Michael Coyle, Stephen Davis, Frederica Darema, Richard Hirsch, Chandrika Kamath, Tsengdar Lee, Raju Namburu, N. Radhakrishnan, James Sidoran, Sylvia Spengler, Bhavani Thuraisingham, Walt Tiernin, Maria Zemankova, Aidong Zhang, and Xiaodong Zhang have been supportive of our research in data mining and high-performance computing.

It was a pleasure working with the helpful staff at Pearson Education. In particular, we would like to thank Matt Goldstein, Kathy Smith, Carole Snyder, and Joyce Wells. We would also like to thank George Nichols, who helped with the artwork, and Paul Anagnostopoulos, who provided LaTeX support.

We are grateful to the following Pearson reviewers: Leman Akoglu (Carnegie Mellon University), Chien-Chung Chan (University of Akron), Zhengxin Chen (University of Nebraska at Omaha), Chris Clifton (Purdue University), Joydeep Ghosh (University of Texas, Austin), Nazli Goharian (Illinois Institute of Technology), J. Michael Hardin (University of Alabama), Jingrui He (Arizona State University), James Hearne (Western Washington University), Hillol Kargupta (University of Maryland, Baltimore County and Agnik, LLC), Eamonn Keogh (University of California-Riverside), Bing Liu (University of Illinois at Chicago), Mariofanna Milanova (University of Arkansas at Little Rock), Srinivasan Parthasarathy (Ohio State University), Zbigniew W. Ras (University of North Carolina at Charlotte), Xintao Wu (University of North Carolina at Charlotte), and Mohammed J. Zaki (Rensselaer Polytechnic Institute).

Over the years since the first edition, we have also received numerous comments from readers and students who have pointed out typos and various other issues. We are unable to mention these individuals by name, but their input is much appreciated and has been taken into account for the second edition.

Contents

Preface to the Second Edition

1 Introduction
1.1 What Is Data Mining?
1.2 Motivating Challenges
1.3 The Origins of Data Mining
1.4 Data Mining Tasks
1.5 Scope and Organization of the Book
1.6 Bibliographic Notes
1.7 Exercises

2 Data
2.1 Types of Data
2.1.1 Attributes and Measurement
2.1.2 Types of Data Sets
2.2 Data Quality
2.2.1 Measurement and Data Collection Issues
2.2.2 Issues Related to Applications
2.3 Data Preprocessing
2.3.1 Aggregation
2.3.2 Sampling
2.3.3 Dimensionality Reduction
2.3.4 Feature Subset Selection
2.3.5 Feature Creation
2.3.6 Discretization and Binarization
2.3.7 Variable Transformation
2.4 Measures of Similarity and Dissimilarity
2.4.1 Basics
2.4.2 Similarity and Dissimilarity between Simple Attributes
2.4.3 Dissimilarities between Data Objects
2.4.4 Similarities between Data Objects
2.4.5 Examples of Proximity Measures
2.4.6 Mutual Information
2.4.7 Kernel Functions*
2.4.8 Bregman Divergence*
2.4.9 Issues in Proximity Calculation
2.4.10 Selecting the Right Proximity Measure
2.5 Bibliographic Notes
2.6 Exercises

3 Classification: Basic Concepts and Techniques
3.1 Basic Concepts
3.2 General Framework for Classification
3.3 Decision Tree Classifier
3.3.1 A Basic Algorithm to Build a Decision Tree
3.3.2 Methods for Expressing Attribute Test Conditions
3.3.3 Measures for Selecting an Attribute Test Condition
3.3.4 Algorithm for Decision Tree Induction
3.3.5 Example Application: Web Robot Detection
3.3.6 Characteristics of Decision Tree Classifiers
3.4 Model Overfitting
3.4.1 Reasons for Model Overfitting
3.5 Model Selection
3.5.1 Using a Validation Set
3.5.2 Incorporating Model Complexity
3.5.3 Estimating Statistical Bounds
3.5.4 Model Selection for Decision Trees
3.6 Model Evaluation
3.6.1 Holdout Method
3.6.2 Cross-Validation
3.7 Presence of Hyper-parameters
3.7.1 Hyper-parameter Selection
3.7.2 Nested Cross-Validation
3.8 Pitfalls of Model Selection and Evaluation
3.8.1 Overlap between Training and Test Sets
3.8.2 Use of Validation Error as Generalization Error
3.9 Model Comparison*
3.9.1 Estimating the Confidence Interval for Accuracy
3.9.2 Comparing the Performance of Two Models
3.10 Bibliographic Notes
3.11 Exercises

4 Classification: Alternative Techniques
4.1 Types of Classifiers
4.2 Rule-Based Classifier
4.2.1 How a Rule-Based Classifier Works
4.2.2 Properties of a Rule Set
4.2.3 Direct Methods for Rule Extraction
4.2.4 Indirect Methods for Rule Extraction
4.2.5 Characteristics of Rule-Based Classifiers
4.3 Nearest Neighbor Classifiers
4.3.1 Algorithm
4.3.2 Characteristics of Nearest Neighbor Classifiers
4.4 Naïve Bayes Classifier
4.4.1 Basics of Probability Theory
4.4.2 Naïve Bayes Assumption
4.5 Bayesian Networks
4.5.1 Graphical Representation
4.5.2 Inference and Learning
4.5.3 Characteristics of Bayesian Networks
4.6 Logistic Regression
4.6.1 Logistic Regression as a Generalized Linear Model
4.6.2 Learning Model Parameters
4.6.3 Characteristics of Logistic Regression
4.7 Artificial Neural Network (ANN)
4.7.1 Perceptron
4.7.2 Multi-layer Neural Network
4.7.3 Characteristics of ANN
4.8 Deep Learning
4.8.1 Using Synergistic Loss Functions
4.8.2 Using Responsive Activation Functions
4.8.3 Regularization
4.8.4 Initialization of Model Parameters
4.8.5 Characteristics of Deep Learning
4.9 Support Vector Machine (SVM)
4.9.1 Margin of a Separating Hyperplane
4.9.2 Linear SVM
4.9.3 Soft-margin SVM
4.9.4 Nonlinear SVM
4.9.5 Characteristics of SVM
4.10 Ensemble Methods
4.10.1 Rationale for Ensemble Method
4.10.2 Methods for Constructing an Ensemble Classifier
4.10.3 Bias-Variance Decomposition
4.10.4 Bagging
4.10.5 Boosting
4.10.6 Random Forests
4.10.7 Empirical Comparison among Ensemble Methods
4.11 Class Imbalance Problem
4.11.1 Building Classifiers with Class Imbalance
4.11.2 Evaluating Performance with Class Imbalance
4.11.3 Finding an Optimal Score Threshold
4.11.4 Aggregate Evaluation of Performance
4.12 Multiclass Problem
4.13 Bibliographic Notes
4.14 Exercises

5 Association Analysis: Basic Concepts and Algorithms
5.1 Preliminaries
5.2 Frequent Itemset Generation
5.2.1 The Apriori Principle
5.2.2 Frequent Itemset Generation in the Apriori Algorithm
5.2.3 Candidate Generation and Pruning
5.2.4 Support Counting
5.2.5 Computational Complexity
5.3 Rule Generation
5.3.1 Confidence-Based Pruning
5.3.2 Rule Generation in Apriori Algorithm
5.3.3 An Example: Congressional Voting Records
5.4 Compact Representation of Frequent Itemsets
5.4.1 Maximal Frequent Itemsets
5.4.2 Closed Itemsets
5.5 Alternative Methods for Generating Frequent Itemsets*
5.6 FP-Growth Algorithm*
5.6.1 FP-Tree Representation
5.6.2 Frequent Itemset Generation in FP-Growth Algorithm
5.7 Evaluation of Association Patterns
5.7.1 Objective Measures of Interestingness
5.7.2 Measures beyond Pairs of Binary Variables
5.7.3 Simpson’s Paradox
5.8 Effect of Skewed Support Distribution
5.9 Bibliographic Notes
5.10 Exercises

6 Association Analysis: Advanced Concepts
6.1 Handling Categorical Attributes
6.2 Handling Continuous Attributes
6.2.1 Discretization-Based Methods
6.2.2 Statistics-Based Methods
6.2.3 Non-discretization Methods
6.3 Handling a Concept Hierarchy
6.4 Sequential Patterns
6.4.1 Preliminaries
6.4.2 Sequential Pattern Discovery
6.4.3 Timing Constraints*
6.4.4 Alternative Counting Schemes*
6.5 Subgraph Patterns
6.5.1 Preliminaries
6.5.2 Frequent Subgraph Mining
6.5.3 Candidate Generation
6.5.4 Candidate Pruning
6.5.5 Support Counting
6.6 Infrequent Patterns*
6.6.1 Negative Patterns
6.6.2 Negatively Correlated Patterns
6.6.3 Comparisons among Infrequent Patterns, Negative Patterns, and Negatively Correlated Patterns
6.6.4 Techniques for Mining Interesting Infrequent Patterns
6.6.5 Techniques Based on Mining Negative Patterns
6.6.6 Techniques Based on Support Expectation
6.7 Bibliographic Notes
6.8 Exercises

7 Cluster Analysis: Basic Concepts and Algorithms
7.1 Overview
7.1.1 What Is Cluster Analysis?
7.1.2 Different Types of Clusterings
7.1.3 Different Types of Clusters
7.2 K-means
7.2.1 The Basic K-means Algorithm
7.2.2 K-means: Additional Issues
7.2.3 Bisecting K-means
7.2.4 K-means and Different Types of Clusters
7.2.5 Strengths and Weaknesses
7.2.6 K-means as an Optimization Problem
7.3 Agglomerative Hierarchical Clustering
7.3.1 Basic Agglomerative Hierarchical Clustering Algorithm
7.3.2 Specific Techniques
7.3.3 The Lance-Williams Formula for Cluster Proximity
7.3.4 Key Issues in Hierarchical Clustering
7.3.5 Outliers
7.3.6 Strengths and Weaknesses
7.4 DBSCAN
7.4.1 Traditional Density: Center-Based Approach
7.4.2 The DBSCAN Algorithm
7.4.3 Strengths and Weaknesses
7.5 Cluster Evaluation
7.5.1 Overview
7.5.2 Unsupervised Cluster Evaluation Using Cohesion and Separation
7.5.3 Unsupervised Cluster Evaluation Using the Proximity Matrix
7.5.4 Unsupervised Evaluation of Hierarchical Clustering
7.5.5 Determining the Correct Number of Clusters
7.5.6 Clustering Tendency
7.5.7 Supervised Measures of Cluster Validity
7.5.8 Assessing the Significance of Cluster Validity Measures
7.5.9 Choosing a Cluster Validity Measure
7.6 Bibliographic Notes
7.7 Exercises

8 Cluster Analysis: Additional Issues and Algorithms
8.1 Characteristics of Data, Clusters, and Clustering Algorithms
8.1.1 Example: Comparing K-means and DBSCAN
8.1.2 Data Characteristics
8.1.3 Cluster Characteristics
8.1.4 General Characteristics of Clustering Algorithms
8.2 Prototype-Based Clustering
8.2.1 Fuzzy Clustering
8.2.2 Clustering Using Mixture Models
8.2.3 Self-Organizing Maps (SOM)
8.3 Density-Based Clustering
8.3.1 Grid-Based Clustering
8.3.2 Subspace Clustering
8.3.3 DENCLUE: A Kernel-Based Scheme for Density-Based Clustering
8.4 Graph-Based Clustering
8.4.1 Sparsification
8.4.2 Minimum Spanning Tree (MST) Clustering
8.4.3 OPOSSUM: Optimal Partitioning of Sparse Similarities Using METIS
8.4.4 Chameleon: Hierarchical Clustering with Dynamic Modeling
8.4.5 Spectral Clustering
8.4.6 Shared Nearest Neighbor Similarity
8.4.7 The Jarvis-Patrick Clustering Algorithm
8.4.8 SNN Density
8.4.9 SNN Density-Based Clustering
8.5 Scalable Clustering Algorithms
8.5.1 Scalability: General Issues and Approaches
8.5.2 BIRCH
8.5.3 CURE
8.6 Which Clustering Algorithm?
8.7 Bibliographic Notes
8.8 Exercises

9 Anomaly Detection
9.1 Characteristics of Anomaly Detection Problems
9.1.1 A Definition of an Anomaly
9.1.2 Nature of Data
9.1.3 How Anomaly Detection is Used
9.2 Characteristics of Anomaly Detection Methods
9.3 Statistical Approaches
9.3.1 Using Parametric Models
9.3.2 Using Non-parametric Models
9.3.3 Modeling Normal and Anomalous Classes
9.3.4 Assessing Statistical Significance
9.3.5 Strengths and Weaknesses
9.4 Proximity-based Approaches
9.4.1 Distance-based Anomaly Score
9.4.2 Density-based Anomaly Score
9.4.3 Relative Density-based Anomaly Score
9.4.4 Strengths and Weaknesses
9.5 Clustering-based Approaches
9.5.1 Finding Anomalous Clusters
9.5.2 Finding Anomalous Instances
9.5.3 Strengths and Weaknesses
9.6 Reconstruction-based Approaches
9.6.1 Strengths and Weaknesses
9.7 One-class Classification
9.7.1 Use of Kernels
9.7.2 The Origin Trick
9.7.3 Strengths and Weaknesses
9.8 Information Theoretic Approaches
9.8.1 Strengths and Weaknesses
9.9 Evaluation of Anomaly Detection
9.10 Bibliographic Notes
9.11 Exercises

10 Avoiding False Discoveries
10.1 Preliminaries: Statistical Testing
10.1.1 Significance Testing
10.1.2 Hypothesis Testing
10.1.3 Multiple Hypothesis Testing
10.1.4 Pitfalls in Statistical Testing
10.2 Modeling Null and Alternative Distributions
10.2.1 Generating Synthetic Data Sets
10.2.2 Randomizing Class Labels
10.2.3 Resampling Instances
10.2.4 Modeling the Distribution of the Test Statistic
10.3 Statistical Testing for Classification
10.3.1 Evaluating Classification Performance
10.3.2 Binary Classification as Multiple Hypothesis Testing
10.3.3 Multiple Hypothesis Testing in Model Selection
10.4 Statistical Testing for Association Analysis
10.4.1 Using Statistical Models
10.4.2 Using Randomization Methods
10.5 Statistical Testing for Cluster Analysis
10.5.1 Generating a Null Distribution for Internal Indices
10.5.2 Generating a Null Distribution for External Indices
10.5.3 Enrichment
10.6 Statistical Testing for Anomaly Detection
10.7 Bibliographic Notes
10.8 Exercises

Author Index
Subject Index
Copyright Permissions

1 Introduction Rapid advances in data collection and storage technology, coupled with the ease with which data can be generated and disseminated, have triggered the explosive growth of data, leading to the current age of big data. Deriving actionable insights from these large data sets is increasingly important in decision making across almost all areas of society, including business and industry; science and engineering; medicine and biotechnology; and government and individuals. However, the amount of data (volume), its complexity (variety), and the rate at which it is being collected and processed (velocity) have simply become too great for humans to analyze unaided. Thus, there is a great need for automated tools for extracting useful information from big data despite the challenges posed by its enormity and diversity.

Data mining blends traditional data analysis methods with sophisticated algorithms for processing this abundance of data. In this introductory chapter, we present an overview of data mining and outline the key topics to be covered in this book. We start with a description of some applications that require more advanced techniques for data analysis.

Business and Industry Point-of-sale data collection (bar code scanners, radio frequency identification (RFID), and smart card technology) has allowed retailers to collect up-to-the-minute data about customer purchases at the checkout counters of their stores. Retailers can utilize this information, along with other business-critical data, such as web server logs from e-commerce websites and customer service records from call centers, to help them better understand the needs of their customers and make more informed business decisions.

Data mining techniques can be used to support a wide range of business intelligence applications, such as customer profiling, targeted marketing, workflow management, store layout, fraud detection, and automated buying and selling. An example of the last application is high-speed stock trading, where decisions on buying and selling have to be made in less than a second using data about financial transactions. Data mining can also help retailers answer important business questions, such as “Who are the most profitable customers?” “What products can be cross-sold or up-sold?” and “What is the revenue outlook of the company for next year?” These questions have inspired the development of such data mining techniques as association analysis (Chapters 5 and 6).

As the Internet continues to revolutionize the way we interact and make decisions in our everyday lives, we are generating massive amounts of data about our online experiences, e.g., web browsing, messaging, and posting on social networking websites. This has opened several opportunities for business applications that use web data. For example, in the e-commerce sector, data about our online viewing or shopping preferences can be used to provide personalized recommendations of products. Data mining also plays a prominent role in supporting several other Internet-based services, such as filtering spam messages, answering search queries, and suggesting social updates and connections. The large corpus of text, images, and videos available on the Internet has enabled a number of advancements in data mining methods, including deep learning, which is discussed in Chapter 4. These developments have led to great advances in a number of applications, such as object recognition, natural language translation, and autonomous driving.

Another domain that has undergone a rapid big data transformation is the use of mobile sensors and devices, such as smart phones and wearable computing devices. With better sensor technologies, it has become possible to collect a variety of information about our physical world using low-cost sensors embedded on everyday objects that are connected to each other, termed the Internet of Things (IoT). This deep integration of physical sensors in digital systems is beginning to generate large amounts of diverse and distributed data about our environment, which can be used for designing convenient, safe, and energy-efficient home systems, as well as for urban planning of smart cities.

Medicine, Science, and Engineering Researchers in medicine, science, and engineering are rapidly accumulating data that is key to significant new discoveries. For example, as an important step toward improving our understanding of the Earth’s climate system, NASA has deployed a series of Earth-orbiting satellites that continuously generate global observations of the land surface, oceans, and atmosphere. However, because of the size and spatio-temporal nature of the data, traditional methods are often not suitable for analyzing these data sets. Techniques developed in data mining can aid Earth scientists in answering questions such as the following: “What is the relationship between the frequency and intensity of ecosystem disturbances such as droughts and hurricanes and global warming?” “How is land surface precipitation and temperature affected by ocean surface temperature?” and “How well can we predict the beginning and end of the growing season for a region?”

As another example, researchers in molecular biology hope to use the large amounts of genomic data to better understand the structure and function of genes. In the past, traditional methods in molecular biology allowed scientists to study only a few genes at a time in a given experiment. Recent breakthroughs in microarray technology have enabled scientists to compare the behavior of thousands of genes under various situations. Such comparisons can help determine the function of each gene, and perhaps isolate the genes responsible for certain diseases. However, the noisy, high-dimensional nature of data requires new data analysis methods. In addition to analyzing gene expression data, data mining can also be used to address other important biological challenges such as protein structure prediction, multiple sequence alignment, the modeling of biochemical pathways, and phylogenetics.

Another example is the use of data mining techniques to analyze electronic health record (EHR) data, which has become increasingly available. Not very long ago, studies of patients required manually examining the physical records of individual patients and extracting very specific pieces of information pertinent to the particular question being investigated. EHRs allow for a faster and broader exploration of such data. However, there are significant challenges since the observations on any one patient typically occur during their visits to a doctor or hospital and only a small number of details about the health of the patient are measured during any particular visit.

Currently, EHR analysis focuses on simple types of data, e.g., a patient’s blood pressure or the diagnosis code of a disease. However, large amounts of more complex types of medical data are also being collected, such as electrocardiograms (ECGs) and neuroimages from magnetic resonance imaging (MRI) or functional magnetic resonance imaging (fMRI). Although challenging to analyze, this data also provides vital information about patients. Integrating and analyzing such data with traditional EHR and genomic data is one of the capabilities needed to enable precision medicine, which aims to provide more personalized patient care.

1.1 What Is Data Mining? Data mining is the process of automatically discovering useful information in large data repositories. Data mining techniques are deployed to scour large data sets in order to find novel and useful patterns that might otherwise remain unknown. They also provide the capability to predict the outcome of a future observation, such as the amount a customer will spend at an online or a brick-and-mortar store.

Not all information discovery tasks are considered to be data mining. Examples include queries, e.g., looking up individual records in a database or finding web pages that contain a particular set of keywords. This is because such tasks can be accomplished through simple interactions with a database management system or an information retrieval system. These systems rely on traditional computer science techniques, which include sophisticated indexing structures and query processing algorithms, for efficiently organizing and retrieving information from large data repositories. Nonetheless, data mining techniques have been used to enhance the performance of such systems by improving the quality of the search results based on their relevance to the input queries.

Data Mining and Knowledge Discovery in Databases Data mining is an integral part of knowledge discovery in databases (KDD), which is the overall process of converting raw data into useful information, as shown in Figure 1.1. This process consists of a series of steps, from data preprocessing to postprocessing of data mining results.

Figure 1.1. The process of knowledge discovery in databases (KDD).

The input data can be stored in a variety of formats (flat files, spreadsheets, or relational tables) and may reside in a centralized data repository or be distributed across multiple sites. The purpose of preprocessing is to transform the raw input data into an appropriate format for subsequent analysis. The steps involved in data preprocessing include fusing data from multiple sources, cleaning data to remove noise and duplicate observations, and selecting records and features that are relevant to the data mining task at hand. Because of the many ways data can be collected and stored, data preprocessing is perhaps the most laborious and time-consuming step in the overall knowledge discovery process.
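
To make these preprocessing steps concrete, here is a minimal Python sketch using pandas that fuses two sources, removes duplicates and missing values, and selects relevant records and features. The file names and columns are hypothetical, chosen only to mirror the steps just listed; they are not from any particular application.

    # Illustrative preprocessing sketch (hypothetical files and columns).
    import pandas as pd

    web_logs = pd.read_csv("web_logs.csv")        # assumed first source
    call_center = pd.read_csv("call_center.csv")  # assumed second source

    # Fuse data from multiple sources on a shared customer key.
    data = web_logs.merge(call_center, on="customer_id", how="inner")

    # Clean the data: drop duplicate observations and records with missing values.
    data = data.drop_duplicates().dropna()

    # Select only the records and features relevant to the mining task at hand.
    recent = data[data["year"] >= 2018]
    features = recent[["customer_id", "visits", "total_spend", "num_complaints"]]

In practice this step is iterative: each cleaning or selection decision is usually revisited once the first mining results come back.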

“Closing the loop” is a phrase often used to refer to the process of integrating data mining results into decision support systems. For example, in business applications, the insights offered by data mining results can be integrated with campaign management tools so that effective marketing promotions can be conducted and tested. Such integration requires a postprocessing step to ensure that only valid and useful results are incorporated into the decision support system. An example of postprocessing is visualization, which allows analysts to explore the data and the data mining results from a variety of viewpoints. Hypothesis testing methods can also be applied during postprocessing to eliminate spurious data mining results. (See Chapter 10.)

1.2 Motivating Challenges As mentioned earlier, traditional data analysis techniques have often encountered practical difficulties in meeting the challenges posed by big data applications. The following are some of the specific challenges that motivated the development of data mining.

Scalability Because of advances in data generation and collection, data sets with sizes of terabytes, petabytes, or even exabytes are becoming common. If data mining algorithms are to handle these massive data sets, they must be scalable. Many data mining algorithms employ special search strategies to handle exponential search problems. Scalability may also require the implementation of novel data structures to access individual records in an efficient manner. For instance, out-of-core algorithms may be necessary when processing data sets that cannot fit into main memory. Scalability can also be improved by using sampling or developing parallel and distributed algorithms. A general overview of techniques for scaling up data mining algorithms is given in Appendix F.

High Dimensionality It is now common to encounter data sets with hundreds or thousands of attributes instead of the handful common a few decades ago. In bioinformatics, progress in microarray technology has produced gene expression data involving thousands of features. Data sets with temporal or spatial components also tend to have high dimensionality. For example, consider a data set that contains measurements of temperature at various locations. If the temperature measurements are taken repeatedly for an extended period, the number of dimensions (features) increases in proportion to the number of measurements taken. Traditional data analysis techniques that were developed for low-dimensional data often do not work well for such high-dimensional data due to issues such as the curse of dimensionality (to be discussed in Chapter 2). Also, for some data analysis algorithms, the computational complexity increases rapidly as the dimensionality (the number of features) increases.

Heterogeneous and Complex Data Traditional data analysis methods often deal with data sets containing attributes of the same type, either continuous or categorical. As the role of data mining in business, science, medicine, and other fields has grown, so has the need for techniques that can handle heterogeneous attributes. Recent years have also seen the emergence of more complex data objects. Examples of such non-traditional types of data include web and social media data containing text, hyperlinks, images, audio, and videos; DNA data with sequential and three-dimensional structure; and climate data that consists of measurements (temperature, pressure, etc.) at various times and locations on the Earth’s surface. Techniques developed for mining such complex objects should take into consideration relationships in the data, such as temporal and spatial autocorrelation, graph connectivity, and parent-child relationships between the elements in semi-structured text and XML documents.

Data Ownership and Distribution Sometimes, the data needed for an analysis is not stored in one location or owned by one organization. Instead, the data is geographically distributed among resources belonging to multiple entities. This requires the development of distributed data mining techniques. The key challenges faced by distributed data mining algorithms include the following: (1) how to reduce the amount of communication needed to perform the distributed computation, (2) how to effectively consolidate the data mining results obtained from multiple sources, and (3) how to address data security and privacy issues.

Non-traditional Analysis The traditional statistical approach is based on a hypothesize-and-test paradigm. In other words, a hypothesis is proposed, an experiment is designed to gather the data, and then the data is analyzed with respect to the hypothesis. Unfortunately, this process is extremely labor-intensive. Current data analysis tasks often require the generation and evaluation of thousands of hypotheses, and consequently, the development of some data mining techniques has been motivated by the desire to automate the process of hypothesis generation and evaluation. Furthermore, the data sets analyzed in data mining are typically not the result of a carefully designed experiment and often represent opportunistic samples of the data, rather than random samples.

1.3 The Origins of Data Mining While data mining has traditionally been viewed as an intermediate process within the KDD framework, as shown in Figure 1.1, it has emerged over the years as an academic field within computer science, focusing on all aspects of KDD, including data preprocessing, mining, and postprocessing. Its origin can be traced back to the late 1980s, following a series of workshops organized on the topic of knowledge discovery in databases. The workshops brought together researchers from different disciplines to discuss the challenges and opportunities in applying computational techniques to extract actionable knowledge from large databases. The workshops quickly grew into hugely popular conferences that were attended by researchers and practitioners from both academia and industry. The success of these conferences, along with the interest shown by businesses and industry in recruiting new hires with a data mining background, has fueled the tremendous growth of this field.

The field was initially built upon the methodology and algorithms that researchers had previously used. In particular, data mining researchers draw upon ideas such as (1) sampling, estimation, and hypothesis testing from statistics and (2) search algorithms, modeling techniques, and learning theories from artificial intelligence, pattern recognition, and machine learning. Data mining has also been quick to adopt ideas from other areas, including optimization, evolutionary computing, information theory, signal processing, visualization, and information retrieval, extending them to solve the challenges of mining big data.

A number of other areas also play key supporting roles. In particular, database systems are needed to provide support for efficient storage, indexing, and query processing. Techniques from high performance (parallel) computing are often important in addressing the massive size of some data sets. Distributed techniques can also help address the issue of size and are essential when the data cannot be gathered in one location. Figure 1.2 shows the relationship of data mining to other areas.

Figure 1.2. Data mining as a confluence of many disciplines.

Data Science and Data-Driven Discovery Data science is an interdisciplinary field that studies and applies tools and techniques for deriving useful insights from data. Although data science is regarded as an emerging field with a distinct identity of its own, the tools and techniques often come from many different areas of data analysis, such as data mining, statistics, AI, machine learning, pattern recognition, database technology, and distributed and parallel computing. (See Figure 1.2.)

The emergence of data science as a new field is a recognition that, often, none of the existing areas of data analysis provides a complete set of tools for the data analysis tasks that are often encountered in emerging applications. Instead, a broad range of computational, mathematical, and statistical skills is often required. To illustrate the challenges that arise in analyzing such data, consider the following example. Social media and the Web present new opportunities for social scientists to observe and quantitatively measure human behavior on a large scale. To conduct such a study, social scientists work with analysts who possess skills in areas such as web mining, natural language processing (NLP), network analysis, data mining, and statistics. Compared to more traditional research in social science, which is often based on surveys, this analysis requires a broader range of skills and tools, and involves far larger amounts of data. Thus, data science is, by necessity, a highly interdisciplinary field that builds on the continuing work of many fields.

The data-driven approach of data science emphasizes the direct discovery of patterns and relationships from data, especially in large quantities of data, often without the need for extensive domain knowledge. A notable example of the success of this approach is the advances in neural networks, i.e., deep learning, which have been particularly successful in areas that have long proved challenging, e.g., recognizing objects in photos or videos and words in speech, as well as in other application areas. However, this is just one example of the success of data-driven approaches, and dramatic improvements have also occurred in many other areas of data analysis. Many of these developments are topics described later in this book.

Some cautions on potential limitations of a purely data-driven approach are given in the Bibliographic Notes.

1.4 Data Mining Tasks Data mining tasks are generally divided into two major categories:

Predictive tasks The objective of these tasks is to predict the value of a particular attribute based on the values of other attributes. The attribute to be predicted is commonly known as the target or dependent variable, while the attributes used for making the prediction are known as the explanatory or independent variables.

Descriptive tasks Here, the objective is to derive patterns (correlations, trends, clusters, trajectories, and anomalies) that summarize the underlying relationships in data. Descriptive data mining tasks are often exploratory in nature and frequently require postprocessing techniques to validate and explain the results.

Figure 1.3 illustrates four of the core data mining tasks that are described in the remainder of this book.

Figure 1.3. Four of the core data mining tasks.

Predictive modeling refers to the task of building a model for the target variable as a function of the explanatory variables. There are two types of predictive modeling tasks: classification, which is used for discrete target variables, and regression, which is used for continuous target variables. For example, predicting whether a web user will make a purchase at an online bookstore is a classification task because the target variable is binary-valued. On the other hand, forecasting the future price of a stock is a regression task because price is a continuous-valued attribute. The goal of both tasks is to learn a model that minimizes the error between the predicted and true values of the target variable. Predictive modeling can be used to identify customers who will respond to a marketing campaign, predict disturbances in the Earth’s ecosystem, or judge whether a patient has a particular disease based on the results of medical tests.

Example 1.1 (Predicting the Type of a Flower). Consider the task of predicting the species of a flower based on its characteristics. In particular, consider classifying an Iris flower as one of the following three Iris species: Setosa, Versicolour, or Virginica. To perform this task, we need a data set containing the characteristics of various flowers of these three species. A data set with this type of information is the well-known Iris data set from the UCI Machine Learning Repository at http://www.ics.uci.edu/~mlearn. In addition to the species of a flower, this data set contains four other attributes: sepal width, sepal length, petal length, and petal width. Figure 1.4 shows a plot of petal width versus petal length for the 150 flowers in the Iris data set. Petal width is broken into the categories low, medium, and high, which correspond to the intervals [0, 0.75), [0.75, 1.75), [1.75, ∞), respectively. Also, petal length is broken into the categories low, medium, and high, which correspond to the intervals [0, 2.5), [2.5, 5), [5, ∞), respectively. Based on these categories of petal width and length, the following rules can be derived:

Petal width low and petal length low implies Setosa.

Petal width medium and petal length medium implies Versicolour.

Petal width high and petal length high implies Virginica.

While these rules do not classify all the flowers, they do a good (but not perfect) job of classifying most of the flowers. Note that flowers from the Setosa species are well separated from the Versicolour and Virginica species with respect to petal width and length, but the latter two species overlap somewhat with respect to these attributes.

Figure 1.4. Petal width versus petal length for 150 Iris flowers.
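
To make the rules of Example 1.1 concrete, the following minimal Python sketch applies them to the Iris data. Loading the data through scikit-learn’s bundled copy of the UCI data set is an assumption of convenience; the thresholds come directly from the intervals given above.

    # Sketch of Example 1.1's rules (illustrative, not the book's code).
    from sklearn.datasets import load_iris

    iris = load_iris()
    petal_length = iris.data[:, 2]  # cm
    petal_width = iris.data[:, 3]   # cm

    def classify(pw, pl):
        # Apply the three interval-based rules from the text.
        if pw < 0.75 and pl < 2.5:                    # low width, low length
            return "setosa"
        if 0.75 <= pw < 1.75 and 2.5 <= pl < 5:       # medium width, medium length
            return "versicolor"                        # sklearn's spelling of Versicolour
        if pw >= 1.75 and pl >= 5:                     # high width, high length
            return "virginica"
        return None                                    # the rules do not cover every flower

    predictions = [classify(w, l) for w, l in zip(petal_width, petal_length)]
    labels = [iris.target_names[t] for t in iris.target]
    covered = sum(p is not None for p in predictions)
    correct = sum(p == y for p, y in zip(predictions, labels))
    print(f"{covered} of 150 flowers covered, {correct} classified correctly")

Running this confirms the observation above: the rules leave a handful of flowers unclassified and misclassify a few in the Versicolour/Virginica overlap region.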

Association analysis is used to discover patterns that describe strongly associated features in the data. The discovered patterns are typically represented in the form of implication rules or feature subsets. Because of the exponential size of its search space, the goal of association analysis is to extract the most interesting patterns in an efficient manner. Useful applications of association analysis include finding groups of genes that have related functionality, identifying web pages that are accessed together, or understanding the relationships between different elements of Earth’s climate system.

Example 1.2 (Market Basket Analysis). The transactions shown in Table 1.1 illustrate point-of-sale data collected at the checkout counters of a grocery store. Association analysis can be applied to find items that are frequently bought together by customers. For example, we may discover the rule {Diapers}→{Milk}, which suggests that customers who buy diapers also tend to buy milk. This type of rule can be used to identify potential cross-selling opportunities among related items.

Table 1.1. Market basket data.

Transaction ID   Items
1                {Bread, Butter, Diapers, Milk}
2                {Coffee, Sugar, Cookies, Salmon}
3                {Bread, Butter, Coffee, Diapers, Milk, Eggs}
4                {Bread, Butter, Salmon, Chicken}
5                {Eggs, Bread, Butter}
6                {Salmon, Diapers, Milk}
7                {Bread, Tea, Sugar, Eggs}
8                {Coffee, Sugar, Chicken, Eggs}
9                {Bread, Diapers, Milk, Salt}
10               {Tea, Eggs, Cookies, Diapers, Milk}
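
As a quick illustration of how the strength of the rule {Diapers}→{Milk} could be measured, the plain-Python sketch below counts support and confidence over the ten transactions of Table 1.1. It is written only for this example; Chapter 5 gives the formal definitions and efficient algorithms.

    # Sketch: support and confidence of {Diapers} -> {Milk} over Table 1.1.
    transactions = [
        {"Bread", "Butter", "Diapers", "Milk"},
        {"Coffee", "Sugar", "Cookies", "Salmon"},
        {"Bread", "Butter", "Coffee", "Diapers", "Milk", "Eggs"},
        {"Bread", "Butter", "Salmon", "Chicken"},
        {"Eggs", "Bread", "Butter"},
        {"Salmon", "Diapers", "Milk"},
        {"Bread", "Tea", "Sugar", "Eggs"},
        {"Coffee", "Sugar", "Chicken", "Eggs"},
        {"Bread", "Diapers", "Milk", "Salt"},
        {"Tea", "Eggs", "Cookies", "Diapers", "Milk"},
    ]

    def support_count(itemset):
        # Number of transactions containing every item in the itemset.
        return sum(itemset <= t for t in transactions)

    antecedent, consequent = {"Diapers"}, {"Milk"}
    both = support_count(antecedent | consequent)
    support = both / len(transactions)
    confidence = both / support_count(antecedent)
    print(f"support = {support:.2f}, confidence = {confidence:.2f}")

Here every transaction containing Diapers also contains Milk, so the rule has confidence 1.00 and support 0.50.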

Cluster analysis seeks to find groups of closely related observations so that observations that belong to the same cluster are more similar to each other than observations that belong to other clusters. Clustering has been used to group sets of related customers, find areas of the ocean that have a significant impact on the Earth’s climate, and compress data.

Example 1.3 (Document Clustering). The collection of news articles shown in Table 1.2 can be grouped based on their respective topics. Each article is represented as a set of word-frequency pairs (w : c), where w is a word and c is the number of times the word appears in the article. There are two natural clusters in the data set. The first cluster consists of the first four articles, which correspond to news about the economy, while the second cluster contains the last four articles, which correspond to news about health care. A good clustering algorithm should be able to identify these two clusters based on the similarity between words that appear in the articles.

Table 1.2. Collection of news articles.

Article   Word-frequency pairs
1         dollar: 1, industry: 4, country: 2, loan: 3, deal: 2, government: 2
2         machinery: 2, labor: 3, market: 4, industry: 2, work: 3, country: 1
3         job: 5, inflation: 3, rise: 2, jobless: 2, market: 3, country: 2, index: 3
4         domestic: 3, forecast: 2, gain: 1, market: 2, sale: 3, price: 2
5         patient: 4, symptom: 2, drug: 3, health: 2, clinic: 2, doctor: 2
6         pharmaceutical: 2, company: 3, drug: 2, vaccine: 1, flu: 3
7         death: 2, cancer: 4, drug: 3, public: 4, health: 3, director: 2
8         medical: 2, cost: 3, increase: 2, patient: 2, health: 3, care: 1
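
To see why these two groups emerge, each article can be represented as a bag-of-words vector and compared by cosine similarity. The short Python sketch below, written for this example, compares two economy articles and one health care article from Table 1.2; proximity measures and clustering are treated properly in Chapters 2 and 7.

    # Sketch: cosine similarity between bag-of-words vectors from Table 1.2.
    import math

    article2 = {"machinery": 2, "labor": 3, "market": 4, "industry": 2, "work": 3, "country": 1}
    article3 = {"job": 5, "inflation": 3, "rise": 2, "jobless": 2, "market": 3, "country": 2, "index": 3}
    article5 = {"patient": 4, "symptom": 2, "drug": 3, "health": 2, "clinic": 2, "doctor": 2}

    def cosine(a, b):
        # Dot product over the shared words, normalized by the vector lengths.
        dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
        norm = math.sqrt(sum(c * c for c in a.values())) * math.sqrt(sum(c * c for c in b.values()))
        return dot / norm

    print(cosine(article2, article3))  # same topic (economy): nonzero similarity
    print(cosine(article2, article5))  # different topics: no shared words, so 0.0

Articles on the same topic share words such as "market" and "country" and so have nonzero similarity, while articles from different topics share no words at all; a clustering algorithm exploits exactly this structure.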

Anomaly detection is the task of identifying observations whose characteristics are significantly different from the rest of the data. Such observations are known as anomalies or outliers. The goal of an anomaly detection algorithm is to discover the real anomalies and avoid falsely labeling normal objects as anomalous. In other words, a good anomaly detector must have a high detection rate and a low false alarm rate. Applications of anomaly detection include the detection of fraud, network intrusions, unusual patterns of disease, and ecosystem disturbances, such as droughts, floods, fires, hurricanes, etc.

Example 1.4 (Credit Card Fraud Detection). A credit card company records the transactions made by every credit card holder, along with personal information such as credit limit, age, annual income, and address. Since the number of fraudulent cases is relatively small compared to the number of legitimate transactions, anomaly detection techniques can be applied to build a profile of legitimate transactions for the users. When a new transaction arrives, it is compared against the profile of the user. If the characteristics of the transaction are very different from the previously created profile, then the transaction is flagged as potentially fraudulent.
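
One very simple way to realize such a profile, sketched below under the simplifying assumption that the profile is just the mean and standard deviation of a user’s past transaction amounts (the history shown is made up), is to flag any new amount whose deviation from the profile is large. Real systems use far richer profiles; Chapter 9 covers anomaly detection in depth.

    # Sketch: flagging a transaction whose amount deviates strongly from a
    # user's profile. The history and the 3-standard-deviation threshold are
    # illustrative assumptions, not a production fraud rule.
    import statistics

    past_amounts = [23.5, 41.0, 18.2, 36.7, 29.9, 45.1, 31.4, 27.8]  # hypothetical history
    mean = statistics.mean(past_amounts)
    std = statistics.stdev(past_amounts)

    def looks_fraudulent(amount, threshold=3.0):
        # Flag amounts more than `threshold` standard deviations from the mean.
        return abs(amount - mean) / std > threshold

    print(looks_fraudulent(34.0))   # False: consistent with the profile
    print(looks_fraudulent(950.0))  # True: very different from past behavior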

1.5 Scope and Organization of the Book

This book introduces the major principles and techniques used in data mining from an algorithmic perspective. A study of these principles and techniques is essential for developing a better understanding of how data mining technology can be applied to various kinds of data. This book also serves as a starting point for readers who are interested in doing research in this field.

We begin the technical discussion of this book with a chapter on data (Chapter 2), which discusses the basic types of data, data quality, preprocessing techniques, and measures of similarity and dissimilarity. Although this material can be covered quickly, it provides an essential foundation for data analysis.

Chapters 3 and 4 cover classification. Chapter 3 provides a foundation by discussing decision tree classifiers and several issues that are important to all classification tasks: overfitting, underfitting, model selection, and performance evaluation. Using this foundation, Chapter 4 describes a number of other important classification techniques: rule-based systems, nearest neighbor classifiers, Bayesian classifiers, artificial neural networks, including deep learning, support vector machines, and ensemble classifiers, which are collections of classifiers. The multiclass and imbalanced class problems are also discussed. These topics can be covered independently.

Association analysis is explored in Chapters 5 and 6. Chapter 5 describes the basics of association analysis: frequent itemsets, association rules, and some of the algorithms used to generate them. Specific types of frequent itemsets (maximal, closed, and hyperclique) that are important for data mining are also discussed, and the chapter concludes with a discussion of evaluation measures for association analysis. Chapter 6 considers a variety of more advanced topics, including how association analysis can be applied to categorical and continuous data or to data that has a concept hierarchy. (A concept hierarchy is a hierarchical categorization of objects, e.g., store items→clothing→shoes→sneakers.) This chapter also describes how association analysis can be extended to find sequential patterns (patterns involving order), patterns in graphs, and negative relationships (if one item is present, then the other is not).

Cluster analysis is discussed in Chapters 7 and 8. Chapter 7 first describes the different types of clusters, and then presents three specific clustering techniques: K-means, agglomerative hierarchical clustering, and DBSCAN. This is followed by a discussion of techniques for validating the results of a clustering algorithm. Additional clustering concepts and techniques are explored in Chapter 8, including fuzzy and probabilistic clustering, Self-Organizing Maps (SOM), graph-based clustering, spectral clustering, and density-based clustering. There is also a discussion of scalability issues and factors to consider when selecting a clustering algorithm.

Chapter 9 is on anomaly detection. After some basic definitions, several different types of anomaly detection are considered: statistical, distance-based, density-based, clustering-based, reconstruction-based, one-class classification, and information theoretic. The last chapter, Chapter 10, supplements the discussions in the other chapters with a discussion of the statistical concepts important for avoiding spurious results, and then discusses those concepts in the context of the data mining techniques studied in the previous chapters. These techniques include statistical hypothesis testing, p-values, the false discovery rate, and permutation testing. Appendices A through F give a brief review of important topics that are used in portions of the book: linear algebra, dimensionality reduction, statistics, regression, optimization, and scaling up data mining techniques for big data.

The subject of data mining, while relatively young compared to statistics or machine learning, is already too large to cover in a single book. Selected references to topics that are only briefly covered, such as data quality, are provided in the Bibliographic Notes section of the appropriate chapter. References to topics not covered in this book, such as mining streaming data and privacy-preserving data mining, are provided in the Bibliographic Notes of this chapter.

1.6 Bibliographic Notes

The topic of data mining has inspired many textbooks. Introductory textbooks include those by Dunham [16], Han et al. [29], Hand et al. [31], Roiger and Geatz [50], Zaki and Meira [61], and Aggarwal [2]. Data mining books with a stronger emphasis on business applications include the works by Berry and Linoff [5], Pyle [47], and Parr Rud [45]. Books with an emphasis on statistical learning include those by Cherkassky and Mulier [11], and Hastie et al. [32]. Similar books with an emphasis on machine learning or pattern recognition are those by Duda et al. [15], Kantardzic [34], Mitchell [43], Webb [57], and Witten and Frank [58]. There are also some more specialized books: Chakrabarti [9] (web mining), Fayyad et al. [20] (collection of early articles on data mining), Fayyad et al. [18] (visualization), Grossman et al. [25] (science and engineering), Kargupta and Chan [35] (distributed data mining), Wang et al. [56] (bioinformatics), and Zaki and Ho [60] (parallel data mining).
