Data Science & Big Data Analytics
Discovering, Analyzing, Visualizing and Presenting Data
EMC Education Services
WILEY
Data Science & Big Data Analytics: Discovering, Analyzing, Visualizing and Presenting Data
Published by John Wiley & Sons, Inc., 10475 Crosspoint Boulevard, Indianapolis, IN 46256, www.wiley.com
Copyright © 2015 by John Wiley & Sons, Inc., Indianapolis, Indiana
Published simultaneously in Canada
ISBN: 978-1-118-87613-8 ISBN: 978-1-118-87622-0 (ebk) ISBN: 978-1-118-87605-3 (ebk)
Manufactured in the United States of America
10 9 8 7 6 5 4 3 2 1
No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions.
Limit of Liability/Disclaimer of Warranty: The publisher and the author make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation warranties of fitness for a particular purpose. No warranty may be created or extended by sales or promotional materials. The advice and strategies contained herein may not be suitable for every situation. This work is sold with the understanding that the publisher is not engaged in rendering legal, accounting, or other professional services. If professional assistance is required, the services of a competent professional person should be sought. Neither the publisher nor the author shall be liable for damages arising herefrom. The fact that an organization or Web site is referred to in this work as a citation and/or a potential source of further information does not mean that the author or the publisher endorses the information the organization or website may provide or recommendations it may make. Further, readers should be aware that Internet websites listed in this work may have changed or disappeared between when this work was written and when it is read.
For general information on our other products and services please contact our Customer Care Department within the United States at (877) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.
Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.
Library of Congress Control Number: 2014946681
Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates, in the United States and other countries, and may not be used without written permission. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.
Credits
Executive Editor
Carol Long
Project Editor
Kelly Talbot
Production Manager
Kathleen Wisor
Copy Editor
Karen Gill
Manager of Content Development and Assembly
Mary Beth Wakefield
Marketing Director
David Mayhew
Marketing Manager
Carrie Sherrill
Professional Technology and Strategy Director
Barry Pruett
Business Manager
Amy Knies
Associate Publisher
Jim Minatel
Project Coordinator, Cover
Patrick Redmond
Proofreader
Nancy Carrasco
Indexer
Johnna Van Hoose Dinse
Cover Designer
Mallesh Gurram
About the Key Contributors
David Dietrich heads the data science education team within EMC Education Services, where he leads the
curriculum, strategy, and course development related to Big Data Analytics and Data Science. He co-authored the first course in EMC's Data Science curriculum, two additional EMC courses focused on teaching leaders and executives about Big Data and data science, and is a contributing author and editor of this book. He has filed 14 patents in the areas of data science, data privacy, and cloud computing.
David has been an advisor to several universities looking to develop academic programs related to data analytics and has been a frequent speaker at conferences and industry events. He also has been a guest lecturer at universities in the Boston area. His work has been featured in major publications including Forbes, Harvard Business Review, and the 2014 Massachusetts Big Data Report, commissioned by Governor Deval Patrick.
Involved with analytics and technology for nearly 20 years, David has worked with many Fortune 500 companies over his career, holding multiple roles involving analytics, including managing analytics and operations teams, delivering analytic consulting engagements, managing a line of analytical software products for regulating the US banking industry, and developing Software-as-a-Service and BI-as-a-Service offerings. Additionally, David collaborated with the U.S. Federal Reserve in developing predictive models for monitoring mortgage portfolios.
Barry Heller is an advisory technical education consultant at EMC Education Services. Barry is a course developer and curriculum advisor in the emerging technology areas of Big Data and data science. Prior to his current role, Barry was a consultant research scientist leading numerous analytical initiatives within EMC's Total Customer Experience organization. Early in his EMC career, he managed the statistical engineering group as well as led the data warehousing efforts in an Enterprise Resource Planning (ERP) implementation. Prior to joining EMC, Barry held managerial and analytical roles in reliability engineering functions at medical diagnostic and technology companies. During his career, he has applied his quantitative skill set to a myriad of business applications in the Customer Service, Engineering, Manufacturing, Sales/Marketing, Finance, and Legal arenas. Underscoring the importance of strong executive stakeholder engagement, many of his successes have resulted from focusing not only on the technical details of an analysis but also on the decisions that result from the analysis. Barry earned a B.S. in Computational Mathematics from the Rochester Institute of Technology and an M.A. in Mathematics from the State University of New York (SUNY) New Paltz.
Beibei Yang is a Technical Education Consultant at EMC Education Services, responsible for developing several open courses at EMC related to Data Science and Big Data Analytics. Beibei has seven years of experience in the IT industry. Prior to EMC she worked as a software engineer, systems manager, and network manager for a Fortune 500 company, where she introduced new technologies to improve efficiency and encourage collaboration. Beibei has published papers at prestigious conferences and has filed multiple patents. She received her Ph.D. in computer science from the University of Massachusetts Lowell. She has a passion for natural language processing and data mining, especially for using various tools and techniques to find hidden patterns and tell stories with data.
Data Science and Big Data Analytics is an exciting domain where the potential of digital information is maximized for making intelligent business decisions. We believe that this is an area that will attract a lot of talented students and professionals in the short, mid, and long term.
Acknowledgments
EMC Education Services embarked on learning this subject with the intent to develop an "open" curriculum and certification. It was a challenging journey at the time as not many understood what it would take to be a true
data scientist. After initial research (and struggle), we were able to define what was needed and attract very talented professionals to work on the project. The course, "Data Science and Big Data Analytics," has become
well accepted across academia and the industry. Led by EMC Education Services, this book is the result of efforts and contributions from a number of key EMC organizations and supported by the office of the CTO, IT, Global Services, and Engineering. Many sincere
thanks to many key contributors and subject matter experts David Dietrich, Barry Heller, and Beibei Yang for their work developing content and graphics for the chapters. A special thanks to subject matter experts John Cardente and Ganesh Rajaratnam for their active involvement reviewing multiple book chapters and
providing valuable feedback throughout the project.
We are also grateful to the following experts from EMC and Pivotal for their support in reviewing and improving the content in this book:
Aidan O'Brien
Alexander Nunes
Bryan Miletich
Dan Baskette
Daniel Mepham
Dave Reiner
Deborah Stokes
Ellis Kriesberg
Frank Coleman
Hisham Arafat
Ira Schild
Jack Harwood
Jim McGroddy
Jody Goncalves
Joe Dery
Joe Kambourakis
Joe Milardo
John Sopka
Kathryn Stiles
Ken Taylor
Lanette Wells
Michael Hancock
Michael Vander Donk
Narayanan Krishnakumar
Richard Moore
Ron Glick
Stephen Maloney
Steve Todd
Suresh Thankappan
Tom McGowan
We also thank Ira Schild and Shane Goodrich for coordinating this project, Mallesh Gurram for the cover design, Chris Conroy and Rob Bradley for graphics, and the publisher, John Wiley and Sons, for timely support in bringing this book to the
industry.
Nancy Gessler
Director, Education Services, EMC Corporation
Alok Shrivastava
Sr. Director, Education Services, EMC Corporation
Contents

Introduction
Chapter 1 • Introduction to Big Data Analytics
1.1 Big Data Overview
1.1.1 Data Structures
1.1.2 Analyst Perspective on Data Repositories
1.2 State of the Practice in Analytics
1.2.1 BI Versus Data Science
1.2.2 Current Analytical Architecture
1.2.3 Drivers of Big Data
1.2.4 Emerging Big Data Ecosystem and a New Approach to Analytics
1.3 Key Roles for the New Big Data Ecosystem
1.4 Examples of Big Data Analytics
Summary
Exercises
Bibliography
Chapter 2 • Data Analytics Lifecycle
2.1 Data Analytics Lifecycle Overview
2.1.1 Key Roles for a Successful Analytics Project
2.1.2 Background and Overview of Data Analytics Lifecycle
2.2 Phase 1: Discovery
2.2.1 Learning the Business Domain
2.2.2 Resources
2.2.3 Framing the Problem
2.2.4 Identifying Key Stakeholders
2.2.5 Interviewing the Analytics Sponsor
2.2.6 Developing Initial Hypotheses
2.2.7 Identifying Potential Data Sources
2.3 Phase 2: Data Preparation
2.3.1 Preparing the Analytic Sandbox
2.3.2 Performing ETLT
2.3.3 Learning About the Data
2.3.4 Data Conditioning
2.3.5 Survey and Visualize
2.3.6 Common Tools for the Data Preparation Phase
2.4 Phase 3: Model Planning
2.4.1 Data Exploration and Variable Selection
2.4.2 Model Selection
2.4.3 Common Tools for the Model Planning Phase
2.5 Phase 4: Model Building
2.5.1 Common Tools for the Model Building Phase
2.6 Phase 5: Communicate Results
2.7 Phase 6: Operationalize
2.8 Case Study: Global Innovation Network and Analysis (GINA)
2.8.1 Phase 1: Discovery
2.8.2 Phase 2: Data Preparation
2.8.3 Phase 3: Model Planning
2.8.4 Phase 4: Model Building
2.8.5 Phase 5: Communicate Results
2.8.6 Phase 6: Operationalize
Summary
Exercises
Bibliography
Chapter 3 • Review of Basic Data Analytic Methods Using R
3.1 Introduction to R
3.1.1 R Graphical User Interfaces
3.1.2 Data Import and Export
3.1.3 Attribute and Data Types
3.1.4 Descriptive Statistics
3.2 Exploratory Data Analysis
3.2.1 Visualization Before Analysis
3.2.2 Dirty Data
3.2.3 Visualizing a Single Variable
3.2.4 Examining Multiple Variables
3.2.5 Data Exploration Versus Presentation
3.3 Statistical Methods for Evaluation
3.3.1 Hypothesis Testing
3.3.2 Difference of Means
3.3.3 Wilcoxon Rank-Sum Test
3.3.4 Type I and Type II Errors
3.3.5 Power and Sample Size
3.3.6 ANOVA
Summary
Exercises
Bibliography
Chapter 4 • Advanced Analytical Theory and Methods: Clustering
4.1 Overview of Clustering
4.2 K-means
4.2.1 Use Cases
4.2.2 Overview of the Method
4.2.3 Determining the Number of Clusters
4.2.4 Diagnostics
4.2.5 Reasons to Choose and Cautions
4.3 Additional Algorithms
Summary
Exercises
Bibliography
Chapter 5 • Advanced Analytical Theory and Methods: Association Rules
5.1 Overview
5.2 Apriori Algorithm
5.3 Evaluation of Candidate Rules
5.4 Applications of Association Rules
5.5 An Example: Transactions in a Grocery Store
5.5.1 The Groceries Dataset
5.5.2 Frequent Itemset Generation
5.5.3 Rule Generation and Visualization
5.6 Validation and Testing
5.7 Diagnostics
Summary
Exercises
Bibliography
Chapter 6 • Advanced Analytical Theory and Methods: Regression
6.1 Linear Regression
6.1.1 Use Cases
6.1.2 Model Description
6.1.3 Diagnostics
6.2 Logistic Regression
6.2.1 Use Cases
6.2.2 Model Description
6.2.3 Diagnostics
6.3 Reasons to Choose and Cautions
6.4 Additional Regression Models
Summary
Exercises
Chapter 7 • Advanced Analytical Theory and Methods: Classification
7.1 Decision Trees
7.1.1 Overview of a Decision Tree
7.1.2 The General Algorithm
7.1.3 Decision Tree Algorithms
7.1.4 Evaluating a Decision Tree
7.1.5 Decision Trees in R
7.2 Naïve Bayes
7.2.1 Bayes' Theorem
7.2.2 Naïve Bayes Classifier
7.2.3 Smoothing
7.2.4 Diagnostics
7.2.5 Naïve Bayes in R
7.3 Diagnostics of Classifiers
7.4 Additional Classification Methods
Summary
Exercises
Bibliography
Chapter 8 • Advanced Analytical Theory and Methods: Time Series Analysis
8.1 Overview of Time Series Analysis
8.1.1 Box-Jenkins Methodology
8.2 ARIMA Model
8.2.1 Autocorrelation Function (ACF)
8.2.2 Autoregressive Models
8.2.3 Moving Average Models
8.2.4 ARMA and ARIMA Models
8.2.5 Building and Evaluating an ARIMA Model
8.2.6 Reasons to Choose and Cautions
8.3 Additional Methods
Summary
Exercises
Chapter 9 • Advanced Analytical Theory and Methods: Text Analysis
9.1 Text Analysis Steps
9.2 A Text Analysis Example
9.3 Collecting Raw Text
9.4 Representing Text
9.5 Term Frequency-Inverse Document Frequency (TFIDF)
9.6 Categorizing Documents by Topics
9.7 Determining Sentiments
9.8 Gaining Insights
Summary
Exercises
Bibliography
Chapter 10 • Advanced Analytics-Technology and Tools: MapReduce and Hadoop
10.1 Analytics for Unstructured Data
10.1.1 Use Cases
10.1.2 MapReduce
10.1.3 Apache Hadoop
10.2 The Hadoop Ecosystem
10.2.1 Pig
10.2.2 Hive
10.2.3 HBase
10.2.4 Mahout
10.3 NoSQL
Summary
Exercises
Bibliography
Chapter 11 • Advanced Analytics-Technology and Tools: In-Database Analytics
11.1 SQL Essentials
11.1.1 Joins
11.1.2 Set Operations
11.1.3 Grouping Extensions
11.2 In-Database Text Analysis
11.3 Advanced SQL
11.3.1 Window Functions
11.3.2 User-Defined Functions and Aggregates
11.3.3 Ordered Aggregates
11.3.4 MADlib
Summary
Exercises
Bibliography
Chapter 12 • The Endgame, or Putting It All Together
12.1 Communicating and Operationalizing an Analytics Project
12.2 Creating the Final Deliverables
12.2.1 Developing Core Material for Multiple Audiences
12.2.2 Project Goals
12.2.3 Main Findings
12.2.4 Approach
12.2.5 Model Description
12.2.6 Key Points Supported with Data
12.2.7 Model Details
12.2.8 Recommendations
12.2.9 Additional Tips on Final Presentation
12.2.10 Providing Technical Specifications and Code
12.3 Data Visualization Basics
12.3.1 Key Points Supported with Data
12.3.2 Evolution of a Graph
12.3.3 Common Representation Methods
12.3.4 How to Clean Up a Graphic
12.3.5 Additional Considerations
Summary
Exercises
References and Further Reading
Bibliography
Index
Foreword
Technological advances and the associated changes in practical daily life have produced a rapidly expanding "parallel universe" of new content, new data, and new information sources all around us. Regardless of how one defines it, the phenomenon of Big Data is ever more present, ever more pervasive, and ever more important. There is enormous value potential in Big Data: innovative insights, improved understanding of problems, and countless opportunities to predict-and even to shape-the future. Data Science is the principal means to discover and tap that potential. Data Science provides ways to deal with and benefit from Big Data: to see patterns, to discover relationships, and to make sense of stunningly varied images and information.
Not everyone has studied statistical analysis at a deep level. People with advanced degrees in applied mathematics are not a commodity. Relatively few organizations have committed resources to large collections of data gathered primarily for the purpose of exploratory analysis. And yet, while applying the practices of Data Science to Big Data is a valuable differentiating strategy at present, it will be a standard core competency in the not so distant future.
How does an organization operationalize quickly to take advantage of this trend? We've created this book for that exact purpose.
EMC Education Services has been listening to the industry and organizations, observing the multi-faceted transformation of the technology landscape, and doing direct research in order to create curriculum and content to help individuals and organizations transform themselves. For the domain of Data Science and Big Data Analytics, our educational strategy balances three things: people-especially in the context of data science teams, processes-such as the analytic lifecycle approach presented in this book, and tools and technologies-in this case with the emphasis on proven analytic tools.
So let us help you capitalize on this new "parallel universe" that surrounds us. We invite you to learn about Data Science and Big Data Analytics through this book and hope it significantly accelerates your efforts in the transformational process.
Introduction
Big Data is creating significant new opportunities for organizations to derive new value and create competitive advantage from their most valuable asset: information. For businesses, Big Data helps drive efficiency, quality, and personalized products and services, producing improved levels of customer satisfaction and profit. For scientific efforts, Big Data analytics enable new avenues of investigation with potentially richer results and deeper insights than previously available. In many cases, Big Data analytics integrate structured and unstructured data with real-time feeds and queries, opening new paths to innovation and insight.
This book provides a practitioner's approach to some of the key techniques and tools used in Big Data analytics. Knowledge of these methods will help people become active contributors to Big Data analytics projects. The book's content is designed to assist multiple stakeholders: business and data analysts looking to add Big Data analytics skills to their portfolio; database professionals and managers of business intelligence, analytics, or Big Data groups looking to enrich their analytic skills; and college graduates investigating data science as a career field.
The content is structured in twelve chapters. The first chapter introduces the reader to the domain of Big Data, the drivers for advanced analytics, and the role of the data scientist. The second chapter presents an analytic project lifecycle designed for the particular characteristics and challenges of hypothesis-driven analysis with Big Data.
Chapter 3 examines fundamental statistical techniques in the context of the open source R analytic software environment. This chapter also highlights the importance of exploratory data analysis via visualizations and reviews the key notions of hypothesis development and testing.
Chapters 4 through 9 discuss a range of advanced analytical methods, including clustering, classification, regression analysis, time series and text analysis.
Chapters 10 and 11 focus on specific technologies and tools that support advanced analytics with Big Data. In particular, the MapReduce paradigm and its instantiation in the Hadoop ecosystem, as well as advanced topics in SQL and in-database text analytics, form the focus of these chapters.
Chapter 12 provides guidance on operationalizing Big Data analytics projects. This chapter focuses on creating the final deliverables, converting an analytics project to an ongoing asset of an organization's operation, and creating clear, useful visual outputs based on the data.
EMC Academic Alliance
University and college faculties are invited to join the Academic Alliance program to access unique "open" curriculum-based education on the following topics:
• Data Science and Big Data Analytics
• Information Storage and Management
• Cloud Infrastructure and Services
• Backup Recovery Systems and Architecture
The program provides faculty with course resources to prepare students for opportunities that exist in today's
evolving IT industry at no cost. For more information, visit http://education.EMC.com/academicalliance.
EMC Proven Professional Certification
EMC Proven Professional is a leading education and certification program in the IT industry, providing comprehensive coverage of information storage technologies, virtualization, cloud computing, data science/Big Data analytics, and more.
Being proven means investing in yourself and formally validating your expertise.
This book prepares you for Data Science Associate (EMCDSA) certification. Visit http://education.EMC.com for details.
INTRODUCTION TO BIG DATA ANALYTICS
Much has been written about Big Data and the need for advanced analytics within industry, academia,
and government. Availability of new data sources and the rise of more complex analytical opportunities
have created a need to rethink existing data architectures to enable analytics that take advantage of Big Data. In addition, significant debate exists about what Big Data is and what kinds of skills are required to make best use of it. This chapter explains several key concepts to clarify what is meant by Big Data, why
advanced analytics are needed, how Data Science differs from Business Intelligence (BI), and what new
roles are needed for the new Big Data ecosystem.
1.1 Big Data Overview
Data is created constantly, and at an ever-increasing rate. Mobile phones, social media, imaging technologies
to determine a medical diagnosis-all these and more create new data that must be stored somewhere for some purpose. Devices and sensors automatically generate diagnostic information that needs to be stored and processed in real time. Merely keeping up with this huge influx of data is difficult, but substantially more challenging is analyzing vast amounts of it, especially when it does not conform to traditional
notions of data structure, to identify meaningful patterns and extract useful information. These challenges of the data deluge present the opportunity to transform business, government, science, and everyday life.
Several industries have led the way in developing their ability to gather and exploit data:
• Credit card companies monitor every purchase their customers make and can identify fraudulent purchases with a high degree of accuracy using rules derived by processing billions of transactions.
• Mobile phone companies analyze subscribers' calling patterns to determine, for example, whether a caller's frequent contacts are on a rival network. If that rival network is offering an attractive promotion that might cause the subscriber to defect, the mobile phone company can proactively offer the subscriber an incentive to remain in her contract.
• For companies such as LinkedIn and Facebook, data itself is their primary product. The valuations of these companies are heavily derived from the data they gather and host, which contains more and more intrinsic value as the data grows.
Three attributes stand out as defining Big Data characteristics:
• Huge volume of data: Rather than thousands or millions of rows, Big Data can be billions of rows and millions of columns.
• Complexity of data types and structures: Big Data reflects the variety of new data sources, formats, and structures, including digital traces being left on the web and other digital repositories for subsequent analysis.
• Speed of new data creation and growth: Big Data can describe high velocity data, with rapid data ingestion and near real time analysis.
Although the volume of Big Data tends to attract the most attention, generally the variety and velocity of the data provide a more apt definition of Big Data. (Big Data is sometimes described as having 3 Vs: volume, variety, and velocity.) Due to its size or structure, Big Data cannot be efficiently analyzed using only
traditional databases or methods. Big Data problems require new tools and technologies to store, manage, and realize the business benefit. These new tools and technologies enable creation, manipulation, and
management of large datasets and the storage environments that house them. Another definition of Big Data comes from the McKinsey Global report from 2011:
Big Data is data whose scale, distribution, diversity, and/or timeliness require the use of new technical architectures and analytics to enable insights that unlock new sources of business value.
McKinsey & Co.; Big Data: The Next Frontier for Innovation, Competition, and Productivity [1]
McKinsey's definition of Big Data implies that organizations will need new data architectures and analytic sandboxes, new tools, new analytical methods, and an integration of multiple skills into the new role of the data scientist, which will be discussed in Section 1.3. Figure 1-1 highlights several sources of the Big
Data deluge.
FIGURE 1-1 What's driving the data deluge: mobile sensors, smart grids, social media, geophysical exploration, video surveillance, medical imaging, video rendering, and gene sequencing
The rate of data creation is accelerating, driven by many of the items in Figure 1-1.
Social media and genetic sequencing are among the fastest-growing sources of Big Data and examples of untraditional sources of data being used for analysis.
For example, in 2012 Facebook users posted 700 status updates per second worldwide, which can be leveraged to deduce latent interests or political views of users and show relevant ads. For instance, an update in which a woman changes her relationship status from "single" to "engaged" would trigger ads
on bridal dresses, wedding planning, or name-changing services. Facebook can also construct social graphs to analyze which users are connected to each other as an
interconnected network. In March 2013, Facebook released a new feature called "Graph Search," enabling users and developers to search social graphs for people with similar interests, hobbies, and shared locations.
Another example comes from genomics. Genetic sequencing and human genome mapping provide a detailed understanding of genetic makeup and lineage. The health care industry is looking toward these
advances to help predict which illnesses a person is likely to get in his lifetime and take steps to avoid these maladies or reduce their impact through the use of personalized medicine and treatment. Such tests also
highlight typical responses to different medications and pharmaceutical drugs, heightening risk awareness of specific drug treatments.
While data has grown, the cost to perform this work has fallen dramatically. The cost to sequence one human genome has fallen from $100 million in 2001 to $10,000 in 2011, and the cost continues to drop. Now, websites such as 23andme (Figure 1-2) offer genotyping for less than $100. Although genotyping analyzes
only a fraction of a genome and does not provide as much granularity as genetic sequencing, it does point
to the fact that data and complex analysis are becoming more prevalent and less expensive to deploy.
FIGURE 1-2 Examples of what can be learned through genotyping, from 23andme.com
As illustrated by the examples of social media and genetic sequencing, individuals and organizations both derive benefits from analysis of ever-larger and more complex data sets that require increasingly powerful analytical capabilities.
1.1.1 Data Structures
Big data can come in multiple forms, including structured and non-structured data such as financial
data, text files, multimedia files, and genetic mappings. Contrary to much of the traditional data analysis performed by organizations, most of the Big Data is unstructured or semi-structured in nature, which
requires different techniques and tools to process and analyze. [2] Distributed computing environments and massively parallel processing (MPP) architectures that enable parallelized data ingest and analysis are
the preferred approach to process such complex data. With this in mind, this section takes a closer look at data structures. Figure 1-3 shows four types of data structures, with 80-90% of future data growth coming from non-
structured data types. [2] Though different, the four are commonly mixed. For example, a classic Relational Database Management System (RDBMS) may store call logs for a software support call center. The RDBMS
may store characteristics of the support calls as typical structured data, with attributes such as time stamps, machine type, problem type, and operating system. In addition, the system will likely have unstructured,
quasi- or semi-structured data, such as free-form call log information taken from an e-mail ticket of the problem, customer chat history, or transcript of a phone call describing the technical problem and the solution, or audio file of the phone call conversation. Many insights could be extracted from the unstructured,
quasi- or semi-structured data in the call center data.
FIGURE 1-3 Big Data growth is increasingly unstructured
Although analyzing structured data tends to be the most familiar technique, a different technique is required to meet the challenges to analyze semi-structured data (shown as XML), quasi-structured (shown as a clickstream), and unstructured data.
Here are examples of how each of the four main types of data structures may look.
• Structured data: Data containing a defined data type, format, and structure (that is, transaction data, online analytical processing [OLAP] data cubes, traditional RDBMS, CSV files, and even simple spreadsheets). See Figure 1-4.
FIGURE 1-4 Example of structured data (a Summer Food Service Program table of participation and expenditures by fiscal year)
• Semi-structured data: Textual data files with a discernible pattern that enables parsing (such as Extensible Markup Language [XML] data files that are self-describing and defined by an XML schema). See Figure 1-5.
• Quasi-structured data: Textual data with erratic data formats that can be formatted with effort, tools, and time (for instance, web clickstream data that may contain inconsistencies in data values and formats). See Figure 1-6.
• Unstructured data: Data that has no inherent structure, which may include text documents, PDFs, images, and video. See Figure 1-7. (A short R sketch after this list illustrates the structured, semi-structured, and unstructured cases.)
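To make these distinctions concrete, here is a minimal R sketch; the file contents, XML tags, and field names below are invented purely for illustration and are not taken from any real call center system. It loads a small structured dataset into a data frame, parses a semi-structured XML fragment with the xml2 package, and shows that an unstructured free-text note exposes no schema to query until text analysis methods, such as those in Chapter 9, are applied.

# Structured: a defined data type, format, and schema (hypothetical support-call log)
calls <- read.csv(text = "timestamp,machine_type,problem_type,os
2015-01-06 09:12:00,server,disk,linux
2015-01-06 09:14:30,laptop,network,windows", stringsAsFactors = FALSE)
str(calls)                  # a data.frame with one typed column per attribute

# Semi-structured: self-describing XML with a discernible, parsable pattern
library(xml2)
ticket <- read_xml("<ticket id='42'><machine>server</machine><problem>disk failure</problem></ticket>")
xml_text(xml_find_first(ticket, ".//problem"))      # "disk failure"

# Unstructured: free-form text with no inherent structure to query directly
note <- "Customer reports intermittent disk errors after last night's firmware patch."
nchar(note)                 # only generic operations apply until text analysis is performed

The increasing effort across the three cases mirrors the discussion above: the structured file maps directly onto rows and columns, the XML fragment needs a parser that understands its tags, and the free text requires the text analysis techniques covered later in the book before patterns can be extracted. The quasi-structured case is illustrated with the clickstream example later in this section.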
Quasi-structured data is a common phenomenon that bears closer scrutiny. Consider the following example. A user attends the EMC World conference and subsequently runs a Google search online to find information related to EMC and Data Science. This would produce a URL such as https://www.google.com/#q=EMC+data+science and a list of results, such as in the first graphic of Figure 1-6.
FIGURE 1-5 Example of semi-structured data
After doing this search, the user may choose the second link, to read more about the headline "Data Scientist-EMC Education, Training, and Certification." This brings the user to an emc.com site focused on this topic and a new URL, https://education.emc.com/guest/campaign/data_science.aspx, that displays the page shown as (2) in Figure 1-6. Arriving at this site, the user may decide to click to learn more about the process of becoming certified in data science. The user chooses a link toward the top of the page on Certifications, bringing the user to a new URL: https://education.emc.com/guest/certification/framework/stf/data_science.aspx, which is (3) in Figure 1-6.
Visiting these three websites adds three URLs to the log files monitoring the user's computer or network
use. These three URLs are:
https://www.google.com/#q=EMC+data+science
https://education.emc.com/guest/campaign/data_science.aspx
https://education.emc.com/guest/certification/framework/stf/data_science.aspx
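To show why such data is considered quasi-structured rather than unstructured, the following R sketch applies some simple, assumed parsing rules (they are illustrative only, not part of any product or standard) to pull a host name and a rough topic out of the three log entries above. Real clickstream logs are messier and typically need far more conditioning before analysis.

# The three log entries from the example clickstream
urls <- c("https://www.google.com/#q=EMC+data+science",
          "https://education.emc.com/guest/campaign/data_science.aspx",
          "https://education.emc.com/guest/certification/framework/stf/data_science.aspx")

# Extract the host portion of each visited URL
hosts <- sub("^https?://([^/]+)/.*$", "\\1", urls)

# Extract a crude topic: the search query when present, otherwise the page name
topics <- ifelse(grepl("#q=", urls),
                 sub("^.*#q=", "", urls),                    # text after #q= in a search URL
                 sub("^.*/([^/]+)\\.aspx$", "\\1", urls))    # page name before .aspx otherwise

data.frame(host = hosts, topic = topics)
# hosts:  www.google.com, education.emc.com, education.emc.com
# topics: EMC+data+science, data_science, data_science

Even this toy example surfaces a usable pattern: one search on a public engine followed by two visits to data science pages on education.emc.com, which is the kind of signal a clickstream analysis aggregates across many users.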
FIGURE 1-6 Example of EMC Data Science search results
FIGURE 1-7 Example of unstructured data: video about Antarctica expedition [3]
This set of three URLs reflects the websites and actions taken to find Data Science information related
to EMC. Together, this comprises a clickstream that can be parsed and mined by data scientists to discover
usage patterns and uncover relationships among clicks and areas of interest on a website or group of sites. The four data types described in this chapter are sometimes generalized into two groups: structured
and unstructured data. Big Data describes new kinds of data with which most organizations may not be
used to working. With this in mind, the next section discusses common technology architectures from the standpoint of someone wanting to analyze Big Data.
1.1.2 Analyst Perspective on Data Repositories
The introduction of spreadsheets enabled business users to create simple logic on data structured in rows
and columns and create their own analyses of business problems. Database administrator training is not required to create spreadsheets: They can be set up to do many things quickly and independently of information technology (IT) groups. Spreadsheets are easy to share, and end users have control over the logic involved. However, their proliferation can result in "many versions of the truth." In other words, it can be challenging to determine if a particular user has the most relevant version of a spreadsheet, with
the most current data and logic in it. Moreover, if a laptop is lost or a file becomes corrupted, the data and logic within the spreadsheet could be lost. This is an ongoing challenge because spreadsheet programs such as Microsoft Excel still run on many computers worldwide. With the proliferation of data islands (or spread marts), the need to centralize the data is more pressing than ever.
As data needs grew, so did more scalable data warehousing solutions. These technologies enabled
data to be managed centrally, providing benefits of security, failover, and a single repository where users
could rely on getting an "official" source of data for financial reporting or other mission-critical tasks. This structure also enabled the creation of OLAP cubes and BI analytical tools, which provided quick access to a set of dimensions within an RDBMS. More advanced features enabled performance of in-depth analytical techniques such as regressions and neural networks. Enterprise Data Warehouses (EDWs) are critical for reporting and BI tasks and solve many of the problems that proliferating spreadsheets introduce, such as which of multiple versions of a spreadsheet is correct. EDWs, supported by a good BI strategy, provide direct data feeds from sources that are centrally managed, backed up, and secured.

Despite the benefits of EDWs and BI, these systems tend to restrict the flexibility needed to perform robust or exploratory data analysis. With the EDW model, data is managed and controlled by IT groups and database administrators (DBAs), and data analysts must depend on IT for access and changes to the data schemas. This imposes longer lead times for analysts to get data; much of that time is spent waiting for approvals rather than starting meaningful work. Additionally, EDW rules often restrict analysts from building their own datasets. Consequently, it is common for additional systems containing critical data for constructing analytic datasets, managed locally by power users, to emerge. IT groups generally dislike the existence of data sources outside their control because, unlike an EDW, these datasets are not managed, secured, or backed up. From an analyst perspective, EDW and BI solve problems related to data accuracy and availability. However, EDW and BI introduce new problems related to flexibility and agility, which were less pronounced when dealing with spreadsheets.
A solution to this problem is the analytic sandbox, which attempts to resolve the conflict for analysts and data scientists between the EDW and more formally managed corporate data. In this model, the IT group may still manage the analytic sandboxes, but they are purposefully designed to enable robust analytics while being centrally managed and secured. These sandboxes, often referred to as workspaces, are designed to enable teams to explore many datasets in a controlled fashion and are not typically used for enterprise-level financial reporting and sales dashboards.

Analytic sandboxes often enable high-performance computing using in-database processing, in which the analytics run within the database itself. The idea is that performance of the analysis will be better if the analytics are run in the database itself, rather than bringing the data to an analytical tool that resides somewhere else. In-database analytics, discussed further in Chapter 11, "Advanced Analytics - Technology and Tools: In-Database Analytics," creates relationships to multiple data sources within an organization and saves the time spent creating these data feeds on an individual basis. In-database processing for deep analytics enables faster turnaround time for developing and executing new analytic models, while reducing, though not eliminating, the cost associated with data stored in local, "shadow" file systems. In addition, rather than the typical structured data in the EDW, analytic sandboxes can house a greater variety of data, such as raw data, textual data, and other kinds of unstructured data, without interfering with critical production databases. Table 1-1 summarizes the characteristics of the data repositories mentioned in this section.
TABLE 1-1 Types of Data Repositories, from an Analyst Perspective

Data Repository                     Characteristics

Spreadsheets and data marts         Spreadsheets and low-volume databases for record keeping
("spreadmarts")                     Analyst depends on data extracts.

Data Warehouses                     Centralized data containers in a purpose-built space
                                    Supports BI and reporting, but restricts robust analyses
                                    Analyst dependent on IT and DBAs for data access and schema changes
                                    Analysts must spend significant time to get aggregated and disaggregated data extracts from multiple sources.

Analytic Sandbox (workspaces)       Data assets gathered from multiple sources and technologies for analysis
                                    Enables flexible, high-performance analysis in a nonproduction environment; can leverage in-database processing
                                    Reduces costs and risks associated with data replication into "shadow" file systems
                                    "Analyst owned" rather than "DBA owned"
There are several things to consider with Big Data Analytics projects to ensure the approach fits with
the desired goals. Due to the characteristics of Big Data, these projects lend themselves to decision sup-
port for high-value, strategic decision making with high processing complexity. The analytic techniques
used in this context need to be iterative and flexible, due to the high volume of data and its complexity.
Performing rapid and complex analysis requires high throughput network connections and a consideration
for the acceptable amount of latency. For instance, developing a real-time product recommender for a website imposes greater system demands than developing a near-real-time recommender, which may still provide acceptable performance, have slightly greater latency, and may be cheaper to deploy. These
considerations require a different approach to thinking about analytics challenges, which will be explored
further in the next section.
1.2 State of the Practice in Analytics

Current business problems provide many opportunities for organizations to become more analytical and data driven, as shown in Table 1-2.
TABLE 1-2 Business Drivers for Advanced Analytics

Business Driver                              Examples

Optimize business operations                 Sales, pricing, profitability, efficiency
Identify business risk                       Customer churn, fraud, default
Predict new business opportunities           Upsell, cross-sell, best new customer prospects
Comply with laws or regulatory requirements  Anti-Money Laundering, Fair Lending, Basel II-III, Sarbanes-Oxley (SOX)
Table 1-2 outlines four categories of common business problems that organizations contend with where they have an opportunity to leverage advanced analytics to create competitive advantage. Rather than only performing standard reporting on these areas, organizations can apply advanced analytical techniques to optimize processes and derive more value from these common tasks. The first three examples do not represent new problems. Organizations have been trying to reduce customer churn, increase sales, and cross-sell customers for many years. What is new is the opportunity to fuse advanced analytical techniques with Big Data to produce more impactful analyses for these traditional problems. The last example portrays emerging regulatory requirements. Many compliance and regulatory laws have been in existence for decades, but additional requirements are added every year, which represent additional complexity and data requirements for organizations. Laws related to anti-money laundering (AML) and fraud prevention require advanced analytical techniques to comply with and manage properly.
1.2.1 BI Versus Data Science

The four business drivers shown in Table 1-2 require a variety of analytical techniques to address them properly. Although much is written generally about analytics, it is important to distinguish between BI and Data Science. As shown in Figure 1-8, there are several ways to compare these groups of analytical techniques.
One way to evaluate the type of analysis being performed is to examine the time horizon and the kind of analytical approaches being used. BI tends to provide reports, dashboards, and queries on business questions for the current period or in the past. BI systems make it easy to answer questions related to quarter-to-date revenue, progress toward quarterly targets, and how much of a given product was sold in a prior quarter or year. These questions tend to be closed-ended and explain current or past behavior, typically by aggregating historical data and grouping it in some way. BI provides hindsight and some insight and generally answers questions related to "when" and "where" events occurred.
By comparison, Data Science tends to use disaggregated data in a more forward-looking, exploratory way, focusing on analyzing the present and enabling informed decisions about the future. Rather than aggregating historical data to look at how many of a given product sold in the previous quarter, a team may employ Data Science techniques such as time series analysis, further discussed in Chapter 8, "Advanced Analytical Theory and Methods: Time Series Analysis," to forecast future product sales and revenue more accurately than extending a simple trend line. In addition, Data Science tends to be more exploratory in nature and may use scenario optimization to deal with more open-ended questions. This approach provides insight into current activity and foresight into future events, while generally focusing on questions related to "how" and "why" events occur.
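As a small illustration of this contrast, the base-R sketch below fits both a straight trend line (the BI-style extrapolation) and a Holt-Winters model that also captures seasonality to R's built-in AirPassengers series; this is only a toy comparison, and Chapter 8 treats time series methods such as ARIMA in depth.

    # Contrast extending a simple trend line with a seasonal time series forecast (base R)
    y  <- AirPassengers                              # monthly airline passengers, 1949-1960
    df <- data.frame(t = as.numeric(time(y)), passengers = as.numeric(y))

    # "Extend the trend line": a straight line through history, projected 12 months ahead
    trend    <- lm(passengers ~ t, data = df)
    future_t <- data.frame(t = max(df$t) + (1:12) / 12)
    trend_fc <- predict(trend, newdata = future_t)

    # A seasonal model of the same history produces a more realistic forward view
    hw    <- HoltWinters(y)                          # level, trend, and monthly seasonality
    hw_fc <- predict(hw, n.ahead = 12)

    # Side by side: the seasonal forecast tracks the summer peaks the straight line misses
    round(cbind(trend_line = trend_fc, holt_winters = as.numeric(hw_fc)))

This is the difference in spirit between reporting what the trend has been and forecasting what is likely to happen next.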
Where BI problems tend to require highly structured data organized in rows and columns for accurate reporting, Data Science projects tend to use many types of data sources, including large or unconventional datasets. Depending on an organization's goals, it may choose to embark on a BI project if it is doing reporting, creating dashboards, or performing simple visualizations, or it may choose Data Science projects if it needs to do a more sophisticated analysis with disaggregated or varied datasets.
[Figure 1-8 positions the two disciplines along an analytical-approach axis running from explanatory to exploratory and a time axis running from past to future. Business Intelligence occupies the explanatory/past region: its typical techniques and data types are standard and ad hoc reporting, dashboards, alerts, queries, and details on demand over structured data from traditional sources in manageable datasets, and its common questions are "What happened last quarter?", "How many units sold?", and "Where is the problem? In which situations?". Predictive Analytics and Data Mining (Data Science) occupies the exploratory/future region: its typical techniques and data types are optimization, predictive modeling, forecasting, and statistical analysis over structured and unstructured data from many types of sources in very large datasets, and its common questions are "What if...?", "What is the optimal scenario for our business?", "What will happen next?", "What if these trends continue?", and "Why is this happening?".]

FIGURE 1-8 Comparing BI with Data Science
1.2.2 Current Analytical Architecture
As described earlier, Data Science projects need workspaces that are purpose-built for experimenting with
data, with flexible and agile data architectures. Most organizations still have data warehouses that provide
excellent support for traditional reporting and simple data analysis activities but unfortunately have a more
difficult time supporting more robust analyses. This section examines a typical analytical data architecture
that may exist within an organization.
Figure 1-9 shows a typical data architecture and several of the challenges it presents to data scientists
and others trying to do advanced analytics. This section examines the data flow to the Data Scientist and
how this individual fits into the process of getting data to analyze on projects.
FIGURE 1-9 Typical analytic architecture
[Figure 1-9 shows data flowing from source systems into an enterprise data warehouse and out to analysts, dashboards, reports, and alerts.]
1. For data sources to be loaded into the data warehouse, data needs to be well understood, structured, and normalized with the appropriate data type definitions. Although this kind of centralization enables security, backup, and failover of highly critical data, it also means that data typically must go through significant preprocessing and checkpoints before it can enter this sort of controlled environment, which does not lend itself to data exploration and iterative analytics.

2. As a result of this level of control on the EDW, additional local systems may emerge in the form of departmental warehouses and local data marts that business users create to accommodate their need for flexible analysis. These local data marts may not have the same constraints for security and structure as the main EDW and allow users to do some level of more in-depth analysis. However, these one-off systems reside in isolation, often are not synchronized or integrated with other data stores, and may not be backed up.

3. Once in the data warehouse, data is read by additional applications across the enterprise for BI and reporting purposes. These are high-priority operational processes getting critical data feeds from the data warehouses and repositories.

4. At the end of this workflow, analysts get data provisioned for their downstream analytics. Because users generally are not allowed to run custom or intensive analytics on production databases, analysts create data extracts from the EDW to analyze data offline in R or other local analytical tools. Many times these tools are limited to in-memory analytics on desktops analyzing samples of data, rather than the entire population of a dataset. Because these analyses are based on data extracts, they reside in a separate location, and the results of the analysis, and any insights on the quality of the data or anomalies, rarely are fed back into the main data repository.
Because new data sources slowly accumulate in the EDW due to the rigorous validation and data structuring process, data is slow to move into the EDW, and the data schema is slow to change.
Departmental data warehouses may have been originally designed for a specific purpose and set of business needs, but over time evolved to house more and more data, some of which may be forced into existing schemas to enable BI and the creation of OLAP cubes for analysis and reporting. Although the EDW achieves the objective of reporting and sometimes the creation of dashboards, EDWs generally limit the ability of analysts to iterate on the data in a separate nonproduction environment where they can conduct in-depth analytics or perform analysis on unstructured data.
The typical data architectures just described are designed for storing and processing mission-critical data, supporting enterprise applications, and enabling corporate reporting activities. Although reports and dashboards are still important for organizations, most traditional data architectures inhibit data exploration and more sophisticated analysis. Moreover, traditional data architectures have several additional implications for data scientists.
• High-value data is hard to reach and leverage, and predictive analytics and data mining activities are last in line for data. Because the EDWs are designed for central data management and reporting, those wanting data for analysis are generally prioritized after operational processes.
• Data moves in batches from the EDW to local analytical tools. This workflow means that data scientists are limited to performing in-memory analytics (such as with R, SAS, SPSS, or Excel), which will restrict the size of the datasets they can use. As such, analysis may be subject to constraints of sampling, which can skew model accuracy (a brief sampling sketch follows this list).
• Data Science projects will remain isolated and ad hoc, rather than centrally managed. The implication of this isolation is that the organization can never harness the power of advanced analytics in a scalable way, and Data Science projects will exist as nonstandard initiatives, which are frequently not aligned with corporate business goals or strategy.
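The sampling constraint noted in the second bullet can be sketched as follows; the extract file, the 10 percent sampling rate, and the column names are hypothetical and only illustrate how a desktop analysis ends up modeling a fraction of the population.

    # In-memory analysis of an EDW extract (file name, rate, and columns are hypothetical)
    extract <- read.csv("edw_extract.csv")                 # batch extract pulled from the EDW

    set.seed(42)                                           # make the sample reproducible
    idx       <- sample(nrow(extract), size = floor(0.10 * nrow(extract)))
    sample_df <- extract[idx, ]                            # model on a 10% sample to fit in memory

    # Any model fit this way carries sampling error that a full-population,
    # in-database or distributed approach would avoid
    model <- glm(churned ~ tenure + monthly_spend, data = sample_df, family = binomial)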
All these symptoms of the traditional data architecture result in a slow "time-to-insight" and lower business impact than could be achieved if the data were more readily accessible and supported by an environment that promoted advanced analytics. As stated earlier, one solution to this problem is to introduce analytic sandboxes to enable data scientists to perform advanced analytics in a controlled and sanctioned way. Meanwhile, the current data warehousing solutions continue offering reporting and BI services to support management and mission-critical operations.
1.2.3 Drivers of Big Data

To better understand the market drivers related to Big Data, it is helpful to first understand some of the history of data stores and the kinds of repositories and tools used to manage them.
As shown in Figure 1-10, in the 1990s the volume of information was often measured in terabytes. Most organizations analyzed structured data in rows and columns and used relational databases and data warehouses to manage large stores of enterprise information. The following decade saw a proliferation of different kinds of data sources, mainly productivity and publishing tools such as content management repositories and network-attached storage systems, to manage this kind of information, and the data began to increase in size and started to be measured at petabyte scales. In the 2010s, the information that organizations try to manage has broadened to include many other kinds of data. In this era, everyone and everything is leaving a digital footprint. Figure 1-10 shows a summary perspective on sources of Big Data generated by new applications and the scale and growth rate of the data. These applications, which generate data volumes that can be measured at exabyte scale, provide opportunities for new analytics and for driving new value for organizations. The data now comes from multiple sources, such as these:
• Medical information, such as genomic sequencing and diagnostic imaging
• Photos and video footage uploaded to the World Wide Web
• Video surveillance, such as the thousands of video cameras spread across a city
• Mobile devices, which provide geospatial location data of the users, as well as metadata about text messages, phone calls, and application usage on smartphones
• Smart devices, which provide sensor-based collection of information from smart electric grids, smart buildings, and many other public and industry infrastructures
• Nontraditional IT devices, including the use of radio-frequency identification (RFID) readers, GPS navigation systems, and seismic processing
[Figure 1-10 shows three eras of data growth: the 1990s (RDBMS and data warehouses), with data measured in terabytes (1 TB = 1,000 GB); the 2000s (content and digital asset management), with data measured in petabytes (1 PB = 1,000 TB); and the 2010s (NoSQL and key-value stores), with data that will be measured in exabytes (1 EB = 1,000 PB).]

FIGURE 1-10 Data evolution and the rise of Big Data sources
The Big Data trend is generating an enormous amount of information from many new sources. This data deluge requires advanced analytics and new market players to take advantage of these opportunities and new market dynamics, which will be discussed in the following section.
1.2.4 Emerging Big Data Ecosystem and a New Approach to Analytics

Organizations and data collectors are realizing that the data they can gather from individuals contains intrinsic value and, as a result, a new economy is emerging. As this new digital economy continues to
evolve, the market sees the introduction of data vendors and data cleaners that use crowdsourcing (such
as Mechanical Turk and GalaxyZoo) to test the outcomes of machine learning techniques. Other vendors offer added value by repackaging open source tools in a simpler way and bringing the tools to market.
Vendors such as Cloudera, Hortonworks, and Pivotal have provided this value-add for the open source
framework Hadoop.
As the new ecosystem takes shape, there are four main groups of players within this interconnected
web. These are shown in Figure 1-11.
• Data devices [shown in the (1) section of Figure 1-11] and the "Sensornet" gather data from multiple locations and continuously generate new data about this data. For each gigabyte of new data created, an additional petabyte of data is created about that data. [2]
• For example, consider someone playing an online video game through a PC, game console, or smartphone. In this case, the video game provider captures data about the skill and levels attained by the player. Intelligent systems monitor and log how and when the user plays the game. As a consequence, the game provider can fine-tune the difficulty of the game, suggest other related games that would most likely interest the user, and offer additional equipment and enhancements for the character based on the user's age, gender, and interests. This information may get stored locally or uploaded to the game provider's cloud to analyze the gaming habits and opportunities for upsell and cross-sell, and identify archetypical profiles of specific kinds of users.
• Smartphones provide another rich source of data. In addition to messaging and basic phone usage, they store and transmit data about Internet usage, SMS usage, and real-time location. This metadata can be used for analyzing traffic patterns by scanning the density of smartphones in locations to track the speed of cars or the relative traffic congestion on busy roads. In this way, GPS devices in cars can give drivers real-time updates and offer alternative routes to avoid traffic delays.
• Retail shopping loyalty cards record not just the amount an individual spends, but the locations of stores that person visits, the kinds of products purchased, the stores where goods are purchased most often, and the combinations of products purchased together. Collecting this data provides insights into shopping and travel habits and the likelihood of successful advertisement targeting for certain types of retail promotions.
• Data collectors [the blue ovals, identified as (2) within Figure 1-11] include sample entities that collect data from the devices and users.
• Data results from a cable TV provider tracking the shows a person watches, which TV channels someone will and will not pay to watch on demand, and the prices someone is willing to pay for premium TV content
• Retail stores tracking the path a customer takes through their store while pushing a shopping cart with an RFID chip so they can gauge which products get the most foot traffic, using geospatial data collected from the RFID chips
• Data aggregators (the dark gray ovals in Figure 1-11, marked as (3)) make sense of the data collected from the various entities from the "SensorNet" or the "Internet of Things." These organizations compile data from the devices and usage patterns collected by government agencies, retail stores,
and websites. In turn, they can choose to transform and package the data as products to sell to list brokers, who may want to generate marketing lists of people who may be good targets for specific ad campaigns.
• Data users and buyers are denoted by (4) in Figure 1-11. These groups directly benefit from the data collected and aggregated by others within the data value chain.
• Retail banks, acting as a data buyer, may want to know which customers have the highest likelihood to apply for a second mortgage or a home equity line of credit. To provide input for this analysis, retail banks may purchase data from a data aggregator. This kind of data may include demographic information about people living in specific locations; people who appear to have a specific level of debt, yet still have solid credit scores (or other characteristics such as paying bills on time and having savings accounts) that can be used to infer creditworthiness; and those who are searching the web for information about paying off debts or doing home remodeling projects. Obtaining data from these various sources and aggregators will enable a more targeted marketing campaign, which would have been more challenging before Big Data due to the lack of information or high-performing technologies.
• Using technologies such as Hadoop to perform natural language processing on unstructured, textual data from social media websites, users can gauge the reaction to events such as presidential campaigns. People may, for example, want to determine public sentiment toward a candidate by analyzing related blogs and online comments. Similarly, data users may want to track and prepare for natural disasters by identifying which areas a hurricane affects first and how it moves, based on which geographic areas are tweeting about it or discussing it via social media (a toy scoring sketch follows this list).
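The social media example above can be reduced to a toy sketch. The tweets and the tiny sentiment lexicon below are invented for illustration; a production system would run this kind of scoring at scale on a platform such as Hadoop rather than in a single R session.

    # Toy sentiment tally (invented tweets and word lists; not a production NLP pipeline)
    tweets <- c("Great speech by the candidate tonight",
                "Terrible answers on the economy",
                "Candidate looked confident and strong")

    positive <- c("great", "confident", "strong")
    negative <- c("terrible", "weak")

    score_tweet <- function(text) {
      words <- strsplit(tolower(text), "[^a-z]+")[[1]]       # crude tokenization
      sum(words %in% positive) - sum(words %in% negative)    # positive hits minus negative hits
    }

    sapply(tweets, score_tweet)    # one rough sentiment score per tweet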
[Figure 1-11 depicts the emerging Big Data ecosystem as an interconnected web of (1) data devices such as game consoles, credit card readers, video and medical imaging equipment, computers, and phones; (2) data collectors; (3) data aggregators; and (4) data users and buyers such as media companies, law enforcement, delivery services, private investigators, and lawyers.]

FIGURE 1-11 Emerging Big Data ecosystem
As illustrated by this emerging Big Data ecosystem, the kinds of data and the related market dynamics vary greatly. These datasets can include sensor data, text, structured datasets, and social media. With this in mind, it is worth recalling that these datasets will not work well within traditional EDWs, which were architected to streamline reporting and dashboards and be centrally managed. Instead, Big Data problems and projects require different approaches to succeed.
Analysts need to partner with IT and DBAs to get the data they need within an analytic sandbox. A typical analytic sandbox contains raw data, aggregated data, and data with multiple kinds of structure. The sandbox enables robust exploration of data, and it takes a savvy user to take full advantage of the data in the sandbox environment.
1.3 Key Roles for the New Big Data Ecosystem

As explained in the context of the Big Data ecosystem in Section 1.2.4, new players have emerged to curate, store, produce, clean, and transact data. In addition, the need for applying more advanced analytical techniques to increasingly complex business problems has driven the emergence of new roles, new technology platforms, and new analytical methods. This section explores the new roles that address these needs, and subsequent chapters explore some of the analytical methods and technology platforms.
The Big Data ecosystem demands three categories of roles, as shown in Figure 1-12. These roles were
described in the McKinsey Global study on Big Data, from May 2011 [1].
[Figure 1-12 lists three key roles of the new data ecosystem: Deep Analytical Talent (data scientists), with a projected U.S. talent gap of 140,000 to 190,000; Data Savvy Professionals, with a projected U.S. talent gap of 1.5 million; and Technology and Data Enablers. The figures represent the projected U.S. talent gap in 2018, as shown in the May 2011 McKinsey article "Big Data: The Next Frontier for Innovation, Competition, and Productivity."]
FIGURE 1-12 Key roles of the new Big Data ecosystem
The first group, Deep Analytical Talent, is technically savvy, with strong analytical skills. Members possess a combination of skills to handle raw, unstructured data and to apply complex analytical techniques at