Assignment 3

For this assignment, please provide responses to the following items:
(a) Provide a comprehensive response describing naive Bayes.
(b) Explain how naive Bayes is used to filter spam. Please make sure to explain how this process works.
(c) Explain how naive Bayes is used by insurance companies to detect potential fraud in the claim process.
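
For orientation before answering: naive Bayes applies Bayes' theorem under the simplifying assumption that features (for example, the words in a message) are conditionally independent given the class, so a message is scored by multiplying P(class) by P(word | class) for each word it contains. The following is a minimal sketch in R of the spam-filtering idea from part (b); it assumes the e1071 package is installed, and the toy words and labels are invented for illustration, not taken from the text.

library(e1071)

# Toy training set: does each message contain the words "free" and "offer",
# and was it labeled spam or ham?
train <- data.frame(
  free  = factor(c("yes", "yes", "no", "no", "no"), levels = c("no", "yes")),
  offer = factor(c("yes", "no", "no", "yes", "no"), levels = c("no", "yes")),
  class = factor(c("spam", "spam", "ham", "ham", "ham"))
)

# Estimate P(class) and P(word | class) from the training data
model <- naiveBayes(class ~ ., data = train)

# Score a new message that contains "free" but not "offer"
newmsg <- data.frame(free  = factor("yes", levels = c("no", "yes")),
                     offer = factor("no",  levels = c("no", "yes")))
predict(model, newmsg, type = "raw")   # posterior probability of spam vs. ham

A real spam filter works the same way, only with thousands of word features and smoothing for words never seen in one of the classes.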

Table of Contents

1. Introduction
   1. EMC Academic Alliance
   2. EMC Proven Professional Certification
2. Chapter 1: Introduction to Big Data Analytics
   1. 1.1 Big Data Overview
   2. 1.2 State of the Practice in Analytics
   3. 1.3 Key Roles for the New Big Data Ecosystem
   4. 1.4 Examples of Big Data Analytics
   5. Summary
   6. Exercises
   7. Bibliography
3. Chapter 2: Data Analytics Lifecycle
   1. 2.1 Data Analytics Lifecycle Overview
   2. 2.2 Phase 1: Discovery
   3. 2.3 Phase 2: Data Preparation
   4. 2.4 Phase 3: Model Planning
   5. 2.5 Phase 4: Model Building
   6. 2.6 Phase 5: Communicate Results
   7. 2.7 Phase 6: Operationalize
   8. 2.8 Case Study: Global Innovation Network and Analysis (GINA)
   9. Summary
   10. Exercises
   11. Bibliography
4. Chapter 3: Review of Basic Data Analytic Methods Using R
   1. 3.1 Introduction to R
   2. 3.2 Exploratory Data Analysis
   3. 3.3 Statistical Methods for Evaluation
   4. Summary
   5. Exercises
   6. Bibliography
5. Chapter 4: Advanced Analytical Theory and Methods: Clustering
   1. 4.1 Overview of Clustering
   2. 4.2 K-means
   3. 4.3 Additional Algorithms
   4. Summary
   5. Exercises
   6. Bibliography
6. Chapter 5: Advanced Analytical Theory and Methods: Association Rules
   1. 5.1 Overview
   2. 5.2 Apriori Algorithm
   3. 5.3 Evaluation of Candidate Rules
   4. 5.4 Applications of Association Rules
   5. 5.5 An Example: Transactions in a Grocery Store
   6. 5.6 Validation and Testing
   7. 5.7 Diagnostics
   8. Summary
   9. Exercises
   10. Bibliography
7. Chapter 6: Advanced Analytical Theory and Methods: Regression
   1. 6.1 Linear Regression
   2. 6.2 Logistic Regression
   3. 6.3 Reasons to Choose and Cautions
   4. 6.4 Additional Regression Models
   5. Summary
   6. Exercises
8. Chapter 7: Advanced Analytical Theory and Methods: Classification
   1. 7.1 Decision Trees
   2. 7.2 Naïve Bayes
   3. 7.3 Diagnostics of Classifiers
   4. 7.4 Additional Classification Methods
   5. Summary
   6. Exercises
   7. Bibliography
9. Chapter 8: Advanced Analytical Theory and Methods: Time Series Analysis
   1. 8.1 Overview of Time Series Analysis
   2. 8.2 ARIMA Model
   3. 8.3 Additional Methods
   4. Summary
   5. Exercises
10. Chapter 9: Advanced Analytical Theory and Methods: Text Analysis
   1. 9.1 Text Analysis Steps
   2. 9.2 A Text Analysis Example
   3. 9.3 Collecting Raw Text
   4. 9.4 Representing Text
   5. 9.5 Term Frequency—Inverse Document Frequency (TFIDF)
   6. 9.6 Categorizing Documents by Topics
   7. 9.7 Determining Sentiments
   8. 9.8 Gaining Insights
   9. Summary
   10. Exercises
   11. Bibliography
11. Chapter 10: Advanced Analytics—Technology and Tools: MapReduce and Hadoop
   1. 10.1 Analytics for Unstructured Data
   2. 10.2 The Hadoop Ecosystem
   3. 10.3 NoSQL
   4. Summary
   5. Exercises
   6. Bibliography
12. Chapter 11: Advanced Analytics—Technology and Tools: In-Database Analytics
   1. 11.1 SQL Essentials
   2. 11.2 In-Database Text Analysis
   3. 11.3 Advanced SQL
   4. Summary
   5. Exercises
   6. Bibliography
13. Chapter 12: The Endgame, or Putting It All Together
   1. 12.1 Communicating and Operationalizing an Analytics Project
   2. 12.2 Creating the Final Deliverables
   3. 12.3 Data Visualization Basics
   4. Summary
   5. Exercises
   6. References and Further Reading
   7. Bibliography
14. End User License Agreement

List of Illustrations: Figures 1.1–1.14, 2.1–2.11, 3.1–3.27, 4.1–4.13, 5.1–5.6, 6.1–6.17, 7.1–7.10, 8.1–8.22, 9.1–9.16, 10.1–10.7, 11.1–11.4, and 12.1–12.35

List of Tables: Tables 1.1–1.2, 2.1–2.3, 3.1–3.6, 6.1, 7.1–7.8, 8.1, 9.1–9.7, 10.1–10.2, 11.1–11.4, and 12.1–12.3

Introduction

Big Data is creating significant new opportunities for organizations to derive new value and create competitive advantage from their most valuable asset: information. For businesses, Big Data helps drive efficiency, quality, and personalized products and services, producing improved levels of customer satisfaction and profit. For scientific efforts, Big Data analytics enable new avenues of investigation with potentially richer results and deeper insights than previously available. In many cases, Big Data analytics integrate structured and unstructured data with real-time feeds and queries, opening new paths to innovation and insight.

This book provides a practitioner’s approach to some of the key techniques and tools used in Big Data analytics. Knowledge of these methods will help people become active contributors to Big Data analytics projects. The book’s content is designed to assist multiple stakeholders: business and data analysts looking to add Big Data analytics skills to their portfolio; database professionals and managers of business intelligence, analytics, or Big Data groups looking to enrich their analytic skills; and college graduates investigating data science as a career field.

The content is structured in twelve chapters. The first chapter introduces the reader to the domain of Big Data, the drivers for advanced analytics, and the role of the data scientist. The second chapter presents an analytic project lifecycle designed for the particular characteristics and challenges of hypothesis-driven analysis with Big Data.

Chapter 3 examines fundamental statistical techniques in the context of the open source R analytic software environment. This chapter also highlights the importance of exploratory data analysis via visualizations and reviews the key notions of hypothesis development and testing.

Chapters 4 through 9 discuss a range of advanced analytical methods, including clustering, classification, regression analysis, time series and text analysis.

Chapters 10 and 11 focus on specific technologies and tools that support advanced analytics with Big Data. In particular, the MapReduce paradigm and its instantiation in the Hadoop ecosystem, as well as advanced topics in SQL and in-database text analytics form the focus of these chapters.

Chapter 12 provides guidance on operationalizing Big Data analytics projects. This chapter focuses on creating the final deliverables, converting an analytics project to an ongoing asset of an organization’s operation, and creating clear, useful visual outputs based on the data.

EMC Academic Alliance

University and college faculties are invited to join the Academic Alliance program to access unique “open” curriculum-based education on the following topics:

Data Science and Big Data Analytics
Information Storage and Management
Cloud Infrastructure and Services
Backup Recovery Systems and Architecture

The program provides faculty, at no cost, with course resources to prepare students for opportunities that exist in today’s evolving IT industry. For more information, visit http://education.EMC.com/academicalliance.

EMC Proven Professional Certification

EMC Proven Professional is a leading education and certification program in the IT industry, providing comprehensive coverage of information storage technologies, virtualization, cloud computing, data science/Big Data analytics, and more.

Being proven means investing in yourself and formally validating your expertise.

This book prepares you for Data Science Associate (EMCDSA) certification. Visit http://education.EMC.com for details.

Chapter 1 Introduction to Big Data Analytics

Key Concepts

1. Big Data overview
2. State of the practice in analytics
3. Business Intelligence versus Data Science
4. Key roles for the new Big Data ecosystem
5. The Data Scientist
6. Examples of Big Data analytics

Much has been written about Big Data and the need for advanced analytics within industry, academia, and government. Availability of new data sources and the rise of more complex analytical opportunities have created a need to rethink existing data architectures to enable analytics that take advantage of Big Data. In addition, significant debate exists about what Big Data is and what kinds of skills are required to make best use of it. This chapter explains several key concepts to clarify what is meant by Big Data, why advanced analytics are needed, how Data Science differs from Business Intelligence (BI), and what new roles are needed for the new Big Data ecosystem.

1.1 Big Data Overview

Data is created constantly, and at an ever-increasing rate. Mobile phones, social media, imaging technologies to determine a medical diagnosis—all these and more create new data that must be stored somewhere for some purpose. Devices and sensors automatically generate diagnostic information that needs to be stored and processed in real time. Merely keeping up with this huge influx of data is difficult, but substantially more challenging is analyzing vast amounts of it, especially data that does not conform to traditional notions of data structure, to identify meaningful patterns and extract useful information. These challenges of the data deluge present the opportunity to transform business, government, science, and everyday life.

Several industries have led the way in developing their ability to gather and exploit data:

Credit card companies monitor every purchase their customers make and can identify fraudulent purchases with a high degree of accuracy using rules derived by processing billions of transactions.

Mobile phone companies analyze subscribers’ calling patterns to determine, for example, whether a caller’s frequent contacts are on a rival network. If that rival network is offering an attractive promotion that might cause the subscriber to defect, the mobile phone company can proactively offer the subscriber an incentive to remain in her contract.

For companies such as LinkedIn and Facebook, data itself is their primary product. The valuations of these companies are heavily derived from the data they gather and host, which contains more and more intrinsic value as the data grows.

Three attributes stand out as defining Big Data characteristics:

Huge volume of data: Rather than thousands or millions of rows, Big Data can be billions of rows and millions of columns.

Complexity of data types and structures: Big Data reflects the variety of new data sources, formats, and structures, including digital traces being left on the web and other digital repositories for subsequent analysis.

Speed of new data creation and growth: Big Data can describe high-velocity data, with rapid data ingestion and near-real-time analysis.

Although the volume of Big Data tends to attract the most attention, generally the variety and velocity of the data provide a more apt definition of Big Data. (Big Data is sometimes described as having 3 Vs: volume, variety, and velocity.) Due to its size or structure, Big Data cannot be efficiently analyzed using only traditional databases or methods. Big Data problems require new tools and technologies to store, manage, and realize the business benefit. These new tools and technologies enable creation, manipulation, and management of large datasets and the storage environments that house them. Another definition of Big Data comes from the McKinsey Global report from 2011:

Big Data is data whose scale, distribution, diversity, and/or timeliness require the use of new technical architectures and analytics to enable insights that unlock new sources of business value.

McKinsey & Co.; Big Data: The Next Frontier for Innovation, Competition, and Productivity [1]

McKinsey’s definition of Big Data implies that organizations will need new data architectures and analytic sandboxes, new tools, new analytical methods, and an integration of multiple skills into the new role of the data scientist, which will be discussed in Section 1.3. Figure 1.1 highlights several sources of the Big Data deluge.

Figure 1.1 What’s driving the data deluge

The rate of data creation is accelerating, driven by many of the items in Figure 1.1.

Social media and genetic sequencing are among the fastest-growing sources of Big Data and examples of untraditional sources of data being used for analysis.

For example, in 2012 Facebook users posted 700 status updates per second worldwide, which can be leveraged to deduce latent interests or political views of users and show relevant ads. For instance, an update in which a woman changes her relationship status from “single” to “engaged” would trigger ads on bridal dresses, wedding planning, or name-changing services.

Facebook can also construct social graphs to analyze which users are connected to each other as an interconnected network. In March 2013, Facebook released a new feature called “Graph Search,” enabling users and developers to search social graphs for people with similar interests, hobbies, and shared locations.

Another example comes from genomics. Genetic sequencing and human genome mapping provide a detailed understanding of genetic makeup and lineage. The health care industry is looking toward these advances to help predict which illnesses a person is likely to get in his lifetime and take steps to avoid these maladies or reduce their impact through the use of personalized medicine and treatment. Such tests also highlight typical responses to different medications and pharmaceutical drugs, heightening risk awareness of specific drug treatments.

While data has grown, the cost to perform this work has fallen dramatically. The cost to sequence one human genome has fallen from $100 million in 2001 to $10,000 in 2011, and the cost continues to drop. Now, websites such as 23andme (Figure 1.2) offer genotyping for less than $100. Although genotyping analyzes only a fraction of a genome and does not provide as much granularity as genetic sequencing, it does point to the fact that data and complex analysis are becoming more prevalent and less expensive to deploy.

Figure 1.2 Examples of what can be learned through genotyping, from 23andme.com

As illustrated by the examples of social media and genetic sequencing, individuals and organizations both derive benefits from analysis of ever-larger and more complex datasets that require increasingly powerful analytical capabilities.

1.1.1 Data Structures

Big data can come in multiple forms, including structured and non-structured data such as financial data, text files, multimedia files, and genetic mappings. Contrary to much of the traditional data analysis performed by organizations, most of the Big Data is unstructured or semi-structured in nature, which requires different techniques and tools to process and analyze. [2] Distributed computing environments and massively parallel processing (MPP) architectures that enable parallelized data ingest and analysis are the preferred approach to process such complex data.

With this in mind, this section takes a closer look at data structures.

Figure 1.3 shows four types of data structures, with 80–90% of future data growth coming from non-structured data types. [2] Though different, the four are commonly mixed. For example, a classic Relational Database Management System (RDBMS) may store call logs for a software support call center. The RDBMS may store characteristics of the support calls as typical structured data, with attributes such as time stamps, machine type, problem type, and operating system. In addition, the system will likely have unstructured, quasi-structured, or semi-structured data, such as free-form call log information taken from an e-mail ticket of the problem, customer chat history, a transcript of a phone call describing the technical problem and the solution, or an audio file of the phone call conversation. Many insights could be extracted from the unstructured, quasi-structured, and semi-structured data in the call center data.

Figure 1.3 Big Data Growth is increasingly unstructured

Although analyzing structured data tends to be the most familiar technique, different techniques are required to meet the challenges of analyzing semi-structured data (shown as XML), quasi-structured data (shown as a clickstream), and unstructured data.

Here are examples of how each of the four main types of data structures may look.

Structured data: Data containing a defined data type, format, and structure (that is, transaction data, online analytical processing [OLAP] data cubes, traditional RDBMS, CSV files, and even simple spreadsheets). See Figure 1.4.

Semi-structured data: Textual data files with a discernible pattern that enables parsing (such as Extensible Markup Language [XML] data files that are self-describing and defined by an XML schema). See Figure 1.5 and the short parsing sketch after Figure 1.7.

Quasi-structured data: Textual data with erratic data formats that can be formatted with effort, tools, and time (for instance, web clickstream data that may contain inconsistencies in data values and formats). See Figure 1.6.

Unstructured data: Data that has no inherent structure, which may include text documents, PDFs, images, and video. See Figure 1.7.

Figure 1.4 Example of structured data

Figure 1.5 Example of semi-structured data

Figure 1.6 Example of EMC Data Science search results

Figure 1.7 Example of unstructured data: video about Antarctica expedition [3]
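
Returning to the semi-structured category: what makes XML parseable is that the structure travels with the data. A minimal sketch in R follows; it assumes the xml2 package is installed, and the support-ticket snippet is invented for illustration.

library(xml2)

doc <- read_xml("<ticket id='42'><os>Linux</os><problem>disk full</problem></ticket>")
xml_attr(doc, "id")                     # "42": attributes are named in the markup
xml_text(xml_find_first(doc, ".//os"))  # "Linux": elements can be located by path

No comparable one-liner exists for the unstructured audio recording of the same support call, which is the practical difference between the categories.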

Quasi-structured data is a common phenomenon that bears closer scrutiny. Consider the following example. A user attends the EMC World conference and subsequently runs a Google search online to find information related to EMC and Data Science. This would produce a URL such as https://www.google.com/#q=EMC+data+science and a list of results, such as in the first graphic of Figure 1.6.

After doing this search, the user may choose the second link, to read more about the headline “Data Scientist—EMC Education, Training, and Certification.” This brings the user to an emc.com site focused on this topic and a new URL, https://education.emc.com/guest/campaign/data_science.aspx, that displays the page shown as (2) in Figure 1.6. Arriving at this site, the user may decide to click to learn more about the process of becoming certified in data science. The user chooses a link toward the top of the page on Certifications, bringing the user to a new URL: https://education.emc.com/guest/certification/framework/stf/data_science.aspx, which is (3) in Figure 1.6.

Visiting these three websites adds three URLs to the log files monitoring the user’s computer or network use. These three URLs are: https://www.google.com/#q=EMC+data+science

https://education.emc.com/guest/campaign/data_science.aspx

https://education.emc.com/guest/certification/framework/stf/data_science.aspx

This set of three URLs reflects the websites and actions taken to find Data Science information related to EMC. Together, this comprises a clickstream that can be parsed and mined by data scientists to discover usage patterns and uncover relationships among clicks and areas of interest on a website or group of sites.
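
As a minimal illustration of such parsing, the following base R sketch extracts the host from each of the three URLs above and tallies visits per site. A production clickstream pipeline would handle far messier logs, which is exactly what makes this data quasi-structured rather than structured.

urls <- c("https://www.google.com/#q=EMC+data+science",
          "https://education.emc.com/guest/campaign/data_science.aspx",
          "https://education.emc.com/guest/certification/framework/stf/data_science.aspx")

hosts <- sub("^https?://([^/]+)/.*$", "\\1", urls)  # pull the host out of each URL
table(hosts)                                        # visits per site in this session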

The four data types described in this chapter are sometimes generalized into two groups: structured and unstructured data. Big Data describes new kinds of data with which most organizations may not be used to working. With this in mind, the next section discusses common technology architectures from the standpoint of someone wanting to analyze Big Data.

1.1.2 Analyst Perspective on Data Repositories

The introduction of spreadsheets enabled business users to create simple logic on data structured in rows and columns and create their own analyses of business problems. Database administrator training is not required to create spreadsheets: They can be set up to do many things quickly and independently of information technology (IT) groups. Spreadsheets are easy to share, and end users have control over the logic involved. However, their proliferation can result in “many versions of the truth.” In other words, it can be challenging to determine if a particular user has the most relevant version of a spreadsheet, with the most current data and logic in it. Moreover, if a laptop is lost or a file becomes corrupted, the data and logic within the spreadsheet could be lost. This is an ongoing challenge because spreadsheet programs such as Microsoft Excel still run on many computers worldwide. With the proliferation of data islands (or spreadmarts), the need to centralize the data is more pressing than ever.

As data needs grew, so did more scalable data warehousing solutions. These technologies enabled data to be managed centrally, providing benefits of security, failover, and a single repository where users could rely on getting an “official” source of data for financial reporting or other mission-critical tasks. This structure also enabled the creation of OLAP cubes and BI analytical tools, which provided quick access to a set of dimensions within an RDBMS. More advanced features enabled performance of in-depth analytical techniques such as regressions and neural networks. Enterprise Data Warehouses (EDWs) are critical for reporting and BI tasks and solve many of the problems that proliferating spreadsheets introduce, such as which of multiple versions of a spreadsheet is correct. EDWs—and a good BI strategy—provide direct data feeds from sources that are centrally managed, backed up, and secured.

Despite the benefits of EDWs and BI, these systems tend to restrict the flexibility needed to perform robust or exploratory data analysis. With the EDW model, data is managed and controlled by IT groups and database administrators (DBAs), and data analysts must depend on IT for access and changes to the data schemas. This imposes longer lead times for analysts to get data; most of the time is spent waiting for approvals rather than starting meaningful work. Additionally, many times the EDW rules restrict analysts from building datasets. Consequently, it is common for additional systems to emerge containing critical data for constructing analytic datasets, managed locally by power users. IT groups generally dislike existence of data sources outside of their control because, unlike an EDW, these datasets are not managed, secured, or backed up. From an analyst perspective, EDW and BI solve problems related to data accuracy and availability. However, EDW and BI introduce new problems related to flexibility and agility, which were less pronounced when dealing with spreadsheets.

A solution to this problem is the analytic sandbox, which attempts to resolve the conflict for analysts and data scientists with EDW and more formally managed corporate data. In this model, the IT group may still manage the analytic sandboxes, but they will be purposefully designed to enable robust analytics, while being centrally managed and secured. These sandboxes, often referred to as workspaces, are designed to enable teams to explore many datasets in a controlled fashion and are not typically used for enterprise-level financial reporting and sales dashboards.

Many times, analytic sandboxes enable high-performance computing using in-database processing—the analytics occur within the database itself. The idea is that performance of the analysis will be better if the analytics are run in the database itself, rather than bringing the data to an analytical tool that resides somewhere else. In-database analytics, discussed further in Chapter 11, “Advanced Analytics—Technology and Tools: In-Database Analytics,” creates relationships to multiple data sources within an organization and saves time spent creating these data feeds on an individual basis. In-database processing for deep analytics enables faster turnaround time for developing and executing new analytic models, while reducing, though not eliminating, the cost associated with data stored in local, “shadow” file systems. In addition, rather than the typical structured data in the EDW, analytic sandboxes can house a greater variety of data, such as raw data, textual data, and other kinds of unstructured data, without interfering with critical production databases. Table 1.1 summarizes the characteristics of the data repositories mentioned in this section.

Table 1.1 Types of Data Repositories, from an Analyst Perspective

Spreadsheets and data marts (“spreadmarts”): Spreadsheets and low-volume databases for recordkeeping. Analyst depends on data extracts.

Data Warehouses: Centralized data containers in a purpose-built space. Supports BI and reporting, but restricts robust analyses. Analyst is dependent on IT and DBAs for data access and schema changes, and must spend significant time to get aggregated and disaggregated data extracts from multiple sources.

Analytic Sandbox (workspaces): Data assets gathered from multiple sources and technologies for analysis. Enables flexible, high-performance analysis in a nonproduction environment and can leverage in-database processing. Reduces costs and risks associated with data replication into “shadow” file systems. “Analyst owned” rather than “DBA owned.”
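
To make the in-database processing described above concrete, the following is a hedged sketch using R’s DBI package: the aggregation runs inside the database, and only the small result set crosses the network to the analyst’s tool. It assumes the DBI and RPostgres packages are installed, and the connection details, table, and column names are hypothetical.

library(DBI)

# Hypothetical connection to an analytic sandbox database
con <- dbConnect(RPostgres::Postgres(), dbname = "analytic_sandbox")

# The GROUP BY runs in the database, not in R; R receives only one row per month
monthly <- dbGetQuery(con, "
  SELECT date_trunc('month', order_date) AS month,
         SUM(amount) AS revenue
  FROM sales
  GROUP BY 1
  ORDER BY 1")

dbDisconnect(con)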

There are several things to consider with Big Data Analytics projects to ensure the approach fits with the desired goals. Due to the characteristics of Big Data, these projects lend themselves to decision support for high-value, strategic decision making with high processing complexity. The analytic techniques used in this context need to be iterative and flexible, due to the high volume of data and its complexity. Performing rapid and complex analysis requires high-throughput network connections and a consideration for the acceptable amount of latency. For instance, developing a real-time product recommender for a website imposes greater system demands than developing a near-real-time recommender, which may still provide acceptable performance, have slightly greater latency, and may be cheaper to deploy. These considerations require a different approach to thinking about analytics challenges, which will be explored further in the next section.

1.2 State of the Practice in Analytics

Current business problems provide many opportunities for organizations to become more analytical and data driven, as shown in Table 1.2.

Table 1.2 Business Drivers for Advanced Analytics

Optimize business operations: Sales, pricing, profitability, efficiency
Identify business risk: Customer churn, fraud, default
Predict new business opportunities: Upsell, cross-sell, best new customer prospects
Comply with laws or regulatory requirements: Anti-Money Laundering, Fair Lending, Basel II-III, Sarbanes-Oxley (SOX)

Table 1.2 outlines four categories of common business problems that organizations contend with where they have an opportunity to leverage advanced analytics to create competitive advantage. Rather than only performing standard reporting on these areas, organizations can apply advanced analytical techniques to optimize processes and derive more value from these common tasks. The first three examples do not represent new problems. Organizations have been trying to reduce customer churn, increase sales, and cross-sell customers for many years. What is new is the opportunity to fuse advanced analytical techniques with Big Data to produce more impactful analyses for these traditional problems. The last example portrays emerging regulatory requirements. Many compliance and regulatory laws have been in existence for decades, but additional requirements are added every year, which represent additional complexity and data requirements for organizations. Laws related to anti-money laundering (AML) and fraud prevention require advanced analytical techniques to comply with and manage properly.

1.2.1 BI Versus Data Science

The four business drivers shown in Table 1.2 require a variety of analytical techniques to address them properly. Although much is written generally about analytics, it is important to distinguish between BI and Data Science. As shown in Figure 1.8, there are several ways to compare these groups of analytical techniques.

Figure 1.8 Comparing BI with Data Science

One way to evaluate the type of analysis being performed is to examine the time horizon and the kind of analytical approaches being used. BI tends to provide reports, dashboards, and queries on business questions for the current period or in the past. BI systems make it easy to answer questions related to quarter-to-date revenue, progress toward quarterly targets, and how much of a given product was sold in a prior quarter or year. These questions tend to be closed-ended and explain current or past behavior, typically by aggregating historical data and grouping it in some way. BI provides hindsight and some insight and generally answers questions related to “when” and “where” events occurred.

By comparison, Data Science tends to use disaggregated data in a more forward-looking, exploratory way, focusing on analyzing the present and enabling informed decisions about the future. Rather than aggregating historical data to look at how many of a given product sold in the previous quarter, a team may employ Data Science techniques such as time series analysis, further discussed in Chapter 8, “Advanced Analytical Theory and Methods: Time Series Analysis,” to forecast future product sales and revenue more accurately than extending a simple trend line. In addition, Data Science tends to be more exploratory in nature and may use scenario optimization to deal with more open-ended questions. This approach provides insight into current activity and foresight into future events, while generally focusing on questions related to “how” and “why” events occur.

Where BI problems tend to require highly structured data organized in rows and columns for accurate reporting, Data Science projects tend to use many types of data sources, including large or unconventional datasets. Depending on an organization’s goals, it may choose to embark on a BI project if it is doing reporting, creating dashboards, or performing simple visualizations, or it may choose Data Science projects if it needs to do a more sophisticated analysis with disaggregated or varied datasets.

1.2.2 Current Analytical Architecture

As described earlier, Data Science projects need workspaces that are purpose-built for experimenting with data, with flexible and agile data architectures. Most organizations still have data warehouses that provide excellent support for traditional reporting and simple data analysis activities but unfortunately have a more difficult time supporting more robust analyses. This section examines a typical analytical data architecture that may exist within an organization.

Figure 1.9 shows a typical data architecture and several of the challenges it presents to data scientists and others trying to do advanced analytics. This section examines the data flow to the Data Scientist and how this individual fits into the process of getting data to analyze on projects.

1. For data sources to be loaded into the data warehouse, data needs to be well understood, structured, and normalized with the appropriate data type definitions. Although this kind of centralization enables security, backup, and failover of highly critical data, it also means that data typically must go through significant preprocessing and checkpoints before it can enter this sort of controlled environment, which does not lend itself to data exploration and iterative analytics.

2. As a result of this level of control on the EDW, additional local systems may emerge in the form of departmental warehouses and local data marts that business users create to accommodate their need for flexible analysis. These local data marts may not have the same constraints for security and structure as the main EDW and allow users to do some level of more in-depth analysis. However, these one-off systems reside in isolation, often are not synchronized or integrated with other data stores, and may not be backed up.

3. Once in the data warehouse, data is read by additional applications across the enterprise for BI and reporting purposes. These are high-priority operational processes getting critical data feeds from the data warehouses and repositories.

4. At the end of this workflow, analysts get data provisioned for their downstream analytics. Because users generally are not allowed to run custom or intensive analytics on production databases, analysts create data extracts from the EDW to analyze data offline in R or other local analytical tools. Many times these tools are limited to in-memory analytics on desktops analyzing samples of data, rather than the entire population of a dataset. Because these analyses are based on data extracts, they reside in a separate location, and the results of the analysis—and any insights on the quality of the data or anomalies—rarely are fed back into the main data repository.

Figure 1.9 Typical analytic architecture

Because new data sources slowly accumulate in the EDW due to the rigorous validation and data structuring process, data is slow to move into the EDW, and the data schema is slow to change. Departmental data warehouses may have been originally designed for a specific purpose and set of business needs, but over time evolved to house more and more data, some of which may be forced into existing schemas to enable BI and the creation of OLAP cubes for analysis and reporting. Although the EDW achieves the objective of reporting and sometimes the creation of dashboards, EDWs generally limit the ability of analysts to iterate on the data in a separate nonproduction environment where they can conduct in-depth analytics or perform analysis on unstructured data.

The typical data architectures just described are designed for storing and processing mission-critical data, supporting enterprise applications, and enabling corporate reporting activities. Although reports and dashboards are still important for organizations, most traditional data architectures inhibit data exploration and more sophisticated analysis. Moreover, traditional data architectures have several additional implications for data scientists.

High-value data is hard to reach and leverage, and predictive analytics and data mining activities are last in line for data. Because the EDWs are designed for central data management and reporting, those wanting data for analysis are generally prioritized after operational processes.

Data moves in batches from EDW to local analytical tools. This workflow means that data scientists are limited to performing in-memory analytics (such as with R, SAS, SPSS, or Excel), which will restrict the size of the datasets they can use. As such, analysis may be subject to constraints of sampling, which can skew model accuracy.

Data Science projects will remain isolated and ad hoc, rather than centrally managed. The implication of this isolation is that the organization can never harness the power of advanced analytics in a scalable way, and Data Science projects will exist as nonstandard initiatives, which are frequently not aligned with corporate business goals or strategy.

All these symptoms of the traditional data architecture result in a slow “time-to-insight” and lower business impact than could be achieved if the data were more readily accessible and supported by an environment that promoted advanced analytics. As stated earlier, one solution to this problem is to introduce analytic sandboxes to enable data scientists to perform advanced analytics in a controlled and sanctioned way. Meanwhile, the current Data Warehousing solutions continue offering reporting and BI services to support management and mission-critical operations.

1.2.3 Drivers of Big Data

To better understand the market drivers related to Big Data, it is helpful to first understand the history of data stores and the kinds of repositories and tools used to manage them.

As shown in Figure 1.10, in the 1990s the volume of information was often measured in terabytes. Most organizations analyzed structured data in rows and columns and used relational databases and data warehouses to manage large stores of enterprise information. The following decade saw a proliferation of different kinds of data sources—mainly productivity and publishing tools such as content management repositories and network-attached storage systems—to manage this kind of information, and the data began to increase in size and started to be measured at petabyte scales. In the 2010s, the information that organizations try to manage has broadened to include many other kinds of data. In this era, everyone and everything is leaving a digital footprint. Figure 1.10 shows a summary perspective on sources of Big Data generated by new applications and the scale and growth rate of the data. These applications, which generate data volumes that can be measured in exabyte scale, provide opportunities for new analytics and driving new value for organizations. The data now comes from multiple sources, such as these:

Medical information, such as genomic sequencing and diagnostic imaging
Photos and video footage uploaded to the World Wide Web
Video surveillance, such as the thousands of video cameras spread across a city
Mobile devices, which provide geospatial location data of the users, as well as metadata about text messages, phone calls, and application usage on smart phones
Smart devices, which provide sensor-based collection of information from smart electric grids, smart buildings, and many other public and industry infrastructures
Nontraditional IT devices, including the use of radio-frequency identification (RFID) readers, GPS navigation systems, and seismic processing

Figure 1.10 Data evolution and rise of Big Data sources

The Big Data trend is generating an enormous amount of information from many new sources. This data deluge requires advanced analytics and new market players to take advantage of these opportunities and new market dynamics, which will be discussed in the following section.

1.2.4 Emerging Big Data Ecosystem and a New Approach to Analytics

Organizations and data collectors are realizing that the data they can gather from individuals contains intrinsic value and, as a result, a new economy is emerging. As this new digital economy continues to evolve, the market sees the introduction of data vendors and data cleaners that use crowdsourcing (such as Mechanical Turk and GalaxyZoo) to test the outcomes of machine learning techniques. Other vendors offer added value by repackaging open source tools in a simpler way and bringing the tools to market. Vendors such as Cloudera, Hortonworks, and Pivotal have provided this value-add for the open source framework Hadoop.

As the new ecosystem takes shape, there are four main groups of players within this interconnected web. These are shown in Figure 1.11.

Data devices [shown in the (1) section of Figure 1.11] and the “Sensornet” gather data from multiple locations and continuously generate new data about this data. For each gigabyte of new data created, an additional petabyte of data is created about that data. [2]

For example, consider someone playing an online video game through a PC, game console, or smartphone. In this case, the video game provider captures data about the skill and levels attained by the player. Intelligent systems monitor and log how and when the user plays the game. As a consequence, the game provider can fine-tune the difficulty of the game, suggest other related games that would most likely interest the user, and offer additional equipment and enhancements for the character based on the user’s age, gender, and interests. This information may get stored locally or uploaded to the game provider’s cloud to analyze the gaming habits and opportunities for upsell and cross-sell, and identify archetypical profiles of specific kinds of users.

Smartphones provide another rich source of data. In addition to messaging and basic phone usage, they store and transmit data about Internet usage, SMS usage, and real-time location. This metadata can be used for analyzing traffic patterns by scanning the density of smartphones in locations to track the speed of cars or the relative traffic congestion on busy roads. In this way, GPS devices in cars can give drivers real-time updates and offer alternative routes to avoid traffic delays.

Retail shopping loyalty cards record not just the amount an individual spends, but the locations of stores that person visits, the kinds of products purchased, the stores where goods are purchased most often, and the combinations of products purchased together. Collecting this data provides insights into shopping and travel habits and the likelihood of successful advertisement targeting for certain types of retail promotions.

Data collectors [the blue ovals, identified as (2) within Figure 1.11] include the entities that collect data from devices and users, for example:

A cable TV provider tracking the shows a person watches, which TV channels someone will and will not pay for to watch on demand, and the prices someone is willing to pay for premium TV content

Retail stores tracking the path a customer takes through their store while pushing a shopping cart with an RFID chip, so they can gauge which products get the most foot traffic using geospatial data collected from the RFID chips

Data aggregators (the dark gray ovals in Figure 1.11, marked as (3)) make sense of the data collected from the various entities from the “SensorNet” or the “Internet of Things.” These organizations compile data from the devices and usage patterns collected by government agencies, retail stores, and websites. In turn, they can choose to transform and package the data as products to sell to list brokers, who may want to generate marketing lists of people who may be good targets for specific ad campaigns.

Data users and buyers are denoted by (4) in Figure 1.11. These groups directly benefit from the data collected and aggregated by others within the data value chain.

Retail banks, acting as a data buyer, may want to know which customers have the highest likelihood to apply for a second mortgage or a home equity line of credit. To provide input for this analysis, retail banks may purchase data from a data aggregator. This kind of data may include demographic information about people living in specific locations; people who appear to have a specific level of debt, yet still have solid credit scores (or other characteristics such as paying bills on time and having savings accounts) that can be used to infer creditworthiness; and those who are searching the web for information about paying off debts or doing home remodeling projects. Obtaining data from these various sources and aggregators will enable a more targeted marketing campaign, which would have been more challenging before Big Data due to the lack of information or high-performing technologies.

Using technologies such as Hadoop to perform natural language processing on unstructured, textual data from social media websites, users can gauge the reaction to events such as presidential campaigns. People may, for example, want to determine public sentiments toward a candidate by analyzing related blogs and online comments. Similarly, data users may want to track and prepare for natural disasters by identifying which areas a hurricane affects first and how it moves, based on which geographic areas are tweeting about it or discussing it via social media.

Figure 1.11 Emerging Big Data ecosystems

As illustrated by this emerging Big Data ecosystem, the kinds of data and the related market dynamics vary greatly. These datasets can include sensor data, text, structured datasets, and social media. With this in mind, it is worth recalling that these datasets will not work well within traditional EDWs, which were architected to streamline reporting and dashboards and be centrally managed. Instead, Big Data problems and projects require different approaches to succeed.

Analysts need to partner with IT and DBAs to get the data they need within an analytic sandbox. A typical analytical sandbox contains raw data, aggregated data, and data with multiple kinds of structure. The sandbox enables robust exploration of data and requires a savvy user to leverage and take advantage of data in the sandbox environment.

1.3 Key Roles for the New Big Data Ecosystem

As explained in the context of the Big Data ecosystem in Section 1.2.4, new players have emerged to curate, store, produce, clean, and transact data. In addition, the need for applying more advanced analytical techniques to increasingly complex business problems has driven the emergence of new roles, new technology platforms, and new analytical methods. This section explores the new roles that address these needs, and subsequent chapters explore some of the analytical methods and technology platforms.

The Big Data ecosystem demands three categories of roles, as shown in Figure 1.12. These roles were described in the McKinsey Global study on Big Data, from May 2011 [1].

Figure 1.12 Key roles of the new Big Data ecosystem

The first group—Deep Analytical Talent—is technically savvy, with strong analytical skills. Members possess a combination of skills to handle raw, unstructured data and to apply complex analytical techniques at massive scales. This group has advanced training in quantitative disciplines, such as mathematics, statistics, and machine learning. To do their jobs, members need access to a robust analytic sandbox or workspace where they can perform large-scale analytical data experiments. Examples of current professions fitting into this group include statisticians, economists, mathematicians, and the new role of the Data Scientist.

The McKinsey study forecasts that by the year 2018, the United States will have a talent gap of 140,000–190,000 people with deep analytical talent. This does not represent the number of people needed with deep analytical talent; rather, this range represents the difference between what will be available in the workforce compared with what will be needed. In addition, these estimates only reflect forecasted talent shortages in the United States; the number would be much larger on a global basis.

The second group—Data Savvy Professionals—has less technical depth but has a basic knowledge of statistics or machine learning and can define key questions that can be answered using advanced analytics. These people tend to have a base knowledge of working with data, or an appreciation for some of the work being performed by data scientists and others with deep analytical talent. Examples of data savvy professionals include financial analysts, market research analysts, life scientists, operations managers, and business and functional managers.

The McKinsey study forecasts the projected U.S. talent gap for this group to be 1.5 million people by the year 2018. At a high level, this means for every Data Scientist profile needed, the gap will be ten times as large for Data Savvy Professionals. Moving toward becoming a data savvy professional is a critical step in broadening the perspective of managers, directors, and leaders, as this provides an idea of the kinds of questions that can be solved with data.

The third category of people mentioned in the study is Technology and Data Enablers. This group represents people providing technical expertise to support analytical projects, such as provisioning and administrating analytical sandboxes, and managing large-scale data architectures that enable widespread analytics within companies and other organizations. This role requires skills related to computer engineering, programming, and database administration.

These three groups must work together closely to solve complex Big Data challenges. Most organizations are familiar with people in the latter two groups mentioned, but the first group, Deep Analytical Talent, tends to be the newest role for most and the least understood. For simplicity, this discussion focuses on the emerging role of the Data Scientist. It describes the kinds of activities that role performs and provides a more detailed view of the skills needed to fulfill that role.

There are three recurring sets of activities that data scientists perform:

Reframe business challenges as analytics challenges. Specifically, this is a skill to diagnose business problems, consider the core of a given problem, and determine which kinds of candidate analytical methods can be applied to solve it. This concept is explored further in Chapter 2, “Data Analytics Lifecycle.”

Design, implement, and deploy statistical models and data mining techniques on Big Data. This set of activities is mainly what people think about when they consider the role of the Data Scientist: namely, applying complex or advanced analytical methods to a variety of business problems using data. Chapter 3 through Chapter 11 of this book introduce the reader to many of the most popular analytical techniques and tools in this area.

Develop insights that lead to actionable recommendations. It is critical to note that applying advanced methods to data problems does not necessarily drive new business value. Instead, it is important to learn how to draw insights out of the data and communicate them effectively. Chapter 12, “The Endgame, or Putting It All Together,” has a brief overview of techniques for doing this.

Data scientists are generally thought of as having five main sets of skills and behavioral characteristics, as shown in Figure 1.13:

Quantitative skill: such as mathematics or statistics

Technical aptitude: namely, software engineering, machine learning, and programming skills

Skeptical mind-set and critical thinking: It is important that data scientists can examine their work critically rather than in a one-sided way.

Curious and creative: Data scientists are passionate about data and finding creative ways to solve problems and portray information.

Communicative and collaborative: Data scientists must be able to articulate the business value in a clear way and collaboratively work with other groups, including project sponsors and key stakeholders.

Figure 1.13 Profile of a Data Scientist

Data scientists are generally comfortable using this blend of skills to acquire, manage, analyze, and visualize data and tell compelling stories about it. The next section includes examples of what Data Science teams have created to drive new value or innovation with Big Data.

1.4 Examples of Big Data Analytics

After describing the emerging Big Data ecosystem and new roles needed to support its growth, this section provides three examples of Big Data Analytics in different areas: retail, IT infrastructure, and social media.

As mentioned earlier, Big Data presents many opportunities to improve sales and marketing analytics. An example of this is the U.S. retailer Target. Charles Duhigg’s book The Power of Habit [4] discusses how Target used Big Data and advanced analytical methods to drive new revenue. After analyzing consumer-purchasing behavior, Target’s statisticians determined that the retailer made a great deal of money from three main life-event situations:

Marriage, when people tend to buy many new products
Divorce, when people buy new products and change their spending habits
Pregnancy, when people have many new things to buy and have an urgency to buy them

Target determined that the most lucrative of these life-events is the third situation: pregnancy. Using data collected from shoppers, Target was able to identify this fact and predict which of its shoppers were pregnant. In one case, Target knew a female shopper was pregnant even before her family knew [5]. This kind of knowledge allowed Target to offer specific coupons and incentives to their pregnant shoppers. In fact, Target could not only determine if a shopper was pregnant, but in which month of pregnancy a shopper may be. This enabled Target to manage its inventory, knowing that there would be demand for specific products and it would likely vary by month over the coming nine- to ten-month cycles.

Hadoop [6] represents another example of Big Data innovation on the IT infrastructure. Apache Hadoop is an open source framework that allows companies to process vast amounts of information in a highly parallelized way. Hadoop represents a specific implementation of the MapReduce paradigm and was designed by Doug Cutting and Mike Cafarella in 2005 to use data with varying structures. It is an ideal technical framework for many Big Data projects, which rely on large or unwieldy datasets with unconventional data structures. One of the main benefits of Hadoop is that it employs a distributed file system, meaning it can use a distributed cluster of servers and commodity hardware to process large amounts of data. Some of the most common examples of Hadoop implementations are in the social media space, where Hadoop can manage transactions, give textual updates, and develop social graphs among millions of users. Twitter and Facebook generate massive amounts of unstructured data and use Hadoop and its ecosystem of tools to manage this high volume. Hadoop and its ecosystem are covered in Chapter 10, “Advanced Analytics—Technology and Tools: MapReduce and Hadoop.”
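
To illustrate the paradigm itself rather than Hadoop’s implementation of it, the following base R sketch mimics MapReduce’s three phases on two toy documents; this is a conceptual word count, not Hadoop code.

docs <- c("big data needs new tools", "hadoop processes big data in parallel")

# Map phase: emit a (word, 1) pair for every word in every document
map_one <- function(doc) {
  words <- strsplit(tolower(doc), "\\s+")[[1]]
  setNames(rep(1L, length(words)), words)
}
pairs <- unlist(lapply(docs, map_one))

# Shuffle phase: group the pairs by key (the word)
groups <- split(unname(pairs), names(pairs))

# Reduce phase: sum the counts for each word
counts <- vapply(groups, sum, integer(1))
counts["big"]  # 2: "big" appears once in each document

Hadoop distributes exactly these phases across a cluster, so each server maps and reduces only its local slice of the data.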

Finally, social media represents a tremendous opportunity to leverage social and professional interactions to derive new insights. LinkedIn exemplifies a company in which data itself is the product. Early on, LinkedIn founder Reid Hoffman saw the opportunity to create a social network for working professionals. As of 2014, LinkedIn has more than 250 million user accounts and has added many additional features and data-related products, such as recruiting, job seeker tools, advertising, and InMaps, which show a social graph of a user’s professional network. Figure 1.14 is an example of an InMap visualization that enables a LinkedIn user to get a broader view of the interconnectedness of his contacts and understand how he knows most of them.

Figure 1.14 Data visualization of a user’s social network using InMaps

Summary

Big Data comes from myriad sources, including social media, sensors, the Internet of Things, video surveillance, and many sources of data that may not have been considered data even a few years ago. As businesses struggle to keep up with changing market requirements, some companies are finding creative ways to apply Big Data to their growing business needs and increasingly complex problems. As organizations evolve their processes and see the opportunities that Big Data can provide, they try to move beyond traditional BI activities, such as using data to populate reports and dashboards, and move toward Data Science-driven projects that attempt to answer more open-ended and complex questions.

However, exploiting the opportunities that Big Data presents requires new data architectures, including analytic sandboxes, new ways of working, and people with new skill sets. These drivers are causing organizations to set up analytic sandboxes and build Data Science teams. Although some organizations are fortunate to have data scientists, most are not, because there is a growing talent gap that makes finding and hiring data scientists in a timely manner difficult. Still, organizations such as those in web retail, health care, genomics, new IT infrastructures, and social media are beginning to take advantage of Big Data and apply it in creative and novel ways.

Exercises

1. What are the three characteristics of Big Data, and what are the main considerations in processing Big Data?
2. What is an analytic sandbox, and why is it important?
3. Explain the differences between BI and Data Science.
4. Describe the challenges of the current analytical architecture for data scientists.
5. What are the key skill sets and behavioral characteristics of a data scientist?

Bibliography

[1] J. Manyika et al., “Big Data: The Next Frontier for Innovation, Competition, and Productivity,” McKinsey Global Institute, 2011.

[2] J. Gantz and D. Reinsel, “The Digital Universe in 2020: Big Data, Bigger Digital Shadows, and Biggest Growth in the Far East,” IDC, 2013.

[3] http://www.willisresilience.com/emc-datalab [Online].

[4] C. Duhigg, The Power of Habit: Why We Do What We Do in Life and Business, New York: Random House, 2012.

[5] K. Hill, “How Target Figured Out a Teen Girl Was Pregnant Before Her Father Did,” Forbes, February 2012.

[6] http://hadoop.apache.org [Online].

Chapter 2 Data Analytics Lifecycle

Key Concepts

1. Discovery
2. Data preparation
3. Model planning
4. Model execution
5. Communicate results
6. Operationalize

Data science projects differ from most traditional Business Intelligence projects and many data analysis projects in that data science projects are more exploratory in nature. For this reason, it is critical to have a process to govern them and ensure that the participants are thorough and rigorous in their approach, yet not so rigid that the process impedes exploration.

Many problems that appear huge and daunting at first can be broken down into smaller pieces or actionable phases that can be more easily addressed. Having a good process ensures a comprehensive and repeatable method for conducting analysis. In addition, it helps focus time and energy early in the process to get a clear grasp of the business problem to be solved.

A common mistake made in data science projects is rushing into data collection and analysis, which precludes spending sufficient time to plan and scope the amount of work involved, understanding requirements, or even framing the business problem properly. Consequently, participants may discover mid-stream that the project sponsors are actually trying to achieve an objective that may not match the available data, or they are attempting to address an interest that differs from what has been explicitly communicated. When this happens, the project may need to revert to the initial phases of the process for a proper discovery phase, or the project may be canceled.

Creating and documenting a process helps demonstrate rigor, which provides additional credibility to the project when the data science team shares its findings. A well-defined process also offers a common framework for others to adopt, so the methods and analysis can be repeated in the future or as new members join a team.

2.1 Data Analytics Lifecycle Overview

The Data Analytics Lifecycle is designed specifically for Big Data problems and data science projects. The lifecycle has six phases, and project work can occur in several phases at once. For most phases in the lifecycle, the movement can be either forward or backward. This iterative depiction of the lifecycle is intended to more closely portray a real project, in which aspects of the project move forward and may return to earlier stages as new information is uncovered and team members learn more about various stages of the project. This enables participants to move iteratively through the process and drive toward operationalizing the project work.

2.1.1 Key Roles for a Successful Analytics Project

In recent years, substantial attention has been placed on the emerging role of the data scientist. In October 2012, Harvard Business Review featured an article titled “Data Scientist: The Sexiest Job of the 21st Century” [1], in which experts DJ Patil and Tom Davenport described the new role and how to find and hire data scientists. More and more conferences are held annually focusing on innovation in the areas of Data Science and topics dealing with Big Data. Despite this strong focus on the emerging role of the data scientist specifically, there are actually seven key roles that need to be fulfilled for a high-functioning data science team to execute analytic projects successfully.

Figure 2.1 depicts the various roles and key stakeholders of an analytics project. Each plays a critical part in a successful analytics project. Although seven roles are listed, fewer or more people can accomplish the work depending on the scope of the project, the organizational structure, and the skills of the participants. For example, on a small, versatile team, these seven roles may be fulfilled by only 3 people, but a very large project may require 20 or more people. The seven roles follow.

Business User: Someone who understands the domain area and usually benefits from the results. This person can consult and advise the project team on the context of the project, the value of the results, and how the outputs will be operationalized. Usually a business analyst, line manager, or deep subject matter expert in the project domain fulfills this role.

Project Sponsor: Responsible for the genesis of the project. Provides the impetus and requirements for the project and defines the core business problem. Generally provides the funding and gauges the degree of value from the final outputs of the working team. This person sets the priorities for the project and clarifies the desired outputs.

Project Manager: Ensures that key milestones and objectives are met on time and at the expected quality.

Business Intelligence Analyst: Provides business domain expertise based on a deep understanding of the data, key performance indicators (KPIs), key metrics, and business intelligence from a reporting perspective. Business Intelligence Analysts generally create dashboards and reports and have knowledge of the data feeds and sources.

Database Administrator (DBA): Provisions and configures the database environment to support the analytics needs of the working team. These responsibilities may include providing access to key databases or tables and ensuring the appropriate security levels are in place related to the data repositories.

Data Engineer: Leverages deep technical skills to assist with tuning SQL queries for data management and data extraction, and provides support for data ingestion into the analytic sandbox, which was discussed in Chapter 1, “Introduction to Big Data Analytics.” Whereas the DBA sets up and configures the databases to be used, the data engineer executes the actual data extractions and performs substantial data manipulation to facilitate the analytics. The data engineer works closely with the data scientist to help shape data in the right ways for analyses.

Data Scientist: Provides subject matter expertise for analytical techniques, data modeling, and applying valid analytical techniques to given business problems. Ensures overall analytics objectives are met. Designs and executes analytical methods and approaches with the data available to the project.

Figure 2.1 Key roles for a successful analytics project

Although most of these roles are not new, the last two roles—data engineer and data scientist—have become popular and in high demand [2] as interest in Big Data has grown.

2.1.2 Background and Overview of Data Analytics Lifecycle

The Data Analytics Lifecycle defines analytics process best practices spanning discovery to project completion. The lifecycle draws from established methods in the realm of data analytics and decision science. This synthesis was developed after gathering input from data scientists and consulting established approaches that provided input on pieces of the process. Several of the processes that were consulted include these:

Scientific method [3], in use for centuries, still provides a solid framework for thinking about and deconstructing problems into their principal parts. One of the most valuable ideas of the scientific method relates to forming hypotheses and finding ways to test ideas.

CRISP-DM [4] provides useful input on ways to frame analytics problems and is a popular approach for data mining.

Tom Davenport’s DELTA framework [5]: The DELTA framework offers an approach for data analytics projects, including the context of the organization’s skills, datasets, and leadership engagement.

Doug Hubbard’s Applied Information Economics (AIE) approach [6]: AIE provides a framework for measuring intangibles and provides guidance on developing decision models, calibrating expert estimates, and deriving the expected value of information.

“MAD Skills” by Cohen et al. [7] offers input for several of the techniques mentioned in Phases 2–4 that focus on model planning, execution, and key findings.

Figure 2.2 presents an overview of the Data Analytics Lifecycle that includes six phases. Teams commonly learn new things in a phase that cause them to go back and refine the work done in prior phases based on new insights and information that have been uncovered. For this reason, Figure 2.2 is shown as a cycle. The circular arrows convey iterative movement between phases until the team members have sufficient information to move to the next phase. The callouts include sample questions to ask to help guide whether each of the team members has enough information and has made enough progress to move to the next phase of the process. Note that these phases do not represent formal stage gates; rather, they serve as criteria to help test whether it makes sense to stay in the current phase or move to the next.

Figure 2.2 Overview of Data Analytics Lifecycle

Here is a brief overview of the main phases of the Data Analytics Lifecycle:

Phase 1—Discovery: In Phase 1, the team learns the business domain, including relevant history such as whether the organization or business unit has attempted similar projects in the past from which they can learn. The team assesses the resources available to support the project in terms of people, technology, time, and data. Important activities in this phase include framing the business problem as an analytics challenge that can be addressed in subsequent phases and formulating initial hypotheses (IHs) to test and begin learning the data.

Phase 2—Data preparation: Phase 2 requires the presence of an analytic sandbox, in which the team can work with data and perform analytics for the duration of the project. The team needs to execute extract, load, and transform (ELT) or extract, transform and load (ETL) to get data into the sandbox. ELT and ETL are sometimes abbreviated as ETLT. Data should be transformed in the ETLT process so the team can work with it and analyze it. In this phase, the team also needs to familiarize itself with the data thoroughly and take steps to condition the data (Section 2.3.4). A minimal ELT sketch follows this list.

Phase 3—Model planning: Phase 3 is model planning, where the team determines the methods, techniques, and workflow it intends to follow for the subsequent model building phase. The team explores the data to learn about the relationships between variables and subsequently selects key variables and the most suitable models.

Phase 4—Model building: In Phase 4, the team develops datasets for testing, training, and production purposes. In addition, in this phase the team builds and executes models based on the work done in the model planning phase. The team also considers whether its existing tools will suffice for running the models, or if it will need a more robust environment for executing models and workflows (for example, fast hardware and parallel processing, if applicable).

Phase 5—Communicate results: In Phase 5, the team, in collaboration with major stakeholders, determines if the results of the project are a success or a failure based on the criteria developed in Phase 1. The team should identify key findings, quantify the business value, and develop a narrative to summarize and convey findings to stakeholders.

Phase 6—Operationalize: In Phase 6, the team delivers final reports, briefings, code, and technical documents. In addition, the team may run a pilot project to implement the models in a production environment.
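To make the ELT idea in Phase 2 concrete, here is a minimal sketch in which raw records are loaded into a sandbox database untouched and then transformed with SQL inside it. SQLite stands in for the analytic sandbox, and the table and column names are invented for illustration.

# A toy ELT sketch: extract raw records, load them into a sandbox database
# as-is, then transform with SQL inside the sandbox. SQLite stands in for
# the analytic sandbox, and the table and column names are hypothetical.
import sqlite3

raw_orders = [  # extracted records, loaded before any cleanup
    ("1001", "2014-03-02", " 19.99"),
    ("1002", "2014-03-02", "5.50 "),
    ("1003", "2014-03-03", "12.00"),
]

conn = sqlite3.connect(":memory:")  # in-memory sandbox for the example
conn.execute("CREATE TABLE orders_raw (order_id TEXT, order_date TEXT, amount TEXT)")
conn.executemany("INSERT INTO orders_raw VALUES (?, ?, ?)", raw_orders)

# Transform inside the sandbox: cast types and trim stray whitespace,
# keeping orders_raw intact so the team can always revisit the original data.
conn.execute("""
    CREATE TABLE orders_clean AS
    SELECT CAST(order_id AS INTEGER) AS order_id,
           order_date,
           CAST(TRIM(amount) AS REAL) AS amount
    FROM orders_raw
""")
print(conn.execute("SELECT * FROM orders_clean").fetchall())

Preserving the raw table alongside the cleaned one is the point of the ELT ordering: analysts can always return to the data in its original form.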

Once team members have run models and produced findings, it is critical to frame these results in a way that is tailored to the audience that engaged the team. Moreover, it is critical to frame the results of the work in a manner that demonstrates clear value. If the team performs a technically accurate analysis but fails to translate the results into a language that resonates with the audience, people will not see the value, and much of the time and effort on the project will have been wasted.

The rest of the chapter is organized as follows. Sections 2.2–2.7 discuss in detail how each of the six phases works, and Section 2.8 shows a case study of incorporating the Data Analytics Lifecycle in a real-world data science project.

2.2 Phase 1: Discovery

The first phase of the Data Analytics Lifecycle involves discovery (Figure 2.3). In this phase, the data science team must learn and investigate the problem, develop context and understanding, and learn about the data sources needed and available for the project. In addition, the team formulates initial hypotheses that can later be tested with data.

Figure 2.3 Discovery phase
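As a concrete (and entirely hypothetical) example of an initial hypothesis, suppose the team posits in Phase 1 that shoppers who received a certain promotion spend more per visit than those who did not. Once data is available in later phases, even a simple two-sample test can check it; the numbers below are invented.

# Illustrative sketch of testing an initial hypothesis (IH) once data is in
# hand. The IH, group labels, and numbers here are all invented.
from scipy import stats

promo_spend = [23.1, 19.8, 25.4, 22.0, 27.3, 21.5]    # hypothetical group A
control_spend = [18.2, 20.1, 17.5, 19.9, 16.8, 21.0]  # hypothetical group B

# Welch's two-sample t-test; a small p-value would support the IH.
result = stats.ttest_ind(promo_spend, control_spend, equal_var=False)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")

A small p-value would support the IH; a large one would send the team back to refine its hypotheses, consistent with the iterative nature of the lifecycle.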

2.2.1 Learning the Business Domain

Understanding the domain area of the problem is essential. In many cases, data scientists will have deep computational and quantitative knowledge that can be broadly applied across many disciplines. An example of this role would be someone with an advanced degree in applied mathematics or statistics.

These data scientists have deep knowledge of the methods, techniques, and ways for applying heuristics to a variety of business and conceptual problems. Others in this area may have deep knowledge of a domain area, coupled with quantitative expertise. An example of this would be someone with a Ph.D. in life sciences. This person would have deep knowledge of a field of study, such as oceanography, biology, or genetics, with some depth of quantitative knowledge.
