Decision Support and Business Intelligence Systems (9th Edition) Instructor’s Manual
Chapter 7:
Text Analytics, Text Mining, and Sentiment Analysis
Learning Objectives for Chapter 7
1. Describe text mining and understand the need for text mining
2. Differentiate among text analytics, text mining, and data mining
3. Understand the different application areas for text mining
4. Know the process of carrying out a text mining project
5. Appreciate the different methods to introduce structure to text-based data
6. Describe sentiment analysis
7. Develop familiarity with popular applications of sentiment analysis
8. Learn the common methods for sentiment analysis
9. Become familiar with speech analytics as it relates to sentiment analysis
10. Learn three facets of Web analytics—content, structure, and usage mining
11. Know social analytics including social media and social network analyses
CHAPTER OVERVIEW
This chapter provides a comprehensive overview of text analytics/mining and Web analytics/mining along with their popular application areas such as search engines, sentiment analysis, and social network/media analytics. As we have been witnessing in recent years, the unstructured data generated over the Internet of Things (IoT) (the Web, sensor networks, radio-frequency identification [RFID]–enabled supply chain systems, surveillance networks, etc.) are increasing at an exponential pace, and there is no indication of this growth slowing down. This changing nature of data is forcing organizations to make text and Web analytics a critical part of their business intelligence/analytics infrastructure.
CHAPTER OUTLINE
7.1 Opening Vignette: Amadori Group Converts Consumer Sentiments into
Near-Real-Time Sales
7.2 Text Analytics and Text Mining Overview
7.3 Natural Language Processing (NLP)
7.4 Text Mining Applications
7.5 Text Mining Process
7.6 Sentiment Analysis
7.7 Web Mining Overview
7.8 Search Engines
7.9 Web Usage Mining
7.10 Social Analytics
ANSWERS TO END OF SECTION REVIEW QUESTIONS
Section 7.1 Review Questions
1. According to the vignette and based on your opinion, what are the challenges that the food industry is facing today?
Student perceptions may vary, but some common themes related to the challenges faced by the food industry could include the changing nature and role of food in people’s lifestyles, the shift towards pre-prepared or easily prepared food, and the growing importance of marketing to keep customers interested in brands.
2. How can analytics help businesses in the food industry to survive and thrive in this competitive marketplace?
Analytics can serve dual purposes by both tracking customer interest in the brand as well as providing valuable feedback on customer preferences. An analytics system can be used to evaluate the traffic to various brand marketing campaigns (website or social) that play a pivotal role in ensuring that products are being shown to new potential buyers and reminding existing customers of their value. An analytics system can also be used to help gather customer feedback and perception information on a brand in general or products in particular. This valuable information can be used as a part of both marketing and product design.
3. What were and still are the main objectives for Amadori to embark into analytics? What were the results?
The company’s main objectives were to market more effectively to potential customers and create direct communications through social media and other channels with current customers to start a dialogue. The case illustrates how an analytics system integrated with thoughtful website design can help a company meet these goals.
4. Can you think of other businesses in the food industry that utilize analytics to become more competitive and customer focused? If not, an Internet search could help find relevant information to answer this question.
Student opinions and Web searches will vary, but will show similar strategies for packaged foods as well as fast foods in the US.
Section 7.2 Review Questions
1. What is text analytics? How does it differ from text mining?
Text analytics is a concept that includes information retrieval (e.g., searching and identifying relevant documents for a given set of key terms) as well as information extraction, data mining, and Web mining. By contrast, text mining is primarily focused on discovering new and useful knowledge from textual data sources. The overarching goal for both text analytics and text mining is to turn unstructured textual data into actionable information through the application of natural language processing (NLP) and analytics. However, text analytics is a broader term because of its inclusion of information retrieval. You can think of text analytics as a combination of information retrieval plus text mining.
2. What is text mining? How does it differ from data mining?
Text mining is the application of data mining to unstructured, or less structured, text files. As the names indicate, text mining analyzes words, whereas data mining analyzes numeric data.
3. Why is the popularity of text mining as an analytics tool increasing?
The popularity of text mining as a business intelligence (BI) tool is increasing because of the rapid growth in text data and the availability of sophisticated BI tools. The benefits of text mining are obvious in the areas where very large amounts of textual data are being generated, such as law (court orders), academic research (research articles), finance (quarterly reports), medicine (discharge summaries), biology (molecular interactions), technology (patent files), and marketing (customer comments).
4. What are some popular application areas of text mining?
· Information extraction. Identification of key phrases and relationships within text by looking for predefined sequences in text via pattern matching.
· Topic tracking. Based on a user profile and documents that a user views, text mining can predict other documents of interest to the user.
· Summarization. Summarizing a document to save time on the part of the reader.
· Categorization. Identifying the main themes of a document and then placing the document into a predefined set of categories based on those themes.
· Clustering. Grouping similar documents without having a predefined set of categories (see the brief sketch following this list).
· Concept linking. Connects related documents by identifying their shared concepts and, by doing so, helps users find information that they perhaps would not have found using traditional search methods.
· Question answering. Finding the best answer to a given question through knowledge-driven pattern matching.
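To make the clustering task above concrete, here is a minimal sketch in Python, assuming scikit-learn is installed; the sample documents and the choice of two clusters are made up for illustration.

    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    # Made-up documents: two about financial results, two about a smartphone
    docs = [
        "The quarterly report shows strong revenue growth",
        "Revenue and profit figures exceeded analyst forecasts",
        "The new smartphone camera received excellent reviews",
        "Customers praised the smartphone battery life in their reviews",
    ]

    # Represent each document by weighted term frequencies
    X = TfidfVectorizer(stop_words="english").fit_transform(docs)

    # Clustering: group similar documents without predefined categories
    labels = KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(X)
    print(labels)   # documents with similar vocabulary should share a cluster label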
Section 7.3 Review Questions
1. What is NLP?
Natural language processing (NLP) is an important component of text mining and is a subfield of artificial intelligence and computational linguistics. It studies the problem of “understanding” the natural human language, with the view of converting depictions of human language (such as textual documents) into more formal representations (in the form of numeric and symbolic data) that are easier for computer programs to manipulate.
2. How does NLP relate to text mining?
Text mining uses natural language processing to induce structure into the text collection and then uses data mining algorithms such as classification, clustering, association, and sequence discovery to extract knowledge from it.
3. What are some of the benefits and challenges of NLP?
NLP moves beyond syntax-driven text manipulation (which is often called “word counting”) to a true understanding and processing of natural language that considers grammatical and semantic constraints as well as the context. The challenges include:
· Part-of-speech tagging. It is difficult to mark up terms in a text as corresponding to a particular part of speech because the part of speech depends not only on the definition of the term but also on the context within which it is used (see the short sketch following this list).
· Text segmentation. Some written languages, such as Chinese, Japanese, and Thai, do not have single-word boundaries.
· Word sense disambiguation. Many words have more than one meaning. Selecting the meaning that makes the most sense can only be accomplished by taking into account the context within which the word is used.
· Syntactic ambiguity. The grammar for natural languages is ambiguous; that is, multiple possible sentence structures often need to be considered. Choosing the most appropriate structure usually requires a fusion of semantic and contextual information.
· Imperfect or irregular input. Foreign or regional accents and vocal impediments in speech and typographical or grammatical errors in texts make the processing of the language an even more difficult task.
· Speech acts. A sentence can often be considered an action by the speaker. The sentence structure alone may not contain enough information to define this action.
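The part-of-speech tagging challenge can be illustrated with a short sketch in Python, assuming NLTK is installed and its tokenizer and tagger data packages (e.g., punkt and averaged_perceptron_tagger) have been downloaded; the example sentences are made up.

    import nltk

    # One-time setup (assumed already done here):
    # nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")
    for sentence in ["Please book a flight to Rome", "She read a good book on NLP"]:
        tokens = nltk.word_tokenize(sentence)
        print(nltk.pos_tag(tokens))

    # Ideally "book" is tagged as a verb (VB) in the first sentence and a noun (NN)
    # in the second; taggers can get such cases wrong, which is exactly the challenge.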
4. What are the most common tasks addressed by NLP?
The following are among the most popular tasks:
• Question answering.
• Automatic summarization.
• Natural language generation.
• Natural language understanding.
• Machine translation.
• Foreign language reading.
• Foreign language writing.
• Speech recognition.
• Text-to-speech.
• Text proofing.
• Optical character recognition.
Section 7.4 Review Questions
1. List and briefly discuss some of the text mining applications in marketing.
Text mining can be used to increase cross-selling and up-selling by analyzing the unstructured data generated by call centers.
Text mining has become invaluable for customer relationship management. Companies can use text mining to analyze rich sets of unstructured text data, combined with the relevant structured data extracted from organizational databases, to predict customer perceptions and subsequent purchasing behavior.
2. How can text mining be used in security and counterterrorism?
Students may use the introductory case in this answer.
In 2007, EUROPOL developed an integrated system capable of accessing, storing, and analyzing vast amounts of structured and unstructured data sources in order to track transnational organized crime.
Another security-related application of text mining is in the area of deception detection.
3. What are some promising text mining applications in biomedicine?
As in any other experimental approach, it is necessary to analyze the vast amounts of data generated by biomedical research in the context of previously known information about the biological entities under study. The literature is a particularly valuable source of information for experiment validation and interpretation. Therefore, the development of automated text mining tools to assist in such interpretation is one of the main challenges in current bioinformatics research.
Section 7.5 Review Questions
1. What are the main steps in the text mining process?
See Figure 7.6 (p. 309). Text mining entails three tasks:
· Establish the Corpus: Collect and organize the domain-specific unstructured data
· Create the Term–Document Matrix: Introduce structure to the corpus
· Extract Knowledge: Discover novel patterns from the T-D matrix
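A minimal sketch of these three steps in Python, assuming a recent version of scikit-learn; the three-document corpus is a made-up stand-in for a real domain-specific collection.

    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import CountVectorizer

    # Step 1 - Establish the corpus: collect the domain-specific unstructured documents
    corpus = [
        "customers praised the fast delivery and friendly support",
        "the delivery was late and the support staff was unhelpful",
        "friendly support resolved my billing issue quickly",
    ]

    # Step 2 - Create the term-document matrix: introduce structure to the corpus
    vectorizer = CountVectorizer(stop_words="english")
    tdm = vectorizer.fit_transform(corpus)        # documents x terms (sparse counts)
    print(vectorizer.get_feature_names_out())
    print(tdm.toarray())

    # Step 3 - Extract knowledge: apply a data mining method (classification, clustering,
    # association, trend analysis) to the matrix; here, a simple two-cluster grouping
    print(KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(tdm))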
2. What is the reason for normalizing word frequencies? What are the common methods for normalizing word frequencies?
The raw term frequencies need to be normalized in order to have a more consistent TDM for further analysis. Common methods are log frequencies, binary frequencies, and inverse document frequencies.
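A small illustration of these normalizations in Python with NumPy; the raw counts below are made up, and the inverse document frequency shown is one common variant.

    import numpy as np

    # Toy term-document matrix of raw term counts: rows = documents, columns = terms
    tf = np.array([[3, 0, 1],
                   [0, 2, 5],
                   [1, 1, 0]])

    # Log frequency: dampens the influence of very frequent terms
    log_freq = np.log(1 + tf)

    # Binary frequency: records only whether a term occurs in a document at all
    binary = (tf > 0).astype(int)

    # Inverse document frequency (one common variant): down-weights terms that
    # appear in many documents; tf-idf combines the two
    n_docs = tf.shape[0]
    df = (tf > 0).sum(axis=0)      # number of documents containing each term
    idf = np.log(n_docs / df)
    tfidf = tf * idf
    print(np.round(tfidf, 2))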
3. What is SVD? How is it used in text mining?
Singular value decomposition (SVD), which is closely related to principal components analysis, reduces the overall dimensionality of the input matrix (number of input documents by number of extracted terms) to a lower dimensional space, where each consecutive dimension represents the largest degree of variability (between words and documents) possible.
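A short sketch of SVD applied to a term-document matrix, assuming scikit-learn; TruncatedSVD is a convenient way to do this on a sparse TDM (this use of SVD is often referred to as latent semantic analysis), and the corpus repeats the toy documents used above.

    from sklearn.decomposition import TruncatedSVD
    from sklearn.feature_extraction.text import CountVectorizer

    corpus = [
        "customers praised the fast delivery and friendly support",
        "the delivery was late and the support staff was unhelpful",
        "friendly support resolved my billing issue quickly",
    ]
    tdm = CountVectorizer(stop_words="english").fit_transform(corpus)

    # Reduce the documents-by-terms matrix to two latent dimensions ("concepts")
    svd = TruncatedSVD(n_components=2, random_state=0)
    doc_coords = svd.fit_transform(tdm)     # each document as a point in concept space
    print(doc_coords.shape)                 # (3, 2)
    print(svd.explained_variance_ratio_)    # variability captured by each dimension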
4. What are the main knowledge extraction methods from corpus?
The main categories of knowledge extraction methods are classification, clustering, association, and trend analysis.
Section 7.6 Review Questions
1. What is sentiment analysis? How does it relate to text mining?
Sentiment analysis tries to answer the question, “What do people feel about a certain topic?” by digging into the opinions of many people using a variety of automated tools. It is also known as opinion mining, subjectivity analysis, and appraisal extraction.
Sentiment analysis shares many characteristics and techniques with text mining. However, unlike text mining, which categorizes text by conceptual taxonomies of topics, sentiment classification generally deals with two classes (positive versus negative), a range of polarity (e.g., star ratings for movies), or a range in strength of opinion.
2. What are the most popular application areas for sentiment analysis? Why?
Customer relationship management (CRM) and customer experience management are popular “voice of the customer (VOC)” applications. Other application areas include “voice of the market (VOM)” and “voice of the employee (VOE).”
3. What would be the expected benefits and beneficiaries of sentiment analysis in politics?
Opinions matter a great deal in politics. Because political discussions are dominated by quotes, sarcasm, and complex references to persons, organizations, and ideas, politics is one of the most difficult, and potentially fruitful, areas for sentiment analysis. By analyzing the sentiment on election forums, one may predict who is more likely to win or lose. Sentiment analysis can help understand what voters are thinking and can clarify a candidate’s position on issues. Sentiment analysis can help political organizations, campaigns, and news analysts to better understand which issues and positions matter the most to voters. The technology was successfully applied by both parties to the 2008 and 2012 American presidential election campaigns.
4. What are the main steps in carrying out sentiment analysis projects?
The first step when performing sentiment analysis of a text document is called sentiment detection, during which text data is differentiated between fact and opinion (objective vs. subjective). This is followed by negative-positive (N-P) polarity classification, where a subjective text item is classified on a bipolar range. Following this comes target identification (identifying the person, product, event, etc. that the sentiment is about). Finally come collection and aggregation, in which the overall sentiment for the document is calculated based on the calculations of sentiments of individual phrases and words from the first three steps.
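A toy end-to-end sketch of these steps in Python; the cue words, lexicon scores, and sample document are invented for illustration, and real systems would use trained models rather than small word lists.

    # Step-by-step toy sentiment analysis of a single document
    SUBJECTIVITY_CUES = {"love", "hate", "great", "awful", "disappointing", "amazing"}
    POLARITY_LEXICON = {"love": 1.0, "great": 0.8, "amazing": 0.9,
                        "hate": -1.0, "awful": -0.9, "disappointing": -0.6}

    def analyze(document: str) -> float:
        sentiments = []
        for sentence in document.lower().split("."):
            words = sentence.split()
            # Step 1 - Sentiment detection: keep only subjective (opinion-bearing) sentences
            if not SUBJECTIVITY_CUES.intersection(words):
                continue
            # Step 2 - N-P polarity classification of the subjective sentence
            scores = [POLARITY_LEXICON.get(w, 0.0) for w in words]
            sentiments.append(sum(scores) / len(words))
            # Step 3 - Target identification (linking the score to the product,
            # person, or feature being discussed) is omitted in this toy sketch
        # Step 4 - Collection and aggregation: overall sentiment for the document
        return sum(sentiments) / len(sentiments) if sentiments else 0.0

    print(analyze("The battery life is amazing. The box arrived on Tuesday. The manual is awful."))
    # small value near zero: the positive and negative sentences roughly offset each other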
5. What are the two common methods for polarity identification? What is the main difference between the two?
Polarity identification can be done via a lexicon (as a reference library) or by using a collection of training documents and inductive machine learning algorithms. The lexicon approach uses a catalog of words, their synonyms, and their meanings, combined with numerical ratings indicating each word’s position on the N-P polarity scale. In this way, affective, emotional, and attitudinal phrases can be classified according to their degree of positivity or negativity. By contrast, the training-document approach uses statistical analysis and machine learning algorithms, such as neural networks, clustering approaches, and decision trees, to ascertain the sentiment for a new text document based on patterns from previous “training” documents with assigned sentiment scores.
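The lexicon approach works like the word-score lookup sketched in the previous answer; the training-document approach can be sketched as follows in Python, assuming scikit-learn, with a deliberately tiny made-up training set (a real system would need far more labeled data).

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Made-up labeled training documents
    train_texts = ["great product, works perfectly", "absolutely love it",
                   "terrible quality, broke in a week", "very disappointing purchase"]
    train_labels = ["positive", "positive", "negative", "negative"]

    # Learn word-usage patterns associated with each sentiment class
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(train_texts, train_labels)

    # Classify a new, unseen document based on the learned patterns
    print(model.predict(["the product broke after one week"]))   # ['negative'] for this toy setup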
Section 7.7 Review Questions
1. What are some of the main challenges the Web poses for knowledge discovery?
• The Web is too big for effective data mining.
• The Web is too complex.
• The Web is too dynamic.
• The Web is not specific to a domain.
• The Web has everything.
2. What is Web mining? How does it differ from regular data mining or text mining?
Web mining is the discovery and analysis of interesting and useful information from the Web and about the Web, usually through Web-based tools. It differs from regular data mining in that it works with the semi-structured and unstructured data of the Web (page content, hyperlink structures, and usage logs) rather than with structured, largely numeric databases, and it differs from text mining in that it covers not only textual content but also the Web’s link structure and usage data.
3. What are the three main areas of Web mining?
The three main areas of Web mining are Web content mining, Web structure mining, and Web usage (or activity) mining.
4. Identify three application areas for Web mining (at the bottom of Figure 8.1). Based on your own experiences, comment on their use cases in business settings.
(Since there are several application areas, this answer will vary for different students. Following is one possible answer.)
Three possible application areas for Web mining include sentiment analysis, clickstream analysis, and customer analytics. Clickstream analysis helps to better understand user behavior on a website. Sentiment analysis helps us understand the opinions and affective state of users on a system. Customer analytics helps to provide solutions for sales, service, marketing, and product teams, and optimize the customer life cycles. The use cases for these applications center on user experience, and primarily affect customer service and customer relationship management functions of an organization.
5. What is Web content mining? How can it be used for competitive advantage?
Web content mining refers to the extraction of useful information from Web pages. The documents may be extracted in some machine-readable format so that automated techniques can generate some information about the Web pages. Collecting and mining Web content can be used for competitive intelligence (collecting intelligence about competitors’ products, services, and customers), which can give your organization a competitive advantage.
6. What is Web structure mining? How does it differ from Web content mining?
Web structure mining is the process of extracting useful information from the links embedded in Web documents. By contrast, Web content mining involves analysis of the specific textual content of web pages. So, Web structure mining is more related to navigation through a website, whereas Web content mining is more related to text mining and the document hierarchy of a particular web page.
Section 7.8 Review Questions
1. What is a search engine? Why are search engines critically important for today’s businesses?
A search engine is a software program that searches for documents (Internet sites or files) based on the keywords (individual words, multi-word terms, or complete sentences) that users provide about the subject of their inquiry. It is the most prominent type of information retrieval system for finding relevant content on the Web. Search engines have become the centerpiece of most Internet-based transactions and other activities. Because people use them extensively to learn about products and services, it is very important for companies to have prominent visibility on the Web; hence the major effort companies put into search engine optimization (SEO).
2. What is a Web crawler? What is it used for? How does it work?
A Web crawler (also called a spider or a Web spider) is a piece of software that systematically browses (crawls through) the World Wide Web for the purpose of finding and fetching Web pages. It starts with a list of “seed” URLs, goes to the pages of those URLs, and then follows each page’s hyperlinks, adding them to the search engine’s database. Thus, the Web crawler navigates through the Web in order to construct the database of websites.
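A minimal crawler sketch in Python, assuming the requests and beautifulsoup4 packages are installed; the seed URL is a placeholder, and a production crawler would also respect robots.txt, throttle its requests, deduplicate URLs more carefully, and handle many more error cases.

    from collections import deque
    from urllib.parse import urljoin

    import requests
    from bs4 import BeautifulSoup

    def crawl(seed_url: str, max_pages: int = 20) -> set:
        """Breadth-first crawl from a seed URL; returns the set of URLs visited."""
        visited, frontier = set(), deque([seed_url])
        while frontier and len(visited) < max_pages:
            url = frontier.popleft()
            if url in visited:
                continue
            visited.add(url)
            try:
                html = requests.get(url, timeout=5).text
            except requests.RequestException:
                continue
            # Follow each hyperlink on the page, adding new URLs to the frontier
            for link in BeautifulSoup(html, "html.parser").find_all("a", href=True):
                frontier.append(urljoin(url, link["href"]))
        return visited

    # Example call with a placeholder seed: crawl("https://example.com")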
3. What is “search engine optimization”? Who benefits from it?
Search engine optimization (SEO) is the intentional activity of affecting the visibility of an e-commerce site or a website in a search engine’s natural (unpaid or organic) search results. It involves editing a page’s content, HTML, metadata, and associated coding to both increase its relevance to specific keywords and to remove barriers to the indexing activities of search engines. In addition, SEO efforts include promoting a site to increase its number of inbound links. SEO primarily benefits companies with e-commerce sites by making their pages appear toward the top of search engine lists when users query.
4. What things can help Web pages rank higher in search engine results?
Several practices can help:
· Cross-linking between pages of the same website to provide more links to the most important pages may improve its visibility.
· Writing content that includes frequently searched keyword phrases, so as to be relevant to a wide variety of search queries, will tend to increase traffic.
· Updating content regularly, so as to keep search engines crawling back frequently, can give additional weight to a site.
· Adding relevant keywords to a Web page’s metadata, including the title tag and meta description, will tend to improve the relevancy of a site’s search listings, thus increasing traffic.
· Normalizing the URLs of Web pages that are accessible via multiple URLs, for example by using canonical link elements and redirects, helps ensure that links to all versions of the URL count toward the page’s link popularity score.
Section 7.9 Review Questions
1. What are the three types of data generated through Web page visits?
· Automatically generated data stored in server access logs, referrer logs, agent logs, and client-side cookies
· User profiles
· Metadata, such as page attributes, content attributes, and usage data.
2. What is clickstream analysis? What is it used for?
Clickstream analysis is the analysis of the information that Web servers automatically collect about users’ page visits (the clickstream). It helps us better understand user behavior; by applying data and text mining techniques to this data, a company may be able to discern interesting patterns in the clickstreams, such as common navigation paths and the pages where visitors abandon the site.
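A small clickstream sketch in Python that parses Web server access-log lines in the Common Log Format and counts page views; the two log lines are made up.

    import re
    from collections import Counter

    # Two made-up access-log lines in the Common Log Format
    LOG_LINES = [
        '10.0.0.1 - - [05/Mar/2024:10:15:32 +0000] "GET /products HTTP/1.1" 200 5120',
        '10.0.0.2 - - [05/Mar/2024:10:16:01 +0000] "GET /cart HTTP/1.1" 200 2048',
    ]

    # Capture the client IP and the requested path from each line
    PATTERN = re.compile(r'^(\S+) \S+ \S+ \[.*?\] "(?:GET|POST) (\S+)')

    page_views = Counter()
    for line in LOG_LINES:
        match = PATTERN.match(line)
        if match:
            page_views[match.group(2)] += 1

    print(page_views.most_common())   # most frequently requested pages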
3. What are the main applications of Web mining?
· Determine the lifetime value of clients.
· Design cross-marketing strategies across products.
· Evaluate promotional campaigns.
· Target electronic ads and coupons at user groups based on user access patterns.
· Predict user behavior based on previously learned rules and users’ profiles.
· Present dynamic information to users based on their interests and profiles.
4. What are commonly used Web analytics metrics? What is the importance of metrics?
There are four main categories of Web analytic metrics:
· Website usability: How were they using my website? These involve page views, time on site, downloads, click map, and click paths.
· Traffic sources: Where did they come from? These include referral websites, search engines, direct, offline campaigns, and online campaigns.
· Visitor profiles: What do my visitors look like? These include keywords, content groupings, geography, time of day, and landing page profiles.
· Conversion statistics: What does all this mean for the business? Metrics include new visitors, returning visitors, leads, sales/conversions, and abandonments (a small worked example follows this answer).
These metrics are important because they provide access to a lot of valuable marketing data, which can be leveraged for better insights to grow your business and better document your ROI. The insight and intelligence gained from Web analytics can be used to effectively manage the marketing efforts of an organization and its various products or services.
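The conversion statistics in the last bullet reduce to simple arithmetic; here is a tiny worked example in Python, with made-up monthly figures and one common definition of cart abandonment.

    # Illustrative monthly numbers for a single campaign (made-up figures)
    visitors = 12_500
    new_visitors = 7_800
    conversions = 375          # e.g., completed purchases
    abandoned_carts = 1_100    # carts started but never checked out

    conversion_rate = conversions / visitors                              # 0.03  -> 3.0%
    new_visitor_share = new_visitors / visitors                           # 0.624 -> 62.4%
    abandonment_rate = abandoned_carts / (abandoned_carts + conversions)  # ~0.746 -> 74.6%

    print(f"Conversion rate: {conversion_rate:.1%}")
    print(f"New visitor share: {new_visitor_share:.1%}")
    print(f"Cart abandonment rate: {abandonment_rate:.1%}")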