Assignment # 3 is a research assignment. We studied Apache Pig in lecture # 4. You are expected to do online research and find one case study where Apache Pig was used to solve a particular problem. I am expecting a three-page write-up: a maximum of one page on the business problem and two pages on the technical solution. Please provide as much technical detail as possible about the solution built with Apache Pig, and draw technical diagrams to explain it.
I want everyone to do their own research and provide their own write-up. I am not happy that some students are copying from websites instead of putting in the effort to do research. These assignments count toward your final grade, so if you want to score high grades, show originality in your research.

CPSC6730 Big Data Analytics
Lecture # 4: Apache Hive

Apache Hive
• Apache Hive is part of the Data Access layer of the Hadoop ecosystem and can be installed when you install the Hortonworks Data Platform.

The Problem
• Until recently, most of the data maintained by an enterprise was stored in a relational database and analyzed using a structured query language. As a result, most data analysts today are familiar with SQL. However, data in Hadoop is commonly analyzed using MapReduce, which many data analysts are not familiar with and would require training to use. This limits how quickly an enterprise can derive value from a Hadoop deployment. How do enterprises bridge this knowledge gap?

The Solution
• Apache Hive bridges the knowledge gap by enabling data analysts to use familiar SQL-like commands that are automatically converted to MapReduce jobs and executed across the Hadoop cluster. Hive is a data warehouse infrastructure built on top of Hadoop. It was designed to enable users with database experience to analyze data using familiar SQL-like statements. Hive includes a SQL-like language called Hive Query Language, or HQL. Hive and HQL enable an enterprise to leverage existing skill sets to quickly derive value from a Hadoop deployment.

OLTP or OLAP?
• Hive is used for online analytical processing (OLAP), not online transaction processing (OLTP), because it was originally designed to run batch jobs rather than interactive queries or random table updates. Hive currently offers no support for the row-level inserts, updates, and deletes commonly required for OLTP. When Hive runs over MapReduce, even the simplest Hive queries can take minutes to complete. Running Hive over Tez (we will discuss it in later classes) rather than MapReduce increases interactive performance, but Hive is still not designed for OLTP: it still has no support for row-level inserts, updates, and deletes. However, work is currently being done to add these features to Hive.

Structuring Unstructured Data
• Hive is not a relational database, although on the surface it can appear like one. Hadoop was built to collect, store, and analyze massive amounts of data. As such, the Hadoop Distributed File System (HDFS) is a reservoir of data from multiple sources, often a mix of unstructured, semi-structured, and structured data. Hive provides a mechanism to project structure onto HDFS data and then query it using HQL. However, there is a limit to what Hive can do; sometimes it is necessary to use another tool, like Apache Pig, to pre-format the unstructured data before processing it with Hive.

Structuring Unstructured Data
• If you are familiar with databases, then you understand that unstructured data has no schema associated with it. If you are not familiar with database schemas, they define the columns of a table along with the type of data in each column. Data types include such things as strings, integers, floating point numbers, and dates. A Hive installation includes a metastore database. Hive supports several database types for the metastore, including an embedded Derby database used for development or testing, or an external database like MySQL used for production deployments. To project structure onto HDFS data, HQL includes statements to create a table with user-defined schema information. The table schema is stored in the metastore database. The user-defined schema is associated with the data stored in one or more HDFS files when you use HQL statements to load the files into a table. The format of the data on HDFS remains unchanged, but it appears as structured data when you use HQL commands to submit queries.
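To make the create-and-load mechanism concrete, here is a minimal HQL sketch. It assumes a hypothetical tab-delimited log file already sitting in HDFS; the table name (web_logs), its columns, and the path are made-up examples.

    -- Define a schema for raw log data already stored in HDFS
    -- (table name, columns, and path are hypothetical examples)
    CREATE TABLE web_logs (
      ip        STRING,
      req_time  STRING,
      url       STRING,
      status    INT
    )
    ROW FORMAT DELIMITED
    FIELDS TERMINATED BY '\t';

    -- Associate HDFS files with the schema; the file format on HDFS is unchanged
    LOAD DATA INPATH '/data/raw/web_logs' INTO TABLE web_logs;

    -- A familiar SQL-like query; Hive compiles it into a MapReduce (or Tez) job
    SELECT status, COUNT(*) AS hits
    FROM web_logs
    GROUP BY status;

Note that LOAD DATA only moves the files and records the schema in the metastore; the bytes on HDFS are not rewritten, which is the sense in which Hive merely projects structure onto the data. The final SELECT is where the SQL-to-MapReduce conversion described under The Solution happens: Hive compiles it into a job that runs in parallel across the cluster.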
Submitting Hive Queries
• Hive includes many methods to submit queries. Queries submitted to either the HiveServer or the newer HiveServer2 result in a MapReduce or Tez job being submitted to YARN. YARN, the Hadoop resource scheduler, works in concert with HDFS to run the job in parallel across the machines in the cluster. The Hive CLI submits HQL commands to the HiveServer, either interactively or noninteractively. The illustration shows the Hive CLI being used interactively: users enter HQL commands at the hive> prompt. HQL commands can also be placed into a file and run noninteractively using hive -f file_name.

Submitting Hive Queries
• The remaining three methods all submit HQL queries to the newer HiveServer2. The Beeline CLI is a new JDBC client that connects to a local or remote HiveServer2.
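As a sketch of the noninteractive path, the statements below could be saved in a file and submitted in one shot with hive -f. The file name and the report query are hypothetical, reusing the web_logs example table from earlier.

    -- daily_report.hql  (hypothetical file name)
    -- Submit noninteractively with:  hive -f daily_report.hql
    -- Hive compiles the query and submits the resulting job to YARN.
    SELECT url, COUNT(*) AS hits
    FROM web_logs
    WHERE status = 404
    GROUP BY url
    ORDER BY hits DESC
    LIMIT 10;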
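Beeline accepts the same HQL once a JDBC connection to HiveServer2 is open. A minimal sketch, assuming HiveServer2 is listening on its default port (10000) on localhost; the host and the query are examples only.

    -- Start Beeline and connect with:  beeline -u jdbc:hive2://localhost:10000
    -- (equivalently, type  !connect jdbc:hive2://localhost:10000  at the beeline> prompt)
    -- The same HQL then runs unchanged against HiveServer2:
    SELECT status, COUNT(*) AS hits
    FROM web_logs
    GROUP BY status;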