Lab#6
Hive and Impala
1) Import table "webpage" via Sqoop
$ sqoop import \
--connect jdbc:mysql://localhost/loudacre \
--username training --password training \
--table webpage \
--target-dir /loudacre/webpage \
--fields-terminated-by "\t"
2) Validate the imported data in HDFS (please provide a screenshot from the Terminal, NOT Hue), e.g.:
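A minimal check from the terminal (the part-file name assumes Sqoop's default part-m-NNNNN naming):
$ hdfs dfs -ls /loudacre/webpage
$ hdfs dfs -cat /loudacre/webpage/part-m-00000 | head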
3) Create External Table via Hive session
CREATE EXTERNAL TABLE webpage
(page_id SMALLINT,
name STRING,
assoc_files STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
LOCATION '/loudacre/webpage';
4) Open the Firefox browser and run the query below from Hive:
SELECT * FROM webpage WHERE name LIKE "ifruit%"
5) Open the Firefox browser and run the query below from Impala:
SELECT * FROM webpage WHERE name LIKE "ifruit%"
[Figure: Sqoop import diagram – Sqoop generates Java code on the local Linux OS and imports the MySQL tables (accounts, device, webpage) from jdbc:mysql://localhost into HDFS directories (/loudacre/accounts, /loudacre/devices) on the Hadoop cluster, where Hive and Impala can query them.]
Hive and Impala
Impala and Hive are both tools that provide SQL querying of data stored in HDFS/HBase. (Source: Cloudera.com)
Starting Hive and Impala
Starting Hive
• $ sudo service zookeeper-server start
• $ sudo service hive-server2 start
• $ hive
Starting Impala
• $ impala-shell
• > INVALIDATE METADATA;
(INVALIDATE METADATA forces Impala to reload table metadata from the metastore, so that tables created outside Impala, e.g. via Hive or Sqoop, become visible.)
Advanced Analytics – Theory and Methods
Advanced Analytics – Theory and Methods – Lab#4
Narender Reddy Kudumula
University of Cumberlands
Data Science & Big Data Analysis (ITS-836)
Prof. Dr. Gasan Elkhodari
09/29/2019
Advanced Analytics – Theory and Methods
Lab 4 Assignment
1) Plot and display the clusters for the high school data set.
2) Discuss using your own words the impact of changing K value on the data sets used in this Lab.
When the number of attributes is relatively small, a common approach to further refine the choice of k is to plot the data to determine how distinct the identified clusters are from each other. In the best case, ideally when n = 2, the clusters are well defined, with considerable space between them (here, the four identified clusters). However, in other cases the clusters may be close to each other, and the distinction may not be so obvious. In such cases, it is important to apply some judgment as to whether anything different will result from using more clusters. If using more clusters does not better distinguish the groups, it is almost certainly better to go with fewer clusters. A simple way to operationalize this judgment is an elbow plot, sketched below.
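As an illustration (not part of the original lab code), here is a minimal Python sketch of the elbow method: it plots the within-cluster sum of squares (WSS) against k and looks for the point where adding clusters stops helping. The synthetic data and the scikit-learn usage are assumptions.
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# toy data: 300 points drawn around 4 hypothetical centers
X, _ = make_blobs(n_samples=300, centers=4, random_state=42)

# fit k-means for k = 1..8 and record the within-cluster sum of squares
ks = range(1, 9)
wss = [KMeans(n_clusters=k, n_init=10, random_state=42).fit(X).inertia_
       for k in ks]

# the "elbow" in this curve suggests a reasonable choice of k
plt.plot(list(ks), wss, marker="o")
plt.xlabel("k (number of clusters)")
plt.ylabel("Within-cluster sum of squares")
plt.show()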
Lab#9 – FALL 2019
Lab#9-Apache Spark
Narender Reddy Kudumula
University of Cumberlands
Data Science & Big Data Analysis (ITS-836)
Prof. Dr. Gasan Elkhodari
11/17/2019
The Data
1. Exercise Directory: $DEV1/exercises/spark-application
2. Data files (HDFS): /loudacre/weblogs
3. Application: CountJPGs.py (see code below)
The Task
4. In this exercise you will write your own Spark application instead of using the interactive Spark shell.
5. Write a simple program that counts the number of JPG requests in a web log file. The name of the file should be passed into the program as an argument. This is the same task you did earlier in the "Use RDDs to Transform a Dataset" exercise. The logic is the same, but this time you will need to set up the SparkContext object yourself.
6. Depending on which programming language you are using, follow the appropriate set of instructions below to write a Spark program.
7. Before running your program, be sure to exit from the Spark Shell.
The Code
import sys
from pyspark import SparkContext

if __name__ == "__main__":
    if len(sys.argv) < 2:
        print >>sys.stderr, "Usage: CountJPGs <logfile>"
        exit(-1)
    sc = SparkContext()
    logfile = sys.argv[1]
    # count the requests (lines) that reference a .jpg file
    count = sc.textFile(logfile).filter(lambda line: '.jpg' in line).count()
    print "Number of JPG requests:", count
    sc.stop()
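To run the application from the command line (a typical invocation; the /loudacre/weblogs path matches the data files listed above):
$ spark-submit CountJPGs.py /loudacre/weblogs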
Big Data – Hadoop Ecosystems
Lab #5
Big Data – Hadoop Ecosystems Lab #5
Narender Reddy Kudumula
University of Cumberlands
Data Science & Big Data Analysis (ITS-836)
Prof. Dr. Gasan Elkhodari
10/05/2019
Big Data – Hadoop Ecosystems
Lab #5
Import the accounts table into HDFS file system:
1) Import account:
$ sqoop import \
--connect jdbc:mysql://localhost/loudacre \
--username training --password training \
--table accounts \
--target-dir /loudacre/accounts \
--null-non-string '\\N'
2) List the contents of the accounts directory:
$ hdfs dfs -ls /loudacre/accounts
3) Import incremental updates to accounts
As Loudacre adds new accounts in MySQL accounts table, the account data in HDFS must be updated as accounts are created. You can use Sqoop to append these new records.
Run the add_new_accounts.py script to add the latest accounts to MySQL.
$ $DEV1/exercises/sqoop/add_new_accounts.py
Incrementally import and append the newly added accounts to the accounts directory. Use Sqoop to import only the records whose acct_num column value is greater than the last value already imported (the largest account ID):
$ sqoop import \
--connect jdbc:mysql://localhost/loudacre \
--username training --password training \
--incremental append \
--null-non-string '\\N' \
--table accounts \
--target-dir /loudacre/accounts \
--check-column acct_num \
--last-value <largest account ID>
4) You should see three new files. Use Hadoop's cat command to view the entire contents of these files.
$ hdfs dfs -cat /loudacre/accounts/part-m-0000[456]
Lab #2
Lab #2: Import the Device table from MySQL
Narender Reddy Kudumula
University of Cumberlands
Data Science & Big Data Analysis (ITS-836)
Prof. Dr. Gasan Elkhodari
09/18/2019
Import the Device table from MySQL
1. Open a new terminal window if necessary.
2. Get familiar with Sqoop by running the sqoop command line:
$ sqoop help
3. List the tables in the loudacre database:
$ sqoop list-tables \
--connect jdbc:mysql://localhost/loudacre \
--username training --password training
4. Run the sqoop import command to see its options: $ sqoop import --help
5. Use Sqoop to import the device table in the loudacre database and save it in HDFS under /loudacre:
Create an import-device.sh file using the vi editor to import the device table from the loudacre database and save it in HDFS under /loudacre; a sketch of the script's contents is shown below.
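A plausible sketch of import-device.sh, mirroring the connection options used for the other Sqoop imports in these labs (the exact contents were not included; the target directory name is an assumption):
#!/bin/sh
# sketch -- assumes the same MySQL connection settings as the other imports
sqoop import \
--connect jdbc:mysql://localhost/loudacre \
--username training --password training \
--table device \
--target-dir /loudacre/devices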
Give the import-device.sh file execute permission using "chmod 755 import-device.sh" and run "./import-device.sh".
View the imported device table files in HDFS.
Lab#10 – FALL 2019
Lab#10-Apache Spark
Narender Reddy Kudumula
University of Cumberlands
Data Science & Big Data Analysis (ITS-836)
Prof. Dr. Gasan Elkhodari
11/24/2019
1) Using map-reduce, count the number of requests from each user.
a) Use map to create a Pair RDD with the user ID as the key, and the integer 1 as the value. (The user ID is the third field in each line.) Your data will look something like this: (userid, 1) (userid,1) (userid,1)
b) Use reduce to sum the values for each user ID. Your RDD data will be similar to: (userid, 5) (userid,7) (userid,2)
2) Use countByKey to determine how many users visited the site for each frequency. That is, how many users visited once, twice, three times and so on.
a) Use map to reverse the key and value, like this: (5,userid) (7,userid) (2,userid)
b) Use the countByKey action to return a Map of frequency:user-count pairs.
3) Create an RDD where the user id is the key, and the value is the list of all the IP addresses that user has connected from. (IP address is the first field in each request line.)
Hint: Map to (userid,ipaddress) and then use groupByKey.
(userid, [20.1.34.55, 74.125.139.981])
(userid, [245.33.1.1, 245.33.1.1, 66.79.233.99])
(userid, [65.50.196.141, 142.456.23.1, 671.143.222.1])
4) Join the accounts data with the weblog data to produce a dataset keyed by user ID which contains the user account information and the number of website hits for that user.
a) Create an RDD based on the accounts data consisting of key/value-array pairs: (userid,[values…])
b) Join the Pair RDD with the set of user-id/hit-count pairs calculated in the first step.
c) Display the user ID, hit count, and first name (3rd value) and last name (4th value) for the first 5 elements, e.g.:
Lab#10 – solution
# Step 1 – Create an RDD based on a subset of weblogs (those ending in digit 6)
logs = sc.textFile("/loudacre/weblogs/*6")
# map each request (line) to a pair (userid, 1), then sum the values
userreqs = logs \
.map(lambda line: line.split()) \
.map(lambda words: (words[2],1)) \
.reduceByKey(lambda count1,count2: count1 + count2)
# Step 2 – Show the records for the 10 users with the highest count
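# (a possible completion -- this step's code was missing from the original:
# swap each (userid, count) pair to (count, userid) and take the 10 largest)
for (count, userid) in userreqs.map(lambda pair: (pair[1], pair[0])).top(10):
    print userid, count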
# Step 3 – Group IPs by user ID
userips = logs \
.map(lambda line: line.split()) \
.map(lambda words: (words[2],words[0])) \
.groupByKey()
# print out the first 10 user ids, and their IP list
for (userid,ips) in userips.take(10):
    print userid, ":"
    for ip in ips: print "\t", ip
# Step 4a – Map account data to (userid, [values...])
accounts = sc.textFile("/loudacre/accounts") \
.map(lambda s: s.split(',')) \
.map(lambda account: (account[0],account))
# Step 4b – Join account data with userreqs then merge hit count into valuelist
accounthits = accounts.join(userreqs)
# Step 4c – Display userid, hit count, first name, last name for the first 5 elements
for (userid,(values,count)) in accounthits.take(5):
    print userid, count, values[3], values[4]
Lab#8 – Apache Spark
Narender Reddy Kudumula
University of Cumberlands
Data Science & Big Data Analysis (ITS-836)
Prof. Dr. Gasan Elkhodari
11/09/2019
The Data
Files and Data Used in This Homework:
Exercise Directory: $DEV1/exercises/spark-etl
Data files (local): $DEV1DATA/activations/*
The Task
1. Review the data in $DEV1DATA/activations.
2. Copy this data to /loudacre in HDFS.
3. Create a new RDD (e.g. test-01) for any single file under HDFS /loudacre/activations/.
4. Display the contents of the RDD by using the "*.collect()" function.
5. Create an additional RDD (e.g. test-02) for any other file.
6. Display the contents of that RDD by using the "*.collect()" function.
7. Use the "*.union" function to merge both RDDs (test-01 and test-02).
8. Examine and validate the new union.
9. Use the filter function "*.filter" to extract and display all records that have the text "account-number".
10. Display the results by using the "*.collect()" function.
The results should be similar to the screen below; a pyspark sketch of these steps follows.
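A minimal pyspark-shell sketch of steps 3 through 10 (the activation file names are hypothetical, since the actual files under /loudacre/activations are not listed here):
test01 = sc.textFile("/loudacre/activations/2008-10.xml")  # hypothetical file
print test01.collect()                                     # step 4
test02 = sc.textFile("/loudacre/activations/2008-11.xml")  # hypothetical file
print test02.collect()                                     # step 6
merged = test01.union(test02)                              # step 7
print merged.count()                                       # step 8: validate
# step 9: keep only records containing the text "account-number"
matches = merged.filter(lambda line: "account-number" in line)
print matches.collect()                                    # step 10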
Lab #7
Narender Reddy Kudumula
University of Cumberlands
Data Science & Big Data Analysis (ITS-836)
Prof. Dr. Gasan Elkhodari
11/03/2019
Lab#07
Apache web server logs are generally stored in files on the local machines running the server. In this exercise, you will simulate an Apache server by placing provided web log files into a local spool directory and then use Flume to collect the data. Both the local and HDFS directories must exist before using the spooling directory source.
1. Create a directory in HDFS called /loudacre/weblogs to hold the data files Flume ingests, e.g.
$ hdfs dfs -mkdir /loudacre/weblogs
2. Create a local directory for web server log output
$ sudo mkdir -p /flume/weblogs_spooldir
3. Give all users permission to write to the /flume/weblogs_spooldir directory:
$ sudo chmod a+w -R /flume
4. Configure Flume
In $DEV1/exercises/flume, create a Flume configuration file with the characteristics listed in the attached file in the week's content; a plausible sketch appears below.
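The required characteristics are in the course content and are not reproduced here; the following is only a plausible sketch of solution/spooldir.conf, assuming a standard spooling-directory source feeding an HDFS sink through a memory channel (the agent and source names match steps 5-6; all property values are assumptions):
# sketch of solution/spooldir.conf -- values are assumptions
agent1.sources = webserver-log-source
agent1.sinks = hdfs-sink
agent1.channels = memory-channel

# spooling-directory source: watch the local spool directory
agent1.sources.webserver-log-source.type = spooldir
agent1.sources.webserver-log-source.spoolDir = /flume/weblogs_spooldir
agent1.sources.webserver-log-source.channels = memory-channel

# HDFS sink: write events as plain text under /loudacre/weblogs
agent1.sinks.hdfs-sink.type = hdfs
agent1.sinks.hdfs-sink.hdfs.path = /loudacre/weblogs
agent1.sinks.hdfs-sink.hdfs.fileType = DataStream
agent1.sinks.hdfs-sink.channel = memory-channel

# in-memory channel buffering events between source and sink
agent1.channels.memory-channel.type = memory
agent1.channels.memory-channel.capacity = 100000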
5. Run the Agent
$ flume-ng agent --conf /etc/flume-ng/conf \
--conf-file solution/spooldir.conf \
--name agent1 -Dflume.root.logger=INFO,console
6. Wait a few moments for the Flume agent to start up. You will see a message like: Component type: SOURCE, name: webserver-log-source started
7. Open a separate terminal window and change to the exercise directory. Run the script to place the web log files in the local /flume/weblogs_spooldir directory:
$ cd $DEV1/exercises/flume
$ ./copy-move-weblogs.sh /flume/weblogs_spooldir
This script will create a temporary copy of the web log files and move them to the spooldir directory.
8. Return to the terminal that is running the Flume agent and watch the logging output. The output will give information about the files Flume is putting into HDFS.
9. Once the Flume agent has finished, enter CTRL+C to terminate the process.
10. Using the hdfs command line or Hue File Browser, list the files in HDFS that were added by the Flume agent.
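For example, from the command line:
$ hdfs dfs -ls /loudacre/weblogs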
Note that the files that were imported are tagged with a Unix timestamp corresponding to the time the file was imported, e.g. FlumeData.1427214989392