This article walks through common Hive query commands and usage patterns, with hands-on examples covering external tables, Parquet and ORC storage, partitioning, and bucketing.
1. Upload the log file to HDFS
```bash
hdfs dfs -mkdir /user/hive/warehouse/original_access_logs_0104
hdfs dfs -put access.log /user/hive/warehouse/original_access_logs_0104
```
Check that the file was copied correctly:
```bash
hdfs dfs -ls /user/hive/warehouse/original_access_logs_0104
```
2. Create a Hive external table over the log file
```sql
DROP TABLE IF EXISTS original_access_logs;
CREATE EXTERNAL TABLE original_access_logs (
  ip STRING,
  request_time STRING,
  method STRING,
  url STRING,
  http_version STRING,
  code1 STRING,
  code2 STRING,
  dash STRING,
  user_agent STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  'input.regex' = '([^ ]*) - - \\[([^\\]]*)\\] "([^ ]*) ([^ ]*) ([^ ]*)" (\\d*) (\\d*) "([^"]*)" "([^"]*)"',
  'output.format.string' = '%1$s %2$s %3$s %4$s %5$s %6$s %7$s %8$s %9$s')
LOCATION '/user/hive/warehouse/original_access_logs_0104';
```
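A quick sanity check, not part of the original steps: when the regex fails to match a line, the RegexSerDe returns NULL for every column, so spot-checking a few rows catches pattern mistakes early (this assumes the hive-contrib jar is already on the classpath; see the ADD JAR note in the next step).
```sql
-- If every column comes back NULL, the input.regex did not match the log lines.
SELECT ip, request_time, method, url
FROM original_access_logs
LIMIT 5;
```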
3. Convert the text table to a Parquet table
```sql
DROP TABLE IF EXISTS pq_access_logs;
CREATE TABLE pq_access_logs (
  ip STRING,
  request_time STRING,
  method STRING,
  url STRING,
  http_version STRING,
  code1 STRING,
  code2 STRING,
  dash STRING,
  user_agent STRING,
  `timestamp` INT)
STORED AS PARQUET;

-- The contrib RegexSerDe must be on the classpath when the INSERT below reads
-- the source table; on a CDH cluster the jar can be added with one of:
-- ADD JAR /opt/cloudera/parcels/CDH/lib/hive/lib/hive-contrib.jar;
-- ADD JAR /opt/cloudera/parcels/CDH/lib/hive/contrib/hive-contrib-2.1.1-cdh7.3.2.jar;

INSERT OVERWRITE TABLE pq_access_logs
SELECT
  ip,
  from_unixtime(unix_timestamp(request_time, 'dd/MMM/yyyy:HH:mm:ss z'), 'yyyy-MM-dd HH:mm:ss z'),
  method,
  url,
  http_version,
  code1,
  code2,
  dash,
  user_agent,
  unix_timestamp(request_time, 'dd/MMM/yyyy:HH:mm:ss z')
FROM original_access_logs;
```
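To confirm the time conversion worked, it is worth eyeballing a few converted rows (a quick check added here, not part of the original walkthrough):
```sql
-- request_time should now read 'yyyy-MM-dd HH:mm:ss z', and
-- `timestamp` should hold the corresponding epoch seconds.
SELECT request_time, `timestamp`
FROM pq_access_logs
LIMIT 3;
```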
4. Find the 5 most frequent visitor IPs
```sql
SELECT ip, COUNT(*) AS cnt
FROM pq_access_logs
GROUP BY ip
ORDER BY cnt DESC
LIMIT 5;
```
Watch how the Hive job is split into MapReduce jobs and executed.
How do you view the logs of a Hive job's execution?
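One way to see the stage breakdown up front is EXPLAIN, which prints the plan Hive compiles the query into (a small illustration, not part of the original steps):
```sql
-- The plan shows the MapReduce stages: one for the GROUP BY
-- aggregation and another for the global ORDER BY.
EXPLAIN
SELECT ip, COUNT(*) AS cnt
FROM pq_access_logs
GROUP BY ip
ORDER BY cnt DESC
LIMIT 5;
```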
## Demo - Partitioned Tables
### Steps
1. Create a partitioned table
```sql
DROP TABLE IF EXISTS partitioned_access_logs;
CREATE EXTERNAL TABLE partitioned_access_logs (
  ip STRING,
  request_time STRING,
  method STRING,
  url STRING,
  http_version STRING,
  code1 STRING,
  code2 STRING,
  dash STRING,
  user_agent STRING,
  `timestamp` INT)
PARTITIONED BY (request_date STRING)
STORED AS PARQUET;
```
2. Load the log table into the partitioned table, using a dynamic partition insert
```sql
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE partitioned_access_logs
PARTITION (request_date)
SELECT ip, request_time, method, url, http_version, code1, code2, dash, user_agent, `timestamp`,
       to_date(request_time) AS request_date
FROM pq_access_logs;
```
Default partition: rows whose partition value cannot be computed (for example, a NULL date) are written to __HIVE_DEFAULT_PARTITION__.
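To list the partitions the dynamic insert produced (including the default partition, if any rows lacked a date):
```sql
-- Each distinct request_date value becomes its own partition directory.
SHOW PARTITIONS partitioned_access_logs;
```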
3. Inspect the partitioned table's directory structure
```bash
hdfs dfs -ls /user/hive/warehouse/partitioned_access_logs
```
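Filtering on the partition column lets Hive prune partitions and scan only the matching directories. A minimal sketch; the date below is a placeholder, so substitute a request_date value that actually exists in your data:
```sql
-- Only the request_date='2014-03-15' directory is read (hypothetical date).
SELECT COUNT(*)
FROM partitioned_access_logs
WHERE request_date = '2014-03-15';
```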
## Demo - Bucketed Tables
### Steps
1. Create a bucketed log table
Bucket by the first octet of the IP address, then sort by request time.
```sql
DROP TABLE IF EXISTS bucketed_access_logs;
CREATE TABLE bucketed_access_logs (
  first_ip_addr INT,
  request_time STRING)
CLUSTERED BY (first_ip_addr)
SORTED BY (request_time)
INTO 10 BUCKETS
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;

-- If DISTRIBUTE BY and SORT BY are omitted from the INSERT below, the following
-- parameters must be set (unnecessary since Hive 2.0, where they default to true):
SET hive.enforce.sorting = true;
SET hive.enforce.bucketing = true;
INSERT OVERWRITE TABLE bucketed_access_logs
SELECT cast(split(ip, '\\.')[0] as int) as first_ip_addr, request_time
FROM pq_access_logs
--DISTRIBUTE BY first_ip_addr
--SORT BY request_time
;
```
2. Inspect the physical storage layout of the bucketed table
```bash
hdfs dfs -ls /user/hive/warehouse/bucketed_access_logs/
# Guess how many files there are?
hdfs dfs -cat /user/hive/warehouse/bucketed_access_logs/000000_0 | head
hdfs dfs -cat /user/hive/warehouse/bucketed_access_logs/000001_0 | head
hdfs dfs -cat /user/hive/warehouse/bucketed_access_logs/000009_0 | head
# Can you spot the bucketing rule?
```
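Bucketing also enables cheap sampling: TABLESAMPLE can read a single bucket instead of scanning the whole table. A minimal sketch:
```sql
-- Reads only bucket 1 of 10, roughly a tenth of the data.
SELECT *
FROM bucketed_access_logs
TABLESAMPLE (BUCKET 1 OUT OF 10 ON first_ip_addr)
LIMIT 10;
```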
## Demo - ORC Table Compression
1. Create a new ORC table for the access logs, enabling compression when inserting data
```sql
DROP TABLE IF EXISTS compressed_access_logs;
CREATE TABLE compressed_access_logs (
  ip STRING,
  request_time STRING,
  method STRING,
  url STRING,
  http_version STRING,
  code1 STRING,
  code2 STRING,
  dash STRING,
  user_agent STRING,
  `timestamp` INT)
STORED AS ORC
TBLPROPERTIES ("orc.compress"="SNAPPY");

--SET hive.exec.compress.intermediate=true;
--SET mapreduce.map.output.compress=true;

INSERT OVERWRITE TABLE compressed_access_logs
SELECT * FROM pq_access_logs;

DESCRIBE FORMATTED compressed_access_logs;
```
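To confirm the codec recorded on the table (the DESCRIBE FORMATTED output above also lists it under Table Parameters):
```sql
-- Should show orc.compress=SNAPPY among the table properties.
SHOW TBLPROPERTIES compressed_access_logs;
```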
2. Compare with the original, uncompressed Parquet table
Size
The raw text file is 38 MB.
```bash
hdfs dfs -ls /user/hive/warehouse/pq_access_logs/
```
Parquet, uncompressed: 4,158,592 bytes (4.1 MB)
```bash
hdfs dfs -ls /user/hive/warehouse/compressed_access_logs/
```
ORC with Snappy compression: 1,074,404 bytes (1.0 MB)
Compression ratio: roughly 4:1 (uncompressed Parquet to compressed ORC)
Note: compression is recommended for data backups; for read-heavy workloads, enabling compression does not necessarily improve query performance.