Apache HAWQ -- Optimization Notes

Optimization work:

  1. Use numeric columns for table partitioning wherever possible, e.g. convert a Date value into the absolute number of days since 1970-01-01 (see the sketch after this list).
  2. SQL syntax: avoid BETWEEN ... AND where possible (checking the query plan shows the impact is small), and prefer CTEs (WITH v AS ...) when a query contains multiple subqueries.
  3. Choose the data distribution policy (random vs. hash distribution) and the bucketnum setting according to the table's data volume and the dominant query patterns.
  4. Store hot and cold data in identically structured tables in different schemas, and keep the number of partitions of the hot-data table as small as possible (by adding and dropping partitions dynamically).
  5. Set the hawq_rm_stmt_nvseg and hawq_rm_stmt_vseg_memory parameters according to the resources a SQL statement needs.
  6. Run "vacuum table_name; analyze table_name;" on a daily schedule to collect statistics for every table, so that the optimal query plan can be generated.
  7. Run "vacuum pg_class; reindex table pg_class;" on the catalog table pg_class to reduce the number of metadata records.
  8. Analyze the query plan, locate the SQL's performance bottleneck, and optimize it specifically.
  9. Take measures to maximize the data-locality ratio of queries.
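
A minimal sketch of item 1, assuming a hypothetical log table (t_access_log_demo and its columns are illustrative, not from the original notes): the partition key stores the number of days since 1970-01-01, which in PostgreSQL-style SQL can be computed as date_value - '1970-01-01'::date.

-- Hypothetical example for item 1: partition on an integer "days since 1970-01-01" key.
-- Table and column names are illustrative only.
CREATE TABLE t_access_log_demo (
    day_num  int,       -- e.g. ('2017-10-17'::date - '1970-01-01'::date) = 17456
    user_id  bigint,
    url      text
)
WITH (appendonly = true, orientation = parquet)
DISTRIBUTED RANDOMLY
PARTITION BY RANGE (day_num)
(
    START (17440) END (17470) EVERY (1)   -- one partition per day
);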

Actual tests

Tables use random distribution; bucketnum=9.
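
For comparison, a sketch of how bucketnum is pinned on a hash-distributed table (table and column names are illustrative; bucketnum only takes effect for hash distribution, while randomly distributed tables size their virtual segments from the data volume):

-- Hypothetical counterpart: a hash-distributed table with an explicit bucket number.
CREATE TABLE t_access_log_hash (
    day_num  int,
    user_id  bigint,
    url      text
)
WITH (appendonly = true, bucketnum = 9)
DISTRIBUTED BY (user_id);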



1. Set the number of virtual segments used by a query
Statement level:

SET hawq_rm_stmt_nvseg=10;
SET hawq_rm_stmt_vseg_memory='256mb';
  • Disable the statement-level setting
    SET hawq_rm_stmt_nvseg=0;
set hawq_rm_nvseg_perquery_perseg_limit=10;
set hawq_rm_nvseg_perquery_limit=512;

  The hawq_rm_nvseg_perquery_limit and hawq_rm_nvseg_perquery_perseg_limit parameters adjust the number of virtual segments used when a query is executed.
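
A short workflow sketch tying the two statement-level settings together (the query itself is illustrative): set the per-statement virtual segment count and memory, run the heavy statement, then set hawq_rm_stmt_nvseg back to 0 to disable the statement-level override.

-- Sketch: statement-level resource override for one heavy query.
SET hawq_rm_stmt_vseg_memory = '256mb';
SET hawq_rm_stmt_nvseg = 10;
SELECT count(*) FROM t_net_access_log;   -- illustrative query, runs with 10 virtual segments
SET hawq_rm_stmt_nvseg = 0;              -- disable the statement-level setting again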

2. Parameters related to hash-distributed tables:

default_hash_table_bucket_number
hawq_rm_nvseg_perquery_limit
hawq_rm_nvseg_perquery_perseg_limit
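
To check what these parameters are currently set to (SHOW works for any server configuration parameter; note that changing default_hash_table_bucket_number is normally a cluster-level configuration change rather than a session SET):

-- Read the current values of the hash-distribution related parameters.
SHOW default_hash_table_bucket_number;
SHOW hawq_rm_nvseg_perquery_limit;
SHOW hawq_rm_nvseg_perquery_perseg_limit;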

3. Information about a partition design can be looked up in the pg_partitions view. For example, to view the partition design of the ins_wifi_dates table:

SELECT partitionboundary, partitiontablename, partitionname, partitionlevel, partitionrank
FROM pg_partitions
WHERE tablename='ins_wifi_dates';

The following system tables and views show information about partitioned tables.

  • pg_partition - tracks partitioned tables and their inheritance relationships.
  • pg_partition_templates - shows the subpartitions created with a subpartition template.
  • pg_partition_columns - shows the partition key columns used in a partition design.
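
A small follow-up sketch, assuming pg_partition_columns exposes the tablename, columnname and partitionlevel columns as in Greenplum:

-- List the partition key columns of the same table.
SELECT columnname, partitionlevel
FROM pg_partition_columns
WHERE tablename = 'ins_wifi_dates';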

4. View the segment file distribution of a table

SELECT gpr.tablespace_oid,
         gpr.database_oid,
         gpf.relfilenode_oid,
         gpf.segment_file_num,
         '/hawq_data/'||gpr.tablespace_oid||'/'||gpr.database_oid||'/'||gpf.relfilenode_oid ||'/'||gpf.segment_file_num as path,
         pg_class.relname,
         gpr.persistent_state,
         gpf.persistent_state
FROM gp_persistent_relfile_node gpf,pg_class, gp_persistent_relation_node gpr
WHERE gpf.relfilenode_oid = pg_class.relfilenode
        AND gpr.relfilenode_oid = pg_class.relfilenode
        AND pg_class.relname='person';
SELECT distinct gpr.tablespace_oid,
         gpr.database_oid,
         gpf.relfilenode_oid,
         pg_class.relname,
         gpr.persistent_state,
         gpf.persistent_state
FROM gp_persistent_relfile_node gpf,pg_class, gp_persistent_relation_node gpr
WHERE gpf.relfilenode_oid = pg_class.relfilenode
        AND gpr.relfilenode_oid = pg_class.relfilenode
        AND pg_class.relname like 'person_%' order by pg_class.relname ;

# including the schema name

SELECT gpr.tablespace_oid,
         gpr.database_oid,
         gpf.relfilenode_oid,
         gpf.segment_file_num,
         '/hawq_data/'||gpr.tablespace_oid||'/'||gpr.database_oid||'/'||gpf.relfilenode_oid ||'/'||gpf.segment_file_num as path,
        pgn.nspname AS schemaname,
         pg_class.relname AS tablename,
         gpr.persistent_state,
         gpf.persistent_state
FROM gp_persistent_relfile_node gpf,pg_class, gp_persistent_relation_node gpr, pg_namespace pgn
WHERE gpf.relfilenode_oid = pg_class.relfilenode
        AND gpr.relfilenode_oid = pg_class.relfilenode 
        AND pgn.oid = pg_class.relnamespace
        AND pg_class.relname='t_wifi_terminal_chrs_1_prt_1';

Testing shows:
The data is stored in HDFS at: tablespace/database/table/segfile
The directory of a partitioned table A contains the default hash-bucket number of segfiles, but all of them are 0 bytes; the directories of its child tables (e.g. a1) also contain the default hash-bucket number of segfiles, and those actually hold the data.

View a table's size:

select sotdsize from hawq_toolkit.hawq_size_of_table_disk where sotdtablename='t_net_access_log';

5. When viewing a query plan with EXPLAIN or EXPLAIN ANALYZE, set

set gp_log_dynamic_partition_pruning=on;

so that the names of the scanned partitions can be displayed.

  EXPLAIN ANALYZE differs from EXPLAIN in that it actually executes the query and gathers statistics about the execution. The EXPLAIN ANALYZE output is very helpful for understanding how the query was actually executed and why a performance problem occurred.
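
A sketch of the combination described above (the predicate column day_id is illustrative; any filter on the partition key will do):

-- Show the plan, actual execution statistics, and the partitions that were scanned.
SET gp_log_dynamic_partition_pruning = on;
EXPLAIN ANALYZE
SELECT count(*)
FROM ins_wifi_dates
WHERE day_id = 17456;   -- hypothetical partition-key predicate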

SELECT * FROM pg_stats  WHERE tablename = 'inventory';

# Query session information

select * from pg_stat_activity;
select application_name, datname, procpid, sess_id, usename, waiting, client_addr, client_port, waiting_resource,  query_start, backend_start, xact_start from pg_stat_activity;
select application_name, datname, procpid, sess_id, usename, waiting, client_addr, client_port, waiting_resource, current_query, query_start, backend_start, xact_start from pg_stat_activity;
select application_name, datname, procpid, sess_id, usename, waiting, client_addr, client_port, waiting_resource,  query_start, backend_start, xact_start from pg_stat_activity where application_name='psql' and current_query<>'';

datname: the database name
procpid: the PID of the backend running the current SQL
query_start: the time the SQL started executing
current_query: the SQL statement currently being executed
waiting: whether the SQL is still running; t means it is running, f means it has finished
client_addr: the client IP address
There are two ways to kill a session. The first is:

SELECT pg_cancel_backend(PID);

This only cancels SELECT queries; it does not work for UPDATE, DELETE, or other DML.

The second is:

SELECT pg_terminate_backend(PID);

This can kill any kind of operation (SELECT, UPDATE, DELETE, DROP, and so on).

After pg_cancel_backend(), the session is still there and the transaction is rolled back;
after pg_terminate_backend(), the session is gone and the transaction is rolled back.

If pg_terminate_backend() cannot kill a session, you can fall back to the OS level and run kill -9 pid directly.
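
A hedged sketch combining the two: pick the target backends from pg_stat_activity (procpid and usename as in the queries above; the user name is illustrative) and terminate them, skipping the current session.

-- Terminate every backend of one user except the current session.
SELECT pg_terminate_backend(procpid)
FROM pg_stat_activity
WHERE usename = 'etl_user'              -- illustrative user name
  AND procpid <> pg_backend_pid();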

select * from pg_resqueue_status;

-- Resource queues

SELECT * FROM dump_resource_manager_status(2);

--Segment

SELECT * FROM dump_resource_manager_status(3);
SELECT relname, relkind, reltuples, relpages FROM pg_class WHERE relname = 'ins_wifi_dates';
SELECT * FROM pg_stats  WHERE tablename = 'ins_wifi_dates';
SELECT gp_segment_id, COUNT(*)
FROM ins_wifi_dates
GROUP BY gp_segment_id
ORDER BY gp_segment_id;
set gp_select_invisible=true;
select count(*) from pg_class;
set gp_select_invisible=false;
select count(*) from pg_class;

vacuum pg_class;
reindex table pg_class;

Logs:
set

Table redistribution:

ALTER TABLE sales SET WITH (REORGANIZE=TRUE);
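
A related sketch, assuming HAWQ also accepts the Greenplum-style clause for changing the distribution policy (customer_id is illustrative); like REORGANIZE=TRUE, this rewrites and redistributes the table data:

-- Assumption: change the distribution key; the data is redistributed accordingly.
ALTER TABLE sales SET DISTRIBUTED BY (customer_id);
-- Or switch back to random distribution:
ALTER TABLE sales SET DISTRIBUTED RANDOMLY;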

Check for tables that have not been analyzed:

select * from hawq_toolkit.hawq_stats_missing;

http://hawq.incubator.apache.org/docs/userguide/2.2.0.0-incubating/reference/toolkit/hawq_toolkit.html#topic46
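
If the view reports rows, re-running ANALYZE on those tables (item 6 above) fills in the missing statistics; the table name below is illustrative:

-- Re-collect statistics for a table reported by hawq_stats_missing.
ANALYZE t_wifi_terminal_chrs;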
View a table's size in HAWQ: // does not include partitions

SELECT relname AS name, sotdsize AS size, sotdtoastsize AS toast, sotdadditionalsize AS other
FROM hawq_toolkit.hawq_size_of_table_disk AS sotd, pg_catalog.pg_class
WHERE sotd.sotdoid = pg_class.oid AND pg_class.relname = 't_wifi_terminal_chrs'
ORDER BY relname;

hawq_size_of_partition_and_indexes_disk

SELECT relname AS name, sopaidpartitionoid, sopaidpartitiontablename,
       sopaidpartitiontablesize AS size, sotailtablesizeuncompressed AS uncompressed
FROM hawq_toolkit.hawq_size_of_partition_and_indexes_disk sopi, pg_catalog.pg_class
WHERE sopi.sopaidparentoid = pg_class.oid AND pg_class.relname = 't_wifi_terminal_chrs'
ORDER BY sopaidpartitionoid;

SELECT relname AS name, sum(sopaidpartitiontablesize) AS size
FROM hawq_toolkit.hawq_size_of_partition_and_indexes_disk sopi, pg_catalog.pg_class
WHERE sopi.sopaidparentoid = pg_class.oid AND pg_class.relname = 't_wifi_terminal_chrs'
GROUP BY relname;

Memory-to-core ratio

[root@master2 pg_log]# cat hawq-2017-10-17_224829.csv 
2017-10-17 18:21:57.319620 CST,,,p237647,th317192736,,,,0,con4,,seg-10000,,,,,"LOG","00000","Resource manager chooses ratio 5120 MB per core as cluster level memory to core ratio, there are 2304 MB memory 6 CORE resource unable to be utilized.",,,,,,,0,,"resourcepool.c",4641,
2017-10-17 18:21:57.319668 CST,,,p237647,th317192736,,,,0,con4,,seg-10000,,,,,"LOG","00000","Resource manager adjusts segment hd4.bigdata original global resource manager resource capacity from (154368 MB, 32 CORE) to (153600 MB, 30 CORE)",,,,,,,0,,"resourcepool.c",4787,
2017-10-17 18:21:57.319716 CST,,,p237647,th317192736,,,,0,con4,,seg-10000,,,,,"LOG","00000","Resource manager adjusts segment hd1.bigdata original global resource manager resource capacity from (154368 MB, 32 CORE) to (153600 MB, 30 CORE)",,,,,,,0,,"resourcepool.c",4787,
2017-10-17 18:21:57.319762 CST,,,p237647,th317192736,,,,0,con4,,seg-10000,,,,,"LOG","00000","Resource manager adjusts segment hd2.bigdata original global resource manager resource capacity from (154368 MB, 32 CORE) to (153600 MB, 30 CORE)",,,,,,,0,,"resourcepool.c",4787,
