Hi all,
I've run a simple performance test under the TPC-DS benchmark using Spark 2.1.0 + CarbonData 1.0.0, and the result seems unsatisfactory. The details are as follows:

About Env:
Hadoop 2.7.2 + Spark 2.1.0 + CarbonData 1.0.0
Cluster: 5 nodes, 32 GB memory per node

About TPC-DS:
Data size: 1 GB (test data generation command: ./dsdgen -scale 1 -suffix '.csv' -dir /data/tpc-ds/data/)
Largest table: inventory, with 11,745,000 records

About Performance Tuning:
Spark:
SPARK_WORKER_MEMORY=4g
SPARK_WORKER_INSTANCES=4
CarbonData:
Left at defaults to avoid configuration differences.

About Performance Test Result:
SQL that can execute without modification: 70% (using the Netezza SQL templates)
Max duration: 39.00s
Min duration: 2.18s
Average duration: 9.99s

I'd like to raise a discussion on the following topics:
1. Is the hardware of the cluster reasonable? (What is a common hardware configuration, per node, for a Spark/CarbonData cluster?)
2. Is the result of the performance test reasonable and explicable?
3. For interactive queries, is Spark + CarbonData an acceptable solution?
4. For interactive queries, what other solutions may work well? (The average query duration should probably be under 5s, or even lower.)

Thanks very much ~
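P.S. For anyone reproducing the setup, here is roughly how the dsdgen output can be loaded into CarbonData from spark-shell. The store path, table choice (inventory only), and load options are illustrative, not my exact scripts:

  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.CarbonSession._

  // Create a CarbonSession; the HDFS store path below is an example
  val carbon = SparkSession
    .builder()
    .appName("tpcds-carbondata")
    .getOrCreateCarbonSession("hdfs://namenode:9000/carbondata/store")

  // One of the TPC-DS tables (inventory), stored in CarbonData format
  carbon.sql("""
    CREATE TABLE IF NOT EXISTS inventory (
      inv_date_sk INT,
      inv_item_sk INT,
      inv_warehouse_sk INT,
      inv_quantity_on_hand INT
    )
    STORED BY 'carbondata'
  """)

  // dsdgen output is '|'-delimited and has no header line,
  // so the delimiter and field order are given explicitly
  carbon.sql("""
    LOAD DATA INPATH 'hdfs://namenode:9000/data/tpc-ds/data/inventory.csv'
    INTO TABLE inventory
    OPTIONS('DELIMITER'='|',
            'FILEHEADER'='inv_date_sk,inv_item_sk,inv_warehouse_sk,inv_quantity_on_hand')
  """)

The other 23 TPC-DS tables are created and loaded in the same way.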
up↑
haha~~~
Hi,
We are working on a TPC-H performance report now and have improved the performance with the new format. We have already raised PRs 584 and 586 for this; they are still under review and will be merged soon. Once these PRs are merged we will start verifying the TPC-DS performance as well.

Regards,
Ravindra.
Hi Ravindra,
Thanks for your reply. I'm excited that you're working on this significant job, and I'm looking forward to your performance test reports based on TPC-H and TPC-DS.
Hi all,

I've run a simple performance test under the TPC-DS benchmark using Spark 2.1.0 + CarbonData 1.0.0 and Impala 2.7.0 + Parquet, and the result seems unsatisfactory. The details are as follows:

About Env:
Hadoop 2.7.2 + Spark 2.1.0 + CarbonData 1.0.0
Impala 2.7.0
Cluster: 5 nodes, 32 GB memory per node

About TPC-DS:
Data size: 1 GB (test data generation command: ./dsdgen -scale 1 -suffix '.csv' -dir /data/tpc-ds/data/)
Largest table: inventory, with 11,745,000 records

About Performance Tuning:
Spark:
SPARK_WORKER_MEMORY=4g
SPARK_WORKER_INSTANCES=4
CarbonData:
Left at defaults to avoid configuration differences.

About Performance Test Result【Spark + CarbonData】:
SQL that can execute without modification: 70% (using the Netezza SQL templates)
Max duration: 39.00s
Min duration: 2.18s
Average duration: 9.99s

About Performance Test Result【Impala + Parquet】:
SQL that can execute without modification: 70% (using the Netezza SQL templates)
Max duration: 16.75s
Min duration: 0.42s
Average duration: 2.18s

You can find the per-query details in the attachment of this e-mail (Sheet 1).
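In case it helps interpret the numbers, a minimal sketch of the kind of per-query timing loop this assumes on the Spark side (wall clock, with full result materialization). The query file path is illustrative, not my exact harness:

  import scala.io.Source

  // `carbon` is the CarbonSession from the load sketch earlier in this thread
  def timeQuery(path: String): Double = {
    val sqlText = Source.fromFile(path).mkString
    val start   = System.nanoTime()
    carbon.sql(sqlText).collect()        // collect() forces the whole query to run
    (System.nanoTime() - start) / 1e9    // seconds
  }

  val secs = timeQuery("/data/tpc-ds/queries/query03.sql")
  println(f"query03: $secs%.2f s")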
Hi
Thank you for sharing the test result. The comparison would be more meaningful if it used the same compute engine on both sides: Spark 2.1 + Parquet versus Spark 2.1 + CarbonData. Would you be interested in doing this test (CarbonData vs. Parquet) along with us?

Regards,
Liang
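P.S. A rough spark-shell sketch of such a same-engine setup, writing the same dsdgen output as Parquet so identical TPC-DS SQL runs through Spark SQL in both cases. The schema (inventory only) and paths are illustrative:

  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.types._

  val spark = SparkSession.builder().appName("tpcds-parquet").getOrCreate()

  // TPC-DS inventory schema; the other tables follow the same pattern
  val inventorySchema = StructType(Seq(
    StructField("inv_date_sk", IntegerType),
    StructField("inv_item_sk", IntegerType),
    StructField("inv_warehouse_sk", IntegerType),
    StructField("inv_quantity_on_hand", IntegerType)
  ))

  // Read the pipe-delimited dsdgen output and rewrite it as Parquet
  val inventoryDF = spark.read
    .option("delimiter", "|")
    .schema(inventorySchema)
    .csv("hdfs://namenode:9000/data/tpc-ds/data/inventory.csv")

  inventoryDF.write
    .mode("overwrite")
    .parquet("hdfs://namenode:9000/data/tpc-ds/parquet/inventory")

  // Register the Parquet copy under the same table name,
  // so the unmodified TPC-DS queries can be reused
  spark.read
    .parquet("hdfs://namenode:9000/data/tpc-ds/parquet/inventory")
    .createOrReplaceTempView("inventory")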