GitHub user xuchuanyin opened a pull request:
https://github.com/apache/carbondata/pull/1825

[CARBONDATA-2032][DataLoad] directly write carbon data files to HDFS

Currently in data loading, CarbonData writes the final data files to local disk and then copies them to HDFS. To save disk I/O, CarbonData can skip this step and write these files directly to HDFS.

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:

 - [x] Any interfaces changed? `Only internal interfaces have been changed`
 - [x] Any backward compatibility impacted? `No`
 - [x] Document update required? `No`
 - [x] Testing done. Please provide details on
   - Whether new unit test cases have been added or why no new tests are required? `No`
   - How it is tested? Please attach test report. `Tested on a local node and a 3-node cluster`
   - Is it a performance related change? Please attach the performance test report. `Yes. The disk I/O has decreased`
   - Any additional information to help reviewers in testing this change. `No`
 - [x] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA. `Not related`

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/xuchuanyin/carbondata 0118_opt_write_data_files_directly_to_hdfs

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/1825.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #1825

----

commit fdced241cd5508d2fc7da457ed6e2e57dcaee4f1
Author: xuchuanyin <xuchuanyin@...>
Date: 2018-01-18T03:24:34Z

    directly write carbon data files to HDFS

    directly write carbon data files to HDFS to reduce disk I/O

----

---
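To illustrate the idea (this is not CarbonData's actual writer code; the class and method names below are invented for the sketch), writing directly to HDFS amounts to opening the output stream on the distributed file system via the standard Hadoop `FileSystem` API instead of on local disk:

```java
import java.io.IOException;
import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DirectHdfsWriteSketch {

  /**
   * Old flow: write the finished carbondata file to local disk, then copy it to HDFS.
   * Proposed flow: open the stream on HDFS and write the encoded data there directly,
   * skipping the local-disk round trip.
   */
  public static void writeDirectly(Configuration conf, String hdfsFilePath,
      byte[] encodedData) throws IOException {
    FileSystem fs = FileSystem.get(conf);
    // The bytes go straight to the DataNodes; no temporary local file is created.
    try (OutputStream out = fs.create(new Path(hdfsFilePath), true)) {
      out.write(encodedData);
    }
  }
}
```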
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1825

@xuchuanyin There is a reason why we copy instead of writing directly to HDFS.
1. We make sure that one complete carbondata file goes into only one HDFS block: while copying it from local disk to HDFS we specify the block size. Otherwise it impacts query performance a lot.
2. We avoid the overhead of writing to HDFS directly (HDFS internally writes the replicas as well) by doing the copy in a different thread, so the main loading flow is not blocked.
---
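For reference, the copy-with-block-size behaviour described above could be sketched with the plain Hadoop API roughly as follows (illustrative only, not CarbonData's actual copy code; `CopyWithBlockSizeSketch` and its parameters are invented for the example):

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class CopyWithBlockSizeSketch {

  /**
   * Copy a finished local carbondata file to HDFS on a separate thread,
   * creating the target file with an explicit block size so the whole file
   * fits in a single HDFS block. Running the copy off the main loading flow
   * keeps the HDFS write (including replication) from blocking loading.
   */
  public static Thread copyAsync(Configuration conf, String localFile,
      String hdfsTarget, short replication, long blockSize) {
    Thread copier = new Thread(() -> {
      try (InputStream in = new FileInputStream(localFile)) {
        FileSystem fs = FileSystem.get(conf);
        // create(path, overwrite, bufferSize, replication, blockSize)
        IOUtils.copyBytes(in,
            fs.create(new Path(hdfsTarget), true, 4096, replication, blockSize),
            4096, true);
      } catch (IOException e) {
        throw new RuntimeException("Copy to HDFS failed for " + localFile, e);
      }
    });
    copier.start();
    return copier;
  }
}
```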
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1825 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1693/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1825 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2926/ ---
Github user xuchuanyin commented on the issue:
https://github.com/apache/carbondata/pull/1825 @ravipesala thanks, I got your point... ---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1825 SDV Build Fail, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2950/ ---
Github user jackylk commented on the issue:
https://github.com/apache/carbondata/pull/1825 @xuchuanyin Do you observe any loading performance improvement? ---
Github user xuchuanyin commented on the issue:
https://github.com/apache/carbondata/pull/1825 @jackylk I haven't tested it in my cluster, and the problems @ravipesala mentioned need to be solved. I am working on it... ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1825 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1717/ ---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1825 SDV Build Fail, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2976/ ---
Github user xuchuanyin commented on the issue:
https://github.com/apache/carbondata/pull/1825 retest this please ---
Github user xuchuanyin commented on the issue:
https://github.com/apache/carbondata/pull/1825

@ravipesala I reconsidered the questions you mentioned and fixed them as below:
1. I use the user-specified `table_blocksize` as the block size of the data files. Actually, in the current implementation the block size is big enough to hold the entire file.
2. I write the data files directly to HDFS with only 1 replication in the main thread and complete the remaining replications in another thread -- just the same way as before.

After implementing this, I tested it in a 3-node cluster: the data loading performance was the same as before, while the end-to-end `total size of disk write decreased by about 11%`.
---
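A minimal sketch of this approach with the standard Hadoop `FileSystem` API, assuming `table_blocksize` has already been resolved to bytes (the class and method names are invented for illustration, not the actual CarbonData code):

```java
import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DeferredReplicationSketch {

  /**
   * Open the carbondata file on HDFS with replication = 1 and the
   * user-configured table_blocksize as the HDFS block size, so the main
   * loading thread only pays for writing a single replica.
   */
  public static FSDataOutputStream openForWrite(FileSystem fs, String hdfsFile,
      long tableBlockSizeInBytes) throws IOException {
    return fs.create(new Path(hdfsFile), true, 4096, (short) 1, tableBlockSizeInBytes);
  }

  /**
   * After the stream is closed, ask the NameNode to raise the replication
   * factor in a separate thread; the remaining replicas are then created
   * off the critical loading path, much like the old background copy.
   */
  public static void completeReplicationAsync(FileSystem fs, String hdfsFile,
      short targetReplication) {
    new Thread(() -> {
      try {
        fs.setReplication(new Path(hdfsFile), targetReplication);
      } catch (IOException e) {
        throw new RuntimeException("Failed to raise replication for " + hdfsFile, e);
      }
    }).start();
  }
}
```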
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1825 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2983/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1825 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1753/ ---
Github user xuchuanyin commented on the issue:
https://github.com/apache/carbondata/pull/1825 retest this please ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1825 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2986/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1825 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1756/ ---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1825 SDV Build Fail, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2996/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1825 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2990/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1825 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1760/ ---