GitHub user kumarvishal09 opened a pull request:
https://github.com/apache/carbondata/pull/2588 [CARBONDATA-2807] Fixed data load performance issue with more number of records

**Problem:** Data loading takes more time when the number of records is high.

**Root cause:** With a high number of records, the intermediate merger takes more time.

**Solution:** The check on the number of files present in the file list was done inside a synchronized block, so each intermediate merge request spent time waiting on the lock; with a high number of records this impacted overall data loading performance. The check is now done before entering the synchronized block.

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:

- [ ] Any interfaces changed?
- [ ] Any backward compatibility impacted?
- [ ] Document update required?
- [ ] Testing done
      Please provide details on
      - Whether new unit test cases have been added or why no new tests are required?
      - How it is tested? Please attach test report.
      - Is it a performance related change? Please attach the performance test report.
      - Any additional information to help reviewers in testing this change.
- [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kumarvishal09/incubator-carbondata dataloadperf

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/2588.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2588

----

commit 8972039a4c812676fb77ba73b54d5fa3a3d38dc8
Author: kumarvishal09 <kumarvishal1802@...>
Date: 2018-07-30T15:51:09Z

    Fixed data loading performance issue when number of records is high

commit fe9088e74e367c7b0079744a4de20e8154ca6b7e
Author: kumarvishal09 <kumarvishal1802@...>
Date: 2018-07-30T15:53:00Z

    Fixed data loading performance issue when number of records is high

----
---
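The reorganization described in the PR can be sketched as follows. This is a simplified illustration, not the actual CarbonData code: `MergeTrigger`, `filesToMerge`, and the `File[]` return value are stand-ins for the real `UnsafeIntermediateMerger`, its `parameters.getNumberOfIntermediateFileToBeMerged()`, and its submission to a merger pool. The point of the change is that the cheap size check runs outside the lock, so a thread whose batch is not yet full returns immediately instead of queuing on the monitor. Note the pre-check is deliberately optimistic: it reads the list size without holding the lock.

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class MergeTrigger {
    private final Object lockObject = new Object();
    private List<File> procFiles = new ArrayList<>();
    private final int filesToMerge; // stands in for carbon.sort.intermediate.files.limit

    public MergeTrigger(int filesToMerge) {
        this.filesToMerge = filesToMerge;
    }

    public void addFileToMerge(File sortTempFile) {
        synchronized (lockObject) {
            procFiles.add(sortTempFile);
        }
    }

    /** Returns the batch handed to the merger, or null if no merge was started. */
    public File[] startFileMergingIfPossible() {
        // Optimistic, lock-free pre-check: most callers see a not-yet-full
        // batch and return without ever touching the monitor.
        if (procFiles.size() >= filesToMerge) {
            File[] fileList;
            synchronized (lockObject) {
                fileList = procFiles.toArray(new File[procFiles.size()]);
                procFiles = new ArrayList<>();
            }
            return fileList; // the real code submits this to a merger thread pool
        }
        return null;
    }

    public static void main(String[] args) {
        MergeTrigger trigger = new MergeTrigger(2);
        trigger.addFileToMerge(new File("part0.sorttemp"));
        trigger.addFileToMerge(new File("part1.sorttemp"));
        File[] batch = trigger.startFileMergingIfPossible();
        System.out.println("batch size: " + (batch == null ? 0 : batch.length)); // prints "batch size: 2"
    }
}
```

A trade-off of this shape, raised later in the review, is that two threads can both pass the unsynchronized pre-check; the second then drains a list that may already be empty.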
Github user kumarvishal09 commented on the issue:
https://github.com/apache/carbondata/pull/2588

@ravipesala Please review
---
In reply to this post by qiuchenjian-2
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2588

Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6348/
---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2588

Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7630/
---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2588

retest this please
---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2588

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7645/
---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2588

Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6371/
---
Github user brijoobopanna commented on the issue:
https://github.com/apache/carbondata/pull/2588

retest this please
---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2588

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7652/
---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2588

SDV Build Failed, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6072/
---
Github user jackylk commented on the issue:
https://github.com/apache/carbondata/pull/2588

@kumarvishal09 Can you describe the solution in the PR description?
---
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2588#discussion_r206412903

--- Diff: processing/src/main/java/org/apache/carbondata/processing/loading/sort/unsafe/merger/UnsafeIntermediateMerger.java ---
@@ -111,18 +108,15 @@ public void addFileToMerge(File sortTempFile) {
   }

   public void startFileMergingIfPossible() {
-    File[] fileList = null;
-    synchronized (lockObject) {
-      if (procFiles.size() >= parameters.getNumberOfIntermediateFileToBeMerged()) {
+    File[] fileList;
+    if (procFiles.size() >= parameters.getNumberOfIntermediateFileToBeMerged()) {
+      synchronized (lockObject) {
         fileList = procFiles.toArray(new File[procFiles.size()]);
--- End diff --

Better to use `procFiles.toArray(new File[0]);`, which may give better performance; see the JDK comment for toArray.
---
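For reference, the two `toArray` forms discussed here produce identical results; the difference is only who allocates the array. A minimal, standalone comparison (the file names are made up for illustration). The JDK note jackylk refers to is that passing a zero-length array lets the list allocate a right-sized array itself, which on many modern JVMs can be at least as fast because the JIT can avoid zeroing a caller-allocated array that is about to be overwritten.

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class ToArrayDemo {
    public static void main(String[] args) {
        List<File> procFiles = new ArrayList<>();
        procFiles.add(new File("sorttemp_0.sorttemp")); // hypothetical temp files
        procFiles.add(new File("sorttemp_1.sorttemp"));

        // Presized form: the list copies its elements straight into the
        // array the caller passes in (no extra allocation by the list).
        File[] presized = procFiles.toArray(new File[procFiles.size()]);

        // Zero-length form: the list sees the array is too small and
        // allocates a correctly sized one internally.
        File[] zeroSized = procFiles.toArray(new File[0]);

        System.out.println(presized.length + " " + zeroSized.length); // prints "2 2"
    }
}
```

Either form is correct here; the choice is a micro-optimization, which is why the thread treats it as a "check and update" item rather than a blocker.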
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2588#discussion_r206413037

--- Diff: processing/src/main/java/org/apache/carbondata/processing/loading/sort/unsafe/merger/UnsafeIntermediateMerger.java ---
@@ -111,18 +108,15 @@ public void addFileToMerge(File sortTempFile) {
   }

   public void startFileMergingIfPossible() {
-    File[] fileList = null;
-    synchronized (lockObject) {
-      if (procFiles.size() >= parameters.getNumberOfIntermediateFileToBeMerged()) {
+    File[] fileList;
+    if (procFiles.size() >= parameters.getNumberOfIntermediateFileToBeMerged()) {
+      synchronized (lockObject) {
         fileList = procFiles.toArray(new File[procFiles.size()]);
         this.procFiles = new ArrayList<File>();
-        if (LOGGER.isDebugEnabled()) {
-          LOGGER
-              .debug("Submitting request for intermediate merging no of files: " + fileList.length);
-        }
       }
-    }
-    if (null != fileList) {
+      if (LOGGER.isDebugEnabled()) {
+        LOGGER.debug("Sumitting request for intermediate merging no of files: " + fileList.length);
+      }
       startIntermediateMerging(fileList);
--- End diff --

No need to check for not null, as the old code did?
---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2588

Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6387/
---
Github user kumarvishal09 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2588#discussion_r206474923

--- Diff: processing/src/main/java/org/apache/carbondata/processing/loading/sort/unsafe/merger/UnsafeIntermediateMerger.java ---
@@ -111,18 +108,15 @@ public void addFileToMerge(File sortTempFile) {
   }

   public void startFileMergingIfPossible() {
-    File[] fileList = null;
-    synchronized (lockObject) {
-      if (procFiles.size() >= parameters.getNumberOfIntermediateFileToBeMerged()) {
+    File[] fileList;
+    if (procFiles.size() >= parameters.getNumberOfIntermediateFileToBeMerged()) {
+      synchronized (lockObject) {
         fileList = procFiles.toArray(new File[procFiles.size()]);
--- End diff --

Ok, I will check and update.
---
Github user kumarvishal09 commented on the issue:
https://github.com/apache/carbondata/pull/2588

@jackylk Currently I am checking whether I can manage this change using the existing intermediate merge property. I will update once testing is done.
---
Github user mohammadshahidkhan commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2588#discussion_r207162716

--- Diff: processing/src/main/java/org/apache/carbondata/processing/loading/sort/unsafe/merger/UnsafeIntermediateMerger.java ---
@@ -111,18 +108,15 @@ public void addFileToMerge(File sortTempFile) {
   }

   public void startFileMergingIfPossible() {
-    File[] fileList = null;
-    synchronized (lockObject) {
-      if (procFiles.size() >= parameters.getNumberOfIntermediateFileToBeMerged()) {
+    File[] fileList;
+    if (procFiles.size() >= parameters.getNumberOfIntermediateFileToBeMerged()) {
--- End diff --

@kumarvishal09 Please check once; in my view, double-checked locking is needed here. Otherwise a thread waiting to acquire the lock will enter the synchronized block and end up doing intermediate merging with 0 or fewer than the configured number of files, **"carbon.sort.intermediate.files.limit"**.
---
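The double-checked variant being suggested can be sketched as below. This is an illustrative standalone class, not the actual CarbonData code: the class and field names (`DoubleCheckedMergeTrigger`, `filesToMerge`) are invented, and the `File[]` return value stands in for submitting the batch to the merger pool. Re-checking the size after acquiring the lock prevents a second thread, queued behind a merge that just drained the list, from grabbing an empty or undersized batch.

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class DoubleCheckedMergeTrigger {
    private final Object lockObject = new Object();
    private List<File> procFiles = new ArrayList<>();
    private final int filesToMerge; // stands in for carbon.sort.intermediate.files.limit

    public DoubleCheckedMergeTrigger(int filesToMerge) {
        this.filesToMerge = filesToMerge;
    }

    public void addFileToMerge(File sortTempFile) {
        synchronized (lockObject) {
            procFiles.add(sortTempFile);
        }
    }

    /** Returns the drained batch, or null when no merge should start. */
    public File[] startFileMergingIfPossible() {
        if (procFiles.size() >= filesToMerge) {          // optimistic, lock-free check
            synchronized (lockObject) {
                if (procFiles.size() >= filesToMerge) {  // re-check under the lock
                    File[] fileList = procFiles.toArray(new File[procFiles.size()]);
                    procFiles = new ArrayList<>();
                    return fileList;
                }
            }
        }
        return null; // batch not full, or another thread already took it
    }
}
```

The thread ultimately decides against this: the author argues below that the upstream producer is slow enough that two full batches racing on the lock is negligible in practice, so the inner check is omitted.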
Github user kumarvishal09 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2588#discussion_r208156750

--- Diff: processing/src/main/java/org/apache/carbondata/processing/loading/sort/unsafe/merger/UnsafeIntermediateMerger.java ---
@@ -111,18 +108,15 @@ public void addFileToMerge(File sortTempFile) {
   }

   public void startFileMergingIfPossible() {
-    File[] fileList = null;
-    synchronized (lockObject) {
-      if (procFiles.size() >= parameters.getNumberOfIntermediateFileToBeMerged()) {
+    File[] fileList;
+    if (procFiles.size() >= parameters.getNumberOfIntermediateFileToBeMerged()) {
+      synchronized (lockObject) {
         fileList = procFiles.toArray(new File[procFiles.size()]);
         this.procFiles = new ArrayList<File>();
-        if (LOGGER.isDebugEnabled()) {
-          LOGGER
-              .debug("Submitting request for intermediate merging no of files: " + fileList.length);
-        }
       }
-    }
-    if (null != fileList) {
+      if (LOGGER.isDebugEnabled()) {
+        LOGGER.debug("Sumitting request for intermediate merging no of files: " + fileList.length);
+      }
       startIntermediateMerging(fileList);
--- End diff --

No, it's not required, as the first if check already verifies that the size of the list is at least the number of intermediate files.
---
Github user kumarvishal09 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2588#discussion_r208161675

--- Diff: processing/src/main/java/org/apache/carbondata/processing/loading/sort/unsafe/merger/UnsafeIntermediateMerger.java ---
@@ -111,18 +108,15 @@ public void addFileToMerge(File sortTempFile) {
   }

   public void startFileMergingIfPossible() {
-    File[] fileList = null;
-    synchronized (lockObject) {
-      if (procFiles.size() >= parameters.getNumberOfIntermediateFileToBeMerged()) {
+    File[] fileList;
+    if (procFiles.size() >= parameters.getNumberOfIntermediateFileToBeMerged()) {
+      synchronized (lockObject) {
         fileList = procFiles.toArray(new File[procFiles.size()]);
--- End diff --

When the size is passed, ArrayList copies the data directly from its internal array into the passed array; when the passed array is smaller than the list, it creates another array and then copies. So passing the size is better.
---
Github user kumarvishal09 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2588#discussion_r208198390

--- Diff: processing/src/main/java/org/apache/carbondata/processing/loading/sort/unsafe/merger/UnsafeIntermediateMerger.java ---
@@ -111,18 +108,15 @@ public void addFileToMerge(File sortTempFile) {
   }

   public void startFileMergingIfPossible() {
-    File[] fileList = null;
-    synchronized (lockObject) {
-      if (procFiles.size() >= parameters.getNumberOfIntermediateFileToBeMerged()) {
+    File[] fileList;
+    if (procFiles.size() >= parameters.getNumberOfIntermediateFileToBeMerged()) {
--- End diff --

@mohammadshahidkhan Yes, you are right, but in this case UnsafeSortDataRow processing will be slower, as it reads, sorts, and writes, so the chance of the above condition occurring is negligible; because of this, double-checked locking is not added here.
---