GitHub user xuchuanyin opened a pull request:
https://github.com/apache/carbondata/pull/2052

[CARBONDATA-2246][DataLoad] Fix exhausted memory problem during unsafe data loading

If the size of an unsafe row page equals that of the working memory, the last page will exhaust the working memory and CarbonData will fail with a 'not enough memory' error when converting data to columnar format. All unsafe pages should be spilled to disk or moved to unsafe sort memory instead of being kept in unsafe working memory.

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:

- [x] Any interfaces changed? `NO`
- [x] Any backward compatibility impacted? `NO`
- [x] Document update required? `NO`
- [x] Testing done. Please provide details on:
  - Whether new unit test cases have been added or why no new tests are required? `Tests added`
  - How is it tested? Please attach the test report. `Tested on a local machine`
  - Is it a performance-related change? Please attach the performance test report. `NO`
  - Any additional information to help reviewers in testing this change.
- [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/xuchuanyin/carbondata 0312_bug_unsafe_memory

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/2052.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2052

----

commit da3329ca60250e07b568f905ba5cdaa115f8d522
Author: xuchuanyin <xuchuanyin@...>
Date: 2018-03-12T12:25:17Z

    Fix exhausted memory problem during unsafe data loading

    If the size of an unsafe row page equals that of the working memory, the last page will exhaust the working memory and CarbonData will fail with a 'not enough memory' error when converting data to columnar format. All unsafe pages should be spilled to disk or moved to unsafe sort memory instead of being kept in unsafe working memory.
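The failure mode in the description can be illustrated with a toy model of a page-granular working-memory budget. This is a hypothetical sketch, not CarbonData's actual memory manager; the class, method names, and page-level accounting are all invented for illustration (the real accounting is byte-based).

```java
// Toy model of the bug described above: if the last row page stays resident in
// unsafe working memory, a later allocation (e.g. for columnar conversion)
// fails with "not enough memory". All names here are hypothetical.
public class WorkingMemoryDemo {
    static final int WORKING_MEMORY_PAGES = 4; // total budget, in pages
    static int usedPages = 0;

    // Try to take one page from the working-memory budget.
    static boolean allocatePage() {
        if (usedPages >= WORKING_MEMORY_PAGES) {
            return false; // "not enough memory"
        }
        usedPages++;
        return true;
    }

    // Spilling a page to disk (or moving it to sort memory) returns its budget.
    static void spillPage() {
        usedPages--;
    }

    public static void main(String[] args) {
        // Data loading fills the working memory completely with row pages.
        for (int i = 0; i < WORKING_MEMORY_PAGES; i++) {
            if (!allocatePage()) throw new AssertionError("premature exhaustion");
        }
        // With the last page still resident, columnar conversion cannot allocate.
        if (allocatePage()) throw new AssertionError("expected exhaustion");
        // The fix: spill the last page too, so its budget is available again.
        spillPage();
        if (!allocatePage()) throw new AssertionError("spill should free budget");
        System.out.println("ok");
    }
}
```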
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2052 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4200/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2052 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2955/
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2052 SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3851/
Github user xuchuanyin commented on the issue:
https://github.com/apache/carbondata/pull/2052 retest this please
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2052 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4225/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2052 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2980/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2052 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4229/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2052 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2985/
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2052 SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3872/
Github user xuchuanyin commented on the issue:
https://github.com/apache/carbondata/pull/2052 retest this please
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2052 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4238/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2052 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2994/
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2052#discussion_r174195929

--- Diff: processing/src/main/java/org/apache/carbondata/processing/loading/sort/unsafe/UnsafeSortDataRows.java ---

```
@@ -218,50 +213,45 @@ public void addRow(Object[] row) throws CarbonSortKeyAndGroupByException {
       rowPage.addRow(row, rowBuffer.get());
     } else {
       try {
-        if (enableInMemoryIntermediateMerge) {
-          unsafeInMemoryIntermediateFileMerger.startInmemoryMergingIfPossible();
-        }
-        unsafeInMemoryIntermediateFileMerger.startFileMergingIfPossible();
-        semaphore.acquire();
-        dataSorterAndWriterExecutorService.submit(new DataSorterAndWriter(rowPage));
+        handlePreviousPage();
         rowPage = createUnsafeRowPage();
         rowPage.addRow(row, rowBuffer.get());
       } catch (Exception e) {
         LOGGER.error(
             "exception occurred while trying to acquire a semaphore lock: " + e.getMessage());
         throw new CarbonSortKeyAndGroupByException(e);
       }
-    }
   }

   /**
-   * Below method will be used to start storing process This method will get
-   * all the temp files present in sort temp folder then it will create the
-   * record holder heap and then it will read first record from each file and
-   * initialize the heap
+   * Below method will be used to start sorting process. This method will get
+   * all the temp unsafe pages in memory and all the temp files and try to merge them if possible.
+   * Also, it will spill the pages to disk or add it to unsafe sort memory.
    *
-   * @throws InterruptedException
+   * @throws CarbonSortKeyAndGroupByException if error occurs during in-memory merge
+   * @throws InterruptedException if error occurs during data sort and write
    */
-  public void startSorting() throws InterruptedException {
+  public void startSorting() throws CarbonSortKeyAndGroupByException, InterruptedException {
     LOGGER.info("Unsafe based sorting will be used");
     if (this.rowPage.getUsedSize() > 0) {
-      TimSort<UnsafeCarbonRow, IntPointerBuffer> timSort = new TimSort<>(
-          new UnsafeIntSortDataFormat(rowPage));
-      if (parameters.getNumberOfNoDictSortColumns() > 0) {
-        timSort.sort(rowPage.getBuffer(), 0, rowPage.getBuffer().getActualSize(),
-            new UnsafeRowComparator(rowPage));
-      } else {
-        timSort.sort(rowPage.getBuffer(), 0, rowPage.getBuffer().getActualSize(),
-            new UnsafeRowComparatorForNormalDims(rowPage));
-      }
-      unsafeInMemoryIntermediateFileMerger.addDataChunkToMerge(rowPage);
+      handlePreviousPage();
     } else {
       rowPage.freeMemory();
     }
     startFileBasedMerge();
   }

+  private void handlePreviousPage()
```

--- End diff --

can you provide comment for this function
Github user xuchuanyin commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2052#discussion_r174364965

--- Diff: processing/src/main/java/org/apache/carbondata/processing/loading/sort/unsafe/UnsafeSortDataRows.java ---

```
+  private void handlePreviousPage()
```

--- End diff --

fixed
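For readers following the review above, the shape of the extracted `handlePreviousPage()` helper can be sketched outside CarbonData as a bounded handoff to a background sorter: acquire a semaphore slot, then submit the page for sorting and spilling. This is an illustrative reconstruction based only on the diff; the class name, the `slots` bound, and the release-in-finally placement are assumptions, not the merged code (the real helper also triggers intermediate in-memory and file merges before submitting).

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;

// Illustrative sketch of the handlePreviousPage() pattern from the diff above:
// both call sites (addRow's page-full branch and startSorting's final page)
// funnel the current page through one helper. All names are hypothetical.
public class HandlePreviousPageSketch {
    private final ExecutorService sorterPool = Executors.newFixedThreadPool(2);
    private final Semaphore slots = new Semaphore(2); // bound in-flight pages

    // Hand the just-filled page to a background sorter-and-writer.
    Future<?> handlePreviousPage(Object rowPage) throws InterruptedException {
        slots.acquire(); // back-pressure: wait if too many pages are in flight
        return sorterPool.submit(() -> {
            try {
                // ... sort rowPage, then spill it to disk or sort memory ...
            } finally {
                slots.release(); // free the slot even if sorting failed
            }
        });
    }

    void shutdown() {
        sorterPool.shutdown();
    }

    public static void main(String[] args) throws Exception {
        HandlePreviousPageSketch sketch = new HandlePreviousPageSketch();
        sketch.handlePreviousPage(new Object()).get(); // wait for background sort
        sketch.shutdown();
    }
}
```

Funneling both call sites through one helper means the final page taken by `startSorting()` follows the same spill path as every earlier page, which is exactly what prevents it from lingering in working memory.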
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2052 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/3019/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2052 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4263/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2052 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/3115/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2052 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4349/
Github user xuchuanyin closed the pull request at:
https://github.com/apache/carbondata/pull/2052