GitHub user xuchuanyin opened a pull request:
https://github.com/apache/carbondata/pull/2056

[CARBONDATA-2238][DataLoad] Merge and spill in-memory pages if memory is not enough

Currently in CarbonData, pages are added to memory; if memory is not enough, newly incoming pages are spilled to disk directly. This change merges and spills the in-memory pages to make room for the newly incoming pages. As a result, CarbonData spills less data than before, spills fewer but bigger files instead of many small ones, and performs the merge sort of the pages in memory rather than over spilled files.

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:

- [x] Any interfaces changed? `NO`
- [x] Any backward compatibility impacted? `NO`
- [x] Document update required? `NO`
- [x] Testing done. Please provide details on:
  - Whether new unit test cases have been added or why no new tests are required? `NO`
  - How it is tested? Please attach test report. `Tested in local machine`
  - Is it a performance related change? Please attach the performance test report. `Performance should be enhanced: we reduce the frequency of spilling pages to disk and increase the data size of each spill. Also, the merge sort of pages is memory based instead of file based, so performance will improve.`
  - Any additional information to help reviewers in testing this change.
- [x] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/xuchuanyin/carbondata 0313_spill_inmemory_pages

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/2056.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2056

----

commit da3329ca60250e07b568f905ba5cdaa115f8d522
Author: xuchuanyin <xuchuanyin@...>
Date: 2018-03-12T12:25:17Z

    Fix exhausted memory problem during unsafe data loading

    If the size of an unsafe row page equals the size of the working memory, the last page will exhaust the working memory and CarbonData will hit a 'not enough memory' error when converting data to columnar format. All unsafe pages should be spilled to disk or moved to unsafe sort memory instead of being kept in unsafe working memory.

commit 16b376eb8a077440db884649744b6cd22bba95d5
Author: xuchuanyin <xuchuanyin@...>
Date: 2018-03-13T02:31:36Z

    Merge and spill in-memory pages if memory is not enough

    Currently in CarbonData, pages are added to memory; if memory is not enough, newly incoming pages are spilled to disk directly. This change merges and spills the in-memory pages to make room for the newly incoming pages. As a result, CarbonData spills less than before and spills bigger files instead of smaller files.

----
---
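[Editor's note] The merge-and-spill behavior described above can be pictured with a minimal sketch. This is an illustration only, not CarbonData's actual implementation: the class, field, and method names below (`MergeAndSpillSketch`, `addPage`, `mergeSortAndSpill`) are invented for the example.

    // Minimal sketch of merge-and-spill, assuming a fixed sort-memory budget.
    // When a new page no longer fits, all resident pages are merge-sorted and
    // written out as ONE large spill file (new behavior), instead of spilling
    // each incoming page as its own small file (old behavior).
    import java.util.ArrayList;
    import java.util.List;

    final class MergeAndSpillSketch {
      private final long sortMemoryLimit;
      private long usedSortMemory = 0;
      private final List<int[]> inMemoryPages = new ArrayList<>(); // stand-in page handles

      MergeAndSpillSketch(long sortMemoryLimit) {
        this.sortMemoryLimit = sortMemoryLimit;
      }

      void addPage(int[] page, long pageSize) {
        if (usedSortMemory + pageSize > sortMemoryLimit) {
          mergeSortAndSpill(inMemoryPages); // one big, already-merged spill file
          inMemoryPages.clear();
          usedSortMemory = 0;
        }
        inMemoryPages.add(page);
        usedSortMemory += pageSize;
      }

      private void mergeSortAndSpill(List<int[]> pages) {
        // k-way merge of the sorted pages, written sequentially to a spill file;
        // omitted here because only the control flow matters for the sketch.
      }
    }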
Github user xuchuanyin commented on the issue:
https://github.com/apache/carbondata/pull/2056

Notice: This PR relies on #2052
---
Github user xuchuanyin commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2056#discussion_r174005326

--- Diff: processing/src/main/java/org/apache/carbondata/processing/loading/sort/unsafe/UnsafeSortDataRows.java ---

    @@ -141,13 +141,18 @@ public void initialize() throws MemoryException {
         semaphore = new Semaphore(parameters.getNumberOfCores());
       }

    -  private UnsafeCarbonRowPage createUnsafeRowPage() throws MemoryException {
    +  private UnsafeCarbonRowPage createUnsafeRowPage()
    +      throws MemoryException, CarbonSortKeyAndGroupByException {
         MemoryBlock baseBlock =
             UnsafeMemoryManager.allocateMemoryWithRetry(this.taskId, inMemoryChunkSize);
         boolean isMemoryAvailable =
             UnsafeSortMemoryManager.INSTANCE.isMemoryAvailable(baseBlock.size());
         if (isMemoryAvailable) {
           UnsafeSortMemoryManager.INSTANCE.allocateDummyMemory(baseBlock.size());
    +    } else {
    +      LOGGER.info("trigger in-memory merge and spill for table " + parameters.getTableName());

--- End diff --

FYI: The main entry point is here
---
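[Editor's note] The diff above ends at the log statement, so the actual spill call is not visible. Under the assumption, taken from the PR description, that the else branch goes on to merge the pages already held in unsafe sort memory, spill them, and then reserve room for the new page, the control flow could look like the self-contained sketch below. All types and helper names here are stand-ins, not CarbonData's real APIs.

    // Sketch of createUnsafeRowPage()'s decision, with stubbed dependencies.
    // The mergeAndSpillInMemoryPages() call is an assumption from the PR text;
    // only the log line is visible in the diff itself.
    final class CreateRowPageSketch {
      interface SortMemory {
        boolean isMemoryAvailable(long size);
        void allocateDummyMemory(long size); // reserve room for a later move into sort memory
      }

      private final SortMemory sortMemory;

      CreateRowPageSketch(SortMemory sortMemory) {
        this.sortMemory = sortMemory;
      }

      long createRowPage(long pageSize) {
        if (sortMemory.isMemoryAvailable(pageSize)) {
          // Common path: sort memory has room, so reserve it up front.
          sortMemory.allocateDummyMemory(pageSize);
        } else {
          // Path added by this PR: free sort memory by merge-spilling the
          // resident pages as one file, then reserve room for this page.
          mergeAndSpillInMemoryPages(); // hypothetical helper
          sortMemory.allocateDummyMemory(pageSize);
        }
        return allocateWorkingMemoryPage(pageSize); // hypothetical helper
      }

      private void mergeAndSpillInMemoryPages() { /* k-way merge + single spill file */ }

      private long allocateWorkingMemoryPage(long size) { return size; }
    }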
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2056

Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2986/
---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2056

Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4230/
---
Github user xuchuanyin commented on the issue:
https://github.com/apache/carbondata/pull/2056

retest this please
---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2056

Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4232/
---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2056

Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2988/
---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2056

SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3873/
---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2056

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4237/
---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2056

Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2993/
---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2056

SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3880/
---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2056

Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/3016/
---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2056

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4260/
---
Github user xuchuanyin commented on the issue:
https://github.com/apache/carbondata/pull/2056

retest this please
---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2056

Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/3113/
---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2056

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4347/
---
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2056#discussion_r174705630

--- Diff: processing/src/main/java/org/apache/carbondata/processing/loading/sort/unsafe/UnsafeSortDataRows.java ---

    @@ -141,13 +141,18 @@ public void initialize() throws MemoryException {
         semaphore = new Semaphore(parameters.getNumberOfCores());
       }

    -  private UnsafeCarbonRowPage createUnsafeRowPage() throws MemoryException {
    +  private UnsafeCarbonRowPage createUnsafeRowPage()
    +      throws MemoryException, CarbonSortKeyAndGroupByException {
         MemoryBlock baseBlock =
             UnsafeMemoryManager.allocateMemoryWithRetry(this.taskId, inMemoryChunkSize);
         boolean isMemoryAvailable =
             UnsafeSortMemoryManager.INSTANCE.isMemoryAvailable(baseBlock.size());
         if (isMemoryAvailable) {
           UnsafeSortMemoryManager.INSTANCE.allocateDummyMemory(baseBlock.size());
    +    } else {
    +      LOGGER.info("trigger in-memory merge and spill for table " + parameters.getTableName());

--- End diff --

I think this approach is good for the case where the sort memory size is much smaller than the input data size. For example, if sort memory is 10GB and input data size is 25GB, it works well. But if sort memory is 10GB and input size is 15GB, the new approach will spill more data to disk, so the current approach is better in that case. So why not let the user configure how much data to spill?
---
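[Editor's note] The reviewer's suggestion, a user-configurable spill amount, could look something like the hedged sketch below. The property name and default are invented for illustration; this PR does not add such a property, and nothing here reflects CarbonData's real configuration API.

    // Hypothetical sketch of a user-configurable spill cap, read from a system
    // property for simplicity. Invalid values fall back to spilling everything,
    // which is the behavior this PR actually implements.
    final class SpillConfigSketch {
      static final String SPILL_PERCENTAGE_PROP =
          "carbon.load.sortmemory.spill.percentage"; // invented name, not a real property here
      static final int DEFAULT_SPILL_PERCENTAGE = 100; // spill all resident pages, as in this PR

      static long bytesToSpill(long usedSortMemoryBytes) {
        int pct = Integer.getInteger(SPILL_PERCENTAGE_PROP, DEFAULT_SPILL_PERCENTAGE);
        if (pct < 1 || pct > 100) {
          pct = DEFAULT_SPILL_PERCENTAGE; // guard against unusable user input
        }
        return usedSortMemoryBytes * pct / 100;
      }
    }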
Github user xuchuanyin commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2056#discussion_r177662316

--- Diff: processing/src/main/java/org/apache/carbondata/processing/loading/sort/unsafe/UnsafeSortDataRows.java ---

    @@ -141,13 +141,18 @@ public void initialize() throws MemoryException {
         semaphore = new Semaphore(parameters.getNumberOfCores());
       }

    -  private UnsafeCarbonRowPage createUnsafeRowPage() throws MemoryException {
    +  private UnsafeCarbonRowPage createUnsafeRowPage()
    +      throws MemoryException, CarbonSortKeyAndGroupByException {
         MemoryBlock baseBlock =
             UnsafeMemoryManager.allocateMemoryWithRetry(this.taskId, inMemoryChunkSize);
         boolean isMemoryAvailable =
             UnsafeSortMemoryManager.INSTANCE.isMemoryAvailable(baseBlock.size());
         if (isMemoryAvailable) {
           UnsafeSortMemoryManager.INSTANCE.allocateDummyMemory(baseBlock.size());
    +    } else {
    +      LOGGER.info("trigger in-memory merge and spill for table " + parameters.getTableName());

--- End diff --

Oh, I think such a configuration would be difficult to understand. If the user configured 10GB, the spilled data files might be smaller than that, since the files are compressed. Besides, we would have to validate the configuration to make sure it is less than the sort memory. Most importantly, the sort memory is shared by all concurrent data loads, so the configuration would not tell the user how it actually affects behavior. I prefer to keep it simple for now, and the result is quite clear: in the worst scenario, we only have to spill an extra #sizeOfSortMemory of data to disk compared with the previous implementation. And in the general scenario, we spill bigger files to disk instead of many smaller files.
---
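[Editor's note] To make the worst-case bound concrete, here is a back-of-envelope reading of the numbers from the earlier review comment, under the simplifying assumptions that spilled data is uncompressed and that memory fills exactly once before merge-spilling: with 10GB of sort memory and 15GB of input, the previous implementation keeps 10GB in memory and spills the remaining 5GB directly; the new implementation merge-spills the full 10GB once memory fills and keeps the final 5GB in memory, spilling 10GB in total. The extra spill is 5GB, bounded by #sizeOfSortMemory as claimed above, and it arrives as one large merged file rather than a series of small ones.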
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2056

Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/3501/
---