GitHub user kunal642 opened a pull request:
https://github.com/apache/carbondata/pull/3046

[WIP] Added check to start fallback based on size

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:

 - [ ] Any interfaces changed?
 - [ ] Any backward compatibility impacted?
 - [ ] Document update required?
 - [ ] Testing done
       Please provide details on
       - Whether new unit test cases have been added or why no new tests are required?
       - How it is tested? Please attach test report.
       - Is it a performance related change? Please attach the performance test report.
       - Any additional information to help reviewers in testing this change.
 - [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kunal642/carbondata oom_fix

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/3046.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #3046

----

commit cfb870789eaa93ebe03393323456cdf2ed4daf17
Author: kunal642 <kunalkapoor642@...>
Date:   2019-01-02T10:09:20Z

    fixed

----

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/3046

Build Failed with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/2318/

---
In reply to this post by qiuchenjian-2
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/3046

Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/2113/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/3046

Build Failed with Spark 2.3.2, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/10367/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/3046

Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/2116/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/3046

Build Failed with Spark 2.3.2, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/10370/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/3046

Build Failed with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/2321/

---
Github user kumarvishal09 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/3046#discussion_r244708172

--- Diff: core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java ---
@@ -2076,4 +2076,15 @@ private CarbonCommonConstants() {
    */
   public static final String CARBON_QUERY_DATAMAP_BLOOM_CACHE_SIZE_DEFAULT_VAL = "512";

+  public static final String CARBON_LOCAL_DICTIONARY_MAX_THRESHOLD =
--- End diff --

Add a comment noting that this property is currently internal.

---
Github user qiuchenjian commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/3046#discussion_r244708339

--- Diff: processing/src/main/java/org/apache/carbondata/processing/store/writer/AbstractFactDataWriter.java ---
@@ -205,8 +205,10 @@ public AbstractFactDataWriter(CarbonFactDataHandlerModel model) {
     if (model.getNumberOfCores() > 1) {
       numberOfCores = model.getNumberOfCores() / 2;
     }
-    fallbackExecutorService = Executors.newFixedThreadPool(numberOfCores, new CarbonThreadFactory(
-        "FallbackPool:" + model.getTableName() + ", range: " + model.getBucketId()));
+    fallbackExecutorService = model.getFallBackExecutorService() != null ?
+        model.getFallBackExecutorService() :
+        Executors.newFixedThreadPool(numberOfCores, new CarbonThreadFactory(
--- End diff --

It would be better to use `public CarbonThreadFactory(String name, boolean withTime)` so that different threads get different names.

---
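The reviewer's suggestion — distinguish threads by appending a time component to the pool name — can be sketched with a plain `java.util.concurrent.ThreadFactory`. This is an illustrative stand-in, not CarbonData's actual `CarbonThreadFactory`; the class name and the timestamp-plus-counter scheme are assumptions.

```java
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: when withTime is set, each created thread gets a unique
// name built from the base name, a creation timestamp, and a counter.
public class TimestampedThreadFactory implements ThreadFactory {
  private final String baseName;
  private final boolean withTime;
  private final AtomicInteger counter = new AtomicInteger(0);

  public TimestampedThreadFactory(String baseName, boolean withTime) {
    this.baseName = baseName;
    this.withTime = withTime;
  }

  @Override
  public Thread newThread(Runnable r) {
    String name = withTime
        ? baseName + "_" + System.currentTimeMillis() + "_" + counter.incrementAndGet()
        : baseName;
    return new Thread(r, name);
  }

  public static void main(String[] args) {
    TimestampedThreadFactory f = new TimestampedThreadFactory("FallbackPool:tbl", true);
    // Two threads from the same factory get distinct names.
    System.out.println(f.newThread(() -> {}).getName());
    System.out.println(f.newThread(() -> {}).getName());
  }
}
```

Passing such a factory to `Executors.newFixedThreadPool` would make fallback pools from different writers distinguishable in thread dumps, which is the point of the review comment.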
Github user kumarvishal09 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/3046#discussion_r244708432

--- Diff: core/src/main/java/org/apache/carbondata/core/util/CarbonProperties.java ---
@@ -1491,6 +1491,16 @@ private void validateSortMemorySpillPercentage() {
     }
   }

+  public int getMaxDictionaryThreshold() {
+    int localDictionaryMaxThreshold = Integer.parseInt(carbonProperties
--- End diff --

Add min/max validation.

---
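A minimal sketch of the min/max validation requested above. The bounds, default, and method shape here are illustrative assumptions — the real values would live in `CarbonCommonConstants` and the real method reads from `carbonProperties`.

```java
// Hypothetical sketch: validate a configured threshold, falling back to an
// assumed default when the value is non-numeric or outside assumed bounds.
public class ThresholdValidator {
  static final int MIN_THRESHOLD = 1000;      // assumed lower bound
  static final int MAX_THRESHOLD = 100000;    // assumed upper bound
  static final int DEFAULT_THRESHOLD = 10000; // assumed default

  public static int getMaxDictionaryThreshold(String configured) {
    try {
      int value = Integer.parseInt(configured);
      if (value < MIN_THRESHOLD || value > MAX_THRESHOLD) {
        return DEFAULT_THRESHOLD;
      }
      return value;
    } catch (NumberFormatException e) {
      return DEFAULT_THRESHOLD;
    }
  }

  public static void main(String[] args) {
    System.out.println(getMaxDictionaryThreshold("5000"));  // in range, kept
    System.out.println(getMaxDictionaryThreshold("0"));     // below min, default
    System.out.println(getMaxDictionaryThreshold("abc"));   // unparsable, default
  }
}
```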
Github user kumarvishal09 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/3046#discussion_r244708667

--- Diff: processing/src/main/java/org/apache/carbondata/processing/loading/steps/CarbonRowDataWriterProcessorStepImpl.java ---
@@ -128,16 +128,18 @@ public CarbonRowDataWriterProcessorStepImpl(CarbonDataLoadConfiguration configur
     CarbonTimeStatisticsFactory.getLoadStatisticsInstance()
         .recordDictionaryValue2MdkAdd2FileTime(CarbonTablePath.DEPRECATED_PARTITION_ID,
             System.currentTimeMillis());
-
+    ExecutorService fallBackExecutorService =
+        Executors.newFixedThreadPool(1, new CarbonThreadFactory("FallBackPool:"));
--- End diff --

The pool size cannot always be one; please refer to org/apache/carbondata/processing/store/writer/AbstractFactDataWriter.java:203.

---
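The sizing rule the reviewer points to in AbstractFactDataWriter (visible in the earlier diff on this thread: halve the configured core count when more than one core is available, otherwise use one thread) can be captured as a small helper. The method name is illustrative, not CarbonData API.

```java
// Hypothetical sketch of core-based fallback pool sizing, mirroring the
// numberOfCores logic shown in the AbstractFactDataWriter diff above.
public class FallbackPoolSizing {
  public static int fallbackPoolSize(int numberOfCores) {
    // With more than one core configured, dedicate half of them to the
    // fallback pool; otherwise fall back to a single thread.
    return numberOfCores > 1 ? numberOfCores / 2 : 1;
  }

  public static void main(String[] args) {
    System.out.println(fallbackPoolSize(8)); // half the cores
    System.out.println(fallbackPoolSize(1)); // single thread
  }
}
```

Using this instead of a hard-coded `Executors.newFixedThreadPool(1, ...)` keeps the writer step consistent with the writer's own sizing.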
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/3046

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/2119/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/3046

Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/2326/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/3046

Build Success with Spark 2.3.2, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/10374/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/3046

Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/2126/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/3046

Build Failed with Spark 2.3.2, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/10380/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/3046

Build Failed with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/2332/

---
Github user xuchuanyin commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/3046#discussion_r245510098

--- Diff: core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java ---
@@ -2076,4 +2076,15 @@ private CarbonCommonConstants() {
    */
   public static final String CARBON_QUERY_DATAMAP_BLOOM_CACHE_SIZE_DEFAULT_VAL = "512";

+  public static final String CARBON_LOCAL_DICTIONARY_MAX_THRESHOLD =
--- End diff --

I think we should improve this variable name. The first time I saw it, I thought it duplicated another local-dictionary threshold: one is number based, the other is storage-size based. Please take care of readability.

---
Github user xuchuanyin commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/3046#discussion_r245510146

--- Diff: core/src/main/java/org/apache/carbondata/core/datastore/page/DecoderBasedFallbackEncoder.java ---
@@ -57,10 +57,7 @@ public DecoderBasedFallbackEncoder(EncodedColumnPage encodedColumnPage, int page
     int pageSize = encodedColumnPage.getActualPage().getPageSize();
     int offset = 0;
-    int[] reverseInvertedIndex = new int[pageSize];
--- End diff --

What are these changes for?

---
Github user xuchuanyin commented on the issue:
https://github.com/apache/carbondata/pull/3046

Hi @kunal642, in your PR the threshold size for local dictionary storage is specified by the system (it may later be made user-configurable). But this raises an obvious problem: how can the user know the exact value to set?

I've read that Parquet compares the dictionary-encoded size with the plain-encoded size, and only uses dictionary encoding when the dictionary-encoded size is smaller; otherwise it falls back. Can the current implementation handle this scenario well?

---
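The Parquet-style policy @xuchuanyin describes — fall back from dictionary encoding whenever the dictionary-encoded output is not smaller than the plain-encoded output — reduces to a size comparison per page. This sketch is illustrative; the names are assumptions and it is not CarbonData's or Parquet's actual code.

```java
// Hypothetical sketch of a size-based dictionary fallback decision:
// keep dictionary encoding only while it actually saves space.
public class DictionaryFallbackDecision {
  /** Returns true when dictionary encoding should be abandoned, i.e. the
   *  dictionary-encoded page (including the dictionary itself) is at least
   *  as large as the plain-encoded page. */
  public static boolean shouldFallBack(long dictionaryEncodedBytes, long plainEncodedBytes) {
    return dictionaryEncodedBytes >= plainEncodedBytes;
  }

  public static void main(String[] args) {
    System.out.println(shouldFallBack(2048, 1024)); // dictionary larger: fall back
    System.out.println(shouldFallBack(512, 1024));  // dictionary smaller: keep it
  }
}
```

Compared with a fixed system-chosen threshold, this comparison adapts per page and needs no tuning by the user, which is the point of the question above.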