Posted by GitBox on Oct 27, 2020; 10:34am
URL: http://apache-carbondata-dev-mailing-list-archive.168.s1.nabble.com/GitHub-carbondata-maheshrajus-opened-a-new-pull-request-3912-WIP-Global-sort-partitions-should-be-dey-tp99835p102898.html
akashrn5 commented on a change in pull request #3912:
URL: https://github.com/apache/carbondata/pull/3912#discussion_r503187128

##########
File path: integration/spark/src/main/scala/org/apache/carbondata/spark/load/DataLoadProcessBuilderOnSpark.scala
##########
@@ -143,10 +143,18 @@ object DataLoadProcessBuilderOnSpark {
     var numPartitions = CarbonDataProcessorUtil.getGlobalSortPartitions(
       configuration.getDataLoadProperty(CarbonCommonConstants.LOAD_GLOBAL_SORT_PARTITIONS))
-    if (numPartitions <= 0) {
-      numPartitions = convertRDD.partitions.length
+
+    // if the user does not specify numPartitions and it is not set in the config, calculate it dynamically
+    if (numPartitions == 0) {
+      // get the size in bytes and convert it to MB
+      val sizeOfDataFrame = SizeEstimator.estimate(inputRDD) / 1000000
+      // the data frame size in MB must fit in an Int
+      numPartitions = sizeOfDataFrame.toInt / inputRDD.getNumPartitions
Review comment:
I think here you should derive the number of partitions from the total load size and the target partition size, not from the existing partition count. Please check again and handle this correctly.
@QiangCai please have a look once
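For illustration only, below is a minimal Scala sketch of the approach suggested above: derive the partition count from the estimated total load size and a target per-partition size, rather than from the existing partition count. The helper name, the 256 MB target, and the single-row size sampling are assumptions for this sketch, not code from the PR.

import org.apache.spark.rdd.RDD
import org.apache.spark.util.SizeEstimator

object GlobalSortPartitionSketch {

  // Hypothetical target size per global-sort partition (not a value from the PR).
  private val targetPartitionSizeInBytes: Long = 256L * 1024 * 1024

  // Estimate the number of global-sort partitions as the estimated total
  // input size divided by the target per-partition size. The total size is
  // approximated by measuring one sampled row with SizeEstimator and
  // multiplying by the row count, which keeps the estimate cheap.
  def estimateNumPartitions[T](inputRDD: RDD[T]): Int = {
    val rowCount = inputRDD.count()
    if (rowCount == 0) {
      1
    } else {
      val sampleRowSizeInBytes = SizeEstimator.estimate(
        inputRDD.take(1).head.asInstanceOf[AnyRef])
      val totalSizeInBytes = sampleRowSizeInBytes * rowCount
      val partitions = totalSizeInBytes / targetPartitionSizeInBytes
      // Clamp to at least one partition and keep the result within Int range.
      math.max(1L, math.min(partitions, Int.MaxValue.toLong)).toInt
    }
  }
}

Dividing the estimated total size by a size target keeps each global-sort partition near a predictable size regardless of how the input RDD happened to be split, which is the point of the review comment.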