Posted by
GitBox on
Mar 18, 2021; 5:40pm
URL: http://apache-carbondata-dev-mailing-list-archive.168.s1.nabble.com/GitHub-carbondata-ShreelekhyaG-opened-a-new-pull-request-4107-CARBONDATA-4149-Query-with-SI-after-ads-tp106804p107036.html
Indhumathi27 commented on a change in pull request #4107:
URL: https://github.com/apache/carbondata/pull/4107#discussion_r597099633

##########
File path: integration/spark/src/main/scala/org/apache/carbondata/spark/rdd/CarbonTableCompactor.scala
##########
@@ -276,7 +290,25 @@ class CarbonTableCompactor(
segmentMetaDataAccumulator)
} else {
if (mergeRDD != null) {
- mergeRDD.collect
+ val result = mergeRDD.collect
Review comment:
The current code does not handle multiple partitions properly: the Add/Drop partition commands are invoked once per partition. Please collect all partition specs first and issue a single drop and a single add, as below:
if (!updatePartitionSpecs.isEmpty) {
  val tableIdentifier = new TableIdentifier(carbonTable.getTableName,
    Some(carbonTable.getDatabaseName))
  // To update partitionSpec in hive metastore, drop and re-add with the latest path.
  val partitionSpecList: util.List[TablePartitionSpec] =
    new util.ArrayList[TablePartitionSpec]()
  updatePartitionSpecs.asScala.foreach { partitionSpec =>
    val spec = PartitioningUtils.parsePathFragment(
      String.join(CarbonCommonConstants.FILE_SEPARATOR, partitionSpec.getPartitions))
    partitionSpecList.add(spec)
  }
  AlterTableDropPartitionCommand(
    tableIdentifier,
    partitionSpecList.asScala,
    true, false, true).run(sqlContext.sparkSession)
  AlterTableAddPartitionCommand(tableIdentifier,
    partitionSpecList.asScala.map(p => (p, None)), false).run(sqlContext.sparkSession)
}
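For context, the suggestion above joins each partition's `key=value` parts into a path fragment (e.g. `country=US/state=CA`) and parses it back into a `TablePartitionSpec`, i.e. a `Map[String, String]`. A minimal, Spark-independent sketch of that round trip (the object and method names here are hypothetical stand-ins for `PartitioningUtils.parsePathFragment`, and real partition paths would also need URL-escaping handled):

```scala
// Sketch only: mimics parsing a partition path fragment such as
// "country=US/state=CA" into a partition spec map, and joining it back.
object PartitionSpecSketch {
  // Split the fragment on "/" and each part on the first "=" into a key/value pair.
  def parsePathFragment(fragment: String): Map[String, String] =
    fragment.split("/").map { part =>
      val Array(k, v) = part.split("=", 2)
      k -> v
    }.toMap

  // Inverse direction: rebuild the fragment from an ordered list of parts.
  def toPathFragment(parts: Seq[(String, String)]): String =
    parts.map { case (k, v) => s"$k=$v" }.mkString("/")

  def main(args: Array[String]): Unit = {
    val spec = parsePathFragment("country=US/state=CA")
    println(spec("country"))                                  // US
    println(toPathFragment(Seq("country" -> "US", "state" -> "CA")))
  }
}
```

Collecting every parsed spec into one list, then running a single `AlterTableDropPartitionCommand` followed by a single `AlterTableAddPartitionCommand`, avoids one metastore round trip per partition.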
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[hidden email]