GitHub user ravipesala opened a pull request:
https://github.com/apache/carbondata/pull/1748

[CARBONDATA-1967] Fix autocompaction and auto merge index in partition tables

Problem: Auto compaction does not work for partition tables, and index files are always merged even when the merge index option is configured as false.

Solution: Trigger auto compaction after partition loading finishes, and check the merge index configuration before merging index files.

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:

 - [ ] Any interfaces changed?
 - [ ] Any backward compatibility impacted?
 - [ ] Document update required?
 - [ ] Testing done
       Please provide details on
       - Whether new unit test cases have been added or why no new tests are required?
       - How it is tested? Please attach test report.
       - Is it a performance related change? Please attach the performance test report.
       - Any additional information to help reviewers in testing this change.
 - [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/ravipesala/incubator-carbondata enable-mergeindex-autocompaction-partition

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/1748.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #1748

----

commit e88f9ced078c1243a89181d131f72b75d2eb2584
Author: ravipesala <ravi.pesala@...>
Date: 2018-01-02T12:56:15Z

    Fix autocompaction and auto merge index in partition tables

----

---
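A minimal Scala sketch of the two fixes described above, assuming hypothetical stand-ins (LoadContext, mergeIndexFiles, triggerCompaction) for the real load model, the index-merge step, and the CarbonDataRDDFactory.handleSegmentMerging call; this is not the CarbonData implementation itself:

    // Minimal sketch, not CarbonData source; all names here are hypothetical stand-ins.
    object PartitionLoadPostActions {

      // Assumed flags read from the load model / table properties.
      final case class LoadContext(mergeIndexEnabled: Boolean, autoCompactionEnabled: Boolean)

      def afterPartitionLoad(
          ctx: LoadContext,
          mergeIndexFiles: () => Unit,
          triggerCompaction: () => Unit): Unit = {
        // Fix 1: merge index files only when the merge-index property is enabled,
        // instead of merging unconditionally.
        if (ctx.mergeIndexEnabled) {
          mergeIndexFiles()
        }
        // Fix 2: trigger auto compaction once the partition load has finished.
        if (ctx.autoCompactionEnabled) {
          triggerCompaction()
        }
      }
    }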
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1748

Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2483/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1748

Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1259/

---
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1748#discussion_r159367019

--- Diff: core/src/main/java/org/apache/carbondata/core/util/path/CarbonTablePath.java ---

@@ -110,6 +110,20 @@ public static boolean isCarbonDataFile(String fileNameWithPath) {
     return false;
   }

+  /**
+   * check if it is carbon partitionmap file matching extension

--- End diff --

change to

> Return true if the `fileNameWithPath` ends with partition map file extension name

remove @param and @return

---
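For illustration, a minimal Scala sketch of the kind of check the new method performs: return true when the path ends with the partition map file extension. The object name and extension string here are assumptions, not the CarbonTablePath code:

    // Minimal sketch; the extension value is an assumption used only for illustration.
    object PartitionMapFileCheck {
      private val PartitionMapExt = ".partitionmap" // assumed partition map file extension

      // Return true if `fileNameWithPath` ends with the partition map file extension.
      def isPartitionMapFile(fileNameWithPath: String): Boolean =
        fileNameWithPath != null && fileNameWithPath.endsWith(PartitionMapExt)
    }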
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1748#discussion_r159367112

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonLoadDataCommand.scala ---

@@ -373,7 +373,11 @@ case class CarbonLoadDataCommand(
     if (carbonTable.isHivePartitionTable) {
       try {
-        loadDataWithPartition(sparkSession, carbonLoadModel, hadoopConf, loadDataFrame)
+        loadDataWithPartition(
+          sparkSession,
+          carbonLoadModel,
+          hadoopConf,
+          loadDataFrame, operationContext)

--- End diff --

move last parameter to next line

---
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1748#discussion_r159367225

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonLoadDataCommand.scala ---

@@ -457,7 +466,8 @@ case class CarbonLoadDataCommand(
   private def loadDataWithPartition(sparkSession: SparkSession,
       carbonLoadModel: CarbonLoadModel,
       hadoopConf: Configuration,
-      dataFrame: Option[DataFrame]) = {
+      dataFrame: Option[DataFrame],
+      operationContext: OperationContext) = {

--- End diff --

either delete line 460 to 464, or add the description for them

---
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1748#discussion_r159367283

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonLoadDataCommand.scala ---

@@ -640,6 +650,18 @@ case class CarbonLoadDataCommand(
       } else {
         Dataset.ofRows(sparkSession, convertedPlan)
       }
+      try {
+        // Trigger auto compaction
+        CarbonDataRDDFactory.handleSegmentMerging(
+          sparkSession.sqlContext,
+          carbonLoadModel,
+          table,
+          operationContext)
+      } catch {
+        case e: Exception =>
+          throw new Exception(
+            "Dataload is success. Auto-Compaction has failed. Please check logs.")

--- End diff --

include the `e` in the constructor of Exception

---
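The suggestion is standard exception chaining: pass the caught exception as the cause so its stack trace is preserved. A minimal Scala sketch, with triggerAutoCompaction as a hypothetical stand-in for the handleSegmentMerging call shown in the diff:

    // Minimal sketch of the suggested change; triggerAutoCompaction is a
    // hypothetical stand-in for CarbonDataRDDFactory.handleSegmentMerging(...).
    object CompactionErrorHandling {
      def triggerAutoCompaction(): Unit =
        throw new RuntimeException("simulated compaction failure")

      def runPostLoad(): Unit = {
        try {
          triggerAutoCompaction()
        } catch {
          case e: Exception =>
            // Chain `e` as the cause so the original stack trace is not lost.
            throw new Exception(
              "Dataload is success. Auto-Compaction has failed. Please check logs.", e)
        }
      }
    }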
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1748#discussion_r159470470

--- Diff: core/src/main/java/org/apache/carbondata/core/util/path/CarbonTablePath.java ---

@@ -110,6 +110,20 @@ public static boolean isCarbonDataFile(String fileNameWithPath) {
     return false;
   }

+  /**
+   * check if it is carbon partitionmap file matching extension

--- End diff --

ok

---
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1748#discussion_r159470616

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonLoadDataCommand.scala ---

@@ -373,7 +373,11 @@ case class CarbonLoadDataCommand(
     if (carbonTable.isHivePartitionTable) {
       try {
-        loadDataWithPartition(sparkSession, carbonLoadModel, hadoopConf, loadDataFrame)
+        loadDataWithPartition(
+          sparkSession,
+          carbonLoadModel,
+          hadoopConf,
+          loadDataFrame, operationContext)

--- End diff --

ok

---
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1748#discussion_r159470717

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonLoadDataCommand.scala ---

@@ -457,7 +466,8 @@ case class CarbonLoadDataCommand(
   private def loadDataWithPartition(sparkSession: SparkSession,
       carbonLoadModel: CarbonLoadModel,
       hadoopConf: Configuration,
-      dataFrame: Option[DataFrame]) = {
+      dataFrame: Option[DataFrame],
+      operationContext: OperationContext) = {

--- End diff --

ok, removed

---
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1748#discussion_r159470910

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonLoadDataCommand.scala ---

@@ -640,6 +650,18 @@ case class CarbonLoadDataCommand(
       } else {
         Dataset.ofRows(sparkSession, convertedPlan)
       }
+      try {
+        // Trigger auto compaction
+        CarbonDataRDDFactory.handleSegmentMerging(
+          sparkSession.sqlContext,
+          carbonLoadModel,
+          table,
+          operationContext)
+      } catch {
+        case e: Exception =>
+          throw new Exception(
+            "Dataload is success. Auto-Compaction has failed. Please check logs.")

--- End diff --

ok

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1748

Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2526/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1748

Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1302/

---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1748

SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2692/

---