GitHub user kumarvishal09 opened a pull request:
https://github.com/apache/incubator-carbondata/pull/517

[CARBONDATA-621] Fixed compaction with multiple blocklet issue

Problem: Compaction fails when a carbondata file contains multiple blocklets and the dictionary column cardinality differs across segments.

Reason: During compaction the dictionary byte values must be re-encoded with the key generator of the latest segment, but CarbonMergerRDD was passing the cardinality of the oldest segment instead of the latest one.

Solution: Pass the latest segment's cardinality.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kumarvishal09/incubator-carbondata CompactionWithMultipleBlockletIssue

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-carbondata/pull/517.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #517

----

commit a9c131c30fcc813f16effc4963e105ac098ae8a4
Author: kumarvishal <[hidden email]>
Date:   2017-01-10T13:37:15Z

    Fixed compaction with multiple blocklet issue

----
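[Editor's note] A minimal, self-contained Scala sketch of the idea described above. SegmentSplit, segmentId, and columnCardinality are illustrative names, not the actual CarbonData classes: the point is only that the key generator used for re-encoding dictionary values during compaction must be sized from the latest segment's cardinality, since newer segments can carry larger dictionaries.

    // Illustrative sketch only: SegmentSplit and its fields are hypothetical,
    // standing in for the per-segment information CarbonMergerRDD works with.
    case class SegmentSplit(segmentId: Int, columnCardinality: Array[Int])

    object LatestSegmentCardinality {
      // The key generator for re-encoding dictionary values during compaction
      // must be sized from the LATEST segment's cardinality; older segments
      // may have smaller dictionaries for the same columns.
      def pickLatestCardinality(splits: Seq[SegmentSplit]): Array[Int] = {
        require(splits.nonEmpty, "no input splits for compaction")
        splits.maxBy(_.segmentId).columnCardinality
      }

      def main(args: Array[String]): Unit = {
        val splits = Seq(
          SegmentSplit(0, Array(10, 4)),  // oldest segment, smallest dictionaries
          SegmentSplit(1, Array(12, 5)),
          SegmentSplit(2, Array(15, 6))   // latest segment, largest dictionaries
        )
        // The pre-fix behaviour (using the oldest segment's cardinality) would
        // build a key generator too small for the newer dictionary surrogate keys.
        println(pickLatestCardinality(splits).mkString(", ")) // prints 15, 6
      }
    }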
Github user CarbonDataQA commented on the issue:
https://github.com/apache/incubator-carbondata/pull/517

Build Success with Spark 1.5.2, please check CI: http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/541/
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/517#discussion_r95493706

--- Diff: integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/datacompaction/DataCompactionBlockletBoundryTest.scala ---
@@ -36,7 +36,7 @@ class DataCompactionBlockletBoundryTest extends QueryTest with BeforeAndAfterAll
       .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "mm/dd/yyyy")
     CarbonProperties.getInstance()
       .addProperty(CarbonCommonConstants.BLOCKLET_SIZE,
-        "55")
+        "120")
--- End diff --

Do you want to set a value that makes the file contain more than one blocklet? I think it is better to ensure that by calculating the value from the input CSV file.
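[Editor's note] A rough sketch of one way to act on this suggestion. The CSV path and the half-the-row-count heuristic are assumptions for illustration, not part of the actual test: the idea is to derive BLOCKLET_SIZE from the row count of the test input so the loaded carbondata file is guaranteed to contain more than one blocklet.

    import scala.io.Source

    object BlockletSizeFromCsv {
      def main(args: Array[String]): Unit = {
        // Hypothetical path to the CSV used by the test; the real test resource may differ.
        val csvPath = "src/test/resources/compaction/compaction_input.csv"
        val source = Source.fromFile(csvPath)
        // Count data rows, skipping the header line.
        val rowCount = try source.getLines().drop(1).size finally source.close()
        // Setting the blocklet size to roughly half the row count forces
        // at least two blocklets per carbondata file for this input.
        val blockletSize = math.max(1, rowCount / 2)
        println(s"BLOCKLET_SIZE = $blockletSize")
      }
    }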
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/517#discussion_r95493823

--- Diff: integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonMergerRDD.scala ---
@@ -318,7 +318,7 @@ class CarbonMergerRDD[K, V](
     // prepare the details required to extract the segment properties using last segment.
     if (null != carbonInputSplits && carbonInputSplits.nonEmpty) {
-      val carbonInputSplit = carbonInputSplits.last
+      val carbonInputSplit = carbonInputSplits(0)
--- End diff --

Use `head` instead of `(0)`. Also, why is the head element the latest one?
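[Editor's note] A tiny illustration of the review suggestion, with made-up values: on a Scala Seq, head is the idiomatic way to take the first element and is equivalent to apply(0) for non-empty sequences.

    object HeadVsApply {
      def main(args: Array[String]): Unit = {
        // Illustrative values only; in CarbonMergerRDD the elements are input splits.
        val carbonInputSplits = Seq("split_segment_2", "split_segment_1", "split_segment_0")
        val first = carbonInputSplits.head  // preferred over carbonInputSplits(0)
        assert(first == carbonInputSplits(0))
        println(first)
      }
    }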
Github user kumarvishal09 commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/517#discussion_r95948533

--- Diff: integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonMergerRDD.scala ---
@@ -318,7 +318,7 @@ class CarbonMergerRDD[K, V](
     // prepare the details required to extract the segment properties using last segment.
     if (null != carbonInputSplits && carbonInputSplits.nonEmpty) {
-      val carbonInputSplit = carbonInputSplits.last
+      val carbonInputSplit = carbonInputSplits(0)
--- End diff --

ok
Github user kumarvishal09 commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/517#discussion_r95948938

--- Diff: integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/datacompaction/DataCompactionBlockletBoundryTest.scala ---
@@ -36,7 +36,7 @@ class DataCompactionBlockletBoundryTest extends QueryTest with BeforeAndAfterAll
       .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "mm/dd/yyyy")
     CarbonProperties.getInstance()
       .addProperty(CarbonCommonConstants.BLOCKLET_SIZE,
-        "55")
+        "120")
--- End diff --

Yes, this is for having multiple blocklets in one carbondata file.
Github user CarbonDataQA commented on the issue:
https://github.com/apache/incubator-carbondata/pull/517

Build Success with Spark 1.6.2, please check CI: http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/584/
Github user jackylk commented on the issue:
https://github.com/apache/incubator-carbondata/pull/517

LGTM
Github user asfgit closed the pull request at:
https://github.com/apache/incubator-carbondata/pull/517