GitHub user xuchuanyin opened a pull request:
https://github.com/apache/carbondata/pull/2594

[CARBONDATA-2809][DataMap] Skip rebuilding for non-lazy datamap

# Problems & Root Cause

For a non-lazy datamap (i.e., one created without DEFERRED REBUILD), the datamap data is generated immediately after:
1. the datamap is created
2. the main table is loaded

So there is no need to rebuild such a datamap. In fact, triggering a rebuild for these datamaps causes an error because of the data that already exists.

# Solution

If the datamap is not lazy, skip the rebuild procedure and only print a log message for it (a minimal sketch of this decision follows after this message).

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:

- [x] Any interfaces changed? `NO`
- [x] Any backward compatibility impacted? `NO`
- [x] Document update required? `NO`
- [x] Testing done. Please provide details on:
  - Whether new unit test cases have been added or why no new tests are required? `NO`
  - How it is tested? Please attach test report. `Tested in local`
  - Is it a performance related change? Please attach the performance test report. `NO`
  - Any additional information to help reviewers in testing this change. `NA`
- [x] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA. `NA`

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/xuchuanyin/carbondata 0801_skip_rebuild

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/2594.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2594

----

commit 052b683a9076462c607c73a183697eaae2bc3848
Author: xuchuanyin <xuchuanyin@...>
Date: 2018-08-01T08:58:54Z

    Skip rebuilding for non-lazy datamap

    Problems & RootCause: For non-lazy (deferred rebuild) datamap, the data of
    datamap will be generated immediately after: 1. the datamap is created
    2. the main table is loaded. So there is no need to rebuild this datamap.
    Actually, it will encounter error if we trigger rebuilding for these
    datamaps due to the existence of old data.

    Solution: If the datamap is not lazy, we will skip the rebuild procedure
    and only print logs for it.

----

---
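[Editor's note] As an illustration of the skip decision described above, here is a minimal, hypothetical sketch. The types are simplified stand-ins, not CarbonData's real API; the actual change lives in CarbonDataMapRebuildCommand and uses DataMapSchema, DataMapManager and IndexDataMapProvider, as the review diff further down shows.

```scala
// Hypothetical sketch of the skip decision described above.
// The types here are simplified stand-ins, not CarbonData's real classes.
trait DataMapProviderLike {
  def rebuild(): Unit
  def isIndexProvider: Boolean
}

final case class DataMapSchemaLike(dataMapName: String, isLazy: Boolean)

object RebuildCommandSketch {
  def rebuildIfNeeded(schema: DataMapSchemaLike, provider: DataMapProviderLike): Unit = {
    if (!schema.isLazy && provider.isIndexProvider) {
      // A non-lazy index datamap is populated when it is created and on every
      // load of the main table, so an explicit REBUILD is redundant here and
      // would otherwise fail against the already-generated data. Only log.
      println(s"Skipping rebuild for non-lazy datamap '${schema.dataMapName}'")
    } else {
      // Lazy (DEFERRED REBUILD) datamaps still need the explicit rebuild.
      provider.rebuild()
    }
  }
}
```

In the real command the skip branch only prints a log, matching the behaviour stated in the solution above.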
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2594 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7706/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2594 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6432/ ---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2594 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6098/ ---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2594 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6108/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2594 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7715/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2594 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6441/ ---
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2594#discussion_r207433913

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonDataMapRebuildCommand.scala ---
@@ -48,7 +50,17 @@ case class CarbonDataMapRebuildCommand(
       )(sparkSession)
     }
     val provider = DataMapManager.get().getDataMapProvider(table, schema, sparkSession)
-    provider.rebuild()
+    // for non-lazy index datamap, the data of datamap will be generated immediately after
+    // the datamap is created or the main table is loaded, so there is no need to
+    // rebuild this datamap.
+    if (!schema.isLazy && provider.isInstanceOf[IndexDataMapProvider]) {
--- End diff --

Even if it is another datamap, such as a pre-aggregate datamap, we should not rebuild it, right?

---
Github user xuchuanyin commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2594#discussion_r207443911

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonDataMapRebuildCommand.scala ---
@@ -48,7 +50,17 @@ case class CarbonDataMapRebuildCommand(
       )(sparkSession)
     }
     val provider = DataMapManager.get().getDataMapProvider(table, schema, sparkSession)
-    provider.rebuild()
+    // for non-lazy index datamap, the data of datamap will be generated immediately after
+    // the datamap is created or the main table is loaded, so there is no need to
+    // rebuild this datamap.
+    if (!schema.isLazy && provider.isInstanceOf[IndexDataMapProvider]) {
--- End diff --

For MV, the current implementation requires a rebuild. For pre-aggregate, I'm not sure about its implementation, so I have left it as it is.

---
Github user KanakaKumar commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2594#discussion_r207460748

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonDataMapRebuildCommand.scala ---
@@ -48,7 +50,17 @@ case class CarbonDataMapRebuildCommand(
       )(sparkSession)
     }
     val provider = DataMapManager.get().getDataMapProvider(table, schema, sparkSession)
-    provider.rebuild()
+    // for non-lazy index datamap, the data of datamap will be generated immediately after
+    // the datamap is created or the main table is loaded, so there is no need to
+    // rebuild this datamap.
+    if (!schema.isLazy && provider.isInstanceOf[IndexDataMapProvider]) {
--- End diff --

Right now a rebuild call on a pre-aggregate DM throws "NoSuchDataMapException". Please handle it so that a correct message is given, since rebuild is also not required for pre-aggregate datamaps.

---
Github user xuchuanyin commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2594#discussion_r207462265

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonDataMapRebuildCommand.scala ---
@@ -48,7 +50,17 @@ case class CarbonDataMapRebuildCommand(
       )(sparkSession)
     }
     val provider = DataMapManager.get().getDataMapProvider(table, schema, sparkSession)
-    provider.rebuild()
+    // for non-lazy index datamap, the data of datamap will be generated immediately after
+    // the datamap is created or the main table is loaded, so there is no need to
+    // rebuild this datamap.
+    if (!schema.isLazy && provider.isInstanceOf[IndexDataMapProvider]) {
--- End diff --

OK.

---
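[Editor's note] A possible shape for the follow-up agreed above, again as a hypothetical, simplified sketch rather than the actual patch: broaden the guard so a non-lazy pre-aggregate datamap also gets an explicit "no rebuild needed" message instead of surfacing NoSuchDataMapException, while MV keeps going through the rebuild path as noted earlier in this thread.

```scala
// Hypothetical sketch only; simplified types, not CarbonData's real API.
object RebuildGuardSketch {
  sealed trait Kind
  case object Index extends Kind
  case object PreAggregate extends Kind
  case object Mv extends Kind

  def rebuildOrExplain(name: String, isLazy: Boolean, kind: Kind)(rebuild: () => Unit): Unit =
    kind match {
      case Index | PreAggregate if !isLazy =>
        // Both are maintained automatically (at creation and on each main-table
        // load), so report that clearly instead of attempting a rebuild.
        println(s"Datamap '$name' does not need a manual rebuild")
      case _ =>
        // MV (and lazy datamaps) still go through the normal rebuild path.
        rebuild()
    }

  // Example (hypothetical names):
  // rebuildOrExplain("agg_dm", isLazy = false, PreAggregate)(() => ())
}
```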
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2594 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6489/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2594 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7765/ ---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2594 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6147/ ---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2594 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6148/ ---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2594 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6177/ ---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2594 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6178/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2594 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7799/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2594 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6523/ ---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2594 @xuchuanyin Please check MVTests, it is failing ---