GitHub user rahulforallp opened a pull request:
https://github.com/apache/carbondata/pull/1623

[CARBONDATA-1866] [WIP] refactored CarbonLateDecodeRule to split different rules for better usability

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:

- [X] Any interfaces changed? **No**
- [X] Any backward compatibility impacted? **No**
- [X] Document update required? **No**
- [X] Testing done: This PR contains refactored code of **CarbonLateDecodeRule**, so no new test case is required. Only UT, FT, and SDV test success reports from CI are required.
- [X] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/rahulforallp/incubator-carbondata CARBONDATA-1866

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/1623.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #1623

----
commit 115c78fa8f2c7e8b9acc1cb4132409f693b12135
Author: rahulforallp <[hidden email]>
Date: 2017-12-06T09:52:44Z

    [CARBONDATA-1866] refactored CarbonLateDecodeRule to split different rules for better usability
----

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1623 Build Failed with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/498/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1623 Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/500/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1623 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1761/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1623 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1763/ ---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1623 SDV Build Fail, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2144/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1623 Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/520/ ---
Github user rahulforallp commented on the issue:
https://github.com/apache/carbondata/pull/1623 retest this please ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1623 Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/521/ ---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1623 SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2153/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1623 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1781/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1623 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1788/ ---
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1623#discussion_r155480115

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/optimizer/CarbonIUDRule.scala ---
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.optimizer
+
+import org.apache.spark.sql.ProjectForUpdate
+import org.apache.spark.sql.catalyst.expressions.{NamedExpression, PredicateHelper}
+import org.apache.spark.sql.catalyst.plans.logical.{LogicalPlan, Project}
+import org.apache.spark.sql.catalyst.rules.Rule
+import org.apache.spark.sql.execution.command.mutation.CarbonProjectForUpdateCommand
+
+/**
+ * Rule specific for IUD operations
+ */
+class CarbonIUDRule extends Rule[LogicalPlan] with PredicateHelper {
+  override def apply(plan: LogicalPlan): LogicalPlan = {
+    if (CarbonDecodeRuleHelper.checkIfRuleNeedToBeApplied(plan)) {
--- End diff --

This check is not required, as the IUD node, once replaced, will no longer be present in the plan.

---
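For illustration, a minimal sketch of the rule without the upfront guard, along the lines the reviewer suggests: the rewrite is applied unconditionally, and since a replaced ProjectForUpdate node no longer matches, re-running the rule is a no-op. The rewriteUpdatePlan helper here is a hypothetical placeholder for the PR's actual rewrite logic, not code from the patch.

    package org.apache.spark.sql.optimizer

    import org.apache.spark.sql.ProjectForUpdate
    import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
    import org.apache.spark.sql.catalyst.rules.Rule

    // Sketch: apply the IUD rewrite unconditionally. Once ProjectForUpdate has been
    // replaced, the pattern below no longer matches, so the
    // checkIfRuleNeedToBeApplied guard becomes unnecessary.
    class CarbonIUDRule extends Rule[LogicalPlan] {
      override def apply(plan: LogicalPlan): LogicalPlan = {
        plan transform {
          case update: ProjectForUpdate =>
            // Placeholder for the actual rewrite to CarbonProjectForUpdateCommand.
            rewriteUpdatePlan(update)
        }
      }

      // Hypothetical helper standing in for the real rewrite; included only to keep
      // the sketch self-contained.
      private def rewriteUpdatePlan(update: ProjectForUpdate): LogicalPlan = update
    }

The same reasoning would apply to the guard flagged in CarbonUDFTransformRule further down.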
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1623#discussion_r155480351

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/optimizer/CarbonPlanTransformRule.scala ---
@@ -44,37 +44,14 @@ import org.apache.carbondata.spark.CarbonAliasDecoderRelation
  * 1. Change the datatype for dictionary encoded column
  * 2. Add the dictionary decoder operator at appropriate place.
  */
-class CarbonLateDecodeRule extends Rule[LogicalPlan] with PredicateHelper {
+class CarbonPlanTransformRule extends Rule[LogicalPlan] with PredicateHelper {
--- End diff --

This is still only the late decode rule; it should not be renamed to CarbonPlanTransformRule.

---
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1623#discussion_r155481042

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/optimizer/CarbonUDFTransformRule.scala ---
@@ -0,0 +1,74 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.optimizer
+
+import org.apache.spark.sql.catalyst.expressions.{Alias, AttributeReference, PredicateHelper,
+ScalaUDF}
+import org.apache.spark.sql.catalyst.plans.logical.{Filter, Join, LogicalPlan, Project}
+import org.apache.spark.sql.catalyst.rules.Rule
+import org.apache.spark.sql.execution.datasources.LogicalRelation
+import org.apache.spark.sql.types.StringType
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+
+class CarbonUDFTransformRule extends Rule[LogicalPlan] with PredicateHelper {
+  override def apply(plan: LogicalPlan): LogicalPlan = {
+
+    if (CarbonDecodeRuleHelper.checkIfRuleNeedToBeApplied(plan)) {
--- End diff --

Remove this check.

---
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1623#discussion_r155484388

--- Diff: integration/spark2/src/main/spark2.2/CarbonSessionState.scala ---
@@ -176,31 +178,40 @@ class CarbonSessionStateBuilder(sparkSession: SparkSession,
   override lazy val optimizer: Optimizer = new CarbonOptimizer(catalog, conf, experimentalMethods)

+  def extendedAnalyzerRules: Seq[Rule[LogicalPlan]] = Nil
+
+  def internalAnalyzerRules: Seq[Rule[LogicalPlan]] = {
+    new ResolveHiveSerdeTable(session) +:
+    new FindDataSourceTable(session) +:
+    new ResolveSQLOnFile(session) +:
+    new CarbonIUDAnalysisRule(sparkSession) +:
+    new CarbonPreAggregateQueryRules(sparkSession) +:
+    new CarbonPreInsertionCasts(sparkSession) +: customResolutionRules
+  }
+
   override protected def analyzer: Analyzer = {
-    new Analyzer(catalog, conf) {
-
-      override val extendedResolutionRules: Seq[Rule[LogicalPlan]] =
-        new ResolveHiveSerdeTable(session) +:
-        new FindDataSourceTable(session) +:
-        new ResolveSQLOnFile(session) +:
-        new CarbonIUDAnalysisRule(sparkSession) +:
-        new CarbonPreAggregateQueryRules(sparkSession) +:
-        new CarbonPreInsertionCasts(sparkSession) +: customResolutionRules
-
-      override val extendedCheckRules: Seq[LogicalPlan => Unit] =
-        PreWriteCheck :: HiveOnlyCheck :: Nil
-
-      override val postHocResolutionRules: Seq[Rule[LogicalPlan]] =
-        new DetermineTableStats(session) +:
-        RelationConversions(conf, catalog) +:
-        PreprocessTableCreation(session) +:
-        PreprocessTableInsertion(conf) +:
-        DataSourceAnalysis(conf) +:
-        HiveAnalysis +:
-        customPostHocResolutionRules
-    }
+    new Analyzer(catalog, conf) {
--- End diff --

Use customResolutionRules; there is no need to add an extra interface.

---
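For illustration, a sketch of the shape the reviewer is asking for: the Carbon analyzer rules stay chained onto the existing customResolutionRules hook (as in the code being removed above) instead of going through a new extendedAnalyzerRules method. This is an excerpt of the override inside CarbonSessionStateBuilder; class and rule names are taken from the diff, and it is only an indicative outline, not the final patch.

    override protected def analyzer: Analyzer = {
      new Analyzer(catalog, conf) {
        // Chain the Carbon-specific rules directly ahead of customResolutionRules,
        // so the builder needs no additional extension interface.
        override val extendedResolutionRules: Seq[Rule[LogicalPlan]] =
          new ResolveHiveSerdeTable(session) +:
          new FindDataSourceTable(session) +:
          new ResolveSQLOnFile(session) +:
          new CarbonIUDAnalysisRule(sparkSession) +:
          new CarbonPreAggregateQueryRules(sparkSession) +:
          new CarbonPreInsertionCasts(sparkSession) +: customResolutionRules
      }
    }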
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1623#discussion_r155484918

--- Diff: integration/spark2/src/main/spark2.2/CarbonSessionState.scala ---
@@ -213,6 +224,13 @@ class CarbonOptimizer(
   extends SparkOptimizer(catalog, conf, experimentalMethods) {

   override def execute(plan: LogicalPlan): LogicalPlan = {
+    val transFormedPlan: LogicalPlan = CarbonOptimizerUtil.transformForScalarSubQuery(plan)
--- End diff --

This change is not required.

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1623 Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/576/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1623 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1820/ ---
Github user rahulforallp commented on the issue:
https://github.com/apache/carbondata/pull/1623 retest this please ---