[jira] [Resolved] (CARBONDATA-3116) set carbon.query.directQueryOnDataMap.enabled=true not working


Akash R Nilugal (Jira)

     [ https://issues.apache.org/jira/browse/CARBONDATA-3116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jacky Li resolved CARBONDATA-3116.
----------------------------------
       Resolution: Fixed
    Fix Version/s: 1.5.2

> set carbon.query.directQueryOnDataMap.enabled=true not working
> --------------------------------------------------------------
>
>                 Key: CARBONDATA-3116
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-3116
>             Project: CarbonData
>          Issue Type: Bug
>    Affects Versions: 1.5.1
>            Reporter: xubo245
>            Assignee: xubo245
>            Priority: Major
>             Fix For: 1.5.2
>
>          Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> When I run:
> {code:java}
>     spark.sql("drop table if exists mainTable")
>     spark.sql(
>       """CREATE TABLE mainTable
>     (id Int,
>       name String,
>       city String,
>       age Int)
>     STORED BY 'org.apache.carbondata.format'""".stripMargin);
>     spark.sql("LOAD DATA LOCAL INPATH '/Users/xubo/Desktop/xubo/git/carbondata2/integration/spark-common-test/src/test/resources/sample.csv' into table mainTable");
>     spark.sql("create datamap preagg_sum on table mainTable using 'preaggregate' as select id,sum(age) from mainTable group by id");
>     spark.sql("show datamap on table mainTable");
>     spark.sql("set carbon.query.directQueryOnDataMap.enabled=true");
>     spark.sql("set carbon.query.directQueryOnDataMap.enabled");
>     spark.sql("select count(*) from maintable_preagg_sum").show();
>     spark.sql("select count(*) from maintable_preagg_sum").show();
> {code}
> it throws the following exception:
> {code:java}
> 2018-11-22 00:06:01 AUDIT audit:93 - {"time":"November 22, 2018 12:06:01 AM CST","username":"xubo","opName":"SET","opId":"344656521959523","opStatus":"SUCCESS","opTime":"1 ms","table":"NA","extraInfo":{}}
> Exception in thread "main" org.apache.spark.sql.AnalysisException: Query On DataMap not supported;
> at org.apache.spark.sql.optimizer.CarbonLateDecodeRule.validateQueryDirectlyOnDataMap(CarbonLateDecodeRule.scala:131)
> at org.apache.spark.sql.optimizer.CarbonLateDecodeRule.checkIfRuleNeedToBeApplied(CarbonLateDecodeRule.scala:79)
> at org.apache.spark.sql.optimizer.CarbonLateDecodeRule.apply(CarbonLateDecodeRule.scala:53)
> at org.apache.spark.sql.optimizer.CarbonLateDecodeRule.apply(CarbonLateDecodeRule.scala:47)
> at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:85)
> at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:82)
> at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
> at scala.collection.immutable.List.foldLeft(List.scala:84)
> at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:82)
> at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:74)
> at scala.collection.immutable.List.foreach(List.scala:381)
> at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:74)
> at org.apache.spark.sql.hive.CarbonOptimizer.execute(CarbonOptimizer.scala:35)
> at org.apache.spark.sql.hive.CarbonOptimizer.execute(CarbonOptimizer.scala:27)
> at org.apache.spark.sql.execution.QueryExecution.optimizedPlan$lzycompute(QueryExecution.scala:78)
> at org.apache.spark.sql.execution.QueryExecution.optimizedPlan(QueryExecution.scala:78)
> at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:84)
> at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:80)
> at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:89)
> at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:89)
> at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2837)
> at org.apache.spark.sql.Dataset.head(Dataset.scala:2150)
> at org.apache.spark.sql.Dataset.take(Dataset.scala:2363)
> at org.apache.spark.sql.Dataset.showString(Dataset.scala:241)
> at org.apache.spark.sql.Dataset.show(Dataset.scala:637)
> at org.apache.spark.sql.Dataset.show(Dataset.scala:596)
> at org.apache.spark.sql.Dataset.show(Dataset.scala:605)
> at org.apache.carbondata.examples.PreAggregateDataMapExample$.exampleBody(PreAggregateDataMapExample.scala:63)
> at org.apache.carbondata.examples.PreAggregateDataMapExample$.main(PreAggregateDataMapExample.scala:34)
> at org.apache.carbondata.examples.PreAggregateDataMapExample.main(PreAggregateDataMapExample.scala)
> {code}
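
For reference, below is a minimal workaround sketch for affected 1.5.1 builds (it is not the committed fix for this issue): instead of relying on the SQL {{SET}} command, register the property directly through CarbonProperties, which the validation in CarbonLateDecodeRule appears to read. The snippet assumes {{spark}} is an existing CarbonSession with the {{mainTable}} and {{preagg_sum}} datamap from the reproduction above.
{code:java}
// Workaround sketch (assumption: `spark` is an existing CarbonSession, as in
// PreAggregateDataMapExample). Instead of spark.sql("set carbon.query..."),
// register the property through CarbonProperties before querying the datamap table.
import org.apache.carbondata.core.util.CarbonProperties

CarbonProperties.getInstance()
  .addProperty("carbon.query.directQueryOnDataMap.enabled", "true")

// With the property registered at the CarbonProperties level, the direct query on the
// pre-aggregate child table should no longer fail with "Query On DataMap not supported".
spark.sql("select count(*) from maintable_preagg_sum").show()
{code}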


