[jira] [Closed] (CARBONDATA-3795) Create external carbon table fails if the schema is not provided


Akash R Nilugal (Jira)

     [ https://issues.apache.org/jira/browse/CARBONDATA-3795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chetan Bhat closed CARBONDATA-3795.
-----------------------------------
    Fix Version/s: 2.0.1
       Resolution: Fixed

Issue fixed in 2.0.1
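
On 2.0.1 the statement from the report is expected to work as-is, presumably with the schema inferred from the CarbonData files already present at the location. A minimal sketch (the location path is taken from the report; the DESCRIBE is only there to verify the inferred schema):

create external table test1 stored as carbondata location '/user/sparkhive/warehouse/1_6_1.db/brinjal/';
describe formatted test1;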

> Create external carbon table fails if the schema is not provided
> ----------------------------------------------------------------
>
>                 Key: CARBONDATA-3795
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-3795
>             Project: CarbonData
>          Issue Type: Bug
>          Components: data-query
>    Affects Versions: 2.0.0
>         Environment: Spark 2.4.5 compatible carbon jars
>            Reporter: Chetan Bhat
>            Priority: Major
>             Fix For: 2.0.1
>
>
> Creating an external carbon table fails if the schema is not provided.
> Example command:
> create external table test1 stored as carbondata location '/user/sparkhive/warehouse/1_6_1.db/brinjal/';
> *Error: org.apache.spark.sql.AnalysisException: Unable to infer the schema. The schema specification is required to create the table `1_6_1`.`test1`.; (state=,code=0)*
>  
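> A possible workaround on the affected version is to provide the schema explicitly in the DDL; the column list below is hypothetical and must match the actual data at the location:
> create external table test1 (col1 int, col2 string) stored as carbondata location '/user/sparkhive/warehouse/1_6_1.db/brinjal/';
>  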
> *Logs:*
> 2020-05-05 22:57:25,638 | ERROR | [HiveServer2-Background-Pool: Thread-371] | Error executing query, currentState RUNNING, | org.apache.spark.internal.Logging$class.logError(Logging.scala:91)
> org.apache.spark.sql.AnalysisException: Unable to infer the schema. The schema specification is required to create the table `1_6_1`.`test1`.;
>  at org.apache.spark.sql.hive.ResolveHiveSerdeTable$$anonfun$apply$1.applyOrElse(HiveStrategies.scala:104)
>  at org.apache.spark.sql.hive.ResolveHiveSerdeTable$$anonfun$apply$1.applyOrElse(HiveStrategies.scala:90)
>  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)
>  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)
>  at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:69)
>  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:107)
>  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:106)
>  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
>  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsDown(AnalysisHelper.scala:106)
>  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)
>  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperators(AnalysisHelper.scala:73)
>  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:29)
>  at org.apache.spark.sql.hive.ResolveHiveSerdeTable.apply(HiveStrategies.scala:90)
>  at org.apache.spark.sql.hive.ResolveHiveSerdeTable.apply(HiveStrategies.scala:44)
>  at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)
>  at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)
>  at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
>  at scala.collection.immutable.List.foldLeft(List.scala:84)
>  at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:84)
>  at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:76)
>  at scala.collection.immutable.List.foreach(List.scala:392)
>  at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:76)
>  at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:127)
>  at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:121)
>  at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:106)
>  at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:105)
>  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:201)
>  at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)
>  at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:58)
>  at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:56)
>  at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:48)
>  at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:78)
>  at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
>  at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694)
>  at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:232)
>  at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:175)
>  at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:171)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
>  at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:185)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  at java.lang.Thread.run(Thread.java:745)
> 2020-05-05 22:57:25,639 | ERROR | [HiveServer2-Background-Pool: Thread-371] | Error running hive query: | org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:179)
> org.apache.hive.service.cli.HiveSQLException: org.apache.spark.sql.AnalysisException: Unable to infer the schema. The schema specification is required to create the table `1_6_1`.`test1`.;
>  at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:269)
>  at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:175)
>  at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:171)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
>  at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:185)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  at java.lang.Thread.run(Thread.java:745)
> 2020-05-05 22:57:25,641 | INFO | [HiveServer2-Handler-Pool: Thread-337] | Asked to cancel job group 90dbd61e-85af-4e31-b3ab-f4dfb4c21249 | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)