Re: Error while creating table in carbondata


Re: Error while creating table in carbondata

Lionel CL
I have the same problem on CDH 5.8.0.
Spark2 version is 2.1.0.cloudera1.
CarbonData version is 1.2.0.

No error occurs when using the open-source version of Spark.

<hadoop.version>2.6.0-cdh5.8.0</hadoop.version>
<spark.version>2.1.0.cloudera1</spark.version>
<scala.binary.version>2.11</scala.binary.version>
<scala.version>2.11.8</scala.version>
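
Building against these Cloudera-specific versions requires that the `*.cloudera1` and `*-cdh*` artifacts be resolvable from Maven. As an assumption (the id and URL may differ in your setup), a typical repository entry looks like:

```xml
<!-- Assumed repository entry for resolving *.cloudera1 / *-cdh* artifacts;
     verify the URL against your Cloudera documentation. -->
<repository>
  <id>cloudera</id>
  <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
</repository>
```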


scala> cc.sql("create table t111(vin string) stored by 'carbondata'")
17/11/03 10:22:03 AUDIT command.CreateTable: [][][Thread-1]Creating Table with Database name [default] and Table name [t111]
java.lang.NoSuchMethodError: org.apache.spark.sql.catalyst.catalog.CatalogTable.copy(Lorg/apache/spark/sql/catalyst/TableIdentifier;Lorg/apache/spark/sql/catalyst/catalog/CatalogTableType;Lorg/apache/spark/sql/catalyst/catalog/CatalogStorageFormat;Lorg/apache/spark/sql/types/StructType;Lscala/Option;Lscala/collection/Seq;Lscala/Option;Ljava/lang/String;JJLscala/collection/immutable/Map;Lscala/Option;Lscala/Option;Lscala/Option;Lscala/Option;Lscala/collection/Seq;Z)Lorg/apache/spark/sql/catalyst/catalog/CatalogTable;
  at org.apache.spark.sql.CarbonSource$.updateCatalogTableWithCarbonSchema(CarbonSource.scala:253)
  at org.apache.spark.sql.execution.command.DDLStrategy.apply(DDLStrategy.scala:135)
  at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:62)
  at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:62)
  at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
  at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
  at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)


On 2017/11/1 at 1:58 AM, "chenliang613" <[hidden email]> wrote:

Hi

Did you use open source spark version?

Can you provide more detailed info:
1. Which CarbonData version and Spark version did you use?
2. Can you share the reproduction script and steps?

Regards
Liang


hujianjun wrote:
scala> carbon.sql("CREATE TABLE IF NOT EXISTS carbon_table(id string,name string,city string,age Int)STORED BY 'carbondata'")
17/10/23 19:13:52 AUDIT command.CarbonCreateTableCommand: [master][root][Thread-1]Creating Table with Database name [clb_carbon] and Table name [carbon_table]
java.lang.NoSuchMethodError: org.apache.spark.sql.catalyst.catalog.CatalogTable.copy(Lorg/apache/spark/sql/catalyst/TableIdentifier;Lorg/apache/spark/sql/catalyst/catalog/CatalogTableType;Lorg/apache/spark/sql/catalyst/catalog/CatalogStorageFormat;Lorg/apache/spark/sql/types/StructType;Lscala/Option;Lscala/collection/Seq;Lscala/Option;Ljava/lang/String;JJLscala/collection/immutable/Map;Lscala/Option;Lscala/Option;Lscala/Option;Lscala/Option;Lscala/collection/Seq;Z)Lorg/apache/spark/sql/catalyst/catalog/CatalogTable;
   at org.apache.spark.sql.CarbonSource$.updateCatalogTableWithCarbonSchema(CarbonSource.scala:253)
   at org.apache.spark.sql.execution.strategy.DDLStrategy.apply(DDLStrategy.scala:154)
   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:62)
   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:62)
   at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
   at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
   at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
   at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:92)
   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:77)
   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:74)
   at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
   at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
   at scala.collection.Iterator$class.foreach(Iterator.scala:893)
   at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
   at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
   at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1336)
   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:74)
   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:66)
   at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
   at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
   at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:92)
   at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:79)
   at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:75)
   at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:84)
   at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:84)
   at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
   at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
   at org.apache.spark.sql.Dataset.<init>(Dataset.scala:185)
   at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
   at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
   at org.apache.spark.sql.execution.command.CarbonCreateTableCommand.processSchema(CarbonCreateTableCommand.scala:84)
   at org.apache.spark.sql.execution.command.CarbonCreateTableCommand.run(CarbonCreateTableCommand.scala:36)
   at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
   at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
   at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
   at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
   at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
   at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
   at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
   at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
   at org.apache.spark.sql.Dataset.<init>(Dataset.scala:185)
   at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
   at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
   ... 52 elided
--
Sent from: http://apache-carbondata-user-mailing-list.3231.n8.nabble.com/







Re: Error while creating table in carbondata

bhavya411
Hi,

Can you please check whether the spark-catalyst jar is present in the $SPARK_HOME/jars
folder of your Cloudera installation? If it is not there, please add it and retry.

Thanks and regards
Bhavya

On Sun, Nov 5, 2017 at 7:24 PM, Lionel CL <[hidden email]> wrote:


Re: Error while creating table in carbondata

Lionel CL
Yes, there is a catalyst jar under the path /opt/cloudera/parcels/SPARK2/lib/spark2/jars/

spark-catalyst_2.11-2.1.0.cloudera1.jar







On 2017/11/6 at 4:12 PM, "Bhavya Aggarwal" <[hidden email]> wrote:

>Hi,
>
>Can you please check if you have spark-catalyst jar in $SPARK_HOME/jars
>folder for your  cloudera version, if its not there please try to include
>it and retry.
>
>Thanks and regards
>Bhavya
>
>On Sun, Nov 5, 2017 at 7:24 PM, Lionel CL <[hidden email]> wrote:
>
>> I have the same problem in CDH 5.8.0
>> spark2 version is 2.1.0.cloudera1
>> carbondata version 1.2.0.
>>
>> There's no error occurred when using open source version spark.
>>
>> <hadoop.version>2.6.0-cdh5.8.0</hadoop.version>
>> <spark.version>2.1.0.cloudera1</spark.version>
>> <scala.binary.version>2.11</scala.binary.version>
>> <scala.version>2.11.8</scala.version>
>>
>>
>> scala> cc.sql("create table t111(vin string) stored by 'carbondata'")
>> 17/11/03 10:22:03 AUDIT command.CreateTable: [][][Thread-1]Creating Table
>> with Database name [default] and Table name [t111]
>> java.lang.NoSuchMethodError: org.apache.spark.sql.catalyst.
>> catalog.CatalogTable.copy(Lorg/apache/spark/sql/catalyst/
>> TableIdentifier;Lorg/apache/spark/sql/catalyst/catalog/
>> CatalogTableType;Lorg/apache/spark/sql/catalyst/catalog/
>> CatalogStorageFormat;Lorg/apache/spark/sql/types/StructT
>> ype;Lscala/Option;Lscala/collection/Seq;Lscala/Option;
>> Ljava/lang/String;JJLscala/collection/immutable/Map;
>> Lscala/Option;Lscala/Option;Lscala/Option;Lscala/Option;
>> Lscala/collection/Seq;Z)Lorg/apache/spark/sql/catalyst/
>> catalog/CatalogTable;
>>   at org.apache.spark.sql.CarbonSource$.updateCatalogTableWithCar
>> bonSchema(CarbonSource.scala:253)
>>   at org.apache.spark.sql.execution.command.DDLStrategy.apply(
>> DDLStrategy.scala:135)
>>   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun
>> $1.apply(QueryPlanner.scala:62)
>>   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun
>> $1.apply(QueryPlanner.scala:62)
>>   at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
>>   at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
>>   at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
>>
>>
>> 在 2017/11/1 上午1:58,“chenliang613”<[hidden email]<mailto:chenlia
>> [hidden email]>> 写入:
>>
>> Hi
>>
>> Did you use open source spark version?
>>
>> Can you provide more detail info :
>> 1. which carbondata version and spark version, you used ?
>> 2. Can you share with us , reproduce script and steps.
>>
>> Regards
>> Liang
>>
>>
>> hujianjun wrote
>> scala> carbon.sql("CREATE TABLE IF NOT EXISTS carbon_table(id string,name
>> string,city string,age Int)STORED BY 'carbondata'")
>> 17/10/23 19:13:52 AUDIT command.CarbonCreateTableCommand:
>> [master][root][Thread-1]Creating Table with Database name [clb_carbon] and
>> Table name [carbon_table]
>> java.lang.NoSuchMethodError:
>> org.apache.spark.sql.catalyst.catalog.CatalogTable.copy(Lorg
>> /apache/spark/sql/catalyst/TableIdentifier;Lorg/apache/
>> spark/sql/catalyst/catalog/CatalogTableType;Lorg/apache/
>> spark/sql/catalyst/catalog/CatalogStorageFormat;Lorg/
>> apache/spark/sql/types/StructType;Lscala/Option;Lscala/
>> collection/Seq;Lscala/Option;Ljava/lang/String;JJLscala/
>> collection/immutable/Map;Lscala/Option;Lscala/Option;
>> Lscala/Option;Lscala/Option;Lscala/collection/Seq;Z)Lorg/
>> apache/spark/sql/catalyst/catalog/CatalogTable;
>>    at
>> org.apache.spark.sql.CarbonSource$.updateCatalogTableWithCar
>> bonSchema(CarbonSource.scala:253)
>>    at
>> org.apache.spark.sql.execution.strategy.DDLStrategy.apply(
>> DDLStrategy.scala:154)
>>    at
>> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun
>> $1.apply(QueryPlanner.scala:62)
>>    at
>> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun
>> $1.apply(QueryPlanner.scala:62)
>>    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
>>    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
>>    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
>>    at
>> org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(Que
>> ryPlanner.scala:92)
>>    at
>> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun
>> $2$$anonfun$apply$2.apply(QueryPlanner.scala:77)
>>    at
>> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun
>> $2$$anonfun$apply$2.apply(QueryPlanner.scala:74)
>>    at
>> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(T
>> raversableOnce.scala:157)
>>    at
>> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(T
>> raversableOnce.scala:157)
>>    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>>    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>>    at
>> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
>>    at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1336)
>>    at
>> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun
>> $2.apply(QueryPlanner.scala:74)
>>    at
>> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun
>> $2.apply(QueryPlanner.scala:66)
>>    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
>>    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
>>    at
>> org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(Que
>> ryPlanner.scala:92)
>>    at
>> org.apache.spark.sql.execution.QueryExecution.sparkPlan$
>> lzycompute(QueryExecution.scala:79)
>>    at
>> org.apache.spark.sql.execution.QueryExecution.sparkPlan(
>> QueryExecution.scala:75)
>>    at
>> org.apache.spark.sql.execution.QueryExecution.executedPlan$
>> lzycompute(QueryExecution.scala:84)
>>    at
>> org.apache.spark.sql.execution.QueryExecution.executedPlan(
>> QueryExecution.scala:84)
>>    at
>> org.apache.spark.sql.execution.QueryExecution.toRdd$
>> lzycompute(QueryExecution.scala:87)
>>    at
>> org.apache.spark.sql.execution.QueryExecution.toRdd(
>> QueryExecution.scala:87)
>>    at org.apache.spark.sql.Dataset.
>> <init>
>> (Dataset.scala:185)
>>    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
>>    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
>>    at
>> org.apache.spark.sql.execution.command.CarbonCreateTableComm
>> and.processSchema(CarbonCreateTableCommand.scala:84)
>>    at
>> org.apache.spark.sql.execution.command.CarbonCreateTableComm
>> and.run(CarbonCreateTableCommand.scala:36)
>>    at
>> org.apache.spark.sql.execution.command.ExecutedCommandExec.s
>> ideEffectResult$lzycompute(commands.scala:58)
>>    at
>> org.apache.spark.sql.execution.command.ExecutedCommandExec.s
>> ideEffectResult(commands.scala:56)
>>    at
>> org.apache.spark.sql.execution.command.ExecutedCommandExec.
>> doExecute(commands.scala:74)
>>    at
>> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.
>> apply(SparkPlan.scala:114)
>>    at
>> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.
>> apply(SparkPlan.scala:114)
>>    at
>> org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQue
>> ry$1.apply(SparkPlan.scala:135)
>>    at
>> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperati
>> onScope.scala:151)
>>    at
>> org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
>>    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.
>> scala:113)
>>    at
>> org.apache.spark.sql.execution.QueryExecution.toRdd$
>> lzycompute(QueryExecution.scala:87)
>>    at
>> org.apache.spark.sql.execution.QueryExecution.toRdd(
>> QueryExecution.scala:87)
>>    at org.apache.spark.sql.Dataset.
>> <init>
>> (Dataset.scala:185)
>>    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
>>    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
>>    ... 52 elided
>> --
>> Sent from: http://apache-carbondata-user-mailing-list.3231.n8.nabble.com/
>>
>>
>>
>>
>>
>> --
>> Sent from: http://apache-carbondata-user-mailing-list.3231.n8.nabble.com/
>>
>>

Re: Error while creating table in carbondata

bhavya411
Hi,

Can you please let me know how you are building the CarbonData assembly
jar, i.e. which command you run to build CarbonData?

Regards
Bhavya

On Mon, Nov 6, 2017 at 2:18 PM, Lionel CL <[hidden email]> wrote:


Re: Error while creating table in carbondata

Lionel CL
mvn -DskipTests -Pspark-2.1 clean package
The pom file was changed as shown in my earlier email.
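
For reference, the same build can also be sketched by overriding the version properties on the command line instead of editing the pom (an assumption: this presumes the Cloudera Maven repository is configured and that the pom exposes `spark.version` and `hadoop.version` as overridable properties, as the snippet quoted earlier in this thread suggests):

```shell
# Sketch of building CarbonData against the Cloudera Spark/Hadoop artifacts
# instead of the Apache ones; property names match the pom snippet quoted
# earlier in this thread.
mvn clean package -DskipTests -Pspark-2.1 \
  -Dspark.version=2.1.0.cloudera1 \
  -Dhadoop.version=2.6.0-cdh5.8.0
```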



On 2017/11/6 at 7:47 PM, "Bhavya Aggarwal" <[hidden email]> wrote:

>Hi,
>
>Can you please let me know how you are building the CarbonData assembly
>jar, or which command you are running to build CarbonData.
>
>Regards
>Bhavya
>
>On Mon, Nov 6, 2017 at 2:18 PM, Lionel CL <[hidden email]> wrote:
>
>> Yes, there is a catalyst jar under the path
>> /opt/cloudera/parcels/SPARK2/lib/spark2/jars/spark-catalyst_2.11-2.1.0.cloudera1.jar
>>
>>
>>
>>
>>
>>
>>
>> On 2017/11/6 at 4:12 PM, "Bhavya Aggarwal" <[hidden email]> wrote:
>>
>> >Hi,
>> >
>> >Can you please check if you have the spark-catalyst jar in the
>> >$SPARK_HOME/jars folder of your Cloudera installation; if it's not there,
>> >please try to include it and retry.
>> >
>> >Thanks and regards
>> >Bhavya
>> >

Re: Error while creating table in carbondata

bhavya411
Hi,

I think the problem is that the class signatures of open-source Spark and
Cloudera Spark do not match for the CatalogTable class: the Cloudera version
has an additional trailing parameter (schemaPreservesCase, shown below). We
may need to try building CarbonData against the Cloudera Spark version to
make it work.


case class CatalogTable(
    identifier: TableIdentifier,
    tableType: CatalogTableType,
    storage: CatalogStorageFormat,
    schema: StructType,
    provider: Option[String] = None,
    partitionColumnNames: Seq[String] = Seq.empty,
    bucketSpec: Option[BucketSpec] = None,
    owner: String = "",
    createTime: Long = System.currentTimeMillis,
    lastAccessTime: Long = -1,
    properties: Map[String, String] = Map.empty,
    stats: Option[Statistics] = None,
    viewOriginalText: Option[String] = None,
    viewText: Option[String] = None,
    comment: Option[String] = None,
    unsupportedFeatures: Seq[String] = Seq.empty,
    tracksPartitionsInCatalog: Boolean = false,
    schemaPreservesCase: Boolean = true) {   // <-- additional parameter in the Cloudera build


Thanks and regards
Bhavya

On Tue, Nov 7, 2017 at 7:17 AM, Lionel CL <[hidden email]> wrote:

> mvn -DskipTests -Pspark-2.1 clean package
> The pom file was changed as described in my earlier email.
>