Posted by lionel061201 on Aug 23, 2017; 9:34am
URL: http://apache-carbondata-dev-mailing-list-archive.168.s1.nabble.com/method-not-found-issue-when-creating-table-tp20640p20708.html
Hi Manish,
Thank you for the response!
I have found that the issue is caused by the Cloudera build of the spark-catalyst
jar differing from the open-source version: the case class CatalogTable in the
Cloudera version has some additional fields.
But how can I change the instance of CatalogTable (tableDesc in the code below)
to the Cloudera version? It comes from a SparkPlan...
Even though I have changed all the jar dependencies to the Cloudera version, it
still throws the NoSuchMethodError.
/**
 * Carbon strategies for DDL commands
 */
class DDLStrategy(sparkSession: SparkSession) extends SparkStrategy {
  def apply(plan: LogicalPlan): Seq[SparkPlan] = {
    plan match {
      case ...
      case org.apache.spark.sql.execution.datasources.CreateTable(tableDesc, mode, None)
        if tableDesc.provider.get != DDLUtils.HIVE_PROVIDER &&
          tableDesc.provider.get.equals("org.apache.spark.sql.CarbonSource") =>
        val updatedCatalog =
          CarbonSource.updateCatalogTableWithCarbonSchema(tableDesc, sparkSession)
        val cmd = CreateDataSourceTableCommand(updatedCatalog,
          ignoreIfExists = mode == SaveMode.Ignore)
        ExecutedCommandExec(cmd) :: Nil
      case _ => Nil
    }
  }
}
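For context on the error itself: a Scala case class regenerates its copy method
whenever fields are added, so the JVM signature changes and code compiled against
one build of spark-catalyst cannot resolve the method in another build. A minimal
sketch with hypothetical DemoTable classes (not Spark code), only to illustrate
the mechanism:

// Hypothetical stand-ins to show why the copy descriptor changes:
//   the two-field shape generates   copy(Ljava/lang/String;I)LDemoTableV1;
//   the three-field shape generates copy(Ljava/lang/String;IJ)LDemoTableV2;
// A call site compiled against one shape cannot resolve against the other at
// runtime and fails with NoSuchMethodError, just like CatalogTable.copy here.
case class DemoTableV1(name: String, numFields: Int)
case class DemoTableV2(name: String, numFields: Int, createTime: Long = 0L)

println(DemoTableV1("t", 2).copy(name = "t2"))
println(DemoTableV2("t", 2).copy(name = "t2"))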
The Cloudera version of the case class is:
case class CatalogTable(
    identifier: TableIdentifier,
    tableType: CatalogTableType,
    storage: CatalogStorageFormat,
    schema: StructType,
    provider: Option[String] = None,
    partitionColumnNames: Seq[String] = Seq.empty,
    bucketSpec: Option[BucketSpec] = None,
    owner: String = "",
    createTime: Long = System.currentTimeMillis,
    lastAccessTime: Long = -1,
    properties: Map[String, String] = Map.empty,
    stats: Option[Statistics] = None,
    viewOriginalText: Option[String] = None,
    viewText: Option[String] = None,
    comment: Option[String] = None,
    unsupportedFeatures: Seq[String] = Seq.empty,
    tracksPartitionsInCatalog: Boolean = false,
    schemaPreservesCase: Boolean = true)
while in the error log, the signature of the copy method being called is:
java.lang.NoSuchMethodError:
org.apache.spark.sql.catalyst.catalog.CatalogTable.copy(
Lorg/apache/spark/sql/catalyst/TableIdentifier;
Lorg/apache/spark/sql/catalyst/catalog/CatalogTableType;
Lorg/apache/spark/sql/catalyst/catalog/CatalogStorageFormat;
Lorg/apache/spark/sql/types/StructType;
Lscala/Option;
Lscala/collection/Seq;
Lscala/Option;
Ljava/lang/String;
<<<--------- two long-type parameters seem to be missing here
JJLscala/collection/immutable/Map;
Lscala/Option;
Lscala/Option;
Lscala/Option;
Lscala/Option;
Lscala/collection/Seq;Z)
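One way to confirm which CatalogTable class actually gets loaded at runtime, and
which copy signature it really exposes, is a small reflection check (just a
sketch; paste it into spark-shell or any JVM started with the same classpath as
the driver, and compare the output with the descriptor above):

// Sketch: print the copy method(s) of the CatalogTable class that is loaded
// at runtime, together with their parameter types.
import org.apache.spark.sql.catalyst.catalog.CatalogTable

classOf[CatalogTable].getMethods
  .filter(_.getName == "copy")
  .foreach(println)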
Thanks,
CaoLu
On Wed, Aug 23, 2017 at 5:10 PM, manishgupta88 <[hidden email]> wrote:
> Hi Lionel,
>
> Carbon table creation flow is executed on the driver side; executors do not
> participate in creating the carbon table. From the logs it seems that the
> spark-catalyst jar is missing; it is generally placed under the
> $SPARK_HOME/jars or $SPARK_HOME/lib directory. Please check whether the spark
> jars directory is on the driver classpath. You can follow the steps below:
>
> 1. On the driver node, execute the "jps" command and find the SparkSubmit
> process id.
> 2. Execute "jinfo <process id>" and redirect its output to a file.
> 3. Search for the spark-catalyst jar in that file. If it is not found, it is
> not in the classpath; add the jar to the classpath and run your queries
> again.
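A related check can be done from inside the driver JVM itself (just a sketch,
complementing rather than replacing the steps above): print the jar that the
CatalogTable class is actually loaded from, which shows whether the CDH or the
Apache build of spark-catalyst wins on the classpath.

// Sketch for spark-shell: locate the jar providing CatalogTable at runtime.
// (May print null for classes not loaded from a jar.)
import org.apache.spark.sql.catalyst.catalog.CatalogTable

println(classOf[CatalogTable].getProtectionDomain.getCodeSource.getLocation)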
>
> Regards
> Manish Gupta
>
>
>
> --
> View this message in context:
> http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/method-not-found-issue-when-creating-table-tp20640p20702.html
> Sent from the Apache CarbonData Dev Mailing List archive mailing list
> archive at Nabble.com.
>