SPARK update error

孙而焓
After switching from Spark 2.1 + Carbon 1.1.0 to Spark 1.6 + Carbon 1.1.0, I found that data loaded with Spark 2.1 is no longer available. Judging from the error message, the query is not using the customized carbon store location (it falls back to the default Hive warehouse path), even though I always pass that location to CarbonContext:
scala>
    import org.apache.spark.sql.CarbonContext
    val cc = new CarbonContext(sc, "hdfs://192.168.14.78:8020/apps/hive/guoht/qqdatastore")
    cc.sql("select * from  qqdata.test_table").show
17/05/24 15:41:26 INFO CarbonContext$: main Query [SELECT * FROM  QQDATA.TEST_TABLE]
17/05/24 15:41:27 INFO ParseDriver: Parsing command: select * from  qqdata.test_table
17/05/24 15:41:28 INFO ParseDriver: Parse Completed
17/05/24 15:41:28 INFO ParseDriver: Parsing command: select * from  qqdata.test_table
17/05/24 15:41:28 INFO ParseDriver: Parse Completed
org.spark-project.guava.util.concurrent.UncheckedExecutionException: java.lang.IllegalArgumentException: invalid CarbonData file path: hdfs://hdp78.ffcs.cn:8020/apps/hive/warehouse/qqdata.db/test_table
        at org.spark-project.guava.cache.LocalCache$Segment.get(LocalCache.java:2263)
        at org.spark-project.guava.cache.LocalCache.get(LocalCache.java:4000)
        at org.spark-project.guava.cache.LocalCache.getOrLoad(LocalCache.java:4004)
        at org.spark-project.guava.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4874)
        at org.spark-project.guava.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4880)
        at org.spark-project.guava.cache.LocalCache$LocalLoadingCache.apply(LocalCache.java:4898)
        at org.apache.spark.sql.hive.HiveMetastoreCatalog.lookupRelation(HiveMetastoreCatalog.scala:417)
        at org.apache.spark.sql.CarbonContext$$anon$1.org$apache$spark$sql$catalyst$analysis$OverrideCatalog$$super$lookupRelation(CarbonContext.scala:71)
        at org.apache.spark.sql.catalyst.analysis.OverrideCatalog$class.lookupRelation(Catalog.scala:162)
        at org.apache.spark.sql.CarbonContext$$anon$1.lookupRelation(CarbonContext.scala:71)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.getTable(Analyzer.scala:302)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$9.applyOrElse(Analyzer.scala:314)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$9.applyOrElse(Analyzer.scala:309)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:57)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:57)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:69)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:56)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:54)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:54)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:281)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
        at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
        at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
        at scala.collection.AbstractIterator.to(Iterator.scala:1157)
        at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
        at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
        at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
        at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildren(TreeNode.scala:321)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:54)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:309)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:299)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:83)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:80)
        at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:111)
        at scala.collection.immutable.List.foldLeft(List.scala:84)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:80)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:72)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:72)
        at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:36)
        at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:36)
        at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:34)
        at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:133)
        at org.apache.spark.sql.CarbonContext.sql(CarbonContext.scala:139)
        at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:31)
        at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:36)
        at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:38)
        at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:40)
        at $iwC$$iwC$$iwC$$iwC.<init>(<console>:42)
        at $iwC$$iwC$$iwC.<init>(<console>:44)
        at $iwC$$iwC.<init>(<console>:46)
        at $iwC.<init>(<console>:48)
        at <init>(<console>:50)
        at .<init>(<console>:54)
        at .<clinit>(<console>)
        at .<init>(<console>:7)
        at .<clinit>(<console>)
        at $print(<console>)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
        at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
        at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
        at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
        at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
        at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
        at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
        at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
        at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
        at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
        at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
        at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
        at org.apache.spark.repl.Main$.main(Main.scala:31)
        at org.apache.spark.repl.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:745)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.IllegalArgumentException: invalid CarbonData file path: hdfs://hdp78.ffcs.cn:8020/apps/hive/warehouse/qqdata.db/test_table
        at org.apache.spark.sql.CarbonDatasourceHadoopRelation.<init>(CarbonDatasourceHadoopRelation.scala:56)
        at org.apache.spark.sql.CarbonSource.createRelation(CarbonDatasourceRelation.scala:122)
        at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:140)
        at org.apache.spark.sql.hive.HiveMetastoreCatalog$$anon$1.load(HiveMetastoreCatalog.scala:181)
        at org.apache.spark.sql.hive.HiveMetastoreCatalog$$anon$1.load(HiveMetastoreCatalog.scala:125)
        at org.spark-project.guava.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3599)
        at org.spark-project.guava.cache.LocalCache$Segment.loadSync(LocalCache.java:2379)
        at org.spark-project.guava.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2342)
        at org.spark-project.guava.cache.LocalCache$Segment.get(LocalCache.java:2257)
        ... 94 more


[hidden email]
孙而焓【FFCS研究院】

Re: SPARK update error

Pallavi Singh
Hi Sunerhan,

CarbonData does not support accessing data across different Spark versions; data loaded with Spark 2.1 cannot be read with Spark 1.6.
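
If you need to stay on Spark 1.6, the practical workaround is to re-create the table and reload the source data under Spark 1.6. A minimal sketch, assuming a hypothetical schema and source CSV path (neither is given in this thread):

    import org.apache.spark.sql.CarbonContext

    // Reuse the same custom carbon store location as before.
    val cc = new CarbonContext(sc, "hdfs://192.168.14.78:8020/apps/hive/guoht/qqdatastore")

    // Drop the table written by Spark 2.1 and re-create it under Spark 1.6.
    cc.sql("DROP TABLE IF EXISTS qqdata.test_table")
    cc.sql("CREATE TABLE qqdata.test_table (id INT, name STRING) STORED BY 'carbondata'")

    // Reload from the original source data (hypothetical path).
    cc.sql("LOAD DATA INPATH 'hdfs://192.168.14.78:8020/tmp/test_table.csv' INTO TABLE qqdata.test_table")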


--
Regards | Pallavi Singh
Software Consultant

Re: SPARK update error

Erlu Chen
In reply to this post by 孙而焓
Hi sunerhan,

As far as I know, Carbon is only upward compatible: a table created with Spark 1.6 + Carbon 1.1.0 can be accessed from Spark 2.1 + Carbon 1.1.0, but not the other way around.

Carbon performs a schema path check in Spark 2.1 + Carbon 1.1.0, and it uses a different CarbonDatasourceHadoopRelation for the Spark 1.6 and Spark 2.1 integrations; this part of the processing logic was refactored between the two.

Carbon cannot identify a table created with Spark 2.1 + Carbon 1.1.0 when you query it from Spark 1.6 + Carbon 1.1.0; it may be treated as a Hive table, and if the path does not exist or the schema file is not in the expected format, this error occurs.
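
This matches the stack trace above: CarbonDatasourceHadoopRelation rejects the default Hive warehouse path that Spark 1.6 picked up from the metastore, because it is not a valid Carbon store path. An illustrative sketch of that kind of check (a simplification, not CarbonData's actual implementation):

    // Hypothetical simplification of the validation that throws
    // "invalid CarbonData file path": the table path must resolve
    // under the configured carbon store.
    def isUnderCarbonStore(storePath: String, tablePath: String): Boolean =
      tablePath.stripSuffix("/").startsWith(storePath.stripSuffix("/"))

    val store    = "hdfs://192.168.14.78:8020/apps/hive/guoht/qqdatastore"
    val fromHive = "hdfs://hdp78.ffcs.cn:8020/apps/hive/warehouse/qqdata.db/test_table"

    if (!isUnderCarbonStore(store, fromHive))
      throw new IllegalArgumentException(s"invalid CarbonData file path: $fromHive")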

Regards.
Chenerlu

Re: SPARK update error

孙而焓
The path is fixed; I have never changed the carbon store path.
孙而焓【FFCS研究院】