[Exception] A Thrift-related problem occurred when trying the 0.0.1 release version

Zen Wellon
Hi guys,

Congratulations on the first stable version!
Today I heard that 0.0.1 was released and built a fresh jar for my Spark
cluster. But when I tried to create a new table, an exception occurred.
Could anyone help?
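
Roughly what I ran in spark-shell, as a sketch (the store path and the column
list below are placeholders, not my real setup, and I'm assuming the
CarbonContext(sc, storePath) constructor):

    import org.apache.spark.sql.CarbonContext

    // cc points at the carbon store directory we had been using before the upgrade
    val cc = new CarbonContext(sc, "hdfs://namenode:9000/user/carbon/store") // placeholder path

    // placeholder columns; the real DDL is elided in the log below
    cc.sql("create table if not exists carbondata_001_release_test(id int, name string)")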

Below is the full stack trace:

INFO  26-08 23:23:46,062 - Parsing command: create table if not exists carbondata_001_release_test(......)
INFO  26-08 23:23:46,086 - Parse Completed
java.io.IOException: org.apache.thrift.protocol.TProtocolException: Required field 'fact_table' was not present! Struct: TableInfo(fact_table:null, aggregate_table_list:null)
        at org.apache.carbondata.core.reader.ThriftReader.read(ThriftReader.java:110)
        at org.apache.spark.sql.hive.CarbonMetastoreCatalog$$anonfun$fillMetaData$1$$anonfun$apply$1.apply(CarbonMetastoreCatalog.scala:216)
        at org.apache.spark.sql.hive.CarbonMetastoreCatalog$$anonfun$fillMetaData$1$$anonfun$apply$1.apply(CarbonMetastoreCatalog.scala:196)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
        at org.apache.spark.sql.hive.CarbonMetastoreCatalog$$anonfun$fillMetaData$1.apply(CarbonMetastoreCatalog.scala:196)
        at org.apache.spark.sql.hive.CarbonMetastoreCatalog$$anonfun$fillMetaData$1.apply(CarbonMetastoreCatalog.scala:191)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
        at org.apache.spark.sql.hive.CarbonMetastoreCatalog.fillMetaData(CarbonMetastoreCatalog.scala:191)
        at org.apache.spark.sql.hive.CarbonMetastoreCatalog.loadMetadata(CarbonMetastoreCatalog.scala:177)
        at org.apache.spark.sql.hive.CarbonMetastoreCatalog.<init>(CarbonMetastoreCatalog.scala:112)
        at org.apache.spark.sql.CarbonContext$$anon$1.<init>(CarbonContext.scala:70)
        at org.apache.spark.sql.CarbonContext.catalog$lzycompute(CarbonContext.scala:70)
        at org.apache.spark.sql.CarbonContext.catalog(CarbonContext.scala:67)
        at org.apache.spark.sql.CarbonContext$$anon$2.<init>(CarbonContext.scala:75)
        at org.apache.spark.sql.CarbonContext.analyzer$lzycompute(CarbonContext.scala:75)
        at org.apache.spark.sql.CarbonContext.analyzer(CarbonContext.scala:74)
        at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:34)
        at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:133)
        at org.apache.carbondata.spark.rdd.CarbonDataFrameRDD.<init>(CarbonDataFrameRDD.scala:23)
        at org.apache.spark.sql.CarbonContext.sql(CarbonContext.scala:130)
        at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:35)
        at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:40)
        at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:42)
        at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:44)
        at $iwC$$iwC$$iwC$$iwC.<init>(<console>:46)
        at $iwC$$iwC$$iwC.<init>(<console>:48)
        at $iwC$$iwC.<init>(<console>:50)
        at $iwC.<init>(<console>:52)
        at <init>(<console>:54)
        at .<init>(<console>:58)
        at .<clinit>(<console>)
        at .<init>(<console>:7)
        at .<clinit>(<console>)
        at $print(<console>)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
        at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
        at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
        at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
        at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
        at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
        at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
        at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
        at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
        at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
        at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
        at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
        at org.apache.spark.repl.Main$.main(Main.scala:31)
        at org.apache.spark.repl.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.thrift.protocol.TProtocolException: Required field 'fact_table' was not present! Struct: TableInfo(fact_table:null, aggregate_table_list:null)
        at org.apache.carbondata.format.TableInfo.validate(TableInfo.java:397)
        at org.apache.carbondata.format.TableInfo$TableInfoStandardScheme.read(TableInfo.java:478)
        at org.apache.carbondata.format.TableInfo$TableInfoStandardScheme.read(TableInfo.java:430)
        at org.apache.carbondata.format.TableInfo.read(TableInfo.java:363)
        at org.apache.carbondata.core.reader.ThriftReader.read(ThriftReader.java:108)
        ... 67 more

--
Best regards,
William Zen

Re: [Exception] A Thrift-related problem occurred when trying the 0.0.1 release version

ravipesala
Hi William,

It may be because you are using an old carbon store. Please try using a new
store path. There were changes to the Thrift format, so an old store won't
work on this release.
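
For example, a minimal sketch of starting from a fresh store location (the
path is illustrative, and this assumes the CarbonContext(sc, storePath)
constructor):

    import org.apache.spark.sql.CarbonContext

    // Point CarbonContext at an empty directory so that no metadata written by an
    // older build is read back through the new Thrift definitions.
    val cc = new CarbonContext(sc, "hdfs://namenode:9000/user/carbon/store_001")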

Thanks & Regards,
Ravi


Re: [Exception] A Thrift-related problem occurred when trying the 0.0.1 release version

Zen Wellon
Yes, I resolved the problem by deleting the old carbon metastore.
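
For anyone hitting the same thing, roughly what the cleanup looks like from
spark-shell, as a sketch (the store path is a placeholder, not my actual
cluster layout):

    import org.apache.hadoop.fs.{FileSystem, Path}

    // Recursively delete the old carbon store/metastore directory so the new
    // release starts from a clean state. Back it up first if you still need it.
    val oldStore = new Path("hdfs://namenode:9000/user/carbon/store") // placeholder path
    val fs = FileSystem.get(sc.hadoopConfiguration)                   // sc: the spark-shell SparkContext
    if (fs.exists(oldStore)) fs.delete(oldStore, true)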

--
Best regards,
William Zen