Re: carbondata org.apache.thrift.TBaseHelper.hashCode(segment_id) issue


Re: carbondata org.apache.thrift.TBaseHelper.hashCode(segment_id) issue

仲景武

hi, all

I have installed CarbonData successfully by following the document https://cwiki.apache.org/confluence/display/CARBONDATA/

but loading data into a CarbonData table throws an exception:


run command:
cc.sql("load data local inpath '../carbondata/sample.csv' into table test_table")

errors:

org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: /home/bigdata/bigdata/carbondata/sample.csv
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:321)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:264)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:385)
at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:120)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1307)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.take(RDD.scala:1302)
at com.databricks.spark.csv.CarbonCsvRelation.firstLine$lzycompute(CarbonCsvRelation.scala:181)
at com.databricks.spark.csv.CarbonCsvRelation.firstLine(CarbonCsvRelation.scala:176)
at com.databricks.spark.csv.CarbonCsvRelation.inferSchema(CarbonCsvRelation.scala:144)
at com.databricks.spark.csv.CarbonCsvRelation.<init>(CarbonCsvRelation.scala:74)
at com.databricks.spark.csv.newapi.DefaultSource.createRelation(DefaultSource.scala:142)
at com.databricks.spark.csv.newapi.DefaultSource.createRelation(DefaultSource.scala:44)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:109)
at org.apache.carbondata.spark.util.GlobalDictionaryUtil$.loadDataFrame(GlobalDictionaryUtil.scala:386)
at org.apache.carbondata.spark.util.GlobalDictionaryUtil$.generateGlobalDictionary(GlobalDictionaryUtil.scala:767)
at org.apache.spark.sql.execution.command.LoadTable.run(carbonTableSchema.scala:1170)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:145)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:130)
at org.apache.carbondata.spark.rdd.CarbonDataFrameRDD.<init>(CarbonDataFrameRDD.scala:23)
at org.apache.spark.sql.CarbonContext.sql(CarbonContext.scala:137)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:42)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:47)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:49)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:51)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:53)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:55)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:57)
at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:59)
at $iwC$$iwC$$iwC$$iwC.<init>(<console>:61)
at $iwC$$iwC$$iwC.<init>(<console>:63)
at $iwC$$iwC.<init>(<console>:65)
at $iwC.<init>(<console>:67)
at <init>(<console>:69)
at .<init>(<console>:73)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.carbon.Main$.main(Main.scala:31)
at org.apache.spark.repl.carbon.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)




cat /home/bigdata/bigdata/carbondata/sample.csv

id,name,city,age
1,david,shenzhen,31
2,eason,shenzhen,27
3,jarry,wuhan,35



ip:taonongyuan.com  username:bigdata  passwd: Zjw11763

this is a private Aliyun CentOS server; you can log in to debug.



regards,
仲景武





On Sep 27, 2016, at 4:56 AM, Liang Big data <[hidden email]> wrote:

Hi zhongjingwu:

Can you move these discussions to the mailing list: [hidden email]?
You may get more help from the mailing list.

Regards
Liang

On Sep 26, 2016, at 8:48 PM, 仲景武 <[hidden email]> wrote:

On Sep 26, 2016, at 8:46 PM, 仲景武 <[hidden email]> wrote:

On Sep 26, 2016, at 8:45 PM, 仲景武 <[hidden email]> wrote:

@Override
public int hashCode() {
  int hashCode = 1;
  hashCode = hashCode * 8191 + min_surrogate_key;
  hashCode = hashCode * 8191 + max_surrogate_key;
  hashCode = hashCode * 8191 + org.apache.thrift.TBaseHelper.hashCode(start_offset);
  hashCode = hashCode * 8191 + org.apache.thrift.TBaseHelper.hashCode(end_offset);
  hashCode = hashCode * 8191 + chunk_count;
  hashCode = hashCode * 8191 + ((isSetSegment_id()) ? 131071 : 524287);
  if (isSetSegment_id())
    hashCode = hashCode * 8191 + org.apache.thrift.TBaseHelper.hashCode(segment_id);
  return hashCode;
}

I don't see an overload like org.apache.thrift.TBaseHelper.hashCode(int) in the source code, so this fails to compile. What is going on?
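
A minimal workaround sketch, assuming segment_id is an int and the libthrift version on the classpath simply lacks a matching TBaseHelper.hashCode overload (which usually indicates a thrift compiler / libthrift version mismatch): fold the primitive into the hash directly, keeping the surrounding constants unchanged:

  // assumed: segment_id is an int; an int folds into the hash without a helper
  hashCode = hashCode * 8191 + ((isSetSegment_id()) ? 131071 : 524287);
  if (isSetSegment_id())
    hashCode = hashCode * 8191 + segment_id;

The cleaner fix is to regenerate the thrift sources with a compiler version that matches the libthrift jar on the classpath.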




--

Regards
Liang


Re: carbondata org.apache.thrift.TBaseHelper.hashCode(segment_id) issue

Lion.X
hi, jingwu,

Currently, Carbon does not support loading data from the local file system. Please put the file into HDFS and try again.
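
For example (a sketch; the namenode address name001:9000 is taken from later messages in this thread):

hadoop fs -mkdir -p /carbondata
hadoop fs -put /home/bigdata/bigdata/carbondata/sample.csv hdfs://name001:9000/carbondata/
cc.sql("load data inpath 'hdfs://name001:9000/carbondata/sample.csv' into table test_table")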

Lionx


load data error

仲景武

when running the command (thrift server):

jdbc:hive2://taonongyuan.com:10099/default> load data inpath 'hdfs://name001:9000/carbondata/sample.csv' into table test_table3;


it throws an exception:

Driver stacktrace: (state=,code=0)
0: jdbc:hive2://taonongyuan.com:10099/default> load data inpath 'hdfs:///name001:9000/carbondata/sample.csv' into table test_table3;
Error: java.lang.IllegalArgumentException: Pathname /name001:9000/carbondata/sample.csv from hdfs:/name001:9000/carbondata/sample.csv is not a valid DFS filename. (state=,code=0)
0: jdbc:hive2://taonongyuan.com:10099/default> load data inpath 'hdfs://name001:9000/carbondata/sample.csv' into table test_table3;
Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 5.0 failed 4 times, most recent failure: Lost task 0.3 in stage 5.0 (TID 18, data002): java.lang.IllegalArgumentException: Wrong FS: hdfs://name001:9000/user/hive/warehouse/carbon.store/default/test_table3/Metadata/fdd8c8c4-5cdd-4542-aab1-785be20b9f36.dictmeta, expected: file:///
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645)
at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80)
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:529)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:140)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:341)
at org.apache.carbondata.core.datastorage.store.impl.FileFactory.getDataInputStream(FileFactory.java:146)
at org.apache.carbondata.core.reader.ThriftReader.open(ThriftReader.java:79)
at org.apache.carbondata.core.reader.CarbonDictionaryMetadataReaderImpl.openThriftReader(CarbonDictionaryMetadataReaderImpl.java:181)
at org.apache.carbondata.core.reader.CarbonDictionaryMetadataReaderImpl.readLastEntryOfDictionaryMetaChunk(CarbonDictionaryMetadataReaderImpl.java:128)
at org.apache.carbondata.core.cache.dictionary.AbstractDictionaryCache.readLastChunkFromDictionaryMetadataFile(AbstractDictionaryCache.java:129)
at org.apache.carbondata.core.cache.dictionary.AbstractDictionaryCache.checkAndLoadDictionaryData(AbstractDictionaryCache.java:204)
at org.apache.carbondata.core.cache.dictionary.ReverseDictionaryCache.getDictionary(ReverseDictionaryCache.java:181)
at org.apache.carbondata.core.cache.dictionary.ReverseDictionaryCache.get(ReverseDictionaryCache.java:69)
at org.apache.carbondata.core.cache.dictionary.ReverseDictionaryCache.get(ReverseDictionaryCache.java:40)
at org.apache.carbondata.spark.load.CarbonLoaderUtil.getDictionary(CarbonLoaderUtil.java:508)
at org.apache.carbondata.spark.load.CarbonLoaderUtil.getDictionary(CarbonLoaderUtil.java:514)
at org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD$$anon$1.<init>(CarbonGlobalDictionaryRDD.scala:362)
at org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD.compute(CarbonGlobalDictionaryRDD.scala:293)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
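
The "Wrong FS ... expected: file:///" error from the executor on data002 means a path carrying the hdfs:// scheme was handed to the local RawLocalFileSystem. One plausible cause (an assumption, not confirmed in this thread) is that fs.defaultFS in the core-site.xml visible to the worker nodes still points at the local filesystem. A core-site.xml sketch for every Spark node:

<property>
  <!-- resolve unqualified paths against the cluster namenode, not file:/// -->
  <name>fs.defaultFS</name>
  <value>hdfs://name001:9000</value>
</property>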





Re: load data error

Lion.X
1. Please check whether the file exists.
2. If it does, remove "hdfs://name001:9000/" from your path and try again.
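
A sketch of both steps, using the namenode address from earlier in this thread:

hdfs dfs -ls hdfs://name001:9000/carbondata/sample.csv
0: jdbc:hive2://taonongyuan.com:10099/default> load data inpath '/carbondata/sample.csv' into table test_table3;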

Re: load data error

zhujin
In reply to this post by 仲景武
try hdfs://name001:9000/carbondata/sample.csv
instead of
hdfs:///name001:9000/carbondata/sample.csv
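
The extra slash changes how the URI parses (illustrative breakdown):

hdfs://name001:9000/carbondata/sample.csv    authority = "name001:9000", path = "/carbondata/sample.csv"
hdfs:///name001:9000/carbondata/sample.csv   authority is empty, so the path becomes "/name001:9000/carbondata/sample.csv", which is why the driver rejects it as "not a valid DFS filename"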

Sent from my iPhone

> 在 2016年10月20日,上午10:52,仲景武 <[hidden email]> 写道:
>
>
> when run command (thrift sever):
>
> jdbc:hive2://taonongyuan.com<http://taonongyuan.com/>:10099/default> load data inpath 'hdfs://name001:9000/carbondata/sample.csv' into table test_table3;
>
>
> throw exception:
>
> Driver stacktrace: (state=,code=0)
> 0: jdbc:hive2://taonongyuan.com<http://taonongyuan.com/>:10099/default> load data inpath 'hdfs:///name001:9000/carbondata/sample.csv' into table test_table3;
> Error: java.lang.IllegalArgumentException: Pathname /name001:9000/carbondata/sample.csv from hdfs:/name001:9000/carbondata/sample.csv is not a valid DFS filename. (state=,code=0)
> 0: jdbc:hive2://taonongyuan.com<http://taonongyuan.com/>:10099/default> load data inpath 'hdfs://name001:9000/carbondata/sample.csv' into table test_table3;
> Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 5.0 failed 4 times, most recent failure: Lost task 0.3 in stage 5.0 (TID 18, data002): java.lang.IllegalArgumentException: Wrong FS: hdfs://name001:9000/user/hive/warehouse/carbon.store/default/test_table3/Metadata/fdd8c8c4-5cdd-4542-aab1-785be20b9f36.dictmeta, expected: file:///
> at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645)
> at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80)
> at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:529)
> at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747)
> at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524)
> at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409)
> at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:140)
> at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:341)
> at org.apache.carbondata.core.datastorage.store.impl.FileFactory.getDataInputStream(FileFactory.java:146)
> at org.apache.carbondata.core.reader.ThriftReader.open(ThriftReader.java:79)
> at org.apache.carbondata.core.reader.CarbonDictionaryMetadataReaderImpl.openThriftReader(CarbonDictionaryMetadataReaderImpl.java:181)
> at org.apache.carbondata.core.reader.CarbonDictionaryMetadataReaderImpl.readLastEntryOfDictionaryMetaChunk(CarbonDictionaryMetadataReaderImpl.java:128)
> at org.apache.carbondata.core.cache.dictionary.AbstractDictionaryCache.readLastChunkFromDictionaryMetadataFile(AbstractDictionaryCache.java:129)
> at org.apache.carbondata.core.cache.dictionary.AbstractDictionaryCache.checkAndLoadDictionaryData(AbstractDictionaryCache.java:204)
> at org.apache.carbondata.core.cache.dictionary.ReverseDictionaryCache.getDictionary(ReverseDictionaryCache.java:181)
> at org.apache.carbondata.core.cache.dictionary.ReverseDictionaryCache.get(ReverseDictionaryCache.java:69)
> at org.apache.carbondata.core.cache.dictionary.ReverseDictionaryCache.get(ReverseDictionaryCache.java:40)
> at org.apache.carbondata.spark.load.CarbonLoaderUtil.getDictionary(CarbonLoaderUtil.java:508)
> at org.apache.carbondata.spark.load.CarbonLoaderUtil.getDictionary(CarbonLoaderUtil.java:514)
> at org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD$$anon$1.<init>(CarbonGlobalDictionaryRDD.scala:362)
> at org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD.compute(CarbonGlobalDictionaryRDD.scala:293)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
> at org.apache.spark.scheduler.Task.run(Task.scala:89)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
>
>
>
> 在 2016年10月19日,下午4:55,仲景武 <[hidden email]<mailto:[hidden email]>> 写道:
>
>
> hi, all
>
> I have installed carbonate succeed  following the document “https://cwiki.apache.org/confluence/display/CARBONDATA/“<https://cwiki.apache.org/confluence/display/CARBONDATA/%E2%80%9C>
>
> but when load data into carbonate table  throws exception:
>
>
> run command:
> cc.sql("load data local inpath '../carbondata/sample.csv' into table test_table")
>
> errors:
>
> org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: /home/bigdata/bigdata/carbondata/sample.csv
> at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:321)
> at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:264)
> at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:385)
> at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:120)
> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
> at scala.Option.getOrElse(Option.scala:120)
> at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
> at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
> at scala.Option.getOrElse(Option.scala:120)
> at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
> at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1307)
> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
> at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
> at org.apache.spark.rdd.RDD.take(RDD.scala:1302)
> at com.databricks.spark.csv.CarbonCsvRelation.firstLine$lzycompute(CarbonCsvRelation.scala:181)
> at com.databricks.spark.csv.CarbonCsvRelation.firstLine(CarbonCsvRelation.scala:176)
> at com.databricks.spark.csv.CarbonCsvRelation.inferSchema(CarbonCsvRelation.scala:144)
> at com.databricks.spark.csv.CarbonCsvRelation.<init>(CarbonCsvRelation.scala:74)
> at com.databricks.spark.csv.newapi.DefaultSource.createRelation(DefaultSource.scala:142)
> at com.databricks.spark.csv.newapi.DefaultSource.createRelation(DefaultSource.scala:44)
> at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
> at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
> at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:109)
> at org.apache.carbondata.spark.util.GlobalDictionaryUtil$.loadDataFrame(GlobalDictionaryUtil.scala:386)
> at org.apache.carbondata.spark.util.GlobalDictionaryUtil$.generateGlobalDictionary(GlobalDictionaryUtil.scala:767)
> at org.apache.spark.sql.execution.command.LoadTable.run(carbonTableSchema.scala:1170)
> at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
> at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
> at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
> at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
> at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
> at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
> at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:145)
> at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:130)
> at org.apache.carbondata.spark.rdd.CarbonDataFrameRDD.<init>(CarbonDataFrameRDD.scala:23)
> at org.apache.spark.sql.CarbonContext.sql(CarbonContext.scala:137)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:42)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:47)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:49)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:51)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:53)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:55)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:57)
> at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:59)
> at $iwC$$iwC$$iwC$$iwC.<init>(<console>:61)
> at $iwC$$iwC$$iwC.<init>(<console>:63)
> at $iwC$$iwC.<init>(<console>:65)
> at $iwC.<init>(<console>:67)
> at <init>(<console>:69)
> at .<init>(<console>:73)
> at .<clinit>(<console>)
> at .<init>(<console>:7)
> at .<clinit>(<console>)
> at $print(<console>)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
> at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
> at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
> at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
> at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
> at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
> at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
> at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
> at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
> at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
> at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
> at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
> at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
> at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
> at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
> at org.apache.spark.repl.carbon.Main$.main(Main.scala:31)
> at org.apache.spark.repl.carbon.Main.main(Main.scala)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
> at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
> at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>
>
>
>
> cat /home/bigdata/bigdata/carbondata/sample.csv
>
> id,name,city,age
> 1,david,shenzhen,31
> 2,eason,shenzhen,27
> 3,jarry,wuhan,35
>
>
>
> ip:taonongyuan.com  username:bigdata  passwd: Zjw11763
>
> this is private <http://www.baidu.com/link?url=zuB-GJ6ONUu4xqmpd_NsR53R4f-Dwi037YBSX9Xc1DOs3kXtzG5XjyhXo7uAOcC1hfRcTaEnGZQoscjTduMloRYu-KsmdmEUPsq68db0VH3MoYCMd5IamXotlUEffF9b> aly cents server ,you can login to debug…..
>
>
>
> regards,
> 仲景武
>
>
>
>
>
> 在 2016年9月27日,上午4:56,Liang Big data <[hidden email]<mailto:[hidden email]>> 写道:
>
> Hi zhongjingwu:
>
> Can you put these discussions into mailing list : [hidden email]<mailto:[hidden email]>
> You may get more helps from mailing list.
>
> Regards
> Liang
>
> 在 2016年9月26日 下午8:48,仲景武 <[hidden email]<mailto:[hidden email]>>写道:
>
> 在 2016年9月26日,下午8:46,仲景武 <[hidden email]<mailto:[hidden email]>> 写道:
>
>
> On Sep 26, 2016, at 8:45 PM, 仲景武 <[hidden email]> wrote:
>
> @Override
> public int hashCode() {
>  int hashCode = 1;
>
>  hashCode = hashCode * 8191 + min_surrogate_key;
>
>  hashCode = hashCode * 8191 + max_surrogate_key;
>
>  hashCode = hashCode * 8191 + org.apache.thrift.TBaseHelper.hashCode(start_offset);
>
>  hashCode = hashCode * 8191 + org.apache.thrift.TBaseHelper.hashCode(end_offset);
>
>  hashCode = hashCode * 8191 + chunk_count;
>
>  hashCode = hashCode * 8191 + ((isSetSegment_id()) ? 131071 : 524287);
>  if (isSetSegment_id())
>    hashCode = hashCode * 8191 + org.apache.thrift.TBaseHelper.hashCode(segment_id);
>
>  return hashCode;
> }
>
> I don't see an overload like org.apache.thrift.TBaseHelper.hashCode(int) anywhere in the source, so this doesn't compile. What's going on?
>
> <FE4BB0D35DD30805BE3BD071450C3118.jpeg>
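A note on the compile failure quoted above: the 8191-multiplier hashCode style is what newer Thrift compilers generate, and it calls overloads such as org.apache.thrift.TBaseHelper.hashCode(long) that older libthrift jars do not ship. The error is therefore consistent with an older libthrift on the compile classpath than the one this code was generated for. A minimal diagnostic sketch, assuming a Scala REPL with libthrift on the classpath (not from the thread):

// List the hashCode overloads the resolved TBaseHelper actually provides
// (the inherited no-arg Object.hashCode() will show up too), then print
// which jar the class was loaded from.
val helper = classOf[org.apache.thrift.TBaseHelper]
helper.getMethods.filter(_.getName == "hashCode").foreach(println)
println(helper.getProtectionDomain.getCodeSource.getLocation)

If hashCode(long) is absent from that list, aligning the libthrift dependency with the Thrift compiler version that generated this code is the first thing to check.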
>
>
>
> --
>
> Regards
> Liang
>
>
>

Reply | Threaded
Open this post in threaded view
|

Re: load data error

仲景武
Sorry, it still can't run…

0: jdbc:hive2://taonongyuan.com:10099/default> load data inpath 'hdfs:///name001:9000/carbondata/sample.csv' into table test_table3;
Error: java.lang.IllegalArgumentException: Pathname /name001:9000/carbondata/sample.csv from hdfs:/name001:9000/carbondata/sample.csv is not a valid DFS filename. (state=,code=0)
0: jdbc:hive2://taonongyuan.com:10099/default> load data inpath '/carbondata/sample.csv' into table test_table3;
Error: org.apache.carbondata.processing.etl.DataLoadingException: The input file does not exist: hdfs://name001:9000hdfs://name001:9000/opt/data/carbondata/sample.csv (state=,code=0)
0: jdbc:hive2://taonongyuan.com:10099/default>
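Both failures above follow from how the URI is parsed. In hdfs://name001:9000/carbondata/sample.csv the host:port pair is the URI authority; in hdfs:///name001:9000/carbondata/sample.csv the authority is empty, so name001:9000 becomes the first path component, and ':' is not a legal character in a DFS filename. The doubled hdfs://name001:9000hdfs://name001:9000 prefix in the second error suggests a configured base or store path that already contains the scheme being prepended a second time (an inference, not confirmed in this thread). A small sketch of the parsing difference:

// java.net.URI is enough to see why the triple-slash form breaks
val good = new java.net.URI("hdfs://name001:9000/carbondata/sample.csv")
println(good.getAuthority) // name001:9000
println(good.getPath)      // /carbondata/sample.csv
val bad = new java.net.URI("hdfs:///name001:9000/carbondata/sample.csv")
println(bad.getAuthority)  // null: the authority is empty
println(bad.getPath)       // /name001:9000/carbondata/sample.csv, and ':' is rejected by HDFS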
On Oct 20, 2016, at 8:19 PM, foryou2030 <[hidden email]> wrote:

Try hdfs://name001:9000/carbondata/sample.csv
instead of
hdfs:///name001:9000/carbondata/sample.csv

Sent from my iPhone

On Oct 20, 2016, at 10:52 AM, 仲景武 <[hidden email]> wrote:


When running this command against the Thrift server:

jdbc:hive2://taonongyuan.com:10099/default> load data inpath 'hdfs://name001:9000/carbondata/sample.csv' into table test_table3;


it throws this exception:

Driver stacktrace: (state=,code=0)
0: jdbc:hive2://taonongyuan.com:10099/default> load data inpath 'hdfs:///name001:9000/carbondata/sample.csv' into table test_table3;
Error: java.lang.IllegalArgumentException: Pathname /name001:9000/carbondata/sample.csv from hdfs:/name001:9000/carbondata/sample.csv is not a valid DFS filename. (state=,code=0)
0: jdbc:hive2://taonongyuan.com:10099/default> load data inpath 'hdfs://name001:9000/carbondata/sample.csv' into table test_table3;
Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 5.0 failed 4 times, most recent failure: Lost task 0.3 in stage 5.0 (TID 18, data002): java.lang.IllegalArgumentException: Wrong FS: hdfs://name001:9000/user/hive/warehouse/carbon.store/default/test_table3/Metadata/fdd8c8c4-5cdd-4542-aab1-785be20b9f36.dictmeta, expected: file:///
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645)
at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80)
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:529)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:140)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:341)
at org.apache.carbondata.core.datastorage.store.impl.FileFactory.getDataInputStream(FileFactory.java:146)
at org.apache.carbondata.core.reader.ThriftReader.open(ThriftReader.java:79)
at org.apache.carbondata.core.reader.CarbonDictionaryMetadataReaderImpl.openThriftReader(CarbonDictionaryMetadataReaderImpl.java:181)
at org.apache.carbondata.core.reader.CarbonDictionaryMetadataReaderImpl.readLastEntryOfDictionaryMetaChunk(CarbonDictionaryMetadataReaderImpl.java:128)
at org.apache.carbondata.core.cache.dictionary.AbstractDictionaryCache.readLastChunkFromDictionaryMetadataFile(AbstractDictionaryCache.java:129)
at org.apache.carbondata.core.cache.dictionary.AbstractDictionaryCache.checkAndLoadDictionaryData(AbstractDictionaryCache.java:204)
at org.apache.carbondata.core.cache.dictionary.ReverseDictionaryCache.getDictionary(ReverseDictionaryCache.java:181)
at org.apache.carbondata.core.cache.dictionary.ReverseDictionaryCache.get(ReverseDictionaryCache.java:69)
at org.apache.carbondata.core.cache.dictionary.ReverseDictionaryCache.get(ReverseDictionaryCache.java:40)
at org.apache.carbondata.spark.load.CarbonLoaderUtil.getDictionary(CarbonLoaderUtil.java:508)
at org.apache.carbondata.spark.load.CarbonLoaderUtil.getDictionary(CarbonLoaderUtil.java:514)
at org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD$$anon$1.<init>(CarbonGlobalDictionaryRDD.scala:362)
at org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD.compute(CarbonGlobalDictionaryRDD.scala:293)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
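
The "Wrong FS ... expected: file:///" message means a path with an hdfs:// scheme was handed to the local RawLocalFileSystem, which only accepts file:/// paths. In a Spark job this usually indicates that the Hadoop configuration seen where the dictionary metadata is read resolves the default filesystem to file:/// instead of HDFS. A hedged diagnostic sketch, assuming a spark-shell where sc is the SparkContext (not from the thread):

import org.apache.hadoop.fs.{FileSystem, Path}
// Which filesystem does the driver-side Hadoop configuration resolve to?
val conf = sc.hadoopConfiguration
println(conf.get("fs.defaultFS"))    // expect hdfs://name001:9000, not file:///
println(FileSystem.get(conf).getUri) // the FS that unqualified paths use
// A fully qualified path selects its filesystem by scheme, regardless of the default:
println(new Path("hdfs://name001:9000/tmp").getFileSystem(conf).getUri)

If fs.defaultFS comes back as file:///, pointing HADOOP_CONF_DIR (or the Spark configuration) at the cluster's core-site.xml is the first thing to verify.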



On Oct 19, 2016, at 4:55 PM, 仲景武 <[hidden email]> wrote:

…