Re: Query failed after "update" statement interrupted
Posted by Liang Chen on Oct 16, 2017; 11:31am
URL: http://apache-carbondata-dev-mailing-list-archive.168.s1.nabble.com/Query-failed-after-update-statement-interruptted-tp24063p24123.html
Hi,

Can you provide the full script? What is your update statement, and how can we reproduce the issue?

Regards,
Liang
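
For example, a minimal end-to-end script along these lines would help. This is only a hypothetical sketch of what a reproduction might look like (the schema, values, and the way the update was interrupted are assumptions, not taken from the report):

```scala
// Hypothetical reproduction sketch -- schema and values are placeholders;
// please adapt to the real qqdata2.oc_indextest definition and data.
cc.sql("CREATE TABLE IF NOT EXISTS qqdata2.oc_indextest (id STRING, val STRING) STORED BY 'carbondata'")
cc.sql("INSERT INTO qqdata2.oc_indextest SELECT '1999100000', 'v1'")

// The update that gets interrupted mid-execution
// (e.g. Ctrl-C in the shell, or the driver/session being killed):
cc.sql("UPDATE qqdata2.oc_indextest SET (val) = ('v2') WHERE id = '1999100000'")

// The query that afterwards fails with the NullPointerException:
cc.sql("select * from qqdata2.oc_indextest where id = '1999100000'").show(100, false)
```

Knowing exactly how the update was interrupted (which step, which signal) would make it much easier to reproduce.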
yixu2001 wrote
> dev
>
> While an "update" statement was executing, it was interrupted.
> After that, "select" statements on the table failed.
> Sometimes the "select" statement recovers and succeeds again, but
> sometimes it does not recover.
>
> The error information is as follows:
>
> "scala> cc.sql("select * from qqdata2.oc_indextest where id = '1999100000'").show(100,false);
> java.lang.NullPointerException
>   at org.apache.carbondata.hadoop.CarbonInputFormat.getDataBlocksOfSegment(CarbonInputFormat.java:381)
>   at org.apache.carbondata.hadoop.CarbonInputFormat.getSplits(CarbonInputFormat.java:316)
>   at org.apache.carbondata.hadoop.CarbonInputFormat.getSplits(CarbonInputFormat.java:262)
>   at org.apache.carbondata.spark.rdd.CarbonScanRDD.getPartitions(CarbonScanRDD.scala:81)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
>   at scala.Option.getOrElse(Option.scala:121)
>   at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
>   at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
>   at scala.Option.getOrElse(Option.scala:121)
>   at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
>   at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
>   at scala.Option.getOrElse(Option.scala:121)
>   at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
>   at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:311)
>   at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
>   at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2378)
>   at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
>   at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2780)
>   at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2377)
>   at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2384)
>   at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2120)
>   at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2119)
>   at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2810)
>   at org.apache.spark.sql.Dataset.head(Dataset.scala:2119)
>   at org.apache.spark.sql.Dataset.take(Dataset.scala:2334)
>   at org.apache.spark.sql.Dataset.showString(Dataset.scala:248)
>   at org.apache.spark.sql.Dataset.show(Dataset.scala:640)
>   ... 50 elided"
>
> yixu2001
--
Sent from:
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/