minor compact throw err 'IndexBuilderException'


minor compact throw err 'IndexBuilderException'

Li Peng
Hello,
  In spark-shell with CarbonData 0.2.0, a minor compaction throws this error:

WARN  05-01 15:04:06,964 - Lost task 0.0 in stage 0.0 (TID 1, dpnode08): org.apache.carbondata.core.carbon.datastore.exception.IndexBuilderException:
        at org.apache.carbondata.integration.spark.merger.CarbonCompactionUtil.createDataFileFooterMappingForSegments(CarbonCompactionUtil.java:127)
        at org.apache.carbondata.spark.rdd.CarbonMergerRDD$$anon$1.<init>(CarbonMergerRDD.scala:121)
        at org.apache.carbondata.spark.rdd.CarbonMergerRDD.compute(CarbonMergerRDD.scala:70)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.carbondata.core.util.CarbonUtilException: Problem while reading the file metadata
        at org.apache.carbondata.core.util.CarbonUtil.readMetadatFile(CarbonUtil.java:1063)
        at org.apache.carbondata.integration.spark.merger.CarbonCompactionUtil.createDataFileFooterMappingForSegments(CarbonCompactionUtil.java:123)
        ... 10 more
Caused by: java.io.IOException: It doesn't set the offset properly
        at org.apache.carbondata.core.reader.ThriftReader.setReadOffset(ThriftReader.java:88)
        at org.apache.carbondata.core.reader.CarbonFooterReader.readFooter(CarbonFooterReader.java:55)
        at org.apache.carbondata.core.util.DataFileFooterConverter.readDataFileFooter(DataFileFooterConverter.java:148)
        at org.apache.carbondata.core.util.CarbonUtil.readMetadatFile(CarbonUtil.java:1061)
        ... 11 more
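
For context, a minimal spark-shell session that triggers a minor compaction on CarbonData 0.2 looks roughly like the sketch below; this is only a sketch, and the store path and table name are placeholders, not taken from this report:
---------------------------------------------------------------------
scala> import org.apache.spark.sql.CarbonContext
scala> // Sketch only: the store path and table name below are illustrative
scala> val cc = new CarbonContext(sc, "hdfs://namenode:9000/carbon/store")
scala> cc.sql("ALTER TABLE my_table COMPACT 'MINOR'")  // the statement that fails with the trace above
---------------------------------------------------------------------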

Re: minor compact throw err 'IndexBuilderException'

Liang Chen
Administrator
Hi,

1. I just tested this on my machine with the 0.2 version, and it works fine:
---------------------------------------------------------------------
scala> cc.sql("ALTER TABLE connectdemo1 COMPACT 'MINOR'")
INFO  05-01 23:46:54,111 - main Query [ALTER TABLE CONNECTDEMO1 COMPACT
'MINOR']
INFO  05-01 23:46:54,115 - Parsing command: alter table  connectdemo1
COMPACT 'MINOR'
INFO  05-01 23:46:54,116 - Parse Completed
AUDIT 05-01 23:46:54,379 -
[AppledeMacBook-Pro.local][apple][Thread-1]Compaction request received for
table default.connectdemo1
INFO  05-01 23:46:54,385 - main Acquired the compaction lock for table
default.connectdemo1
INFO  05-01 23:46:54,392 - main Successfully deleted the lock file
/var/folders/d3/x_28r1q932g6bq6pxcf8c6rh0000gn/T//default/connectdemo1/compaction.lock
res8: org.apache.spark.sql.DataFrame = []
------------------------------------------------------------------------

2. Can you provide the steps to reproduce the error?
3. Please check the compaction example DataManagementExample.scala and confirm
that you used the correct compaction statement; a sketch is given below.
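
For reference, the compaction calls from DataManagementExample.scala look roughly like the sketch below when run from spark-shell; this is only a sketch in which cc is an existing CarbonContext and connectdemo1 stands in for your own table:
---------------------------------------------------------------------
scala> // Sketch only: cc is an existing CarbonContext, the table name is an example
scala> cc.sql("ALTER TABLE connectdemo1 COMPACT 'MINOR'")  // merge the newest small segments
scala> cc.sql("ALTER TABLE connectdemo1 COMPACT 'MAJOR'")  // merge segments up to the major size threshold
---------------------------------------------------------------------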

Regards
Liang

2017-01-05 15:24 GMT+08:00 Li Peng <[hidden email]>:

> Hello:
>   in spark shell with carbondata 0.2.0, minor compact, throw err:
>
> [stack trace identical to the one in the original message above]



--
Regards
Liang

Re: minor compact throw err 'IndexBuilderException'

Li Peng
Thanks.
       Maybe my test data was bad; compaction works fine after I deleted some segments.
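
For anyone hitting the same error, inspecting and removing the suspect segments before retrying compaction would look roughly like the sketch below; the exact DML syntax for 0.2 should be cross-checked against DataManagementExample.scala, and the table name and segment id are placeholders:
---------------------------------------------------------------------
scala> // Sketch only: table name and segment id are placeholders, and the
scala> // DELETE/CLEAN statement syntax should be verified against the 0.2 release
scala> cc.sql("SHOW SEGMENTS FOR TABLE my_table")       // list segments and their status
scala> cc.sql("DELETE SEGMENT 2 FROM TABLE my_table")   // mark the suspect segment as deleted
scala> cc.sql("CLEAN FILES FOR TABLE my_table")         // physically remove deleted segments
scala> cc.sql("ALTER TABLE my_table COMPACT 'MINOR'")   // retry the minor compaction
---------------------------------------------------------------------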