[Issue] Load auto compaction failed


[Issue] Load auto compaction failed

aaron
Hi community,

On 1.5.0, a load using local dictionary and local sort failed once the row count reached 0.5 billion, even though I had already loaded 50 billion rows with global dictionary and sort. Do you have any ideas? The related DDL is the same as in my last post.
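
For reference, the table DDL has roughly this shape (a sketch only: the schema and SORT_COLUMNS below are placeholders, the real definition is in my last post; it assumes a spark-shell style `spark` SparkSession):

// Hypothetical sketch of the DDL shape -- only LOCAL_DICTIONARY_ENABLE and
// SORT_SCOPE differ from the earlier global-dictionary/global-sort setup.
spark.sql(
  """
    |CREATE TABLE IF NOT EXISTS default.store (
    |  app_id STRING,
    |  country STRING,
    |  event_date DATE,
    |  downloads BIGINT
    |)
    |STORED BY 'carbondata'
    |TBLPROPERTIES (
    |  'LOCAL_DICTIONARY_ENABLE' = 'true',
    |  'SORT_SCOPE' = 'LOCAL_SORT',
    |  'SORT_COLUMNS' = 'app_id,country,event_date'
    |)
  """.stripMargin)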


18/09/26 08:39:45 AUDIT CarbonTableCompactor: [ec2-dca-aa-p-sdn-16.appannie.org][hadoop][Thread-1]Compaction request completed for table default.store
18/09/26 08:46:39 WARN TaskSetManager: Lost task 1.0 in stage 216.0 (TID 1513, 10.2.3.249, executor 2): org.apache.spark.util.TaskCompletionListenerException: org.apache.carbondata.core.datastore.exception.CarbonDataWriterException

Previous exception in task:
org.apache.carbondata.core.datastore.exception.CarbonDataWriterException
        org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.processWriteTaskSubmitList(CarbonFactDataHandlerColumnar.java:353)
        org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.closeHandler(CarbonFactDataHandlerColumnar.java:377)
        org.apache.carbondata.processing.merger.RowResultMergerProcessor.execute(RowResultMergerProcessor.java:177)
        org.apache.carbondata.spark.rdd.CarbonMergerRDD$$anon$1.<init>(CarbonMergerRDD.scala:224)
        org.apache.carbondata.spark.rdd.CarbonMergerRDD.internalCompute(CarbonMergerRDD.scala:87)
        org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:78)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        org.apache.spark.scheduler.Task.run(Task.scala:109)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        java.lang.Thread.run(Thread.java:748)
        at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:139)
        at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:117)
        at org.apache.spark.scheduler.Task.run(Task.scala:119)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

18/09/26 08:53:31 WARN TaskSetManager: Lost task 1.1 in stage 216.0 (TID 1515, 10.2.3.11, executor 1): org.apache.spark.util.TaskCompletionListenerException: org.apache.carbondata.core.datastore.exception.CarbonDataWriterException

Previous exception in task:
org.apache.carbondata.core.datastore.exception.CarbonDataWriterException
        org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.processWriteTaskSubmitList(CarbonFactDataHandlerColumnar.java:353)
        org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.closeHandler(CarbonFactDataHandlerColumnar.java:377)
        org.apache.carbondata.processing.merger.RowResultMergerProcessor.execute(RowResultMergerProcessor.java:177)
        org.apache.carbondata.spark.rdd.CarbonMergerRDD$$anon$1.<init>(CarbonMergerRDD.scala:224)
        org.apache.carbondata.spark.rdd.CarbonMergerRDD.internalCompute(CarbonMergerRDD.scala:87)
        org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:78)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        org.apache.spark.scheduler.Task.run(Task.scala:109)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        java.lang.Thread.run(Thread.java:748)
        at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:139)
        at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:117)
        at org.apache.spark.scheduler.Task.run(Task.scala:119)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

18/09/26 09:00:22 WARN TaskSetManager: Lost task 1.2 in stage 216.0 (TID 1516, 10.2.3.11, executor 1): org.apache.spark.util.TaskCompletionListenerException: org.apache.carbondata.core.datastore.exception.CarbonDataWriterException

Previous exception in task:
org.apache.carbondata.core.datastore.exception.CarbonDataWriterException
        org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.processWriteTaskSubmitList(CarbonFactDataHandlerColumnar.java:353)
        org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.closeHandler(CarbonFactDataHandlerColumnar.java:377)
        org.apache.carbondata.processing.merger.RowResultMergerProcessor.execute(RowResultMergerProcessor.java:177)
        org.apache.carbondata.spark.rdd.CarbonMergerRDD$$anon$1.<init>(CarbonMergerRDD.scala:224)
        org.apache.carbondata.spark.rdd.CarbonMergerRDD.internalCompute(CarbonMergerRDD.scala:87)
        org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:78)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        org.apache.spark.scheduler.Task.run(Task.scala:109)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        java.lang.Thread.run(Thread.java:748)
        at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:139)
        at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:117)
        at org.apache.spark.scheduler.Task.run(Task.scala:119)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

18/09/26 09:07:16 WARN TaskSetManager: Lost task 1.3 in stage 216.0 (TID 1517, 10.2.3.249, executor 2): org.apache.spark.util.TaskCompletionListenerException: org.apache.carbondata.core.datastore.exception.CarbonDataWriterException

Previous exception in task:
org.apache.carbondata.core.datastore.exception.CarbonDataWriterException
        org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.processWriteTaskSubmitList(CarbonFactDataHandlerColumnar.java:353)
        org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.closeHandler(CarbonFactDataHandlerColumnar.java:377)
        org.apache.carbondata.processing.merger.RowResultMergerProcessor.execute(RowResultMergerProcessor.java:177)
        org.apache.carbondata.spark.rdd.CarbonMergerRDD$$anon$1.<init>(CarbonMergerRDD.scala:224)
        org.apache.carbondata.spark.rdd.CarbonMergerRDD.internalCompute(CarbonMergerRDD.scala:87)
        org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:78)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        org.apache.spark.scheduler.Task.run(Task.scala:109)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        java.lang.Thread.run(Thread.java:748)
        at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:139)
        at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:117)
        at org.apache.spark.scheduler.Task.run(Task.scala:119)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

18/09/26 09:07:16 ERROR TaskSetManager: Task 1 in stage 216.0 failed 4 times; aborting job
18/09/26 09:07:16 ERROR CarbonTableCompactor: main Exception in compaction thread Job aborted due to stage failure: Task 1 in stage 216.0 failed 4 times, most recent failure: Lost task 1.3 in stage 216.0 (TID 1517, 10.2.3.249, executor 2): org.apache.spark.util.TaskCompletionListenerException: org.apache.carbondata.core.datastore.exception.CarbonDataWriterException

Previous exception in task:
org.apache.carbondata.core.datastore.exception.CarbonDataWriterException
        org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.processWriteTaskSubmitList(CarbonFactDataHandlerColumnar.java:353)
        org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.closeHandler(CarbonFactDataHandlerColumnar.java:377)
        org.apache.carbondata.processing.merger.RowResultMergerProcessor.execute(RowResultMergerProcessor.java:177)
        org.apache.carbondata.spark.rdd.CarbonMergerRDD$$anon$1.<init>(CarbonMergerRDD.scala:224)
        org.apache.carbondata.spark.rdd.CarbonMergerRDD.internalCompute(CarbonMergerRDD.scala:87)
        org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:78)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        org.apache.spark.scheduler.Task.run(Task.scala:109)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        java.lang.Thread.run(Thread.java:748)
        at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:139)
        at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:117)
        at org.apache.spark.scheduler.Task.run(Task.scala:119)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 216.0 failed 4 times, most recent failure: Lost task 1.3 in stage 216.0 (TID 1517, 10.2.3.249, executor 2): org.apache.spark.util.TaskCompletionListenerException: org.apache.carbondata.core.datastore.exception.CarbonDataWriterException

Previous exception in task:
org.apache.carbondata.core.datastore.exception.CarbonDataWriterException
        org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.processWriteTaskSubmitList(CarbonFactDataHandlerColumnar.java:353)
        org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.closeHandler(CarbonFactDataHandlerColumnar.java:377)
        org.apache.carbondata.processing.merger.RowResultMergerProcessor.execute(RowResultMergerProcessor.java:177)
        org.apache.carbondata.spark.rdd.CarbonMergerRDD$$anon$1.<init>(CarbonMergerRDD.scala:224)
        org.apache.carbondata.spark.rdd.CarbonMergerRDD.internalCompute(CarbonMergerRDD.scala:87)
        org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:78)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        org.apache.spark.scheduler.Task.run(Task.scala:109)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        java.lang.Thread.run(Thread.java:748)
        at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:139)
        at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:117)
        at org.apache.spark.scheduler.Task.run(Task.scala:119)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1602)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1590)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1589)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1589)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
        at scala.Option.foreach(Option.scala:257)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1823)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1772)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1761)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2055)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2074)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
        at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
        at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
        at org.apache.carbondata.spark.rdd.CarbonTableCompactor.triggerCompaction(CarbonTableCompactor.scala:202)
        at org.apache.carbondata.spark.rdd.CarbonTableCompactor.scanSegmentsAndSubmitJob(CarbonTableCompactor.scala:119)
        at org.apache.carbondata.spark.rdd.CarbonTableCompactor.executeCompaction(CarbonTableCompactor.scala:68)
        at org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$$anon$2.run(CarbonDataRDDFactory.scala:183)
        at org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$.startCompactionThreads(CarbonDataRDDFactory.scala:274)
        at org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$.handleSegmentMerging(CarbonDataRDDFactory.scala:891)
        at org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$.loadCarbonData(CarbonDataRDDFactory.scala:599)
        at org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.loadData(CarbonLoadDataCommand.scala:591)
        at org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.processData(CarbonLoadDataCommand.scala:316)
        at org.apache.spark.sql.execution.command.AtomicRunnableCommand.run(package.scala:92)
        at org.apache.spark.sql.CarbonDataFrameWriter.loadDataFrame(CarbonDataFrameWriter.scala:62)
        at org.apache.spark.sql.CarbonDataFrameWriter.writeToCarbonFile(CarbonDataFrameWriter.scala:46)
        at org.apache.spark.sql.CarbonDataFrameWriter.appendToCarbonFile(CarbonDataFrameWriter.scala:41)
        at org.apache.spark.sql.CarbonSource.createRelation(CarbonSource.scala:115)
        at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
        at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
        at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
        at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
        at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:654)
        at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
        at com.appannie.CarbonImporter$$anonfun$store_load$1.apply(CarbonImporter.scala:149)
        at com.appannie.CarbonImporter$$anonfun$store_load$1.apply(CarbonImporter.scala:143)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at com.appannie.CarbonImporter$.store_load(CarbonImporter.scala:143)
        at com.appannie.CarbonImporter$.main(CarbonImporter.scala:53)
        at com.appannie.CarbonImporter.main(CarbonImporter.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.spark.util.TaskCompletionListenerException: org.apache.carbondata.core.datastore.exception.CarbonDataWriterException

Previous exception in task:
org.apache.carbondata.core.datastore.exception.CarbonDataWriterException
        org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.processWriteTaskSubmitList(CarbonFactDataHandlerColumnar.java:353)
        org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.closeHandler(CarbonFactDataHandlerColumnar.java:377)
        org.apache.carbondata.processing.merger.RowResultMergerProcessor.execute(RowResultMergerProcessor.java:177)
        org.apache.carbondata.spark.rdd.CarbonMergerRDD$$anon$1.<init>(CarbonMergerRDD.scala:224)
        org.apache.carbondata.spark.rdd.CarbonMergerRDD.internalCompute(CarbonMergerRDD.scala:87)
        org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:78)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        org.apache.spark.scheduler.Task.run(Task.scala:109)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        java.lang.Thread.run(Thread.java:748)
        at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:139)
        at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:117)
        at org.apache.spark.scheduler.Task.run(Task.scala:119)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
18/09/26 09:07:16 ERROR CarbonDataRDDFactory$: main Exception in compaction thread Job aborted due to stage failure: Task 1 in stage 216.0 failed 4 times, most recent failure: Lost task 1.3 in stage 216.0 (TID 1517, 10.2.3.249, executor 2): org.apache.spark.util.TaskCompletionListenerException: org.apache.carbondata.core.datastore.exception.CarbonDataWriterException

Previous exception in task:
org.apache.carbondata.core.datastore.exception.CarbonDataWriterException
        org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.processWriteTaskSubmitList(CarbonFactDataHandlerColumnar.java:353)
        org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.closeHandler(CarbonFactDataHandlerColumnar.java:377)
        org.apache.carbondata.processing.merger.RowResultMergerProcessor.execute(RowResultMergerProcessor.java:177)
        org.apache.carbondata.spark.rdd.CarbonMergerRDD$$anon$1.<init>(CarbonMergerRDD.scala:224)
        org.apache.carbondata.spark.rdd.CarbonMergerRDD.internalCompute(CarbonMergerRDD.scala:87)
        org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:78)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        org.apache.spark.scheduler.Task.run(Task.scala:109)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        java.lang.Thread.run(Thread.java:748)
        at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:139)
        at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:117)
        at org.apache.spark.scheduler.Task.run(Task.scala:119)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
18/09/26 09:07:16 ERROR CarbonDataRDDFactory$: main Exception in start compaction thread. Exception in compaction Job aborted due to stage failure: Task 1 in stage 216.0 failed 4 times, most recent failure: Lost task 1.3 in stage 216.0 (TID 1517, 10.2.3.249, executor 2): org.apache.spark.util.TaskCompletionListenerException: org.apache.carbondata.core.datastore.exception.CarbonDataWriterException

Previous exception in task:
org.apache.carbondata.core.datastore.exception.CarbonDataWriterException
        org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.processWriteTaskSubmitList(CarbonFactDataHandlerColumnar.java:353)
        org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.closeHandler(CarbonFactDataHandlerColumnar.java:377)
        org.apache.carbondata.processing.merger.RowResultMergerProcessor.execute(RowResultMergerProcessor.java:177)
        org.apache.carbondata.spark.rdd.CarbonMergerRDD$$anon$1.<init>(CarbonMergerRDD.scala:224)
        org.apache.carbondata.spark.rdd.CarbonMergerRDD.internalCompute(CarbonMergerRDD.scala:87)
        org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:78)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        org.apache.spark.scheduler.Task.run(Task.scala:109)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        java.lang.Thread.run(Thread.java:748)
        at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:139)
        at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:117)
        at org.apache.spark.scheduler.Task.run(Task.scala:119)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
18/09/26 09:07:16 ERROR CarbonLoadDataCommand: main
java.lang.Exception: Dataload is success. Auto-Compaction has failed. Please check logs.
        at org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$.loadCarbonData(CarbonDataRDDFactory.scala:608)
        at org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.loadData(CarbonLoadDataCommand.scala:591)
        at org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.processData(CarbonLoadDataCommand.scala:316)
        at org.apache.spark.sql.execution.command.AtomicRunnableCommand.run(package.scala:92)
        at org.apache.spark.sql.CarbonDataFrameWriter.loadDataFrame(CarbonDataFrameWriter.scala:62)
        at org.apache.spark.sql.CarbonDataFrameWriter.writeToCarbonFile(CarbonDataFrameWriter.scala:46)
        at org.apache.spark.sql.CarbonDataFrameWriter.appendToCarbonFile(CarbonDataFrameWriter.scala:41)
        at org.apache.spark.sql.CarbonSource.createRelation(CarbonSource.scala:115)
        at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
        at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
        at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
        at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
        at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:654)
        at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
        at com.appannie.CarbonImporter$$anonfun$store_load$1.apply(CarbonImporter.scala:149)
        at com.appannie.CarbonImporter$$anonfun$store_load$1.apply(CarbonImporter.scala:143)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at com.appannie.CarbonImporter$.store_load(CarbonImporter.scala:143)
        at com.appannie.CarbonImporter$.main(CarbonImporter.scala:53)
        at com.appannie.CarbonImporter.main(CarbonImporter.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
18/09/26 09:07:16 AUDIT CarbonLoadDataCommand: [ec2-dca-aa-p-sdn-16.appannie.org][hadoop][Thread-1]Dataload failure for default.store. Please check the logs
18/09/26 09:07:16 ERROR CarbonLoadDataCommand: main Got exception java.lang.Exception: Dataload is success. Auto-Compaction has failed. Please check logs. when processing data. But this command does not support undo yet, skipping the undo part.
Exception in thread "main" java.lang.Exception: Dataload is success. Auto-Compaction has failed. Please check logs.
        at org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$.loadCarbonData(CarbonDataRDDFactory.scala:608)
        at org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.loadData(CarbonLoadDataCommand.scala:591)
        at org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.processData(CarbonLoadDataCommand.scala:316)
        at org.apache.spark.sql.execution.command.AtomicRunnableCommand.run(package.scala:92)
        at org.apache.spark.sql.CarbonDataFrameWriter.loadDataFrame(CarbonDataFrameWriter.scala:62)
        at org.apache.spark.sql.CarbonDataFrameWriter.writeToCarbonFile(CarbonDataFrameWriter.scala:46)
        at org.apache.spark.sql.CarbonDataFrameWriter.appendToCarbonFile(CarbonDataFrameWriter.scala:41)
        at org.apache.spark.sql.CarbonSource.createRelation(CarbonSource.scala:115)
        at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
        at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
        at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
        at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
        at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:654)
        at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
        at com.appannie.CarbonImporter$$anonfun$store_load$1.apply(CarbonImporter.scala:149)
        at com.appannie.CarbonImporter$$anonfun$store_load$1.apply(CarbonImporter.scala:143)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at com.appannie.CarbonImporter$.store_load(CarbonImporter.scala:143)
        at com.appannie.CarbonImporter$.main(CarbonImporter.scala:53)
        at com.appannie.CarbonImporter.main(CarbonImporter.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
nohup: ignoring input
18/09/26 16:17:21 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/09/26 16:17:21 WARN CarbonProperties: main The custom block distribution value "null" is invalid. Using the default value "false
18/09/26 16:17:21 WARN CarbonProperties: main The enable auto handoff value "null" is invalid. Using the default value "true
18/09/26 16:17:21 WARN CarbonProperties: main The specified value for property carbon.sort.storage.inmemory.size.inmbis invalid.
18/09/26 16:17:21 WARN CarbonProperties: main The specified value for property carbon.sort.storage.inmemory.size.inmbis invalid. Taking the default value.512
18/09/26 16:17:29 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
18/09/26 16:17:30 AUDIT CacheProvider: [ec2-dca-aa-p-sdn-16.appannie.org][hadoop][Thread-1]The key carbon.options.bad.records.logger.enable with value true added in the session param



Re: [Issue] Load auto compaction failed

sraghunandan
Dear Aaron,
The memory requirements for local dictionary are high compared to global dictionary. What are your off-heap configuration values? I suspect low values might be the reason.
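
As a rough sketch, these are the kind of off-heap settings worth checking, e.g. set via CarbonProperties before triggering the load (the property values below are illustrative, not tuned recommendations; note your startup log also shows carbon.sort.storage.inmemory.size.inmb being rejected and falling back to 512):

import org.apache.carbondata.core.util.CarbonProperties

// Illustrative values only -- size these against the memory your
// executors actually have.
val props = CarbonProperties.getInstance()
// Off-heap (unsafe) sort paths.
props.addProperty("enable.unsafe.sort", "true")
props.addProperty("enable.offheap.sort", "true")
// Working memory for unsafe operations; the 512 MB default may be too
// low once local dictionary is enabled.
props.addProperty("carbon.unsafe.working.memory.in.mb", "2048")
// Your startup log shows this one being rejected and defaulting to 512.
props.addProperty("carbon.sort.storage.inmemory.size.inmb", "1024")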

Regards
Raghu

On Thu, 27 Sep 2018, 4:58 am aaron, <[hidden email]> wrote:

> Hi community,
>
> On 1.5.0, a load using local dictionary and local sort failed once the row
> count reached 0.5 billion, even though I had already loaded 50 billion rows
> with global dictionary and sort. Do you have any ideas?
>
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
>
>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         java.lang.Thread.run(Thread.java:748)
>         at
> org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:139)
>         at
>
> org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:117)
>         at org.apache.spark.scheduler.Task.run(Task.scala:119)
>         at
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
>         at
>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)
>
> Driver stacktrace:
> 18/09/26 09:07:16 ERROR CarbonDataRDDFactory$: main Exception in start
> compaction thread. Exception in compaction Job aborted due to stage
> failure:
> Task 1 in stage 216.0 failed 4 times, most recent failure: Lost task 1.3 in
> stage 216.0 (TID 1517, 10.2.3.249, executor 2):
> org.apache.spark.util.TaskCompletionListenerException:
> org.apache.carbondata.core.datastore.exception.CarbonDataWriterException
>
> Previous exception in task:
> org.apache.carbondata.core.datastore.exception.CarbonDataWriterException
>
>
> org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.processWriteTaskSubmitList(CarbonFactDataHandlerColumnar.java:353)
>
>
> org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.closeHandler(CarbonFactDataHandlerColumnar.java:377)
>
>
> org.apache.carbondata.processing.merger.RowResultMergerProcessor.execute(RowResultMergerProcessor.java:177)
>
>
> org.apache.carbondata.spark.rdd.CarbonMergerRDD$$anon$1.<init>(CarbonMergerRDD.scala:224)
>
>
> org.apache.carbondata.spark.rdd.CarbonMergerRDD.internalCompute(CarbonMergerRDD.scala:87)
>
> org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:78)
>         org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
>         org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
>         org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
>         org.apache.spark.scheduler.Task.run(Task.scala:109)
>
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
>
>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         java.lang.Thread.run(Thread.java:748)
>         at
> org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:139)
>         at
>
> org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:117)
>         at org.apache.spark.scheduler.Task.run(Task.scala:119)
>         at
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
>         at
>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)
>
> Driver stacktrace:
> 18/09/26 09:07:16 ERROR CarbonLoadDataCommand: main
> java.lang.Exception: Dataload is success. Auto-Compaction has failed.
> Please
> check logs.
>         at
>
> org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$.loadCarbonData(CarbonDataRDDFactory.scala:608)
>         at
>
> org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.loadData(CarbonLoadDataCommand.scala:591)
>         at
>
> org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.processData(CarbonLoadDataCommand.scala:316)
>         at
>
> org.apache.spark.sql.execution.command.AtomicRunnableCommand.run(package.scala:92)
>         at
>
> org.apache.spark.sql.CarbonDataFrameWriter.loadDataFrame(CarbonDataFrameWriter.scala:62)
>         at
>
> org.apache.spark.sql.CarbonDataFrameWriter.writeToCarbonFile(CarbonDataFrameWriter.scala:46)
>         at
>
> org.apache.spark.sql.CarbonDataFrameWriter.appendToCarbonFile(CarbonDataFrameWriter.scala:41)
>         at
> org.apache.spark.sql.CarbonSource.createRelation(CarbonSource.scala:115)
>         at
>
> org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
>         at
>
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
>         at
>
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
>         at
>
> org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
>         at
>
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
>         at
>
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
>         at
>
> org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
>         at
>
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>         at
> org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
>         at
> org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
>         at
>
> org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
>         at
>
> org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
>         at
>
> org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
>         at
>
> org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
>         at
>
> org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
>         at
> org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:654)
>         at
>
> org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
>         at
> org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
>         at
>
> com.appannie.CarbonImporter$$anonfun$store_load$1.apply(CarbonImporter.scala:149)
>         at
>
> com.appannie.CarbonImporter$$anonfun$store_load$1.apply(CarbonImporter.scala:143)
>         at scala.collection.immutable.List.foreach(List.scala:381)
>         at
> com.appannie.CarbonImporter$.store_load(CarbonImporter.scala:143)
>         at com.appannie.CarbonImporter$.main(CarbonImporter.scala:53)
>         at com.appannie.CarbonImporter.main(CarbonImporter.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at
>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at
>
> org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
>         at
>
> org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
>         at
> org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
>         at
> org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> 18/09/26 09:07:16 AUDIT CarbonLoadDataCommand:
> [ec2-dca-aa-p-sdn-16.appannie.org][hadoop][Thread-1]Dataload failure for
> default.store. Please check the logs
> 18/09/26 09:07:16 ERROR CarbonLoadDataCommand: main Got exception
> java.lang.Exception: Dataload is success. Auto-Compaction has failed.
> Please
> check logs. when processing data. But this command does not support undo
> yet, skipping the undo part.
> Exception in thread "main" java.lang.Exception: Dataload is success.
> Auto-Compaction has failed. Please check logs.
>         at
>
> org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$.loadCarbonData(CarbonDataRDDFactory.scala:608)
>         at
>
> org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.loadData(CarbonLoadDataCommand.scala:591)
>         at
>
> org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.processData(CarbonLoadDataCommand.scala:316)
>         at
>
> org.apache.spark.sql.execution.command.AtomicRunnableCommand.run(package.scala:92)
>         at
>
> org.apache.spark.sql.CarbonDataFrameWriter.loadDataFrame(CarbonDataFrameWriter.scala:62)
>         at
>
> org.apache.spark.sql.CarbonDataFrameWriter.writeToCarbonFile(CarbonDataFrameWriter.scala:46)
>         at
>
> org.apache.spark.sql.CarbonDataFrameWriter.appendToCarbonFile(CarbonDataFrameWriter.scala:41)
>         at
> org.apache.spark.sql.CarbonSource.createRelation(CarbonSource.scala:115)
>         at
>
> org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
>         at
>
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
>         at
>
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
>         at
>
> org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
>         at
>
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
>         at
>
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
>         at
>
> org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
>         at
>
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>         at
> org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
>         at
> org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
>         at
>
> org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
>         at
>
> org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
>         at
>
> org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
>         at
>
> org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
>         at
>
> org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
>         at
> org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:654)
>         at
>
> org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
>         at
> org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
>         at
>
> com.appannie.CarbonImporter$$anonfun$store_load$1.apply(CarbonImporter.scala:149)
>         at
>
> com.appannie.CarbonImporter$$anonfun$store_load$1.apply(CarbonImporter.scala:143)
>         at scala.collection.immutable.List.foreach(List.scala:381)
>         at
> com.appannie.CarbonImporter$.store_load(CarbonImporter.scala:143)
>         at com.appannie.CarbonImporter$.main(CarbonImporter.scala:53)
>         at com.appannie.CarbonImporter.main(CarbonImporter.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at
>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at
>
> org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
>         at
>
> org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
>         at
> org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
>         at
> org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> nohup: ignoring input
> 18/09/26 16:17:21 WARN NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where applicable
> 18/09/26 16:17:21 WARN CarbonProperties: main The custom block distribution
> value "null" is invalid. Using the default value "false
> 18/09/26 16:17:21 WARN CarbonProperties: main The enable auto handoff value
> "null" is invalid. Using the default value "true
> 18/09/26 16:17:21 WARN CarbonProperties: main The specified value for
> property carbon.sort.storage.inmemory.size.inmbis invalid.
> 18/09/26 16:17:21 WARN CarbonProperties: main The specified value for
> property carbon.sort.storage.inmemory.size.inmbis invalid. Taking the
> default value.512
> 18/09/26 16:17:29 WARN ObjectStore: Failed to get database global_temp,
> returning NoSuchObjectException
> 18/09/26 16:17:30 AUDIT CacheProvider:
> [ec2-dca-aa-p-sdn-16.appannie.org][hadoop][Thread-1]The key
> carbon.options.bad.records.logger.enable with value true added in the
> session param
>
>
>
> --
> Sent from:
> http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/
>

Re: [Issue] Load auto compaction failed

kumarvishal09
Hi Aaron,
Can you please increase *carbon.unsafe.working.memory.in.mb* and try again?
With local dictionary, both the actual page and the encoded page are kept in
memory (the actual page is required for fallback) and are freed only after
the page is encoded. Because of this, the memory requirement is higher.
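
For example, something along these lines before triggering the load (a
minimal sketch; 1024 is only an illustrative value, so size it to the memory
actually available on your executors):

    import org.apache.carbondata.core.util.CarbonProperties

    // Illustrative only: raise the unsafe working memory available for
    // page encoding and local-dictionary fallback before the load runs.
    CarbonProperties.getInstance()
      .addProperty("carbon.unsafe.working.memory.in.mb", "1024")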

-Regards
Kumar Vishal


On Thu, Sep 27, 2018 at 7:03 AM Raghunandan S <[hidden email]> wrote:

> Dear Aaron,
> The memory requirements for local dictionary are high compared to global
> dictionary. What are the offheap configuration values? I guess low values
> might be the reason.
>
> Regards
> Raghu
>
> On Thu, 27 Sep 2018, 4:58 am aaron, <[hidden email]> wrote:
>
> > Hi community,
> >
> > Based on 1.5.0 - the load with local dictionary and local sort, the load
> > failed when the data count reached 0.5 billion, but I've already loaded
> > 50 billion before with global dictionary and sort. Do you have any ideas?
kumar vishal
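
Regarding Raghunandan's question about the offheap configuration values in
effect: a quick check along these lines can help before retrying the load
(a minimal sketch; the property keys are the ones named in this thread and
in the CarbonData configuration docs, so verify them against your version):

    import org.apache.carbondata.core.util.CarbonProperties

    val props = CarbonProperties.getInstance()
    // A null result means the key is unset, so the built-in default applies.
    Seq("enable.offheap.sort",
        "enable.unsafe.sort",
        "carbon.unsafe.working.memory.in.mb")
      .foreach(k => println(s"$k = ${props.getProperty(k)}"))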

Re: [Issue] Load auto compaction failed

aaron
In reply to this post by sraghunandan

Re: [Issue] Load auto compaction failed

aaron
In reply to this post by kumarvishal09
Good explanation, it works now! Thanks.



--
Sent from: http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/