[ https://issues.apache.org/jira/browse/CARBONDATA-3478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Indhumathi Muthumurugesh updated CARBONDATA-3478:
-------------------------------------------------

        Key: CARBONDATA-3478
        URL: https://issues.apache.org/jira/browse/CARBONDATA-3478
    Project: CarbonData
 Issue Type: Bug
   Reporter: Indhumathi Muthumurugesh
   Priority: Major
    Summary: Fix ArrayIndexOutOfBoundsException issue on compaction after alter rename operation

Description:

Steps to reproduce the issue:

# Create a table having dimension and measure columns.
# Load data.
# Rename the table.
# Run ALTER TABLE ... SET TBLPROPERTIES ('sort_columns'='<measure column>', 'sort_scope'='local_sort').
# Load data again.
# Perform compaction; it fails with the exception below.

A SQL sketch of these steps is given after the stack trace.

Driver stacktrace:

2019-07-26 19:34:03 ERROR CarbonAlterTableCompactionCommand:345 - Exception in start compaction thread.
java.lang.Exception: Exception in compaction Job aborted due to stage failure: Task 0 in stage 6.0 failed 1 times, most recent failure: Lost task 0.0 in stage 6.0 (TID 6, localhost, executor driver): java.lang.ArrayIndexOutOfBoundsException: 3
    at org.apache.carbondata.core.scan.wrappers.ByteArrayWrapper.getNoDictionaryKeyByIndex(ByteArrayWrapper.java:81)
    at org.apache.carbondata.processing.merger.CompactionResultSortProcessor.prepareRowObjectForSorting(CompactionResultSortProcessor.java:332)
    at org.apache.carbondata.processing.merger.CompactionResultSortProcessor.processResult(CompactionResultSortProcessor.java:254)
    at org.apache.carbondata.processing.merger.CompactionResultSortProcessor.execute(CompactionResultSortProcessor.java:179)
    at org.apache.carbondata.spark.rdd.CarbonMergerRDD$$anon$1.<init>(CarbonMergerRDD.scala:255)
    at org.apache.carbondata.spark.rdd.CarbonMergerRDD.internalCompute(CarbonMergerRDD.scala:101)
    at org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:82)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:328)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:292)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
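For clarity, here is a minimal SQL sketch of the reproduction sequence. The table and column names (test_origin, test_renamed, name, id), the use of INSERT for the load steps, and the STORED AS carbondata clause are illustrative assumptions; the original report does not give the exact DDL, and older CarbonData releases may use STORED BY 'carbondata' instead.

    -- Hypothetical repro sketch; names and values are not from the original report.
    CREATE TABLE test_origin (name STRING, id INT) STORED AS carbondata;  -- dimension (name) + measure (id)
    INSERT INTO test_origin VALUES ('a', 1);                              -- first load
    ALTER TABLE test_origin RENAME TO test_renamed;                       -- rename table
    ALTER TABLE test_renamed SET TBLPROPERTIES
      ('sort_columns'='id', 'sort_scope'='local_sort');                   -- measure column as sort column
    INSERT INTO test_renamed VALUES ('b', 2);                             -- second load
    ALTER TABLE test_renamed COMPACT 'major';                             -- compaction hits the ArrayIndexOutOfBoundsException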