[ https://issues.apache.org/jira/browse/CARBONDATA-1414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16153216#comment-16153216 ]

Neha Bhardwaj commented on CARBONDATA-1414:
-------------------------------------------

Hi [~chenliang613],

The above query now raises an exception at COMPACTION:

0: jdbc:hive2://localhost:10000> ALTER TABLE list_partition_table_string COMPACT 'Minor';
Error: java.lang.RuntimeException: Compaction failed. Please check logs for more info.

Exception in compaction:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 33.0 failed 1 times, most recent failure: Lost task 2.0 in stage 33.0 (TID 149, localhost, executor driver): java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
	at java.util.ArrayList.rangeCheck(ArrayList.java:653)
	at java.util.ArrayList.get(ArrayList.java:429)
	at org.apache.carbondata.core.datastore.block.SegmentProperties.assignComplexOrdinal(SegmentProperties.java:472)
	at org.apache.carbondata.core.datastore.block.SegmentProperties.fillDimensionAndMeasureDetails(SegmentProperties.java:397)
	at org.apache.carbondata.core.datastore.block.SegmentProperties.<init>(SegmentProperties.java:173)
	at org.apache.carbondata.spark.rdd.CarbonMergerRDD$$anon$1.<init>(CarbonMergerRDD.scala:161)
	at org.apache.carbondata.spark.rdd.CarbonMergerRDD.internalCompute(CarbonMergerRDD.scala:79)
	at org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:62)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:99)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
Driver stacktrace: (state=,code=0)

I think this is caused by the complex datatype (ARRAY<STRING>) in the table schema. For all other scenarios, the fix is working.

> Show Segments raises exception for a Partition Table after Updation.
> --------------------------------------------------------------------
>
>                 Key: CARBONDATA-1414
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-1414
>             Project: CarbonData
>          Issue Type: Bug
>          Components: data-query
>    Affects Versions: 1.2.0
>         Environment: spark 2.1
>            Reporter: Neha Bhardwaj
>            Assignee: Liang Chen
>         Attachments: list_partition_table.csv
>
>
> 1. Create Partition Table:
>
> DROP TABLE IF EXISTS list_partition_table_string;
>
> CREATE TABLE list_partition_table_string(shortField SHORT, intField INT, bigintField LONG, doubleField DOUBLE, timestampField TIMESTAMP, decimalField DECIMAL(18,2), dateField DATE, charField CHAR(5), floatField FLOAT, complexData ARRAY<STRING>) PARTITIONED BY (stringField STRING) STORED BY 'carbondata' TBLPROPERTIES('PARTITION_TYPE'='LIST', 'LIST_INFO'='Asia, America, Europe', 'DICTIONARY_EXCLUDE'='stringfield');
>
> 2. Load Data:
>
> load data inpath 'hdfs://localhost:54310/CSV/list_partition_table.csv' into table list_partition_table_string options('FILEHEADER'='shortfield,intfield,bigintfield,doublefield,stringfield,timestampfield,decimalfield,datefield,charfield,floatfield,complexdata', 'COMPLEX_DELIMITER_LEVEL_1'='$', 'COMPLEX_DELIMITER_LEVEL_2'='#', 'SINGLE_PASS'='TRUE');
>
> 3. Update Data:
>
> update list_partition_table_string set (stringfield)=('China') where stringfield = 'Japan';
> update list_partition_table_string set (stringfield)=('Japan') where stringfield > 'Europe';
> update list_partition_table_string set (stringfield)=('Asia') where stringfield < 'Europe';
>
> 4. Compaction:
>
> ALTER TABLE list_partition_table_string COMPACT 'Minor';
> Show segments for table list_partition_table_string;
>
> Expected Output: Segments must be displayed.
> Actual Output: Error: java.lang.NullPointerException (state=,code=0)

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
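One way to test the complex-datatype hypothesis would be a narrowed repro that drops only the ARRAY<STRING> column and repeats the same steps (the table name below is hypothetical, everything else mirrors the reported schema); if minor compaction then succeeds, the failure is isolated to the complex column:

```sql
-- Same schema as in the issue, minus the complexData ARRAY<STRING> column
DROP TABLE IF EXISTS list_partition_no_complex;
CREATE TABLE list_partition_no_complex(shortField SHORT, intField INT, bigintField LONG, doubleField DOUBLE, timestampField TIMESTAMP, decimalField DECIMAL(18,2), dateField DATE, charField CHAR(5), floatField FLOAT) PARTITIONED BY (stringField STRING) STORED BY 'carbondata' TBLPROPERTIES('PARTITION_TYPE'='LIST', 'LIST_INFO'='Asia, America, Europe', 'DICTIONARY_EXCLUDE'='stringfield');
-- ...load and update as in steps 2-3 (without the complex delimiters), then:
ALTER TABLE list_partition_no_complex COMPACT 'Minor';
```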