[jira] [Created] (CARBONDATA-3272) Horizontal Compaction fails during update of old table with ArrayIndexOutOfBoundsException

Akash R Nilugal (Jira)
Akash R Nilugal created CARBONDATA-3272:
-------------------------------------------

             Summary: Horizontal Compaction fails during update of old table with ArrayIndexOutOfBoundsException
                 Key: CARBONDATA-3272
                 URL: https://issues.apache.org/jira/browse/CARBONDATA-3272
             Project: CarbonData
          Issue Type: Bug
            Reporter: Akash R Nilugal
            Assignee: Akash R Nilugal


When an old-store table is refreshed in the latest code and a load followed by an update is performed, the data is updated, but the horizontal compaction triggered during the update fails with an ArrayIndexOutOfBoundsException. The trace is as follows:

Previous exception in task: Exception occurred in query execution :: java.lang.ArrayIndexOutOfBoundsException: 14;
 org.apache.spark.sql.util.CarbonException$.analysisException(CarbonException.scala:23)
 org.apache.carbondata.spark.rdd.CarbonMergerRDD$$anon$1.<init>(CarbonMergerRDD.scala:189)
 org.apache.carbondata.spark.rdd.CarbonMergerRDD.internalCompute(CarbonMergerRDD.scala:84)
 org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:82)
 org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
 org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
 org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
 org.apache.spark.scheduler.Task.run(Task.scala:99)
 org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 java.lang.Thread.run(Thread.java:748)
 at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
 at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
 at org.apache.spark.scheduler.Task.run(Task.scala:109)
 at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)

*Steps to reproduce* (a consolidated SQL sketch follows the list):

In the old store (CarbonData 1.5.1):

1. Create the table and perform the update operations:

CREATE TABLE uniqdata_Update (CUST_ID int, CUST_NAME String, ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint, BIGINT_COLUMN2 bigint, DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,36), Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES('dictionary_include'='cust_id,cust_name,DOB,BIGINT_COLUMN1,DECIMAL_COLUMN1,Double_COLUMN1');

2. Perform three load operations.

In the latest code:

1. Refresh the old 1.5.1 store.

2. Do one more load.

3. Update the dictionary column:

update uniqdata_Update set (BIGINT_COLUMN1) = (9223372036854775807) where DOB != '2018-10-12 15:00:03';
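For convenience, the sequence can be run end to end roughly as sketched below. Only the CREATE TABLE and UPDATE statements above come from the report; the CSV path and LOAD OPTIONS are illustrative placeholders, and REFRESH TABLE is the standard CarbonData DDL for registering a table from an existing store.

-- in the old store (CarbonData 1.5.1), after the CREATE TABLE above:
-- run three loads (path and OPTIONS are placeholders, adjust to the actual data file)
LOAD DATA INPATH '/tmp/uniqdata/2000_UniqData.csv' INTO TABLE uniqdata_Update
  OPTIONS('DELIMITER'=',', 'BAD_RECORDS_ACTION'='FORCE');
-- repeat the LOAD two more times, then run the update operations

-- in the latest code, pointing at the copied old store:
REFRESH TABLE uniqdata_Update;

-- one more load, then the update that triggers the failed horizontal compaction
LOAD DATA INPATH '/tmp/uniqdata/2000_UniqData.csv' INTO TABLE uniqdata_Update
  OPTIONS('DELIMITER'=',', 'BAD_RECORDS_ACTION'='FORCE');
update uniqdata_Update set (BIGINT_COLUMN1) = (9223372036854775807) where DOB != '2018-10-12 15:00:03';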



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)