[jira] [Commented] (CARBONDATA-1427) After Splitting Partition, Data doesn't get Divided to Different Partitions.


Akash R Nilugal (Jira)

    [ https://issues.apache.org/jira/browse/CARBONDATA-1427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16153243#comment-16153243 ]

Pallavi Singh commented on CARBONDATA-1427:
-------------------------------------------

Hi [~lucao]

I tried to reproduce the scenario above in order to fix it. The ALTER PARTITION command does apply the metadata change, but the existing loads are not repartitioned; only loads performed after the ALTER command reflect the new partitioning. I can also see an exception escalated from AlterTableSplitPartitionRDD.

Find the query set and stack trace below:


    // location of the sample CSV in the examples module
    val path = s"$rootPath/examples/spark2/src/main/resources/list_partition_table.csv"

    spark.sql("""DROP TABLE IF EXISTS list_partition_table""")

    // create a LIST-partitioned table on stringField
    spark.sql(
      """CREATE TABLE list_partition_table
        |(shortField SHORT, intField INT, bigintField LONG, doubleField DOUBLE, timestampField TIMESTAMP, decimalField DECIMAL(18,2), dateField DATE, charField CHAR(5), floatField FLOAT, complexData ARRAY<STRING>)
        |PARTITIONED BY (stringField STRING)
        |STORED BY 'carbondata'
        |TBLPROPERTIES('PARTITION_TYPE'='LIST', 'LIST_INFO'='Asia, (China, Europe, NoPartition)')""".stripMargin)

    // first load, before the partition is split
    spark.sql("LOAD DATA LOCAL INPATH '" + path +
              "' INTO TABLE list_partition_table " +
              "OPTIONS('FILEHEADER'='shortfield,intfield,bigintfield,doublefield,stringfield,timestampfield,decimalfield,datefield,charfield,floatfield,complexdata', " +
              "'COMPLEX_DELIMITER_LEVEL_1'='$','COMPLEX_DELIMITER_LEVEL_2'='#')")

    spark.sql("""SHOW PARTITIONS list_partition_table""").show(100)

    // split partition 2 into 'China' and '(Europe, NoPartition)'
    spark.sql("ALTER TABLE list_partition_table SPLIT PARTITION(2) INTO('China', '(Europe, NoPartition)')")

    spark.sql("""SHOW PARTITIONS list_partition_table""").show(100)

    // second load, after the split
    spark.sql("LOAD DATA LOCAL INPATH '" + path +
              "' INTO TABLE list_partition_table " +
              "OPTIONS('FILEHEADER'='shortfield,intfield,bigintfield,doublefield,stringfield,timestampfield,decimalfield,datefield,charfield,floatfield,complexdata', " +
              "'COMPLEX_DELIMITER_LEVEL_1'='$','COMPLEX_DELIMITER_LEVEL_2'='#')")

    spark.sql("SHOW SEGMENTS FOR TABLE list_partition_table").show()

The stack trace from the split is:

17/09/05 13:32:14 AUDIT CarbonDataRDDFactory$: [pallavi][pallavi][Thread-1]Data load is successful for default.list_partition_table
+--------------------+
|           partition|
+--------------------+
|0, stringfield = ...|
|1, stringfield = ...|
|2, stringfield = ...|
+--------------------+

17/09/05 13:32:14 AUDIT CarbonDataRDDFactory$: [pallavi][pallavi][Thread-1]Add partition request received for table default.list_partition_table
17/09/05 13:32:14 ERROR Executor: Exception in task 2.0 in stage 6.0 (TID 8)
java.lang.RuntimeException: Exception when executing Row result processor 2
        at scala.sys.package$.error(package.scala:27)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD$$anon$1.liftedTree1$1(AlterTableSplitPartitionRDD.scala:124)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD$$anon$1.<init>(AlterTableSplitPartitionRDD.scala:111)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD.compute(AlterTableSplitPartitionRDD.scala:65)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
17/09/05 13:32:14 WARN TaskSetManager: Lost task 2.0 in stage 6.0 (TID 8, localhost, executor driver): java.lang.RuntimeException: Exception when executing Row result processor 2
        at scala.sys.package$.error(package.scala:27)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD$$anon$1.liftedTree1$1(AlterTableSplitPartitionRDD.scala:124)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD$$anon$1.<init>(AlterTableSplitPartitionRDD.scala:111)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD.compute(AlterTableSplitPartitionRDD.scala:65)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)

17/09/05 13:32:14 ERROR TaskSetManager: Task 2 in stage 6.0 failed 1 times; aborting job
17/09/05 13:32:14 ERROR DataManagementFunc$: Thread-26 Exception in partition split thread org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 6.0 failed 1 times, most recent failure: Lost task 2.0 in stage 6.0 (TID 8, localhost, executor driver): java.lang.RuntimeException: Exception when executing Row result processor 2
        at scala.sys.package$.error(package.scala:27)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD$$anon$1.liftedTree1$1(AlterTableSplitPartitionRDD.scala:124)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD$$anon$1.<init>(AlterTableSplitPartitionRDD.scala:111)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD.compute(AlterTableSplitPartitionRDD.scala:65)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
java.util.concurrent.ExecutionException: org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 6.0 failed 1 times, most recent failure: Lost task 2.0 in stage 6.0 (TID 8, localhost, executor driver): java.lang.RuntimeException: Exception when executing Row result processor 2
        at scala.sys.package$.error(package.scala:27)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD$$anon$1.liftedTree1$1(AlterTableSplitPartitionRDD.scala:124)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD$$anon$1.<init>(AlterTableSplitPartitionRDD.scala:111)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD.compute(AlterTableSplitPartitionRDD.scala:65)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:192)
        at org.apache.carbondata.spark.rdd.DataManagementFunc$$anonfun$executePartitionSplit$1.apply(DataManagementFunc.scala:278)
        at org.apache.carbondata.spark.rdd.DataManagementFunc$$anonfun$executePartitionSplit$1.apply(DataManagementFunc.scala:277)
        at scala.collection.Iterator$class.foreach(Iterator.scala:893)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
        at org.apache.carbondata.spark.rdd.DataManagementFunc$.executePartitionSplit(DataManagementFunc.scala:277)
        at org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$SplitThread.run(CarbonDataRDDFactory.scala:376)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 6.0 failed 1 times, most recent failure: Lost task 2.0 in stage 6.0 (TID 8, localhost, executor driver): java.lang.RuntimeException: Exception when executing Row result processor 2
        at scala.sys.package$.error(package.scala:27)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD$$anon$1.liftedTree1$1(AlterTableSplitPartitionRDD.scala:124)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD$$anon$1.<init>(AlterTableSplitPartitionRDD.scala:111)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD.compute(AlterTableSplitPartitionRDD.scala:65)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
        at scala.Option.foreach(Option.scala:257)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1931)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1944)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1958)
        at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:935)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
        at org.apache.spark.rdd.RDD.collect(RDD.scala:934)
        at org.apache.carbondata.spark.rdd.PartitionSplitter$$anonfun$triggerPartitionSplit$1.apply$mcVI$sp(PartitionSplitter.scala:75)
        at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
        at org.apache.carbondata.spark.rdd.PartitionSplitter$.triggerPartitionSplit(PartitionSplitter.scala:53)
        at org.apache.carbondata.spark.rdd.PartitionSplitter.triggerPartitionSplit(PartitionSplitter.scala)
        at org.apache.carbondata.spark.partition.SplitPartitionCallable.call(SplitPartitionCallable.java:38)
        at org.apache.carbondata.spark.partition.SplitPartitionCallable.call(SplitPartitionCallable.java:29)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: Exception when executing Row result processor 2
        at scala.sys.package$.error(package.scala:27)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD$$anon$1.liftedTree1$1(AlterTableSplitPartitionRDD.scala:124)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD$$anon$1.<init>(AlterTableSplitPartitionRDD.scala:111)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD.compute(AlterTableSplitPartitionRDD.scala:65)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
        ... 3 more
17/09/05 13:32:14 ERROR CarbonDataRDDFactory$: Thread-26 Exception in partition split thread: org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 6.0 failed 1 times, most recent failure: Lost task 2.0 in stage 6.0 (TID 8, localhost, executor driver): java.lang.RuntimeException: Exception when executing Row result processor 2
        at scala.sys.package$.error(package.scala:27)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD$$anon$1.liftedTree1$1(AlterTableSplitPartitionRDD.scala:124)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD$$anon$1.<init>(AlterTableSplitPartitionRDD.scala:111)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD.compute(AlterTableSplitPartitionRDD.scala:65)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)

Driver stacktrace: }
Exception in thread "Thread-26" java.lang.Exception: Exception in split partition org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 6.0 failed 1 times, most recent failure: Lost task 2.0 in stage 6.0 (TID 8, localhost, executor driver): java.lang.RuntimeException: Exception when executing Row result processor 2
        at scala.sys.package$.error(package.scala:27)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD$$anon$1.liftedTree1$1(AlterTableSplitPartitionRDD.scala:124)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD$$anon$1.<init>(AlterTableSplitPartitionRDD.scala:111)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD.compute(AlterTableSplitPartitionRDD.scala:65)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
        at org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$SplitThread.run(CarbonDataRDDFactory.scala:386)
17/09/05 13:32:14 AUDIT AlterTableSplitPartitionCommand: [pallavi][pallavi][Thread-1]Alter table add/split partition is successful for table default.list_partition_table
17/09/05 13:32:14 WARN Shell: Interrupted while reading the error stream
java.lang.InterruptedException
        at java.lang.Object.wait(Native Method)
        at java.lang.Thread.join(Thread.java:1252)
        at java.lang.Thread.join(Thread.java:1326)
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:456)
        at org.apache.hadoop.util.Shell.run(Shell.java:379)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:678)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:661)
        at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1097)
        at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:567)
        at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getPermission(RawLocalFileSystem.java:542)
        at org.apache.hadoop.fs.LocatedFileStatus.<init>(LocatedFileStatus.java:42)
        at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:1815)
        at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:1797)
        at org.apache.carbondata.core.indexstore.Blocklet.updateLocations(Blocklet.java:71)
        at org.apache.carbondata.hadoop.api.CarbonTableInputFormat.convertToCarbonInputSplit(CarbonTableInputFormat.java:554)
        at org.apache.carbondata.hadoop.api.CarbonTableInputFormat.getDataBlocksOfSegment(CarbonTableInputFormat.java:542)
        at org.apache.carbondata.hadoop.api.CarbonTableInputFormat.getSplits(CarbonTableInputFormat.java:442)
        at org.apache.carbondata.hadoop.api.CarbonTableInputFormat.getSplitsOfOneSegment(CarbonTableInputFormat.java:384)
        at org.apache.spark.util.PartitionUtils$.getPartitionBlockList(PartitionUtils.scala:142)
        at org.apache.spark.util.PartitionUtils$.getSegmentProperties(PartitionUtils.scala:128)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD$$anon$1.liftedTree1$1(AlterTableSplitPartitionRDD.scala:112)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD$$anon$1.<init>(AlterTableSplitPartitionRDD.scala:111)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD.compute(AlterTableSplitPartitionRDD.scala:65)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
17/09/05 13:32:14 ERROR Executor: Exception in task 3.0 in stage 6.0 (TID 9)
java.lang.RuntimeException: Exception when executing Row result processor 2
        at scala.sys.package$.error(package.scala:27)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD$$anon$1.liftedTree1$1(AlterTableSplitPartitionRDD.scala:124)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD$$anon$1.<init>(AlterTableSplitPartitionRDD.scala:111)
        at org.apache.carbondata.spark.rdd.AlterTableSplitPartitionRDD.compute(AlterTableSplitPartitionRDD.scala:65)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
+--------------------+
|           partition|
+--------------------+
|0, stringfield = ...|
|1, stringfield = ...|
|3, stringfield = ...|
|4, stringfield = ...|
+--------------------+

> After Splitting Partition, Data doesn't get Divided to Different Partitions.
> ----------------------------------------------------------------------------
>
>                 Key: CARBONDATA-1427
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-1427
>             Project: CarbonData
>          Issue Type: Bug
>          Components: data-query
>    Affects Versions: 1.2.0
>         Environment: spark 2.1
>            Reporter: Neha Bhardwaj
>            Assignee: Pallavi Singh
>            Priority: Minor
>         Attachments: list_partition_table.csv
>
>
> When performing a split partition query on a partitioned table, the existing data is not affected at all. SHOW PARTITIONS shows the updated partitions and the old partition as deleted, but the data still remains in that old partition. Ideally, the data should be redistributed according to the new partitions; this only happens for subsequent loads, whose data does go to the new partitions.
> Example :
> 1. Create Table :
> DROP TABLE IF EXISTS list_partition_table;
> CREATE TABLE list_partition_table(shortField SHORT, intField INT, bigintField LONG, doubleField DOUBLE, timestampField TIMESTAMP, decimalField DECIMAL(18,2), dateField DATE, charField CHAR(5), floatField FLOAT, complexData ARRAY<STRING> ) PARTITIONED BY (stringField STRING) STORED BY 'carbondata' TBLPROPERTIES('PARTITION_TYPE'='LIST', 'LIST_INFO'='Asia, (China, Europe, NoPartition)');
> 2. Load Data :
>  load data inpath 'hdfs://localhost:54310/CSV/list_partition_table.csv' into table list_partition_table options('FILEHEADER'='shortfield,intfield,bigintfield,doublefield,stringfield,timestampfield,decimalfield,datefield,charfield,floatfield,complexdata', 'COMPLEX_DELIMITER_LEVEL_1'='$','COMPLEX_DELIMITER_LEVEL_2'='#');
> 3. Show Partitions :
> show partitions list_partition_table;
> +----------------------------------------------+--+
> |                  partition                   |
> +----------------------------------------------+--+
> | 0, stringfield = DEFAULT                     |
> | 1, stringfield = Asia                        |
> | 2, stringfield = China, Europe, NoPartition  |
> +----------------------------------------------+--+
> 3 rows selected (0.09 seconds)
> 4. Split Partition :
> ALTER TABLE list_partition_table SPLIT PARTITION(2) INTO('China', '(Europe, NoPartition)' );
> 5. Show Partition :
> show partitions list_partition_table;
> +---------------------------------------+--+
> |               partition               |
> +---------------------------------------+--+
> | 0, stringfield = DEFAULT              |
> | 1, stringfield = Asia                 |
> | 3, stringfield = China                |
> | 4, stringfield = Europe, NoPartition  |
> +---------------------------------------+--+
> 4 rows selected (0.065 seconds)
> The partitions get updated, but the data remains the same (unrepartitioned) in the old partition.


