[GitHub] xuchuanyin commented on a change in pull request #3102: [CARBONDATA-3272]fix ArrayIndexOutOfBoundsException of horizontal compaction during update, when cardinality changes within a segment


GitBox
xuchuanyin commented on a change in pull request #3102: [CARBONDATA-3272]fix ArrayIndexOutOfBoundsException of horizontal compaction during update, when cardinality changes within a segment
URL: https://github.com/apache/carbondata/pull/3102#discussion_r251230461
 
 

 ##########
 File path: processing/src/main/java/org/apache/carbondata/processing/merger/CarbonCompactionExecutor.java
 ##########
 @@ -140,25 +139,72 @@ public CarbonCompactionExecutor(Map<String, TaskBlockInfo> segmentMapping,
               || !CarbonCompactionUtil.isSorted(listMetadata.get(0));
       for (String task : taskBlockListMapping) {
         list = taskBlockInfo.getTableBlockInfoList(task);
-        Collections.sort(list);
-        LOGGER.info(
-            "for task -" + task + "- in segment id -" + segmentId + "- block size is -" + list
-                .size());
-        queryModel.setTableBlockInfos(list);
-        if (sortingRequired) {
-          resultList.get(CarbonCompactionUtil.UNSORTED_IDX).add(
-              new RawResultIterator(executeBlockList(list, segmentId, task, configuration),
-                  sourceSegProperties, destinationSegProperties, false));
-        } else {
-          resultList.get(CarbonCompactionUtil.SORTED_IDX).add(
-              new RawResultIterator(executeBlockList(list, segmentId, task, configuration),
-                  sourceSegProperties, destinationSegProperties, false));
+        // During update there is a chance that the cardinality changes within the segment,
+        // which can cause a failure while converting the row. So get all the blocks present in
+        // a task, split them into multiple lists of the same key length, and create a separate
+        // RawResultIterator for each such list. If all the blocks have the same key length,
+        // then make a single RawResultIterator for all the blocks.
+        List<List<TableBlockInfo>> listOfTableBlocksBasedOnKeyLength =
+            getListOfTableBlocksBasedOnKeyLength(list);
+        for (List<TableBlockInfo> tableBlockInfoList : listOfTableBlocksBasedOnKeyLength) {
+          Collections.sort(tableBlockInfoList);
+          LOGGER.info("for task -" + task + "- in segment id -" + segmentId
+              + "- block size is -" + tableBlockInfoList.size());
+          queryModel.setTableBlockInfos(tableBlockInfoList);
+          if (sortingRequired) {
 
 Review comment:
   Please optimize the following branches since they are nearly identical.
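
A minimal sketch of the kind of consolidation being asked for, using only identifiers visible in the hunk above. It assumes it sits inside the per-key-length loop where these variables are in scope, and that the per-key-length sub-list tableBlockInfoList is what gets passed to executeBlockList:

```java
// Both branches construct the same RawResultIterator; only the target bucket
// differs, so compute the bucket index once and keep a single call site.
int targetIdx = sortingRequired
    ? CarbonCompactionUtil.UNSORTED_IDX
    : CarbonCompactionUtil.SORTED_IDX;
resultList.get(targetIdx).add(
    new RawResultIterator(
        executeBlockList(tableBlockInfoList, segmentId, task, configuration),
        sourceSegProperties, destinationSegProperties, false));
```

This preserves the original mapping (sortingRequired selects UNSORTED_IDX, otherwise SORTED_IDX) while removing the duplicated constructor call.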
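
Separately, the hunk relies on getListOfTableBlocksBasedOnKeyLength to split a task's blocks into sub-lists that share a key length; its body is not shown here. A minimal, generic sketch of that kind of grouping, with keyLengthOf as a hypothetical stand-in for however the key length is read from a block (not a CarbonData API), could look like this:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.ToIntFunction;

// Sketch only: groups blocks into sub-lists that share the same key length,
// so each sub-list can be scanned with a consistent row-conversion layout.
public final class KeyLengthGrouping {

  private KeyLengthGrouping() { }

  public static <B> List<List<B>> groupByKeyLength(List<B> blocks,
      ToIntFunction<B> keyLengthOf) { // keyLengthOf is hypothetical, not a CarbonData API
    Map<Integer, List<B>> byKeyLength = new LinkedHashMap<>();
    for (B block : blocks) {
      byKeyLength
          .computeIfAbsent(keyLengthOf.applyAsInt(block), k -> new ArrayList<>())
          .add(block);
    }
    // One sub-list per distinct key length; a single sub-list when all blocks agree.
    return new ArrayList<>(byKeyLength.values());
  }
}
```

When every block in the task has the same key length, the map holds a single entry and the caller falls back to a single RawResultIterator, matching the behaviour described in the code comment above.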

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[hidden email]


With regards,
Apache Git Services