GitHub user ravipesala opened a pull request:
https://github.com/apache/carbondata/pull/1706

[CARBONDATA-1863][PARTITION] Supported clean files for partition table

Cleans all invalid data from all segments, after a drop partition, through the clean files command.

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:

- [X] Any interfaces changed? NO
- [X] Any backward compatibility impacted? NO
- [X] Document update required? YES
- [X] Testing done: tests added
- [X] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/ravipesala/incubator-carbondata partition-clean

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/1706.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #1706

----

commit b7cdfc2862d578fe57cee146f3e5f70ef8bf968f
Author: ravipesala <ravi.pesala@...>
Date:   2017-12-21T04:53:27Z

    Supported clean files for partition table

----

---
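For context, a minimal end-to-end sketch of how this feature would be exercised from Spark SQL in Java follows. The table name, partition spec, and session setup are illustrative assumptions rather than code from this PR, and a real deployment would use a CarbonData-enabled SparkSession.

    import org.apache.spark.sql.SparkSession;

    public class CleanPartitionTableExample {
      public static void main(String[] args) {
        // Illustrative session setup; a real job would build a CarbonData-enabled session.
        SparkSession spark = SparkSession.builder()
            .appName("clean-partition-table-example")
            .master("local[*]")
            .getOrCreate();

        // Drop a partition (hypothetical table and partition spec), which leaves
        // stale data and index files behind in the existing segments.
        spark.sql("ALTER TABLE sales DROP PARTITION (country='US')");

        // Clean files then removes the invalid files from every segment,
        // which is what this PR enables for partitioned tables.
        spark.sql("CLEAN FILES FOR TABLE sales");

        spark.stop();
      }
    }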
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1706

SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2480/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1706

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2220/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1706

Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/998/

---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1706

SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2481/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1706

Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1004/

---
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1706#discussion_r158316333

--- Diff: core/src/main/java/org/apache/carbondata/core/metadata/PartitionMapFileStore.java ---

@@ -253,6 +282,96 @@ public void commitPartitions(String segmentPath, final String uniqueId, boolean
     }
   }
+
+  /**
+   * Clean up invalid data after drop partition in all segments of table
+   * @param table
+   * @param currentPartitions Current partitions of table
+   * @param forceDelete Whether it should be deleted force or check the time for an hour creation
+   *                    to delete data.
+   * @throws IOException
+   */
+  public void cleanSegments(
+      CarbonTable table,
+      List<String> currentPartitions,
+      boolean forceDelete) throws IOException {
+    SegmentStatusManager ssm = new SegmentStatusManager(table.getAbsoluteTableIdentifier());
+
+    CarbonTablePath carbonTablePath = CarbonStorePath
+        .getCarbonTablePath(table.getAbsoluteTableIdentifier().getTablePath(),
+            table.getAbsoluteTableIdentifier().getCarbonTableIdentifier());
+
+    LoadMetadataDetails[] details = ssm.readLoadMetadata(table.getMetaDataFilepath());
+    // scan through each segment.
+
+    for (LoadMetadataDetails segment : details) {
+
+      // if this segment is valid then only we will go for deletion of related
+      // dropped partition files. if the segment is mark for delete or compacted then any way
+      // it will get deleted.
+
+      if (segment.getSegmentStatus() == SegmentStatus.SUCCESS
+          || segment.getSegmentStatus() == SegmentStatus.LOAD_PARTIAL_SUCCESS) {
+        List<String> toBeDeletedIndexFiles = new ArrayList<>();
+        List<String> toBeDeletedDataFiles = new ArrayList<>();
+        // take the list of files from this segment.
+        String segmentPath = carbonTablePath.getCarbonDataDirectoryPath("0", segment.getLoadName());
+        String partitionFilePath = getPartitionFilePath(segmentPath);
+        if (partitionFilePath != null) {
+          PartitionMapper partitionMapper = readPartitionMap(partitionFilePath);
+          DataFileFooterConverter fileFooterConverter = new DataFileFooterConverter();
+          SegmentIndexFileStore indexFileStore = new SegmentIndexFileStore();
+          indexFileStore.readAllIIndexOfSegment(segmentPath);
+          Set<String> indexFilesFromSegment = indexFileStore.getCarbonIndexMap().keySet();
+          for (String indexFile : indexFilesFromSegment) {
+            // Check the partition information in the partiton mapper
+            List<String> indexPartitions = partitionMapper.partitionMap.get(indexFile);
+            if (indexPartitions == null || !currentPartitions.containsAll(indexPartitions)) {
+              Long fileTimestamp = CarbonUpdateUtil.getTimeStampAsLong(indexFile
+                  .substring(indexFile.lastIndexOf(CarbonCommonConstants.HYPHEN) + 1,
+                      indexFile.length() - CarbonTablePath.INDEX_FILE_EXT.length()));
+              if (CarbonUpdateUtil.isMaxQueryTimeoutExceeded(fileTimestamp) || forceDelete) {

--- End diff --

1. The merge index also should be read based on the transaction timestamp; otherwise, if drop partition is called and clean files is called immediately afterwards, a select can read previous index files which might get deleted while being read.

2. Alter drop partition can also recreate the merge index map with the same transaction timestamp. This can be handled along with the partition map transaction timestamp implementation.

---
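To make the deletion guard discussed above easier to follow, here is a small, hypothetical Java sketch of the same idea: an index file whose partitions are no longer current is removed only when forceDelete is set or the timestamp encoded in its name has aged past a query-timeout window. The class name, the one-hour timeout constant, and the example file name are assumptions for illustration, not code from the PR.

    import java.util.concurrent.TimeUnit;

    public class DropPartitionCleanupGuard {

      // Assumed default: keep files for one hour so in-flight queries can still read them.
      private static final long MAX_QUERY_TIMEOUT_MILLIS = TimeUnit.HOURS.toMillis(1);

      /**
       * Decides whether an index file that maps to a dropped partition may be deleted now.
       * Mirrors the check in the diff: delete when forced, or once the write timestamp
       * encoded in the file name is older than the query timeout.
       */
      public static boolean canDelete(String indexFileName, boolean forceDelete) {
        if (forceDelete) {
          return true;
        }
        // The timestamp sits between the last '-' and the ".carbonindex" extension
        // (file-name layout assumed for this example).
        int start = indexFileName.lastIndexOf('-') + 1;
        int end = indexFileName.length() - ".carbonindex".length();
        long fileTimestamp = Long.parseLong(indexFileName.substring(start, end));
        return System.currentTimeMillis() - fileTimestamp > MAX_QUERY_TIMEOUT_MILLIS;
      }

      public static void main(String[] args) {
        // A freshly written index file is kept unless deletion is forced.
        String indexFile = "0_batchno0-0-" + System.currentTimeMillis() + ".carbonindex";
        System.out.println(canDelete(indexFile, false)); // false: still inside the timeout window
        System.out.println(canDelete(indexFile, true));  // true: forced deletion
      }
    }

The reviewer's point is that the merge index read path should honour the same window and transaction timestamp, so that a concurrent select never picks up index files that this check is about to delete.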
Github user gvramana commented on the issue:
https://github.com/apache/carbondata/pull/1706

LGTM

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1706

Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2227/

---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1706

SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2487/

---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1706

SDV Build Fail, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2488/

---