GitHub user akashrn5 opened a pull request:

https://github.com/apache/carbondata/pull/2533

[CARBONDATA-2765] Handle flat folder support for implicit column

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:

- [ ] Any interfaces changed?
- [ ] Any backward compatibility impacted?
- [ ] Document update required?
- [ ] Testing done
      Please provide details on
      - Whether new unit test cases have been added or why no new tests are required?
      - How it is tested? Please attach test report.
      - Is it a performance related change? Please attach the performance test report.
      - Any additional information to help reviewers in testing this change.
- [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/akashrn5/incubator-carbondata impli

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/2533.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2533

----

commit b5fc1f061b238be3d77445e4eb6b15d273e26069
Author: akashrn5 <akashnilugal@...>
Date:   2018-07-20T07:16:32Z

    handle flat folder support for implicit column

---
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2533

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7364/

---
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2533

Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6125/

---
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2533

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7367/

---
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2533

Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6129/

---
Github user akashrn5 commented on the issue:

https://github.com/apache/carbondata/pull/2533

retest this please

---
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2533

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7371/

---
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2533

Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6132/

---
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/2533

SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/5942/

---
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2533#discussion_r204214380

--- Diff: core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockDataMap.java ---
@@ -642,7 +643,17 @@ private boolean addBlockBasedOnMinMaxValue(FilterExecuter filterExecuter, byte[][] maxValue,
       byte[][] minValue, String filePath, int blockletId) {
     BitSet bitSet = null;
     if (filterExecuter instanceof ImplicitColumnFilterExecutor) {
-      String uniqueBlockPath = filePath.substring(filePath.lastIndexOf("/Part") + 1);
+      String uniqueBlockPath;
+      String blockName = filePath.substring(filePath.lastIndexOf("/") + 1);
+      if (filePath.contains("/Fact/Part0/Segment_")) {
--- End diff --

Use PR https://github.com/apache/carbondata/pull/2503 and check `CarbonUtil.isStandardCarbonTable` instead of doing a contains check.
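For context, a minimal sketch of the direction the reviewer suggests, assuming the `CarbonUtil.isStandardCarbonTable` helper from PR #2503 takes the table as an argument (its exact signature may differ):

```java
// Sketch only: replace the brittle substring check with a table-level helper.
// CarbonUtil.isStandardCarbonTable is the helper referenced from PR #2503;
// its exact signature is an assumption here.
String blockName = filePath.substring(filePath.lastIndexOf("/") + 1);
String uniqueBlockPath;
if (CarbonUtil.isStandardCarbonTable(carbonTable)) {
  // standard layout: .../Fact/Part0/Segment_<n>/<blockName>
  uniqueBlockPath = filePath.substring(filePath.lastIndexOf("/Part") + 1);
} else {
  // flat-folder layout: data files sit directly under the table path
  uniqueBlockPath = blockName;
}
```

---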
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2533#discussion_r204214523

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/hive/CarbonRelation.scala ---
@@ -173,15 +175,38 @@ case class CarbonRelation(
       .getValidAndInvalidSegments.getValidSegments.asScala
     var size = 0L
     // for each segment calculate the size
-    segments.foreach {validSeg =>
-      // for older store
-      if (null != validSeg.getLoadMetadataDetails.getDataSize &&
-          null != validSeg.getLoadMetadataDetails.getIndexSize) {
-        size = size + validSeg.getLoadMetadataDetails.getDataSize.toLong +
-          validSeg.getLoadMetadataDetails.getIndexSize.toLong
-      } else {
-        size = size + FileFactory.getDirectorySize(
-          CarbonTablePath.getSegmentPath(tablePath, validSeg.getSegmentNo))
+    if (carbonTable.getTableInfo.getFactTable.getTableProperties.asScala
+      .get(CarbonCommonConstants.FLAT_FOLDER).isDefined &&
+      carbonTable.getTableInfo.getFactTable.getTableProperties.asScala
--- End diff --

Why is this check required? Even in the flat-folder case we can still get the data size from tablestatus. I don't think these changes are required at all.

---
Github user akashrn5 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2533#discussion_r204282454

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/hive/CarbonRelation.scala ---
@@ -173,15 +175,38 @@ case class CarbonRelation(
       .getValidAndInvalidSegments.getValidSegments.asScala
     var size = 0L
     // for each segment calculate the size
-    segments.foreach {validSeg =>
-      // for older store
-      if (null != validSeg.getLoadMetadataDetails.getDataSize &&
-          null != validSeg.getLoadMetadataDetails.getIndexSize) {
-        size = size + validSeg.getLoadMetadataDetails.getDataSize.toLong +
-          validSeg.getLoadMetadataDetails.getIndexSize.toLong
-      } else {
-        size = size + FileFactory.getDirectorySize(
-          CarbonTablePath.getSegmentPath(tablePath, validSeg.getSegmentNo))
+    if (carbonTable.getTableInfo.getFactTable.getTableProperties.asScala
+      .get(CarbonCommonConstants.FLAT_FOLDER).isDefined &&
+      carbonTable.getTableInfo.getFactTable.getTableProperties.asScala
--- End diff --

If `validSeg.getLoadMetadataDetails.getDataSize` or `validSeg.getLoadMetadataDetails.getIndexSize` is null, it tries to get the size from the segment path, which does not exist in the flat-folder case, so it throws a "segment does not exist" exception. I hit this exception, which is why I handled it like this.

---
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2533#discussion_r204284556

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/hive/CarbonRelation.scala ---
@@ -173,15 +175,38 @@ case class CarbonRelation(
       .getValidAndInvalidSegments.getValidSegments.asScala
     var size = 0L
     // for each segment calculate the size
-    segments.foreach {validSeg =>
-      // for older store
-      if (null != validSeg.getLoadMetadataDetails.getDataSize &&
-          null != validSeg.getLoadMetadataDetails.getIndexSize) {
-        size = size + validSeg.getLoadMetadataDetails.getDataSize.toLong +
-          validSeg.getLoadMetadataDetails.getIndexSize.toLong
-      } else {
-        size = size + FileFactory.getDirectorySize(
-          CarbonTablePath.getSegmentPath(tablePath, validSeg.getSegmentNo))
+    if (carbonTable.getTableInfo.getFactTable.getTableProperties.asScala
+      .get(CarbonCommonConstants.FLAT_FOLDER).isDefined &&
+      carbonTable.getTableInfo.getFactTable.getTableProperties.asScala
--- End diff --

No, it should be present in the flat-folder case as well. If it is not present, fix the load flow so that the data size is written to tablestatus, rather than working around it this way.
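For illustration, a rough Java sketch of the approach being suggested: sum segment sizes from the tablestatus metadata (`LoadMetadataDetails`) for all table layouts, and fall back to a directory scan only for legacy stores. The types and helpers (`Segment`, `FileFactory`, `CarbonTablePath`) are the ones used in the snippet above; `validSegments` stands in for the valid segments collected there.

```java
// Illustrative sketch, not the final patch: sizes come from tablestatus
// (LoadMetadataDetails) for every segment, including flat-folder tables;
// the directory scan remains only as a legacy-store fallback.
long size = 0L;
for (Segment validSeg : validSegments) {
  LoadMetadataDetails load = validSeg.getLoadMetadataDetails();
  if (load.getDataSize() != null && load.getIndexSize() != null) {
    // sizes recorded at load time (stored as strings, per the .toLong above)
    size += Long.parseLong(load.getDataSize()) + Long.parseLong(load.getIndexSize());
  } else {
    // older store without recorded sizes: fall back to scanning the segment folder
    size += FileFactory.getDirectorySize(
        CarbonTablePath.getSegmentPath(tablePath, validSeg.getSegmentNo()));
  }
}
```

---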
Github user akashrn5 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2533#discussion_r204284734

--- Diff: core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockDataMap.java ---
@@ -642,7 +643,17 @@ private boolean addBlockBasedOnMinMaxValue(FilterExecuter filterExecuter, byte[][] maxValue,
       byte[][] minValue, String filePath, int blockletId) {
     BitSet bitSet = null;
     if (filterExecuter instanceof ImplicitColumnFilterExecutor) {
-      String uniqueBlockPath = filePath.substring(filePath.lastIndexOf("/Part") + 1);
+      String uniqueBlockPath;
+      String blockName = filePath.substring(filePath.lastIndexOf("/") + 1);
+      if (filePath.contains("/Fact/Part0/Segment_")) {
--- End diff --

I was thinking of adding a utility function, but we don't have the CarbonTable here, so I went with this approach.

---
Github user akashrn5 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2533#discussion_r204291557

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/hive/CarbonRelation.scala ---
@@ -173,15 +175,38 @@ case class CarbonRelation(
       .getValidAndInvalidSegments.getValidSegments.asScala
     var size = 0L
     // for each segment calculate the size
-    segments.foreach {validSeg =>
-      // for older store
-      if (null != validSeg.getLoadMetadataDetails.getDataSize &&
-          null != validSeg.getLoadMetadataDetails.getIndexSize) {
-        size = size + validSeg.getLoadMetadataDetails.getDataSize.toLong +
-          validSeg.getLoadMetadataDetails.getIndexSize.toLong
-      } else {
-        size = size + FileFactory.getDirectorySize(
-          CarbonTablePath.getSegmentPath(tablePath, validSeg.getSegmentNo))
+    if (carbonTable.getTableInfo.getFactTable.getTableProperties.asScala
+      .get(CarbonCommonConstants.FLAT_FOLDER).isDefined &&
+      carbonTable.getTableInfo.getFactTable.getTableProperties.asScala
--- End diff --

ok

---
Github user akashrn5 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2533#discussion_r204297035

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/hive/CarbonRelation.scala ---
@@ -173,15 +175,38 @@ case class CarbonRelation(
       .getValidAndInvalidSegments.getValidSegments.asScala
     var size = 0L
     // for each segment calculate the size
-    segments.foreach {validSeg =>
-      // for older store
-      if (null != validSeg.getLoadMetadataDetails.getDataSize &&
-          null != validSeg.getLoadMetadataDetails.getIndexSize) {
-        size = size + validSeg.getLoadMetadataDetails.getDataSize.toLong +
-          validSeg.getLoadMetadataDetails.getIndexSize.toLong
-      } else {
-        size = size + FileFactory.getDirectorySize(
-          CarbonTablePath.getSegmentPath(tablePath, validSeg.getSegmentNo))
+    if (carbonTable.getTableInfo.getFactTable.getTableProperties.asScala
+      .get(CarbonCommonConstants.FLAT_FOLDER).isDefined &&
+      carbonTable.getTableInfo.getFactTable.getTableProperties.asScala
--- End diff --

I have handled it as you said; these changes are not needed.

---
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2533#discussion_r204316346

--- Diff: core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockDataMap.java ---
@@ -642,7 +643,17 @@ private boolean addBlockBasedOnMinMaxValue(FilterExecuter filterExecuter, byte[][] maxValue,
       byte[][] minValue, String filePath, int blockletId) {
     BitSet bitSet = null;
     if (filterExecuter instanceof ImplicitColumnFilterExecutor) {
-      String uniqueBlockPath = filePath.substring(filePath.lastIndexOf("/Part") + 1);
+      String uniqueBlockPath;
+      String blockName = filePath.substring(filePath.lastIndexOf("/") + 1);
+      if (filePath.contains("/Fact/Part0/Segment_")) {
--- End diff --

`CarbonTable` is available in BlockletDataMapModel, so you can do the check.
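A short sketch of what that could look like, using the model reference the PR adds at this point (trimmed to a boolean in the follow-up comment below); `getCarbonTable()` is an assumed accessor name:

```java
// Sketch only: resolve the layout from the CarbonTable carried by the
// datamap model instead of matching on the file path.
if (filterExecuter instanceof ImplicitColumnFilterExecutor) {
  CarbonTable carbonTable = blockletDataMapModel.getCarbonTable(); // assumed accessor
  String blockName = filePath.substring(filePath.lastIndexOf("/") + 1);
  String uniqueBlockPath = CarbonUtil.isStandardCarbonTable(carbonTable)
      ? filePath.substring(filePath.lastIndexOf("/Part") + 1)
      : blockName;
  // ... continue min/max-based pruning with uniqueBlockPath ...
}
```

---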
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2533

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7390/

---
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2533

Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6151/

---
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2533#discussion_r204331739

--- Diff: core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockDataMap.java ---
@@ -97,10 +98,15 @@
    * partition table and non transactional table
    */
   protected boolean isFilePathStored;
+  /**
+   * datamap model
+   */
+  protected BlockletDataMapModel blockletDataMapModel;
--- End diff --

Don't keep the complete `blockletDataMapModel` at class level; it wastes memory because these objects are kept in the LRU cache. Just keep a boolean `isStandardCarbonTable` at class level.
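A minimal sketch of that suggestion, deriving the boolean once during init and dropping the model reference; the accessor and helper names are assumptions, and the init signature is abbreviated:

```java
// Sketch only: cache just the boolean so the LRU cache does not retain the
// whole BlockletDataMapModel. getCarbonTable() and
// CarbonUtil.isStandardCarbonTable(...) are assumed names.
/** true for the standard .../Fact/Part0/Segment_x layout, false for flat folder */
protected boolean isStandardCarbonTable;

public void init(DataMapModel dataMapModel) throws IOException {
  BlockletDataMapModel model = (BlockletDataMapModel) dataMapModel;
  this.isStandardCarbonTable =
      CarbonUtil.isStandardCarbonTable(model.getCarbonTable());
  // ... existing init logic ...
}
```

---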