GitHub user QiangCai opened a pull request:
https://github.com/apache/carbondata/pull/1773

[CARBONDATA-1999] Block drop table and delete streaming segment while streaming is in progress

1. Block drop table while streaming is in progress
2. Block delete streaming segment

- [x] Any interfaces changed? no
- [x] Any backward compatibility impacted? no
- [x] Document update required? yes
- [x] Testing done
  - Whether new unit test cases have been added or why no new tests are required? added
  - How it is tested? Please attach test report. UT
  - Is it a performance related change? Please attach the performance test report.
  - Any additional information to help reviewers in testing this change.
- [x] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA. small changes

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/QiangCai/carbondata streaming_block

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/1773.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #1773

----

commit 9773d90b06ee45b1fc362032765df1716898b6af
Author: QiangCai <qiangcai@...>
Date: 2018-01-08T03:14:00Z

    block drop table and delete segment while streaming is in progress

---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1773 SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2785/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1773 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1389/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1773 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2624/ ---
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1773#discussion_r160080008

--- Diff: core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentStatusManager.java ---
@@ -495,11 +495,16 @@ public static void writeLoadDetailsIntoFile(String dataLoadLocation,
       }
       // if the segment status is overwrite in progress, then no need to delete that.
       if (SegmentStatus.INSERT_OVERWRITE_IN_PROGRESS == loadMetadata.getSegmentStatus()) {
-        LOG.error("Cannot delete the segemnt " + loadId + " which is load overwrite " +
+        LOG.error("Cannot delete the segment " + loadId + " which is load overwrite " +
             "in progress");
         invalidLoadIds.add(loadId);
         return invalidLoadIds;
       }
+      if (SegmentStatus.STREAMING == loadMetadata.getSegmentStatus()) {
--- End diff --

get the status once and use else-if

---
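The refactor the reviewer is asking for, as a minimal self-contained sketch. The `SegmentStatus` enum and `checkDeletable` helper here are simplified stand-ins for the CarbonData types, not the real API; the point is reading the status into a local once and branching with a single if/else-if chain instead of calling `getSegmentStatus()` in every condition.

```java
public class SegmentStatusCheck {

  // Simplified stand-in for org.apache.carbondata.core.statusmanager.SegmentStatus
  enum SegmentStatus { SUCCESS, INSERT_OVERWRITE_IN_PROGRESS, STREAMING }

  /** Returns an error message if the segment cannot be deleted, or null if it can. */
  static String checkDeletable(String loadId, SegmentStatus status) {
    // read the status once, then use one else-if chain
    if (status == SegmentStatus.INSERT_OVERWRITE_IN_PROGRESS) {
      return "Cannot delete the segment " + loadId + " which is load overwrite in progress";
    } else if (status == SegmentStatus.STREAMING) {
      return "Cannot delete the segment " + loadId + " which is streaming in progress";
    }
    return null; // deletable
  }
}
```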
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1773#discussion_r160080069

--- Diff: core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentStatusManager.java ---
@@ -543,6 +548,11 @@ public static void writeLoadDetailsIntoFile(String dataLoadLocation,
             + "as the segment has been compacted.");
         continue;
       }
+      if (SegmentStatus.STREAMING == loadMetadata.getSegmentStatus()) {
--- End diff --

get the status once and use else-if

---
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1773#discussion_r160080074

--- Diff: core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentStatusManager.java ---
@@ -543,6 +548,11 @@ public static void writeLoadDetailsIntoFile(String dataLoadLocation,
             + "as the segment has been compacted.");
         continue;
       }
+      if (SegmentStatus.STREAMING == loadMetadata.getSegmentStatus()) {
+        LOG.info("Ignoring the segment : " + loadMetadata.getLoadName() +
+            "as the segment is streaming in progress.");
+        continue;
+      }
       if (SegmentStatus.MARKED_FOR_DELETE != loadMetadata.getSegmentStatus() &&
--- End diff --

get the status once and use else-if

---
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1773#discussion_r160080261

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonDropTableCommand.scala ---
@@ -55,6 +55,10 @@ case class CarbonDropTableCommand(
       }
       LOGGER.audit(s"Deleting table [$tableName] under database [$dbName]")
       carbonTable = CarbonEnv.getCarbonTable(databaseNameOp, tableName)(sparkSession)
+      if (carbonTable.isStreamingTable) {
+        // streaming table should acquire streaming.lock
+        carbonLocks += CarbonLockUtil.getLockObject(identifier, LockUsage.STREAMING_LOCK)
--- End diff --

better to assign to locksToBeAcquired at line 48 when initializing it.
```
val locksToBeAcquired = if (streaming) {
  3 locks
} else {
  2 locks
}
```

---
Github user QiangCai commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1773#discussion_r160089541

--- Diff: core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentStatusManager.java ---
@@ -495,11 +495,16 @@ public static void writeLoadDetailsIntoFile(String dataLoadLocation,
       }
       // if the segment status is overwrite in progress, then no need to delete that.
       if (SegmentStatus.INSERT_OVERWRITE_IN_PROGRESS == loadMetadata.getSegmentStatus()) {
-        LOG.error("Cannot delete the segemnt " + loadId + " which is load overwrite " +
+        LOG.error("Cannot delete the segment " + loadId + " which is load overwrite " +
             "in progress");
         invalidLoadIds.add(loadId);
         return invalidLoadIds;
       }
+      if (SegmentStatus.STREAMING == loadMetadata.getSegmentStatus()) {
--- End diff --

fixed

---
Github user QiangCai commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1773#discussion_r160089558

--- Diff: core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentStatusManager.java ---
@@ -543,6 +548,11 @@ public static void writeLoadDetailsIntoFile(String dataLoadLocation,
             + "as the segment has been compacted.");
         continue;
       }
+      if (SegmentStatus.STREAMING == loadMetadata.getSegmentStatus()) {
+        LOG.info("Ignoring the segment : " + loadMetadata.getLoadName() +
+            "as the segment is streaming in progress.");
+        continue;
+      }
       if (SegmentStatus.MARKED_FOR_DELETE != loadMetadata.getSegmentStatus() &&
--- End diff --

fixed

---
Github user QiangCai commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1773#discussion_r160089597

--- Diff: core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentStatusManager.java ---
@@ -543,6 +548,11 @@ public static void writeLoadDetailsIntoFile(String dataLoadLocation,
             + "as the segment has been compacted.");
         continue;
       }
+      if (SegmentStatus.STREAMING == loadMetadata.getSegmentStatus()) {
--- End diff --

fixed

---
Github user QiangCai commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1773#discussion_r160090661

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonDropTableCommand.scala ---
@@ -55,6 +55,10 @@ case class CarbonDropTableCommand(
       }
       LOGGER.audit(s"Deleting table [$tableName] under database [$dbName]")
       carbonTable = CarbonEnv.getCarbonTable(databaseNameOp, tableName)(sparkSession)
+      if (carbonTable.isStreamingTable) {
+        // streaming table should acquire streaming.lock
+        carbonLocks += CarbonLockUtil.getLockObject(identifier, LockUsage.STREAMING_LOCK)
--- End diff --

Here we need to acquire LockUsage.METADATA_LOCK before invoking CarbonEnv.getCarbonTable to read the metadata.

---
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1773#discussion_r160100843

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonDropTableCommand.scala ---
@@ -55,6 +55,10 @@ case class CarbonDropTableCommand(
       }
       LOGGER.audit(s"Deleting table [$tableName] under database [$dbName]")
       carbonTable = CarbonEnv.getCarbonTable(databaseNameOp, tableName)(sparkSession)
+      if (carbonTable.isStreamingTable) {
+        // streaming table should acquire streaming.lock
+        carbonLocks += CarbonLockUtil.getLockObject(identifier, LockUsage.STREAMING_LOCK)
--- End diff --

ok

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1773 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1394/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1773 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2629/ ---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1773 SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2791/ ---
Github user QiangCai commented on the issue:
https://github.com/apache/carbondata/pull/1773 retest this please ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1773 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1403/ ---
Github user QiangCai commented on the issue:
https://github.com/apache/carbondata/pull/1773 retest this please ---