GitHub user rahulforallp opened a pull request:
https://github.com/apache/carbondata/pull/2513

[WIP] blocking concurrent load if any column included as dictionary

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:

- [ ] Any interfaces changed?
- [ ] Any backward compatibility impacted?
- [ ] Document update required?
- [ ] Testing done
      Please provide details on
      - Whether new unit test cases have been added or why no new tests are required?
      - How it is tested? Please attach test report.
      - Is it a performance related change? Please attach the performance test report.
      - Any additional information to help reviewers in testing this change.
- [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/rahulforallp/incubator-carbondata concur_load

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/2513.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2513

----

commit bf294519ea67f5c9b4ae89a5ee5188ea6f9662c3
Author: rahul <rahul.kumar@...>
Date: 2018-07-16T17:40:22Z

    [WIP] blocking concurrent load if any column included as dictionary

----
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2513 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7233/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2513 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6006/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2513 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7239/ ---
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2513#discussion_r202973884

--- Diff: integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/dataload/TestLoadDataGeneral.scala ---
@@ -275,6 +279,34 @@ class TestLoadDataGeneral extends QueryTest with BeforeAndAfterEach {
       CarbonCommonConstants.BLOCKLET_SIZE_DEFAULT_VAL)
   }

+  test("block concurrent load if DICTIONARY_INCLUDE is specified") {

--- End diff --

Though it happens in most cases, this test cannot guarantee a parallel data load scenario. I think it is better to remove the test case.

---
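The reviewer's objection is that merely firing two loads and hoping the scheduler interleaves them does not prove a parallel case. One conventional way to make an overlap deterministic in a unit test is latch coordination: neither thread leaves the critical region until both have entered it. This is a generic sketch, not the approach taken in the PR; `OverlapProbe` and `fakeLoad` are hypothetical names standing in for the real load path.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: coordinate two threads with a latch so the test
// provably observes concurrency, instead of relying on scheduler timing.
public class OverlapProbe {
    public static int maxObservedConcurrency() {
        AtomicInteger active = new AtomicInteger();
        AtomicInteger peak = new AtomicInteger();
        CountDownLatch bothInside = new CountDownLatch(2);
        Runnable fakeLoad = () -> {
            int now = active.incrementAndGet();         // enter the "load"
            peak.accumulateAndGet(now, Math::max);      // record overlap depth
            bothInside.countDown();
            try {
                bothInside.await();                     // hold until both threads are inside
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            active.decrementAndGet();
        };
        Thread t1 = new Thread(fakeLoad);
        Thread t2 = new Thread(fakeLoad);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return peak.get();
    }
}
```

Because the latch forces both threads to be inside `fakeLoad` at once, the peak concurrency is 2 on every run, which is the guarantee the proposed test lacked.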
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2513 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6012/ ---
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2513#discussion_r202983275

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonLoadDataCommand.scala ---
@@ -253,14 +253,16 @@ case class CarbonLoadDataCommand(
       }
       // First system has to partition the data first and then call the load data
       LOGGER.info(s"Initiating Direct Load for the Table : ($dbName.$tableName)")
-      // Clean up the old invalid segment data before creating a new entry for new load.
-      SegmentStatusManager.deleteLoadsAndUpdateMetadata(table, false, currPartitions)
-      // add the start entry for the new load in the table status file
-      if (updateModel.isEmpty && !table.isHivePartitionTable) {
-        CarbonLoaderUtil.readAndUpdateLoadProgressInTableMeta(
-          carbonLoadModel,
-          isOverwriteTable)
-        isUpdateTableStatusRequired = true
+      CarbonLoadDataCommand.synchronized {

--- End diff --

Checking the segment status file to identify parallel loading cannot work, because it cannot distinguish whether a loading job is still running or was killed. So the only way to identify a parallel loading case is to use a lock. First identify tables with a dictionary column (not a direct dictionary) as tables that cannot support parallel load. Then, for those tables, data loading should acquire a lock.

---
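The two steps the reviewer describes — classify the table by its schema, then gate the load on a per-table lock — can be sketched roughly as below. This is a minimal illustration, not CarbonData's implementation: the PR's code is Scala and uses CarbonData's own lock machinery, while `ConcurrentLoadGuard`, `needsSerialLoad`, and the in-process `ReentrantLock` here are hypothetical stand-ins.

```java
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch: one lock per table path, acquired only for tables
// that declare a dictionary column and therefore cannot load in parallel.
public class ConcurrentLoadGuard {
    private static final ConcurrentHashMap<String, ReentrantLock> LOCKS =
        new ConcurrentHashMap<>();

    // Stand-in for the real schema check; in CarbonData this would inspect
    // the table's column metadata for DICTIONARY_INCLUDE columns.
    public static boolean needsSerialLoad(List<String> dictionaryColumns) {
        return !dictionaryColumns.isEmpty();
    }

    // Acquire the per-table lock before loading; a second load on the same
    // table blocks here until the first releases it.
    public static ReentrantLock lockFor(String tablePath) {
        ReentrantLock lock = LOCKS.computeIfAbsent(tablePath, p -> new ReentrantLock());
        lock.lock();
        return lock;
    }
}
```

A file- or ZooKeeper-based lock (as CarbonData actually uses) extends the same idea across JVMs, which a `synchronized` block on the command object cannot do.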
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2513 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7282/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2513 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6049/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2513 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7285/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2513 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6052/ ---
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2513#discussion_r203414480

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonLoadDataCommand.scala ---
@@ -355,6 +383,14 @@ case class CarbonLoadDataCommand(
           val file = FileFactory.getCarbonFile(partitionLocation, fileType)
           CarbonUtil.deleteFoldersAndFiles(file)
         }
+        if (isConcurrentLockRequired && !concurrentLoadLock.unlock()) {
+          LOGGER
+            .info("concurrent_load lock for table" + table.getTablePath +
+                  "has been released successfully")
+        } else {
+          LOGGER.error(
+            "Unable to unlock concurrent_load lock for table" + table.getTablePath);
+        }

--- End diff --

Unlocking should be in finally.

---
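The point of the review comment is that if the load throws after the lock is taken, an unlock placed on the normal path is skipped and the table stays locked forever. The standard shape is lock, `try`, `finally { unlock }`. A minimal sketch, assuming a hypothetical `doLoad` step in place of the real load:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LoadWithFinally {
    // Hypothetical stand-in for the data load step.
    static void doLoad(boolean fail) {
        if (fail) throw new RuntimeException("load failed");
    }

    // The unlock sits in finally, so the lock is released on both the
    // success path and the failure path.
    public static boolean loadGuarded(ReentrantLock lock, boolean fail) {
        lock.lock();
        try {
            doLoad(fail);
            return true;
        } catch (RuntimeException e) {
            return false;       // load failed, but finally still runs
        } finally {
            lock.unlock();
        }
    }
}
```

With the unlock in `finally`, a failed load leaves the lock free for the next attempt instead of deadlocking subsequent loads on the same table.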
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2513#discussion_r203414767

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonLoadDataCommand.scala ---
@@ -253,15 +257,39 @@ case class CarbonLoadDataCommand(
       }
       // First system has to partition the data first and then call the load data
       LOGGER.info(s"Initiating Direct Load for the Table : ($dbName.$tableName)")
-      // Clean up the old invalid segment data before creating a new entry for new load.
-      SegmentStatusManager.deleteLoadsAndUpdateMetadata(table, false, currPartitions)
-      // add the start entry for the new load in the table status file
-      if (updateModel.isEmpty && !table.isHivePartitionTable) {
-        CarbonLoaderUtil.readAndUpdateLoadProgressInTableMeta(
-          carbonLoadModel,
-          isOverwriteTable)
-        isUpdateTableStatusRequired = true
+

--- End diff --

Add a function to acquire and release the concurrent lock.

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2513 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7293/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2513 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6060/ ---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2513 SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/5911/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2513 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7296/ ---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2513 SDV Build Fail, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/5914/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2513 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6063/ ---
Github user rahulforallp commented on the issue:
https://github.com/apache/carbondata/pull/2513 retest sdv please ---