[GitHub] [carbondata] marchpure opened a new pull request #3793: [CARBONDATA-3858] Increase the parallelism of CDC deltafiles processing

[GitHub] [carbondata] marchpure opened a new pull request #3793: [CARBONDATA-3858] Increase the parallelism of CDC deltafiles processing

GitBox

marchpure opened a new pull request #3793:
URL: https://github.com/apache/carbondata/pull/3793


   ### Why is this PR needed?
    In the CDC flow, the parallelism of processing delta files equals the number of executors. This insufficient parallelism limits CDC performance.
   
    ### What changes were proposed in this PR?
    Set the parallelism of processing delta files to the configured value of 'spark.sql.shuffle.partitions'.
    Notably, this will not increase the delta file count, because the delta files are combined.
   
    ### Does this PR introduce any user interface change?
    - No
   
    ### Is any new testcase added?
    - Yes
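The effect of the change can be shown with a small, Spark-free sketch. All numbers and names below are illustrative assumptions, not CarbonData APIs: before this PR the delta-file RDD was coalesced down to the executor count, while after it the task count follows spark.sql.shuffle.partitions.

```scala
// Hedged illustration of the parallelism change. `executorCount` and
// `shufflePartitions` are assumed example values; 200 is Spark's default
// for spark.sql.shuffle.partitions.
object ParallelismSketch extends App {
  val workItems = 1 to 1000        // stand-in for delta-file work units
  val executorCount = 4            // what coalesce(...) capped the task count at
  val shufflePartitions = 200      // configured spark.sql.shuffle.partitions

  // Before: at most `executorCount` tasks run over the delta files.
  val tasksBefore = math.min(workItems.size, executorCount)
  // After: up to `shufflePartitions` tasks, i.e. far more parallelism.
  val tasksAfter = math.min(workItems.size, shufflePartitions)

  assert(tasksBefore == 4)
  assert(tasksAfter == 200)
  println(s"tasks before: $tasksBefore, after: $tasksAfter")
}
```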
   
       
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[hidden email]



[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3793: [CARBONDATA-3858] Increase the parallelism of CDC deltafiles processing

GitBox

CarbonDataQA1 commented on pull request #3793:
URL: https://github.com/apache/carbondata/pull/3793#issuecomment-646096200


   Build Success with Spark 2.4.5, Please check CI http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1445/
   



[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3793: [CARBONDATA-3858] Increase the parallelism of CDC deltafiles processing

GitBox

CarbonDataQA1 commented on pull request #3793:
URL: https://github.com/apache/carbondata/pull/3793#issuecomment-646097526


   Build Success with Spark 2.3.4, Please check CI http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3171/
   



[GitHub] [carbondata] akashrn5 commented on a change in pull request #3793: [CARBONDATA-3858] Increase the parallelism of CDC intermediate files processing

GitBox

akashrn5 commented on a change in pull request #3793:
URL: https://github.com/apache/carbondata/pull/3793#discussion_r442662060



##########
File path: integration/spark/src/test/scala/org/apache/carbondata/spark/testsuite/merge/MergeTestCase.scala
##########
@@ -702,10 +716,25 @@ class MergeTestCase extends QueryTest with BeforeAndAfterAll {
       insertExpr(insertMap).
       whenMatched("B.deleted=true").
       delete().execute()
+    assert(getDeleteDeltaFileCount("target") == 1)
     checkAnswer(sql("select count(*) from target"), Seq(Row(3)))
     checkAnswer(sql("select * from target order by key"), Seq(Row("c", "200"), Row("d", "3"), Row("e", "100")))
   }
 
+  private def getDeleteDeltaFileCount(tableName: String): Int = {
+    val table = CarbonEnv.getCarbonTable(None, tableName)(sqlContext.sparkSession)
+    val path = table.getTablePath

Review comment:
       take the path only up to the segment dir, that is, up to `Part0`

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/mutation/merge/CarbonMergeDataSetCommand.scala
##########
@@ -51,6 +52,7 @@ import org.apache.carbondata.core.mutate.CarbonUpdateUtil
 import org.apache.carbondata.core.util.CarbonProperties
 import org.apache.carbondata.processing.loading.FailureCauses
 
+

Review comment:
       revert this

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/mutation/merge/CarbonMergeDataSetCommand.scala
##########
@@ -269,11 +271,10 @@ case class CarbonMergeDataSetCommand(
       new SparkCarbonFileFormat().prepareWrite(sparkSession, job,
         Map(), schema)
     val config = SparkSQLUtil.broadCastHadoopConf(sparkSession.sparkContext, job.getConfiguration)
-    (frame.rdd.coalesce(DistributionUtil.getConfiguredExecutors(sparkSession.sparkContext)).
-      mapPartitionsWithIndex { case (index, iter) =>
+    (frame.rdd.mapPartitionsWithIndex { case (index, iter) =>
         CarbonProperties.getInstance().addProperty(CarbonLoadOptionConstants
           .ENABLE_CARBON_LOAD_DIRECT_WRITE_TO_STORE_PATH, "true")
-        val confB = config.value.value
+        val confB = new Configuration(config.value.value)

Review comment:
       why is this change required? `confB` is already of type `Configuration`

##########
File path: integration/spark/src/test/scala/org/apache/carbondata/spark/testsuite/merge/MergeTestCase.scala
##########
@@ -702,10 +716,25 @@ class MergeTestCase extends QueryTest with BeforeAndAfterAll {
       insertExpr(insertMap).
       whenMatched("B.deleted=true").
       delete().execute()
+    assert(getDeleteDeltaFileCount("target") == 1)
     checkAnswer(sql("select count(*) from target"), Seq(Row(3)))
     checkAnswer(sql("select * from target order by key"), Seq(Row("c", "200"), Row("d", "3"), Row("e", "100")))
   }
 
+  private def getDeleteDeltaFileCount(tableName: String): Int = {
+    val table = CarbonEnv.getCarbonTable(None, tableName)(sqlContext.sparkSession)
+    val path = table.getTablePath
+    val deleteDeltaFiles = FileFactory.getCarbonFile(path).listFiles(true, new CarbonFileFilter {
+      override def accept(file: CarbonFile): Boolean = file.getName.endsWith(CarbonCommonConstants
+        .DELETE_DELTA_FILE_EXT)
+    })
+    if (deleteDeltaFiles != null) {

Review comment:
       no need for a null check; it returns an array, so directly return `deleteDeltaFiles.size`

##########
File path: integration/spark/src/test/scala/org/apache/carbondata/spark/testsuite/merge/MergeTestCase.scala
##########
@@ -547,6 +558,7 @@ class MergeTestCase extends QueryTest with BeforeAndAfterAll {
     CarbonMergeDataSetCommand(dwSelframe,
       odsframe,
       MergeDataSetMatches(col("A.id").equalTo(col("B.id")), matches.toList)).run(sqlContext.sparkSession)
+

Review comment:
       revert this change





[GitHub] [carbondata] marchpure commented on a change in pull request #3793: [CARBONDATA-3858] Increase the parallelism of CDC intermediate files processing

GitBox

marchpure commented on a change in pull request #3793:
URL: https://github.com/apache/carbondata/pull/3793#discussion_r442741175



##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/mutation/merge/CarbonMergeDataSetCommand.scala
##########
@@ -51,6 +52,7 @@ import org.apache.carbondata.core.mutate.CarbonUpdateUtil
 import org.apache.carbondata.core.util.CarbonProperties
 import org.apache.carbondata.processing.loading.FailureCauses
 
+

Review comment:
       modified





[GitHub] [carbondata] marchpure commented on a change in pull request #3793: [CARBONDATA-3858] Increase the parallelism of CDC intermediate files processing

GitBox

marchpure commented on a change in pull request #3793:
URL: https://github.com/apache/carbondata/pull/3793#discussion_r442741921



##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/mutation/merge/CarbonMergeDataSetCommand.scala
##########
@@ -269,11 +271,10 @@ case class CarbonMergeDataSetCommand(
       new SparkCarbonFileFormat().prepareWrite(sparkSession, job,
         Map(), schema)
     val config = SparkSQLUtil.broadCastHadoopConf(sparkSession.sparkContext, job.getConfiguration)
-    (frame.rdd.coalesce(DistributionUtil.getConfiguredExecutors(sparkSession.sparkContext)).
-      mapPartitionsWithIndex { case (index, iter) =>
+    (frame.rdd.mapPartitionsWithIndex { case (index, iter) =>
         CarbonProperties.getInstance().addProperty(CarbonLoadOptionConstants
           .ENABLE_CARBON_LOAD_DIRECT_WRITE_TO_STORE_PATH, "true")
-        val confB = config.value.value
+        val confB = new Configuration(config.value.value)

Review comment:
       In concurrent scenarios, the shared conf is mutated by multiple tasks, leading to exceptions. Creating a new Configuration copy solves this issue.

##########
File path: integration/spark/src/test/scala/org/apache/carbondata/spark/testsuite/merge/MergeTestCase.scala
##########
@@ -547,6 +558,7 @@ class MergeTestCase extends QueryTest with BeforeAndAfterAll {
     CarbonMergeDataSetCommand(dwSelframe,
       odsframe,
       MergeDataSetMatches(col("A.id").equalTo(col("B.id")), matches.toList)).run(sqlContext.sparkSession)
+

Review comment:
       modified





[GitHub] [carbondata] marchpure commented on a change in pull request #3793: [CARBONDATA-3858] Increase the parallelism of CDC intermediate files processing

GitBox

marchpure commented on a change in pull request #3793:
URL: https://github.com/apache/carbondata/pull/3793#discussion_r442742117



##########
File path: integration/spark/src/test/scala/org/apache/carbondata/spark/testsuite/merge/MergeTestCase.scala
##########
@@ -702,10 +716,25 @@ class MergeTestCase extends QueryTest with BeforeAndAfterAll {
       insertExpr(insertMap).
       whenMatched("B.deleted=true").
       delete().execute()
+    assert(getDeleteDeltaFileCount("target") == 1)
     checkAnswer(sql("select count(*) from target"), Seq(Row(3)))
     checkAnswer(sql("select * from target order by key"), Seq(Row("c", "200"), Row("d", "3"), Row("e", "100")))
   }
 
+  private def getDeleteDeltaFileCount(tableName: String): Int = {
+    val table = CarbonEnv.getCarbonTable(None, tableName)(sqlContext.sparkSession)
+    val path = table.getTablePath
+    val deleteDeltaFiles = FileFactory.getCarbonFile(path).listFiles(true, new CarbonFileFilter {
+      override def accept(file: CarbonFile): Boolean = file.getName.endsWith(CarbonCommonConstants
+        .DELETE_DELTA_FILE_EXT)
+    })
+    if (deleteDeltaFiles != null) {

Review comment:
       modified





[GitHub] [carbondata] marchpure commented on a change in pull request #3793: [CARBONDATA-3858] Increase the parallelism of CDC intermediate files processing

GitBox

marchpure commented on a change in pull request #3793:
URL: https://github.com/apache/carbondata/pull/3793#discussion_r442742889



##########
File path: integration/spark/src/test/scala/org/apache/carbondata/spark/testsuite/merge/MergeTestCase.scala
##########
@@ -702,10 +716,25 @@ class MergeTestCase extends QueryTest with BeforeAndAfterAll {
       insertExpr(insertMap).
       whenMatched("B.deleted=true").
       delete().execute()
+    assert(getDeleteDeltaFileCount("target") == 1)
     checkAnswer(sql("select count(*) from target"), Seq(Row(3)))
     checkAnswer(sql("select * from target order by key"), Seq(Row("c", "200"), Row("d", "3"), Row("e", "100")))
   }
 
+  private def getDeleteDeltaFileCount(tableName: String): Int = {
+    val table = CarbonEnv.getCarbonTable(None, tableName)(sqlContext.sparkSession)
+    val path = table.getTablePath

Review comment:
       I scan the directory with recursive = true, so it counts all delta files under the table path.
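A plain-JVM analogue of this recursive scan, with CarbonFile/FileFactory replaced by java.nio and the ".deletedelta" extension assumed for illustration (`Files.walk` plays the role of `listFiles(recursive = true, ...)`):

```scala
import java.nio.file.{Files, Path}

// Hedged sketch of getDeleteDeltaFileCount: recursively count files under
// a table path whose names end with the given extension. All names here
// are illustrative, not the test's actual Carbon APIs.
object DeltaFileCount {
  def countFilesWithExt(root: Path, ext: String): Int = {
    val stream = Files.walk(root)   // recursive traversal of every segment dir
    try {
      val it = stream.iterator()
      var n = 0
      while (it.hasNext) {
        if (it.next().getFileName.toString.endsWith(ext)) n += 1
      }
      n
    } finally stream.close()        // Files.walk must be closed
  }
}
```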





[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3793: [CARBONDATA-3858] Increase the parallelism of CDC intermediate files processing

GitBox

CarbonDataQA1 commented on pull request #3793:
URL: https://github.com/apache/carbondata/pull/3793#issuecomment-646597966


   Build Success with Spark 2.3.4, Please check CI http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3173/
   



[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3793: [CARBONDATA-3858] Increase the parallelism of CDC intermediate files processing

GitBox

CarbonDataQA1 commented on pull request #3793:
URL: https://github.com/apache/carbondata/pull/3793#issuecomment-646598464


   Build Success with Spark 2.4.5, Please check CI http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1448/
   



[GitHub] [carbondata] akashrn5 commented on a change in pull request #3793: [CARBONDATA-3858] Increase the parallelism of CDC intermediate files processing

GitBox

akashrn5 commented on a change in pull request #3793:
URL: https://github.com/apache/carbondata/pull/3793#discussion_r442947011



##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/mutation/merge/CarbonMergeDataSetCommand.scala
##########
@@ -269,11 +271,10 @@ case class CarbonMergeDataSetCommand(
       new SparkCarbonFileFormat().prepareWrite(sparkSession, job,
         Map(), schema)
     val config = SparkSQLUtil.broadCastHadoopConf(sparkSession.sparkContext, job.getConfiguration)
-    (frame.rdd.coalesce(DistributionUtil.getConfiguredExecutors(sparkSession.sparkContext)).
-      mapPartitionsWithIndex { case (index, iter) =>
+    (frame.rdd.mapPartitionsWithIndex { case (index, iter) =>
         CarbonProperties.getInstance().addProperty(CarbonLoadOptionConstants
           .ENABLE_CARBON_LOAD_DIRECT_WRITE_TO_STORE_PATH, "true")
-        val confB = config.value.value
+        val confB = new Configuration(config.value.value)

Review comment:
       sorry, I did not understand how the conf is tampered with. Can you please tell what type of error you got in concurrent scenarios?

##########
File path: integration/spark/src/test/scala/org/apache/carbondata/spark/testsuite/merge/MergeTestCase.scala
##########
@@ -702,10 +716,25 @@ class MergeTestCase extends QueryTest with BeforeAndAfterAll {
       insertExpr(insertMap).
       whenMatched("B.deleted=true").
       delete().execute()
+    assert(getDeleteDeltaFileCount("target") == 1)
     checkAnswer(sql("select count(*) from target"), Seq(Row(3)))
     checkAnswer(sql("select * from target order by key"), Seq(Row("c", "200"), Row("d", "3"), Row("e", "100")))
   }
 
+  private def getDeleteDeltaFileCount(tableName: String): Int = {
+    val table = CarbonEnv.getCarbonTable(None, tableName)(sqlContext.sparkSession)
+    val path = table.getTablePath

Review comment:
       yeah, saw that, but if we give the path up to the Part directory, we can reduce the number of files listed, so better to change the path.





[GitHub] [carbondata] marchpure commented on a change in pull request #3793: [CARBONDATA-3858] Increase the parallelism of CDC intermediate files processing

GitBox

marchpure commented on a change in pull request #3793:
URL: https://github.com/apache/carbondata/pull/3793#discussion_r442954765



##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/mutation/merge/CarbonMergeDataSetCommand.scala
##########
@@ -269,11 +271,10 @@ case class CarbonMergeDataSetCommand(
       new SparkCarbonFileFormat().prepareWrite(sparkSession, job,
         Map(), schema)
     val config = SparkSQLUtil.broadCastHadoopConf(sparkSession.sparkContext, job.getConfiguration)
-    (frame.rdd.coalesce(DistributionUtil.getConfiguredExecutors(sparkSession.sparkContext)).
-      mapPartitionsWithIndex { case (index, iter) =>
+    (frame.rdd.mapPartitionsWithIndex { case (index, iter) =>
         CarbonProperties.getInstance().addProperty(CarbonLoadOptionConstants
           .ENABLE_CARBON_LOAD_DIRECT_WRITE_TO_STORE_PATH, "true")
-        val confB = config.value.value
+        val confB = new Configuration(config.value.value)

Review comment:
       Without this change, the CI throws this exception (checked in several different envs):
   java.lang.RuntimeException: Store location not set for the key __temptable-f8c5bbbf-0b73-4288-9438-146283d442c0_1592586283728_null_e8112868-3cf4-442d-bf9c-05f90bfca8240x0
    at org.apache.carbondata.processing.loading.TableProcessingOperations.deleteLocalDataLoadFolderLocation(TableProcessingOperations.java:125)
   
   The root cause is the line "val context = new TaskAttemptContextImpl(**confB**, attemptID)":
   without a copy, different task contexts end up with the same task number, which is strange.
   
   With this change, the exception disappears and CI passes.
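The hazard being described can be sketched without Spark or Hadoop. Below, java.util.Properties stands in for org.apache.hadoop.conf.Configuration, and the property key is made up for illustration; in the PR, `new Configuration(config.value.value)` plays the role of the defensive copy.

```scala
import java.util.Properties

// Hedged illustration of why each task copies the broadcast configuration:
// if every task mutates one shared conf object, their writes clobber each other.
object ConfCopySketch extends App {
  val shared = new Properties()
  shared.setProperty("carbon.store.location", "/tmp/task-0")

  // Task 1 takes a private copy (the PR's defensive-copy pattern) ...
  val task1Conf = new Properties()
  task1Conf.putAll(shared)

  // ... then task 2 mutates the shared object concurrently.
  shared.setProperty("carbon.store.location", "/tmp/task-1")

  // Task 1's view is stable; any task still reading `shared` sees the
  // "tampered" value described in this review thread.
  assert(task1Conf.getProperty("carbon.store.location") == "/tmp/task-0")
  assert(shared.getProperty("carbon.store.location") == "/tmp/task-1")
}
```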
   





[GitHub] [carbondata] marchpure commented on a change in pull request #3793: [CARBONDATA-3858] Increase the parallelism of CDC intermediate files processing

GitBox

marchpure commented on a change in pull request #3793:
URL: https://github.com/apache/carbondata/pull/3793#discussion_r442959405



##########
File path: integration/spark/src/test/scala/org/apache/carbondata/spark/testsuite/merge/MergeTestCase.scala
##########
@@ -702,10 +716,25 @@ class MergeTestCase extends QueryTest with BeforeAndAfterAll {
       insertExpr(insertMap).
       whenMatched("B.deleted=true").
       delete().execute()
+    assert(getDeleteDeltaFileCount("target") == 1)
     checkAnswer(sql("select count(*) from target"), Seq(Row(3)))
     checkAnswer(sql("select * from target order by key"), Seq(Row("c", "200"), Row("d", "3"), Row("e", "100")))
   }
 
+  private def getDeleteDeltaFileCount(tableName: String): Int = {
+    val table = CarbonEnv.getCarbonTable(None, tableName)(sqlContext.sparkSession)
+    val path = table.getTablePath

Review comment:
       modified





[GitHub] [carbondata] marchpure commented on a change in pull request #3793: [CARBONDATA-3858] Increase the parallelism of CDC intermediate files processing

GitBox

marchpure commented on a change in pull request #3793:
URL: https://github.com/apache/carbondata/pull/3793#discussion_r442959766



##########
File path: integration/spark/src/test/scala/org/apache/carbondata/spark/testsuite/merge/MergeTestCase.scala
##########
@@ -702,10 +716,25 @@ class MergeTestCase extends QueryTest with BeforeAndAfterAll {
       insertExpr(insertMap).
       whenMatched("B.deleted=true").
       delete().execute()
+    assert(getDeleteDeltaFileCount("target") == 1)
     checkAnswer(sql("select count(*) from target"), Seq(Row(3)))
     checkAnswer(sql("select * from target order by key"), Seq(Row("c", "200"), Row("d", "3"), Row("e", "100")))
   }
 
+  private def getDeleteDeltaFileCount(tableName: String): Int = {
+    val table = CarbonEnv.getCarbonTable(None, tableName)(sqlContext.sparkSession)
+    val path = table.getTablePath

Review comment:
       Agree with you. Thanks a lot for this good find.





[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3793: [CARBONDATA-3858] Increase the parallelism of CDC intermediate files processing

GitBox

CarbonDataQA1 commented on pull request #3793:
URL: https://github.com/apache/carbondata/pull/3793#issuecomment-646831746


   Build Success with Spark 2.3.4, Please check CI http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3174/
   



[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3793: [CARBONDATA-3858] Increase the parallelism of CDC intermediate files processing

GitBox

CarbonDataQA1 commented on pull request #3793:
URL: https://github.com/apache/carbondata/pull/3793#issuecomment-646832146


   Build Success with Spark 2.4.5, Please check CI http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1449/
   



[GitHub] [carbondata] akashrn5 commented on a change in pull request #3793: [CARBONDATA-3858] Increase the parallelism of CDC intermediate files processing

GitBox

akashrn5 commented on a change in pull request #3793:
URL: https://github.com/apache/carbondata/pull/3793#discussion_r443361855



##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/mutation/merge/CarbonMergeDataSetCommand.scala
##########
@@ -269,11 +271,10 @@ case class CarbonMergeDataSetCommand(
       new SparkCarbonFileFormat().prepareWrite(sparkSession, job,
         Map(), schema)
     val config = SparkSQLUtil.broadCastHadoopConf(sparkSession.sparkContext, job.getConfiguration)
-    (frame.rdd.coalesce(DistributionUtil.getConfiguredExecutors(sparkSession.sparkContext)).
-      mapPartitionsWithIndex { case (index, iter) =>
+    (frame.rdd.mapPartitionsWithIndex { case (index, iter) =>
         CarbonProperties.getInstance().addProperty(CarbonLoadOptionConstants
           .ENABLE_CARBON_LOAD_DIRECT_WRITE_TO_STORE_PATH, "true")
-        val confB = config.value.value
+        val confB = new Configuration(config.value.value)

Review comment:
       I think adding a new conf for this is not correct; we need to analyze it properly. Maybe you can revert these changes and we can handle this during other CDC optimizations.





[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3793: [CARBONDATA-3858] Check CDC deltafiles count in the testcase

GitBox

CarbonDataQA1 commented on pull request #3793:
URL: https://github.com/apache/carbondata/pull/3793#issuecomment-647406601


   Build Success with Spark 2.3.4, Please check CI http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3188/
   



[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3793: [CARBONDATA-3858] Check CDC deltafiles count in the testcase

GitBox

CarbonDataQA1 commented on pull request #3793:
URL: https://github.com/apache/carbondata/pull/3793#issuecomment-647407337


   Build Success with Spark 2.4.5, Please check CI http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1462/
   



[GitHub] [carbondata] akashrn5 commented on pull request #3793: [CARBONDATA-3858] Check CDC deltafiles count in the testcase

GitBox

akashrn5 commented on pull request #3793:
URL: https://github.com/apache/carbondata/pull/3793#issuecomment-647900779


   LGTM

