GitHub user kevinjmh opened a pull request:
https://github.com/apache/carbondata/pull/2917 [WIP] Show load/insert/update/delete row number

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:

- [ ] Any interfaces changed?
- [ ] Any backward compatibility impacted?
- [ ] Document update required?
- [ ] Testing done. Please provide details on:
  - Whether new unit test cases have been added, or why no new tests are required?
  - How is it tested? Please attach the test report.
  - Is it a performance-related change? Please attach the performance test report.
  - Any additional information to help reviewers in testing this change.
- [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kevinjmh/carbondata ProceededRowCount

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/2917.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2917

----

commit f7d199e7d6d28c72124a2ad45635949976816d04
Author: Manhua <kevinjmh@...>
Date: 2018-11-13T09:27:10Z

    show load/insert/update/delete row number

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2917 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1390/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2917 Build Failed with Spark 2.3.1, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/9648/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2917 Build Failed with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1600/

---
Github user jackylk commented on the issue:
https://github.com/apache/carbondata/pull/2917 This is a good feature. Will it show the number of rows that were deleted or updated? I hope it can get into the next version.

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2917 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1449/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2917 Build Failed with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1659/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2917 Build Failed with Spark 2.3.1, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/9707/

---
Github user kevinjmh commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2917#discussion_r235654816

--- Diff: integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/iud/TestShowIUDRowCount.scala ---
@@ -0,0 +1,60 @@
+package org.apache.carbondata.spark.testsuite.iud
+
+import org.apache.spark.sql.Row
+import org.apache.spark.sql.test.util.QueryTest
+import org.scalatest.{BeforeAndAfterAll, BeforeAndAfterEach}
+
+class TestShowIUDRowCount extends QueryTest with BeforeAndAfterEach with BeforeAndAfterAll {
+
+  override protected def beforeAll(): Unit = {
+    dropTable("iud_rows")
+  }
+
+  override protected def beforeEach(): Unit = {
+    dropTable("iud_rows")
+  }
+
+  override protected def afterEach(): Unit = {
+    dropTable("iud_rows")
+  }
+
+  test("Test show load row count") {
+    sql("""create table iud_rows (c1 string,c2 int,c3 string,c5 string)
+          |STORED BY 'org.apache.carbondata.format'""".stripMargin)
+    checkAnswer(
+      sql(s"""LOAD DATA LOCAL INPATH '$resourcesPath/IUD/dest.csv' INTO table iud_rows"""),

--- End diff --

Why does the `sql()` function in QueryTest produce a different plan than a self-made Spark context or beeline?

---
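The plan difference asked about above can be checked directly by printing the logical plan a session builds for the same statement. A minimal sketch (not part of the PR), assuming a local Spark session with the CarbonData extensions on the classpath and an existing `iud_rows` table; the path is illustrative:

```scala
import org.apache.spark.sql.SparkSession

object PlanInspection {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[1]")
      .appName("plan-inspection")
      .getOrCreate()

    // queryExecution.logical exposes the parsed/analyzed plan, which shows
    // whether the statement resolved to Spark's LoadDataCommand or to
    // CarbonData's CarbonLoadDataCommand.
    val df = spark.sql(
      "LOAD DATA LOCAL INPATH '/tmp/IUD/dest.csv' INTO TABLE iud_rows")
    println(df.queryExecution.logical.getClass.getName)
  }
}
```

Running the same snippet under QueryTest's `sql()` and under a plain session would reveal whether the two environments parse the statement into different command classes.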
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2917 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1510/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2917 Build Failed with Spark 2.3.1, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/9768/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2917 Build Failed with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1720/

---
Github user kevinjmh commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2917#discussion_r242500505

--- Diff: integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/iud/TestShowIUDRowCount.scala ---
@@ -0,0 +1,60 @@
+package org.apache.carbondata.spark.testsuite.iud
+
+import org.apache.spark.sql.Row
+import org.apache.spark.sql.test.util.QueryTest
+import org.scalatest.{BeforeAndAfterAll, BeforeAndAfterEach}
+
+class TestShowIUDRowCount extends QueryTest with BeforeAndAfterEach with BeforeAndAfterAll {
+
+  override protected def beforeAll(): Unit = {
+    dropTable("iud_rows")
+  }
+
+  override protected def beforeEach(): Unit = {
+    dropTable("iud_rows")
+  }
+
+  override protected def afterEach(): Unit = {
+    dropTable("iud_rows")
+  }
+
+  test("Test show load row count") {
+    sql("""create table iud_rows (c1 string,c2 int,c3 string,c5 string)
+          |STORED BY 'org.apache.carbondata.format'""".stripMargin)
+    checkAnswer(
+      sql(s"""LOAD DATA LOCAL INPATH '$resourcesPath/IUD/dest.csv' INTO table iud_rows"""),

--- End diff --

I found that this test passes if I add `OPTIONS()` (with or without detailed configs) to the load command. The logical plan then changes from `LoadDataCommand` (a Spark class) to `CarbonLoadDataCommand` (a CarbonData class). If we want to show the processed row count, we need to change the `output` value in these command/query plans, but we cannot do that for Spark's `LoadDataCommand`.

---
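The `output` mechanism referred to above is a Spark 2.x internal: a `RunnableCommand`'s `output` attributes define the schema of the rows its `run()` method returns to the caller, which is why surfacing a row count requires overriding `output` on the command class. A hedged sketch of the idea, with an illustrative command name (not the PR's actual code):

```scala
import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.catalyst.expressions.{Attribute, AttributeReference}
import org.apache.spark.sql.execution.command.RunnableCommand
import org.apache.spark.sql.types.LongType

// Illustrative only: a command whose result set is a single row count.
// CarbonLoadDataCommand can be changed this way because it lives in the
// CarbonData codebase; Spark's built-in LoadDataCommand cannot.
case class ShowRowCountCommand(loadedRows: Long) extends RunnableCommand {

  // Declaring an output attribute makes the rows returned by run()
  // visible to callers of sql(), e.g. checkAnswer in tests or beeline.
  override val output: Seq[Attribute] =
    Seq(AttributeReference("Loaded Rows", LongType, nullable = false)())

  override def run(sparkSession: SparkSession): Seq[Row] =
    Seq(Row(loadedRows))
}
```

With an empty `output` (the default), the command still runs but returns an empty result set, which is why the row count is invisible unless the command class itself is modified.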
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2917 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1976/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2917 Build Failed with Spark 2.3.2, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/10228/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2917 Build Failed with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/2249/

---