GitHub user ravipesala opened a pull request:
https://github.com/apache/carbondata/pull/2659

[CARBONDATA-2887] Fix complex filters on spark carbon file format

Problem: Filters on complex types do not work with the carbon file format, because Spark pushes the not-null filter on complex-type columns down to carbon, but carbon does not handle any kind of filter on complex types.

Solution: Remove all complex-type filter pushdown from the carbon file format, so Spark evaluates those filters itself.

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:

 - [ ] Any interfaces changed?
 - [ ] Any backward compatibility impacted?
 - [ ] Document update required?
 - [ ] Testing done
        Please provide details on
        - Whether new unit test cases have been added or why no new tests are required?
        - How it is tested? Please attach test report.
        - Is it a performance related change? Please attach the performance test report.
        - Any additional information to help reviewers in testing this change.
 - [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/ravipesala/incubator-carbondata complex-issue

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/2659.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2659

----

commit f8881fb3da31337f071e0fba936d714d46a4afd6
Author: ravipesala <ravi.pesala@...>
Date: 2018-08-24T15:13:07Z

    Fix complex filters on spark carbon file format

----

---
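To make the described fix concrete, below is a minimal sketch, not the actual CarbonData code, of how a Spark data source can exclude complex-typed columns from filter pushdown so Spark re-evaluates those predicates itself. The object and method names (`ComplexFilterPruning`, `selectPushableFilters`) are hypothetical; only the Spark types used are real APIs.

    // Minimal sketch (assumed names, not the CarbonData implementation):
    // drop pushdown filters that reference array/struct/map columns so that
    // Spark applies them on top of the rows returned by the format.
    import org.apache.spark.sql.sources.Filter
    import org.apache.spark.sql.types.{ArrayType, MapType, StructField, StructType}

    object ComplexFilterPruning {

      // Names of columns whose data type is complex and cannot be pushed down.
      private def complexColumns(schema: StructType): Set[String] =
        schema.fields.collect {
          case StructField(name, _: ArrayType | _: StructType | _: MapType, _, _) => name
        }.toSet

      // Keep only filters that touch no complex-typed column; the rest are
      // left for Spark to evaluate.
      def selectPushableFilters(schema: StructType, filters: Seq[Filter]): Seq[Filter] = {
        val complex = complexColumns(schema)
        filters.filter { f =>
          f.references.forall(ref => !complex.exists(c => ref == c || ref.startsWith(c + ".")))
        }
      }
    }

For example, with the array column `c2` from the tests quoted later in this thread, `IsNotNull("c2")` would be dropped from the pushdown list while `EqualTo("c1", "a1")` would still be pushed down to carbon.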
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2659 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6773/ --- |
Github user ajantha-bhat commented on the issue:
https://github.com/apache/carbondata/pull/2659 retest this please --- |
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2659 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6774/ --- |
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2659 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6775/ --- |
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2659 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6396/ --- |
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2659 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6397/ --- |
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2659 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/8052/ --- |
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2659 retest sdv please --- |
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2659 retest this please --- |
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2659 retest sdv please --- |
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2659 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6777/ --- |
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2659 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6402/ --- |
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2659 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/8058/ --- |
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2659#discussion_r212785152

    --- Diff: integration/spark-datasource/src/test/scala/org/apache/spark/sql/carbondata/datasource/SparkCarbonDataSourceTest.scala ---
    @@ -285,6 +285,44 @@ class SparkCarbonDataSourceTest extends FunSuite with BeforeAndAfterAll {
         spark.sql("drop table if exists date_parquet_table")
       }

    +  test("test write with array type with filter") {
    +    spark.sql("drop table if exists carbon_table")
    +    spark.sql("drop table if exists parquet_table")
    +    import spark.implicits._
    +    val df = spark.sparkContext.parallelize(1 to 10)
    +      .map(x => ("a" + x % 10, Array("b", "c"), x))
    +      .toDF("c1", "c2", "number")
    +
    +    df.write
    +      .format("parquet").saveAsTable("parquet_table")
    +    spark.sql("create table carbon_table(c1 string, c2 array<string>, number int) using carbon")
    +    spark.sql("insert into carbon_table select * from parquet_table")
    +    assert(spark.sql("select * from carbon_table").count() == 10)
    +    TestUtil.checkAnswer(spark.sql("select * from carbon_table where c1='a1' and c2[0]='b'"), spark.sql("select * from parquet_table where c1='a1' and c2[0]='b'"))
    +    TestUtil.checkAnswer(spark.sql("select * from carbon_table"), spark.sql("select * from parquet_table"))
    +    spark.sql("drop table if exists carbon_table")
    +    spark.sql("drop table if exists parquet_table")
    +  }
    +
    +  test("test write with struct type with filter") {
    +    spark.sql("drop table if exists carbon_table")
    +    spark.sql("drop table if exists parquet_table")
    +    import spark.implicits._
    +    val df = spark.sparkContext.parallelize(1 to 10)
    +      .map(x => ("a" + x % 10, ("b", "c"), x))
    +      .toDF("c1", "c2", "number")
    +
    +    df.write
    +      .format("parquet").saveAsTable("parquet_table")
    +    spark.sql("create table carbon_table(c1 string, c2 struct<a1:string, a2:string>, number int) using carbon")
    --- End diff --

can you create more complex schema with array and struct

---
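For reference, one way to cover the nested schema the reviewer asks for would be an array-of-struct column following the same parquet-vs-carbon comparison pattern as the tests in the diff above. The sketch below is only illustrative; the test name, schema, and data are assumptions, not the code that was eventually added to this PR.

    // Illustrative only: nested array<struct<...>> column with a filter,
    // mirroring the existing tests' structure.
    test("test write with array of struct type with filter") {
      spark.sql("drop table if exists carbon_table")
      spark.sql("drop table if exists parquet_table")
      import spark.implicits._
      val df = spark.sparkContext.parallelize(1 to 10)
        .map(x => ("a" + x % 10, Array(("b", x), ("c", x + 1)), x))
        .toDF("c1", "c2", "number")

      df.write.format("parquet").saveAsTable("parquet_table")
      spark.sql("create table carbon_table(c1 string, " +
        "c2 array<struct<a1:string, a2:int>>, number int) using carbon")
      spark.sql("insert into carbon_table select * from parquet_table")
      TestUtil.checkAnswer(
        spark.sql("select * from carbon_table where c1='a1'"),
        spark.sql("select * from parquet_table where c1='a1'"))
      spark.sql("drop table if exists carbon_table")
      spark.sql("drop table if exists parquet_table")
    }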
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2659 test --- |
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2659 Build Failed with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1/ --- |
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2659#discussion_r212829790

    --- Diff: integration/spark-datasource/src/test/scala/org/apache/spark/sql/carbondata/datasource/SparkCarbonDataSourceTest.scala ---
    (same hunk as quoted in the previous review comment)
    --- End diff --

ok

---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2659 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6410/ --- |
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2659 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6411/ --- |