GitHub user zzcclp opened a pull request:
    https://github.com/apache/carbondata/pull/1962

    [CARBONDATA-2149] Fix complex type data displaying error when using DataFrame to write complex type data

    The default values of 'complex_delimiter_level_1' and 'complex_delimiter_level_2' are wrong: they must be '$' and ':', not '\\$' and '\\:'. The escape character '\\' needs to be added only when the delimiters are used in ArrayParserImpl or StructParserImpl.

    Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:

    - [ ] Any interfaces changed?
    - [ ] Any backward compatibility impacted?
    - [ ] Document update required?
    - [ ] Testing done
          Please provide details on
          - Whether new unit test cases have been added or why no new tests are required?
          - How it is tested? Please attach the test report.
          - Is it a performance related change? Please attach the performance test report.
          - Any additional information to help reviewers in testing this change.
    - [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/zzcclp/carbondata CARBONDATA-2149

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/1962.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #1962

----
commit 75d11d447d8e36188fff95fb1c6ac688fbcc6d6c
Author: Zhang Zhichao <441586683@...>
Date: 2018-02-09T09:32:54Z

    [CARBONDATA-2149] Fix complex type data displaying error when using DataFrame to write complex type data

    The default values of 'complex_delimiter_level_1' and 'complex_delimiter_level_2' are wrong: they must be '$' and ':', not '\\$' and '\\:'. The escape character '\\' needs to be added only when the delimiters are used in ArrayParserImpl or StructParserImpl.

----

---
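As an editorial aside (this sketch is not code from the PR; the object and method names are hypothetical), the behaviour the fix restores can be illustrated in a few lines of Scala: the stored defaults are the raw characters '$' and ':', and regex escaping belongs only where a parser such as ArrayParserImpl splits the serialized value, because '$' is a regex metacharacter.

    import java.util.regex.Pattern

    // Minimal sketch (hypothetical names, not the CarbonData sources): why the
    // defaults must be the raw '$' and ':' while escaping happens only at parse time.
    object ComplexDelimiterSketch {
      val complexDelimiterLevel1 = "$"  // raw default for COMPLEX_DELIMITER_LEVEL_1
      val complexDelimiterLevel2 = ":"  // raw default for COMPLEX_DELIMITER_LEVEL_2

      // Write path: array elements are joined with the raw level-1 delimiter.
      // Using the old default "\\$" here would leak a literal backslash into the data.
      def flattenArray(values: Seq[String]): String =
        values.mkString(complexDelimiterLevel1)

      // Parse path (what ArrayParserImpl-style code needs): escape the delimiter
      // only when it becomes a split pattern, since '$' is a regex metacharacter.
      def parseArray(flattened: String): Array[String] =
        flattened.split(Pattern.quote(complexDelimiterLevel1))

      def main(args: Array[String]): Unit = {
        val flattened = flattenArray(Seq("struct_11", "struct_12"))  // "struct_11$struct_12"
        println(parseArray(flattened).mkString(", "))                // struct_11, struct_12
      }
    }

---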
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1962 SDV Build Fail, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3480/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1962 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3641/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1962 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2403/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1962 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2453/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1962 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3693/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1962 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2455/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1962 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3695/ ---
Github user zzcclp commented on the issue:
https://github.com/apache/carbondata/pull/1962 Can anyone help to review this PR? ---
Github user chenliang613 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1962#discussion_r170410120

    --- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/DataFrameComplexTypeExample.scala ---
    @@ -0,0 +1,88 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements. See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License. You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.carbondata.examples
    +
    +import org.apache.spark.sql.SaveMode
    +
    +case class StructElement(school: Array[String], age: Int)
    +case class ComplexTypeData(id: Int, name: String, city: String, salary: Float, file: StructElement)
    +
    +// scalastyle:off println
    +object DataFrameComplexTypeExample {
    +
    +  def main(args: Array[String]) {
    +
    +    val spark = ExampleUtils.createCarbonSession("DataFrameComplexTypeExample", 4)
    +    val complexTableName = s"complex_type_table"
    +
    +    import spark.implicits._
    +
    +    // drop table if exists previously
    +    spark.sql(s"DROP TABLE IF EXISTS ${ complexTableName }")
    +    spark.sql(
    +      s"""
    +         | CREATE TABLE ${ complexTableName }(
    +         | id INT,
    +         | name STRING,
    +         | city STRING,
    +         | salary FLOAT,
    +         | file struct<school:array<string>, age:int>
    +         | )
    +         | STORED BY 'carbondata'
    +         | TBLPROPERTIES(
    +         | 'sort_columns'='name',
    +         | 'dictionary_include'='city')
    +         | """.stripMargin)
    +
    +    val sc = spark.sparkContext
    +    // generate data
    +    val df = sc.parallelize(Seq(
    +      ComplexTypeData(1, "index_1", "city_1", 10000.0f,
    +        StructElement(Array("struct_11", "struct_12"), 10)),
    +      ComplexTypeData(2, "index_2", "city_2", 20000.0f,
    +        StructElement(Array("struct_21", "struct_22"), 20)),
    +      ComplexTypeData(3, "index_3", "city_3", 30000.0f,
    +        StructElement(Array("struct_31", "struct_32"), 30))
    +    )).toDF
    +    df.printSchema()
    +    df.write
    +      .format("carbondata")
    +      .option("tableName", complexTableName)
    +      .option("single_pass", "true")
    --- End diff --

    Why add this option: option("single_pass", "true")?

---
Github user chenliang613 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1962#discussion_r170410239

    --- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/DataFrameComplexTypeExample.scala ---
    @@ -0,0 +1,88 @@
    [... same diff context as in the previous comment, continuing ...]
    +      .option("single_pass", "true")
    +      .mode(SaveMode.Append)
    +      .save()
    +
    +    spark.sql(s"select count(*) from ${ complexTableName }").show(100, truncate = false)
    +
    +    spark.sql(s"select * from ${ complexTableName } order by id desc").show(300, truncate = false)
    +
    +    spark.sql(s"select * " +
    +      s"from ${ complexTableName } " +
    +      s"where id = 100000001 or id = 1 limit 100").show(100, truncate = false)
    +
    +    spark.sql(s"select * " +
    +      s"from ${ complexTableName } " +
    +      s"where id > 10 limit 100").show(100, truncate = false)
    +
    +    // show segments
    +    spark.sql(s"SHOW SEGMENTS FOR TABLE ${complexTableName}").show(false)
    +
    --- End diff --

    Please drop the table here.

---
Github user chenliang613 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1962#discussion_r170410468

    --- Diff: integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/dataload/TestLoadDataWithNoMeasure.scala ---
    @@ -101,7 +102,7 @@ class TestLoadDataWithNoMeasure extends QueryTest with BeforeAndAfterAll {
         val testData = s"$resourcesPath/datasingleComplexCol.csv"
         sql("LOAD DATA LOCAL INPATH '" + testData + "' into table nomeasureTest_scd options " +
             "('DELIMITER'=',','QUOTECHAR'='\"','FILEHEADER'='cityDetail'," +
    -        "'COMPLEX_DELIMITER_LEVEL_1'=':')"
    +        "'COMPLEX_DELIMITER_LEVEL_1'=':','COMPLEX_DELIMITER_LEVEL_2'='$')"
    --- End diff --

    LEVEL_1 should be '$' and LEVEL_2 should be ':', as per the CarbonTableOutputFormat.java code change.

---
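As an editorial aside (not part of the PR), a minimal sketch of how the two levels pair up once the defaults are '$' and ':'. The table name, CSV path, and column layout below are hypothetical; the option syntax mirrors the test case quoted above, and the nesting assumption is that level 1 splits the outer collection while level 2 splits the inner one.

    // Hypothetical example: 'members' is array<struct<name:string,age:int>>.
    // With these options a 'members' cell in the CSV would look like: alice:30$bob:25
    // (level-1 '$' between array elements, level-2 ':' between struct fields).
    val csvPath = "/tmp/delimiter_demo.csv"  // hypothetical input file
    sql("LOAD DATA LOCAL INPATH '" + csvPath + "' into table delimiter_demo options " +
        "('DELIMITER'=',','QUOTECHAR'='\"','FILEHEADER'='id,members'," +
        "'COMPLEX_DELIMITER_LEVEL_1'='$','COMPLEX_DELIMITER_LEVEL_2'=':')")

---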
Github user zzcclp commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1962#discussion_r170443663

    --- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/DataFrameComplexTypeExample.scala ---
    [... same diff context as quoted above ...]
    +      .option("single_pass", "true")
    --- End diff --

    It is irrelevant, removed.

---
Github user zzcclp commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1962#discussion_r170443664

    --- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/DataFrameComplexTypeExample.scala ---
    [... same diff context as quoted above ...]
    +    // show segments
    +    spark.sql(s"SHOW SEGMENTS FOR TABLE ${complexTableName}").show(false)
    +
    --- End diff --

    Done

---
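Taking both review comments together, the end of the example presumably looks roughly like the sketch below. It is reconstructed from the diff and this thread, not copied from the merged file, and it assumes the spark, df, and complexTableName values defined earlier in the example: the single_pass option is dropped and the table is removed at the end.

    df.write
      .format("carbondata")
      .option("tableName", complexTableName)
      .mode(SaveMode.Append)  // 'single_pass' option removed, as discussed above
      .save()

    // ... queries and SHOW SEGMENTS as in the diff above ...

    // clean up at the end of the example, as requested in the review
    spark.sql(s"DROP TABLE IF EXISTS ${ complexTableName }")

---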
Github user zzcclp commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1962#discussion_r170443695

    --- Diff: integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/dataload/TestLoadDataWithNoMeasure.scala ---
    @@ -101,7 +102,7 @@ class TestLoadDataWithNoMeasure extends QueryTest with BeforeAndAfterAll {
         val testData = s"$resourcesPath/datasingleComplexCol.csv"
         sql("LOAD DATA LOCAL INPATH '" + testData + "' into table nomeasureTest_scd options " +
             "('DELIMITER'=',','QUOTECHAR'='\"','FILEHEADER'='cityDetail'," +
    -        "'COMPLEX_DELIMITER_LEVEL_1'=':')"
    +        "'COMPLEX_DELIMITER_LEVEL_1'=':','COMPLEX_DELIMITER_LEVEL_2'='$')"
    --- End diff --

    This test case was not added by me; the level-1 delimiter in the CSV file is ':'.

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1962 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3877/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1962 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2632/ ---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1962 SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3656/ ---
Github user chenliang613 commented on the issue:
https://github.com/apache/carbondata/pull/1962 LGTM ---