GitHub user QiangCai opened a pull request:

https://github.com/apache/carbondata/pull/896

[CARBONDATA-936] Create table with partition and add test case (12-dev)

1. add PartitionInfo converter
2. support Spark 1.6
3. add test case

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/QiangCai/incubator-carbondata createpartitiontable

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/896.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #896

----
commit cc0a51794008c4252c0e1c93cb86ff6ed9368da4
Author: QiangCai <[hidden email]>
Date:   2017-05-08T15:20:13Z

    create table with partition
----

--- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at [hidden email] or file a JIRA ticket with INFRA. ---
GitHub user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/896

Build Failed with Spark 1.6.2, Please check CI:
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1955/
GitHub user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/896

Build Failed with Spark 1.6.2, Please check CI:
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1963/
GitHub user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/896

Build Success with Spark 1.6.2, Please check CI:
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1969/
GitHub user QiangCai commented on the issue:

https://github.com/apache/carbondata/pull/896

@jackylk please review
GitHub user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/896#discussion_r115888919

--- Diff: core/src/main/java/org/apache/carbondata/core/metadata/converter/ThriftWrapperSchemaConverterImpl.java ---
@@ -381,6 +422,40 @@ private DataType fromExternalToWrapperDataType(org.apache.carbondata.format.Data
     return wrapperColumnSchema;
   }
+  private PartitionType fromExternalToWrapperPartitionType(
+      org.apache.carbondata.format.PartitionType externalPartitionType) {
+    if (null == externalPartitionType) {
+      return null;
--- End diff --

Suggest to throw IllegalArgumentException instead.
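The reviewer's suggestion above can be sketched as follows. This is a minimal, hypothetical illustration, not CarbonData's actual converter (the class and method names here are invented, and a plain string stands in for the Thrift enum): a null input fails fast with IllegalArgumentException rather than silently propagating null.

```java
// Hypothetical sketch of "throw IllegalArgumentException instead of returning
// null" for a partition-type converter. Names are illustrative only.
public class PartitionTypeConverter {
    public enum PartitionType { HASH, LIST, RANGE, RANGE_INTERVAL }

    public static PartitionType fromExternal(String external) {
        if (external == null) {
            // fail fast instead of returning null to the caller
            throw new IllegalArgumentException("external partition type must not be null");
        }
        switch (external) {
            case "HASH":           return PartitionType.HASH;
            case "LIST":           return PartitionType.LIST;
            case "RANGE":          return PartitionType.RANGE;
            case "RANGE_INTERVAL": return PartitionType.RANGE_INTERVAL;
            default:
                // also reject unknown values rather than defaulting to HASH
                throw new IllegalArgumentException("unknown partition type: " + external);
        }
    }
}
```

Failing fast here surfaces a corrupt or missing schema entry at conversion time, instead of a NullPointerException far away from the cause.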
GitHub user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/896#discussion_r115889135

--- Diff: core/src/main/java/org/apache/carbondata/core/metadata/converter/ThriftWrapperSchemaConverterImpl.java ---
@@ -381,6 +422,40 @@ private DataType fromExternalToWrapperDataType(org.apache.carbondata.format.Data
     return wrapperColumnSchema;
  }
+  private PartitionType fromExternalToWrapperPartitionType(
+      org.apache.carbondata.format.PartitionType externalPartitionType) {
+    if (null == externalPartitionType) {
+      return null;
+    }
+    switch (externalPartitionType) {
+      case HASH:
+        return PartitionType.HASH;
+      case LIST:
+        return PartitionType.LIST;
+      case RANGE:
+        return PartitionType.RANGE;
+      case RANGE_INTERVAL:
+        return PartitionType.RANGE_INTERVAL;
+      default:
+        return PartitionType.HASH;
+    }
+  }
+
+  private PartitionInfo fromExternalToWrapperPartitionInfo(
+      org.apache.carbondata.format.PartitionInfo externalPartitionInfo) {
+    List<ColumnSchema> wrapperColumnSchema = new ArrayList<ColumnSchema>();
+    for (org.apache.carbondata.format.ColumnSchema columnSchema : externalPartitionInfo
--- End diff --

Move `externalPartitionInfo` to the next line; please also check other newly added code in this class.
GitHub user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/896

Build Failed with Spark 1.6.2, Please check CI:
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1986/
GitHub user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/896#discussion_r115891532

--- Diff: integration/spark-common/src/main/scala/org/apache/spark/sql/execution/command/carbonTableSchema.scala ---
@@ -506,6 +506,14 @@ class TableNewProcessor(cm: TableModel) {
       tableSchema.setBucketingInfo(
         new BucketingInfo(bucketCols.asJava, cm.bucketFields.get.numberOfBuckets))
     }
+    if (cm.partitionInfo.isDefined) {
+      val partitionInfo = cm.partitionInfo.get
+      val partitionCols = partitionInfo.getColumnSchemaList.asScala.map { columnSchema =>
+        allColumns.find(_.getColumnName.equalsIgnoreCase(columnSchema.getColumnName)).get
--- End diff --

suggest to use `filter` instead of `find`
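The concern behind this comment is that a lookup followed by an unconditional `.get` (Scala's `find(...).get`, or `findFirst().get()` on a Java stream) throws an uninformative NoSuchElementException when a partition column is missing from the schema. A hypothetical Java sketch of a safer shape (names invented, not the project's code) checks for misses first and reports them clearly:

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch: resolve partition column names against the full column
// list, reporting all missing names instead of crashing on the first .get().
public class PartitionColumnResolver {
    public static List<String> resolve(List<String> allColumns, List<String> partitionCols) {
        // collect every partition column that has no (case-insensitive) match
        List<String> missing = partitionCols.stream()
            .filter(p -> allColumns.stream().noneMatch(c -> c.equalsIgnoreCase(p)))
            .collect(Collectors.toList());
        if (!missing.isEmpty()) {
            throw new IllegalArgumentException("partition columns not in schema: " + missing);
        }
        // safe now: each name is known to have a match
        return partitionCols.stream()
            .map(p -> allColumns.stream()
                .filter(c -> c.equalsIgnoreCase(p))
                .findFirst().get())
            .collect(Collectors.toList());
    }
}
```

The payoff is the error message: a user who mistypes a partition column gets the offending name back, rather than a bare NoSuchElementException.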
GitHub user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/896#discussion_r115891588

--- Diff: integration/spark/src/main/scala/org/apache/spark/sql/CarbonSqlParser.scala ---
@@ -239,8 +241,27 @@ class CarbonSqlParser() extends CarbonDDLSqlParser {
         val columnName = col.getName()
         val dataType = Option(col.getType)
         val comment = col.getComment
+        val x = '`' + col.getName + '`' + ' ' + col.getType
+        val f = Field(columnName, dataType, Some(columnName), None)
--- End diff --

Please use meaningful names instead of `x` and `f`.
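The `x` in the diff builds a back-quoted "`name` type" raw-schema fragment. As a small, hypothetical Java illustration of the rename the reviewer asks for (the helper name and its use are invented; this is not the parser's code), a descriptive name makes the string's purpose self-evident:

```java
// Hypothetical sketch: `x` renamed to something that says what the string is.
public class FieldNaming {
    // builds a raw DDL schema fragment such as "`city` string";
    // back-quotes protect column names that collide with reserved words
    public static String rawSchema(String columnName, String columnType) {
        return "`" + columnName + "`" + " " + columnType;
    }
}
```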
GitHub user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/896#discussion_r115891756

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/parser/CarbonSparkSqlParser.scala ---
@@ -122,15 +122,15 @@ class CarbonSqlAstBuilder(conf: SQLConf) extends SparkSqlAstBuilder(conf) {
       if (!CommonUtil.validatePartitionColumns(tableProperties, partitionerFields)) {
         throw new MalformedCarbonCommandException("Invalid partition definition")
       }
-      // partition columns must be part of the schema
+      // partition columns can't be part of the schema
       val badPartCols = partitionerFields.map(_.partitionColumn).toSet.intersect(colNames.toSet)
-      if (badPartCols.isEmpty) {
--- End diff --

please move all validation logic into the validate function
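The check in the diff is a set intersection: a partition column is "bad" if it also appears among the regular schema columns. A minimal Java sketch of the suggested extraction (names assumed; this is not CarbonData's API) would gather that check into a single validate method:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch: all partition-column validation in one method, so
// callers run a single check instead of scattered inline conditions.
public class PartitionValidator {
    public static void validatePartitionColumns(List<String> partitionCols, List<String> schemaCols) {
        // intersect the two name sets; any overlap is invalid
        Set<String> bad = new HashSet<>(partitionCols);
        bad.retainAll(schemaCols);
        if (!bad.isEmpty()) {
            throw new IllegalArgumentException("partition columns repeat schema columns: " + bad);
        }
    }
}
```

Whether to extract is the trade-off debated later in the thread: the helper needs the extra parameters passed in, but it keeps the parser body free of validation detail.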
GitHub user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/896#discussion_r115891931

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/parser/CarbonSparkSqlParser.scala ---
@@ -97,8 +97,8 @@ class CarbonSqlAstBuilder(conf: SQLConf) extends SparkSqlAstBuilder(conf) {
       if (ctx.bucketSpec != null) {
         operationNotAllowed("CREATE TABLE ... CLUSTERED BY", ctx)
       }
-      val partitionerFields = Option(ctx.partitionColumns).toSeq.flatMap(visitColTypeList)
-        .map( structField =>
+      val partitionByStructField = Option(ctx.partitionColumns).toSeq.flatMap(visitColTypeList)
--- End diff --

should be `partitionByStructFields`
GitHub user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/896#discussion_r115891982

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/parser/CarbonSparkSqlParser.scala ---
@@ -97,8 +97,8 @@ class CarbonSqlAstBuilder(conf: SQLConf) extends SparkSqlAstBuilder(conf) {
       if (ctx.bucketSpec != null) {
         operationNotAllowed("CREATE TABLE ... CLUSTERED BY", ctx)
       }
-      val partitionerFields = Option(ctx.partitionColumns).toSeq.flatMap(visitColTypeList)
-        .map( structField =>
+      val partitionByStructField = Option(ctx.partitionColumns).toSeq.flatMap(visitColTypeList)
+      val partitionerFields = partitionByStructField.map( structField =>
--- End diff --

change to `.map { structField =>`
GitHub user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/896

Build Success with Spark 1.6.2, Please check CI:
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1989/
GitHub user QiangCai commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/896#discussion_r115900968

--- Diff: core/src/main/java/org/apache/carbondata/core/metadata/converter/ThriftWrapperSchemaConverterImpl.java ---
@@ -381,6 +422,40 @@ private DataType fromExternalToWrapperDataType(org.apache.carbondata.format.Data
     return wrapperColumnSchema;
  }
+  private PartitionType fromExternalToWrapperPartitionType(
+      org.apache.carbondata.format.PartitionType externalPartitionType) {
+    if (null == externalPartitionType) {
+      return null;
+    }
+    switch (externalPartitionType) {
+      case HASH:
+        return PartitionType.HASH;
+      case LIST:
+        return PartitionType.LIST;
+      case RANGE:
+        return PartitionType.RANGE;
+      case RANGE_INTERVAL:
+        return PartitionType.RANGE_INTERVAL;
+      default:
+        return PartitionType.HASH;
+    }
+  }
+
+  private PartitionInfo fromExternalToWrapperPartitionInfo(
+      org.apache.carbondata.format.PartitionInfo externalPartitionInfo) {
+    List<ColumnSchema> wrapperColumnSchema = new ArrayList<ColumnSchema>();
+    for (org.apache.carbondata.format.ColumnSchema columnSchema : externalPartitionInfo
--- End diff --

fixed
GitHub user QiangCai commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/896#discussion_r115900985

--- Diff: integration/spark-common/src/main/scala/org/apache/spark/sql/execution/command/carbonTableSchema.scala ---
@@ -506,6 +506,14 @@ class TableNewProcessor(cm: TableModel) {
       tableSchema.setBucketingInfo(
         new BucketingInfo(bucketCols.asJava, cm.bucketFields.get.numberOfBuckets))
     }
+    if (cm.partitionInfo.isDefined) {
+      val partitionInfo = cm.partitionInfo.get
+      val partitionCols = partitionInfo.getColumnSchemaList.asScala.map { columnSchema =>
+        allColumns.find(_.getColumnName.equalsIgnoreCase(columnSchema.getColumnName)).get
--- End diff --

fixed
GitHub user QiangCai commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/896#discussion_r115901104

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/parser/CarbonSparkSqlParser.scala ---
@@ -122,15 +122,15 @@ class CarbonSqlAstBuilder(conf: SQLConf) extends SparkSqlAstBuilder(conf) {
       if (!CommonUtil.validatePartitionColumns(tableProperties, partitionerFields)) {
         throw new MalformedCarbonCommandException("Invalid partition definition")
       }
-      // partition columns must be part of the schema
+      // partition columns can't be part of the schema
       val badPartCols = partitionerFields.map(_.partitionColumn).toSet.intersect(colNames.toSet)
-      if (badPartCols.isEmpty) {
--- End diff --

It would add two more parameters to the validate function; better to keep it here.
GitHub user QiangCai commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/896#discussion_r115901128

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/parser/CarbonSparkSqlParser.scala ---
@@ -97,8 +97,8 @@ class CarbonSqlAstBuilder(conf: SQLConf) extends SparkSqlAstBuilder(conf) {
       if (ctx.bucketSpec != null) {
         operationNotAllowed("CREATE TABLE ... CLUSTERED BY", ctx)
       }
-      val partitionerFields = Option(ctx.partitionColumns).toSeq.flatMap(visitColTypeList)
-        .map( structField =>
+      val partitionByStructField = Option(ctx.partitionColumns).toSeq.flatMap(visitColTypeList)
--- End diff --

fixed
GitHub user QiangCai commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/896#discussion_r115901142

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/parser/CarbonSparkSqlParser.scala ---
@@ -97,8 +97,8 @@ class CarbonSqlAstBuilder(conf: SQLConf) extends SparkSqlAstBuilder(conf) {
       if (ctx.bucketSpec != null) {
         operationNotAllowed("CREATE TABLE ... CLUSTERED BY", ctx)
       }
-      val partitionerFields = Option(ctx.partitionColumns).toSeq.flatMap(visitColTypeList)
-        .map( structField =>
+      val partitionByStructField = Option(ctx.partitionColumns).toSeq.flatMap(visitColTypeList)
+      val partitionerFields = partitionByStructField.map( structField =>
--- End diff --

fixed
GitHub user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/896#discussion_r115904013

--- Diff: integration/spark/src/main/scala/org/apache/spark/sql/CarbonSqlParser.scala ---
@@ -239,8 +241,27 @@ class CarbonSqlParser() extends CarbonDDLSqlParser {
         val columnName = col.getName()
         val dataType = Option(col.getType)
         val comment = col.getComment
+        val rawSchema = '`' + col.getName + '`' + ' ' + col.getType
+        val f = Field(columnName, dataType, Some(columnName), None)
--- End diff --

please rename `f`