GitHub user kunal642 opened a pull request:
https://github.com/apache/carbondata/pull/1040

[CARBONDATA-1171] Added support for SHOW PARTITIONS DDL

This PR adds support for the SHOW PARTITIONS DDL. Example: SHOW PARTITIONS <table-name>.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kunal642/carbondata show_partitions

Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/carbondata/pull/1040.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1040

----
commit ba4ad237dfd70a93b585a3a6c5e086888fe48426
Author: kunal642 <[hidden email]>
Date: 2017-06-15T09:00:23Z

    added support for show partitions DDL
----
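A minimal usage sketch, not taken from the PR text: "sales_part" is a hypothetical, previously created partitioned Carbon table, and the exact output format depends on the partition type (the review diffs further down appear to render hash partitions as <column>=HASH_NUMBER(<n>)).

    import org.apache.spark.sql.SQLContext

    // Run the new DDL and print one row per partition.
    def showPartitionsExample(sqlContext: SQLContext): Unit = {
      sqlContext.sql("SHOW PARTITIONS sales_part").show()
    }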
Github user asfgit commented on the issue:
https://github.com/apache/carbondata/pull/1040 Can one of the admins verify this patch?
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1040 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2505/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1040 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2512/
Github user asfgit commented on the issue:
https://github.com/apache/carbondata/pull/1040

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/carbondata-pr-spark-1.6/394/

Failed Tests: 3 (carbondata-spark-common-test)
- org.apache.carbondata.spark.testsuite.partition.TestDataLoadingForPartitionTable.data loading for partition table: range partition
- org.apache.carbondata.spark.testsuite.partition.TestDataLoadingForPartitionTable.Insert into for partition table: range partition
- org.apache.carbondata.spark.testsuite.partition.TestQueryForPartitionTable.detail query on partition table: range partition
Github user kunal642 commented on the issue:
https://github.com/apache/carbondata/pull/1040 retest this please
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1040 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2516/
Github user asfgit commented on the issue:
https://github.com/apache/carbondata/pull/1040

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/carbondata-pr-spark-1.6/399/

Failed Tests: 3 (carbondata-spark-common-test)
- org.apache.carbondata.spark.testsuite.partition.TestDataLoadingForPartitionTable.data loading for partition table: range partition
- org.apache.carbondata.spark.testsuite.partition.TestDataLoadingForPartitionTable.Insert into for partition table: range partition
- org.apache.carbondata.spark.testsuite.partition.TestQueryForPartitionTable.detail query on partition table: range partition
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1040 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2523/
Github user asfgit commented on the issue:
https://github.com/apache/carbondata/pull/1040

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/carbondata-pr-spark-1.6/411/

Failed Tests: 1 (carbondata-spark-common-test)
- org.apache.carbondata.spark.testsuite.allqueries.InsertIntoCarbonTableTestCase.insert into carbon table from carbon table union query
Github user jackylk commented on the issue:
https://github.com/apache/carbondata/pull/1040 retest this please
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1040 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2568/
Github user asfgit commented on the issue:
https://github.com/apache/carbondata/pull/1040 Refer to this link for build results (access rights to CI server needed): https://builds.apache.org/job/carbondata-pr-spark-1.6/462/
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1040#discussion_r122628497

--- Diff: integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CommonUtil.scala ---
@@ -435,4 +437,18 @@ object CommonUtil {
     }
   }

+  def getPartitionsBasedOnType(partitionInfo: PartitionInfo): Seq[String] = {
+    val columnSchema = partitionInfo.getColumnSchemaList.get(0)
+    partitionInfo.getPartitionType match {
+      case PartitionType.HASH => Seq(s"${ columnSchema.getColumnName }=HASH_NUMBER(${ partitionInfo
--- End diff --

move to next line after `=>`
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1040#discussion_r122628550

--- Diff: integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CommonUtil.scala ---
@@ -435,4 +437,18 @@ object CommonUtil {
     }
   }

+  def getPartitionsBasedOnType(partitionInfo: PartitionInfo): Seq[String] = {
+    val columnSchema = partitionInfo.getColumnSchemaList.get(0)
+    partitionInfo.getPartitionType match {
+      case PartitionType.HASH => Seq(s"${ columnSchema.getColumnName }=HASH_NUMBER(${ partitionInfo
--- End diff --

And add space before and after `=`
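For reference, a hedged sketch of the helper with both review suggestions applied (case body moved to the line after `=>`, spaces around `=`). Only the HASH branch is visible in the truncated diff; the getNumPartitions call, the fallback branch, and the import paths are assumptions, not code from the PR.

    import org.apache.carbondata.core.metadata.schema.PartitionInfo
    import org.apache.carbondata.core.metadata.schema.partition.PartitionType

    def getPartitionsBasedOnType(partitionInfo: PartitionInfo): Seq[String] = {
      val columnSchema = partitionInfo.getColumnSchemaList.get(0)
      partitionInfo.getPartitionType match {
        case PartitionType.HASH =>
          // The diff truncates after "partitionInfo"; getNumPartitions is an assumed completion.
          Seq(s"${columnSchema.getColumnName} = HASH_NUMBER(${partitionInfo.getNumPartitions})")
        case _ =>
          // RANGE/LIST handling is not visible in the diff and is omitted from this sketch.
          Seq.empty
      }
    }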
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1040#discussion_r122628649

--- Diff: integration/spark/src/main/scala/org/apache/spark/sql/CarbonCatalystOperators.scala ---
@@ -75,6 +75,17 @@ case class DescribeFormattedCommand(sql: String, tblIdentifier: TableIdentifier)
     Seq(AttributeReference("result", StringType, nullable = false)())
 }

+case class ShowPartitionsCommand(tableIdentifier: TableIdentifier)
+  extends LogicalPlan with Command {
+
+  override def output: Seq[AttributeReference] =
+    Seq(AttributeReference("partitions", StringType, nullable = false)())
+
+  override def children: Seq[LogicalPlan] = Seq.empty
+}
+
+
--- End diff --

remove unnecessary empty lines
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1040#discussion_r122628873

--- Diff: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/carbonTableSchema.scala ---
@@ -921,3 +923,28 @@ private[sql] case class CleanFiles(
     Seq.empty
   }
 }
+
+private[sql] case class ShowPartitions(tableIdentifier: TableIdentifier)
+  extends RunnableCommand {
+
+  override val output: Seq[Attribute] = {
+    AttributeReference("partition", StringType, nullable = false)() :: Nil
+  }
+
+  def run(sqlContext: SQLContext): Seq[Row] = {
--- End diff --

It seems this command is implemented twice in spark and spark2, can you move it to the spark-common module?
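A hedged sketch of how run() might be wired up by reusing CommonUtil.getPartitionsBasedOnType from the earlier diff; the lookup of the table's PartitionInfo from the Carbon metadata is left elided because the diff does not show which API the PR uses for it.

    def run(sqlContext: SQLContext): Seq[Row] = {
      // Resolve the table's PartitionInfo from the Carbon table metadata (lookup elided).
      val partitionInfo: PartitionInfo = ???
      // One output row per partition description string.
      CommonUtil.getPartitionsBasedOnType(partitionInfo).map(Row(_))
    }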
Github user asfgit commented on the issue:
https://github.com/apache/carbondata/pull/1040

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/carbondata-pr-spark-1.6/490/

Build result: FAILURE (Maven build error; log truncated)
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <goals> -rf :carbondata-spark-common

Setting status of 1346d30148567228a3f7385b00d05cde88403a85 to FAILURE with url https://builds.apache.org/job/carbondata-pr-spark-1.6/490/ and message: 'Tests Failed for Spark1.6'
Using context: Jenkins(Spark1.6): mvn clean test -Pspark-1.6
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1040 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2592/
Github user asfgit commented on the issue:
https://github.com/apache/carbondata/pull/1040 Refer to this link for build results (access rights to CI server needed): https://builds.apache.org/job/carbondata-pr-spark-1.6/517/