GitHub user jackylk opened a pull request:
https://github.com/apache/carbondata/pull/1835

[CARBONDATA-2057] Support specifying path when creating pre-aggregate table

When creating a datamap for a pre-aggregate table, the user should be able to specify its store location. The user can use the "path" property:

```
CREATE DATAMAP agg ON TABLE main
USING 'preaggregate'
DMPROPERTIES ('path'='datamap_storage_path')
AS SELECT ...
```

- [ ] Any interfaces changed?
- [ ] Any backward compatibility impacted?
- [ ] Document update required?
- [ ] Testing done
      Please provide details on
      - Whether new unit test cases have been added or why no new tests are required?
      - How it is tested? Please attach test report.
      - Is it a performance related change? Please attach the performance test report.
      - Any additional information to help reviewers in testing this change.
- [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/jackylk/incubator-carbondata datamap_location

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/1835.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #1835

----

commit b76bf7988a1c56b6657668784c53af44862645fa
Author: Jacky Li <jacky.likun@...>
Date: 2018-01-02T15:46:14Z

    [CARBONDATA-1968] Add external table support

    This PR adds support for creating an external table with existing carbondata files, using Hive syntax:

        CREATE EXTERNAL TABLE tableName STORED BY 'carbondata' LOCATION 'path'

    This closes #1749

commit 65d07cc86020a858bbd611893f6e991601b63f96
Author: Jacky Li <jacky.likun@...>
Date: 2018-01-06T12:28:44Z

    [CARBONDATA-1992] Remove partitionId in CarbonTablePath

    In CarbonTablePath there is a deprecated partition id which is always 0; it should be removed to avoid confusion.

    This closes #1765

commit 3e1da7c3be6298c620214c7c505b91e6c4596ff8
Author: SangeetaGulia <sangeeta.gulia@...>
Date: 2017-09-21T09:26:26Z

    [CARBONDATA-1827] S3 Carbon Implementation

    1. Provide support for S3 in carbondata.
    2. Added S3Example to create a carbon table on S3.
    3. Added S3CSVExample to load a carbon table using CSV from S3.

    This closes #1805

commit d1ae835671db60dd3c96f9b87489006857a31837
Author: Jacky Li <jacky.likun@...>
Date: 2018-01-19T06:48:36Z

    add datamap path

----
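As a hedged end-to-end sketch of the feature described above (not code taken from the PR): the table name `main`, the inserted values, and the path '/tmp/datamap_storage_path' are illustrative only, and the snippet assumes a Carbon-enabled `sql` helper such as the QueryTest one used in the test case discussed later in this thread.

```
// Hedged usage sketch of the new 'path' DMPROPERTY; assumes a Carbon-enabled
// `sql` helper (e.g. from the QueryTest base class used in the tests below).
// Table and path names are illustrative, not taken from the PR.
sql("create table main(year int, month int, name string, salary int) stored by 'carbondata'")
sql("insert into main select 2018, 1, 'amy', 12")

// The pre-aggregate child table's files are written under the user-supplied path
// instead of the default store location.
sql(
  """create datamap agg on table main
    |using 'preaggregate'
    |dmproperties ('path'='/tmp/datamap_storage_path')
    |as select name, avg(salary) from main group by name""".stripMargin)

// Aggregate queries on the main table can be rewritten to read the pre-aggregate table.
sql("select name, avg(salary) from main group by name").show()
```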
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1835#discussion_r162550458

--- Diff: integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/datamap/TestDataMapCommand.scala ---
@@ -207,6 +209,26 @@ class TestDataMapCommand extends QueryTest with BeforeAndAfterAll {
       Seq(Row(1, 31), Row(2, 27), Row(3, 70), Row(4, 55)))
   }
+  test("create pre-agg table with path") {
+    sql("drop table if exists maintbl_preagg")
+    sql("drop table if exists maintbl ")
+    val path = "./_pre-agg_test"
+    try {
+      sql("create table maintbl(year int,month int,name string,salary int) stored by 'carbondata' tblproperties('sort_columns'='month,year,name')")
+      sql("insert into maintbl select 10,11,'amy',12")
+      sql("insert into maintbl select 10,11,'amy',12")
+      sql("create datamap preagg on table maintbl " +
+          "using 'preaggregate' " +
+          s"dmproperties ('path'='$path') " +
+          "as select name,avg(salary) from maintbl group by name")
+      assertResult(true)(new File(path).exists())
+      checkAnswer(sql("select name,avg(salary) from maintbl group by name"), Row("amy", 12.0))
--- End diff --

Make sure the data actually comes from the aggregate table; better to query the aggregate table directly and verify it once.

---
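A minimal sketch of how this suggestion could be addressed, assuming it runs inside the same test; the child-table name `maintbl_preagg` follows the usual `<parent>_<datamapName>` convention (also used by the drop statement at the top of the test), and this is not the test code that was merged.

```
// Hedged sketch of the reviewer's suggestion, not the merged test code. It only
// checks that the pre-aggregate child table (assumed name: maintbl_preagg) is
// queryable and holds aggregated rows; exact child-table column names and values
// are deliberately not asserted here.
val childRows = sql("select * from maintbl_preagg").collect()
assert(childRows.nonEmpty, "pre-aggregate child table should contain aggregated rows")
```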
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1835#discussion_r162550560

--- Diff: integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/datamap/TestDataMapCommand.scala ---
@@ -207,6 +209,26 @@ class TestDataMapCommand extends QueryTest with BeforeAndAfterAll {
       Seq(Row(1, 31), Row(2, 27), Row(3, 70), Row(4, 55)))
   }
+  test("create pre-agg table with path") {
+    sql("drop table if exists maintbl_preagg")
+    sql("drop table if exists maintbl ")
+    val path = "./_pre-agg_test"
+    try {
+      sql("create table maintbl(year int,month int,name string,salary int) stored by 'carbondata' tblproperties('sort_columns'='month,year,name')")
+      sql("insert into maintbl select 10,11,'amy',12")
+      sql("insert into maintbl select 10,11,'amy',12")
+      sql("create datamap preagg on table maintbl " +
+          "using 'preaggregate' " +
+          s"dmproperties ('path'='$path') " +
+          "as select name,avg(salary) from maintbl group by name")
+      assertResult(true)(new File(path).exists())
--- End diff --

Check whether the data files are present inside the path or not.

---
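A hedged sketch of this suggestion, assuming it runs inside the same test with `path` in scope: besides checking that the directory exists, walk it recursively and confirm at least one .carbondata fact file was actually written there.

```
// Hedged sketch, not the merged test code: verify that carbondata data files
// were written under the user-supplied datamap path.
import java.io.File

def listFilesRecursively(dir: File): Seq[File] = {
  val children = Option(dir.listFiles()).map(_.toSeq).getOrElse(Seq.empty[File])
  children ++ children.filter(_.isDirectory).flatMap(listFilesRecursively)
}

val dataFiles = listFilesRecursively(new File(path))
  .filter(_.getName.endsWith(".carbondata"))
assert(dataFiles.nonEmpty, s"expected .carbondata files under $path")
```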
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1835#discussion_r162550904

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonDropDataMapCommand.scala ---
@@ -104,12 +99,12 @@ case class CarbonDropDataMapCommand(
           tableName))(sparkSession)
         if (dataMapSchema.isDefined) {
           if (dataMapSchema.get._1.getRelationIdentifier != null) {
-            CarbonDropTableCommand(
+            commandToRun = CarbonDropTableCommand(
--- End diff --

The same code is already handled in PR 1821.

---
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1835#discussion_r162553454

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonDropDataMapCommand.scala ---
@@ -104,12 +99,12 @@ case class CarbonDropDataMapCommand(
           tableName))(sparkSession)
         if (dataMapSchema.isDefined) {
           if (dataMapSchema.get._1.getRelationIdentifier != null) {
-            CarbonDropTableCommand(
+            commandToRun = CarbonDropTableCommand(
--- End diff --

OK, but that PR needs a rebase anyway.

---
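To illustrate the apparent intent of the `commandToRun` refactor in the diff above, here is a hypothetical, simplified Scala sketch (made-up class and method names, not the actual CarbonDropDataMapCommand code): the child table's drop command is built once and kept in a field so a later phase can reuse the same instance instead of constructing it twice.

```
// Hypothetical sketch only; not the actual CarbonDropDataMapCommand code.
case class DropTableCommand(dbName: Option[String], tableName: String) {
  def processMetadata(): Unit =
    println(s"dropping metadata of ${dbName.getOrElse("default")}.$tableName")
  def processData(): Unit =
    println(s"dropping data of ${dbName.getOrElse("default")}.$tableName")
}

class DropDataMapCommand(dbName: Option[String], childTableName: String) {
  private var commandToRun: DropTableCommand = _

  def processMetadata(): Unit = {
    // Build the child-table drop command once and keep it for later reuse.
    commandToRun = DropTableCommand(dbName, childTableName)
    commandToRun.processMetadata()
  }

  def processData(): Unit = {
    // Reuse the command created during the metadata phase, if any
    // (it may be null when the datamap/table did not exist and IF EXISTS was set).
    if (commandToRun != null) {
      commandToRun.processData()
    }
  }
}

// Usage of the sketch:
// val cmd = new DropDataMapCommand(None, "maintbl_preagg")
// cmd.processMetadata()
// cmd.processData()
```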
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1835

SDV Build Success, please check CI: http://144.76.159.231:8080/job/ApacheSDVTests/2990/

---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1835

SDV Build Success, please check CI: http://144.76.159.231:8080/job/ApacheSDVTests/2991/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1835

Build Success with Spark 2.2.1, please check CI: http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1748/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1835

Build Success with Spark 2.1.0, please check CI: http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2978/

---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1835

SDV Build Success, please check CI: http://144.76.159.231:8080/job/ApacheSDVTests/2993/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1835

Build Success with Spark 2.2.1, please check CI: http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1749/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1835

Build Failed with Spark 2.1.0, please check CI: http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2979/

---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1835

LGTM

---
Github user xubo245 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1835#discussion_r163775104

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonDropDataMapCommand.scala ---
@@ -68,18 +69,12 @@ case class CarbonDropDataMapCommand(
         lock => carbonLocks += CarbonLockUtil.getLockObject(tableIdentifier, lock)
       }
       LOGGER.audit(s"Deleting datamap [$dataMapName] under table [$tableName]")
-      var carbonTable: Option[CarbonTable] =
-        catalog.getTableFromMetadataCache(dbName, tableName)
-      if (carbonTable.isEmpty) {
-        try {
-          carbonTable = Some(catalog.lookupRelation(identifier)(sparkSession)
-            .asInstanceOf[CarbonRelation].metaData.carbonTable)
-        } catch {
-          case ex: NoSuchTableException =>
-            if (!ifExistsSet) {
-              throw ex
-            }
-        }
+      val carbonTable: Option[CarbonTable] = try {
+        Some(CarbonEnv.getCarbonTable(databaseNameOp, tableName)(sparkSession))
+      } catch {
+        case ex: NoSuchTableException =>
+          if (!ifExistsSet) throw ex
--- End diff --

If this line is added, then running "DROP DATAMAP IF EXISTS agg1_month ON TABLE mainTableNotExist" will not throw an exception when the table does not exist. So we should remove it. https://github.com/apache/carbondata/pull/1858/files

---
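To make the scenario described above concrete, here is a minimal, self-contained Scala sketch of the control flow (hypothetical names, not the actual CarbonDropDataMapCommand code): with IF EXISTS set, the NoSuchTableException from the lookup is swallowed and the table resolves to None, so the statement completes without surfacing an error even though the main table does not exist.

```
// Hypothetical sketch of the control flow under discussion; names are made up
// and this is not the actual CarbonDropDataMapCommand code.
class NoSuchTableException(msg: String) extends RuntimeException(msg)

// Stand-in for the catalog lookup: fails for a table that does not exist.
def lookupTable(tableName: String): String =
  if (tableName == "mainTableNotExist") throw new NoSuchTableException(tableName)
  else tableName

def resolveCarbonTable(tableName: String, ifExistsSet: Boolean): Option[String] =
  try {
    Some(lookupTable(tableName))
  } catch {
    case ex: NoSuchTableException =>
      if (!ifExistsSet) throw ex
      None // IF EXISTS: the error is swallowed; later steps must cope with None
  }

// DROP DATAMAP IF EXISTS agg1_month ON TABLE mainTableNotExist
// -> resolves to None and no exception surfaces to the user.
println(resolveCarbonTable("mainTableNotExist", ifExistsSet = true))   // prints None

// Without IF EXISTS, the original NoSuchTableException is rethrown:
// resolveCarbonTable("mainTableNotExist", ifExistsSet = false)        // throws
```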