Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2084

Build Failed with Spark 2.1.0. Please check CI: http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4613/
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2084

SDV Build Fail. Please check CI: http://144.76.159.231:8080/job/ApacheSDVTests/4115/
Github user manishgupta88 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2084#discussion_r177429936

--- Diff: core/src/main/java/org/apache/carbondata/events/OperationContext.java ---
@@ -41,4 +41,12 @@ public void setProperty(String key, Object value) {
   public Object getProperty(String key) {
     return this.operationProperties.get(key);
   }
+
+  public Object getOrElse(String key, Object defaultValue) {
--- End diff ---

This method is not used anywhere. If it is not required, remove it.
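For context, the helper under review amounts to a map lookup with a fallback value. A minimal Scala sketch of that semantics (the class name `OperationContextSketch` and its members mirror the diff but are illustrative, not the actual CarbonData Java source):

```scala
// Hypothetical sketch of the reviewed getOrElse helper; an illustration of
// the semantics, not the CarbonData OperationContext class itself.
import scala.collection.mutable

class OperationContextSketch {
  private val operationProperties = mutable.Map[String, Any]()

  def setProperty(key: String, value: Any): Unit =
    operationProperties(key) = value

  def getProperty(key: String): Any =
    operationProperties.getOrElse(key, null)

  // The reviewed addition: return defaultValue when the key is absent.
  def getOrElse(key: String, defaultValue: Any): Any =
    operationProperties.getOrElse(key, defaultValue)
}
```

As the review notes, a helper like this only earns its keep once callers actually use the default-value path; it was removed in a later commit.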
Github user manishgupta88 commented on the issue:
https://github.com/apache/carbondata/pull/2084

Add 2-3 test cases for the time series feature as well.
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2084#discussion_r177615795

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonCreateDataMapCommand.scala ---
@@ -49,10 +49,6 @@ case class CarbonCreateDataMapCommand(
     // so streaming table does not support create datamap
     mainTable = CarbonEnv.getCarbonTable(tableIdentifier.database, tableIdentifier.table)(sparkSession)
-    if (mainTable.isStreamingTable) {
-      throw new MalformedCarbonCommandException("Streaming table does not support creating datamap")
--- End diff ---

Better to remove this check only for pre-agg/timeseries; keep it for the remaining datamaps, and add the datamap type to the exception message.
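The suggested change can be sketched as: keep the streaming-table restriction, but let the pre-aggregate/timeseries providers through, and name the offending datamap type in the error. The `DataMapValidation` object, the provider-name strings, and the method signature below are assumptions for illustration, not the actual CarbonCreateDataMapCommand change:

```scala
// Illustrative sketch of the reviewer's suggestion; object and provider
// names are assumptions, not CarbonData source.
class MalformedCarbonCommandException(msg: String) extends Exception(msg)

object DataMapValidation {
  // Datamap types the review suggests allowing on streaming tables.
  private val allowedOnStreaming = Set("preaggregate", "timeseries")

  def checkStreamingTable(isStreamingTable: Boolean, dmClassName: String): Unit = {
    if (isStreamingTable && !allowedOnStreaming.contains(dmClassName.toLowerCase)) {
      // Include the datamap type in the message, as the review asks.
      throw new MalformedCarbonCommandException(
        s"Streaming table does not support creating $dmClassName datamap")
    }
  }
}
```

The point of the narrower check is that pre-aggregate and timeseries datamaps are maintained on ingest and so can coexist with streaming, while other datamap types still cannot.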
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2084#discussion_r177616850

--- Diff: integration/spark2/src/test/scala/org/apache/spark/carbondata/TestStreamingTableOperation.scala ---
@@ -173,9 +175,9 @@ class TestStreamingTableOperation extends QueryTest with BeforeAndAfterAll {
   }

   override def afterAll {
-    dropTable()
-    sql("USE default")
-    sql("DROP DATABASE IF EXISTS streaming CASCADE")
+//    dropTable()
+//    sql("USE default")
+//    sql("DROP DATABASE IF EXISTS streaming CASCADE")
--- End diff ---

Did you comment these out intentionally?
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2084#discussion_r177617095

--- Diff: integration/spark2/src/test/scala/org/apache/spark/carbondata/TestStreamingTableOperation.scala ---
@@ -249,6 +251,195 @@ class TestStreamingTableOperation extends QueryTest with BeforeAndAfterAll {
     assertResult(exceptedRow)(row)
   }

+  test("test preaggregate table creation on streaming table without handoff") {
+    val identifier = new TableIdentifier("agg_table", Option("streaming"))
+    val carbonTable = CarbonEnv.getInstance(spark).carbonMetastore.lookupRelation(identifier)(spark)
+      .asInstanceOf[CarbonRelation].metaData.carbonTable
+    val csvDataDir = new File("target/csvdatanew").getCanonicalPath
+    // streaming ingest 10 rows
+    generateCSVDataFile(spark, idStart = 10, rowNums = 5, csvDataDir)
--- End diff ---

Is it required to generate CSV data for each test case? Can't we generate it once in beforeAll?
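As a rough stand-in for the `generateCSVDataFile` helper the tests call, writing the fixture once (e.g. from `beforeAll`) could look like the following. The object name, output file name, and row format here are assumptions for illustration; the real helper in TestStreamingTableOperation writes through Spark:

```scala
// Minimal stand-in for the test fixture generator: write rowNums CSV rows
// with ids starting at idStart into csvDataDir. Column layout is assumed.
import java.io.{File, PrintWriter}

object CsvFixtureSketch {
  def generateCSVDataFile(idStart: Int, rowNums: Int, csvDataDir: String): Unit = {
    val dir = new File(csvDataDir)
    dir.mkdirs()
    val writer = new PrintWriter(new File(dir, "data.csv"))
    try {
      (idStart until idStart + rowNums).foreach { id =>
        // id, name, city, salary columns are illustrative assumptions
        writer.println(s"$id,name_$id,city_$id,${10000 + id}")
      }
    } finally {
      writer.close()
    }
  }
}
```

Generating once in `beforeAll` keeps the tests faster, though as the later reply notes, streaming checkpoints can force a fresh file per ingest in some cases.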
Github user kunal642 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2084#discussion_r177706257

--- Diff: core/src/main/java/org/apache/carbondata/events/OperationContext.java ---
@@ -41,4 +41,12 @@ public void setProperty(String key, Object value) {
   public Object getProperty(String key) {
     return this.operationProperties.get(key);
   }
+
+  public Object getOrElse(String key, Object defaultValue) {
--- End diff ---

Removed.
Github user kunal642 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2084#discussion_r177706270

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonCreateDataMapCommand.scala ---
@@ -49,10 +49,6 @@ case class CarbonCreateDataMapCommand(
     // so streaming table does not support create datamap
     mainTable = CarbonEnv.getCarbonTable(tableIdentifier.database, tableIdentifier.table)(sparkSession)
-    if (mainTable.isStreamingTable) {
-      throw new MalformedCarbonCommandException("Streaming table does not support creating datamap")
--- End diff ---

Done.
Github user kunal642 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2084#discussion_r177707665

--- Diff: integration/spark2/src/test/scala/org/apache/spark/carbondata/TestStreamingTableOperation.scala ---
@@ -173,9 +175,9 @@ class TestStreamingTableOperation extends QueryTest with BeforeAndAfterAll {
   }

   override def afterAll {
-    dropTable()
-    sql("USE default")
-    sql("DROP DATABASE IF EXISTS streaming CASCADE")
+//    dropTable()
+//    sql("USE default")
+//    sql("DROP DATABASE IF EXISTS streaming CASCADE")
--- End diff ---

Removed the comment.
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2084

Build Failed with Spark 2.2.1. Please check CI: http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/3411/
Github user kunal642 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2084#discussion_r177729512

--- Diff: integration/spark2/src/test/scala/org/apache/spark/carbondata/TestStreamingTableOperation.scala ---
@@ -249,6 +251,195 @@ class TestStreamingTableOperation extends QueryTest with BeforeAndAfterAll {
     assertResult(exceptedRow)(row)
   }

+  test("test preaggregate table creation on streaming table without handoff") {
+    val identifier = new TableIdentifier("agg_table", Option("streaming"))
+    val carbonTable = CarbonEnv.getInstance(spark).carbonMetastore.lookupRelation(identifier)(spark)
+      .asInstanceOf[CarbonRelation].metaData.carbonTable
+    val csvDataDir = new File("target/csvdatanew").getCanonicalPath
+    // streaming ingest 10 rows
+    generateCSVDataFile(spark, idStart = 10, rowNums = 5, csvDataDir)
--- End diff ---

Moved to beforeAll(). In the compaction test, generateCSVDataFile had to be called 4 times because checkpoints are maintained.
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2084

Build Failed with Spark 2.1.0. Please check CI: http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4637/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2084

Build Failed with Spark 2.2.1. Please check CI: http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/3413/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2084

Build Failed with Spark 2.1.0. Please check CI: http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4639/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2084

Build Failed with Spark 2.2.1. Please check CI: http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/3415/
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2084

SDV Build Fail. Please check CI: http://144.76.159.231:8080/job/ApacheSDVTests/4145/
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2084

SDV Build Fail. Please check CI: http://144.76.159.231:8080/job/ApacheSDVTests/4147/
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2084

SDV Build Success. Please check CI: http://144.76.159.231:8080/job/ApacheSDVTests/4148/
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2084

SDV Build Success. Please check CI: http://144.76.159.231:8080/job/ApacheSDVTests/4150/