GitHub user xubo245 opened a pull request:
https://github.com/apache/carbondata/pull/2282

[CARBONDATA-2456] Handle the request by shard in search mode, instead of handling it by multiBlockSplit

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:

 - [ ] Any interfaces changed? No
 - [ ] Any backward compatibility impacted? No
 - [ ] Document update required? No
 - [ ] Testing done
       Please provide details on
       - Whether new unit test cases have been added or why no new tests are required?
       - How it is tested? Please attach test report.
       - Is it a performance related change? Please attach the performance test report.
       - Any additional information to help reviewers in testing this change.
       add test case
 - [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA. No

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/xubo245/carbondata CARBONDATA-2456-shardOfSearchMode

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/2282.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2282

----

commit a927d914b40f878188598b30053ab4b004ca2728
Author: xubo245 <601450868@...>
Date:   2018-05-08T10:34:07Z

    [CARBONDATA-2456] Handling request by shard in search mode, which instead of handling the request by multiBlockSplit

commit 74de91f2e9b51c6db41509b5bb9543c2c7656230
Author: xubo245 <601450868@...>
Date:   2018-05-08T10:35:01Z

    add

----

---
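The idea behind the change can be illustrated with a small, self-contained sketch. The `Block` and `SearchRequest` case classes and the `requestsByShard` helper below are illustrative stand-ins, not the real CarbonData classes: instead of wrapping all of a worker's blocks into one multi-block split, the blocks are first grouped by the shard they belong to, and one search request is built per shard.

```scala
// Hypothetical sketch of per-shard request grouping; Block, SearchRequest
// and requestsByShard are illustrative names, not the CarbonData API.
object ShardGrouping {
  case class Block(shardName: String, path: String)
  case class SearchRequest(queryId: Int, shardName: String, blocks: Seq[Block])

  // Group a worker's blocks by shard, then build one request per shard
  // (the previous behavior built a single request over all blocks).
  def requestsByShard(queryId: Int, blocks: Seq[Block]): Seq[SearchRequest] =
    blocks.groupBy(_.shardName).toSeq.map { case (shard, shardBlocks) =>
      SearchRequest(queryId, shard, shardBlocks)
    }
}
```

`groupBy` preserves the relative order of blocks within each shard, so each per-shard request sees its blocks in the original order.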
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2282

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/5732/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2282

Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/4571/

---
Github user xubo245 commented on the issue:
https://github.com/apache/carbondata/pull/2282

@jackylk Please review it.

---
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2282#discussion_r186920478

--- Diff: integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/detailquery/SearchModeWithShardSuite.scala ---
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.spark.testsuite.detailquery
+
+import org.apache.carbondata.datamap.lucene.LuceneFineGrainDataMapSuite
+import org.apache.spark.sql.test.util.QueryTest
+import org.apache.spark.sql.{CarbonSession, Row}
+import org.scalatest.BeforeAndAfterAll
+
+/**
+ * Test Suite for search mode with shard
+ */
+
+class SearchModeWithShardSuite extends QueryTest with BeforeAndAfterAll {
+  val file = resourcesPath + "/datamap_input.csv"
+
+  override def beforeAll = {
+    sqlContext.sparkSession.asInstanceOf[CarbonSession].startSearchMode()
+    LuceneFineGrainDataMapSuite.createFile(file, 1000000)
+    sql("DROP TABLE IF EXISTS datamap_test_table")
+  }
+
+  override def afterAll = {
+    LuceneFineGrainDataMapSuite.deleteFile(file)
+    sql("DROP TABLE IF EXISTS datamap_test_table")
+    sqlContext.sparkSession.asInstanceOf[CarbonSession].stopSearchMode()
+  }
+
+  private def sparkSql(sql: String): Seq[Row] = {
+    sqlContext.sparkSession.asInstanceOf[CarbonSession].sparkSql(sql).collect()
+  }
+
+  test("test search mode with shard to search") {

--- End diff --

add this to existing test suite `SearchModeTestcase`

---
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2282#discussion_r186920585

--- Diff: store/search/src/main/scala/org/apache/spark/rpc/Master.scala ---
@@ -182,40 +184,79 @@ class Master(sparkConf: SparkConf) {
       }
       LOG.info(s"[SearchId:$queryId] accumulated result size $rowCount")
     }
-    def onFaiure(e: Throwable) = throw new IOException(s"exception in worker: ${ e.getMessage }")
-    def onTimedout() = throw new ExecutionTimeoutException()
+
+    def onFailure(e: Throwable) = throw new IOException(s"exception in worker: ${e.getMessage}")
+
+    def onTimeout() = throw new ExecutionTimeoutException()

     // prune data and get a mapping of worker hostname to list of blocks,
     // then add these blocks to the SearchRequest and fire the RPC call
     val nodeBlockMapping: JMap[String, JList[Distributable]] = pruneBlock(table, columns, filter)
     val tuple = nodeBlockMapping.asScala.map { case (splitAddress, blocks) =>
-      // Build a SearchRequest
-      val split = new SerializableWritable[CarbonMultiBlockSplit](
-        new CarbonMultiBlockSplit(blocks, splitAddress))
-      val request = SearchRequest(queryId, split, table.getTableInfo, columns, filter, localLimit)
+      val hashMap = new mutable.HashMap[String, JList[Distributable]]()
+      for (i <- 0 until (blocks.size())) {
+        val shardName = CarbonTablePath

--- End diff --

I think this logic should be inside Scheduler

---
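The grouping loop quoted in the diff above is cut off after `CarbonTablePath`. A self-contained sketch of the same accumulation pattern is shown below; the `shardNameOf` helper is a hypothetical stand-in for the truncated `CarbonTablePath` call that derives a shard name from a block's file path, and plain path strings stand in for `Distributable` blocks.

```scala
import scala.collection.mutable
import java.util.{ArrayList => JArrayList, List => JList}

object ShardMap {
  // Hypothetical stand-in for the truncated CarbonTablePath call:
  // here the shard name is simply the first path segment.
  def shardNameOf(path: String): String = path.split('/').head

  // Accumulate blocks (represented by their paths) into a
  // shard -> java.util.List map, mirroring the mutable.HashMap
  // pattern in the diff.
  def byShard(blocks: JList[String]): mutable.HashMap[String, JList[String]] = {
    val hashMap = new mutable.HashMap[String, JList[String]]()
    for (i <- 0 until blocks.size()) {
      val path = blocks.get(i)
      hashMap.getOrElseUpdate(shardNameOf(path), new JArrayList[String]()).add(path)
    }
    hashMap
  }
}
```

Moving this grouping into the Scheduler, as the reviewer suggests, would keep `Master` focused on firing one RPC per grouped request rather than on how blocks are partitioned.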
Github user xubo245 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2282#discussion_r186924226

--- Diff: integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/detailquery/SearchModeWithShardSuite.scala ---
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.spark.testsuite.detailquery
+
+import org.apache.carbondata.datamap.lucene.LuceneFineGrainDataMapSuite
+import org.apache.spark.sql.test.util.QueryTest
+import org.apache.spark.sql.{CarbonSession, Row}
+import org.scalatest.BeforeAndAfterAll
+
+/**
+ * Test Suite for search mode with shard
+ */
+
+class SearchModeWithShardSuite extends QueryTest with BeforeAndAfterAll {
+  val file = resourcesPath + "/datamap_input.csv"
+
+  override def beforeAll = {
+    sqlContext.sparkSession.asInstanceOf[CarbonSession].startSearchMode()
+    LuceneFineGrainDataMapSuite.createFile(file, 1000000)
+    sql("DROP TABLE IF EXISTS datamap_test_table")
+  }
+
+  override def afterAll = {
+    LuceneFineGrainDataMapSuite.deleteFile(file)
+    sql("DROP TABLE IF EXISTS datamap_test_table")
+    sqlContext.sparkSession.asInstanceOf[CarbonSession].stopSearchMode()
+  }
+
+  private def sparkSql(sql: String): Seq[Row] = {
+    sqlContext.sparkSession.asInstanceOf[CarbonSession].sparkSql(sql).collect()
+  }
+
+  test("test search mode with shard to search") {

--- End diff --

ok, done

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2282

Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/4594/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2282

Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/5753/

---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2282

SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/4814/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2282

Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/4597/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2282

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/5756/

---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2282

SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/4817/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2282

Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/5033/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2282

Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/6194/

---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2282

SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/5168/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2282

Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/6198/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2282

Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/5037/

---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2282

SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/5172/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2282

Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/5054/

---