GitHub user sraghunandan opened a pull request:
https://github.com/apache/carbondata/pull/1332

[WIP] Regenerate Hive saved data in case a test case fails

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/sraghunandan/carbondata-1 disable_hive_result_caching

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/1332.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #1332

----
commit 19950412b431cc96a346a5f02fe65f4dfd66c7c9
Author: sraghunandan <[hidden email]>
Date: 2017-09-06T09:39:44Z

    Regenerate Hive saved data in case a test case fails.
    Possible reasons for a stale cached result:
    1. The test case may have changed
    2. The input data may have changed
    3. The environment may have changed
----
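For context, the "Hive saved data" in the title refers to reference query results that the test framework serializes to a per-test file, so later runs can compare CarbonData output against them without re-running Hive. Below is a minimal sketch of that caching step, using plain java.io in place of CarbonData's FileFactory; HiveResultCache and its methods are hypothetical illustrations, not the project's actual API.

import java.io.{FileInputStream, FileOutputStream, ObjectInputStream, ObjectOutputStream}

// Hypothetical helper sketching the result-caching step: serialize the
// reference rows once, then load them back on subsequent runs.
object HiveResultCache {

  // Serialize the reference result rows to the per-test cache file.
  def save(path: String, rows: Array[String]): Unit = {
    val out = new ObjectOutputStream(new FileOutputStream(path))
    try out.writeObject(rows) finally out.close()
  }

  // Load previously cached reference rows back from disk.
  def load(path: String): Array[String] = {
    val in = new ObjectInputStream(new FileInputStream(path))
    try in.readObject().asInstanceOf[Array[String]] finally in.close()
  }
}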
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1332

SDV Build Fail, please check CI: http://144.76.159.231:8080/job/ApacheSDVTests/559/
---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1332

SDV Build Fail, please check CI: http://144.76.159.231:8080/job/ApacheSDVTests/565/
---
Github user sraghunandan commented on the issue:
https://github.com/apache/carbondata/pull/1332

ok to test
---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1332

Build Success with Spark 2.1.0, please check CI: http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/3434/
---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1332

SDV Build Fail, please check CI: http://144.76.159.231:8080/job/ApacheSDVTests/577/
---
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1332#discussion_r137439030

--- Diff: integration/spark-common-cluster-test/src/test/scala/org/apache/spark/sql/common/util/QueryTest.scala ---

@@ -84,22 +82,34 @@ class QueryTest extends PlanTest with Suite {
     checkAnswer(df, expectedAnswer.collect())
   }

-  protected def checkAnswer(carbon: String, hive: String, uniqueIdentifier:String): Unit = {
-    val path = TestQueryExecutor.hiveresultpath + "/"+uniqueIdentifier
+  protected def checkAnswer(carbon: String, hive: String, uniqueIdentifier: String): Unit = {
+    val path = TestQueryExecutor.hiveresultpath + "/" + uniqueIdentifier
     if (FileFactory.isFileExist(path, FileFactory.getFileType(path))) {
-      val objinp = new ObjectInputStream(FileFactory.getDataInputStream(path, FileFactory.getFileType(path)))
+      val objinp = new ObjectInputStream(FileFactory
+        .getDataInputStream(path, FileFactory.getFileType(path)))
       val rows = objinp.readObject().asInstanceOf[Array[Row]]
       objinp.close()
-      checkAnswer(sql(carbon), rows)
+      QueryTest.checkAnswer(sql(carbon), rows) match {
+        case Some(errorMessage) => {
+          FileFactory.deleteFile(path, FileFactory.getFileType(path))
+          writeAndCheckAnswer(carbon, hive, path)

--- End diff --

Doesn't it go into an endless loop when the test fails?
---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1332

SDV Build Fail, please check CI: http://144.76.159.231:8080/job/ApacheSDVTests/585/
---
Github user sraghunandan commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1332#discussion_r137459712

--- Diff: integration/spark-common-cluster-test/src/test/scala/org/apache/spark/sql/common/util/QueryTest.scala (same hunk as quoted above) ---

--- End diff --

I couldn't understand your comment. How would it go into an infinite loop? We are not using a recursive call.
---
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1332#discussion_r137919083

--- Diff: integration/spark-common-cluster-test/src/test/scala/org/apache/spark/sql/common/util/QueryTest.scala (same hunk as quoted above) ---

--- End diff --

Got it, my misunderstanding.
---
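To make the resolved point concrete, here is a minimal sketch of the control flow from the quoted diff. All helper names (runCarbon, loadCached, regenerateAndCheck) are hypothetical stand-ins for the real sql/ObjectInputStream/writeAndCheckAnswer calls. On a mismatch the stale cache file is deleted and the comparison is rerun exactly once against freshly generated Hive results; since regenerateAndCheck does not call back into this method, a genuine failure still surfaces instead of looping.

import java.io.File

// Sketch of the regenerate-on-failure flow; all helpers are hypothetical
// stand-ins for the CarbonData test utilities in the quoted diff.
object RetryFlow {
  def checkAnswerWithRetry(path: File,
                           runCarbon: () => Array[String],
                           loadCached: File => Array[String],
                           regenerateAndCheck: () => Option[String]): Option[String] = {
    if (path.exists()) {
      val cached = loadCached(path)     // read previously saved Hive rows
      if (runCarbon().sameElements(cached)) {
        None                            // results match the cache: pass
      } else {
        path.delete()                   // drop the possibly stale cache
        regenerateAndCheck()            // single retry, not recursion
      }
    } else {
      regenerateAndCheck()              // first run: generate cache and compare
    }
  }
}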
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1332

LGTM
---