http://apache-carbondata-dev-mailing-list-archive.168.s1.nabble.com/jira-Commented-CARBONDATA-272-Two-test-case-are-failing-on-second-time-maven-build-without-clean-tp1470.html
Currently the test cases pass only when 'clean' is used with mvn.
This is due to improper table drops in the test cases AllDataTypesTestCaseAggregate and NO_DICTIONARY_COL_TestCase.
During development, running the tests with 'clean' takes more time to build, so it is better to ensure the tables are dropped properly.
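A minimal sketch of the intended cleanup pattern (not the actual CarbonData suite code): the table name alldatatypestableagg is taken from the log below, the suite name and the simplified CREATE TABLE DDL are illustrative only, and sql(...) is stubbed here in place of the shared test CarbonContext used by the real suites.

import org.scalatest.{BeforeAndAfterAll, FunSuite}

class AllDataTypesTestCaseAggregateSketch extends FunSuite with BeforeAndAfterAll {

  // Stubbed helper: the real suites route this through the shared CarbonContext.
  def sql(statement: String): Unit = println(s"executing: $statement")

  override def beforeAll(): Unit = {
    // Drop defensively before creating, so a table left over from a previous
    // build (run without 'mvn clean') does not abort the suite with
    // "Table [...] already exists under database [default]".
    sql("DROP TABLE IF EXISTS alldatatypestableagg")
    sql("CREATE TABLE alldatatypestableagg (empno int, empname string) " +
      "STORED BY 'carbondata'") // simplified DDL for illustration
  }

  override def afterAll(): Unit = {
    // Drop again after the suite so the next build starts from a clean state.
    sql("DROP TABLE IF EXISTS alldatatypestableagg")
  }

  test("placeholder query against alldatatypestableagg") {
    sql("SELECT empno FROM alldatatypestableagg")
  }
}

With the drop in both beforeAll and afterAll, a second mvn install without 'clean' starts from a clean state even if a previous run aborted before its cleanup ran.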
> Two test cases are failing on the second maven build without 'clean'
> -----------------------------------------------------------------------
>
> Key: CARBONDATA-272
> URL: https://issues.apache.org/jira/browse/CARBONDATA-272
> Project: CarbonData
> Issue Type: Bug
> Components: spark-integration
> Reporter: Vinod KC
> Priority: Trivial
> Labels: test
>
> Two test cases are failing during a second build without 'mvn clean'.
> e.g.:
> 1) run : mvn -Pspark-1.6 -Dspark.version=1.6.2 install
> 2) After successful build, again run mvn -Pspark-1.6 -Dspark.version=1.6.2 install
> *** 2 SUITES ABORTED ***
> [INFO] ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache CarbonData :: Parent ........................ SUCCESS [ 11.412 s]
> [INFO] Apache CarbonData :: Common ........................ SUCCESS [ 5.585 s]
> [INFO] Apache CarbonData :: Format ........................ SUCCESS [ 7.079 s]
> [INFO] Apache CarbonData :: Core .......................... SUCCESS [ 15.874 s]
> [INFO] Apache CarbonData :: Processing .................... SUCCESS [ 12.417 s]
> [INFO] Apache CarbonData :: Hadoop ........................ SUCCESS [ 17.330 s]
> [INFO] Apache CarbonData :: Spark ......................... FAILURE [07:47 min]
> [INFO] Apache CarbonData :: Assembly ...................... SKIPPED
> [INFO] Apache CarbonData :: Examples ...................... SKIPPED
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD FAILURE
> [INFO] ------------------------------------------------------------------------
> The reason for the failure is that the test cases AllDataTypesTestCaseAggregate and NO_DICTIONARY_COL_TestCase do not properly drop the tables they create.
> Refer to the error log below:
> - skip auto identify high cardinality column for column group
> AllDataTypesTestCaseAggregate:
> ERROR 24-09 08:31:29,368 - Table alldatatypescubeAGG not found: default.alldatatypescubeAGG table not found
> AUDIT 24-09 08:31:29,383 - [vinod][vinod][Thread-1]Creating Table with Database name [default] and Table name [alldatatypestableagg]
> AUDIT 24-09 08:31:29,385 - [vinod][vinod][Thread-1]Table creation with Database name [default] and Table name [alldatatypestableagg] failed. Table [alldatatypestableagg] already exists under database [default]
> ERROR 24-09 08:31:29,401 - Table Desc1 not found: default.Desc1 table not found
> ERROR 24-09 08:31:29,414 - Table Desc2 not found: default.Desc2 table not found
> AUDIT 24-09 08:31:29,422 - [vinod][vinod][Thread-1]Creating Table with Database name [default] and Table name [desc1]
> Exception encountered when invoking run on a nested suite - Table [alldatatypestableagg] already exists under database [default] *** ABORTED ***
> java.lang.RuntimeException: Table [alldatatypestableagg] already exists under database [default]
>   at scala.sys.package$.error(package.scala:27)
>   at org.apache.spark.sql.execution.command.CreateTable.run(carbonTableSchema.scala:853)
>   at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
>   at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
>   at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
>   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
>   at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
>   at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
>   ...
> NO_DICTIONARY_COL_TestCase:
> ERROR 24-09 08:31:29,954 - Table filtertestTables not found: default.filtertestTables table not found
> AUDIT 24-09 08:31:30,041 - [vinod][vinod][Thread-1]Deleting table [no_dictionary_carbon_6] under database [default]
> AUDIT 24-09 08:31:30,115 - [vinod][vinod][Thread-1]Deleted table [no_dictionary_carbon_6] under database [default]
> AUDIT 24-09 08:31:30,122 - [vinod][vinod][Thread-1]Deleting table [no_dictionary_carbon_7] under database [default]
> AUDIT 24-09 08:31:30,191 - [vinod][vinod][Thread-1]Deleted table [no_dictionary_carbon_7] under database [default]
> AUDIT 24-09 08:31:30,454 - [vinod][vinod][Thread-1]Creating Table with Database name [default] and Table name [no_dictionary_carbon_6]
> AUDIT 24-09 08:31:30,480 - [vinod][vinod][Thread-1]Table created with Database name [default] and Table name [no_dictionary_carbon_6]
> AUDIT 24-09 08:31:30,583 - [vinod][vinod][Thread-1]Data load request has been received for table default.no_dictionary_carbon_6
> AUDIT 24-09 08:31:30,665 - [vinod][vinod][Thread-1]Data load is successful for default.no_dictionary_carbon_6
> AUDIT 24-09 08:31:30,684 - [vinod][vinod][Thread-1]Creating Table with Database name [default] and Table name [no_dictionary_carbon_7]
> AUDIT 24-09 08:31:30,727 - [vinod][vinod][Thread-1]Table created with Database name [default] and Table name [no_dictionary_carbon_7]
> AUDIT 24-09 08:31:30,822 - [vinod][vinod][Thread-1]Data load request has been received for table default.no_dictionary_carbon_7
> AUDIT 24-09 08:31:31,077 - [vinod][vinod][Thread-1]Data load is successful for default.no_dictionary_carbon_7
> AUDIT 24-09 08:31:31,090 - [vinod][vinod][Thread-1]Creating Table with Database name [default] and Table name [filtertesttable]
> AUDIT 24-09 08:31:31,092 - [vinod][vinod][Thread-1]Table creation with Database name [default] and Table name [filtertesttable] failed. Table [filtertesttable] already exists under database [default]
> Exception encountered when invoking run on a nested suite - Table [filtertesttable] already exists under database [default] *** ABORTED ***
> java.lang.RuntimeException: Table [filtertesttable] already exists under database [default]
>   at scala.sys.package$.error(package.scala:27)
>   at org.apache.spark.sql.execution.command.CreateTable.run(carbonTableSchema.scala:853)
>   at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
>   at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
>   at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
>   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
>   at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
>   at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
>   ...