[jira] [Commented] (CARBONDATA-272) Two test cases are failing on second maven build without 'clean'

Akash R Nilugal (Jira)

    [ https://issues.apache.org/jira/browse/CARBONDATA-272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15540681#comment-15540681 ]

ASF GitHub Bot commented on CARBONDATA-272:
-------------------------------------------

Github user asfgit closed the pull request at:

    https://github.com/apache/incubator-carbondata/pull/197


> Two test cases are failing on second maven build without 'clean'
> -----------------------------------------------------------------------
>
>                 Key: CARBONDATA-272
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-272
>             Project: CarbonData
>          Issue Type: Bug
>          Components: spark-integration
>            Reporter: Vinod KC
>            Priority: Trivial
>              Labels: test
>
> Two test cases fail during a second build run without 'mvn clean'.
> For example:
> 1) Run: mvn -Pspark-1.6 -Dspark.version=1.6.2 install
> 2) After the build succeeds, run mvn -Pspark-1.6 -Dspark.version=1.6.2 install again
> *** 2 SUITES ABORTED ***
> [INFO] ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache CarbonData :: Parent ........................ SUCCESS [ 11.412 s]
> [INFO] Apache CarbonData :: Common ........................ SUCCESS [  5.585 s]
> [INFO] Apache CarbonData :: Format ........................ SUCCESS [  7.079 s]
> [INFO] Apache CarbonData :: Core .......................... SUCCESS [ 15.874 s]
> [INFO] Apache CarbonData :: Processing .................... SUCCESS [ 12.417 s]
> [INFO] Apache CarbonData :: Hadoop ........................ SUCCESS [ 17.330 s]
> [INFO] Apache CarbonData :: Spark ......................... FAILURE [07:47 min]
> [INFO] Apache CarbonData :: Assembly ...................... SKIPPED
> [INFO] Apache CarbonData :: Examples ...................... SKIPPED
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD FAILURE
> [INFO] ------------------------------------------------------------------------
> The failures occur because the test suites AllDataTypesTestCaseAggregate and NO_DICTIONARY_COL_TestCase do not drop the tables they create, so the CREATE TABLE statements fail on the next run. Refer to the error log below; a sketch of the cleanup pattern follows the log.
> - skip auto identify high cardinality column for column group
> AllDataTypesTestCaseAggregate:
> ERROR 24-09 08:31:29,368 - Table alldatatypescubeAGG not found: default.alldatatypescubeAGG table not found
> AUDIT 24-09 08:31:29,383 - [vinod][vinod][Thread-1]Creating Table with Database name [default] and Table name [alldatatypestableagg]
> AUDIT 24-09 08:31:29,385 - [vinod][vinod][Thread-1]Table creation with Database name [default] and Table name [alldatatypestableagg] failed. Table [alldatatypestableagg] already exists under database [default]
> ERROR 24-09 08:31:29,401 - Table Desc1 not found: default.Desc1 table not found
> ERROR 24-09 08:31:29,414 - Table Desc2 not found: default.Desc2 table not found
> AUDIT 24-09 08:31:29,422 - [vinod][vinod][Thread-1]Creating Table with Database name [default] and Table name [desc1]
> Exception encountered when invoking run on a nested suite - Table [alldatatypestableagg] already exists under database [default] *** ABORTED ***
>   java.lang.RuntimeException: Table [alldatatypestableagg] already exists under database [default]
>   at scala.sys.package$.error(package.scala:27)
>   at org.apache.spark.sql.execution.command.CreateTable.run(carbonTableSchema.scala:853)
>   at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
>   at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
>   at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
>   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
>   at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
>   at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
>   ...
> NO_DICTIONARY_COL_TestCase:
> ERROR 24-09 08:31:29,954 - Table filtertestTables not found: default.filtertestTables table not found
> AUDIT 24-09 08:31:30,041 - [vinod][vinod][Thread-1]Deleting table [no_dictionary_carbon_6] under database [default]
> AUDIT 24-09 08:31:30,115 - [vinod][vinod][Thread-1]Deleted table [no_dictionary_carbon_6] under database [default]
> AUDIT 24-09 08:31:30,122 - [vinod][vinod][Thread-1]Deleting table [no_dictionary_carbon_7] under database [default]
> AUDIT 24-09 08:31:30,191 - [vinod][vinod][Thread-1]Deleted table [no_dictionary_carbon_7] under database [default]
> AUDIT 24-09 08:31:30,454 - [vinod][vinod][Thread-1]Creating Table with Database name [default] and Table name [no_dictionary_carbon_6]
> AUDIT 24-09 08:31:30,480 - [vinod][vinod][Thread-1]Table created with Database name [default] and Table name [no_dictionary_carbon_6]
> AUDIT 24-09 08:31:30,583 - [vinod][vinod][Thread-1]Data load request has been received for table default.no_dictionary_carbon_6
> AUDIT 24-09 08:31:30,665 - [vinod][vinod][Thread-1]Data load is successful for default.no_dictionary_carbon_6
> AUDIT 24-09 08:31:30,684 - [vinod][vinod][Thread-1]Creating Table with Database name [default] and Table name [no_dictionary_carbon_7]
> AUDIT 24-09 08:31:30,727 - [vinod][vinod][Thread-1]Table created with Database name [default] and Table name [no_dictionary_carbon_7]
> AUDIT 24-09 08:31:30,822 - [vinod][vinod][Thread-1]Data load request has been received for table default.no_dictionary_carbon_7
> AUDIT 24-09 08:31:31,077 - [vinod][vinod][Thread-1]Data load is successful for default.no_dictionary_carbon_7
> AUDIT 24-09 08:31:31,090 - [vinod][vinod][Thread-1]Creating Table with Database name [default] and Table name [filtertesttable]
> AUDIT 24-09 08:31:31,092 - [vinod][vinod][Thread-1]Table creation with Database name [default] and Table name [filtertesttable] failed. Table [filtertesttable] already exists under database [default]
> Exception encountered when invoking run on a nested suite - Table [filtertesttable] already exists under database [default] *** ABORTED ***
>   java.lang.RuntimeException: Table [filtertesttable] already exists under database [default]
>   at scala.sys.package$.error(package.scala:27)
>   at org.apache.spark.sql.execution.command.CreateTable.run(carbonTableSchema.scala:853)
>   at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
>   at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
>   at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
>   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
>   at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
>   at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
>   ...
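> A minimal sketch of the cleanup pattern that would avoid this, using AllDataTypesTestCaseAggregate as an example. This is hypothetical code, not the actual change in the pull request above; it assumes the suite extends ScalaTest's BeforeAndAfterAll, that a sql helper is available from the project's QueryTest test base class, and that DROP TABLE IF EXISTS is accepted by the parser here:
>
>     import org.scalatest.BeforeAndAfterAll
>
>     // Illustrative sketch only: the table name is taken from the log above;
>     // the real suite creates and loads several more tables.
>     class AllDataTypesTestCaseAggregate extends QueryTest with BeforeAndAfterAll {
>
>       override def beforeAll(): Unit = {
>         // Drop any table left over from a previous build so the CREATE TABLE
>         // below cannot fail with "Table [...] already exists under database [default]".
>         sql("DROP TABLE IF EXISTS alldatatypestableagg")
>         // ... create the table and load the test data as the suite already does ...
>       }
>
>       override def afterAll(): Unit = {
>         // Clean up so a later build run without 'mvn clean' starts from an empty metastore.
>         sql("DROP TABLE IF EXISTS alldatatypestableagg")
>       }
>     }
>
> The same pattern would apply to the filtertesttable and no_dictionary_carbon_* tables created by NO_DICTIONARY_COL_TestCase.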



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)