Chetan Bhat created CARBONDATA-4049:
---------------------------------------

             Summary: Sometimes refresh table fails with "table not found in database" error
                 Key: CARBONDATA-4049
                 URL: https://issues.apache.org/jira/browse/CARBONDATA-4049
             Project: CarbonData
          Issue Type: Bug
          Components: data-query
    Affects Versions: 2.1.0
         Environment: Spark 2.4.5
            Reporter: Chetan Bhat

In Carbon 2.1 the user creates a database and copies an old-version store (for example, a 1.6.1 store) into the database's HDFS folder. From spark-sql or beeline the user switches to the database with the use db command, executes the refresh table command on one of the old-version store tables, and then performs subsequent operations on that table. Next, the refresh table command is executed on another old-version store table (a consolidated SQL sketch of these steps follows the log below).

Issue: Sometimes the refresh table command fails with a "table not found in database" error.

spark-sql> refresh table brinjal_deleteseg;
*Error in query: Table or view 'brinjal_deleteseg' not found in database '1_6_1';*

Log -

2020-11-12 18:55:46,922 | INFO | [main] | Created broadcast 171 from broadCastHadoopConf at CarbonRDD.scala:58 | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:46,924 | INFO | [main] | Pushed Filters: | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:46,939 | INFO | [main] | Distributed Index server is enabled for 1_6_1.brinjal_update | org.apache.carbondata.core.util.CarbonProperties.isDistributedPruningEnabled(CarbonProperties.java:1742)
2020-11-12 18:55:46,939 | INFO | [main] | Started block pruning ... | org.apache.carbondata.hadoop.api.CarbonInputFormat.getPrunedBlocklets(CarbonInputFormat.java:526)
2020-11-12 18:55:46,940 | INFO | [main] | Distributed Index server is enabled for 1_6_1.brinjal_update | org.apache.carbondata.core.util.CarbonProperties.isDistributedPruningEnabled(CarbonProperties.java:1742)
2020-11-12 18:55:46,945 | INFO | [main] | Successfully Created directory: hdfs://hacluster/tmp/indexservertmp/4b6353d4-65d7-4856-b3cd-b3bc11d15c55 | org.apache.carbondata.core.util.CarbonUtil.createTempFolderForIndexServer(CarbonUtil.java:3273)
2020-11-12 18:55:46,945 | INFO | [main] | Temp folder path for Query ID: 4b6353d4-65d7-4856-b3cd-b3bc11d15c55 is org.apache.carbondata.core.datastore.filesystem.HDFSCarbonFile@b8f2e1bf | org.apache.carbondata.indexserver.DistributedIndexJob.execute(IndexJobs.scala:57)
2020-11-12 18:55:46,946 | ERROR | [main] | Configured port for index server is not a valid number | org.apache.carbondata.core.util.CarbonProperties.getIndexServerPort(CarbonProperties.java:1779)
java.lang.NumberFormatException: null
    at java.lang.Integer.parseInt(Integer.java:542)
    at java.lang.Integer.parseInt(Integer.java:615)
    at org.apache.carbondata.core.util.CarbonProperties.getIndexServerPort(CarbonProperties.java:1777)
    at org.apache.carbondata.indexserver.IndexServer$.serverPort$lzycompute(IndexServer.scala:88)
    at org.apache.carbondata.indexserver.IndexServer$.serverPort(IndexServer.scala:88)
    at org.apache.carbondata.indexserver.IndexServer$.getClient(IndexServer.scala:312)
    at org.apache.carbondata.indexserver.IndexServer$.getClient(IndexServer.scala:301)
    at org.apache.carbondata.indexserver.DistributedIndexJob$$anonfun$1.apply(IndexJobs.scala:83)
    at org.apache.carbondata.indexserver.DistributedIndexJob$$anonfun$1.apply(IndexJobs.scala:59)
    at org.apache.carbondata.spark.util.CarbonScalaUtil$.logTime(CarbonScalaUtil.scala:769)
    at org.apache.carbondata.indexserver.DistributedIndexJob.execute(IndexJobs.scala:58)
    at org.apache.carbondata.core.index.IndexUtil.executeIndexJob(IndexUtil.java:304)
    at org.apache.carbondata.hadoop.api.CarbonInputFormat.getDistributedSplit(CarbonInputFormat.java:431)
    at org.apache.carbondata.hadoop.api.CarbonInputFormat.getPrunedBlocklets(CarbonInputFormat.java:532)
    at org.apache.carbondata.hadoop.api.CarbonInputFormat.getDataBlocksOfSegment(CarbonInputFormat.java:477)
    at org.apache.carbondata.hadoop.api.CarbonTableInputFormat.getSplits(CarbonTableInputFormat.java:356)
    at org.apache.carbondata.hadoop.api.CarbonTableInputFormat.getSplits(CarbonTableInputFormat.java:204)
    at org.apache.carbondata.spark.rdd.CarbonScanRDD.internalGetPartitions(CarbonScanRDD.scala:159)
    at org.apache.carbondata.spark.rdd.CarbonRDD.getPartitions(CarbonRDD.scala:68)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:273)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:269)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:269)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:273)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:269)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:269)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:273)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:269)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:269)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:990)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:385)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:989)
    at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:299)
    at org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:326)
    at org.apache.spark.sql.execution.QueryExecution.hiveResultString(QueryExecution.scala:128)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver$$anonfun$run$1.apply(SparkSQLDriver.scala:64)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver$$anonfun$run$1.apply(SparkSQLDriver.scala:64)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:63)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:371)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:274)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2020-11-12 18:55:46,949 | ERROR | [main] | Exception occurred while getting splits using index server. Initiating Fall back to embedded mode | org.apache.carbondata.hadoop.api.CarbonInputFormat.getDistributedSplit(CarbonInputFormat.java:438)
java.lang.NumberFormatException: null
    at java.lang.Integer.parseInt(Integer.java:542)
    at java.lang.Integer.parseInt(Integer.java:615)
    at org.apache.carbondata.core.util.CarbonProperties.getIndexServerPort(CarbonProperties.java:1777)
    at org.apache.carbondata.indexserver.IndexServer$.serverPort$lzycompute(IndexServer.scala:88)
    at org.apache.carbondata.indexserver.IndexServer$.serverPort(IndexServer.scala:88)
    at org.apache.carbondata.indexserver.IndexServer$.getClient(IndexServer.scala:312)
    at org.apache.carbondata.indexserver.IndexServer$.getClient(IndexServer.scala:301)
    at org.apache.carbondata.indexserver.DistributedIndexJob$$anonfun$1.apply(IndexJobs.scala:83)
    at org.apache.carbondata.indexserver.DistributedIndexJob$$anonfun$1.apply(IndexJobs.scala:59)
    at org.apache.carbondata.spark.util.CarbonScalaUtil$.logTime(CarbonScalaUtil.scala:769)
    at org.apache.carbondata.indexserver.DistributedIndexJob.execute(IndexJobs.scala:58)
    at org.apache.carbondata.core.index.IndexUtil.executeIndexJob(IndexUtil.java:304)
    at org.apache.carbondata.hadoop.api.CarbonInputFormat.getDistributedSplit(CarbonInputFormat.java:431)
    at org.apache.carbondata.hadoop.api.CarbonInputFormat.getPrunedBlocklets(CarbonInputFormat.java:532)
    at org.apache.carbondata.hadoop.api.CarbonInputFormat.getDataBlocksOfSegment(CarbonInputFormat.java:477)
    at org.apache.carbondata.hadoop.api.CarbonTableInputFormat.getSplits(CarbonTableInputFormat.java:356)
    at org.apache.carbondata.hadoop.api.CarbonTableInputFormat.getSplits(CarbonTableInputFormat.java:204)
    at org.apache.carbondata.spark.rdd.CarbonScanRDD.internalGetPartitions(CarbonScanRDD.scala:159)
    at org.apache.carbondata.spark.rdd.CarbonRDD.getPartitions(CarbonRDD.scala:68)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:273)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:269)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:269)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:273)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:269)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:269)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:273)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:269)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:269)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:990)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:385)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:989)
    at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:299)
    at org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:326)
    at org.apache.spark.sql.execution.QueryExecution.hiveResultString(QueryExecution.scala:128)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver$$anonfun$run$1.apply(SparkSQLDriver.scala:64)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver$$anonfun$run$1.apply(SparkSQLDriver.scala:64)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:63)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:371)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:274)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2020-11-12 18:55:46,951 | INFO | [main] | Block broadcast_172 stored as values in memory (estimated size 370.9 KB, free 909.1 MB) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:46,960 | INFO | [main] | Block broadcast_172_piece0 stored as bytes in memory (estimated size 29.4 KB, free 909.0 MB) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:46,961 | INFO | [dispatcher-event-loop-1] | Added broadcast_172_piece0 in memory on vm1:43460 (size: 29.4 KB, free: 912.0 MB) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:46,961 | INFO | [main] | Created broadcast 172 from broadCastHadoopConf at CarbonRDD.scala:58 | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:46,962 | INFO | [main] | Distributed Index server is enabled for 1_6_1.brinjal_update | org.apache.carbondata.core.util.CarbonProperties.isDistributedPruningEnabled(CarbonProperties.java:1742)
2020-11-12 18:55:46,966 | INFO | [main] | Starting job: collect at IndexServer.scala:178 | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:46,967 | INFO | [dag-scheduler-event-loop] | Got job 87 (collect at IndexServer.scala:178) with 1 output partitions | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:46,967 | INFO | [dag-scheduler-event-loop] | Final stage: ResultStage 83 (collect at IndexServer.scala:178) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:46,967 | INFO | [dag-scheduler-event-loop] | Parents of final stage: List() | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:46,967 | INFO | [dag-scheduler-event-loop] | Missing parents: List() | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:46,967 | INFO | [dag-scheduler-event-loop] | Submitting ResultStage 83 (DistributedPruneRDD[249] at RDD at CarbonRDD.scala:38), which has no missing parents | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:46,968 | INFO | [dag-scheduler-event-loop] | Block broadcast_173 stored as values in memory (estimated size 17.7 KB, free 909.0 MB) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:46,969 | INFO | [dag-scheduler-event-loop] | Block broadcast_173_piece0 stored as bytes in memory (estimated size 8.7 KB, free 909.0 MB) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:46,970 | INFO | [dispatcher-event-loop-2] | Added broadcast_173_piece0 in memory on vm1:43460 (size: 8.7 KB, free: 912.0 MB) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:46,970 | INFO | [dag-scheduler-event-loop] | Created broadcast 173 from broadcast at DAGScheduler.scala:1163 | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:46,970 | INFO | [dag-scheduler-event-loop] | Submitting 1 missing tasks from ResultStage 83 (DistributedPruneRDD[249] at RDD at CarbonRDD.scala:38) (first 15 tasks are for partitions Vector(0)) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:46,970 | INFO | [dag-scheduler-event-loop] | Adding task set 83.0 with 1 tasks | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:46,971 | INFO | [dispatcher-event-loop-5] | Starting task 0.0 in stage 83.0 (TID 464, localhost, executor driver, partition 0, PROCESS_LOCAL, 9449 bytes) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:46,971 | INFO | [Executor task launch worker for task 464] | Running task 0.0 in stage 83.0 (TID 464) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:46,974 | INFO | [Executor task launch worker for task 464] | Value for carbon.max.executor.threads.for.block.pruning is 4 | org.apache.carbondata.core.util.CarbonProperties.getNumOfThreadsForExecutorPruning(CarbonProperties.java:1809)
2020-11-12 18:55:46,995 | INFO | [IndexPruningPool_1605178546974] | Constructing new SegmentProperties for table: 1_6_1_brinjal_update. Current size of segment properties holder list is: 2 | org.apache.carbondata.core.datastore.block.SegmentPropertiesAndSchemaHolder.addSegmentProperties(SegmentPropertiesAndSchemaHolder.java:115)
2020-11-12 18:55:46,997 | INFO | [IndexPruningPool_1605178546974] | Removed entry from InMemory lru cache :: hdfs://hacluster/user/sparkhive/warehouse/1_6_1.db/brinjal_update/Fact/Part0/Segment_5/5_1605178541880.carbonindexmerge | org.apache.carbondata.core.cache.CarbonLRUCache.removeKey(CarbonLRUCache.java:189)
2020-11-12 18:55:46,997 | INFO | [Executor task launch worker for task 464] | Time taken to collect 1 blocklets : 23 | org.apache.carbondata.indexserver.DistributedPruneRDD.internalCompute(DistributedPruneRDD.scala:118)
2020-11-12 18:55:46,998 | INFO | [Executor task launch worker for task 464] | Finished task 0.0 in stage 83.0 (TID 464). 2475 bytes result sent to driver | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:46,999 | INFO | [task-result-getter-3] | Finished task 0.0 in stage 83.0 (TID 464) in 27 ms on localhost (executor driver) (1/1) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:46,999 | INFO | [task-result-getter-3] | Removed TaskSet 83.0, whose tasks have all completed, from pool | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:46,999 | INFO | [dag-scheduler-event-loop] | ResultStage 83 (collect at IndexServer.scala:178) finished in 0.031 s | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:46,999 | INFO | [main] | Job 87 finished: collect at IndexServer.scala:178, took 0.032684 s | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,001 | INFO | [main] | Block broadcast_174 stored as values in memory (estimated size 370.9 KB, free 908.6 MB) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,010 | INFO | [main] | Block broadcast_174_piece0 stored as bytes in memory (estimated size 29.4 KB, free 908.6 MB) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,011 | INFO | [dispatcher-event-loop-7] | Added broadcast_174_piece0 in memory on vm1:43460 (size: 29.4 KB, free: 912.0 MB) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,011 | INFO | [main] | Created broadcast 174 from broadCastHadoopConf at CarbonRDD.scala:58 | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,016 | INFO | [main] | Starting job: collect at IndexServer.scala:205 | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,017 | INFO | [main] | Job 88 finished: collect at IndexServer.scala:205, took 0.000032 s | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,017 | INFO | [main] | Finished block pruning ... | org.apache.carbondata.hadoop.api.CarbonInputFormat.getPrunedBlocklets(CarbonInputFormat.java:622)
2020-11-12 18:55:47,018 | INFO | [main] | Planning scan with bin packing, max size: 4194304 bytes, open cost is considered as scanning 4194304 bytes. | org.apache.carbondata.spark.rdd.CarbonScanRDD.distributeColumnarSplits(CarbonScanRDD.scala:366)
2020-11-12 18:55:47,018 | INFO | [main] | Identified no.of.blocks: 5, no.of.tasks: 5, no.of.nodes: 0, parallelism: 8 | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,022 | INFO | [main] | Starting job: processCmd at CliDriver.java:376 | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,023 | INFO | [dag-scheduler-event-loop] | Got job 89 (processCmd at CliDriver.java:376) with 5 output partitions | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,023 | INFO | [dag-scheduler-event-loop] | Final stage: ResultStage 84 (processCmd at CliDriver.java:376) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,023 | INFO | [dag-scheduler-event-loop] | Parents of final stage: List() | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,023 | INFO | [dag-scheduler-event-loop] | Missing parents: List() | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,023 | INFO | [dag-scheduler-event-loop] | Submitting ResultStage 84 (MapPartitionsRDD[248] at processCmd at CliDriver.java:376), which has no missing parents | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,025 | INFO | [dag-scheduler-event-loop] | Block broadcast_175 stored as values in memory (estimated size 24.3 KB, free 908.6 MB) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,026 | INFO | [dag-scheduler-event-loop] | Block broadcast_175_piece0 stored as bytes in memory (estimated size 11.6 KB, free 908.6 MB) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,026 | INFO | [dispatcher-event-loop-6] | Added broadcast_175_piece0 in memory on vm1:43460 (size: 11.6 KB, free: 912.0 MB) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,026 | INFO | [dag-scheduler-event-loop] | Created broadcast 175 from broadcast at DAGScheduler.scala:1163 | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,027 | INFO | [dag-scheduler-event-loop] | Submitting 5 missing tasks from ResultStage 84 (MapPartitionsRDD[248] at processCmd at CliDriver.java:376) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4)) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,027 | INFO | [dag-scheduler-event-loop] | Adding task set 84.0 with 5 tasks | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,028 | INFO | [dispatcher-event-loop-1] | Starting task 0.0 in stage 84.0 (TID 465, localhost, executor driver, partition 0, ANY, 9185 bytes) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,028 | INFO | [dispatcher-event-loop-1] | Starting task 1.0 in stage 84.0 (TID 466, localhost, executor driver, partition 1, ANY, 9185 bytes) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,028 | INFO | [dispatcher-event-loop-1] | Starting task 2.0 in stage 84.0 (TID 467, localhost, executor driver, partition 2, ANY, 9185 bytes) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,029 | INFO | [dispatcher-event-loop-1] | Starting task 3.0 in stage 84.0 (TID 468, localhost, executor driver, partition 3, ANY, 9185 bytes) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,029 | INFO | [dispatcher-event-loop-1] | Starting task 4.0 in stage 84.0 (TID 469, localhost, executor driver, partition 4, ANY, 9185 bytes) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,029 | INFO | [Executor task launch worker for task 466] | Running task 1.0 in stage 84.0 (TID 466) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,029 | INFO | [Executor task launch worker for task 467] | Running task 2.0 in stage 84.0 (TID 467) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,029 | INFO | [Executor task launch worker for task 469] | Running task 4.0 in stage 84.0 (TID 469) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,029 | INFO | [Executor task launch worker for task 468] | Running task 3.0 in stage 84.0 (TID 468) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,029 | INFO | [Executor task launch worker for task 465] | Running task 0.0 in stage 84.0 (TID 465) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,033 | INFO | [Executor task launch worker for task 469] | Projection Columns: [imei, amsize, channelsid, activecountry, activecity, productiondate, deliverydate, gamepointid, deviceinformationid, deliverycharge] | org.apache.carbondata.core.scan.model.QueryModelBuilder.projectColumns(QueryModelBuilder.java:94)
2020-11-12 18:55:47,033 | INFO | [Executor task launch worker for task 467] | Projection Columns: [imei, amsize, channelsid, activecountry, activecity, productiondate, deliverydate, gamepointid, deviceinformationid, deliverycharge] | org.apache.carbondata.core.scan.model.QueryModelBuilder.projectColumns(QueryModelBuilder.java:94)
2020-11-12 18:55:47,033 | INFO | [Executor task launch worker for task 466] | Projection Columns: [imei, amsize, channelsid, activecountry, activecity, productiondate, deliverydate, gamepointid, deviceinformationid, deliverycharge] | org.apache.carbondata.core.scan.model.QueryModelBuilder.projectColumns(QueryModelBuilder.java:94)
2020-11-12 18:55:47,033 | INFO | [Executor task launch worker for task 468] | Projection Columns: [imei, amsize, channelsid, activecountry, activecity, productiondate, deliverydate, gamepointid, deviceinformationid, deliverycharge] | org.apache.carbondata.core.scan.model.QueryModelBuilder.projectColumns(QueryModelBuilder.java:94)
2020-11-12 18:55:47,033 | INFO | [Executor task launch worker for task 465] | Projection Columns: [imei, amsize, channelsid, activecountry, activecity, productiondate, deliverydate, gamepointid, deviceinformationid, deliverycharge] | org.apache.carbondata.core.scan.model.QueryModelBuilder.projectColumns(QueryModelBuilder.java:94)
2020-11-12 18:55:47,034 | INFO | [Executor task launch worker for task 466] | Query will be executed on table: brinjal_update | org.apache.carbondata.core.scan.executor.impl.AbstractQueryExecutor.initQuery(AbstractQueryExecutor.java:122)
2020-11-12 18:55:47,034 | INFO | [Executor task launch worker for task 467] | Query will be executed on table: brinjal_update | org.apache.carbondata.core.scan.executor.impl.AbstractQueryExecutor.initQuery(AbstractQueryExecutor.java:122)
2020-11-12 18:55:47,034 | INFO | [Executor task launch worker for task 469] | Query will be executed on table: brinjal_update | org.apache.carbondata.core.scan.executor.impl.AbstractQueryExecutor.initQuery(AbstractQueryExecutor.java:122)
2020-11-12 18:55:47,034 | INFO | [Executor task launch worker for task 466] | Query prefetch is: true | org.apache.carbondata.core.scan.executor.impl.AbstractQueryExecutor.getBlockExecutionInfoForBlock(AbstractQueryExecutor.java:479)
2020-11-12 18:55:47,034 | INFO | [Executor task launch worker for task 465] | Query will be executed on table: brinjal_update | org.apache.carbondata.core.scan.executor.impl.AbstractQueryExecutor.initQuery(AbstractQueryExecutor.java:122)
2020-11-12 18:55:47,034 | INFO | [Executor task launch worker for task 469] | Query prefetch is: true | org.apache.carbondata.core.scan.executor.impl.AbstractQueryExecutor.getBlockExecutionInfoForBlock(AbstractQueryExecutor.java:479)
2020-11-12 18:55:47,034 | INFO | [Executor task launch worker for task 465] | Query prefetch is: true | org.apache.carbondata.core.scan.executor.impl.AbstractQueryExecutor.getBlockExecutionInfoForBlock(AbstractQueryExecutor.java:479)
2020-11-12 18:55:47,034 | INFO | [Executor task launch worker for task 468] | Query will be executed on table: brinjal_update | org.apache.carbondata.core.scan.executor.impl.AbstractQueryExecutor.initQuery(AbstractQueryExecutor.java:122)
2020-11-12 18:55:47,034 | INFO | [Executor task launch worker for task 467] | Query prefetch is: true | org.apache.carbondata.core.scan.executor.impl.AbstractQueryExecutor.getBlockExecutionInfoForBlock(AbstractQueryExecutor.java:479)
2020-11-12 18:55:47,035 | INFO | [Executor task launch worker for task 468] | Query prefetch is: true | org.apache.carbondata.core.scan.executor.impl.AbstractQueryExecutor.getBlockExecutionInfoForBlock(AbstractQueryExecutor.java:479)
2020-11-12 18:55:47,036 | INFO | [Executor task launch worker for task 466] | Vector based dictionary collector is used to scan and collect the data | org.apache.carbondata.core.scan.collector.ResultCollectorFactory.getScannedResultCollector(ResultCollectorFactory.java:78)
2020-11-12 18:55:47,036 | INFO | [Executor task launch worker for task 466] | Direct page-wise vector fill collector is used to scan and collect the data | org.apache.carbondata.core.scan.collector.impl.DictionaryBasedVectorResultCollector.<init>(DictionaryBasedVectorResultCollector.java:73)
2020-11-12 18:55:47,037 | INFO | [Executor task launch worker for task 469] | Vector based dictionary collector is used to scan and collect the data | org.apache.carbondata.core.scan.collector.ResultCollectorFactory.getScannedResultCollector(ResultCollectorFactory.java:78)
2020-11-12 18:55:47,037 | INFO | [Executor task launch worker for task 469] | Direct page-wise vector fill collector is used to scan and collect the data | org.apache.carbondata.core.scan.collector.impl.DictionaryBasedVectorResultCollector.<init>(DictionaryBasedVectorResultCollector.java:73)
2020-11-12 18:55:47,037 | INFO | [Executor task launch worker for task 465] | Vector based dictionary collector is used to scan and collect the data | org.apache.carbondata.core.scan.collector.ResultCollectorFactory.getScannedResultCollector(ResultCollectorFactory.java:78)
2020-11-12 18:55:47,037 | INFO | [Executor task launch worker for task 465] | Direct page-wise vector fill collector is used to scan and collect the data | org.apache.carbondata.core.scan.collector.impl.DictionaryBasedVectorResultCollector.<init>(DictionaryBasedVectorResultCollector.java:73)
2020-11-12 18:55:47,037 | INFO | [Executor task launch worker for task 468] | Vector based dictionary collector is used to scan and collect the data | org.apache.carbondata.core.scan.collector.ResultCollectorFactory.getScannedResultCollector(ResultCollectorFactory.java:78)
2020-11-12 18:55:47,037 | INFO | [Executor task launch worker for task 468] | Direct page-wise vector fill collector is used to scan and collect the data | org.apache.carbondata.core.scan.collector.impl.DictionaryBasedVectorResultCollector.<init>(DictionaryBasedVectorResultCollector.java:73)
2020-11-12 18:55:47,037 | INFO | [Executor task launch worker for task 467] | Vector based dictionary collector is used to scan and collect the data | org.apache.carbondata.core.scan.collector.ResultCollectorFactory.getScannedResultCollector(ResultCollectorFactory.java:78)
2020-11-12 18:55:47,038 | INFO | [Executor task launch worker for task 467] | Direct page-wise vector fill collector is used to scan and collect the data | org.apache.carbondata.core.scan.collector.impl.DictionaryBasedVectorResultCollector.<init>(DictionaryBasedVectorResultCollector.java:73)
2020-11-12 18:55:47,050 | INFO | [Executor task launch worker for task 465] | Total off-heap working memory used after task 645a72eb-8d06-489b-940d-f7ca1e901bc7 is 128073. Current running tasks are 7a7a3012-c819-4e4c-9f08-54dd890850e4, f251027c-707b-4df1-b35f-00eb74c8d77f, 5233fa5c-5173-4a9e-a8bf-8c024dac1509, a05de0f7-398e-45a5-a91d-a4e1045e9a98 | org.apache.carbondata.core.memory.UnsafeMemoryManager.freeMemoryAll(UnsafeMemoryManager.java:179)
2020-11-12 18:55:47,050 | INFO | [Executor task launch worker for task 467] | Total off-heap working memory used after task a05de0f7-398e-45a5-a91d-a4e1045e9a98 is 128073. Current running tasks are 7a7a3012-c819-4e4c-9f08-54dd890850e4, f251027c-707b-4df1-b35f-00eb74c8d77f, 5233fa5c-5173-4a9e-a8bf-8c024dac1509 | org.apache.carbondata.core.memory.UnsafeMemoryManager.freeMemoryAll(UnsafeMemoryManager.java:179)
2020-11-12 18:55:47,050 | INFO | [Executor task launch worker for task 469] | Total off-heap working memory used after task f251027c-707b-4df1-b35f-00eb74c8d77f is 128073. Current running tasks are 7a7a3012-c819-4e4c-9f08-54dd890850e4, 5233fa5c-5173-4a9e-a8bf-8c024dac1509 | org.apache.carbondata.core.memory.UnsafeMemoryManager.freeMemoryAll(UnsafeMemoryManager.java:179)
2020-11-12 18:55:47,050 | INFO | [Executor task launch worker for task 467] | Total off-heap working memory used after task 11861831-3ae3-4dd7-98ce-352f4416e4d5 is 128073. Current running tasks are 7a7a3012-c819-4e4c-9f08-54dd890850e4, 5233fa5c-5173-4a9e-a8bf-8c024dac1509 | org.apache.carbondata.core.memory.UnsafeMemoryManager.freeMemoryAll(UnsafeMemoryManager.java:179)
2020-11-12 18:55:47,050 | INFO | [Executor task launch worker for task 469] | Total off-heap working memory used after task c4193df5-ceb8-4ff9-8844-0d9c42f989d8 is 128045. Current running tasks are 7a7a3012-c819-4e4c-9f08-54dd890850e4, 5233fa5c-5173-4a9e-a8bf-8c024dac1509 | org.apache.carbondata.core.memory.UnsafeMemoryManager.freeMemoryAll(UnsafeMemoryManager.java:179)
2020-11-12 18:55:47,051 | INFO | [Executor task launch worker for task 467] | Total off-heap working memory used after task 88e4dbd6-c19e-43e6-bc9b-42dfefc8378c is 128045. Current running tasks are 7a7a3012-c819-4e4c-9f08-54dd890850e4, 5233fa5c-5173-4a9e-a8bf-8c024dac1509 | org.apache.carbondata.core.memory.UnsafeMemoryManager.freeMemoryAll(UnsafeMemoryManager.java:179)
2020-11-12 18:55:47,051 | INFO | [Executor task launch worker for task 469] | Total off-heap working memory used after task f75746ea-783d-4098-91d5-1a93fda81f04 is 128045. Current running tasks are 7a7a3012-c819-4e4c-9f08-54dd890850e4, 5233fa5c-5173-4a9e-a8bf-8c024dac1509 | org.apache.carbondata.core.memory.UnsafeMemoryManager.freeMemoryAll(UnsafeMemoryManager.java:179)
2020-11-12 18:55:47,051 | INFO | [Executor task launch worker for task 465] | Total off-heap working memory used after task ae4ac11d-7a43-426d-8578-ac77b6ddf74b is 128090. Current running tasks are 7a7a3012-c819-4e4c-9f08-54dd890850e4, 5233fa5c-5173-4a9e-a8bf-8c024dac1509 | org.apache.carbondata.core.memory.UnsafeMemoryManager.freeMemoryAll(UnsafeMemoryManager.java:179)
2020-11-12 18:55:47,051 | INFO | [Executor task launch worker for task 466] | Total off-heap working memory used after task 7a7a3012-c819-4e4c-9f08-54dd890850e4 is 128090. Current running tasks are 5233fa5c-5173-4a9e-a8bf-8c024dac1509 | org.apache.carbondata.core.memory.UnsafeMemoryManager.freeMemoryAll(UnsafeMemoryManager.java:179)
2020-11-12 18:55:47,051 | INFO | [Executor task launch worker for task 465] | Total off-heap working memory used after task fb7aa505-1163-4fe2-936e-0de1e4f78271 is 128090. Current running tasks are 5233fa5c-5173-4a9e-a8bf-8c024dac1509 | org.apache.carbondata.core.memory.UnsafeMemoryManager.freeMemoryAll(UnsafeMemoryManager.java:179)
2020-11-12 18:55:47,051 | INFO | [Executor task launch worker for task 466] | Total off-heap working memory used after task 27cb623e-3b1c-4f49-adaa-c11c19d766af is 128000. Current running tasks are 5233fa5c-5173-4a9e-a8bf-8c024dac1509 | org.apache.carbondata.core.memory.UnsafeMemoryManager.freeMemoryAll(UnsafeMemoryManager.java:179)
2020-11-12 18:55:47,051 | INFO | [Executor task launch worker for task 466] | Total off-heap working memory used after task b2d72c4c-7d5d-4030-9f79-54219d56474d is 0. Current running tasks are 5233fa5c-5173-4a9e-a8bf-8c024dac1509 | org.apache.carbondata.core.memory.UnsafeMemoryManager.freeMemoryAll(UnsafeMemoryManager.java:179)
2020-11-12 18:55:47,052 | INFO | [Executor task launch worker for task 465] | Finished task 0.0 in stage 84.0 (TID 465). 2766 bytes result sent to driver | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,051 | INFO | [Executor task launch worker for task 469] | Finished task 4.0 in stage 84.0 (TID 469). 2198 bytes result sent to driver | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,051 | INFO | [Executor task launch worker for task 467] | Finished task 2.0 in stage 84.0 (TID 467). 2444 bytes result sent to driver | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,052 | INFO | [Executor task launch worker for task 466] | Finished task 1.0 in stage 84.0 (TID 466). 2524 bytes result sent to driver | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,052 | INFO | [Executor task launch worker for task 468] | Total off-heap working memory used after task 5233fa5c-5173-4a9e-a8bf-8c024dac1509 is 0. Current running tasks are | org.apache.carbondata.core.memory.UnsafeMemoryManager.freeMemoryAll(UnsafeMemoryManager.java:179)
2020-11-12 18:55:47,052 | INFO | [Executor task launch worker for task 468] | Total off-heap working memory used after task 4ea15806-df6e-4377-9cea-84f07586fe02 is 0. Current running tasks are | org.apache.carbondata.core.memory.UnsafeMemoryManager.freeMemoryAll(UnsafeMemoryManager.java:179)
2020-11-12 18:55:47,052 | INFO | [task-result-getter-2] | Finished task 0.0 in stage 84.0 (TID 465) in 25 ms on localhost (executor driver) (1/5) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,052 | INFO | [Executor task launch worker for task 468] | Total off-heap working memory used after task 1086a9e4-a644-4340-8215-b8c3d936e32d is 0. Current running tasks are | org.apache.carbondata.core.memory.UnsafeMemoryManager.freeMemoryAll(UnsafeMemoryManager.java:179)
2020-11-12 18:55:47,053 | INFO | [task-result-getter-1] | Finished task 4.0 in stage 84.0 (TID 469) in 24 ms on localhost (executor driver) (2/5) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,053 | INFO | [Executor task launch worker for task 468] | Finished task 3.0 in stage 84.0 (TID 468). 2380 bytes result sent to driver | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,053 | INFO | [task-result-getter-0] | Finished task 2.0 in stage 84.0 (TID 467) in 25 ms on localhost (executor driver) (3/5) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,054 | INFO | [task-result-getter-3] | Finished task 1.0 in stage 84.0 (TID 466) in 26 ms on localhost (executor driver) (4/5) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,054 | INFO | [task-result-getter-2] | Finished task 3.0 in stage 84.0 (TID 468) in 26 ms on localhost (executor driver) (5/5) | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,054 | INFO | [task-result-getter-2] | Removed TaskSet 84.0, whose tasks have all completed, from pool | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,054 | INFO | [dag-scheduler-event-loop] | ResultStage 84 (processCmd at CliDriver.java:376) finished in 0.030 s | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,055 | INFO | [main] | Job 89 finished: processCmd at CliDriver.java:376, took 0.032288 s | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
2020-11-12 18:55:47,071 | INFO | [main] | Time taken: 0.344 seconds, Fetched 534 row(s) | org.apache.hadoop.hive.ql.session.SessionState$LogHelper.printInfo(SessionState.java:951)
2020-11-12 18:55:47,086 | AUDIT | [main] | {"time":"November 12, 2020 6:55:47 PM CST","username":"root","opName":"SET","opId":"3304241283944785","opStatus":"START"} | org.apache.carbondata.processing.util.Auditor.logOperationStart(Auditor.java:74)
2020-11-12 18:55:47,086 | INFO | [main] | The key carbon.enable.index.server with value false added in the session param | org.apache.carbondata.core.util.SessionParams.addProperty(SessionParams.java:102)
2020-11-12 18:55:47,087 | AUDIT | [main] | {"time":"November 12, 2020 6:55:47 PM CST","username":"root","opName":"SET","opId":"3304241283944785","opStatus":"SUCCESS","opTime":"1 ms","table":"NA","extraInfo":{}} | org.apache.carbondata.processing.util.Auditor.logOperationEnd(Auditor.java:97)
2020-11-12 18:55:47,094 | INFO | [main] | Time taken: 0.016 seconds, Fetched 1 row(s) | org.apache.hadoop.hive.ql.session.SessionState$LogHelper.printInfo(SessionState.java:951)
2020-11-12 18:58:04,451 | AUDIT | [main] | {"time":"November 12, 2020 6:58:04 PM CST","username":"root","opName":"REFRESH TABLE","opId":"3304378649144145","opStatus":"START"} | org.apache.carbondata.processing.util.Auditor.logOperationStart(Auditor.java:74)
2020-11-12 18:58:04,452 | INFO | [main] | 0: get_database: 1_6_1 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
2020-11-12 18:58:04,452 | INFO | [main] | ugi=root ip=unknown-ip-addr cmd=get_database: 1_6_1 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
2020-11-12 18:58:04,455 | INFO | [main] | 0: get_table : db=1_6_1 tbl=brinjal_deleteseg | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
2020-11-12 18:58:04,455 | INFO | [main] | ugi=root ip=unknown-ip-addr cmd=get_table : db=1_6_1 tbl=brinjal_deleteseg | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
2020-11-12 18:58:04,458 | INFO | [main] | 0: get_table : db=1_6_1 tbl=brinjal_deleteseg | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
2020-11-12 18:58:04,458 | INFO | [main] | ugi=root ip=unknown-ip-addr cmd=get_table : db=1_6_1 tbl=brinjal_deleteseg | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
2020-11-12 18:58:04,460 | INFO | [main] | 0: get_database: 1_6_1 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
2020-11-12 18:58:04,460 | INFO | [main] | ugi=root ip=unknown-ip-addr cmd=get_database: 1_6_1 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
2020-11-12 18:58:04,462 | INFO | [main] | 0: get_database: 1_6_1 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
2020-11-12 18:58:04,462 | INFO | [main] | ugi=root ip=unknown-ip-addr cmd=get_database: 1_6_1 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
2020-11-12 18:58:04,465 | INFO | [main] | 0: get_database: 1_6_1 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
2020-11-12 18:58:04,465 | INFO | [main] | ugi=root ip=unknown-ip-addr cmd=get_database: 1_6_1 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
2020-11-12 18:58:04,467 | INFO | [main] | 0: get_table : db=1_6_1 tbl=brinjal_deleteseg | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
2020-11-12 18:58:04,467 | INFO | [main] | ugi=root ip=unknown-ip-addr cmd=get_table : db=1_6_1 tbl=brinjal_deleteseg | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
2020-11-12 18:58:04,472 | AUDIT | [main] | {"time":"November 12, 2020 6:58:04 PM CST","username":"root","opName":"REFRESH TABLE","opId":"3304378649144145","opStatus":"FAILED","opTime":"21 ms","table":"1_6_1.brinjal_deleteseg","extraInfo":{"Exception":"org.apache.spark.sql.catalyst.analysis.NoSuchTableException","Message":"Table or view 'brinjal_deleteseg' not found in database '1_6_1';"}} | org.apache.carbondata.processing.util.Auditor.logOperationEnd(Auditor.java:97)
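Steps to reproduce - a minimal spark-sql sketch of the scenario described above. The database name 1_6_1, the table names brinjal_update and brinjal_deleteseg, and the warehouse path are taken from the log; the hdfs dfs -put source path is an illustrative assumption, since the exact copy procedure was not recorded.

# copy the old 1.6.1 store into the database's HDFS folder
# (destination per the log; local source path hypothetical)
hdfs dfs -put /opt/old_store/1_6_1/brinjal_update hdfs://hacluster/user/sparkhive/warehouse/1_6_1.db/
hdfs dfs -put /opt/old_store/1_6_1/brinjal_deleteseg hdfs://hacluster/user/sparkhive/warehouse/1_6_1.db/

spark-sql> create database if not exists 1_6_1;
spark-sql> use 1_6_1;
spark-sql> refresh table brinjal_update;        -- first refresh succeeds
spark-sql> select count(*) from brinjal_update; -- subsequent operations on the refreshed table work
spark-sql> refresh table brinjal_deleteseg;     -- second refresh sometimes fails with:
-- Error in query: Table or view 'brinjal_deleteseg' not found in database '1_6_1';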
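Note also the secondary failure in the log: with distributed pruning enabled, CarbonProperties.getIndexServerPort hits Integer.parseInt on a null value (NumberFormatException: null) because no index server port is configured, and the query then falls back to embedded mode. A hedged carbon.properties sketch of the settings involved (carbon.enable.index.server appears in the log; the ip/port property names are assumed from the CarbonData index server documentation, and the values are illustrative):

# enable/disable the distributed index server (also settable as a session param, as seen in the log)
carbon.enable.index.server=true
# assumed properties: host and port the index server binds to; the port must parse
# as an integer, otherwise getIndexServerPort throws NumberFormatException: null
carbon.index.server.ip=vm1
carbon.index.server.port=22900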