[jira] [Commented] (CARBONDATA-1349) Error message displays while executing Select Query on an existing table.

Akash R Nilugal (Jira)

    [ https://issues.apache.org/jira/browse/CARBONDATA-1349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16110726#comment-16110726 ]

Vinod Rohilla commented on CARBONDATA-1349:
-------------------------------------------

[~xuchuanyin]

Please check the Thrift server logs below:

17/08/02 16:43:57 INFO thriftserver.SparkExecuteStatementOperation: Running query 'select * from uniqdata' with dc6ef943-0fec-4bce-8717-0834c268a685
17/08/02 16:43:57 INFO parser.CarbonSparkSqlParser: Parsing command: select * from uniqdata
17/08/02 16:43:57 INFO metastore.HiveMetaStore: 6: get_table : db=vinod tbl=uniqdata
17/08/02 16:43:57 INFO HiveMetaStore.audit: ugi=anonymous ip=unknown-ip-addr cmd=get_table : db=vinod tbl=uniqdata
17/08/02 16:43:57 INFO metastore.HiveMetaStore: 6: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
17/08/02 16:43:57 INFO metastore.ObjectStore: ObjectStore, initialize called
17/08/02 16:43:57 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
17/08/02 16:43:57 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
17/08/02 16:43:57 INFO metastore.ObjectStore: Initialized ObjectStore
17/08/02 16:43:57 INFO parser.CatalystSqlParser: Parsing command: array<string>
17/08/02 16:43:57 INFO metastore.HiveMetaStore: 6: get_table : db=vinod tbl=uniqdata
17/08/02 16:43:57 INFO HiveMetaStore.audit: ugi=anonymous ip=unknown-ip-addr cmd=get_table : db=vinod tbl=uniqdata
17/08/02 16:43:57 INFO parser.CatalystSqlParser: Parsing command: array<string>
17/08/02 16:43:57 INFO metastore.HiveMetaStore: 6: get_database: vinod
17/08/02 16:43:57 INFO HiveMetaStore.audit: ugi=anonymous ip=unknown-ip-addr cmd=get_database: vinod
17/08/02 16:43:57 INFO metastore.HiveMetaStore: 6: get_database: vinod
17/08/02 16:43:57 INFO HiveMetaStore.audit: ugi=anonymous ip=unknown-ip-addr cmd=get_database: vinod
17/08/02 16:43:57 INFO metastore.HiveMetaStore: 6: get_tables: db=vinod pat=*
17/08/02 16:43:57 INFO HiveMetaStore.audit: ugi=anonymous ip=unknown-ip-addr cmd=get_tables: db=vinod pat=*
17/08/02 16:43:57 INFO optimizer.CarbonLateDecodeRule: pool-23-thread-5 Starting to optimize plan
17/08/02 16:43:57 INFO optimizer.CarbonLateDecodeRule: pool-23-thread-5 Skip CarbonOptimizer
17/08/02 16:43:58 INFO table.TableInfo: pool-23-thread-5 Table block size not specified for vinod_uniqdata. Therefore considering the default value 1024 MB
17/08/02 16:43:58 INFO memory.UnsafeMemoryManager: pool-23-thread-5 Memory block (org.apache.carbondata.core.memory.MemoryBlock@5187d9ed) is created with size 8388608. Total memory used 8389355Bytes, left 528481557Bytes
17/08/02 16:43:58 INFO memory.UnsafeMemoryManager: pool-23-thread-5 Memory block (org.apache.carbondata.core.memory.MemoryBlock@e0198c5) is created with size 349. Total memory used 8389704Bytes, left 528481208Bytes
17/08/02 16:43:58 INFO memory.UnsafeMemoryManager: pool-23-thread-5 Freeing memory of size: 8388608available memory:  536869816
17/08/02 16:43:58 INFO rdd.CarbonScanRDD:
 Identified no.of.blocks: 1,
 no.of.tasks: 1,
 no.of.nodes: 0,
 parallelism: 4
       
17/08/02 16:43:58 INFO spark.SparkContext: Starting job: run at AccessController.java:0
17/08/02 16:43:58 INFO scheduler.DAGScheduler: Got job 2 (run at AccessController.java:0) with 1 output partitions
17/08/02 16:43:58 INFO scheduler.DAGScheduler: Final stage: ResultStage 2 (run at AccessController.java:0)
17/08/02 16:43:58 INFO scheduler.DAGScheduler: Parents of final stage: List()
17/08/02 16:43:58 INFO scheduler.DAGScheduler: Missing parents: List()
17/08/02 16:43:58 INFO scheduler.DAGScheduler: Submitting ResultStage 2 (MapPartitionsRDD[10] at run at AccessController.java:0), which has no missing parents
17/08/02 16:43:58 INFO memory.MemoryStore: Block broadcast_2 stored as values in memory (estimated size 17.8 KB, free 2.5 GB)
17/08/02 16:43:58 INFO memory.MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 7.0 KB, free 2.5 GB)
17/08/02 16:43:58 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in memory on 192.168.2.179:35394 (size: 7.0 KB, free: 2.5 GB)
17/08/02 16:43:58 INFO spark.SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:996
17/08/02 16:43:58 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 2 (MapPartitionsRDD[10] at run at AccessController.java:0)
17/08/02 16:43:58 INFO scheduler.TaskSchedulerImpl: Adding task set 2.0 with 1 tasks
17/08/02 16:43:58 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 2.0 (TID 2, localhost, executor driver, partition 0, ANY, 6654 bytes)
17/08/02 16:43:58 INFO executor.Executor: Running task 0.0 in stage 2.0 (TID 2)
17/08/02 16:43:58 INFO table.TableInfo: Executor task launch worker-2 Table block size not specified for vinod_uniqdata. Therefore considering the default value 1024 MB
17/08/02 16:43:58 INFO impl.AbstractQueryExecutor: [Executor task launch worker-2][partitionID:uniqdata;queryID:12867873975390] Query will be executed on table: uniqdata
17/08/02 16:43:58 INFO collector.ResultCollectorFactory: [Executor task launch worker-2][partitionID:uniqdata;queryID:12867873975390] Vector based dictionary collector is used to scan and collect the data
17/08/02 16:43:58 ERROR spark.TaskContextImpl: Error in TaskCompletionListener
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.nio.BufferUnderflowException
        at org.apache.carbondata.core.scan.processor.AbstractDataBlockIterator.close(AbstractDataBlockIterator.java:231)
        at org.apache.carbondata.core.scan.result.iterator.AbstractDetailQueryResultIterator.close(AbstractDetailQueryResultIterator.java:306)
        at org.apache.carbondata.core.scan.executor.impl.AbstractQueryExecutor.finish(AbstractQueryExecutor.java:544)
        at org.apache.carbondata.spark.vectorreader.VectorizedCarbonRecordReader.close(VectorizedCarbonRecordReader.java:132)
        at org.apache.carbondata.spark.rdd.CarbonScanRDD$$anon$1$$anonfun$7.apply(CarbonScanRDD.scala:215)
        at org.apache.carbondata.spark.rdd.CarbonScanRDD$$anon$1$$anonfun$7.apply(CarbonScanRDD.scala:213)
        at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:123)
        at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:97)
        at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:95)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:95)
        at org.apache.spark.scheduler.Task.run(Task.scala:112)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.nio.BufferUnderflowException
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:192)
        at org.apache.carbondata.core.scan.processor.AbstractDataBlockIterator.close(AbstractDataBlockIterator.java:226)
        ... 16 more
Caused by: java.lang.RuntimeException: java.nio.BufferUnderflowException
        at org.apache.carbondata.core.datastore.chunk.impl.MeasureRawColumnChunk.convertToMeasureColDataChunks(MeasureRawColumnChunk.java:62)
        at org.apache.carbondata.core.scan.scanner.AbstractBlockletScanner.scanBlocklet(AbstractBlockletScanner.java:100)
        at org.apache.carbondata.core.scan.processor.AbstractDataBlockIterator$1.call(AbstractDataBlockIterator.java:191)
        at org.apache.carbondata.core.scan.processor.AbstractDataBlockIterator$1.call(AbstractDataBlockIterator.java:178)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        ... 3 more
Caused by: java.nio.BufferUnderflowException
        at java.nio.Buffer.nextGetIndex(Buffer.java:506)
        at java.nio.HeapByteBuffer.getLong(HeapByteBuffer.java:412)
        at org.apache.carbondata.core.metadata.ColumnPageCodecMeta.deserialize(ColumnPageCodecMeta.java:204)
        at org.apache.carbondata.core.datastore.chunk.reader.measure.v3.CompressedMeasureChunkFileBasedReaderV3.decodeMeasure(CompressedMeasureChunkFileBasedReaderV3.java:244)
        at org.apache.carbondata.core.datastore.chunk.reader.measure.v3.CompressedMeasureChunkFileBasedReaderV3.convertToMeasureChunk(CompressedMeasureChunkFileBasedReaderV3.java:219)
        at org.apache.carbondata.core.datastore.chunk.impl.MeasureRawColumnChunk.convertToMeasureColDataChunks(MeasureRawColumnChunk.java:59)
        ... 7 more
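The root cause above is a plain java.nio.BufferUnderflowException: ColumnPageCodecMeta.deserialize calls getLong() on a ByteBuffer that has fewer than 8 bytes remaining. As a minimal illustration (unrelated to CarbonData internals, names are our own), the same exception can be reproduced like this:

```java
import java.nio.ByteBuffer;
import java.nio.BufferUnderflowException;

public class UnderflowDemo {
    // Tries to read an 8-byte long from a buffer that holds only 4 bytes.
    // Returns true if BufferUnderflowException was thrown.
    static boolean readLongUnderflows() {
        ByteBuffer buf = ByteBuffer.allocate(4);
        buf.putInt(42);   // write 4 bytes
        buf.flip();       // switch to read mode: 4 bytes remaining
        try {
            buf.getLong(); // needs 8 bytes -> throws
            return false;
        } catch (BufferUnderflowException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("underflow thrown: " + readLongUnderflows());
    }
}
```

This suggests the on-disk page metadata being deserialized is shorter than the reader expects, e.g. because the file was written by a different format version than the one the reader assumes.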
17/08/02 16:43:58 ERROR executor.Executor: Exception in task 0.0 in stage 2.0 (TID 2)
org.apache.spark.util.TaskCompletionListenerException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.nio.BufferUnderflowException
        at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:105)
        at org.apache.spark.scheduler.Task.run(Task.scala:112)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
17/08/02 16:43:58 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 2.0 (TID 2, localhost, executor driver): org.apache.spark.util.TaskCompletionListenerException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.nio.BufferUnderflowException
        at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:105)
        at org.apache.spark.scheduler.Task.run(Task.scala:112)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)

17/08/02 16:43:58 ERROR scheduler.TaskSetManager: Task 0 in stage 2.0 failed 1 times; aborting job
17/08/02 16:43:58 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
17/08/02 16:43:58 INFO scheduler.TaskSchedulerImpl: Cancelling stage 2
17/08/02 16:43:58 INFO scheduler.DAGScheduler: ResultStage 2 (run at AccessController.java:0) failed in 0.308 s due to Job aborted due to stage failure: Task 0 in stage 2.0 failed 1 times, most recent failure: Lost task 0.0 in stage 2.0 (TID 2, localhost, executor driver): org.apache.spark.util.TaskCompletionListenerException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.nio.BufferUnderflowException
        at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:105)
        at org.apache.spark.scheduler.Task.run(Task.scala:112)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
17/08/02 16:43:58 INFO scheduler.DAGScheduler: Job 2 failed: run at AccessController.java:0, took 0.326253 s
17/08/02 16:43:58 ERROR thriftserver.SparkExecuteStatementOperation: Error executing query, currentState RUNNING,
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 1 times, most recent failure: Lost task 0.0 in stage 2.0 (TID 2, localhost, executor driver): org.apache.spark.util.TaskCompletionListenerException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.nio.BufferUnderflowException
        at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:105)
        at org.apache.spark.scheduler.Task.run(Task.scala:112)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
        at scala.Option.foreach(Option.scala:257)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1931)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1944)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1958)
        at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:935)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
        at org.apache.spark.rdd.RDD.collect(RDD.scala:934)
        at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:275)
        at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2371)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
        at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2765)
        at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2370)
        at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$collect$1.apply(Dataset.scala:2375)
        at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$collect$1.apply(Dataset.scala:2375)
        at org.apache.spark.sql.Dataset.withCallback(Dataset.scala:2778)
        at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2375)
        at org.apache.spark.sql.Dataset.collect(Dataset.scala:2351)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:235)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:163)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:160)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:173)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.util.TaskCompletionListenerException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.nio.BufferUnderflowException
        at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:105)
        at org.apache.spark.scheduler.Task.run(Task.scala:112)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
        ... 3 more
17/08/02 16:43:58 ERROR thriftserver.SparkExecuteStatementOperation: Error running hive query:
org.apache.hive.service.cli.HiveSQLException: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 1 times, most recent failure: Lost task 0.0 in stage 2.0 (TID 2, localhost, executor driver): org.apache.spark.util.TaskCompletionListenerException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.nio.BufferUnderflowException
        at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:105)
        at org.apache.spark.scheduler.Task.run(Task.scala:112)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:258)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:163)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:160)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:173)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)


> Error message displays while executing Select Query on an existing table.
> -------------------------------------------------------------------------
>
>                 Key: CARBONDATA-1349
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-1349
>             Project: CarbonData
>          Issue Type: Bug
>          Components: data-query
>         Environment: Spark 2.1
>            Reporter: Vinod Rohilla
>            Priority: Minor
>
> *Steps to reproduce:*
> 1: The table must have been created at least one week earlier.
> 2: Data must be loaded into the table.
> 3: Run a select query: "Select * from uniqdata"
> *Actual result:*
> Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 87.0 failed 1 times, most recent failure: Lost task 0.0 in stage 87.0 (TID 111, localhost, executor driver): org.apache.spark.util.TaskCompletionListenerException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.nio.BufferUnderflowException
> at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:105)
> at org.apache.spark.scheduler.Task.run(Task.scala:112)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:748)
> *Expected result:* The select query should display the correct results.
> Note: If the user creates a new table, loads data, and runs the select query, it returns results correctly; the error occurs only when the select query is run against a pre-existing table.
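As a general defensive pattern (not the CarbonData fix, and the names below are hypothetical), readers that consume fixed-width fields from serialized metadata can check remaining() before each read rather than letting getLong() throw:

```java
import java.nio.ByteBuffer;

public class SafeMetaReader {
    // Hypothetical guarded read: returns the long at the current position,
    // or a caller-supplied fallback when fewer than 8 bytes remain, instead
    // of throwing java.nio.BufferUnderflowException.
    static long readLongOrDefault(ByteBuffer buf, long fallback) {
        if (buf.remaining() < Long.BYTES) {
            return fallback;
        }
        return buf.getLong();
    }

    public static void main(String[] args) {
        ByteBuffer shortBuf = ByteBuffer.allocate(4);
        shortBuf.putInt(7);
        shortBuf.flip();                      // only 4 bytes remaining
        System.out.println(readLongOrDefault(shortBuf, -1L)); // -1: guarded

        ByteBuffer fullBuf = ByteBuffer.allocate(8);
        fullBuf.putLong(123L);
        fullBuf.flip();                       // full 8 bytes remaining
        System.out.println(readLongOrDefault(fullBuf, -1L));  // 123
    }
}
```

In practice the guard would signal a format-version mismatch rather than silently substituting a default, but the remaining()-before-read check is the core idea.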



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)