> Null pointer exception when concurrent select queries are executed from different beeline terminals.
> ----------------------------------------------------------------------------------------------------
>
> Key: CARBONDATA-3482
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3482
> Project: CarbonData
> Issue Type: Bug
> Reporter: Kunal Khatua
> Assignee: Kunal Khatua
> Priority: Major
> Fix For: 1.6.1
>
> Time Spent: 5.5h
> Remaining Estimate: 0h
>
> 1. Beeline1 => create tables (1K)
> 2. Beeline2 => insert into table t2 (only 1 record per insert), up to 7K
> 3. Run concurrent queries (a repro sketch follows this list):
> q1: select count(*) from t1
> q2: select * from t1 limit 1
> q3: select count(*) from t2
> q4: select * from t2 limit 1
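>
> A minimal repro sketch, assuming a HiveServer2 endpoint at jdbc:hive2://localhost:10000 and the tables t1/t2 from the steps above; the class name, URL, and credentials are placeholders, and each query gets its own connection and thread to mimic separate beeline terminals:
>
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.ResultSet;
> import java.sql.Statement;
> import java.util.Arrays;
> import java.util.List;
>
> public class ConcurrentSelectRepro {
>   // Placeholder endpoint; point it at the HiveThriftServer the beeline sessions use.
>   static final String URL = "jdbc:hive2://localhost:10000/default";
>
>   public static void main(String[] args) throws Exception {
>     Class.forName("org.apache.hive.jdbc.HiveDriver"); // hive-jdbc on the classpath
>     List<String> queries = Arrays.asList(
>         "select count(*) from t1",
>         "select * from t1 limit 1",
>         "select count(*) from t2",
>         "select * from t2 limit 1");
>     Thread[] threads = new Thread[queries.size()];
>     for (int i = 0; i < threads.length; i++) {
>       final String q = queries.get(i);
>       threads[i] = new Thread(() -> {
>         // A separate connection per thread stands in for a separate beeline terminal.
>         try (Connection c = DriverManager.getConnection(URL, "anonymous", "");
>              Statement s = c.createStatement();
>              ResultSet rs = s.executeQuery(q)) {
>           while (rs.next()) { /* drain the result set */ }
>         } catch (Exception e) {
>           e.printStackTrace(); // on affected builds the NPE below surfaces here
>         }
>       });
>     }
>     for (Thread t : threads) t.start();
>     for (Thread t : threads) t.join();
>   }
> }
>
> On an affected build, one of the four threads intermittently fails with the stack trace below.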
>
> Exception:
> java.lang.NullPointerException
> at org.apache.carbondata.core.indexstore.blockletindex.BlockDataMap.getFileFooterEntrySchema(BlockDataMap.java:1061)
> at org.apache.carbondata.core.indexstore.blockletindex.BlockDataMap.prune(BlockDataMap.java:727)
> at org.apache.carbondata.core.indexstore.blockletindex.BlockDataMap.prune(BlockDataMap.java:821)
> at org.apache.carbondata.core.indexstore.blockletindex.BlockletDataMapFactory.getAllBlocklets(BlockletDataMapFactory.java:446)
> at org.apache.carbondata.core.datamap.TableDataMap.pruneWithoutFilter(TableDataMap.java:156)
> at org.apache.carbondata.core.datamap.TableDataMap.prune(TableDataMap.java:143)
> at org.apache.carbondata.hadoop.api.CarbonInputFormat.getPrunedBlocklets(CarbonInputFormat.java:563)
> at org.apache.carbondata.hadoop.api.CarbonInputFormat.getDataBlocksOfSegment(CarbonInputFormat.java:471)
> at org.apache.carbondata.hadoop.api.CarbonTableInputFormat.getSplits(CarbonTableInputFormat.java:471)
> at org.apache.carbondata.hadoop.api.CarbonTableInputFormat.getSplits(CarbonTableInputFormat.java:199)
> at org.apache.carbondata.spark.rdd.CarbonScanRDD.internalGetPartitions(CarbonScanRDD.scala:141)
> at org.apache.carbondata.spark.rdd.CarbonRDD.getPartitions(CarbonRDD.scala:66)
> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:256)
> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:254)
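>
> The trace alone does not pin down the root cause. One pattern that yields exactly this kind of NPE during concurrent pruning is a lazily built field that another session can clear between the null check and the dereference; the sketch below is a generic illustration of that race and a snapshot-based fix, not the actual BlockDataMap code (class and member names are made up):
>
> // Illustrative only: a check-then-act race on a lazily built,
> // concurrently invalidated field.
> class FooterSchemaCache {
>   private volatile Object[] schema; // built on first use, cleared on invalidation
>
>   int unsafeWidth() {
>     if (schema == null) {
>       schema = buildSchema();
>     }
>     // invalidate() may run between the check above and the read below,
>     // so this line can throw NullPointerException under concurrency.
>     return schema.length;
>   }
>
>   int safeWidth() {
>     Object[] s = schema; // read the shared field exactly once
>     if (s == null) {
>       s = buildSchema();
>       schema = s;
>     }
>     return s.length; // the local snapshot cannot be nulled out concurrently
>   }
>
>   void invalidate() { // e.g. a concurrent load or update path
>     schema = null;
>   }
>
>   private Object[] buildSchema() {
>     return new Object[] {"colA", "colB"};
>   }
> }
>
> The actual fix in 1.6.1 may differ; the point is only that reading the shared field once into a local (or guarding it with proper synchronization) closes the null window.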