[jira] [Updated] (CARBONDATA-2317) concurrent datamap with same name and schema creation throws exception


Akash R Nilugal (Jira)

     [ https://issues.apache.org/jira/browse/CARBONDATA-2317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Indhumathi Muthu Murugesh updated CARBONDATA-2317:
--------------------------------------------------
    Fix Version/s:     (was: 2.0.2)

> concurrent datamap with same name and schema creation throws exception
> -----------------------------------------------------------------------
>
>                 Key: CARBONDATA-2317
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-2317
>             Project: CarbonData
>          Issue Type: Improvement
>            Reporter: Rahul Kumar
>            Assignee: Rahul Kumar
>            Priority: Minor
>             Fix For: 1.4.0
>
>          Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> *Steps to reproduce:*
>  # From Beeline, the user creates a table.
>  # From 4 concurrent terminals, the user tries to create datamaps (see the Scala sketch after the quoted error below).
> *Queries used to reproduce:*
>  # CREATE TABLE uniqdata(CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES('DICTIONARY_INCLUDE'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
>  # create datamap uniqdata_agg on table uniqdata using 'preaggregate' as select cust_name, avg(cust_id) from uniqdata group by cust_id, cust_name;
>  # create datamap uniqdata_agg_sum on table uniqdata using 'preaggregate' as select cust_name, sum(cust_id) from uniqdata group by cust_id, cust_name;
>  # create datamap uniqdata_agg_count on table uniqdata using 'preaggregate' as select cust_name, count(cust_id) from uniqdata group by cust_id, cust_name;
>  # create datamap uniqdata_agg_min on table uniqdata using 'preaggregate' as select cust_name, min(cust_id) from uniqdata group by cust_id, cust_name;
>  # create datamap uniqdata_agg_max on table uniqdata using 'preaggregate' as select cust_name, max(cust_id) from uniqdata group by cust_id, cust_name;   --->*These datamap creation statements are issued from 4 concurrent terminals.*
> *2 of the datamap creations fail with the error below:*
> {quote}
> 0: jdbc:hive2://ha-cluster/default> create datamap uniqdata_agg_min on table uniqdata using 'preaggregate' as select cust_name, min(cust_id) from uniqdata group by cust_id, cust_name;
> Error: org.apache.carbondata.spark.exception.ProcessMetaDataException: operation failed for default.uniqdata_uniqdata_agg_min: Create table 'uniqdata_uniqdata_agg_min' in database 'default' failed, File does not exist: /user/hive/warehouse/carbon.store/default/uniqdata_uniqdata_agg_min/Metadata/schema.write (inode 20219) Holder DFSClient_NONMAPREDUCE_1307577692_216 does not have any open files.
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2686)
> at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.completeFileInternal(FSDirWriteFileOp.java:625)
> at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.completeFile(FSDirWriteFileOp.java:605)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:2731)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:883)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:561)
> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:847)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:790)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2486) (state=,code=0)
> {quote}
>  
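For reference, the concurrency described in the reproduction steps can also be driven from a single Spark application instead of four terminals. The Scala sketch below is illustrative and not part of the issue: it assumes a CarbonData-enabled SparkSession named `spark` is already available, and the object and method names (ConcurrentDatamapRepro, run) are made up for this example. It submits the CREATE DATAMAP statements from the issue in parallel and reports which ones fail.

import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Failure, Success, Try}

import org.apache.spark.sql.SparkSession

object ConcurrentDatamapRepro {

  // Assumes `spark` is a CarbonData-enabled SparkSession; how it is created
  // is outside the scope of this sketch.
  def run(spark: SparkSession): Unit = {
    // Datamap name / aggregate function pairs, matching the queries in the issue.
    val datamaps = Seq(
      ("uniqdata_agg", "avg"),
      ("uniqdata_agg_sum", "sum"),
      ("uniqdata_agg_count", "count"),
      ("uniqdata_agg_min", "min"),
      ("uniqdata_agg_max", "max")
    )

    // Submit every CREATE DATAMAP statement at once, standing in for the
    // "4 concurrent terminals" of the reproduction steps.
    val attempts = datamaps.map { case (name, fn) =>
      Future {
        val sql =
          s"""create datamap $name on table uniqdata
             |using 'preaggregate'
             |as select cust_name, $fn(cust_id) from uniqdata
             |group by cust_id, cust_name""".stripMargin
        (name, Try(spark.sql(sql)))
      }
    }

    // Wait for all statements and report which ones failed; per the issue,
    // some of them surface a ProcessMetaDataException under concurrency.
    Await.result(Future.sequence(attempts), Duration.Inf).foreach {
      case (name, Success(_)) => println(s"datamap $name created")
      case (name, Failure(e)) => println(s"datamap $name failed: ${e.getMessage}")
    }
  }
}

Each Future here plays the role of one concurrent terminal; issuing the statements in parallel is what exercises the race on the datamap schema metadata that the quoted stack trace reports.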



--
This message was sent by Atlassian Jira
(v8.3.4#803005)