[jira] [Created] (CARBONDATA-3938) In Hive read table, we are unable to read a projection column or read a full scan - select * query. But the aggregate queries are working fine.

Akash R Nilugal (Jira)
Prasanna Ravichandran created CARBONDATA-3938:
-------------------------------------------------

             Summary: In Hive read table, we are unable to read a projection column or read a full scan - select * query. But the aggregate queries are working fine.
                 Key: CARBONDATA-3938
                 URL: https://issues.apache.org/jira/browse/CARBONDATA-3938
             Project: CarbonData
          Issue Type: Bug
          Components: hive-integration
    Affects Versions: 2.0.0
            Reporter: Prasanna Ravichandran
         Attachments: Hive on MR - Read projection column issue.txt

In a Hive read table, we are unable to read a projection column or run a full scan (select *) query, but the aggregate queries work fine.

 

Test queries:

 

--spark beeline;

drop table if exists uniqdata;

drop table if exists uniqdata1;

CREATE TABLE uniqdata(
  CUST_ID int, CUST_NAME String, ACTIVE_EMUI_VERSION string,
  DOB timestamp, DOJ timestamp,
  BIGINT_COLUMN1 bigint, BIGINT_COLUMN2 bigint,
  DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),
  Double_COLUMN1 double, Double_COLUMN2 double,
  INTEGER_COLUMN1 int)
stored as carbondata;

LOAD DATA INPATH 'hdfs://hacluster/user/prasanna/2000_UniqData.csv' into table uniqdata
OPTIONS('DELIMITER'=',', 'QUOTECHAR'='"', 'BAD_RECORDS_ACTION'='FORCE',
  'FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');

CREATE TABLE IF NOT EXISTS uniqdata1 (
  CUST_ID int, CUST_NAME String, ACTIVE_EMUI_VERSION string,
  DOB timestamp, DOJ timestamp,
  BIGINT_COLUMN1 bigint, BIGINT_COLUMN2 bigint,
  DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),
  Double_COLUMN1 double, Double_COLUMN2 double,
  INTEGER_COLUMN1 int)
ROW FORMAT SERDE 'org.apache.carbondata.hive.CarbonHiveSerDe'
WITH SERDEPROPERTIES (
  'mapreduce.input.carboninputformat.databaseName'='default',
  'mapreduce.input.carboninputformat.tableName'='uniqdata')
STORED AS
  INPUTFORMAT 'org.apache.carbondata.hive.MapredCarbonInputFormat'
  OUTPUTFORMAT 'org.apache.carbondata.hive.MapredCarbonOutputFormat'
LOCATION 'hdfs://hacluster/user/hive/warehouse/uniqdata';

select count(*) from uniqdata1;

 

 

--Hive Beeline;

select count(*) from uniqdata1; -- Returns 2000;

select * from uniqdata1; -- Returns no rows;

select cust_id from uniqdata1 limit 5; -- Returns no rows;
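Since only count(*) succeeds, one possible explanation worth checking is that Hive answers it from table statistics without touching the record reader, while select * and projections go through the actual read path. The queries below are a suggested diagnostic, not part of the original report; hive.compute.query.using.stats is a standard Hive setting that forces aggregates to scan the data instead of using statistics:

-- Aggregate over a concrete column; unlike count(*), this cannot always be
-- answered from stats, so if it also returns nothing the read path is broken:
select count(cust_id) from uniqdata1;

-- Force count(*) itself to perform a real scan (suggested check):
set hive.compute.query.using.stats=false;
select count(*) from uniqdata1;

If count(*) also returns 0 with stats disabled, that would point at the MapredCarbonInputFormat split/record-reader path rather than at projection handling specifically.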

The logs are attached for reference. This issue is not seen with the Hive write table; it occurs only with the Hive read format table.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)