Posted by
Akash R Nilugal (Jira) on
URL: http://apache-carbondata-dev-mailing-list-archive.168.s1.nabble.com/jira-Updated-CARBONDATA-3938-In-Hive-read-table-we-are-unable-to-read-a-projection-column-or-read-a--tp102491.html
[
https://issues.apache.org/jira/browse/CARBONDATA-3938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Prasanna Ravichandran updated CARBONDATA-3938:
----------------------------------------------
Description:
In a Hive read table, we are unable to read a projection column or run a full-scan query. But the aggregate queries are working fine.
Test query:
--spark beeline;
drop table if exists uniqdata;
drop table if exists uniqdata1;
CREATE TABLE uniqdata(CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) stored as carbondata ;
LOAD DATA INPATH 'hdfs://hacluster/user/prasanna/2000_UniqData.csv' into table uniqdata OPTIONS('DELIMITER'=',', 'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
CREATE TABLE IF NOT EXISTS uniqdata1 (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) ROW FORMAT SERDE 'org.apache.carbondata.hive.CarbonHiveSerDe' WITH SERDEPROPERTIES ('mapreduce.input.carboninputformat.databaseName'='default','mapreduce.input.carboninputformat.tableName'='uniqdata') STORED AS INPUTFORMAT 'org.apache.carbondata.hive.MapredCarbonInputFormat' OUTPUTFORMAT 'org.apache.carbondata.hive.MapredCarbonOutputFormat' LOCATION 'hdfs://hacluster/user/hive/warehouse/uniqdata';
select count(*) from uniqdata1;
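Before switching to the Hive beeline, the Spark side can be sanity-checked so that the 2000 loaded rows are known to be readable there (a hypothetical check, not part of the original report; it uses the same table names as the script above):

```sql
-- Hypothetical Spark-side sanity checks (illustrative only):
select count(*) from uniqdata;          -- the report states 2000 rows were loaded
select cust_id from uniqdata limit 5;   -- projection works from Spark beeline
```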
--Hive Beeline;
select count(*) from uniqdata1; --not working: returns 0 rows even though 2000 rows are present; --Issue 1 on Hive read format table;
select * from uniqdata1; --returns no rows; --Issue 2(a): full scan on Hive read format table;
select cust_id from uniqdata1 limit 5; --returns no rows; --Issue 2(b): select query with projection, not working;
Attached the logs for your reference.
With the Hive write table, the aggregate and filter queries are not working, but select * (full scan) queries are working.
All 3 issues (full scan - select *, filter queries, and aggregate queries) are present in the Hive read format table.
This issue also exists when a normal carbon table (created through "stored as carbondata") is created in Spark and the data is read through a select query from Hive beeline.
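The filter-query case mentioned above is not shown in the reproduction steps; a query of that class would look like the following (a hypothetical example -- the column and value are chosen for illustration and are not from the original report):

```sql
-- Hypothetical filter query of the failing class (illustrative only):
select cust_name from uniqdata1 where cust_id = 9001;
```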
was:
In Hive read table, we are unable to read a projection column or full scan query. But the aggregate queries are working fine.
Test query:
--spark beeline;
drop table if exists uniqdata;
drop table if exists uniqdata1;
CREATE TABLE uniqdata(CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) stored as carbondata ;
LOAD DATA INPATH 'hdfs://hacluster/user/prasanna/2000_UniqData.csv' into table uniqdata OPTIONS('DELIMITER'=',', 'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
CREATE TABLE IF NOT EXISTS uniqdata1 (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) ROW FORMAT SERDE 'org.apache.carbondata.hive.CarbonHiveSerDe' WITH SERDEPROPERTIES ('mapreduce.input.carboninputformat.databaseName'='default','mapreduce.input.carboninputformat.tableName'='uniqdata') STORED AS INPUTFORMAT 'org.apache.carbondata.hive.MapredCarbonInputFormat' OUTPUTFORMAT 'org.apache.carbondata.hive.MapredCarbonOutputFormat' LOCATION 'hdfs://hacluster/user/hive/warehouse/uniqdata';
select count(*) from uniqdata1;
--Hive Beeline;
select count(*) from uniqdata1; --not working: returns 0 rows even though 2000 rows are present; --Issue 1 on Hive read format table;
select * from uniqdata1; --returns no rows; --Issue 2(a): full scan on Hive read format table;
select cust_id from uniqdata1 limit 5; --returns no rows; --Issue 2(b): select query with projection, not working;
Attached the logs for your reference. With the Hive write table this issue is not seen; the issue is only seen in the Hive read format table.
This issue also exists when a normal carbon table is created in Spark and read through Hive beeline.
> In Hive read table, we are unable to read a projection column or read a full scan - select * query. Even the aggregate queries are not working.
> -----------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: CARBONDATA-3938
> URL:
https://issues.apache.org/jira/browse/CARBONDATA-3938
> Project: CarbonData
> Issue Type: Bug
> Components: hive-integration
> Affects Versions: 2.0.0
> Reporter: Prasanna Ravichandran
> Priority: Major
> Attachments: Hive on MR - Read projection column issue.txt
>
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)