[jira] [Resolved] (CARBONDATA-950) selecting table data having a column of "date" type throws exception in hive



Akash R Nilugal (Jira)

     [ https://issues.apache.org/jira/browse/CARBONDATA-950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Liang Chen resolved CARBONDATA-950.
-----------------------------------
       Resolution: Fixed
    Fix Version/s: 1.2.0

> selecting table data having a column of "date" type throws exception in hive
> ----------------------------------------------------------------------------
>
>                 Key: CARBONDATA-950
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-950
>             Project: CarbonData
>          Issue Type: Bug
>          Components: data-query
>         Environment: spark 2.1, hive 1.2.1
>            Reporter: Neha Bhardwaj
>            Assignee: anubhav tarar
>            Priority: Minor
>             Fix For: 1.2.0
>
>         Attachments: my_user.csv
>
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> Selecting data from a Hive table containing a column of the date datatype fails to render output.
> Steps to reproduce:
> 1) In Spark Shell :
> a) Create Table -
> import org.apache.spark.sql.SparkSession
> import org.apache.spark.sql.CarbonSession._
> val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession("hdfs://localhost:54310/opt/data")
> scala> carbon.sql(" create table my_user(id int, name string,dob date) stored by 'carbondata' ").show
> b) Load Data -
> scala> carbon.sql(""" load data inpath 'hdfs://localhost:54310/Files/my_user.csv' into table my_user """ ).show
> 2) In Hive :
> a) Add Jars -
> add jar /home/neha/incubator-carbondata/assembly/target/scala-2.11/carbondata_2.11-1.1.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar;
> add jar /opt/spark-2.1.0-bin-hadoop2.7/jars/spark-catalyst_2.11-2.1.0.jar;
> add jar /home/neha/incubator-carbondata/integration/hive/carbondata-hive-1.1.0-incubating-SNAPSHOT.jar;
> b) Create Table -
> create table my_user(id int, name string,dob date);
> c) Alter location -
> hive> alter table my_user set LOCATION 'hdfs://localhost:54310/opt/data/default/my_user' ;
> d) Set Properties -
> set hive.mapred.supports.subdirectories=true;
> set mapreduce.input.fileinputformat.input.dir.recursive=true;
> e) Alter FileFormat -
> alter table my_user set FILEFORMAT
> INPUTFORMAT "org.apache.carbondata.hive.MapredCarbonInputFormat"
> OUTPUTFORMAT "org.apache.carbondata.hive.MapredCarbonOutputFormat"
> SERDE "org.apache.carbondata.hive.CarbonHiveSerDe";
> f) Query:
> select * from my_user;
> Expected Output:
> Display all the data of table my_user.
> Actual Output:
> Failed with exception java.io.IOException:java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Long
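The exception matches the type mismatch on the Hive read path: Hive represents a DATE value internally as days since the epoch stored in an int (via DateWritable), while the reader here expects a long. A minimal Java sketch of that mismatch, outside Hive (the class name and date value are hypothetical, for illustration only):

```java
public class DateCastDemo {
    // Mimics the failing read path from the report: an unchecked cast of the
    // column value to Long, as a ClassCastException message when it fails.
    static String readDateAsLong(Object columnValue) {
        try {
            long days = (Long) columnValue; // reader expects a Long here
            return "days=" + days;
        } catch (ClassCastException e) {
            return "ClassCastException: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        // Hive hands back the DATE as an Integer (days since epoch),
        // not a Long, so the cast above fails exactly as in the report.
        Object daysSinceEpoch = Integer.valueOf(17275); // hypothetical value
        System.out.println(readDateAsLong(daysSinceEpoch));
    }
}
```

The fix on the SerDe side is to return the type the ObjectInspector declares (or declare an inspector matching the stored int) rather than relying on the cast.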



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)