> Error in fetching decimal type data loaded with Carbondata 1.1.0 in Carbondata 1.2.0
> ------------------------------------------------------------------------------------
>
> Key: CARBONDATA-1458
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-1458
>             Project: CarbonData
> Issue Type: Bug
> Components: data-query
> Affects Versions: 1.2.0
> Environment: Hadoop 2.7.3 and Spark 2.1.0
> Reporter: Pallavi Singh
> Assignee: Ravindra Pesala
> Fix For: 1.2.0
>
> Time Spent: 3h
> Remaining Estimate: 0h
>
> I have a CarbonData table that was created and loaded with CarbonData 1.1.0:
> CREATE TABLE uniqdata1 (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' ;
> LOAD DATA INPATH 'hdfs://localhost:54310/data/2000_UniqData.csv' into table uniqdata1 OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1') ;
> Now, when I try to fetch the data using CarbonData 1.2.0, I get an exception.
> This happens for the decimal data type; the command and error message are given below:
> select DECIMAL_COLUMN1 from uniqdata1;
> Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 29.0 failed 4 times, most recent failure: Lost task 0.3 in stage 29.0 (TID 93, 192.168.2.188, executor 0): org.apache.spark.util.TaskCompletionListenerException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.ClassCastException: java.lang.Double cannot be cast to java.math.BigDecimal
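> A minimal, generic Java illustration of this kind of failure (not the actual CarbonData reader code; the class and variable names below are hypothetical): if a measure value read from the old store is materialized as a java.lang.Double, a direct cast to java.math.BigDecimal throws exactly the ClassCastException shown above, whereas converting through Number works.
>
> import java.math.BigDecimal;
>
> public class DecimalCastDemo {
>     public static void main(String[] args) {
>         // Value materialized as a Double (as a pre-1.2.0 reader might produce it).
>         Object measureValue = Double.valueOf(12345.6789);
>
>         // BigDecimal bad = (BigDecimal) measureValue;
>         // The line above throws: java.lang.ClassCastException:
>         //   java.lang.Double cannot be cast to java.math.BigDecimal
>
>         // Defensive conversion that succeeds regardless of the boxed numeric type.
>         BigDecimal ok = BigDecimal.valueOf(((Number) measureValue).doubleValue());
>         System.out.println(ok);
>     }
> }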
> Also, if the table contains a decimal column, the select * query fails because of the same issue.