[jira] [Closed] (CARBONDATA-1956) Select query with sum, count and avg throws exception for pre aggregate table



Akash R Nilugal (Jira)

     [ https://issues.apache.org/jira/browse/CARBONDATA-1956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Geetika Gupta closed CARBONDATA-1956.
-------------------------------------
    Resolution: Fixed

> Select query with sum, count and avg throws exception for pre aggregate table
> -----------------------------------------------------------------------------
>
>                 Key: CARBONDATA-1956
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-1956
>             Project: CarbonData
>          Issue Type: Bug
>          Components: data-query
>    Affects Versions: 1.3.0
>         Environment: spark2.1
>            Reporter: Geetika Gupta
>            Priority: Major
>             Fix For: 1.3.0
>
>         Attachments: 2000_UniqData.csv
>
>
> I created a datamap using the following command:
> create datamap uniqdata_agg_d on table uniqdata_29 using 'preaggregate' as select sum(decimal_column1), count(cust_id), avg(bigint_column1) from uniqdata_29 group by cust_id;
> The datamap creation was successful, but when I tried the following query:
> select sum(decimal_column1), count(cust_id), avg(bigint_column1) from uniqdata_29 group by cust_id;
> It threw the following exception:
> Error: org.apache.spark.sql.AnalysisException: cannot resolve '(sum(uniqdata_29_uniqdata_agg_d.`uniqdata_29_bigint_column1_sum`) / sum(uniqdata_29_uniqdata_agg_d.`uniqdata_29_bigint_column1_count`))' due to data type mismatch: '(sum(uniqdata_29_uniqdata_agg_d.`uniqdata_29_bigint_column1_sum`) / sum(uniqdata_29_uniqdata_agg_d.`uniqdata_29_bigint_column1_count`))' requires (double or decimal) type, not bigint;;
> 'Aggregate [uniqdata_29_cust_id_count#244], [sum(uniqdata_29_decimal_column1_sum#243) AS sum(decimal_column1)#274, sum(cast(uniqdata_29_cust_id_count#244 as bigint)) AS count(cust_id)#276L, (sum(uniqdata_29_bigint_column1_sum#245L) / sum(uniqdata_29_bigint_column1_count#246L)) AS avg(bigint_column1)#279]
> +- Relation[uniqdata_29_decimal_column1_sum#243,uniqdata_29_cust_id_count#244,uniqdata_29_bigint_column1_sum#245L,uniqdata_29_bigint_column1_count#246L] CarbonDatasourceHadoopRelation [ Database name :28dec, Table name :uniqdata_29_uniqdata_agg_d, Schema :Some(StructType(StructField(uniqdata_29_decimal_column1_sum,DecimalType(30,10),true), StructField(uniqdata_29_cust_id_count,IntegerType,true), StructField(uniqdata_29_bigint_column1_sum,LongType,true), StructField(uniqdata_29_bigint_column1_count,LongType,true))) ] (state=,code=0)
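> The mismatch comes from the avg rollup. Per the schema above, the pre-aggregate table stores avg(bigint_column1) as two LongType columns (uniqdata_29_bigint_column1_sum and uniqdata_29_bigint_column1_count), and the rewritten plan divides one bigint sum by the other; Spark's Divide expression only accepts double or decimal operands, and the implicit cast that normal analysis would insert is missing here, so the plan fails the type check. A minimal sketch of an avg expression that would pass the check, assuming standard Spark SQL cast semantics and reusing the column names from the plan (an illustration, not the actual CarbonData fix):
>
> -- hypothetical rollup expression over the pre-aggregate table:
> -- casting the numerator to double makes the division legal
> -- where bigint / bigint is rejected by Divide's type check
> cast(sum(uniqdata_29_bigint_column1_sum) as double)
>   / sum(uniqdata_29_bigint_column1_count) as avg_bigint_column1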
> Steps to create the main table:
> CREATE TABLE uniqdata_29(CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format';
> Load command:
> LOAD DATA INPATH 'hdfs://localhost:54311/Files/2000_UniqData.csv' into table uniqdata_29 OPTIONS('DELIMITER'=',', 'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
> Datamap creation command:
> create datamap uniqdata_agg_d on table uniqdata_29 using 'preaggregate' as select sum(decimal_column1), count(cust_id), avg(bigint_column1) from uniqdata_29 group by cust_id;
> Note: select sum(decimal_column1), count(cust_id), avg(bigint_column1) from uniqdata_29 group by cust_id; executed successfully on the main table.
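> A hypothetical way to confirm that the pre-aggregate rewrite is the trigger, assuming CarbonData's documented drop datamap syntax: drop the datamap and re-run the select, which should then execute against the main table directly and succeed, as noted above.
>
> drop datamap if exists uniqdata_agg_d on table uniqdata_29;
> select sum(decimal_column1), count(cust_id), avg(bigint_column1) from uniqdata_29 group by cust_id;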



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)