> [MV] DataMap choosing policy should be table-size based, not first-match based
> ------------------------------------------------------------------------------
>
> Key: CARBONDATA-2572
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2572
> Project: CarbonData
> Issue Type: Sub-task
> Reporter: Babulal
> Priority: Major
>
> Create a table and two MV datamaps:
> 0: jdbc:hive2://10.18.222.231:23040> show datamap on table babu_2;
> +--------------+------------+-----------------------+--+
> | DataMapName | ClassName | Associated Table |
> +--------------+------------+-----------------------+--+
> | agg_69 | mv | default.agg_69_table |
> | agg_70 | mv | default.agg_70_table |
> +--------------+------------+-----------------------+--+
>
> create datamap agg_69 using 'mv' as select unit_id,y_year_id,country_id,sum(dollar_value_id),max(dollar_value),min(dollar_value),sum(quantity),min(quantity),max(quantity) from babu_2 group by unit_id,y_year_id,country_id;
> create datamap agg_70 using 'mv' as select unit_id,sum(dollar_value_id) from babu_2 group by unit_id;
>
> Size of each MV table:
>
> BLR1000023613:/srv/spark2.2Bigdata/install/spark/sparkJdbc # hadoop fs -du -s -h /user/hive/warehouse/carbon.store/default/agg_69_table
> *86.4 K* /user/hive/warehouse/carbon.store/default/agg_69_table
> BLR1000023613:/srv/spark2.2Bigdata/install/spark/sparkJdbc # hadoop fs -du -s -h /user/hive/warehouse/carbon.store/default/agg_70_table
> *2.9 K* /user/hive/warehouse/carbon.store/default/agg_70_table
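>
> The sizes above come from hadoop fs -du -s. A size-based choosing policy needs the same number programmatically; below is a minimal sketch using the standard Hadoop FileSystem API (storePath is an illustrative parameter here; in CarbonData the store location would come from the table's metadata).
>
> {code:scala}
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.fs.Path
>
> // Total on-disk size of a table's store directory, in bytes
> // (the programmatic equivalent of `hadoop fs -du -s <path>`).
> def tableSizeInBytes(storePath: String,
>     conf: Configuration = new Configuration()): Long = {
>   val path = new Path(storePath)
>   path.getFileSystem(conf).getContentSummary(path).getLength
> }
> {code}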
> Now run the select query that was given while creating agg_70:
>
> 0: jdbc:hive2://10.18.222.231:23040> explain select unit_id,sum(dollar_value_id) from babu_2 group by unit_id;
> +----------------------------------------------------------------------------------------------+
> | plan |
> +----------------------------------------------------------------------------------------------+
> | == Physical Plan ==
> *...........................................
> +- *BatchedScan CarbonDatasourceHadoopRelation [ Database name :default, Table name :*agg_69_table*, Schema
>
> But based on table size, agg_70 should be selected: its MV table is 2.9 K versus 86.4 K for agg_69. Both MVs can answer this query, and the current policy simply takes the first match; the choosing policy should instead prefer the datamap with the smallest associated table.
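>
> A minimal sketch of the proposed policy, assuming the query-matching step already produced a list of candidate MVs (MatchedDataMap, associatedTablePath and sizeOf are hypothetical names, not the actual CarbonData API):
>
> {code:scala}
> // Hypothetical candidate produced by the query-matching step.
> case class MatchedDataMap(name: String, associatedTablePath: String)
>
> // Among all datamaps that can answer the query, pick the one backed by
> // the smallest table instead of the first match. sizeOf could be backed
> // by something like tableSizeInBytes above.
> def chooseDataMap(candidates: Seq[MatchedDataMap],
>     sizeOf: String => Long): Option[MatchedDataMap] =
>   if (candidates.isEmpty) None
>   else Some(candidates.minBy(c => sizeOf(c.associatedTablePath)))
> {code}
>
> With agg_69_table (86.4 K) and agg_70_table (2.9 K) both matching, this policy would rewrite the query against agg_70_table.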