Posted by
Akash R Nilugal (Jira) on
Dec 13, 2016; 12:04pm
URL: http://apache-carbondata-dev-mailing-list-archive.168.s1.nabble.com/jira-Created-CARBONDATA-530-Query-with-ordery-by-and-limit-is-not-optimized-properly-tp4352.html
Ashok Kumar created CARBONDATA-530:
--------------------------------------
Summary: Query with order by and limit is not optimized properly
Key: CARBONDATA-530
URL: https://issues.apache.org/jira/browse/CARBONDATA-530
Project: CarbonData
Issue Type: Bug
Reporter: Ashok Kumar
Priority: Minor
For an order by query with a limit, Spark normally optimizes the plan.
But because we place the dictionary decoder between the Limit and TungstenSort operators, Spark is unable to apply that optimization. See the plan below:
|== Physical Plan == |
|Limit 2 |
| ConvertToSafe |
| CarbonDictionaryDecoder [CarbonDecoderRelation(Map(name#3 -> name#3),CarbonDatasourceRelation(`default`.`dict`,None))], ExcludeProfile(ArrayBuffer(name#3)), CarbonAliasDecoderRelation() |
| TungstenSort [name#3 ASC], true, 0 |
| ConvertToUnsafe |
| Exchange rangepartitioning(name#3 ASC) |
| ConvertToSafe |
| CarbonDictionaryDecoder [CarbonDecoderRelation(Map(name#3 -> name#3),CarbonDatasourceRelation(`default`.`dict`,None))], IncludeProfile(ArrayBuffer(name#3)), CarbonAliasDecoderRelation() |
| CarbonScan [name#3], (CarbonRelation default, dict, CarbonMetaData(ArrayBuffer(name),ArrayBuffer(default_dummy_measure),org.apache.carbondata.core.carbon.metadata.schema.table.CarbonTable@6021d179,DictionaryMap(Map(name -> true))), org.apache.carbondata.spark.merger.TableMeta@4c3f903d, None), [(name#3 = hello)], false|
| |
|Code Generation: true |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
We should place the outer decoder on top of the Limit instead, so the Limit sits directly above the sort and Spark's optimization can apply.
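For context on why this matters: when a sort is immediately followed by a limit, Spark can replace the full sort with a top-k operation (TakeOrdered in Spark 1.x) that keeps only k rows per partition instead of sorting everything. The decoder node between Limit and TungstenSort prevents that pattern from matching. A minimal sketch of the difference in plain Python (illustrative only, not CarbonData code; the row data is made up):

```python
import heapq

def limit_after_full_sort(rows, k, key=lambda r: r):
    # What the unoptimized plan effectively does:
    # sort the entire input (O(n log n)), then take the first k rows.
    return sorted(rows, key=key)[:k]

def take_ordered(rows, k, key=lambda r: r):
    # What the optimized plan does instead:
    # maintain only the k smallest rows while scanning (O(n log k)).
    return heapq.nsmallest(k, rows, key=key)

# Both produce the same answer; only the cost differs.
rows = ["delta", "alpha", "charlie", "bravo", "echo"]
print(take_ordered(rows, 2))  # ['alpha', 'bravo']
```

With the decoder moved above the Limit, the Limit/TungstenSort pair is adjacent again and Spark can pick the cheaper strategy.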
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)