[ https://issues.apache.org/jira/browse/CARBONDATA-464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15704506#comment-15704506 ]
suo tong commented on CARBONDATA-464:
-------------------------------------
For each executor, we can use a simple estimate of the memory needed: numberOfTasks * numberOfBlocklets * numberOfColumns * 2 * columnSize
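A minimal sketch of that estimate in Java; the helper name and the sample numbers are illustrative assumptions, not values taken from Carbon, and the reading of the factor of 2 is only my guess:

{code:java}
public final class BlockletMemoryEstimate {

    /**
     * Estimated bytes needed per executor, per the formula above:
     * numberOfTasks * numberOfBlocklets * numberOfColumns * 2 * columnSize.
     * The factor of 2 presumably covers holding both a compressed and a
     * decompressed copy of each column chunk (an assumption, not stated
     * in the comment).
     */
    static long estimate(long numberOfTasks, long numberOfBlocklets,
                         long numberOfColumns, long columnSizeInBytes) {
        return numberOfTasks * numberOfBlocklets * numberOfColumns
                * 2 * columnSizeInBytes;
    }

    public static void main(String[] args) {
        // Hypothetical numbers: 4 concurrent tasks, 10 blocklets each,
        // 20 projected columns, 8 MB per decompressed column chunk.
        long bytes = estimate(4, 10, 20, 8L << 20);
        System.out.printf("Estimated memory per executor: %d MB%n",
                bytes >> 20); // prints 12800 MB for these inputs
    }
}
{code}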
> Too many times GC occurs in query if we increase the blocklet size
> ------------------------------------------------------------------
>
> Key: CARBONDATA-464
> URL: https://issues.apache.org/jira/browse/CARBONDATA-464
> Project: CarbonData
> Issue Type: Sub-task
> Reporter: suo tong
>
> Parquet might fetch 1 million rows from I/O at one time, but its data is divided into column chunks, and each column chunk consists of many pages; a page (default size 1 MB) can be independently uncompressed and processed.
> In the case of current Carbon, if we use a larger blocklet, it also requires more processing memory, as it decompresses the required columns of the complete blocklet and keeps them in memory. Maybe we should consider coming up with a similar approach to balance I/O and processing, but such a change requires Carbon format-level changes.
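To make the contrast in the quoted description concrete, here is a hedged Java sketch; CompressedPage and both process methods are hypothetical illustrations, not actual Parquet or CarbonData APIs:

{code:java}
import java.util.ArrayList;
import java.util.List;

final class PageWiseReaderSketch {

    /** A compressed unit that can be decompressed independently. */
    interface CompressedPage {
        byte[] decompress(); // e.g., ~1 MB per page, Parquet-style
    }

    /**
     * Parquet-style processing: decompress one page at a time, so peak
     * memory stays near one page per column regardless of how much data
     * is fetched from I/O in one go.
     */
    static void processPageWise(List<CompressedPage> pages) {
        for (CompressedPage page : pages) {
            byte[] data = page.decompress(); // only this page is live
            consume(data);                   // becomes garbage right after
        }
    }

    /**
     * Current-Carbon-style processing, as described above: decompress
     * all pages of the blocklet's required columns up front, so peak
     * memory grows with blocklet size and pressures the GC.
     */
    static void processBlockletWise(List<CompressedPage> pages) {
        List<byte[]> decompressed = new ArrayList<>();
        for (CompressedPage page : pages) {
            decompressed.add(page.decompress()); // whole blocklet held at once
        }
        for (byte[] data : decompressed) {
            consume(data);
        }
    }

    static void consume(byte[] data) {
        // placeholder for the actual scan/filter work
    }
}
{code}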