[jira] [Updated] (CARBONDATA-464) Too many times GC occurs in query if we increase the blocklet size



Akash R Nilugal (Jira)

     [ https://issues.apache.org/jira/browse/CARBONDATA-464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

suo tong updated CARBONDATA-464:
--------------------------------
    Description:
Parquet may fetch around 1 million rows per I/O, but its data is divided into column chunks, which can be uncompressed and processed independently.
In current CarbonData, using a larger blocklet also requires more processing memory, because all required columns of the complete blocklet are decompressed and held in memory. We should consider a similar approach to balance I/O and processing.
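The trade-off described above can be sketched as follows. This is a minimal, hypothetical illustration (not CarbonData's actual reader code, which is Java): each column chunk keeps its own compressed blob and is decompressed only when a query projects it, so peak processing memory scales with the projected columns rather than with the whole blocklet.

```python
# Hypothetical sketch of per-column-chunk lazy decompression.
# Names (ColumnChunk, Blocklet, project) are illustrative, not CarbonData APIs.
import zlib


class ColumnChunk:
    """A column slice that can be decompressed independently of its siblings."""

    def __init__(self, values):
        self._blob = zlib.compress(",".join(map(str, values)).encode())
        self.decompressed = False  # track whether this chunk was ever inflated

    def read(self):
        self.decompressed = True
        return [int(v) for v in zlib.decompress(self._blob).decode().split(",")]


class Blocklet:
    """A group of column chunks; decompression happens per chunk, on demand."""

    def __init__(self, columns):
        self.chunks = {name: ColumnChunk(vals) for name, vals in columns.items()}

    def project(self, names):
        # Only projected columns are inflated; the rest stay compressed,
        # so memory is proportional to the projection, not the blocklet size.
        return {n: self.chunks[n].read() for n in names}


blocklet = Blocklet({"a": [1, 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]})
result = blocklet.project(["a"])
touched = [n for n, c in blocklet.chunks.items() if c.decompressed]
```

With a whole-blocklet design, all three chunks would be inflated even for a single-column query; here only chunk "a" is, which is the balance between I/O batch size and processing memory the description argues for.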

> Too many times GC occurs in query if we increase the blocklet size
> ------------------------------------------------------------------
>
>                 Key: CARBONDATA-464
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-464
>             Project: CarbonData
>          Issue Type: Sub-task
>            Reporter: suo tong
>
> Parquet may fetch around 1 million rows per I/O, but its data is divided into column chunks, which can be uncompressed and processed independently.
> In current CarbonData, using a larger blocklet also requires more processing memory, because all required columns of the complete blocklet are decompressed and held in memory. We should consider a similar approach to balance I/O and processing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)