[
https://issues.apache.org/jira/browse/CARBONDATA-241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15496975#comment-15496975 ]
ASF GitHub Bot commented on CARBONDATA-241:
-------------------------------------------
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/158#discussion_r79222167
--- Diff: hadoop/src/main/java/org/apache/carbondata/hadoop/CarbonInputFormat.java ---
@@ -101,6 +106,8 @@
//comma separated list of input segment numbers
public static final String INPUT_SEGMENT_NUMBERS =
"mapreduce.input.carboninputformat.segmentnumbers";
+ public static final String INVALID_SEGMENT_NUMBERS =
+ "mapreduce.input.carboninputformat.invalidsegmentnumbers";
--- End diff ---
Invalid segment deletion need not go through CarbonInputFormat. When the invalid segments list is given to the BTree (on both the driver and the executor), it should be able to delete the invalid blocks.
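As a rough illustration of that suggestion, the eviction could live with the BTree-backed cache itself rather than in CarbonInputFormat. The sketch below is hypothetical (the class, field, and method names are not the actual CarbonData API) and only shows the shape of the idea:

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical BTree-backed segment cache; not the real CarbonData class.
public class SegmentBlockIndexCache {
  // segment id -> loaded BTree of block metadata for that segment
  private final Map<String, Object> btreeBySegment = new ConcurrentHashMap<>();

  // Given the invalid segments list (on both driver and executor),
  // drop their cached BTrees so the metadata can be garbage collected.
  public void removeInvalidSegments(List<String> invalidSegmentIds) {
    for (String segmentId : invalidSegmentIds) {
      btreeBySegment.remove(segmentId);
    }
  }
}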
> OOM error during query execution in long run
> --------------------------------------------
>
> Key: CARBONDATA-241
> URL: https://issues.apache.org/jira/browse/CARBONDATA-241
> Project: CarbonData
> Issue Type: Bug
> Reporter: kumar vishal
> Assignee: kumar vishal
>
> **Problem:** During long runs, query execution takes progressively more time and eventually throws an out-of-memory error.
> **Reason:** Compaction merges segments, and each segment's metadata is loaded into memory. After compaction the compacted segments become invalid, but their metadata is not removed from memory, so duplicate metadata piles up and consumes more and more memory until, after a few days, query execution throws OOM.
> **Solution:** Remove the invalid blocks' metadata from memory; see the sketch after this description.
>
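To make the proposed fix concrete, here is a minimal sketch of how the invalid segment list could be computed after compaction by diffing the cached segment ids against the valid segments from the table status. The class and method names are hypothetical, not the actual CarbonData code; the resulting list would then be passed to an eviction method like the one sketched above:

import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class CompactionCacheCleaner {
  // Compute segments that are still cached but no longer valid after
  // compaction. Both inputs are plain segment-id sets; validSegmentIds
  // would come from the table status file in a real implementation.
  public static List<String> findInvalidSegments(
      Set<String> cachedSegmentIds, Set<String> validSegmentIds) {
    List<String> invalid = new ArrayList<>();
    for (String cached : cachedSegmentIds) {
      if (!validSegmentIds.contains(cached)) {
        invalid.add(cached); // compacted-away segment still held in memory
      }
    }
    return invalid;
  }
}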
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)