[
https://issues.apache.org/jira/browse/CARBONDATA-241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15499570#comment-15499570 ]
ASF GitHub Bot commented on CARBONDATA-241:
-------------------------------------------
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/158#discussion_r79290267
--- Diff: hadoop/src/main/java/org/apache/carbondata/hadoop/CarbonInputFormat.java ---
@@ -706,8 +725,9 @@ private String getUpdateExtension() {
/**
* @return updateExtension
*/
- private String[] getValidSegments(JobContext job) throws IOException {
- String segmentString = job.getConfiguration().get(INPUT_SEGMENT_NUMBERS, "");
+ private String[] getSegmentsFromConfiguration(JobContext job, String segmentType)
+ throws IOException {
+ String segmentString = job.getConfiguration().get(segmentType, "");
--- End diff --
Change the signature back to the previous one.
> OOM error during query execution in long run
> --------------------------------------------
>
> Key: CARBONDATA-241
> URL:
https://issues.apache.org/jira/browse/CARBONDATA-241
> Project: CarbonData
> Issue Type: Bug
> Reporter: kumar vishal
> Assignee: kumar vishal
>
> **Problem:** During a long run, query execution takes more and more time and eventually throws an out-of-memory error.
> **Reason:** Compaction merges segments, and each segment's metadata is loaded into memory. After compaction the compacted segments become invalid, but their metadata is not removed from memory. This stale metadata piles up and consumes ever more memory, so after a few days query execution throws an OOM error.
> **Solution:** Remove the invalid blocks' metadata from memory.
>
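The eviction described in the solution can be sketched as a segment-metadata cache that retains only the currently valid segments. This is a minimal illustration, not CarbonData's actual implementation: the class and method names (`SegmentMetadataCache`, `retainOnlyValidSegments`) are hypothetical.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of an in-memory segment-metadata cache.
// After compaction, the driver would call retainOnlyValidSegments with
// the post-compaction valid segment list, so metadata of invalidated
// (compacted-away) segments no longer piles up in memory.
public class SegmentMetadataCache {
  private final Map<String, Object> metadataBySegmentId = new ConcurrentHashMap<>();

  public void put(String segmentId, Object metadata) {
    metadataBySegmentId.put(segmentId, metadata);
  }

  public Object get(String segmentId) {
    return metadataBySegmentId.get(segmentId);
  }

  /** Drop cached metadata for every segment not in the valid set. */
  public void retainOnlyValidSegments(Set<String> validSegmentIds) {
    metadataBySegmentId.keySet().retainAll(validSegmentIds);
  }

  public int size() {
    return metadataBySegmentId.size();
  }
}
```

For example, if segments "0", "1", and "2" are compacted into "0.1", passing the singleton set {"0.1"} evicts the three stale entries while keeping the merged segment's metadata cached.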
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)