[
https://issues.apache.org/jira/browse/CARBONDATA-241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15493819#comment-15493819 ]
ASF GitHub Bot commented on CARBONDATA-241:
-------------------------------------------
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/158#discussion_r79003748
--- Diff: processing/src/main/java/org/apache/carbondata/lcm/status/SegmentStatusManager.java ---
@@ -102,6 +91,60 @@ public long getTableStatusLastModifiedTime() throws IOException {
/**
* get valid segment for given table
+ *
+ * @return
+ * @throws IOException
+ */
+ public InvalidSegmentsInfo getInvalidSegments() throws IOException {
--- End diff ---
This requires reading SegmentInfo twice: once for valid blocks and again for invalid blocks. Instead, return a single class containing ValidAndInvalidBlocks.
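The reviewer's suggestion could be sketched roughly as below. This is an illustrative holder class, not CarbonData's actual API: class, method, and status names are assumptions. The idea is that one pass over the table status populates both lists, so the status file need not be read twice.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical holder returned from a single read of the table status:
// one pass classifies each segment as valid or invalid.
class ValidAndInvalidSegmentsInfo {
  private final List<String> validSegments = new ArrayList<>();
  private final List<String> invalidSegments = new ArrayList<>();

  // Classify a segment by its status string (status values are illustrative).
  void add(String segmentId, String status) {
    if ("Success".equals(status) || "Marked for Update".equals(status)) {
      validSegments.add(segmentId);
    } else {
      invalidSegments.add(segmentId);
    }
  }

  List<String> getValidSegments() {
    return validSegments;
  }

  List<String> getInvalidSegments() {
    return invalidSegments;
  }
}
```

A caller would then invoke one method and read both lists from the returned object, instead of calling separate valid/invalid getters that each re-read the status file.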
> OOM error during query execution in long run
> --------------------------------------------
>
> Key: CARBONDATA-241
> URL: https://issues.apache.org/jira/browse/CARBONDATA-241
> Project: CarbonData
> Issue Type: Bug
> Reporter: kumar vishal
> Assignee: kumar vishal
>
> **Problem:** During a long run, query execution takes progressively more time and eventually throws an out-of-memory error.
> **Reason:** During compaction, segments are merged, and each segment's metadata is loaded in memory. After compaction the compacted segments become invalid, but their metadata is not removed from memory. This duplicate metadata piles up, consuming more and more memory, until after a few days query execution throws an OOM error.
> **Solution:** Remove the invalid segments' blocks from memory.
>
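The fix described above amounts to evicting stale per-segment metadata from the driver-side cache once compaction invalidates those segments. A minimal sketch, assuming a simple map-backed cache (the class and method names here are illustrative, not CarbonData's actual implementation):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory cache of per-segment metadata. After compaction,
// the merged-away segments are evicted so their entries do not pile up
// and exhaust memory over a long run.
class SegmentMetadataCache {
  private final Map<String, Object> metadataBySegment = new ConcurrentHashMap<>();

  void put(String segmentId, Object metadata) {
    metadataBySegment.put(segmentId, metadata);
  }

  // Called after compaction with the segments that became invalid.
  void evictInvalidSegments(List<String> invalidSegments) {
    for (String segmentId : invalidSegments) {
      metadataBySegment.remove(segmentId);
    }
  }

  int size() {
    return metadataBySegment.size();
  }
}
```

Without the eviction step, each compaction cycle leaves the old segments' metadata behind, which matches the slow memory growth described in the report.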
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)