[jira] [Commented] (CARBONDATA-906) Always OOM error when import large dataset (100milion rows)



Akash R Nilugal (Jira)

    [ https://issues.apache.org/jira/browse/CARBONDATA-906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15967344#comment-15967344 ]

Crabo Yang commented on CARBONDATA-906:
---------------------------------------

@Bhavya Aggwal, nothing changed with the new cache size. I'm doing a memory dump analysis.
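For reference, the settings usually adjusted for this kind of dictionary-cache pressure are the LRU cache size limits. A sketch of a carbon.properties fragment follows; the property names are an assumption and should be checked against the configuration reference for your CarbonData release:

```properties
# Cap the dictionary LRU cache on the driver and executors (values in MB).
# ASSUMPTION: these property names depend on the CarbonData release;
# verify against your version's configuration documentation before use.
carbon.max.driver.lru.cache.size=512
carbon.max.executor.lru.cache.size=1024
```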

Here is the dump summary log for your reference:
645.575: [Full GC [PSYoungGen: 1515008K->1515001K(3029504K)] [ParOldGen: 9087631K->9087631K(9088000K)] 10602639K->10602633K(12117504K) [PSPermGen: 61856K->61856K(62464K)], 7.7181000 secs] [Times: user=57.68 sys=0.03, real=7.71 secs]
653.294: [Full GC [PSYoungGen: 1515008K->1514969K(3029504K)] [ParOldGen: 9087631K->9087631K(9088000K)] 10602639K->10602600K(12117504K) [PSPermGen: 61856K->61856K(62464K)], 19.6959150 secs] [Times: user=149.28 sys=0.48, real=19.69 secs]
672.990: [Full GC [PSYoungGen: 1515008K->1514973K(3029504K)] [ParOldGen: 9087631K->9087631K(9088000K)] 10602639K->10602605K(12117504K) [PSPermGen: 61856K->61856K(62464K)], 8.0051300 secs] [Times: user=59.79 sys=0.04, real=8.01 secs]
680.996: [Full GC [PSYoungGen: 1515007K->1514973K(3029504K)] [ParOldGen: 9087631K->9087631K(9088000K)] 10602639K->10602605K(12117504K) [PSPermGen: 61856K->61856K(61952K)], 7.9419610 secs] [Times: user=59.35 sys=0.02, real=7.94 secs]
688.939: [Full GC [PSYoungGen: 1515008K->1514975K(3029504K)] [ParOldGen: 9087631K->9087631K(9088000K)] 10602639K->10602607K(12117504K) [PSPermGen: 61857K->61857K(61952K)], 7.8403410 secs] [Times: user=58.44 sys=0.04, real=7.84 secs]
696.780: [Full GC [PSYoungGen: 1515008K->1508833K(3029504K)] [ParOldGen: 9087631K->9087628K(9088000K)] 10602639K->10596461K(12117504K) [PSPermGen: 61857K->61317K(61440K)], 10.1406500 secs] [Times: user=74.19 sys=0.08, real=10.14 secs]
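The pattern in these lines is that the old generation stays essentially full after every Full GC (e.g. 9087631K->9087631K of a 9088000K capacity), so each collection reclaims nothing and the JVM eventually throws "GC overhead limit exceeded". A minimal sketch that extracts the old-gen occupancy from such a line, assuming the ParallelOldGC log format shown above:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GcLogCheck {
    // Matches the segment "[ParOldGen: <before>K-><after>K(<capacity>K)]"
    // in a ParallelOldGC Full GC log line, as in the dump above.
    private static final Pattern OLD_GEN =
        Pattern.compile("\\[ParOldGen: (\\d+)K->(\\d+)K\\((\\d+)K\\)\\]");

    /** Returns old-gen occupancy AFTER the collection, as a fraction of capacity. */
    static double oldGenOccupancy(String gcLine) {
        Matcher m = OLD_GEN.matcher(gcLine);
        if (!m.find()) {
            throw new IllegalArgumentException("no ParOldGen entry in: " + gcLine);
        }
        long after = Long.parseLong(m.group(2));
        long capacity = Long.parseLong(m.group(3));
        return (double) after / capacity;
    }

    public static void main(String[] args) {
        String line = "645.575: [Full GC [PSYoungGen: 1515008K->1515001K(3029504K)] "
                + "[ParOldGen: 9087631K->9087631K(9088000K)] "
                + "10602639K->10602633K(12117504K)]";
        // Occupancy stays above 99.99% even after a Full GC: nothing is
        // reclaimable, which is consistent with the OOM in the stack trace below.
        System.out.printf("old gen occupancy after Full GC: %.5f%n",
                oldGenOccupancy(line));
    }
}
```

If the occupancy stays near 1.0 across consecutive Full GCs, the heap holds live data (here, the reverse dictionary cache) that cannot be collected, so tuning GC alone will not help; the cache footprint itself has to shrink.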

> Always OOM error when import large dataset (100milion rows)
> -----------------------------------------------------------
>
>                 Key: CARBONDATA-906
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-906
>             Project: CarbonData
>          Issue Type: Bug
>          Components: data-load
>    Affects Versions: 1.0.0-incubating
>            Reporter: Crabo Yang
>         Attachments: carbon.properties
>
>
> java.lang.OutOfMemoryError: GC overhead limit exceeded
> at java.util.concurrent.ConcurrentHashMap$Segment.put(ConcurrentHashMap.java:457)
> at java.util.concurrent.ConcurrentHashMap.put(ConcurrentHashMap.java:1130)
> at org.apache.carbondata.core.cache.dictionary.ColumnReverseDictionaryInfo.addDataToDictionaryMap(ColumnReverseDictionaryInfo.java:101)
> at org.apache.carbondata.core.cache.dictionary.ColumnReverseDictionaryInfo.addDictionaryChunk(ColumnReverseDictionaryInfo.java:88)
> at org.apache.carbondata.core.cache.dictionary.DictionaryCacheLoaderImpl.fillDictionaryValuesAndAddToDictionaryChunks(DictionaryCacheLoaderImpl.java:113)
> at org.apache.carbondata.core.cache.dictionary.DictionaryCacheLoaderImpl.load(DictionaryCacheLoaderImpl.java:81)
> at org.apache.carbondata.core.cache.dictionary.AbstractDictionaryCache.loadDictionaryData(AbstractDictionaryCache.java:236)
> at org.apache.carbondata.core.cache.dictionary.AbstractDictionaryCache.checkAndLoadDictionaryData(AbstractDictionaryCache.java:186)
> at org.apache.carbondata.core.cache.dictionary.ReverseDictionaryCache.getDictionary(ReverseDictionaryCache.java:174)
> at org.apache.carbondata.core.cache.dictionary.ReverseDictionaryCache.get(ReverseDictionaryCache.java:67)
> at org.apache.carbondata.core.cache.dictionary.ReverseDictionaryCache.get(ReverseDictionaryCache.java:38)
> at org.apache.carbondata.processing.newflow.converter.impl.DictionaryFieldConverterImpl.<init>(DictionaryFieldConverterImpl.java:92)
> at org.apache.carbondata.processing.newflow.converter.impl.FieldEncoderFactory.createFieldEncoder(FieldEncoderFactory.java:77)
> at org.apache.carbondata.processing.newflow.converter.impl.RowConverterImpl.initialize(RowConverterImpl.java:102)
> at org.apache.carbondata.processing.newflow.steps.DataConverterProcessorStepImpl.initialize(DataConverterProcessorStepImpl.java:69)
> at org.apache.carbondata.processing.newflow.steps.SortProcessorStepImpl.initialize(SortProcessorStepImpl.java:57)
> at org.apache.carbondata.processing.newflow.steps.DataWriterProcessorStepImpl.initialize(DataWriterProcessorStepImpl.java:79)
> at org.apache.carbondata.processing.newflow.DataLoadExecutor.execute(DataLoadExecutor.java:45)
> at org.apache.carbondata.spark.rdd.NewDataFrameLoaderRDD$$anon$2.<init>(NewCarbonDataLoadRDD.scala:425)
> at org.apache.carbondata.spark.rdd.NewDataFrameLoaderRDD.compute(NewCarbonDataLoadRDD.scala:383)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
> at org.apache.spark.scheduler.Task.run(Task.scala:89)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)