[jira] [Updated] (CARBONDATA-3245) java.io.IOException: Filesystem closed


Akash R Nilugal (Jira)

     [ https://issues.apache.org/jira/browse/CARBONDATA-3245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hao Ding updated CARBONDATA-3245:
---------------------------------
    Description:
CarbonData 1.5.1 throws the exception below when doing major compaction with massive datasets.

 

Caused by: java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: java.io.IOException: Filesystem closed
 at java.util.concurrent.FutureTask.report(FutureTask.java:122)
 at java.util.concurrent.FutureTask.get(FutureTask.java:192)
 at org.apache.carbondata.core.scan.processor.DataBlockIterator.processNextBlocklet(DataBlockIterator.java:164)
 at org.apache.carbondata.core.scan.processor.DataBlockIterator.updateScanner(DataBlockIterator.java:141)
 ... 21 more
 Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Filesystem closed
 at java.util.concurrent.FutureTask.report(FutureTask.java:122)
 at java.util.concurrent.FutureTask.get(FutureTask.java:192)
 at org.apache.carbondata.core.scan.processor.DataBlockIterator$1.call(DataBlockIterator.java:210)
 at org.apache.carbondata.core.scan.processor.DataBlockIterator$1.call(DataBlockIterator.java:205)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 ... 3 more
 Caused by: java.io.IOException: Filesystem closed
 at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:808)
 at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:868)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)
 at java.io.DataInputStream.readFully(DataInputStream.java:195)
 at java.io.DataInputStream.readFully(DataInputStream.java:169)
 at org.apache.carbondata.core.datastore.impl.DFSFileReaderImpl.read(DFSFileReaderImpl.java:85)
 at org.apache.carbondata.core.datastore.impl.DFSFileReaderImpl.readByteArray(DFSFileReaderImpl.java:52)
 at org.apache.carbondata.core.datastore.impl.DFSFileReaderImpl.readByteBuffer(DFSFileReaderImpl.java:141)
 at org.apache.carbondata.core.datastore.chunk.reader.dimension.v3.CompressedDimensionChunkFileBasedReaderV3.readRawDimensionChunksInGroup(CompressedDimensionChunkFileBasedReaderV3.java:183)
 at org.apache.carbondata.core.datastore.chunk.reader.dimension.AbstractChunkReaderV2V3Format.readRawDimensionChunks(AbstractChunkReaderV2V3Format.java:76)
 at org.apache.carbondata.core.indexstore.blockletindex.BlockletDataRefNode.readDimensionChunks(BlockletDataRefNode.java:151)
 at org.apache.carbondata.core.scan.scanner.impl.BlockletFullScanner.readBlocklet(BlockletFullScanner.java:145)
 at org.apache.carbondata.core.scan.processor.DataBlockIterator.readNextBlockletColumnChunks(DataBlockIterator.java:185)
 at org.apache.carbondata.core.scan.processor.DataBlockIterator.access$500(DataBlockIterator.java:46)
 at org.apache.carbondata.core.scan.processor.DataBlockIterator$2.call(DataBlockIterator.java:231)
 at org.apache.carbondata.core.scan.processor.DataBlockIterator$2.call(DataBlockIterator.java:226)

 

Possible cause:

The FileSystem instance obtained via getFileSystem in DFSFileReaderImpl.updateCache may have been closed by another thread:

private FSDataInputStream updateCache(String filePath) throws IOException {
    FSDataInputStream fileChannel = fileNameAndStreamCache.get(filePath);
    if (null == fileChannel) {
      Path pt = new Path(filePath);
      FileSystem fs = pt.getFileSystem(configuration); // closed by others?
      fileChannel = fs.open(pt);
      fileNameAndStreamCache.put(filePath, fileChannel);
    }
    return fileChannel;
  }
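
Path.getFileSystem delegates to FileSystem.get, which returns an instance from a JVM-wide cache keyed by scheme, authority, and user, so a close() by any other consumer of that cached instance invalidates every stream updateCache has handed out. A minimal pure-Java sketch of that suspected race (all class and field names here are hypothetical stand-ins for the Hadoop classes, not CarbonData code):

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Models the suspected race: updateCache hands out handles backed by one
// JVM-wide shared FileSystem; once any thread closes that shared instance,
// every cached handle fails with "Filesystem closed".
public class SharedFsRace {
  public static class MockFileSystem implements Closeable {
    volatile boolean closed = false;
    @Override public void close() { closed = true; }
    public String read(String path) throws IOException {
      // Mirrors DFSClient.checkOpen at the top of the posted stack trace.
      if (closed) throw new IOException("Filesystem closed");
      return "data:" + path;
    }
  }

  // One shared instance, as FileSystem.get's cache effectively provides.
  public static final MockFileSystem SHARED_FS = new MockFileSystem();
  static final Map<String, MockFileSystem> cache = new ConcurrentHashMap<>();

  public static MockFileSystem updateCache(String filePath) {
    // Analogous to fileNameAndStreamCache: reuse the shared instance.
    return cache.computeIfAbsent(filePath, p -> SHARED_FS);
  }

  public static void main(String[] args) {
    MockFileSystem fs = updateCache("/table/part-0");
    SHARED_FS.close(); // e.g. another task's shutdown hook closes the cached fs
    try {
      System.out.println(fs.read("/table/part-0"));
    } catch (IOException e) {
      System.out.println("Caught: " + e.getMessage()); // prints: Caught: Filesystem closed
    }
  }
}
```

If this is indeed the cause, possible mitigations include setting fs.hdfs.impl.disable.cache=true so FileSystem.get returns a private instance, or opening the reader's FileSystem via FileSystem.newInstance; both are assumptions about a fix, not what CarbonData actually does.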

 

  was:
carbondata 1.5.1, throw the below exception when doing compaction with massive datasets.


> java.io.IOException: Filesystem closed
> --------------------------------------
>
>                 Key: CARBONDATA-3245
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-3245
>             Project: CarbonData
>          Issue Type: Bug
>            Reporter: Hao Ding
>            Priority: Major
>
> carbondata 1.5.1, throw the below exception when doing major compaction with massive datasets.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)