GitHub user sraghunandan opened a pull request:
https://github.com/apache/carbondata/pull/2313

[CARBONDATA-2489] Coverity scan fixes
https://scan4.coverity.com/reports.htm#v29367/p11911

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:

- [x] Any interfaces changed? NO
- [x] Any backward compatibility impacted? NO
- [x] Document update required? NO
- [x] Testing done. Please provide details on:
  - Whether new unit test cases have been added or why no new tests are required?
  - How it is tested? Please attach the test report.
  - Is it a performance-related change? Please attach the performance test report.
  - Any additional information to help reviewers in testing this change.
  All test cases run.
- [x] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA. NA

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/sraghunandan/carbondata-1 coverity_fortify_fixes

Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/carbondata/pull/2313.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2313

----

commit 0537515aa290bcc9bdc9ff3ce1bccca103b51def
Author: Raghunandan S <carbondatacontributions@...>
Date: 2017-08-27T18:07:05Z

    coverity scan fixes
    https://scan4.coverity.com/reports.htm#v29367/p11911

----
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2313 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/5922/ ---
In reply to this post by qiuchenjian-2
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2313 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/4768/ ---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2313 SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/4953/ ---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2313 SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/4954/ ---
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2313#discussion_r188844399 --- Diff: core/src/main/java/org/apache/carbondata/core/datastore/chunk/reader/measure/AbstractMeasureChunkReaderV2V3Format.java --- @@ -103,11 +103,14 @@ public AbstractMeasureChunkReaderV2V3Format(final BlockletInfo blockletInfo, * @param presentMetadataThrift * @return wrapper presence meta */ - protected BitSet getNullBitSet( - org.apache.carbondata.format.PresenceMeta presentMetadataThrift) { + protected BitSet getNullBitSet(org.apache.carbondata.format.PresenceMeta presentMetadataThrift) { Compressor compressor = CompressorFactory.getInstance().getCompressor(); - return BitSet.valueOf( - compressor.unCompressByte(presentMetadataThrift.getPresent_bit_stream())); + if (null != presentMetadataThrift.getPresent_bit_stream()) { --- End diff -- Take presentMetadataThrift.getPresent_bit_stream() into a variable and check that. ---
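The suggestion above can be sketched as follows. This is a minimal illustration only: the Thrift `PresenceMeta` and the compressor are replaced by a plain `byte[]` parameter, and the decompression step is omitted, so the class and method here are hypothetical stand-ins, not CarbonData code.

```java
import java.util.BitSet;

public class NullBitSetSketch {
    // Hypothetical stand-in for getNullBitSet(): the bit stream is taken into a
    // local variable once and null-checked, rather than calling the getter twice.
    static BitSet getNullBitSet(byte[] presentBitStream) {
        byte[] stream = presentBitStream;
        if (null == stream) {
            return new BitSet(); // empty presence meta when no stream is persisted
        }
        return BitSet.valueOf(stream); // decompression step omitted in this sketch
    }
}
```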
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2313#discussion_r188844850 --- Diff: core/src/main/java/org/apache/carbondata/core/datastore/chunk/store/impl/safe/SafeVariableLengthDimensionDataChunkStore.java --- @@ -49,6 +49,32 @@ public SafeVariableLengthDimensionDataChunkStore(boolean isInvertedIndex, int nu this.dataOffsets = new int[numberOfRows]; } + @Override public byte[] getRow(int rowId) { --- End diff -- why is it moved? --- |
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2313#discussion_r188845326 --- Diff: core/src/main/java/org/apache/carbondata/core/datastore/compression/SnappyCompressor.java --- @@ -135,52 +136,52 @@ public String getName() { return Snappy.uncompressIntArray(compInput); } catch (IOException e) { LOGGER.error(e, e.getMessage()); + throw new RuntimeException(e); } - return null; } @Override public int[] unCompressInt(byte[] compInput, int offset, int length) { try { return Snappy.uncompressIntArray(compInput, offset, length); } catch (IOException e) { LOGGER.error(e, e.getMessage()); + throw new RuntimeException(e); } - return null; } @Override public byte[] compressLong(long[] unCompInput) { try { return Snappy.compress(unCompInput); } catch (IOException e) { LOGGER.error(e, e.getMessage()); - return null; + throw new RuntimeException(e); } } - @Override public long[] unCompressLong(byte[] compInput) { + @Override public long[] uncompresslong(byte[] compinput) { --- End diff -- why is it renamed? --- |
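The pattern this diff applies (aside from the questioned rename) can be sketched as below: instead of logging an `IOException` and returning null, which leaves a latent NPE at every call site that Coverity flags, the exception is wrapped and rethrown unchecked. The Snappy call is simulated here; the class and the `simulateFailure` flag are illustrative assumptions.

```java
import java.io.IOException;

public class CompressorSketch {
    // Placeholder for Snappy.uncompressIntArray; simulateFailure stands in for a
    // real decompression error.
    static int[] unCompressInt(byte[] compInput, boolean simulateFailure) {
        try {
            if (simulateFailure) {
                throw new IOException("corrupt input");
            }
            return new int[] { compInput.length };
        } catch (IOException e) {
            // Rethrow unchecked rather than returning null to the caller.
            throw new RuntimeException(e);
        }
    }

    static boolean rethrowsUnchecked() {
        try {
            unCompressInt(new byte[0], true);
            return false;
        } catch (RuntimeException e) {
            return e.getCause() instanceof IOException;
        }
    }
}
```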
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2313#discussion_r188882558 --- Diff: core/src/main/java/org/apache/carbondata/core/dictionary/generator/IncrementalColumnDictionaryGenerator.java --- @@ -89,15 +89,19 @@ public IncrementalColumnDictionaryGenerator(CarbonDimension dimension, int maxVa } @Override public Integer getKey(String value) { - return incrementalCache.get(value); + synchronized (lock) { --- End diff -- This lock is not required --- |
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2313#discussion_r188906834 --- Diff: core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/SegmentIndexFileStore.java --- @@ -282,12 +282,15 @@ private void readIndexFile(CarbonFile indexFile) throws IOException { DataInputStream dataInputStream = FileFactory.getDataInputStream(indexFilePath, FileFactory.getFileType(indexFilePath)); byte[] bytes = new byte[(int) indexFile.getSize()]; - dataInputStream.readFully(bytes); - carbonIndexMap.put(indexFile.getName(), bytes); - carbonIndexMapWithFullPath.put( - indexFile.getParentFile().getAbsolutePath() + CarbonCommonConstants.FILE_SEPARATOR - + indexFile.getName(), bytes); - dataInputStream.close(); + try { --- End diff -- Finally should have only close --- |
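The shape the reviewer asks for can be sketched like this: the `finally` block holds only the `close()` call, so the stream is released even when `readFully()` fails, while the success-path work (in the original, the map puts) stays inside the `try`. The class name and the in-memory stream are illustrative assumptions.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class ReadFullySketch {
    static byte[] readIndexBytes(byte[] source) {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(source));
        byte[] bytes = new byte[source.length];
        try {
            try {
                in.readFully(bytes);
                // success-path work (e.g. map puts) would go here, inside try
            } finally {
                in.close(); // only the close lives in finally
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return bytes;
    }
}
```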
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2313#discussion_r188960024 --- Diff: core/src/main/java/org/apache/carbondata/core/memory/UnsafeMemoryManager.java --- @@ -152,14 +152,15 @@ public synchronized boolean isMemoryAvailable() { return memoryUsed > totalMemory; } - public long getUsableMemory() { + public synchronized long getUsableMemory() { --- End diff -- Lock is not required. ---
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2313#discussion_r188965798 --- Diff: core/src/main/java/org/apache/carbondata/core/metadata/datatype/ArrayType.java --- @@ -31,6 +31,31 @@ public boolean isComplexType() { return true; } + @Override + public boolean equals(Object obj) { + if (this == obj) { + return true; + } + if (obj == null) { + return false; + } + if (!(obj instanceof ArrayType)) { + return false; + } + if (!this.getName().equalsIgnoreCase(((ArrayType) obj).getName())) { --- End diff -- Equals and hashCode should be implemented for all data types; each requires different logic: 1) Nested types should also include childType.equals. 2) Decimal type should consider precision, scale, etc. ---
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2313#discussion_r188969041 --- Diff: core/src/main/java/org/apache/carbondata/core/metadata/datatype/DecimalType.java --- @@ -44,4 +44,35 @@ public int getScale() { public void setScale(int scale) { this.scale = scale; } + + @Override + public boolean equals(Object obj) { + if (this == obj) { + return true; + } + if (obj == null) { + return false; + } + if (!(obj instanceof DecimalType)) { + return false; + } + if (!this.getName().equalsIgnoreCase(((DecimalType) obj).getName())) { + return false; + } + if (this.precision != ((DecimalType) obj).precision) { + return false; + } + if (this.scale != ((DecimalType) obj).scale) { + return false; + } + return true; + } + + @Override + public int hashCode() { + final int prime = 31; + int result = 1; + result = prime * result + getName().hashCode(); --- End diff -- should include precision.hash and scale.hash ---
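The fix the reviewer asks for can be sketched with a minimal stand-in for `DecimalType` (the real class carries more state; the class here is an illustrative assumption). The point is that `hashCode` must fold in every field that `equals` compares — name, precision, and scale — or unequal types could collide systematically and equal types could hash differently.

```java
import java.util.Locale;
import java.util.Objects;

public class DecimalTypeSketch {
    final String name;
    final int precision;
    final int scale;

    DecimalTypeSketch(String name, int precision, int scale) {
        this.name = name;
        this.precision = precision;
        this.scale = scale;
    }

    @Override public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        if (!(obj instanceof DecimalTypeSketch)) {
            return false;
        }
        DecimalTypeSketch other = (DecimalTypeSketch) obj;
        return name.equalsIgnoreCase(other.name)
            && precision == other.precision
            && scale == other.scale;
    }

    @Override public int hashCode() {
        // Lower-case the name so hashCode agrees with the case-insensitive equals,
        // and include precision and scale, matching the fields equals() compares.
        return Objects.hash(name.toLowerCase(Locale.ROOT), precision, scale);
    }
}
```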
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2313#discussion_r188972300 --- Diff: core/src/main/java/org/apache/carbondata/core/reader/CarbonDictionaryMetadataReaderImpl.java --- @@ -128,6 +131,9 @@ public CarbonDictionaryMetadataReaderImpl( break; } } + if (null == dictionaryChunkMeta) { + throw new IOException("Last dictionary chunk does not exist"); --- End diff -- Message should be "Matching dictionary chunk for offset not found" --- |
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2313#discussion_r188992675 --- Diff: core/src/main/java/org/apache/carbondata/core/scan/filter/executer/RowLevelRangeGrtThanFiterExecuterImpl.java --- @@ -98,9 +98,11 @@ private void ifDefaultValueMatchesFilter() { } else if (!msrColEvalutorInfoList.isEmpty() && !isMeasurePresentInCurrentBlock[0]) { CarbonMeasure measure = this.msrColEvalutorInfoList.get(0).getMeasure(); byte[] defaultValue = measure.getDefaultValue(); + SerializableComparator comparatorTmp = --- End diff -- if comparator is null then can create tmpComparator. --- |
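The lazy-creation idea in this comment can be sketched as below. The class name, the `byte[]` element type, and the length-based ordering are all illustrative assumptions; the point is only that a temporary comparator is created when none has been set yet, rather than unconditionally on every call.

```java
import java.util.Comparator;

public class LazyComparatorSketch {
    private Comparator<byte[]> comparator;

    // Create the comparator only if it is still null, then reuse it.
    Comparator<byte[]> getComparator() {
        if (null == comparator) {
            comparator = Comparator.comparingInt(a -> a.length); // placeholder ordering
        }
        return comparator;
    }

    static boolean reusesInstance() {
        LazyComparatorSketch s = new LazyComparatorSketch();
        return s.getComparator() == s.getComparator();
    }
}
```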
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2313#discussion_r189165438 --- Diff: core/src/main/java/org/apache/carbondata/core/util/AbstractDataFileFooterConverter.java --- @@ -60,7 +60,11 @@ */ private static BitSet getPresenceMeta( org.apache.carbondata.format.PresenceMeta presentMetadataThrift) { - return BitSet.valueOf(presentMetadataThrift.getPresent_bit_stream()); + if (null != presentMetadataThrift.getPresent_bit_stream()) { --- End diff -- Take return value to variable --- |
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2313#discussion_r189166315 --- Diff: core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentUpdateStatusManager.java --- @@ -348,32 +348,37 @@ public boolean isBlockValid(String segName, String blockName) { private List<String> getFilePaths(CarbonFile blockDir, final String blockNameFromTuple, final String extension, List<String> deleteFileList, final long deltaStartTimestamp, final long deltaEndTimeStamp) { - CarbonFile[] files = blockDir.getParentFile().listFiles(new CarbonFileFilter() { - - @Override public boolean accept(CarbonFile pathName) { - String fileName = pathName.getName(); - if (fileName.endsWith(extension) && pathName.getSize() > 0) { - String firstPart = fileName.substring(0, fileName.indexOf('.')); - String blockName = - firstPart.substring(0, firstPart.lastIndexOf(CarbonCommonConstants.HYPHEN)); - long timestamp = Long.parseLong(firstPart - .substring(firstPart.lastIndexOf(CarbonCommonConstants.HYPHEN) + 1, - firstPart.length())); - if (blockNameFromTuple.equals(blockName) && ( - (Long.compare(timestamp, deltaEndTimeStamp) <= 0) && ( - Long.compare(timestamp, deltaStartTimestamp) >= 0))) { - return true; + if (null != blockDir.getParentFile()) { --- End diff -- If the parent file is not present, it should throw an IOException. ---
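The fail-fast behaviour requested here (and echoed in the invalid-path and null-`blockletDetailInfo` comments below) can be sketched with plain `java.io.File` in place of `CarbonFile`; the class and method names are illustrative assumptions. Rather than silently skipping the listing when the parent is missing, the method throws.

```java
import java.io.File;
import java.io.IOException;

public class ParentCheckSketch {
    // Throw an IOException when the parent directory is missing, instead of
    // wrapping the whole body in a null check and returning an empty result.
    static File requireParent(File blockDir) throws IOException {
        File parent = blockDir.getParentFile();
        if (null == parent) {
            throw new IOException("Parent directory not found for " + blockDir);
        }
        return parent;
    }

    static boolean hasParent(String path) {
        try {
            return requireParent(new File(path)) != null;
        } catch (IOException e) {
            return false;
        }
    }

    static boolean throwsWhenNoParent() {
        try {
            requireParent(new File("relative-name-without-parent"));
            return false;
        } catch (IOException e) {
            return true;
        }
    }
}
```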
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2313#discussion_r189174422 --- Diff: integration/hive/src/main/java/org/apache/carbondata/hive/MapredCarbonInputFormat.java --- @@ -77,13 +77,15 @@ private static void populateCarbonTable(Configuration configuration, String path } } } - AbsoluteTableIdentifier absoluteTableIdentifier = AbsoluteTableIdentifier - .from(validInputPath, getDatabaseName(configuration), getTableName(configuration)); - // read the schema file to get the absoluteTableIdentifier having the correct table id - // persisted in the schema - CarbonTable carbonTable = SchemaReader.readCarbonTableFromStore(absoluteTableIdentifier); - configuration.set(CARBON_TABLE, ObjectSerializationUtil.convertObjectToString(carbonTable)); - setTableInfo(configuration, carbonTable.getTableInfo()); + if (null != validInputPath) { --- End diff -- InvalidPath should throw exception --- |
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2313#discussion_r189175211 --- Diff: integration/presto/src/main/java/org/apache/carbondata/presto/impl/CarbonLocalInputSplit.java --- @@ -115,23 +115,26 @@ public void setDetailInfo(BlockletDetailInfo blockletDetailInfo) { } - public static CarbonInputSplit convertSplit(CarbonLocalInputSplit carbonLocalInputSplit) { + public static CarbonInputSplit convertSplit(CarbonLocalInputSplit carbonLocalInputSplit) { CarbonInputSplit inputSplit = new CarbonInputSplit(carbonLocalInputSplit.getSegmentId(), "0", new Path(carbonLocalInputSplit.getPath()), carbonLocalInputSplit.getStart(), carbonLocalInputSplit.getLength(), carbonLocalInputSplit.getLocations() .toArray(new String[carbonLocalInputSplit.getLocations().size()]), - carbonLocalInputSplit.getNumberOfBlocklets(), ColumnarFormatVersion.valueOf(carbonLocalInputSplit.getVersion()), + carbonLocalInputSplit.getNumberOfBlocklets(), + ColumnarFormatVersion.valueOf(carbonLocalInputSplit.getVersion()), carbonLocalInputSplit.getDeleteDeltaFiles()); Gson gson = new Gson(); - BlockletDetailInfo blockletDetailInfo = gson.fromJson(carbonLocalInputSplit.detailInfo, BlockletDetailInfo.class); - try { - blockletDetailInfo.readColumnSchema(blockletDetailInfo.getColumnSchemaBinary()); - } catch (IOException e) { - throw new RuntimeException(e); + BlockletDetailInfo blockletDetailInfo = + gson.fromJson(carbonLocalInputSplit.detailInfo, BlockletDetailInfo.class); + + if (null != blockletDetailInfo) { --- End diff -- blockletDetailInfo null case should throw exception --- |
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2313#discussion_r189180538 --- Diff: integration/spark2/src/main/java/org/apache/carbondata/spark/vectorreader/VectorizedCarbonRecordReader.java --- @@ -144,14 +145,10 @@ public void initialize(InputSplit inputSplit, TaskAttemptContext taskAttemptCont } throw new InterruptedException(e.getMessage()); } catch (Exception e) { - Throwable ext = e; - while (ext != null) { - if (ext instanceof FileNotFoundException) { - LOGGER.error(e); - throw new InterruptedException( - "Insert overwrite may be in progress.Please check " + e.getMessage()); - } - ext = ext.getCause(); + if (ExceptionUtils.hasCause(e, FileNotFoundException.class)) { --- End diff -- hasCause is not recursive, so can use indexOfThrowable --- |
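The distinction the reviewer draws can be sketched without the Commons Lang dependency: a recursive walk over the cause chain, which is the behaviour `ExceptionUtils.indexOfThrowable` provides, versus a single-level check that would miss a `FileNotFoundException` buried more than one cause deep. The class name is an illustrative assumption.

```java
import java.io.FileNotFoundException;

public class CauseChainSketch {
    // Walk the full cause chain looking for an instance of the given type.
    static boolean hasCauseRecursive(Throwable t, Class<? extends Throwable> type) {
        Throwable cur = t;
        while (cur != null) {
            if (type.isInstance(cur)) {
                return true;
            }
            // Guard against self-referential cause chains.
            cur = (cur.getCause() == cur) ? null : cur.getCause();
        }
        return false;
    }
}
```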