GitHub user sv71294 opened a pull request:
https://github.com/apache/carbondata/pull/2412

[CARBONDATA-2656] Presto vector stream readers performance enhancement

**Background**: In the present system, we create a CarbonColumnVectorImpl object in CarbonVectorBatch, where the carbon core fills up vector data (one value at a time) into an array matching the column's data type. Later, at the time of the Presto block-builder call, the stream readers (selected by data type) read that array and iterate over it again to fill a block, which is returned to Presto.

**Solution**: We can eliminate the extra iteration over the CarbonColumnVectorImpl vector array by extending it in the StreamReaders, which fill the carbon-core vector data (one value at a time) directly into the Presto block; on the block-builder call, the block is returned to Presto.

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:

- [x] Any interfaces changed?
- [x] Any backward compatibility impacted? no
- [x] Document update required? no
- [x] Testing done
  Please provide details on
  - Whether new unit test cases have been added or why no new tests are required?
  - How it is tested? Please attach test report.
  - Is it a performance related change? Please attach the performance test report.
  - Any additional information to help reviewers in testing this change.
  [Performance test report](https://goo.gl/duTMBh)
- [x] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/sv71294/carbondata carbon-presto-direct-reader

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/2412.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2412

----

commit b5597f8d727ea75dba7935f1505cee97fd56123a
Author: sv71294 <sv71294@...>
Date: 2018-06-05T12:14:58Z

    direct vector readers

----
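The before/after data flow described in the PR can be sketched in plain Java. This is an illustrative sketch only: it uses java.util.List in place of Presto's BlockBuilder, and the class and method names are invented, not the actual CarbonData or Presto APIs.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the two data flows described in the PR.
// All names here are hypothetical; real code uses Presto's BlockBuilder.
class DirectFillSketch {

  // Old path: the carbon core fills a typed vector array, then a stream
  // reader iterates over it a second time to build the block (two passes).
  static List<Integer> copyThenBuild(int[] columnData) {
    Integer[] vector = new Integer[columnData.length]; // CarbonColumnVectorImpl-like buffer
    for (int i = 0; i < columnData.length; i++) {
      vector[i] = columnData[i];                       // pass 1: fill the vector
    }
    List<Integer> block = new ArrayList<>();
    for (Integer value : vector) {
      block.add(value);                                // pass 2: copy into the block
    }
    return block;
  }

  // New path: the reader IS the vector, so each value written by the scan
  // goes straight into the block (single pass, no intermediate array).
  static List<Integer> directFill(int[] columnData) {
    List<Integer> block = new ArrayList<>();
    for (int value : columnData) {
      block.add(value);                                // filled directly during the scan
    }
    return block;
  }
}
```

Both paths produce the same block; the second simply skips the intermediate array, which is the iteration the PR eliminates.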
Github user bhavya411 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2412#discussion_r198107936

--- Diff: integration/presto/src/main/java/org/apache/carbondata/presto/CarbonVectorBatch.java ---
@@ -20,50 +20,81 @@
 import java.util.HashSet;
 import java.util.Set;
+import org.apache.carbondata.core.cache.dictionary.Dictionary;
+import org.apache.carbondata.core.metadata.datatype.DataType;
+import org.apache.carbondata.core.metadata.datatype.DataTypes;
+import org.apache.carbondata.core.metadata.datatype.DecimalType;
 import org.apache.carbondata.core.metadata.datatype.StructField;
 import org.apache.carbondata.core.scan.result.vector.impl.CarbonColumnVectorImpl;
+import org.apache.carbondata.presto.readers.BooleanStreamReader;
+import org.apache.carbondata.presto.readers.DecimalSliceStreamReader;
+import org.apache.carbondata.presto.readers.DoubleStreamReader;
+import org.apache.carbondata.presto.readers.IntegerStreamReader;
+import org.apache.carbondata.presto.readers.LongStreamReader;
+import org.apache.carbondata.presto.readers.ObjectStreamReader;
+import org.apache.carbondata.presto.readers.ShortStreamReader;
+import org.apache.carbondata.presto.readers.SliceStreamReader;
+import org.apache.carbondata.presto.readers.TimestampStreamReader;
+
+import com.facebook.presto.spi.block.SliceArrayBlock;

 public class CarbonVectorBatch {

-  private static final int DEFAULT_BATCH_SIZE = 4 * 1024;
+  private static final int DEFAULT_BATCH_SIZE = 4 * 1024;

-  private final StructField[] schema;
   private final int capacity;
-  private int numRows;
   private final CarbonColumnVectorImpl[] columns;
-  // True if the row is filtered.
   private final boolean[] filteredRows;
-  // Column indices that cannot have null values.
   private final Set<Integer> nullFilteredColumns;
-
+  private int numRows;
   // Total number of rows that have been filtered.
   private int numRowsFiltered = 0;
-
-  private CarbonVectorBatch(StructField[] schema, int maxRows) {
-    this.schema = schema;
+  private CarbonVectorBatch(StructField[] schema, CarbonDictionaryDecodeReadSupport readSupport,
+      int maxRows) {
     this.capacity = maxRows;
     this.columns = new CarbonColumnVectorImpl[schema.length];
     this.nullFilteredColumns = new HashSet<>();
     this.filteredRows = new boolean[maxRows];
+    Dictionary[] dictionaries = readSupport.getDictionaries();
+    DataType[] dataTypes = readSupport.getDataTypes();
     for (int i = 0; i < schema.length; ++i) {
-      StructField field = schema[i];
-      columns[i] = new CarbonColumnVectorImpl(maxRows, field.getDataType());
+      columns[i] = createDirectStreamReader(maxRows, dataTypes[i], schema[i], dictionaries[i],
+          readSupport.getSliceArrayBlock(i));
--- End diff --

To be consistent, can you get the sliceArrayBlock and pass it on as a parameter?

---
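The createDirectStreamReader call in the diff above selects a reader per column based on its data type. A minimal, dependency-free sketch of that kind of dispatch follows; the enum and method names are hypothetical, and the real factory returns reader instances rather than names.

```java
// Hypothetical sketch of a data-type-based reader factory, mirroring the
// per-column dispatch done by createDirectStreamReader(...) in the diff.
class ReaderFactorySketch {

  // Stand-in for CarbonData's DataTypes; not the real enum.
  enum SketchDataType { SHORT, INT, LONG, DOUBLE, BOOLEAN, STRING, OBJECT }

  // Returns the name of the reader class the factory would instantiate.
  static String createReader(SketchDataType dataType) {
    switch (dataType) {
      case SHORT:   return "ShortStreamReader";
      case INT:     return "IntegerStreamReader";
      case LONG:    return "LongStreamReader";
      case DOUBLE:  return "DoubleStreamReader";
      case BOOLEAN: return "BooleanStreamReader";
      case STRING:  return "SliceStreamReader";
      default:      return "ObjectStreamReader"; // fallback, as in the real factory
    }
  }
}
```

The point of the dispatch is that each column vector in the batch is already the right block-building reader for its type, so no second type check is needed at block-build time.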
In reply to this post by qiuchenjian-2
Github user sv71294 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2412#discussion_r198116335

--- Diff: integration/presto/src/main/java/org/apache/carbondata/presto/CarbonVectorBatch.java ---

[quoted diff omitted -- same hunk as in the previous comment, ending at the createDirectStreamReader(...) call]

--- End diff --

Actually, we are getting the index value directly by getSliceArrayBlock instead of the array.

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2412

Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/6555/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2412

Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/5384/

---
Github user sv71294 commented on the issue:
https://github.com/apache/carbondata/pull/2412

retest this please

---
Github user bhavya411 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2412#discussion_r198136861

--- Diff: integration/presto/src/main/java/org/apache/carbondata/presto/readers/ObjectStreamReader.java ---
@@ -17,50 +17,50 @@
 package org.apache.carbondata.presto.readers;

-import java.io.IOException;
+import org.apache.carbondata.core.metadata.datatype.DataType;
+import org.apache.carbondata.core.scan.result.vector.impl.CarbonColumnVectorImpl;

 import com.facebook.presto.spi.block.Block;
 import com.facebook.presto.spi.block.BlockBuilder;
 import com.facebook.presto.spi.block.BlockBuilderStatus;
+import com.facebook.presto.spi.type.IntegerType;
 import com.facebook.presto.spi.type.Type;

 /**
  * Class to read the Object Stream
  */
-public class ObjectStreamReader extends AbstractStreamReader {
+public class ObjectStreamReader extends CarbonColumnVectorImpl implements PrestoVectorBlockBuilder {

+  protected int batchSize;
+  protected Type type = IntegerType.INTEGER;
+  protected BlockBuilder builder;

-  public ObjectStreamReader() {
+  public ObjectStreamReader(int batchSize, DataType dataType) {
+    super(batchSize, dataType);
+    this.batchSize = batchSize;
+    this.builder = type.createBlockBuilder(new BlockBuilderStatus(), batchSize);
   }

-  /**
-   * Function to create the object Block
-   * @param type
-   * @return
-   * @throws IOException
-   */
-  public Block readBlock(Type type) throws IOException {
-    int numberOfRows = 0;
-    BlockBuilder builder = null;
-    if (isVectorReader) {
-      numberOfRows = batchSize;
-      builder = type.createBlockBuilder(new BlockBuilderStatus(), numberOfRows);
-      if (columnVector != null) {
-        for (int i = 0; i < numberOfRows; i++) {
-          type.writeObject(builder, columnVector.getData(i));
-        }
-      }
-    } else {
-      numberOfRows = streamData.length;
-      builder = type.createBlockBuilder(new BlockBuilderStatus(), numberOfRows);
-      for (int i = 0; i < numberOfRows; i++) {
-        type.writeObject(builder, streamData[i]);
-      }
-    }
-
+  @Override public Block buildBlock(int blockSize) {
--- End diff --

Remove the parameter blockSize.

---
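The refactored reader above allocates its block builder up front and is filled value-by-value during the scan, so building the block needs no re-iteration. A dependency-free sketch of that pattern (illustrative classes, not Presto's API):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative: a reader that owns its builder and is filled one value at a
// time by the scan, so buildBlock() just hands over the already-built data.
class ObjectReaderSketch {
  private final List<Object> builder;   // stands in for Presto's BlockBuilder
  private final int batchSize;

  ObjectReaderSketch(int batchSize) {
    this.batchSize = batchSize;
    this.builder = new ArrayList<>(batchSize);
  }

  // Called by the scan for each value (analogous to putting a value
  // into the column vector in the real reader).
  void put(Object value) {
    builder.add(value);
  }

  // No second pass: the block was built while the data was being read.
  List<Object> buildBlock() {
    return builder;
  }

  // Tiny usage helper: fill a reader with the given values and build.
  static List<Object> demo(Object... values) {
    ObjectReaderSketch reader = new ObjectReaderSketch(values.length);
    for (Object v : values) {
      reader.put(v);
    }
    return reader.buildBlock();
  }
}
```

This is why the review asks to drop the blockSize parameter: the reader already knows its batch size, so buildBlock needs no argument.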
Github user bhavya411 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2412#discussion_r198136493

--- Diff: integration/presto/src/main/java/org/apache/carbondata/presto/CarbondataPageSource.java ---
@@ -166,61 +146,31 @@
   protected void closeWithSuppression(Throwable throwable)

   /**
    * Lazy Block Implementation for the Carbondata
    */
-  private final class CarbondataBlockLoader
-      implements LazyBlockLoader<LazyBlock>
-  {
+  private final class CarbondataBlockLoader implements LazyBlockLoader<LazyBlock> {
     private final int expectedBatchId = batchId;
     private final int columnIndex;
-    private final Type type;
     private boolean loaded;

-    public CarbondataBlockLoader(int columnIndex, Type type)
-    {
+    CarbondataBlockLoader(int columnIndex) {
       this.columnIndex = columnIndex;
-      this.type = requireNonNull(type, "type is null");
     }

-    @Override
-    public final void load(LazyBlock lazyBlock)
-    {
+    @Override public final void load(LazyBlock lazyBlock) {
       if (loaded) {
         return;
       }
       checkState(batchId == expectedBatchId);
       try {
-        Block block = readers[columnIndex].readBlock(type);
+        Block block =
+            ((PrestoVectorBlockBuilder) vectorReader.getColumnarBatch().column(columnIndex))
+                .buildBlock(lazyBlock.getPositionCount());
--- End diff --

You should be setting the batch size only; no need for two fields.

---
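The CarbondataBlockLoader in the diff above defers building a column's block until Presto first touches it, and guards against being invoked for a stale batch. A simplified, dependency-free sketch of that lazy-load guard (hypothetical names, String standing in for a Presto Block):

```java
import java.util.function.Supplier;

// Illustrative lazy loader: builds its payload once, on first access, and
// refuses to load if the owning page source has advanced to a newer batch.
class LazyLoaderSketch {
  private final int expectedBatchId;
  private final Supplier<String> blockBuilder;
  private String block;
  private boolean loaded;

  LazyLoaderSketch(int expectedBatchId, Supplier<String> blockBuilder) {
    this.expectedBatchId = expectedBatchId;
    this.blockBuilder = blockBuilder;
  }

  // Analogous to load(LazyBlock): builds the block only on first access.
  String load(int currentBatchId) {
    if (!loaded) {
      if (currentBatchId != expectedBatchId) {
        throw new IllegalStateException("stale batch: " + currentBatchId);
      }
      block = blockBuilder.get();
      loaded = true;
    }
    return block;
  }
}
```

Laziness matters here because Presto may never read some columns of a page; with the direct readers, an untouched column's block is simply never materialized.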
Github user sv71294 commented on the issue:
https://github.com/apache/carbondata/pull/2412

@bhavya411 the mentioned changes are done, please review the latest code.

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2412

Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/5390/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2412

Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/6563/

---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2412

SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/5462/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2412

Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/5396/

---
Github user sv71294 commented on the issue:
https://github.com/apache/carbondata/pull/2412

retest this please

---
Github user sv71294 commented on the issue:
https://github.com/apache/carbondata/pull/2412

retest this please

---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2412

SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/5472/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2412

Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/6577/

---
Github user sv71294 commented on the issue:
https://github.com/apache/carbondata/pull/2412

retest this please

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2412

Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/5408/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2412

Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/6585/

---