GitHub user kumarvishal09 opened a pull request:
https://github.com/apache/incubator-carbondata/pull/609

[CARBONDATA-726] Added code for V3 format Writer and Reader

**1. Added code to support the V3 format Writer. 2. Added code to support the V3 format Reader.**

**Note**
1. This PR depends on PR#584, so it can be merged only after PR#584.
2. This PR updates the carbondata.thrift file, so the format jar needs to be updated in the repository before merging.

Exposed the properties below:
- carbon.number.of.page.in.blocklet.column: number of column pages per blocklet column
- number.of.column.to.read.in.io: number of columns to be read in one IO
- number.of.rows.per.blocklet.column.page: number of rows per page; the maximum value is 32000

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kumarvishal09/incubator-carbondata V3ReaderAndWriter

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-carbondata/pull/609.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #609

----
commit a9958e2f342e91f18ca047262fdc8130ef317926
Author: kumarvishal <[hidden email]>
Date: 2017-02-23T08:44:41Z

    Added V3 Format Writer and Reader Code

commit a001093a4e1ce10d314dbd773f51aaf3557944ed
Author: kumarvishal <[hidden email]>
Date: 2017-02-23T14:04:09Z

    Added code to support V3 Writer + Reader
----

---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at [hidden email] or file a JIRA ticket with INFRA.
---
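For reference, the three exposed properties would be set in carbon.properties; the values below are illustrative assumptions for a sketch, not recommended defaults from the PR:

```properties
# number of column pages per blocklet column (illustrative value)
carbon.number.of.page.in.blocklet.column=4
# number of columns to be read in one IO (illustrative value)
number.of.column.to.read.in.io=10
# number of rows per column page; the stated maximum is 32000
number.of.rows.per.blocklet.column.page=32000
```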
Github user CarbonDataQA commented on the issue:
https://github.com/apache/incubator-carbondata/pull/609 Build Failed with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/943/
In reply to this post by qiuchenjian-2
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/609#discussion_r102875074

--- Diff: core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java ---

@@ -81,7 +81,7 @@
   /**
    * min blocklet size
    */
-  public static final int BLOCKLET_SIZE_MIN_VAL = 50;
+  public static final int BLOCKLET_SIZE_MIN_VAL = 2000;

--- End diff --

Should this be settable by the user? Why would a user want to set this?
In reply to this post by qiuchenjian-2
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/609#discussion_r102875150

--- Diff: core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java ---

@@ -835,26 +834,23 @@
   /**
    * ZOOKEEPERLOCK TYPE
    */
-  public static final String CARBON_LOCK_TYPE_ZOOKEEPER =
-      "ZOOKEEPERLOCK";
+  public static final String CARBON_LOCK_TYPE_ZOOKEEPER = "ZOOKEEPERLOCK";
   /**
    * LOCALLOCK TYPE
    */
-  public static final String CARBON_LOCK_TYPE_LOCAL =
-      "LOCALLOCK";
+  public static final String CARBON_LOCK_TYPE_LOCAL = "LOCALLOCK";
   /**
    * HDFSLOCK TYPE
    */
-  public static final String CARBON_LOCK_TYPE_HDFS =
-      "HDFSLOCK";
+  public static final String CARBON_LOCK_TYPE_HDFS = "HDFSLOCK";
   /**
    * Invalid filter member log string
    */
-  public static final String FILTER_INVALID_MEMBER = " Invalid Record(s) are present "
-      + "while filter evaluation. ";
+  public static final String FILTER_INVALID_MEMBER =
+      " Invalid Record(s) are present " + "while filter evaluation. ";

--- End diff --

Remove the string concatenation.
In reply to this post by qiuchenjian-2
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/609#discussion_r102875453

--- Diff: core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java ---

@@ -1151,6 +1139,68 @@
   public static final String USE_KETTLE_DEFAULT = "false";

+  /**
+   * number of page per blocklet column
+   */
+  public static final String NUMBER_OF_PAGE_IN_BLOCKLET_COLUMN =

--- End diff --

All of these newly added configs are for the V3 reader/writer; I think it is better to separate them out into a dedicated constants class.
In reply to this post by qiuchenjian-2
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/609#discussion_r102875973

--- Diff: core/src/main/java/org/apache/carbondata/core/datastore/chunk/reader/dimension/v3/CompressedDimensionChunkFileBasedReaderV3.java ---

@@ -0,0 +1,256 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.carbondata.core.datastore.chunk.reader.dimension.v3;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+
+import org.apache.carbondata.core.datastore.FileHolder;
+import org.apache.carbondata.core.datastore.chunk.DimensionColumnDataChunk;
+import org.apache.carbondata.core.datastore.chunk.impl.ColumnGroupDimensionDataChunk;
+import org.apache.carbondata.core.datastore.chunk.impl.DimensionRawColumnChunk;
+import org.apache.carbondata.core.datastore.chunk.impl.FixedLengthDimensionDataChunk;
+import org.apache.carbondata.core.datastore.chunk.impl.VariableLengthDimensionDataChunk;
+import org.apache.carbondata.core.datastore.chunk.reader.dimension.v2.CompressedDimensionChunkFileBasedReaderV2;
+import org.apache.carbondata.core.datastore.columnar.UnBlockIndexer;
+import org.apache.carbondata.core.metadata.blocklet.BlockletInfo;
+import org.apache.carbondata.core.util.CarbonUtil;
+import org.apache.carbondata.format.DataChunk2;
+import org.apache.carbondata.format.DataChunk3;
+import org.apache.carbondata.format.Encoding;
+
+import org.apache.commons.lang.ArrayUtils;
+
+/**
+ * Dimension column V3 Reader class which will be used to read and uncompress
+ * V3 format data
+ *
+ * Data Format
+ * <Column1 Data ChunkV3><Column1<Page1><Page2><Page3><Page4>>
+ * <Column2 Data ChunkV3><Column2<Page1><Page2><Page3><Page4>>
+ * <Column3 Data ChunkV3><Column3<Page1><Page2><Page3><Page4>>
+ * <Column4 Data ChunkV3><Column4<Page1><Page2><Page3><Page4>>
+ */
+public class CompressedDimensionChunkFileBasedReaderV3
+    extends CompressedDimensionChunkFileBasedReaderV2 {

--- End diff --

Make one abstract class and put all the shared functions in that class; the V3 reader should not depend on the V2 reader.
In reply to this post by qiuchenjian-2
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/609#discussion_r102876187

--- Diff: core/src/main/java/org/apache/carbondata/core/datastore/columnar/BlockIndexerStorageForShort.java ---

@@ -0,0 +1,232 @@
+package org.apache.carbondata.core.datastore.columnar;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants;
+import org.apache.carbondata.core.util.ByteUtil;
+
+public class BlockIndexerStorageForShort implements IndexStorage<short[]> {
+
+  private boolean alreadySorted;
+
+  private short[] dataAfterComp;
+
+  private short[] indexMap;
+
+  private byte[][] keyBlock;
+
+  private short[] dataIndexMap;
+
+  private int totalSize;
+
+  public BlockIndexerStorageForShort(byte[][] keyBlock, boolean compressData,
+      boolean isNoDictionary, boolean isSortRequired) {
+    ColumnWithShortIndex[] columnWithIndexs = createColumnWithIndexArray(keyBlock, isNoDictionary);
+    if (isSortRequired) {
+      Arrays.sort(columnWithIndexs);
+    }
+    compressMyOwnWay(extractDataAndReturnIndexes(columnWithIndexs, keyBlock));
+    if (compressData) {
+      compressDataMyOwnWay(columnWithIndexs);
+    }
+  }
+
+  public BlockIndexerStorageForShort() {

--- End diff --

Remove this constructor.
In reply to this post by qiuchenjian-2
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/609#discussion_r102876253

--- Diff: core/src/main/java/org/apache/carbondata/core/datastore/columnar/BlockIndexerStorageForShort.java ---

@@ -0,0 +1,232 @@
+  public BlockIndexerStorageForShort() {
+    // TODO Auto-generated constructor stub
+  }
+
+  /**
+   * Create an object with each column array and respective index
+   *
+   * @return
+   */
+  private ColumnWithShortIndex[] createColumnWithIndexArray(byte[][] keyBlock,
+      boolean isNoDictionary) {
+    ColumnWithShortIndex[] columnWithIndexs;
+    if (isNoDictionary) {
+      columnWithIndexs = new ColumnWithShortIndex[keyBlock.length];
+      for (short i = 0; i < columnWithIndexs.length; i++) {
+        columnWithIndexs[i] = new ColumnWithShortIndexForNoDictionay(keyBlock[i], i);
+      }
+    } else {
+      columnWithIndexs = new ColumnWithShortIndex[keyBlock.length];
+      for (short i = 0; i < columnWithIndexs.length; i++) {
+        columnWithIndexs[i] = new ColumnWithShortIndex(keyBlock[i], i);
+      }
+    }
+    return columnWithIndexs;
+  }
+
+  private short[] extractDataAndReturnIndexes(ColumnWithShortIndex[] columnWithIndexs,
+      byte[][] keyBlock) {
+    short[] indexes = new short[columnWithIndexs.length];
+    for (int i = 0; i < indexes.length; i++) {
+      indexes[i] = columnWithIndexs[i].getIndex();
+      keyBlock[i] = columnWithIndexs[i].getColumn();
+    }
+    this.keyBlock = keyBlock;
+    return indexes;
+  }
+
+  /**
+   * It compresses depends up on the sequence numbers.
+   * [1,2,3,4,6,8,10,11,12,13] is translated to [1,4,6,8,10,13] and [0,6]. In
+   * first array the start and end of sequential numbers and second array
+   * keeps the indexes of where sequential numbers starts. If there is no
+   * sequential numbers then the same array it returns with empty second
+   * array.
+   *
+   * @param indexes
+   */
+  public void compressMyOwnWay(short[] indexes) {
+    List<Short> list = new ArrayList<Short>(CarbonCommonConstants.CONSTANT_SIZE_TEN);
+    List<Short> map = new ArrayList<Short>(CarbonCommonConstants.CONSTANT_SIZE_TEN);
+    int k = 0;
+    int i = 1;
+    for (; i < indexes.length; i++) {
+      if (indexes[i] - indexes[i - 1] == 1) {
+        k++;
+      } else {
+        if (k > 0) {
+          map.add(((short) list.size()));
+          list.add(indexes[i - k - 1]);
+          list.add(indexes[i - 1]);
+        } else {
+          list.add(indexes[i - 1]);
+        }
+        k = 0;
+      }
+    }
+    if (k > 0) {
+      map.add(((short) list.size()));
+      list.add(indexes[i - k - 1]);
+      list.add(indexes[i - 1]);
+    } else {
+      list.add(indexes[i - 1]);
+    }
+    double compressionPercentage = (((list.size() + map.size()) * 100) / indexes.length);
+    if (compressionPercentage > 70) {
+      dataAfterComp = indexes;
+    } else {
+      dataAfterComp = convertToArray(list);
+    }
+    if (indexes.length == dataAfterComp.length) {
+      indexMap = new short[0];
+    } else {
+      indexMap = convertToArray(map);
+    }
+    if (dataAfterComp.length == 2 && indexMap.length == 1) {
+      alreadySorted = true;
+    }
+  }
+
+  private short[] convertToArray(List<Short> list) {
+    short[] shortArray = new short[list.size()];
+    for (int i = 0; i < shortArray.length; i++) {
+      shortArray[i] = list.get(i);
+    }
+    return shortArray;
+  }
+
+  /**
+   * @return the alreadySorted
+   */
+  public boolean isAlreadySorted() {
+    return alreadySorted;
+  }
+
+  /**
+   * @return the dataAfterComp
+   */
+  public short[] getDataAfterComp() {
+    return dataAfterComp;
+  }
+
+  /**
+   * @return the indexMap
+   */
+  public short[] getIndexMap() {
+    return indexMap;
+  }
+
+  /**
+   * @return the keyBlock
+   */
+  public byte[][] getKeyBlock() {
+    return keyBlock;
+  }
+
+  private void compressDataMyOwnWay(ColumnWithShortIndex[] indexes) {
+    byte[] prvKey = indexes[0].getColumn();
+    List<ColumnWithShortIndex> list =
+        new ArrayList<ColumnWithShortIndex>(CarbonCommonConstants.CONSTANT_SIZE_TEN);
+    list.add(indexes[0]);
+    short counter = 1;
+    short start = 0;
+    List<Short> map = new ArrayList<Short>(CarbonCommonConstants.CONSTANT_SIZE_TEN);
+    for (int i = 1; i < indexes.length; i++) {
+      if (ByteUtil.UnsafeComparer.INSTANCE.compareTo(prvKey, indexes[i].getColumn()) != 0) {
+        prvKey = indexes[i].getColumn();
+        list.add(indexes[i]);
+        map.add(start);
+        map.add(counter);
+        start += counter;
+        counter = 1;
+        continue;
+      }
+      counter++;
+    }
+    map.add(start);
+    map.add(counter);
+    this.keyBlock = convertToKeyArray(list);
+    if (indexes.length == keyBlock.length) {
+      dataIndexMap = new short[0];
+    } else {
+      dataIndexMap = convertToArray(map);
+    }
+  }
+
+  private byte[][] convertToKeyArray(List<ColumnWithShortIndex> list) {
+    byte[][] shortArray = new byte[list.size()][];
+    for (int i = 0; i < shortArray.length; i++) {
+      shortArray[i] = list.get(i).getColumn();
+      totalSize += shortArray[i].length;
+    }
+    return shortArray;
+  }
+
+  @Override public short[] getDataIndexMap() {

--- End diff --

Move all @Override annotations to the previous line.
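The run-length index compression that compressMyOwnWay implements can be sketched as a standalone class (RleIndexDemo and its method names are illustrative, not part of the PR). Consecutive index runs collapse into [start, end] pairs, and the second array records where each collapsed pair begins in the first; tracing this implementation on the Javadoc's example input yields [1, 4, 6, 8, 10, 13] with positions [0, 4]:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RleIndexDemo {

  /**
   * Collapses runs of consecutive values in a sorted index array into
   * [start, end] pairs (result[0]); result[1] records the positions in
   * result[0] where each collapsed pair begins.
   */
  public static short[][] compress(short[] indexes) {
    List<Short> list = new ArrayList<>();
    List<Short> map = new ArrayList<>();
    int k = 0;  // length of the current consecutive run
    int i = 1;
    for (; i < indexes.length; i++) {
      if (indexes[i] - indexes[i - 1] == 1) {
        k++;
      } else {
        if (k > 0) {
          map.add((short) list.size());      // where this [start, end] pair begins
          list.add(indexes[i - k - 1]);      // run start
          list.add(indexes[i - 1]);          // run end
        } else {
          list.add(indexes[i - 1]);          // isolated value, copied through
        }
        k = 0;
      }
    }
    // flush the final run or isolated value
    if (k > 0) {
      map.add((short) list.size());
      list.add(indexes[i - k - 1]);
      list.add(indexes[i - 1]);
    } else {
      list.add(indexes[i - 1]);
    }
    return new short[][] { toArray(list), toArray(map) };
  }

  private static short[] toArray(List<Short> l) {
    short[] a = new short[l.size()];
    for (int j = 0; j < a.length; j++) a[j] = l.get(j);
    return a;
  }

  public static void main(String[] args) {
    short[][] r = compress(new short[] {1, 2, 3, 4, 6, 8, 10, 11, 12, 13});
    System.out.println(Arrays.toString(r[0])); // [1, 4, 6, 8, 10, 13]
    System.out.println(Arrays.toString(r[1])); // [0, 4]
  }
}
```

The production method additionally falls back to the uncompressed array when the compressed form would exceed 70% of the original size.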
In reply to this post by qiuchenjian-2
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/609#discussion_r102876604

--- Diff: core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java ---

@@ -471,6 +472,33 @@ public static int nextGreaterValueToTarget(int currentIndex,
         numberCompressor.unCompress(indexMap, 0, indexMap.length));
   }

+  public static int[] getUnCompressColumnIndex(int totalLength, ByteBuffer buffer, int offset) {
+    buffer.position(offset);
+    int indexDataLength = buffer.getInt();
+    int indexMapLength = totalLength - indexDataLength - CarbonCommonConstants.INT_SIZE_IN_BYTE;
+    // byte[] indexData = new byte[indexDataLength];

--- End diff --

Is this commented-out code still needed?
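The buffer layout this method assumes — a 4-byte int length prefix, then the index data, with the index map occupying the remainder of the section — can be sketched as follows. The class and method names are illustrative, and Integer.BYTES stands in for CarbonCommonConstants.INT_SIZE_IN_BYTE:

```java
import java.nio.ByteBuffer;

public class LengthPrefixedIndexDemo {

  /**
   * Splits a section laid out as [int indexDataLength][indexData][indexMap]
   * into its two payload lengths, mirroring how getUnCompressColumnIndex
   * derives indexMapLength from the total section length.
   */
  public static int[] splitLengths(int totalLength, ByteBuffer buffer, int offset) {
    buffer.position(offset);
    int indexDataLength = buffer.getInt();  // 4-byte length prefix
    int indexMapLength = totalLength - indexDataLength - Integer.BYTES;
    return new int[] { indexDataLength, indexMapLength };
  }

  public static void main(String[] args) {
    // build a 14-byte section: 4-byte prefix + 6-byte index data + 4-byte index map
    ByteBuffer buf = ByteBuffer.allocate(4 + 6 + 4);
    buf.putInt(6);
    buf.put(new byte[6]);
    buf.put(new byte[4]);
    int[] lens = splitLengths(14, buf, 0);
    System.out.println(lens[0] + "," + lens[1]); // prints 6,4
  }
}
```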
In reply to this post by qiuchenjian-2
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/609#discussion_r102876673

--- Diff: core/src/main/java/org/apache/carbondata/core/util/DataFileFooterConverterV3.java ---

@@ -0,0 +1,132 @@
+package org.apache.carbondata.core.util;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.carbondata.core.datastore.block.TableBlockInfo;
+import org.apache.carbondata.core.metadata.ColumnarFormatVersion;
+import org.apache.carbondata.core.metadata.blocklet.BlockletInfo;
+import org.apache.carbondata.core.metadata.blocklet.DataFileFooter;
+import org.apache.carbondata.core.metadata.blocklet.index.BlockletIndex;
+import org.apache.carbondata.core.metadata.schema.table.column.ColumnSchema;
+import org.apache.carbondata.core.reader.CarbonFooterReader;
+import org.apache.carbondata.format.FileFooter;
+
+public class DataFileFooterConverterV3 extends AbstractDataFileFooterConverter {
+
+  /**
+   * Below method will be used to convert thrift file meta to wrapper file meta

--- End diff --

Please describe the reading steps in the comment, like: 1. ... 2. ... 3. ...
In reply to this post by qiuchenjian-2
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/609#discussion_r102876851

--- Diff: core/src/main/java/org/apache/carbondata/core/datastore/chunk/reader/dimension/v3/CompressedDimensionChunkFileBasedReaderV3.java ---

@@ -0,0 +1,256 @@
+public class CompressedDimensionChunkFileBasedReaderV3
+    extends CompressedDimensionChunkFileBasedReaderV2 {
+
+  /**
+   * end position of last dimension in carbon data file
+   */
+  private long lastDimensionOffsets;
+
+  public CompressedDimensionChunkFileBasedReaderV3(BlockletInfo blockletInfo,
+      int[] eachColumnValueSize, String filePath) {
+    super(blockletInfo, eachColumnValueSize, filePath);
+    lastDimensionOffsets = blockletInfo.getDimensionOffset();
+  }
+
+  /**
+   * Below method will be used to read the dimension column data form carbon data file

--- End diff --

Can you describe the reading steps in the comment, like: 1. ... 2. ... 3. ...
In reply to this post by qiuchenjian-2
Github user kumarvishal09 commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/609#discussion_r102892320

--- Diff: core/src/main/java/org/apache/carbondata/core/datastore/chunk/reader/dimension/v3/CompressedDimensionChunkFileBasedReaderV3.java ---

+  /**
+   * Below method will be used to read the dimension column data form carbon data file

--- End diff --

ok
In reply to this post by qiuchenjian-2
Github user CarbonDataQA commented on the issue:
https://github.com/apache/incubator-carbondata/pull/609 Build Failed with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/947/
In reply to this post by qiuchenjian-2
Github user kumarvishal09 commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/609#discussion_r102900529

--- Diff: core/src/main/java/org/apache/carbondata/core/datastore/chunk/reader/dimension/v3/CompressedDimensionChunkFileBasedReaderV3.java ---
@@ -0,0 +1,256 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.carbondata.core.datastore.chunk.reader.dimension.v3;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+
+import org.apache.carbondata.core.datastore.FileHolder;
+import org.apache.carbondata.core.datastore.chunk.DimensionColumnDataChunk;
+import org.apache.carbondata.core.datastore.chunk.impl.ColumnGroupDimensionDataChunk;
+import org.apache.carbondata.core.datastore.chunk.impl.DimensionRawColumnChunk;
+import org.apache.carbondata.core.datastore.chunk.impl.FixedLengthDimensionDataChunk;
+import org.apache.carbondata.core.datastore.chunk.impl.VariableLengthDimensionDataChunk;
+import org.apache.carbondata.core.datastore.chunk.reader.dimension.v2.CompressedDimensionChunkFileBasedReaderV2;
+import org.apache.carbondata.core.datastore.columnar.UnBlockIndexer;
+import org.apache.carbondata.core.metadata.blocklet.BlockletInfo;
+import org.apache.carbondata.core.util.CarbonUtil;
+import org.apache.carbondata.format.DataChunk2;
+import org.apache.carbondata.format.DataChunk3;
+import org.apache.carbondata.format.Encoding;
+
+import org.apache.commons.lang.ArrayUtils;
+
+/**
+ * Dimension column V3 Reader class which will be used to read and uncompress
+ * V3 format data
+ * Data Format
+ * <Column1 Data ChunkV3><Column1<Page1><Page2><Page3><Page4>>
+ * <Column2 Data ChunkV3><Column2<Page1><Page2><Page3><Page4>>
+ * <Column3 Data ChunkV3><Column3<Page1><Page2><Page3><Page4>>
+ * <Column4 Data ChunkV3><Column4<Page1><Page2><Page3><Page4>>
+ */
+public class CompressedDimensionChunkFileBasedReaderV3
+    extends CompressedDimensionChunkFileBasedReaderV2 {
+
+  /**
+   * end position of last dimension in carbon data file
+   */
+  private long lastDimensionOffsets;
+
+  public CompressedDimensionChunkFileBasedReaderV3(BlockletInfo blockletInfo,
+      int[] eachColumnValueSize, String filePath) {
+    super(blockletInfo, eachColumnValueSize, filePath);
+    lastDimensionOffsets = blockletInfo.getDimensionOffset();
+  }
+
+  /**
+   * Below method will be used to read the dimension column data from carbon data file
--- End diff --

ok

---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at [hidden email] or file a JIRA ticket with INFRA.
---
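The javadoc in the diff above describes the V3 layout: each column stores one DataChunk3 header followed by that column's pages, and the PR caps rows per page at 32000 (via number.of.rows.per.blocklet.column.page). The sketch below, with hypothetical class and method names that are not part of the PR, only illustrates how a blocklet's rows would split into pages under that cap:

```java
import java.util.ArrayList;
import java.util.List;

public class V3PageLayoutSketch {

  // Split a blocklet's row count into per-page row counts, assuming
  // a fixed cap per page (32000 is the max stated in the PR description).
  static List<Integer> pageRowCounts(int totalRows, int rowsPerPage) {
    List<Integer> pages = new ArrayList<>();
    int remaining = totalRows;
    while (remaining > 0) {
      int rows = Math.min(rowsPerPage, remaining);
      pages.add(rows);      // one entry per page in this column
      remaining -= rows;
    }
    return pages;
  }

  public static void main(String[] args) {
    // 100000 rows under a 32000-row cap yield four pages.
    System.out.println(pageRowCounts(100000, 32000));
    // prints [32000, 32000, 32000, 4000]
  }
}
```

Under this model, reading a column means reading its DataChunk3 header once and then iterating its pages in order, which matches the per-column page grouping shown in the javadoc.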
In reply to this post by qiuchenjian-2
Github user CarbonDataQA commented on the issue:
https://github.com/apache/incubator-carbondata/pull/609 Build Failed with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/951/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/incubator-carbondata/pull/609 Build Failed with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/953/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/incubator-carbondata/pull/609 Build Failed with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/954/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/incubator-carbondata/pull/609 Build Failed with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/955/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/incubator-carbondata/pull/609 Build Failed with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/957/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/incubator-carbondata/pull/609 Build Failed with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/958/