[GitHub] carbondata pull request #1248: [CARBONDATA-1371] Support creating decoder ba...

Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1248#discussion_r134144789
 
    --- Diff: core/src/main/java/org/apache/carbondata/core/datastore/chunk/impl/MeasureRawColumnChunk.java ---
    @@ -67,19 +67,19 @@ public MeasureRawColumnChunk(int blockId, ByteBuffer rawData, int offSet, int le
       }
     
       /**
    -   * Convert raw data with specified page number processed to MeasureColumnDataChunk
    +   * Convert raw data with specified page number processed to ColumnPage
        * @param index
        * @return
        */
    -  public MeasureColumnDataChunk convertToMeasureColDataChunk(int index) {
    +  public ColumnPage convertToMeasureColDataChunk(int index) {
    --- End diff --
   
    fixed
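
    For context, a minimal caller-side sketch of the changed signature (the variable names and the getDouble accessor here are illustrative assumptions):

        // the decoder now yields a generic ColumnPage instead of MeasureColumnDataChunk
        ColumnPage page = measureRawColumnChunk.convertToMeasureColDataChunk(pageIndex);
        double value = page.getDouble(rowId);  // read one decoded measure value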


---
Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1248#discussion_r134145165
 
    --- Diff: core/src/main/java/org/apache/carbondata/core/datastore/chunk/reader/dimension/v3/CompressedDimensionChunkFileBasedReaderV3.java ---
    @@ -49,6 +59,8 @@
      */
     public class CompressedDimensionChunkFileBasedReaderV3 extends AbstractChunkReaderV2V3Format {
     
    +  private EncodingStrategy strategy = new DefaultEncodingStrategy();
    --- End diff --
   
    fixed
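
    The reviewed line adds a per-reader strategy field. The review comment itself is not quoted here; since DefaultEncodingStrategy holds no state, one plausible shape of the fix (an assumption, not confirmed by the thread) is a single shared instance:

        // shared across readers instead of one instance per reader (assumed fix)
        private static final EncodingStrategy STRATEGY = new DefaultEncodingStrategy();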


---
Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1248#discussion_r134145643
 
    --- Diff: core/src/main/java/org/apache/carbondata/core/datastore/chunk/reader/dimension/v3/CompressedDimensionChunkFileBasedReaderV3.java ---
    @@ -194,72 +202,118 @@ public DimensionRawColumnChunk readRawDimensionChunk(FileHolder fileReader,
       /**
        * Below method will be used to convert the compressed dimension chunk raw data to actual data
        *
    -   * @param dimensionRawColumnChunk dimension raw chunk
    +   * @param rawColumnPage dimension raw chunk
        * @param pageNumber              number
        * @return DimensionColumnDataChunk
        */
       @Override public DimensionColumnDataChunk convertToDimensionChunk(
    -      DimensionRawColumnChunk dimensionRawColumnChunk, int pageNumber) throws IOException {
    -    byte[] dataPage = null;
    -    int[] invertedIndexes = null;
    -    int[] invertedIndexesReverse = null;
    -    int[] rlePage = null;
    -    // data chunk of page
    -    DataChunk2 dimensionColumnChunk = null;
    +      DimensionRawColumnChunk rawColumnPage, int pageNumber) throws IOException, MemoryException {
         // data chunk of blocklet column
    -    DataChunk3 dataChunk3 = dimensionRawColumnChunk.getDataChunkV3();
    +    DataChunk3 dataChunk3 = rawColumnPage.getDataChunkV3();
         // get the data buffer
    -    ByteBuffer rawData = dimensionRawColumnChunk.getRawData();
    -    dimensionColumnChunk = dataChunk3.getData_chunk_list().get(pageNumber);
    +    ByteBuffer rawData = rawColumnPage.getRawData();
    +    DataChunk2 pageMetadata = dataChunk3.getData_chunk_list().get(pageNumber);
         // calculating the start point of data
         // as buffer can contain multiple column data, start point will be datachunkoffset +
         // data chunk length + page offset
    -    int copySourcePoint = dimensionRawColumnChunk.getOffSet() + dimensionChunksLength
    -        .get(dimensionRawColumnChunk.getBlockletId()) + dataChunk3.getPage_offset().get(pageNumber);
    +    int offset = rawColumnPage.getOffSet() + dimensionChunksLength
    +        .get(rawColumnPage.getBlockletId()) + dataChunk3.getPage_offset().get(pageNumber);
         // first read the data and uncompressed it
    -    dataPage = COMPRESSOR
    -        .unCompressByte(rawData.array(), copySourcePoint, dimensionColumnChunk.data_page_length);
    -    copySourcePoint += dimensionColumnChunk.data_page_length;
    +    return decodeDimension(rawColumnPage, rawData, pageMetadata, offset);
    +  }
    +
    +  private DimensionColumnDataChunk decodeDimensionByMeta(DataChunk2 pageMetadata,
    +      ByteBuffer pageData, int offset)
    +      throws IOException, MemoryException {
    +    List<Encoding> encodings = pageMetadata.getEncoders();
    +    List<ByteBuffer> encoderMetas = pageMetadata.getEncoder_meta();
    +    assert (encodings.size() == 1);
    +    assert (encoderMetas.size() == 1);
    +    Encoding encoding = encodings.get(0);
    +    ColumnPageEncoderMeta metadata = null;
    +    ByteArrayInputStream stream = new ByteArrayInputStream(encoderMetas.get(0).array());
    +    DataInputStream in = new DataInputStream(stream);
    +    switch (encoding) {
    +      case DIRECT_COMPRESS:
    +        DirectCompressorEncoderMeta meta = new DirectCompressorEncoderMeta();
    --- End diff --
   
    ok, fixed
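
    As a hedged standalone sketch, the meta-deserialization pattern the diff is building looks roughly like this (a fragment of the decode path; readFields as the Writable-style hook is an assumption):

        // deserialize the encoder meta for the single encoding of this page
        ByteArrayInputStream stream = new ByteArrayInputStream(encoderMetas.get(0).array());
        DataInputStream in = new DataInputStream(stream);
        DirectCompressorEncoderMeta meta = new DirectCompressorEncoderMeta();
        meta.readFields(in);  // populate the meta fields from the serialized bytes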


---
Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1248#discussion_r134147748
 
    --- Diff: core/src/main/java/org/apache/carbondata/core/datastore/chunk/reader/dimension/v3/CompressedDimensionChunkFileBasedReaderV3.java ---
    @@ -194,72 +202,118 @@ public DimensionRawColumnChunk readRawDimensionChunk(FileHolder fileReader,
       /**
        * Below method will be used to convert the compressed dimension chunk raw data to actual data
        *
    -   * @param dimensionRawColumnChunk dimension raw chunk
    +   * @param rawColumnPage dimension raw chunk
        * @param pageNumber              number
        * @return DimensionColumnDataChunk
        */
       @Override public DimensionColumnDataChunk convertToDimensionChunk(
    -      DimensionRawColumnChunk dimensionRawColumnChunk, int pageNumber) throws IOException {
    -    byte[] dataPage = null;
    -    int[] invertedIndexes = null;
    -    int[] invertedIndexesReverse = null;
    -    int[] rlePage = null;
    -    // data chunk of page
    -    DataChunk2 dimensionColumnChunk = null;
    +      DimensionRawColumnChunk rawColumnPage, int pageNumber) throws IOException, MemoryException {
         // data chunk of blocklet column
    -    DataChunk3 dataChunk3 = dimensionRawColumnChunk.getDataChunkV3();
    +    DataChunk3 dataChunk3 = rawColumnPage.getDataChunkV3();
         // get the data buffer
    -    ByteBuffer rawData = dimensionRawColumnChunk.getRawData();
    -    dimensionColumnChunk = dataChunk3.getData_chunk_list().get(pageNumber);
    +    ByteBuffer rawData = rawColumnPage.getRawData();
    +    DataChunk2 pageMetadata = dataChunk3.getData_chunk_list().get(pageNumber);
         // calculating the start point of data
         // as buffer can contain multiple column data, start point will be datachunkoffset +
         // data chunk length + page offset
    -    int copySourcePoint = dimensionRawColumnChunk.getOffSet() + dimensionChunksLength
    -        .get(dimensionRawColumnChunk.getBlockletId()) + dataChunk3.getPage_offset().get(pageNumber);
    +    int offset = rawColumnPage.getOffSet() + dimensionChunksLength
    +        .get(rawColumnPage.getBlockletId()) + dataChunk3.getPage_offset().get(pageNumber);
         // first read the data and uncompressed it
    -    dataPage = COMPRESSOR
    -        .unCompressByte(rawData.array(), copySourcePoint, dimensionColumnChunk.data_page_length);
    -    copySourcePoint += dimensionColumnChunk.data_page_length;
    +    return decodeDimension(rawColumnPage, rawData, pageMetadata, offset);
    +  }
    +
    +  private DimensionColumnDataChunk decodeDimensionByMeta(DataChunk2 pageMetadata,
    +      ByteBuffer pageData, int offset)
    +      throws IOException, MemoryException {
    +    List<Encoding> encodings = pageMetadata.getEncoders();
    +    List<ByteBuffer> encoderMetas = pageMetadata.getEncoder_meta();
    +    assert (encodings.size() == 1);
    +    assert (encoderMetas.size() == 1);
    +    Encoding encoding = encodings.get(0);
    +    ColumnPageEncoderMeta metadata = null;
    +    ByteArrayInputStream stream = new ByteArrayInputStream(encoderMetas.get(0).array());
    +    DataInputStream in = new DataInputStream(stream);
    +    switch (encoding) {
    --- End diff --
   
    fixed. I added a concrete method in EncodingStrategy to create the decoder based on the encoder meta.
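
    A hedged sketch of the kind of factory method described (the name, signature, and codec wiring are assumptions; only the dispatch-on-meta idea comes from the comment):

        // in EncodingStrategy: build a decoder by deserializing the page's encoder meta
        public ColumnPageDecoder createDecoder(List<Encoding> encodings, List<ByteBuffer> encoderMetas)
            throws IOException {
          assert (encodings.size() == 1 && encoderMetas.size() == 1);
          DataInputStream in = new DataInputStream(
              new ByteArrayInputStream(encoderMetas.get(0).array()));
          switch (encodings.get(0)) {
            case DIRECT_COMPRESS:
              DirectCompressorEncoderMeta meta = new DirectCompressorEncoderMeta();
              meta.readFields(in);         // populate the meta from the serialized bytes
              return createDecoder(meta);  // hypothetical overload dispatching on the meta
            default:
              throw new UnsupportedOperationException("unsupported encoding: " + encodings.get(0));
          }
        }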


---
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/carbondata/pull/1248
 
    SDV Build Failed with Spark 2.1, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/272/



---
Github user ravipesala commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1248#discussion_r134373768
 
    --- Diff: core/src/main/java/org/apache/carbondata/core/datastore/chunk/store/ColumnPageWrapper.java ---
    @@ -0,0 +1,92 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.carbondata.core.datastore.chunk.store;
    +
    +import org.apache.carbondata.core.datastore.chunk.DimensionColumnDataChunk;
    +import org.apache.carbondata.core.datastore.page.ColumnPage;
    +import org.apache.carbondata.core.scan.executor.infos.KeyStructureInfo;
    +import org.apache.carbondata.core.scan.result.vector.ColumnVectorInfo;
    +
    +public class ColumnPageWrapper implements DimensionColumnDataChunk {
    --- End diff --
   
    This wrapper has many default implementations, such as `isExplicitSorted`; doesn't that create a problem?


---
Github user ravipesala commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1248#discussion_r134375715
 
    --- Diff: core/src/main/java/org/apache/carbondata/core/datastore/chunk/store/ColumnPageWrapper.java ---
    @@ -0,0 +1,92 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.carbondata.core.datastore.chunk.store;
    +
    +import org.apache.carbondata.core.datastore.chunk.DimensionColumnDataChunk;
    +import org.apache.carbondata.core.datastore.page.ColumnPage;
    +import org.apache.carbondata.core.scan.executor.infos.KeyStructureInfo;
    +import org.apache.carbondata.core.scan.result.vector.ColumnVectorInfo;
    +
    +public class ColumnPageWrapper implements DimensionColumnDataChunk {
    +
    +  private ColumnPage columnPage;
    +
    +  public ColumnPageWrapper(ColumnPage columnPage) {
    +    this.columnPage = columnPage;
    +  }
    +
    +  @Override
    +  public int fillChunkData(byte[] data, int offset, int columnIndex,
    +      KeyStructureInfo restructuringInfo) {
    +    throw new UnsupportedOperationException("internal error");
    +  }
    +
    +  @Override
    +  public int fillConvertedChunkData(int rowId, int columnIndex, int[] row,
    +      KeyStructureInfo restructuringInfo) {
    +    throw new UnsupportedOperationException("internal error");
    +  }
    +
    +  @Override
    +  public int fillConvertedChunkData(ColumnVectorInfo[] vectorInfo, int column,
    +      KeyStructureInfo restructuringInfo) {
    +    throw new UnsupportedOperationException("internal error");
    +  }
    +
    +  @Override
    +  public int fillConvertedChunkData(int[] rowMapping, ColumnVectorInfo[] vectorInfo, int column,
    +      KeyStructureInfo restructuringInfo) {
    +    throw new UnsupportedOperationException("internal error");
    +  }
    +
    +  @Override
    +  public byte[] getChunkData(int columnIndex) {
    +    return columnPage.getBytes(columnIndex);
    +  }
    +
    +  @Override
    +  public int getInvertedIndex(int rowId) {
    +    throw new UnsupportedOperationException("internal error");
    +  }
    +
    +  @Override
    +  public boolean isNoDicitionaryColumn() {
    +    return true;
    +  }
    +
    +  @Override
    +  public int getColumnValueSize() {
    +    throw new UnsupportedOperationException("internal error");
    +  }
    +
    +  @Override
    +  public boolean isExplicitSorted() {
    +    return false;
    +  }
    +
    +  @Override
    +  public int compareTo(int index, byte[] compareValue) {
    +    throw new UnsupportedOperationException("internal error");
    +  }
    +
    +  @Override
    +  public void freeMemory() {
    --- End diff --
   
    Doesn't it need to free the column page's memory if the page is unsafe (off-heap)?
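
    For illustration, the delegation being asked about could look like this (a sketch assuming ColumnPage exposes freeMemory() for unsafe-backed pages):

        @Override
        public void freeMemory() {
          if (columnPage != null) {
            columnPage.freeMemory();  // release off-heap memory held by an unsafe page
            columnPage = null;        // guard against double free
          }
        }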


---
Github user ravipesala commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1248#discussion_r134376519
 
    --- Diff: core/src/main/java/org/apache/carbondata/core/datastore/page/ColumnPage.java ---
    @@ -68,8 +74,38 @@ public DataType getDataType() {
         return dataType;
       }
     
    -  public Object getStatistics() {
    -    return statsCollector.getPageStats();
    +  public SimpleStatsResult getStatistics() {
    +    if (statsCollector != null) {
    +      return statsCollector.getPageStats();
    +    } else {
    +      // return a dummy result, for complex column
    +      return new SimpleStatsResult() {
    --- End diff --
   
    Better to create the class separately and set it on ColumnPage, instead of creating a new instance on every call.
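
    In code, the suggestion amounts to hoisting the anonymous instance into a field that is created once and reused (a sketch; the anonymous body is the one elided in the diff above):

        // built once, returned for complex columns that collect no stats
        private static final SimpleStatsResult DUMMY_STATS = new SimpleStatsResult() {
          // same no-op implementations as in the diff
        };

        public SimpleStatsResult getStatistics() {
          return statsCollector != null ? statsCollector.getPageStats() : DUMMY_STATS;
        }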


---
Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1248#discussion_r134379724
 
    --- Diff: core/src/main/java/org/apache/carbondata/core/datastore/chunk/store/ColumnPageWrapper.java ---
    @@ -0,0 +1,92 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.carbondata.core.datastore.chunk.store;
    +
    +import org.apache.carbondata.core.datastore.chunk.DimensionColumnDataChunk;
    +import org.apache.carbondata.core.datastore.page.ColumnPage;
    +import org.apache.carbondata.core.scan.executor.infos.KeyStructureInfo;
    +import org.apache.carbondata.core.scan.result.vector.ColumnVectorInfo;
    +
    +public class ColumnPageWrapper implements DimensionColumnDataChunk {
    --- End diff --
   
    Actually, ColumnPageWrapper is not used in this PR, because DefaultEncodingStrategy.createEncoder will call createEncoderForDimensionLegacy for dimension columns, so CompressedDimensionChunkFileBasedReaderV3.isEncodedWithMeta will return false.
    ColumnPageWrapper will be used in #1265.
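
    A hedged sketch of the isEncodedWithMeta check referred to above (the exact set of meta-based encodings is an assumption):

        // true only when the page was written with a self-describing encoding,
        // i.e. the decoder can be rebuilt purely from the serialized encoder meta
        private boolean isEncodedWithMeta(DataChunk2 pageMetadata) {
          List<Encoding> encodings = pageMetadata.getEncoders();
          return encodings != null && encodings.size() == 1
              && encodings.get(0) == Encoding.DIRECT_COMPRESS;
        }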


---
Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1248#discussion_r134379783
 
    --- Diff: core/src/main/java/org/apache/carbondata/core/datastore/chunk/store/ColumnPageWrapper.java ---
    @@ -0,0 +1,92 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.carbondata.core.datastore.chunk.store;
    +
    +import org.apache.carbondata.core.datastore.chunk.DimensionColumnDataChunk;
    +import org.apache.carbondata.core.datastore.page.ColumnPage;
    +import org.apache.carbondata.core.scan.executor.infos.KeyStructureInfo;
    +import org.apache.carbondata.core.scan.result.vector.ColumnVectorInfo;
    +
    +public class ColumnPageWrapper implements DimensionColumnDataChunk {
    +
    +  private ColumnPage columnPage;
    +
    +  public ColumnPageWrapper(ColumnPage columnPage) {
    +    this.columnPage = columnPage;
    +  }
    +
    +  @Override
    +  public int fillChunkData(byte[] data, int offset, int columnIndex,
    +      KeyStructureInfo restructuringInfo) {
    +    throw new UnsupportedOperationException("internal error");
    +  }
    +
    +  @Override
    +  public int fillConvertedChunkData(int rowId, int columnIndex, int[] row,
    +      KeyStructureInfo restructuringInfo) {
    +    throw new UnsupportedOperationException("internal error");
    +  }
    +
    +  @Override
    +  public int fillConvertedChunkData(ColumnVectorInfo[] vectorInfo, int column,
    +      KeyStructureInfo restructuringInfo) {
    +    throw new UnsupportedOperationException("internal error");
    +  }
    +
    +  @Override
    +  public int fillConvertedChunkData(int[] rowMapping, ColumnVectorInfo[] vectorInfo, int column,
    +      KeyStructureInfo restructuringInfo) {
    +    throw new UnsupportedOperationException("internal error");
    +  }
    +
    +  @Override
    +  public byte[] getChunkData(int columnIndex) {
    +    return columnPage.getBytes(columnIndex);
    +  }
    +
    +  @Override
    +  public int getInvertedIndex(int rowId) {
    +    throw new UnsupportedOperationException("internal error");
    +  }
    +
    +  @Override
    +  public boolean isNoDicitionaryColumn() {
    +    return true;
    +  }
    +
    +  @Override
    +  public int getColumnValueSize() {
    +    throw new UnsupportedOperationException("internal error");
    +  }
    +
    +  @Override
    +  public boolean isExplicitSorted() {
    +    return false;
    +  }
    +
    +  @Override
    +  public int compareTo(int index, byte[] compareValue) {
    +    throw new UnsupportedOperationException("internal error");
    +  }
    +
    +  @Override
    +  public void freeMemory() {
    --- End diff --
   
    It will be done in #1265.


---
Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1248#discussion_r134381229
 
    --- Diff: core/src/main/java/org/apache/carbondata/core/datastore/page/ColumnPage.java ---
    @@ -68,8 +74,38 @@ public DataType getDataType() {
         return dataType;
       }
     
    -  public Object getStatistics() {
    -    return statsCollector.getPageStats();
    +  public SimpleStatsResult getStatistics() {
    +    if (statsCollector != null) {
    +      return statsCollector.getPageStats();
    +    } else {
    +      // return a dummy result, for complex column
    +      return new SimpleStatsResult() {
    --- End diff --
   
    fixed


---
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/carbondata/pull/1248
 
    SDV Build Failed with Spark 2.1, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/288/



---
Github user ravipesala commented on the issue:

    https://github.com/apache/carbondata/pull/1248
 
    LGTM


---
Github user asfgit closed the pull request at:

    https://github.com/apache/carbondata/pull/1248

