[GitHub] carbondata pull request #2628: WIP: Support zstd as column compressor in fin...

GitHub user xuchuanyin opened a pull request:

    https://github.com/apache/carbondata/pull/2628

    WIP: Support zstd as column compressor in final store

    1. Add a zstd compressor for compressing column data
    2. Add zstd support in the thrift format
    3. The legacy store is not considered in this commit
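
    For context, the new compressor builds on the zstd-jni library
    (`com.github.luben.zstd.Zstd`, as the diff reviewed below shows). A minimal
    round-trip sketch of the library calls it relies on (an illustration, not
    code from this PR):

    ```
    import com.github.luben.zstd.Zstd;

    public class ZstdRoundTrip {
      public static void main(String[] args) {
        byte[] raw = "some column page bytes".getBytes();
        // Compress at level 3, the fixed level this PR uses for column data.
        byte[] compressed = Zstd.compress(raw, 3);
        // A zstd frame records the original size; query it before decompressing.
        int originalSize = (int) Zstd.decompressedSize(compressed);
        byte[] restored = Zstd.decompress(compressed, originalSize);
        System.out.println(new String(restored));
      }
    }
    ```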
   
    Be sure to complete all of the following checklist items to help us incorporate
    your contribution quickly and easily:

     - [ ] Any interfaces changed?

     - [ ] Any backward compatibility impacted?

     - [ ] Document update required?

     - [ ] Testing done
            Please provide details on
            - Whether new unit test cases have been added, or why no new tests are required
            - How it was tested (please attach the test report)
            - Whether it is a performance-related change (if so, please attach the performance test report)
            - Any additional information to help reviewers test this change

     - [ ] For large changes, please consider breaking them into sub-tasks under an umbrella JIRA.
   


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/xuchuanyin/carbondata 0810_support_zstd_compressor_final_store

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/2628.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #2628
   
----
commit c19c90852feaf33eaaca93a2bfdb3783c40159af
Author: xuchuanyin <xuchuanyin@...>
Date:   2018-08-10T14:02:57Z

    Support zstd as column compressor in final store
   
    1. Add a zstd compressor for compressing column data
    2. Add zstd support in the thrift format
    3. The legacy store is not considered in this commit

----


---

[GitHub] carbondata issue #2628: WIP: Support zstd as column compressor in final stor...

Github user CarbonDataQA commented on the issue:

    https://github.com/apache/carbondata/pull/2628
 
    Build Failed with Spark 2.1.0. Please check CI: http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7885/



---

[GitHub] carbondata issue #2628: WIP: Support zstd as column compressor in final stor...

Github user CarbonDataQA commented on the issue:

    https://github.com/apache/carbondata/pull/2628
 
    Build Failed with Spark 2.2.1. Please check CI: http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6609/



---

[GitHub] carbondata issue #2628: WIP: Support zstd as column compressor in final stor...

Github user ravipesala commented on the issue:

    https://github.com/apache/carbondata/pull/2628
 
    SDV Build Success. Please check CI: http://144.76.159.231:8080/job/ApacheSDVTests/6246/



---

[GitHub] carbondata issue #2628: WIP: Support zstd as column compressor in final stor...

Github user ravipesala commented on the issue:

    https://github.com/apache/carbondata/pull/2628
 
    SDV Build Success. Please check CI: http://144.76.159.231:8080/job/ApacheSDVTests/6247/



---

[GitHub] carbondata issue #2628: WIP: Support zstd as column compressor in final stor...

Github user CarbonDataQA commented on the issue:

    https://github.com/apache/carbondata/pull/2628
 
    Build Failed with Spark 2.1.0. Please check CI: http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7886/



---

[GitHub] carbondata issue #2628: WIP: Support zstd as column compressor in final stor...

Github user CarbonDataQA commented on the issue:

    https://github.com/apache/carbondata/pull/2628
 
    Build Failed with Spark 2.2.1. Please check CI: http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6610/



---

[GitHub] carbondata issue #2628: WIP: Support zstd as column compressor in final stor...

Github user brijoobopanna commented on the issue:

    https://github.com/apache/carbondata/pull/2628
 
    retest this please



---

[GitHub] carbondata issue #2628: WIP: Support zstd as column compressor in final stor...

Github user CarbonDataQA commented on the issue:

    https://github.com/apache/carbondata/pull/2628
 
    Build Failed with Spark 2.1.0. Please check CI: http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7894/



---

[GitHub] carbondata issue #2628: WIP: Support zstd as column compressor in final stor...

Github user CarbonDataQA commented on the issue:

    https://github.com/apache/carbondata/pull/2628
 
    Build Failed with Spark 2.2.1. Please check CI: http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6618/



---

[GitHub] carbondata issue #2628: WIP: Support zstd as column compressor in final stor...

Github user ravipesala commented on the issue:

    https://github.com/apache/carbondata/pull/2628
 
    SDV Build Fail. Please check CI: http://144.76.159.231:8080/job/ApacheSDVTests/6254/



---

[GitHub] carbondata pull request #2628: [CARBONDATA-2851] Support zstd as column comp...

Github user xuchuanyin closed the pull request at:

    https://github.com/apache/carbondata/pull/2628


---

[GitHub] carbondata pull request #2628: [CARBONDATA-2851] Support zstd as column comp...

GitHub user xuchuanyin reopened a pull request:

    https://github.com/apache/carbondata/pull/2628

    [CARBONDATA-2851] Support zstd as column compressor in final store

    1. Add a zstd compressor for compressing column data
    2. Add zstd support in the thrift format
    3. The legacy store is not considered in this commit
    4. Since zstd does not support zero-copy while compressing, off-heap memory
    will not take effect for zstd
   
    A simple test with 1.2 GB of raw CSV data shows the size (in MB) of the final store with each compressor:
   
    | Local dictionary | Snappy (MB) | zstd (MB) | Size reduced |
    | --- | --- | --- | --- |
    | enabled | 335 | 207 | 38.2% |
    | disabled | 375 | 225 | 40.0% |
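
    As a rough illustration only (not the benchmark above), a toy comparison of
    the two compressors' output sizes, assuming snappy-java and zstd-jni are on
    the classpath, could look like:

    ```
    import com.github.luben.zstd.Zstd;
    import org.xerial.snappy.Snappy;

    import java.io.IOException;
    import java.util.Arrays;

    public class CompressorSizeCheck {
      public static void main(String[] args) throws IOException {
        // Highly repetitive input stands in for sorted columnar data.
        byte[] column = new byte[1 << 20];
        Arrays.fill(column, (byte) 7);
        byte[] bySnappy = Snappy.compress(column);
        byte[] byZstd = Zstd.compress(column, 3);
        System.out.printf("snappy=%d bytes, zstd=%d bytes%n",
            bySnappy.length, byZstd.length);
      }
    }
    ```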
   
    Be sure to complete all of the following checklist items to help us incorporate
    your contribution quickly and easily:

     - [ ] Any interfaces changed?

     - [ ] Any backward compatibility impacted?

     - [ ] Document update required?

     - [ ] Testing done
            Please provide details on
            - Whether new unit test cases have been added, or why no new tests are required
            - How it was tested (please attach the test report)
            - Whether it is a performance-related change (if so, please attach the performance test report)
            - Any additional information to help reviewers test this change

     - [ ] For large changes, please consider breaking them into sub-tasks under an umbrella JIRA.
   


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/xuchuanyin/carbondata 0810_support_zstd_compressor_final_store

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/2628.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #2628
   
----
commit bcd3d8f9c64f197668d46d29af1aa2ee2d956ceb
Author: xuchuanyin <xuchuanyin@...>
Date:   2018-08-10T14:02:57Z

    Support zstd as column compressor in final store
   
    1. Add a zstd compressor for compressing column data
    2. Add zstd support in the thrift format
    3. The legacy store is not considered in this commit
    4. Since zstd does not support zero-copy while compressing, off-heap memory
    will not take effect for zstd

----


---

[GitHub] carbondata issue #2628: [CARBONDATA-2851] Support zstd as column compressor ...

Github user CarbonDataQA commented on the issue:

    https://github.com/apache/carbondata/pull/2628
 
    Build Success with Spark 2.1.0. Please check CI: http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7897/



---

[GitHub] carbondata issue #2628: [CARBONDATA-2851] Support zstd as column compressor ...

Github user CarbonDataQA commented on the issue:

    https://github.com/apache/carbondata/pull/2628
 
    Build Success with Spark 2.2.1. Please check CI: http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6621/



---

[GitHub] carbondata pull request #2628: [CARBONDATA-2851] Support zstd as column comp...

Github user kevinjmh commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/2628#discussion_r209543118
 
    --- Diff: core/src/main/java/org/apache/carbondata/core/datastore/compression/ZstdCompressor.java ---
    @@ -0,0 +1,200 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.carbondata.core.datastore.compression;
    +
    +import java.io.IOException;
    +import java.io.Serializable;
    +import java.nio.ByteBuffer;
    +
    +import org.apache.carbondata.common.logging.LogService;
    +import org.apache.carbondata.common.logging.LogServiceFactory;
    +
    +import com.github.luben.zstd.Zstd;
    +
    +public class ZstdCompressor implements Compressor, Serializable {
    +  private static final long serialVersionUID = 8181578747306832771L;
    +  private static final LogService LOGGER =
    +      LogServiceFactory.getLogService(ZstdCompressor.class.getName());
    +  private static final int COMPRESS_LEVEL = 3;
    +
    +  public ZstdCompressor() {
    +  }
    +
    +  @Override
    +  public String getName() {
    +    return "zstd";
    +  }
    +
    +  @Override
    +  public byte[] compressByte(byte[] unCompInput) {
    +    return Zstd.compress(unCompInput, 3);
    +  }
    +
    +  @Override
    +  public byte[] compressByte(byte[] unCompInput, int byteSize) {
    +    return Zstd.compress(unCompInput, COMPRESS_LEVEL);
    +  }
    +
    +  @Override
    +  public byte[] unCompressByte(byte[] compInput) {
    +    long estimatedUncompressLength = Zstd.decompressedSize(compInput);
    +    return Zstd.decompress(compInput, (int) estimatedUncompressLength);
    +  }
    +
    +  @Override
    +  public byte[] unCompressByte(byte[] compInput, int offset, int length) {
    +    // todo: how to avoid memory copy
    +    byte[] dstBytes = new byte[length];
    +    System.arraycopy(compInput, offset, dstBytes, 0, length);
    +    return unCompressByte(dstBytes);
    +  }
    +
    +  @Override
    +  public byte[] compressShort(short[] unCompInput) {
    +    // short use 2 bytes
    +    byte[] unCompArray = new byte[unCompInput.length * 2];
    +    ByteBuffer unCompBuffer = ByteBuffer.wrap(unCompArray);
    +    for (short input : unCompInput) {
    +      unCompBuffer.putShort(input);
    +    }
    +    return Zstd.compress(unCompBuffer.array(), COMPRESS_LEVEL);
    +  }
    +
    +  @Override
    +  public short[] unCompressShort(byte[] compInput, int offset, int length) {
    +    byte[] unCompArray = unCompressByte(compInput, offset, length);
    +    ByteBuffer unCompBuffer = ByteBuffer.wrap(unCompArray);
    +    short[] shorts = new short[unCompArray.length / 2];
    +    for (int i = 0; i < shorts.length; i++) {
    +      shorts[i] = unCompBuffer.getShort();
    +    }
    +    return shorts;
    +  }
    +
    +  @Override
    +  public byte[] compressInt(int[] unCompInput) {
    +    // int use 4 bytes
    +    byte[] unCompArray = new byte[unCompInput.length * 4];
    +    ByteBuffer unCompBuffer = ByteBuffer.wrap(unCompArray);
    +    for (int input : unCompInput) {
    +      unCompBuffer.putInt(input);
    +    }
    +    return Zstd.compress(unCompBuffer.array(), COMPRESS_LEVEL);
    +  }
    +
    +  @Override
    +  public int[] unCompressInt(byte[] compInput, int offset, int length) {
    +    byte[] unCompArray = unCompressByte(compInput, offset, length);
    +    ByteBuffer unCompBuffer = ByteBuffer.wrap(unCompArray);
    +    int[] ints = new int[unCompArray.length / 4];
    +    for (int i = 0; i < ints.length; i++) {
    +      ints[i] = unCompBuffer.getInt();
    +    }
    +    return ints;
    +  }
    --- End diff --
   
    You can try the following code style to convert the uncompressed byte result
    to the target datatype (taking int as an example):
    ```
        byte[] unCompArray = unCompressByte(compInput, offset, length);
        IntBuffer buf = ByteBuffer.wrap(unCompArray).asIntBuffer();
        int[] dest = new int[buf.remaining()];
        buf.get(dest);
        return dest;
    ```
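
    The same buffer-view pattern generalizes to the other primitive types; for
    example, a sketch of the long variant under the same assumptions (with
    java.nio.LongBuffer imported):

    ```
    @Override
    public long[] unCompressLong(byte[] compInput, int offset, int length) {
      byte[] unCompArray = unCompressByte(compInput, offset, length);
      // View the bytes as big-endian longs instead of looping with getLong().
      LongBuffer buf = ByteBuffer.wrap(unCompArray).asLongBuffer();
      long[] dest = new long[buf.remaining()];
      buf.get(dest);
      return dest;
    }
    ```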


---

[GitHub] carbondata pull request #2628: [CARBONDATA-2851] Support zstd as column comp...

Github user kevinjmh commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/2628#discussion_r209545381
 
    --- Diff: core/src/main/java/org/apache/carbondata/core/datastore/compression/ZstdCompressor.java ---
    +  @Override
    +  public byte[] compressLong(long[] unCompInput) {
    +    // long use 8 bytes
    +    byte[] unCompArray = new byte[unCompInput.length * 8];
    +    ByteBuffer unCompBuffer = ByteBuffer.wrap(unCompArray);
    +    for (long input : unCompInput) {
    +      unCompBuffer.putLong(input);
    +    }
    +    return Zstd.compress(unCompBuffer.array(), COMPRESS_LEVEL);
    +  }
    +
    +  @Override
    +  public long[] unCompressLong(byte[] compInput, int offset, int length) {
    +    byte[] unCompArray = unCompressByte(compInput, offset, length);
    +    ByteBuffer unCompBuffer = ByteBuffer.wrap(unCompArray);
    +    long[] longs = new long[unCompArray.length / 8];
    +    for (int i = 0; i < longs.length; i++) {
    +      longs[i] = unCompBuffer.getLong();
    +    }
    +    return longs;
    +  }
    +
    +  @Override
    +  public byte[] compressFloat(float[] unCompInput) {
    +    // float use 4 bytes
    +    byte[] unCompArray = new byte[unCompInput.length * 4];
    +    ByteBuffer unCompBuffer = ByteBuffer.wrap(unCompArray);
    +    for (float input : unCompInput) {
    +      unCompBuffer.putFloat(input);
    +    }
    +    return Zstd.compress(unCompBuffer.array(), COMPRESS_LEVEL);
    +  }
    +
    +  @Override
    +  public float[] unCompressFloat(byte[] compInput, int offset, int length) {
    +    byte[] unCompArray = unCompressByte(compInput, offset, length);
    +    ByteBuffer unCompBuffer = ByteBuffer.wrap(unCompArray);
    +    float[] floats = new float[unCompArray.length / 4];
    +    for (int i = 0; i < floats.length; i++) {
    +      floats[i] = unCompBuffer.getFloat();
    +    }
    +    return floats;
    +  }
    +
    +  @Override
    +  public byte[] compressDouble(double[] unCompInput) {
    +    // double use 8 bytes
    +    byte[] unCompArray = new byte[unCompInput.length * 8];
    +    ByteBuffer unCompBuffer = ByteBuffer.wrap(unCompArray);
    +    for (double input : unCompInput) {
    +      unCompBuffer.putDouble(input);
    +    }
    +    return Zstd.compress(unCompBuffer.array(), COMPRESS_LEVEL);
    +  }
    +
    +  @Override
    +  public double[] unCompressDouble(byte[] compInput, int offset, int length) {
    +    byte[] unCompArray = unCompressByte(compInput, offset, length);
    +    ByteBuffer unCompBuffer = ByteBuffer.wrap(unCompArray);
    +    double[] doubles = new double[unCompArray.length / 8];
    +    for (int i = 0; i < doubles.length; i++) {
    +      doubles[i] = unCompBuffer.getDouble();
    +    }
    +    return doubles;
    +  }
    +
    +  @Override
    +  public long rawCompress(long inputAddress, int inputSize, long outputAddress) throws IOException {
    +    throw new RuntimeException("Not implemented rawCompress for zstd yet");
    +  }
    +
    +  @Override
    +  public long rawUncompress(byte[] input, byte[] output) throws IOException {
    +    return Zstd.decompress(output, input);
    +  }
    +
    +  @Override
    +  public int maxCompressedLength(int inputSize) {
    +    throw new RuntimeException("Not implemented maxCompressedLength for zstd yet");
    +  }
    +
    --- End diff --
   
    Zstd.compressBound
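
    That is, maxCompressedLength could delegate to zstd's worst-case bound. A
    sketch of what this suggestion points at (Zstd.compressBound takes and
    returns long in zstd-jni, so a narrowing cast is needed):

    ```
    @Override
    public int maxCompressedLength(int inputSize) {
      // compressBound returns the worst-case compressed size for this input length.
      return (int) Zstd.compressBound(inputSize);
    }
    ```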


---

[GitHub] carbondata pull request #2628: [CARBONDATA-2851] Support zstd as column comp...

Github user xuchuanyin commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/2628#discussion_r209849039
 
    --- Diff: core/src/main/java/org/apache/carbondata/core/datastore/compression/ZstdCompressor.java ---
    +  @Override
    +  public int maxCompressedLength(int inputSize) {
    +    throw new RuntimeException("Not implemented maxCompressedLength for zstd yet");
    +  }
    +
    --- End diff --
   
    OK~


---

[GitHub] carbondata issue #2628: [CARBONDATA-2851] Support zstd as column compressor ...

Github user CarbonDataQA commented on the issue:

    https://github.com/apache/carbondata/pull/2628
 
    Build Failed with Spark 2.1.0. Please check CI: http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7908/



---

[GitHub] carbondata issue #2628: [CARBONDATA-2851] Support zstd as column compressor ...

Github user CarbonDataQA commented on the issue:

    https://github.com/apache/carbondata/pull/2628
 
    Build Failed with Spark 2.2.1. Please check CI: http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6633/



---