GitHub user ravipesala opened a pull request:
https://github.com/apache/carbondata/pull/2324

[CARBONDATA-2496] Changed to hadoop bloom implementation and added compress option to compress bloom on disk.

This PR removes the guava bloom and adds the hadoop bloom. It also adds a compress option to compress the bloom on disk and in memory. The user can use the `bloom_compress` property to enable/disable compression; by default it is enabled.

Bloom performance: loaded 100 million rows with a bloom datamap on a column with a cardinality of 5 million, using 'BLOOM_SIZE'='5000000', 'bloom_fpp'='0.001'.

Guava
-----------------------------
DataMap Size : 233.6 MB
First Read   : 4.981 sec
Second Read  : 0.541 sec

Hadoop
-----------------------------------
DataMap Size : 224.7 MB
First Read   : 4.829 sec
Second Read  : 0.555 sec

Hadoop with compression
------------------------------------
DataMap Size : 98.8 MB
First Read   : 4.984 sec
Second Read  : 0.621 sec

As per the above readings, the compressed Hadoop implementation gives optimal performance in terms of both space and read time, which is why compression is enabled by default. RoaringBitmap is used internally to compress the bloom, so it is space efficient not only on disk but also in memory.

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:

- [ ] Any interfaces changed?
- [ ] Any backward compatibility impacted?
- [ ] Document update required?
- [ ] Testing done
      Please provide details on
      - Whether new unit test cases have been added or why no new tests are required?
      - How it is tested? Please attach test report.
      - Is it a performance related change? Please attach the performance test report.
      - Any additional information to help reviewers in testing this change.
- [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.
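For reference, the `BLOOM_SIZE`/`bloom_fpp` pair above maps to a bit-vector size and hash count via the standard bloom-filter formulas. A minimal sketch of that arithmetic (these are the textbook formulas, not necessarily the exact sizing code in this PR):

```java
// Textbook bloom-filter sizing for the benchmark settings above
// (n = 5,000,000 distinct keys, p = 0.001 false-positive rate).
public class BloomSizing {

  // Optimal bit-vector size: m = -n * ln(p) / (ln 2)^2
  static long optimalBits(long n, double p) {
    return (long) Math.ceil(-n * Math.log(p) / (Math.log(2) * Math.log(2)));
  }

  // Optimal number of hash functions: k = (m / n) * ln 2
  static int optimalHashes(long m, long n) {
    return (int) Math.round((double) m / n * Math.log(2));
  }

  public static void main(String[] args) {
    long n = 5_000_000L;  // expected distinct keys (BLOOM_SIZE)
    double p = 0.001;     // target false-positive probability (bloom_fpp)
    long m = optimalBits(n, p);
    System.out.println("bits = " + m + " (~" + m / 8 / 1024 / 1024
        + " MB bit vector), hashes = " + optimalHashes(m, n));
  }
}
```

With these settings a single uncompressed filter needs roughly 8.6 MB of bit vector, which is why a sparse-bitmap compression such as RoaringBitmap pays off when only a fraction of the bits are set.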
You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/ravipesala/incubator-carbondata bloom-change1

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/2324.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2324

----
commit 5bf9726bd5f5703bb8ef51abf6a6c75bc5423586
Author: ravipesala <ravi.pesala@...>
Date: 2018-05-20T16:22:57Z

    Changed to hadoop bloom implementation and added compress option to compress bloom on disk.

----
---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2324 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/4824/ ---
In reply to this post by qiuchenjian-2
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2324 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/5982/ ---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2324 SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/5005/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2324 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/5983/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2324 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/4825/ ---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2324 SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/5006/ ---
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2324#discussion_r189477234

--- Diff: datamap/bloom/src/main/java/org/apache/carbondata/datamap/bloom/BloomCoarseGrainDataMapFactory.java ---
@@ -66,22 +66,32 @@
    */
   private static final String BLOOM_SIZE = "bloom_size";
   /**
-   * default size for bloom filter: suppose one blocklet contains 20 pages
-   * and all the indexed value is distinct.
+   * default size for bloom filter, cardinality of the column.
    */
-  private static final int DEFAULT_BLOOM_FILTER_SIZE = 32000 * 20;
+  private static final int DEFAULT_BLOOM_FILTER_SIZE = Short.MAX_VALUE;
--- End diff --

Can you make a page size constant and use it, so that we can easily change it later when we make the page size configurable
---
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2324#discussion_r189477491

--- Diff: datamap/bloom/src/main/java/org/apache/hadoop/util/bloom/CarbonBloomFilter.java ---
@@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.util.bloom;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.BitSet;
+
+import org.roaringbitmap.RoaringBitmap;
+
+public class CarbonBloomFilter extends BloomFilter {
--- End diff --

Please add comment
---
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2324#discussion_r189494115

--- Diff: datamap/bloom/src/main/java/org/apache/carbondata/datamap/bloom/BloomCoarseGrainDataMapFactory.java ---
@@ -66,22 +66,32 @@
    */
   private static final String BLOOM_SIZE = "bloom_size";
   /**
-   * default size for bloom filter: suppose one blocklet contains 20 pages
-   * and all the indexed value is distinct.
+   * default size for bloom filter, cardinality of the column.
    */
-  private static final int DEFAULT_BLOOM_FILTER_SIZE = 32000 * 20;
+  private static final int DEFAULT_BLOOM_FILTER_SIZE = Short.MAX_VALUE;
--- End diff --

ok
---
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2324#discussion_r189494121

--- Diff: datamap/bloom/src/main/java/org/apache/hadoop/util/bloom/CarbonBloomFilter.java ---
@@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.util.bloom;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.BitSet;
+
+import org.roaringbitmap.RoaringBitmap;
+
+public class CarbonBloomFilter extends BloomFilter {
--- End diff --

ok
---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2324 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/4830/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2324 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/5989/ ---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2324 SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/5010/ ---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2324 SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/5014/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2324 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/4841/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2324 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/6000/ ---
Github user xuchuanyin commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2324#discussion_r189611278

--- Diff: datamap/bloom/src/main/java/org/apache/hadoop/util/bloom/CarbonBloomFilter.java ---
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.util.bloom;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.BitSet;
+
+import org.roaringbitmap.RoaringBitmap;
+
+/**
+ * It is the extendable class to hadoop bloomfilter, it is extendable to implement compressed bloom
+ * and fast serialize and deserialize of bloom.
+ */
+public class CarbonBloomFilter extends BloomFilter {
+
+  private RoaringBitmap bitmap;
+
+  private boolean compress;
+
+  public CarbonBloomFilter() {
+  }
+
+  public CarbonBloomFilter(int vectorSize, int nbHash, int hashType, boolean compress) {
+    super(vectorSize, nbHash, hashType);
+    this.compress = compress;
+  }
+
+  @Override
+  public boolean membershipTest(Key key) {
+    if (key == null) {
+      throw new NullPointerException("key cannot be null");
+    }
+
+    int[] h = hash.hash(key);
+    hash.clear();
+    if (compress) {
+      // If it is compressed chek in roaring bitmap
--- End diff --

chek?
---
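The compressed membership test discussed in this diff can be sketched without the RoaringBitmap dependency. In the sketch below a `TreeSet` of set bit positions stands in for `org.roaringbitmap.RoaringBitmap`, and the hash family is a toy one, so this is illustrative only, not the PR's actual implementation:

```java
import java.util.BitSet;
import java.util.TreeSet;

// Illustrative sketch: hash positions are checked against either a plain
// BitSet or a "compressed" sorted set of positions (stand-in for RoaringBitmap).
public class CompressedBloomSketch {
  private final BitSet bits;
  private final TreeSet<Integer> compressedBits; // stand-in for RoaringBitmap
  private final boolean compress;
  private final int vectorSize;
  private final int nbHash;

  public CompressedBloomSketch(int vectorSize, int nbHash, boolean compress) {
    this.vectorSize = vectorSize;
    this.nbHash = nbHash;
    this.compress = compress;
    this.bits = compress ? null : new BitSet(vectorSize);
    this.compressedBits = compress ? new TreeSet<>() : null;
  }

  // Toy hash family, NOT Hadoop's MurmurHash -- just enough for the sketch.
  private int[] hash(byte[] key) {
    int[] h = new int[nbHash];
    int acc = 1;
    for (int i = 0; i < nbHash; i++) {
      for (byte b : key) acc = acc * 31 + b + i;
      h[i] = Math.abs(acc % vectorSize);
    }
    return h;
  }

  public void add(byte[] key) {
    for (int pos : hash(key)) {
      if (compress) compressedBits.add(pos); else bits.set(pos);
    }
  }

  public boolean membershipTest(byte[] key) {
    for (int pos : hash(key)) {
      // If compressed, check the position set; otherwise the raw BitSet.
      boolean set = compress ? compressedBits.contains(pos) : bits.get(pos);
      if (!set) return false; // definitely absent
    }
    return true;              // possibly present
  }
}
```

The design point is that both representations answer the same "is bit i set?" question; the compressed one trades a slightly slower lookup for far less memory when the bit vector is sparse, which matches the size/read numbers in the PR description.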
Github user xuchuanyin commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2324#discussion_r189609624

--- Diff: datamap/bloom/src/main/java/org/apache/carbondata/datamap/bloom/BloomDataMapCache.java ---
@@ -133,15 +132,14 @@ private int validateAndGetCacheSize() {
    */
   private List<BloomDMModel> loadBloomDataMapModel(CacheKey cacheKey) {
     DataInputStream dataInStream = null;
-    ObjectInputStream objectInStream = null;
     List<BloomDMModel> bloomDMModels = new ArrayList<BloomDMModel>();
     try {
       String indexFile = getIndexFileFromCacheKey(cacheKey);
       dataInStream = FileFactory.getDataInputStream(indexFile, FileFactory.getFileType(indexFile));
-      objectInStream = new ObjectInputStream(dataInStream);
       try {
-        BloomDMModel model = null;
-        while ((model = (BloomDMModel) objectInStream.readObject()) != null) {
+        while (dataInStream.available() > 0) {
--- End diff --

I've checked the `available` method when I contributed these lines of code and found it's not suitable here. Better to keep them as they are. Maybe you can check it again.
---
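The reviewer's caution is well founded: `InputStream.available()` only estimates how many bytes can be read without blocking, and may legitimately return 0 before end-of-stream on buffered, remote, or compressed streams. One common alternative, sketched here with hypothetical helper names (not the PR's code), is to prefix the stream with a record count and loop on that instead of probing for EOF:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

// Count-prefixed records: the reader knows exactly how many to read,
// so it never depends on available()'s non-blocking estimate.
public class RecordStreamDemo {

  static byte[] writeRecords(int[] values) {
    try {
      ByteArrayOutputStream bos = new ByteArrayOutputStream();
      DataOutputStream out = new DataOutputStream(bos);
      out.writeInt(values.length);          // count prefix up front
      for (int v : values) out.writeInt(v);
      out.close();
      return bos.toByteArray();
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  static int[] readRecords(byte[] data) {
    try {
      DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
      int n = in.readInt();                 // read the count, then loop on it
      int[] values = new int[n];
      for (int i = 0; i < n; i++) values[i] = in.readInt();
      return values;
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }
}
```

Another workable pattern is reading until `readInt()`/`readFully()` throws `EOFException`, but an explicit count also guards against truncated files.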
Github user xuchuanyin commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2324#discussion_r189612466

--- Diff: datamap/bloom/src/main/java/org/apache/carbondata/datamap/bloom/BloomDMModel.java ---
@@ -40,15 +46,29 @@ public int getBlockletNo() {
     return blockletNo;
   }

-  public BloomFilter<byte[]> getBloomFilter() {
+  public BloomFilter getBloomFilter() {
--- End diff --

return CarbonBloomFilter
---
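The suggestion to return `CarbonBloomFilter` rather than the base `BloomFilter` is the usual Java narrowing of a declared return type, which spares callers a downcast. A minimal sketch with hypothetical class names (the real classes live in the PR's diffs above):

```java
// Hypothetical stand-ins for the PR's BloomFilter / CarbonBloomFilter pair.
class BaseBloomFilter { }

class CarbonFilter extends BaseBloomFilter {
  boolean isCompressed() { return true; }
}

class Model {
  private final CarbonFilter filter = new CarbonFilter();

  // Declaring the narrower CarbonFilter type means callers can use
  // subclass-only methods without a cast.
  CarbonFilter getBloomFilter() { return filter; }
}
```

With the narrower declaration, `new Model().getBloomFilter().isCompressed()` compiles directly; had the getter declared `BaseBloomFilter`, every caller needing the compression flag would have to cast first.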