Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2490 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7184/
In reply to this post by qiuchenjian-2
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2490 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/5960/
Github user xuchuanyin commented on the issue:
https://github.com/apache/carbondata/pull/2490 LGTM
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2490#discussion_r202705370

--- Diff: core/src/main/java/org/apache/carbondata/core/scan/collector/impl/RestructureBasedRowIdRawResultCollector.java ---

@@ -0,0 +1,265 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.carbondata.core.scan.collector.impl;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.carbondata.common.annotations.InterfaceAudience;
+import org.apache.carbondata.core.constants.CarbonCommonConstants;
+import org.apache.carbondata.core.datastore.block.SegmentProperties;
+import org.apache.carbondata.core.keygenerator.KeyGenException;
+import org.apache.carbondata.core.keygenerator.KeyGenerator;
+import org.apache.carbondata.core.keygenerator.mdkey.MultiDimKeyVarLengthGenerator;
+import org.apache.carbondata.core.metadata.datatype.DataTypes;
+import org.apache.carbondata.core.metadata.encoder.Encoding;
+import org.apache.carbondata.core.metadata.schema.table.column.CarbonDimension;
+import org.apache.carbondata.core.scan.executor.infos.BlockExecutionInfo;
+import org.apache.carbondata.core.scan.model.ProjectionDimension;
+import org.apache.carbondata.core.scan.model.ProjectionMeasure;
+import org.apache.carbondata.core.scan.result.BlockletScannedResult;
+import org.apache.carbondata.core.scan.wrappers.ByteArrayWrapper;
+import org.apache.carbondata.core.stats.QueryStatistic;
+import org.apache.carbondata.core.stats.QueryStatisticsConstants;
+import org.apache.carbondata.core.util.CarbonUtil;
+import org.apache.carbondata.core.util.DataTypeUtil;
+
+import org.apache.commons.lang3.ArrayUtils;
+
+/**
+ * It is not a collector it is just a scanned result holder.
+ * most of the lines are copyied from `RestructureBasedRawResultCollector`, the difference in
+ * function is that this class return all the dimensions in a ByteArrayWrapper and append
+ * blockletNo/PageId/RowId at end of the row.
+ * This implementation refers to `RestructureBasedRawResultCollector`
+ */
+@InterfaceAudience.Internal
+public class RestructureBasedRowIdRawResultCollector extends RowIdRawBasedResultCollector {

--- End diff ---

There is a lot of code duplicated from `RestructureBasedRawResultCollector`, which is not good for maintenance. Can you optimize it?
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2490#discussion_r202706208

--- Diff: datamap/bloom/src/main/java/org/apache/carbondata/datamap/bloom/AbstractBloomDataMapWriter.java ---

@@ -281,4 +287,13 @@ protected void releaseResouce() {
     }
   }
+
+  /**
+   * BloomDataMapBuilder(called when datamap rebuild) is set to true;
+   * BloomDataMapWriter(called when data load) is set to false;
+   *
+   * The reason for this is dict index column is already decoded to surrogate key when rebuild
+   * but it is still byte array if build datamap when loading
+   */
+  abstract boolean isRebuildProcess();

--- End diff ---

@xuchuanyin has merged the Writer and the Rebuilder for datamaps; if you add this function, it is still like having different implementations for the Writer and the Rebuilder.
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2490#discussion_r202706832

--- Diff: datamap/bloom/src/main/java/org/apache/carbondata/datamap/bloom/AbstractBloomDataMapWriter.java ---

@@ -281,4 +287,13 @@ protected void releaseResouce() {
     }
   }
+
+  /**
+   * BloomDataMapBuilder(called when datamap rebuild) is set to true;
+   * BloomDataMapWriter(called when data load) is set to false;
+   *
+   * The reason for this is dict index column is already decoded to surrogate key when rebuild
+   * but it is still byte array if build datamap when loading
+   */
+  abstract boolean isRebuildProcess();

--- End diff ---

It is better to abstract the difference in behavior instead of adding this flag.
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2490#discussion_r202707191

--- Diff: integration/spark2/src/test/scala/org/apache/carbondata/datamap/bloom/BloomCoarseGrainDataMapSuite.scala ---

@@ -377,6 +377,102 @@ class BloomCoarseGrainDataMapSuite extends QueryTest with BeforeAndAfterAll with
     checkQuery("fakeDm", shouldHit = false)
   }
+
+  test("test create bloom datamap on newly added column") {

--- End diff ---

Can this PR work for the Lucene datamap? I think it should be a general fix for all index datamaps, right?
Github user kevinjmh commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2490#discussion_r202883458

--- Diff: integration/spark2/src/test/scala/org/apache/carbondata/datamap/bloom/BloomCoarseGrainDataMapSuite.scala ---

@@ -377,6 +377,102 @@ class BloomCoarseGrainDataMapSuite extends QueryTest with BeforeAndAfterAll with
     checkQuery("fakeDm", shouldHit = false)
   }
+
+  test("test create bloom datamap on newly added column") {

--- End diff ---

Mostly yes; it depends on how Lucene deals with dictionary columns. This PR enables getting the default value of a newly added column via a new result collector in `IndexDataMapRebuildRDD`; the other changes handle getting the surrogate value of dictionary columns, relying on PR 2425.
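The restructure mechanism described above, substituting the column's declared default value when old segments carry no data for a newly added column, can be sketched roughly as follows. `NewColumnDefaultFiller` and its methods are hypothetical simplifications for illustration, not the actual `IndexDataMapRebuildRDD` API.

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative stand-in for the default-value handling in the rebuild flow. */
class NewColumnDefaultFiller {
    private final Map<String, Object> defaults = new HashMap<>();

    /** Record the default value declared when the column was added. */
    void registerDefault(String column, Object defaultValue) {
        defaults.put(column, defaultValue);
    }

    /** Old segments store nothing for the new column, so fall back to the default. */
    Object resolve(String column, Object storedValue) {
        return storedValue != null ? storedValue : defaults.get(column);
    }
}
```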
Github user brijoobopanna commented on the issue:
https://github.com/apache/carbondata/pull/2490 retest sdv please
Github user kevinjmh commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2490#discussion_r202886338

--- Diff: datamap/bloom/src/main/java/org/apache/carbondata/datamap/bloom/AbstractBloomDataMapWriter.java ---

@@ -281,4 +287,13 @@ protected void releaseResouce() {
     }
   }
+
+  /**
+   * BloomDataMapBuilder(called when datamap rebuild) is set to true;
+   * BloomDataMapWriter(called when data load) is set to false;
+   *
+   * The reason for this is dict index column is already decoded to surrogate key when rebuild
+   * but it is still byte array if build datamap when loading
+   */
+  abstract boolean isRebuildProcess();

--- End diff ---

Fixed by introducing the abstract method `convertDictionaryValue`.
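The template-method fix agreed on here can be sketched as below: the `isRebuildProcess()` boolean is replaced by an abstract hook that each subclass implements for its own input format. These simplified classes are illustrative only; the real `AbstractBloomDataMapWriter` hierarchy has different signatures.

```java
import java.nio.ByteBuffer;

abstract class DictConvertingWriter {
    // Hook: each subclass converts a dictionary column value to its surrogate key.
    abstract int convertDictionaryValue(Object value);

    // Shared indexing logic stays in the base class and calls the hook.
    int surrogateOf(Object value) {
        return convertDictionaryValue(value);
    }
}

// Rebuild path: the scan has already decoded the value to a surrogate key.
class RebuildWriter extends DictConvertingWriter {
    @Override
    int convertDictionaryValue(Object value) {
        return (Integer) value;
    }
}

// Load path: the value is still the raw byte array holding the key.
class LoadWriter extends DictConvertingWriter {
    @Override
    int convertDictionaryValue(Object value) {
        return ByteBuffer.wrap((byte[]) value).getInt();
    }
}
```

This keeps one code path in the base class while isolating the rebuild/load difference in a single overridden method, instead of branching on a mode flag.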
Github user kevinjmh commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2490#discussion_r202886715

--- Diff: core/src/main/java/org/apache/carbondata/core/scan/collector/impl/RestructureBasedRowIdRawResultCollector.java ---

@@ -0,0 +1,265 @@
+/**
+ * It is not a collector it is just a scanned result holder.
+ * most of the lines are copyied from `RestructureBasedRawResultCollector`, the difference in
+ * function is that this class return all the dimensions in a ByteArrayWrapper and append
+ * blockletNo/PageId/RowId at end of the row.
+ * This implementation refers to `RestructureBasedRawResultCollector`
+ */
+@InterfaceAudience.Internal
+public class RestructureBasedRowIdRawResultCollector extends RowIdRawBasedResultCollector {

--- End diff ---

Changed to extend `RestructureBasedRawResultCollector` and add the row id info when filling data.
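The inheritance change kevinjmh describes can be sketched as follows: instead of duplicating the restructure collector, the row-id collector extends it and only appends blockletNo/pageId/rowId when filling each row. The classes below are simplified stand-ins, not the actual CarbonData collector signatures.

```java
import java.util.Arrays;

/** Simplified stand-in for RestructureBasedRawResultCollector's fill logic. */
class RestructureCollectorSketch {
    protected Object[] fillRow(Object[] projection) {
        // Parent fills the projected dimensions/measures (restructure handling elided).
        return projection.clone();
    }
}

/** Subclass reuses the parent's fill and only appends blockletNo/pageId/rowId. */
class RowIdCollectorSketch extends RestructureCollectorSketch {
    Object[] fillRowWithRowId(Object[] projection, int blockletNo, int pageId, int rowId) {
        Object[] base = fillRow(projection);
        Object[] withId = Arrays.copyOf(base, base.length + 3);
        withId[base.length] = blockletNo;
        withId[base.length + 1] = pageId;
        withId[base.length + 2] = rowId;
        return withId;
    }
}
```

Reusing the parent's fill logic removes the duplicated restructure code that the earlier review comment flagged.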
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2490 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7230/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2490 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6003/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2490 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7243/
Github user brijoobopanna commented on the issue:
https://github.com/apache/carbondata/pull/2490 retest sdv please
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2490 SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/5886/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2490 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6017/
Github user xuchuanyin commented on the issue:
https://github.com/apache/carbondata/pull/2490 LGTM
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2490 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7255/