Github user chenliang613 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1581#discussion_r155217593

--- Diff: integration/presto/pom.xml ---
@@ -431,14 +435,27 @@
     </dependency>
     <dependency>
       <groupId>org.apache.spark</groupId>
-      <artifactId>spark-sql_2.11</artifactId>
+      <artifactId>spark-network-common_2.11</artifactId>
--- End diff --

ok, it is only for test cases, right?

---
In reply to this post by qiuchenjian-2
Github user chenliang613 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1581#discussion_r155218744

--- Diff: integration/presto/src/main/java/org/apache/carbondata/presto/CarbonColumnVectorImpl.java ---
@@ -0,0 +1,249 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
--- End diff --

Please keep the license header consistent with the other classes.

---
Github user chenliang613 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1581#discussion_r155219726

--- Diff: integration/presto/src/main/java/org/apache/carbondata/presto/CarbonVectorBatch.java ---
@@ -0,0 +1,112 @@
+package org.apache.carbondata.presto;
--- End diff --

missing license header

---
Github user chenliang613 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1581#discussion_r155220673

--- Diff: integration/presto/src/main/java/org/apache/carbondata/presto/CarbonVectorBatch.java ---
@@ -0,0 +1,112 @@
+package org.apache.carbondata.presto;
+
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.Set;
+
+public class CarbonVectorBatch {
+
+  private static final int DEFAULT_BATCH_SIZE = 1024;
--- End diff --

why 1024? For byteArr = new byte[batchSize], can it be set smaller or not?

---
Github user bhavya411 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1581#discussion_r155435994

--- Diff: integration/presto/pom.xml ---
@@ -431,14 +435,27 @@
     </dependency>
     <dependency>
       <groupId>org.apache.spark</groupId>
-      <artifactId>spark-sql_2.11</artifactId>
+      <artifactId>spark-network-common_2.11</artifactId>
--- End diff --

Yes, it is used only in tests; we have defined the scope as test for all these dependencies.

---
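The test scope the reply refers to looks like this in a Maven pom (a generic sketch; the version property name is illustrative, not necessarily the one used in this pom):

```xml
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-network-common_2.11</artifactId>
  <version>${spark.version}</version>
  <!-- test scope: visible to test compilation and the test classpath only,
       not packaged with or required by the connector at runtime -->
  <scope>test</scope>
</dependency>
```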
Github user bhavya411 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1581#discussion_r155438974

--- Diff: integration/presto/src/main/java/org/apache/carbondata/presto/CarbonVectorBatch.java ---
@@ -0,0 +1,112 @@
+package org.apache.carbondata.presto;
+
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.Set;
+
+public class CarbonVectorBatch {
+
+  private static final int DEFAULT_BATCH_SIZE = 1024;
--- End diff --

The batchSize is the number of records in the complete batch, so every column buffer has to be sized to hold that many values; it has to be equivalent to the batch size.

---
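The sizing constraint described in this reply can be sketched as follows (a simplified, hypothetical stand-in for CarbonVectorBatch; the class and field names are illustrative): each per-column buffer must allocate one slot per row in the batch, so its length is tied to the batch size.

```java
// Minimal sketch: a batch holds up to `batchSize` rows, and every
// column buffer is allocated to hold exactly that many values.
public class Main {
    static final int DEFAULT_BATCH_SIZE = 1024;

    static class VectorBatch {
        final int capacity;          // number of rows per batch
        final byte[][] columnData;   // one buffer per column

        VectorBatch(int numColumns, int batchSize) {
            this.capacity = batchSize;
            this.columnData = new byte[numColumns][];
            for (int i = 0; i < numColumns; i++) {
                // each column buffer holds one value slot per row,
                // so its length must equal the batch size
                this.columnData[i] = new byte[batchSize];
            }
        }
    }

    public static void main(String[] args) {
        VectorBatch batch = new VectorBatch(3, DEFAULT_BATCH_SIZE);
        System.out.println(batch.capacity);             // 1024
        System.out.println(batch.columnData[0].length); // 1024
    }
}
```

A smaller batch size would reduce per-column memory but mean more batches per scan; 1024 is a common default trade-off.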
Github user chenliang613 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1581#discussion_r155439723

--- Diff: integration/presto/src/main/java/org/apache/carbondata/presto/CarbondataConnector.java ---
@@ -31,13 +31,13 @@
   private static final Logger log = Logger.get(CarbondataConnector.class);
 
   private final LifeCycleManager lifeCycleManager;
-  private final CarbondataMetadata metadata;
+  private final ConnectorMetadata metadata;
--- End diff --

what is the difference between using CarbondataMetadata and ConnectorMetadata?

---
Github user chenliang613 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1581#discussion_r155440197

--- Diff: integration/presto/src/main/java/org/apache/carbondata/presto/impl/CarbonTableReader.java ---
@@ -95,12 +99,12 @@
    * A cache for Carbon reader, with this cache,
    * metadata of a table is only read from file system once.
    */
-  private ConcurrentHashMap<SchemaTableName, CarbonTableCacheModel> cc;
+  private AtomicReference<HashMap<SchemaTableName, CarbonTableCacheModel>> cc;
--- End diff --

the name "cc" also needs to change.

---
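The diff above swaps a ConcurrentHashMap for an AtomicReference holding a plain HashMap, which implies a copy-on-write update pattern. A hedged sketch of that pattern, with a descriptive field name in place of "cc" (the key type and names here are illustrative, not the PR's):

```java
import java.util.HashMap;
import java.util.concurrent.atomic.AtomicReference;

public class Main {
    static class TableCacheModel {
        final String tableName;
        TableCacheModel(String tableName) { this.tableName = tableName; }
    }

    // descriptive name instead of "cc"; the whole map is swapped atomically
    static final AtomicReference<HashMap<String, TableCacheModel>> carbonCache =
        new AtomicReference<>(new HashMap<>());

    // copy-on-write update: readers always see a complete, immutable snapshot,
    // never a partially populated map
    static void put(String key, TableCacheModel model) {
        HashMap<String, TableCacheModel> current;
        HashMap<String, TableCacheModel> updated;
        do {
            current = carbonCache.get();
            updated = new HashMap<>(current);
            updated.put(key, model);
        } while (!carbonCache.compareAndSet(current, updated));
    }

    public static void main(String[] args) {
        put("default.t1", new TableCacheModel("t1"));
        System.out.println(carbonCache.get().containsKey("default.t1")); // true
    }
}
```

Compared with ConcurrentHashMap, this trades per-entry locking for cheap lock-free reads at the cost of copying the map on each write, which suits a read-heavy metadata cache.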
Github user bhavya411 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1581#discussion_r155492390

--- Diff: integration/presto/src/main/java/org/apache/carbondata/presto/CarbondataConnector.java ---
@@ -31,13 +31,13 @@
   private static final Logger log = Logger.get(CarbondataConnector.class);
 
   private final LifeCycleManager lifeCycleManager;
-  private final CarbondataMetadata metadata;
+  private final ConnectorMetadata metadata;
--- End diff --

CarbondataMetadata extends ConnectorMetadata, and we should always code to the interface, hence this change. This way, for concurrency, we also follow the standard Presto protocol and provide a ClassLoaderSafeConnectorMetadata.

---
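The "code to the interface" point can be sketched like this (the interfaces below are simplified stand-ins; the real Presto SPI types have larger signatures): typing the field as the interface lets the connector hold either the concrete metadata class or a classloader-safe wrapper around it.

```java
public class Main {
    // simplified stand-in for the Presto SPI interface
    interface ConnectorMetadata {
        String listSchemaNames();
    }

    static class CarbondataMetadata implements ConnectorMetadata {
        public String listSchemaNames() { return "carbon_schemas"; }
    }

    // in the spirit of ClassLoaderSafeConnectorMetadata: a delegating wrapper
    // that also implements the interface, so it is interchangeable with the
    // concrete class wherever the interface type is used
    static class ClassLoaderSafeMetadata implements ConnectorMetadata {
        private final ConnectorMetadata delegate;
        ClassLoaderSafeMetadata(ConnectorMetadata delegate) { this.delegate = delegate; }
        public String listSchemaNames() { return delegate.listSchemaNames(); }
    }

    // the connector depends only on the interface, never the concrete class
    private final ConnectorMetadata metadata;
    Main(ConnectorMetadata metadata) { this.metadata = metadata; }
    String schemas() { return metadata.listSchemaNames(); }

    public static void main(String[] args) {
        Main connector = new Main(new ClassLoaderSafeMetadata(new CarbondataMetadata()));
        System.out.println(connector.schemas()); // carbon_schemas
    }
}
```

Had the field stayed typed as CarbondataMetadata, the wrapper could not be substituted without changing the connector.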
Github user chenliang613 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1581#discussion_r156017061

--- Diff: integration/presto/src/main/java/org/apache/carbondata/presto/CarbondataConnector.java ---
@@ -31,13 +31,13 @@
   private static final Logger log = Logger.get(CarbondataConnector.class);
 
   private final LifeCycleManager lifeCycleManager;
-  private final CarbondataMetadata metadata;
+  private final ConnectorMetadata metadata;
--- End diff --

ok

---
Github user bhavya411 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1581#discussion_r156023011

--- Diff: core/src/main/java/org/apache/carbondata/core/scan/collector/impl/RestructureBasedVectorResultCollector.java ---
@@ -238,7 +238,7 @@ private void fillDataForNonExistingMeasures() {
             (long) defaultValue);
       } else if (DataTypes.isDecimal(dataType)) {
         vector.putDecimals(columnVectorInfo.vectorOffset, columnVectorInfo.size,
-            (Decimal) defaultValue, measure.getPrecision());
+            ((Decimal) defaultValue).toJavaBigDecimal(), measure.getPrecision());
--- End diff --

We cannot remove these imports, as the data that is returned is stored in the Spark format; the casting is done to convert it to generic types.

---
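The diff converts Spark's Decimal to a java.math.BigDecimal via toJavaBigDecimal() so that the vector interface can stay Spark-agnostic. A sketch of the receiving side, using only the JDK (the putDecimals helper here is a hypothetical stand-in for the vector method, not the project's actual signature):

```java
import java.math.BigDecimal;

public class Main {
    // hypothetical stand-in for vector.putDecimals(offset, size, value, precision):
    // fills `size` rows of a decimal column with the default measure value
    static BigDecimal[] putDecimals(int size, BigDecimal defaultValue, int precision) {
        if (defaultValue.precision() > precision) {
            // the stored value must fit the measure's declared precision
            throw new IllegalArgumentException("value exceeds declared precision");
        }
        BigDecimal[] column = new BigDecimal[size];
        for (int i = 0; i < size; i++) {
            column[i] = defaultValue;
        }
        return column;
    }

    public static void main(String[] args) {
        // in the PR this value would come from ((Decimal) defaultValue).toJavaBigDecimal()
        BigDecimal defaultValue = new BigDecimal("123.45");
        BigDecimal[] column = putDecimals(4, defaultValue, 5);
        System.out.println(column.length);         // 4
        System.out.println(column[0].precision()); // 5
    }
}
```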
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1581

Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/646/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1581

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1875/

---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1581

SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2221/

---
Github user chenliang613 commented on the issue:
https://github.com/apache/carbondata/pull/1581

retest this please

---
Github user chenliang613 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1581#discussion_r156553976

--- Diff: core/src/main/java/org/apache/carbondata/core/scan/collector/impl/RestructureBasedVectorResultCollector.java ---
@@ -238,7 +238,7 @@ private void fillDataForNonExistingMeasures() {
             (long) defaultValue);
       } else if (DataTypes.isDecimal(dataType)) {
         vector.putDecimals(columnVectorInfo.vectorOffset, columnVectorInfo.size,
-            (Decimal) defaultValue, measure.getPrecision());
+            ((Decimal) defaultValue).toJavaBigDecimal(), measure.getPrecision());
--- End diff --

ok, this is the core module.

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1581

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1907/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1581

Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/678/

---
Github user chenliang613 commented on the issue:
https://github.com/apache/carbondata/pull/1581

LGTM

---