Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/3032
Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/2182/
In reply to this post by qiuchenjian-2
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/3032
Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/2398/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/3032
Build Success with Spark 2.3.2, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/10438/
Github user zzcclp commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/3032#discussion_r245553139

--- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/S3Example.scala ---
@@ -18,52 +18,50 @@
 package org.apache.carbondata.examples

 import java.io.File

-import org.apache.hadoop.fs.s3a.Constants.{ACCESS_KEY, ENDPOINT, SECRET_KEY}
 import org.apache.spark.sql.{Row, SparkSession}
 import org.slf4j.{Logger, LoggerFactory}

-import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.spark.util.CarbonSparkUtil

 object S3Example {

-  /**
-   * This example demonstrate usage of
-   * 1. create carbon table with storage location on object based storage
-   * like AWS S3, Huawei OBS, etc
-   * 2. load data into carbon table, the generated file will be stored on object based storage
-   * query the table.
-   *
-   * @param args require three parameters "Access-key" "Secret-key"
-   * "table-path on s3" "s3-endpoint" "spark-master"
-   */
+   /**
+    * This example demonstrate usage of
+    * 1. create carbon table with storage location on object based storage
+    * like AWS S3, Huawei OBS, etc
+    * 2. load data into carbon table, the generated file will be stored on object based storage
+    * query the table.
+    *
+    * @param args require three parameters "Access-key" "Secret-key"
+    * "table-path on s3" "s3-endpoint" "spark-master"
+    */
--- End diff --

@xiaohui0318 please check the indentation of the comments; one extra leading blank needs to be removed. Check the other comments as well.
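For reference, the alignment being asked for: in idiomatic Scala, a Scaladoc block starts at the same column as the member it documents, and every following `*` lines up directly under the first `*` of the opening `/**`. A minimal sketch (the object and method names here are illustrative, not from the PR):

```scala
object IndentSketch {

  /**
   * Correctly aligned Scaladoc: the opening `/**` sits at the member's
   * indentation level, and each later `*` is directly under the first `*`.
   * One extra leading blank before the whole block breaks this column.
   */
  def documented(): Int = 1
}
```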
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/3032
Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/2185/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/3032
Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/2187/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/3032
Build Success with Spark 2.3.2, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/10441/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/3032
Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/2403/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/3032
Build Success with Spark 2.3.2, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/10443/
Github user xiaohui0318 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/3032#discussion_r245852864

--- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/S3Example.scala ---
@@ -18,52 +18,50 @@
 package org.apache.carbondata.examples

 import java.io.File

-import org.apache.hadoop.fs.s3a.Constants.{ACCESS_KEY, ENDPOINT, SECRET_KEY}
 import org.apache.spark.sql.{Row, SparkSession}
 import org.slf4j.{Logger, LoggerFactory}

-import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.spark.util.CarbonSparkUtil

 object S3Example {

-  /**
-   * This example demonstrate usage of
-   * 1. create carbon table with storage location on object based storage
-   * like AWS S3, Huawei OBS, etc
-   * 2. load data into carbon table, the generated file will be stored on object based storage
-   * query the table.
-   *
-   * @param args require three parameters "Access-key" "Secret-key"
-   * "table-path on s3" "s3-endpoint" "spark-master"
-   */
+   /**
+    * This example demonstrate usage of
+    * 1. create carbon table with storage location on object based storage
+    * like AWS S3, Huawei OBS, etc
+    * 2. load data into carbon table, the generated file will be stored on object based storage
+    * query the table.
+    *
+    * @param args require three parameters "Access-key" "Secret-key"
+    * "table-path on s3" "s3-endpoint" "spark-master"
+    */
--- End diff --

Checked and fixed.
Github user xubo245 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/3032#discussion_r245861440

--- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/S3UsingSDkExample.scala ---
@@ -16,28 +16,26 @@
  */
 package org.apache.carbondata.examples

-import org.apache.hadoop.conf.Configuration
-import org.apache.hadoop.fs.s3a.Constants.{ACCESS_KEY, ENDPOINT, SECRET_KEY}
 import org.apache.spark.sql.SparkSession
 import org.slf4j.{Logger, LoggerFactory}

-import org.apache.carbondata.core.constants.CarbonCommonConstants
 import org.apache.carbondata.core.metadata.datatype.DataTypes
 import org.apache.carbondata.sdk.file.{CarbonWriter, Field, Schema}
+import org.apache.carbondata.spark.util.CarbonSparkUtil

 /**
  * Generate data and write data to S3
  * User can generate different numbers of data by specifying the number-of-rows in parameters
  */
-object S3UsingSDKExample {
+object S3UsingSdkExample {
--- End diff --

Please test it with Huawei OBS.
Github user xubo245 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/3032#discussion_r245861501

--- Diff: integration/spark2/src/main/scala/org/apache/carbondata/spark/thriftserver/CarbonThriftServer.scala ---
@@ -28,12 +28,13 @@
 import org.slf4j.{Logger, LoggerFactory}

 import org.apache.carbondata.common.logging.LogServiceFactory
 import org.apache.carbondata.core.constants.CarbonCommonConstants
 import org.apache.carbondata.core.util.CarbonProperties
+import org.apache.carbondata.spark.util.CarbonSparkUtil

-/**
- * CarbonThriftServer support different modes:
- * 1. read/write data from/to HDFS or local,it only needs configurate storePath
- * 2. read/write data from/to S3, it needs provide access-key, secret-key, s3-endpoint
- */
+ /**
+  * CarbonThriftServer support different modes:
+  * 1. read/write data from/to HDFS or local,it only needs configurate storePath
+  * 2. read/write data from/to S3, it needs provide access-key, secret-key, s3-endpoint
+  */
 object CarbonThriftServer {
--- End diff --

Please test it with Huawei OBS.
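The two modes described in that Scaladoc hinge on the scheme of the store path: an S3 path additionally needs access-key, secret-key, and endpoint, while HDFS/local only needs the path itself. A hedged sketch of that distinction (this helper is illustrative and is not CarbonSparkUtil's actual API; the `s3a`/`s3n`/`s3` prefixes are the standard Hadoop S3 URI schemes):

```scala
object StorePathSketch {
  // Standard Hadoop S3 URI schemes; anything else is treated as HDFS/local.
  private val s3Schemes = Seq("s3a://", "s3n://", "s3://")

  def isS3Path(storePath: String): Boolean =
    s3Schemes.exists(storePath.startsWith)

  // S3 mode needs storePath plus access-key, secret-key and endpoint;
  // HDFS/local mode only needs the storePath argument.
  def requiredArgCount(storePath: String): Int =
    if (isS3Path(storePath)) 4 else 1
}
```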
Github user xubo245 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/3032#discussion_r245861536

--- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/S3Example.scala ---
@@ -18,11 +18,10 @@
 package org.apache.carbondata.examples

 import java.io.File

-import org.apache.hadoop.fs.s3a.Constants.{ACCESS_KEY, ENDPOINT, SECRET_KEY}
 import org.apache.spark.sql.{Row, SparkSession}
 import org.slf4j.{Logger, LoggerFactory}

-import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.spark.util.CarbonSparkUtil

 object S3Example {
--- End diff --

Please test it with Huawei OBS.
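For context on what the swapped-in utility has to supply: the three removed `org.apache.hadoop.fs.s3a.Constants` imports (ACCESS_KEY, SECRET_KEY, ENDPOINT) correspond to the standard Hadoop `fs.s3a.*` configuration keys. A minimal sketch of mapping the example's positional arguments onto those keys — the helper below is hypothetical and not CarbonSparkUtil's actual API; only the `fs.s3a.*` key names are standard:

```scala
object S3ConfSketch {
  // Map the example's positional args onto the standard Hadoop S3A config keys.
  def s3Conf(accessKey: String, secretKey: String, endpoint: String): Map[String, String] =
    Map(
      "fs.s3a.access.key" -> accessKey,
      "fs.s3a.secret.key" -> secretKey,
      "fs.s3a.endpoint"   -> endpoint)
}
```

Entries like these would typically be applied to the Hadoop configuration of the SparkSession before reading or writing an `s3a://` table path.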
Github user xubo245 commented on the issue:
https://github.com/apache/carbondata/pull/3032
retest this please
Github user xiaohui0318 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/3032#discussion_r246029819

--- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/S3Example.scala ---
@@ -18,11 +18,10 @@
 package org.apache.carbondata.examples

 import java.io.File

-import org.apache.hadoop.fs.s3a.Constants.{ACCESS_KEY, ENDPOINT, SECRET_KEY}
 import org.apache.spark.sql.{Row, SparkSession}
 import org.slf4j.{Logger, LoggerFactory}

-import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.spark.util.CarbonSparkUtil

 object S3Example {
--- End diff --

Done.
Github user xiaohui0318 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/3032#discussion_r246029848

--- Diff: integration/spark2/src/main/scala/org/apache/carbondata/spark/thriftserver/CarbonThriftServer.scala ---
@@ -28,12 +28,13 @@
 import org.slf4j.{Logger, LoggerFactory}

 import org.apache.carbondata.common.logging.LogServiceFactory
 import org.apache.carbondata.core.constants.CarbonCommonConstants
 import org.apache.carbondata.core.util.CarbonProperties
+import org.apache.carbondata.spark.util.CarbonSparkUtil

-/**
- * CarbonThriftServer support different modes:
- * 1. read/write data from/to HDFS or local,it only needs configurate storePath
- * 2. read/write data from/to S3, it needs provide access-key, secret-key, s3-endpoint
- */
+ /**
+  * CarbonThriftServer support different modes:
+  * 1. read/write data from/to HDFS or local,it only needs configurate storePath
+  * 2. read/write data from/to S3, it needs provide access-key, secret-key, s3-endpoint
+  */
 object CarbonThriftServer {
--- End diff --

Done.
Github user xiaohui0318 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/3032#discussion_r246029862

--- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/S3UsingSDkExample.scala ---
@@ -16,28 +16,26 @@
  */
 package org.apache.carbondata.examples

-import org.apache.hadoop.conf.Configuration
-import org.apache.hadoop.fs.s3a.Constants.{ACCESS_KEY, ENDPOINT, SECRET_KEY}
 import org.apache.spark.sql.SparkSession
 import org.slf4j.{Logger, LoggerFactory}

-import org.apache.carbondata.core.constants.CarbonCommonConstants
 import org.apache.carbondata.core.metadata.datatype.DataTypes
 import org.apache.carbondata.sdk.file.{CarbonWriter, Field, Schema}
+import org.apache.carbondata.spark.util.CarbonSparkUtil

 /**
  * Generate data and write data to S3
  * User can generate different numbers of data by specifying the number-of-rows in parameters
  */
-object S3UsingSDKExample {
+object S3UsingSdkExample {
--- End diff --

Done.
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/3032
Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/2223/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/3032
Build Failed with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/2442/