GitHub user xiaohui0318 opened a pull request:
https://github.com/apache/carbondata/pull/3032 [CARBONDATA-3210] merge getKeyOnPrefix into CarbonSparkUtil

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:
- [ ] Any interfaces changed?
- [ ] Any backward compatibility impacted?
- [ ] Document update required?
- [ ] Testing done. Please provide details on
      - Whether new unit test cases have been added or why no new tests are required?
      - How it is tested? Please attach the test report.
      - Is it a performance related change? Please attach the performance test report.
      - Any additional information to help reviewers in testing this change.
- [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:
    $ git pull https://github.com/xiaohui0318/carbondata master
Alternatively you can review and apply these changes as the patch at:
    https://github.com/apache/carbondata/pull/3032.patch
To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:
    This closes #3032

----
commit b93f75f8a150c7cc9971c68a219ae38c31f22a3f
Author: xiaohui0318 <245300759@...>
Date: 2018-12-28T06:08:54Z
    test commit

commit c34e88a9ad24773ec35cdefbeb7156268176b27b
Author: xiaohui0318 <245300759@...>
Date: 2018-12-28T08:23:13Z
    org.apache.carbondata.examples.S3UsingSDKExample#getKeyOnPrefix
    org.apache.carbondata.examples.S3Example$#getKeyOnPrefix
    org.apache.carbondata.spark.thriftserver.CarbonThriftServer#getKeyOnPrefix
    The getKeyOnPrefix method in these three classes is merged into
    spark2/src/main/scala/org/apache/carbondata/spark/util/CarbonSparkUtil.scala
----
--- |
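A minimal sketch of the intended call-site change after the merge (the bucket path is a placeholder and only the relevant lines are shown; this is an illustration, not part of the PR diff):

    // Hypothetical caller (e.g. S3Example) after the refactor: instead of keeping
    // a private copy of getKeyOnPrefix, it resolves the S3 property-key names
    // through the shared utility in the spark2 integration module.
    import org.apache.carbondata.spark.util.CarbonSparkUtil

    val (accessKey, secretKey, endpoint) = CarbonSparkUtil.getKeyOnPrefix("s3a://my-bucket/carbon")
    // The tuple holds Spark config *keys* (e.g. "spark.hadoop.fs.s3a.access.key");
    // the caller still supplies the actual credentials, e.g. .config(accessKey, args(0)).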
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/3032 Can one of the admins verify this patch? --- |
Github user qiuchenjian commented on the issue:
https://github.com/apache/carbondata/pull/3032 Please describe the change of this PR --- |
Github user qiuchenjian commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/3032#discussion_r244291200 --- Diff: README.md --- @@ -84,3 +85,6 @@ To get involved in CarbonData: ## About Apache CarbonData is an open source project of The Apache Software Foundation (ASF). + +## 2018-12-28开始 --- End diff -- What is the purpose of this description, and why is it in Chinese? --- |
Github user BeyondYourself commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/3032#discussion_r244293682 --- Diff: integration/spark2/src/main/scala/org/apache/carbondata/spark/util/CarbonSparkUtil.scala --- @@ -117,4 +116,18 @@ object CarbonSparkUtil { case _ => delimiter } + def getKeyOnPrefix(path: String): (String, String, String) = { + val endPoint = "spark.hadoop." + ENDPOINT + if (path.startsWith(CarbonCommonConstants.S3A_PREFIX)) { + ("spark.hadoop." + ACCESS_KEY, "spark.hadoop." + SECRET_KEY, endPoint) --- End diff -- Duplicated "spark.hadoop." literals make refactoring error-prone, since you must be sure to update every occurrence. I think you can define a single variable for it. --- |
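A minimal sketch of the suggested refactor (the constant name is an assumption on my part; the PR does not define one):

    import org.apache.hadoop.fs.s3a.Constants.{ACCESS_KEY, ENDPOINT, SECRET_KEY}
    import org.apache.carbondata.core.constants.CarbonCommonConstants

    // Hypothetical addition inside object CarbonSparkUtil: hoist the repeated
    // "spark.hadoop." literal into one constant so a rename touches a single place.
    private val SparkHadoopPrefix = "spark.hadoop."

    def getKeyOnPrefix(path: String): (String, String, String) = {
      val endPoint = SparkHadoopPrefix + ENDPOINT
      if (path.startsWith(CarbonCommonConstants.S3A_PREFIX)) {
        (SparkHadoopPrefix + ACCESS_KEY, SparkHadoopPrefix + SECRET_KEY, endPoint)
      } else if (path.startsWith(CarbonCommonConstants.S3N_PREFIX)) {
        (SparkHadoopPrefix + CarbonCommonConstants.S3N_ACCESS_KEY,
          SparkHadoopPrefix + CarbonCommonConstants.S3N_SECRET_KEY, endPoint)
      } else if (path.startsWith(CarbonCommonConstants.S3_PREFIX)) {
        (SparkHadoopPrefix + CarbonCommonConstants.S3_ACCESS_KEY,
          SparkHadoopPrefix + CarbonCommonConstants.S3_SECRET_KEY, endPoint)
      } else {
        throw new Exception("Incorrect Store Path")
      }
    }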
Github user qiuchenjian commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/3032#discussion_r244295612 --- Diff: integration/spark2/src/main/scala/org/apache/carbondata/spark/util/CarbonSparkUtil.scala --- @@ -117,4 +116,18 @@ object CarbonSparkUtil { case _ => delimiter } + def getKeyOnPrefix(path: String): (String, String, String) = { --- End diff -- getKeyOnPrefix is the same as the one in S3Example; why add the same method here if it is not called anywhere? --- |
Github user xubo245 commented on the issue:
https://github.com/apache/carbondata/pull/3032 @xiaohui0318 Please optimize the title, for example change "merge" to "Merge". --- |
Github user xubo245 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/3032#discussion_r244473663 --- Diff: integration/spark2/src/main/scala/org/apache/carbondata/spark/util/CarbonSparkUtil.scala --- @@ -117,4 +116,29 @@ object CarbonSparkUtil { case _ => delimiter } + def getKeyOnPrefix(path: String): (String, String, String) = { --- End diff -- Please add an empty line before this line. --- |
Github user xubo245 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/3032#discussion_r244473860 --- Diff: integration/spark2/src/main/scala/org/apache/carbondata/spark/util/CarbonSparkUtil.scala --- @@ -117,4 +116,29 @@ object CarbonSparkUtil { case _ => delimiter } + def getKeyOnPrefix(path: String): (String, String, String) = { + val endPoint = "spark.hadoop." + ENDPOINT + if (path.startsWith(CarbonCommonConstants.S3A_PREFIX)) { + ("spark.hadoop." + ACCESS_KEY, "spark.hadoop." + SECRET_KEY, endPoint) + } else if (path.startsWith(CarbonCommonConstants.S3N_PREFIX)) { + ("spark.hadoop." + CarbonCommonConstants.S3N_ACCESS_KEY, + "spark.hadoop." + CarbonCommonConstants.S3N_SECRET_KEY, endPoint) + } else if (path.startsWith(CarbonCommonConstants.S3_PREFIX)) { + ("spark.hadoop." + CarbonCommonConstants.S3_ACCESS_KEY, + "spark.hadoop." + CarbonCommonConstants.S3_SECRET_KEY, endPoint) + } else { + throw new Exception("Incorrect Store Path") + } + } + + def getS3EndPoint(args: Array[String]): String = { + if (args.length >= 4 && args(3).contains(".com")) args(3) --- End diff -- Can you optimize it? For example, pass the length and the endpoint explicitly rather than hard-coding args(3), because the endpoint may be the 3rd or 4th parameter; this will break if the argument order changes. --- |
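A hedged sketch of one way to decouple the helper from a hard-coded argument position (the parameter name and the use of Option are assumptions, not part of the PR):

    // Hypothetical: the caller decides which argument slot holds the endpoint
    // and passes it in, so the helper no longer depends on args(3).
    def getS3EndPoint(endpointArg: Option[String]): String =
      endpointArg.filter(_.contains(".com")).getOrElse("")

    // Example caller:
    // val endpoint = getS3EndPoint(if (args.length > 3) Some(args(3)) else None)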
Github user xubo245 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/3032#discussion_r244473902 --- Diff: integration/spark2/src/main/scala/org/apache/carbondata/spark/util/CarbonSparkUtil.scala --- @@ -117,4 +116,29 @@ object CarbonSparkUtil { case _ => delimiter } + def getKeyOnPrefix(path: String): (String, String, String) = { + val endPoint = "spark.hadoop." + ENDPOINT + if (path.startsWith(CarbonCommonConstants.S3A_PREFIX)) { + ("spark.hadoop." + ACCESS_KEY, "spark.hadoop." + SECRET_KEY, endPoint) + } else if (path.startsWith(CarbonCommonConstants.S3N_PREFIX)) { + ("spark.hadoop." + CarbonCommonConstants.S3N_ACCESS_KEY, + "spark.hadoop." + CarbonCommonConstants.S3N_SECRET_KEY, endPoint) + } else if (path.startsWith(CarbonCommonConstants.S3_PREFIX)) { + ("spark.hadoop." + CarbonCommonConstants.S3_ACCESS_KEY, + "spark.hadoop." + CarbonCommonConstants.S3_SECRET_KEY, endPoint) + } else { + throw new Exception("Incorrect Store Path") + } + } + + def getS3EndPoint(args: Array[String]): String = { + if (args.length >= 4 && args(3).contains(".com")) args(3) + else "" + } + + def getSparkMaster(args: Array[String]): String = { + if (args.length == 6) args(5) --- End diff -- Can you optimize it, like the previous comment? --- |
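In the same spirit, a hedged sketch for the Spark master helper (the "local" default is an assumption chosen to mirror the example's current fallback):

    // Hypothetical: accept the master URL as an optional value with a default,
    // so the helper does not depend on the total argument count or on args(5).
    def getSparkMaster(masterArg: Option[String]): String =
      masterArg.getOrElse("local")

    // Example caller:
    // val master = getSparkMaster(if (args.length == 6) Some(args(5)) else None)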
Github user xubo245 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/3032#discussion_r244473974 --- Diff: integration/spark2/src/main/scala/org/apache/carbondata/spark/util/CarbonSparkUtil.scala --- @@ -62,8 +62,7 @@ object CarbonSparkUtil { /** * return's the formatted column comment if column comment is present else empty("") * - * @param carbonColumn - * @return + * @return comment --- End diff -- Why did you delete @param carbonColumn? --- |
Github user xubo245 commented on the issue:
https://github.com/apache/carbondata/pull/3032 Have you validated this with S3? --- |
Github user zzcclp commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/3032#discussion_r244495974 --- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/S3UsingSDkExample.scala --- @@ -83,15 +83,15 @@ object S3UsingSDKExample { System.exit(0) } - val (accessKey, secretKey, endpoint) = getKeyOnPrefix(args(2)) + val (accessKey, secretKey, endpoint) = CarbonSparkUtil.getKeyOnPrefix(args(2)) val spark = SparkSession .builder() - .master(getSparkMaster(args)) + .master(CarbonSparkUtil.getSparkMaster(args)) .appName("S3UsingSDKExample") .config("spark.driver.host", "localhost") .config(accessKey, args(0)) .config(secretKey, args(1)) - .config(endpoint, getS3EndPoint(args)) + .config(endpoint,CarbonSparkUtil.getS3EndPoint(args)) --- End diff -- add a space before CarbonSparkUtil (after the comma) --- |
Github user zzcclp commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/3032#discussion_r244496097 --- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/S3Example.scala --- @@ -21,8 +21,8 @@ import java.io.File import org.apache.hadoop.fs.s3a.Constants.{ACCESS_KEY, ENDPOINT, SECRET_KEY} import org.apache.spark.sql.{Row, SparkSession} import org.slf4j.{Logger, LoggerFactory} - --- End diff -- don't remove this blank line --- |
Github user zzcclp commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/3032#discussion_r244496128 --- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/S3UsingSDkExample.scala --- @@ -20,10 +20,10 @@ import org.apache.hadoop.conf.Configuration import org.apache.hadoop.fs.s3a.Constants.{ACCESS_KEY, ENDPOINT, SECRET_KEY} import org.apache.spark.sql.SparkSession import org.slf4j.{Logger, LoggerFactory} - --- End diff -- don't remove this line --- |
Github user zzcclp commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/3032#discussion_r244496155 --- Diff: integration/spark2/src/main/scala/org/apache/carbondata/spark/thriftserver/CarbonThriftServer.scala --- @@ -24,10 +24,10 @@ import org.apache.spark.SparkConf import org.apache.spark.sql.SparkSession import org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 import org.slf4j.{Logger, LoggerFactory} - --- End diff -- don't remove --- |
Github user xubo245 commented on the issue:
https://github.com/apache/carbondata/pull/3032 add to whitelist --- |
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/3032 Build Failed with Spark 2.3.2, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/10336/ --- |
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/3032 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/2082/ --- |
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/3032 Build Failed with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/2287/ --- |