GitHub user lamber-ken opened a pull request:
https://github.com/apache/carbondata/pull/2992

[CARBONDATA-3176] Optimize quick-start-guide documentation

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:

- [ ] Any interfaces changed?
- [ ] Any backward compatibility impacted?
- [ ] Document update required?
- [ ] Testing done
      Please provide details on
      - Whether new unit test cases have been added or why no new tests are required?
      - How it is tested? Please attach test report.
      - Is it a performance related change? Please attach the performance test report.
      - Any additional information to help reviewers in testing this change.
- [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/BigDataArtisans/carbondata optimize-quick-start-guide-documentation

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/2992.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2992

----
commit ee91443a7449a53105bda1b8e818b8451d4a5d54
Author: lamber-ken <2217232293@...>
Date: 2018-12-16T17:17:55Z

    [CARBONDATA-3176] Optimize quick-start-guide documentation
----
--- |
Github user lamber-ken commented on the issue:
https://github.com/apache/carbondata/pull/2992 Hi @xubo245, can you please take a look? --- |
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2992 Can one of the admins verify this patch? --- |
Github user xuchuanyin commented on the issue:
https://github.com/apache/carbondata/pull/2992 LGTM --- |
Github user xubo245 commented on the issue:
https://github.com/apache/carbondata/pull/2992 @lamber-ken Please complete the checklist in the description of this PR. --- |
Github user xubo245 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2992#discussion_r242007651

--- Diff: docs/quick-start-guide.md ---

@@ -91,32 +90,31 @@ val carbon = SparkSession.builder().config(sc.getConf)

 ###### Creating a Table

 ```
-scala>carbon.sql("CREATE TABLE
-                  IF NOT EXISTS test_table(
-                  id string,
-                  name string,
-                  city string,
-                  age Int)
-                  STORED AS carbondata")
+carbon.sql("""CREATE TABLE IF NOT EXISTS test_table(
+              id string,
+              name string,
+              city string,
+              age Int)
+              STORED AS carbondata""")
 ```

 ###### Loading Data to a Table

 ```
-scala>carbon.sql("LOAD DATA INPATH '/path/to/sample.csv'
-                  INTO TABLE test_table")
+carbon.sql("LOAD DATA INPATH '/path/to/sample.csv' INTO TABLE test_table")

--- End diff --

Can you make the paths consistent? It is an HDFS path when creating the CarbonSession, but a local path when loading data.

--- |
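For illustration only (not part of the PR): a minimal sketch of the consistency being asked for, assuming a hypothetical HDFS namenode at `hdfs://localhost:9000` and a `sample.csv` already copied to HDFS; the store path and the load path then use the same file system.

```
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.CarbonSession._

// In spark-shell, `sc` is the pre-created SparkContext.
// Store location and LOAD DATA location both point at HDFS, so the quick-start flow stays consistent.
val carbon = SparkSession.builder().config(sc.getConf)
  .getOrCreateCarbonSession("hdfs://localhost:9000/carbon/data/store")

carbon.sql("LOAD DATA INPATH 'hdfs://localhost:9000/path/to/sample.csv' INTO TABLE test_table")
```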
Github user xubo245 commented on the issue:
https://github.com/apache/carbondata/pull/2992 @lamber-ken Can you optimize all of the documentation in this PR, not only quick-start-guide? For example, "docs/ddl-of-carbondata.md" has similar places that need to be optimized. Many files have the same issue, and we shouldn't raise a separate PR for each of them. --- |
Github user lamber-ken commented on the issue:
https://github.com/apache/carbondata/pull/2992 @xubo245, thanks for the review. I'll update it as you suggested. --- |
Github user xuchuanyin commented on the issue:
https://github.com/apache/carbondata/pull/2992 @xubo245 "and plan to support alluxio path too." --- I think there is no need to add this currently. We should only describe the feature implemented. --- |
Github user xubo245 commented on the issue:
https://github.com/apache/carbondata/pull/2992 @xuchuanyin I mean it's a plan; I was just explaining why we changed `<hdfs store path>` => `<carbon_store_path>`, not asking to add it to the document now. --- |
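To make that motivation concrete, here is a hedged sketch (the paths are placeholders taken from the revised NOTE, not a definitive list) showing that the same builder call accepts a local, HDFS, or S3A store location, which is why the generic `<carbon_store_path>` name fits better than `<hdfs store path>`.

```
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.CarbonSession._

// Any of these forms can be passed as <carbon_store_path>; only one is used per session.
val localStore = "/carbon/data/store"
val hdfsStore  = "hdfs://localhost:9000/carbon/data/store"
val s3aStore   = "s3a://carbon/data/store"

// The HDFS location is chosen here; swap in localStore or s3aStore as needed.
val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession(hdfsStore)
```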
Github user lamber-ken commented on the issue:
https://github.com/apache/carbondata/pull/2992 Hi admins, can you please take a look? :) --- |
Github user xubo245 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2992#discussion_r242803409

--- Diff: docs/ddl-of-carbondata.md ---

@@ -157,22 +157,22 @@ CarbonData DDL statements are documented here,which includes:

   * GLOBAL_SORT: It increases the query performance, especially high concurrent point query.
     And if you care about loading resources isolation strictly, because the system uses the spark GroupBy to sort data, the resource can be controlled by spark.

-  ### Example:

--- End diff --

There are many Examples in the file, we should unify them.

--- |
Github user xubo245 commented on the issue:
https://github.com/apache/carbondata/pull/2992 add to whitelist --- |
Github user lamber-ken commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2992#discussion_r243144133

--- Diff: docs/ddl-of-carbondata.md ---

@@ -157,22 +157,22 @@ CarbonData DDL statements are documented here,which includes:

   * GLOBAL_SORT: It increases the query performance, especially high concurrent point query.
     And if you care about loading resources isolation strictly, because the system uses the spark GroupBy to sort data, the resource can be controlled by spark.

-  ### Example:

--- End diff --

> There are many Examples in the file, we should unify them.

ok

--- |
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2992 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1860/ --- |
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2992 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1862/ --- |
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2992 Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/2070/ --- |
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2992 Build Success with Spark 2.3.2, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/10117/ --- |
Github user KanakaKumar commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2992#discussion_r243168227

--- Diff: docs/quick-start-guide.md ---

@@ -80,43 +80,49 @@ import org.apache.spark.sql.CarbonSession._

 * Create a CarbonSession :

 ```
-val carbon = SparkSession.builder().config(sc.getConf)
-             .getOrCreateCarbonSession("<hdfs store path>")
+val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession("<carbon_store_path>")
 ```

-**NOTE**: By default metastore location points to `../carbon.metastore`, user can provide own metastore location to CarbonSession like `SparkSession.builder().config(sc.getConf)
-.getOrCreateCarbonSession("<hdfs store path>", "<local metastore path>")`
+**NOTE**
+ - By default metastore location points to `../carbon.metastore`, user can provide own metastore location to CarbonSession like
+   `SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession("<carbon_store_path>", "<local metastore path>")`.
+ - Data storage location can be specified by `<carbon_store_path>`, like `/carbon/data/store`, `hdfs://localhost:9000/carbon/data/store` or `s3a://carbon/data/store`.

 #### Executing Queries

 ###### Creating a Table

 ```
-scala>carbon.sql("CREATE TABLE
-                  IF NOT EXISTS test_table(
-                  id string,
-                  name string,
-                  city string,
-                  age Int)
-                  STORED AS carbondata")
+carbon.sql(

--- End diff --

This is Scala code format. I don't think the examples need to follow Scala code format; I think the examples were wrapped onto multiple lines for readability (even if the document is converted to PDF). @sraghunandan, @sgururajshetty, please help confirm the standard convention to follow.

--- |
Github user lamber-ken commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2992#discussion_r243169433

--- Diff: docs/quick-start-guide.md ---

@@ -80,43 +80,49 @@ import org.apache.spark.sql.CarbonSession._

 * Create a CarbonSession :

 ```
-val carbon = SparkSession.builder().config(sc.getConf)
-             .getOrCreateCarbonSession("<hdfs store path>")
+val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession("<carbon_store_path>")
 ```

-**NOTE**: By default metastore location points to `../carbon.metastore`, user can provide own metastore location to CarbonSession like `SparkSession.builder().config(sc.getConf)
-.getOrCreateCarbonSession("<hdfs store path>", "<local metastore path>")`
+**NOTE**
+ - By default metastore location points to `../carbon.metastore`, user can provide own metastore location to CarbonSession like
+   `SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession("<carbon_store_path>", "<local metastore path>")`.
+ - Data storage location can be specified by `<carbon_store_path>`, like `/carbon/data/store`, `hdfs://localhost:9000/carbon/data/store` or `s3a://carbon/data/store`.

 #### Executing Queries

 ###### Creating a Table

 ```
-scala>carbon.sql("CREATE TABLE
-                  IF NOT EXISTS test_table(
-                  id string,
-                  name string,
-                  city string,
-                  age Int)
-                  STORED AS carbondata")
+carbon.sql(

--- End diff --

Thanks for the review. The quick-start guide should reproduce the flow as simply as possible; otherwise, users have to modify multiple lines to make the example runnable, which hurts the user experience.

--- |
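As a concrete illustration of the "copy-paste runnable" argument, here is a hedged end-to-end sketch in the style of the revised guide, assuming spark-shell (so `sc` exists), the guide's placeholder store path, and a hypothetical `/path/to/sample.csv`; the final SELECT is only an example query.

```
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.CarbonSession._

// Create the CarbonSession; <carbon_store_path> is a placeholder for a local, HDFS, or S3A location.
val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession("<carbon_store_path>")

// A triple-quoted string keeps the DDL readable while remaining one runnable statement.
carbon.sql("""CREATE TABLE IF NOT EXISTS test_table(
              id string,
              name string,
              city string,
              age Int)
              STORED AS carbondata""")

// Load the sample data, then run a simple aggregate query.
carbon.sql("LOAD DATA INPATH '/path/to/sample.csv' INTO TABLE test_table")
carbon.sql("SELECT city, avg(age), sum(age) FROM test_table GROUP BY city").show()
```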