GitHub user HoneyQuery opened a pull request:
https://github.com/apache/carbondata/pull/1112

[CARBONDATA-1244] Rewrite README.md of presto integration and add/rewrite some comments to presto integration.

Be sure to do all of the following to help us incorporate your contribution quickly and easily:

- [x] Make sure the PR title is formatted like: `[CARBONDATA-<Jira issue #>] Description of pull request`
- [x] Make sure tests pass via `mvn clean verify`. (Even better, enable Travis-CI on your fork and ensure the whole test matrix passes).
- [x] Replace `<Jira issue #>` in the title with the actual Jira issue number, if there is one.
- [x] If this contribution is large, please file an Apache [Individual Contributor License Agreement](https://www.apache.org/licenses/icla.txt).
- [x] Testing done
  Please provide details on
  - Whether new unit test cases have been added or why no new tests are required? We only add some comments and rewrite the docs, no source code is changed.
  - What manual testing you have done? None.
  - Any additional information to help reviewers in testing this change. None.
- [x] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

---

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/HoneyQuery/carbondata master

Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/carbondata/pull/1112.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #1112

----

commit 1a9512c549014253d469d892312aa5103d0b21d8
Author: Jin Guodong <[hidden email]>
Date: 2017-06-08T13:58:13Z
remove some useless code, and add a simple Main for HiveEmbeddedServer2

commit 011035c91fbe43418a3cf8f4d5c35a1ecdff8e39
Author: Haoqiong Bian <[hidden email]>
Date: 2017-06-08T14:09:54Z
Merge pull request #1 from ray6080/master
remove some useless code, and add a simple Main for HiveEmbeddedServer2

commit 45032dbdc8737410597410730d2fb35469b7d0b9
Author: Haoqiong Bian <[hidden email]>
Date: 2017-06-08T14:13:35Z
Merge pull request #1 from dbiir/master
pull from dbiir

commit bc219288908b656f1ba1f0af75935233265dc9b2
Author: bianhq <[hidden email]>
Date: 2017-06-10T09:43:10Z
add comments

commit 9c88798bf0f5358ac03987523ea9b54a7d6f46d1
Author: bianhq <[hidden email]>
Date: 2017-06-12T06:00:18Z
add comments

commit 97c6485393fb86c6f6f860e27e9273f48f6c861d
Author: Guodong Jin <[hidden email]>
Date: 2017-06-12T06:18:34Z
Merge pull request #2 from HoneyQuery/master
Add comments to presto connector

commit bb223a83ff85e042461f16e4d5a83b4b8e13cb1f
Author: bianhq <[hidden email]>
Date: 2017-06-14T19:50:14Z
add comments.

commit 1f1f52e0089a7fc2b56d0b909c30b56e44b99f6c
Author: Guodong Jin <[hidden email]>
Date: 2017-06-15T04:55:17Z
Merge pull request #3 from HoneyQuery/master
add comments.

commit f2a0409035b2b1fb581aca83d9d7c3155c15786b
Author: Guodong Jin <[hidden email]>
Date: 2017-06-26T09:27:13Z
Merge pull request #4 from apache/master
aync with apache carbondata 1.1

commit fcfae17e65b083f107142d6bcaca26fe0d164d9e
Author: Haoqiong Bian <[hidden email]>
Date: 2017-06-26T12:37:43Z
Merge pull request #2 from dbiir/master
sync with apache carbondata 0.1.1

commit d55c735d486a34f8d2e96a5b01a53ca8c3d5de85
Author: bianhq <[hidden email]>
Date: 2017-06-28T13:07:49Z
recover some unnecessary changes.

commit f67feeea3837e3fa9315c6dfb0c51de9e99615bd
Author: Haoqiong Bian <[hidden email]>
Date: 2017-06-28T13:11:44Z
Merge pull request #3 from apache/master
sync with apache carbondata

commit c4dd8a0b63b1222b74d2b8c4268a4eb5b240d731
Author: bianhq <[hidden email]>
Date: 2017-06-28T20:04:28Z
add comments to presto connector and polish README.md

commit 5b95791695bc17d6c076c98c610a9c5c9e0774f5
Author: Haoqiong Bian <[hidden email]>
Date: 2017-06-28T20:06:21Z
Merge pull request #4 from apache/master
sync with apache carbondata.

----
Github user asfgit commented on the issue:
https://github.com/apache/carbondata/pull/1112

Can one of the admins verify this patch?
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1112

Can one of the admins verify this patch?
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1112

Can one of the admins verify this patch?
Github user asfgit commented on the issue:
https://github.com/apache/carbondata/pull/1112

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/carbondata-pr-spark-1.6/717/
Github user chenerlu commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1112#discussion_r124707828

--- Diff: integration/presto/README.md ---
@@ -59,28 +55,50 @@ Please follow the below steps to query carbondata in presto
```
* config carbondata-connector for presto
- First:compile carbondata-presto integration module
+ Firstly: Compile carbondata, including carbondata-presto integration module
```
$ git clone https://github.com/apache/carbondata
- $ cd carbondata/integration/presto
- $ mvn clean package
+ $ cd carbondata
+ $ mvn -DskipTests -P{spark-version} -Dspark.version={spark-version-number} -Dhadoop.version={hadoop-version-number} clean package
+ ```
+ Replace the spark and hadoop version with you the version you used in your cluster.
--- End diff --

Maybe it will be better to delete these two "you".
Github user chenerlu commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1112#discussion_r124707906

--- Diff: integration/presto/README.md ---
@@ -59,28 +55,50 @@ Please follow the below steps to query carbondata in presto
```
* config carbondata-connector for presto
- First:compile carbondata-presto integration module
+ Firstly: Compile carbondata, including carbondata-presto integration module
```
$ git clone https://github.com/apache/carbondata
- $ cd carbondata/integration/presto
- $ mvn clean package
+ $ cd carbondata
+ $ mvn -DskipTests -P{spark-version} -Dspark.version={spark-version-number} -Dhadoop.version={hadoop-version-number} clean package
+ ```
+ Replace the spark and hadoop version with you the version you used in your cluster.
+ For example, if you use Spark2.1.0 and Hadoop 2.7.3, you would like to compile using:
+ ```
+ mvn -DskipTests -Pspark-2.1 -Dspark.version=2.1.0 -Dhadoop.version=2.7.3 clean package
+ ```
+
+ Secondly: Create a folder named 'carbondata' under $PRESTO_HOME$/plugin and
+ copy all jar from carbondata/integration/presto/target/carbondata-presto-x.x.x-SNAPSHOT
--- End diff --

jar -> jars
Github user chenerlu commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1112#discussion_r124708068

--- Diff: integration/presto/README.md ---
@@ -59,28 +55,50 @@ Please follow the below steps to query carbondata in presto
```
* config carbondata-connector for presto
- First:compile carbondata-presto integration module
+ Firstly: Compile carbondata, including carbondata-presto integration module
```
$ git clone https://github.com/apache/carbondata
- $ cd carbondata/integration/presto
- $ mvn clean package
+ $ cd carbondata
+ $ mvn -DskipTests -P{spark-version} -Dspark.version={spark-version-number} -Dhadoop.version={hadoop-version-number} clean package
+ ```
+ Replace the spark and hadoop version with you the version you used in your cluster.
+ For example, if you use Spark2.1.0 and Hadoop 2.7.3, you would like to compile using:
+ ```
+ mvn -DskipTests -Pspark-2.1 -Dspark.version=2.1.0 -Dhadoop.version=2.7.3 clean package
+ ```
+
+ Secondly: Create a folder named 'carbondata' under $PRESTO_HOME$/plugin and
+ copy all jar from carbondata/integration/presto/target/carbondata-presto-x.x.x-SNAPSHOT
+ to $PRESTO_HOME$/plugin/carbondata
+
+ Thirdly: Create a carbondata.properties file under $PRESTO_HOME$/etc/catalog/ containing the following contents:
```
- Second:create one folder "carbondata" under ./presto-server-0.166/plugin
- Third:copy all jar from ./carbondata/integration/presto/target/carbondata-presto-x.x.x-SNAPSHOT
- to ./presto-server-0.166/plugin/carbondata
+ connector.name=carbondata
+ carbondata-store={schema-store-path}
+ ```
+ Replace the schema-store-path with the absolute path the directory which is the parent of the schema.
+ For example, if you have a schema named 'default' stored under hdfs://namenode:9000/test/carbondata/,
+ Then set carbondata-store=hdfs://namenode:9000/test/carbondata
+
+ If you changed the jar balls or configuration files, make sure you have dispatch the new jar balls
+ and configuration file to all the presto nodes and restart the nodes in the cluster. A modification of the
+ carbondata connector will not take an effect automatically.

### Generate CarbonData file
-Please refer to quick start : https://github.com/apache/carbondata/blob/master/docs/quick-start-guide.md
+Please refer to quick start: https://github.com/apache/carbondata/blob/master/docs/quick-start-guide.md
+Load data statement in Spark can be used to create carbondata tables. And you can easily find the creaed
--- End diff --

creaed -> created
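For readers following along, the deployment steps quoted in this diff can be run end to end roughly as follows. This is only a sketch: the Presto install path, the CarbonData source location, and the Spark/Hadoop versions below are assumed placeholder values, and the store URL simply reuses the hdfs://namenode:9000/test/carbondata example from the diff; none of these are settings taken from a real deployment.

```bash
#!/bin/bash
# Hypothetical paths and versions -- adjust to your environment.
PRESTO_HOME=/opt/presto-server-0.166          # assumed Presto install directory
CARBON_SRC=~/carbondata                       # assumed location of the carbondata clone
STORE=hdfs://namenode:9000/test/carbondata    # schema store path from the README example

# 1. Build CarbonData, including the presto integration module
#    (the Spark/Hadoop versions here are the README's example values).
cd "$CARBON_SRC"
mvn -DskipTests -Pspark-2.1 -Dspark.version=2.1.0 -Dhadoop.version=2.7.3 clean package

# 2. Install the connector jars into Presto's plugin directory.
mkdir -p "$PRESTO_HOME/plugin/carbondata"
cp "$CARBON_SRC"/integration/presto/target/carbondata-presto-*-SNAPSHOT/*.jar \
   "$PRESTO_HOME/plugin/carbondata/"

# 3. Register the carbondata catalog.
cat > "$PRESTO_HOME/etc/catalog/carbondata.properties" <<EOF
connector.name=carbondata
carbondata-store=$STORE
EOF

# 4. Repeat steps 2-3 on every Presto node and restart them;
#    connector changes are not picked up automatically.
```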
Github user chenerlu commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1112#discussion_r124708591

--- Diff: integration/presto/src/main/java/org/apache/carbondata/presto/impl/CarbonTableReader.java ---
@@ -72,25 +72,54 @@
* 2:FileFactory, (physic table file)
* 3:CarbonCommonFactory, (offer some )
* 4:DictionaryFactory, (parse dictionary util)
+ *
+ * Currently, it is mainly used to parse metadata of tables under
+ * the configured carbondata-store path and filter the relevant
+ * input splits with given query predicates.
*/
public class CarbonTableReader {
private CarbonTableConfig config;
+
+ /**
+ * The names of the tables under the schema (this.carbonFileList).
+ */
private List<SchemaTableName> tableList;
+
+ /**
+ * carbonFileList represents the store path of the schema, which is configured as carbondata-store
+ * in the CarbonData catalog file ($PRESTO_HOME$/etc/catalog/carbondata.properties).
+ * Under the schema store path, there should be a directory named as the schema name.
+ * And under each schema directory, there are directories named as the table names.
+ * For example, the schema is named 'default' and there is two table named 'foo' and 'bar' in it, then the
--- End diff --

Some notes like this, I think it is not necessary. We can discuss.
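As a quick illustration of the layout that the commented code describes, the store path is expected to contain one directory per schema and, inside it, one directory per table. The listing below is hypothetical; it only reuses the 'default', 'foo', and 'bar' names and the hdfs://namenode:9000/test/carbondata path already given in the discussion.

```bash
# Hypothetical listing of the carbondata-store path described above,
# assuming carbondata-store=hdfs://namenode:9000/test/carbondata.
hdfs dfs -ls -R /test/carbondata
# Expected shape of the output (schema directory containing table directories):
#   /test/carbondata/default        <- schema 'default'
#   /test/carbondata/default/foo    <- table 'foo'
#   /test/carbondata/default/bar    <- table 'bar'
```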
Github user HoneyQuery commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1112#discussion_r124719863

--- Diff: integration/presto/README.md ---
@@ -59,28 +55,50 @@ Please follow the below steps to query carbondata in presto
```
* config carbondata-connector for presto
- First:compile carbondata-presto integration module
+ Firstly: Compile carbondata, including carbondata-presto integration module
```
$ git clone https://github.com/apache/carbondata
- $ cd carbondata/integration/presto
- $ mvn clean package
+ $ cd carbondata
+ $ mvn -DskipTests -P{spark-version} -Dspark.version={spark-version-number} -Dhadoop.version={hadoop-version-number} clean package
+ ```
+ Replace the spark and hadoop version with you the version you used in your cluster.
+ For example, if you use Spark2.1.0 and Hadoop 2.7.3, you would like to compile using:
+ ```
+ mvn -DskipTests -Pspark-2.1 -Dspark.version=2.1.0 -Dhadoop.version=2.7.3 clean package
+ ```
+
+ Secondly: Create a folder named 'carbondata' under $PRESTO_HOME$/plugin and
+ copy all jar from carbondata/integration/presto/target/carbondata-presto-x.x.x-SNAPSHOT
--- End diff --

OK.
Github user HoneyQuery commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1112#discussion_r124719922

--- Diff: integration/presto/README.md ---
@@ -59,28 +55,50 @@ Please follow the below steps to query carbondata in presto
```
* config carbondata-connector for presto
- First:compile carbondata-presto integration module
+ Firstly: Compile carbondata, including carbondata-presto integration module
```
$ git clone https://github.com/apache/carbondata
- $ cd carbondata/integration/presto
- $ mvn clean package
+ $ cd carbondata
+ $ mvn -DskipTests -P{spark-version} -Dspark.version={spark-version-number} -Dhadoop.version={hadoop-version-number} clean package
+ ```
+ Replace the spark and hadoop version with you the version you used in your cluster.
+ For example, if you use Spark2.1.0 and Hadoop 2.7.3, you would like to compile using:
+ ```
+ mvn -DskipTests -Pspark-2.1 -Dspark.version=2.1.0 -Dhadoop.version=2.7.3 clean package
+ ```
+
+ Secondly: Create a folder named 'carbondata' under $PRESTO_HOME$/plugin and
+ copy all jar from carbondata/integration/presto/target/carbondata-presto-x.x.x-SNAPSHOT
+ to $PRESTO_HOME$/plugin/carbondata
+
+ Thirdly: Create a carbondata.properties file under $PRESTO_HOME$/etc/catalog/ containing the following contents:
```
- Second:create one folder "carbondata" under ./presto-server-0.166/plugin
- Third:copy all jar from ./carbondata/integration/presto/target/carbondata-presto-x.x.x-SNAPSHOT
- to ./presto-server-0.166/plugin/carbondata
+ connector.name=carbondata
+ carbondata-store={schema-store-path}
+ ```
+ Replace the schema-store-path with the absolute path the directory which is the parent of the schema.
+ For example, if you have a schema named 'default' stored under hdfs://namenode:9000/test/carbondata/,
+ Then set carbondata-store=hdfs://namenode:9000/test/carbondata
+
+ If you changed the jar balls or configuration files, make sure you have dispatch the new jar balls
+ and configuration file to all the presto nodes and restart the nodes in the cluster. A modification of the
+ carbondata connector will not take an effect automatically.

### Generate CarbonData file
-Please refer to quick start : https://github.com/apache/carbondata/blob/master/docs/quick-start-guide.md
+Please refer to quick start: https://github.com/apache/carbondata/blob/master/docs/quick-start-guide.md
+Load data statement in Spark can be used to create carbondata tables. And you can easily find the creaed
--- End diff --

OK.
Github user HoneyQuery commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1112#discussion_r124720145

--- Diff: integration/presto/README.md ---
@@ -59,28 +55,50 @@ Please follow the below steps to query carbondata in presto
```
* config carbondata-connector for presto
- First:compile carbondata-presto integration module
+ Firstly: Compile carbondata, including carbondata-presto integration module
```
$ git clone https://github.com/apache/carbondata
- $ cd carbondata/integration/presto
- $ mvn clean package
+ $ cd carbondata
+ $ mvn -DskipTests -P{spark-version} -Dspark.version={spark-version-number} -Dhadoop.version={hadoop-version-number} clean package
+ ```
+ Replace the spark and hadoop version with you the version you used in your cluster.
--- End diff --

OK. The two 'you' are removed.
Github user HoneyQuery commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1112#discussion_r124720966

--- Diff: integration/presto/src/main/java/org/apache/carbondata/presto/impl/CarbonTableReader.java ---
@@ -72,25 +72,54 @@
* 2:FileFactory, (physic table file)
* 3:CarbonCommonFactory, (offer some )
* 4:DictionaryFactory, (parse dictionary util)
+ *
+ * Currently, it is mainly used to parse metadata of tables under
+ * the configured carbondata-store path and filter the relevant
+ * input splits with given query predicates.
*/
public class CarbonTableReader {
private CarbonTableConfig config;
+
+ /**
+ * The names of the tables under the schema (this.carbonFileList).
+ */
private List<SchemaTableName> tableList;
+
+ /**
+ * carbonFileList represents the store path of the schema, which is configured as carbondata-store
+ * in the CarbonData catalog file ($PRESTO_HOME$/etc/catalog/carbondata.properties).
+ * Under the schema store path, there should be a directory named as the schema name.
+ * And under each schema directory, there are directories named as the table names.
+ * For example, the schema is named 'default' and there is two table named 'foo' and 'bar' in it, then the
--- End diff --

I have simplified this note as:
```
carbonFileList represents the store path of the schema, which is configured as carbondata-store
in the CarbonData catalog file ($PRESTO_HOME$/etc/catalog/carbondata.properties).
```
Github user asfgit commented on the issue:
https://github.com/apache/carbondata/pull/1112

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/carbondata-pr-spark-1.6/722/
Github user chenliang613 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1112#discussion_r124811082

--- Diff: integration/presto/src/main/java/org/apache/carbondata/presto/impl/CarbonLocalInputSplit.java ---
@@ -17,19 +17,22 @@
package org.apache.carbondata.presto.impl;
-import java.util.List;
-
import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
+import java.util.List;
+
+/**
+ * BHQ: CarbonLocalInputSplit represents a block, it contains a set of blocklet.
--- End diff --

Can you explain what BHQ means?
Github user chenliang613 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1112#discussion_r124811689

--- Diff: integration/presto/README.md ---
@@ -20,14 +20,10 @@ Please follow the below steps to query carbondata in presto
### Config presto server
-* Download presto server 0.166 : https://repo1.maven.org/maven2/com/facebook/presto/presto-server/
-* Finish configuration as per https://prestodb.io/docs/current/installation/deployment.html
- for example:
+* Download presto server ( >= 0.166) : https://repo1.maven.org/maven2/com/facebook/presto/presto-server/
--- End diff --

Different Presto versions need different integration work; the current version (CarbonData 1.2.0) only supports Presto 0.166.
Github user chenliang613 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1112#discussion_r124811914

--- Diff: integration/presto/README.md ---
@@ -59,28 +55,50 @@ Please follow the below steps to query carbondata in presto
```
* config carbondata-connector for presto
- First:compile carbondata-presto integration module
+ Firstly: Compile carbondata, including carbondata-presto integration module
```
$ git clone https://github.com/apache/carbondata
- $ cd carbondata/integration/presto
- $ mvn clean package
+ $ cd carbondata
+ $ mvn -DskipTests -P{spark-version} -Dspark.version={spark-version-number} -Dhadoop.version={hadoop-version-number} clean package
+ ```
+ Replace the spark and hadoop version with the version used in your cluster.
+ For example, if you use Spark2.1.0 and Hadoop 2.7.3, you would like to compile using:
--- End diff --

Here, I suggest using Hadoop 2.7.2 as the example.
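If the README example is switched to Hadoop 2.7.2 as suggested, the command would presumably become the one below; it is the same command already shown in the diff, with only the Hadoop version changed.

```bash
# Example build using Hadoop 2.7.2, per the suggestion above;
# substitute the Spark profile and versions used in your cluster.
mvn -DskipTests -Pspark-2.1 -Dspark.version=2.1.0 -Dhadoop.version=2.7.2 clean package
```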
Github user chenliang613 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1112#discussion_r124813648

--- Diff: integration/presto/src/main/java/org/apache/carbondata/presto/impl/CarbonTableReader.java ---
@@ -133,30 +163,54 @@ public boolean updateCarbonFile() {
return true;
}
+ /**
+ * Return the schema names under a schema store path (this.carbonFileList).
+ * @return
+ */
public List<String> updateSchemaList() {
updateCarbonFile();
if (carbonFileList != null) {
- List<String> schemaList =
- Stream.of(carbonFileList.listFiles()).map(a -> a.getName()).collect(Collectors.toList());
+ /*List<String> schemaList =
+ Stream.of(carbonFileList.listFiles()).map(a -> a.getName()).collect(Collectors.toList());*/
+ List<String> schemaList = new ArrayList<>();
+ for (CarbonFile file : carbonFileList.listFiles())
--- End diff --

I don't suggest changing this part of the code.
Github user chenliang613 commented on the issue:
https://github.com/apache/carbondata/pull/1112

@HoneyQuery please squash all commits into one commit.
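One common way to squash the branch into a single commit is sketched below. It assumes the contributor has the apache repository configured as an `upstream` remote and that the PR branch is `master` on the fork, as the merge instructions at the top of the thread indicate; the commit message simply reuses the PR title.

```bash
# Squash all local commits on top of apache/master into one commit.
git fetch upstream                   # 'upstream' is assumed to point at apache/carbondata
git reset --soft upstream/master     # keep the combined changes staged, drop the intermediate commits
git commit -m "[CARBONDATA-1244] Rewrite README.md of presto integration and add/rewrite comments"
git push --force origin master       # the PR branch is the fork's master
```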
Github user chenliang613 commented on the issue:
https://github.com/apache/carbondata/pull/1112

add to whitelist