Github user anubhav100 commented on the issue:
https://github.com/apache/carbondata/pull/1805 retest this please ---
In reply to this post by qiuchenjian-2
Github user jackylk commented on the issue:
https://github.com/apache/carbondata/pull/1805 Can you show an example of how to give <table_path> in S3Example using AWS S3? I tried with Huawei OBS and hit two problems: 1. I need to set the endpoint conf manually in the main function via `spark.sparkContext.hadoopConfiguration.set("fs.s3a.endpoint", "obs.cn-north-1.myhwclouds.com")` 2. After I set the conf, an exception is thrown when running it: Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/http/pool/ConnPoolControl ---
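For readers hitting the same endpoint issue: one way to point the s3a client at a non-AWS, S3-compatible store is to set the s3a properties on the Hadoop configuration before any table operation. A minimal sketch, not code from this PR; the key and endpoint values are placeholders/assumptions:

```scala
import org.apache.spark.sql.SparkSession

object S3EndpointConfSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[2]")
      .appName("S3EndpointConfSketch")
      .getOrCreate()

    // The s3a connector reads these keys from the Hadoop configuration.
    // For an S3-compatible store (e.g. Huawei OBS) the endpoint must be
    // set explicitly, otherwise the client defaults to AWS endpoints.
    val hadoopConf = spark.sparkContext.hadoopConfiguration
    hadoopConf.set("fs.s3a.access.key", "<your-access-key>")  // placeholder
    hadoopConf.set("fs.s3a.secret.key", "<your-secret-key>")  // placeholder
    hadoopConf.set("fs.s3a.endpoint", "obs.cn-north-1.myhwclouds.com")

    spark.stop()
  }
}
```

The `NoClassDefFoundError` on `org/apache/http/pool/ConnPoolControl` typically indicates a missing or mismatched httpcore jar on the classpath rather than a configuration problem.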
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1805 SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2936/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1805 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1630/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1805 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2863/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1805 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2882/ ---
Github user jatin9896 commented on the issue:
https://github.com/apache/carbondata/pull/1805 @jackylk The table path is the path to the bucket location; for example, I have provided s3a://<bucket-name>/<location>. Regarding endpoints, I have modified the example so that it takes the endpoint as args(4), and it is not mandatory to provide. The connection-pooling exception in the example is also fixed. Please check. ---
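The optional-endpoint handling described above could be sketched as follows; the function name mirrors the one referenced in the example, but this body is illustrative, not the merged code:

```scala
object S3ArgsSketch {
  // Returns the endpoint only when a fifth argument was supplied;
  // otherwise an empty string, so the s3a client falls back to its default.
  def getS3EndPoint(args: Array[String]): String =
    if (args.length == 5) args(4) else ""

  def main(args: Array[String]): Unit = {
    // e.g. args = Array(accessKey, secretKey,
    //                   "s3a://<bucket-name>/<location>", sparkMaster, endpoint)
    println(getS3EndPoint(args))
  }
}
```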
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1805 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1650/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1805 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2892/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1805 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1661/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1805 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2895/ ---
Github user anubhav100 commented on the issue:
https://github.com/apache/carbondata/pull/1805 retest this please ---
Github user jackylk commented on the issue:
https://github.com/apache/carbondata/pull/1805 I tried it; it is successful now. But the log shows LOCAL_SORT being used; it should be NO_SORT, right? ---
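For reference, with an empty SORT_COLUMNS the expected behaviour is NO_SORT; if the CarbonData version in use supports the SORT_SCOPE table property, the scope can also be pinned explicitly. A hedged sketch (assumes a CarbonSession `spark` is already available; not code from this PR):

```scala
// Explicitly request NO_SORT; with SORT_COLUMNS='' the effective sort
// scope should be the same, which is what the load log should reflect.
spark.sql(
  """
    | CREATE TABLE IF NOT EXISTS carbon_table_nosort(
    |   intField INT,
    |   stringField STRING
    | )
    | STORED BY 'carbondata'
    | TBLPROPERTIES('SORT_COLUMNS'='', 'SORT_SCOPE'='NO_SORT')
  """.stripMargin)
```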
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1805#discussion_r162088305 --- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/S3Example.scala --- @@ -0,0 +1,157 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.carbondata.examples + +import java.io.File + +import org.apache.hadoop.fs.s3a.Constants.{ACCESS_KEY, ENDPOINT, SECRET_KEY} +import org.apache.spark.sql.SparkSession +import org.slf4j.{Logger, LoggerFactory} + +import org.apache.carbondata.core.constants.CarbonCommonConstants + +object S3Example { + + /** + * This example demonstrate usage of + * 1. create carbon table with storage location on object based storage + * like AWS S3, Huawei OBS, etc + * 2. load data into carbon table, the generated file will be stored on object based storage + * query the table. + * 3. 
With the indexing feature of carbondata, the data read from object based storage is minimum, + * thus providing both high performance analytic and low cost storage + * + * @param args require three parameters "Access-key" "Secret-key" + * "s3 bucket path" "spark-master" "s3-endpoint" + */ + def main(args: Array[String]) { + val rootPath = new File(this.getClass.getResource("/").getPath + + "../../../..").getCanonicalPath + val path = s"$rootPath/examples/spark2/src/main/resources/data1.csv" + val logger: Logger = LoggerFactory.getLogger(this.getClass) + + import org.apache.spark.sql.CarbonSession._ + if (args.length < 4 || args.length > 5) { + logger.error("Usage: java CarbonS3Example <access-key> <secret-key>" + + "<table-path> <spark-master> <s3-endpoint>") + System.exit(0) + } + + val (accessKey, secretKey, endpoint) = getKeyOnPrefix(args(2)) + val spark = SparkSession + .builder() + .master(args(3)) + .appName("S3Example") + .config("spark.driver.host", "localhost") + .config(accessKey, args(0)) + .config(secretKey, args(1)) + .config(endpoint, getS3EndPoint(args)) + .getOrCreateCarbonSession() + + spark.sparkContext.setLogLevel("INFO") --- End diff -- change to WARN, it is printing too many --- |
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1805#discussion_r162088431 --- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/S3Example.scala --- [...] + spark.sql( + s""" + | LOAD DATA LOCAL INPATH '$path' + | INTO TABLE carbon_table + | OPTIONS('HEADER'='true') + """.stripMargin) + + spark.sql( --- End diff -- do a select * after 1 load ---
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1805#discussion_r162089803 --- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/S3Example.scala --- [...] + spark.sql( + s""" + | LOAD DATA LOCAL INPATH '$path' + | INTO TABLE carbon_table + | OPTIONS('HEADER'='true') + """.stripMargin) + --- End diff -- We do not currently allow running data loading and compaction concurrently; it will raise the exception: "Cannot run data loading and compaction on same table concurrently. Please wait for load to finish". So you should check that the data loads have finished before doing compaction: run SHOW SEGMENTS, collect the result, and loop until there are 3 segments ---
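The wait-then-compact approach suggested above could look roughly like this; a sketch under the assumption that `SHOW SEGMENTS` returns one row per segment and that a CarbonSession `spark` exists, not the code merged in the PR:

```scala
// Poll until all three loads are visible as segments, then compact.
var segmentCount = 0L
while (segmentCount < 3) {
  segmentCount = spark.sql("SHOW SEGMENTS FOR TABLE carbon_table").count()
  if (segmentCount < 3) Thread.sleep(1000) // wait for in-flight loads
}
spark.sql("ALTER TABLE carbon_table COMPACT 'MAJOR'")
```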
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1805#discussion_r162089977 --- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/S3Example.scala --- [...] + // Use compaction command to merge segments or small files in object based storage, + // this can be done periodically. + spark.sql("ALTER table carbon_table compact 'MINOR'") --- End diff -- change this to MAJOR, and remove the subsequent command, to make it simpler ---
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1805#discussion_r162090105 --- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/S3Example.scala --- [...] + spark.sql( + s""" + | LOAD DATA LOCAL INPATH '$path' + | INTO TABLE carbon_table + | OPTIONS('HEADER'='true') + """.stripMargin) + + spark.sql( --- End diff -- remove one load to make it simpler ---
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1805#discussion_r162090325 --- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/S3Example.scala --- [...] + * 2. load data into carbon table, the generated file will be stored on object based storage + * query the table. --- End diff -- remove this line ---
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1805#discussion_r162090538 --- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/S3Example.scala --- [...] + * @param args require three parameters "Access-key" "Secret-key" + * "s3 bucket path" "spark-master" "s3-endpoint" --- End diff -- modify to `table path on S3` ---