http://apache-carbondata-dev-mailing-list-archive.168.s1.nabble.com/store-location-can-t-be-found-tp7318p7321.html
I followed your suggestion and created the CarbonSession with a store path, but when I load data it throws another error, as follows:
scala> carbon.sql("LOAD DATA INPATH 'hdfs://localhost:9000/resources/sample.csv' INTO TABLE test_table")
org.apache.spark.sql.AnalysisException: LOAD DATA is not supported for datasource tables: `default`.`test_table`;
at org.apache.spark.sql.execution.command.LoadDataCommand.run(tables.scala:194)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
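That AnalysisException is raised by Spark 2.1 itself (LoadDataCommand in tables.scala, as shown in the trace), which refuses LOAD DATA for tables it has registered as plain datasource tables, so test_table does not appear to be registered as a carbon table in this session. A minimal sketch of the intended sequence, assuming it is acceptable to drop and recreate the table through the CarbonSession (same table name, columns, and HDFS path as above):

scala> // recreate the table via the CarbonSession so it is stored by 'carbondata'
scala> carbon.sql("DROP TABLE IF EXISTS test_table")
scala> carbon.sql("CREATE TABLE IF NOT EXISTS test_table(id string, name string, city string, age Int) STORED BY 'carbondata'")
scala> // LOAD DATA should then be handled by Carbon's own load path rather than Spark's LoadDataCommand
scala> carbon.sql("LOAD DATA INPATH 'hdfs://localhost:9000/resources/sample.csv' INTO TABLE test_table")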
> On 4 February 2017, at 11:49 AM, Ravindra Pesala <[hidden email]> wrote:
>
> Hi Mars,
>
> Please try creating the CarbonSession with the store path as follows.
>
> val carbon = SparkSession.builder().config(sc.getConf).
> getOrCreateCarbonSession("hdfs://localhost:9000/carbon/store")
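> Passing the store path to getOrCreateCarbonSession sets it for that
> session directly, so it should not need to rely on carbon.properties or
> spark.carbon.storepath being picked up. A rough sketch of the complete
> shell sequence (reusing the imports and HDFS path from your mail):
>
> scala> import org.apache.spark.sql.SparkSession
> scala> import org.apache.spark.sql.CarbonSession._
> scala> val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession("hdfs://localhost:9000/carbon/store")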
>
>
> Regards,
> Ravindra.
>
> On 4 February 2017 at 08:12, Mars Xu <[hidden email]> wrote:
>
>> Hello All,
>> I met a "file does not exist" problem; it looks like the store
>> location can't be found. I have already set
>> carbon.store.location=hdfs://localhost:9000/carbon/store in $SPARK_HOME/conf/carbon.properties,
>> but when I start up spark-shell with the following command and run a few
>> commands, the error appears:
>> spark-shell --master spark://localhost:7077 --jars
>> ~/carbonlib/carbondata_2.11-1.0.0-incubating-shade-hadoop2.7.2.jar --conf
>> spark.carbon.storepath=hdfs://localhost:9000/carbon/store
>>
>> scala> import org.apache.spark.sql.SparkSession
>> scala> import org.apache.spark.sql.CarbonSession._
>> scala> val carbon = SparkSession.builder().config(sc.getConf).
>> getOrCreateCarbonSession()
>> scala> carbon.sql("CREATE TABLE IF NOT EXISTS test_table(id string, name
>> string, city string, age Int) STORED BY 'carbondata'")
>> scala> carbon.sql("load data inpath 'hdfs://localhost:9000/resources/sample.csv'
>> into table test_table")
>>
>> scala> carbon.sql("select * from test_table").show()
>> java.io.FileNotFoundException: File /private/var/carbon.store/default/test_table/Fact/Part0/Segment_0 does not exist.
>> at org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.<init>(DistributedFileSystem.java:948)
>> at org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.<init>(DistributedFileSystem.java:927)
>>
>> My CarbonData version is 1.0 and my Spark version is 2.1.
>
>
>
>
> --
> Thanks & Regards,
> Ravi