Posted by 李寅威 on Dec 23, 2016; 8:03am
URL: http://apache-carbondata-dev-mailing-list-archive.168.s1.nabble.com/etl-DataLoadingException-The-input-file-does-not-exist-tp4853p4908.html
Oh I see, I've solved it, thanks very much to Manish & QiangCai~~
Here is my DML script:
cc.sql(s"load data inpath 'hdfs://master:9000/carbondata/pt/sample.csv' into table test_table")
------------------ Original Message ------------------
From: "manish gupta" <[hidden email]>;
Sent: Friday, December 23, 2016, 2:32 PM
To: "dev" <[hidden email]>;
Subject: Re: Re: etl.DataLoadingException: The input file does not exist
Hi 251469031,
Thanks for showing interest in Carbon. For your question, please refer to the explanation below.
scala> val dataFilePath = new File("hdfs://master:9000/carbondata/sample.csv").getCanonicalPath
dataFilePath: String = /home/hadoop/carbondata/hdfs:/master:9000/carbondata/sample.csv
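Just as an illustration (this is not what CarbonData does internally), reaching that file on HDFS from the Scala shell would go through Hadoop's FileSystem API instead of java.io.File:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Illustration only: java.io.File can only address the local file system,
// so an hdfs:// location has to be resolved through Hadoop's FileSystem API.
val hdfsPath = new Path("hdfs://master:9000/carbondata/sample.csv")
val fs = FileSystem.get(hdfsPath.toUri, new Configuration())
println(fs.exists(hdfsPath))  // true only if the file really exists on HDFS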
If you use new File, it will always resolve the path against the local
file system. So, in case you are not appending the HDFS URL to the file/folder
path in the LOAD DATA DDL command, you can configure
*carbon.ddl.base.hdfs.url* in the carbon.properties file as suggested by
QiangCai.
*carbon.ddl.base.hdfs.url=hdfs://<IP>:<port>*
Example:
*carbon.ddl.base.hdfs.url=hdfs://9.82.101.42:54310*
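Conceptually (a rough sketch only, not CarbonData's actual load code), the configured base URL is combined with the path given in LOAD DATA INPATH roughly like this:

// Rough sketch only -- not CarbonData's actual implementation.
// carbon.ddl.base.hdfs.url supplies the missing HDFS prefix for the LOAD DATA path.
val baseHdfsUrl = "hdfs://master:9000"          // from carbon.ddl.base.hdfs.url
val inPath      = "/carbondata/pt/sample.csv"   // path from LOAD DATA INPATH '...'
val effectivePath =
  if (inPath.startsWith("hdfs://")) inPath      // full HDFS URL: used as-is
  else baseHdfsUrl + inPath                     // otherwise: prefix the base URL
// effectivePath == "hdfs://master:9000/carbondata/pt/sample.csv"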
Regards
Manish Gupta
On Fri, Dec 23, 2016 at 10:09 AM, QiangCai <[hidden email]> wrote:
> Please find the following item in the carbon.properties file and give it a proper
> path (hdfs://master:9000/):
> carbon.ddl.base.hdfs.url
>
> During loading, CarbonData will combine this URL with the data file path.
>
> BTW, it would be better to provide the version number.