Re: loading data from parquet table always
Posted by
akashrn5 on
May 29, 2018; 1:31pm
URL: http://apache-carbondata-dev-mailing-list-archive.168.s1.nabble.com/loading-data-from-parquet-table-always-tp48346p51024.html
Hi,
The exception says there is a problem while copying from the local
directory to the carbonstore (HDFS). It means the write has already
finished in the local temp folder, and the failure occurs while the files
are being copied to HDFS.
With this exception trace alone it is difficult to know the root cause of
the failure; the failure can also originate in HDFS itself. So you can
check two things:
1. Check whether enough space is available in HDFS.
2. When this exception occurs, check the corresponding exception in the
HDFS logs. That may give you some idea.
There is a property called
*carbon.load.directWriteHdfs.enabled*
By default this property is false. If you set it to true, the files will
be written directly to the carbonstore (HDFS) instead of being written
locally first and then copied.
You can enable this property and check whether the load succeeds.
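As a sketch, the property can be set in the carbon.properties file (the
exact path depends on your deployment; restart may be needed for it to
take effect):

```
# carbon.properties (sketch; file location depends on your deployment)
# Write CarbonData files directly to HDFS, skipping the local temp copy
carbon.load.directWriteHdfs.enabled=true
```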
Regards,
Akash R Nilugal
--
Sent from:
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/