> Empty Folder is created when data load from dataframe
> -----------------------------------------------------
>
> Key: CARBONDATA-406
> URL: https://issues.apache.org/jira/browse/CARBONDATA-406
> Project: CarbonData
> Issue Type: Bug
> Components: data-load
> Affects Versions: 0.1.0-incubating
> Reporter: Babulal
> Assignee: Rahul Kumar
> Priority: Trivial
>
> Load data from a dataframe into a carbon table with the tempCSV=false option.
> The load succeeds, but an empty folder is created in HDFS.
> Cluster size = 3 nodes
> Type: Standalone Spark
> Steps:
> val customSchema = StructType(Array(
>   StructField("imei", StringType, true),
>   StructField("deviceInformationId", IntegerType, true),
>   StructField("mac", StringType, true),
>   StructField("productdate", TimestampType, true),
>   StructField("updatetime", TimestampType, true),
>   StructField("gamePointId", DoubleType, true),
>   StructField("contractNumber", DoubleType, true)))
> val df = cc.read.format("com.databricks.spark.csv").option("header", "false").schema(customSchema).load("/opt/data/xyz/100_default_date_11_header.csv")
> Start data loading
> scala> df.write.format("carbondata").option("tableName","mycarbon2").save();
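> The description says the load was done with tempCSV=false, although the command shown above omits it. A sketch of the same save with that option passed explicitly (assuming the CarbonData dataframe writer accepts tempCSV as a string option, as the description implies; table name and dataframe are from the report):
>
> ```scala
> // Sketch: same load with tempCSV=false passed explicitly, matching the
> // description. df is the dataframe built from the CSV read above.
> df.write
>   .format("carbondata")
>   .option("tableName", "mycarbon2")
>   .option("tempCSV", "false")   // skip the intermediate temp-CSV step
>   .save()
> ```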
> Check Logs
> leges:{}, groupPrivileges:null, rolePrivileges:null))
> INFO 10-11 23:52:44,005 - Creating directory if it doesn't exist: hdfs://10.18.102.236:54310/opt/Carbon/Spark/spark/bin/null/bin/carbonshellstore/hivemetadata/mycarbon4
> AUDIT 10-11 23:52:44,037 - [BLR1000007781][root][Thread-1]Table created with Database name [default] and Table name [mycarbon4]
> INFO 10-11 23:52:44,040 - Successfully able to get the table metadata file lock
> In HDFS, this path is empty:
> hdfs://10.18.102.236:54310/opt/Carbon/Spark/spark/bin/null/bin/carbonshellstore/hivemetadata/mycarbon4
> Actual store location is: hdfs://10.18.102.236:54310/opt/Carbon/mystore
> Expected: the empty folder should not be created. It appears to be created under SPARK_HOME/bin.
> SPARK_HOME is /opt/Carbon/Spark/spark/bin
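> The literal "null" segment in the bogus path suggests an unset setting being concatenated into a path string. A minimal sketch of that mechanism (the property name below is hypothetical, not CarbonData's actual key):
>
> ```scala
> // Minimal sketch: concatenating a null-valued setting into a path yields a
> // literal "null" segment, matching the empty directory seen in the log.
> // "carbon.shell.store.path" is a hypothetical property name for illustration.
> val sparkHome = "/opt/Carbon/Spark/spark/bin"
> val shellStore: String = System.getProperty("carbon.shell.store.path") // null when unset
> val path = sparkHome + "/" + shellStore + "/bin/carbonshellstore/hivemetadata/mycarbon4"
> println(path)
> // /opt/Carbon/Spark/spark/bin/null/bin/carbonshellstore/hivemetadata/mycarbon4
> ```
>
> If this is what happens, the fix would be to resolve the store location before building the path rather than letting a null default leak into string concatenation.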