is wrongly linked to this Jira. The actual Jira is CARBONDATA-3973
> Data load with partition columns fails with InvalidLoadOptionException when load option 'header' is set to 'true'
> ----------------------------------------------------------------------------------------------------------------
>
> Key: CARBONDATA-3793
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3793
> Project: CarbonData
> Issue Type: Bug
> Components: spark-integration
> Affects Versions: 2.0.0
> Reporter: Venugopal Reddy K
> Priority: Minor
> Fix For: 2.0.0
>
> Attachments: Selection_001.png
>
> Time Spent: 4h 40m
> Remaining Estimate: 0h
>
> *Issue:*
> Data load into a partitioned table fails with `InvalidLoadOptionException` when the load option `header` is set to `true`.
>
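> A minimal reproduction sketch (the table name, columns, and CSV path below are illustrative assumptions, not taken from the report), assuming a SparkSession with CarbonData extensions and a CSV file whose first row is the header:
>
>     // hypothetical partitioned table; any partitioned carbondata table takes the same write path
>     spark.sql("CREATE TABLE source(id INT, name STRING) PARTITIONED BY (country STRING) STORED AS carbondata")
>     // fails with InvalidLoadOptionException when 'header' is set to 'true'
>     spark.sql("LOAD DATA INPATH '/tmp/source.csv' INTO TABLE source OPTIONS('header'='true')")
>
> The stack trace suggests that the partition write path (SparkCarbonTableFormat.prepareWrite) already fills in the file header before calling CarbonLoadModelBuilder.build, so the explicit 'header'='true' option then trips the validation that rejects 'fileheader' together with 'header'='true'.
>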
> *CallStack:*
> 2020-05-05 21:49:35 AUDIT audit:97 - {"time":"5 May, 2020 9:49:35 PM IST","username":"root1","opName":"LOAD DATA","opId":"199081091980878","opStatus":"FAILED","opTime":"1734 ms","table":"default.source","extraInfo":{"Exception":"org.apache.carbondata.common.exceptions.sql.InvalidLoadOptionException","Message":"When 'header' option is true, 'fileheader' option is not required."}}
> Exception in thread "main" org.apache.carbondata.common.exceptions.sql.InvalidLoadOptionException: When 'header' option is true, 'fileheader' option is not required.
> at org.apache.carbondata.processing.loading.model.CarbonLoadModelBuilder.build(CarbonLoadModelBuilder.java:203)
> at org.apache.carbondata.processing.loading.model.CarbonLoadModelBuilder.build(CarbonLoadModelBuilder.java:126)
> at org.apache.spark.sql.execution.datasources.SparkCarbonTableFormat.prepareWrite(SparkCarbonTableFormat.scala:132)
> at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:103)
> at org.apache.spark.sql.execution.command.management.CarbonInsertIntoHadoopFsRelationCommand.run(CarbonInsertIntoHadoopFsRelationCommand.scala:160)
> at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
> at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)