[GitHub] [carbondata] VenuReddy2103 commented on a change in pull request #3819: [CARBONDATA-3855]support carbon SDK to load data from different files

GitBox

VenuReddy2103 commented on a change in pull request #3819:
URL: https://github.com/apache/carbondata/pull/3819#discussion_r479973113



##########
File path: sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonWriterBuilder.java
##########
@@ -660,13 +895,39 @@ public CarbonWriter build() throws IOException, InvalidLoadOptionException {
       // removed from the load. LoadWithoutConverter flag is going to point to the Loader Builder
       // which will skip Conversion Step.
       loadModel.setLoadWithoutConverterStep(true);
-      return new AvroCarbonWriter(loadModel, hadoopConf, this.avroSchema);
+      AvroCarbonWriter avroCarbonWriter = new AvroCarbonWriter(loadModel,

Review comment:
      We have some code duplication for each type of writer. Suggest refactoring it to something like this:
   ```suggestion
       CarbonWriter carbonWriter;
       if (this.writerType == WRITER_TYPE.AVRO) {
         // AVRO records are pushed to Carbon as Object not as Strings. This was done in order to
         // handle multi level complex type support. As there are no conversion converter step is
         // removed from the load. LoadWithoutConverter flag is going to point to the Loader Builder
         // which will skip Conversion Step.
         loadModel.setLoadWithoutConverterStep(true);
         carbonWriter = new AvroCarbonWriter(loadModel, hadoopConf, this.avroSchema);
       } else if (this.writerType == WRITER_TYPE.JSON) {
         loadModel.setJsonFileLoad(true);
         carbonWriter = new JsonCarbonWriter(loadModel, hadoopConf);
       } else if (this.writerType == WRITER_TYPE.PARQUET) {
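        // Like the Avro path above, Parquet rows bypass the converter step;
        // they are read via the Avro schema passed below (inferred from the
        // constructor arguments, not stated in the PR).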
         loadModel.setLoadWithoutConverterStep(true);
         carbonWriter = new ParquetCarbonWriter(loadModel, hadoopConf, this.avroSchema);
       } else if (this.writerType == WRITER_TYPE.ORC) {
         carbonWriter = new ORCCarbonWriter(loadModel, hadoopConf);
       } else {
         // CSV
         CSVCarbonWriter csvCarbonWriter = new CSVCarbonWriter(loadModel, hadoopConf);
         if (!this.options.containsKey(CarbonCommonConstants.FILE_HEADER)) {
           csvCarbonWriter.setSkipHeader(true);
         }
         carbonWriter = csvCarbonWriter;
       }
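      // When loading from existing data files, validate them for the chosen
      // writer type (inferred from the method names in this PR).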
       if (!StringUtils.isEmpty(filePath)) {
         carbonWriter.validateAndSetDataFiles(this.dataFiles);
       }
       return carbonWriter;
   ```
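
For reference, a minimal sketch (not part of this PR) of how `build()` is reached through the public SDK API. The schema fields, output path, and app name below are illustrative assumptions; `withCsvInput()` selects the CSV writer type, so `build()` would take the final (CSV) branch of the dispatch shown in the suggestion.

```java
import org.apache.carbondata.core.metadata.datatype.DataTypes;
import org.apache.carbondata.sdk.file.CarbonWriter;
import org.apache.carbondata.sdk.file.Field;
import org.apache.carbondata.sdk.file.Schema;

public class CsvWriterSketch {
  public static void main(String[] args) throws Exception {
    // Two-column schema; field names and types are illustrative.
    Schema schema = new Schema(new Field[]{
        new Field("name", DataTypes.STRING),
        new Field("age", DataTypes.INT)});
    // build() is where the writerType dispatch happens: CSV input here,
    // so the else (CSV) branch creates a CSVCarbonWriter.
    CarbonWriter writer = CarbonWriter.builder()
        .outputPath("/tmp/carbon-sdk-out")  // illustrative output path
        .withCsvInput(schema)
        .writtenBy("CsvWriterSketch")
        .build();
    writer.write(new String[]{"alice", "30"});
    writer.close();
  }
}
```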




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[hidden email]