GitHub user xubo245 opened a pull request:
https://github.com/apache/carbondata/pull/2341

[CARBONDATA-2500] Create an API to read the user's schema in the SDK

The order of the schema data types differs between write and read in the SDK.

1. Create an API to read the user's schema in the SDK: org.apache.carbondata.sdk.file.CarbonReader#readUserSchema

    /**
     * Read CarbonData file and return the user schema
     */
    public static List<ColumnSchema> readUserSchema(String dataFilePath);

2. Write some test cases that sort the list of ColumnSchema: org.apache.carbondata.sdk.file.CarbonReaderTest
3. Fix some spelling errors in CarbonData

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:

 - [ ] Any interfaces changed?
       No, but one new interface is added.
 - [ ] Any backward compatibility impacted?
       No.
 - [ ] Document update required?
       Yes.
 - [ ] Testing done
       Please provide details on
       - Whether new unit test cases have been added or why no new tests are required?
       - How it is tested? Please attach test report.
       - Is it a performance related change? Please attach the performance test report.
       - Any additional information to help reviewers in testing this change.
       Added some test cases in org.apache.carbondata.sdk.file.CarbonReaderTest.
 - [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.
       No.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/xubo245/carbondata CARBONDATA-2500-orderDifferent

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/2341.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2341

----

commit 214f46440c47bba481e72e344c2853364b855aca
Author: xubo245 <xubo29@...>
Date:   2018-05-25T08:07:55Z

    [CARBONDATA-2500] The order is different between write and read data type of schema in SDK

    fix style

----

---
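For context, a minimal usage sketch of the new API. The data file name below is illustrative, and the sort by schema ordinal mirrors what the test cases in this PR do; this is a sketch under those assumptions, not code from the PR itself.

    import java.io.IOException;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;

    import org.apache.carbondata.core.metadata.schema.table.column.ColumnSchema;
    import org.apache.carbondata.sdk.file.CarbonReader;

    public class ReadUserSchemaExample {
      public static void main(String[] args) throws IOException {
        // Read the schema stored in the header of a CarbonData file
        // (the file name below is illustrative)
        List<ColumnSchema> columns =
            CarbonReader.readUserSchema("./testWriteFiles/part-0.carbondata");

        // Restore the order in which the user defined the schema
        Collections.sort(columns, new Comparator<ColumnSchema>() {
          @Override public int compare(ColumnSchema o1, ColumnSchema o2) {
            return Integer.compare(o1.getSchemaOrdinal(), o2.getSchemaOrdinal());
          }
        });

        for (ColumnSchema column : columns) {
          System.out.println(column.getColumnName() + ": " + column.getDataType());
        }
      }
    }

---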
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2341

Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/4938/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2341

Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/6100/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2341

Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/6101/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2341

Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/4939/

---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2341

SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/5094/

---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2341

SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/5095/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2341

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/6104/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2341

Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/4942/

---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2341

SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/5098/

---
Github user xubo245 commented on the issue:
https://github.com/apache/carbondata/pull/2341

@sounakr @jackylk @ravipesala Please review it.

---
Github user sounakr commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2341#discussion_r191094114

--- Diff: store/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonReader.java ---

@@ -102,7 +104,23 @@ public static CarbonReaderBuilder builder(String tablePath, String tableName) {
   }

   /**
-   * Read schmea file and return table info object
+   * Read CarbonData file and return the user schema,
+   * the schema order is the same as user save schema
+   */
+  public static List<ColumnSchema> readUserSchema(String dataFilePath) throws IOException {
+    CarbonHeaderReader reader = new CarbonHeaderReader(dataFilePath);

--- End diff --

Rather than reading the CarbonData file, we could read the carbon index file, which is much smaller in size, to get the schema. Reading the much bigger chunk file in OBS is costly.

---
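To make the suggestion concrete, a hedged sketch of what an index-file-based variant could look like from the caller's side. The method readUserSchemaFromIndexFile is hypothetical, named here only for illustration; this PR reads the data file header instead.

    import java.io.File;
    import java.io.FilenameFilter;
    import java.io.IOException;
    import java.util.List;

    import org.apache.carbondata.core.metadata.schema.table.column.ColumnSchema;
    import org.apache.carbondata.sdk.file.CarbonReader;

    public class IndexFileSchemaSketch {
      public static void main(String[] args) throws IOException {
        // A segment's .carbonindex file is far smaller than its .carbondata
        // file, so fetching it from object storage such as OBS is much cheaper.
        File[] indexFiles = new File("./testWriteFiles/Fact/Part0/Segment_null")
            .listFiles(new FilenameFilter() {
              @Override public boolean accept(File dir, String name) {
                return name.endsWith("carbonindex");
              }
            });

        // Hypothetical method, shown only to illustrate the reviewer's suggestion
        List<ColumnSchema> columns =
            CarbonReader.readUserSchemaFromIndexFile(indexFiles[0].getAbsolutePath());
        System.out.println(columns.size() + " columns read from the index file");
      }
    }

---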
Github user sounakr commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2341#discussion_r191094243

--- Diff: store/sdk/src/test/java/org/apache/carbondata/sdk/file/CarbonReaderTest.java ---

@@ -381,7 +384,343 @@ public boolean accept(File pathname) {
         , "boolField"
         , "dateField"
         , "timeField"
-        , "decimalField"}).build();
+        , "decimalField"})
+        .build();
+
+    int i = 0;
+    while (reader.hasNext()) {
+      Object[] row = (Object[]) reader.readNextRow();
+      int id = (int) row[2];
+      Assert.assertEquals("robot" + (id % 10), row[0]);
+      Assert.assertEquals(Short.parseShort(String.valueOf(id)), row[1]);
+      Assert.assertEquals(Long.MAX_VALUE - id, row[3]);
+      Assert.assertEquals((double) id / 2, row[4]);
+      Assert.assertEquals(true, (boolean) row[5]);
+      long day = 24L * 3600 * 1000;
+      Assert.assertEquals("2019-03-02", new Date((day * ((int) row[6]))).toString());
+      Assert.assertEquals("2019-02-12 03:03:34.0", new Timestamp((long) row[7] / 1000).toString());
+      i++;
+    }
+    Assert.assertEquals(i, 100);
+
+    reader.close();
+    FileUtils.deleteDirectory(new File(path));
+    carbonProperties.addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT,
+        timestampFormat);
+    carbonProperties.addProperty(CarbonCommonConstants.CARBON_BAD_RECORDS_ACTION,
+        badRecordAction);
+    carbonProperties.addProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC,
+        badRecordLoc);
+  }
+
+  @Test
+  public void testReadSchemaFileAndSort() throws IOException, InterruptedException {
+    String timestampFormat = carbonProperties.getProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT,
+        CarbonCommonConstants.CARBON_TIMESTAMP_DEFAULT_FORMAT);
+    String badRecordAction = carbonProperties.getProperty(CarbonCommonConstants.CARBON_BAD_RECORDS_ACTION,
+        CarbonCommonConstants.CARBON_BAD_RECORDS_ACTION_DEFAULT);
+    String badRecordLoc = carbonProperties.getProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC,
+        CarbonCommonConstants.CARBON_BADRECORDS_LOC_DEFAULT_VAL);
+    String rootPath = new File(this.getClass().getResource("/").getPath()
+        + "../../").getCanonicalPath();
+    String storeLocation = rootPath + "/target/";
+    carbonProperties
+        .addProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC, storeLocation)
+        .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "yyyy-MM-dd hh:mm:ss")
+        .addProperty(CarbonCommonConstants.CARBON_BAD_RECORDS_ACTION, "REDIRECT");
+    String path = "./testWriteFiles";
+    FileUtils.deleteDirectory(new File(path));
+
+    Field[] fields = new Field[9];
+    fields[0] = new Field("stringField", DataTypes.STRING);
+    fields[1] = new Field("shortField", DataTypes.SHORT);
+    fields[2] = new Field("intField", DataTypes.INT);
+    fields[3] = new Field("longField", DataTypes.LONG);
+    fields[4] = new Field("doubleField", DataTypes.DOUBLE);
+    fields[5] = new Field("boolField", DataTypes.BOOLEAN);
+    fields[6] = new Field("dateField", DataTypes.DATE);
+    fields[7] = new Field("timeField", DataTypes.TIMESTAMP);
+    fields[8] = new Field("decimalField", DataTypes.createDecimalType(8, 2));
+
+    try {
+      CarbonWriterBuilder builder = CarbonWriter.builder()
+          .isTransactionalTable(true)
+          .persistSchemaFile(true)
+          .outputPath(path);
+
+      CarbonWriter writer = builder.buildWriterForCSVInput(new Schema(fields));
+
+      for (int i = 0; i < 100; i++) {
+        String[] row2 = new String[]{
+            "robot" + (i % 10),
+            String.valueOf(i),
+            String.valueOf(i),
+            String.valueOf(Long.MAX_VALUE - i),
+            String.valueOf((double) i / 2),
+            String.valueOf(true),
+            "2019-03-02",
+            "2019-02-12 03:03:34",
+            "12.345"
+        };
+        writer.write(row2);
+      }
+      writer.close();
+    } catch (Exception e) {
+      e.printStackTrace();
+      Assert.fail(e.getMessage());
+    }
+
+    File[] dataFiles = new File(path + "/Metadata").listFiles(new FilenameFilter() {
+      @Override public boolean accept(File dir, String name) {
+        return name.endsWith("schema");
+      }
+    });
+    TableInfo tableInfo = CarbonReader.readSchemaFile(dataFiles[0].getAbsolutePath());
+
+    List<ColumnSchema> columns = tableInfo.getFactTable().getListOfColumns();
+
+    // sort the schema
+    Collections.sort(tableInfo.getFactTable().getListOfColumns(), new Comparator<ColumnSchema>() {
+      @Override
+      public int compare(ColumnSchema o1, ColumnSchema o2) {
+        return Integer.compare(o1.getSchemaOrdinal(), o2.getSchemaOrdinal());
+      }
+    });
+
+    // Transform the schema
+    String[] strings = new String[columns.size()];
+    for (int i = 0; i < columns.size(); i++) {
+      strings[i] = columns.get(i).getColumnName();
+    }
+
+    File segmentFolder = new File(CarbonTablePath.getSegmentPath(path, "null"));
+    Assert.assertTrue(segmentFolder.exists());
+
+    Assert.assertNotNull(dataFiles);
+    Assert.assertTrue(dataFiles.length > 0);
+
+    CarbonReader reader = CarbonReader
+        .builder(path, "_temp")
+        .projection(strings)
+        .build();
+
+    int i = 0;
+    while (reader.hasNext()) {
+      Object[] row = (Object[]) reader.readNextRow();
+      int id = (int) row[2];
+      Assert.assertEquals("robot" + (id % 10), row[0]);
+      Assert.assertEquals(Short.parseShort(String.valueOf(id)), row[1]);
+      Assert.assertEquals(Long.MAX_VALUE - id, row[3]);
+      Assert.assertEquals((double) id / 2, row[4]);
+      Assert.assertEquals(true, (boolean) row[5]);
+      long day = 24L * 3600 * 1000;
+      Assert.assertEquals("2019-03-02", new Date((day * ((int) row[6]))).toString());
+      Assert.assertEquals("2019-02-12 03:03:34.0", new Timestamp((long) row[7] / 1000).toString());
+      i++;
+    }
+    Assert.assertEquals(i, 100);
+
+    reader.close();
+    FileUtils.deleteDirectory(new File(path));
+    carbonProperties.addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT,
+        timestampFormat);
+    carbonProperties.addProperty(CarbonCommonConstants.CARBON_BAD_RECORDS_ACTION,
+        badRecordAction);
+    carbonProperties.addProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC,
+        badRecordLoc);
+  }
+
+  @Test
+  public void testReadSchemaInDataFileAndSort() throws IOException, InterruptedException {
+    String timestampFormat = carbonProperties.getProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT,
+        CarbonCommonConstants.CARBON_TIMESTAMP_DEFAULT_FORMAT);
+    String badRecordAction = carbonProperties.getProperty(CarbonCommonConstants.CARBON_BAD_RECORDS_ACTION,
+        CarbonCommonConstants.CARBON_BAD_RECORDS_ACTION_DEFAULT);
+    String badRecordLoc = carbonProperties.getProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC,
+        CarbonCommonConstants.CARBON_BADRECORDS_LOC_DEFAULT_VAL);
+    String rootPath = new File(this.getClass().getResource("/").getPath()
+        + "../../").getCanonicalPath();
+    String storeLocation = rootPath + "/target/";
+    carbonProperties
+        .addProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC, storeLocation)
+        .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "yyyy-MM-dd hh:mm:ss")
+        .addProperty(CarbonCommonConstants.CARBON_BAD_RECORDS_ACTION, "REDIRECT");
+    String path = "./testWriteFiles";
+    FileUtils.deleteDirectory(new File(path));
+
+    Field[] fields = new Field[9];
+    fields[0] = new Field("stringField", DataTypes.STRING);
+    fields[1] = new Field("shortField", DataTypes.SHORT);
+    fields[2] = new Field("intField", DataTypes.INT);
+    fields[3] = new Field("longField", DataTypes.LONG);
+    fields[4] = new Field("doubleField", DataTypes.DOUBLE);
+    fields[5] = new Field("boolField", DataTypes.BOOLEAN);
+    fields[6] = new Field("dateField", DataTypes.DATE);
+    fields[7] = new Field("timeField", DataTypes.TIMESTAMP);
+    fields[8] = new Field("decimalField", DataTypes.createDecimalType(8, 2));
+
+    try {
+      CarbonWriterBuilder builder = CarbonWriter.builder()
+          .isTransactionalTable(true)
+          .persistSchemaFile(true)
+          .outputPath(path);
+
+      CarbonWriter writer = builder.buildWriterForCSVInput(new Schema(fields));
+
+      for (int i = 0; i < 100; i++) {
+        String[] row2 = new String[]{
+            "robot" + (i % 10),
+            String.valueOf(i),
+            String.valueOf(i),
+            String.valueOf(Long.MAX_VALUE - i),
+            String.valueOf((double) i / 2),
+            String.valueOf(true),
+            "2019-03-02",
+            "2019-02-12 03:03:34",
+            "12.345"
+        };
+        writer.write(row2);
+      }
+      writer.close();
+    } catch (Exception e) {
+      e.printStackTrace();
+      Assert.fail(e.getMessage());
+    }
+
+    File[] dataFiles2 = new File(path + "/Fact/Part0/Segment_null/").listFiles(new FilenameFilter() {
+      @Override public boolean accept(File dir, String name) {
+        return name.endsWith("carbondata");
+      }
+    });
+
+    List<ColumnSchema> columns = CarbonReader.readSchemaInDataFile(dataFiles2[0].getAbsolutePath());
+
+    // sort the schema
+    Collections.sort(columns, new Comparator<ColumnSchema>() {
+      @Override
+      public int compare(ColumnSchema o1, ColumnSchema o2) {
+        return Integer.compare(o1.getSchemaOrdinal(), o2.getSchemaOrdinal());
+      }
+    });
+
+    // Transform the schema
+    String[] strings = new String[columns.size()];
+    for (int i = 0; i < columns.size(); i++) {
+      strings[i] = columns.get(i).getColumnName();
+    }
+
+    File segmentFolder = new File(CarbonTablePath.getSegmentPath(path, "null"));
+    Assert.assertTrue(segmentFolder.exists());
+
+    CarbonReader reader = CarbonReader
+        .builder(path, "_temp")
+        .projection(strings)
+        .build();
+
+    int i = 0;
+    while (reader.hasNext()) {
+      Object[] row = (Object[]) reader.readNextRow();
+      int id = (int) row[2];
+      Assert.assertEquals("robot" + (id % 10), row[0]);
+      Assert.assertEquals(Short.parseShort(String.valueOf(id)), row[1]);
+      Assert.assertEquals(Long.MAX_VALUE - id, row[3]);
+      Assert.assertEquals((double) id / 2, row[4]);
+      Assert.assertEquals(true, (boolean) row[5]);
+      long day = 24L * 3600 * 1000;
+      Assert.assertEquals("2019-03-02", new Date((day * ((int) row[6]))).toString());
+      Assert.assertEquals("2019-02-12 03:03:34.0", new Timestamp((long) row[7] / 1000).toString());
+      i++;
+    }
+    Assert.assertEquals(i, 100);
+
+    reader.close();
+    FileUtils.deleteDirectory(new File(path));
+    carbonProperties.addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT,
+        timestampFormat);
+    carbonProperties.addProperty(CarbonCommonConstants.CARBON_BAD_RECORDS_ACTION,
+        badRecordAction);
+    carbonProperties.addProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC,
+        badRecordLoc);
+  }
+
+  @Test
+  public void testReadUserSchema() throws IOException, InterruptedException {
+    String timestampFormat = carbonProperties.getProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT,
+        CarbonCommonConstants.CARBON_TIMESTAMP_DEFAULT_FORMAT);
+    String badRecordAction = carbonProperties.getProperty(CarbonCommonConstants.CARBON_BAD_RECORDS_ACTION,
+        CarbonCommonConstants.CARBON_BAD_RECORDS_ACTION_DEFAULT);
+    String badRecordLoc = carbonProperties.getProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC,
+        CarbonCommonConstants.CARBON_BADRECORDS_LOC_DEFAULT_VAL);
+    String rootPath = new File(this.getClass().getResource("/").getPath()
+        + "../../").getCanonicalPath();
+    String storeLocation = rootPath + "/target/";
+    carbonProperties
+        .addProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC, storeLocation)
+        .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "yyyy-MM-dd hh:mm:ss")
+        .addProperty(CarbonCommonConstants.CARBON_BAD_RECORDS_ACTION, "REDIRECT");
+    String path = "./testWriteFiles";
+    FileUtils.deleteDirectory(new File(path));
+
+    Field[] fields = new Field[9];
+    fields[0] = new Field("stringField", DataTypes.STRING);
+    fields[1] = new Field("shortField", DataTypes.SHORT);

--- End diff --

Add some test cases with ComplexType to validate whether the SchemaOrdinal is set properly and whether the sorting based on it works correctly.

---
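A hedged sketch of the kind of validation being requested, assuming it lives inside CarbonReaderTest (which already imports the needed classes) and that a file containing a complex column has been written beforehand; this is not a test from the PR.

    // Validate that every column read back, including complex-type children,
    // carries a usable schema ordinal before we sort on it.
    public void validateSchemaOrdinals(String dataFilePath) throws IOException {
      List<ColumnSchema> columns = CarbonReader.readUserSchema(dataFilePath);
      Collections.sort(columns, new Comparator<ColumnSchema>() {
        @Override public int compare(ColumnSchema o1, ColumnSchema o2) {
          return Integer.compare(o1.getSchemaOrdinal(), o2.getSchemaOrdinal());
        }
      });
      for (ColumnSchema column : columns) {
        // An ordinal of -1 would sort ahead of every real column and
        // silently break the restored order (see the reply below)
        Assert.assertTrue("unexpected ordinal for " + column.getColumnName(),
            column.getSchemaOrdinal() >= 0);
      }
    }

---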
Github user xubo245 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2341#discussion_r191153432

--- Diff: store/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonReader.java ---

(quotes the same readUserSchema hunk as the review comment above)

--- End diff --

OK, I will copy the method from PR 2345 first.

---
Github user xubo245 commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2341#discussion_r191172697

--- Diff: store/sdk/src/test/java/org/apache/carbondata/sdk/file/CarbonReaderTest.java ---

(quotes the same test hunk as the review comment above)

--- End diff --

The SchemaOrdinal is -1 in that case, so the sorted result is incorrect.

---
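One illustrative way the sort could tolerate that -1 ordinal, shown only as a sketch under the assumption that columns holds the List<ColumnSchema> from readUserSchema; it is not what the PR implements.

    // Columns without a valid schema ordinal (-1, e.g. complex children)
    // are pushed to the end instead of sorting ahead of every real column.
    Collections.sort(columns, new Comparator<ColumnSchema>() {
      @Override public int compare(ColumnSchema o1, ColumnSchema o2) {
        int left = o1.getSchemaOrdinal() < 0 ? Integer.MAX_VALUE : o1.getSchemaOrdinal();
        int right = o2.getSchemaOrdinal() < 0 ? Integer.MAX_VALUE : o2.getSchemaOrdinal();
        return Integer.compare(left, right);
      }
    });

---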
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2341

Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/4966/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2341

Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/6128/

---
Github user xubo245 commented on the issue:
https://github.com/apache/carbondata/pull/2341

retest this please

---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2341

SDV Build Fail, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/5116/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2341

Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/4970/

---