[GitHub] carbondata pull request #1286: [CARBONDATA-1404] Added Unit test cases for H...

Github user chenliang613 commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1286#discussion_r137938991
 
    --- Diff: integration/hive/src/test/java/org/apache/carbondata/hive/CarbonHiveRecordReaderTest.java ---
    @@ -0,0 +1,250 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.carbondata.hive;
    +
    +import java.io.IOException;
    +import java.util.ArrayList;
    +import java.util.List;
    +import java.util.concurrent.ExecutorService;
    +import java.util.concurrent.ForkJoinPool;
    +
    +import org.apache.carbondata.common.CarbonIterator;
    +import org.apache.carbondata.core.metadata.AbsoluteTableIdentifier;
    +import org.apache.carbondata.core.metadata.CarbonTableIdentifier;
    +import org.apache.carbondata.core.metadata.ColumnarFormatVersion;
    +import org.apache.carbondata.core.scan.executor.exception.QueryExecutionException;
    +import org.apache.carbondata.core.scan.executor.impl.DetailQueryExecutor;
    +import org.apache.carbondata.core.scan.executor.infos.BlockExecutionInfo;
    +import org.apache.carbondata.core.scan.model.QueryModel;
    +import org.apache.carbondata.core.scan.result.BatchResult;
    +import org.apache.carbondata.core.scan.result.iterator.AbstractDetailQueryResultIterator;
    +import org.apache.carbondata.core.scan.result.iterator.ChunkRowIterator;
    +import org.apache.carbondata.core.scan.result.iterator.DetailQueryResultIterator;
    +import org.apache.carbondata.hadoop.readsupport.CarbonReadSupport;
    +
    +import mockit.Mock;
    +import mockit.MockUp;
    +import org.apache.hadoop.fs.Path;
    +import org.apache.hadoop.hive.serde.serdeConstants;
    +import org.apache.hadoop.io.ArrayWritable;
    +import org.apache.hadoop.io.IntWritable;
    +import org.apache.hadoop.io.Text;
    +import org.apache.hadoop.io.Writable;
    +import org.apache.hadoop.mapred.InputSplit;
    +import org.apache.hadoop.mapred.JobConf;
    +import org.junit.Assert;
    +import org.junit.BeforeClass;
    +import org.junit.Test;
    +
    +public class CarbonHiveRecordReaderTest {
    +    private static CarbonHiveRecordReader carbonHiveRecordReaderObj;
    +    private static AbsoluteTableIdentifier absoluteTableIdentifier;
    +    private static QueryModel queryModel = new QueryModel();
    +    private static CarbonReadSupport<ArrayWritable> readSupport = new CarbonDictionaryDecodeReadSupport<>();
    +    private static JobConf jobConf = new JobConf();
    +    private static InputSplit inputSplitNotInstanceOfHiveInputSplit, inputSplitInstanceOfHiveInputSplit;
    +    private static BatchResult batchResult = new BatchResult();
    +    private static Writable writable;
    +    private static CarbonIterator carbonIteratorObject;
    +
    +    @BeforeClass
    +    public static void setUp() throws Exception {
    +        String array[] = {"neha", "01", "vaishali"};
    +        writable = new ArrayWritable(array);
    +        absoluteTableIdentifier = new AbsoluteTableIdentifier(
    +                "/home/neha/Projects/incubator-carbondata/examples/spark2/target/store",
    --- End diff --
   
    Please check whether absoluteTableIdentifier really needs to be defined.

    If not, the code below can be dropped:
   
    new MockUp<QueryModel>() {
        @Mock
        public AbsoluteTableIdentifier getAbsoluteTableIdentifier() {
            return absoluteTableIdentifier;
        }
    };
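
    For context: in JMockit, a MockUp<T> redefines the faked method for every
    instance of T until the fake is torn down, so the block above is redundant
    only if nothing the test exercises ever calls getAbsoluteTableIdentifier().
    A minimal sketch of that behaviour, assuming JUnit 4 and JMockit on the
    classpath; the nested QueryModel below is an illustrative stand-in, not
    the real CarbonData class:

    import mockit.Mock;
    import mockit.MockUp;
    import org.junit.Assert;
    import org.junit.Test;

    public class MockUpScopeSketchTest {

        // Illustrative stand-in for org.apache.carbondata.core.scan.model.QueryModel.
        static class QueryModel {
            String getTableName() { return "real"; }
        }

        @Test
        public void fakeAppliesToAllInstances() {
            new MockUp<QueryModel>() {
                @Mock
                String getTableName() { return "mocked"; }  // replaces the real method
            };
            // Even an instance created after the MockUp is installed sees the fake.
            Assert.assertEquals("mocked", new QueryModel().getTableName());
        }
    }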


---
Github user chenliang613 commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1286#discussion_r137939004
 
    --- Diff: integration/hive/src/test/java/org/apache/carbondata/hive/CarbonHiveRecordReaderTest.java ---
    @@ -0,0 +1,250 @@
    [... license header, imports, fields, and start of setUp() unchanged, as quoted above ...]
    +        absoluteTableIdentifier = new AbsoluteTableIdentifier(
    +                "/home/neha/Projects/incubator-carbondata/examples/spark2/target/store",
    +                new CarbonTableIdentifier("DB", "TBL", "TBLID"));
    +        ColumnarFormatVersion columnarFormatVersion = ColumnarFormatVersion.V3;
    +        Path path = new Path(
    +                "/home/store/database/Fact/Part0/Segment_0/part-0-0_batchno0-0-1502197243476.carbondata");
    +        inputSplitInstanceOfHiveInputSplit =
    +                new CarbonHiveInputSplit("20", path, 1235L, 235L, array, columnarFormatVersion);
    +        List<Object[]> rowsList = new ArrayList(2);
    +        ExecutorService executorService = new ForkJoinPool();
    +        List<BlockExecutionInfo> blockExecutionInfoList = new ArrayList<>();
    +
    +        blockExecutionInfoList.add(new BlockExecutionInfo());
    +        blockExecutionInfoList.add(new BlockExecutionInfo());
    +
    +        jobConf.set("hive.io.file.readcolumn.ids", "01,02");
    +        jobConf.set(serdeConstants.LIST_COLUMN_TYPES, "int,string");
    +
    +        rowsList.add(0, new Object[]{1, "neha"});
    +        rowsList.add(1, new Object[]{3, "divya"});
    +        batchResult.setRows(rowsList);
    +
    +        new MockUp<QueryModel>() {
    +            @Mock
    +            public AbsoluteTableIdentifier getAbsoluteTableIdentifier() {
    +                return absoluteTableIdentifier;
    +            }
    +        };
    +
    +        new MockUp<AbstractDetailQueryResultIterator>() {
    +            @Mock
    --- End diff --
   
    Why do we need to define "new MockUp<AbstractDetailQueryResultIterator>()"? I could not find any code that uses it.


---
Github user chenliang613 commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1286#discussion_r137939031
 
    --- Diff: integration/hive/src/test/java/org/apache/carbondata/hive/CarbonDictionaryDecodeReadSupportTest.java ---
    @@ -0,0 +1,325 @@
    [... standard ASF license header, as above ...]
    +
    +package org.apache.carbondata.hive;
    +
    +import java.io.IOException;
    +import java.text.SimpleDateFormat;
    +import java.util.ArrayList;
    +import java.util.Date;
    +import java.util.List;
    +
    +import org.apache.carbondata.core.cache.dictionary.AbstractColumnDictionaryInfo;
    +import org.apache.carbondata.core.cache.dictionary.ColumnDictionaryInfo;
    +import org.apache.carbondata.core.cache.dictionary.Dictionary;
    +import org.apache.carbondata.core.cache.dictionary.DictionaryColumnUniqueIdentifier;
    +import org.apache.carbondata.core.cache.dictionary.ForwardDictionaryCache;
    +import org.apache.carbondata.core.metadata.AbsoluteTableIdentifier;
    +import org.apache.carbondata.core.metadata.CarbonTableIdentifier;
    +import org.apache.carbondata.core.metadata.datatype.DataType;
    +import org.apache.carbondata.core.metadata.encoder.Encoding;
    +import org.apache.carbondata.core.metadata.schema.table.column.CarbonColumn;
    +import org.apache.carbondata.core.metadata.schema.table.column.CarbonDimension;
    +import org.apache.carbondata.core.metadata.schema.table.column.ColumnSchema;
    +
    +import mockit.Mock;
    +import mockit.MockUp;
    +import org.apache.hadoop.io.ArrayWritable;
    +import org.apache.spark.sql.catalyst.expressions.GenericInternalRow;
    +import org.apache.spark.sql.catalyst.util.GenericArrayData;
    +import org.apache.spark.sql.types.ArrayType;
    +import org.junit.Assert;
    +import org.junit.BeforeClass;
    +import org.junit.Test;
    +
    +public class CarbonDictionaryDecodeReadSupportTest {
    +
    +  private static Date date = new Date();
    +  private static DataType dataTypes[] =
    +      new DataType[] { DataType.INT, DataType.STRING, DataType.NULL, DataType.DOUBLE, DataType.LONG,
    +          DataType.SHORT, DataType.DATE, DataType.TIMESTAMP, DataType.DECIMAL, DataType.STRUCT,
    +          DataType.ARRAY, DataType.BYTE_ARRAY };
    +  private static Encoding encodings[] =
    +      new Encoding[] { Encoding.DICTIONARY, Encoding.DIRECT_DICTIONARY, Encoding.BIT_PACKED };
    +  private static CarbonColumn carbonColumnsArray[] = new CarbonColumn[12];
    +  private static ColumnSchema columnSchemas[] = new ColumnSchema[12];
    +  private CarbonDictionaryDecodeReadSupport carbonDictionaryDecodeReadSupportObj =
    +      new CarbonDictionaryDecodeReadSupport();
    +  private static AbsoluteTableIdentifier absoluteTableIdentifier;
    +  private Dictionary dictionary = new ColumnDictionaryInfo(DataType.BOOLEAN);
    +  private String name[] = new String[] { "FirstName", "LastName" };
    +  private Object objects[];
    +  private static String dateFormat = new SimpleDateFormat("yyyy/MM/dd").format(date);
    +  private static String timeStamp = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss").format(date);
    +
    +  @BeforeClass public static void setUp() {
    +
    +    for (int i = 0; i < carbonColumnsArray.length; i++) {
    +      List<Encoding> encodingList = new ArrayList<>();
    +      columnSchemas[i] = new ColumnSchema();
    +      columnSchemas[i].setDataType(dataTypes[i]);
    +
    +      DataType datatype = columnSchemas[i].getDataType();
    +      if (datatype == DataType.STRING) {
    +        encodingList.add(encodings[0]);
    +      } else if (datatype.isComplexType()) {
    +        encodingList.add(encodings[0]);
    +        columnSchemas[i].setNumberOfChild(2);
    +        columnSchemas[i].setDimensionColumn(true);
    +      } else {
    +        encodingList.add(encodings[((i % 2) + 1)]);
    +      }
    +      columnSchemas[i].setEncodingList(encodingList);
    +      carbonColumnsArray[i] = new CarbonDimension(columnSchemas[i], 10, 20, 30, 40);
    +    }
    +
    +    absoluteTableIdentifier = new AbsoluteTableIdentifier(
    +        "/incubator-carbondata/examples/spark2/target/store",
    --- End diff --
   
    Why does the path include "incubator"? ("/incubator-carbondata/examples/spark2/target/store")


---
Github user chenliang613 commented on the issue:

    https://github.com/apache/carbondata/pull/1286
 
    @PallaviSingh1992
    1. Please remove redundant code that is never used or invoked by any other code.
    2. Please improve the "absoluteTableIdentifier" and "path" definitions.


---
Github user PallaviSingh1992 commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1286#discussion_r137981992
 
    --- Diff: integration/hive/src/test/java/org/apache/carbondata/hive/CarbonHiveRecordReaderTest.java ---
    @@ -0,0 +1,250 @@
    [... license header, imports, fields, and setUp() body unchanged, as quoted above ...]
    +        new MockUp<QueryModel>() {
    +            @Mock
    +            public AbsoluteTableIdentifier getAbsoluteTableIdentifier() {
    +                return absoluteTableIdentifier;
    +            }
    +        };
    +
    +        new MockUp<AbstractDetailQueryResultIterator>() {
    +            @Mock
    --- End diff --
   
    AbstractDetailQueryResultIterator is invoked internally by DetailQueryResultIterator, and we need DetailQueryResultIterator for the DetailQueryExecutor, whose execute method is invoked in CarbonHiveRecordReader.
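
    In other words: with JMockit, a fake installed on an abstract base class is
    inherited by its concrete subclasses, so the MockUp takes effect even though
    no test code names AbstractDetailQueryResultIterator directly. A minimal
    sketch of the pattern, using illustrative stand-in classes rather than the
    real CarbonData types:

    import mockit.Mock;
    import mockit.MockUp;
    import org.junit.Assert;
    import org.junit.Test;

    public class AbstractBaseFakeSketchTest {

        // Stand-ins for AbstractDetailQueryResultIterator / DetailQueryResultIterator.
        abstract static class AbstractIterator {
            boolean hasNext() { return true; }
        }

        static class DetailIterator extends AbstractIterator { }

        @Test
        public void fakeOnAbstractBaseReachesSubclass() {
            new MockUp<AbstractIterator>() {
                @Mock
                boolean hasNext() { return false; }  // inherited by every concrete subclass
            };
            // The assertion goes through DetailIterator, yet it executes the
            // faked method defined on the abstract base.
            Assert.assertFalse(new DetailIterator().hasNext());
        }
    }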


---
Github user ravipesala commented on the issue:

    https://github.com/apache/carbondata/pull/1286
 
    SDV Build Success, please check CI http://144.76.159.231:8080/job/ApacheSDVTests/651/



---
Github user ravipesala commented on the issue:

    https://github.com/apache/carbondata/pull/1286
 
    SDV Build Fail, please check CI http://144.76.159.231:8080/job/ApacheSDVTests/656/



---
Github user PallaviSingh1992 commented on the issue:

    https://github.com/apache/carbondata/pull/1286
 
    retest this please


---
Github user ravipesala commented on the issue:

    https://github.com/apache/carbondata/pull/1286
 
    SDV Build Fail, please check CI http://144.76.159.231:8080/job/ApacheSDVTests/665/



---
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/carbondata/pull/1286
 
    Build Success with Spark 2.1.0, please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/18/



---
Github user chenliang613 commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1286#discussion_r138534203
 
    --- Diff: integration/hive/src/test/java/org/apache/carbondata/hive/CarbonHiveRecordReaderTest.java ---
    @@ -0,0 +1,234 @@
    [... license header, imports, and test class fields unchanged, as quoted above ...]
    +    @BeforeClass
    +    public static void setUp() throws Exception {
    +        String array[] = {"neha", "01", "vaishali"};
    +        writable = new ArrayWritable(array);
    +        absoluteTableIdentifier = new AbsoluteTableIdentifier(
    +                "carbondata/examples/spark2/target/store",
    --- End diff --
   
    Can you please explain why this path "carbondata/examples/spark2/target/store" is used for the Hive test module?


---
Github user chenliang613 commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1286#discussion_r138534300
 
    --- Diff: integration/hive/src/test/java/org/apache/carbondata/hive/CarbonHiveRecordReaderTest.java ---
    @@ -0,0 +1,234 @@
    [... license header, imports, fields, and start of setUp() unchanged, as quoted above ...]
    +        absoluteTableIdentifier = new AbsoluteTableIdentifier(
    +                "carbondata/examples/spark2/target/store",
    +                new CarbonTableIdentifier("DB", "TBL", "TBLID"));
    +        ColumnarFormatVersion columnarFormatVersion = ColumnarFormatVersion.V3;
    +        Path path = new Path(
    +                "/home/store/database/Fact/Part0/Segment_0/part-0-0_batchno0-0-1502197243476.carbondata");
    --- End diff --
   
    The path "/home/store/database/Fact/Part0/Segment_0/part-0-0_batchno0-0-1502197243476.carbondata" does not exist, so why use it?


---
Github user PallaviSingh1992 commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1286#discussion_r138539875
 
    --- Diff: integration/hive/src/test/java/org/apache/carbondata/hive/CarbonHiveRecordReaderTest.java ---
    @@ -0,0 +1,234 @@
    [... license header, imports, fields, and start of setUp() unchanged, as quoted above ...]
    +        absoluteTableIdentifier = new AbsoluteTableIdentifier(
    +                "carbondata/examples/spark2/target/store",
    +                new CarbonTableIdentifier("DB", "TBL", "TBLID"));
    +        ColumnarFormatVersion columnarFormatVersion = ColumnarFormatVersion.V3;
    +        Path path = new Path(
    +                "/home/store/database/Fact/Part0/Segment_0/part-0-0_batchno0-0-1502197243476.carbondata");
    --- End diff --
   
    @chenliang613 This is just a dummy path used to create an InputSplit object. If you want, we can rename it to mockPath.
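
    For reference: org.apache.hadoop.fs.Path is a purely logical URI wrapper,
    so constructing a Path, or a split that carries one, never touches the
    filesystem; that is why a non-existent dummy path works here. A small
    illustrative sketch using Hadoop's plain FileSplit (the path, offset, and
    length are made-up values):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileSplit;

    public class DummyPathSketch {
        public static void main(String[] args) {
            // Path only parses the string into a URI; nothing is opened or
            // checked on disk at construction time.
            Path mockPath = new Path("/no/such/dir/part-0-0.carbondata");
            // A split likewise just records (path, start, length, hosts).
            FileSplit split = new FileSplit(mockPath, 0L, 235L, new String[0]);
            System.out.println(split.getPath() + " length=" + split.getLength());
        }
    }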


---
Github user PallaviSingh1992 commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1286#discussion_r138543758
 
    --- Diff: integration/hive/src/test/java/org/apache/carbondata/hive/CarbonHiveRecordReaderTest.java ---
    @@ -0,0 +1,234 @@
    [... license header, imports, fields, and start of setUp() unchanged, as quoted above ...]
    +        absoluteTableIdentifier = new AbsoluteTableIdentifier(
    +                "carbondata/examples/spark2/target/store",
    --- End diff --
   
    @chenliang613 This is also a mock identifier. I have renamed these variables to indicate that they are mocks.
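
    One common way to make that intent obvious at a glance is to hoist the
    dummy values into clearly named constants in the test class, for example
    (hypothetical field names, sketched against the constructor already used
    in this test):

    // Hypothetical fixture constants; the names are illustrative only.
    private static final String MOCK_STORE_PATH =
        "carbondata/examples/spark2/target/store";
    private static final CarbonTableIdentifier MOCK_TABLE_IDENTIFIER =
        new CarbonTableIdentifier("DB", "TBL", "TBLID");
    private static final AbsoluteTableIdentifier MOCK_ABSOLUTE_TABLE_IDENTIFIER =
        new AbsoluteTableIdentifier(MOCK_STORE_PATH, MOCK_TABLE_IDENTIFIER);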


---
Github user PallaviSingh1992 commented on the issue:

    https://github.com/apache/carbondata/pull/1286
 
    @chenliang613 Please review; I have updated the changes.


---
Github user ravipesala commented on the issue:

    https://github.com/apache/carbondata/pull/1286
 
    SDV Build Fail, please check CI http://144.76.159.231:8080/job/ApacheSDVTests/727/



---
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/carbondata/pull/1286
 
    Build Success with Spark 2.1.0, please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/127/



---
Github user ravipesala commented on the issue:

    https://github.com/apache/carbondata/pull/1286
 
    SDV Build Fail, please check CI http://144.76.159.231:8080/job/ApacheSDVTests/1201/



---
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/carbondata/pull/1286
 
    Build Failed with Spark 2.1.0, please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/570/



---
Github user PallaviSingh1992 closed the pull request at:

    https://github.com/apache/carbondata/pull/1286


---