Hi,all:
Recently, I load a carbon table in hive via carbon-spark plugin. I see there is nothing in hive, and all data is stored in a folder named "storePath".
scala code following:

Q1: Does it mean that carbon-spark plugin just create a external table in hive and raw data can be stored anywhere? I have checked the hdfs path, there is only a table directory and nothing under the table directory.
Q2: If i want to build a independent reader for carbondata table, should i read data from hive, or just parse files in the "storePath"?
Q3: I check the files under "storePath", they are not sotred as hdfs format, but common files in linux. Do i get the point?
Q4: I have finished brief read logic for my independent reader, all input path is local.
Test1:[Carbondata-hadoop->Target->store->testdb->testtable] which contains 1K rows generated by testcase, and my code can extract data successfully.
Testcase2: However, i try to parse the data generated by carbon-spark plugin which contains 100W rows, It throws exception @BlockIndexStore.fillLoadedBlocks()

Appreciate for your regard.