Hi,
You can reduce the block size by setting the table_blocksize property in
the table properties when you create the table.
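For example, a minimal sketch (the table name and columns here are
hypothetical; table_blocksize is given in MB, so a smaller value produces
more, smaller blocks):

```sql
-- Hypothetical table; table_blocksize is specified in MB.
-- A smaller block size means the loaded data is split across more
-- block (.carbondata) files instead of one large file.
CREATE TABLE IF NOT EXISTS sales (
  id INT,
  product STRING,
  amount DOUBLE
)
STORED BY 'carbondata'
TBLPROPERTIES ('table_blocksize'='128')
```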
-Regards
Kumar Vishal
On Tue, Jan 17, 2017 at 11:50 AM, ffpeng90 <
[hidden email]> wrote:
> Hi, all:
> I have loaded 10 million records in CarbonData format with the
> carbon-spark plugin. I have several questions:
>
> Q1: I see there is only one XXX.carbondata file, with 92 blocklets in
> this block file. How can I split these blocklets into several blocks
> when generating the file? Is there a config property for this?
>
> Q2: There are always Segment_0 and Part0 in a table. How can I improve
> read concurrency? Are there any guidelines?
>
> --
> View this message in context:
> http://apache-carbondata-mailing-list-archive.1130556.n5.nabble.com/Logic-about-file-storage-tp6458.html
> Sent from the Apache CarbonData mailing list archive at Nabble.com.
>