[ https://issues.apache.org/jira/browse/CARBONDATA-4151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17306825#comment-17306825 ]
Mahesh Raju Somalaraju commented on CARBONDATA-4151:
----------------------------------------------------
Hi,
Can you please provide some more details about this issue? In particular, which operations you are performing on the CarbonData side, and the input parameters you are passing to the df.sample API.
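For example, a minimal reproduction along these lines would help (a sketch only; the table names, fraction, and seed below are placeholders, not taken from your setup):
{code:scala}
// Hypothetical reproduction sketch: read the raw carbon table, take a 5% row
// sample, write it back as a new carbon table, then compare row counts.
val raw = spark.sql("SELECT * FROM raw_table")  // "raw_table" is a placeholder name

// sample(withReplacement, fraction, seed): fraction is a per-row probability,
// so the sampled row count is only approximately 5% of the raw row count.
val sampled = raw.sample(withReplacement = false, fraction = 0.05, seed = 42L)

sampled.createOrReplaceTempView("sampled_view")
spark.sql("CREATE TABLE sampled_table STORED AS carbondata AS SELECT * FROM sampled_view")

println(s"raw rows:     ${raw.count()}")
println(s"sampled rows: ${spark.table("sampled_table").count()}")
{code}
Knowing the exact call you use, and whether the mismatch shows up in the row count or only in the on-disk size, would help us narrow down where the problem is.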
> When data sampling is done on a large data set using Spark's df.sample function, the size of the sampled table does not match the record size of the non-sampled (raw) table
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: CARBONDATA-4151
> URL: https://issues.apache.org/jira/browse/CARBONDATA-4151
> Project: CarbonData
> Issue Type: Bug
> Components: core
> Affects Versions: 2.0.1
> Environment: Apache carbondata 2.0.1, spark 2.4.5, hadoop 2.7.2
> Reporter: Amaranadh Vayyala
> Priority: Blocker
> Fix For: 2.1.0, 2.0.1
>
>
> Hi Team,
> When we perform 5% or 10% data sampling on a large dataset using Spark's df.sample, the size of the sampled table does not match the record size of the non-sampled (raw) table.
> Our raw table size is around 11 GB, so with 5% and 10% sampling the sampled table sizes should come out to roughly 550 MB and 1.1 GB respectively. However, in our case they come out to 1.5 GB and 3 GB, which is about 3 times higher than the expected numbers.
> Could you please check and help us understand where the issue is?
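> In rough numbers, the expectation versus the observation (a back-of-the-envelope check using the sizes above, assuming on-disk size scales linearly with the sampled row fraction):
> {code:scala}
> // If the sampled table size scaled linearly with the row fraction, a 5% / 10%
> // sample of an 11 GB table would come out to ~550 MB / ~1.1 GB.
> val rawSizeGb = 11.0
> println(f"5%%  sample: expected ${rawSizeGb * 0.05}%.2f GB, observed 1.5 GB")
> println(f"10%% sample: expected ${rawSizeGb * 0.10}%.2f GB, observed 3.0 GB")
> {code}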
--
This message was sent by Atlassian Jira
(v8.3.4#803005)