[jira] [Created] (CARBONDATA-4151) When data sampling is done on large data set using Spark's df.sample function - the size of sampled table is not matching with record size of non sampled (Raw Table)


Akash R Nilugal (Jira)
Amaranadh Vayyala created CARBONDATA-4151:
---------------------------------------------

             Summary: When data sampling is done on large data set using Spark's df.sample function - the size of sampled table is not matching with record size of non sampled (Raw Table)
                 Key: CARBONDATA-4151
                 URL: https://issues.apache.org/jira/browse/CARBONDATA-4151
             Project: CarbonData
          Issue Type: Bug
          Components: core
    Affects Versions: 2.0.1
         Environment: Apache carbondata 2.0.1, spark 2.4.5, hadoop 2.7.2
            Reporter: Amaranadh Vayyala
             Fix For: 2.0.1, 2.1.0


Hi Team,

When we perform 5% or 10% data sampling on a large dataset using Spark's df.sample, the size of the sampled table does not match the expected proportion of the non-sampled (raw) table.

Our raw table size is around 11 GB, so with 5% and 10% sampling the sampled tables should come out at roughly 550 MB and 1.1 GB. However, in our case they come out at 1.5 GB and 3 GB, which is roughly 3 times higher than expected.
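For reference, the expected sizes above follow from simple proportion, assuming the sampled table's size scales linearly with the sampling fraction passed to df.sample (a sketch of the reasoning only; the concrete sizes are the ones reported above):

```python
# Expected vs. observed sampled-table sizes, assuming on-disk size
# scales linearly with the fraction passed to Spark's df.sample.
raw_size_gb = 11.0  # raw (non-sampled) CarbonData table size

for fraction, observed_gb in [(0.05, 1.5), (0.10, 3.0)]:
    expected_gb = raw_size_gb * fraction   # 0.55 GB and 1.1 GB
    ratio = observed_gb / expected_gb      # ~2.7x in both cases
    print(f"{fraction:.0%}: expected {expected_gb:.2f} GB, "
          f"observed {observed_gb:.1f} GB ({ratio:.1f}x larger)")
```

Both samples are inflated by the same factor (~2.7x), which is why the report describes the result as roughly 3 times higher than expected.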

Could you please check and help us understand where the issue is?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)