[jira] [Updated] (CARBONDATA-3096) Wrong records size on the input metrics & Free the intermediate page used while adaptive encoding



Akash R Nilugal (Jira)

     [ https://issues.apache.org/jira/browse/CARBONDATA-3096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

dhatchayani updated CARBONDATA-3096:
------------------------------------
    Description:
(1) The scanned record result size is taken from the default batch size. It should be taken from the number of records actually scanned.

Steps to reproduce:

spark.sql("DROP TABLE IF EXISTS person")
spark.sql("create table person (id int, name string) stored by 'carbondata'")
spark.sql("insert into person select 1,'a'")
spark.sql("select * from person").show(false)

!3096.PNG!
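The screenshot above shows the symptom: one row is returned, yet result_size reports the batch capacity. A minimal Scala sketch of the distinction (the names `ResultBatch` and `ScanMetrics` are illustrative, not CarbonData's actual API):

```scala
// A scan result batch has a fixed capacity (the default batch size) but may
// hold fewer valid rows; metrics must report the filled count, not the capacity.
case class ResultBatch(capacity: Int, filledRows: Int)

object ScanMetrics {
  // Buggy variant: reports the batch capacity.
  def recordSizeFromCapacity(b: ResultBatch): Int = b.capacity
  // Fixed variant: reports the rows actually scanned into the batch.
  def recordSizeFromFilled(b: ResultBatch): Int = b.filledRows
}

object Demo extends App {
  val batch = ResultBatch(capacity = 64000, filledRows = 1) // one inserted row
  println(ScanMetrics.recordSizeFromCapacity(batch)) // over-reports: 64000
  println(ScanMetrics.recordSizeFromFilled(batch))   // correct: 1
}
```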

 

(2) The intermediate page used to sort in adaptive encoding should be freed.
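For point (2), the usual shape of such a fix is to release the temporary page in a finally block so it is freed even when encoding throws. A hedged Scala sketch under that assumption (`IntermediatePage` and `encodeWithSort` are hypothetical names, not CarbonData's API):

```scala
// Hypothetical stand-in for a temporary page allocated for sorting
// during adaptive encoding; in the real system free() would release
// off-heap memory.
class IntermediatePage {
  var freed = false
  def free(): Unit = { freed = true }
}

// Run the encoding step, then always free the intermediate page,
// whether encoding succeeded or threw.
def encodeWithSort(encode: IntermediatePage => Unit): IntermediatePage = {
  val page = new IntermediatePage
  try encode(page)
  finally page.free()
  page
}
```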

  was:
(1) The scanned record result size is taken from the default batch size. It should be taken from the number of records actually scanned.

Steps to reproduce:

spark.sql("DROP TABLE IF EXISTS person")
spark.sql("create table person (id int, name string) stored by 'carbondata'")
spark.sql("insert into person select 1,'a'")
spark.sql("select * from person").show(false)

 

query_id:                29127036821854
task_id:                 0
start_time:              2018-11-16 20:22:56.573
total_time:              1430ms
load_blocks_time:        100ms
load_dictionary_time:    0ms
carbon_scan_time:        13
carbon_IO_time:          102
scan_blocks_num:         1
total_blocklets:         1
valid_blocklets:         1
total_pages:             1
scanned_pages:           0
valid_pages:             1
+*result_size*+:         +*64000*+
key_column_filling_time: 0
measure_filling_time:    0
page_uncompress_time:    927
result_preparation_time: 0

(2) The intermediate page used to sort in adaptive encoding should be freed.


> Wrong records size on the input metrics & Free the intermediate page used while adaptive encoding
> -------------------------------------------------------------------------------------------------
>
>                 Key: CARBONDATA-3096
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-3096
>             Project: CarbonData
>          Issue Type: Bug
>            Reporter: dhatchayani
>            Assignee: dhatchayani
>            Priority: Minor
>         Attachments: 3096.PNG
>
>          Time Spent: 9h 20m
>  Remaining Estimate: 0h
>
> (1) The scanned record result size is taken from the default batch size. It should be taken from the number of records actually scanned.
> Steps to reproduce:
> spark.sql("DROP TABLE IF EXISTS person")
> spark.sql("create table person (id int, name string) stored by 'carbondata'")
> spark.sql("insert into person select 1,'a'")
> spark.sql("select * from person").show(false)
> !3096.PNG!
>  
> (2) The intermediate page used to sort in adaptive encoding should be freed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)