[ https://issues.apache.org/jira/browse/CARBONDATA-1624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Zhichao Zhang reassigned CARBONDATA-1624:
------------------------------------------
Assignee: Zhichao Zhang
> If SORT_SCOPE is non-GLOBAL_SORT with Spark, set 'carbon.number.of.cores.while.loading' dynamically as per the available executor cores
> ----------------------------------------------------------------------------------------------------------------------------------------
>
> Key: CARBONDATA-1624
> URL:
https://issues.apache.org/jira/browse/CARBONDATA-1624
>             Project: CarbonData
> Issue Type: Improvement
> Components: data-load, spark-integration
> Affects Versions: 1.3.0
> Reporter: Zhichao Zhang
> Assignee: Zhichao Zhang
> Priority: Minor
>
> When loading data with CarbonData on Spark, we can set
> carbon.number.of.cores.while.loading to the number of executor cores.
> For example, if the number of executor cores is set to 6, each executor node
> has at least 6 cores available for loading, so
> carbon.number.of.cores.while.loading can be set to 6 automatically.
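> A minimal sketch of the idea (illustrative only, not the actual CarbonData implementation; the SparkSession lookup and the fallback value of 1 are assumptions for this example):
> {code:scala}
> import org.apache.spark.sql.SparkSession
> import org.apache.carbondata.core.util.CarbonProperties
>
> val spark = SparkSession.builder()
>   .appName("CarbonLoadCoresExample")
>   .getOrCreate()
>
> // "spark.executor.cores" may be unset (e.g. local mode); assume a fallback of 1.
> val executorCores = spark.conf.get("spark.executor.cores", "1")
>
> // Mirror the executor cores into the CarbonData loading property so each
> // node can use all of its executor cores while loading (sketch only).
> CarbonProperties.getInstance()
>   .addProperty("carbon.number.of.cores.while.loading", executorCores)
> {code}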