VenuReddy2103 opened a new pull request #3756:
URL: https://github.com/apache/carbondata/pull/3756

### Why is this PR needed?
Dynamic configuration parameters were not covered in the documentation.

### What changes were proposed in this PR?
Updated the documentation for the dynamic configuration parameters.

### Does this PR introduce any user interface change?
- No
- Yes. (please explain the change and update document)

### Is any new testcase added?
- No
- Yes
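For context, a minimal sketch of what a "dynamic configuration parameter" means here: a property that can be overridden per session with SET and cleared with RESET, rather than only through carbon.properties. This assumes a Spark session with CarbonData extensions bound to a variable named `spark`; the value chosen is arbitrary and only for illustration.

```scala
// Minimal sketch, assuming `spark` is a SparkSession with CarbonData enabled.
// The property name comes from the documentation updated in this PR; the value 4 is illustrative.
spark.sql("SET carbon.number.of.cores.while.loading=4") // session-level override
spark.sql("RESET")                                       // drop all session-level overrides
```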
CarbonDataQA1 commented on pull request #3756:
URL: https://github.com/apache/carbondata/pull/3756#issuecomment-626063977

Build Success with Spark 2.4.5, Please check CI http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1262/
CarbonDataQA1 commented on pull request #3756:
URL: https://github.com/apache/carbondata/pull/3756#issuecomment-629487745

Build Success with Spark 2.3.4, Please check CI http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3041/
CarbonDataQA1 commented on pull request #3756:
URL: https://github.com/apache/carbondata/pull/3756#issuecomment-629488144

Build Success with Spark 2.4.5, Please check CI http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1322/
ajantha-bhat commented on a change in pull request #3756:
URL: https://github.com/apache/carbondata/pull/3756#discussion_r426104559

##########
File path: integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/SetParameterTestCase.scala
##########
@@ -231,6 +231,26 @@ class SetParameterTestCase extends QueryTest with BeforeAndAfterAll {
     assert(getLogFileCount("default", "carbon_table_load", "0") >= 1)
   }

+  test("TC_013-test set dynamic properties") {
+    sql("SET carbon.number.of.cores.while.loading=8")

Review comment: You need to reset it also, else it affects the other test cases.
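The concern being raised: a SET issued inside one test changes session state for every test that runs after it. A sketch of the inline fix inside the quoted SetParameterTestCase suite, assuming the QueryTest style where sql(...) runs a statement; the test name and the first SET come from the diff above, while the assertion placeholder and the trailing RESET are illustrative.

```scala
test("TC_013-test set dynamic properties") {
  sql("SET carbon.number.of.cores.while.loading=8")
  // ... assertions on the dynamically set properties go here ...
  sql("RESET") // clear session-level overrides so later tests are unaffected
}
```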
VenuReddy2103 commented on a change in pull request #3756:
URL: https://github.com/apache/carbondata/pull/3756#discussion_r426113152

##########
File path: docs/configuration-parameters.md
##########
@@ -210,6 +210,22 @@ RESET
 | carbon.table.load.sort.scope.<db_name>.<table_name> | Overrides the SORT_SCOPE provided in CREATE TABLE. |
 | carbon.options.global.sort.partitions | Specifies the number of partitions to be used during global sort. |
 | carbon.options.serialization.null.format | Default Null value representation in the data being loaded. **NOTE:** Refer to [Data Loading Configuration](#data-loading-configuration)#carbon.options.serialization.null.format for detailed information. |
+| carbon.number.of.cores.while.loading | Specifies number of cores to be used while loading data. **NOTE:** Refer to [Data Loading Configuration](#data-loading-configuration)#carbon.number.of.cores.while.loading for detailed information. |
+| carbon.number.of.cores.while.compacting | Specifies number of cores to be used while compacting data. **NOTE:** Refer to [Data Loading Configuration](#data-loading-configuration)#carbon.number.of.cores.while.compacting for detailed information. |
+| enable.offheap.sort | To enable off-heap memory usage. **NOTE:** Refer to [Data Loading Configuration](#data-loading-configuration)#enable.offheap.sort for detailed information. |
+| carbon.blockletgroup.size.in.mb | Specifies the size of each blocklet group. **NOTE:** Refer to [Data Loading Configuration](#data-loading-configuration)#carbon.blockletgroup.size.in.mb for detailed information. |
+| carbon.enable.auto.load.merge | To enable compaction along with data loading. **NOTE:** Refer to [Compaction Configuration](#compaction-configuration)#carbon.enable.auto.load.merge for detailed information. |
+| carbon.major.compaction.size | To configure major compaction size. **NOTE:** Refer to [Compaction Configuration](#compaction-configuration)#carbon.major.compaction.size for detailed information. |
+| carbon.compaction.level.threshold | To configure compaction threshold. **NOTE:** Refer to [Compaction Configuration](#compaction-configuration)#carbon.compaction.level.threshold for detailed information. |
+| carbon.enable.vector.reader | To enable fetching data as columnar batch of size 4*1024 rows instead of fetching a row at a time. **NOTE:** Refer to [Query Configuration](#query-configuration)#carbon.enable.vector.reader for detailed information. |
+| enable.unsafe.in.query.processing | To enable use of unsafe functions while scanning the data during query. **NOTE:** Refer to [Query Configuration](#query-configuration)#enable.unsafe.in.query.processing for detailed information. |
+| carbon.push.rowfilters.for.vector | To enable complete row filters handling by carbon in case of vector. **NOTE:** Refer to [Query Configuration](#query-configuration)#carbon.push.rowfilters.for.vector for detailed information. |
+| carbon.query.stage.input.enable | To make query to include staged input files. **NOTE:** Refer to [Query Configuration](#query-configuration)#carbon.query.stage.input.enable for detailed information. |
+| carbon.enable.tablestatus.backup | To enable table status file backup. **NOTE:** Refer to [System Configuration](#system-configuration)#carbon.enable.tablestatus.backup for detailed information. |

Review comment: Modified and added a testcase now.
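A hedged sketch of how the parameters documented above behave at the session level; `spark` and the chosen values are assumptions for illustration, not taken from the PR. In Spark SQL, SET with a key and no value returns the current setting, which is a convenient way to confirm that an override took effect.

```scala
// Assumes `spark` is a SparkSession with CarbonData extensions enabled.
spark.sql("SET carbon.enable.auto.load.merge=true")    // enable auto compaction for this session
spark.sql("SET carbon.compaction.level.threshold=6,6") // illustrative compaction threshold
spark.sql("SET carbon.enable.auto.load.merge").show()  // SET <key> with no value prints the current setting
spark.sql("RESET")                                     // drop the session-level overrides
```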
CarbonDataQA1 commented on pull request #3756:
URL: https://github.com/apache/carbondata/pull/3756#issuecomment-629609525

Build Success with Spark 2.3.4, Please check CI http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3045/
CarbonDataQA1 commented on pull request #3756:
URL: https://github.com/apache/carbondata/pull/3756#issuecomment-629609609

Build Success with Spark 2.4.5, Please check CI http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1326/
VenuReddy2103 commented on a change in pull request #3756:
URL: https://github.com/apache/carbondata/pull/3756#discussion_r426132913

##########
File path: docs/configuration-parameters.md
##########
(same hunk as quoted in the review comment above)

Review comment: Modified and added a testcase.
VenuReddy2103 commented on a change in pull request #3756:
URL: https://github.com/apache/carbondata/pull/3756#discussion_r426132929

##########
File path: integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/SetParameterTestCase.scala
##########
(same hunk as quoted in the earlier review comment on this file)

Review comment: Modified.
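An alternative to resetting inside each test is to clear the overrides once for the whole suite, since the quoted class declaration already mixes in BeforeAndAfterAll. This is only a sketch of that pattern, not the actual change made in the PR; the override body is an assumption.

```scala
// Fragment of the existing SetParameterTestCase suite (which extends QueryTest
// with BeforeAndAfterAll, per the quoted hunk header). Sketch only.
override def afterAll(): Unit = {
  sql("RESET")     // clear any session-level SET overrides left behind by individual tests
  super.afterAll() // keep the suite's normal teardown
}
```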
QiangCai commented on pull request #3756:
URL: https://github.com/apache/carbondata/pull/3756#issuecomment-629720708

LGTM
asfgit closed pull request #3756:
URL: https://github.com/apache/carbondata/pull/3756