vikramahuja1001 opened a new pull request #3565: Changes to show metacache command
URL: https://github.com/apache/carbondata/pull/3565

### Why is this PR needed?

### What changes were proposed in this PR?

### Does this PR introduce any user interface change?
- No
- Yes. (please explain the change and update document)

### Is any new testcase added?
- No
- Yes
CarbonDataQA1 commented on issue #3565: Changes to show metacache command
URL: https://github.com/apache/carbondata/pull/3565#issuecomment-571926177

Build failed with Spark 2.3.4. Please check CI: http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/1524/
vikramahuja1001 commented on issue #3565: Changes to show metacache command
URL: https://github.com/apache/carbondata/pull/3565#issuecomment-572517130

retest this please
CarbonDataQA1 commented on issue #3565: Changes to show metacache command
URL: https://github.com/apache/carbondata/pull/3565#issuecomment-572546261

Build succeeded with Spark 2.3.4. Please check CI: http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/1567/
vikramahuja1001 commented on issue #3565: Changes to show metacache command
URL: https://github.com/apache/carbondata/pull/3565#issuecomment-572920939

@kunal642, @akashrn5, please review.
kunal642 commented on a change in pull request #3565: [CARBONDATA-3662]: Changes to show metacache command
URL: https://github.com/apache/carbondata/pull/3565#discussion_r365130483

File path: integration/spark2/src/main/scala/org/apache/carbondata/indexserver/DistributedShowCacheRDD.scala

@@ -71,10 +75,21 @@ class DistributedShowCacheRDD(@transient private val ss: SparkSession, tableUniq
             .getTableUniqueName
         } else {
           dataMap.getDataMapSchema.getRelationIdentifier.getDatabaseName + "_" + dataMap
-            .getDataMapSchema.getDataMapName
+            .getDataMapSchema.getDataMapName
+        }
+        if (executorCache == true) {

Review comment: "== true" is not required.
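A small, self-contained illustration of the style point (the helper name is made up for the example; only the variable name comes from the diff above):

```scala
// A Boolean can be tested directly; comparing it against a literal is redundant.
def describeCacheSide(executorCache: Boolean): String = {
  // Redundant form: if (executorCache == true) "executor cache" else "driver cache"
  if (executorCache) "executor cache" else "driver cache"
}
```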
kunal642 commented on a change in pull request #3565: [CARBONDATA-3662]: Changes to show metacache command
URL: https://github.com/apache/carbondata/pull/3565#discussion_r365130708

File path: integration/spark2/src/main/scala/org/apache/carbondata/indexserver/IndexServer.scala

@@ -57,7 +57,7 @@ trait ServerInterface {
   /**
    * Get the cache size for the specified tables.
    */
-  def showCache(tableIds: String) : Array[String]
+  def showCache(executorCache: Boolean, tableIds: String) : Array[String]

Review comment: Add the new parameter to the end of the parameter list.
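A sketch of the signature with the flag moved to the end, as suggested; whether it should carry a default value is an assumption, not something the review states:

```scala
trait ServerInterface {
  /**
   * Get the cache size for the specified tables.
   * The executorCache flag sits at the end of the parameter list, with a
   * default so existing callers keep compiling (the default is an assumption).
   */
  def showCache(tableIds: String, executorCache: Boolean = false): Array[String]
}
```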
kunal642 commented on a change in pull request #3565: [CARBONDATA-3662]: Changes to show metacache command
URL: https://github.com/apache/carbondata/pull/3565#discussion_r365130789

File path: integration/spark2/src/main/scala/org/apache/carbondata/indexserver/IndexServer.scala

@@ -205,14 +205,18 @@ object IndexServer extends ServerInterface {
     }
   }
 
-  override def showCache(tableId: String = ""): Array[String] = doAs {
+  override def showCache(executorCache: Boolean, tableId: String = ""): Array[String] = doAs {
     val jobgroup: String = "Show Cache " + (tableId match {
-      case "" => "for all tables"
+      case "" =>
+        executorCache match {

Review comment: Replace the match on the Boolean with an if/else.
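A sketch of the reviewer's suggestion: keep the match on the table id, but branch on the Boolean with a plain if/else. The exact message strings are placeholders, not taken from the PR:

```scala
// Build the job-group description without pattern-matching on a Boolean.
def jobGroupFor(tableId: String, executorCache: Boolean): String = {
  "Show Cache " + (tableId match {
    case "" =>
      if (executorCache) "for all tables (executor cache)" else "for all tables"
    case table => s"for $table"
  })
}
```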
kunal642 commented on a change in pull request #3565: [CARBONDATA-3662]: Changes to show metacache command
URL: https://github.com/apache/carbondata/pull/3565#discussion_r365130999

File path: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/cache/CarbonShowCacheCommand.scala

@@ -74,7 +92,15 @@ case class CarbonShowCacheCommand(tableIdentifier: Option[TableIdentifier],
       /**
        * Assemble result for database
        */
-      getAllTablesCache(sparkSession)
+      if (showExecutorCache == false) {

Review comment: "== false" is not required; use !showExecutorCache. Change this in all the other places as well.
kunal642 commented on a change in pull request #3565: [CARBONDATA-3662]: Changes to show metacache command
URL: https://github.com/apache/carbondata/pull/3565#discussion_r365133227

File path: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/cache/CarbonShowCacheCommand.scala

@@ -41,25 +42,42 @@ import org.apache.carbondata.spark.util.CarbonScalaUtil
 import org.apache.carbondata.spark.util.CommonUtil.bytesToDisplaySize
 
-case class CarbonShowCacheCommand(tableIdentifier: Option[TableIdentifier],
+case class CarbonShowCacheCommand(showExecutorCache: Boolean,
+    tableIdentifier: Option[TableIdentifier],
     internalCall: Boolean = false)
   extends MetadataCommand {
 
   private lazy val cacheResult: Seq[(String, Int, Long, String)] = {
-    executeJobToGetCache(List())
+    executeJobToGetCache(showExecutorCache, List())
  }
 
   private val LOGGER = LogServiceFactory.getLogService(classOf[CarbonShowCacheCommand].getName)
 
   override def output: Seq[AttributeReference] = {
     if (tableIdentifier.isEmpty) {
-      Seq(
-        AttributeReference("Database", StringType, nullable = false)(),
-        AttributeReference("Table", StringType, nullable = false)(),
-        AttributeReference("Index size", StringType, nullable = false)(),
-        AttributeReference("Datamap size", StringType, nullable = false)(),
-        AttributeReference("Dictionary size", StringType, nullable = false)(),
-        AttributeReference("Cache Location", StringType, nullable = false)())
+      val isDistributedPruningEnabled = CarbonProperties.getInstance()
+        .isDistributedPruningEnabled("", "")
+      if (showExecutorCache == false) {
+        Seq(
+          AttributeReference("Database and Table", StringType, nullable = false)(),
+          AttributeReference("Index size", StringType, nullable = false)(),
+          AttributeReference("Datamap size", StringType, nullable = false)(),
+          AttributeReference("Dictionary size", StringType, nullable = false)(),
+          AttributeReference("Cache Location", StringType, nullable = false)())
+      } else {
+        if (!isDistributedPruningEnabled) {
+          Seq(
+            AttributeReference("Database and Table", StringType, nullable = false)(),

Review comment: Change "Database and Table" to "Identifier".
kunal642 commented on a change in pull request #3565: [CARBONDATA-3662]: Changes to show metacache command
URL: https://github.com/apache/carbondata/pull/3565#discussion_r365136701

File path: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/cache/CarbonShowCacheCommand.scala

(Same hunk as in the previous comment, quoted up to the line `if (showExecutorCache == false) {`.)

Review comment: 1. Simplify the if/else block. 2. Block the command when the index server is disabled (indexserver=false).
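A sketch of one way to read both points: pick the output schema with a single if/else chain and reject the executor variant when distributed pruning (the index server) is disabled. The executor-side column names, the use of MalformedCarbonCommandException, and the error message are all assumptions, not the PR's final behaviour:

```scala
import org.apache.carbondata.common.exceptions.sql.MalformedCarbonCommandException
import org.apache.carbondata.core.util.CarbonProperties
import org.apache.spark.sql.catalyst.expressions.AttributeReference
import org.apache.spark.sql.types.StringType

// Output schema for the "all tables" case, with the executor variant blocked
// when the index server is not enabled.
def outputForAllTables(showExecutorCache: Boolean): Seq[AttributeReference] = {
  val isDistributedPruningEnabled =
    CarbonProperties.getInstance().isDistributedPruningEnabled("", "")
  if (!showExecutorCache) {
    Seq(
      AttributeReference("Identifier", StringType, nullable = false)(),
      AttributeReference("Index size", StringType, nullable = false)(),
      AttributeReference("Datamap size", StringType, nullable = false)(),
      AttributeReference("Dictionary size", StringType, nullable = false)(),
      AttributeReference("Cache Location", StringType, nullable = false)())
  } else if (isDistributedPruningEnabled) {
    Seq(
      AttributeReference("Executor ID", StringType, nullable = false)(),
      AttributeReference("Index size", StringType, nullable = false)())
  } else {
    throw new MalformedCarbonCommandException(
      "SHOW EXECUTOR METACACHE is supported only when the distributed index server is enabled")
  }
}
```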
kunal642 commented on a change in pull request #3565: [CARBONDATA-3662]: Changes to show metacache command
URL: https://github.com/apache/carbondata/pull/3565#discussion_r365138382

File path: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/cache/CarbonShowCacheCommand.scala

@@ -110,13 +137,42 @@ case class CarbonShowCacheCommand(tableIdentifier: Option[TableIdentifier],
     }
   }
 
+  def getAllExecutorCache(sparkSession: SparkSession): Seq[Row] = {
+    val isDistributedPruningEnabled = CarbonProperties.getInstance()
+      .isDistributedPruningEnabled("", "")
+    if (!isDistributedPruningEnabled) {
+      getAllTablesCache(sparkSession)
+    }
+    else {

Review comment: Fix the indentation and brace style (e.g. `else` on the same line as the closing brace) throughout the PR.
kunal642 commented on a change in pull request #3565: [CARBONDATA-3662]: Changes to show metacache command
URL: https://github.com/apache/carbondata/pull/3565#discussion_r365139875

File path: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/cache/CarbonShowCacheCommand.scala

(Same hunk as in the previous comment, continued from `else {`:)

+      // get all the executor details from the index server
+      try {
+        val executorCacheValue = executeJobToGetCache(showExecutorCache, List())
+        val result = executorCacheValue.flatMap {
+          iterator =>
+            Seq(Row(iterator._1, bytesToDisplaySize(iterator._3)))
+        }
+        result
+      }
+      catch {
+        case ex: Exception =>
+          LOGGER.error("Error while getting cache from the Index Server", ex)
+          Seq()
+      }
+    }
+  }
+
   def getAllTablesCache(sparkSession: SparkSession): Seq[Row] = {
     val currentDatabase = sparkSession.sessionState.catalog.getCurrentDatabase
     val cache = CacheProvider.getInstance().getCarbonCache
     val isDistributedPruningEnabled = CarbonProperties.getInstance()
       .isDistributedPruningEnabled("", "")
-    if (cache == null && !isDistributedPruningEnabled) {
-      return makeEmptyCacheRows(currentDatabase)
+    if (!isDistributedPruningEnabled) {
+      if (cache == null) {
+        return makeEmptyCacheRows(currentDatabase, "DRIVER")
+      }
+      if (cache.getCurrentSize == 0) {

Review comment: Combine the two checks: if (cache == null || cache.getCurrentSize == 0) {
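A sketch of the combined guard inside getAllTablesCache; the names come from the hunk above, and the assumption is that a missing cache object and an empty cache should be reported the same way (the body of the original second branch is not visible in the quote):

```scala
// Treat "no cache object" and "cache present but empty" identically on the
// driver side when the index server is not in use.
if (!isDistributedPruningEnabled) {
  if (cache == null || cache.getCurrentSize == 0) {
    return makeEmptyCacheRows(currentDatabase, "DRIVER")
  }
  // ... continue with the populated driver cache ...
}
```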
kunal642 commented on a change in pull request #3565: [CARBONDATA-3662]: Changes to show metacache command
URL: https://github.com/apache/carbondata/pull/3565#discussion_r365141311

File path: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/cache/CarbonShowCacheCommand.scala

@@ -127,6 +183,8 @@ case class CarbonShowCacheCommand(tableIdentifier: Option[TableIdentifier],
           carbonTables += carbonTable
         }
       } catch {
+        case ex: AnalysisException =>
+          LOGGER.info("Unable to access Carbon table object for table" + tableIdent.table)

Review comment: Log this at debug level instead of info.
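A sketch of the adjusted catch case (it also adds the space before the table name that the diff is missing), assuming LOGGER is the logger returned by LogServiceFactory as in the surrounding class:

```scala
case ex: AnalysisException =>
  // Table is not accessible as a Carbon table; skip it quietly at debug level.
  LOGGER.debug("Unable to access Carbon table object for table " + tableIdent.table)
```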
kunal642 commented on a change in pull request #3565: [CARBONDATA-3662]: Changes to show metacache command
URL: https://github.com/apache/carbondata/pull/3565#discussion_r365144756

File path: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/cache/CarbonShowCacheCommand.scala

@@ -171,39 +252,33 @@ case class CarbonShowCacheCommand(tableIdentifier: Option[TableIdentifier],
         .toList)
       if (driverRows.nonEmpty) {
         (Seq(
-          Row("ALL", "ALL", driverIndexSize, driverDatamapSize, allDictSize, "DRIVER"),
-          Row(currentDatabase,
-            "ALL",
-            driverdbIndexSize,
-            driverdbDatamapSize,
-            driverdbDictSize,
-            "DRIVER")
+          Row("TOTAL", driverIndexSize, driverDatamapSize, allDictSize, "DRIVER")
         ) ++ driverRows).collect {
-          case row if row.getLong(2) != 0L || row.getLong(3) != 0L || row.getLong(4) != 0L =>
-            Row(row(0), row(1), bytesToDisplaySize(row.getLong(2)),
-              bytesToDisplaySize(row.getLong(3)), bytesToDisplaySize(row.getLong(4)), "DRIVER")
+          case row if row.getLong(1) != 0L || row.getLong(2) != 0L || row.getLong(3) != 0L =>
+            Row(row(0), bytesToDisplaySize(row.getLong(1)),
+              bytesToDisplaySize(row.getLong(2)), bytesToDisplaySize(row.getLong(3)), "DRIVER")
         }
       } else {
-        makeEmptyCacheRows(currentDatabase)
+        makeEmptyCacheRows(currentDatabase, "DRIVER")
       }
     } else {
-      makeEmptyCacheRows(currentDatabase)
+      makeEmptyCacheRows(currentDatabase, "DRIVER")
     }
-    // val (serverIndexSize, serverDataMapSize) = getAllIndexServerCacheSize
-    val indexDisplayRows = if (indexServerRows.nonEmpty) {
-      (Seq(
-        Row("ALL", "ALL", indexAllIndexSize, indexAllDatamapSize, indexAllDictSize, "INDEX SERVER"),
-        Row(currentDatabase,
-          "ALL",
-          indexdbIndexSize,
-          indexdbDatamapSize,
-          driverdbDictSize,
-          "INDEX SERVER")
-      ) ++ indexServerRows).collect {
-        case row if row.getLong(2) != 0L || row.getLong(3) != 0L || row.getLong(4) != 0L =>
-          Row(row.get(0), row.get(1), bytesToDisplaySize(row.getLong(2)),
-            bytesToDisplaySize(row.getLong(3)), bytesToDisplaySize(row.getLong(4)), "INDEX SERVER")
+    val indexDisplayRows = if (isDistributedPruningEnabled) {
+      if (indexServerRows.nonEmpty) {

Review comment: Revert the unnecessary changes.
kunal642 commented on a change in pull request #3565: [CARBONDATA-3662]: Changes to show metacache command
URL: https://github.com/apache/carbondata/pull/3565#discussion_r365146606

File path: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/cache/CarbonShowCacheCommand.scala

(Same hunk as in the previous comment, continued:)

+        (Seq(
+          Row("TOTAL", indexAllIndexSize, indexAllDatamapSize, indexAllDictSize, "INDEX SERVER")
+        ) ++ indexServerRows).collect {
+          case row if row.getLong(1) != 0L || row.getLong(2) != 0L || row.getLong(3) != 0L =>
+            Row(row.get(0),
+              bytesToDisplaySize(row.getLong(1)),
+              bytesToDisplaySize(row.getLong(2)),
+              bytesToDisplaySize(row.getLong(3)),
+              "INDEX SERVER")
+        }
+      } else {
+        makeEmptyCacheRows(currentDatabase, "INDEXSERVER")

Review comment: If there is nothing in the cache, there is no need to display anything. We can remove this method itself and simplify the conditions.
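A sketch of the simplified branch, reusing the names from the hunk above: rows are produced only when the index server actually reported cached entries, and an empty Seq replaces the placeholder rows. This is one possible reading of the comment, not the PR's final code:

```scala
// Show index-server rows only when something is cached; otherwise show nothing.
val indexDisplayRows = if (isDistributedPruningEnabled && indexServerRows.nonEmpty) {
  (Seq(
    Row("TOTAL", indexAllIndexSize, indexAllDatamapSize, indexAllDictSize, "INDEX SERVER")
  ) ++ indexServerRows).collect {
    case row if row.getLong(1) != 0L || row.getLong(2) != 0L || row.getLong(3) != 0L =>
      Row(row.get(0),
        bytesToDisplaySize(row.getLong(1)),
        bytesToDisplaySize(row.getLong(2)),
        bytesToDisplaySize(row.getLong(3)),
        "INDEX SERVER")
  }
} else {
  Seq.empty[Row]
}
```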
kunal642 commented on a change in pull request #3565: [CARBONDATA-3662]: Changes to show metacache command
URL: https://github.com/apache/carbondata/pull/3565#discussion_r365146807

File path: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/cache/CarbonShowCacheCommand.scala

@@ -135,23 +193,46 @@ case class CarbonShowCacheCommand(tableIdentifier: Option[TableIdentifier],
       carbonTables.flatMap {
         mainTable =>
           try {
-            makeRows(getTableCacheFromIndexServer(mainTable)(sparkSession), mainTable)
+            val row = makeRows(getTableCacheFromIndexServer(showExecutorCache,
+              mainTable)(sparkSession), mainTable)
+            var res: List[Any] = null
+            for (i <- row.toList) {
+              res = i.toSeq.toList
+            }
+            if (res(1) == 0 && res(2) == 0 && res(3) == 0) {
+              Seq()
+            } else {
+              row
+            }
           } catch {
             case ex: UnsupportedOperationException => Seq()
           }
       }
-    } else { Seq() }
+    } else {
+      Seq()
+    }
     val driverRows = if (cache != null) {
       carbonTables.flatMap {
         carbonTable =>
           try {
-            makeRows(getTableCacheFromDriver(sparkSession, carbonTable), carbonTable)
+            val row = makeRows(getTableCacheFromDriver(sparkSession, carbonTable), carbonTable)
+            var res: List[Any] = null
+            for (i <- row.toList) {
+              res = i.toSeq.toList
+            }
+            if (res(1) == 0 && res(2) == 0 && res(3) == 0) {

Review comment: This check can be done inside the makeRows method.
kunal642 commented on a change in pull request #3565: [CARBONDATA-3662]: Changes to show metacache command
URL: https://github.com/apache/carbondata/pull/3565#discussion_r365147091

File path: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/cache/CarbonShowCacheCommand.scala

(Same hunk as in the previous comment, at the line `res = i.toSeq.toList`.)

Review comment: No need to convert to a List; the same can be done on the Seq directly.
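A sketch that addresses both of the comments above: drop the mutable List conversion and filter the all-zero rows directly on the Seq that makeRows returns (equivalently, the same filter could live inside makeRows itself). Column positions 1-3 holding the three sizes are taken from the surrounding code:

```scala
// Keep only tables whose index, datamap or dictionary size is non-zero,
// working on the Seq[Row] directly; no intermediate List or var needed.
val driverRows = if (cache != null) {
  carbonTables.flatMap { carbonTable =>
    try {
      makeRows(getTableCacheFromDriver(sparkSession, carbonTable), carbonTable)
        .filter { row =>
          row.getLong(1) != 0L || row.getLong(2) != 0L || row.getLong(3) != 0L
        }
    } catch {
      case _: UnsupportedOperationException => Seq()
    }
  }
} else {
  Seq()
}
```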
kunal642 commented on a change in pull request #3565: [CARBONDATA-3662]: Changes to show metacache command
URL: https://github.com/apache/carbondata/pull/3565#discussion_r365149547

File path: integration/spark2/src/main/scala/org/apache/spark/sql/parser/CarbonSpark2SqlParser.scala

@@ -562,7 +563,13 @@ class CarbonSpark2SqlParser extends CarbonDDLSqlParser {
   protected lazy val showCache: Parser[LogicalPlan] =
     SHOW ~> METACACHE ~> opt(ontable) <~ opt(";") ^^ {
       case table =>
-        CarbonShowCacheCommand(table)
+        CarbonShowCacheCommand(false, table)
+    }
+
+  protected lazy val showExecutorCache: Parser[LogicalPlan] =
+    SHOW ~> EXECUTOR ~> METACACHE ~> opt(ontable) <~ opt(";") ^^ {

Review comment: Combine the showCache and showExecutorCache parsers into one, along the lines of (SHOW ~> opt(EXECUTOR) <~ METACACHE) ~ opt(ontable).
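A sketch of the combined rule; the keyword tokens and the CarbonShowCacheCommand argument order follow the diff above, but treat the exact grammar as illustrative rather than the merged PR's code:

```scala
// One parser handles both SHOW METACACHE and SHOW EXECUTOR METACACHE.
protected lazy val showCache: Parser[LogicalPlan] =
  (SHOW ~> opt(EXECUTOR) <~ METACACHE) ~ opt(ontable) <~ opt(";") ^^ {
    case executor ~ table =>
      // executor.isDefined is true only when the EXECUTOR keyword was present.
      CarbonShowCacheCommand(executor.isDefined, table)
  }
```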
CarbonDataQA1 commented on issue #3565: [CARBONDATA-3662]: Changes to show metacache command
URL: https://github.com/apache/carbondata/pull/3565#issuecomment-572986951

Build failed with Spark 2.3.4. Please check CI: http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/1583/