VenuReddy2103 commented on a change in pull request #3385: [CARBONDATA-3526]Fix cache issue during update and query
URL: https://github.com/apache/carbondata/pull/3385#discussion_r327950535
##########
File path: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/cache/CarbonDropCacheCommand.scala
##########
@@ -55,13 +55,12 @@ case class CarbonDropCacheCommand(tableIdentifier: TableIdentifier, internalCall
carbonTable.getTableName)) {
DataMapUtil.executeClearDataMapJob(carbonTable, DataMapUtil.DISTRIBUTED_JOB_NAME)
} else {
- val allIndexFiles = CacheUtil.getAllIndexFiles(carbonTable)(sparkSession)
// Extract dictionary keys for the table and create cache keys from those
val dictKeys: List[String] = CacheUtil.getAllDictCacheKeys(carbonTable)
-
// Remove elements from cache
- val keysToRemove = allIndexFiles ++ dictKeys
- cache.removeAll(keysToRemove.asJava)
+ cache.removeAll(dictKeys.asJava)
+ DataMapStoreManager.getInstance()
Review comment:
nit: the `clearDataMaps` method has another variant that takes just an `AbsoluteTableIdentifier` as argument. For consistency, we could use that variant wherever the second argument (`launchJob`) is true, like below:
`DataMapStoreManager.getInstance().clearDataMaps(carbonTable.getAbsoluteTableIdentifier)`
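To illustrate the reviewer's point, here is a minimal, hypothetical sketch of the overload-delegation pattern being suggested: a single-argument variant that forwards to the two-argument one with `launchJob` fixed to `true`, so call sites that always launch the job can use the shorter form. The class and method names mirror CarbonData's `DataMapStoreManager`, but this is an assumption-laden simplification, not the actual implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified stand-in for CarbonData's DataMapStoreManager.
public class DataMapStoreManagerSketch {
    private static final DataMapStoreManagerSketch INSTANCE = new DataMapStoreManagerSketch();

    // Records each call so the delegation can be observed.
    final List<String> calls = new ArrayList<>();

    public static DataMapStoreManagerSketch getInstance() {
        return INSTANCE;
    }

    // Two-argument variant: the caller chooses whether to launch a clear job.
    public void clearDataMaps(String tableIdentifier, boolean launchJob) {
        calls.add(tableIdentifier + ":" + launchJob);
    }

    // Single-argument convenience variant: always launches the job, so call
    // sites that would pass launchJob = true can use this shorter form.
    public void clearDataMaps(String tableIdentifier) {
        clearDataMaps(tableIdentifier, true);
    }
}
```

With this shape, `getInstance().clearDataMaps(id)` behaves identically to `getInstance().clearDataMaps(id, true)`, which is the consistency the review comment is asking for.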
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[hidden email]
With regards,
Apache Git Services