[
https://issues.apache.org/jira/browse/CARBONDATA-2570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16497763#comment-16497763 ]
Ajantha Bhat commented on CARBONDATA-2570:
------------------------------------------
Steps:
# Take the SDK jars and their dependent jars.
# Create an IntelliJ test project without any Spark cluster dependency.
# Create a CarbonReader on the SDK writer's output. Read the files and close the reader.
# Create a reader on another set of SDK writer output (different schema) but with the same table name. The read now fails due to a schema mismatch, because the old blocklet datamap for the same table name is still present.
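The steps above can be sketched roughly as follows. This is a Java-style pseudocode sketch, not an exact repro: the CarbonReader builder signature differs across CarbonData SDK versions, and the paths and table name here are hypothetical.

```
// Pseudocode sketch (hypothetical paths/table name; builder signature
// varies between CarbonData SDK versions).

// First read: SDK writer output at path1, table name "t1"
CarbonReader reader1 = CarbonReader.builder("/tmp/path1", "t1").build();
while (reader1.hasNext()) {
    Object[] row = (Object[]) reader1.readNextRow();
}
reader1.close();  // expected to clear the blocklet datamap for "t1"

// Second read: different SDK writer output (different schema), SAME table name
CarbonReader reader2 = CarbonReader.builder("/tmp/path2", "t1").build();
// In cluster mode this fails with a schema mismatch, because close()
// did not clear all of the old table's blocklet datamaps.
```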
> Carbon SDK Reader, second time reader instance have an issue in cluster test
> ----------------------------------------------------------------------------
>
> Key: CARBONDATA-2570
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2570
> Project: CarbonData
> Issue Type: Bug
> Reporter: Ajantha Bhat
> Assignee: Ajantha Bhat
> Priority: Major
>
> Debugged the issue: this happens only in a cluster, not in local mode.
> root cause: the old table's blocklet datamap is not cleared.
>
> solution: the API used in CarbonReader.close() for clearing the datamap does not clear all the datamaps in a cluster,
> so change
> DataMapStoreManager.getInstance().getDefaultDataMap(queryModel.getTable()).clear();
> to
> DataMapStoreManager.getInstance()
> .clearDataMaps(queryModel.getTable().getAbsoluteTableIdentifier());
>
>
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)