[jira] [Updated] (CARBONDATA-3698) Drop table will iterate all databases and tables

Akash R Nilugal (Jira)

     [ https://issues.apache.org/jira/browse/CARBONDATA-3698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ShuMing Li updated CARBONDATA-3698:
-----------------------------------
    Description:
Executing `drop table carbon_table_xxx` takes a very long time when there are many tables in the warehouse.

Here is the stack trace:
{code:java}
- locked <0x00000006dd91f450> (a org.apache.spark.sql.hive.HiveExternalCatalog)
  at org.apache.spark.sql.hive.HiveExternalCatalog.listTables(HiveExternalCatalog.scala:869)
  at org.apache.spark.sql.catalyst.catalog.SessionCatalog.listTables(SessionCatalog.scala:811)
  at org.apache.spark.sql.catalyst.catalog.SessionCatalog.listTables(SessionCatalog.scala:795)
  at org.apache.spark.sql.hive.CarbonFileMetastore$$anonfun$4.apply(CarbonFileMetastore.scala:585)
  at org.apache.spark.sql.hive.CarbonFileMetastore$$anonfun$4.apply(CarbonFileMetastore.scala:584)
  at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
  at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
  at scala.collection.Iterator$class.foreach(Iterator.scala:893)
  at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
  at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
  at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
  at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
  at org.apache.spark.sql.hive.CarbonFileMetastore.removeStaleTimeStampEntries(CarbonFileMetastore.scala:583)
  at org.apache.spark.sql.execution.command.table.CarbonDropTableCommand.processMetadata(CarbonDropTableCommand.scala:192)
  at org.apache.spark.sql.execution.command.AtomicRunnableCommand$$anonfun$run$3.apply(package.scala:146)
  at org.apache.spark.sql.execution.command.AtomicRunnableCommand$$anonfun$run$3.apply(package.scala:145)
  at org.apache.spark.sql.execution.command.Auditable$class.runWithAudit(package.scala:104)
{code}

Here is the code that iterates over all databases and tables:

{code:java}
def removeStaleTimeStampEntries(sparkSession: SparkSession): Unit = {
  val tablesList = sparkSession.sessionState.catalog.listDatabases().flatMap {
    database =>
      sparkSession.sessionState.catalog.listTables(database)
        .map(table => CarbonTable.buildUniqueName(database, table.table))
  }
  ...
}
{code}
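
For illustration only, here is a minimal sketch of a narrower lookup that lists just the database of the table being dropped instead of every database in the warehouse; the helper name removeStaleTimeStampEntriesForDatabase and its wiring into CarbonFileMetastore are assumptions, not an actual CarbonData change:

{code:java}
import org.apache.spark.sql.SparkSession

import org.apache.carbondata.core.metadata.schema.table.CarbonTable

// Hypothetical sketch: restrict the catalog listing to one database so that
// dropping a single table does not scan every database in the warehouse.
def removeStaleTimeStampEntriesForDatabase(
    sparkSession: SparkSession,
    database: String): Unit = {
  val tablesList = sparkSession.sessionState.catalog
    .listTables(database)
    .map(table => CarbonTable.buildUniqueName(database, table.table))
  // ... continue the existing stale-timestamp cleanup using tablesList ...
}
{code}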

  was:
Executing `drop table carbon_table_xxx` takes a very long time when there are many tables in the warehouse.

Here is the stack trace:

```
   - locked <0x00000006dd91f450> (a org.apache.spark.sql.hive.HiveExternalCatalog)
        at org.apache.spark.sql.hive.HiveExternalCatalog.listTables(HiveExternalCatalog.scala:869)
        at org.apache.spark.sql.catalyst.catalog.SessionCatalog.listTables(SessionCatalog.scala:811)
        at org.apache.spark.sql.catalyst.catalog.SessionCatalog.listTables(SessionCatalog.scala:795)
        at org.apache.spark.sql.hive.CarbonFileMetastore$$anonfun$4.apply(CarbonFileMetastore.scala:585)
        at org.apache.spark.sql.hive.CarbonFileMetastore$$anonfun$4.apply(CarbonFileMetastore.scala:584)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
        at scala.collection.Iterator$class.foreach(Iterator.scala:893)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
        at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
        at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
        at org.apache.spark.sql.hive.CarbonFileMetastore.removeStaleTimeStampEntries(CarbonFileMetastore.scala:583)
        at org.apache.spark.sql.execution.command.table.CarbonDropTableCommand.processMetadata(CarbonDropTableCommand.scala:192)
        at org.apache.spark.sql.execution.command.AtomicRunnableCommand$$anonfun$run$3.apply(package.scala:146)
        at org.apache.spark.sql.execution.command.AtomicRunnableCommand$$anonfun$run$3.apply(package.scala:145)
        at org.apache.spark.sql.execution.command.Auditable$class.runWithAudit(package.scala:104)

```

 

Here is the code that iterates over all databases and tables:

```

def removeStaleTimeStampEntries(sparkSession: SparkSession): Unit = {
  val tablesList = sparkSession.sessionState.catalog.listDatabases().flatMap {
    database =>
      sparkSession.sessionState.catalog.listTables(database)
      .map(table => CarbonTable.buildUniqueName(database, table.table))
  }
 ...
}

```


> Drop table will iterate all databases and tables
> ------------------------------------------------
>
>                 Key: CARBONDATA-3698
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-3698
>             Project: CarbonData
>          Issue Type: Bug
>          Components: sql
>    Affects Versions: 1.6.1
>            Reporter: ShuMing Li
>            Priority: Major
>
> Executing `drop table carbon_table_xxx` takes a very long time when there are many tables in the warehouse.
> Here is the stack trace:
>  
> {code:java}
> - locked <0x00000006dd91f450> (a org.apache.spark.sql.hive.HiveExternalCatalog)
>   at org.apache.spark.sql.hive.HiveExternalCatalog.listTables(HiveExternalCatalog.scala:869)
>   at org.apache.spark.sql.catalyst.catalog.SessionCatalog.listTables(SessionCatalog.scala:811)
>   at org.apache.spark.sql.catalyst.catalog.SessionCatalog.listTables(SessionCatalog.scala:795)
>   at org.apache.spark.sql.hive.CarbonFileMetastore$$anonfun$4.apply(CarbonFileMetastore.scala:585)
>   at org.apache.spark.sql.hive.CarbonFileMetastore$$anonfun$4.apply(CarbonFileMetastore.scala:584)
>   at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
>   at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
>   at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
>   at org.apache.spark.sql.hive.CarbonFileMetastore.removeStaleTimeStampEntries(CarbonFileMetastore.scala:583)
>   at org.apache.spark.sql.execution.command.table.CarbonDropTableCommand.processMetadata(CarbonDropTableCommand.scala:192)
>   at org.apache.spark.sql.execution.command.AtomicRunnableCommand$$anonfun$run$3.apply(package.scala:146)
>   at org.apache.spark.sql.execution.command.AtomicRunnableCommand$$anonfun$run$3.apply(package.scala:145)
>   at org.apache.spark.sql.execution.command.Auditable$class.runWithAudit(package.scala:104)
> {code}
>
> Here is the code that iterates over all databases and tables:
>
> {code:java}
> def removeStaleTimeStampEntries(sparkSession: SparkSession): Unit = {
>   val tablesList = sparkSession.sessionState.catalog.listDatabases().flatMap {
>     database =>
>       sparkSession.sessionState.catalog.listTables(database)
>         .map(table => CarbonTable.buildUniqueName(database, table.table))
>   }
>   ...
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)