[GitHub] [carbondata] akkio-97 opened a new pull request #4129: [WIP] alter rename complex types


[GitHub] [carbondata] akkio-97 opened a new pull request #4129: [WIP] alter rename complex types

GitBox

akkio-97 opened a new pull request #4129:
URL: https://github.com/apache/carbondata/pull/4129


    ### Why is this PR needed?
   
   
    ### What changes were proposed in this PR?
   
       
    ### Does this PR introduce any user interface change?
    - No
    - Yes. (please explain the change and update document)
   
    ### Is any new testcase added?
    - No
    - Yes
   
       
   


--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[hidden email]



[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4129: [WIP] alter rename complex types

GitBox

CarbonDataQA2 commented on pull request #4129:
URL: https://github.com/apache/carbondata/pull/4129#issuecomment-831741550


   Build Failed  with Spark 2.3.4, Please check CI http://121.244.95.60:12602/job/ApacheCarbonPRBuilder2.3/5299/
   





[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4129: [WIP] alter rename complex types

GitBox
In reply to this post by GitBox

CarbonDataQA2 commented on pull request #4129:
URL: https://github.com/apache/carbondata/pull/4129#issuecomment-831742653


   Build Success with Spark 2.4.5, Please check CI http://121.244.95.60:12602/job/ApacheCarbon_PR_Builder_2.4.5/3554/
   





[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4129: [WIP] alter rename complex types

GitBox
In reply to this post by GitBox

CarbonDataQA2 commented on pull request #4129:
URL: https://github.com/apache/carbondata/pull/4129#issuecomment-843111237


   Build Failed  with Spark 2.4.5, Please check CI http://121.244.95.60:12602/job/ApacheCarbon_PR_Builder_2.4.5/3646/
   





[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4129: [WIP] alter rename complex types

GitBox
In reply to this post by GitBox

CarbonDataQA2 commented on pull request #4129:
URL: https://github.com/apache/carbondata/pull/4129#issuecomment-843115121


   Build Failed  with Spark 2.3.4, Please check CI http://121.244.95.60:12602/job/ApacheCarbonPRBuilder2.3/5391/
   





[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4129: [WIP] alter rename complex types

GitBox
In reply to this post by GitBox

CarbonDataQA2 commented on pull request #4129:
URL: https://github.com/apache/carbondata/pull/4129#issuecomment-843572443


   Build Failed  with Spark 2.3.4, Please check CI http://121.244.95.60:12602/job/ApacheCarbonPRBuilder2.3/5395/
   





[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4129: [WIP] alter rename complex types

GitBox
In reply to this post by GitBox

CarbonDataQA2 commented on pull request #4129:
URL: https://github.com/apache/carbondata/pull/4129#issuecomment-843573819


   Build Failed  with Spark 2.4.5, Please check CI http://121.244.95.60:12602/job/ApacheCarbon_PR_Builder_2.4.5/3650/
   





[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4129: [CARBONDATA-4179] Support renaming of complex columns (array/struct)

GitBox
In reply to this post by GitBox

CarbonDataQA2 commented on pull request #4129:
URL: https://github.com/apache/carbondata/pull/4129#issuecomment-844155210


   Build Failed  with Spark 2.4.5, Please check CI http://121.244.95.60:12602/job/ApacheCarbon_PR_Builder_2.4.5/3654/
   





[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4129: [CARBONDATA-4179] Support renaming of complex columns (array/struct)

GitBox
In reply to this post by GitBox

CarbonDataQA2 commented on pull request #4129:
URL: https://github.com/apache/carbondata/pull/4129#issuecomment-844163295


   Build Failed  with Spark 2.3.4, Please check CI http://121.244.95.60:12602/job/ApacheCarbonPRBuilder2.3/5399/
   





[GitHub] [carbondata] akkio-97 commented on pull request #4129: [CARBONDATA-4179] Support renaming of complex columns (array/struct)

GitBox
In reply to this post by GitBox

akkio-97 commented on pull request #4129:
URL: https://github.com/apache/carbondata/pull/4129#issuecomment-844164769


   retest this please





[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4129: [CARBONDATA-4179] Support renaming of complex columns (array/struct)

GitBox
In reply to this post by GitBox

CarbonDataQA2 commented on pull request #4129:
URL: https://github.com/apache/carbondata/pull/4129#issuecomment-844252002


   Build Failed  with Spark 2.3.4, Please check CI http://121.244.95.60:12602/job/ApacheCarbonPRBuilder2.3/5400/
   





[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4129: [CARBONDATA-4179] Support renaming of complex columns (array/struct)

GitBox
In reply to this post by GitBox

CarbonDataQA2 commented on pull request #4129:
URL: https://github.com/apache/carbondata/pull/4129#issuecomment-844258339


   Build Failed  with Spark 2.4.5, Please check CI http://121.244.95.60:12602/job/ApacheCarbon_PR_Builder_2.4.5/3656/
   





[GitHub] [carbondata] akkio-97 commented on pull request #4129: [CARBONDATA-4179] Support renaming of complex columns (array/struct)

GitBox
In reply to this post by GitBox

akkio-97 commented on pull request #4129:
URL: https://github.com/apache/carbondata/pull/4129#issuecomment-844719029


   retest this please





[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4129: [CARBONDATA-4179] Support renaming of complex columns (array/struct)

GitBox
In reply to this post by GitBox

CarbonDataQA2 commented on pull request #4129:
URL: https://github.com/apache/carbondata/pull/4129#issuecomment-844794711


   Build Success with Spark 2.3.4, Please check CI http://121.244.95.60:12602/job/ApacheCarbonPRBuilder2.3/5404/
   





[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4129: [CARBONDATA-4179] Support renaming of complex columns (array/struct)

GitBox
In reply to this post by GitBox

CarbonDataQA2 commented on pull request #4129:
URL: https://github.com/apache/carbondata/pull/4129#issuecomment-844797837


   Build Failed  with Spark 2.4.5, Please check CI http://121.244.95.60:12602/job/ApacheCarbon_PR_Builder_2.4.5/3660/
   





[GitHub] [carbondata] akashrn5 commented on a change in pull request #4129: [CARBONDATA-4179] Support renaming of complex columns (array/struct)

GitBox
In reply to this post by GitBox

akashrn5 commented on a change in pull request #4129:
URL: https://github.com/apache/carbondata/pull/4129#discussion_r636833045



##########
File path: core/src/main/java/org/apache/carbondata/core/scan/executor/util/RestructureUtil.java
##########
@@ -154,11 +160,21 @@
     return presentDimension;
   }
 
+  public static void fillExistingTableColumnIDMap(CarbonDimension tableColumn) {

Review comment:
       Why do we need this? When we get the parent carbon column, don't we also get the child CarbonColumn objects?

##########
File path: docs/ddl-of-carbondata.md
##########
@@ -866,6 +880,8 @@ Users can specify which columns to include and exclude for local dictionary gene
      **NOTE:**
      * Merge index is supported on streaming table from carbondata 2.0.1 version.
      But streaming segments (ROW_V1) cannot create merge index.
+     * Rename column name is only supported for complex columns including array and struct.
+     * Change datatype is not yet supported for complex types.

Review comment:
       I think this is not required to mention, as you should handle it in code by throwing an error message. Otherwise we will need to change the documentation again once we support it.

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/carbonTableSchemaCommon.scala
##########
@@ -160,7 +159,18 @@ case class DropPartitionCallableModel(carbonLoadModel: CarbonLoadModel,
     carbonTable: CarbonTable,
     sqlContext: SQLContext)
 
-case class DataTypeInfo(dataType: String, precision: Int = 0, scale: Int = 0)
+case class DataTypeInfo(dataType: String,
+    name: String,

Review comment:
       Make name the first parameter and rename it to columnName.

##########
File path: docs/ddl-of-carbondata.md
##########
@@ -841,11 +843,23 @@ Users can specify which columns to include and exclude for local dictionary gene
      ```
      ALTER TABLE test_db.carbon CHANGE a3 a4 STRING
      ```
-     Example3:Change column a3's comment to "col_comment".
+     Example4:Change column a3's comment to "col_comment".
     
      ```
      ALTER TABLE test_db.carbon CHANGE a3 a3 STRING COMMENT 'col_comment'
      ```
+    
+     Example5:Change child column name in column str struct\<a:int> from a to b.
+                  
+     ```
+     ALTER TABLE test_db.carbon CHANGE str str struct<b:int>
+     ```
+    
+     Example6:Change column name in column arr1 array\<int> from arr1 to arr2.

Review comment:
       same as above

##########
File path: docs/ddl-of-carbondata.md
##########
@@ -841,11 +843,23 @@ Users can specify which columns to include and exclude for local dictionary gene
      ```
      ALTER TABLE test_db.carbon CHANGE a3 a4 STRING
      ```
-     Example3:Change column a3's comment to "col_comment".
+     Example4:Change column a3's comment to "col_comment".
     
      ```
      ALTER TABLE test_db.carbon CHANGE a3 a3 STRING COMMENT 'col_comment'
      ```
+    
+     Example5:Change child column name in column str struct\<a:int> from a to b.

Review comment:
       Give meaningful column and child names in the documentation examples.

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/catalyst/CarbonParserUtil.scala
##########
@@ -1132,12 +1134,63 @@ object CarbonParserUtil {
         } else if (scale < 0 || scale > 38) {
           throw new MalformedCarbonCommandException("Invalid value for scale")
         }
-        DataTypeInfo("decimal", precision, scale)
+        DataTypeInfo("decimal", name, precision, scale)
       case _ =>
-        DataTypeInfo(DataTypeConverterUtil.convertToCarbonType(dataType).getName.toLowerCase)
+        DataTypeInfo(DataTypeConverterUtil.convertToCarbonType(dataType).getName.toLowerCase, name)
     }
   }
 
+  /**
+   * This method will return the instantiated DataTypeInfo by parsing the column
+   */
+  def parseColumn(columnName: String, dataType: DataType,
+      values: Option[List[(Int, Int)]]): DataTypeInfo = {
+    // creates parent dataTypeInfo first
+    val dataTypeInfo = CarbonParserUtil.parseDataType(
+      columnName,
+      DataTypeConverterUtil
+        .convertToCarbonType(dataType.typeName)
+        .getName
+        .toLowerCase,
+      values)
+    // check which child type is present and create children dataTypeInfo accordingly
+    if (dataType.isInstanceOf[ArrayType]) {
+      val childType: DataType = dataType.asInstanceOf[ArrayType].elementType
+      val childName = columnName + ".val"
+      val childValues = childType match {
+        case d: DecimalType => Some(List((d.precision, d.scale)))
+        case _ => None
+      }
+      var childTypeInfoList: List[DataTypeInfo] = null
+      val childDatatypeInfo = parseColumn(childName, childType, childValues)
+      if (childTypeInfoList == null) {
+        childTypeInfoList = List(childDatatypeInfo)
+      }
+      dataTypeInfo.setChildren(childTypeInfoList)
+    } else if (dataType.isInstanceOf[StructType]) {

Review comment:
       Replace the isInstanceOf/else-if chain with a pattern match.
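The refactor the reviewer suggests could look roughly like the following self-contained sketch. It models Spark's DataType hierarchy with a small sealed trait of its own; `describe` and all other names here are illustrative, not CarbonData or Spark APIs.

```scala
// Hypothetical sketch: replacing isInstanceOf/asInstanceOf chains with a
// single pattern match over a sealed type hierarchy. The compiler can then
// also warn about unhandled cases.
sealed trait DataType { def typeName: String }
case object IntType extends DataType { val typeName = "int" }
case class ArrayType(elementType: DataType) extends DataType { val typeName = "array" }
case class StructType(fields: List[(String, DataType)]) extends DataType { val typeName = "struct" }

def describe(dt: DataType): String = dt match {
  case ArrayType(elem) => s"array<${describe(elem)}>"
  case StructType(fields) =>
    fields.map { case (name, t) => s"$name:${describe(t)}" }.mkString("struct<", ",", ">")
  case other => other.typeName
}
```

In the real parseColumn, each `case` arm would build the children DataTypeInfo list the same way the existing isInstanceOf branches do.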

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/catalyst/CarbonParserUtil.scala
##########
@@ -1132,12 +1134,63 @@ object CarbonParserUtil {
         } else if (scale < 0 || scale > 38) {
           throw new MalformedCarbonCommandException("Invalid value for scale")
         }
-        DataTypeInfo("decimal", precision, scale)
+        DataTypeInfo("decimal", name, precision, scale)
       case _ =>
-        DataTypeInfo(DataTypeConverterUtil.convertToCarbonType(dataType).getName.toLowerCase)
+        DataTypeInfo(DataTypeConverterUtil.convertToCarbonType(dataType).getName.toLowerCase, name)
     }
   }
 
+  /**
+   * This method will return the instantiated DataTypeInfo by parsing the column
+   */
+  def parseColumn(columnName: String, dataType: DataType,
+      values: Option[List[(Int, Int)]]): DataTypeInfo = {
+    // creates parent dataTypeInfo first
+    val dataTypeInfo = CarbonParserUtil.parseDataType(
+      columnName,
+      DataTypeConverterUtil
+        .convertToCarbonType(dataType.typeName)
+        .getName
+        .toLowerCase,
+      values)
+    // check which child type is present and create children dataTypeInfo accordingly
+    if (dataType.isInstanceOf[ArrayType]) {
+      val childType: DataType = dataType.asInstanceOf[ArrayType].elementType
+      val childName = columnName + ".val"
+      val childValues = childType match {
+        case d: DecimalType => Some(List((d.precision, d.scale)))
+        case _ => None
+      }
+      var childTypeInfoList: List[DataTypeInfo] = null
+      val childDatatypeInfo = parseColumn(childName, childType, childValues)
+      if (childTypeInfoList == null) {
+        childTypeInfoList = List(childDatatypeInfo)
+      }
+      dataTypeInfo.setChildren(childTypeInfoList)
+    } else if (dataType.isInstanceOf[StructType]) {
+      val childrenTypeList: StructType = dataType.asInstanceOf[StructType]
+      var childTypeInfoList: List[DataTypeInfo] = null
+      for (i <- 0 to childrenTypeList.size - 1) {

Review comment:
        You can use zipWithIndex here.
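As a rough illustration of the zipWithIndex suggestion (the variable names below are hypothetical, not the actual CarbonParserUtil identifiers):

```scala
// Hypothetical sketch: replacing `for (i <- 0 to list.size - 1)` with
// zipWithIndex, which pairs each element with its index.
val childFields = List("id", "name", "age")

// index-based style being replaced:
// for (i <- 0 to childFields.size - 1) { val field = childFields(i); ... }

// zipWithIndex style: no manual indexing or off-by-one risk
val qualified = childFields.zipWithIndex.map { case (field, i) =>
  (s"parent.$field", i)
}
```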

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/catalyst/CarbonParserUtil.scala
##########
@@ -1109,6 +1110,7 @@ object CarbonParserUtil {
    * @return DataTypeInfo object with datatype, precision and scale
    */
   def parseDataType(
+      name: String,

Review comment:
       ```suggestion
         columnName: String,
   ```

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/catalyst/CarbonParserUtil.scala
##########
@@ -1132,12 +1134,63 @@ object CarbonParserUtil {
         } else if (scale < 0 || scale > 38) {
           throw new MalformedCarbonCommandException("Invalid value for scale")
         }
-        DataTypeInfo("decimal", precision, scale)
+        DataTypeInfo("decimal", name, precision, scale)
       case _ =>
-        DataTypeInfo(DataTypeConverterUtil.convertToCarbonType(dataType).getName.toLowerCase)
+        DataTypeInfo(DataTypeConverterUtil.convertToCarbonType(dataType).getName.toLowerCase, name)
     }
   }
 
+  /**
+   * This method will return the instantiated DataTypeInfo by parsing the column
+   */
+  def parseColumn(columnName: String, dataType: DataType,
+      values: Option[List[(Int, Int)]]): DataTypeInfo = {
+    // creates parent dataTypeInfo first
+    val dataTypeInfo = CarbonParserUtil.parseDataType(
+      columnName,
+      DataTypeConverterUtil
+        .convertToCarbonType(dataType.typeName)
+        .getName
+        .toLowerCase,
+      values)
+    // check which child type is present and create children dataTypeInfo accordingly
+    if (dataType.isInstanceOf[ArrayType]) {
+      val childType: DataType = dataType.asInstanceOf[ArrayType].elementType
+      val childName = columnName + ".val"
+      val childValues = childType match {
+        case d: DecimalType => Some(List((d.precision, d.scale)))
+        case _ => None
+      }
+      var childTypeInfoList: List[DataTypeInfo] = null
+      val childDatatypeInfo = parseColumn(childName, childType, childValues)
+      if (childTypeInfoList == null) {
+        childTypeInfoList = List(childDatatypeInfo)
+      }
+      dataTypeInfo.setChildren(childTypeInfoList)
+    } else if (dataType.isInstanceOf[StructType]) {
+      val childrenTypeList: StructType = dataType.asInstanceOf[StructType]
+      var childTypeInfoList: List[DataTypeInfo] = null
+      for (i <- 0 to childrenTypeList.size - 1) {
+        val childField = childrenTypeList(i)
+        val childType = childField.dataType
+        val childName = columnName + "." + childField.name

Review comment:
       Use a constant for the "." separator.

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/catalyst/CarbonParserUtil.scala
##########
@@ -1132,12 +1134,63 @@ object CarbonParserUtil {
         } else if (scale < 0 || scale > 38) {
           throw new MalformedCarbonCommandException("Invalid value for scale")
         }
-        DataTypeInfo("decimal", precision, scale)
+        DataTypeInfo("decimal", name, precision, scale)
       case _ =>
-        DataTypeInfo(DataTypeConverterUtil.convertToCarbonType(dataType).getName.toLowerCase)
+        DataTypeInfo(DataTypeConverterUtil.convertToCarbonType(dataType).getName.toLowerCase, name)
     }
   }
 
+  /**
+   * This method will return the instantiated DataTypeInfo by parsing the column
+   */
+  def parseColumn(columnName: String, dataType: DataType,
+      values: Option[List[(Int, Int)]]): DataTypeInfo = {
+    // creates parent dataTypeInfo first
+    val dataTypeInfo = CarbonParserUtil.parseDataType(
+      columnName,
+      DataTypeConverterUtil
+        .convertToCarbonType(dataType.typeName)
+        .getName
+        .toLowerCase,
+      values)
+    // check which child type is present and create children dataTypeInfo accordingly
+    if (dataType.isInstanceOf[ArrayType]) {
+      val childType: DataType = dataType.asInstanceOf[ArrayType].elementType
+      val childName = columnName + ".val"
+      val childValues = childType match {
+        case d: DecimalType => Some(List((d.precision, d.scale)))
+        case _ => None
+      }
+      var childTypeInfoList: List[DataTypeInfo] = null
+      val childDatatypeInfo = parseColumn(childName, childType, childValues)
+      if (childTypeInfoList == null) {

Review comment:
       This null check is not required; you can remove it and assign the parseColumn output directly.
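The simplification the reviewer asks for can be sketched as follows, with `parse` as a hypothetical stand-in for parseColumn:

```scala
// Hypothetical sketch: a null-initialized var plus a null check collapses
// into a single direct, immutable assignment.
def parse(name: String): String = s"info($name)"

// before:
// var childTypeInfoList: List[String] = null
// val childDatatypeInfo = parse("val")
// if (childTypeInfoList == null) { childTypeInfoList = List(childDatatypeInfo) }

// after:
val childTypeInfoList = List(parse("val"))
```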

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableColRenameDataTypeChangeCommand.scala
##########
@@ -143,27 +199,51 @@ private[sql] case class CarbonAlterTableColRenameDataTypeChangeCommand(
       val newColumnPrecision = alterTableColRenameAndDataTypeChangeModel.dataTypeInfo.precision
       val newColumnScale = alterTableColRenameAndDataTypeChangeModel.dataTypeInfo.scale
       // set isDataTypeChange flag
-      if (oldCarbonColumn.head.getDataType.getName
-        .equalsIgnoreCase(alterTableColRenameAndDataTypeChangeModel.dataTypeInfo.dataType)) {
+      val oldDatatype = oldCarbonColumn.head.getDataType
+      val newDatatype = alterTableColRenameAndDataTypeChangeModel.dataTypeInfo.dataType
+      if (isColumnRename && (DataTypes.isMapType(oldDatatype) ||
+                             newDatatype.equalsIgnoreCase(CarbonCommonConstants.MAP))) {
+        throw new UnsupportedOperationException("cannot alter map type column")
+      }
+      if (oldDatatype.getName.equalsIgnoreCase(newDatatype)) {
         val newColumnPrecision =
           alterTableColRenameAndDataTypeChangeModel.dataTypeInfo.precision
         val newColumnScale = alterTableColRenameAndDataTypeChangeModel.dataTypeInfo.scale
         // if the source datatype is decimal and there is change in precision and scale, then
         // along with rename, datatype change is also required for the command, so set the
         // isDataTypeChange flag to true in this case
-        if (oldCarbonColumn.head.getDataType.getName.equalsIgnoreCase("decimal") &&
-            (oldCarbonColumn.head.getDataType.asInstanceOf[DecimalType].getPrecision !=
+        if (oldDatatype.getName.equalsIgnoreCase(CarbonCommonConstants.DECIMAL) &&

Review comment:
       You can use the DataTypes.isDecimalType API; similarly check the other places.

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableColRenameDataTypeChangeCommand.scala
##########
@@ -143,27 +199,51 @@ private[sql] case class CarbonAlterTableColRenameDataTypeChangeCommand(
       val newColumnPrecision = alterTableColRenameAndDataTypeChangeModel.dataTypeInfo.precision
       val newColumnScale = alterTableColRenameAndDataTypeChangeModel.dataTypeInfo.scale
       // set isDataTypeChange flag
-      if (oldCarbonColumn.head.getDataType.getName
-        .equalsIgnoreCase(alterTableColRenameAndDataTypeChangeModel.dataTypeInfo.dataType)) {
+      val oldDatatype = oldCarbonColumn.head.getDataType
+      val newDatatype = alterTableColRenameAndDataTypeChangeModel.dataTypeInfo.dataType
+      if (isColumnRename && (DataTypes.isMapType(oldDatatype) ||
+                             newDatatype.equalsIgnoreCase(CarbonCommonConstants.MAP))) {
+        throw new UnsupportedOperationException("cannot alter map type column")

Review comment:
       ```suggestion
           throw new UnsupportedOperationException("Alter rename is unsupported for Map datatype column")
   ```

##########
File path: core/src/main/java/org/apache/carbondata/core/scan/executor/util/RestructureUtil.java
##########
@@ -50,6 +52,10 @@
  * Utility class for restructuring
  */
 public class RestructureUtil {
+  // if table column is of complex type- this look up stores the column id of the parent and their
+  // children. This helps to determine the existence of incoming query column by matching based
+  // on id.
+  private static Map<String, String> existingTableColumnIDMap = new HashMap<>();

Review comment:
       The map is initialized twice; please remove one of the initializations.

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/carbonTableSchemaCommon.scala
##########
@@ -160,7 +159,18 @@ case class DropPartitionCallableModel(carbonLoadModel: CarbonLoadModel,
     carbonTable: CarbonTable,
     sqlContext: SQLContext)
 
-case class DataTypeInfo(dataType: String, precision: Int = 0, scale: Int = 0)
+case class DataTypeInfo(dataType: String,
+    name: String,
+    precision: Int = 0,
+    scale: Int = 0) {
+  private var children: List[DataTypeInfo] = null

Review comment:
       Instead of null, initialize it to None and use an Option to avoid any NullPointerException.
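A minimal sketch of the Option-based field the reviewer suggests; DataTypeInfo here is a simplified stand-in, not the actual CarbonData case class:

```scala
// Hypothetical sketch: children held as Option[List[...]] instead of a
// null-initialized var, so callers never observe null.
case class DataTypeInfo(dataType: String, columnName: String) {
  private var children: Option[List[DataTypeInfo]] = None
  def setChildren(c: List[DataTypeInfo]): Unit = children = Some(c)
  // getOrElse yields an empty list when no children were set,
  // avoiding a NullPointerException at the call site
  def getChildren: List[DataTypeInfo] = children.getOrElse(Nil)
}
```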

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableColRenameDataTypeChangeCommand.scala
##########
@@ -83,6 +76,69 @@ private[sql] case class CarbonAlterTableColRenameDataTypeChangeCommand(
     childTableColumnRename: Boolean = false)
   extends CarbonAlterTableColumnRenameCommand(alterTableColRenameAndDataTypeChangeModel.columnName,
     alterTableColRenameAndDataTypeChangeModel.newColumnName) {
+  // stores mapping of altered children column names:
+  // new_column_name -> column_datatype
+  val alteredColumnNamesMap = collection.mutable.HashMap.empty[String, String]
+
+  // stores mapping of: column_name -> new_column_datatype
+  val alteredColumnDatatypesMap = collection.mutable.HashMap.empty[String, String]
+
+  /**
+   * This method checks the structure of the old and new complex columns, and-
+   * 1. throws exception if the number of complex-levels in both columns does not match
+   * 2. throws exception if the number of children of both columns does not match
+   * 3. creates alteredColumnNamesMap: new_column_name -> datatype. Here new_column_name are those
+   *    names of the columns that are altered.
+   * 4. creates alteredColumnDatatypesMap: column_name -> new_datatype.
+   * These maps will later be used while altering the table schema
+   */
+  def validateComplexStructure(dimension: List[CarbonDimension],

Review comment:
       Please move the common validation methods to a Util class so that the command class will be much simpler and cleaner.

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableColRenameDataTypeChangeCommand.scala
##########
@@ -83,6 +76,69 @@ private[sql] case class CarbonAlterTableColRenameDataTypeChangeCommand(
     childTableColumnRename: Boolean = false)
   extends CarbonAlterTableColumnRenameCommand(alterTableColRenameAndDataTypeChangeModel.columnName,
     alterTableColRenameAndDataTypeChangeModel.newColumnName) {
+  // stores mapping of altered children column names:
+  // new_column_name -> column_datatype
+  val alteredColumnNamesMap = collection.mutable.HashMap.empty[String, String]
+
+  // stores mapping of: column_name -> new_column_datatype
+  val alteredColumnDatatypesMap = collection.mutable.HashMap.empty[String, String]
+
+  /**
+   * This method checks the structure of the old and new complex columns, and-
+   * 1. throws exception if the number of complex-levels in both columns does not match
+   * 2. throws exception if the number of children of both columns does not match
+   * 3. creates alteredColumnNamesMap: new_column_name -> datatype. Here new_column_name are those
+   *    names of the columns that are altered.
+   * 4. creates alteredColumnDatatypesMap: column_name -> new_datatype.
+   * These maps will later be used while altering the table schema
+   */
+  def validateComplexStructure(dimension: List[CarbonDimension],
+      newDataTypeInfo: List[DataTypeInfo]): Unit = {
+    if (dimension == null && newDataTypeInfo == null) {
+      throw new UnsupportedOperationException(
+        "both old and new dimensions are null")
+    } else if (dimension == null || newDataTypeInfo == null) {
+      throw new UnsupportedOperationException(
+        "because either the old or the new dimension is null")
+    } else if (dimension.size != newDataTypeInfo.size) {
+      throw new UnsupportedOperationException(
+        "because number of children of old and new complex columns are not the same")
+    } else {
+      for (i <- 0 to newDataTypeInfo.size - 1) {

Review comment:
       Try zipWithIndex here if possible.

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableColRenameDataTypeChangeCommand.scala
##########
@@ -83,6 +76,69 @@ private[sql] case class CarbonAlterTableColRenameDataTypeChangeCommand(
     childTableColumnRename: Boolean = false)
   extends CarbonAlterTableColumnRenameCommand(alterTableColRenameAndDataTypeChangeModel.columnName,
     alterTableColRenameAndDataTypeChangeModel.newColumnName) {
+  // stores mapping of altered children column names:
+  // new_column_name -> column_datatype
+  val alteredColumnNamesMap = collection.mutable.HashMap.empty[String, String]
+
+  // stores mapping of: column_name -> new_column_datatype
+  val alteredColumnDatatypesMap = collection.mutable.HashMap.empty[String, String]
+
+  /**
+   * This method checks the structure of the old and new complex columns, and-
+   * 1. throws exception if the number of complex-levels in both columns does not match
+   * 2. throws exception if the number of children of both columns does not match
+   * 3. creates alteredColumnNamesMap: new_column_name -> datatype. Here new_column_name are those
+   *    names of the columns that are altered.
+   * 4. creates alteredColumnDatatypesMap: column_name -> new_datatype.
+   * These maps will later be used while altering the table schema
+   */
+  def validateComplexStructure(dimension: List[CarbonDimension],
+      newDataTypeInfo: List[DataTypeInfo]): Unit = {

Review comment:
       Please rename the variables to reflect what they hold.

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableColRenameDataTypeChangeCommand.scala
##########
@@ -83,6 +76,69 @@ private[sql] case class CarbonAlterTableColRenameDataTypeChangeCommand(
     childTableColumnRename: Boolean = false)
   extends CarbonAlterTableColumnRenameCommand(alterTableColRenameAndDataTypeChangeModel.columnName,
     alterTableColRenameAndDataTypeChangeModel.newColumnName) {
+  // stores mapping of altered children column names:
+  // new_column_name -> column_datatype
+  val alteredColumnNamesMap = collection.mutable.HashMap.empty[String, String]
+
+  // stores mapping of: column_name -> new_column_datatype
+  val alteredColumnDatatypesMap = collection.mutable.HashMap.empty[String, String]
+
+  /**
+   * This method checks the structure of the old and new complex columns, and-
+   * 1. throws exception if the number of complex-levels in both columns does not match
+   * 2. throws exception if the number of children of both columns does not match
+   * 3. creates alteredColumnNamesMap: new_column_name -> datatype. Here new_column_name are those
+   *    names of the columns that are altered.
+   * 4. creates alteredColumnDatatypesMap: column_name -> new_datatype.
+   * These maps will later be used while altering the table schema
+   */
+  def validateComplexStructure(dimension: List[CarbonDimension],
+      newDataTypeInfo: List[DataTypeInfo]): Unit = {
+    if (dimension == null && newDataTypeInfo == null) {

Review comment:
       I think these null checks are not required, as children will always be present

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableColRenameDataTypeChangeCommand.scala
##########
@@ -83,6 +76,69 @@ private[sql] case class CarbonAlterTableColRenameDataTypeChangeCommand(
     childTableColumnRename: Boolean = false)
   extends CarbonAlterTableColumnRenameCommand(alterTableColRenameAndDataTypeChangeModel.columnName,
     alterTableColRenameAndDataTypeChangeModel.newColumnName) {
+  // stores mapping of altered children column names:
+  // new_column_name -> column_datatype
+  val alteredColumnNamesMap = collection.mutable.HashMap.empty[String, String]
+
+  // stores mapping of: column_name -> new_column_datatype
+  val alteredColumnDatatypesMap = collection.mutable.HashMap.empty[String, String]
+
+  /**
+   * This method checks the structure of the old and new complex columns, and-
+   * 1. throws exception if the number of complex-levels in both columns does not match
+   * 2. throws exception if the number of children of both columns does not match
+   * 3. creates alteredColumnNamesMap: new_column_name -> datatype. Here new_column_name are those
+   *    names of the columns that are altered.
+   * 4. creates alteredColumnDatatypesMap: column_name -> new_datatype.
+   * These maps will later be used while altering the table schema
+   */
+  def validateComplexStructure(dimension: List[CarbonDimension],
+      newDataTypeInfo: List[DataTypeInfo]): Unit = {
+    if (dimension == null && newDataTypeInfo == null) {
+      throw new UnsupportedOperationException(
+        "both old and new dimensions are null")
+    } else if (dimension == null || newDataTypeInfo == null) {
+      throw new UnsupportedOperationException(
+        "because either the old or the new dimension is null")
+    } else if (dimension.size != newDataTypeInfo.size) {
+      throw new UnsupportedOperationException(
+        "because number of children of old and new complex columns are not the same")
+    } else {
+      for (i <- 0 to newDataTypeInfo.size - 1) {
+        val old_column_name = dimension(i).getColName.split('.').last
+        val old_column_datatype = dimension(i).getDataType.getName
+        val new_column_name = newDataTypeInfo(i).name.split('.').last
+        val new_column_datatype = newDataTypeInfo(i).dataType
+        if (!old_column_datatype.equalsIgnoreCase(new_column_datatype)) {
+          /**
+           * datatypes of complex children cannot be altered. So throwing exception for now.
+           * TODO: use alteredColumnDatatypesMap to update the carbon schema
+           */
+          alteredColumnDatatypesMap += (dimension(i).getColName -> new_column_datatype)

Review comment:
       please remove this map, it's not useful for this PR

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableColRenameDataTypeChangeCommand.scala
##########
@@ -217,20 +310,52 @@ private[sql] case class CarbonAlterTableColRenameDataTypeChangeCommand(
               .setPrecision(newColumnPrecision)
             columnSchema.setScale(newColumnScale)
           }
-          if (newColumnComment.isDefined && columnSchema.getColumnProperties != null) {
-            columnSchema.getColumnProperties.put(
-              CarbonCommonConstants.COLUMN_COMMENT, newColumnComment.get)
-          } else if (newColumnComment.isDefined) {
-            val newColumnProperties = new util.HashMap[String, String]
-            newColumnProperties.put(CarbonCommonConstants.COLUMN_COMMENT, newColumnComment.get)
-            columnSchema.setColumnProperties(newColumnProperties)
+          // only table columns are eligible to have comment
+          if (!isComplexChild(columnSchema)) {
+            if (newColumnComment.isDefined && columnSchema.getColumnProperties != null) {
+              columnSchema.getColumnProperties.put(
+                CarbonCommonConstants.COLUMN_COMMENT, newColumnComment.get)
+            } else if (newColumnComment.isDefined) {
+              val newColumnProperties = new util.HashMap[String, String]
+              newColumnProperties.put(CarbonCommonConstants.COLUMN_COMMENT, newColumnComment.get)
+              columnSchema.setColumnProperties(newColumnProperties)
+            }
+            addColumnSchema = columnSchema
+            timeStamp = System.currentTimeMillis()
+            // make a new schema evolution entry after column rename or datatype change
+            schemaEvolutionEntry = AlterTableUtil

Review comment:
       Here I can see that if a child column is renamed, no schemaEvolutionEntry is made for the child column. Please take care of it. Also confirm that add and drop column are handled properly, i.e. the entry for an added or dropped parent column should record how many children it has.
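To make the concern above concrete, here is a toy sketch (not Carbon's real SchemaEvolutionEntry/ColumnSchema API, which differs) of an evolution entry that carries the renamed parent together with every child, so that replaying the evolution history stays consistent:

```scala
// Toy model only; illustrates the shape of the data, not Carbon's actual classes.
case class ToyColumn(name: String)
case class ToyEvolutionEntry(added: List[ToyColumn], removed: List[ToyColumn])

// Renaming parent "str" to "str22" should touch the parent and all its children.
val removed = List(ToyColumn("str"), ToyColumn("str.a"), ToyColumn("str.a.b"))
val added   = removed.map(c => ToyColumn(c.name.replaceFirst("^str", "str22")))
val entry   = ToyEvolutionEntry(added, removed)
// entry now pairs each old column name with its renamed counterpart
```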

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableColRenameDataTypeChangeCommand.scala
##########
@@ -143,27 +199,51 @@ private[sql] case class CarbonAlterTableColRenameDataTypeChangeCommand(
       val newColumnPrecision = alterTableColRenameAndDataTypeChangeModel.dataTypeInfo.precision
       val newColumnScale = alterTableColRenameAndDataTypeChangeModel.dataTypeInfo.scale
       // set isDataTypeChange flag
-      if (oldCarbonColumn.head.getDataType.getName
-        .equalsIgnoreCase(alterTableColRenameAndDataTypeChangeModel.dataTypeInfo.dataType)) {
+      val oldDatatype = oldCarbonColumn.head.getDataType
+      val newDatatype = alterTableColRenameAndDataTypeChangeModel.dataTypeInfo.dataType
+      if (isColumnRename && (DataTypes.isMapType(oldDatatype) ||
+                             newDatatype.equalsIgnoreCase(CarbonCommonConstants.MAP))) {
+        throw new UnsupportedOperationException("cannot alter map type column")
+      }
+      if (oldDatatype.getName.equalsIgnoreCase(newDatatype)) {
         val newColumnPrecision =
           alterTableColRenameAndDataTypeChangeModel.dataTypeInfo.precision
         val newColumnScale = alterTableColRenameAndDataTypeChangeModel.dataTypeInfo.scale
         // if the source datatype is decimal and there is change in precision and scale, then
         // along with rename, datatype change is also required for the command, so set the
         // isDataTypeChange flag to true in this case
-        if (oldCarbonColumn.head.getDataType.getName.equalsIgnoreCase("decimal") &&
-            (oldCarbonColumn.head.getDataType.asInstanceOf[DecimalType].getPrecision !=
+        if (oldDatatype.getName.equalsIgnoreCase(CarbonCommonConstants.DECIMAL) &&
+            (oldDatatype.asInstanceOf[DecimalType].getPrecision !=
              newColumnPrecision ||
-             oldCarbonColumn.head.getDataType.asInstanceOf[DecimalType].getScale !=
+             oldDatatype.asInstanceOf[DecimalType].getScale !=
              newColumnScale)) {
           isDataTypeChange = true
         }
+        if (DataTypes.isArrayType(oldDatatype) ||
+            DataTypes.isStructType(oldDatatype)) {
+          val oldParent = oldCarbonColumn.head
+          val oldChildren = oldParent
+            .asInstanceOf[CarbonDimension]
+            .getListOfChildDimensions
+            .asScala
+            .toList
+          validateComplexStructure(oldChildren,
+            alterTableColRenameAndDataTypeChangeModel.dataTypeInfo.getChildren())
+        }
       } else {
+        if (oldDatatype.isComplexType ||
+            newDatatype.equalsIgnoreCase(CarbonCommonConstants.ARRAY) ||
+            newDatatype.equalsIgnoreCase(CarbonCommonConstants.STRUCT)) {
+          throw new UnsupportedOperationException(
+            "because old and new complex columns are not compatible in structure")
+        }
         isDataTypeChange = true
       }
+
       // If there is no columnrename and datatype change and comment change
       // return directly without execution
-      if (!isColumnRename && !isDataTypeChange && !newColumnComment.isDefined) {
+      if (!isColumnRename && !isDataTypeChange && !newColumnComment.isDefined &&
+          alteredColumnNamesMap.size == 0) {

Review comment:
       replace with isEmpty

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableColRenameDataTypeChangeCommand.scala
##########
@@ -195,18 +275,31 @@ private[sql] case class CarbonAlterTableColRenameDataTypeChangeCommand(
       var deletedColumnSchema: ColumnSchema = null
       var schemaEvolutionEntry: SchemaEvolutionEntry = null
       val columnSchemaList = tableInfo.fact_table.table_columns.asScala.filter(!_.isInvisible)
+      // to validate duplicate children columns
+      var UniqueColumnSet: mutable.Set[String] = mutable.Set()
+
 
       columnSchemaList.foreach { columnSchema =>
-        if (columnSchema.column_name.equalsIgnoreCase(oldColumnName)) {
+        val columnSchemaName = columnSchema.column_name
+        // column to be renamed is a parent/table column or complex child column
+        if (columnSchemaName.equalsIgnoreCase(oldColumnName) ||
+            isChildOfOldColumn(columnSchemaName, oldColumnName)) {
           deletedColumnSchema = columnSchema.deepCopy()
-          if (alterTableColRenameAndDataTypeChangeModel.isColumnRename) {
-            // if only column rename, just get the column schema and rename, make a
+          // if the table column name has been altered
+          if (isColumnRename) {
+            // if only table column rename, just get the column schema and rename, make a
             // schemaEvolutionEntry
-            columnSchema.setColumn_name(newColumnName)
+            if (isChildOfOldColumn(columnSchemaName, oldColumnName)) {

Review comment:
       Since you are only changing the parent-name part of a child column's name when the parent column is renamed, please explain in detail, with a proper example, how only the parent prefix of the child column name is handled.
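A minimal sketch of the "rename only the parent part of a child column name" handling discussed above (`oldColumnName`/`newColumnName` mirror variables in the quoted diff; `renameParentPrefix` is a hypothetical helper, not the PR's actual implementation):

```scala
val oldColumnName = "str"
val newColumnName = "str22"

// Swap only the leading parent segment; keep the child's own path intact.
def renameParentPrefix(childColumnName: String): String =
  if (childColumnName.startsWith(oldColumnName + ".")) {
    newColumnName + childColumnName.substring(oldColumnName.length)
  } else {
    childColumnName
  }

// e.g. "str.a.b" becomes "str22.a.b"; columns under other parents are untouched
```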

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableColRenameDataTypeChangeCommand.scala
##########
@@ -279,6 +404,23 @@ private[sql] case class CarbonAlterTableColRenameDataTypeChangeCommand(
     Seq.empty
   }
 
+  private def isComplexChild(columnSchema: ColumnSchema): Boolean = {

Review comment:
       this can be removed, as isChildOfOldColumn fulfils the same purpose

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableColRenameDataTypeChangeCommand.scala
##########
@@ -195,18 +275,31 @@ private[sql] case class CarbonAlterTableColRenameDataTypeChangeCommand(
       var deletedColumnSchema: ColumnSchema = null
       var schemaEvolutionEntry: SchemaEvolutionEntry = null
       val columnSchemaList = tableInfo.fact_table.table_columns.asScala.filter(!_.isInvisible)
+      // to validate duplicate children columns
+      var UniqueColumnSet: mutable.Set[String] = mutable.Set()
+
 
       columnSchemaList.foreach { columnSchema =>
-        if (columnSchema.column_name.equalsIgnoreCase(oldColumnName)) {
+        val columnSchemaName = columnSchema.column_name
+        // column to be renamed is a parent/table column or complex child column
+        if (columnSchemaName.equalsIgnoreCase(oldColumnName) ||
+            isChildOfOldColumn(columnSchemaName, oldColumnName)) {
           deletedColumnSchema = columnSchema.deepCopy()
-          if (alterTableColRenameAndDataTypeChangeModel.isColumnRename) {
-            // if only column rename, just get the column schema and rename, make a
+          // if the table column name has been altered
+          if (isColumnRename) {
+            // if only table column rename, just get the column schema and rename, make a
             // schemaEvolutionEntry
-            columnSchema.setColumn_name(newColumnName)
+            if (isChildOfOldColumn(columnSchemaName, oldColumnName)) {

Review comment:
       assign `isChildOfOldColumn(columnSchemaName, oldColumnName)` to a boolean once and use it in all three places

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableColRenameDataTypeChangeCommand.scala
##########
@@ -364,10 +506,21 @@ private[sql] case class CarbonAlterTableColRenameDataTypeChangeCommand(
                       s"Specified precision and scale values will lead to data loss")
           }
         }
+      case CarbonCommonConstants.ARRAY =>

Review comment:
       please remove these changes and handle them in the datatype-change PR; they are not required here

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableColRenameDataTypeChangeCommand.scala
##########
@@ -279,6 +404,23 @@ private[sql] case class CarbonAlterTableColRenameDataTypeChangeCommand(
     Seq.empty
   }
 
+  private def isComplexChild(columnSchema: ColumnSchema): Boolean = {
+    columnSchema.column_name.contains(CarbonCommonConstants.POINT)
+  }
+
+  private def isChildOfOldColumn(columnSchemaName: String, oldColumnName: String): Boolean = {
+    columnSchemaName.startsWith(oldColumnName + CarbonCommonConstants.POINT)
+  }
+
+  private def checkIfParentIsAltered(columnSchemaName: String): String = {

Review comment:
       you can remove this also, as this won't be required once you fix the previous comment

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableColRenameDataTypeChangeCommand.scala
##########
@@ -195,18 +275,31 @@ private[sql] case class CarbonAlterTableColRenameDataTypeChangeCommand(
       var deletedColumnSchema: ColumnSchema = null
       var schemaEvolutionEntry: SchemaEvolutionEntry = null
       val columnSchemaList = tableInfo.fact_table.table_columns.asScala.filter(!_.isInvisible)
+      // to validate duplicate children columns
+      var UniqueColumnSet: mutable.Set[String] = mutable.Set()

Review comment:
       please remove this and handle the duplicate-column validation in `validColumnsForRenaming` only
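An illustrative duplicate-name check of the kind that could live in `validColumnsForRenaming` (hypothetical helper; the PR's actual validation may differ):

```scala
// Returns the lower-cased names that occur more than once.
// Column names are compared case-insensitively, matching the equalsIgnoreCase
// comparisons in the quoted diff.
def findDuplicateColumns(columnNames: Seq[String]): Seq[String] =
  columnNames
    .map(_.toLowerCase)
    .groupBy(identity)
    .collect { case (name, occurrences) if occurrences.size > 1 => name }
    .toSeq

// e.g. findDuplicateColumns(Seq("str.a", "str.A", "str.b")) flags "str.a"
```

Validating up front keeps the rename loop itself free of bookkeeping state such as the mutable `UniqueColumnSet` in the diff.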

##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableColRenameDataTypeChangeCommand.scala
##########
@@ -217,20 +310,52 @@ private[sql] case class CarbonAlterTableColRenameDataTypeChangeCommand(
               .setPrecision(newColumnPrecision)
             columnSchema.setScale(newColumnScale)
           }
-          if (newColumnComment.isDefined && columnSchema.getColumnProperties != null) {
-            columnSchema.getColumnProperties.put(
-              CarbonCommonConstants.COLUMN_COMMENT, newColumnComment.get)
-          } else if (newColumnComment.isDefined) {
-            val newColumnProperties = new util.HashMap[String, String]
-            newColumnProperties.put(CarbonCommonConstants.COLUMN_COMMENT, newColumnComment.get)
-            columnSchema.setColumnProperties(newColumnProperties)
+          // only table columns are eligible to have comment
+          if (!isComplexChild(columnSchema)) {
+            if (newColumnComment.isDefined && columnSchema.getColumnProperties != null) {
+              columnSchema.getColumnProperties.put(
+                CarbonCommonConstants.COLUMN_COMMENT, newColumnComment.get)
+            } else if (newColumnComment.isDefined) {
+              val newColumnProperties = new util.HashMap[String, String]
+              newColumnProperties.put(CarbonCommonConstants.COLUMN_COMMENT, newColumnComment.get)
+              columnSchema.setColumnProperties(newColumnProperties)
+            }
+            addColumnSchema = columnSchema
+            timeStamp = System.currentTimeMillis()
+            // make a new schema evolution entry after column rename or datatype change
+            schemaEvolutionEntry = AlterTableUtil
+              .addNewSchemaEvolutionEntry(timeStamp, addColumnSchema, deletedColumnSchema)
+          }
+        }
+
+        if (alteredColumnNamesMap.nonEmpty) {
+          // if complex-child or its children has been renamed
+          if (alteredColumnNamesMap.contains(columnSchemaName)) {
+            // matches exactly
+            val newComplexChildName = alteredColumnNamesMap(columnSchemaName)
+            columnSchema.setColumn_name(newComplexChildName)
+          } else {
+            val oldParent = checkIfParentIsAltered(columnSchemaName)

Review comment:
       I think all the scenarios should be handled in the if case itself, provided your old-to-new column mapping contains all the entries; please check and change.
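A sketch of the reviewer's suggestion (names illustrative, assuming a rename map like the PR's `alteredColumnNamesMap`): pre-expand the map to cover every descendant column, so every lookup in the rename loop is an exact match and the else branch disappears.

```scala
// Original map holds only the explicitly renamed column.
val alteredColumnNamesMap = scala.collection.mutable.HashMap("str.a" -> "str.a11")

val allColumns = List("str", "str.a", "str.a.b", "str.c")

// Expand: any column at or under a renamed prefix gets its own entry.
val expanded = allColumns.flatMap { col =>
  alteredColumnNamesMap.collectFirst {
    case (oldName, newName) if col == oldName || col.startsWith(oldName + ".") =>
      col -> (newName + col.substring(oldName.length))
  }
}.toMap
// expanded maps both "str.a" and its child "str.a.b" to their new names,
// so the loop body only ever needs expanded.get(columnSchemaName)
```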




[GitHub] [carbondata] akashrn5 commented on a change in pull request #4129: [CARBONDATA-4179] Support renaming of complex columns (array/struct)

GitBox
In reply to this post by GitBox

akashrn5 commented on a change in pull request #4129:
URL: https://github.com/apache/carbondata/pull/4129#discussion_r637746504



##########
File path: integration/spark/src/test/scala/org/apache/spark/carbondata/restructure/vectorreader/AlterTableColumnRenameTestCase.scala
##########
@@ -48,6 +52,262 @@ class AlterTableColumnRenameTestCase extends QueryTest with BeforeAndAfterAll {
     sql("drop table simple_table")
   }
 
+  test("Rename more than one column at a time in one operation") {
+    sql("drop table if exists test_rename")
+    sql("CREATE TABLE test_rename (str struct<a:struct<b:int, d:int>, c:int>) STORED AS carbondata")
+    sql("insert into test_rename values(named_struct('a11',named_struct('b2',12,'d',12), 'c', 12))")
+    sql("alter table test_rename change str str22 struct<a11:struct<b2:int, d:int>, c:int>")
+    sql("insert into test_rename values(named_struct('a11',named_struct('b2',24,'d',24), 'c', 24))")
+
+    val rows = sql("select str22.a11.b2 from test_rename").collect()
+    assert(rows(0).equals(Row(12)) && rows(1).equals(Row(24)))
+    // check if old column names are still present
+    val ex1 = intercept[AnalysisException] {
+      sql("select str from test_rename").show(false)
+    }
+    assert(ex1.getMessage
+      .contains("cannot resolve '`str`' given input columns: [test_rename.str22]"))
+
+    val ex2 = intercept[AnalysisException] {
+      sql("select str.a from test_rename").show(false)
+    }
+    assert(ex2.getMessage
+      .contains("cannot resolve '`str.a`' given input columns: [test_rename.str22]"))
+
+    // check un-altered columns
+    val rows1 = sql("select str22.c from test_rename").collect()
+    val rows2 = sql("select str22.a11.d from test_rename").collect()
+    assert(rows1.sameElements(Array(Row(12), Row(24))))
+    assert(rows2.sameElements(Array(Row(12), Row(24))))
+  }
+
+  test("rename complex columns with invalid structure/duplicate names/Map type") {
+    sql("drop table if exists test_rename")
+    sql(
+      "CREATE TABLE test_rename (str struct<a:int,b:long>, str2 struct<a:int,b:long>, map1 " +
+      "map<string, string>, str3 struct<a:int, b:map<string, string>>) STORED AS carbondata")
+
+    val ex1 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change str str struct<a:array<int>,b:long>")
+    }
+    assert(ex1.getMessage
+      .contains(
+        "column rename operation failed: because datatypes of complex children cannot be altered"))
+
+    val ex2 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change str str struct<a:int,b:long,c:int>")
+    }
+    assert(ex2.getMessage
+      .contains(
+        "column rename operation failed: because number of children of old and new complex " +
+        "columns are not the same"))
+
+    val ex3 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change str str int")
+    }
+    assert(ex3.getMessage
+      .contains(
+        "column rename operation failed: because old and new complex columns are not compatible " +
+        "in structure"))
+
+    val ex4 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change str str struct<a:int,a:long>")
+    }
+    assert(ex4.getMessage
+      .contains(
+        "column rename operation failed: because duplicate columns are present"))
+
+    val ex5 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change str str2 struct<a:int,b:long>")
+    }
+    assert(ex5.getMessage
+      .contains(
+        "Column Rename Operation failed. New column name str2 already exists in table test_rename"))
+
+    val ex6 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change map1 map2 map<string, struct<a:int>>")
+    }
+    assert(ex6.getMessage
+      .contains("rename operation failed: cannot alter map type column"))
+
+    val ex7 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change str3 str33 struct<a:int, bc:map<string, string>>")
+    }
+    assert(ex7.getMessage
+      .contains(
+        "rename operation failed: cannot alter complex structure that includes map type column"))
+  }
+
+  def checkAnswerUtil1(df1: DataFrame, df2: DataFrame, df3: DataFrame) {
+    checkAnswer(df1, Seq(Row(Row(Row(2)))))
+    checkAnswer(df2, Seq(Row(Row(2))))
+    checkAnswer(df3, Seq(Row(2)))
+  }
+
+  def checkAnswerUtil2(df1: DataFrame, df2: DataFrame, df3: DataFrame) {
+    checkAnswer(df1, Seq(Row(Row(Row(2))), Row(Row(Row(3)))))
+    checkAnswer(df2, Seq(Row(Row(2)), Row(Row(3))))
+    checkAnswer(df3, Seq(Row(2), Row(3)))
+  }
+
+  test("test alter rename struct of (primitive/struct/array)") {
+    sql("drop table if exists test_rename")
+    sql("CREATE TABLE test_rename (str1 struct<a:int>, str2 struct<a:struct<b:int>>, str3 " +
+        "struct<a:struct<b:struct<c:int>>>, intfield int) STORED AS carbondata")
+    sql("insert into test_rename values(named_struct('a', 2), " +
+        "named_struct('a', named_struct('b', 2)), named_struct('a', named_struct('b', " +
+        "named_struct('c', 2))), 1)")
+
+    // rename parent column from str2 to str22 and read old rows
+    sql("alter table test_rename change str2 str22 struct<a:struct<b:int>>")
+    var df1 = sql("select str22 from test_rename")
+    var df2 = sql("select str22.a from test_rename")
+    var df3 = sql("select str22.a.b from test_rename")
+    assert(df1.collect().size == 1 && df2.collect().size == 1 && df3.collect().size == 1)
+    checkAnswerUtil1(df1, df2, df3)
+
+    // rename child column from a to a11
+    sql("alter table test_rename change str22 str22 struct<a11:struct<b:int>>")
+    df1 = sql("select str22 from test_rename")
+    df2 = sql("select str22.a11 from test_rename")
+    df3 = sql("select str22.a11.b from test_rename")
+    assert(df1.collect().size == 1 && df2.collect().size == 1 && df3.collect().size == 1)
+    checkAnswerUtil1(df1, df2, df3)
+
+    // rename parent column from str22 to str33
+    sql("alter table test_rename change str22 str33 struct<a11:struct<b:int>>")
+    df1 = sql("select str33 from test_rename")
+    df2 = sql("select str33.a11 from test_rename")
+    df3 = sql("select str33.a11.b from test_rename")
+    assert(df1.collect().size == 1 && df2.collect().size == 1 && df3.collect().size == 1)
+    checkAnswerUtil1(df1, df2, df3)
+
+    // insert new rows
+    sql("insert into test_rename values(named_struct('a', 3), " +
+        "named_struct('a', named_struct('b', 3)), named_struct('a', named_struct('b', " +
+        "named_struct('c', 3))), 2)")
+    df1 = sql("select str33 from test_rename")
+    df2 = sql("select str33.a11 from test_rename")
+    df3 = sql("select str33.a11.b from test_rename")
+    assert(df1.collect().size == 2 && df2.collect().size == 2 && df3.collect().size == 2)
+    checkAnswerUtil2(df1, df2, df3)
+
+    sql("alter table test_rename change str33 str33 struct<a11:struct<b11:int>>")
+    sql("alter table test_rename change str33 str33 struct<a22:struct<b11:int>>")

Review comment:
       remove the above two lines, they are not used

##########
File path: integration/spark/src/test/scala/org/apache/spark/carbondata/restructure/vectorreader/AlterTableColumnRenameTestCase.scala
##########
@@ -48,6 +52,262 @@ class AlterTableColumnRenameTestCase extends QueryTest with BeforeAndAfterAll {
     sql("drop table simple_table")
   }
 
+  test("Rename more than one column at a time in one operation") {
+    sql("drop table if exists test_rename")
+    sql("CREATE TABLE test_rename (str struct<a:struct<b:int, d:int>, c:int>) STORED AS carbondata")
+    sql("insert into test_rename values(named_struct('a11',named_struct('b2',12,'d',12), 'c', 12))")
+    sql("alter table test_rename change str str22 struct<a11:struct<b2:int, d:int>, c:int>")
+    sql("insert into test_rename values(named_struct('a11',named_struct('b2',24,'d',24), 'c', 24))")
+
+    val rows = sql("select str22.a11.b2 from test_rename").collect()
+    assert(rows(0).equals(Row(12)) && rows(1).equals(Row(24)))
+    // check if old column names are still present
+    val ex1 = intercept[AnalysisException] {
+      sql("select str from test_rename").show(false)
+    }
+    assert(ex1.getMessage
+      .contains("cannot resolve '`str`' given input columns: [test_rename.str22]"))
+
+    val ex2 = intercept[AnalysisException] {
+      sql("select str.a from test_rename").show(false)
+    }
+    assert(ex2.getMessage
+      .contains("cannot resolve '`str.a`' given input columns: [test_rename.str22]"))
+
+    // check un-altered columns
+    val rows1 = sql("select str22.c from test_rename").collect()
+    val rows2 = sql("select str22.a11.d from test_rename").collect()
+    assert(rows1.sameElements(Array(Row(12), Row(24))))
+    assert(rows2.sameElements(Array(Row(12), Row(24))))
+  }
+
+  test("rename complex columns with invalid structure/duplicate names/Map type") {
+    sql("drop table if exists test_rename")
+    sql(
+      "CREATE TABLE test_rename (str struct<a:int,b:long>, str2 struct<a:int,b:long>, map1 " +
+      "map<string, string>, str3 struct<a:int, b:map<string, string>>) STORED AS carbondata")
+
+    val ex1 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change str str struct<a:array<int>,b:long>")
+    }
+    assert(ex1.getMessage
+      .contains(
+        "column rename operation failed: because datatypes of complex children cannot be altered"))
+
+    val ex2 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change str str struct<a:int,b:long,c:int>")
+    }
+    assert(ex2.getMessage
+      .contains(
+        "column rename operation failed: because number of children of old and new complex " +
+        "columns are not the same"))
+
+    val ex3 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change str str int")
+    }
+    assert(ex3.getMessage
+      .contains(
+        "column rename operation failed: because old and new complex columns are not compatible " +
+        "in structure"))
+
+    val ex4 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change str str struct<a:int,a:long>")
+    }
+    assert(ex4.getMessage
+      .contains(
+        "column rename operation failed: because duplicate columns are present"))
+
+    val ex5 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change str str2 struct<a:int,b:long>")
+    }
+    assert(ex5.getMessage
+      .contains(
+        "Column Rename Operation failed. New column name str2 already exists in table test_rename"))
+
+    val ex6 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change map1 map2 map<string, struct<a:int>>")
+    }
+    assert(ex6.getMessage
+      .contains("rename operation failed: cannot alter map type column"))
+
+    val ex7 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change str3 str33 struct<a:int, bc:map<string, string>>")
+    }
+    assert(ex7.getMessage
+      .contains(
+        "rename operation failed: cannot alter complex structure that includes map type column"))
+  }
+
+  def checkAnswerUtil1(df1: DataFrame, df2: DataFrame, df3: DataFrame) {
+    checkAnswer(df1, Seq(Row(Row(Row(2)))))
+    checkAnswer(df2, Seq(Row(Row(2))))
+    checkAnswer(df3, Seq(Row(2)))
+  }
+
+  def checkAnswerUtil2(df1: DataFrame, df2: DataFrame, df3: DataFrame) {
+    checkAnswer(df1, Seq(Row(Row(Row(2))), Row(Row(Row(3)))))
+    checkAnswer(df2, Seq(Row(Row(2)), Row(Row(3))))
+    checkAnswer(df3, Seq(Row(2), Row(3)))
+  }
+
+  test("test alter rename struct of (primitive/struct/array)") {
+    sql("drop table if exists test_rename")
+    sql("CREATE TABLE test_rename (str1 struct<a:int>, str2 struct<a:struct<b:int>>, str3 " +
+        "struct<a:struct<b:struct<c:int>>>, intfield int) STORED AS carbondata")
+    sql("insert into test_rename values(named_struct('a', 2), " +
+        "named_struct('a', named_struct('b', 2)), named_struct('a', named_struct('b', " +
+        "named_struct('c', 2))), 1)")
+
+    // rename parent column from str2 to str22 and read old rows
+    sql("alter table test_rename change str2 str22 struct<a:struct<b:int>>")
+    var df1 = sql("select str22 from test_rename")
+    var df2 = sql("select str22.a from test_rename")
+    var df3 = sql("select str22.a.b from test_rename")
+    assert(df1.collect().size == 1 && df2.collect().size == 1 && df3.collect().size == 1)
+    checkAnswerUtil1(df1, df2, df3)
+
+    // rename child column from a to a11
+    sql("alter table test_rename change str22 str22 struct<a11:struct<b:int>>")
+    df1 = sql("select str22 from test_rename")
+    df2 = sql("select str22.a11 from test_rename")
+    df3 = sql("select str22.a11.b from test_rename")
+    assert(df1.collect().size == 1 && df2.collect().size == 1 && df3.collect().size == 1)
+    checkAnswerUtil1(df1, df2, df3)
+
+    // rename parent column from str22 to str33
+    sql("alter table test_rename change str22 str33 struct<a11:struct<b:int>>")
+    df1 = sql("select str33 from test_rename")
+    df2 = sql("select str33.a11 from test_rename")
+    df3 = sql("select str33.a11.b from test_rename")
+    assert(df1.collect().size == 1 && df2.collect().size == 1 && df3.collect().size == 1)
+    checkAnswerUtil1(df1, df2, df3)
+
+    // insert new rows
+    sql("insert into test_rename values(named_struct('a', 3), " +
+        "named_struct('a', named_struct('b', 3)), named_struct('a', named_struct('b', " +
+        "named_struct('c', 3))), 2)")
+    df1 = sql("select str33 from test_rename")
+    df2 = sql("select str33.a11 from test_rename")
+    df3 = sql("select str33.a11.b from test_rename")
+    assert(df1.collect().size == 2 && df2.collect().size == 2 && df3.collect().size == 2)
+    checkAnswerUtil2(df1, df2, df3)
+
+    sql("alter table test_rename change str33 str33 struct<a11:struct<b11:int>>")
+    sql("alter table test_rename change str33 str33 struct<a22:struct<b11:int>>")
+    df1 = sql("select str33 from test_rename")
+    df2 = sql("select str33.a22 from test_rename")
+    df3 = sql("select str33.a22.b11 from test_rename")
+    assert(df1.collect().size == 2 && df2.collect().size == 2 && df3.collect().size == 2)
+    checkAnswerUtil2(df1, df2, df3)
+
+    val desc = sql("desc table test_rename").collect()
+    assert(desc(0)(0).equals("str1"))
+    assert(desc(1)(0).equals("str33"))
+    assert(desc(1)(1).equals("struct<a22:struct<b11:int>>"))
+    assert(desc(2)(0).equals("str3"))

Review comment:
       I think in this test case, since all the operations are done here, you can add one more assert that fetches the schema evolution entries and checks them in the same order as the alter rename operations performed above.

##########
File path: integration/spark/src/test/scala/org/apache/spark/carbondata/restructure/vectorreader/AlterTableColumnRenameTestCase.scala
##########
@@ -48,6 +52,262 @@ class AlterTableColumnRenameTestCase extends QueryTest with BeforeAndAfterAll {
     sql("drop table simple_table")
   }
 
+  test("Rename more than one column at a time in one operation") {
+    sql("drop table if exists test_rename")
+    sql("CREATE TABLE test_rename (str struct<a:struct<b:int, d:int>, c:int>) STORED AS carbondata")
+    sql("insert into test_rename values(named_struct('a11',named_struct('b2',12,'d',12), 'c', 12))")
+    sql("alter table test_rename change str str22 struct<a11:struct<b2:int, d:int>, c:int>")
+    sql("insert into test_rename values(named_struct('a11',named_struct('b2',24,'d',24), 'c', 24))")
+
+    val rows = sql("select str22.a11.b2 from test_rename").collect()
+    assert(rows(0).equals(Row(12)) && rows(1).equals(Row(24)))
+    // check if old column names are still present
+    val ex1 = intercept[AnalysisException] {
+      sql("select str from test_rename").show(false)
+    }
+    assert(ex1.getMessage
+      .contains("cannot resolve '`str`' given input columns: [test_rename.str22]"))
+
+    val ex2 = intercept[AnalysisException] {
+      sql("select str.a from test_rename").show(false)
+    }
+    assert(ex2.getMessage
+      .contains("cannot resolve '`str.a`' given input columns: [test_rename.str22]"))
+
+    // check un-altered columns
+    val rows1 = sql("select str22.c from test_rename").collect()
+    val rows2 = sql("select str22.a11.d from test_rename").collect()
+    assert(rows1.sameElements(Array(Row(12), Row(24))))
+    assert(rows2.sameElements(Array(Row(12), Row(24))))
+  }
+
+  test("rename complex columns with invalid structure/duplicate names/Map type") {
+    sql("drop table if exists test_rename")
+    sql(
+      "CREATE TABLE test_rename (str struct<a:int,b:long>, str2 struct<a:int,b:long>, map1 " +
+      "map<string, string>, str3 struct<a:int, b:map<string, string>>) STORED AS carbondata")
+
+    val ex1 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change str str struct<a:array<int>,b:long>")
+    }
+    assert(ex1.getMessage
+      .contains(
+        "column rename operation failed: because datatypes of complex children cannot be altered"))
+
+    val ex2 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change str str struct<a:int,b:long,c:int>")
+    }
+    assert(ex2.getMessage
+      .contains(
+        "column rename operation failed: because number of children of old and new complex " +
+        "columns are not the same"))
+
+    val ex3 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change str str int")
+    }
+    assert(ex3.getMessage
+      .contains(
+        "column rename operation failed: because old and new complex columns are not compatible " +
+        "in structure"))
+
+    val ex4 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change str str struct<a:int,a:long>")
+    }
+    assert(ex4.getMessage
+      .contains(
+        "column rename operation failed: because duplicate columns are present"))
+
+    val ex5 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change str str2 struct<a:int,b:long>")
+    }
+    assert(ex5.getMessage
+      .contains(
+        "Column Rename Operation failed. New column name str2 already exists in table test_rename"))
+
+    val ex6 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change map1 map2 map<string, struct<a:int>>")
+    }
+    assert(ex6.getMessage
+      .contains("rename operation failed: cannot alter map type column"))
+
+    val ex7 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change str3 str33 struct<a:int, bc:map<string, string>>")
+    }
+    assert(ex7.getMessage
+      .contains(
+        "rename operation failed: cannot alter complex structure that includes map type column"))
+  }
+
+  def checkAnswerUtil1(df1: DataFrame, df2: DataFrame, df3: DataFrame) {
+    checkAnswer(df1, Seq(Row(Row(Row(2)))))
+    checkAnswer(df2, Seq(Row(Row(2))))
+    checkAnswer(df3, Seq(Row(2)))
+  }
+
+  def checkAnswerUtil2(df1: DataFrame, df2: DataFrame, df3: DataFrame) {
+    checkAnswer(df1, Seq(Row(Row(Row(2))), Row(Row(Row(3)))))
+    checkAnswer(df2, Seq(Row(Row(2)), Row(Row(3))))
+    checkAnswer(df3, Seq(Row(2), Row(3)))
+  }
+
+  test("test alter rename struct of (primitive/struct/array)") {
+    sql("drop table if exists test_rename")
+    sql("CREATE TABLE test_rename (str1 struct<a:int>, str2 struct<a:struct<b:int>>, str3 " +
+        "struct<a:struct<b:struct<c:int>>>, intfield int) STORED AS carbondata")
+    sql("insert into test_rename values(named_struct('a', 2), " +
+        "named_struct('a', named_struct('b', 2)), named_struct('a', named_struct('b', " +
+        "named_struct('c', 2))), 1)")
+
+    // rename parent column from str2 to str22 and read old rows
+    sql("alter table test_rename change str2 str22 struct<a:struct<b:int>>")
+    var df1 = sql("select str22 from test_rename")
+    var df2 = sql("select str22.a from test_rename")
+    var df3 = sql("select str22.a.b from test_rename")
+    assert(df1.collect().size == 1 && df2.collect().size == 1 && df3.collect().size == 1)
+    checkAnswerUtil1(df1, df2, df3)
+
+    // rename child column from a to a11
+    sql("alter table test_rename change str22 str22 struct<a11:struct<b:int>>")
+    df1 = sql("select str22 from test_rename")
+    df2 = sql("select str22.a11 from test_rename")
+    df3 = sql("select str22.a11.b from test_rename")
+    assert(df1.collect().size == 1 && df2.collect().size == 1 && df3.collect().size == 1)
+    checkAnswerUtil1(df1, df2, df3)
+
+    // rename parent column from str22 to str33
+    sql("alter table test_rename change str22 str33 struct<a11:struct<b:int>>")
+    df1 = sql("select str33 from test_rename")
+    df2 = sql("select str33.a11 from test_rename")
+    df3 = sql("select str33.a11.b from test_rename")
+    assert(df1.collect().size == 1 && df2.collect().size == 1 && df3.collect().size == 1)
+    checkAnswerUtil1(df1, df2, df3)
+
+    // insert new rows
+    sql("insert into test_rename values(named_struct('a', 3), " +
+        "named_struct('a', named_struct('b', 3)), named_struct('a', named_struct('b', " +
+        "named_struct('c', 3))), 2)")
+    df1 = sql("select str33 from test_rename")
+    df2 = sql("select str33.a11 from test_rename")
+    df3 = sql("select str33.a11.b from test_rename")
+    assert(df1.collect().size == 2 && df2.collect().size == 2 && df3.collect().size == 2)
+    checkAnswerUtil2(df1, df2, df3)
+
+    sql("alter table test_rename change str33 str33 struct<a11:struct<b11:int>>")
+    sql("alter table test_rename change str33 str33 struct<a22:struct<b11:int>>")
+    df1 = sql("select str33 from test_rename")
+    df2 = sql("select str33.a22 from test_rename")
+    df3 = sql("select str33.a22.b11 from test_rename")
+    assert(df1.collect().size == 2 && df2.collect().size == 2 && df3.collect().size == 2)
+    checkAnswerUtil2(df1, df2, df3)
+
+    val desc = sql("desc table test_rename").collect()
+    assert(desc(0)(0).equals("str1"))
+    assert(desc(1)(0).equals("str33"))
+    assert(desc(1)(1).equals("struct<a22:struct<b11:int>>"))
+    assert(desc(2)(0).equals("str3"))
+  }
+
+  test("test alter rename array of (primitive/array/struct)") {
+    sql("drop table if exists test_rename")
+    sql(
+      "CREATE TABLE test_rename (arr1 array<int>, arr2 array<array<int>>, arr3 array<string>, " +
+      "arr4 array<struct<a:int>>) STORED AS carbondata")
+    sql(
+      "insert into test_rename values (array(1,2,3), array(array(1,2),array(3,4)), array('hello'," +
+      "'world'), array(named_struct('a',45)))")
+
+    sql("alter table test_rename change arr1 arr11 array<int>")
+    val df1 = sql("select arr11 from test_rename")
+    assert(df1.collect.size == 1)
+    checkAnswer(df1, Seq(Row(make(Array(1, 2, 3)))))
+
+    sql("alter table test_rename change arr2 arr22 array<array<int>>")
+    val df2 = sql("select arr22 from test_rename")
+    assert(df2.collect.size == 1)
+    checkAnswer(df2, Seq(Row(make(Array(make(Array(1, 2)), make(Array(3, 4)))))))
+
+    sql("alter table test_rename change arr3 arr33 array<string>")
+    val df3 = sql("select arr33 from test_rename")
+    assert(df3.collect.size == 1)
+    checkAnswer(sql("select arr33 from test_rename"), Seq(Row(make(Array("hello", "world")))))
+
+    sql("alter table test_rename change arr4 arr44 array<struct<a:int>>")
+    sql("alter table test_rename change arr44 arr44 array<struct<a11:int>>")
+
+    val df4 = sql("select arr44.a11 from test_rename")
+    assert(df4.collect.size == 1)
+    checkAnswer(df4, Seq(Row(make(Array(45)))))
+
+    // test for new inserted row
+    sql(
+      "insert into test_rename values (array(11,22,33), array(array(11,22),array(33,44)), array" +
+      "('hello11', 'world11'), array(named_struct('a',4555)))")
+    val rows = sql("select arr11, arr22, arr33, arr44.a11 from test_rename").collect
+    assert(rows.size == 2)
+    val secondRow = rows(1)
+    assert(secondRow(0).equals(make(Array(11, 22, 33))) &&
+           secondRow(1).equals(make(Array(make(Array(11, 22)), make(Array(33, 44))))) &&
+           secondRow(2).equals(make(Array("hello11", "world11"))))
+  }
+
+  test("validate alter change datatype for complex children columns") {
+    sql("drop table if exists test_rename")
+    sql(
+      "CREATE TABLE test_rename (str struct<a:int,b:long>) STORED AS carbondata")
+
+    val ex1 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change str str struct<a:long,b:long>")
+    }
+    assert(ex1.getMessage
+      .contains(
+        "column rename operation failed: because datatypes of complex children cannot be altered"))
+  }
+
+  test("test change comment in case of complex types") {
+    sql("drop table if exists test_rename")
+    sql(
+      "CREATE TABLE test_rename (str struct<a:int> comment 'comment') STORED AS carbondata")
+    sql("alter table test_rename change str str struct<a:int> comment 'new comment'")
+    var describe = sql("desc table test_rename")
+    var count = describe.filter("col_name='str' and comment = 'new comment'").count()
+    assertResult(1)(count)
+
+    sql("alter table test_rename change str str struct<a1:int> comment 'new comment 2'")
+    describe = sql("desc table test_rename")
+    count = describe.filter("col_name='str' and comment = 'new comment 2'").count()
+    assertResult(1)(count)
+  }
+
+  test("check schema evolution after renaming complex column") {
+    sql("drop table if exists test_rename")
+    sql(
+      "CREATE TABLE test_rename (str1 struct<a:int>) STORED AS carbondata")
+    sql("alter table test_rename change str1 str2 struct<abc:int>")
+    val carbonTable = CarbonEnv.getCarbonTable(None, "test_rename")(sqlContext.sparkSession)
+    val schemaEvolutionList = carbonTable.getTableInfo
+      .getFactTable
+      .getSchemaEvolution()
+      .getSchemaEvolutionEntryList()
+    var addedColumns = Seq[ColumnSchema]()
+    var removedColumns = Seq[ColumnSchema]()
+    for (i <- 0 until schemaEvolutionList.size()) {
+      addedColumns ++=
+      JavaConverters
+        .asScalaIteratorConverter(schemaEvolutionList.get(i).getAdded.iterator())
+        .asScala
+        .toSeq
+
+      removedColumns ++=
+      JavaConverters
+        .asScalaIteratorConverter(schemaEvolutionList.get(i).getRemoved.iterator())
+        .asScala
+        .toSeq
+    }
+    assert(addedColumns.size == 1 && removedColumns.size == 1)
+    assert(addedColumns(0).getColumnName.equals("str2"))
+    assert(removedColumns(0).getColumnName.equals("str1"))
+  }
+

Review comment:
       Please also test the alter revert case once (i.e., when the operation fails partway) and confirm the behavior.

##########
File path: integration/spark/src/test/scala/org/apache/spark/carbondata/restructure/vectorreader/AlterTableColumnRenameTestCase.scala
##########
@@ -48,6 +52,262 @@ class AlterTableColumnRenameTestCase extends QueryTest with BeforeAndAfterAll {
     sql("drop table simple_table")
   }
 
+  test("Rename more than one column at a time in one operation") {
+    sql("drop table if exists test_rename")
+    sql("CREATE TABLE test_rename (str struct<a:struct<b:int, d:int>, c:int>) STORED AS carbondata")
+    sql("insert into test_rename values(named_struct('a11',named_struct('b2',12,'d',12), 'c', 12))")

Review comment:
       For validation test cases there is no need to insert data and run queries; you can remove those statements, which will save test execution time.

##########
File path: integration/spark/src/test/scala/org/apache/spark/carbondata/restructure/vectorreader/AlterTableColumnRenameTestCase.scala
##########
@@ -48,6 +52,262 @@ class AlterTableColumnRenameTestCase extends QueryTest with BeforeAndAfterAll {
     sql("drop table simple_table")
   }
 
+  test("Rename more than one column at a time in one operation") {
+    sql("drop table if exists test_rename")
+    sql("CREATE TABLE test_rename (str struct<a:struct<b:int, d:int>, c:int>) STORED AS carbondata")
+    sql("insert into test_rename values(named_struct('a11',named_struct('b2',12,'d',12), 'c', 12))")
+    sql("alter table test_rename change str str22 struct<a11:struct<b2:int, d:int>, c:int>")
+    sql("insert into test_rename values(named_struct('a11',named_struct('b2',24,'d',24), 'c', 24))")
+
+    val rows = sql("select str22.a11.b2 from test_rename").collect()
+    assert(rows(0).equals(Row(12)) && rows(1).equals(Row(24)))
+    // check if old column names are still present
+    val ex1 = intercept[AnalysisException] {
+      sql("select str from test_rename").show(false)
+    }
+    assert(ex1.getMessage
+      .contains("cannot resolve '`str`' given input columns: [test_rename.str22]"))
+
+    val ex2 = intercept[AnalysisException] {
+      sql("select str.a from test_rename").show(false)
+    }
+    assert(ex2.getMessage
+      .contains("cannot resolve '`str.a`' given input columns: [test_rename.str22]"))
+
+    // check un-altered columns
+    val rows1 = sql("select str22.c from test_rename").collect()
+    val rows2 = sql("select str22.a11.d from test_rename").collect()
+    assert(rows1.sameElements(Array(Row(12), Row(24))))
+    assert(rows2.sameElements(Array(Row(12), Row(24))))
+  }
+
+  test("rename complex columns with invalid structure/duplicate names/Map type") {
+    sql("drop table if exists test_rename")
+    sql(
+      "CREATE TABLE test_rename (str struct<a:int,b:long>, str2 struct<a:int,b:long>, map1 " +
+      "map<string, string>, str3 struct<a:int, b:map<string, string>>) STORED AS carbondata")
+
+    val ex1 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change str str struct<a:array<int>,b:long>")
+    }
+    assert(ex1.getMessage
+      .contains(
+        "column rename operation failed: because datatypes of complex children cannot be altered"))
+
+    val ex2 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change str str struct<a:int,b:long,c:int>")
+    }
+    assert(ex2.getMessage
+      .contains(
+        "column rename operation failed: because number of children of old and new complex " +
+        "columns are not the same"))
+
+    val ex3 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change str str int")
+    }
+    assert(ex3.getMessage
+      .contains(
+        "column rename operation failed: because old and new complex columns are not compatible " +
+        "in structure"))
+
+    val ex4 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change str str struct<a:int,a:long>")
+    }
+    assert(ex4.getMessage
+      .contains(
+        "column rename operation failed: because duplicate columns are present"))
+
+    val ex5 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change str str2 struct<a:int,b:long>")
+    }
+    assert(ex5.getMessage
+      .contains(
+        "Column Rename Operation failed. New column name str2 already exists in table test_rename"))
+
+    val ex6 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change map1 map2 map<string, struct<a:int>>")
+    }
+    assert(ex6.getMessage
+      .contains("rename operation failed: cannot alter map type column"))
+
+    val ex7 = intercept[ProcessMetaDataException] {
+      sql("alter table test_rename change str3 str33 struct<a:int, bc:map<string, string>>")
+    }
+    assert(ex7.getMessage
+      .contains(
+        "rename operation failed: cannot alter complex structure that includes map type column"))
+  }
+
+  def checkAnswerUtil1(df1: DataFrame, df2: DataFrame, df3: DataFrame) {
+    checkAnswer(df1, Seq(Row(Row(Row(2)))))
+    checkAnswer(df2, Seq(Row(Row(2))))
+    checkAnswer(df3, Seq(Row(2)))
+  }
+
+  def checkAnswerUtil2(df1: DataFrame, df2: DataFrame, df3: DataFrame) {
+    checkAnswer(df1, Seq(Row(Row(Row(2))), Row(Row(Row(3)))))
+    checkAnswer(df2, Seq(Row(Row(2)), Row(Row(3))))
+    checkAnswer(df3, Seq(Row(2), Row(3)))
+  }
+
+  test("test alter rename struct of (primitive/struct/array)") {
+    sql("drop table if exists test_rename")
+    sql("CREATE TABLE test_rename (str1 struct<a:int>, str2 struct<a:struct<b:int>>, str3 " +
+        "struct<a:struct<b:struct<c:int>>>, intfield int) STORED AS carbondata")
+    sql("insert into test_rename values(named_struct('a', 2), " +
+        "named_struct('a', named_struct('b', 2)), named_struct('a', named_struct('b', " +
+        "named_struct('c', 2))), 1)")
+
+    // rename parent column from str2 to str22 and read old rows
+    sql("alter table test_rename change str2 str22 struct<a:struct<b:int>>")
+    var df1 = sql("select str22 from test_rename")
+    var df2 = sql("select str22.a from test_rename")
+    var df3 = sql("select str22.a.b from test_rename")
+    assert(df1.collect().size == 1 && df2.collect().size == 1 && df3.collect().size == 1)
+    checkAnswerUtil1(df1, df2, df3)
+
+    // rename child column from a to a11
+    sql("alter table test_rename change str22 str22 struct<a11:struct<b:int>>")
+    df1 = sql("select str22 from test_rename")
+    df2 = sql("select str22.a11 from test_rename")
+    df3 = sql("select str22.a11.b from test_rename")
+    assert(df1.collect().size == 1 && df2.collect().size == 1 && df3.collect().size == 1)
+    checkAnswerUtil1(df1, df2, df3)
+
+    // rename parent column from str22 to str33
+    sql("alter table test_rename change str22 str33 struct<a11:struct<b:int>>")
+    df1 = sql("select str33 from test_rename")
+    df2 = sql("select str33.a11 from test_rename")
+    df3 = sql("select str33.a11.b from test_rename")
+    assert(df1.collect().size == 1 && df2.collect().size == 1 && df3.collect().size == 1)
+    checkAnswerUtil1(df1, df2, df3)
+
+    // insert new rows
+    sql("insert into test_rename values(named_struct('a', 3), " +
+        "named_struct('a', named_struct('b', 3)), named_struct('a', named_struct('b', " +
+        "named_struct('c', 3))), 2)")
+    df1 = sql("select str33 from test_rename")
+    df2 = sql("select str33.a11 from test_rename")
+    df3 = sql("select str33.a11.b from test_rename")
+    assert(df1.collect().size == 2 && df2.collect().size == 2 && df3.collect().size == 2)
+    checkAnswerUtil2(df1, df2, df3)
+
+    sql("alter table test_rename change str33 str33 struct<a11:struct<b11:int>>")
+    sql("alter table test_rename change str33 str33 struct<a22:struct<b11:int>>")
+    df1 = sql("select str33 from test_rename")
+    df2 = sql("select str33.a22 from test_rename")
+    df3 = sql("select str33.a22.b11 from test_rename")
+    assert(df1.collect().size == 2 && df2.collect().size == 2 && df3.collect().size == 2)
+    checkAnswerUtil2(df1, df2, df3)
+
+    val desc = sql("desc table test_rename").collect()
+    assert(desc(0)(0).equals("str1"))
+    assert(desc(1)(0).equals("str33"))
+    assert(desc(1)(1).equals("struct<a22:struct<b11:int>>"))
+    assert(desc(2)(0).equals("str3"))
+  }
+
+  test("test alter rename array of (primitive/array/struct)") {
+    sql("drop table if exists test_rename")
+    sql(
+      "CREATE TABLE test_rename (arr1 array<int>, arr2 array<array<int>>, arr3 array<string>, " +
+      "arr4 array<struct<a:int>>) STORED AS carbondata")
+    sql(
+      "insert into test_rename values (array(1,2,3), array(array(1,2),array(3,4)), array('hello'," +
+      "'world'), array(named_struct('a',45)))")
+
+    sql("alter table test_rename change arr1 arr11 array<int>")

Review comment:
       Remove this SQL statement if it is not used, and check all the other places for similar unused statements and remove them as well.




--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[hidden email]



[GitHub] [carbondata] akkio-97 commented on a change in pull request #4129: [CARBONDATA-4179] Support renaming of complex columns (array/struct)

GitBox
In reply to this post by GitBox

akkio-97 commented on a change in pull request #4129:
URL: https://github.com/apache/carbondata/pull/4129#discussion_r638736579



##########
File path: core/src/main/java/org/apache/carbondata/core/scan/executor/util/RestructureUtil.java
##########
@@ -50,6 +52,10 @@
  * Utility class for restructuring
  */
 public class RestructureUtil {
+  // if table column is of complex type- this look up stores the column id of the parent and their
+  // children. This helps to determine the existence of incoming query column by matching based
+  // on id.
+  private static Map<String, String> existingTableColumnIDMap = new HashMap<>();

Review comment:
       In createDimensionInfoAndGetCurrentBlockQueryDimension(), if you look, we have two for loops: 1) over tableComplexDimensions and 2) over the query dimensions.
   The table and query dimensions are compared using the map. If you don't re-initialize it for every new incoming table dimension, the map may still contain unwanted old dimensions.
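The re-initialization concern above can be sketched as follows. This is an illustrative Java snippet, not the actual RestructureUtil code; the class and method names are hypothetical, and it only shows why a shared lookup map must be cleared before each new table dimension is processed.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: a shared lookup map populated per table dimension.
// Without the clear() at the start of fillMap(), entries from a previous
// dimension would leak into the comparison of the next one.
public class LookupSketch {
  private static final Map<String, String> columnIdMap = new HashMap<>();

  // Populate the map for one table dimension (parent id plus child ids).
  static void fillMap(String parentId, String[] childIds) {
    columnIdMap.clear(); // re-initialize: drop entries of the previous dimension
    columnIdMap.put(parentId, parentId);
    for (String child : childIds) {
      columnIdMap.put(child, parentId);
    }
  }

  // A query column matches only if its id belongs to the current dimension.
  static boolean matches(String queryColumnId) {
    return columnIdMap.containsKey(queryColumnId);
  }
}
```

After filling the map for a second dimension, ids of the first dimension no longer match, which is the behavior the reviewer is asking to preserve.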

##########
File path: core/src/main/java/org/apache/carbondata/core/scan/executor/util/RestructureUtil.java
##########
@@ -154,11 +160,21 @@
     return presentDimension;
   }
 
+  public static void fillExistingTableColumnIDMap(CarbonDimension tableColumn) {

Review comment:
       Yes, we don't get the child-related dimensions directly. For example, suppose the query is on struct.id; this name may not match the table dimension name if the schema has evolved.
   So we first populate the map with the table dimensions (including their children). Matching then happens based on the column id for each query column name.
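The id-based matching described above — record the table dimension and all of its children, then resolve each query column by its stable id rather than by its (possibly renamed) name — might look roughly like this. All names here are hypothetical; this is a sketch of the idea, not the CarbonData implementation.

```java
import java.util.List;
import java.util.Map;

// Illustrative: match query columns to table columns by column id, so that
// a rename (name changes, id stays the same) still resolves correctly.
public class IdMatchSketch {
  static class Column {
    final String id;             // stable column id, survives renames
    final String name;           // current name, may change on alter rename
    final List<Column> children; // nested children for complex types
    Column(String id, String name, List<Column> children) {
      this.id = id; this.name = name; this.children = children;
    }
  }

  // Recursively record the id of the dimension and all of its children.
  static void fill(Column col, Map<String, String> idMap) {
    idMap.put(col.id, col.name);
    for (Column child : col.children) {
      fill(child, idMap);
    }
  }

  // A query column exists in the table if its id is in the map,
  // regardless of what the column is currently named.
  static boolean existsById(Map<String, String> idMap, String queryColumnId) {
    return idMap.containsKey(queryColumnId);
  }
}
```

With this scheme, a query on a renamed child still finds its dimension because the lookup key is the id, not the name.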







[GitHub] [carbondata] akkio-97 commented on a change in pull request #4129: [CARBONDATA-4179] Support renaming of complex columns (array/struct)

GitBox
In reply to this post by GitBox

akkio-97 commented on a change in pull request #4129:
URL: https://github.com/apache/carbondata/pull/4129#discussion_r638737123



##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableColRenameDataTypeChangeCommand.scala
##########
@@ -279,6 +404,23 @@ private[sql] case class CarbonAlterTableColRenameDataTypeChangeCommand(
     Seq.empty
   }
 
+  private def isComplexChild(columnSchema: ColumnSchema): Boolean = {

Review comment:
       isComplexChild returns true if the column is a child of any complex column.
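The check being discussed — whether a column schema is the child of some complex column — can be sketched as below. This is a hypothetical helper, not the actual ColumnSchema API; it assumes (as CarbonData does for complex children) that a child column's internal name carries its parent as a dotted prefix.

```java
// Illustrative sketch: a column counts as a complex child if its internal
// name encodes a parent, e.g. "str.a" is a child of "str". Hypothetical
// representation; the real code inspects the ColumnSchema instead.
public class ComplexChildSketch {
  static boolean isComplexChild(String internalColumnName) {
    return internalColumnName.contains(".");
  }
}
```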







[GitHub] [carbondata] akkio-97 commented on a change in pull request #4129: [CARBONDATA-4179] Support renaming of complex columns (array/struct)

GitBox
In reply to this post by GitBox

akkio-97 commented on a change in pull request #4129:
URL: https://github.com/apache/carbondata/pull/4129#discussion_r638737123



##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableColRenameDataTypeChangeCommand.scala
##########
@@ -279,6 +404,23 @@ private[sql] case class CarbonAlterTableColRenameDataTypeChangeCommand(
     Seq.empty
   }
 
+  private def isComplexChild(columnSchema: ColumnSchema): Boolean = {

Review comment:
       isComplexChild returns true if the column is a child of any complex column. Its functionality is not the same.





