ravipesala commented on a change in pull request #3436: [WIP]Geospatial Support: Modified to create and load the table with a nonschema dimension sort column
URL: https://github.com/apache/carbondata/pull/3436#discussion_r347233273
##########
File path: integration/spark-common/src/main/scala/org/apache/spark/sql/catalyst/CarbonDDLSqlParser.scala
##########
@@ -264,8 +307,152 @@ abstract class CarbonDDLSqlParser extends AbstractCarbonSparkSQLParser {
s"Carbon Implicit column ${col.column} is not allowed in" +
s" column name while creating table")
}
+ }
+ }
+
+ /**
+ * The method parses, validates and processes the index_handler property.
+ * @param tableProperties Table properties
+ * @param tableFields Sequence of table fields
+ * @return <Seq[Field]> Sequence of table fields
+ */
+ private def processIndexProperty(tableProperties: mutable.Map[String, String],
Review comment:
I think adding validations at this level is cumbersome; it would be better to create the TableInfo object first and then add the validations at the IndexHandler level.
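
A minimal sketch of what that suggestion could look like, assuming `TableInfo` is the usual `org.apache.carbondata.core.metadata.schema.table.TableInfo`; the `IndexHandlerValidator` object and its `validate` method are hypothetical names introduced here for illustration, not existing CarbonData APIs:

```scala
import org.apache.carbondata.core.metadata.schema.table.TableInfo

// Hypothetical sketch: validate the index_handler property only after the
// TableInfo has been fully built, instead of inside CarbonDDLSqlParser.
object IndexHandlerValidator {

  def validate(tableInfo: TableInfo): Unit = {
    // Table properties are stored on the fact table schema as a java.util.Map.
    val properties = tableInfo.getFactTable.getTableProperties
    Option(properties.get("index_handler")).foreach { handlerNames =>
      handlerNames.split(",").map(_.trim).foreach { name =>
        if (name.isEmpty) {
          // The real code would likely throw a CarbonData-specific exception
          // (e.g. MalformedCarbonCommandException); a plain
          // IllegalArgumentException keeps this sketch self-contained.
          throw new IllegalArgumentException(
            "index_handler property must not contain empty handler names")
        }
        // With the complete schema at hand, further checks (for example that
        // the handler's source columns actually exist as table columns) can
        // live here rather than in the SQL parser.
      }
    }
  }
}
```

The point of the suggestion is that once the TableInfo exists the full schema is available, so column-level checks do not have to be duplicated inside the DDL parser.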