Re: Questions about dictionary-encoded column and MDK
Posted by Jin Zhou on
URL: http://apache-carbondata-dev-mailing-list-archive.168.s1.nabble.com/Questions-about-dictionary-encoded-column-and-MDK-tp9457p9484.html
Exception info:
scala> carbon.sql("create table if not exists test(a integer, b integer, c integer) STORED BY 'carbondata'");
org.apache.carbondata.spark.exception.MalformedCarbonCommandException: Table default.test can not be created without key columns. Please use DICTIONARY_INCLUDE or DICTIONARY_EXCLUDE to set at least one key column if all specified columns are numeric types
at org.apache.spark.sql.catalyst.CarbonDDLSqlParser.prepareTableModel(CarbonDDLSqlParser.scala:240)
at org.apache.spark.sql.parser.CarbonSqlAstBuilder.visitCreateTable(CarbonSparkSqlParser.scala:162)
at org.apache.spark.sql.parser.CarbonSqlAstBuilder.visitCreateTable(CarbonSparkSqlParser.scala:60)
at org.apache.spark.sql.catalyst.parser.SqlBaseParser$CreateTableContext.accept(SqlBaseParser.java:503)
at org.antlr.v4.runtime.tree.AbstractParseTreeVisitor.visit(AbstractParseTreeVisitor.java:42)
at org.apache.spark.sql.catalyst.parser.AstBuilder$$anonfun$visitSingleStatement$1.apply(AstBuilder.scala:66)
at org.apache.spark.sql.catalyst.parser.AstBuilder$$anonfun$visitSingleStatement$1.apply(AstBuilder.scala:66)
at org.apache.spark.sql.catalyst.parser.ParserUtils$.withOrigin(ParserUtils.scala:93)
at org.apache.spark.sql.catalyst.parser.AstBuilder.visitSingleStatement(AstBuilder.scala:65)
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser$$anonfun$parsePlan$1.apply(ParseDriver.scala:54)
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser$$anonfun$parsePlan$1.apply(ParseDriver.scala:53)
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:82)
at org.apache.spark.sql.parser.CarbonSparkSqlParser.parse(CarbonSparkSqlParser.scala:56)
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(ParseDriver.scala:53)
at org.apache.spark.sql.parser.CarbonSparkSqlParser.parsePlan(CarbonSparkSqlParser.scala:46)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
... 50 elided
At first I didn't notice the phrase “if all specified columns are numeric types” in the exception message. So I ran more tests and found that the issue only occurs when all columns are numeric types.
Below are the cases I tested:
case 1:
carbon.sql("create table if not exists test(a string, b string, c string) STORED BY 'carbondata' 'DICTIONARY_EXCLUDE'='a,b,c' ");
====> ok, no dictionary column
case 2:
carbon.sql("create table if not exists test(a integer, b integer, c integer) STORED BY 'carbondata'");
====> fail
case 3:
carbon.sql("create tale if not exists test(a integer, b integer, c integer) STORED BY 'carbondata' TBLPROPERTIES ('DICTIONARY_INCLUDE'='a')");
====> ok, at least one dictionary column
One small problem with case 2 is that there is no proper dictionary column to choose when all of the columns have high cardinality.
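For comparison, here is one more statement I have not tested; it is only my assumption based on the wording of the exception, namely that the check rejects a table only when every column is numeric, so a table with at least one string column should be created without any TBLPROPERTIES:

carbon.sql("create table if not exists test2(a string, b integer, c integer) STORED BY 'carbondata'");
====> presumably ok, since not all columns are numeric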