[GitHub] carbondata pull request #1865: [CARBONDATA-2088][CARBONDATA-1516] It better ...

109 messages
[GitHub] carbondata pull request #1865: [CARBONDATA-2088][CARBONDATA-1516] Optimize s...

qiuchenjian-2
Github user kumarvishal09 commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1865#discussion_r164379370
 
    --- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/timeseries/TimeSeriesUtil.scala ---
    @@ -54,6 +55,67 @@ object TimeSeriesUtil {
         }
       }
     
    +  def getGranularityKey(dmProperties: Map[String, String]): String = {
    +
    +    for (granularity <- Granularity.values()) {
    +      if (dmProperties.get(granularity.getName).isDefined) {
    +        return granularity.getName
    +      }
    +    }
    +
    +    throw new CarbonIllegalArgumentException(
    +      s"${TIMESERIES.getName} should define time granularity")
    +  }
    +
    +  def validateTimeSeriesGranularity(
    +      dmProperties: Map[String, String],
    +      dmClassName: String): Boolean = {
    +    var count = 0
    +
    +    for (granularity <- Granularity.values()) {
    --- End diff --
   
    Break the loop once the granularity level is found in the DM properties.
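
    A minimal sketch of the early-exit lookup the reviewer asks for. The key names below are assumptions standing in for `Granularity.values()` in the PR, not the actual enum values:

    ```scala
    // Sketch: stop scanning as soon as a granularity key is found.
    // The property key names are illustrative assumptions.
    object GranularityLookup {
      val granularityKeys: Seq[String] = Seq(
        "year_granularity", "month_granularity", "day_granularity",
        "hour_granularity", "minute_granularity", "second_granularity")

      // `find` stops at the first key present in the DM properties,
      // which is the "break the loop" behaviour requested in the review.
      def getGranularityKey(dmProperties: Map[String, String]): Option[String] =
        granularityKeys.find(dmProperties.contains)
    }
    ```

    Returning `Option` instead of using an early `return` keeps the code idiomatic; the caller can still throw `CarbonIllegalArgumentException` on `None`.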


---

[GitHub] carbondata pull request #1865: [CARBONDATA-2088][CARBONDATA-1516] Optimize s...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user kumarvishal09 commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1865#discussion_r164381622
 
    --- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonCreateDataMapCommand.scala ---
    @@ -79,32 +79,27 @@ case class CarbonCreateDataMapCommand(
           }
           createPreAggregateTableCommands.flatMap(_.processMetadata(sparkSession))
         } else {
    -      val dataMapSchema = new DataMapSchema(dataMapName, dmClassName)
    -      dataMapSchema.setProperties(new java.util.HashMap[String, String](dmproperties.asJava))
    -      val dbName = CarbonEnv.getDatabaseName(tableIdentifier.database)(sparkSession)
    -      // upadting the parent table about dataschema
    -      PreAggregateUtil.updateMainTable(dbName, tableIdentifier.table, dataMapSchema, sparkSession)
    +      throw new UnsupportedDataMapException(dmClassName)
         }
         LOGGER.audit(s"DataMap $dataMapName successfully added to Table ${ tableIdentifier.table }")
         Seq.empty
       }
     
       override def processData(sparkSession: SparkSession): Seq[Row] = {
    -    if (dmClassName.equals("org.apache.carbondata.datamap.AggregateDataMapHandler") ||
    --- End diff --
   
    I think processMetadata will handle the exception if the class name mentioned in the CREATE DATAMAP statement is not valid. By the time the call reaches processData or undoMetadata, validation has already passed, so there is no need to add the class validation again.


---

[GitHub] carbondata pull request #1865: [CARBONDATA-2088][CARBONDATA-1516] Optimize s...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user kumarvishal09 commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1865#discussion_r164381676
 
    --- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonCreateDataMapCommand.scala ---
    @@ -79,32 +79,27 @@ case class CarbonCreateDataMapCommand(
           }
           createPreAggregateTableCommands.flatMap(_.processMetadata(sparkSession))
         } else {
    -      val dataMapSchema = new DataMapSchema(dataMapName, dmClassName)
    -      dataMapSchema.setProperties(new java.util.HashMap[String, String](dmproperties.asJava))
    -      val dbName = CarbonEnv.getDatabaseName(tableIdentifier.database)(sparkSession)
    -      // upadting the parent table about dataschema
    -      PreAggregateUtil.updateMainTable(dbName, tableIdentifier.table, dataMapSchema, sparkSession)
    +      throw new UnsupportedDataMapException(dmClassName)
         }
         LOGGER.audit(s"DataMap $dataMapName successfully added to Table ${ tableIdentifier.table }")
         Seq.empty
       }
     
       override def processData(sparkSession: SparkSession): Seq[Row] = {
    -    if (dmClassName.equals("org.apache.carbondata.datamap.AggregateDataMapHandler") ||
    -        dmClassName.equalsIgnoreCase("preaggregate")) {
    +    if (dmClassName.equalsIgnoreCase(PREAGGREGATE.getName) ||
    +      dmClassName.equalsIgnoreCase(TIMESERIES.getName)) {
           createPreAggregateTableCommands.flatMap(_.processData(sparkSession))
         } else {
    -      Seq.empty
    +      throw new UnsupportedDataMapException(dmClassName)
         }
       }
     
       override def undoMetadata(sparkSession: SparkSession, exception: Exception): Seq[Row] = {
    -    if (dmClassName.equals("org.apache.carbondata.datamap.AggregateDataMapHandler") ||
    -        dmClassName.equalsIgnoreCase("preaggregate")) {
    -      val timeHierarchyString = dmproperties.get(CarbonCommonConstants.TIMESERIES_HIERARCHY)
    +    if (dmClassName.equalsIgnoreCase(PREAGGREGATE.getName) ||
    --- End diff --
   
    Same as above comment


---

[GitHub] carbondata pull request #1865: [CARBONDATA-2088][CARBONDATA-1516] Optimize s...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user kumarvishal09 commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1865#discussion_r164381889
 
    --- Diff: integration/spark-common/src/main/java/org/apache/carbondata/spark/exception/UnsupportedDataMapException.java ---
    @@ -0,0 +1,36 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.carbondata.spark.exception;
    +
    +import static org.apache.carbondata.core.metadata.schema.table.DataMapClassName.PREAGGREGATE;
    +import static org.apache.carbondata.core.metadata.schema.table.DataMapClassName.TIMESERIES;
    +
    +/**
    + * Throw exception when using unsupported datamap type
    + */
    +public class UnsupportedDataMapException extends MalformedCarbonCommandException {
    +  /**
    +   * default serial version ID.
    +   */
    +  private static final long serialVersionUID = 1L;
    +
    +  public UnsupportedDataMapException(String dataMapType) {
    +    super("Unknown data map type " + dataMapType +
    +            ", Please use one of " + PREAGGREGATE + " or " + TIMESERIES);
    --- End diff --
   
    Please update the message: "Unknown data map type " + dataMapType


---

[GitHub] carbondata issue #1865: [CARBONDATA-2088][CARBONDATA-1516] Optimize syntax f...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user ravipesala commented on the issue:

    https://github.com/apache/carbondata/pull/1865
 
    SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3171/



---

[GitHub] carbondata pull request #1865: [CARBONDATA-2088][CARBONDATA-1516] Optimize s...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user xubo245 commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1865#discussion_r164400095
 
    --- Diff: integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala ---
    @@ -212,6 +215,60 @@ class TestPreAggCreateCommand extends QueryTest with BeforeAndAfterAll {
         sql("drop datamap agg0 on table maintable")
       }
     
    +  val timeSeries = TIMESERIES.getName
    +
    +  test("test PreAggregate table selection: create with preaggregate and hierarchy") {
    +    sql("drop table if exists maintabletime")
    +    sql(
    +      """
    +        | create table maintabletime(year int,month int,name string,salary int,dob string)
    +        | stored by 'carbondata'
    +        | tblproperties(
    +        |   'sort_scope'='Global_sort',
    +        |   'table_blocksize'='23',
    +        |   'sort_columns'='month,year,name')
    +      """.stripMargin)
    +    sql("insert into maintabletime select 10,11,'x',12,'2014-01-01 00:00:00'")
    +    sql(
    +      s"""
    +         | create datamap agg0 on table maintabletime
    +         | using 'preaggregate'
    +         | as select dob,name from maintabletime
    +         | group by dob,name
    +       """.stripMargin)
    +    val e = intercept[MalformedCarbonCommandException] {
    +      sql(
    +        s"""
    +           | create datamap agg1 on table maintabletime
    +           | using 'preaggregate'
    +           | DMPROPERTIES (
    +           |  'event_time'='dob',
    +           |  'second_granularity'='1')
    +           | as select dob,name from maintabletime
    +           | group by dob,name
    +       """.stripMargin)
    +    }
    +    assert(e.getMessage.contains(s"$timeSeries keyword missing"))
    +    sql("drop table if exists maintabletime")
    +  }
    +
    +  test("test pre agg create table 21: using") {
    +    sql("drop datamap agg0 on table maintable")
    +
    +    val e: Exception = intercept[Exception] {
    +      sql(
    +        """
    +          | create datamap agg0 on table mainTable
    +          | using 'abc'
    +          | as select column3, sum(column3),column5, sum(column5)
    +          | from maintable
    +          | group by column3,column5,column2
    +        """.stripMargin)
    +    }
    +    assert(e.getMessage.contains(
    +      s"Unknown data map type abc, Please use one of $PREAGGREGATE or $TIMESERIES"))
    --- End diff --
   
    I think we should remind the user how to use it. We can print out all the data map types. What do you think?
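
    A sketch of what is being suggested, building the error message by enumerating every supported data map type instead of hard-coding two names. The type names here are assumptions mirroring `DataMapClassName` in the PR:

    ```scala
    // Sketch: derive the "please use one of ..." list from a single
    // source of truth so new types are reported automatically.
    // The supported names are assumptions, not the real enum.
    object DataMapTypes {
      val supported: Seq[String] = Seq("preaggregate", "timeseries")

      def unsupportedMessage(given: String): String =
        s"Unknown data map type $given, please use one of: ${supported.mkString(", ")}"
    }
    ```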


---

[GitHub] carbondata pull request #1865: [CARBONDATA-2088][CARBONDATA-1516] Optimize s...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user xubo245 commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1865#discussion_r164400397
 
    --- Diff: integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateTableSelection.scala ---
    @@ -277,12 +281,21 @@ test("test PreAggregate table selection with timeseries and normal together") {
         sql(
           "create datamap agg0 on table maintabletime using 'preaggregate' as select dob,name from " +
           "maintabletime group by dob,name")
    -    sql(
    -      "create datamap agg1 on table maintabletime using 'preaggregate' DMPROPERTIES ('timeseries" +
    -      ".eventTime'='dob', 'timeseries.hierarchy'='hour=1,day=1,month=1,year=1') as select dob," +
    -      "name from maintabletime group by dob,name")
    +
    +  sql(
    +    s"""
    +       | create datamap agg1_year on table maintabletime
    +       | using '$timeSeries'
    +       | DMPROPERTIES (
    +       | 'event_time'='dob',
    +       | 'year_granularity'='1')
    --- End diff --
   
    OK. There are some test cases where the value is 2; I will add some test cases that use other values.


---

[GitHub] carbondata pull request #1865: [CARBONDATA-2088][CARBONDATA-1516] Optimize s...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user xubo245 commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1865#discussion_r164409707
 
    --- Diff: integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala ---
    @@ -212,6 +215,60 @@ class TestPreAggCreateCommand extends QueryTest with BeforeAndAfterAll {
         sql("drop datamap agg0 on table maintable")
       }
     
    +  val timeSeries = TIMESERIES.getName
    +
    +  test("test PreAggregate table selection: create with preaggregate and hierarchy") {
    +    sql("drop table if exists maintabletime")
    +    sql(
    +      """
    +        | create table maintabletime(year int,month int,name string,salary int,dob string)
    +        | stored by 'carbondata'
    +        | tblproperties(
    +        |   'sort_scope'='Global_sort',
    +        |   'table_blocksize'='23',
    +        |   'sort_columns'='month,year,name')
    +      """.stripMargin)
    +    sql("insert into maintabletime select 10,11,'x',12,'2014-01-01 00:00:00'")
    +    sql(
    +      s"""
    +         | create datamap agg0 on table maintabletime
    +         | using 'preaggregate'
    +         | as select dob,name from maintabletime
    +         | group by dob,name
    +       """.stripMargin)
    +    val e = intercept[MalformedCarbonCommandException] {
    +      sql(
    +        s"""
    +           | create datamap agg1 on table maintabletime
    +           | using 'preaggregate'
    +           | DMPROPERTIES (
    +           |  'event_time'='dob',
    +           |  'second_granularity'='1')
    +           | as select dob,name from maintabletime
    +           | group by dob,name
    +       """.stripMargin)
    +    }
    +    assert(e.getMessage.contains(s"$timeSeries keyword missing"))
    +    sql("drop table if exists maintabletime")
    +  }
    +
    +  test("test pre agg create table 21: using") {
    +    sql("drop datamap agg0 on table maintable")
    +
    +    val e: Exception = intercept[Exception] {
    +      sql(
    +        """
    +          | create datamap agg0 on table mainTable
    +          | using 'abc'
    +          | as select column3, sum(column3),column5, sum(column5)
    +          | from maintable
    +          | group by column3,column5,column2
    +        """.stripMargin)
    +    }
    +    assert(e.getMessage.contains(
    +      s"Unknown data map type abc, Please use one of $PREAGGREGATE or $TIMESERIES"))
    --- End diff --
   
    OK, done.


---

[GitHub] carbondata pull request #1865: [CARBONDATA-2088][CARBONDATA-1516] Optimize s...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user xubo245 commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1865#discussion_r164411124
 
    --- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/timeseries/TimeSeriesUtil.scala ---
    @@ -54,6 +55,67 @@ object TimeSeriesUtil {
         }
       }
     
    +  def getGranularityKey(dmProperties: Map[String, String]): String = {
    +
    +    for (granularity <- Granularity.values()) {
    +      if (dmProperties.get(granularity.getName).isDefined) {
    +        return granularity.getName
    +      }
    +    }
    +
    +    throw new CarbonIllegalArgumentException(
    +      s"${TIMESERIES.getName} should define time granularity")
    +  }
    +
    +  def validateTimeSeriesGranularity(
    +      dmProperties: Map[String, String],
    +      dmClassName: String): Boolean = {
    +    var count = 0
    --- End diff --
   
    How to do it?


---

[GitHub] carbondata pull request #1865: [CARBONDATA-2088][CARBONDATA-1516] Optimize s...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user xubo245 commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1865#discussion_r164413704
 
    --- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/timeseries/TimeSeriesUtil.scala ---
    @@ -54,6 +55,67 @@ object TimeSeriesUtil {
         }
       }
     
    +  def getGranularityKey(dmProperties: Map[String, String]): String = {
    +
    +    for (granularity <- Granularity.values()) {
    +      if (dmProperties.get(granularity.getName).isDefined) {
    +        return granularity.getName
    +      }
    +    }
    +
    +    throw new CarbonIllegalArgumentException(
    +      s"${TIMESERIES.getName} should define time granularity")
    +  }
    +
    +  def validateTimeSeriesGranularity(
    +      dmProperties: Map[String, String],
    +      dmClassName: String): Boolean = {
    +    var count = 0
    +
    +    for (granularity <- Granularity.values()) {
    --- End diff --
   
    When we find the second granularity we can break the loop, but we cannot break on the first one, because duplicates must be detected.
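
    A sketch of the validation the author describes: scanning cannot stop at the first matching granularity, but it can stop as soon as a second match proves the input invalid. The key names are assumptions standing in for `Granularity.values()`:

    ```scala
    // Sketch: exactly-one-granularity check with an early exit on the
    // second match. Property key names are illustrative assumptions.
    object GranularityValidation {
      val granularityKeys: Seq[String] = Seq(
        "year_granularity", "month_granularity", "day_granularity",
        "hour_granularity", "minute_granularity", "second_granularity")

      // True only when exactly one granularity key is defined.
      def hasExactlyOneGranularity(dmProperties: Map[String, String]): Boolean = {
        var count = 0
        val it = granularityKeys.iterator
        // keep scanning until a second match proves the input invalid
        while (it.hasNext && count < 2) {
          if (dmProperties.contains(it.next())) count += 1
        }
        count == 1
      }
    }
    ```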


---

[GitHub] carbondata pull request #1865: [CARBONDATA-2088][CARBONDATA-1516] Optimize s...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user xubo245 commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1865#discussion_r164413731
 
    --- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/timeseries/TimeSeriesUtil.scala ---
    @@ -54,6 +55,67 @@ object TimeSeriesUtil {
         }
       }
     
    +  def getGranularityKey(dmProperties: Map[String, String]): String = {
    +
    +    for (granularity <- Granularity.values()) {
    +      if (dmProperties.get(granularity.getName).isDefined) {
    +        return granularity.getName
    +      }
    +    }
    +
    +    throw new CarbonIllegalArgumentException(
    +      s"${TIMESERIES.getName} should define time granularity")
    +  }
    +
    +  def validateTimeSeriesGranularity(
    +      dmProperties: Map[String, String],
    +      dmClassName: String): Boolean = {
    +    var count = 0
    --- End diff --
   
    ok, done


---

[GitHub] carbondata pull request #1865: [CARBONDATA-2088][CARBONDATA-1516] Optimize s...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user xubo245 commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1865#discussion_r164416505
 
    --- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonCreateDataMapCommand.scala ---
    @@ -79,32 +79,27 @@ case class CarbonCreateDataMapCommand(
           }
           createPreAggregateTableCommands.flatMap(_.processMetadata(sparkSession))
         } else {
    -      val dataMapSchema = new DataMapSchema(dataMapName, dmClassName)
    -      dataMapSchema.setProperties(new java.util.HashMap[String, String](dmproperties.asJava))
    -      val dbName = CarbonEnv.getDatabaseName(tableIdentifier.database)(sparkSession)
    -      // upadting the parent table about dataschema
    -      PreAggregateUtil.updateMainTable(dbName, tableIdentifier.table, dataMapSchema, sparkSession)
    +      throw new UnsupportedDataMapException(dmClassName)
         }
         LOGGER.audit(s"DataMap $dataMapName successfully added to Table ${ tableIdentifier.table }")
         Seq.empty
       }
     
       override def processData(sparkSession: SparkSession): Seq[Row] = {
    -    if (dmClassName.equals("org.apache.carbondata.datamap.AggregateDataMapHandler") ||
    --- End diff --
   
    We can discuss this with Jacky Li and raise another PR for it if necessary.


---

[GitHub] carbondata pull request #1865: [CARBONDATA-2088][CARBONDATA-1516] Optimize s...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user xubo245 commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1865#discussion_r164416824
 
    --- Diff: integration/spark-common/src/main/java/org/apache/carbondata/spark/exception/UnsupportedDataMapException.java ---
    @@ -0,0 +1,36 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.carbondata.spark.exception;
    +
    +import static org.apache.carbondata.core.metadata.schema.table.DataMapClassName.PREAGGREGATE;
    +import static org.apache.carbondata.core.metadata.schema.table.DataMapClassName.TIMESERIES;
    +
    +/**
    + * Throw exception when using unsupported datamap type
    + */
    +public class UnsupportedDataMapException extends MalformedCarbonCommandException {
    +  /**
    +   * default serial version ID.
    +   */
    +  private static final long serialVersionUID = 1L;
    +
    +  public UnsupportedDataMapException(String dataMapType) {
    +    super("Unknown data map type " + dataMapType +
    +            ", Please use one of " + PREAGGREGATE + " or " + TIMESERIES);
    --- End diff --
   
    ok, done


---

[GitHub] carbondata issue #1865: [CARBONDATA-2088][CARBONDATA-1516] Optimize syntax f...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/carbondata/pull/1865
 
    Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3232/



---

[GitHub] carbondata issue #1865: [CARBONDATA-2088][CARBONDATA-1516] Optimize syntax f...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/carbondata/pull/1865
 
    Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1998/



---

[GitHub] carbondata issue #1865: [CARBONDATA-2088][CARBONDATA-1516] Optimize syntax f...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user ravipesala commented on the issue:

    https://github.com/apache/carbondata/pull/1865
 
    SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3187/



---

[GitHub] carbondata pull request #1865: [CARBONDATA-2088][CARBONDATA-1516] Optimize s...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1865#discussion_r164617642
 
    --- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonCreateDataMapCommand.scala ---
    @@ -59,44 +59,37 @@ case class CarbonCreateDataMapCommand(
             val details = TimeSeriesUtil
               .getTimeSeriesGranularityDetails(dmproperties, dmClassName)
             val updatedDmProperties = dmproperties - TimeSeriesUtil.getGranularityKey(dmproperties)
    -        details.map { f =>
    -          CreatePreAggregateTableCommand(dataMapName,
    -            tableIdentifier,
    -            dmClassName,
    -            updatedDmProperties,
    -            queryString.get,
    -            Some(f._1))
    -        }.toSeq
    +        CreatePreAggregateTableCommand(dataMapName,
    +          tableIdentifier,
    +          dmClassName,
    +          updatedDmProperties,
    +          queryString.get,
    +          Some(details(0)._1))
           } else {
    -        Seq(CreatePreAggregateTableCommand(
    +        CreatePreAggregateTableCommand(
               dataMapName,
               tableIdentifier,
               dmClassName,
               dmproperties,
               queryString.get
    -        ))
    +        )
           }
    -      createPreAggregateTableCommands.flatMap(_.processMetadata(sparkSession))
    +      createPreAggregateTableCommands.processMetadata(sparkSession)
         } else {
           throw new UnsupportedDataMapException(dmClassName)
         }
    -    LOGGER.audit(s"DataMap $dataMapName successfully added to Table ${ tableIdentifier.table }")
    +    LOGGER.audit(s"DataMap $dataMapName successfully added to Table ${tableIdentifier.table}")
         Seq.empty
       }
     
       override def processData(sparkSession: SparkSession): Seq[Row] = {
    -    if (dmClassName.equalsIgnoreCase(PREAGGREGATE.getName) ||
    -      dmClassName.equalsIgnoreCase(TIMESERIES.getName)) {
    -      createPreAggregateTableCommands.flatMap(_.processData(sparkSession))
    -    } else {
    -      throw new UnsupportedDataMapException(dmClassName)
    -    }
    +    createPreAggregateTableCommands.processData(sparkSession)
    --- End diff --
   
    I think it is better not to remove the validation of the dmClassName, since we will refactor processData and processMetadata and add a test framework for them later. If we remove the validation, we may forget to do this check when refactoring.


---

[GitHub] carbondata pull request #1865: [CARBONDATA-2088][CARBONDATA-1516] Optimize s...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user xubo245 commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1865#discussion_r164629019
 
    --- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonCreateDataMapCommand.scala ---
    @@ -59,44 +59,37 @@ case class CarbonCreateDataMapCommand(
             val details = TimeSeriesUtil
               .getTimeSeriesGranularityDetails(dmproperties, dmClassName)
             val updatedDmProperties = dmproperties - TimeSeriesUtil.getGranularityKey(dmproperties)
    -        details.map { f =>
    -          CreatePreAggregateTableCommand(dataMapName,
    -            tableIdentifier,
    -            dmClassName,
    -            updatedDmProperties,
    -            queryString.get,
    -            Some(f._1))
    -        }.toSeq
    +        CreatePreAggregateTableCommand(dataMapName,
    +          tableIdentifier,
    +          dmClassName,
    +          updatedDmProperties,
    +          queryString.get,
    +          Some(details(0)._1))
           } else {
    -        Seq(CreatePreAggregateTableCommand(
    +        CreatePreAggregateTableCommand(
               dataMapName,
               tableIdentifier,
               dmClassName,
               dmproperties,
               queryString.get
    -        ))
    +        )
           }
    -      createPreAggregateTableCommands.flatMap(_.processMetadata(sparkSession))
    +      createPreAggregateTableCommands.processMetadata(sparkSession)
         } else {
           throw new UnsupportedDataMapException(dmClassName)
         }
    -    LOGGER.audit(s"DataMap $dataMapName successfully added to Table ${ tableIdentifier.table }")
    +    LOGGER.audit(s"DataMap $dataMapName successfully added to Table ${tableIdentifier.table}")
         Seq.empty
       }
     
       override def processData(sparkSession: SparkSession): Seq[Row] = {
    -    if (dmClassName.equalsIgnoreCase(PREAGGREGATE.getName) ||
    -      dmClassName.equalsIgnoreCase(TIMESERIES.getName)) {
    -      createPreAggregateTableCommands.flatMap(_.processData(sparkSession))
    -    } else {
    -      throw new UnsupportedDataMapException(dmClassName)
    -    }
    +    createPreAggregateTableCommands.processData(sparkSession)
    --- End diff --
   
    ok, done


---

[GitHub] carbondata issue #1865: [CARBONDATA-2088][CARBONDATA-1516] Optimize syntax f...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/carbondata/pull/1865
 
    Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3237/



---

[GitHub] carbondata issue #1865: [CARBONDATA-2088][CARBONDATA-1516] Optimize syntax f...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/carbondata/pull/1865
 
    Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2003/



---