[GitHub] carbondata pull request #1192: [CARBONDATA-940] alter table add/split partit...

Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1192#discussion_r131059444
 
    --- Diff: integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/CarbonDataRDDFactory.scala ---
    @@ -345,6 +365,65 @@ object CarbonDataRDDFactory {
         compactionThread.run()
       }
     
    +  case class SplitThread(sqlContext: SQLContext,
    +      carbonLoadModel: CarbonLoadModel,
    +      executor: ExecutorService,
    +      storePath: String,
    +      segmentId: String,
    +      partitionId: String,
    +      oldPartitionIdList: List[Int]) extends Thread {
    +      override def run(): Unit = {
    +        try {
    +          DataManagementFunc.executePartitionSplit(sqlContext,
    +            carbonLoadModel, executor, storePath, segmentId, partitionId,
    +            oldPartitionIdList)
    +        } catch {
    +          case e: Exception =>
    +            LOGGER.error(s"Exception in partition split thread: ${ e.getMessage } }")
    +        }
    +      }
    +  }
    +
    +  def startSplitThreads(sqlContext: SQLContext,
    +      carbonLoadModel: CarbonLoadModel,
    +      storePath: String,
    +      partitionId: String,
    +      oldPartitionIdList: List[Int]): Unit = {
    +    val numberOfCores = CarbonProperties.getInstance()
    +      .getProperty(CarbonCommonConstants.NUM_CORES_ALT_PARTITION,
    +        CarbonCommonConstants.DEFAULT_NUMBER_CORES)
    +    val executor : ExecutorService = Executors.newFixedThreadPool(numberOfCores.toInt)
    +    try {
    +      val carbonTable = carbonLoadModel.getCarbonDataLoadSchema.getCarbonTable
    +      val absoluteTableIdentifier = carbonTable.getAbsoluteTableIdentifier
    +      val segmentStatusManager = new SegmentStatusManager(absoluteTableIdentifier)
    +      val validSegments = segmentStatusManager.getValidAndInvalidSegments.getValidSegments.asScala
    +      val threadArray: Array[SplitThread] = new Array[SplitThread](validSegments.size)
    +      var i = 0
    +      for (segmentId: String <- validSegments) {
    --- End diff --
   
    Use `map` or `foreach` here instead of the indexed `for` loop with a mutable counter.
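
    A minimal sketch of that refactoring, reusing the names from the diff above (the
    `start`/`join` calls are assumed from the surrounding context, since the rest of
    the loop is not shown here):

        val threadArray: Array[SplitThread] = validSegments.map { segmentId =>
          SplitThread(sqlContext, carbonLoadModel, executor, storePath,
            segmentId, partitionId, oldPartitionIdList)
        }.toArray
        // start every per-segment split thread and wait for all of them to finish
        threadArray.foreach(_.start())
        threadArray.foreach(_.join())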


[GitHub] carbondata pull request #1192: [CARBONDATA-940] alter table add/split partit...

Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1192#discussion_r131059448
 
    --- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/carbonTableSchema.scala ---
    @@ -184,6 +189,161 @@ case class AlterTableCompaction(alterTableModel: AlterTableModel) extends Runnab
       }
     }
     
    +/**
    + * Command for Alter Table Add & Split partition
    + * Add is a special case of Splitting the default partition (part0)
    + * @param alterTableSplitPartitionModel
    + */
    +case class AlterTableSplitPartition(alterTableSplitPartitionModel: AlterTableSplitPartitionModel)
    +  extends RunnableCommand {
    --- End diff --
   
    Mix in `SchemaProcessCommand` and `DataProcessCommand`, and split the `run` method
    into `processSchema` and `processData`.
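
    A rough sketch of that structure (the exact trait method signatures are assumed
    here, not taken from this diff):

        case class AlterTableSplitPartition(
            alterTableSplitPartitionModel: AlterTableSplitPartitionModel)
          extends RunnableCommand with SchemaProcessCommand with DataProcessCommand {

          // run only orchestrates the two phases
          def run(sparkSession: SparkSession): Seq[Row] = {
            processSchema(sparkSession)
            processData(sparkSession)
          }

          override def processSchema(sparkSession: SparkSession): Seq[Row] = {
            // validate the table, acquire the locks and update partitionInfo
            // (rangeInfo/listInfo and the partition id list)
            Seq.empty[Row]
          }

          override def processData(sparkSession: SparkSession): Seq[Row] = {
            // redistribute the data of each valid segment for the split partition
            Seq.empty[Row]
          }
        }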


[GitHub] carbondata pull request #1192: [CARBONDATA-940] alter table add/split partit...

Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1192#discussion_r131059447
 
    --- Diff: integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/CarbonDataRDDFactory.scala ---
    @@ -345,6 +365,65 @@ object CarbonDataRDDFactory {
         compactionThread.run()
       }
     
    +  case class SplitThread(sqlContext: SQLContext,
    +      carbonLoadModel: CarbonLoadModel,
    +      executor: ExecutorService,
    +      storePath: String,
    +      segmentId: String,
    +      partitionId: String,
    +      oldPartitionIdList: List[Int]) extends Thread {
    +      override def run(): Unit = {
    +        try {
    +          DataManagementFunc.executePartitionSplit(sqlContext,
    +            carbonLoadModel, executor, storePath, segmentId, partitionId,
    +            oldPartitionIdList)
    +        } catch {
    +          case e: Exception =>
    +            LOGGER.error(s"Exception in partition split thread: ${ e.getMessage } }")
    +        }
    +      }
    +  }
    +
    +  def startSplitThreads(sqlContext: SQLContext,
    +      carbonLoadModel: CarbonLoadModel,
    +      storePath: String,
    +      partitionId: String,
    +      oldPartitionIdList: List[Int]): Unit = {
    +    val numberOfCores = CarbonProperties.getInstance()
    +      .getProperty(CarbonCommonConstants.NUM_CORES_ALT_PARTITION,
    +        CarbonCommonConstants.DEFAULT_NUMBER_CORES)
    +    val executor : ExecutorService = Executors.newFixedThreadPool(numberOfCores.toInt)
    +    try {
    +      val carbonTable = carbonLoadModel.getCarbonDataLoadSchema.getCarbonTable
    +      val absoluteTableIdentifier = carbonTable.getAbsoluteTableIdentifier
    +      val segmentStatusManager = new SegmentStatusManager(absoluteTableIdentifier)
    +      val validSegments = segmentStatusManager.getValidAndInvalidSegments.getValidSegments.asScala
    +      val threadArray: Array[SplitThread] = new Array[SplitThread](validSegments.size)
    +      var i = 0
    +      for (segmentId: String <- validSegments) {
    --- End diff --
   
    Use `map` or `foreach` here instead of the indexed `for` loop with a mutable counter.


[GitHub] carbondata pull request #1192: [CARBONDATA-940] alter table add/split partit...

Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1192#discussion_r131059761
 
    --- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/carbonTableSchema.scala ---
    @@ -184,6 +189,161 @@ case class AlterTableCompaction(alterTableModel: AlterTableModel) extends Runnab
       }
     }
     
    +/**
    + * Command for Alter Table Add & Split partition
    + * Add is a special case of Splitting the default partition (part0)
    + * @param alterTableSplitPartitionModel
    + */
    +case class AlterTableSplitPartition(alterTableSplitPartitionModel: AlterTableSplitPartitionModel)
    +  extends RunnableCommand {
    +  val LOGGER = LogServiceFactory.getLogService(this.getClass.getName)
    +
    +  def run(sparkSession: SparkSession): Seq[Row] = {
    +
    +    val tableName = alterTableSplitPartitionModel.tableName
    +    val dbName = alterTableSplitPartitionModel.databaseName
    +      .getOrElse(sparkSession.catalog.currentDatabase)
    +    val splitInfo = alterTableSplitPartitionModel.splitInfo
    +    val partitionId = Integer.parseInt(alterTableSplitPartitionModel.partitionId)
    +    val timestampFormatter = new SimpleDateFormat(CarbonProperties.getInstance
    +      .getProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT,
    +        CarbonCommonConstants.CARBON_TIMESTAMP_DEFAULT_FORMAT))
    +    val dateFormatter = new SimpleDateFormat(CarbonProperties.getInstance
    +      .getProperty(CarbonCommonConstants.CARBON_DATE_FORMAT,
    +        CarbonCommonConstants.CARBON_DATE_DEFAULT_FORMAT))
    +
    +    val locksToBeAcquired = List(LockUsage.METADATA_LOCK,
    +      LockUsage.COMPACTION_LOCK,
    +      LockUsage.DELETE_SEGMENT_LOCK,
    +      LockUsage.DROP_TABLE_LOCK,
    +      LockUsage.CLEAN_FILES_LOCK,
    +      LockUsage.ALTER_PARTITION_LOCK)
    +    var locks = List.empty[ICarbonLock]
    +    try {
    +      locks = AlterTableUtil.validateTableAndAcquireLock(dbName, tableName,
    +        locksToBeAcquired)(sparkSession)
    +      val carbonMetastore = CarbonEnv.getInstance(sparkSession).carbonMetastore
    +      val relation = carbonMetastore.lookupRelation(Option(dbName), tableName)(sparkSession)
    +        .asInstanceOf[CarbonRelation]
    +      val carbonTableIdentifier = relation.tableMeta.carbonTableIdentifier
    +      val storePath = relation.tableMeta.storePath
    +      if (relation == null) {
    +        sys.error(s"Table $dbName.$tableName does not exist")
    +      }
    +      carbonMetastore.checkSchemasModifiedTimeAndReloadTables(storePath)
    +      if (null == CarbonMetadata.getInstance.getCarbonTable(dbName + "_" + tableName)) {
    +        LOGGER.error(s"Alter table failed. table not found: $dbName.$tableName")
    +        sys.error(s"Alter table failed. table not found: $dbName.$tableName")
    +      }
    +      val carbonLoadModel = new CarbonLoadModel()
    +
    +      val table = relation.tableMeta.carbonTable
    +      val partitionInfo = table.getPartitionInfo(tableName)
    +      val partitionIdList = partitionInfo.getPartitionIds.asScala
    +      // keep a copy of partitionIdList before update partitionInfo.
    +      // will be used in partition data scan
    +      val oldPartitionIdList: ArrayBuffer[Int] = new ArrayBuffer[Int]()
    +      for (i: Integer <- partitionIdList) {
    +        oldPartitionIdList.append(i)
    +      }
    +
    +      if (partitionInfo == null) {
    +        sys.error(s"Table $tableName is not a partition table.")
    +      }
    +      if (partitionInfo.getPartitionType == PartitionType.HASH) {
    +        sys.error(s"Hash partition table cannot be added or split!")
    +      }
    +      /**
    +       * verify the add/split information and update the partitionInfo:
    +       *  1. update rangeInfo/listInfo
    +       *  2. update partitionIds
    +       */
    +      val columnDataType = partitionInfo.getColumnSchemaList.get(0).getDataType
    +      val index = partitionIdList.indexOf(partitionId)
    +      if (partitionInfo.getPartitionType == PartitionType.RANGE) {
    --- End diff --
   
    Try to extract more private functions and call them from `run`, so that `run` is
    shorter and easier to read.
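
    For instance, the validation block above could become its own helper (the helper
    names below are only illustrative):

        private def validatePartitionInfo(partitionInfo: PartitionInfo, tableName: String): Unit = {
          if (partitionInfo == null) {
            sys.error(s"Table $tableName is not a partition table.")
          }
          if (partitionInfo.getPartitionType == PartitionType.HASH) {
            sys.error(s"Hash partition table cannot be added or split!")
          }
        }

        // run then reads as a sequence of steps:
        //   validatePartitionInfo(partitionInfo, tableName)
        //   updatePartitionInfo(...)   // apply the add/split to rangeInfo/listInfo
        //   startSplitThreads(...)     // redistribute the segment data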


[GitHub] carbondata pull request #1192: [CARBONDATA-940] alter table add/split partit...

Github user lionelcao commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1192#discussion_r131063435
 
    --- Diff: conf/carbon.properties.template ---
    @@ -42,6 +42,9 @@ carbon.enableXXHash=true
     #carbon.max.level.cache.size=-1
     #enable prefetch of data during merge sort while reading data from sort temp files in data loading
     #carbon.merge.sort.prefetch=true
    +######## Alter Partition Configuration ########
    +#Number of cores to be used while alter partition
    --- End diff --
   
    It is used when operating on multiple segments in parallel; this configuration lets
    users set the number of threads according to their hardware.


[GitHub] carbondata pull request #1192: [CARBONDATA-940] alter table add/split partit...

Github user lionelcao commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1192#discussion_r131063552
 
    --- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/CarbonPartitionExample.scala ---
    @@ -101,17 +126,40 @@ object CarbonPartitionExample {
         spark.sql("""
            | CREATE TABLE IF NOT EXISTS t5
            | (
    +       | id Int,
            | vin String,
            | logdate Timestamp,
            | phonenumber Long,
    -       | area String
    +       | area String,
    +       | salary Int
            |)
            | PARTITIONED BY (country String)
            | STORED BY 'carbondata'
            | TBLPROPERTIES('PARTITION_TYPE'='LIST',
    -       | 'LIST_INFO'='(China,United States),UK ,japan,(Canada,Russia), South Korea ')
    +       | 'LIST_INFO'='(China, US),UK ,Japan,(Canada,Russia, Good, NotGood), Korea ')
            """.stripMargin)
     
    +    // load data into partition table
    +    spark.sql(s"""
    +       LOAD DATA LOCAL INPATH '$testData' into table t0 options('BAD_RECORDS_ACTION'='FORCE')
    +       """)
    +    spark.sql(s"""
    +       LOAD DATA LOCAL INPATH '$testData' into table t5 options('BAD_RECORDS_ACTION'='FORCE')
    +       """)
    +
    +    // alter list partition table t5 to add a partition
    +    spark.sql(s"""Alter table t5 add partition ('OutSpace')""".stripMargin)
    +    // alter list partition table t5 to split partition 4 into 3 independent partition
    +    spark.sql(
    +      s"""
    +         Alter table t5 split partition(4) into ('Canada', 'Russia', '(Good, NotGood)')
    +       """.stripMargin)
    --- End diff --
   
    Yes, I have written a test case to verify it. Please refer to TestAlterPartitionTable.scala.


[GitHub] carbondata pull request #1192: [CARBONDATA-940] alter table add/split partit...

Github user lionelcao commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1192#discussion_r131063903
 
    --- Diff: hadoop/src/main/java/org/apache/carbondata/hadoop/CarbonInputFormat.java ---
    @@ -107,6 +107,7 @@
       // comma separated list of input files
       public static final String INPUT_FILES =
           "mapreduce.input.carboninputformat.files";
    +  public static final String ALTER_PARTITION_ID = "mapreduce.input.carboninputformat.partitionid";
    --- End diff --
   
    I have migrated all the changes to CarbonTableInputFormat; this modification just
    remained in CarbonInputFormat. It is no longer used, so you can remove it safely.
    But sure, I can restore this file if necessary.


[GitHub] carbondata pull request #1192: [CARBONDATA-940] alter table add/split partit...

Github user lionelcao commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1192#discussion_r131067032
 
    --- Diff: hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonTableInputFormat.java ---
    @@ -321,6 +321,84 @@ private AbsoluteTableIdentifier getAbsoluteTableIdentifier(Configuration configu
       }
     
       /**
    +   * Read data in one segment. For alter table partition statement
    +   * @param job
    +   * @param targetSegment
    +   * @param oldPartitionIdList  get old partitionId before partitionInfo was changed
    +   * @return
    +   * @throws IOException
    +   */
    +  public List<InputSplit> getSplitsOfOneSegment(JobContext job, String targetSegment,
    +      List<Integer> oldPartitionIdList, PartitionInfo partitionInfo)
    +      throws IOException {
    +    AbsoluteTableIdentifier identifier = getAbsoluteTableIdentifier(job.getConfiguration());
    +    List<String> invalidSegments = new ArrayList<>();
    +    List<UpdateVO> invalidTimestampsList = new ArrayList<>();
    +
    +    List<String> segmentList = new ArrayList<>();
    +    segmentList.add(targetSegment);
    +    setSegmentsToAccess(job.getConfiguration(), segmentList);
    +    try {
    +
    +      // process and resolve the expression
    +      Expression filter = getFilterPredicates(job.getConfiguration());
    +      CarbonTable carbonTable = getOrCreateCarbonTable(job.getConfiguration());
    +      // this will be null in case of corrupt schema file.
    +      if (null == carbonTable) {
    +        throw new IOException("Missing/Corrupt schema file for table.");
    +      }
    +
    +      CarbonInputFormatUtil.processFilterExpression(filter, carbonTable);
    +
    +      // prune partitions for filter query on partition table
    +      String partitionIds = job.getConfiguration().get(ALTER_PARTITION_ID);
    +      BitSet matchedPartitions = null;
    +      if (partitionInfo != null) {
    +        matchedPartitions = setMatchedPartitions(partitionIds, filter, partitionInfo);
    +        if (matchedPartitions != null) {
    +          if (matchedPartitions.cardinality() == 0) {
    +            return new ArrayList<InputSplit>();
    +          } else if (matchedPartitions.cardinality() == partitionInfo.getNumPartitions()) {
    +            matchedPartitions = null;
    +          }
    +        }
    +      }
    +
    +      FilterResolverIntf filterInterface = CarbonInputFormatUtil.resolveFilter(filter, identifier);
    +      // do block filtering and get split
    +      List<InputSplit> splits = getSplits(job, filterInterface, segmentList, matchedPartitions,
    +          partitionInfo, oldPartitionIdList);
    +      // pass the invalid segment to task side in order to remove index entry in task side
    +      if (invalidSegments.size() > 0) {
    +        for (InputSplit split : splits) {
    +          ((CarbonInputSplit) split).setInvalidSegments(invalidSegments);
    +          ((CarbonInputSplit) split).setInvalidTimestampRange(invalidTimestampsList);
    +        }
    +      }
    +      return splits;
    +    } catch (IOException e) {
    +      throw new RuntimeException("Can't get splits of the target segment ", e);
    +    }
    +  }
    +
    +  private BitSet setMatchedPartitions(String partitionIds, Expression filter,
    +      PartitionInfo partitionInfo) {
    +    BitSet matchedPartitions = null;
    +    if (null != partitionIds) {
    +      String[] partList = partitionIds.replace("[", "").replace("]", "").split(",");
    +      matchedPartitions = new BitSet(Integer.parseInt(partList[0]));
    --- End diff --
   
    Currently the partitionIds passed from an alter table statement can only contain one
    element. For example:
    'alter table t0 split(4) into XXX'
    'alter table t1 drop partition(3)' (will be submitted in another PR)
    We consider drop partition a dangerous operation and allow dropping only one partition
    at a time. Maybe in the future we can discuss extending it to drop multiple partitions
    in one statement. We could also add a MERGE PARTITION action in later versions (no plan
    for version 1.2).
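
    To make that concrete, a small standalone illustration (in Scala) of the parsing done
    in `setMatchedPartitions` above, assuming the configured value looks like "[4]" for
    'alter table ... split partition(4)':

        // strip the surrounding brackets and split on ',': today the resulting
        // list always holds exactly one partition id
        val partitionIds = "[4]"
        val partList = partitionIds.replace("[", "").replace("]", "").split(",")
        val targetPartitionId = partList(0).trim.toInt   // => 4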


[GitHub] carbondata pull request #1192: [CARBONDATA-940] alter table add/split partit...

Github user lionelcao commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1192#discussion_r131067082
 
    --- Diff: hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonTableInputFormat.java ---
    @@ -321,6 +321,84 @@ private AbsoluteTableIdentifier getAbsoluteTableIdentifier(Configuration configu
       }
     
       /**
    +   * Read data in one segment. For alter table partition statement
    +   * @param job
    +   * @param targetSegment
    +   * @param oldPartitionIdList  get old partitionId before partitionInfo was changed
    +   * @return
    +   * @throws IOException
    +   */
    +  public List<InputSplit> getSplitsOfOneSegment(JobContext job, String targetSegment,
    +      List<Integer> oldPartitionIdList, PartitionInfo partitionInfo)
    +      throws IOException {
    +    AbsoluteTableIdentifier identifier = getAbsoluteTableIdentifier(job.getConfiguration());
    +    List<String> invalidSegments = new ArrayList<>();
    +    List<UpdateVO> invalidTimestampsList = new ArrayList<>();
    +
    +    List<String> segmentList = new ArrayList<>();
    +    segmentList.add(targetSegment);
    +    setSegmentsToAccess(job.getConfiguration(), segmentList);
    +    try {
    +
    +      // process and resolve the expression
    +      Expression filter = getFilterPredicates(job.getConfiguration());
    +      CarbonTable carbonTable = getOrCreateCarbonTable(job.getConfiguration());
    +      // this will be null in case of corrupt schema file.
    +      if (null == carbonTable) {
    +        throw new IOException("Missing/Corrupt schema file for table.");
    +      }
    +
    +      CarbonInputFormatUtil.processFilterExpression(filter, carbonTable);
    +
    +      // prune partitions for filter query on partition table
    +      String partitionIds = job.getConfiguration().get(ALTER_PARTITION_ID);
    +      BitSet matchedPartitions = null;
    +      if (partitionInfo != null) {
    +        matchedPartitions = setMatchedPartitions(partitionIds, filter, partitionInfo);
    +        if (matchedPartitions != null) {
    +          if (matchedPartitions.cardinality() == 0) {
    +            return new ArrayList<InputSplit>();
    +          } else if (matchedPartitions.cardinality() == partitionInfo.getNumPartitions()) {
    +            matchedPartitions = null;
    +          }
    +        }
    +      }
    +
    +      FilterResolverIntf filterInterface = CarbonInputFormatUtil.resolveFilter(filter, identifier);
    +      // do block filtering and get split
    +      List<InputSplit> splits = getSplits(job, filterInterface, segmentList, matchedPartitions,
    +          partitionInfo, oldPartitionIdList);
    +      // pass the invalid segment to task side in order to remove index entry in task side
    +      if (invalidSegments.size() > 0) {
    +        for (InputSplit split : splits) {
    +          ((CarbonInputSplit) split).setInvalidSegments(invalidSegments);
    +          ((CarbonInputSplit) split).setInvalidTimestampRange(invalidTimestampsList);
    +        }
    +      }
    +      return splits;
    +    } catch (IOException e) {
    +      throw new RuntimeException("Can't get splits of the target segment ", e);
    +    }
    +  }
    +
    +  private BitSet setMatchedPartitions(String partitionIds, Expression filter,
    +      PartitionInfo partitionInfo) {
    +    BitSet matchedPartitions = null;
    +    if (null != partitionIds) {
    +      String[] partList = partitionIds.replace("[", "").replace("]", "").split(",");
    +      matchedPartitions = new BitSet(Integer.parseInt(partList[0]));
    --- End diff --
   
    Sure, I can add some simple comments in the code.


[GitHub] carbondata pull request #1192: [CARBONDATA-940] alter table add/split partit...

Github user lionelcao commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1192#discussion_r131067233
 
    --- Diff: processing/src/main/java/org/apache/carbondata/processing/spliter/CarbonDataSpliterUtil.java ---
    @@ -0,0 +1,40 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.carbondata.processing.spliter;
    +
    +import java.util.List;
    +
    +import org.apache.carbondata.common.CarbonIterator;
    +import org.apache.carbondata.common.logging.LogService;
    +import org.apache.carbondata.common.logging.LogServiceFactory;
    +import org.apache.carbondata.core.scan.result.BatchResult;
    +
    +public final class CarbonDataSpliterUtil {
    --- End diff --
   
    Oops, it should be removed.


[GitHub] carbondata issue #1192: [CARBONDATA-940] alter table add/split partition for...

Github user ravipesala commented on the issue:

    https://github.com/apache/carbondata/pull/1192
 
    Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/3353/



[GitHub] carbondata issue #1192: [CARBONDATA-940] alter table add/split partition for...

Github user lionelcao commented on the issue:

    https://github.com/apache/carbondata/pull/1192
 
    Hi @ravipesala, please help review and merge PR1228 first, and then retest this PR. Thank you!


[GitHub] carbondata issue #1192: [CARBONDATA-940] alter table add/split partition for...

Github user ravipesala commented on the issue:

    https://github.com/apache/carbondata/pull/1192
 
    Build Success with Spark 1.6, Please check CI http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/755/



[GitHub] carbondata issue #1192: [CARBONDATA-940] alter table add/split partition for...

Github user ravipesala commented on the issue:

    https://github.com/apache/carbondata/pull/1192
 
    Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/3355/



[GitHub] carbondata issue #1192: [CARBONDATA-940] alter table add/split partition for...

Github user ravipesala commented on the issue:

    https://github.com/apache/carbondata/pull/1192
 
    Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/3356/



[GitHub] carbondata issue #1192: [CARBONDATA-940] alter table add/split partition for...

Github user CarbonDataQA commented on the issue:

    https://github.com/apache/carbondata/pull/1192
 
    Build Success with Spark 1.6, Please check CI http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/758/



[GitHub] carbondata issue #1192: [CARBONDATA-940] alter table add/split partition for...

Github user lionelcao commented on the issue:

    https://github.com/apache/carbondata/pull/1192
 
    PR1228 is merged, retest this please.


[GitHub] carbondata issue #1192: [CARBONDATA-940] alter table add/split partition for...

Github user CarbonDataQA commented on the issue:

    https://github.com/apache/carbondata/pull/1192
 
    Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/3358/



[GitHub] carbondata issue #1192: [CARBONDATA-940] alter table add/split partition for...

Github user CarbonDataQA commented on the issue:

    https://github.com/apache/carbondata/pull/1192
 
    Build Failed with Spark 1.6, Please check CI http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/761/



[GitHub] carbondata issue #1192: [CARBONDATA-940] alter table add/split partition for...

Github user ravipesala commented on the issue:

    https://github.com/apache/carbondata/pull/1192
 
    Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/3402/


