[jira] [Commented] (CARBONDATA-284) Abstracting Index and Segment interface



Akash R Nilugal (Jira)

    [ https://issues.apache.org/jira/browse/CARBONDATA-284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15607805#comment-15607805 ]

ASF GitHub Bot commented on CARBONDATA-284:
-------------------------------------------

Github user QiangCai commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/208#discussion_r85061184
 
    --- Diff: hadoop/src/main/java/org/apache/carbondata/hadoop/internal/index/memory/InMemoryBTreeIndex.java ---
    @@ -0,0 +1,220 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one
    + * or more contributor license agreements.  See the NOTICE file
    + * distributed with this work for additional information
    + * regarding copyright ownership.  The ASF licenses this file
    + * to you under the Apache License, Version 2.0 (the
    + * "License"); you may not use this file except in compliance
    + * with the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing,
    + * software distributed under the License is distributed on an
    + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    + * KIND, either express or implied.  See the License for the
    + * specific language governing permissions and limitations
    + * under the License.
    + */
    +
    +package org.apache.carbondata.hadoop.internal.index.memory;
    +
    +import java.io.IOException;
    +import java.util.ArrayList;
    +import java.util.HashMap;
    +import java.util.LinkedList;
    +import java.util.List;
    +import java.util.Map;
    +
    +import org.apache.carbondata.core.carbon.AbsoluteTableIdentifier;
    +import org.apache.carbondata.core.carbon.datastore.DataRefNode;
    +import org.apache.carbondata.core.carbon.datastore.DataRefNodeFinder;
    +import org.apache.carbondata.core.carbon.datastore.IndexKey;
    +import org.apache.carbondata.core.carbon.datastore.SegmentTaskIndexStore;
    +import org.apache.carbondata.core.carbon.datastore.block.AbstractIndex;
    +import org.apache.carbondata.core.carbon.datastore.block.BlockletInfos;
    +import org.apache.carbondata.core.carbon.datastore.block.SegmentProperties;
    +import org.apache.carbondata.core.carbon.datastore.block.TableBlockInfo;
    +import org.apache.carbondata.core.carbon.datastore.exception.IndexBuilderException;
    +import org.apache.carbondata.core.carbon.datastore.impl.btree.BTreeDataRefNodeFinder;
    +import org.apache.carbondata.core.carbon.datastore.impl.btree.BlockBTreeLeafNode;
    +import org.apache.carbondata.core.carbon.querystatistics.QueryStatistic;
    +import org.apache.carbondata.core.carbon.querystatistics.QueryStatisticsConstants;
    +import org.apache.carbondata.core.carbon.querystatistics.QueryStatisticsRecorder;
    +import org.apache.carbondata.core.keygenerator.KeyGenException;
    +import org.apache.carbondata.core.util.CarbonTimeStatisticsFactory;
    +import org.apache.carbondata.hadoop.CarbonInputSplit;
    +import org.apache.carbondata.hadoop.internal.index.Index;
    +import org.apache.carbondata.hadoop.internal.segment.Segment;
    +import org.apache.carbondata.hadoop.util.CarbonInputFormatUtil;
    +import org.apache.carbondata.scan.executor.exception.QueryExecutionException;
    +import org.apache.carbondata.scan.filter.FilterExpressionProcessor;
    +import org.apache.carbondata.scan.filter.FilterUtil;
    +import org.apache.carbondata.scan.filter.resolver.FilterResolverIntf;
    +import org.apache.commons.logging.Log;
    +import org.apache.commons.logging.LogFactory;
    +import org.apache.hadoop.fs.Path;
    +import org.apache.hadoop.mapreduce.InputSplit;
    +import org.apache.hadoop.mapreduce.JobContext;
    +
    +class InMemoryBTreeIndex implements Index {
    +
    +  private static final Log LOG = LogFactory.getLog(InMemoryBTreeIndex.class);
    +  private Segment segment;
    +
    +  InMemoryBTreeIndex(Segment segment) {
    +    this.segment = segment;
    +  }
    +
    +  @Override
    +  public String getName() {
    +    return null;
    +  }
    +
    +  @Override
    +  public List<InputSplit> filter(JobContext job, FilterResolverIntf filter)
    +      throws IOException {
    +
    +    List<InputSplit> result = new LinkedList<InputSplit>();
    +
    +    FilterExpressionProcessor filterExpressionProcessor = new FilterExpressionProcessor();
    +
    +    AbsoluteTableIdentifier absoluteTableIdentifier = null;
    +        //CarbonInputFormatUtil.getAbsoluteTableIdentifier(job.getConfiguration());
    +
    +    //for this segment fetch blocks matching filter in BTree
    +    List<DataRefNode> dataRefNodes = null;
    +    try {
    +      dataRefNodes = getDataBlocksOfSegment(job, filterExpressionProcessor, absoluteTableIdentifier,
    +          filter, segment.getId());
    +    } catch (IndexBuilderException e) {
    +      throw new IOException(e.getMessage());
    +    }
    +    for (DataRefNode dataRefNode : dataRefNodes) {
    +      BlockBTreeLeafNode leafNode = (BlockBTreeLeafNode) dataRefNode;
    +      TableBlockInfo tableBlockInfo = leafNode.getTableBlockInfo();
    +      result.add(new CarbonInputSplit(segment.getId(), new Path(tableBlockInfo.getFilePath()),
    +          tableBlockInfo.getBlockOffset(), tableBlockInfo.getBlockLength(),
    +          tableBlockInfo.getLocations(), tableBlockInfo.getBlockletInfos().getNoOfBlockLets()));
    +    }
    +    return result;
    +  }
    +
    +  private Map<String, AbstractIndex> getSegmentAbstractIndexs(JobContext job,
    +      AbsoluteTableIdentifier identifier, String segmentId)
    --- End diff --
   
    Remove the segmentId parameter; use segment.id directly in this class.
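
A minimal sketch of the suggested refactor, with simplified stand-in types (the real class depends on Hadoop and CarbonData internals): the segmentId parameter is dropped because the segment field the constructor stores already carries the id.

```java
// Simplified stand-ins for the real CarbonData types -- illustrative only.
class Segment {
  private final String id;

  Segment(String id) {
    this.id = id;
  }

  String getId() {
    return id;
  }
}

class InMemoryBTreeIndex {
  private final Segment segment;

  InMemoryBTreeIndex(Segment segment) {
    this.segment = segment;
  }

  // Before: getSegmentAbstractIndexs(JobContext job, AbsoluteTableIdentifier id, String segmentId)
  // After: no segmentId parameter -- read it from the segment field instead.
  String describeSegment() {
    return "segment-" + segment.getId();
  }
}
```

Since every method of the class operates on the one segment passed to the constructor, threading the same id through parameter lists only invites the field and the argument drifting apart.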


> Abstracting Index and Segment interface
> ---------------------------------------
>
>                 Key: CARBONDATA-284
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-284
>             Project: CarbonData
>          Issue Type: Improvement
>          Components: hadoop-integration
>    Affects Versions: 0.1.0-incubating
>            Reporter: Jacky Li
>             Fix For: 0.3.0-incubating
>
>
> This issue is intended to abstract the developer API and user API to achieve the following goals:
> Goal 1: Users can choose where to store index data: it can be stored in the
> processing framework's memory space (such as Spark driver memory) or in
> another service outside of the processing framework (such as an
> independent database service, which can be shared across clients).
> Goal 2: Developers can add more indices of their choice to CarbonData files.
> Besides the B+ tree on the multi-dimensional key that CarbonData currently supports,
> developers are free to add other indexing technologies to make certain
> workloads faster. These new indices should be added in a pluggable way.
> This Jira has been discussed on the mailing list:
> http://apache-carbondata-mailing-list-archive.1130556.n5.nabble.com/Abstracting-CarbonData-s-Index-Interface-td1587.html
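
The pluggable-index goal described above could be sketched roughly as follows; all names here are illustrative assumptions, not the actual CarbonData API:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical minimal index abstraction: each implementation (B+ tree,
// bitmap, external service, ...) filters down to a list of matching blocks.
interface Index {
  String getName();

  List<String> filter(String filterExpression);
}

// Hypothetical registry that makes index implementations pluggable:
// new index types are registered by name rather than hard-coded.
class IndexRegistry {
  private final Map<String, Index> indices = new HashMap<>();

  void register(Index index) {
    indices.put(index.getName(), index);
  }

  Index lookup(String name) {
    return indices.get(name);
  }
}
```

Under such a scheme, an in-memory B-tree index and an externally hosted index would both be ordinary registry entries, which is what lets the storage location (driver memory vs. a shared service) vary per Goal 1 without changing callers.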



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)