[GitHub] carbondata pull request #2440: [WIP][CarbonStore] implement RESTful API: cre...


[GitHub] carbondata pull request #2440: [WIP][CarbonStore] implement RESTful API: cre...

qiuchenjian-2
GitHub user QiangCai opened a pull request:

    https://github.com/apache/carbondata/pull/2440

    [WIP][CarbonStore] implement RESTful API: create table, load data and search

    test file:
    store/core/src/test/java/org/apache/carbondata/store/StoreTest.java
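
    For reviewers who want to try the API, a minimal client call could look like the
    sketch below. Only the `/table/create` path comes from this PR's controller; the
    host, port, and JSON field names are illustrative assumptions.

        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;

        public class CreateTableClientSketch {
          public static void main(String[] args) throws Exception {
            // Endpoint path matches HorizonController in this PR; host/port are placeholders.
            URL url = new URL("http://localhost:8080/table/create");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            // Hypothetical JSON body; the real schema is defined by TableRequest in this PR.
            String body = "{\"databaseName\":\"default\",\"tableName\":\"t1\"}";
            try (OutputStream out = conn.getOutputStream()) {
              out.write(body.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("HTTP status: " + conn.getResponseCode());
          }
        }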
   
     - [ ] Any interfaces changed?
     
     - [ ] Any backward compatibility impacted?
     
     - [ ] Document update required?
   
     - [ ] Testing done
            Please provide details on:
            - Whether new unit test cases have been added, or why no new tests are required.
            - How was it tested? Please attach the test report.
            - Is it a performance-related change? Please attach the performance test report.
            - Any additional information that will help reviewers test this change.
           
     - [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.
   


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/QiangCai/carbondata carbonstore

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/2440.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #2440
   
----
commit cc99073b2d4c08c429f9420277967d6380e3cad0
Author: QiangCai <qiangcai@...>
Date:   2018-07-03T12:21:18Z

    carbonstore implement create table, load data and search api

----


---

[GitHub] carbondata pull request #2440: [WIP][CarbonStore] implement RESTful API: cre...

qiuchenjian-2
Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/2440#discussion_r200022137
 
    --- Diff: core/src/main/java/org/apache/carbondata/core/metadata/schema/table/TableSchemaBuilder.java ---
    @@ -54,12 +54,22 @@
     
       private List<ColumnSchema> measures = new LinkedList<>();
     
    +  private Map<String, String> properties;
    +
       private int blockSize;
     
       private int blockletSize;
     
       private String tableName;
     
    +  public TableSchemaBuilder properties(Map<String, String> properties) {
    --- End diff --
   
     change the name to `tableProperties`
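
     For example (a sketch only; the method body is not shown in this diff, and the
     assignment below is an assumption about a typical builder):

         public TableSchemaBuilder tableProperties(Map<String, String> tableProperties) {
           // assumed body: keep the map for use when the schema is built
           this.properties = tableProperties;
           return this;
         }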


---

[GitHub] carbondata pull request #2440: [WIP][CarbonStore] implement RESTful API: cre...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/2440#discussion_r200022461
 
    --- Diff: hadoop/src/main/java/org/apache/carbondata/hadoop/CarbonRecordReader.java ---
    @@ -50,6 +50,8 @@
       protected QueryExecutor queryExecutor;
       private InputMetricsStats inputMetricsStats;
     
    +  protected boolean isSearchMode = false;
    --- End diff --
   
     This flag is not very intuitive to understand; it is better to add a subclass that overrides the `close` function.
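
     As an illustration of that suggestion (hypothetical class names; what `close`
     actually needs to skip depends on the existing CarbonRecordReader code):

         import java.io.IOException;

         // Stand-in for CarbonRecordReader, only to show the shape of the suggestion.
         class BaseRecordReader {
           public void close() throws IOException {
             // normal cleanup, e.g. clearing block-level caches
           }
         }

         // Search mode gets its own reader type instead of an isSearchMode flag.
         class SearchModeRecordReader extends BaseRecordReader {
           @Override
           public void close() throws IOException {
             // skip the cleanup that search mode wants to avoid
           }
         }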


---

[GitHub] carbondata pull request #2440: [WIP][CarbonStore] implement RESTful API: cre...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/2440#discussion_r200023235
 
    --- Diff: processing/src/main/java/org/apache/carbondata/processing/util/CarbonLoaderUtil.java ---
    @@ -577,7 +577,7 @@ public static Dictionary getDictionary(AbsoluteTableIdentifier absoluteTableIden
        * @param noOfNodesInput -1 if number of nodes has to be decided
        *                       based on block location information
        * @param blockAssignmentStrategy strategy used to assign blocks
    -   * @param loadMinSize the property load_min_size_inmb specified by the user
    +   * @param expectedMinSizePerNode
    --- End diff --
   
     please add a comment for this parameter


---

[GitHub] carbondata pull request #2440: [WIP][CarbonStore] implement RESTful API: cre...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/2440#discussion_r200023576
 
    --- Diff: store/conf/store.conf ---
    @@ -0,0 +1,9 @@
    +carbon.worker.host=127.0.0.1
    --- End diff --
   
     please describe whether this conf file is for the master or for the worker


---

[GitHub] carbondata pull request #2440: [WIP][CarbonStore] implement RESTful API: cre...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/2440#discussion_r200025216
 
    --- Diff: store/core/pom.xml ---
    @@ -40,10 +46,41 @@
           <artifactId>scalatest_${scala.binary.version}</artifactId>
           <scope>test</scope>
         </dependency>
    +    <dependency>
    +      <groupId>org.springframework.boot</groupId>
    --- End diff --
   
     It is better to put the REST interface into a separate module, so that store-core provides only the Store API


---

[GitHub] carbondata pull request #2440: [WIP][CarbonStore] implement RESTful API: cre...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/2440#discussion_r200026300
 
    --- Diff: store/core/src/main/java/org/apache/carbondata/store/master/Master.java ---
    @@ -0,0 +1,530 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.carbondata.store.master;
    +
    +import java.io.File;
    +import java.io.IOException;
    +import java.lang.ref.SoftReference;
    +import java.net.BindException;
    +import java.util.ArrayList;
    +import java.util.HashMap;
    +import java.util.List;
    +import java.util.Map;
    +import java.util.Objects;
    +import java.util.Random;
    +import java.util.Set;
    +import java.util.UUID;
    +import java.util.concurrent.ExecutionException;
    +import java.util.concurrent.Future;
    +import java.util.concurrent.TimeUnit;
    +import java.util.concurrent.TimeoutException;
    +
    +import org.apache.carbondata.common.logging.LogService;
    +import org.apache.carbondata.common.logging.LogServiceFactory;
    +import org.apache.carbondata.core.constants.CarbonCommonConstants;
    +import org.apache.carbondata.core.datastore.block.Distributable;
    +import org.apache.carbondata.core.datastore.impl.FileFactory;
    +import org.apache.carbondata.core.datastore.row.CarbonRow;
    +import org.apache.carbondata.core.exception.InvalidConfigurationException;
    +import org.apache.carbondata.core.fileoperations.FileWriteOperation;
    +import org.apache.carbondata.core.locks.CarbonLockUtil;
    +import org.apache.carbondata.core.locks.ICarbonLock;
    +import org.apache.carbondata.core.metadata.AbsoluteTableIdentifier;
    +import org.apache.carbondata.core.metadata.SegmentFileStore;
    +import org.apache.carbondata.core.metadata.converter.SchemaConverter;
    +import org.apache.carbondata.core.metadata.converter.ThriftWrapperSchemaConverterImpl;
    +import org.apache.carbondata.core.metadata.schema.table.CarbonTable;
    +import org.apache.carbondata.core.metadata.schema.table.TableInfo;
    +import org.apache.carbondata.core.mutate.CarbonUpdateUtil;
    +import org.apache.carbondata.core.scan.expression.Expression;
    +import org.apache.carbondata.core.statusmanager.LoadMetadataDetails;
    +import org.apache.carbondata.core.statusmanager.SegmentStatus;
    +import org.apache.carbondata.core.statusmanager.SegmentStatusManager;
    +import org.apache.carbondata.core.util.CarbonProperties;
    +import org.apache.carbondata.core.util.CarbonUtil;
    +import org.apache.carbondata.core.util.path.CarbonTablePath;
    +import org.apache.carbondata.core.writer.ThriftWriter;
    +import org.apache.carbondata.hadoop.CarbonMultiBlockSplit;
    +import org.apache.carbondata.hadoop.api.CarbonInputFormat;
    +import org.apache.carbondata.hadoop.api.CarbonTableInputFormat;
    +import org.apache.carbondata.hadoop.util.CarbonInputFormatUtil;
    +import org.apache.carbondata.processing.loading.model.CarbonLoadModel;
    +import org.apache.carbondata.processing.util.CarbonLoaderUtil;
    +import org.apache.carbondata.store.conf.StoreConf;
    +import org.apache.carbondata.store.exception.ExecutionTimeoutException;
    +import org.apache.carbondata.store.exception.StoreException;
    +import org.apache.carbondata.store.rest.controller.Horizon;
    +import org.apache.carbondata.store.rpc.RegistryService;
    +import org.apache.carbondata.store.rpc.ServiceFactory;
    +import org.apache.carbondata.store.rpc.StoreService;
    +import org.apache.carbondata.store.rpc.impl.RegistryServiceImpl;
    +import org.apache.carbondata.store.rpc.impl.Status;
    +import org.apache.carbondata.store.rpc.model.BaseResponse;
    +import org.apache.carbondata.store.rpc.model.LoadDataRequest;
    +import org.apache.carbondata.store.rpc.model.QueryRequest;
    +import org.apache.carbondata.store.rpc.model.QueryResponse;
    +import org.apache.carbondata.store.rpc.model.RegisterWorkerRequest;
    +import org.apache.carbondata.store.rpc.model.RegisterWorkerResponse;
    +import org.apache.carbondata.store.rpc.model.ShutdownRequest;
    +import org.apache.carbondata.store.scheduler.Schedulable;
    +import org.apache.carbondata.store.scheduler.Scheduler;
    +import org.apache.carbondata.store.util.StoreUtil;
    +
    +import org.apache.hadoop.conf.Configuration;
    +import org.apache.hadoop.ipc.RPC;
    +import org.apache.hadoop.mapred.JobConf;
    +import org.apache.hadoop.mapreduce.InputSplit;
    +import org.apache.hadoop.mapreduce.Job;
    +
    +/**
    + * Master of CarbonSearch.
    + * It provides a Registry service for worker to register.
    + * And it provides search API to fire RPC call to workers.
    + */
    +
    +public class Master {
    +
    +  private static Master instance = null;
    +
    +  private static LogService LOGGER = LogServiceFactory.getLogService(Master.class.getName());
    +
    +  private Map<String, SoftReference<CarbonTable>> cacheTables;
    +
    +  // worker host address map to EndpointRef
    +  private StoreConf conf;
    +  private Configuration hadoopConf;
    +  private Random random = new Random();
    +  private RPC.Server registryServer = null;
    +  private Scheduler scheduler = new Scheduler();
    +
    +  private Master(StoreConf conf) {
    +    cacheTables = new HashMap<>();
    +    this.conf = conf;
    +    this.hadoopConf = this.conf.newHadoopConf();
    +  }
    +
    +  public void start() {
    --- End diff --
   
     It seems `startService` alone is enough; the `start` wrapper looks unnecessary


---

[GitHub] carbondata pull request #2440: [WIP][CarbonStore] implement RESTful API: cre...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/2440#discussion_r200026708
 
    --- Diff: store/core/src/main/java/org/apache/carbondata/store/master/Master.java ---
    @@ -0,0 +1,530 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.carbondata.store.master;
    +
    +import java.io.File;
    +import java.io.IOException;
    +import java.lang.ref.SoftReference;
    +import java.net.BindException;
    +import java.util.ArrayList;
    +import java.util.HashMap;
    +import java.util.List;
    +import java.util.Map;
    +import java.util.Objects;
    +import java.util.Random;
    +import java.util.Set;
    +import java.util.UUID;
    +import java.util.concurrent.ExecutionException;
    +import java.util.concurrent.Future;
    +import java.util.concurrent.TimeUnit;
    +import java.util.concurrent.TimeoutException;
    +
    +import org.apache.carbondata.common.logging.LogService;
    +import org.apache.carbondata.common.logging.LogServiceFactory;
    +import org.apache.carbondata.core.constants.CarbonCommonConstants;
    +import org.apache.carbondata.core.datastore.block.Distributable;
    +import org.apache.carbondata.core.datastore.impl.FileFactory;
    +import org.apache.carbondata.core.datastore.row.CarbonRow;
    +import org.apache.carbondata.core.exception.InvalidConfigurationException;
    +import org.apache.carbondata.core.fileoperations.FileWriteOperation;
    +import org.apache.carbondata.core.locks.CarbonLockUtil;
    +import org.apache.carbondata.core.locks.ICarbonLock;
    +import org.apache.carbondata.core.metadata.AbsoluteTableIdentifier;
    +import org.apache.carbondata.core.metadata.SegmentFileStore;
    +import org.apache.carbondata.core.metadata.converter.SchemaConverter;
    +import org.apache.carbondata.core.metadata.converter.ThriftWrapperSchemaConverterImpl;
    +import org.apache.carbondata.core.metadata.schema.table.CarbonTable;
    +import org.apache.carbondata.core.metadata.schema.table.TableInfo;
    +import org.apache.carbondata.core.mutate.CarbonUpdateUtil;
    +import org.apache.carbondata.core.scan.expression.Expression;
    +import org.apache.carbondata.core.statusmanager.LoadMetadataDetails;
    +import org.apache.carbondata.core.statusmanager.SegmentStatus;
    +import org.apache.carbondata.core.statusmanager.SegmentStatusManager;
    +import org.apache.carbondata.core.util.CarbonProperties;
    +import org.apache.carbondata.core.util.CarbonUtil;
    +import org.apache.carbondata.core.util.path.CarbonTablePath;
    +import org.apache.carbondata.core.writer.ThriftWriter;
    +import org.apache.carbondata.hadoop.CarbonMultiBlockSplit;
    +import org.apache.carbondata.hadoop.api.CarbonInputFormat;
    +import org.apache.carbondata.hadoop.api.CarbonTableInputFormat;
    +import org.apache.carbondata.hadoop.util.CarbonInputFormatUtil;
    +import org.apache.carbondata.processing.loading.model.CarbonLoadModel;
    +import org.apache.carbondata.processing.util.CarbonLoaderUtil;
    +import org.apache.carbondata.store.conf.StoreConf;
    +import org.apache.carbondata.store.exception.ExecutionTimeoutException;
    +import org.apache.carbondata.store.exception.StoreException;
    +import org.apache.carbondata.store.rest.controller.Horizon;
    +import org.apache.carbondata.store.rpc.RegistryService;
    +import org.apache.carbondata.store.rpc.ServiceFactory;
    +import org.apache.carbondata.store.rpc.StoreService;
    +import org.apache.carbondata.store.rpc.impl.RegistryServiceImpl;
    +import org.apache.carbondata.store.rpc.impl.Status;
    +import org.apache.carbondata.store.rpc.model.BaseResponse;
    +import org.apache.carbondata.store.rpc.model.LoadDataRequest;
    +import org.apache.carbondata.store.rpc.model.QueryRequest;
    +import org.apache.carbondata.store.rpc.model.QueryResponse;
    +import org.apache.carbondata.store.rpc.model.RegisterWorkerRequest;
    +import org.apache.carbondata.store.rpc.model.RegisterWorkerResponse;
    +import org.apache.carbondata.store.rpc.model.ShutdownRequest;
    +import org.apache.carbondata.store.scheduler.Schedulable;
    +import org.apache.carbondata.store.scheduler.Scheduler;
    +import org.apache.carbondata.store.util.StoreUtil;
    +
    +import org.apache.hadoop.conf.Configuration;
    +import org.apache.hadoop.ipc.RPC;
    +import org.apache.hadoop.mapred.JobConf;
    +import org.apache.hadoop.mapreduce.InputSplit;
    +import org.apache.hadoop.mapreduce.Job;
    +
    +/**
    + * Master of CarbonSearch.
    + * It provides a Registry service for worker to register.
    + * And it provides search API to fire RPC call to workers.
    + */
    +
    +public class Master {
    +
    +  private static Master instance = null;
    +
    +  private static LogService LOGGER = LogServiceFactory.getLogService(Master.class.getName());
    +
    +  private Map<String, SoftReference<CarbonTable>> cacheTables;
    +
    +  // worker host address map to EndpointRef
    +  private StoreConf conf;
    +  private Configuration hadoopConf;
    +  private Random random = new Random();
    +  private RPC.Server registryServer = null;
    +  private Scheduler scheduler = new Scheduler();
    +
    +  private Master(StoreConf conf) {
    +    cacheTables = new HashMap<>();
    +    this.conf = conf;
    +    this.hadoopConf = this.conf.newHadoopConf();
    +  }
    +
    +  public void start() {
    +    try {
    +      startService();
    +    } catch (IOException e) {
    +      LOGGER.error(e, "master failed to start");
    +    }
    +  }
    +
    +  /**
    +   * start service and listen on port passed in constructor
    +   */
    +  public void startService() throws IOException {
    +    if (registryServer == null) {
    +
    +      BindException exception;
    +      // we will try to create service at worse case 100 times
    +      int numTry = 100;
    +      String host = conf.masterHost();
    +      int port = conf.masterPort();
    +      LOGGER.info("building registry-service on " + host + ":" + port);
    +
    +      RegistryService registryService = new RegistryServiceImpl(this);
    +      do {
    +        try {
    +          registryServer = new RPC.Builder(hadoopConf).setBindAddress(host).setPort(port)
    +              .setProtocol(RegistryService.class).setInstance(registryService).build();
    +
    +          registryServer.start();
    +          numTry = 0;
    +          exception = null;
    +        } catch (BindException e) {
    +          // port is occupied, increase the port number and try again
    +          exception = e;
    +          LOGGER.error(e, "start registry-service failed");
    +          port = port + 1;
    +          numTry = numTry - 1;
    +        }
    +      } while (numTry > 0);
    +      if (exception != null) {
    +        // we have tried many times, but still failed to find an available port
    +        throw exception;
    +      }
    +      LOGGER.info("registry-service started");
    +    } else {
    +      LOGGER.info("Search mode master has already started");
    +    }
    +  }
    +
    +  public void stopService() throws InterruptedException {
    +    if (registryServer != null) {
    +      registryServer.stop();
    +      registryServer.join();
    +      registryServer = null;
    +    }
    +  }
    +
    +  public void stopAllWorkers() throws IOException {
    +    for (Schedulable worker : getWorkers()) {
    +      try {
    +        worker.service.shutdown(new ShutdownRequest("user"));
    +      } catch (Throwable throwable) {
    +        throw new IOException(throwable);
    +      }
    +      scheduler.removeWorker(worker.getAddress());
    +    }
    +  }
    +
    +  /**
    +   * A new searcher is trying to register, add it to the map and connect to this searcher
    +   */
    +  public RegisterWorkerResponse addWorker(RegisterWorkerRequest request) throws IOException {
    +    LOGGER.info(
    +        "Receive Register request from worker " + request.getHostAddress() + ":" + request.getPort()
    +            + " with " + request.getCores() + " cores");
    +    String workerId = UUID.randomUUID().toString();
    +    String workerAddress = request.getHostAddress();
    +    int workerPort = request.getPort();
    +    LOGGER.info(
    +        "connecting to worker " + request.getHostAddress() + ":" + request.getPort() + ", workerId "
    +            + workerId);
    +
    +    StoreService searchService = ServiceFactory.createStoreService(workerAddress, workerPort);
    +    scheduler.addWorker(
    +        new Schedulable(workerId, workerAddress, workerPort, request.getCores(), searchService));
    +    LOGGER.info("worker " + request + " registered");
    +    return new RegisterWorkerResponse(workerId);
    +  }
    +
    +  private int onSuccess(int queryId, QueryResponse result, List<CarbonRow> output, long globalLimit)
    +      throws IOException {
    +    // in case of RPC success, collect all rows in response message
    +    if (result.getQueryId() != queryId) {
    +      throw new IOException(
    +          "queryId in response does not match request: " + result.getQueryId() + " != " + queryId);
    +    }
    +    if (result.getStatus() != Status.SUCCESS.ordinal()) {
    +      throw new IOException("failure in worker: " + result.getMessage());
    +    }
    +    int rowCount = 0;
    +    Object[][] rows = result.getRows();
    +    for (Object[] row : rows) {
    +      output.add(new CarbonRow(row));
    +      rowCount++;
    +      if (rowCount >= globalLimit) {
    +        break;
    +      }
    +    }
    +    LOGGER.info("[QueryId:" + queryId + "] accumulated result size " + rowCount);
    +    return rowCount;
    +  }
    +
    +  private void onFailure(Throwable e) throws IOException {
    +    throw new IOException("exception in worker: " + e.getMessage());
    +  }
    +
    +  private void onTimeout() {
    +    throw new ExecutionTimeoutException();
    +  }
    +
    +  public String getTableFolder(String database, String tableName) {
    +    return conf.storeLocation() + File.separator + database + File.separator + tableName;
    +  }
    +
    +  public CarbonTable getTable(String database, String tableName) throws StoreException {
    +    String tablePath = getTableFolder(database, tableName);
    +    CarbonTable carbonTable;
    +    SoftReference<CarbonTable> reference = cacheTables.get(tablePath);
    +    if (reference != null) {
    +      carbonTable = reference.get();
    +      if (carbonTable != null) {
    +        return carbonTable;
    +      }
    +    }
    +
    +    try {
    +      org.apache.carbondata.format.TableInfo tableInfo =
    +          CarbonUtil.readSchemaFile(CarbonTablePath.getSchemaFilePath(tablePath));
    +      SchemaConverter schemaConverter = new ThriftWrapperSchemaConverterImpl();
    +      TableInfo tableInfo1 = schemaConverter.fromExternalToWrapperTableInfo(tableInfo, "", "", "");
    +      tableInfo1.setTablePath(tablePath);
    +      carbonTable = CarbonTable.buildFromTableInfo(tableInfo1);
    +      cacheTables.put(tablePath, new SoftReference<CarbonTable>(carbonTable));
    +      return carbonTable;
    +    } catch (IOException e) {
    +      String message = "Failed to get table from " + tablePath;
    +      LOGGER.error(e, message);
    +      throw new StoreException(message);
    +    }
    +  }
    +
    +  public boolean createTable(TableInfo tableInfo, boolean ifNotExists) throws IOException {
    +    AbsoluteTableIdentifier identifier = tableInfo.getOrCreateAbsoluteTableIdentifier();
    +    boolean tableExists = FileFactory.isFileExist(identifier.getTablePath());
    +    if (tableExists) {
    +      if (ifNotExists) {
    +        return true;
    +      } else {
    +        throw new IOException(
    +            "car't create table " + tableInfo.getDatabaseName() + "." + tableInfo.getFactTable()
    +                .getTableName() + ", because it already exists");
    +      }
    +    }
    +
    +    SchemaConverter schemaConverter = new ThriftWrapperSchemaConverterImpl();
    +    String databaseName = tableInfo.getDatabaseName();
    +    String tableName = tableInfo.getFactTable().getTableName();
    +    org.apache.carbondata.format.TableInfo thriftTableInfo =
    +        schemaConverter.fromWrapperToExternalTableInfo(tableInfo, databaseName, tableName);
    +
    +    String schemaFilePath = CarbonTablePath.getSchemaFilePath(identifier.getTablePath());
    +    String schemaMetadataPath = CarbonTablePath.getFolderContainingFile(schemaFilePath);
    +    FileFactory.FileType fileType = FileFactory.getFileType(schemaMetadataPath);
    +    try {
    +      if (!FileFactory.isFileExist(schemaMetadataPath, fileType)) {
    +        boolean isDirCreated = FileFactory.mkdirs(schemaMetadataPath, fileType);
    +        if (!isDirCreated) {
    +          throw new IOException("Failed to create the metadata directory " + schemaMetadataPath);
    +        }
    +      }
    +      ThriftWriter thriftWriter = new ThriftWriter(schemaFilePath, false);
    +      thriftWriter.open(FileWriteOperation.OVERWRITE);
    +      thriftWriter.write(thriftTableInfo);
    +      thriftWriter.close();
    +      return true;
    +    } catch (IOException e) {
    +      LOGGER.error(e, "Failed to handle create table");
    +      throw e;
    +    }
    +  }
    +
    +  public void openSegment(CarbonLoadModel loadModel, boolean isOverwriteTable) throws IOException {
    --- End diff --
   
     Please minimize the visibility; as of now, I think it is better to expose only createTable, loadData, etc. in Master
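
     A rough illustration of that idea, with hypothetical members (only the high-level
     operations stay public, internal steps become package-private):

         public class MasterVisibilitySketch {
           // public API of the master
           public void loadData(String database, String table) {
             openSegment(database, table);
             // ... actual loading would happen here ...
             commitSegment(database, table);
           }

           // package-private: not visible outside the master's package
           void openSegment(String database, String table) {
             // segment bookkeeping
           }

           void commitSegment(String database, String table) {
             // update segment status
           }
         }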


---

[GitHub] carbondata pull request #2440: [WIP][CarbonStore] implement RESTful API: cre...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/2440#discussion_r200028073
 
    --- Diff: store/core/src/main/java/org/apache/carbondata/store/rest/controller/HorizonController.java ---
    @@ -0,0 +1,96 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.carbondata.store.rest.controller;
    +
    +import java.util.UUID;
    +
    +import org.apache.carbondata.common.logging.LogService;
    +import org.apache.carbondata.common.logging.LogServiceFactory;
    +import org.apache.carbondata.core.datastore.row.CarbonRow;
    +import org.apache.carbondata.store.exception.StoreException;
    +import org.apache.carbondata.store.rest.model.dto.Load;
    +import org.apache.carbondata.store.rest.model.dto.Select;
    +import org.apache.carbondata.store.rest.model.dto.Table;
    +import org.apache.carbondata.store.rest.model.validate.RequestValidator;
    +import org.apache.carbondata.store.rest.model.vo.LoadRequest;
    +import org.apache.carbondata.store.rest.model.vo.SelectRequest;
    +import org.apache.carbondata.store.rest.model.vo.SelectResponse;
    +import org.apache.carbondata.store.rest.model.vo.TableRequest;
    +import org.apache.carbondata.store.rest.service.HorizonService;
    +
    +import org.springframework.http.HttpStatus;
    +import org.springframework.http.MediaType;
    +import org.springframework.http.ResponseEntity;
    +import org.springframework.web.bind.annotation.RequestBody;
    +import org.springframework.web.bind.annotation.RequestMapping;
    +import org.springframework.web.bind.annotation.RestController;
    +
    +@RestController
    +public class HorizonController {
    +
    +  private static LogService LOGGER =
    +      LogServiceFactory.getLogService(HorizonController.class.getName());
    +
    +  private HorizonService service;
    +
    +  public HorizonController() {
    +    service = HorizonService.getInstance();
    +  }
    +
    +  @RequestMapping(value = "/hello")
    +  public ResponseEntity<String> hello() {
    --- End diff --
   
    please remove this


---

[GitHub] carbondata pull request #2440: [WIP][CarbonStore] implement RESTful API: cre...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/2440#discussion_r200028170
 
    --- Diff: store/core/src/main/java/org/apache/carbondata/store/rest/controller/HorizonController.java ---
    @@ -0,0 +1,96 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.carbondata.store.rest.controller;
    +
    +import java.util.UUID;
    +
    +import org.apache.carbondata.common.logging.LogService;
    +import org.apache.carbondata.common.logging.LogServiceFactory;
    +import org.apache.carbondata.core.datastore.row.CarbonRow;
    +import org.apache.carbondata.store.exception.StoreException;
    +import org.apache.carbondata.store.rest.model.dto.Load;
    +import org.apache.carbondata.store.rest.model.dto.Select;
    +import org.apache.carbondata.store.rest.model.dto.Table;
    +import org.apache.carbondata.store.rest.model.validate.RequestValidator;
    +import org.apache.carbondata.store.rest.model.vo.LoadRequest;
    +import org.apache.carbondata.store.rest.model.vo.SelectRequest;
    +import org.apache.carbondata.store.rest.model.vo.SelectResponse;
    +import org.apache.carbondata.store.rest.model.vo.TableRequest;
    +import org.apache.carbondata.store.rest.service.HorizonService;
    +
    +import org.springframework.http.HttpStatus;
    +import org.springframework.http.MediaType;
    +import org.springframework.http.ResponseEntity;
    +import org.springframework.web.bind.annotation.RequestBody;
    +import org.springframework.web.bind.annotation.RequestMapping;
    +import org.springframework.web.bind.annotation.RestController;
    +
    +@RestController
    +public class HorizonController {
    +
    +  private static LogService LOGGER =
    +      LogServiceFactory.getLogService(HorizonController.class.getName());
    +
    +  private HorizonService service;
    +
    +  public HorizonController() {
    +    service = HorizonService.getInstance();
    +  }
    +
    +  @RequestMapping(value = "/hello")
    +  public ResponseEntity<String> hello() {
    +    return new ResponseEntity<>("Hello world", HttpStatus.OK);
    +  }
    +
    +
    +  @RequestMapping(value = "/table/create", produces = MediaType.APPLICATION_JSON_VALUE)
    --- End diff --
   
     please add a comment describing the request and response content
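
     For example, a comment along these lines on the handler would help (the field
     descriptions are assumptions based on the DTO names in this PR):

         /**
          * Creates a table.
          *
          * Request body: JSON mapped to TableRequest (database name, table name,
          * column definitions, optional table properties).
          * Response body: JSON describing the result of the create operation.
          */
         @RequestMapping(value = "/table/create", produces = MediaType.APPLICATION_JSON_VALUE)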


---

[GitHub] carbondata pull request #2440: [WIP][CarbonStore] implement RESTful API: cre...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/2440#discussion_r200028299
 
    --- Diff: store/core/src/main/java/org/apache/carbondata/store/rest/model/dto/Select.java ---
    @@ -0,0 +1,88 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.carbondata.store.rest.model.dto;
    --- End diff --
   
     please rename the package from `dto` to `service`


---

[GitHub] carbondata pull request #2440: [WIP][CarbonStore] implement RESTful API: cre...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/2440#discussion_r200029615
 
    --- Diff: store/core/src/main/java/org/apache/carbondata/store/rest/service/ServiceUtil.java ---
    @@ -0,0 +1,107 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.carbondata.store.rest.service;
    +
    +import java.util.ArrayList;
    +import java.util.Arrays;
    +import java.util.List;
    +import java.util.Map;
    +import java.util.Set;
    +
    +import org.apache.carbondata.core.constants.CarbonCommonConstants;
    +import org.apache.carbondata.core.metadata.datatype.DataTypes;
    +import org.apache.carbondata.core.scan.expression.Expression;
    +import org.apache.carbondata.core.util.CarbonUtil;
    +import org.apache.carbondata.sdk.file.Field;
    +import org.apache.carbondata.store.exception.StoreException;
    +
    +import org.apache.commons.lang.StringUtils;
    +
    +public class ServiceUtil {
    +
    +  public static Expression parseFilter(String filter) {
    +    if (filter == null) {
    +      return null;
    +    }
    +
    +    // TODO parse filter sql to Expression object
    +
    +    return null;
    +  }
    +
    +  public static List<String> prepareSortColumns(
    --- End diff --
   
     please move this into `TableSchemaBuilder`, which validates the sort columns
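
     A sketch of the validation that could move into `TableSchemaBuilder` (the helper
     and its names are assumptions, not existing API):

         import java.util.List;
         import java.util.Set;

         // Hypothetical check that could live on TableSchemaBuilder.
         final class SortColumnCheck {
           static void validateSortColumns(List<String> sortColumns, Set<String> allColumnNames) {
             for (String column : sortColumns) {
               if (column == null || !allColumnNames.contains(column.toLowerCase())) {
                 throw new IllegalArgumentException("sort column not found in table schema: " + column);
               }
             }
           }
         }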


---

[GitHub] carbondata issue #2440: [WIP][CarbonStore] implement RESTful API: create tab...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user jackylk commented on the issue:

    https://github.com/apache/carbondata/pull/2440
 
     please add a PR description that describes what is supported and what is not supported


---

[GitHub] carbondata issue #2440: [WIP][CarbonStore] implement RESTful API: create tab...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user jackylk commented on the issue:

    https://github.com/apache/carbondata/pull/2440
 
     Please start a discussion on the mailing list first and create a JIRA issue.
     Also, describe what is supported and what is not supported in the PR description.


---

[GitHub] carbondata issue #2440: [CARBONDATA-2690][CarbonStore] implement RESTful API...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/carbondata/pull/2440
 
    Build failed with Spark 2.1.0. Please check CI: http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/6795/



---

[GitHub] carbondata pull request #2440: [CARBONDATA-2690][CarbonStore] implement REST...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/2440#discussion_r200251585
 
    --- Diff: store/core/src/main/java/org/apache/carbondata/store/conf/StoreConf.java ---
    @@ -0,0 +1,185 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.carbondata.store.conf;
    +
    +import java.io.DataInput;
    +import java.io.DataOutput;
    +import java.io.IOException;
    +import java.io.Serializable;
    +import java.util.HashMap;
    +import java.util.Map;
    +import java.util.Set;
    +
    +import org.apache.carbondata.core.datastore.impl.FileFactory;
    +import org.apache.carbondata.store.util.StoreUtil;
    +
    +import org.apache.hadoop.conf.Configuration;
    +import org.apache.hadoop.io.Writable;
    +import org.apache.hadoop.io.WritableUtils;
    +
    +public class StoreConf implements Serializable, Writable {
    +
    +  public static final String SELECT_PROJECTION = "carbon.select.projection";
    +  public static final String SELECT_FILTER = "carbon.select.filter";
    +  public static final String SELECT_LIMIT = "carbon.select.limit";
    +
    +  public static final String SELECT_ID = "carbon.select.id";
    +
    +  public static final String WORKER_HOST = "carbon.worker.host";
    +  public static final String WORKER_PORT = "carbon.worker.port";
    +  public static final String WORKER_CORE_NUM = "carbon.worker.core.num";
    +  public static final String MASTER_HOST = "carbon.master.host";
    +  public static final String MASTER_PORT = "carbon.master.port";
    +
    +  public static final String STORE_TEMP_LOCATION = "carbon.store.temp.location";
    +  public static final String STORE_LOCATION = "carbon.store.location";
    +
    +  private Map<String, String> conf = new HashMap<>();
    +
    +  public StoreConf() {
    +  }
    +
    +  public StoreConf(String filePath) {
    +    load(filePath);
    +  }
    +
    +  public StoreConf conf(String key, String value) {
    +    conf.put(key, value);
    +    return this;
    +  }
    +
    +  public StoreConf conf(String key, int value) {
    +    conf.put(key, "" + value);
    +    return this;
    +  }
    +
    +  public void load(String filePath) {
    +    StoreUtil.loadProperties(filePath, this);
    +  }
    +
    +  public void conf(StoreConf conf) {
    +    this.conf.putAll(conf.conf);
    +  }
    +
    +  public Object conf(String key) {
    +    return conf.get(key);
    +  }
    +
    +  public String[] projection() {
    +    return stringArrayValue(SELECT_PROJECTION);
    +  }
    +
    +  public String filter() {
    +    return stringValue(SELECT_FILTER);
    +  }
    +
    +  public int limit() {
    +    return intValue(SELECT_LIMIT);
    +  }
    +
    +  public String masterHost() {
    +    return stringValue(MASTER_HOST);
    +  }
    +
    +  public int masterPort() {
    +    return intValue(MASTER_PORT);
    +  }
    +
    +  public String workerHost() {
    +    return stringValue(WORKER_HOST);
    +  }
    +
    +  public int workerPort() {
    +    return intValue(WORKER_PORT);
    +  }
    +
    +  public int workerCoreNum() {
    +    return intValue(WORKER_CORE_NUM);
    +  }
    +
    +  public String storeLocation() {
    +    return stringValue(STORE_LOCATION);
    +  }
    +
    +  public String[] storeTempLocation() {
    +    return stringArrayValue(STORE_TEMP_LOCATION);
    +  }
    +
    +  public String selectId() {
    +    return stringValue(SELECT_ID);
    +  }
    +
    +  public Configuration newHadoopConf() {
    +    Configuration hadoopConf = FileFactory.getConfiguration();
    +    for (Map.Entry<String, String> entry : conf.entrySet()) {
    +      String key = entry.getKey();
    +      String value = entry.getValue();
    +      if (key != null && value != null && key.startsWith("carbon.hadoop.")) {
    +        hadoopConf.set(key.substring("carbon.hadoop.".length()), value);
    +      }
    +    }
    +    return hadoopConf;
    +  }
    +
    +  private String stringValue(String key) {
    +    Object obj = conf.get(key);
    +    if (obj == null) {
    +      return null;
    +    }
    +    return obj.toString();
    +  }
    +
    +  private int intValue(String key) {
    +    String value = conf.get(key);
    +    if (value == null) {
    +      return -1;
    +    }
    +    return Integer.parseInt(value);
    +  }
    +
    +  private String[] stringArrayValue(String key) {
    +    String value = conf.get(key);
    +    if (value == null) {
    +      return null;
    +    }
    +    return value.split(",", -1);
    +  }
    +
    +  @Override public void write(DataOutput out) throws IOException {
    --- End diff --
   
     please move `@Override` to the previous line
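
     i.e. the annotation on its own line, consistent with the rest of the file:

         @Override
         public void write(DataOutput out) throws IOException {
           // body unchanged
         }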


---

[GitHub] carbondata pull request #2440: [CARBONDATA-2690][CarbonStore] implement REST...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/2440#discussion_r200252810
 
    --- Diff: store/horizon/src/main/anltr/Select.g4 ---
    @@ -0,0 +1,164 @@
    +/*
    --- End diff --
   
    please rename the file to `filter.g4`


---

[GitHub] carbondata issue #2440: [CARBONDATA-2690][CarbonStore] implement RESTful API...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/carbondata/pull/2440
 
    Build failed with Spark 2.1.0. Please check CI: http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/6806/



---

[GitHub] carbondata issue #2440: [CARBONDATA-2690][CarbonStore] implement RESTful API...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user QiangCai commented on the issue:

    https://github.com/apache/carbondata/pull/2440
 
    retest this please


---

[GitHub] carbondata issue #2440: [CARBONDATA-2690][CarbonStore] implement RESTful API...

qiuchenjian-2
In reply to this post by qiuchenjian-2
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/carbondata/pull/2440
 
    Build failed with Spark 2.1.0. Please check CI: http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/6864/



---