[ https://issues.apache.org/jira/browse/CARBONDATA-308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15633445#comment-15633445 ]
ASF GitHub Bot commented on CARBONDATA-308:
-------------------------------------------
Github user kumarvishal09 commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/262#discussion_r86391469
--- Diff: core/src/main/java/org/apache/carbondata/core/carbon/datastore/block/Distributable.java ---
@@ -16,10 +16,12 @@
*/
package org.apache.carbondata.core.carbon.datastore.block;
+import java.io.IOException;
+
/**
- * Abstract class which is maintains the locations of node.
+ * interface to get the locations of node. Used for making task distribution based on locality
*/
-public abstract class Distributable implements Comparable<Distributable> {
+public interface Distributable extends Comparable<Distributable> {
- public abstract String[] getLocations();
+ String[] getLocations() throws IOException;
--- End diff ---
Any reason to throw IOException from this method? I think this is not required.
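For context on the question above, here is a minimal sketch of the kind of implementation that would need the throws clause: a block backed by HDFS has to ask the file system for its block locations, and those FileSystem calls declare IOException. The HdfsBackedBlock class below is hypothetical and only illustrative; it assumes the Distributable interface from the diff is on the classpath and uses standard Hadoop FileSystem APIs.

import java.io.IOException;

import org.apache.carbondata.core.carbon.datastore.block.Distributable;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical implementation for illustration only, not part of the CarbonData code base.
public class HdfsBackedBlock implements Distributable {

  private final Path blockPath;

  public HdfsBackedBlock(Path blockPath) {
    this.blockPath = blockPath;
  }

  @Override
  public String[] getLocations() throws IOException {
    // Looking up block locations goes through FileSystem, whose calls declare
    // IOException; a caller like this is what the throws clause would serve.
    FileSystem fs = blockPath.getFileSystem(new Configuration());
    FileStatus status = fs.getFileStatus(blockPath);
    BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
    return blocks.length > 0 ? blocks[0].getHosts() : new String[0];
  }

  @Override
  public int compareTo(Distributable other) {
    // Placeholder ordering for the sketch; a real block would compare its identity.
    return Integer.compare(hashCode(), other.hashCode());
  }
}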
> Use CarbonInputFormat in CarbonScanRDD compute
> ----------------------------------------------
>
> Key: CARBONDATA-308
> URL: https://issues.apache.org/jira/browse/CARBONDATA-308
> Project: CarbonData
> Issue Type: Sub-task
> Components: spark-integration
> Reporter: Jacky Li
> Assignee: Jacky Li
> Fix For: 0.2.0-incubating
>
>
> Take CarbonScanRDD as the target RDD and modify it as follows:
> 1. On the driver side, only getSplits is required, so only the filter condition is needed there; the full QueryModel object does not have to be built on the driver, and its creation can move from the driver side to the executor side.
> 2. Use CarbonInputFormat.createRecordReader in CarbonScanRDD.compute instead of using QueryExecutor directly (see the sketch after this list).
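For reference, a minimal sketch of the split of responsibilities the description relies on, using only the standard org.apache.hadoop.mapreduce contract: getSplits runs once on the driver, while createRecordReader runs per split inside the task, which is where the QueryModel would then be built. CarbonInputFormat is assumed to follow this contract; the SplitPlanningSketch class and its helper methods are illustrative, not CarbonData code.

import java.io.IOException;
import java.util.List;

import org.apache.hadoop.mapreduce.InputFormat;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

// Illustrative sketch of the driver/executor split under the Hadoop InputFormat contract.
public final class SplitPlanningSketch {

  // Driver side: only split planning runs here, so only the filter condition
  // needs to be carried in the job configuration, not a full QueryModel.
  static List<InputSplit> planSplits(InputFormat<?, ?> format, JobContext job)
      throws IOException, InterruptedException {
    return format.getSplits(job);
  }

  // Executor side: each task opens its reader from its own split inside compute(),
  // which is where the per-task QueryModel would be created.
  static RecordReader<?, ?> openReader(InputFormat<?, ?> format, InputSplit split,
      TaskAttemptContext context) throws IOException, InterruptedException {
    RecordReader<?, ?> reader = format.createRecordReader(split, context);
    reader.initialize(split, context);
    return reader;
  }
}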
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)