Error while loading - Table is locked for updation. Please try after some time ( Spark 1.6.2 )


Sanoj M George
Hi All,

I am getting the error below while trying out CarbonData with Spark 1.6.2 /
Hadoop 2.6.5 / CarbonData 1.1.0-incubating-SNAPSHOT.

./bin/spark-shell --jars carbonlib/carbondata_2.10-1.1.0-incubating-SNAPSHOT-shade-hadoop2.2.0.jar

scala> import org.apache.spark.sql.CarbonContext
scala> val cc = new CarbonContext(sc)
scala> cc.sql("CREATE TABLE IF NOT EXISTS t1 (id string, name string, city string, age Int) STORED BY 'carbondata'")
scala> cc.sql("LOAD DATA INPATH '/home/cduser/spark/sample.csv' INTO TABLE t1")
INFO  05-02 14:57:22,346 - main Query [LOAD DATA INPATH '/HOME/CDUSER/SPARK/SAMPLE.CSV' INTO TABLE T1]
INFO  05-02 14:57:37,411 - Table MetaData Unlocked Successfully after data load
java.lang.RuntimeException: Table is locked for updation. Please try after some time
        at scala.sys.package$.error(package.scala:27)
        at org.apache.spark.sql.execution.command.LoadTable.run(carbonTableSchema.scala:360)
        at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
        at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)


I followed the docs at
https://github.com/apache/incubator-carbondata/blob/master/docs/installation-guide.md#installing-and-configuring-carbondata-on-standalone-spark-cluster
and
https://github.com/apache/incubator-carbondata/blob/master/docs/quick-start-guide.md
to install CarbonData.

While creating the table, I observed the following WARN message in the log:

main Query [CREATE TABLE DEFAULT.T1 USING CARBONDATA OPTIONS (TABLENAME
"DEFAULT.T1", TABLEPATH "/HOME/CDUSER/CARBON.STORE/DEFAULT/T1") ]

WARN  05-02 14:34:30,656 - Couldn't find corresponding Hive SerDe for data
source provider carbondata. Persisting data source relation `default`.`t1`
into Hive metastore in Spark SQL specific format, which is NOT compatible
with Hive.
INFO  05-02 14:34:30,755 - 0: create_table: Table(tableName:t1,
dbName:default, owner:cduser, createTime:1486290870, lastAccessTime:0,
retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:col,
type:array<string>, comment:from deserializer)], location:null,
inputFormat:org.apache.hadoop.mapred.SequenceFileInputFormat,
outputFormat:org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat,
compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null,
serializationLib:org.apache.hadoop.hive.serde2.MetadataTypedColumnsetSerDe,
parameters:{tablePath=/home/cduser/carbon.store/default/t1,
serialization.format=1, tableName=default.t1}), bucketCols:[], sortCols:[],
parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[],
skewedColValueLocationMaps:{})), partitionKeys:[],
parameters:{EXTERNAL=TRUE, spark.sql.sources.provider=carbondata},
viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE,
privileges:PrincipalPrivilegeSet(userPrivileges:{}, groupPrivileges:null,
rolePrivileges:null))


Appreciate any help in resolving this.

Thanks,
Sanoj

Re: Error while loading - Table is locked for updation. Please try after some time ( Spark 1.6.2 )

sraghunandan
Dear Sanoj,
Please refer to
http://apache-carbondata-mailing-list-archive.1130556.n5.nabble.com/Dictionary-file-is-locked-for-updation-td5076.html

Let me know if this thread doesn't address your problem.

Regards



Re: Error while loading - Table is locked for updation. Please try after some time ( Spark 1.6.2 )

Sanoj M George
Thanks, Raghunandan. I checked the thread, but it seems this error is caused by
something else.

Below are the parameters that I changed:

**** carbon.properties :
carbon.storelocation=hdfs://localhost:9000/opt/CarbonStore
carbon.ddl.base.hdfs.url=hdfs://localhost:9000/opt/data
carbon.kettle.home=/home/cduser/spark/carbonlib/carbonplugins

**** spark-defaults.conf :
carbon.kettle.home               /home/cduser/spark/carbonlib/carbonplugins
spark.driver.extraJavaOptions    -Dcarbon.properties.filepath=/home/cduser/spark/conf/carbon.properties
spark.executor.extraJavaOptions  -Dcarbon.properties.filepath=/home/cduser/spark/conf/carbon.properties

Although the store location is specified in carbon.properties, spark-shell was
using "/home/cduser/carbon.store" as the store location.
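For reference, a minimal sketch of forcing the store path from the shell instead of relying on carbon.properties (the HDFS URL is the one from my carbon.properties above; the two-argument constructor is the one shown in the quick-start guide):

```scala
import org.apache.spark.sql.CarbonContext

// Sketch: pass the store location explicitly instead of depending on
// carbon.properties being visible to the driver JVM.
val storePath = "hdfs://localhost:9000/opt/CarbonStore"
val cc = new CarbonContext(sc, storePath)

// The context exposes the store path it actually resolved:
println(cc.storePath)
```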

Regards


Re: Error while loading - Table is locked for updation. Please try after some time ( Spark 1.6.2 )

sraghunandan
You mean the issue is resolved?

Regards
Raghunandan


Re: Error while loading - Table is locked for updation. Please try after some time ( Spark 1.6.2 )

Sanoj M George
Not yet resolved; I am still getting the same error.


Re: Error while loading - Table is locked for updation. Please try after some time ( Spark 1.6.2 )

manishgupta88
Hi Sanoj,

Please check if there is any file with a .lock extension in the carbon store.

Also, when you start the thrift server, the carbon store location is printed
in the thrift server logs. Please check whether there is any mismatch between
the store location you provided and the location printed there.

Also, please provide the complete failure logs.
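A quick way to do the lock-file check above (the store path is illustrative; substitute your actual carbon.storelocation value):

```shell
# Illustrative store location; substitute your carbon.storelocation value.
STORE=${STORE:-/home/cduser/carbon.store}

# List any leftover lock files; a stale one left behind by a killed
# load can cause "Table is locked for updation" with no load running.
find "$STORE" -name '*.lock' -print

# Only if you are certain no load is in progress, remove them:
# find "$STORE" -name '*.lock' -delete
```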

Regards
Manish Gupta


Re: Error while loading - Table is locked for updation. Please try after some time ( Spark 1.6.2 )

李寅威
Hi Sanoj,


   Maybe you can try initializing the CarbonContext with the storePath
parameter, as follows:

    scala> val storePath = "hdfs://localhost:9000/home/hadoop/carbondata/bin/carbonshellstore"
    scala> val cc = new CarbonContext(sc, storePath)









Re: Error while loading - Table is locked for updation. Please try after some time ( Spark 1.6.2 )

Sanoj M George
In reply to this post by manishgupta88
Hi Manish,

Could not find any .lock files in the carbon store.

I am getting the error while running spark-shell; I did not try the thrift server. However, as you can see from the attached logs, it is using the default store location (not the one from carbon.properties):

scala> cc.storePath
res0: String = /home/cduser/carbon.store


Thanks,
Sanoj




On Mon, Feb 6, 2017 at 1:23 PM, manish gupta <[hidden email]> wrote:
Hi Sanoj,

Please check if there is any file with .lock extension in the carbon store.

Also when you start thrift server carbon store location will be printed in
the thrift server logs. Please validate if there is nay mismatch in the
store location provided by you and the store location getting printed in
the thrift server logs.

Also please provide the complete logs for failure.

Regards
Manish Gupta



[Attachment: spark-shell-log.txt (49K)]

Re: Error while loading - Table is locked for updation. Please try after some time ( Spark 1.6.2 )

Sanoj M George
In reply to this post by 李寅威 (Yinwei Li)
Hi Yinwei,

Tried this, it is using the new store path, but still getting the same
error.

Thanks

On Mon, Feb 6, 2017 at 1:38 PM, Yinwei Li <[hidden email]> wrote:

> Hi Sanoj,
>
>
>    Maybe you can try initializing the CarbonContext by setting the storePath
> parameter as follows:
>
>     scala> val storePath = "hdfs://localhost:9000/home/hadoop/carbondata/bin/carbonshellstore"
>     scala> val cc = new CarbonContext(sc, storePath)

Re: Error while loading - Table is locked for updation. Please try after some time ( Spark 1.6.2 )

manishgupta88
In reply to this post by Sanoj M George
Hi Sanoj,

Can you please try the below things.

1. Remove the carbon.properties file and let the system take all the default
values. In the logs you shared, I can see that while creating the
CarbonContext it prints the carbon.properties file path and all the
properties in it. So give an invalid path for carbon.properties and ensure
that none of the properties is printed while creating the CarbonContext.

2. If the 1st point does not work out, set the below property in
carbon.properties and try loading the data again.

carbon.lock.type=HDFSLOCK


Regards

Manish Gupta
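To act on the first part of this advice, a quick way to scan a store for leftover `.lock` files is sketched below. The store path, table name, and the simulated lock file are all illustrative stand-ins, not taken from the thread; point the path at your actual carbon.storelocation.

```shell
# Sketch: detect stale .lock files left behind by a failed load.
# A leftover lock is one cause of "Table is locked for updation".
# The store path is a stand-in; use your carbon.storelocation value.
STORE="${CARBON_STORE:-/tmp/carbon.store.demo}"

# Demo setup only: simulate a lock file left by a crashed load.
mkdir -p "$STORE/default/t1"
touch "$STORE/default/t1/meta.lock"

# Report stale locks; delete them only when no load is actually running.
STALE_LOCKS=$(find "$STORE" -name '*.lock')
echo "stale locks:"
echo "$STALE_LOCKS"

# For an HDFS store, the equivalent check would be something like:
#   hdfs dfs -ls -R hdfs://localhost:9000/opt/CarbonStore | grep '\.lock$'
```

If a lock file remains after all Spark sessions are closed, it is safe to remove it and retry the load.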


>

Re: Error while loading - Table is locked for updation. Please try after some time ( Spark 1.6.2 )

Sanoj M George
Hi Manish,

I installed it on another test cluster and now it is working fine when I
initialize the CarbonContext with the storePath, as suggested by Yinwei:
val cc = new CarbonContext(sc, "hdfs://localhost:9000/opt/CarbonStore")

Thanks
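As a sanity check before pointing the CarbonContext at an HDFS store, it can help to confirm the path is reachable from the node running spark-shell. A minimal sketch follows; the store URI is illustrative, and the block degrades gracefully when the `hdfs` CLI is not on the PATH.

```shell
# Sketch: verify an HDFS store path before handing it to CarbonContext.
# The URI below is illustrative; substitute your own store location.
STORE_URI="hdfs://localhost:9000/opt/CarbonStore"

if command -v hdfs >/dev/null 2>&1; then
  # Create the directory if missing, then list it to confirm access.
  hdfs dfs -mkdir -p "$STORE_URI"
  hdfs dfs -ls -d "$STORE_URI"
  RESULT=checked
else
  # hdfs CLI unavailable (e.g. running off-cluster); just report.
  echo "hdfs CLI not found; would check: $STORE_URI"
  RESULT=skipped
fi
```

Running this before the load rules out a missing or unreachable store directory as the cause.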


Re: Error while loading - Table is locked for updation. Please try after some time ( Spark 1.6.2 )

mryqc
I have encountered the same problem. How can I resolve it, other than by installing CarbonData on another cluster?

Re: Error while loading - Table is locked for updation. Please try after some time ( Spark 1.6.2 )

Liang Chen
Administrator
Hi

Please try the solution provided by Li Yinwei.
Because you are using cluster mode and the data will be saved to HDFS, please specify storePath with an HDFS path:

scala> val storePath = "hdfs://localhost:9000/home/hadoop/carbondata/bin/carbonshellstore"
scala> val cc = new CarbonContext(sc, storePath)

Regards
Liang

mryqc wrote
I have encountered the same problem. How to resolve it except by installing carbondata in another cluster?

Re: Error while loading - Table is locked for updation. Please try after some time ( Spark 1.6.2 )

Jean-Baptiste Onofré
FYI I got the same error on Jenkins. I'm investigating.

Regards
JB
