A warning when loading data


Zen Wellon
Hi guys,
When I tried to load some data into a CarbonData table with Carbon 0.1.0, I
ran into the problem below.

WARN  29-08 15:40:17,535 - Lost task 10.0 in stage 2.1 (TID 365, amlera-30-6.gtj): java.lang.RuntimeException: Dictionary file ***(sensitive column) is locked for updation. Please try after some time
        at scala.sys.package$.error(package.scala:27)
        at org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD$$anon$1.<init>(CarbonGlobalDictionaryRDD.scala:354)
        at org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD.compute(CarbonGlobalDictionaryRDD.scala:294)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
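
For reference, the load is issued roughly as in the sketch below. This is a minimal sketch only: the master URL, store location, schema, table name and CSV path are placeholders, and it assumes the two-argument CarbonContext constructor from the 0.1.x quick start.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.CarbonContext

    // Placeholders throughout: master URL, store location, schema, table name and CSV path.
    val conf = new SparkConf().setAppName("CarbonLoadSketch").setMaster("local[2]")
    val sc   = new SparkContext(conf)
    val cc   = new CarbonContext(sc, "hdfs:///user/carbon/store")

    // Some early builds expect STORED BY 'org.apache.carbondata.format' instead of 'carbondata'.
    cc.sql("CREATE TABLE IF NOT EXISTS t_sample (id STRING, name STRING, value INT) STORED BY 'carbondata'")
    cc.sql("LOAD DATA INPATH 'hdfs:///user/carbon/sample.csv' INTO TABLE t_sample")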

--


Best regards,
William Zen

Re: A warning when loading data

ravipesala
Hi,

Are you getting this exception on every load? It usually occurs when data is
loaded concurrently into the same table, so please make sure that no other
instance of Carbon is running and that no other data load is in progress on
the same table.
Also check whether any lock files were left behind under the system temp
folder at <databasename>/<tablename>/lockfile; if they exist, please delete
them.
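
For example, a quick check along these lines will list and remove any leftover lock files. This is a minimal sketch under the assumptions above: the database and table names are placeholders, and it assumes the lock directory sits directly under java.io.tmpdir.

    import java.nio.file.{Files, Path, Paths}
    import scala.collection.JavaConverters._

    // Placeholders: substitute your own database and table names.
    val lockDir: Path = Paths.get(System.getProperty("java.io.tmpdir"), "default", "my_table")

    if (Files.isDirectory(lockDir)) {
      // Remove anything in the directory that looks like a leftover lock file.
      Files.list(lockDir).iterator().asScala
        .filter(_.getFileName.toString.toLowerCase.contains("lock"))
        .foreach { p =>
          println(s"Removing stale lock file: $p")
          Files.deleteIfExists(p)
        }
    } else {
      println(s"No lock directory found at $lockDir")
    }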

Thanks & Regards,
Ravi


Re: A warning when loading data

Zen Wellon
Hi Ravi,

After I upgraded Carbon to 0.1.0, this problem occurs every time I try to
load data, and I'm sure no other Carbon instance is running because I use my
personal dev Spark cluster. I've also tried recreating the table from
scratch, but the error is still there.

--


Best regards,
William Zen

Re: A warning when loading data

ravipesala
Hi,

Did you check whether any lock files exist under the system temp folder at
<databasename>/<tablename>/lockfile? If they do, please delete them and try
again.

Thanks,
Ravi.


Re: A warning when loading data

Zen Wellon
I don't think it's caused by a stale lock file, because I've already tried
recreating the table with a completely different name. However, I'll check it
tomorrow.

--


Best regards,
William Zen

Re: A warning when loading data

ravipesala
Hi Zen,

It seems this issue is related to PR
https://github.com/apache/incubator-carbondata/pull/89, which has been merged
to master. Alternatively, please try adding hadoop.tmp.dir to the
carbon.properties file to work around the issue.
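
For example, the carbon.properties entry would look roughly like the line below; the path is a placeholder, and it should point at a writable local directory that exists on every node:

    hadoop.tmp.dir=/tmp/carbondata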

Thanks,
Ravi


Re: A warning when loading data

Zen Wellon
Thank you Ravi, but I've already set hadoop.tmp.dir in Hadoop's
core-site.xml. I'll build master and try it again.
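
For reference, the existing entry in core-site.xml is along these lines (the value is a placeholder for my actual path):

    <property>
      <name>hadoop.tmp.dir</name>
      <value>/data/hadoop/tmp</value>
    </property>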

--


Best regards,
William Zen