Hi:

/usr/hdp/2.4.0.0-169/spark/bin/spark-shell --master yarn-client --jars /opt/incubator-carbondata/assembly/target/scala-2.10/carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.2.0.jar,/usr/hdp/2.4.0.0-169/spark/lib/datanucleus-api-jdo-3.2.6.jar,/usr/hdp/2.4.0.0-169/spark/lib/datanucleus-rdbms-3.2.9.jar,/usr/hdp/2.4.0.0-169/spark/lib/datanucleus-core-3.2.10.jar,/opt/mysql-connector-java-5.1.37.jar

scala> import org.apache.spark.sql.CarbonContext
scala> import java.io.File
scala> import org.apache.hadoop.hive.conf.HiveConf

scala> val cc = new CarbonContext(sc, "hdfs://hadoop01/data/carbondata01/store")
scala> cc.setConf("hive.metastore.warehouse.dir", "/apps/hive/warehouse")
scala> cc.setConf(HiveConf.ConfVars.HIVECHECKFILEFORMAT.varname, "false")
scala> cc.setConf("carbon.kettle.home", "/usr/hdp/2.4.0.0-169/spark/carbonlib/carbonplugins")

scala> cc.sql(s"load data local inpath 'hdfs://hadoop01/sample.csv' into table t4 options('FILEHEADER'='id,name,city,age')")
INFO 12-08 14:21:24,461 - main Query [LOAD DATA LOCAL INPATH 'HDFS://HADOOP01/SAMPLE.CSV' INTO TABLE T4 OPTIONS('FILEHEADER'='ID,NAME,CITY,AGE')]
INFO 12-08 14:21:39,475 - Table MetaData Unlocked Successfully after data load
java.lang.RuntimeException: Table is locked for updation. Please try after some time
    at scala.sys.package$.error(package.scala:27)
    at org.apache.spark.sql.execution.command.LoadTable.run(carbonTableSchema.scala:1049)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
    at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)

thanks a lot

---------------------------------------------------------------------------------------------------
Confidentiality Notice: The information contained in this e-mail and any accompanying attachment(s) is intended only for the use of the intended recipient and may be confidential and/or privileged of Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of this communication is not the intended recipient, unauthorized use, forwarding, printing, storing, disclosure or copying is strictly prohibited, and may be unlawful. If you have received this communication in error, please immediately notify the sender by return e-mail, and delete the original message and all copies from your system. Thank you.
---------------------------------------------------------------------------------------------------
---------- Forwarded message ----------
From: Ravindra Pesala <[hidden email]>
Date: 12 August 2016 at 12:45
Subject: Re: load data fail
To: dev <[hidden email]>

Hi,

Are you getting this exception continuously for every load? It usually occurs when you try to load data concurrently into the same table. Please make sure that no other instance of Carbon is running and that no other data load on the same table is in progress. Check whether any locks have been created under the system temp folder at <databasename>/<tablename>/lockfile; if one exists, please delete it.

Thanks & Regards,
Ravi
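Ravindra's lock-file check can be sketched as a small shell snippet. The temp-folder location and the DB/TABLE names below are assumptions (placeholders for <databasename>/<tablename> from his message); adjust them to your environment.

```shell
# Sketch of the suggestion above: look for a stale lock file left behind
# by a failed load and remove it. DB and TABLE are hypothetical
# placeholders; the temp directory defaults to /tmp.
TMP_DIR="${TMPDIR:-/tmp}"
DB="default"
TABLE="t4"
LOCK="$TMP_DIR/$DB/$TABLE/lockfile"
if [ -f "$LOCK" ]; then
    echo "removing stale lock: $LOCK"
    rm -f "$LOCK"
else
    echo "no lock file at $LOCK"
fi
```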
In reply to this post by 金铸
Hi
As we discussed, the "Table is locked for updation. Please try after some time" error has been solved by setting directory permissions.
Below is a new error; Ravindra, please check and help:
----------------------------
WARN 12-08 16:29:51,871 - Lost task 1.1 in stage 2.0 (TID 6, hadoop03): java.lang.RuntimeException: Dictionary file name is locked for updation. Please try after some time
at scala.sys.package$.error(package.scala:27)
at org.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD$$anon$1.
In reply to this post by 金铸
hi jinzhu,
Does this happen with multiple instances loading the same table? Concurrent loads into the same table are currently not supported.

For this exception:
1. Please check whether any locks have been created under the system temp folder at <databasename>/<tablename>/lockfile; if one exists, please delete it.
2. Try changing the lock type: carbon.lock.type = ZOOKEEPERLOCK

Regards,
Eason
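Eason's second suggestion is a carbon.properties setting. A minimal sketch of applying it, assuming the file location is passed in via CARBON_PROPS (the /tmp default is only for illustration; the real path depends on your installation):

```shell
# Set carbon.lock.type=ZOOKEEPERLOCK in carbon.properties, replacing any
# existing value. CARBON_PROPS is an assumed variable; point it at your
# real carbon.properties file.
PROPS="${CARBON_PROPS:-/tmp/carbon.properties}"
touch "$PROPS"
if grep -q '^carbon.lock.type=' "$PROPS"; then
    # Property already present: rewrite it in place.
    sed -i 's/^carbon.lock.type=.*/carbon.lock.type=ZOOKEEPERLOCK/' "$PROPS"
else
    # Property absent: append it.
    echo 'carbon.lock.type=ZOOKEEPERLOCK' >> "$PROPS"
fi
grep '^carbon.lock.type=' "$PROPS"
```

Note that ZOOKEEPERLOCK presupposes a reachable ZooKeeper ensemble, so this only helps on clusters that already run one.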
Thanks a lot, I solved this.
Can you share your experience with this case: how did you solve it?

Regards,
Liang

-----Original Message-----
From: 金铸 [mailto:[hidden email]]
Sent: 17 August 2016 10:31
To: [hidden email]
Subject: Re: load data fail

> Thanks a lot, I solved this.
I dropped the table and used $hdc_home/hive/conf/hive-site.xml to replace $hdc_home/spark/conf/hive-site.xml, which fixed it. But I do not know the principle behind it: if t4 already exists in Hive's default database (in other words, if table t4 was created in Hive first), then creating the table in CarbonData does not report an exception.

On 17 August 2016 at 10:35, Chenliang (Liang, CarbonData) wrote:
> Can you share the case experience: how did you solve it?
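The reported fix (making Spark read Hive's own hive-site.xml so both sides agree on the metastore) can be sketched as below. The directory layout is an assumption based on the paths in this thread; the mktemp default and the dummy file exist only so the sketch runs standalone outside a real cluster.

```shell
# Sketch of the reported fix: copy Hive's hive-site.xml over Spark's copy.
# On a real HDP node set HDP_HOME=/usr/hdp/2.4.0.0-169 first; the mktemp
# sandbox and dummy config below only make this runnable in isolation.
HDP_HOME="${HDP_HOME:-$(mktemp -d)}"
mkdir -p "$HDP_HOME/hive/conf" "$HDP_HOME/spark/conf"
[ -f "$HDP_HOME/hive/conf/hive-site.xml" ] || \
    echo '<configuration/>' > "$HDP_HOME/hive/conf/hive-site.xml"
# The actual step: Spark now reads the same metastore config as Hive.
cp "$HDP_HOME/hive/conf/hive-site.xml" "$HDP_HOME/spark/conf/hive-site.xml"
```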