Posted by Eason on Aug 16, 2016; 4:53pm
URL: http://apache-carbondata-dev-mailing-list-archive.168.s1.nabble.com/load-data-fail-tp100p164.html
Hi Jinzhu,
Did this happen while multiple instances were loading the same table? Concurrent loads on the same table are currently not supported.
For this exception:
1. Please check whether any lock files were left under the system temp folder at <databasename>/<tablename>/lockfile; if one exists, please delete it (see the sketch below this list).
2. Try changing the lock type: carbon.lock.type = ZOOKEEPERLOCK
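For step 1, here is a minimal sketch of the cleanup that could be run in the same spark-shell (or any JVM on that node). It assumes the lock lives under the JVM temp folder at <databasename>/<tablename> as described above, and the database/table names "default"/"t4" are only examples taken from this thread; please verify the path in your environment before deleting anything:

import java.io.File

// Assumed lock location: <java.io.tmpdir>/<databasename>/<tablename>
// "default" and "t4" are the example names from this thread; adjust as needed.
val lockDir = new File(System.getProperty("java.io.tmpdir"), "default/t4")
if (lockDir.isDirectory) {
  lockDir.listFiles().foreach { f =>
    println("removing stale lock file: " + f.getAbsolutePath)
    f.delete()
  }
}

For step 2, the carbon.lock.type = ZOOKEEPERLOCK setting would normally go into the carbon.properties file (or however Carbon properties are supplied in your deployment); note that this lock type coordinates the load lock through ZooKeeper, so it assumes a reachable ZooKeeper.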
Regards,
Eason
On Aug 12, 2016 at 14:25, 金铸 wrote:
> Hi:
> /usr/hdp/2.4.0.0-169/spark/bin/spark-shell --master yarn-client --jars /opt/incubator-carbondata/assembly/target/scala-2.10/carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.2.0.jar,/usr/hdp/2.4.0.0-169/spark/lib/datanucleus-api-jdo-3.2.6.jar,/usr/hdp/2.4.0.0-169/spark/lib/datanucleus-rdbms-3.2.9.jar,/usr/hdp/2.4.0.0-169/spark/lib/datanucleus-core-3.2.10.jar,/opt//mysql-connector-java-5.1.37.jar
> scala> import org.apache.spark.sql.CarbonContext
> scala> import java.io.File
> scala> import org.apache.hadoop.hive.conf.HiveConf
> scala> val cc = new CarbonContext(sc, "hdfs://hadoop01/data/carbondata01/store")
> scala> cc.setConf("hive.metastore.warehouse.dir", "/apps/hive/warehouse")
> scala> cc.setConf(HiveConf.ConfVars.HIVECHECKFILEFORMAT.varname, "false")
> scala> cc.setConf("carbon.kettle.home", "/usr/hdp/2.4.0.0-169/spark/carbonlib/carbonplugins")
> scala> cc.sql(s"load data local inpath 'hdfs://hadoop01/sample.csv' into table t4 options('FILEHEADER'='id,name,city,age')")
> INFO 12-08 14:21:24,461 - main Query [LOAD DATA LOCAL INPATH 'HDFS://HADOOP01/SAMPLE.CSV' INTO TABLE T4 OPTIONS('FILEHEADER'='ID,NAME,CITY,AGE')]
> INFO 12-08 14:21:39,475 - Table MetaData Unlocked Successfully after data load
> java.lang.RuntimeException: Table is locked for updation. Please try after some time
>     at scala.sys.package$.error(package.scala:27)
>     at org.apache.spark.sql.execution.command.LoadTable.run(carbonTableSchema.scala:1049)
>     at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
>     at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
>     at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
>     at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
>     at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
>     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
> Thanks a lot.