[ https://issues.apache.org/jira/browse/CARBONDATA-4033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17258008#comment-17258008 ]

Akash R Nilugal commented on CARBONDATA-4033:
---------------------------------------------

Can you give more details of the queries? I cannot see table A here, so I cannot run the code to check the error.

> Error when using merge API with hive table
> ------------------------------------------
>
>                 Key: CARBONDATA-4033
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-4033
>             Project: CarbonData
>          Issue Type: Bug
>    Affects Versions: 2.0.0, 2.0.1
>            Reporter: Nguyen Dinh Huynh
>            Priority: Major
>              Labels: easyfix, features, newbie
>
> I always get this error when trying to upsert a Hive table. I'm using CDH 6.3.1 with Spark 2.4.3. Is this a bug?
> {code:java}
> 2020-10-14 14:59:25 WARN  BlockManager:66 - Putting block rdd_21_1 failed due to exception java.lang.RuntimeException: Store location not set for the key __temptable-7bdfc88b-e5b7-46d5-8492-dfbb98b9a1b0_1602662359786_null_389ec940-ed27-41d1-9038-72ed1cd162e90x0.
> 2020-10-14 14:59:25 WARN  BlockManager:66 - Block rdd_21_1 could not be removed as it was not found on disk or in memory
> 2020-10-14 14:59:25 ERROR Executor:91 - Exception in task 1.0 in stage 0.0 (TID 1)
> java.lang.RuntimeException: Store location not set for the key __temptable-7bdfc88b-e5b7-46d5-8492-dfbb98b9a1b0_1602662359786_null_389ec940-ed27-41d1-9038-72ed1cd162e90x0
> {code}
> My code is:
> {code:java}
> val map = Map(
>   col("_external_op") -> col("A._external_op"),
>   col("_external_ts_sec") -> col("A._external_ts_sec"),
>   col("_external_row") -> col("A._external_row"),
>   col("_external_pos") -> col("A._external_pos"),
>   col("id") -> col("A.id"),
>   col("order") -> col("A.order"),
>   col("shop_code") -> col("A.shop_code"),
>   col("customer_tel") -> col("A.customer_tel"),
>   col("channel") -> col("A.channel"),
>   col("batch_session_id") -> col("A.batch_session_id"),
>   col("deleted_at") -> col("A.deleted_at"),
>   col("created") -> col("A.created"))
>   .asInstanceOf[Map[Any, Any]]
>
> val testDf =
>   spark.sqlContext.read.format("carbondata")
>     .option("tableName", "package_drafts")
>     .option("schemaName", "db")
>     .option("dbName", "db")
>     .option("databaseName", "db")
>     .load()
>     .as("B")
>
> testDf.printSchema()
>
> testDf.merge(package_draft_view, col("A.id").equalTo(col("B.id")))
>   .whenMatched(col("A._external_op") === "u")
>   .updateExpr(map)
>   .whenMatched(col("A._external_op") === "c")
>   .insertExpr(map)
>   .whenMatched(col("A._external_op") === "d")
>   .delete()
>   .execute()
> {code}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
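[Editor's note] For comparison, a minimal sketch of how this merge is usually structured with CarbonData's merge DSL, with the insert branch moved under whenNotMatched (insertExpr is normally paired with rows that have no match in the target, whereas the reported code attaches it to whenMatched). This is a sketch only, not a confirmed fix: it assumes a CarbonData-enabled SparkSession named `spark`, that `package_draft_view` is the CDC source DataFrame aliased as "A", and that `map` is the `Map[Any, Any]` of column-to-expression pairs from the report. It is not runnable outside a Spark cluster with CarbonData on the classpath.

```scala
// Sketch under the assumptions above; the implicit import that enables
// DataFrame.merge is omitted here, exactly as it is in the report.
import org.apache.spark.sql.functions.col

val target = spark.read.format("carbondata")
  .option("tableName", "package_drafts")
  .option("dbName", "db")            // single db option; the report sets three overlapping ones
  .load()
  .as("B")

target.merge(package_draft_view.as("A"), col("A.id").equalTo(col("B.id")))
  .whenMatched(col("A._external_op") === "u")
  .updateExpr(map)                   // matched + op "u"  -> update from source expressions
  .whenNotMatched(col("A._external_op") === "c")
  .insertExpr(map)                   // unmatched + op "c" -> insert new rows
  .whenMatched(col("A._external_op") === "d")
  .delete()                          // matched + op "d"  -> delete from target
  .execute()
```

Whether the whenMatched/insertExpr combination is related to the "Store location not set" failure would still need to be confirmed against a reproducible test, which is why table A's definition is needed.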