[ https://issues.apache.org/jira/browse/CARBONDATA-1142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095911#comment-16095911 ]

Simarpreet Kaur commented on CARBONDATA-1142:
---------------------------------------------

Step 1:
spark-sql> create table BugTest3(id int, name String) stored by 'carbondata';

Step 2:
spark-sql> load data inpath 'hdfs://localhost:54310/test.csv' into table bugtest3;

Step 3:
spark-sql> show segments for table bugtest3;
17/07/21 13:00:44 INFO CarbonSparkSqlParser: Parsing command: show segments for table bugtest3
17/07/21 13:00:44 INFO CarbonLateDecodeRule: main Skip CarbonOptimizer
17/07/21 13:00:44 INFO CarbonLateDecodeRule: main Skip CarbonOptimizer
17/07/21 13:00:44 INFO SparkContext: Starting job: processCmd at CliDriver.java:376
17/07/21 13:00:44 INFO DAGScheduler: Got job 2 (processCmd at CliDriver.java:376) with 1 output partitions
17/07/21 13:00:44 INFO DAGScheduler: Final stage: ResultStage 2 (processCmd at CliDriver.java:376)
17/07/21 13:00:44 INFO DAGScheduler: Parents of final stage: List()
17/07/21 13:00:44 INFO DAGScheduler: Missing parents: List()
17/07/21 13:00:44 INFO DAGScheduler: Submitting ResultStage 2 (MapPartitionsRDD[11] at processCmd at CliDriver.java:376), which has no missing parents
17/07/21 13:00:44 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 4.7 KB, free 366.3 MB)
17/07/21 13:00:44 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 2.7 KB, free 366.3 MB)
17/07/21 13:00:44 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 192.168.2.218:41596 (size: 2.7 KB, free: 366.3 MB)
17/07/21 13:00:44 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:996
17/07/21 13:00:44 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 2 (MapPartitionsRDD[11] at processCmd at CliDriver.java:376)
17/07/21 13:00:44 INFO TaskSchedulerImpl: Adding task set 2.0 with 1 tasks
17/07/21 13:00:44 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 2, localhost, executor driver, partition 0, PROCESS_LOCAL, 6502 bytes)
17/07/21 13:00:44 INFO Executor: Running task 0.0 in stage 2.0 (TID 2)
17/07/21 13:00:44 INFO Executor: Finished task 0.0 in stage 2.0 (TID 2). 1244 bytes result sent to driver
17/07/21 13:00:44 INFO TaskSetManager: Finished task 0.0 in stage 2.0 (TID 2) in 13 ms on localhost (executor driver) (1/1)
17/07/21 13:00:44 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
17/07/21 13:00:44 INFO DAGScheduler: ResultStage 2 (processCmd at CliDriver.java:376) finished in 0.012 s
17/07/21 13:00:44 INFO DAGScheduler: Job 2 finished: processCmd at CliDriver.java:376, took 0.021736 s
0	Success	2017-07-21 12:38:01.884	2017-07-21 12:38:02.971
Time taken: 0.08 seconds, Fetched 1 row(s)
17/07/21 13:00:44 INFO CliDriver: Time taken: 0.08 seconds, Fetched 1 row(s)

It works fine here; there is no problem with the grammar.
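For reference, the statements the reporter used match the CarbonData 1.1 segment-management grammar; a minimal sketch (table name and timestamp are illustrative, not from this run):

    -- List the most recent segments; LIMIT caps how many are returned.
    SHOW SEGMENTS FOR TABLE bugtest3 LIMIT 4;

    -- Drop all segments loaded before the given timestamp.
    DELETE SEGMENTS FROM TABLE bugtest3 WHERE STARTTIME BEFORE '2017-07-01 00:00:00';

Note that the log above shows the statement being handled by CarbonSparkSqlParser. The reporter's error, "missing 'FUNCTIONS' at 'FOR'", is what Spark's built-in parser produces for this statement, which suggests the Carbon SQL parser was not active in the failing shell.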
> carbondata spark-sql grammar problem
> ------------------------------------
>
>                 Key: CARBONDATA-1142
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-1142
>             Project: CarbonData
>          Issue Type: Bug
>          Components: sql
>    Affects Versions: 1.1.0
>         Environment: spark 2.1 + hadoop 2.6
>            Reporter: 吴志龙
>         Attachments: a620e2b7-245e-456c-9c2d-b00bd1a29400.png
>
>
> 1. CREATE TABLE IF NOT EXISTS dp_tmp.order_detail (
>      id BIGINT, order_code STRING, sales_area_id INT, sales_id INT,
>      order_inputer INT, pro_type STRING, currency INT, exchange_rate DECIMAL,
>      unit_cost_price DECIMAL, unit_selling_price DECIMAL, order_num INTEGER,
>      order_amount DECIMAL, order_discount DOUBLE, order_account_amount DECIMAL,
>      order_time TIMESTAMP, delivery_channel INT, delivery_address STRING,
>      recipients STRING, contact STRING, delivery_date DATE, comments STRING
>    ) STORED BY 'carbondata' TBLPROPERTIES (
>      'COLUMN_GROUPS' = '(recipients,contact)',
>      'DICTIONARY_EXCLUDE' = 'comments',
>      'DICTIONARY_INCLUDE' = 'sales_area_id,sales_id',
>      'NO_INVERTED_INDEX' = 'id,order_code'
>    )
> 2. load data inpath 'hdfs://hacluster/data/carbondata/csv/order_detail_1.csv' into table dp_tmp.order_detail OPTIONS ('DELIMITER'=',', 'fileheader'='id,order_code,sales_area_id,sales_id,order_inputer,pro_type,currency,exchange_rate,unit_cost_price,unit_selling_price,order_num,order_amount,order_discount,order_account_amount,order_time,delivery_channel,delivery_address,recipients,contact,delivery_date,comments')
> 3. spark-sql> SHOW SEGMENTS FOR TABLE dp_tmp.order_detail LIMIT 4;
> Error in query:
> missing 'FUNCTIONS' at 'FOR'(line 1, pos 14)
>
> == SQL ==
> SHOW SEGMENTS FOR TABLE dp_tmp.order_detail LIMIT 4
> --------------^^^
>
> spark-sql> SHOW SEGMENTS FOR TABLE dp_tmp.order_detail LIMIT 1;
> Error in query:
> missing 'FUNCTIONS' at 'FOR'(line 1, pos 14)
>
> == SQL ==
> SHOW SEGMENTS FOR TABLE dp_tmp.order_detail LIMIT 1
> --------------^^^
>
> spark-sql> DELETE SEGMENTS FROM TABLE dp_tmp.order_detail WHERE STARTTIME BEFORE '2017-06-08 12:05:06';
> Usage: delete [FILE|JAR|ARCHIVE] <value> [<value>]*
> 17/06/08 10:36:27 ERROR DeleteResourceProcessor: Usage: delete [FILE|JAR|ARCHIVE] <value> [<value>]*
> 4. spark-sql> select count(1) from dp_tmp.order_detail;
> 14937665
> 5. spark-sql> insert overwrite table dp_tmp.order_detail select * from dp_tmp.order_detail_orc;
> 6. spark-sql> select count(1) from dp_tmp.order_detail;
> 34937665
>
> Problems:
> 1. SHOW SEGMENTS and DELETE SEGMENTS: both statements fail to parse.
> 2. INSERT OVERWRITE: the existing data should be overwritten, not appended to.
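The expected INSERT OVERWRITE behavior in problem 2 can be expressed as a check; a minimal sketch using the table names from the report (row counts are the reported values):

    -- Row count before the overwrite.
    select count(1) from dp_tmp.order_detail;        -- reported: 14937665

    -- Replace the table contents with the rows from the source table.
    insert overwrite table dp_tmp.order_detail
    select * from dp_tmp.order_detail_orc;

    -- Expected: the count now equals the source table's row count.
    -- Reported: 34937665, i.e. the old rows were kept and the new rows
    -- added, which is append semantics rather than overwrite.
    select count(1) from dp_tmp.order_detail;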