unable to use CarbonThriftServer and Beeline client

unable to use CarbonThriftServer and Beeline client

Li Peng
Hi,
    I tried to use CarbonThriftServer and the Beeline client to run queries, but both failed.

1. CarbonThriftServer starts but then exits with FINISHED state almost immediately.

Submit script:  

spark-submit \
--queue spark \
--conf spark.sql.hive.thriftServer.singleSession=true \
--class org.apache.carbondata.spark.thriftserver.CarbonThriftServer \
--jars /usr/hdp/current/spark-client/lib/datanucleus-api-jdo-3.2.6.jar,/usr/hdp/current/spark-client/lib/datanucleus-rdbms-3.2.9.jar,/usr/hdp/current/spark-client/lib/datanucleus-core-3.2.10.jar \
/usr/hdp/2.5.0.0-1245/spark/carbonlib/carbondata_2.10-0.2.1-incubating-SNAPSHOT-shade-hadoop2.7.3.jar \
hdfs://julong/carbondata/carbonstore

Here is the log:

17/01/16 09:49:36 INFO ContainerManagementProtocolProxy: Opening proxy : dpnode08:45454
17/01/16 09:49:36 INFO ContainerManagementProtocolProxy: Opening proxy : dpnode05:45454
17/01/16 09:49:40 INFO YarnClusterSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (dpnode08:48441) with ID 1
17/01/16 09:49:40 INFO BlockManagerMasterEndpoint: Registering block manager dpnode08:39271 with 511.5 MB RAM, BlockManagerId(1, dpnode08, 39271)
17/01/16 09:49:46 INFO YarnClusterSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (dpnode05:35569) with ID 2
17/01/16 09:49:46 INFO YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
17/01/16 09:49:46 INFO YarnClusterScheduler: YarnClusterScheduler.postStartHook done
17/01/16 09:49:46 INFO BlockManagerMasterEndpoint: Registering block manager dpnode05:41205 with 511.5 MB RAM, BlockManagerId(2, dpnode05, 41205)
17/01/16 09:49:46 INFO CarbonProperties: Driver Property file path: /usr/hdp/2.5.0.0-1245/spark/conf/carbon.properties
17/01/16 09:49:46 INFO CarbonProperties: Driver ------Using Carbon.properties --------
17/01/16 09:49:46 INFO CarbonProperties: Driver {carbon.number.of.cores.while.loading=6, carbon.number.of.cores.while.compacting=4, carbon.sort.file.buffer.size=20, carbon.inmemory.record.size=120000, carbon.sort.size=500000, carbon.graph.rowset.size=100000, carbon.ddl.base.hdfs.url=/user/spark, carbon.compaction.level.threshold=8,6, carbon.number.of.cores=4, carbon.kettle.home=/usr/hdp/2.5.0.0-1245/spark/carbonlib/carbonplugins, carbon.storelocation=hdfs://julong/carbondata/carbonstore, carbon.enable.auto.load.merge=true, carbon.enableXXHash=true, carbon.sort.intermediate.files.limit=100, carbon.major.compaction.size=1024, carbon.badRecords.location=/opt/Carbon/Spark/badrecords, carbon.use.local.dir=true, carbon.enable.quick.filter=false}
17/01/16 09:49:52 INFO CarbonContext: Initializing execution hive, version 1.2.1
17/01/16 09:49:52 INFO ClientWrapper: Inspected Hadoop version: 2.7.3.2.5.0.0-1245
17/01/16 09:49:52 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.7.3.2.5.0.0-1245
17/01/16 09:49:53 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
17/01/16 09:49:53 INFO ObjectStore: ObjectStore, initialize called
17/01/16 09:49:53 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
17/01/16 09:49:53 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
17/01/16 09:50:01 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
17/01/16 09:50:02 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
17/01/16 09:50:02 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
17/01/16 09:50:09 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
17/01/16 09:50:09 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
17/01/16 09:50:11 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
17/01/16 09:50:11 INFO ObjectStore: Initialized ObjectStore
17/01/16 09:50:11 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
17/01/16 09:50:12 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
17/01/16 09:50:12 INFO HiveMetaStore: Added admin role in metastore
17/01/16 09:50:12 INFO HiveMetaStore: Added public role in metastore
17/01/16 09:50:12 INFO HiveMetaStore: No user is added in admin role, since config is empty
17/01/16 09:50:13 INFO HiveMetaStore: 0: get_all_databases
17/01/16 09:50:13 INFO audit: ugi=spark ip=unknown-ip-addr cmd=get_all_databases
17/01/16 09:50:13 INFO HiveMetaStore: 0: get_functions: db=default pat=*
17/01/16 09:50:13 INFO audit: ugi=spark ip=unknown-ip-addr cmd=get_functions: db=default pat=*
17/01/16 09:50:13 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
17/01/16 09:50:14 INFO SessionState: Created local directory: /data06/hadoop/yarn/local/usercache/spark/appcache/application_1484211706075_0020/container_e26_1484211706075_0020_01_000001/tmp/yarn
17/01/16 09:50:14 INFO SessionState: Created local directory: /data06/hadoop/yarn/local/usercache/spark/appcache/application_1484211706075_0020/container_e26_1484211706075_0020_01_000001/tmp/32b000cb-dbec-4399-9edd-3a0872b5942b_resources
17/01/16 09:50:14 INFO SessionState: Created HDFS directory: /tmp/hive/spark/32b000cb-dbec-4399-9edd-3a0872b5942b
17/01/16 09:50:14 INFO SessionState: Created local directory: /data06/hadoop/yarn/local/usercache/spark/appcache/application_1484211706075_0020/container_e26_1484211706075_0020_01_000001/tmp/yarn/32b000cb-dbec-4399-9edd-3a0872b5942b
17/01/16 09:50:14 INFO SessionState: Created HDFS directory: /tmp/hive/spark/32b000cb-dbec-4399-9edd-3a0872b5942b/_tmp_space.db
17/01/16 09:50:14 INFO CarbonContext: default warehouse location is /user/hive/warehouse
17/01/16 09:50:14 INFO CarbonContext: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
17/01/16 09:50:14 INFO ClientWrapper: Inspected Hadoop version: 2.7.3.2.5.0.0-1245
17/01/16 09:50:14 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.7.3.2.5.0.0-1245
17/01/16 09:50:15 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
17/01/16 09:50:15 INFO ObjectStore: ObjectStore, initialize called
17/01/16 09:50:15 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
17/01/16 09:50:15 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
17/01/16 09:50:23 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
17/01/16 09:50:25 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
17/01/16 09:50:25 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
17/01/16 09:50:32 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
17/01/16 09:50:32 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
17/01/16 09:50:34 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
17/01/16 09:50:34 INFO ObjectStore: Initialized ObjectStore
17/01/16 09:50:34 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
17/01/16 09:50:35 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
17/01/16 09:50:35 INFO HiveMetaStore: Added admin role in metastore
17/01/16 09:50:35 INFO HiveMetaStore: Added public role in metastore
17/01/16 09:50:35 INFO HiveMetaStore: No user is added in admin role, since config is empty
17/01/16 09:50:36 INFO HiveMetaStore: 0: get_all_databases
17/01/16 09:50:36 INFO audit: ugi=spark ip=unknown-ip-addr cmd=get_all_databases
17/01/16 09:50:36 INFO HiveMetaStore: 0: get_functions: db=default pat=*
17/01/16 09:50:36 INFO audit: ugi=spark ip=unknown-ip-addr cmd=get_functions: db=default pat=*
17/01/16 09:50:36 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
17/01/16 09:50:37 INFO SessionState: Created local directory: /data06/hadoop/yarn/local/usercache/spark/appcache/application_1484211706075_0020/container_e26_1484211706075_0020_01_000001/tmp/cf5abd18-5b31-483f-89be-8e4ae65a0cb7_resources
17/01/16 09:50:37 INFO SessionState: Created HDFS directory: /tmp/hive/spark/cf5abd18-5b31-483f-89be-8e4ae65a0cb7
17/01/16 09:50:37 INFO SessionState: Created local directory: /data06/hadoop/yarn/local/usercache/spark/appcache/application_1484211706075_0020/container_e26_1484211706075_0020_01_000001/tmp/yarn/cf5abd18-5b31-483f-89be-8e4ae65a0cb7
17/01/16 09:50:37 INFO SessionState: Created HDFS directory: /tmp/hive/spark/cf5abd18-5b31-483f-89be-8e4ae65a0cb7/_tmp_space.db
17/01/16 09:50:41 INFO CompositeService: Operation log root directory is created: /data06/hadoop/yarn/local/usercache/spark/appcache/application_1484211706075_0020/container_e26_1484211706075_0020_01_000001/tmp/yarn/operation_logs
17/01/16 09:50:42 INFO AbstractService: HiveServer2: Async execution pool size 100
17/01/16 09:50:42 INFO AbstractService: Service:OperationManager is inited.
17/01/16 09:50:42 INFO AbstractService: Service: SessionManager is inited.
17/01/16 09:50:42 INFO AbstractService: Service: CLIService is inited.
17/01/16 09:50:42 INFO AbstractService: Service:ThriftBinaryCLIService is inited.
17/01/16 09:50:42 INFO AbstractService: Service: HiveServer2 is inited.
17/01/16 09:50:42 INFO AbstractService: Service:OperationManager is started.
17/01/16 09:50:42 INFO AbstractService: Service:SessionManager is started.
17/01/16 09:50:42 INFO AbstractService: Service:CLIService is started.
17/01/16 09:50:42 INFO ObjectStore: ObjectStore, initialize called
17/01/16 09:50:42 INFO Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
17/01/16 09:50:42 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
17/01/16 09:50:42 INFO ObjectStore: Initialized ObjectStore
17/01/16 09:50:42 INFO HiveMetaStore: 0: get_databases: default
17/01/16 09:50:42 INFO audit: ugi=spark ip=unknown-ip-addr cmd=get_databases: default
17/01/16 09:50:42 INFO HiveMetaStore: 0: Shutting down the object store...
17/01/16 09:50:42 INFO audit: ugi=spark ip=unknown-ip-addr cmd=Shutting down the object store...
17/01/16 09:50:42 INFO HiveMetaStore: 0: Metastore shutdown complete.
17/01/16 09:50:42 INFO audit: ugi=spark ip=unknown-ip-addr cmd=Metastore shutdown complete.
17/01/16 09:50:42 INFO AbstractService: Service:ThriftBinaryCLIService is started.
17/01/16 09:50:42 INFO AbstractService: Service:HiveServer2 is started.
17/01/16 09:50:42 INFO ApplicationMaster: Final app status: SUCCEEDED, exitCode: 0
17/01/16 09:50:42 INFO ThriftCLIService: Starting ThriftBinaryCLIService on port 10000 with 5...500 worker threads
17/01/16 09:50:42 INFO SparkContext: Invoking stop() from shutdown hook
17/01/16 09:50:42 INFO HiveServer2: Shutting down HiveServer2
17/01/16 09:50:42 INFO ThriftCLIService: Thrift server has stopped
17/01/16 09:50:42 INFO AbstractService: Service:ThriftBinaryCLIService is stopped.
17/01/16 09:50:42 INFO AbstractService: Service:OperationManager is stopped.
17/01/16 09:50:42 INFO AbstractService: Service:SessionManager is stopped.
17/01/16 09:50:42 INFO AbstractService: Service:CLIService is stopped.
17/01/16 09:50:42 INFO AbstractService: Service:HiveServer2 is stopped.
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/sqlserver/session/json,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/sqlserver/session,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/sqlserver/json,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/sqlserver,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/static/sql,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL/execution/json,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL/execution,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL/json,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
17/01/16 09:50:42 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
17/01/16 09:50:42 INFO SparkUI: Stopped Spark web UI at http://192.168.50.8:36579
17/01/16 09:50:42 INFO YarnAllocator: Driver requested a total number of 0 executor(s).
17/01/16 09:50:42 INFO YarnClusterSchedulerBackend: Shutting down all executors
17/01/16 09:50:42 INFO YarnClusterSchedulerBackend: Asking each executor to shut down
17/01/16 09:50:42 INFO SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
 services=List(),
 started=false)
17/01/16 09:50:42 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/01/16 09:50:42 INFO MemoryStore: MemoryStore cleared
17/01/16 09:50:42 INFO BlockManager: BlockManager stopped
17/01/16 09:50:42 INFO BlockManagerMaster: BlockManagerMaster stopped
17/01/16 09:50:42 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/01/16 09:50:42 INFO SparkContext: Successfully stopped SparkContext
17/01/16 09:50:42 INFO ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED
17/01/16 09:50:42 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
17/01/16 09:50:42 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
17/01/16 09:50:42 INFO AMRMClientImpl: Waiting for application to be successfully unregistered.
17/01/16 09:50:42 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
17/01/16 09:50:42 INFO ApplicationMaster: Deleting staging directory .sparkStaging/application_1484211706075_0020
17/01/16 09:50:42 INFO ShutdownHookManager: Shutdown hook called
17/01/16 09:50:42 INFO ShutdownHookManager: Deleting directory /data11/hadoop/yarn/local/usercache/spark/appcache/application_1484211706075_0020/spark-45b2767d-5201-4af8-8ba8-93a1130e4a6a
17/01/16 09:50:42 INFO ShutdownHookManager: Deleting directory /data06/hadoop/yarn/local/usercache/spark/appcache/application_1484211706075_0020/container_e26_1484211706075_0020_01_000001/tmp/spark-c1a03ca3-8dd5-466a-8d1e-2f3bc7280bdc
17/01/16 09:50:42 INFO ShutdownHookManager: Deleting directory /data08/hadoop/yarn/local/usercache/spark/appcache/application_1484211706075_0020/spark-208e5903-abed-494c-879b-e2d79764c1b9
17/01/16 09:50:42 INFO ShutdownHookManager: Deleting directory /data09/hadoop/yarn/local/usercache/spark/appcache/application_1484211706075_0020/spark-19e56c57-f2b8-4c40-94a3-d230f8ee0ff9
17/01/16 09:50:42 INFO ShutdownHookManager: Deleting directory /data03/hadoop/yarn/local/usercache/spark/appcache/application_1484211706075_0020/spark-6fbf013a-cd00-49ae-a43a-6237331a615b
17/01/16 09:50:42 INFO ShutdownHookManager: Deleting directory /data04/hadoop/yarn/local/usercache/spark/appcache/application_1484211706075_0020/spark-44650c84-ac7f-4908-b038-e16f6cdc59cd
17/01/16 09:50:42 INFO ShutdownHookManager: Deleting directory /data02/hadoop/yarn/local/usercache/spark/appcache/application_1484211706075_0020/spark-2c200b2c-0f2b-45e2-a305-977b99ed94a3
17/01/16 09:50:42 INFO ShutdownHookManager: Deleting directory /data12/hadoop/yarn/local/usercache/spark/appcache/application_1484211706075_0020/spark-582b6acc-a159-49ff-832c-472aacae8894
17/01/16 09:50:42 INFO ShutdownHookManager: Deleting directory /data05/hadoop/yarn/local/usercache/spark/appcache/application_1484211706075_0020/spark-86a35020-0827-4eb4-b40c-d7e708ab62f5
17/01/16 09:50:42 INFO ShutdownHookManager: Deleting directory /data06/hadoop/yarn/local/usercache/spark/appcache/application_1484211706075_0020/spark-209e2803-4190-4366-9f35-3d2e21e2ff48
17/01/16 09:50:42 INFO ShutdownHookManager: Deleting directory /data07/hadoop/yarn/local/usercache/spark/appcache/application_1484211706075_0020/spark-1ae5a1f4-18c9-4de9-94bf-df6aaa693df5
17/01/16 09:50:42 INFO ShutdownHookManager: Deleting directory /data10/hadoop/yarn/local/usercache/spark/appcache/application_1484211706075_0020/spark-b78c9877-6007-400d-b067-3e55d969542c



2. The Beeline client cannot query the carbon data, nor create a carbon table.

0: jdbc:hive2://dpnode03:10000> select * from sale limit 10;
+-----------+--+
| sale.col  |
+-----------+--+
+-----------+--+

0: jdbc:hive2://dpnode03:10000> create table info (`name` string, `age` int) stored by 'carbondata';
Error: Error while compiling statement: FAILED: SemanticException Cannot find class 'carbondata' (state=42000,code=40000)
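
(For reference, the Beeline session above was opened roughly as follows; this is reconstructed from the prompt shown, so the exact options may differ:)

beeline -u jdbc:hive2://dpnode03:10000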



Thanks.
Re: unable to use CarbonThriftServer and Beeline client

Naresh P R
Hi Li Peng,

From the shared logs, I can see that the Thrift server is stopped immediately after starting:

17/01/16 09:50:42 INFO ThriftCLIService: Starting ThriftBinaryCLIService on port 10000 with 5...500 worker threads
17/01/16 09:50:42 INFO SparkContext: Invoking stop() from shutdown hook
17/01/16 09:50:42 INFO HiveServer2: Shutting down HiveServer2

Since CarbonThriftServer's first argument is the store path, I suspect this could be the reason the Thrift server stopped.

Can you try starting your Thrift server with the command below and check the query?

spark-submit \
--queue spark \
--conf spark.sql.hive.thriftServer.singleSession=true \
--class org.apache.carbondata.spark.thriftserver.CarbonThriftServer "hdfs://julong/carbondata/carbonstore" \
--jars /usr/hdp/current/spark-client/lib/datanucleus-api-jdo-3.2.6.jar,/usr/hdp/current/spark-client/lib/datanucleus-rdbms-3.2.9.jar,/usr/hdp/current/spark-client/lib/datanucleus-core-3.2.10.jar \
/usr/hdp/2.5.0.0-1245/spark/carbonlib/carbondata_2.10-0.2.1-incubating-SNAPSHOT-shade-hadoop2.7.3.jar
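
If the application stays in RUNNING state after that, one quick check (host and port are taken from your Beeline prompt; these exact commands are only a suggestion, not something from your logs) would be:

# on the node running the driver, verify the Thrift port is listening
netstat -tlnp | grep 10000
# then reconnect with Beeline and retry the query
beeline -u jdbc:hive2://dpnode03:10000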

-----
Regards,
Naresh P R


Re: Re: unable to use CarbonThriftServer and Beeline client

Li Peng
Hi,
   Are there any other properties that need to be set in spark-defaults.conf?
   And should the script be run in client mode or cluster mode?
 
Thanks.



> shut down
> 17/01/16 09:50:42 INFO SchedulerExtensionServices: Stopping
> SchedulerExtensionServices
> (serviceOption=None,
>  services=List(),
>  started=false)
> 17/01/16 09:50:42 INFO MapOutputTrackerMasterEndpoint:
> MapOutputTrackerMasterEndpoint stopped!
> 17/01/16 09:50:42 INFO MemoryStore: MemoryStore cleared
> 17/01/16 09:50:42 INFO BlockManager: BlockManager stopped
> 17/01/16 09:50:42 INFO BlockManagerMaster: BlockManagerMaster stopped
> 17/01/16 09:50:42 INFO
> OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:
> OutputCommitCoordinator stopped!
> 17/01/16 09:50:42 INFO SparkContext: Successfully stopped SparkContext
> 17/01/16 09:50:42 INFO ApplicationMaster: Unregistering ApplicationMaster
> with SUCCEEDED
> 17/01/16 09:50:42 INFO RemoteActorRefProvider$RemotingTerminator: Shutting
> down remote daemon.
> 17/01/16 09:50:42 INFO RemoteActorRefProvider$RemotingTerminator: Remote
> daemon shut down; proceeding with flushing remote transports.
> 17/01/16 09:50:42 INFO AMRMClientImpl: Waiting for application to be
> successfully unregistered.
> 17/01/16 09:50:42 INFO RemoteActorRefProvider$RemotingTerminator: Remoting
> shut down.
> 17/01/16 09:50:42 INFO ApplicationMaster: Deleting staging directory
> .sparkStaging/application_1484211706075_0020
> 17/01/16 09:50:42 INFO ShutdownHookManager: Shutdown hook called
> 17/01/16 09:50:42 INFO ShutdownHookManager: Deleting directory
> /data11/hadoop/yarn/local/usercache/spark/appcache/
> application_1484211706075_0020/spark-45b2767d-5201-4af8-8ba8-93a1130e4a6a
> 17/01/16 09:50:42 INFO ShutdownHookManager: Deleting directory
> /data06/hadoop/yarn/local/usercache/spark/appcache/
> application_1484211706075_0020/container_e26_1484211706075_0020_01_000001/
> tmp/spark-c1a03ca3-8dd5-466a-8d1e-2f3bc7280bdc
> 17/01/16 09:50:42 INFO ShutdownHookManager: Deleting directory
> /data08/hadoop/yarn/local/usercache/spark/appcache/
> application_1484211706075_0020/spark-208e5903-abed-494c-879b-e2d79764c1b9
> 17/01/16 09:50:42 INFO ShutdownHookManager: Deleting directory
> /data09/hadoop/yarn/local/usercache/spark/appcache/
> application_1484211706075_0020/spark-19e56c57-f2b8-4c40-94a3-d230f8ee0ff9
> 17/01/16 09:50:42 INFO ShutdownHookManager: Deleting directory
> /data03/hadoop/yarn/local/usercache/spark/appcache/
> application_1484211706075_0020/spark-6fbf013a-cd00-49ae-a43a-6237331a615b
> 17/01/16 09:50:42 INFO ShutdownHookManager: Deleting directory
> /data04/hadoop/yarn/local/usercache/spark/appcache/
> application_1484211706075_0020/spark-44650c84-ac7f-4908-b038-e16f6cdc59cd
> 17/01/16 09:50:42 INFO ShutdownHookManager: Deleting directory
> /data02/hadoop/yarn/local/usercache/spark/appcache/
> application_1484211706075_0020/spark-2c200b2c-0f2b-45e2-a305-977b99ed94a3
> 17/01/16 09:50:42 INFO ShutdownHookManager: Deleting directory
> /data12/hadoop/yarn/local/usercache/spark/appcache/
> application_1484211706075_0020/spark-582b6acc-a159-49ff-832c-472aacae8894
> 17/01/16 09:50:42 INFO ShutdownHookManager: Deleting directory
> /data05/hadoop/yarn/local/usercache/spark/appcache/
> application_1484211706075_0020/spark-86a35020-0827-4eb4-b40c-d7e708ab62f5
> 17/01/16 09:50:42 INFO ShutdownHookManager: Deleting directory
> /data06/hadoop/yarn/local/usercache/spark/appcache/
> application_1484211706075_0020/spark-209e2803-4190-4366-9f35-3d2e21e2ff48
> 17/01/16 09:50:42 INFO ShutdownHookManager: Deleting directory
> /data07/hadoop/yarn/local/usercache/spark/appcache/
> application_1484211706075_0020/spark-1ae5a1f4-18c9-4de9-94bf-df6aaa693df5
> 17/01/16 09:50:42 INFO ShutdownHookManager: Deleting directory
> /data10/hadoop/yarn/local/usercache/spark/appcache/
> application_1484211706075_0020/spark-b78c9877-6007-400d-b067-3e55d969542c
>
>
>
> 2. The Beeline client cannot query Carbon data or create a Carbon table.
>
> 0: jdbc:hive2://dpnode03:10000> select * from sale limit 10;
> +-----------+--+
> | sale.col  |
> +-----------+--+
> +-----------+--+
>
> 0: jdbc:hive2://dpnode03:10000> create table info (`name` string, `age` int)
> stored by 'carbondata';
> Error: Error while compiling statement: FAILED: SemanticException Cannot
> find class 'carbondata' (state=42000,code=40000)
>
>
>
> Thanks.
>
>
>
>

Re: Re: unable to use CarbonThriftServer and Beeline client

Naresh P R
Hi Li Peng,

You can refer to the link below for CarbonData deployment-related configuration
on a Spark standalone cluster and on a Spark YARN cluster:

https://cwiki.apache.org/confluence/display/CARBONDATA/Cluster+deployment+guide
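
For reference, a minimal spark-defaults.conf sketch in the spirit of that guide could look like the lines below. The spark.* keys are standard Spark settings; the carbonlib and carbon.properties paths are the ones already appearing in this thread, and carbon.properties.filepath is the CarbonData property name as I recall it from the guide, so the exact entries should be checked against the linked page:

spark.driver.extraClassPath      /usr/hdp/2.5.0.0-1245/spark/carbonlib/*
spark.executor.extraClassPath    /usr/hdp/2.5.0.0-1245/spark/carbonlib/*
spark.driver.extraJavaOptions    -Dcarbon.properties.filepath=/usr/hdp/2.5.0.0-1245/spark/conf/carbon.properties
spark.executor.extraJavaOptions  -Dcarbon.properties.filepath=/usr/hdp/2.5.0.0-1245/spark/conf/carbon.properties
spark.yarn.dist.files            /usr/hdp/2.5.0.0-1245/spark/conf/carbon.properties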
--
Regards,
Naresh P R

On Tue, Jan 17, 2017 at 7:47 AM, Li Peng <[hidden email]> wrote:

> Hi,
>    Do any other properties need to be set in spark-defaults.conf?
>    And should the script be run in client mode or cluster mode?
>
> Thanks.
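
On the second question, the deploy mode is chosen explicitly on the spark-submit command line. A sketch using only standard Spark flags, with the queue, class, jar and store path reused from the command earlier in this thread (the --jars list omitted here for brevity):

spark-submit --master yarn --deploy-mode client \
  --queue spark \
  --conf spark.sql.hive.thriftServer.singleSession=true \
  --class org.apache.carbondata.spark.thriftserver.CarbonThriftServer \
  /usr/hdp/2.5.0.0-1245/spark/carbonlib/carbondata_2.10-0.2.1-incubating-SNAPSHOT-shade-hadoop2.7.3.jar \
  hdfs://julong/carbondata/carbonstore

With --deploy-mode cluster the driver, and with it the Thrift server's listening port, runs inside a YARN container on an arbitrary node instead of on the submitting host, which matters when Beeline needs a stable host:port to reach.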
>
>
>
>
> Naresh P R wrote
> > Hi Li Peng,
> >
> > From the shared logs, I can see the Thrift server is stopped immediately
> > after it starts:
> >
> > 17/01/16 09:50:42 INFO ThriftCLIService: Starting ThriftBinaryCLIService on port 10000 with 5...500 worker threads
> > 17/01/16 09:50:42 INFO SparkContext: Invoking stop() from shutdown hook
> > 17/01/16 09:50:42 INFO HiveServer2: Shutting down HiveServer2
> >
> > As CarbonThriftServer's first argument is the store path, I suspect this
> > could be the reason the Thrift server stops.
> >
> > Can you try starting your Thrift server with the command below and then
> > check the query?
> >
> > spark-submit
> > --queue spark
> > --conf spark.sql.hive.thriftServer.singleSession=true
> > --class org.apache.carbondata.spark.thriftserver.CarbonThriftServer "hdfs://julong/carbondata/carbonstore"
> > --jars /usr/hdp/current/spark-client/lib/datanucleus-api-jdo-3.2.6.jar,/usr/hdp/current/spark-client/lib/datanucleus-rdbms-3.2.9.jar,/usr/hdp/current/spark-client/lib/datanucleus-core-3.2.10.jar
> > /usr/hdp/2.5.0.0-1245/spark/carbonlib/carbondata_2.10-0.2.1-incubating-SNAPSHOT-shade-hadoop2.7.3.jar
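
If the Thrift server then stays up and keeps listening on port 10000 (the ThriftCLIService line quoted above), a quick sanity check from Beeline, reusing the host, user and query already shown in this thread, would be:

beeline -u jdbc:hive2://dpnode03:10000 -n spark
0: jdbc:hive2://dpnode03:10000> select * from sale limit 10;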
> >
> > -----
> > Regards,
> > Naresh P R
> >
> >