
Re: About: Carbon Thrift Server is always hung dead!

Posted by jingych on Feb 04, 2021; 8:28am
URL: http://apache-carbondata-dev-mailing-list-archive.168.s1.nabble.com/About-Carbon-Thrift-Server-is-always-hung-dead-tp106006p106029.html

Hi Kunal Kapoor,

Thanks for your reply.

I've switched the carbon thrift server to Option 1.
I'll monitor the new setup for a day or two and report back whether it works.

But I still have a question about the HA solution:
We are using JDBC to connect to the carbon tables.
So I want to know: does the new thrift server setup support HA?
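For context, our clients connect over JDBC roughly like this (the host, port, user, and table name below are placeholders, not our real settings):

```shell
# Connect to the Spark thrift server over JDBC using beeline;
# "thrift-host", "carbon_user", and "my_carbon_table" are placeholders.
beeline -u "jdbc:hive2://thrift-host:10000" -n carbon_user \
  -e "SELECT COUNT(*) FROM my_carbon_table;"
```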

Thanks!
Jingych.

-----Original Message-----
From: Kunal Kapoor [mailto:[hidden email]]
Sent: February 4, 2021 13:15
To: [hidden email]
Subject: Re: About: Carbon Thrift Server is always hung dead!

Hi jingych,

1. Use of CarbonThriftServer has been deprecated by the community since the 2.0 release. Please use the "spark.sql.extensions" property to configure and use CarbonData as described here <https://github.com/apache/carbondata/blob/master/docs/quick-start-guide.md#option-1-starting-thrift-server-with-carbonextensionssince-20> (Option 1).
2. HA for CarbonData can be achieved by using the existing Spark HA implementation (http://spark.apache.org/docs/latest/spark-standalone.html#high-availability).
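As a minimal sketch of Option 1, assuming Spark's stock start-thriftserver.sh script and the CarbonData jar path from your submit command (the ZooKeeper quorum shown for standalone-master HA is a placeholder):

```shell
# Option 1: start Spark's own HiveThriftServer2 with CarbonData enabled
# via spark.sql.extensions; the CarbonThriftServer class is not used.
./sbin/start-thriftserver.sh \
  --master yarn \
  --conf spark.sql.extensions=org.apache.spark.sql.CarbonExtensions \
  --jars ../carbonlib/apache-carbondata-2.1.0-SNAPSHOT-bin-spark2.4.5-hadoop2.7.2.jar

# For Spark standalone-master HA, masters coordinate through ZooKeeper
# (placeholder quorum addresses):
#   spark.deploy.recoveryMode=ZOOKEEPER
#   spark.deploy.zookeeper.url=zk1:2181,zk2:2181,zk3:2181
```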

Please try the solutions above and let us know whether the problem is resolved; we can investigate further if any issue persists.

You can join Slack for better communication with us using this link <https://join.slack.com/t/carbondataworkspace/shared_invite/zt-g8sv1g92-pr3GTvjrW5H9DVvNl6H2dg>.

Thank you
Kunal Kapoor


On Thu, Feb 4, 2021 at 6:45 AM jingych <[hidden email]> wrote:

> Hello, all!
>
> Thanks to the CarbonData community; it's really fast!
>
> But recently I have been struggling with the carbon thrift server: it
> always hangs and eventually dies.
>
> So I really need your help, please!
>
> My environment:
> 6 nodes: Carbon 2.0 + Spark 2.4.5 + Hadoop 2.10; each node has 16 cores,
> 64 GB memory, and 1 TB disk.
>
> And here is my thrift server shell:
> spark-submit \
>   --master yarn \
>   --num-executors 4 \
>   --driver-memory 10G \
>   --executor-memory 10G \
>   --executor-cores 4 \
>   --class org.apache.carbondata.spark.thriftserver.CarbonThriftServer \
>   ../carbonlib/apache-carbondata-2.1.0-SNAPSHOT-bin-spark2.4.5-hadoop2.7.2.jar
>
> So what's the problem? And is there an HA solution for the thrift server?
>
> Thanks!
>
> Best regards!
>
> ________________________________
>  Jingych
> 2021-02-04
>
>
> ---------------------------------------------------------------------------------------------------
> Confidentiality Notice: The information contained in this e-mail and any
> accompanying attachment(s) is intended only for the use of the intended
> recipient and may be confidential and/or privileged of Neusoft Corporation,
> its subsidiaries and/or its affiliates. If any reader of this communication
> is not the intended recipient, unauthorized use, forwarding, printing,
> storing, disclosure or copying is strictly prohibited, and may be unlawful.
> If you have received this communication in error, please immediately notify
> the sender by return e-mail, and delete the original message and all copies
> from your system. Thank you.
> ---------------------------------------------------------------------------------------------------
>
