Logging problem

Rana Faisal Munir
Hi,

Today, I was running a filter query ("SELECT * FROM widetable WHERE
col_long_0 = 0") on a wide table with 1187 columns, and Spark started
printing the output below. It spills a lot of log output that I want to
turn off. Is there any option to turn it off? I have tried both options
(ERROR and INFO) in the log4j.properties file, but neither worked for me.

Thank you

Regards
Faisal


17/05/24 12:39:41 INFO CarbonLateDecodeRule: main Starting to optimize plan
17/05/24 12:39:41 INFO CarbonLateDecodeRule: main Skip CarbonOptimizer
17/05/24 12:39:42 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
17/05/24 12:39:42 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
17/05/24 12:39:42 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
17/05/24 12:39:42 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
17/05/24 12:39:42 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
17/05/24 12:39:42 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
17/05/24 12:39:42 INFO SQLHadoopMapReduceCommitProtocol: Using output committer class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
17/05/24 12:39:44 ERROR CodeGenerator: failed to compile: org.codehaus.janino.JaninoRuntimeException: Code of method "processNext()V" of class "org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator" grows beyond 64 KB
/* 001 */ public Object generate(Object[] references) {
/* 002 */   return new GeneratedIterator(references);
/* 003 */ }
/* 004 */
/* 005 */ final class GeneratedIterator extends org.apache.spark.sql.execution.BufferedRowIterator {
/* 006 */   private Object[] references;
/* 007 */   private scala.collection.Iterator[] inputs;
/* 008 */   private scala.collection.Iterator scan_input;
/* 009 */   private org.apache.spark.sql.execution.metric.SQLMetric scan_numOutputRows;
/* 010 */   private org.apache.spark.sql.execution.metric.SQLMetric scan_scanTime;
/* 011 */   private long scan_scanTime1;
/* 012 */   private org.apache.spark.sql.execution.vectorized.ColumnarBatch scan_batch;
/* 013 */   private int scan_batchIdx;
/* 014 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance0;
/* 015 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance1;
/* 016 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance2;
/* 017 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance3;
/* 018 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance4;
/* 019 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance5;
/* 020 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance6;
/* 021 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance7;
/* 022 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance8;
/* 023 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance9;
/* 024 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance10;
/* 025 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance11;
/* 026 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance12;
/* 027 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance13;
/* 028 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance14;
/* 029 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance15;
/* 030 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance16;
/* 031 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance17;
/* 032 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance18;
/* 033 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance19;
/* 034 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance20;
/* 035 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance21;
/* 036 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance22;
/* 037 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance23;

Re: Logging problem

Liang Chen
Administrator
Hi Rana

Is this query running in the Spark shell?
If so, please try the script below:

import org.apache.log4j.Logger
import org.apache.log4j.Level

// Turn off all loggers under the "org" and "akka" packages:
Logger.getLogger("org").setLevel(Level.OFF)
Logger.getLogger("akka").setLevel(Level.OFF)


Regards
Liang

Re: Logging problem

Liang Chen
Administrator
Hi Rana

Please let us know whether your issue has been solved.

Regards
Liang


Re: Logging problem

Rana Faisal Munir
Hi Liang,

I made changes to Spark's log4j.properties file, changing INFO to
ERROR, and that stopped the issue.
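
For anyone hitting the same problem, the change is presumably a one-liner
along these lines, assuming the stock conf/log4j.properties template that
ships with Spark (the exact contents vary by version):

# conf/log4j.properties
# Raise the root logging threshold from INFO to ERROR so that only
# errors (and worse) reach the console:
log4j.rootCategory=ERROR, console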


Thank you


Regards

Faisal

