Appender may cause deadlock on AsyncLoggerDisruptor #2893
Even if it is not the Kafka client, any other client library that logs through Log4j 2 has the same issue. One workaround is to disable Log4j 2 logging for the client component.
Is there a way to disable logging for a particular library, other than raising its logging level threshold?
That is not an option; the queue will fill up sooner or later.
Tip (TL;DR): set the appropriate queue-full policy property. There are already several configuration properties that regulate what happens when the queue becomes full:
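As a sketch of the kind of properties meant here (assuming Log4j 2's documented queue-full settings; the exact names were elided in the original comment), the discard policy can be configured like this:

```properties
# log4j2.component.properties (or passed as -D system properties)

# Drop events instead of blocking callers when the async queue is full:
log4j2.asyncQueueFullPolicy = Discard

# When the queue is full, events at this level and less severe are
# discarded (more severe events still wait for space):
log4j2.discardThreshold = INFO
```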
Note that Log4j Core never blocks on its own asynchronous background thread (see lines 331 to 340 of the sources at commit b593be7):
Your problem arises because the Kafka appender uses its own background thread to send log events, and Log4j Core does not know that it must not block on that thread. You have several solutions to this problem:
Warning: to switch from the "all asynchronous loggers" setup to the "mixed sync/async loggers" setup, you need to restore the default value of the corresponding system property.
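A hypothetical `log4j2.xml` sketch of the "mixed sync/async loggers" setup (appender names, topic, and logger names here are illustrative assumptions, not from the issue): plain loggers plus an `<Async>` wrapper, with no `AsyncLoggerContextSelector` JVM flag, and the Kafka client's own logging routed away from the Kafka appender so it cannot feed events back into itself.

```xml
<Configuration>
  <Appenders>
    <Kafka name="KafkaApp" topic="logs">…</Kafka>
    <!-- Only this appender runs asynchronously; loggers stay synchronous. -->
    <Async name="AsyncKafka">
      <AppenderRef ref="KafkaApp"/>
    </Async>
  </Appenders>
  <Loggers>
    <!-- Keep the Kafka client's internal logging off the Kafka appender. -->
    <Logger name="org.apache.kafka" level="warn" additivity="false"/>
    <Root level="info">
      <AppenderRef ref="AsyncKafka"/>
    </Root>
  </Loggers>
</Configuration>
```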
@quaff, @ppkarwasz is spot on with his analysis and suggestions. My 2 cents are...
Thanks for your quick response.
The correct system property key is
When you use asynchronous elements, you lose the ability to receive
That is the legacy pre-2.10 name of the configuration property. While it still works for backward compatibility, the post-2.10 name should be preferred.
Description
A log operation blocks when the RingBuffer is full, waiting for the appender to consume LogEvents from the RingBuffer; but the appender itself may log before consuming, which triggers a deadlock.
Here is my actual case in production: I am using the Kafka appender, and logging from the Kafka client caused the deadlock; see TransactionManager.java.
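The mechanism can be modeled with a plain bounded queue (this is a minimal sketch under stated assumptions, not Log4j's real `RingBuffer` classes): application threads fill the buffer, and then the single consumer thread itself tries to append before draining, so no thread is left to make space.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical model of the reported deadlock: a bounded buffer with one
// consumer. If the consumer tries to enqueue while the buffer is full,
// a blocking put() would never return, because only the consumer itself
// could free a slot.
public class DeadlockModel {
    public static boolean consumerCanAppend(int capacity) {
        BlockingQueue<String> ring = new ArrayBlockingQueue<>(capacity);
        while (ring.offer("app event")) { }  // application threads fill the buffer

        // Pretend we are the appender/consumer thread: before draining, the
        // Kafka client emits its own log event. A timed offer() stands in for
        // the blocking publish and shows that no progress is possible.
        try {
            return ring.offer("kafka client log", 100, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }
}
```

With a blocking `put()` in place of the timed `offer()`, this is exactly the hang described above.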
Proposal
The issue is not Kafka-specific: log4j should discard LogEvents originating from appenders if the underlying RingBuffer is full.
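The proposal could be sketched as follows (a hypothetical illustration with a plain `BlockingQueue`, not Log4j's actual implementation): ordinary producer threads may block waiting for space, but an enqueue attempt made from the consumer thread itself falls back to a non-blocking offer and drops the event when the buffer is full.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch of "discard LogEvents from appenders when the buffer
// is full": track the consumer thread and never block on it.
public class QueueFullGuard {
    private final BlockingQueue<String> queue;
    private volatile Thread consumerThread;

    public QueueFullGuard(int capacity) {
        this.queue = new ArrayBlockingQueue<>(capacity);
    }

    public void setConsumerThread(Thread t) {
        this.consumerThread = t;
    }

    /** Returns true if the event was enqueued, false if it was dropped. */
    public boolean tryEnqueue(String event) {
        if (Thread.currentThread() == consumerThread) {
            // Blocking here would deadlock: the only thread able to drain
            // the queue would be waiting for space in that same queue.
            return queue.offer(event);
        }
        try {
            queue.put(event);  // ordinary producers may wait for free space
            return true;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }
}
```

The design choice mirrored here is the one the maintainers describe for Log4j Core itself: the framework knows its own background thread and avoids blocking on it, but it cannot know about threads that third-party appenders spawn unless they are registered in some way.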