LEAK: ByteBuf.release() was not called before it's garbage-collected #580
Comments
We keep seeing the issue, but we have not been able to track it down yet.
@mp911de You can find the Spring Boot project that exhibits this issue here: https://github.com/arconsis/server_benchmarks/tree/main/bookstore-springboot . The issue only pops up when the service is under load.
That is part of why we have not been able to pinpoint the issue: debugging is impossible under high load.
We have started seeing this issue reliably in an application that is regularly under high load. I hope the following logs help:
2023-02-26T19:13:29.268852707Z - io.netty.util.ResourceLeakDetector
2023-02-26T19:13:30.200918569Z - io.netty.util.ResourceLeakDetector
2023-02-26T19:13:32.580323591Z - io.netty.util.ResourceLeakDetector
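By default Netty's `ResourceLeakDetector` only samples a small fraction of buffer allocations and records few access points, which makes reports like the ones above hard to act on. A common way to get fuller leak traces is to raise the detection level via JVM system properties; this is a configuration sketch (the jar name is a placeholder), and paranoid mode tracks every allocation, so it is only suitable for test environments:

```shell
# Paranoid mode tracks every allocated buffer (significant overhead;
# do not use in production). targetRecords raises the number of
# recorded touch points per buffer from the default of 4.
java -Dio.netty.leakDetection.level=paranoid \
     -Dio.netty.leakDetection.targetRecords=32 \
     -jar app.jar
```

With this enabled, each leak report should include the stack traces of the last places the leaked buffer was accessed, which usually narrows down which operator dropped it.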
We also encountered the same issue. Is there any solution or workaround for this (besides increasing memory)?
Is it the only error in the logs? We have a similar kind of leak in
Wanted to drop some info here in case it's helpful in tracking this down. I maintain a Spring Boot WebFlux service (3.0.0) and am also seeing these leak reports. A client of my service will make a burst of requests all at once, and will sometimes cancel a big batch of them upon some failure on their end. Every time that happens, I get a burst of:
ByteBuf.release() was not called
DataRow.release() was not called
I've been able to reproduce this synthetically as well, with a load test where requests start to time out on the client side under enough load. Hope this helps!
Getting the same issue. Are we sure this is an r2dbc issue and not reactor-netty? Perhaps it is related to reactor/reactor-netty#881?
Getting the same "DataRow.release() was not called". Edit: the connections to the database remain open, unfortunately. We decided to abandon r2dbc until this issue is resolved.
Hi, we are running into a similar problem using Spring 3.1.3 and r2dbc 1.0.1. Our service logs this every now and then:
Hello. EDIT: I resolved this in my app by refactoring some stupid code that was causing very high access rates to the database.
@siegfried-chaisson Can you post a bit more detail? Knowing what caused the issue would give us some insight into where we need to investigate.
The impact of this issue remains a consistent problem in several applications. Since no resolution is in sight, we intend to switch over to JDBC in the meantime and wrap queries in
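When falling back to JDBC inside a reactive application, the blocking calls have to be kept off the event-loop threads; with Reactor that is typically done by wrapping the query in `Mono.fromCallable(...)` and subscribing on `Schedulers.boundedElastic()`. The same idea can be sketched with only the JDK, using a dedicated pool (the class and query method below are hypothetical stand-ins, not part of any of the projects discussed here):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BlockingOffload {
    // Dedicated pool so blocking JDBC work never runs on event-loop threads.
    // Daemon threads so the pool does not block JVM shutdown.
    private static final ExecutorService JDBC_POOL =
            Executors.newFixedThreadPool(8, r -> {
                Thread t = new Thread(r, "jdbc-pool");
                t.setDaemon(true);
                return t;
            });

    // Hypothetical blocking query; a real version would use JdbcTemplate
    // or a plain java.sql.Connection.
    static String findBookTitle(long id) {
        return "book-" + id; // stand-in for a blocking SELECT
    }

    // Hand callers a non-blocking handle to the blocking work, analogous
    // to Mono.fromCallable(...).subscribeOn(Schedulers.boundedElastic()).
    public static CompletableFuture<String> findBookTitleAsync(long id) {
        return CompletableFuture.supplyAsync(() -> findBookTitle(id), JDBC_POOL);
    }
}
```

The key point is the explicit, bounded pool: the caller's thread only schedules the query and is free immediately, while the blocking call occupies a worker thread sized for the database's connection limit.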
I'm sorry, but the bug is still occurring 😓. I switched to an H2 database with no issues, so perhaps the problem isn't related to my code. |
Hello everyone. The problem was related to the timing of updates, which were occurring too rapidly. This resulted in a situation where an update was attempted before the corresponding create operation had completed. In my case, I'm using a UUID as the primary key (@Id).

In the initial approach:

In the second approach:

Finally, to resolve this issue, I decided not to merge entities too quickly. Instead, I went back to a random UUID and performed a simple save operation. Additionally, I had to implement a grouping mechanism when reading the data. This approach successfully eliminated the leak issue for me. Here's a code snippet example that shows the change I made in the save method:

changed in:
Leaks do occur in integration tests as well, at random locations though. 😅
Hello, this isn't happening on my side anymore, since I read the Netty docs and found unreleased ByteBufs in my code and subscribers that never completed (map/flatMap/then/subscribe and more). 😅 Special thanks to the paranoid-level leak detector from Netty. Sorry, but in my case it wasn't an r2dbc-postgresql issue.
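For anyone chasing their own "was not called" reports: Netty's `ByteBuf` is reference-counted, and the leak detector fires when a buffer becomes garbage-collected while its count is still above zero. The contract can be modelled with a minimal stdlib-only class (this is an illustrative toy, not Netty's actual implementation):

```java
// Minimal model of Netty's ReferenceCounted contract: a buffer starts
// with refCnt == 1, retain() increments, release() decrements, and the
// memory may only be reclaimed once release() has returned true.
public class RefCounted {
    private int refCnt = 1; // held by whoever allocated it

    public synchronized RefCounted retain() {
        if (refCnt == 0) throw new IllegalStateException("refCnt: 0");
        refCnt++;
        return this;
    }

    // Returns true when the last reference was dropped, i.e. the
    // underlying memory could now be returned to the pool.
    public synchronized boolean release() {
        if (refCnt == 0) throw new IllegalStateException("refCnt: 0");
        return --refCnt == 0;
    }

    public synchronized int refCnt() {
        return refCnt;
    }
}
```

Every operator that consumes a buffer must either release it or pass ownership along; if a chain is cancelled or errors out between those two points, the count never reaches zero and the detector reports exactly the messages quoted in this thread.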
Bug Report
Hello, we are using Spring Boot 3.0.2, Spring Data R2DBC, and r2dbc-postgresql. Under normal conditions everything works fine, but when stress testing our service we observe Netty leaks which force our service to restart. I have attached an example stacktrace below.
Stacktrace:
Our code is very simple we are just using Spring Data Repositories:
and then just using the above repository like this:
Is there anything we configured wrongly, or anything we have to watch out for?