Avoidance of head-of-line blocking by means of portals suspension? #652
This is a known issue. R2DBC is a bit more honest than JDBC in this regard, as JDBC fetches the entire cursor into memory when another query runs while the cursor is open. That is also the reason why reading an entire chunk into memory […]. I would ask you to file the ticket in https://github.com/spring-projects/spring-data-relational/issues, as the driver itself only passes SQL through. The issue could also be solved by utilizing multiple connections; that would, however, force data to escape its transactional closure. Your example is doing exactly that: allocating cursors on the server and progressively consuming them.
Thank you for the swift reply! I agree it could be solved in spring-data-relational, but that would be an ad-hoc solution, whereas this issue is intrinsic to basic driver scenarios (it's quite a mundane thing to have a downstream that does something with the data from the upstream).
For the sake of conciseness let us write down what happens in the channel:
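The trace itself did not survive on this page; reconstructed illustratively in PostgreSQL wire-protocol terms (F = frontend, B = backend; Describe/Sync messages omitted for brevity, cursor name assumed), the cursor-based flow looks roughly like this:

```
F: Parse/Bind/Execute   DECLARE c NO SCROLL CURSOR FOR select userId from users
B: CommandComplete                          -- conversation finished, queue head freed
F: Parse/Bind/Execute   FETCH FORWARD 1 FROM c
B: DataRow (userId = 1), CommandComplete    -- again a short, self-completing conversation
F: Parse/Bind/Execute   select * from accounts where userId = $1
B: DataRow …, CommandComplete               -- the nested select runs in between fetches
F: Parse/Bind/Execute   FETCH FORWARD 1 FROM c
…
F: Parse/Bind/Execute   CLOSE c
B: CommandComplete
```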
Note that all of this communication happens inside the same transaction (hence over the same connection), and, according to https://www.postgresql.org/message-id/1915c800-2c49-4039-a840-7cafc0654fe4%40iki.fi, the driver could achieve the same result without a cursor by means of a row limit in the Execute message and a more sophisticated queue-processing algorithm (it wouldn't actually be a queue):
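That traffic did not survive extraction either; an illustrative reconstruction of the cursorless, limit-based variant (message names per the PostgreSQL extended query protocol; portal name and row counts are assumptions):

```
F: Parse    select userId from users
F: Bind     portal p1
F: Execute  p1 (max rows = fetchSize)
B: DataRow × fetchSize, PortalSuspended     -- p1 stays open, conversation pauses
F: Parse/Bind/Execute   select * from accounts where userId = $1
B: DataRow …, CommandComplete               -- downstream conversations drain here
F: Execute  p1 (max rows = fetchSize)       -- resume the suspended portal
B: DataRow …, PortalSuspended               -- or CommandComplete once exhausted
F: Close    p1
F: Sync
```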
If the driver produced such traffic, the very first chain would work for any number of users.
Am I missing something? Thanks.
Greetings!
Please consider the following code. It's a conceptual model of a reactive chain that reads a stream of DDD aggregates (in a real app I would rather do the same via Spring CRUD repositories). The first select retrieves the aggregate roots (users); the selects in the downstream retrieve related entities (accounts):
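The snippet itself was lost from this page. A minimal sketch of such a chain, assuming Spring Data R2DBC's DatabaseClient, with hypothetical User/Account types and an accountFromRow mapper (not runnable as-is, shown only to fix the shape of the problem):

```java
// Sketch only: assumes spring-data-r2dbc; User, Account and accountFromRow are hypothetical.
Flux<User> aggregates = databaseClient
        .sql("select userId from users")                           // first select: aggregate roots
        .map(row -> row.get("userId", Long.class))
        .all()
        .concatMap(userId -> databaseClient
                .sql("select * from accounts where userId = :userId")  // nested select per user
                .bind("userId", userId)
                .map(row -> accountFromRow(row))
                .all()                                             // runs on the same connection
                .collectList()
                .map(accounts -> new User(userId, accounts)));
```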
Problem
Such a chain can read at most 255 users. If that number is exceeded, an exception ("Cannot exchange messages because the request queue limit is exceeded") is thrown, which is frustrating, especially for spring-data CRUD-repository users. Apparently, anyone who wants to read a stream of (DDD) aggregates via multiple CRUD repositories has to resort either to pagination (carefully calculating all the requests the downstream might emit) or to joins (and the former has nothing to do with the reactive approach: we want streams, not pages).
The reason for the aforementioned exception is head-of-line blocking: the conversation for "select userId from users" can't be removed from the conversations queue (io.r2dbc.postgresql.client.ReactorNettyClient) until Postgres's CommandComplete is received and the portal is closed; meanwhile, conversations for the requests from the downstream queue up until the limit is reached and the exception is thrown.
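The blocking behaviour can be shown with a toy, stdlib-only model (the queue limit of 256 follows from the 255-user observation above; everything else is a hypothetical simulation, not driver code):

```java
import java.util.ArrayDeque;
import java.util.Deque;

class HeadOfLineModel {
    static final int QUEUE_LIMIT = 256;  // assumed conversation queue limit

    /**
     * Models the failing chain: the outer conversation stays at the head of the
     * queue until CommandComplete, while each emitted user row enqueues a nested
     * "select accounts" conversation behind it.
     * Returns the number of users processed before the queue overflows.
     */
    static int usersBeforeOverflow(int totalUsers) {
        Deque<String> conversations = new ArrayDeque<>();
        conversations.add("select userId from users");      // blocks the head
        for (int user = 1; user <= totalUsers; user++) {
            if (conversations.size() >= QUEUE_LIMIT) {
                // "Cannot exchange messages because the request queue limit is exceeded"
                return user - 1;
            }
            // the nested select queues up; it cannot run while the head is unfinished
            conversations.add("select * from accounts where userId = " + user);
        }
        return totalUsers;  // stream ended before hitting the limit
    }
}
```

With these assumptions the model reproduces the observed ceiling: the outer conversation plus 255 queued nested ones fill the queue, so the 256th user overflows it.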
Workaround
At the same time, the task is doable. The following code (employing a cursor) is capable of reading an infinite stream of users while emitting an additional request for each of them:
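The snippet was lost from this page; at the SQL level, the workaround boils down to a sequence like the following, each statement of which the driver runs as a short, self-completing conversation (illustrative; the cursor name and single-row fetch are assumptions):

```sql
BEGIN;
DECLARE user_cur NO SCROLL CURSOR FOR SELECT userId FROM users;
-- repeated, one iteration per user, until FETCH returns no row:
FETCH FORWARD 1 FROM user_cur;              -- completes immediately, frees the queue head
SELECT * FROM accounts WHERE userId = $1;   -- the nested select for the fetched user
CLOSE user_cur;
COMMIT;
```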
The key difference is that the driver treats every cursor-related instruction as a separate conversation, and that suspension is intrinsic to portals created via DECLARE CURSOR; together these make it possible to send additional (conventional) selects inside "transformer.invoke(it)".
Possible solution
At the same time, according to this answer on pgsql-hackers (https://www.postgresql.org/message-id/1915c800-2c49-4039-a840-7cafc0654fe4%40iki.fi), there is no difference (for the backend) between a portal created via DECLARE CURSOR and one created via Bind.
So, could the following approach pay off? When Postgres's PortalSuspended is received for the Nth conversation (a fetchSize has to be defined for the corresponding query, which is already possible), the driver goes on to execute the subsequent conversations (generated by a downstream, if any) and resumes the Nth conversation (via a new Execute command) only after all subsequent conversations are done. Thereby the following simple code, which a lot of developers tend to write, could work (currently it can't, owing to the aforementioned reasons):
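Under the same toy model as above (same assumed queue limit of 256; a hypothetical simulation, not driver code), the proposed suspend-and-resume scheduling never lets the queue grow past fetchSize entries, so any number of users goes through:

```java
import java.util.ArrayDeque;
import java.util.Deque;

class PortalSuspensionModel {
    static final int QUEUE_LIMIT = 256;  // assumed conversation queue limit

    /**
     * Models the proposal: Execute carries maxRows = fetchSize; after
     * PortalSuspended the outer conversation leaves the head of the queue,
     * the queued downstream conversations drain, and a fresh Execute
     * resumes the portal for the next batch.
     * Returns the number of users fully processed, or -1 on overflow.
     */
    static int process(int totalUsers, int fetchSize) {
        Deque<String> conversations = new ArrayDeque<>();
        int fetched = 0, processed = 0;
        while (fetched < totalUsers) {
            int batch = Math.min(fetchSize, totalUsers - fetched);
            // Execute(maxRows = fetchSize): rows arrive, then PortalSuspended
            for (int i = 0; i < batch; i++) {
                conversations.add("select * from accounts where userId = " + (fetched + i + 1));
                if (conversations.size() >= QUEUE_LIMIT) return -1;  // would overflow
            }
            fetched += batch;
            // portal suspended: drain every queued downstream conversation
            while (!conversations.isEmpty()) {
                conversations.poll();
                processed++;
            }
            // then a new Execute resumes the portal for the next batch
        }
        return processed;
    }
}
```

In this model the queue depth is bounded by the chosen fetchSize rather than by the total number of users, which is exactly the property the proposal is after.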
Motivation
In my eyes this would be a step change for spring-data users, because currently the only ways to read a stream of complex objects seem to be the aforementioned approach (employing a cursor) or joins. The former carries undue complexity; the latter has a lot of disadvantages:
Thanks!