Add Mint.HTTP.recv_response/3 #447
base: main
Conversation
Pull Request Test Coverage Report for Build 55ab096f5279d6b3e29ddcd2654eda50680b0c15-PR-447 — Coveralls
Another idea is to instead have:

```elixir
{:ok, conn} = Mint.HTTP.connect(:https, "httpbin.org", 443)

{:ok, conn, _acc = nil} =
  Mint.HTTP.request(
    conn,
    "GET",
    "/user-agent",
    _headers = [],
    _body = nil,
    _timeout = 5000,
    _acc = nil,
    fn conn, entry, acc ->
      IO.inspect(entry)
      {:cont, conn, acc}
    end
  )
```

which does request/recv_response. Yet another idea is to have a single function that does connect/request/recv_response. I think `request/8` above is a nice middle ground though.
@wojtekmach why would this crash?

```elixir
{:ok, conn} = Mint.HTTP1.connect(:https, "httpbin.org", 443, mode: :passive)
{:ok, conn, ref1} = Mint.HTTP1.request(conn, "GET", "/user-agent", [], nil)
{:ok, conn, ref2} = Mint.HTTP1.request(conn, "GET", "/user-agent", [], nil)
Mint.HTTP1.recv_response(conn, ref1, 5000)
Mint.HTTP1.recv_response(conn, ref2, 5000)
```
By the way, I’m totally in favor of this. 👍
Yes, I can totally disregard unrelated messages and document it as such. The only problem is that they cannot be recovered. My understanding is that in active mode we do a selective receive on the conn, not the request, and in passive mode we’d have to read from the socket. Again, I think it’s fine. Should we support active mode, passive mode, or both? Should we support HTTP/2, i.e. should this become a function on Mint.HTTP? Fair enough about not supporting /8.
Yes IMO
Yes, HTTP/2 for sure. I’m thinking we might want to enforce this on passive conns actually, because if you think about it there’s not much advantage to doing a blocking request while using active mode (yes, you can receive bytes while parsing, but that’s probably negligible).
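For reference, the two receive styles being compared here, as a rough sketch and not code from this PR (host and request are just placeholders):

```elixir
# Active mode (the default): socket data arrives as Erlang messages to the owner
# process and is turned into response entries with Mint.HTTP.stream/2.
{:ok, conn} = Mint.HTTP.connect(:https, "httpbin.org", 443)
{:ok, conn, _ref} = Mint.HTTP.request(conn, "GET", "/user-agent", [], nil)

receive do
  message ->
    {:ok, _conn, responses} = Mint.HTTP.stream(conn, message)
    IO.inspect(responses)
end

# Passive mode: the caller blocks on the socket explicitly with Mint.HTTP.recv/3.
{:ok, conn} = Mint.HTTP.connect(:https, "httpbin.org", 443, mode: :passive)
{:ok, conn, _ref} = Mint.HTTP.request(conn, "GET", "/user-agent", [], nil)
{:ok, _conn, responses} = Mint.HTTP.recv(conn, 0, 5000)
IO.inspect(responses)
```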
Got it, thanks. Would you prefer to start with just `recv_response/3` or `recv_response/5` (that additionally has …
@wojtekmach can …
I renamed this to …
@whatyouhide interesting! By stream of …
`lib/mint/http.ex` (Outdated)

```elixir
end

defp recv_response([{:data, ref, data} | rest], acc, conn, ref, timeout) do
  acc = update_in(acc.body, &(&1 <> data))
```
This could be:

```diff
- acc = update_in(acc.body, &(&1 <> data))
+ acc = update_in(acc.body, &[&1, data])
```

and then on `:done` we would `IO.iodata_to_binary/1` it. I decided to keep it simple for now; Finch is like that (using `<>`) at the moment too.
We shouldn't use `<>`, and we should probably change Finch to not do that either. That will most likely copy the left side on every concatenation; the iodata version should be quite a bit more efficient. I would also keep the `acc` as a triplet.
Keep as a triplet and convert to a map at the end, right?
Yes!
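For illustration, a minimal sketch of that approach, assuming a `{status, headers, body_iodata}` triplet that is only converted to a map on `:done` (the module and function names are hypothetical, not the PR's code):

```elixir
defmodule ResponseAcc do
  # Hypothetical helper: fold Mint response entries into a {status, headers, body}
  # triplet, keeping the body as iodata, and build the binary once at the end.
  def new, do: {nil, [], []}

  def update({:status, _ref, status}, {_status, headers, body}), do: {status, headers, body}
  def update({:headers, _ref, new}, {status, headers, body}), do: {status, headers ++ new, body}
  def update({:data, _ref, data}, {status, headers, body}), do: {status, headers, [body, data]}

  def finish({status, headers, body}),
    do: %{status: status, headers: headers, body: IO.iodata_to_binary(body)}
end
```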
One nit and looks good to me!
RE CI failure, sigh, actions/runner-images#9692.
CI is fixed in #448.
Damn, I always forget that streams cannot carry an accumulator, and we really want to accumulate over the conn 🙄
We could have a single function that does connect/request/recv_response/close and returns `{:ok, enumerable}` with no conn; it would be something like `request/9` or, you know, `request/1`. :)
Force-pushed from e094168 to 2c10fca
Since the use case here is to make a request and receive a response in a single operation without having to deal with the intricacies of Mint, I would go with:

```elixir
Mint.HTTP.request_and_response(
  conn,
  method,
  path,
  headers,
  body,
  options :: [{:timeout, timeout()}]
)
```

which returns a response map (usual …
@whatyouhide yeah, I think this is a good compromise. The only thing that comes to my mind is with a distinct …
I think I would keep two separate functions. You get to support request streaming, passive mode, and pipelining by keeping them apart. Sure, they are not “deal breakers”, but we are not gaining much by unifying anyway, so why do it?
That being said, I think what we have right now is pretty decent (if I may say so):

```elixir
iex> {:ok, conn} = Mint.HTTP.connect(:https, "httpbin.org", 443, mode: :passive)
iex> {:ok, conn, request_ref} = Mint.HTTP.request(conn, "GET", "/user-agent", [], nil)
iex> {:ok, _conn, response} = Mint.HTTP.recv_response(conn, request_ref, 5000)
iex> response
%{
  status: 200,
  headers: [{"date", ...}, ...],
  body: "{\n  \"user-agent\": \"1.6.2\"\n}\n"
}
```

edit: right, ditto …
Yeah, I like that, but I do not like the fact that it can trip you up if there are multiple in-flight requests. Can we raise an error if there are?
How can it trip you up? You can for HTTP/1 if you consume the responses out of order, but not for HTTP/2, right?
You can, because as Wojtek said you will still consume the bytes (or messages) for other requests. Say you have … in a single packet. If you call …
Yeah, see this comment: #447 (comment). I’m fine with discarding (which is what’s on the branch right now) or crashing. Lmk!
We can crash if there are multiple requests in flight; the conn knows that.
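A sketch of how that check could look, assuming it would rely on `Mint.HTTP.open_request_count/1` (not necessarily how the PR implements it; the module and function names below are hypothetical):

```elixir
defmodule RecvGuard do
  # Hypothetical guard, not the PR's actual code: refuse a blocking receive when
  # more than one request is outstanding on the connection.
  def ensure_single_request!(conn) do
    if Mint.HTTP.open_request_count(conn) > 1 do
      raise ArgumentError, "cannot use recv_response/3 with multiple requests in flight"
    end

    conn
  end
end
```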
Sorry, to be clear, there is a valid use case for this feature: …

So I am not sure why we should raise and forbid all cases? Couldn't we instead just document that HTTP/1 itself requires ordering?
@josevalim okay, yes, in HTTP/1 requests are in order so this is not an issue. In HTTP/2 it is an issue, as per #447 (comment). If frames come out of order, we can absolutely match them back to their corresponding request, but generally Mint emits "responses" as frames. So, for the sequence of frames in the comment above, a normal Mint flow would return:

```elixir
[{:headers, ref1, ...}, {:headers, ref2, ...}, {:body, ref1, ...}, {:done, ref1}]
```

In this, you didn't lose the headers of … Considering that this is a nice utility for one-shot requests, and you can still implement blocking …
@whatyouhide If we receive messages from …
@josevalim yeah, that's the complexity I want to avoid, as I don't think it's justified for the …
Can you expand on the complexity? Because I was thinking it would mostly be a map with frames per ref?
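For context, a purely illustrative sketch of that idea, i.e. buffering response entries in a map keyed by request ref (not part of this PR; the module name is made up):

```elixir
defmodule FrameBuffer do
  # Illustrative only: group Mint response entries by their request ref so that
  # entries belonging to other in-flight requests are kept instead of discarded.
  def add(responses, buffers) when is_list(responses) and is_map(buffers) do
    Enum.reduce(responses, buffers, fn entry, acc ->
      # Every Mint response entry ({:status, ref, _}, {:headers, ref, _}, {:data, ref, _},
      # {:done, ref}, ...) carries the request ref as its second element.
      ref = elem(entry, 1)
      Map.update(acc, ref, [entry], &(&1 ++ [entry]))
    end)
  end
end
```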
Complexity as in we don't need it 🙃 Also, can you describe a use case where I have …
@whatyouhide I think I finally get what you mean. The functions in Mint.HTTP2 do not receive the … Alternatively, we could change … So we can definitely do several requests and responses at once, but then we should rather build something on top of …

Although it would be nice to reduce the number of arguments.
@josevalim consider that this is just one more argument than … Also, the reason I’m pushing back here on features like body streaming and whatnot is that I want Mint to stay really low level: just an abstraction around the network socket, but with protocol understanding. If we start to add fancy functionality then I’m afraid we will just end up with Finch 😄 A single one-shot request/response function makes sense for scripting with fewer deps, for libraries, and so on, but I would probably stop there and ask anyone who needs more complex scenarios to just use the normal …
Yes, I agree with this. At the same time, there can be use cases where you only need to do one HTTP request, and people are resorting to Finch because there is nothing slightly more ergonomic in Mint. Also, if the goal is to one day have Mint in Erlang/OTP, I think such conveniences make sense, which is why I am not opposed to some slight exploration here. :) But yeah, if you need body streaming, it is probably fine to rely on the low-level functions.
@wojtekmach let's go with …
I renamed the function to … FWIW, and I think this is mostly because of how the test helpers were implemented, this functionality was easier to test when we were separately calling …
Force-pushed from d1db80f to f307e26
Thanks @wojtekmach this is looking great. Left a few comments 🙃
@doc """ | ||
Sends a request and receives a response from the socket in a blocking way. | ||
|
||
This function is a convenience for sending a request with `request/5` and repeatedly calling `recv/3`. |
```diff
- This function is a convenience for sending a request with `request/5` and repeatedly calling `recv/3`.
+ This function is a convenience for sending a request with `request/5` and calling `recv/3`
+ repeatedly until a full response is received.
```
```elixir
  * `:status` - HTTP response status, an integer.

  * `:headers` - HTTP response headers, a list of tuples `{header_name, header_value}`.
```
Also trailing headers?
```elixir
  * `:timeout` - the maximum amount of time in milliseconds waiting to receive the response.
    Setting to `:infinity` disables the timeout. Defaults to `:infinity`.
```
Mh, are we ok with a default that potentially blocks forever? Maybe 30s is a better default?
```elixir
    recv_response(rest, {status, headers, [body, data]}, conn, ref, timeout)
  end

  defp recv_response([{:done, ref} | _rest], {status, headers, body}, conn, ref, _timeout) do
```
Shouldn't we error out here if there is trailing response data? That is, this should always be `[{:done, ref}]`, no?
```elixir
  Contrary to `recv/3`, this function does not return partial responses on errors. Use
  `recv/3` for full control.

  > #### Error {: .error}
```
We said we'd also raise if the mode is not `:passive`, right?
Co-authored-by: Andrea Leopardi <an.leopardi@gmail.com>
I'd like to propose adding `Mint.HTTP1.recv_response/3`, which improves the ergonomics of making one-off requests.

There are a bunch of libraries that simply make one-off requests, to name a few: tzdata, goth/aws_credentials (they have their own pool and make a one-off request roughly every hour), esbuild/tailwind.

To play the devil's advocate, esbuild/tailwind simply use httpc, copy-pasting the secure SSL defaults from the EEF Security WG.
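For reference, a usage sketch along the lines of the example discussed earlier in the thread (passive mode; httpbin.org is just a stand-in endpoint, and the exact function name and module were still being discussed in this PR):

```elixir
# Open a connection, make one request, block until the full response arrives,
# then close the connection.
{:ok, conn} = Mint.HTTP.connect(:https, "httpbin.org", 443, mode: :passive)
{:ok, conn, request_ref} = Mint.HTTP.request(conn, "GET", "/user-agent", [], nil)
{:ok, conn, response} = Mint.HTTP.recv_response(conn, request_ref, 5000)
Mint.HTTP.close(conn)

# response is a map with :status, :headers, and :body, for example:
# %{status: 200, headers: [{"date", ...}, ...], body: "{\n  \"user-agent\": ...}\n"}
IO.inspect(response)
```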