Description
An important customer of ours is very interested in using HTTP 103 Early Hints to lower the perceived latency experienced by their end users. This requires the ability to return more than one response head during the message exchange. The basic example from the RFC is:
Client request:
```
GET / HTTP/1.1
Host: example.com
```
Server response:
```
HTTP/1.1 103 Early Hints
Link: </style.css>; rel=preload; as=style
Link: </script.js>; rel=preload; as=script

HTTP/1.1 200 OK
Date: Fri, 26 May 2017 10:02:11 GMT
Content-Length: 1234
Content-Type: text/html; charset=utf-8
Link: </style.css>; rel=preload; as=style
Link: </script.js>; rel=preload; as=script

<!doctype html>
[... rest of the response body is omitted from the example ...]
```
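To make the wire format concrete: both heads travel on the same connection, each terminated by a blank line, followed by the single body. A minimal std-only sketch (the function name and the trimmed header set are invented for illustration):

```rust
// Sketch of the bytes a server emits for a 103-then-200 exchange:
// two complete response heads, each ended by a blank line, then one body.
fn early_hints_exchange(body: &str) -> String {
    let interim = "HTTP/1.1 103 Early Hints\r\n\
                   Link: </style.css>; rel=preload; as=style\r\n\r\n";
    let final_head = format!(
        "HTTP/1.1 200 OK\r\nContent-Length: {}\r\nContent-Type: text/html\r\n\r\n",
        body.len()
    );
    format!("{interim}{final_head}{body}")
}

fn main() {
    let wire = early_hints_exchange("<!doctype html>");
    // Two blank-line terminators precede the body: one per response head.
    let heads = wire.matches("\r\n\r\n").count();
    println!("response heads on the wire: {heads}"); // prints 2
}
```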
With hyper's service function model, we currently only have the chance to return a single response head. Supporting multiple response heads may require some substantial API changes or additions, so I'm hoping we can discuss potential designs before diving into implementation. Please take any type signatures mentioned below to be a bit handwavy; I know things are more complex with the tower types and such.
Channel/stream-based interfaces
These are my preferred approaches from a hyper user's perspective, but I admit I tend to get lost when trying to follow all the types involved with tower services under the hood, so I'm not sure if this is particularly feasible on the implementation side.
Service function channel argument
We could add a variant of `hyper::service::service_fn()` whose closure takes an additional sender argument, i.e., `FnMut(Request<R>, mpsc::Sender<Response<()>>) -> S`. The service implementation would be free to send many interim responses, and would return the final response in the same way that existing service functions do. For example:
```rust
async fn my_service_fn(
    req: Request<Body>,
    mut interim_sender: mpsc::Sender<Response<()>>,
) -> Result<Response<Body>> {
    let early_hints = Response::builder()
        .status(103)
        .header("Link", "</style.css>; rel=preload; as=style")
        .header("Link", "</script.js>; rel=preload; as=script")
        .body(())?;
    interim_sender.send(early_hints).await?;
    let resp = todo!("build the final response");
    Ok(resp)
}
```
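To show the shape of this pattern without pulling in hyper or tokio, here is the same flow with std threads and channels: the "service" pushes interim status codes through the sender and returns the final status, while the "connection" side writes each head as it arrives. All names are invented for this sketch, and bare status codes stand in for full `Response` values:

```rust
use std::sync::mpsc;
use std::thread;

// Stand-in for the proposed service function: sends interim statuses,
// returns the final one.
fn my_service(interim_sender: mpsc::Sender<u16>) -> u16 {
    interim_sender.send(103).expect("connection side gone"); // early hints
    200 // final response
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let service = thread::spawn(move || my_service(tx));
    // The connection side drains every interim head (the iterator ends
    // when the service drops its sender), then appends the final head.
    let mut statuses: Vec<u16> = rx.iter().collect();
    statuses.push(service.join().unwrap());
    assert_eq!(statuses, vec![103, 200]);
}
```

The key property this models is that the channel carries only interim responses, so the "no body on a 1xx" rule can be enforced by the channel's item type.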
Service function response stream return value
A similar variant is to accept a `Stream<Item = Result<Response<Body>>>` as the return value from a new service function variant. This would give the service function the flexibility to use channels internally, or to return `stream::once()` for a basic case with no hints. For example:
```rust
async fn my_service_fn(
    req: Request<Body>,
) -> impl Stream<Item = Result<Response<Body>>> {
    let (mut result_sender, result_receiver) = mpsc::channel(1);
    tokio::task::spawn(async move {
        let early_hints = Response::builder()
            .status(103)
            .header("Link", "</style.css>; rel=preload; as=style")
            .header("Link", "</script.js>; rel=preload; as=script")
            .body(Body::empty())?;
        result_sender.send(Ok(early_hints)).await?;
        let resp = todo!("build the final response");
        result_sender.send(Ok(resp)).await?;
        Ok::<_, Error>(()) // some suitable error type; hand-wavy as above
    });
    result_receiver
}
```
Error handling gets a bit awkward with this one, as it always does when detaching a task from a service function. Also, because the stream yields full `Response<Body>` values, the type system no longer enforces that a 103 has no body, as it does when a dedicated channel carries only interim responses.
Functional interface
Inspired by `hyper::upgrade::on()`, we could add a function (strawman name `hyper::interim_response()`) that allows additional closures to be invoked with the current request, each of which would return an interim or final response. For example:
```rust
async fn my_service_fn(req: Request<Body>) -> Result<Response<Body>> {
    tokio::task::spawn(hyper::interim_response(async move {
        let resp = todo!("build the final response");
        Ok(resp)
    }));
    Ok(Response::builder()
        .status(103)
        .header("Link", "</style.css>; rel=preload; as=style")
        .header("Link", "</script.js>; rel=preload; as=script")
        .body(())?)
}
```
Error handling is also awkward with this one, and there's a bit of a continuation-passing style feel, but it's worth considering something that resembles the existing 1xx API. I believe the other approaches would be easier to work with, particularly if multiple 103 responses are sent in a single exchange.
Extension to body types
I could imagine implementing subsequent response heads as something that could be polled from the body like trailers. This would probably run afoul of many of the same problems that motivated #2086, though, so it seems unlikely to be the right choice.
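For illustration only, the body-extension idea amounts to something like the following frame type. `Frame` and its variants are invented here, not hyper's API, and the consumer shows why the layering is uncomfortable: response heads end up interleaved with body data:

```rust
// Hypothetical frame type a body-like stream could yield, putting
// interim heads in-band with data and trailers.
#[derive(Debug, PartialEq)]
enum Frame {
    InterimHead(u16),                // e.g. 103 Early Hints (status only, for brevity)
    Data(Vec<u8>),                   // body bytes
    Trailers(Vec<(String, String)>), // trailing headers
}

// Every consumer must now route frames by kind, even if it only
// wants the body — a hint that heads do not belong in the body stream.
fn count_interim(frames: &[Frame]) -> usize {
    frames
        .iter()
        .filter(|f| matches!(f, Frame::InterimHead(_)))
        .count()
}

fn main() {
    let frames = vec![
        Frame::InterimHead(103),
        Frame::Data(b"<!doctype html>".to_vec()),
        Frame::Trailers(vec![]),
    ];
    assert_eq!(count_interim(&frames), 1);
}
```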
I'm sure I'm missing some ideas here, but regardless of the final design chosen it would be great to figure out a plan forward. Please let me know how we can help.