Server push of responses #770

Closed · AndrewJSchofield opened this issue Aug 25, 2016 · 11 comments

@AndrewJSchofield

HTTP/2 RFC 7540 introduced the concept of server push. This allows a server to
pre-emptively send (or "push") responses to a client in association with a previous
client-initiated request.

From an API perspective, it provides an interesting way of implementing the Observer
pattern in which a client issues a request indicating interest in changes in an object's
state, and the notifications of the changes are delivered as pushed responses.

An alternative way of delivering notifications is proposed in #716, #735, #736 and #737. The callback/webhook pattern is widely used and would be a valuable addition
to the specification.

Server push offers a method of delivering notifications from server to client in which the
connection is initiated by the client. The client does not need to listen for connections
from the server, and there is no need for the server to hold credentials for connecting
to the client because it does not initiate connections.

A push section is added to the operation object. This contains a list of possible push
responses. Upon receipt of a request for an operation, the server may push one or more
push responses prior to sending the final response for the operation.

An example below illustrates the concept.

paths:
  /consumer:
    post:
      description: Create a consumer and start delivering messages
      parameters:
      - name: CreateConsumerRequest
        in: body
        required: true
        schema:
          $ref: '#/definitions/CreateConsumerRequest'
      responses:
        200:
          description: Success
      push:
        /event:
          responses:
            200:
              description: Event pushed to consumer
              schema:
                $ref: '#/definitions/Event'
              headers:
                offset:
                  description: Offset of this event
                  type: integer
definitions:
  Event:
    anyOf:
    - $ref: '#/definitions/EventType1'
    - $ref: '#/definitions/EventType2'

The push object holds a list of Push Item Objects.

The Push Item Object contains a Push Path Object.

Field Pattern | Type | Description
------------- | ---- | -----------
/{path} | Push Path Object | A path for the pushed response.

The Push Path Object describes the responses.

Field Name | Type | Description
---------- | ---- | -----------
responses | Responses Object | Required. The list of possible responses.

Pushed responses are just like regular responses of an operation.

@ePaul
Contributor

ePaul commented Aug 25, 2016

A server push is per-hop: if you have an intermediary (e.g. a cache or proxy) between the origin server and the client, it is allowed to suppress the push (e.g. just storing the data for a later request from the client), or even to push additional data it already has cached. Semantically, a server push is just a shortcut for the client sending a request when the server already knows the client needs the data; that is why this works out.
I'm not sure that having an API which depends on the existence of a server push is a good idea.

Maybe just use the link feature from #742 to link your response to other resources, and allow the server to push them if it thinks the client wants them (but with the fallback option of the client requesting them if they are not pushed – this way it also works when an intermediary caches this).

For the syntax: since a server push is a pair of request and response, you would also need to define the other properties of that request (like the request method – currently it looks like you just use GET? – plus header and query parameters, etc.).
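
For concreteness, a minimal sketch of this link-based alternative, assuming the response-linking feature proposed in #742 (which later became links in OpenAPI 3.x); the paths, operationId and schema below are hypothetical:

# Sketch only: the link-based alternative. The /consumer and /event paths,
# the getEvents operationId and the Event schema are hypothetical.
openapi: 3.0.0
info:
  title: Consumer API (link sketch)
  version: '1.0'
paths:
  /consumer:
    post:
      description: Create a consumer
      responses:
        '201':
          description: Consumer created
          links:
            events:
              # Points the client at the related resource; a server or
              # intermediary may also push it pre-emptively.
              operationId: getEvents
  /event:
    get:
      operationId: getEvents
      description: >
        Ordinary GET operation for events. Because this is a normal resource,
        a server push of its response is only an optimisation; a client (or
        an intermediary that suppressed the push) can always request it
        explicitly.
      responses:
        '200':
          description: Current events for the consumer
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Event'
components:
  schemas:
    Event:
      type: object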

@AndrewJSchofield
Author

Thanks for your comments. I think I need to explain the flow of data better.

In the example, the operation is POST on /consumer. Once the client sends a POST request on /consumer, the server is able to push responses on /event and then the response to the original request. I could have chosen a different operation - it's just an example - maybe GET would have been a better choice. The extension I'm proposing is to let you specify the form of these pushed responses in the OpenAPI specification.
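
For illustration, a sketch (assumptions only) of what that GET variant might look like, reusing the hypothetical /consumer and /event paths, the Event schema, and the proposed push section from the example in the issue body:

# Sketch only: the same proposed push extension, attached to a GET operation
# instead of POST. All names are the hypothetical ones from the issue body.
paths:
  /consumer:
    get:
      description: Open a consumer and start receiving messages
      responses:
        200:
          description: Final response, sent after any pushed responses
      push:
        /event:
          responses:
            200:
              description: Event pushed to consumer
              schema:
                $ref: '#/definitions/Event'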

From a browser, the idea behind server push is that the server can optimise loading a page by eagerly sending resources such as images to save the browser from requesting them explicitly one by one. So the browser makes a request and gets a heap of useful resources pushed by the server.

I'm proposing to use the same overall mechanism in the context of an API to push responses to a request so that event-based communication can be achieved without either long-polling or webhooks.

@ePaul
Contributor

ePaul commented Aug 25, 2016

I think I got what you want – but I don't think this usage really conforms to the semantics of HTTP (including HTTP/2).

At least as I understand RFC 7540, section 8.2, a server push is not meant to send stuff which the client wouldn't be able to get with a normal request (e.g. a GET to some path), but just to shortcut the request roundtrip from the client.

@RobPhippen

If I have understood correctly, the abstract interaction pattern (at least at a dataflow level) is not so very different from that for WebHooks - but of course the physical implementation is very different. So - a question - could the solution to #716 be used here too (with some tweaks)?

@AndrewJSchofield
Author

Yes, I think the solution to #716 could be used here too.

On the wider point about whether this conforms to HTTP semantics: my view is that it's common practice to use HTTP following particular conventions to implement APIs. For newer features such as HTTP/2 server push, the conventions for use in APIs have not yet been established. I've suggested a convention for providing a feed of notifications using server push.

@darrelmiller
Member

@AndrewJSchofield My understanding of the way HTTP/2 server push was intended to be implemented was to be transparent to the client application, as @ePaul suggests. The pushed response would be cached by a client side HTTP caching layer and future client requests would be served by the prescient presence of the response in the cache.
Having APIs become aware of, and potentially dependent on, this feature seems like a bad idea to me.

@AndrewJSchofield
Author

All right. So, the consensus seems to be that using HTTP/2 server push to implement the Observer pattern over an API would be a misuse of the feature. If I want to stay within HTTP, I guess I still have three options:

  • webhooks
  • WebSocket
  • long-polling
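
Of the three options above, the webhook route is the one addressed by #716 and the related callback proposals. As a minimal sketch, the same consumer/event example could be expressed with the callbacks keyword that later versions of the specification added for this pattern; the callbackUrl property, paths and schema here are hypothetical:

# Sketch only: the webhook option expressed with OpenAPI 3.x callbacks.
# The callbackUrl property, paths and Event schema are hypothetical.
openapi: 3.0.0
info:
  title: Consumer API (webhook sketch)
  version: '1.0'
paths:
  /consumer:
    post:
      description: Create a consumer; events are delivered to the supplied callback URL
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                callbackUrl:
                  type: string
                  format: uri
      responses:
        '201':
          description: Consumer created
      callbacks:
        onEvent:
          # Runtime expression: the URL the client supplied in the request body.
          '{$request.body#/callbackUrl}':
            post:
              requestBody:
                content:
                  application/json:
                    schema:
                      $ref: '#/components/schemas/Event'
              responses:
                '200':
                  description: Event received by the client
components:
  schemas:
    Event:
      type: object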

@sarnowski
Contributor

I would like to throw in gRPC, which makes streaming (including bidirectional streaming) a first-class part of the service definition:

gRPC Concepts / RPC Lifecycle

Server streaming RPC

A server-streaming RPC is similar to our simple example, except the server sends back a stream of responses after getting the client’s request message. After sending back all its responses, the server’s status details (status code and optional status message) and optional trailing metadata are sent back to complete on the server side. The client completes once it has all the server’s responses.

Client streaming RPC

A client-streaming RPC is also similar to our simple example, except the client sends a stream of requests to the server instead of a single request. The server sends back a single response, typically but not necessarily after it has received all the client’s requests, along with its status details and optional trailing metadata.

Bidirectional streaming RPC

In a bidirectional streaming RPC, again the call is initiated by the client calling the method and the server receiving the client metadata, method name, and deadline. Again the server can choose to send back its initial metadata or wait for the client to start sending requests.

What happens next depends on the application, as the client and server can read and write in any order - the streams operate completely independently. So, for example, the server could wait until it has received all the client’s messages before writing its responses, or the server and client could “ping-pong”: the server gets a request, then sends back a response, then the client sends another request based on the response, and so on.

This concept is very powerful and uses HTTP/2 to its full extent to get rid of all the hacks with long polling and all the complexities of context-losing webhooks.

@itsjamie

@AndrewJSchofield outside of HTTP/2 Server Push, there is also Server Sent Events.

This is similar to WebSockets, but works with a regular HTTP request, and it also makes resumption possible via the "id" field.
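
For comparison, a Server-Sent Events endpoint can be sketched with the existing specification as a long-lived GET whose response is text/event-stream; the /events path and Last-Event-ID parameter below are illustrative assumptions, and the event framing (including the id field used for resumption) is defined by the SSE format rather than by the API description:

# Sketch only: describing an SSE endpoint with the existing specification.
# The /events path is hypothetical; the stream framing (data:, id:, event:
# lines) is defined by the SSE format, not by OpenAPI.
paths:
  /events:
    get:
      description: Open an event stream; the client can resume where it left off
      produces:
      - text/event-stream
      parameters:
      - name: Last-Event-ID
        in: header
        required: false
        type: string
        description: Resume the stream after the event with this id
      responses:
        200:
          description: A stream of events, one per SSE message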

@TimGoodwill

TimGoodwill commented Jan 24, 2023

> I think I got what you want – but I don't think this usage really conforms to the semantics of HTTP (including HTTP/2).
>
> At least as I understand RFC 7540, section 8.2, a server push is not meant to send stuff which the client wouldn't be able to get with a normal request (e.g. a GET to some path), but just to shortcut the request roundtrip from the client.

I think a decision not to explicitly support (describe) HTTP/2,3 server push in a constrained pattern is a mistake. The ability to 'subscribe' to a push and receive a predictable payload (per GET method, perhaps on a subscription instance?) as and when the resource changes during a session would be invaluable. Something along the lines of a callback without the need for a callback URL, or a WebSocket with well-defined operations and semantics.

AsyncAPI will this year simplify the definition of synchronous request/response interactions. In this light, I am leaning toward recommending AsyncAPI to an EDA-focused client as the roadmap specification document going forward, in spite of (some) sync use-cases that would have been better described by OpenAPI. The ability to describe a constrained WebSocket/callback-equivalent HTTP/2,3 interface compatible with COTS API management platforms and patterns would have made the argument for a dual-specification approach more compelling.
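
For reference, a minimal AsyncAPI 2.x sketch of this kind of event feed; the channel name and payload are hypothetical, and transport-specific bindings (e.g. WebSockets) are omitted:

# Sketch only: the consumer event feed described in AsyncAPI 2.x terms.
# The channel name and payload are hypothetical; transport bindings omitted.
asyncapi: '2.6.0'
info:
  title: Consumer events
  version: '1.0.0'
channels:
  consumer/events:
    subscribe:
      summary: Events delivered to the consumer as they occur
      message:
        payload:
          type: object
          properties:
            offset:
              type: integer
              description: Offset of this event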

@handrews
Member

The original discussion resolved with an agreement that this was out-of-scope for OpenAPI's level of HTTP usage. The last comment indicates that AsyncAPI addresses this use case, so I'm closing this as out of scope. AsyncAPI feels like the right home for this feature, and we're not competing with them for usage. Folks should use whichever specification works best for them.
