
Due to complexity, missing features, and browser bugs, I don't think I'd recommend HTTP/2-push to anyone unless they'd exhausted all other optimisations (including link[rel=preload]), and have a large expert team to deal with the fallout.
HTTP/2 push is tougher than I thought
There are lots of edge cases I hadn't considered, and it's very inconsistent between browsers. Here's what I found…
127 replies and sub-replies as of Nov 17 2017

…Even with a large team, I can't see how H2 push can be used in Safari. It's too unpredictable. You'd need to work around some Edge bugs too, but they're consistent.
Completely agree. One of the missing features is cache digests, without which you're likely to overpush a lot. Sorely needed.
Agree. H/2 Push is still useful for filling server think time, but is way too hard to get right today. I've seen many teams using it accidentally over-pushing, because Cache Digests aren't a thing yet & they're not using cookies/SW to shim.
HTTP 103 Early Hints is an interesting solution to the think-time issue, while maintaining the relative simplicity of link[rel=preload].
To be able to push everything to a service worker cache seamlessly and refresh when needed. That's very cool functionality that would help in several scenarios.
Preloading into a cache is an interesting idea
Sample: legacy app with dynamically (RequireJS) loaded files, ~400 reqs per screen. In a high-latency env (500ms) it's extremely slow to load. Browser cache is unreliable (wiped at the browser's discretion). Service worker helps with subsequent loads, but cold load is still super-slow.
Do you have a small example app of what an ideal/good use of H2 push looks like exactly, by any chance? It would be nice to compare with/without push to see how much of a difference it makes.
See any of @kristoferbaxter's HN implementations (e.g. Preact HN or React HN), Polymer HN, or the Polymer Shop app:
Hacker News readers as Progressive Web Apps
\o/ \o/ Thanks for finding the time to work on cuckoo filter support. Completely missed that there was movement on that issue this month.
This little dog is waiting for H2 Push to be refined ...
I have a feeling that push is going to be a feature that very very very few will ever use. Too complex, and it needs expertise from too many different deep skill sets.
This sort of thing, to make it in general use, needs to be workable by one person/role. Not to require hardcore backend people and deep knowledge front end people working together.
This is why I like link[rel=preload]. Easier to set up, easier to debug, and fails in a more predictable way.
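For contrast with push, the preload setup really is small. A minimal sketch of emitting link[rel=preload] as a `Link` response header from a Node handler; `preloadHeaderValue` and the asset-list shape are made-up names, though the header syntax itself is standard.

```javascript
// Build a Link header value like:
//   </app.css>; rel=preload; as=style, </app.js>; rel=preload; as=script
function preloadHeaderValue(assets) {
  return assets
    .map(({ href, as }) => `<${href}>; rel=preload; as=${as}`)
    .join(', ');
}

// In a (hypothetical) Node handler:
//   res.setHeader('Link', preloadHeaderValue([{ href: '/app.css', as: 'style' }]));
```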
I wish all browsers supported it for fetch.
Yeah, I want to get a feel for "best use of H2 push" vs "best use of preload".
I currently think of H2 push as being ~appcache. It has all these interactions with the rest of the spec that you can completely avoid by doing something like SW + WebSockets. It just feels like a pretty big thing when we have really good primitives now.
Also a pretty annoying issue: If you H2 push some modules, you still can't evaluate leaves until you reach them by parsing from the entry point, which means you need all of the modules before you can start executing. We have no "here's a module and plz eagerly evaluate" in H2.
Yeah, and since it's a network-level feature, it can't do that.… is in Canary tho!
Does this just load or also eagerly evaluate?
But this is another reason that a small lib written on top of lower-level primitives might be better. Frustratingly, people pouring their hopes and dreams into H2 for years has derailed a lot of reasonable concurrent progress (like web archive)
It does everything short of executing it.
So close! So close!
If it also executed, what would be the difference between it and <script type="module">?
It would allow you to fetch a graph of modules (with names) in order from leaves, and execute the leaves as they come. Today, if you have a -> b -> c and you push c, b, then a, you still have to wait for a to evaluate c, because top import in a is what kicks off eval.
In a large graph, this means that you can't start executing at all until you have an entire subgraph starting from the root, which is deeply suboptimal compared to topsorting the modules and evalling the leaves upward.
Suboptimal as in: you would notice this in a big way.
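The "topsort the modules and eval the leaves upward" order described above can be sketched for a hypothetical graph where a imports b and b imports c. The graph and function names are invented for illustration.

```javascript
// Hypothetical module graph: a -> b -> c.
const graph = { a: ['b'], b: ['c'], c: [] };

// Post-order DFS: each module appears after its dependencies, so in this
// order every module could be evaluated as soon as it arrives, rather
// than waiting for the whole subgraph from the entry point.
function leavesFirst(graph, entry) {
  const order = [];
  const seen = new Set();
  (function visit(name) {
    if (seen.has(name)) return;
    seen.add(name);
    for (const dep of graph[name] || []) visit(dep);
    order.push(name);
  })(entry);
  return order;
}
```

`leavesFirst(graph, 'a')` yields `['c', 'b', 'a']`: push and evaluate c, then b, then a, instead of fetching everything and only then evaluating from the top.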
To check we're on the same page, you're saying that: If 'c' changes the background color of body to red, you'd expect the bg to become red if <link rel=modulepreload href=a>?
I'm saying that yes, but (1) maybe modulepreeval, and (2) not because I want that side effect, but just to allow interleaving of fetch/eval and control latency.
hmm, I agree with that then
You can add script type=module tags for b and c.
I think you are saying you want module loading, but without the deterministic execution, right? AMD worked this way IIRC.
Fine, I'll write it up. Probably will be noticed in a year or two ;) #StoryOfMyLife
So I take this to mean this is *not* what you're after? I'm really trying to understand. I *do* see the appeal of non-deterministic execution.
I hadn't seen this, thanks! So it sounds like there is room to get an attribute added in the future for the ASAP strategy. Probably too late to make async work this way I would guess...
There's just a huge difference between "start evaluating once you have all modules from the top" and "start evaluating once you have the leaves" Such a huge difference that once people notice it they will be shocked the design works the way it does.
It's not really about determinism. It's about letting a smart server push a leaf first in response to a top-level request and have the browser eval it right away.
I've been pondering if I can force this through service worker lately. You can .postMessage to clients; would at least allow experimentation
That's my plan.
Just tired of "H2! Solves everything!" @jaffathecake's posts are one aspect. This is another. We have miles to go before we sleep, and should keep pushing forward without H2 reliance for the time being. SW is great!
Hmm, I think of H2 push as being really low level. Eg you have to understand when the browser will use multiple connections to the same origin etc etc.
But it also interoperates with a whole bunch of nonobvious spec things (leading to questions like multi-conn that you wouldn't predict). On the web, I think it's not obvious that there are major wins from shimming the spec's normal HTTP this way.
:( I'm kinda relying on push as a foundation of the performance story
Early hints with link rel=preload is going to be hard to pull off...
At least right now you can't send out requests from the renderer before commit (so before we've got some HTML). I know there's refactoring underway that'll supposedly make it easier.
A common problem is knowing what to preload before the page has loaded. Need a db lookup to determine if it's a 404, which has different assets
And so by the time you have html, you may as well just start serving that. Assets are close to the top of html, so payoff feels small. Happy to hear to contrary
If we assume 404 is the exception, unresolved preloads feels ok in that case.
Well it's one example of one problem. Different pages requiring different assets is also a problem. Salient point is you don't always know what assets to serve before html render
Sure, preload is not useful if you don't know what you need to preload.
and the same goes for push
Push can still work for that case (assuming it's a credentialed fetch). For the generic case, there are a lot of caveats, so I kinda agree with Jake's point there. With that said, we at Akamai are using push successfully. So it is possible.
Idunno. I'm relying on y'all to tell developers what to do. :) From the specification/implementation side, I plan on just making an HTTP request, and hoping that y'all can make it fast.… doesn't say if the request for the manifest is credentialed or not. Is that defined elsewhere?
If it's not credentialed, then H2 push (in current implementations) won't work. It also means that downloading it would require connection establishment which is costly for any transport that's not QUIC...
I agree that solving that issue will solve both those problems
Isn't there some sort of zero-rt connection establishment in TLS 1.3?
yeah, but for it to be truly 0-RTT, it relies on TCP FastOpen, and rumors are TFO hits deployment hurdles in browsers that try to turn it on (because TCP-aware network components choke on it...)
Ah. Middleboxes. How we love them.
Internet: Remind me to set up a meeting with Mike to discuss push. That’s how this works, right?
Don't you mean "a meeting with Jake"? Because he knows things about push. I'm just a hack. :)
But I wanna talk privacy doomsday scenarios! Plus Jake already knows them :)
Oh. Good. *sigh*
Everything I know about HTTP/2:
Was reading this again... FWIW, FF56 fixed it so we cancel h2 push for things already in the cache. Just FYI in case you want to update the blog post.
1367551 - Cancel HTTP2 push when the resource is already in the disk cache
Web Platform Test for it? 😏
That would be great. Thanks for volunteering to write it Ryan! 🙏
Honestly I don't even know if its possible to write this kind of test in WPT infrastructure right now.
It isn’t. Was trying to dig up the bug #, but there’s no H/2 or push support. Which underscores @jaffathecake’s point: don’t rely on what browsers aren’t testing for interop :)
Ok, everyone out of the pool. Back to h1.1!
Should we also consider WPT support for QUIC as part of the standardization process?
Yes, we absolutely should. Along with independent implementation (e.g. not packaging Chrome server code)
We should let QUIC testing handle main protocol features (as we do TLS or H/2), but should be able to exercise browser path for those features (like conn migration + web platform)
Is testing QUIC like this in the plan @mcmanusducksong? Or still too early?
Is WPT server infra still Python? Have had more luck with Go for boringssl testing and Chrome perf testing. Understand that this might be hard to get broad buy-in for, though.
Go still banned from Chrome, for example :)
Have you heard of our lord and savior Rust?
telemetry uses wprgo now. technically a different repo (catapult) but pretty well integrated.
There is a hard requirement that developers not need to install, use, or maintain Go code at the moment. Would be a big sell, since we want devs to run WPT locally
That said, WPT is a mix of Python and Apache, IIRC
this should be reasonable; the h2 semantic tests could be reused, and there are ~6 (too early right now) open source IETF QUIC servers being worked on.
At some point we were talking about writing a full fledged H2 test suite (both for clients and for servers), but didn't go anywhere /cc @mnot @tkadlec @colinbendell @dshafik
ohh cheers. Yeah, I need to do an update for a few fixed bugs.
Do you know the reasoning for the separate caches for all of this? I'm struggling to understand why I wouldn't want the HTTP cache to always get updated.
Currently you can only add something to the HTTP cache if you're allowed to request it. Being able to push to the cache would change that.
When wouldn't you be able to request but would be able to have it pushed to you?
Also, it seems like it would be undesirable for a single request, like a web font, to be able to fill up the HTTP cache and push out other origins. (I assume h2 push cache limits are separate from HTTP cache limits.)
Makes sense but restricting to same origin feels like a good solution to prevent that issue.
But the HTTP cache is shared across origins. It's one big pool.
I thought Matt was suggesting don't let cross-origin requests do h2 push somehow.
Ah. Not sure that solves the problem as a single origin can still fill up the HTTP cache.
Right, it just comes back to if that origin is top level document, well it can fill up http cache anyway, right? 🤷‍♂️
I was suggesting: Allow cross-origin requests to use h2 push but put requests in the push cache, same-origin requests can go to HTTP cache.
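That routing proposal can be sketched as a toy model: same-origin pushes land in the shared HTTP cache, cross-origin pushes stay in a per-connection push cache. The structure and names here are invented purely to illustrate the idea, not how any browser implements it.

```javascript
// state.httpCache: one shared Set; state.pushCaches: a Map keyed per connection.
function routePush(state, connectionId, pageOrigin, url) {
  if (new URL(url).origin === pageOrigin) {
    // Same-origin push: allowed into the shared HTTP cache.
    state.httpCache.add(url);
    return 'http-cache';
  }
  // Cross-origin push: quarantined in this connection's push cache.
  if (!state.pushCaches.has(connectionId)) {
    state.pushCaches.set(connectionId, new Set());
  }
  state.pushCaches.get(connectionId).add(url);
  return 'push-cache';
}
```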
Mobile. Request blocking. Privacy. Sometimes you don’t want the data unless you actually request it. Hence limbo.
There’s no way right now for extensions to block push, or, in the Chrome case, to have SafeBrowsing examine it, without a request ‘pull’ being made.
Add in things like double-keyed caching, and the utility further decreases for push.
This is insanely detailed. Thank you!
TIL HTTP/2 is plaintext over WhatsApp! 😂 Great post 👏
I have no design skill, I can only mimic 😀
The OS X network stack is mostly open source, but you have to dig for it…
My strategy has been to sniff out Safari and Edge and append `nopush` to those clients, and potentially use a cookie to track for others. Push should be easier than this :\
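A rough sketch of that workaround: always send rel=preload, but append `nopush` for clients whose push handling is problematic. The UA patterns below are loose assumptions, not a complete or robust sniff.

```javascript
// Append `nopush` for Safari and (pre-Chromium) Edge, per the strategy above.
// Note real Chrome UAs also contain "Safari/", hence the exclusion.
function linkHeaderFor(userAgent, href, as) {
  const isEdge = /Edge\//.test(userAgent);
  const isSafari = /Safari\//.test(userAgent) && !/Chrome\//.test(userAgent);
  const nopush = isEdge || isSafari ? '; nopush' : '';
  return `<${href}>; rel=preload; as=${as}${nopush}`;
}
```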
Looking forward to the day that I understand what any of this means 🙈😅 #junior
We've all been there. I still remember the first time I tried to learn JS, "nope, I'll never 'know' this" I thought.
Well in a strange way that makes me feel better about learning! Thanks! 🙃
Salute for being an initial user, but you seem to be making use of ReadableStream after a compat check on the blog. I think you also need to check that this interface is implemented on the Response object, otherwise this is what we get (Firefox 56 with ReadableStream enabled). Cheers
Thanks! But it's difficult to cater for all combinations of all flags in all browsers. This is why it isn't enabled by default.
As far as I know, it's enabled by default in Firefox 57, but without the `response.body` implementation, as I tried it in Firefox Nightly and read it somewhere too. Maybe someone from Firefox can clarify, since I still haven't received my 57 update.
They seem to clear out bugs in the core first, before starting the implementation for I/O sources in Firefox, unlike Chrome, which I think has had the interface implemented on Response for quite some time
Sorry for the confusion. It's off by default, as you said; found the tweet by a search
Streams API has landed in firefox 57 (default off). Please enable and test! Kudos to @tschneidereit and @baku82845977 for implementing! 🎉🙏🎉
And unfortunately it didn't make it for FF58 either. Fingers crossed for FF59. 🤞
😢😢😭😭 I was thinking of writing a polyfill for the Response, since it would be awesome if my demo could work on some Firefoxes too; it's good for Chrome. Is that a good idea?
A simple ReadableStream wrapper which writes all the content in one go, that'd be a crime, but it's more workable
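The "writes all the content in one go" wrapper is only a few lines. A sketch, assuming an environment with WHATWG streams (Node 18+ exposes `ReadableStream` globally); function names are made up for illustration.

```javascript
// Wrap a full payload in a ReadableStream that enqueues everything at
// once and immediately closes: no real streaming, but API-compatible.
function oneShotStream(text) {
  const bytes = new TextEncoder().encode(text);
  return new ReadableStream({
    start(controller) {
      controller.enqueue(bytes);
      controller.close();
    },
  });
}

// Drain a stream back into a string, just to demonstrate the wrapper.
async function readAll(stream) {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let out = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) return out;
    out += decoder.decode(value, { stream: true });
  }
}
```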
Seems like a lot of effort when we are so close to shipping. If you want to demo/test in firefox, can't you flip the two prefs?
Can I enable `response.body` by a flag? Which one? I only knew about the streams one.
You need to enable "dom.streams.enabled" and "javascript.options.streams".
I can find only `javascript.options.streams` in Firefox 56, it's time to go for Firefox Dev Edition. Thanks for the help I appreciate it :)
I have `javascript.options.streams` set to true, is there any other flag I need to enable?