
a response to @Rich_Harris's talk "Have Single-Page Apps Ruined the Web?" htmx.org/essays/a-respo…
131 replies and sub-replies collected so far...

I think you are missing his weakest argument: that classic HTML sites cannot partially reload parts of the screen. That's BS and there are old and new ways of doing that
Indeed! I find it odd that many on the other side of the argument conveniently seem to forget about AJAX.
AJAX and subsequent updates of the DOM are functions of JavaScript, not HTML.
It can be with portals (but there is no "old way" of doing it, that's for sure)
Yes, indeed it is. What's your point?
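(For reference, the "old way" of partially reloading the screen that the first reply alludes to amounts to a few lines of JavaScript: fetch an HTML fragment and swap it into the document. A minimal sketch, assuming a module/async context; the /fragment endpoint and #target selector are placeholders.)

```js
// fetch an HTML fragment from the server and swap it into the page;
// /fragment and #target are illustrative placeholders
const html = await fetch('/fragment').then((r) => r.text());
document.querySelector('#target').innerHTML = html;
```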
The link to hyperscript is broken
Why does he claim edge computing is the end of opposition to JavaScript? Isn't that supposed to help with the latency issue of MPAs?
have fun getting a rails app to run in a cloudflare worker!
yep this is where we diverge: we'll trade "good enough" latency on mutable resource requests for simplicity of deployment & programming model. depends on your app requirements, of course
we'll have to disagree about which solution provides the simpler deployment and programming model. but even accepting that premise, accepting that trade-off at such a foundational level is a violation of the priority of constituencies
if you can provide good enough performance for your users using simpler programming and deployment models, you should, w3c aside. simplicity compounds, as does complexity. software engineering is all about making the right complexity tradeoffs and asking "is this worth it?"
again, we will agree to disagree on whether peppering the DOM with directives and deploying/maintaining a server is simpler and more scalable than a component model with a cohesive approach to the back end, running on e.g. stateless workers. though to be honest, looking at your
examples, i'm less concerned about htmx's performance than its accessibility, which isn't mentioned once in the docs — the examples shown actively prevent progressive enhancement, which transitional app frameworks take seriously
accessibility is a reasonable concern: you can make accessible software w/ htmx but it requires thought as with most javascript
it's not a reasonable concern, it's _the job_. and it starts with the examples provided by tooling authors: when you ask "why should only <a> and <form> be able to make HTTP requests?" and replace <form action> with <form hx-put> you are actively encouraging inaccessibility.
we are pragmatic about the issue: improving HTML as a hypermedia does involve reaching beyond what it can currently do at times. we leave it to htmx users to decide where they fall. do all the svelte demos work without js? (I have no idea)
i should note, @unpolyjs does a much better job of focusing on progressive enhancement than htmx does, while still using the hypermedia model
This is not a fucking debate, you're either on the correct side or the incorrect side. If you're not actively working to make your framework more accessible, you're on the wrong side. Accessibility is not optional.
none of your javascript libraries, not svelte, not react, not vue, not angular, nothing, just magically provide accessibility out of the box. your argument is a straw man. do better.
they don't actively undermine it. htmx does. anyway, since you chose this tone for your first interaction with a complete stranger, you earned yourself a mute. goodbye. next time you rock up in a stranger's mentions, do better.
Thanks for doing the lord’s work in this thread, Rich. Seems htmx is something I can safely ignore 😂
your circlejerk is showing.
trying to save this thread (narrator: he failed). htmx & svelte are two very different approaches to building web apps, each w/ strengths & weaknesses. twitter encourages polarization & hit-and-run dunking (I'm as guilty as anyone) but there is some good intellectual grist here 😑
htmx is weird and hurts my brain, but I do appreciate that they can respond with thought and take the piss out of themselves.
the htmx developer starter kit - print out - tape to wall
Plus, we all know the companies that use rails. I'm not using it rn, but the argument 'its performance is not great, but it is good enough, and it has pretty good DX' rings true to me.
Also, also, people have bad days and are grumpy, we’re all human
I'm a fan of htmx, so there's that, but please educate me on accessibility. I don't understand how a pile of HTML rendered by javascript and shoved into the DOM is any different from a pile of HTML rendered by a server and shoved into the DOM by javascript. Help?
Consider the <form>. Ordinarily, however the HTML was generated, submitting it will trigger an HTTP request. You can progressively enhance it with client-side behaviour. But if you instead do bizarre non-standard things like <form hx-put="/blah">, it will break without JS
using htmx, you can progressively enhance it with `hx-boost`. or you can do so by including both `hx-put` as well as an action and a rails-style `method` input. again, as w/ nearly all js libraries, the htmx user can choose how deep they want to go on it, based on requirements
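(A minimal sketch of the fallback pattern that reply describes. The /settings/email endpoint and field names are hypothetical; `hx-put`, `hx-boost`, and the rails `_method` convention are as documented.)

```html
<!-- without JS this submits as a normal POST; the rails-style _method
     input lets the server treat it as a PUT. with JS loaded, htmx
     issues the PUT via AJAX instead -->
<form action="/settings/email" method="post" hx-put="/settings/email">
  <input type="hidden" name="_method" value="put">
  <input type="email" name="email">
  <button type="submit">Update</button>
</form>

<!-- alternatively, hx-boost upgrades the standard form to AJAX while
     leaving the plain-HTML behaviour intact when JS is unavailable -->
<form action="/settings/email" method="post" hx-boost="true">
  ...
</form>
```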
peppering the DOM with directives... like hrefs? :) htmx is a conceptual extension of HTML as a hypermedia. a different model than the JS/RPC model, but it has a good pedigree and can address UI issues w/ MPAs in a manner consistent w/ the original architecture of the web
> deploying/maintaining a server
and
> cohesive approach to the back end, running on e.g. stateless workers
So deploying and maintaining a server?
I'm confused. What is the difference in your head?
servers don't manage themselves. the difference between provisioning a server, monitoring its health, managing scaling etc vs deploying code to a worker is like collecting firewood, starting a fire, boiling water vs turning on the kettle
OK by 'server' you mean 'physical or logical host like EC2', and I'm talking about a server application i.e. backend. Anyway, it's trivial to run the latter without worrying about provisioning on e.g. Google Cloud Run, so I think this argument is irrelevant for SPA vs htmx.
Running at the edge doesn’t solve latency for 99% of applications. DBs have not caught up yet and aren’t on par with the distribution of serverless. Like okay cool, your processing is now closer to the user, but now you have to deal with other problems you didn’t have before.
Like connection pooling is soooo annoying and harder than you’d think to manage. Tools like AWS’ RDS proxy and Prisma’s new thingy are great, but it’s still not the complete distribution we need across lots of data centers.
Edge workers are great for small use cases like quick customization for A/B tests, stuff you can keep in workers KV, but they are definitely not a good fit for running your app logic off of. Vercel + Fly get this right imo.
agree re DBs, though i'd quibble with the 99% — i think there's a lot of apps that are very well suited to workers KV or similar. am also curious to see if/how streaming renderers change things, if the first bytes of HTML can be returned while DB is being queried
Streaming renderers can definitely help, but that's only useful for the first visit. Any other interactions use the API. I don't recommend Workers KV for most applications for the same reason I don't recommend DynamoDB: lack of flexibility, no migrations, no indices.
Workers KV is an amazing tool when it has the right fit, just like DynamoDB is. But I don’t think it’s right for about 80% of use cases. SQL and NoSQL DBs offer flexibility that more KV-based DBs do not.
yeah, you don't have to convince me re flexibility — i just finished converting something from KV to Postgres for that very reason. but i come from the news world, so i tend to think of the web more in terms of 'content' than 'data', and content is a perfect fit for KV. YMMV!
Ooooo DEFINITELY re: content. Actually was wanting to work on a CMS built on KV because it's just suchhhhhh a good fit!
I was asking out of genuine interest because the rise of edge computing and serverless platforms is not directly tied to javascript itself. But I guess I see your point...
there's a direct tension between the number of edge nodes your app is served from and the likelihood of encountering cold starts, _unless_ you're using a runtime that doesn't suffer from cold starts. AFAIK there's only one such runtime — V8
That seems like a valid point. Out of curiosity, why does V8 not suffer from cold starts? I googled and couldn't find a definite answer.
V8 has something called 'isolates' - it's what enables multiple tabs in chrome to safely share a process. Edge platforms like Cloudflare Workers use this feature: developers.cloudflare.com/workers/learni…
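(To make the isolate model concrete, a minimal Workers-style handler, a sketch only: there is no per-request process to boot; the platform instantiates the script in a V8 isolate and reuses it across requests.)

```js
// minimal Cloudflare Workers module-syntax handler (sketch);
// the isolate running this is created once and reused across requests,
// which is why a tiny bundle can answer in single-digit milliseconds
export default {
  async fetch(request) {
    const url = new URL(request.url);
    return new Response(`hello from the edge: ${url.pathname}`, {
      headers: { 'content-type': 'text/plain' },
    });
  },
};
```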
V8 suffers from cold starts. The larger your js bundle, the longer it takes an isolate to accept a request. You can optimize this with v8 snapshots. But you can optimize pretty much any process in a similar way. We can resume a Rails process in a few ms.
(longer it takes a *new* isolate)
my understanding was that a new isolate's cold start will _very_ rarely outlast the SSL handshake, hence 0ms in practical terms. re rails, are you saying you can scale from 0 to 1 in a rarely-used region in a few ms? (we're outside my wheelhouse, grateful for your expertise)
Back when we were doing v8 isolates, 800kb+ bundles would take 1s to "boot" in a new isolate. Tiny 10kb bundles were a few ms. We _can_ boot a Rails snapshot in a new region quickly. Most Rails apps need a nearby DB. It's more useful for, like, stateless Go apps. Or Deno.
useful data points, thank you
Why can't I have a rust/js worker returning HTML and use @htmx_org in the frontend to interact with that worker? I don't tie HTML-driven applications to Rails/Django/etc. I see it more as an app design choice rather than a tech stack one.
you can, because workers have good support for rust. that doesn't generalise to other languages
I agree! But I can still have an HTML-driven app and use 100% JS in the backend. I don't interpret the essay as "don't use Javascript", but rather as a different way to create interactive interfaces (which I don't see incompatible with serverless or other new technologies).
an excellent point. the question is what sort of UX can be delivered via hypermedia & is it good enough to meet your user needs while simplifying the programming model? for some (many?) websites, we believe so. if so, you are free to choose the back end lang you prefer, even js!
They'll run great on fly.io though, which can run copies in regions around the world for you
for sure, but you pay for those instances, right? to compete directly with the cloudflare edge network you'd need to run ~250 instances (more, if you expect some of those regions to need more than a single instance)
Yup, Fly doesn't have scale to zero (yet?) so you have to pay for each instance in each region at the moment
You do pay for instances on Flyio. Sometime in 2022 you will pay for compute/RAM time. Data locality is more important than PoPs though. Your data isn't in 250 regions unless you're paying for it. Some pricing is transparent, most CDN pricing isn't.
ah, interesting — will look out for those pricing changes. thanks
CF Workers do not run in all CF PoPs. Also, CF KV is not stored on all PoPs. I'm in Queretaro MX and there's a CF data center right here, but when I deploy and run a Worker that happens in the US (Dallas IIRC). Maybe this is different for exorbitant enterprise plans.
Also, for most use cases, the problem is not really the logic but the database. And even with solutions like Fauna, Spanner, etc, you have to consider GDPR and data residency laws. Distributed applications are a lot more complicated than just deploying a couple of workers.
they certainly claim that both workers and KV run on all PoPs (unless the word 'accessible' is doing overtime) — would be curious to hear their explanation
Hypothesis: workers have the _ability_ to run anywhere, but they get to decide where a given worker actually runs and how to route to it in the way that is most optimal.
Imagine you have 90% of visitors from France and 0.1% from Spain. It could be faster and more cost-effective to serve the Spanish visitors from an already-running worker instance in the Paris data center.
yeah, that seems plausible — so the Queretaro Post might be served from a local worker, but Le Monde might come in via Dallas
ha, never mind!
I like your enthusiasm, but you are overestimating our ability to route at that granular a level. When we route a packet, we have no idea where or how many isolates are already running.
At present we run a Worker in whatever datacenter (indeed, whatever machine) the connection arrives at. I could imagine us introducing an option in the future to run in fewer locations in exchange for, say, larger code size or memory limits.
But not all PoPs run Workers, correct?
Incorrect. All PoPs run Workers.
As @evanderkoogh said, sometimes the closest PoP on the network is not the closest one physically, e.g. because your ISP doesn't interconnect locally. Or the local PoP might be over-capacity in which case some traffic will be rerouted elsewhere.
The way it works, as I understood it, is that all PoPs are the same in principle, but in cases of high traffic, free plans are the first to be moved to higher-capacity ones. Also, depending on peering arrangements, closer in distance might not actually be faster, either.
These factors affect all Cloudflare users, regardless of whether you use Workers.
I'm in MX and my workers *always* run in the US (LAX most of the time). There are 2 CF PoPs in MX. One is about 30 mins by car from my house and the other about 3 hours away in Mexico City. Also, see this conversation with @eastdakota
Not all sites will be in all cities. Generally you’re correct that Free sites may not be in some smaller PoPs depending on capacity and what peering relationships we have.
I did upgrade to the Pro plan and it made absolutely no difference.
I like your enthusiasm, but you are overestimating our ability to route at that granular a level. When we route a packet, we have no idea where or how many isolates are already running.
Kind of makes sense now that I think about it. Synchronizing that data could be slower than just running the thing wherever we can. What’s a plausible reason then for Dallas to serve a worker for Mexico?
It is hard to say. Dallas is one of our bigger colos, if I am not mistaken. So it has multiple peering options, lots of capacity to take over in backup situations, etc.
Thank you. Also answered here for anyone following the conversation:
The way it works, as I understood it, is that all PoPs are the same in principle, but in cases of high traffic, free plans are the first to be moved to higher-capacity ones. Also, depending on peering arrangements, closer in distance might not actually be faster, either.
makes sense, thank you!
This is *exactly* why I am so excited about Durable Objects and their ability to restrict what data centres a particular object is stored at. It is going to take a while for good patterns around DOs to establish, and they certainly aren’t a drop-in replacement for a DB just yet.
Not sure if running at the edge is a huge advantage for 99% of applications though. Databases just haven't caught up with serverless tech yet. So like yeah, you can run your app code in 200+ data centers, but DB latency is now the bottleneck.
And in order for the DB latency to be low, you have to have loooots of replicas in a good number of data centers. Otherwise you still have the network latency problem.
Why is running in a cloudflare worker even a prerequisite?
because we're talking about edge computing. in practical terms, that means V8
there's a direct tension between the number of edge nodes your app is served from and the likelihood of encountering cold starts, _unless_ you're using a runtime that doesn't suffer from cold starts. AFAIK there's only one such runtime — V8
Edge computing and similar solutions are an over-optimization in most use cases. It's not a requirement; most of the web has been working just fine without it.
Not having a different language for front and back ends was one of the main reasons we started doing SPAs!
well, three languages: back end, front end, HTML. plus CSS I suppose, so four? but since you'll always have HTML anyway, a hypermedia architecture reduces it back to three: your preferred back end language, HTML & CSS. Hypertext On Whatever you'd Like: HOWL
until you need literally any client-side interactivity
htmx.org/examples shows that many common UX patterns are achievable using a relatively simple hypermedia-based approach that goes with the grain of the original model of the web. not all, obviously; our argument is not that hypermedia is right for every web application
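(One such pattern, click-to-load, as shown on htmx.org/examples; the /contacts/?page=2 endpoint is illustrative.)

```html
<!-- clicking the button fetches the next page of rows and replaces
     this <tbody>, partially reloading just that part of the screen -->
<tbody id="replaceMe">
  <tr>
    <td colspan="3">
      <button hx-get="/contacts/?page=2"
              hx-target="#replaceMe"
              hx-swap="outerHTML">
        Load More...
      </button>
    </td>
  </tr>
</tbody>
```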
Which is where the htmx attributes come in...
C'mon. Don't kick the context out of the conversation. Client-side interactivity is subjective, and conversations will be better if we specify what it is: the New York Times' embedded charts (Svelte or React are better at that), or just toggling a few modals + dropdowns.
precisely why you should favour solutions that scale both up and down. requirements change
Again the context is missing. What's the team size, what is one building, what's the initial complexity of opting into svelte? If I were to build a Campaign Monitor-style app myself, opting into a frontend framework is the last thing I'd worry about.
Most of the frontend creators assume the backend is already in place, or that it's too simple, like a blog.
The language for the front end is HTML.
you can't build a data-driven app with HTML. you need to generate HTML programmatically. the question is: do you do so with the same language you use for client-side updates, or do you maintain separate client- and server-side apps?
what if we used hypermedia as the client-side language? then HTML is our client-side language (it's already there anyway) and then we are free to choose whatever server-side language we like
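(HOWL in practice, a sketch: any server-side language can produce the hypermedia. Here, Node's built-in http module returns an HTML fragment for htmx to swap in; the route and markup are illustrative.)

```js
import { createServer } from 'node:http';

// any back end language works here; the contract is just
// "return an HTML fragment" (route and markup are illustrative)
createServer((req, res) => {
  if (req.url === '/contacts/?page=2') {
    res.setHeader('content-type', 'text/html');
    res.end('<tbody id="replaceMe"><tr><td>more contacts...</td></tr></tbody>');
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(3000);
```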
First, I’m a big fan, equally of Ryan, equally of your rap battles. I’ve done a lot of deep Gatsby, etc. I view HTML as the language of the web. JavaScript is progressive enhancement. JavaScript as a universal VM is something else. Maybe it could start calling itself “web3”. 😇
i mean, you're preaching to the choir! i often say 'HTML is the language of the web' at JS conferences, and Svelte began from that principle
htmx: HTML if they kept going
the year is 2030. a lunatic in montana has taken full control of the w3c in "the Name of The True Hypermedia". HTML advances in ways not seen since the early 1990s. world peace breaks out and the instagram back button works. this seems plausible to me, not so to you?
i think we're done here. goodnight
Is every website a data-driven app? I think not. Most are just content with a dash of interaction.
even a personal blog is a data-driven app, unless you hand-author each post in HTML. but this isn't the topic at issue
If you define it as such then everything is data-driven, but I'm looking at a reasonable meaning like 'a spreadsheet webapp'. And AFAIK the current popular practice is to author in Markdown and use an SSG to render the blog.
you're having a different conversation to everyone else, gonna leave this subthread. have fun
The question is can you afford a useful amount of interactivity with a small, general library like htmx. Everything I've seen and done with htmx says, yes, plus a sprinkle of javascript here and there that requires no elaborate frameworks and no elaborate build systems.
For me, SPAs reduce complexity because I can build a completely static app hosted on s3/gcs/cloudflare and not worry about monitoring, scaling, or latency of a backend app
What does this have to do with a backend app? You still fundamentally have to have a database somewhere, and plenty of apps with backends have CDN-hosted SPAs. Also, if you're not monitoring your frontend app, you're leaving your UX up to s3/gcs/cloudflare. GL w/ that.
I've built entire functional CMS applications as a static frontend app that simply interacts with the GitHub API or other managed services. No docker, no separate thing running on a VM somewhere just to serve some HTML.
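(A sketch of that static-frontend approach, assuming the GitHub contents API as the backing store; OWNER/REPO and the file path are placeholders, and this assumes a module/async context.)

```js
// read a page's markdown straight from the GitHub contents API;
// the raw media type returns the file body instead of JSON metadata
const res = await fetch(
  'https://api.github.com/repos/OWNER/REPO/contents/content/about.md',
  { headers: { accept: 'application/vnd.github.v3.raw' } }
);
const markdown = await res.text();
document.querySelector('#page').textContent = markdown;
```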
My point is that I don't see the benefit of having a backend application in charge of display logic. Mobile apps generally bundle all view logic as a part of the application and data interface for the backend, why should web apps be different?
Because of the flexibility and simplicity of the uniform interface of REST. The hypermedia approach has drawbacks, to be sure, but strengths as well. By increasing the expressiveness of html as a hypermedia, a larger set of problems can be addressed effectively with it.
Sure, I think it's totally appropriate to debate the quality and performance of JavaScript, but I ultimately would prefer these to be concerns of the client, ie taking the solutions of something like htmx natively into the browser and html.
yes, htmx should not have to exist. its functionality should be baked into html. but, for whatever reasons, html has stalled as a hypermedia for many years, leaving javascript to pick up the slack
Who built the CMS?
I don't understand. Is there or is there not a client-side runtime with htmx, and how does that not undermine your entire argument with respect to added complexity? Just because you think your abstraction is better?
yes, to a large extent. and for many projects I think the hypermedia abstraction is better. certainly simpler. there are essays that delve into the topic more deeply here: htmx.org/talk
I took a quick look; color me unpersuaded at first glance. 🤷🏻‍♂️ Nevertheless, whole lotta hubris to roll up here like ya did, even if you knew what you were talking about.
never tell me the odds, kid
2yrs ago I got switched from a java full stack with @apache_wicket to @quasarframework and I have the feeling that it hides a lot of complexity. Honestly I'm focusing only on building. I don't see any problems.