
73 replies and sub-replies as of Nov 28 2022

We talk a lot about wanting our servers to be stateless. But unless you're serving a static site, you have state somewhere. If you push it all into the database layer, your app is limited by what your database can do.
Guess what a lot of databases aren't good at? Real-time collaboration. You can't build Google Docs with stateless servers, much less video conferencing or game servers. But also, databases all have their own limitations that your app has to work around, even if it isn't real-time.
Durable Objects are a primitive which let you build stateful distributed systems at a lower level. You could even build a database on top of it, but a lot of apps can do much cooler stuff by working directly at this level.
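A minimal sketch of what such an object can look like, based on the beta API: a counter whose value survives across requests. The class name `Counter` and the increment-on-fetch behavior are illustrative, not part of the platform.

```js
// A counter Durable Object. Each object ID maps to exactly one live
// instance, so reads and writes here are naturally serialized.
export class Counter {
  constructor(state, env) {
    this.state = state; // access to this object's transactional storage
  }

  async fetch(request) {
    let value = (await this.state.storage.get("count")) || 0;
    value += 1;
    await this.state.storage.put("count", value);
    return new Response(String(value));
  }
}
```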
BTW, most of the real work on Durable Objects was done by @alexwritescode (ex-cockroachdb) and @bretthoerner! Couldn't have built this without them.
WDOs seem pretty mind-blowingly useful. I can't see any info on pricing as yet, however. How do WDOs support the requirement to hold data in specific regions for compliance purposes?
We need to get some feedback from beta customers before we can determine pricing, but we always aim to keep pricing low. We don't support explicit region tagging yet, but that's definitely a big part of the plan.
"Each document or spreadsheet is it's own object" feels familiar. ;)
Anything you can reveal about how they work under the hood? :D
Reading this reminded me of a conversation we had a year ago where we talked about workers being *the* framework for developing real-time collaboration products like Figma... I’m so happy this feature is now a reality! Congratulations Kenton and friends!
been waiting for your announcement this week. congratulations on the release and announcement!! 🤠🤠
It feels very similar to a globally distributed actor messaging system. Very cool!
all the way down. ;)
Congrats! Reminds me of Azure Functions Durable Entities, which are serverless virtual actors/grains, except for your edge focus:
Durable entities - Azure Functions (docs.microsoft.com)
Check out temporal.io, which is an open-source project around the same idea.
The FAQ is great. How do you achieve immediate consistency? Is it trading off performance by keeping all data on a single node? Would be awesome to know how you deal with a single node crash (some kind of quorum replication)? Great work to the team, and enjoy your vacation!
We do use quorum replication to be able to stay online with no loss of consistency in the face of single node crashes or colo outages.
Is data stored on the compute nodes or separately?
On compute nodes, although that's of course subject to change in the future.
Are there management capabilities like creating/exporting backups of all object state? What does Workers KV use for replication/consensus?
You say durable objects, I see persistent capabilities 😄
Yep. And it's all built on object-capability RPC (Cap'n Proto).
Will you check this?
A distributed, durable actors system (for JavaScript!)! Very cool, seems very useful. Would love to kick the tires for use at @NotionHQ
Is... is that a websocket server using workers? The code comments say it’s not... but, it is?
Yes it's totally WebSocket-based! :) WebSockets are a big part of the Durable Objects beta. Where do you see comments saying it's not WebSocket?
This is new with the Durable Objects beta. That page needs some updating. :)
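A rough sketch of WebSocket handling inside a Durable Object, assuming the Workers `WebSocketPair` API; the class name and echo behavior are illustrative.

```js
// Accept a WebSocket inside a Durable Object and echo messages back.
export class EchoRoom {
  async fetch(request) {
    if (request.headers.get("Upgrade") !== "websocket") {
      return new Response("Expected a WebSocket", { status: 426 });
    }
    // One end goes back to the client; the other stays in the object.
    const [client, server] = Object.values(new WebSocketPair());
    server.accept();
    server.addEventListener("message", (event) => {
      server.send(event.data); // echo each message back
    });
    return new Response(null, { status: 101, webSocket: client });
  }
}
```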
This seems so cool. Could you use Durable Objects + its public IP + WebSockets as a way to get around NAT, avoiding a TURN server in WebRTC and peer-to-peer systems?
I mean, if I can connect two browsers via a durable serverless WebSocket, we could skip WebRTC altogether.
The use cases described are awesome. Do multiple DO instances on the same POP share the same isolate or do they each get their own? If their own, would it be an abuse of the purpose to create a new DO for every request as a way to spawn separate isolates?
Whether creating them per request affects perf is also a good follow-up.
Seems like you could also create “at least one DO per POP” by having the root worker create one and keep its ID in global memory, yeah?
So many questions 😅 but that’s a good thing. Can’t wait to play.
Multiple live objects may be hosted in the same isolate. We wanted the marginal overhead for each object to be measured in kilobytes, so that creating tons of them is just fine.
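For context, a hedged sketch of how a plain Worker routes requests to objects; the binding name `ROOMS` is illustrative, not a built-in.

```js
// Derive a stable object ID from the URL so every request for the
// same room reaches the same object instance.
export default {
  async fetch(request, env) {
    const roomName = new URL(request.url).pathname;
    const id = env.ROOMS.idFromName(roomName);
    const stub = env.ROOMS.get(id);
    return stub.fetch(request); // forwarded to the object's fetch()
  },
};
```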
How long are objects kept live, in memory? Wondering about how to write a Durable Object that debounces updates, then pushes its local storage state into a centralized backend database - would such an object need a "pump" request to guarantee consistency with the backend?
Great question. The system may evict the live object at any time. What you really need for this is a way to schedule an event to happen later. Incidentally we also launched scheduled workers today! But they're not fully integrated yet. Soon...
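A hedged sketch of the "pump" idea using a cron-triggered Worker; the `BACKUPS` binding and `/flush` route are illustrative, not a platform API.

```js
// A scheduled Worker that periodically wakes the object so it can
// flush pending state to a backend database, even if the object was
// evicted in between. `BACKUPS` and `/flush` are illustrative names.
export default {
  async scheduled(event, env, ctx) {
    const id = env.BACKUPS.idFromName("singleton");
    const stub = env.BACKUPS.get(id);
    // Waking the object reconstructs it from durable storage and lets
    // it push any writes recorded since the last flush.
    ctx.waitUntil(stub.fetch("https://do/flush"));
  },
};
```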
I’ve seen people turn their nose up at stateful FaaS. Imitation is further evidence that it’s a viable market, and more importantly, needed/wanted! Congrats @cgillum & @jeffhollan for being years ahead of the curve with Azure Durable Functions & Entities
We came up with this independently (it's inspired by my previous work on Sandstorm.io), but that's even better evidence that this is a good idea!
Sandstorm: Real-time collaborative web productivity suite behind the firewall. (sandstorm.io)
How the heck is it that fast??? Testing chat application from two devices and one of them is connected to VPN halfway around the world and there is less than 120 ms delay between messages 🤯
Ruthless elimination of network round trips! :)
Can we store any type of static data (encrypted user data, images, blobs) inside Durable Objects, just to ensure global availability and uniqueness of the ID, then just put it in the local edge cache to reduce access costs? Another question: is there an API to remove objects?
Edge cache > Durable Objects cache > storage
Sure, you could store any of those things, and using the Cache API in front can make a lot of sense. There isn't an API to remove entire objects at the moment, but there will be.
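A sketch of that layering (edge cache in front, object behind), assuming the Workers Cache API; the `OBJECTS` binding name is illustrative.

```js
// Serve from the edge cache when possible, fall back to the Durable
// Object, and cache the result locally for later readers.
export default {
  async fetch(request, env, ctx) {
    const cache = caches.default;
    let response = await cache.match(request);
    if (!response) {
      const id = env.OBJECTS.idFromName(new URL(request.url).pathname);
      response = await env.OBJECTS.get(id).fetch(request);
      // Store a copy at this edge location; the object remains the
      // single source of truth.
      ctx.waitUntil(cache.put(request, response.clone()));
    }
    return response;
  },
};
```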
Workers only work outside China, right? Your partner Baidu doesn't offer this kind of feature, which means deploying in China will require a different architecture?
Workers are available in China, but new features usually take a bit longer to reach our China network for a number of reasons. Durable Objects are not yet available in China today.
Wow, thanks, I didn't know these features were replicated in Mainland China.
Congratulations to your team, you keep launching awesome stuff!
I saw in the docs that I can use script=@mymodule.mjs;type=application/javascript+module ... Does it mean I can upload multiple .mjs files and dynamically import them (await import them) in the main module? No more need to pack all the JS inside a single file?
Yep. This is new and we have some work to do on the tooling, but the idea is that you shouldn't have to use webpack anymore. More on this to be announced soon...
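A small sketch of what multi-module uploads enable; the file names and functions here are illustrative.

```js
// utils.mjs — an ordinary ES module uploaded alongside the main module.
export function greet(name) {
  return `Hello, ${name}!`;
}

// index.mjs — the main module can import it statically or on demand.
import { greet } from "./utils.mjs";

export default {
  async fetch(request) {
    // Dynamic import also works, so rarely-used code can load lazily.
    const utils = await import("./utils.mjs");
    return new Response(utils.greet("world"));
  },
};
```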
Is there a size limit on side files? Like 1MB for the total upload, or 1MB for each file? Asking because await import()ing files in the Worker could make the deploy / availability of PWA code instantaneous (no need to wait for KV's 60s propagation)! ML models of 2MB would require splitting, but that's OK.
The limit is still 1MB (compressed) for the whole worker (all modules combined). Lifting that limit is a separate effort.
Thanks for the fast reply on the current status! Indeed, it would be nice to increase the total space to 256MB so you can deploy all your latest static app code (JS libs, 100 ML models, icons). And for dynamic writes (outside deploys) like user data, user images, variables, etc., there is KV.
Of course, as a dev you would also need to double-write this static app data to KV so people who have an old version of your PWA can still download old assets.
Sometimes. But note that your worker will only run in China if you've been explicitly enabled for China network access, which requires a license from the Chinese government. Otherwise your Worker will only be served from locations outside mainland China.
Cloudflare China Network: Deliver a fast, secure online experience to users in China with our global network, with dozens of points of presence in mainland China. (cloudflare.com)
Seems it is not possible to have the same domain on both networks. Enabling the China network requires one domain for China and one domain for the rest of the world. Makes me cry, but I guess it is not physically possible to have one unique domain, or you would already have offered the option.
I don't think that's true. Are you possibly misreading the note about Yunjiasu? Yunjiasu is a different thing from Cloudflare's China Network.
Yes! Thank you for confirming it's possible to have a unique URL (super happy!). I suppose it also means that data written to KV in China can be accessible outside China (e.g. replicated on European edges), and vice versa, so KV is a true global store.
Not necessarily. Please talk to your CSM before using KV in China.
At least in 2019 it seems KV was not accessible in China - Disclaimer: post from an AWS employee - mpasierbski.com/post/2019-11-1…
Does this make any attempt to solve the problem where several messages that will result in a state update need to be sent to different objects and either have them all be delivered and acted upon, or none?
Regardless of whether that's supported within the system or not, this is really nifty, and will solve a lot of problems. Given the description, it obviously can be supported if you write your own implementation of two-phase commit or something similar.
At present you'd have to write your own 2PC. But you might find it's not too hard to implement 2PC here, because the objects can all store the ID of the transaction coordinator object and call back to it after a failure in a straightforward way.
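A rough sketch of that pattern; everything here (the `PARTICIPANTS` binding, routes, and message shape) is illustrative, since the platform itself only gives you per-object transactions.

```js
// A two-phase commit coordinator as a Durable Object. Participants
// would store this coordinator's ID durably so they can call back
// after a failure, as described above.
export class Coordinator {
  constructor(state, env) {
    this.state = state;
    this.env = env;
  }

  async fetch(request) {
    const { participantIds } = await request.json();
    const stubFor = (idStr) =>
      this.env.PARTICIPANTS.get(this.env.PARTICIPANTS.idFromString(idStr));

    // Phase 1: durably record intent, then ask everyone to prepare.
    await this.state.storage.put("txn", { participantIds, phase: "prepare" });
    const votes = await Promise.all(
      participantIds.map((id) =>
        stubFor(id).fetch("https://do/prepare").then((r) => r.ok)
      )
    );

    // Phase 2: commit only if every participant voted yes.
    const decision = votes.every(Boolean) ? "commit" : "abort";
    await this.state.storage.put("txn", { participantIds, phase: decision });
    await Promise.all(
      participantIds.map((id) => stubFor(id).fetch(`https://do/${decision}`))
    );
    return new Response(decision);
  }
}
```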
I was thinking that exact thing when I was thinking how I'd implement it. 🙂
What are the pricing levels?
Now THAT is the interesting question. Worker pricing is mostly fair, but looking at the absolutely crazy Spectrum pricing, I expect the worst.
Do Durable Objects have any limitations on reads/writes per second? I recall that the KV store was limited to 1 write / second for the same key so using it for rate limiting was not really possible.
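A hedged sketch of the rate-limiting pattern the question describes; the class name, window, and limit are illustrative.

```js
// A per-key rate limiter as a Durable Object. Each object serializes
// its own requests, so it can count far faster than KV's
// one-write-per-second-per-key limit.
export class RateLimiter {
  constructor(state, env) {
    // In-memory state is fine here: losing the counter on eviction
    // just resets the window, an acceptable failure mode for limiting.
    this.count = 0;
    this.windowStart = Date.now();
  }

  async fetch(request) {
    const now = Date.now();
    if (now - this.windowStart >= 1000) {
      this.windowStart = now; // start a new one-second window
      this.count = 0;
    }
    this.count += 1;
    return new Response(null, { status: this.count <= 100 ? 200 : 429 });
  }
}
```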