
111 replies and sub-replies as of May 13, 2017

Only the errors are due to me. 🙂 Lots of help from @slightlylate, @estark37, @sirdarckcat, @ericlaw, many others!
How long can a SW receive occasional sync events after the user visits a site that installs one? Do those stop if the user doesn't go back?
i.e. if a user visits a site, does that mean it gets to occasionally run a little bit of code forever unless the user clears their cookies?
Or do those events stop if the user hasn't gone back for a certain amount of time? That would address much of the security concern.
When you say "occasional sync events" are you asking about activations due to push notifications?
That has back-off behaviour and a total invocation limit.
How long after the user has visited a site can it get an event though? Can a site register a monthly sync event for a whole year?
Not seeing documentation on the limits of this. If a user visits a malicious site, how long are they exposed to running code from there?
Not worried about farming out CPU usage but rather the potential of a long-term persistent beachhead to exploit at some point in the future.
Can you describe an exploit scenario? Are you concerned about people opting in to push notifications from a purposefully malicious origin?
Yeah, that's one possible scenario. Also wasn't sure if those periodic background sync events (rather than push) required a permission.
Basically an attacker being able to persistently run code, with the user not being aware that the site is still able to run code.
Even if at some point they needed to allow it, there doesn't seem to be any prominent user-facing UI (site settings isn't that) showing these.
So for example, an attacker has a window to exploit the publicly disclosed bugs every Chrome release cycle, particularly for ChromeOS.
Since ChromeOS releases lag a fair bit behind, and I have a feeling users take much longer to reboot... the update notification is subtle.
vs. before, where the user actually has to visit the malicious site while the attacker has a working exploit for their version, which is way less likely.
To be clear, we're talking about a malicious site that has enough of a value proposition to get people to give it push permission?
1. gauntface.github.io/simple-push-de… 2. chrome://serviceworker-internals/ 3. chrome://settings/search#clear%20browsing%20data, Clear ...
4. chrome://serviceworker-internals/ 5. Close Simple Push Demo tab 6. chrome://serviceworker-internals/
It looks to me like Clear Browsing Data clears Service Workers. (You can also deny SWs by not letting sites set local data.)
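For illustration, a minimal sketch of how the registrations for an origin can be listed and removed programmatically (from page script or the DevTools console), using the standard navigator.serviceWorker API:

    // Page context: enumerate this origin's service worker registrations
    // and unregister each one. unregister() resolves to a boolean.
    navigator.serviceWorker.getRegistrations().then((registrations) => {
      for (const registration of registrations) {
        console.log('unregistering', registration.scope);
        registration.unregister();
      }
    });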
And lose all other forms of local data as part of that, which is not practical, so this is being forced onto users in practice.
Clear Browsing Data is the well-known UX for clearing out one's browser profile, and it works. Maybe we could add a specific [x] for SWs.
Clearing site data already clears your SWs.
I think the idea is that maybe you want to keep your other site data but nuke your workers.
Struggling to understand why. A SW w/o the caches/IDB dbs is blind and mute. Getting a half-broken app isn't in the user's favour.
I think @CopperheadOS is looking for the reverse: local data w/o SWs. Would still likely result in a half-broken app. 🤷🏻‍♂️
Yeah... Don't get that
It's a security issue, mostly not a privacy issue (but it's that too, separate from the security issue).
It being able to update and run code in the background doesn't lessen that, even if the runs are spread far apart in time and only last for short periods.
If you can write up an attack scenario, preferably with a PoC, we have a VRP. Prove me wrong. 🙂
When was it implied there was a vulnerability? A huge new form of attack surface exposing users to more danger != something covered by that.
s/attack surface/attack vector/ to be clearer
If we pretend Chrome is bug-free, then there isn't a security issue. Still a major new form of invisible user tracking for web sites though.
Users see which sites are open. On the other hand, cookies and these are effectively invisible, but cookies don't run code and update in bg.
User leaving open a malicious site for 3 months vs. visiting once, never going back, but being persistently exposed to attacks is different.
I think/hope you'll find that the API surface available to SWs is not as extensive as you seem to think.
We also historically have rewarded generously for partial exploit chains.
Updated JavaScript code + network access with a periodic sync event is really bad. Haven't been thinking about anything else, really.
Just so we're super clear: the background sync API is one-shot sync, not periodic.
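For reference, a minimal sketch of what one-shot registration looks like from the page; the 'send-outbox' tag is an assumed example, not from this thread:

    // Page context: ask the browser to fire a single 'sync' event in the SW
    // once connectivity allows. There is no interval or schedule to specify.
    navigator.serviceWorker.ready.then((registration) => {
      return registration.sync.register('send-outbox');
    });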
Yeah, but they do exist, and the permissions don't communicate the privacy and security risk, which seems to be missed anyways.
We have thought hard about all of these aspects. Of course, might have missed something! Looking forward to your POC = )
FAQ suggests push notifications are just one of the ways to keep SWs running. What are the others? Do they all require user permission?
Background Sync will allow one-shot wakeups. No ability to schedule arbitrary wakeups w/o user consent.
The user isn't consenting if they're only accepting something like notifications, not background coarse location tracking via IP, etc.
Notifications / push != persistent background application to an end user; even experts on web standards and security have missed this.
Is there a sample application somewhere with use of periodic events, rather than push?
There is no periodic sync implemented anywhere, in any browser. Does not exist.
...which folks who play with the API or look at the source code can know easily. E.g.: cs.chromium.org/chromium/src/t…
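A quick feature-detect sketch makes the same point (written against the API shapes of the time):

    // Page context: one-shot sync is exposed on the registration;
    // periodic sync is not implemented anywhere, so the property is absent.
    navigator.serviceWorker.ready.then((registration) => {
      console.log('one-shot sync:', 'sync' in registration);         // true in Chrome
      console.log('periodic sync:', 'periodicSync' in registration); // false
    });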
Nobody "missed" this. It was discussed to death.
If you discussed the same things implied here (tweets are too short to actually write it all out) and deployed it, that makes it worse.
Post a background geo-IP tracker PoC.
Why? Not interested in helping Chromium development based on bad interactions with Chromium developers, unlike Android.
Just trying to understand reasoning and what was or wasn't considered. Quite aware it's not going to get removed or crippled from how it is.
Consider Android: do users believe they're giving up the ability for an app to ping servers when they allow notifications?
And recall here that Android allows silent push notifications (which we do not).
Android has code signing and users explicitly install the apps.
And sure, understand the goal with these standards is to compete with native apps. Except the consent and signing model is not present...
Users explicitly opt-in to per-site notification permissions here. Nothing is implicit.
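For illustration, a sketch of that opt-in flow; the VAPID_PUBLIC_KEY placeholder is assumed, not something from this thread:

    // Page context: nothing is subscribed until the user accepts the prompt.
    // Assumed placeholder: the site's application server (VAPID) public key.
    const VAPID_PUBLIC_KEY = new Uint8Array([/* application server key bytes */]);

    Notification.requestPermission().then((permission) => {
      if (permission !== 'granted') return; // user declined: no push, no wakeups
      return navigator.serviceWorker.ready.then((registration) =>
        registration.pushManager.subscribe({
          userVisibleOnly: true,                  // Chrome rejects silent push
          applicationServerKey: VAPID_PUBLIC_KEY
        })
      );
    });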
And what does signing buy? We can kill-switch bad push senders just as (or more) easily.
Yeah, not buying that. Google claims to police GCM too. Don't believe in the enumerating badness and IDS fluff.
Signing means a compromised server doesn't allow executing code on the user's device and obtaining the data they have stored in that app.
TLS isn't comparable to code being signed and verified, not just authentication of the transport layer between an app and the server.
Although, the end goal seems to remove what little user control, privacy and security exists so makes sense why it wouldn't be considered.
User data controlled by the user and not stored on a cloud server without end-to-end encryption is apparently just a problem to be solved.
Fundamentally incompatible views on how software should work -> misunderstandings and unproductive discussion.
They consent to notifications, not to persistent background code execution, as I stated. There's also no ongoing consent to it like apps.
What is the delta in "ongoing consent"? If users tap into the "site settings" link on every push, they can remove push.
Users have expectations about what a web page is, and it's not persistent code that can run without the site open.
"Allow notifications" or "Allow push" definitely doesn't imply or communicate that, and there's another big difference from Android and iOS.
Android and iOS directly present the installed apps to the user. It's front and centre. They explicitly install, and see which are there.
If you list service workers on new tab page, users can drag them to a trash bin and there's a prompt to install, then I have no complaints.
Service Workers are not a user concept. Push notifications are, so we enable management of *those*.
Except you don't ask users for consent for the security and privacy implications, you ask them for consent to display notifications.
You also don't acquire their ongoing consent, since there isn't a prominent list of every app that's doing this with an easy way to remove.
It's *right in your face* when you get a notification. Literally under your thumb. Every time.
The fact that there's an app running is in no way clear.
Do you mean except for the fact that we force the site to show a notification (or show one on its behalf)?
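For reference, a sketch of that contract from the SW side (the title and body here are illustrative):

    // Service worker context: a push is expected to surface a notification;
    // if the site doesn't show one, the browser shows one on its behalf.
    self.addEventListener('push', (event) => {
      event.waitUntil(
        self.registration.showNotification('Example', {
          body: event.data ? event.data.text() : ''
        })
      );
    });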
So unlike iOS and Android apps, it's invasive without consent, and a misleading notification prompt doesn't change anything.
...and in our model users are always presented with UI when work happens on behalf of an origin, giving them control.
Users don't need to search for a web application in a store and then explicitly install it. Completely different model for code signing too.
To prove your claim, and because we love to reward bug reporters. I think you'll find we have earned a good reputation on that point.
There aren't rewards for a WONTFIX / NOTABUG or DUPLICATE result. It's not productive when it's already clear what the result will be.
Not planning on doing free work for any company. Submit Android bugs when they are found along the way, doing paid work.
So if you're on a different network when connectivity is restored, you leak your new IP address to the site. Even if you never visit that site again.
You also leak to network operator that you previously visited that site due to DNS/SNI/etc. leaks.
Seems bad to permit this without user consent.
We have discussed that at length too.
Discussed at length != addressed the concerns or came to conclusions that other people think is reasonable.
Pretty sure there's internal discussion on a lot of things that are unreasonable, like telling sites build id + phone model / OS build id.
Or giving precise details on GPU information and allowing dumping of the rendered buffers. Or the battery API that's not even portable.
Every API we add has the potential for misdesign. That's why we focus on working in standards and ensuring impl flexibility.
For instance, for one-shot background sync, one idea we discussed was only allowing site/network pairs that had been previously seen.
And the API has been designed to allow a flexible policy like this should we decide it's better in the future.
Issue is not really API design, it's a fundamentally different view on what a browser should be. Anyway, we'll see how it goes.
The Google Play instant apps feature is approaching the same disaster from the other direction. I guess those will even be on ChromeOS now.
Fetch events created by in-page JS (no permission, as normal)
That's tied to the page lifetime. Page closes == no way to wake up SW w/ fetch event.
Basically that matches the user's intuition about pages: they can run JS when open. Not new.
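A minimal sketch of that permissionless case: the fetch handler only runs because an open, controlled page made a request:

    // Service worker context: fires per request from a controlled page.
    // No open page, no fetch events, no way to wake the SW this way.
    self.addEventListener('fetch', (event) => {
      event.respondWith(fetch(event.request)); // illustrative pass-through
    });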
If someone shows our controls & mitigations aren't as effective as we thought, we often reward for that.
Is that permission required for background sync? Not talking about push. chrome:// URLs are for developers, not something users see / use.
"show notifications" also isn't communicating that it means the site persists as something that's able to run in your browser.
The document you linked to shows that BG sync requires a permission. The chrome:// URLs are just for you to test.
What do you mean by "sync event"? Push notifications? (Push notifications require user permission: gauntface.github.io/simple-push-de…)
No, bg sync events (linked to the doc above). Pretty sure those don't need permission, although Android (not desktop?) has a global toggle.
Those are also gated by (origin-scoped) permissions.
Background sync fires an event once when the device comes back online. No concept of "periodic" running in background.
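For reference, the SW side of one-shot sync in sketch form; the tag and URL are assumed examples matching the registration sketch above:

    // Service worker context: fires once when the device is back online,
    // then the registration is consumed -- nothing recurs.
    self.addEventListener('sync', (event) => {
      if (event.tag === 'send-outbox') {      // assumed tag from registration
        event.waitUntil(fetch('/outbox'));    // assumed endpoint
      }
    });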
also @alexainslie @lgarron @jaffathecake and more whose Twitter handles I don't know...